\section{Introduction} A promising strategy for the indirect detection of dark matter is the search for photons arising from dark matter annihilation in galactic subhalos, including those which host dwarf spheroidal galaxies (dSphs)~\cite{Fermi-LAT:2015att,Geringer-Sameth:2014qqa,MAGIC:2016xys,HAWC:2017mfa}. This strategy is promising because the photons will point back to the subhalo, which is a region with a large dark matter density but relatively small baryonic density~\cite{Mateo_1998,McConnachie_2012}. There is thus relatively little astrophysical foreground or background to a potential dark matter signal. The dependence of this photon signal on the properties of an individual subhalo is encoded in the $J$-factor, which in turn depends on the dark matter velocity distribution in the subhalo, and on the velocity-dependence of the dark matter annihilation cross section. Different models for the velocity-dependence of the dark matter annihilation cross section can lead to $J$-factors with different normalizations and angular dependences~\cite{Robertson:2009bh,Belotsky:2014doa,Ferrer:2013cla,Boddy:2017vpe,Zhao:2017dln,Petac:2018gue,Boddy:2018ike,Lacroix:2018qqh,Boddy:2019wfg,Boddy:2020,Ando:2021jvn}. In this way, the microphysics of the dark matter annihilation cross section is connected to both the amplitude and morphology of the resulting photon signal. For this reason, it is important to determine $J$-factors which arise under all theoretically-motivated assumptions for the velocity-dependence of the cross section. The most well-studied case is $s$-wave annihilation, in which $\sigma v$ is velocity-independent. In recent work \citep[e.g.,][]{bergstrom:2018asd}, $J$-factors have been calculated for other well-motivated examples, such as $p$-wave, $d$-wave, and Sommerfeld-enhanced annihilation. But most of these calculations have been performed under the assumption that the dark matter density profile $\rho(r)$ is of the Navarro-Frenk-White (NFW) form~\cite{Navarro:1995iw}. Our goal in this work is to generalize this calculation to other density profiles which are commonly used and motivated by $N$-body simulation results. We will consider generalized NFW, Einasto~\cite{Einasto:1965czb}, Burkert~\cite{Burkert:1995yz}, and Moore~\cite{Moore:1997sg} profiles. Like the standard NFW profile, these density distributions are characterized by only two dimensionful parameters, $\rho_s$ and $r_s$. The dependence of the $J$-factor on these parameters is largely determined by dimensional analysis~\cite{Boddy:2019wfg}. Given our results, one can easily determine the amplitude and angular distribution of the photon signal for any subhalo and choice of density profile, in terms of the halo parameters and the velocity-dependent cross section. Our strategy will be to use the Eddington inversion method~\cite{10.1093/mnras/76.7.572} to determine the dark matter velocity distribution $f(r,v)$ from $\rho(r)$. This velocity distribution will, in turn, determine the $J$-factor. For each functional form, we will be able to determine a scale-free $J$-factor which depends on the velocity-dependence of the annihilation cross section, but is independent of the halo parameters. The dependence of the $J$-factor on $\rho_s$ and $r_s$ is entirely determined by dimensional analysis. This will leave us with a set of dimensionless numerical integrals to perform, for any choice of the velocity-dependence and of the density distribution functional form, which in turn determine the $J$-factor for any values of the subhalo parameters.
\renew{We will also find that, for some classes of profiles, one can find analytic approximations for the velocity and angular distributions. These analytic computations will yield insights which generalize to larger classes of profiles than those we consider. For example, we will find that, in the case of Sommerfeld-enhanced annihilation, the annihilation rate has a physical divergence if the inner slope of the profile is steeper than $4/3$ (independent of the shape at large distance), requiring one to account for deviations from the Coulomb limit.} The plan of this paper is as follows. In Section~\ref{sec:formalism}, we review the general formalism for determining the $J$-factor. In Section~\ref{sec:models}, we describe the models of dark matter particle physics and astrophysics which we will consider. We present our results in Section~\ref{sec:results}, and conclude in Section~\ref{sec:conclusion}. \section{General Formalism} \label{sec:formalism} We will follow the formalism of~\cite{Boddy:2019wfg}, which we review here. We consider the scenario in which the dark matter is a real particle whose annihilation cross section can be approximated as $\sigma v = (\sigma v)_0 \times S(v/c)$, where $(\sigma v)_0$ is a constant, independent of the relative velocity $v$. The $J$-factor describes the astrophysical contribution to the dark matter annihilation flux \begin{eqnarray} J_S (\theta) &=& \int d\ell \int d^3 v_1 \int d^3 v_2~ f({\bf r}(\ell, \theta), {\bf v}_1)~ f({\bf r}(\ell, \theta), {\bf v}_2) \nonumber\\ &\,& \times S(|{\bf v}_1 - {\bf v}_2|/c) , \label{eq:JFactor} \end{eqnarray} where $f$ is the dark matter velocity distribution, $\ell$ is the distance along the line of sight, and $\theta$ is the angle between the line-of-sight direction and the direction from the observer to the center of the subhalo. \subsection{Scale-free \texorpdfstring{$J$}{J}} We will assume that the dark matter density profile $\rho(r)$ depends only on two dimensionful parameters, $\rho_s$ and $r_s$. In that case, we may rewrite the density profile in the scale-free form $\tilde \rho (\tilde r)$, where \begin{eqnarray} \tilde r &\equiv& r / r_s , \nonumber\\ \tilde \rho (\tilde r) &\equiv& \rho (r) / \rho_s . \end{eqnarray} $\tilde \rho (\tilde r)$ has no dependence on the parameters $\rho_s$ and $r_s$. Aside from $\rho_s$ and $r_s$, the only relevant dimensionful constant is $G_N$. We also define a scale-free velocity using the only combination of these parameters with units of velocity, \begin{eqnarray} \tilde v &\equiv& v / \sqrt{4\pi G_N \rho_s r_s^2} , \end{eqnarray} in terms of which we may define the scale-free velocity distribution \begin{eqnarray} \tilde f (\tilde r, \tilde v) &\equiv& \left(4\pi G_N \rho_s r_s^2 \right)^{3/2} \rho_s^{-1} f (r,v) , \end{eqnarray} where $\tilde \rho (\tilde r) = \int d^3 \tilde v ~ \tilde f (\tilde r, \tilde v)$ and where $\tilde f (\tilde r, \tilde v)$ is independent of the dimensionful parameters. We will assume that the velocity-dependence of the dark matter annihilation cross section has a power-law form, given by $S(v/c) = (v/c)^n$. We may then express the $J$-factor in scale-free form.
\begin{eqnarray} J_{S(n)} (\tilde \theta) &=& 2 \rho_s^2 r_s \left(\frac{4\pi G_N \rho_s r_s^2}{c^2} \right)^{n/2} \tilde J_{S(n)} (\tilde \theta) , \nonumber\\ J_{S(n)}^{\rm tot} &=& \frac{4\pi \rho_s^2 r_s^3}{D^2} \left(\frac{4\pi G_N \rho_s r_s^2}{c^2} \right)^{n/2} \tilde J_{S(n)}^{\rm tot} , \label{eq:JFactorToScaleFree} \end{eqnarray} where the scale-free quantities $\tilde J_{S(n)} (\tilde \theta)$ and $\tilde J_{S(n)}^{\rm tot}$ are given by~\cite{Boddy:2019wfg} \begin{eqnarray} \tilde J_{S(n)}^{\rm tot} &\approx& \int_0^\infty d\tilde \theta ~ \tilde \theta ~ \tilde J_{S(n)} (\tilde \theta) , \nonumber\\ \tilde J_{S(n)} (\tilde \theta) &\approx& \int_{\tilde \theta}^\infty d\tilde r ~ \left[1 - \left(\frac{\tilde \theta}{\tilde r}\right)^2 \right]^{-1/2} P_n^2 (\tilde r), \label{eq:Js_Jstot} \end{eqnarray} and where \begin{eqnarray} P_n^2 &=& \int d^3 \tilde v_1 d^3 \tilde v_2 ~ |\tilde {\bf v}_1 - \tilde {\bf v}_2|^n \tilde f (\tilde r, \tilde v_1) \tilde f (\tilde r, \tilde v_2) . \label{eq:P2n} \end{eqnarray} In the case of $s$-wave annihilation, $P_{n=0}^2 = \tilde \rho^2$. $P_n^2$ is thus the generalization of $\tilde \rho^2$ relevant to computation of the $J$-factor for velocity-dependent dark matter annihilation. Note that if $n$ is a positive even integer, then $P_n^2$ can be expressed in terms of one-dimensional integrals. In particular, we find \begin{eqnarray} P_{n=2}^2 (\tilde r) &=& [\tilde \rho (\tilde r)]^2 \left[ 2\langle \tilde v^2 \rangle (\tilde r)\right] , \nonumber\\ P_{n=4}^2 (\tilde r) &=& [\tilde \rho (\tilde r)]^2 \left[ 2 \langle \tilde v^4 \rangle (\tilde r) + \frac{10}{3} \left(\langle \tilde v^2 \rangle (\tilde r) \right)^2 \right] \end{eqnarray} where $\langle \tilde v^m \rangle (\tilde r) = 4\pi [\int_0^\infty d\tilde v ~ \tilde v^{m+2} \tilde f (\tilde r, \tilde v)] / \tilde \rho (\tilde r)$. For the case of $n=-1$, one must perform the full two-dimensional integral. \subsection{Eddington Inversion} If the subhalo is in equilibrium, then the velocity distribution can be written as a function of the integrals of motion. Since we have assumed that the velocity distribution is spherically symmetric and isotropic, it can be written as a function only of the energy per unit mass, $E = v^2/2 + \Phi(r)$, where $\Phi(r)$ is the gravitational potential\footnote{Following convention, we use the symbol $\Phi$ for both the photon flux and the gravitational potential. We trust the meaning of $\Phi$ will be clear from context.} (that is, $f(r,v) = f(E(r,v))$). The velocity distribution can then be expressed in terms of the density using the Eddington inversion formula~\cite{10.1093/mnras/76.7.572}, yielding \begin{eqnarray} f(E) &=& \frac{1}{\sqrt{8} \pi^2 } \int_E^{\Phi(\infty)} \frac{d^2 \rho}{d \Phi^2} \frac{d\Phi}{\sqrt{\Phi - E}} , \end{eqnarray} where \begin{eqnarray} \Phi(r) &=& \Phi(r_0) + 4\pi G_N \rho_s r_s^2 \int_{\tilde r_0}^{\tilde r} \frac{dx}{x^2} \int_0^x dy ~y^2 \tilde \rho(y) . \end{eqnarray} Note that we have assumed that the baryonic contribution to the gravitational potential is negligible. In terms of the scale-free gravitational potential and energy $\tilde \Phi (\tilde r) \equiv \Phi (r) / 4\pi G_N \rho_s r_s^2$, $\tilde E \equiv E / 4\pi G_N \rho_s r_s^2$, we then find \begin{eqnarray} \tilde f(\tilde r, \tilde v) = \tilde f(\tilde E(\tilde r, \tilde v)) &=& \frac{1}{\sqrt{8} \pi^2 } \int_{\tilde E}^{\tilde \Phi(\infty)} \frac{d^2 \tilde \rho}{d \tilde \Phi^2} \frac{d\tilde \Phi}{\sqrt{\tilde \Phi - \tilde E}} .
\nonumber\\ \label{eq:ScaleFreeEddignton} \end{eqnarray} The scale-free quantities $\tilde J_{S(n)} (\tilde \theta)$ and $\tilde J_{S(n)}^{\rm tot}$ depend on the functional form of the dark matter density distribution ($\tilde \rho$), and on the velocity dependence of the annihilation cross section ($n$), but are independent of the parameters $\rho_s$ and $r_s$. For any functional form of $\tilde \rho$, and any choice of $n$, one can compute $\tilde J_{S(n)} (\tilde \theta)$ and $\tilde J_{S(n)}^{\rm tot}$ by performing the integration described above. For any individual subhalo with parameters $\rho_s$ and $r_s$, a distance $D$ away from Earth, the $J$-factor is then determined by Eq.~\ref{eq:JFactorToScaleFree}. This calculation has been performed for the case of an NFW profile, for which $\tilde \rho (\tilde r) = \tilde r^{-1} (1+ \tilde r)^{-2}$~\citep{Boddy:2019wfg}. We will extend this result to a variety of other profiles. \section{Dark Matter Astrophysics and Microphysics } \label{sec:models} We will consider four theoretically well-motivated scenarios for the power-law velocity dependence of the dark matter annihilation cross section ($S(v/c) = (v/c)^n$). \begin{itemize} \item{$n=0$ ({\it $s$-wave}): \new{In this case, the dark matter initial state has orbital angular momentum $L=0$, and }$\sigma v$ is independent of $v$ \new{in the non-relativistic limit}. This is the standard case which is usually considered. } \item{$n=2$ ({\it $p$-wave}): \new{In this case, the dark matter initial state has orbital angular momentum $L=1$.} This case can arise if dark matter is a Majorana fermion which annihilates to a Standard Model (SM) fermion/anti-fermion pair through an interaction respecting minimal flavor violation (MFV) (see, for example,~\cite{Kumar:2013iva}). } \item{$n=4$ ({\it $d$-wave}): \new{In this case, the dark matter initial state has orbital angular momentum $L=2$.} This case can arise if dark matter is a real scalar annihilating to an SM fermion/anti-fermion pair through an interaction respecting MFV (see, for example,~\cite{Kumar:2013iva,Giacchino:2013bta,Toma:2013bka}). } \item{$n=-1$ ({\it Sommerfeld-enhancement in the Coulomb limit}): This case can arise if there is a long-range attractive force between dark matter particles, mediated by a very light particle. \new{If the dark matter initial state is $L=0$, a $1/v$ enhancement arises because the dark matter initial state is an eigenstate of the Hamiltonian with a long-range attractive potential. If the mediator has non-zero mass, then the $1/v$ enhancement will be cut off for small enough velocity, but we focus on the case in which this cutoff is well below the velocity scale of the dark matter particles. For a detailed discussion, see~\cite{ArkaniHamed:2008qn,Feng:2010zp}, for example.} } \end{itemize} \new{Despite significant effort, there is no consensus on the functional form of the dark matter profile which one should expect in subhalos.} We consider various dark matter profiles, which are motivated by $N$-body simulations \new{ and stellar observations}: \begin{itemize} \item{{\it Generalized NFW} [$\tilde \rho(\tilde r) = \tilde r^{-\gamma} (1+\tilde r)^{-(3-\gamma)}$]: $\gamma=1$ corresponds to the standard NFW case \new{\cite{Navarro:1995iw}, and was originally proposed as a good fit to the density found in $N$-body simulations.
The generalization to $\gamma \neq 1$ was first studied in~\cite{Zhao:1996mr}, and has been argued to be a good fit to $N$-body simulation results for larger values of $\gamma$~\cite{Klypin:2000hk}, although previous work had indicated that smaller values of $\gamma$ may also be acceptable~\cite{Klypin:1997fb}. We will consider a broad range of choices of $\gamma$ ranging from $0.6$ to $1.4$. (Note that for $\gamma \geq 1.5$, the $s$-wave annihilation rate would diverge.)} } \item \textit{Einasto profile} [$\tilde \rho(\tilde r) = \exp (-(2/\alpha)(\tilde r^\alpha -1))$]: \new{This profile has been found to be at least as good a fit as NFW to densities found in $N$-body simulations when $\alpha$ lies roughly in the range $0.12 < \alpha < 0.25$ (see, for example,~\cite{Gao:2007gh,Ludlow:2016qow}), and we will consider values of $\alpha$ in this range.} \item{{\it Burkert profile} [$\tilde \rho(\tilde r) = (1+\tilde r)^{-1} (1+\tilde r^2)^{-1}$]: This is a commonly used example of a cored profile\new{, which was found to be a good fit to observations of stellar motions in dwarf galaxies~\cite{Burkert:1995yz}}.} \item{{\it Moore profile} [$\tilde \rho(\tilde r) = (\tilde r^{1.4} (1+\tilde r)^{1.4})^{-1}$]: This is an example of a very cuspy profile\new{, which was found to be a good fit to the $N$-body simulations considered in~\cite{Moore:1997sg,Klypin:2000hk}}.} \end{itemize} \section{Results} \label{sec:results} For any choice of $\tilde \rho (\tilde r)$ and of $n$, the $J$-factor is determined by three parameters ($\rho_s$, $r_s$ and $D$), and by a scale-free normalization ($\tilde J_{S(n)}^{\rm tot}$) and an angular distribution ($\tilde J_{S(n)} (\tilde \theta) / \tilde J_{S(n)}^{\rm tot}$), which must be determined by numerical integration. We can characterize the angular size of gamma-ray emission from a subhalo with the quantity $\langle \theta \rangle /\theta_0$, defined as \begin{eqnarray} \frac{\langle \theta \rangle}{\theta_0} &\equiv& \frac{\int_0^\infty d\tilde \theta ~ \tilde \theta^2 \tilde J_{S(n)}(\tilde \theta)}{\tilde J_{S(n)}^{\rm tot}} , \label{eq:theta} \end{eqnarray} where $\theta_0 \equiv r_s / D$ is the angle subtended by the subhalo scale radius and $\tilde \theta \equiv \theta / \theta_0$. \begin{table*} \centering \begin{tabular}{|M{0.5cm}|M{0.9cm}|M{0.9cm}|M{0.9cm}|M{0.9cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{0.8cm}|M{1.1cm}|M{0.9cm}|} \hline\hline \multicolumn{18}{|c|}{$\tilde{J}_{S(n)}^{\rm tot}$} \\ [0.5ex] \hline\hline & \multicolumn{10}{|c|}{NFW ($\gamma$)} & \multicolumn{5}{|c|}{Einasto ($\alpha$)} & Burkert & Moore\\ [0.5ex] \hline n & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 & 1.1 & 1.2 & 1.25 & 1.3 & 1.4 & 0.13 & 0.16 & 0.17 & 0.20 & 0.24 & \multicolumn{2}{|c|}{ }\\ \hline \csvreader[head to column names=false, late after line = \\\hline]{Results/tables/j_over_jtot.csv}{}{\csvlinetotablerow} \end{tabular} \caption{Numerical values for the scale-free normalization $\tilde{J}_{S(n)}^{\rm tot}$ (defined in Eq.~\ref{eq:Js_Jstot}) for $n=-1, 0, 2,$ and $4$, where the profile is taken to be either generalized NFW (with $\gamma$ as listed), Einasto (with $\alpha$ as listed), Burkert, or Moore.
} \label{table:Jtot} \end{table*} \begin{table*}[t] \centering \begin{tabular}{|M{0.5cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{0.83cm}|M{1.1cm}|M{0.9cm}|} \hline\hline \multicolumn{18}{|c|}{$\langle \theta \rangle / \theta_0$} \\ [0.5ex] \hline\hline & \multicolumn{10}{|c|}{NFW ($\gamma$)} & \multicolumn{5}{|c|}{Einasto ($\alpha$)} & Burkert & Moore\\ [0.5ex] \hline n & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 & 1.1 & 1.2 & 1.25 & 1.3 & 1.4 & 0.13 & 0.16 & 0.17 & 0.20 & 0.24 & \multicolumn{2}{|c|}{ }\\ \hline \csvreader[head to column names=false, late after line = \\\hline]{Results/tables/theta_over_theta0.csv}{}{\csvlinetotablerow} \end{tabular} \caption{ Numerical values for the angular distribution $\langle \theta \rangle / \theta_0$ (defined in Eq.~\ref{eq:theta}) for $n=-1, 0, 2,$ and $4$, where the profile is taken to be either generalized NFW (with $\gamma$ as listed), Einasto (with $\alpha$ as listed), Burkert, or Moore. } \label{table:angular} \end{table*} In Tables~\ref{table:Jtot} and~\ref{table:angular}, we present $\tilde J_{S(n)}^{\rm tot}$ and $\langle \theta \rangle / \theta_0$, respectively, for all of the profiles ($\tilde \rho (\tilde r)$) and choices of $n$ which we consider. We also plot $\tilde J_{S(n)} (\tilde \theta) / \tilde J_{S(n)}^{\rm tot}$ for all of these profiles and choices of $n$ in Figures~\ref{fig:NFW} (generalized NFW),~\ref{fig:einasto} (Einasto),~\ref{fig:burkert} (Burkert), and~\ref{fig:moore} (Moore). \begin{figure*}[hptb] \centering \includegraphics[width=\textwidth]{NFW_profile_2x2.pdf} \caption{The scale-free photon angular distribution arising from Sommerfeld-enhanced \new{($n=-1$)} (upper left), $s$-wave \new{($n=0$)} (upper right), $p$-wave \new{($n=2$)} (lower left), and $d$-wave \new{($n=4$)} (lower right) dark matter annihilation in a generalized NFW subhalo \new{whose inner region scales as $r^{-\gamma}$} (the profile parameter $\gamma$ varies from $0.6$ to $1.4$, as labelled). \new{The dashed lines show the analytic approximation from Eq.~\ref{eq:JsInnerSlope} for Sommerfeld-enhanced dark matter with $\gamma=0.7, 1.0, 1.3$.}} \label{fig:NFW} \end{figure*} \begin{figure*}[hptb] \centering \includegraphics[width=\textwidth]{Einasto_profile_2x2.pdf} \caption{ The scale-free photon angular distribution arising from Sommerfeld-enhanced (upper left), $s$-wave (upper right), $p$-wave (lower left), and $d$-wave (lower right) dark matter annihilation in an Einasto subhalo (the profile parameter $\alpha$ varies from $0.12$ to $0.25$, as labelled).} \label{fig:einasto} \end{figure*} \begin{figure}[hptb] \includegraphics[width=1.0\columnwidth]{Burkert.pdf} \caption{The scale-free photon angular distribution for the Burkert profile, with $n = -1, 0, 2, 4$, as labelled. } \label{fig:burkert} \end{figure} \begin{figure}[hptb] \includegraphics[width=1.0\columnwidth]{overplot_moore_and_1_4.pdf} \caption{The scale-free photon angular distribution for the Moore profile (solid lines), with $n = 0, 2, 4$, as labelled. For comparison, the scale-free angular distribution for generalized NFW ($\gamma = 1.4$) is also plotted (dotted lines). } \label{fig:moore} \end{figure} We see that for relatively cuspy profiles, smaller values of $n$ lead to an angular distribution which is more sharply peaked at small angles. On the other hand, for a cored profile, such as Burkert, the angular distribution is largely constant at small angles, regardless of $n$.
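Before turning to analytic limits, we illustrate how these scale-free quantities can be evaluated. The following is a minimal Python sketch (our illustration, not the code used to produce the tables and figures) of the simplest case, $s$-wave annihilation ($n=0$), for which $P_{n=0}^2 = \tilde\rho^2$ and no Eddington inversion is required; the cutoff radii are arbitrary numerical choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def rho(r, gamma=1.0):
    # Scale-free generalized NFW: rho(r) = r^-gamma (1+r)^-(3-gamma)
    return r**(-gamma) * (1.0 + r)**(-(3.0 - gamma))

def J_tilde(theta, gamma=1.0, rmax=200.0):
    # s-wave scale-free J of Eq. (Js_Jstot), with P_0^2 = rho^2.
    # The substitution r = theta*cosh(u) absorbs the integrable
    # endpoint singularity [1-(theta/r)^2]^(-1/2) at r = theta.
    umax = np.arccosh(rmax / theta)
    f = lambda u: theta * np.cosh(u) * rho(theta * np.cosh(u), gamma)**2
    return quad(f, 0.0, umax)[0]

def summary(gamma=1.0, theta_max=100.0):
    # Jtot = int dtheta theta J(theta); <theta>/theta_0 per Eq. (theta).
    # Nested quadrature: slow but simple.
    jtot = quad(lambda t: t * J_tilde(t, gamma), 0.0, theta_max, limit=200)[0]
    m1 = quad(lambda t: t**2 * J_tilde(t, gamma), 0.0, theta_max, limit=200)[0]
    return jtot, m1 / jtot

jtot, mean_theta = summary(gamma=1.0)
print(f"NFW (gamma=1), n=0: Jtot = {jtot:.4f}, <theta>/theta_0 = {mean_theta:.4f}")
\end{verbatim}
The velocity-dependent cases proceed identically once $P_n^2(\tilde r)$ has been tabulated from Eq.~\ref{eq:P2n} using the Eddington-inverted $\tilde f$.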
\subsection{Inner slope limit} To better understand the dependence of the gamma-ray angular distribution on the density profile and on the velocity-dependence of the dark matter annihilation cross section, we will consider the innermost region of the subhalo, for which $\tilde r \ll 1$. In this region, care must be taken during the numerical integration to achieve precise results, especially in the case of Sommerfeld-enhanced annihilation. \renew{The divergence near the origin requires fine-grained sampling of the integrands in order to obtain convergence of the integrals. However, the numerical accuracy of such integrals can be hard to estimate. Additionally, we cannot determine \textit{a priori} whether the integral will converge for any given model (as will be discussed for certain Sommerfeld-enhanced annihilation models later in this section).} Fortunately, we will find that if $\tilde \rho (\tilde r)$ has power-law behavior, then we can solve for $\tilde f (\tilde E)$ analytically in the inner slope region, giving us simple expressions for $P^2_n (\tilde r)$ and $\tilde J_{S(n)} (\tilde \theta)$, which can be matched to the full numerical calculation. We may relate the density distribution to the velocity distribution using \begin{eqnarray} \tilde \rho (\tilde r) &=& 4\pi \int_0^{\tilde v_{esc} (\tilde r)} d\tilde v ~ \tilde v^2 \tilde f(\tilde r, \tilde v) , \nonumber\\ &=& 4\sqrt{2}\pi \int_{\tilde \Phi (\tilde r)}^{\tilde \Phi (\infty)} d\tilde E ~ \tilde f (\tilde E) \sqrt{\tilde E - \tilde \Phi (\tilde r) } . \end{eqnarray} We assume that, in the inner slope region, we have $\tilde \rho (\tilde r) = \tilde \rho_0 \tilde r^{-\gamma}$, with $\gamma \new{\geq} 0$. We then have \begin{eqnarray} \tilde \Phi (\tilde r) &=& \frac{\tilde \rho_0}{(3-\gamma)(2-\gamma)} \tilde r^{2-\gamma} , \end{eqnarray} where we adopt the convention $\tilde \Phi (0) =0$. Defining $x = \tilde E / \tilde \Phi (\tilde r)$, we then have \begin{eqnarray} \tilde \rho_0 \tilde r^{-\gamma} &=& 4\sqrt{2}\pi \left(\tilde \Phi (\tilde r) \right)^{3/2} \int_1^{\frac{\tilde \Phi (\infty)}{\tilde \Phi (\tilde r)}} dx ~ \sqrt{x-1}~ \tilde f \left(x \tilde \Phi(\tilde r)\right) . \label{eq:rho_f_innerslope} \nonumber\\ \end{eqnarray} For $\tilde r \ll 1$ we may take $\tilde \Phi (\infty) / \tilde \Phi (\tilde r) \rightarrow \infty$, in which case the integral above depends on $\tilde r$ only through the argument of $\tilde f$. For $\gamma > 0$, we can solve Eq.~\ref{eq:rho_f_innerslope} with the ansatz $\tilde f (\tilde E) = \tilde f_0 \tilde E^\beta$, where $\beta = (\gamma - 6)/[2(2-\gamma)] < -3/2$ and \begin{eqnarray} \tilde f_0 &=& \frac{\tilde \rho_0}{4\sqrt{2} \pi} \left[\frac{\tilde \rho_0}{(3-\gamma)(2-\gamma)} \right]^{-(\beta + 3/2)} \nonumber\\ &\,& \times \left[\int_1^\infty dx~ x^\beta \sqrt{x-1} \right]^{-1} . \label{eq:f0} \end{eqnarray} This matches the expression found in Ref.~\cite{Baes_2021}.
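As an aside which we add here for convenience (it follows from a standard substitution, and is not taken from Ref.~\cite{Baes_2021}), the normalization integral in Eq.~\ref{eq:f0} has a closed form: letting $x \rightarrow 1/t$ maps it onto a Beta function,
\begin{eqnarray}
\int_1^\infty dx~ x^\beta \sqrt{x-1} &=& \int_0^1 dt~ t^{-\beta - 5/2} (1-t)^{1/2} = \frac{\Gamma(3/2)\, \Gamma(-\beta - 3/2)}{\Gamma(-\beta)} ,
\end{eqnarray}
which converges precisely for $\beta < -3/2$, i.e., for all $\gamma > 0$.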
Given this expression for $\tilde f (\tilde E (\tilde r, \tilde v))$, we can perform the integral in Eq.~\ref{eq:P2n}, yielding \begin{eqnarray} P^2_n (\tilde r \ll 1) &=& C_{\gamma, n} \tilde r^{b_n} , \end{eqnarray} where $b_n = n + \gamma \left(1-(6+n)/2\right)$ and \begin{eqnarray} C_{\gamma, n} &=& 16\pi^2 \tilde f_0^2 \left[\frac{\tilde \rho_0}{(3-\gamma)(2-\gamma)} \right]^{2\beta + (6+n)/2} \nonumber\\ &\,& \times \int_0^\infty dy_1 \int_0^\infty dy_2~ y_1^2 y_2^2 \left[\frac{y_1^2}{2} +1 \right]^\beta \left[\frac{y_2^2}{2} +1 \right]^\beta \nonumber\\ &\,& \times \left[\frac{(y_1 + y_2)^{n+2} - (|y_1 - y_2|)^{n+2}}{2(n+2)y_1 y_2} \right] . \end{eqnarray} Note, however, that this integral only converges if \new{ $ n < -3 - 2\beta = 2\gamma/(2-\gamma)$.} For larger values of $n$, the dark matter annihilation rate is dominated by high-velocity particles, and it is necessary to determine the velocity distribution outside of the small-$\tilde E$ regime. But for Sommerfeld-enhanced annihilation ($n=-1$), the integral will converge for all of the cuspy slopes we consider. Eq.~\ref{eq:Js_Jstot} then simplifies in the limit $\tilde \theta \ll 1$ to \begin{eqnarray} \tilde J_{S(n)} (\tilde \theta \ll 1) &\approx& C_{\gamma, n} \tilde \theta^{1+b_n} \int_{1}^{\tilde r_0 / \tilde \theta} dx \frac{x^{b_n}}{\sqrt{1-x^{-2}}} , \label{eq:JsInnerSlope} \end{eqnarray} where the integral in Eq.~\ref{eq:Js_Jstot} is truncated at $\tilde r_0 \leq 1$. We assume that the power-law description of $\tilde \rho$ is accurate for $\tilde r < \tilde r_0$, and truncate the integral outside this region. For $b_n < -1$ and $\tilde \theta \ll \tilde r_0$, the integral is insensitive to this cutoff. For a cuspy profile, we thus have analytical expressions for $\tilde J_{S(n)}$ at small $\tilde \theta$, and these expressions match the full result obtained from numerical integration \new{(see Fig.~\ref{fig:NFW}, upper left panel)}. It is interesting to note that the exponent $b_n$ exhibits a degeneracy between $\gamma$ and $n$. Thus, for example, the power-law behavior of $\tilde J_{S(n)}$ for the case of Sommerfeld-enhanced annihilation ($n=-1$) and a pure NFW profile ($\gamma = 1$) is identical to that of $s$-wave annihilation ($n=0$) for a generalized NFW profile with $\gamma = 1.25$. However, the normalization coefficients $C_{\gamma, n}$ are different. This implies that, for a cuspy profile, a detailed analysis of the angular distribution at both small angles and intermediate angles is in principle sufficient to resolve the velocity-dependence of dark matter annihilation. \begin{figure}[hptb] \includegraphics[width=0.95\columnwidth]{1_and_1_25.pdf} \caption{The scale-free photon angular distribution for a generalized NFW profile, with either $\gamma = 1.25$, $n=0$ (blue) or $\gamma = 1.0$, $n=-1$ (purple).} \label{fig:125} \end{figure} To illustrate this point, in Fig.~\ref{fig:125} we plot $\tilde J_{S(n)}(\tilde \theta) / \tilde J_{S(n)}^{\rm tot}$ for two generalized NFW profiles, $\gamma = 1$ ($n=-1$) and $\gamma = 1.25$ ($n=0$). This figure confirms our analytical result; both of these models yield angular distributions which exhibit the same behavior at small angles. But they differ at larger angles, implying that with sufficient data and angular resolution, it is in principle possible to determine the velocity-dependence of the annihilation cross section.
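This degeneracy is easy to scan numerically; the following snippet (ours, for illustration) evaluates $b_n$ and confirms the example above.
\begin{verbatim}
def b(n, gamma):
    # Inner-slope exponent of P_n^2 ~ r^{b_n}: b_n = n + gamma*(1 - (6+n)/2)
    return n + gamma * (1.0 - (6.0 + n) / 2.0)

# Sommerfeld (n=-1, gamma=1) vs s-wave (n=0, gamma=1.25):
print(b(-1, 1.0), b(0, 1.25))  # both -2.5, so J ~ theta^{1+b} = theta^{-1.5}
\end{verbatim}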
Indeed, for $\gamma = 1.25$, $n=0$, we find $\langle \theta \rangle / \theta_0 = 0.21$, which is significantly smaller than the value found for $\gamma = 1.0$, $n=-1$ ($\langle \theta \rangle / \theta_0 = 0.32$). \new{This result is to be expected, since the $\gamma = 1.25$, $n=0$ model has a much more cuspy profile than the $\gamma = 1.0$, $n=-1$ model.} \renew{Moreover, both profiles illustrated in Fig.~\ref{fig:125} have a density which falls off as $r^{-3}$ at large distance. If the profile were made less steep at large distances (in order for the angular distribution to fall off less rapidly), the mass of the halo would grow as a power law with distance.} Thus, if the slope of the angular dependence in the innermost region can be determined, then the scale at which that power-law behavior cuts off is sufficient to distinguish $s$-wave annihilation from Sommerfeld-enhanced annihilation, with Sommerfeld-enhanced annihilation producing a more extended angular distribution. \renew{Although we have plotted the angular distributions in terms of $\tilde \theta = \theta / \theta_0$, this result does not depend on one's ability to determine $r_s$ experimentally. A rescaling of $r_s$ (or, equivalently, $\theta_0$) would amount to a shift of one of the curves plotted in Fig.~\ref{fig:125}, but not a change in its shape.} In a similar vein, we have compared the angular distribution for the Moore profile and the generalized NFW profile ($\gamma = 1.4$) in Figure~\ref{fig:moore}. Both profiles have the same inner slope, but the Moore profile yields more extended emission. This result is echoed in Table~\ref{table:angular}, where we see that $\langle \theta \rangle / \theta_0$ is $\sim 20\%$ larger for a Moore profile than for generalized NFW with $\gamma = 1.4$, for $n = 0, 2, 4$. \begin{figure*}[hptb] \includegraphics[width=0.95\textwidth]{comparing_vel_dependence-2.pdf} \caption{Comparisons of different velocity-dependent models with the same DM profile. } \label{fig:n_compare} \end{figure*} \new{In Fig.~\ref{fig:n_compare}, we supplement the values of $\langle\theta\rangle/\theta_0$ by illustrating the differences in the angular spread of the annihilation of a given DM profile for different velocity-dependent models. For the cuspy profiles, Sommerfeld emission dominates near the center and at larger angles but is the smallest in between. On the other hand, $d$-wave emission is smallest near the center and at larger angles but dominates in between. Quantitatively, we can see from Table~\ref{table:angular} that for the cuspy profiles $\langle\theta\rangle/\theta_0$ increases with increasing $n$. } Interestingly, we find that, for Sommerfeld-enhanced annihilation ($n=-1$), we have $b_{n=-1} = -1 - 3\gamma/2 < -3$ for $\gamma > 4/3$. For $b_n < -3$, the integral for $\tilde J_{S(n)}^{\rm tot}$ diverges at small $\tilde \theta$. This implies that for a profile with $\gamma > 4/3$, such as the Moore profile, our treatment of Sommerfeld-enhanced annihilation has been inconsistent. In particular, we have implicitly assumed that dark matter annihilation does not deplete the dark matter density significantly, which may not be the case. Moreover, the $1/v$ Sommerfeld-enhancement of the annihilation cross section is cut off at a velocity-scale which depends on the mediator mass~\cite{ArkaniHamed:2008qn}, and we have assumed that this cutoff is at a velocity small enough to be irrelevant.
It is also interesting to note that, for cuspy profiles, $\tilde J_{S(n=2)}^{\rm tot}$ tends to be significantly smaller than $\tilde J_{S(n=0)}^{\rm tot}$, while $\tilde J_{S(n=4)}^{\rm tot}$ is only slightly smaller than $\tilde J_{S(n=2)}^{\rm tot}$. This may seem counter-intuitive, since the integrals which determine $P_n^2$ have integrands which scale as $\tilde v^n$. But as we have seen, for larger $n$, $P_n^2$ becomes more sensitive to the high-velocity tail of particles which are not confined to the core. As a result, we find $\langle \tilde v^4 \rangle \gg (\langle \tilde v^2 \rangle )^2$. \subsection{Cored profile} The situation is somewhat different for a cored profile. For the Burkert profile, which exhibits a core, the differences in the angular distribution arising from $n = -1, 0, 2$ or $4$ are much smaller. In particular, the angular distribution is flat at small angles, regardless of $n$. This implies that the morphology of the photon signal carries less information regarding the velocity-dependence of dark matter annihilation. We can again understand this behavior by considering an analytic approximation. Let us approximate the cored profile with $\tilde \rho (\tilde r) = \tilde \rho_0$ for $\tilde r < 1$, and assume the density vanishes rapidly for $\tilde r > 1$. For $\tilde r < 1$ we then have $\tilde \Phi (\tilde r) = (\tilde \rho_0 /6) \tilde r^2$, and Eq.~\ref{eq:rho_f_innerslope} can be rewritten as \begin{eqnarray} \tilde \rho_0 &=& 4\sqrt{2} \pi \left[\frac{\tilde \rho_0 \tilde r^{2}}{6} \right]^{3/2} \int_1^{\tilde r^{-2}} dx~ \sqrt{x-1} \times \tilde f\left(x \frac{\tilde \rho_0 \tilde r^{2}}{6} \right), \nonumber\\ \end{eqnarray} for small $\tilde r$, where we have made the approximation that particles do not explore the region outside the core. In this case, one cannot find a power-law solution for $\tilde f$ while taking the upper limit of integration to infinity, as the integral would not converge. Instead, this equation can be solved \new{for $\tilde r \ll 1$} by taking $\tilde f = (9\sqrt{3} / 4\pi) \tilde \rho_0^{-1/2}$. We thus see that, for a cored profile, the velocity distribution is independent of $\tilde E$ for paths confined to the innermost region. This implies that, for $\tilde r \ll 1$, $\tilde f$, and thus $P^2_n$, are independent of $\tilde r$. If the velocity distribution is independent of $\tilde r$, the angular distribution of the gamma-ray signal cannot depend on $n$, since the effects of velocity-suppression do not depend on the distance from the center of the subhalo. Indeed, we can confirm this result by noting that, for a cored profile, since $P^2_n$ is independent of $\tilde r$ at small $\tilde r$ for all $n$, we can rewrite Eq.~\ref{eq:JsInnerSlope} as \begin{eqnarray} \tilde J_{S(n)}^{cored} (\tilde \theta) &\propto& \tilde \theta \int_1^{\tilde r_0 / \tilde \theta} dx~[1-x^{-2}]^{-1/2} . \end{eqnarray} But in this case, we cannot ignore the upper limit of integration, and we find that $\tilde J_{S(n)}^{cored} (\tilde \theta)$ becomes independent of $\tilde \theta$ at small angle. This result matches what is found from a complete numerical calculation for the Burkert profile. More generally, we see from Table~\ref{table:angular} that, as profiles become more cored, the differences in $\langle \theta \rangle / \theta_0$ among the cases $n=-1, 0, 2$, and $4$ become smaller. The above argument suggests that the degeneracy of all four cases is only broken by the behavior of the profile at larger $\tilde r$, as one leaves the core.
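For completeness, the constant solution quoted above can be verified in one line (a step we spell out here): inserting a constant $\tilde f$ and using $\int_1^X dx\, \sqrt{x-1} = (2/3)(X-1)^{3/2} \approx (2/3)\, \tilde r^{-3}$ for $X = \tilde r^{-2}$ and $\tilde r \ll 1$ gives
\begin{eqnarray}
\tilde \rho_0 &=& 4\sqrt{2} \pi \left[\frac{\tilde \rho_0}{6} \right]^{3/2} \tilde r^3 \times \frac{2}{3}\, \tilde r^{-3} \times \tilde f = \frac{4\pi}{9\sqrt{3}}\, \tilde \rho_0^{3/2}\, \tilde f ,
\end{eqnarray}
whose solution is indeed $\tilde f = (9\sqrt{3}/4\pi)\, \tilde \rho_0^{-1/2}$, independent of $\tilde r$ and $\tilde E$.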
For a Burkert profile, $\langle \theta \rangle / \theta_0$ tends to decrease as $n$ increases. This behavior can be readily understood, because annihilation at large angles is dominated by particles which are far from the core. As particles get farther from the core, the escape velocity (which is the largest allowed velocity for a bound particle) decreases, suppressing annihilation for larger $n$. But interestingly, $\langle \theta \rangle / \theta_0$ tends to increase with $n$ for the case of generalized NFW. The suppression of annihilation far from the core with larger $n$ still occurs in this case. But there is an additional effect; $P_n^2 (\tilde r)$ has a less steep slope in the inner region for large $n$. Thus, for cuspy profiles, as $n$ increases, the angular distribution is suppressed both at large and very small angles, with the overall effect being to increase the average angular size of emission. For a cored profile like Burkert, on the other hand, the second effect is not present, as the angular distribution in the inner slope region is flat for any $n$. \section{Conclusion} \label{sec:conclusion} We have determined the effective $J$-factor for the cases of $s$-wave, $p$-wave, $d$-wave and Sommerfeld-enhanced (in the Coulomb limit) dark matter annihilation for a variety of dark matter profiles, including generalized NFW, Einasto, Burkert, and Moore. We have assumed that the dark matter velocity distribution is spherically symmetric and isotropic, and have recovered the velocity distribution from the density distribution by numerically solving the Eddington inversion equation. If the density profile is a power law in the inner slope region, then the velocity distribution in the inner slope region can be determined analytically, yielding results which match the full numerical calculation. \new{We have found that, for a large class of profiles, the angular dependence of the photon flux at small angles is completely determined by the steepness of the cusp and the power-law velocity dependence. Although there is a degeneracy between these two quantities in the angular distribution at small angles, this degeneracy is broken at larger angles. } \new{ For a cored profile, on the other hand, the velocity distribution is largely independent of position. Thus, although the velocity-dependence of the annihilation cross section will affect the overall rate of dark matter annihilation, it will not affect the distribution within the core. Instead, the effect of the velocity-dependence on the photon angular distribution is largely determined by what happens at the edge of the core. } Our analysis has focused on the magnitude and angular distribution of the dark matter signal. We have not considered astrophysical backgrounds, or the angular resolution of a realistic detector. It would be interesting to apply these results to a particular instrument in development, to determine the specifications needed to distinguish the velocity-dependence of a potential signal in practice. For a cuspy profile, it is apparent from Figure~\ref{fig:NFW} that, to resolve the power-law angular dependence of the inner slope region, one would need an angular resolution of better than $1/10$ of the angle subtended by the scale radius. Interestingly, we have found that if the dark matter density profile has a power-law cusp steeper than $\gamma = 4/3$ (an example is the Moore profile), then the rate of Sommerfeld-enhanced annihilation in the Coulomb limit diverges at the core.
In a specific particle physics model, one expects that the $1/v$ Sommerfeld-enhancement in the Coulomb limit will not be valid at arbitrarily small velocities, unless the particle mediating dark matter self-interactions is truly massless. It is often assumed that this cutoff occurs at velocities which are negligible, but if the profile is steep enough, then this effect cannot be ignored. Moreover, if the dark matter annihilation rate at the core is sufficiently large, then the effect of annihilation on the dark matter distribution also cannot be ignored. It would be interesting to consider Sommerfeld-enhanced annihilation in the very cuspy limit in more detail. As we have seen, one would need excellent angular resolution to robustly distinguish the dark matter velocity-dependence from a single dark matter subhalo (for recent work on determining the velocity-dependence using an ensemble of subhalos, see, for example,~\cite{Baxter:2021pzi,Runburg:2021pwh}). The Galactic Center is a larger target, and it would be interesting to perform a similar analysis for that case. One important difference, in that case, is that there is a large baryonic contribution to the gravitational potential, which would affect the dark matter velocity distribution. {\bf Acknowledgements} We are grateful to Andrew B.~Pace and Louis E.~Strigari for useful discussions. The work of BB and VL is supported in part by the Undergraduate Research Opportunities Program, Office of the Vice Provost for Research and Scholarship (OVPRS) at the University of Hawai`i at Mānoa. The work of JK is supported in part by DOE grant DE-SC0010504. The work of JR is supported by NSF grant AST-1934744.
\section{Motivation and significance} \label{1_motivation} Developing rigorous machine learning models is crucial in improving the performance of AI solutions~\cite{brown_machine_2021}. However, building custom machine learning models comes with numerous significant challenges, including high demands for time, resources, and specialist expertise. Other challenges include the potential for scalability issues, assessing and maintaining data quality, and technical debt in managing code bases \cite{baier_challenges_2019}. The time required for resolving issues during development is significantly longer for codebases with low code quality compared to those with high code quality~\cite{tornhill2022code}. In a conventional industrial data science environment, members across different teams (such as data engineers, data scientists, and software engineers) are required to collaborate effectively to take a machine learning model from conceptualisation to deployment. Communication across teams can be challenging, adding to the complexities of developing and deploying AI solutions. These inefficiencies indicate the need for a tool that can remove the burden of developing and optimising machine learning models manually. Automated machine learning (AutoML) is one solution that aims to address some of the concerns of developing and deploying machine learning solutions. AutoML allows users to automate many of the tasks of the model building process, such as data cleaning, feature engineering, and model development and evaluation. Automating these tasks significantly reduces the time and costs associated with the machine learning pipeline. evoML is an automated tool that brings the entire data science cycle onto a single platform. It provides options to generate and deploy machine learning models in a few easy steps, at an expedited rate, with minimal input from data scientists and developers. In contrast to other AutoML platforms, a critical element of evoML is its functionality for multi-objective and code optimisation. Optimisation can be rather cumbersome, particularly in commercial development environments; as a result, developers tend to minimise optimisation efforts or resort to single-objective optimisation. evoML enables teams to develop and optimise machine learning models with ease, making a strong case for its adoption in commercial software development settings. \section{Software description} \label{2_software_desc} evoML is a software platform that offers a range of capabilities for data preprocessing, feature engineering, model generation, model optimisation, model evaluation, model code optimisation, and model deployment. These functionalities can be accessed through a web-based interface or through a workstation client. A key component of the platform is the model optimisation feature, which is integrated into the model development process. evoML also includes visualisations to aid in the analysis of outputs from each section of the platform. \subsection{Glossary of evoML Features and Functionalities} \textbf{Best Model}: The model that evoML suggests as the one with the best performance metrics for a selected task. \textbf{Data Viewer}: A feature providing users with a cross-sectional view of the dataset. \textbf{Dataset}: The dataset created on evoML using data ingested by users. \textbf{Deployed Models}: Models that have been deployed to carry out ML-based prediction tasks required by the user.
\textbf{Feature Engineering}: In Feature Engineering, evoML provides automated functionalities to produce more meaningful features from existing ones in the dataset. \textbf{Features}: This functionality lists the features of a given dataset. These features will be used by evoML to generate further insights and visualisations. \textbf{Green Metrics}: Selecting the Green Metrics feature will include training/prediction carbon emissions and electricity consumption as objectives to be optimised in the model. \textbf{Machine Learning Task}: evoML offers three machine learning tasks: (1) classification, (2) regression, and (3) forecasting. Based on the dataset and the prediction target, the platform sets the machine learning task to one of the above three. Users are also able to change the machine learning task as preferred. \textbf{ML Models}: ML Models refer to models that have been generated by the platform, including the best model. \textbf{Relationships}: Relationships provides information and visualisations on correlations observed between different variables of a dataset. \textbf{Trial}: Term used to refer to the end-to-end model building cycle of evoML. A trial consists of data preprocessing, feature engineering, model building, and model evaluation. \subsection{Software Architecture} evoML offers two options for user interaction: a web interface designed for users with limited coding experience, and an evoML client that enables advanced users to integrate the platform into their existing systems. This flexible software architecture enables users of all skill levels to access and use evoML's range of machine learning capabilities. The web interface provides an easy-to-use, visual interface that allows users to build and optimise models without needing to write code. For advanced users who are comfortable with coding, the evoML client provides a code-based interface that can be easily integrated into existing systems and workflows. The platform guides users through the following phases when moving from conceptualisation to deployment of machine learning models: First, the \textbf{data preprocessing phase} allows users to upload and view their datasets using the ``dataset'' feature. Next, the \textbf{feature engineering phase} involves selecting and manipulating features from the dataset to create the most effective features for the desired task. The platform then builds machine learning models for the selected task. After that, the \textbf{model optimisation phase} uses an iterative process to optimise the developed models and determine the ``best model'' for the task. The best model, along with relevant metrics and visualisations, is provided to users to help them make informed decisions about their prediction task. Lastly, in the \textbf{model code optimisation} and \textbf{model deployment} phases, evoML provides functionalities for users to optimise model code and deploy models for their use case, allowing users to easily bring their models into a usable state and incorporate them into their workflow or product. The overall architecture of this process is illustrated in Figure~\ref{fig:evoml-arch}.
For a deeper understanding of the evoML architecture, refer to the documentation available at: \url{https://docs.evoml.ai/} \begin{figure*}[ht] \centering \includegraphics[width=0.9\linewidth]{img/evoml-flowchart-3.png} \caption{Overview of evoML software architecture} \label{fig:evoml-arch} \end{figure*} \subsection{Primary Components} There are two primary components of the evoML platform: (1) Datasets and (2) Trials. \subsubsection{Datasets} The datasets component of evoML allows users to upload their data to the platform and perform analysis to identify trends and patterns within the data. This feature enables users to gain a better understanding of their data and to inform the development of machine learning models that can effectively extract valuable insights and make accurate predictions. \textbf{Data upload and preprocessing}: New data can be uploaded to evoML from one of the following sources: \begin{enumerate} \item Local device: Data files uploaded in CSV, Feather, Parquet, JSON, or Avro format, as well as archives. \item Database: Data can be sourced from an external database, with support for a wide range of platforms including MySQL, PostgreSQL, MongoDB, KDB, and Exasol. \item Storage service: Data may be sourced from cloud storage services such as AWS S3, Microsoft Azure, or Minio. \item FTP: Data uploaded from an FTP server. \end{enumerate} Upon uploading data, evoML performs statistical analysis to aid users in exploring the data, helping them understand the characteristics and patterns present in the data before building a model. \textbf{Data evaluation}: evoML provides data visualisation tools to help users evaluate the validity of uploaded datasets. These visualisations provide a clear and intuitive representation of the data, allowing users to easily identify any potential issues or inconsistencies that may impact the accuracy of their model. By thoroughly evaluating their datasets, users can ensure that their models are built on a strong foundation of reliable and relevant data. \textbf{Feature engineering}: The feature engineering component of evoML has the capability to identify feature correlations and modify and combine features to derive more useful features to be used in the model. Figure~\ref{fig:feature-eng} provides an overview of the feature engineering functionality of the platform. \begin{figure*}[ht] \centering \includegraphics[width=0.9\linewidth]{img/feature-engineering-new.png} \caption{Overview of the feature engineering functionality} \label{fig:feature-eng} \end{figure*} \subsubsection{Trials} The Trials component of evoML is a key feature that helps users to develop, evaluate, optimise, and deploy machine learning models. This includes tasks such as selecting and preprocessing data, training and evaluating models using various algorithms and hyperparameter configurations, and optimising model performance through techniques such as source code optimisation and internal representation modification. Once a model has been developed and optimised, the trials component can then be used to deploy the model for use in production environments. Overall, the trials component of evoML provides a comprehensive set of tools and capabilities that enable users to efficiently develop and deploy machine learning models that are optimised for their specific use cases and deliver value to their organisations.
\textbf{Machine learning model creation}: An existing or a newly created dataset can be used to find the optimal machine learning model to carry out a selected machine learning task. evoML provides the functionality for a user to select a feature (i.e., a column) to predict, which will provide the basis for the machine learning task. A range of additional options is provided to refine the scope of the trial. Users can also choose the loss function to optimise, including green metrics such as energy consumption or carbon emissions. For each trial, evoML provides a set of options for feature inclusion, feature selection, and feature generation of the model. Users are able to select or omit features from the dataset to be considered in the model development process. There are also options to customise feature generation, for instance by selecting the nature of combinations of variables. \textbf{Multi-objective optimisation}: As highlighted in the introduction, a valuable feature of evoML is its optimisation capabilities, which enable users to optimise the performance of their machine learning models in various ways. During the trial creation process, users can select up to three objectives to optimise. These objectives may include metrics such as training time, prediction time, green metrics, and explainability, among others. This allows users to tailor their models to the specific requirements and constraints of their use cases. For instance, users may want to optimise models for faster training and prediction times in order to reduce the computational resources required for model deployment. Alternatively, users may prioritise green metrics, such as energy efficiency or carbon footprint, in order to minimise the environmental impact of their models. By providing the ability to optimise models based on a wide range of objectives, evoML enables users to develop and deploy machine learning models that are tailored to their specific needs and constraints. \textbf{Model explanation and interpretation}: evoML provides a range of tools and capabilities for interpreting and evaluating the performance of machine learning models. Upon the creation of a model, users can use a variety of visualisations, such as the confusion matrix, ROC curve, precision-recall curve, and density plot, to understand the model's behaviour and identify areas for improvement. Additionally, evoML provides a range of metrics, such as F1 score, precision, recall, accuracy, and log loss, which are available for train, validation, and test datasets, to help users quantitatively evaluate the performance of the developed models. These tools and capabilities enable users to more effectively understand and optimise the performance of their models, helping to ensure that they are delivering accurate and valuable insights. \textbf{Model code optimisation}: Using evoML, users can identify the optimal model and further enhance its speed and efficiency. This is achieved through a variety of techniques, including lower-level source code optimisation~\cite{10.1145/3236024.3236043} and modifications to the internal representation of the model~\cite{nakandala2019compiling}, which improve the way in which the model processes data and makes predictions. \subsection{Supported Machine Learning Models} Nasteski~\cite{nasteski17} provides an overview of supervised learning models.
Based on those models' effectiveness for certain types of tasks, their popularity, the availability of resources to support and maintain them, and user demand, evoML includes a wide variety of machine learning algorithms, as well as neural network algorithms, for tasks such as classification, regression, and time-series forecasting. Specifically, the platform includes 46 classification algorithms, 46 regression algorithms, and 6 time-series forecasting algorithms, providing users with a diverse set of tools to choose from when developing machine learning models. Table~\ref{tab:clf}, Table~\ref{tab:reg}, and Table~\ref{tab:for} list the machine learning classification, regression, and time-series forecasting algorithms that are available within the evoML platform. \begin{table}[ht] \caption{Classification models}\label{tab:clf} \begin{tabular}{@{}ll@{}} \toprule Model type & Model name \\ \midrule\midrule Bayesian & \begin{tabular}[c]{@{}l@{}}Gaussian Process, Gaussian Naive Bayes, \\ Bernoulli Naive Bayes, \\ Linear Discriminant Analysis, \\ Quadratic Discriminant Analysis\end{tabular} \\ \midrule Ensemble & \begin{tabular}[c]{@{}l@{}}Random Forest, Bagging, \\ Extremely Randomized Tree Ensemble, \\ Gradient Boosting, AdaBoost, \\ CatBoosting, LightGBM\end{tabular} \\ \midrule Gradient & \begin{tabular}[c]{@{}l@{}}Adaptive Gradient, Coordinate Descent, \\ Fast Iterative Shrinkage/Thresholding, \\ Stochastic Averaged Gradient, \\ Stochastic Averaged Gradient Ascent, \\ Stochastic Variance-reduced Gradient\end{tabular} \\ \midrule Kernel & \begin{tabular}[c]{@{}l@{}}Gaussian Process, Label Propagation, \\ Label Spreading, Support Vector Machine, \\ Linear Support Vector Machine, Kernel SVM\end{tabular} \\ \midrule Linear & \begin{tabular}[c]{@{}l@{}}Logistic Regression, Logistic Regression CV, \\ Ridge, Ridge CV, Perceptron, Passive Aggressive, \\ Stochastic Dual Coordinate Ascent,\\ Stochastic Gradient Descent\end{tabular} \\ \midrule Nearest Neighbors & \begin{tabular}[c]{@{}l@{}}K-Nearest Neighbors, Nearest Centroid, \\ Radius Neighbors\end{tabular} \\ \midrule Neural Network & \begin{tabular}[c]{@{}l@{}}Multilayer Perceptron,\\ Convolutional Neural Network, Recurrent Neural Network,\\ Long Short-Term Memory, Gated Recurrent Unit,\\ Fully Convolutional Network\end{tabular} \\ \midrule Semi Supervised & Label Propagation, Label Spreading \\ \midrule \begin{tabular}[c]{@{}l@{}}Support Vector \\ Machine\end{tabular} & \begin{tabular}[c]{@{}l@{}}Support Vector Machine, Kernel SVM,\\ Linear Support Vector Machine\end{tabular} \\ \midrule Tree-based & \begin{tabular}[c]{@{}l@{}}Random Forest, Gradient Boosting, \\ Decision Tree, CatBoosting, LightGBM,\\ Extremely Randomized Tree Ensemble,\\ Extremely Randomized Tree\end{tabular} \\ \bottomrule \end{tabular} \end{table} \begin{table}[ht] \caption{Regression models}\label{tab:reg} \begin{tabular}{@{}ll@{}} \toprule Model type & Model name \\ \midrule\midrule Bayesian & \begin{tabular}[c]{@{}l@{}}Bayesian Ridge, Gaussian Process,\\ Automatic Relevance Determination\end{tabular} \\ \midrule Ensemble & \begin{tabular}[c]{@{}l@{}}Gradient Boosting, Random Forest, \\ AdaBoost, Bagging, CatBoosting, \\ LightGBM\end{tabular} \\ \midrule Gradient & \begin{tabular}[c]{@{}l@{}}Adaptive Gradient, Coordinate Descent, \\ Fast Iterative Shrinkage/Thresholding, \\ Stochastic Averaged Gradient, \\ Stochastic Averaged Gradient Ascent\end{tabular} \\ \midrule Kernel & Kernel Ridge, Gaussian Process \\ \midrule Linear &
\begin{tabular}[c]{@{}l@{}}Linear Regression, Ridge, Ridge CV, \\ Lasso, Lasso CV, Elastic Net, Elastic Net CV, \\ Least Angle, Lasso Lars, Bayesian Ridge, \\ Automatic Relevance Determination, \\ Stochastic Gradient Descent, Passive Aggressive, \\ Random Sample Consensus, Huber, Theil-Sen,\\ Partial Least Squares, Stochastic Dual Coordinate Ascent,\\ Orthogonal Matching Pursuit\end{tabular} \\ \midrule Nearest Neighbors & K-Nearest Neighbors, Radius Neighbors \\ \midrule Neural Network & \begin{tabular}[c]{@{}l@{}}Multilayer Perceptron,\\ Convolutional Neural Network, Recurrent Neural Network,\\ Long Short-Term Memory, Gated Recurrent Unit,\\ Fully Convolutional Network\end{tabular} \\ \midrule Tree-based & \begin{tabular}[c]{@{}l@{}}Decision Tree, Extremely Randomized Tree, \\ Gradient Boosting, Random Forest, \\ CatBoosting, LightGBM\end{tabular} \\ \bottomrule \end{tabular} \end{table} \begin{table}[ht] \caption{Forecasting models}\label{tab:for} \begin{tabular}{@{}ll@{}} \toprule Forecasting models & \begin{tabular}[c]{@{}l@{}}Auto ARIMA Forecaster, Auto ETS, \\ Local Global Trend Forecaster, Naive Forecaster, \\ Prophet Forecaster, Damped Local Trend Forecaster\end{tabular} \\ \bottomrule \end{tabular} \end{table} \subsection{Generative AI and code optimisation} evoML is centred around the fundamentals of generative AI and nature-inspired optimisation (Reinforcement Learning~\cite{kaelbling1996reinforcement}, Evolutionary Algorithms~\cite{zhou2011multiobjective}, Bayesian Optimisation~\cite{garnett_bayesoptbook_2023}, etc.). With data as input, evoML generates optimised machine learning models. Code optimisation allows evoML to further optimise models at the code level, and users are able to go through the model code to get a clear sense of the model's prediction process. Giavrimis et al.~\cite{9678650} conducted a study to assess the impact of optimising the codebase of the mlpack machine learning library on its performance. They found that through code optimisation, the library was able to achieve a $27.9\%$ reduction in execution time and a $2.7\%$ reduction in memory usage while maintaining the library's predictive capabilities. \section{Illustrative Examples} \label{3_examples} \textbf{Customer churn prediction with evoML} This example considers a customer churn dataset taken from Kaggle\footnote{Available at \url{https://www.kaggle.com/datasets/blastchar/telco-customer-churn}}. The dataset contains information on a fictional telecommunications company that offers home phone and internet services to customers. It captures information on 7,043 customers across 21 columns. These columns, such as ``gender'', become features in the model. Figure~\ref{fig:churn-1} gives a snapshot of the dataset uploaded to evoML. The example will use the above dataset to build a model that can predict whether a given customer is likely to churn or not. This is a classification task, with \textbf{churn} being the target feature. \begin{figure*}[ht] \centering \includegraphics[width=0.9\linewidth]{img/churn-1.jpg} \caption{Overview of churn dataset} \label{fig:churn-1} \end{figure*} To develop the model, users begin by creating a ``trial'' on evoML and selecting the appropriate dataset and task. The platform then uses a range of techniques to develop and optimise a model that is best suited for the task at hand. The feature engineering overview allows users to see the changes that evoML has applied to the existing features in the dataset.
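evoML itself is driven through its web interface or client rather than through user-written code. Purely as a point of reference for the kind of pipeline a trial automates, the following is a minimal scikit-learn sketch of the same churn task; the file and column names follow the Kaggle telco dataset and should be treated as assumptions, and this is not evoML's API.
\begin{verbatim}
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Kaggle telco churn data; "Churn" is the target feature
df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")
y = (df["Churn"] == "Yes").astype(int)
X = df.drop(columns=["customerID", "Churn"])
X["TotalCharges"] = pd.to_numeric(X["TotalCharges"], errors="coerce").fillna(0.0)

# One-hot encode categorical columns; pass numeric columns through
categorical = X.select_dtypes(include="object").columns.tolist()
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("clf", GradientBoostingClassifier()),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, model.predict(X_te)))
\end{verbatim}
A trial additionally searches over model families and hyperparameters, engineers features, and optionally optimises several objectives at once, which is the part that is laborious to reproduce by hand.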
Upon completion of the ``trial'', evoML returns the best model for the churn prediction task, along with additional information to evaluate its performance. The platform also includes a deployment option that exposes the best model and makes it available for instant churn predictions. \section{Impact} \label{4_impact} evoML is a unique platform that integrates the entire data science cycle into a single, cohesive environment. While other automated machine learning (autoML) platforms offer automated model building capabilities, none of them currently provide the ability to optimise the models within the same automated process. evoML stands out as the only platform that offers automated model and code optimisation as a core feature, enabling users to optimise the performance of their machine learning models and artificial intelligence solutions in a way that is consistent with net-zero impact goals. Additionally, evoML's multi-objective optimisation functionality allows users to optimise models over multiple objectives simultaneously, further improving performance while also considering the environmental impact of the model's deployment. These optimisation capabilities make it easy for data scientists and developers to incorporate optimisation into their workflows in a way that is aligned with net-zero impact objectives, ultimately leading to better overall performance of machine learning models. \section{Conclusions} \label{5_conclusion} evoML is a comprehensive automated machine learning platform that provides a range of functionalities for data wrangling, feature engineering, model development, model evaluation, model code optimisation, and model deployment. These capabilities can be accessed through the evoML user interface as a no-code option or through the evoML client for users with coding experience. A key feature of evoML is its multi-objective optimisation capability, which enables users to optimise models based on multiple criteria; this feature is not currently offered by other autoML platforms. The inclusion of multi-objective optimisation in evoML helps to overcome the challenges and resource constraints that often hinder manual coding and optimisation efforts. As a result, developers can implement optimisation features and build high-performing machine learning models without investing significant additional time and effort. Overall, evoML is a valuable tool for anyone looking to automate the machine learning model development and optimisation process. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Extensions of DQN}\label{DQN Techniques} \subsection{Target Network and \texorpdfstring{{\Large $\epsilon$}}{epsilon}-Greedy Strategy}\label{target network section} As introduced in the earliest paper on DQN \cite{DQN}, it is usually necessary to keep a target network as a reference when training the reinforcement learning network. Because the loss minimized in Eq. (\ref{TD loss}), i.e.,\\ \begin{equation}\label{TD loss (copy)} L=Q(s_t,a_t)-\left(r(s_t,a_t)+\gamma\,\max_{a_{t+1}}Q(s_{t+1},a_{t+1})\right)\,, \end{equation}\\ compares the Q-network against itself, a direct gradient descent method is found to be generally unstable. Therefore, to stabilize the training, in Eq. (\ref{TD loss (copy)}) we use a fixed target network to evaluate the $Q$-function on the right, use the currently trained network to evaluate the $Q$-function on the left, and let the gradient descent modify the parameters of the currently trained network only. In order to synchronise the target network with the current learning progress, we update the target network by assigning the trained network's parameters to it after a predetermined number of training steps, and then repeat the training. The training loss is therefore\\ \begin{equation}\label{TD loss target network} L=Q_{\theta_1}(s_t,a_t)-\left(r(s_t,a_t)+\gamma\,\max_{a_{t+1}}Q_{\theta_2}(s_{t+1},a_{t+1})\right)\,, \end{equation}\\ where $\theta_1$ represents the parameters of the currently trained neural network and $\theta_2$ represents those of the target network. Only the parameters $\theta_1$ are modified by the gradient descent.\\ The $\epsilon$-greedy strategy is a strategy for taking actions during learning. The greedy strategy always takes the action with the highest expected reward; the $\epsilon$-greedy strategy follows the greedy strategy with probability $1-\epsilon$ and takes a random action with probability $\epsilon$. This random search explores small deviations from the greedy strategy and encourages exploration of the action space. Typically the hyperparameter $\epsilon$ is manually annealed during training. This technique is standard, and it is used especially when no other exploration strategy is adopted.\\ \subsection{Double Q-Learning}\label{DoubleDQN section} This technique is introduced in Ref. \cite{DoubleDQN}. Looking at the optimized loss in Eq. (\ref{TD loss (copy)}), we notice that it involves a $\max$ operation, which always takes the maximal value of the neural network output. Because the neural network has fluctuating parameter values, this $\max$ operation biases the neural-network estimate of the function $Q$ upward. In order to make this $Q$ estimate centre on its correct value, we separate the decision of the best action $\max_{a_{t+1}}$ from the evaluation of the $Q$-function by using two different neural networks.\\ We keep two neural networks with different parameters $\theta_1$ and $\theta_2$. To decide the optimal action $a^*_{t+1}$, we use the network with parameters $\theta_1$:\\ \begin{equation} a^*_{t+1}=\argmax_{a_{t+1}}Q_{\theta_1}(s_{t+1},a_{t+1}). \end{equation}\\ Then we evaluate this optimal action $a^*_{t+1}$ on the other neural network $\theta_2$:\\ \begin{equation} r(s_t,a_t)+\gamma\,Q_{\theta_2}(s_{t+1},a^*_{t+1}).
\end{equation}\\ Because the fluctuations in the two neural networks are typically uncorrelated, this estimated $Q$-function no longer always takes the maximum of a network's fluctuations. The above strategy does not impede training because the two networks $\theta_1$ and $\theta_2$ learn the reinforcement learning task simultaneously, and both of them output reasonable decisions throughout training. If both of them worked perfectly, the decision output by one network would be identical to that from the other, and the loss would reduce to the previous case in Eq. (\ref{TD loss (copy)}). In practice, it is found sufficient to use the target network of Section \ref{target network section} as the network $\theta_2$, and to use the trained network as $\theta_1$. Note that the choice of $a^*_{t+1}$ depends on the trained network $\theta_1$ but is not involved in the gradient computation of the gradient descent.\\ \subsection{Dueling DQN Structure}\label{Duel DQN section} This technique is introduced in Ref. \cite{DuelDQN}. As introduced in Chapter \ref{deep reinforcement learning}, deep learning usually comes with numerical imprecision and noise. Due to the gradient-descent-based training, when the output value of the neural network is large, the gradients are large, and so is the numerical imprecision resulting from gradient descent. Therefore, if the $Q$-function values output by the network are generally large, it is difficult for the network to further learn small details of the $Q$-function. However, since we use the formula $a^*_{t+1}=\argmax_{a_{t+1}}Q_{\theta_1}(s_{t+1},a_{t+1})$ to decide the best action $a^*_{t+1}$, it is necessary to reliably compare the $Q$ values of different actions $a_{t+1}$, which can be very close and hard to learn. To alleviate this problem, a dueling structure of the DQN was proposed, which separately predicts the mean value of $Q$ over all actions and the deviation from the mean for each action choice $a$. In this way, even though the mean is large and has a large error, the deviations from the mean can be small and reliably learned, which gives a correct action choice and a better final performance.\\ \subsection{Prioritized Memory Replay}\label{Prioritized Replay} This technique is introduced in Ref. \cite{prioritizedSampling}. Its idea is simple: if the loss on some training data is high, then there is more to learn from those data, and we should sample them more often so that they are learned more times. This strategy is especially important when the reward is sparse, or when only a few moments are crucial. As demonstrated in Ref. \cite{prioritizedSampling}, in some extreme cases this strategy accelerates learning by a factor of $10^3$, which is striking. Therefore, it is almost always used together with usual reinforcement learning.\\ To implement the prioritized sampling, we first need to define the priority. As is common, we set the probability $p$ of a data point being sampled as\\ \begin{equation}\label{sampling priority} p\propto |L|^\alpha, \end{equation}\\ where $L$ is the previously estimated loss on the data point and $\alpha$ is a hyperparameter.
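As a minimal illustration of Eq. (\ref{sampling priority}), the following sketch draws a minibatch of buffer indices with probabilities proportional to $|L|^\alpha$. The array sizes and the flat-array implementation are illustrative only; the actual buffer uses a sum tree for efficient sampling, as described in Section \ref{Prioritized replay setting}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
losses = rng.exponential(size=10_000)  # placeholder |L| values, one per transition
alpha = 0.4                            # priority exponent, p proportional to |L|**alpha
p = np.abs(losses) ** alpha
p /= p.sum()                           # normalize to a probability distribution
batch = rng.choice(p.size, size=512, p=p)  # minibatch indices, drawn with replacement
\end{verbatim}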
In order to keep the original optimization target unaffected by the prioritized sampling, the loss of each data point is rescaled by the probability of that data point being sampled, so that the loss minimization is still carried out equally over the whole dataset rather than concentrating on high-loss cases.\\ In cases where the memory replay is of a limited size, when the replay buffer is full and new experiences must be stored, we remove the samples with the lowest losses from the replay buffer to make room for the new experiences, so that the memory is best utilized. Because this strategy would potentially change the optimization target, we do so only when the memory replay can definitely not be enlarged.\\ \subsection{Noisy DQN}\label{Noisy DQN} The noisy DQN is introduced in Ref. \cite{NoisyDQN}, and it is an exploration strategy that supplements the $\epsilon$-greedy exploration. As explained in Section \ref{target network section}, the $\epsilon$-greedy strategy only explores small deviations from the deterministic greedy strategy and cannot explore totally different long-term strategies. Instead of completely random exploration, we may utilize the neural network itself to explore, by introducing random variables into the neural network. In Ref. \cite{NoisyDQN}, noisy layers are used to replace the usual network layers. A noisy layer can be written as\\ \begin{equation} \boldsymbol{y}=(\boldsymbol{b}+\boldsymbol{b}_{\text{noisy}}\odot\epsilon_b) + (\textbf{W}+\textbf{W}_{\text{noisy}}\odot\epsilon_w)\boldsymbol{x}, \end{equation}\\ where $\epsilon_b$ and $\epsilon_w$ are sampled random variables, $\odot$ denotes element-wise multiplication, and $\boldsymbol{b}_{\text{noisy}}$ and $\textbf{W}_{\text{noisy}}$ are learned parameters that rescale the noise. In most cases $\boldsymbol{b}_{\text{noisy}}$ and $\textbf{W}_{\text{noisy}}$ gradually shrink during learning, showing a self-annealing behaviour. Concerning the implementation, we follow Ref. \cite{NoisyDQN} and use factorized Gaussian noises to construct $\epsilon_w$, rescaled by the square-root function while keeping the sign.\\ There is one technical caveat when using this strategy. During training, the noise parameters $\epsilon_b$ and $\epsilon_w$ should be sampled differently for different data points in a training batch; otherwise $\boldsymbol{b}_{\text{noisy}}$ and $\textbf{W}_{\text{noisy}}$ will always experience gradients similar to those of $\boldsymbol{b}$ and $\textbf{W}$, and the method will not work correctly. For efficiency, one single noise realization can be shared by several training data points, and the number of noise realizations used is a hyperparameter.\\ The above are the techniques adopted in our reinforcement learning. Two other techniques in Rainbow DQN \cite{RainbowDQN} are not used; however, for the sake of completeness, we briefly discuss and explain them in the following.\\ \subsection{Multi-Step Learning} As introduced in Ref. \cite{ReinforcementlearningAnintroduction}, we can consider an alternative to Eq. (\ref{TD loss (copy)}) that learns multiple time steps altogether rather than from step $t$ to $t+1$. The motivation comes from the fact that, when the reinforcement learning proceeds step by step, the information about future rewards also propagates step by step, and if the number of steps is large, the learning process can be very slow.
This results in a modified loss:\\ \begin{equation}\label{multi-step loss} L=Q(s_t,a_t)-\left(\sum_{k=0}^{n-1}\gamma^k r(s_{t+k},a_{t+k})+\gamma^n\,\max_{a_{t+n}}Q(s_{t+n},a_{t+n})\right)\,, \end{equation}\\ where $n$ is the number of steps that are considered altogether as one large step. This strategy is heuristic, and although it can result in faster learning, it no longer preserves the original definition of the $Q$-function as in Section \ref{reinforcement learning} and therefore does not theoretically guarantee convergence to the optimal solution. Since in our experiments we compare the optimal controls, we do not adopt this strategy.\\ \subsection{Distributional Learning} As suggested in Ref. \cite{DistributionalDQN}, it is possible to let the neural network learn the distribution of future rewards rather than the $Q$-function, which is the mean of future rewards. This strategy modifies the loss function drastically, changing it from the difference between two values to the difference between two distributions, as measured by the Kullback--Leibler divergence. This strategy has the advantage that its loss function correctly represents how well the AI has understood the behaviour of the environment in a stochastic setting, while the loss defined with the $Q$-function may be dominated by the variance that results from environmental noises. Therefore, when combined with prioritized sampling, this distributional learning strategy can significantly increase the performance in the long term.\\ However, in order to approximately represent a probability distribution, the authors of the original paper discretized the space of future rewards and used one output value for each discretized point per action choice \cite{DistributionalDQN}. This requires us to presume the region of rewards that can possibly be achieved, and to apply a proper discretization of the rewards. These requirements contradict our original purpose of studying the quantum control problems, where we hoped to learn what rewards could be achieved and how precisely an optimal reward could be acquired. Therefore, we do not adopt this strategy.\\ \section{Hyperparameter Settings}\label{DQN Settings} In this section, we present our hyperparameter settings and discuss the important ones. All our hyperparameter settings are determined empirically, and we do not guarantee their optimality.\\ \subsection{Gradient Descent} We use the RMSprop gradient descent optimizer provided by PyTorch \cite{RMSprop}. The learning rate is annealed from $2\times10^{-4}$ to $1\times10^{-6}$ in 5 steps. The 5 learning rates are: $2\times10^{-4}$, $4\times10^{-5}$, $8\times10^{-6}$, $2\times10^{-6}$, $1\times10^{-6}$. The learning rate schedules during training differ between problems and cases, including the different input cases, and we have set them empirically by observing when the loss stopped changing and levelled off. The momentum parameter is set to 0.9, and the $\epsilon$ parameter, which is used to prevent numerical divergence, is set to $10^{-5}$.\\ To obtain a useful training loss for minimization, we deform the loss value $L$ of Eq. (\ref{TD loss (copy)}) to construct the Huber loss, i.e.,\\ \begin{equation} L'=\begin{cases} \frac{1}{2}L^2, & \text{if } |L|<1,\\[2pt] |L|-\frac{1}{2}, & \text{if } |L|\ge 1, \end{cases} \end{equation}\\ such that the training loss is a mean-squared error when $|L|$ is small and an L1 loss when $|L|$ is large.
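For concreteness, the pieces described so far, i.e., the target network of Section \ref{target network section}, the double-Q action selection, the RMSprop settings, and the Huber deformation, can be combined into a single training step. The following PyTorch sketch is illustrative: the network architecture, batch variables, and dimensions are placeholders rather than the settings used in our experiments, and episode-termination masking is omitted.
\begin{verbatim}
import torch
import torch.nn as nn

# Illustrative shapes: 4-dimensional observations, 2 actions.
q_net = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
target_net = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.RMSprop(q_net.parameters(), lr=2e-4, momentum=0.9, eps=1e-5)
huber = nn.SmoothL1Loss()  # quadratic for |L| < 1, linear for |L| >= 1

def train_step(s, a, r, s_next, gamma=0.99):
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)   # theta_1 chooses
        q_next = target_net(s_next).gather(1, a_star)        # theta_2 evaluates
        target = r.unsqueeze(1) + gamma * q_next             # termination masking omitted
    q = q_net(s).gather(1, a.unsqueeze(1))
    loss = huber(q, target)
    opt.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(q_net.parameters(), max_norm=1.0)  # clipping: see below
    opt.step()
\end{verbatim}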
Unlike the usual mean-squared-error loss, this Huber deformation guarantees that large loss values do not result in extremely large gradients that may cause instability. Also, before each update step of the parameters, we manually clip the gradient of each parameter so that the gradients have a maximal norm of 1. \subsection{Prioritized Replay Settings}\label{Prioritized replay setting} The memory replay buffer that supports prioritized sampling is efficiently realized using the standard Sum Tree data structure, and we have used the Sum Tree code\footnote{\url{https://github.com/jaromiru/AI-blog/blob/master/SumTree.py}} released under the MIT License on GitHub. Our hyperparameter settings differ between problems, and they are summarized in Table \ref{prioritized replay settings}. Different input cases for the same problem share the same settings.\\ \begin{table}[hbt] \centering \begin{tabular}{p{8.4em}ccccc} \toprule & $\alpha$ & $\beta$ & $p_\epsilon$ & $L_{\max}$ & $p_{\text{replace low loss}}$ \\ \midrule cooling oscillators & 0.4 & $0.2\to 1.0$ & 0.001 & 10 & 0.9 \\ \midrule stabilizing inverted oscillators & 0.8 & $0.2\to 1.0$ & 0.001 & 10 & 0.9 \\ \midrule cooling quartic oscillators & 0.4 & $0.2\to 1.0$ & 0.001 & 10 & 0.8 \\ \bottomrule \end{tabular} \caption{The prioritized replay settings used in our experiments.} \label{prioritized replay settings} \end{table}\\ Among the parameters in Table \ref{prioritized replay settings}, $\beta$ denotes the extent of the loss rescaling of data with different priorities, used to keep the optimization target the same; after each training step, it is incremented by 0.001 until it reaches 1. A small constant $p_\epsilon$ is added to the sampling probability of every data point so that all data have finite probabilities of being sampled. The parameter $L_{\max}$ is a cutoff on large losses used to compute moderate probabilities, and with probability $p_{\text{replace low loss}}$ we use new experience data to replace the stored data with the lowest losses when the replay buffer is full; otherwise, we randomly remove existing data to store the new data. Note that in practice we sort out a portion of the lowest-loss data at a time, not a single data point; otherwise the procedure would be extremely computationally inefficient and would almost stall the training algorithm. In our implementation, we sort out the 1\% of the data with the lowest losses each time, and then replace them one by one.\\ \subsection{Other Reinforcement Learning Settings}\label{other settings} The period for updating the target network discussed in Section \ref{target network section} is set to 300 training steps. However, to facilitate learning at the initial stage, we set it to 30 steps at the start; after the simulated system has achieved a maximal time of $20\times\frac{1}{\omega_c}$ during which it does not fail, we set the target network update period to 150 steps, and after it has achieved $50\times\frac{1}{\omega_c}$, we set the update period to 300 steps. Note that if the training succeeds, it must be able to achieve the maximal time $t_{\text{max}}$, which is $100\times\frac{1}{\omega_c}$.\\ Concerning the neural networks, in order to accelerate training, we follow Ref. \cite{weightNormalization} and separate the learning of the directions of the weight matrices from the learning of their norms. This strategy is applied both to the usual network layers and to the noisy layers of Section \ref{Noisy DQN}.
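As an illustration of the noisy layers just mentioned, the following sketch implements a linear layer with factorized Gaussian noise and the sign-preserving square-root rescaling of Section \ref{Noisy DQN}. The initialization scales are illustrative assumptions, and for brevity the sketch samples a single noise realization per forward pass rather than several noises per minibatch as described in Section \ref{Noisy DQN}.
\begin{verbatim}
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer with factorized Gaussian noise (illustrative init scales)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        self.w = nn.Parameter(torch.randn(n_out, n_in) / n_in**0.5)
        self.b = nn.Parameter(torch.zeros(n_out))
        # Learned noise scales; 0.5/sqrt(n_in) is an assumed initialization.
        self.w_noisy = nn.Parameter(torch.full((n_out, n_in), 0.5 / n_in**0.5))
        self.b_noisy = nn.Parameter(torch.full((n_out,), 0.5 / n_in**0.5))

    @staticmethod
    def _f(x):
        # Square-root rescaling that keeps the sign.
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._f(torch.randn(self.n_in, device=x.device))
        eps_out = self._f(torch.randn(self.n_out, device=x.device))
        w = self.w + self.w_noisy * torch.outer(eps_out, eps_in)  # factorized noise
        b = self.b + self.b_noisy * eps_out
        return x @ w.t() + b
\end{verbatim}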
During training, we use a minibatch size of 512, and we sample 32 different noises for the data in the noisy layers. For computational efficiency, we always sample the noises in advance and use them later. The number of training steps is set to be proportional to the number of experiences stored in the memory buffer. For the quadratic problems, in the position and momentum input case each experience is sampled and learned 8 times on average, and in the other input cases each experience is sampled 16 times on average. For the quartic problem, each experience is sampled 8 times.\\ To obtain a trained neural network to evaluate, we track the performance of the networks during training. When the reinforcement learning agent performs better than the current performance record, we re-evaluate it twice more, and if the average performance achieves a new record, we save this neural network to the hard disk for further evaluation after the training. Finally, we pick the 10 best trained networks for the more detailed evaluation described in Section \ref{performance evaluation} to give our final performances. For the stabilizing inverted oscillator problem, it is hard to measure the performance of a reinforcement learning agent in a single trial. Therefore, whenever the reinforcement learning agent succeeds in one episode, we test it for two more episodes, and if it succeeds in all three trials, it counts as a successful network; we store every eighth such successful network to the hard disk. In this way we guarantee that the saved networks are not too close to each other and are roughly equispaced in training time. After the training, we evaluate the performances of the 10 most recently saved neural networks. \section{Deterministic Linear-Quadratic Control}\label{linear quadratic regulator} The controller for the linear-quadratic control problem is called a \textit{linear-quadratic regulator} (LQR) \cite{LinearQuadraticControl}, and we introduce its problem setting and derivations in this section.\\ \subsection{Problem Setting and the Quadratic Optimal Cost} As introduced in Chapter \ref{Introduction}, a control problem is concerned with a controlled system state, denoted by $x$, and a control parameter, denoted by $u$. Here $x$ and $u$ are vectors, and we do not write them in boldface for simplicity. The linear-quadratic problem involves a linear system, in the sense that the time evolution of $x$ is linear in $x$ and $u$ jointly, i.e.\\ \begin{equation}\label{linear control equation} dx=Fx\,dt+Gu\,dt, \end{equation}\\ where $F$ and $G$ are matrices, and we have suppressed the time dependences of all the variables $F$, $G$, $x$ and $u$. The control problem then considers the minimization of a quadratic control cost\footnote{This quantity has many nomenclatures. It is also called a performance index, a score, or a cost-to-go.} accumulated from the start of the control at time $t_0$ to the end time $T$, which is formulated as\\ \begin{equation}\label{quadratic cost equation} V(x(t_0), u(\cdot), t_0)=\int_{t_0}^{T}\left (u^{\dagger}Ru+x^{\dagger}Qx\right )dt+x^{\dagger}(T)Ax(T), \end{equation}\\ where $Q$ and $A$ are non-negative matrices and $R$ is a positive matrix, and therefore we have $V\ge0$.
In the above equation, $V$ depends only on the initial condition of $x$, i.e., $x(t_0)$, because the time-evolution equation of $x$ is determined once we know the control trajectory $u(\cdot)$. From now on we always assume that the control stops at time $T$, while the starting time of the control $t_0$ may be varied.\\ From Eq.~(\ref{linear control equation}) we can conclude that the controlled system $x$ is memoryless, and therefore, for any consistent control strategy, the control $u$ must be a function only of the current values of $x$ and $t$. Let us denote a possible optimal control strategy of the problem by $u^*$, which minimizes the control cost $V$. The minimized cost is then denoted by $V^*(x(t_0),t_0)$, where $u^*$ is completely determined by $x$ and $t$ and has therefore disappeared from the arguments. Now we prove that $V^*$ must have the form\\ \begin{equation*} V^*(x(t),t)=x^{\dagger}(t)P(t)x(t), \end{equation*}\\ where the matrix $P(t)$ is independent of $x$, symmetric (self-conjugate) and non-negative. First, we show that for a constant $\lambda$, we have $V^*(\lambda x,t)=|\lambda|^2V^*( x,t)$. Indeed,\\ \begin{equation}\label{quadratic form 1} \begin{split} V^*(\lambda x,t)&=V(\lambda x,u^*_{\lambda x}(\cdot),t)\\ &\le V(\lambda x,\lambda u^*_{x}(\cdot),t)\\ &=|\lambda|^2V( x, u^*_{x}(\cdot),t)\\ &=|\lambda|^2 V^*( x,t), \end{split} \end{equation}\\ where we have used $u^*_{x}(\cdot)$ to denote the optimal control for an initial state $x$ and $u^*_{\lambda x}(\cdot)$ for an initial state $\lambda x$. We proceed from the first line to the second line using the optimality of the control $u^*_{\lambda x}(\cdot)$ for the initial state $\lambda x$, and from the second line to the third line using the linearity of Eq. (\ref{linear control equation}) and the definition of $V$ in Eq. (\ref{quadratic cost equation}). Note that we have suppressed the time argument of the initial condition. Similarly, we have\\ \begin{equation}\label{quadratic form 2} |\lambda|^2 V^*( x,t)\le |\lambda|^2 V( x,\lambda^{-1}u^*_{\lambda x}(\cdot),t)=V( \lambda x,u^*_{\lambda x}(\cdot),t)=V^*(\lambda x,t), \end{equation}\\ and therefore we have\\ \begin{equation}\label{quadratic form condition 1} V^*(\lambda x,t)=|\lambda|^2V^*( x,t). \end{equation}\\ Secondly, we show that $V^*(x_1,t)+V^*(x_2,t)=\frac{1}{2}\left[V^*(x_1+x_2,t)+V^*(x_1-x_2,t)\right]$. This is done similarly, using the linearity of the trajectory $x(\cdot)$ with respect to the initial condition $x(t)$ and the control $u(\cdot)$, and using the quadratic property of $V$:\\ \begin{equation}\label{quadratic form 3} \begin{split} V^*(x_1,t)+V^*(x_2,t)&=\frac{1}{4}\left[V^*(2x_1,t)+V^*(2x_2,t)\right]\\ &\le\frac{1}{4}\left[V((x_1+x_2)+(x_1-x_2),u^*_{x_1+x_2}+u^*_{x_1-x_2},t)\right.\\ &\qquad\left.+V((x_1+x_2)-(x_1-x_2),u^*_{x_1+x_2}-u^*_{x_1-x_2},t)\right].\\ \end{split} \end{equation}\\ From Eq. (\ref{linear control equation}), we know that for an initial state $x_1(t_0)$ and a control $u_1(\cdot)$ we obtain a trajectory $x_1(\cdot)$, and if there is another initial state $x_2(t_0)$ with a control $u_2(\cdot)$ and a resulting trajectory $x_2(\cdot)$, then $k_1x_1(\cdot)+k_2x_2(\cdot)$ is a third valid trajectory with the control $k_1u_1(\cdot)+k_2u_2(\cdot)$; this is the linearity. By Eq.
(\ref{quadratic cost equation}) we have\\ \begin{equation} \begin{split} V((x_1+x_2),(u_1+u_2)(\cdot),t)&=\int_{t}^{T}\left [(u_1+u_2)^{\dagger}(s)R(u_1+u_2)(s)\right.\\ &\qquad\left.+(x_1+x_2)^{\dagger}(s)Q(x_1+x_2)(s)\right ]ds\\ &\qquad+(x_1+x_2)^{\dagger}(T)A(x_1+x_2)(T),\\[6pt] V((x_1-x_2),(u_1-u_2)(\cdot),t)&=\int_{t}^{T}\left [(u_1-u_2)^{\dagger}(s)R(u_1-u_2)(s)\right.\\ &\qquad\left.+(x_1-x_2)^{\dagger}(s)Q(x_1-x_2)(s)\right ]ds\\ &\qquad+(x_1-x_2)^{\dagger}(T)A(x_1-x_2)(T), \end{split} \end{equation} and therefore\\ \begin{equation} \begin{split} V((x_1+x_2),(u_1+u_2)(\cdot),t)&+V((x_1-x_2),(u_1-u_2)(\cdot),t)\\ &=2\int_{t}^{T}\left (u_1^{\dagger}Ru_1+x_1^{\dagger}Qx_1+u_2^{\dagger}Ru_2+x_2^{\dagger}Qx_2\right )ds\\ &\qquad+2x_1^{\dagger}(T)Ax_1(T)+2x_2^{\dagger}(T)Ax_2(T)\\ &=2V(x_1,u_1(\cdot),t)+2V(x_2,u_2(\cdot),t). \end{split} \end{equation}\\ Note that the cancellation of the cross terms between the $x_1(\cdot)$ and $x_2(\cdot)$ trajectories is possible because they are the trajectories resulting from the initial conditions $x_1$ and $x_2$ and the controls $u_1$ and $u_2$, and are therefore already fully determined. Using the above formula we can transform Eq. (\ref{quadratic form 3}) into\\ \begin{equation}\label{quadratic form 4} \begin{split} V^*(x_1,t)+V^*(x_2,t)&\le\frac{1}{2}\left[V(x_1+x_2,u^*_{x_1+x_2},t)+V(x_1-x_2,u^*_{x_1-x_2},t)\right]\\ &=\frac{1}{2}\left[V^*(x_1+x_2,t)+V^*(x_1-x_2,t)\right]. \end{split} \end{equation}\\ Next we apply Eq. (\ref{quadratic form 4}) to the pair $x_1+x_2$ and $x_1-x_2$, i.e., we perform a change of variables:\\ \begin{equation}\label{quadratic form 5} \begin{split} \frac{1}{2}\left[V^*(x_1+x_2,t)+V^*(x_1-x_2,t)\right]&\le\frac{1}{4}\left[V^*(2x_1,t)+V^*(2x_2,t)\right]\\ &=V^*(x_1,t)+V^*(x_2,t). \end{split} \end{equation}\\ Therefore, we have obtained\\ \begin{equation}\label{quadratic form condition 2} V^*(x_1,t)+V^*(x_2,t)=\frac{1}{2}\left[V^*(x_1+x_2,t)+V^*(x_1-x_2,t)\right], \end{equation}\\ and together with Eq. (\ref{quadratic form condition 1}) this yields the result that $V^*$ takes a quadratic form with respect to $x$, as shown in Ref. \cite{LinearQuadraticControl}. Therefore it can be written as\\ \begin{equation}\label{deterministic control cost} V^*(x,t)=x^{\dagger}P(t)x, \end{equation}\\ where we have denoted the moment at which the control starts by $t$ rather than $t_0$, because we will perform variations on this variable later. The matrix $P(t)$ above is symmetric, non-negative and independent of $x$. Its symmetry can be assumed without loss of generality because we can equivalently rewrite it as $\frac{1}{2}\left (P+P^{\dagger}\right )$, and its non-negativity follows from the non-negativity of $V$. Note that everything here implicitly depends on the ending time $T$ of the control.\\ \subsection{The Hamilton-Jacobi-Bellman Equation and the Optimal Control}\label{quadratic optimal control appendix} To solve for the matrix $P(t)$, we need to introduce the Hamilton-Jacobi-Bellman (HJB) equation. This equation is in essence similar to the Bellman equation for the Q-function in Section \ref{Q learning}, but it has no discount factor and deals with a continuous space.
To proceed, we first derive this HJB equation.\\ For the optimal control cost $V^*(x(t),t)$, by optimality its control $u^*$ should minimize the expected control cost, and therefore for an infinitesimal time $dt$ it satisfies\\ \begin{equation}\label{HJB 1} V^*(x(t),t)=\min_u\left \{V^*(x(t+dt),t+dt) +\ell(x(t),u) dt \right \}, \end{equation}\\ where $\ell$ is the integrand of the control cost (i.e., $u^{\dagger}Ru+x^{\dagger}Qx$ in Eq. (\ref{quadratic cost equation})), and the state $x(t+dt)$ depends on the control $u$, which introduces a non-trivial $u$ dependence into the minimization. By Taylor-expanding $V^*(x(t+dt),t+dt)$ up to first order in $dt$, we obtain\\ \begin{equation}\label{HJB 2} V^*(x(t+dt),t+dt)=V^*(x(t),t)+\partial_xV^*(x(t),t)\,dx+\partial_tV^*(x(t),t)\,dt, \end{equation}\\ where $\partial_x$ and $\partial_t$ denote differentiation with respect to the first and second arguments of $V^*$. Therefore Eq. (\ref{HJB 1}) can be written as\\ \begin{equation}\label{HJB 3} -\partial_tV^*(x(t),t)\,dt=\min_u\left \{\partial_xV^*(x(t),t)\,dx +\ell(x(t),u)dt \right \}. \end{equation} If we express the time-evolution equation of $x$ as $\dfrac{dx}{dt}=f(x,u)$, then we have the HJB equation\\ \begin{equation}\label{HJB} -\partial_tV^*(x(t),t)=\min_u\left \{\partial_xV^*(x(t),t)f(x,u) +\ell(x(t),u) \right \}. \end{equation}\\ Sometimes this equation is also written as\\ \begin{equation}\label{HJB2} \dot{V}(x,t)+\min_u\left \{\nabla V(x,t)\cdot f(x,u)+\ell(x,u)\right \}=0, \end{equation}\\ where the superscript stars are omitted, and the overdot and the nabla denote $\partial_t$ and $\partial_x$.\\ Because the optimal control cost $V^*(x,t)$ is defined on the whole space of $x$ and $t$, and by definition $V^*$ is the smallest possible control cost, as long as $V^*$ exists it is unique. Therefore, if we have a function $V$ that satisfies the HJB equation (Eq. (\ref{HJB})), it must be the unique optimal control cost $V^*$, and the HJB equation becomes a sufficient and necessary condition for optimal control. In addition, the optimal control strategy $u^*$ is obtained by taking the minimization over $u$ in the above HJB equation, and it is possibly not unique. This sufficient and necessary condition provided by the HJB equation is important because it makes ``guessing'' the optimal control possible.\\ Having obtained the HJB equation, we now turn to solving for the matrix $P(t)$ in Eq. (\ref{deterministic control cost}). By straightforward substitution, we have\footnote{In usual control theory, complex values are not considered. However, for consistency with quantum mechanics, we have generalized the derivation to allow complex values. Therefore, one or two terms in our derivation may differ from those in standard textbooks.}\\ \begin{equation}\label{LQR optimal 1} \begin{split} -x^{\dagger}\dot{P}x&=\min_u\left \{x^{\dagger}P(Fx+Gu)+(Fx+Gu)^{\dagger}Px+u^{\dagger}Ru + x^{\dagger}Qx \right \}\\ &=\min_u\left \{x^{\dagger}(PF+F^{\dagger}P)x + x^{\dagger}Qx+x^{\dagger}PGu+u^{\dagger}G^{\dagger}Px+u^{\dagger}Ru \right \}.
\end{split} \end{equation} We can arrange the $u$-dependent terms into a complete square to eliminate them through minimization:\\ \begin{equation}\label{LQR optimal 2} \begin{split} -x^{\dagger}\dot{P}x&=\min_u\left \{x^{\dagger}(PF+F^{\dagger}P)x + x^{\dagger}Qx+(u^{\dagger}+x^{\dagger}PGR^{-1})R(R^{-1}G^{\dagger}Px+u) \right.\\ &\qquad\qquad\left.-x^{\dagger}PGR^{-1}G^{\dagger}Px \right \}\\ &=x^{\dagger}(PF+F^{\dagger}P)x + x^{\dagger}Qx-x^{\dagger}PGR^{-1}G^{\dagger}Px, \end{split} \end{equation} where we have assumed that $P$ is symmetric, or hermitian, and that $R$ is positive. The minimization in the above equation is completed by forcing the condition $R^{-1}G^{\dagger}Px+u=0$, and therefore the optimal control $u^*$ is\\ \begin{equation}\label{LQR control law} u^*=-R^{-1}G^{\dagger}Px, \end{equation}\\ where the matrix $P$ is obtained from the differential equation (\ref{LQR optimal 2}) and the matrices $G$ and $R$ are defined in Eqs. (\ref{linear control equation}) and (\ref{quadratic cost equation}). The above equation for $P$ can be further simplified by removing the arbitrary variable $x$ on both sides. We then obtain the matrix Riccati equation:\\ \begin{equation}\label{LQR finite time} -\dot{P}=PF+F^{\dagger}P -PGR^{-1}G^{\dagger}P+ Q. \end{equation}\\ Since this is a differential equation, we still need its boundary condition to solve it. Recalling that there is a final control-independent term $x^{\dagger}(T)Ax(T)$ in the definition of $V$ in Eq. (\ref{quadratic cost equation}), we obtain the boundary condition for $P(t)$, i.e. $P(T)=A$. In the case $A=0$, we have $P(T)=0$. This completes the optimal control for finite-horizon control problems, i.e., $T<\infty$.\\ Next we consider the case $T\to\infty$. Although it is still possible to proceed with time-dependent matrices $F$, $G$, $R$ and $Q$, for simplicity we require them to be constant in the following discussion.\\ When all the matrices in the definition of the problem are time-independent, the controlled system becomes completely time-invariant if $T$ also goes to infinity. In this case, the control strategy $u$ should be consistent in time, i.e. time-independent, and thus $P$ cannot change with time\footnote{This result can also be obtained by analysing the differential equation and taking the limit of an infinitely long time.}, and we have $\dot{P}=0$, which yields the continuous-time algebraic Riccati equation (CARE):\\ \begin{equation}\label{continuous Riccati equation} PF+F^{\dagger}P -PGR^{-1}G^{\dagger}P+ Q=0. \end{equation}\\ Mathematical software usually provides numerical methods to solve this algebraic Riccati equation, and the equation is of great importance in engineering and control theory. We summarise the results for the infinite-time control case below:\\ \begin{equation} dx=Fx\,dt+Gu\,dt, \end{equation} \begin{equation} V(x(t_0), u(\cdot))=\int_{t_0}^{\infty}\left (u^{\dagger}Ru+x^{\dagger}Qx\right )dt,\quad R>0,\ Q\ge0, \end{equation} \begin{equation} V^*(x(t_0))=x^{\dagger}(t_0)Px(t_0),\qquad P\ge0, \end{equation} \begin{equation} u^*=-R^{-1}G^{\dagger}Px, \end{equation} \begin{equation} PF+F^{\dagger}P -PGR^{-1}G^{\dagger}P+ Q=0, \end{equation}\\ where $u^*$ is the optimal control, and $V^*$ is the optimal cost.\\ From the above equations, we can see that a finite choice of the matrix $R$ is necessary to produce a non-divergent $u^*$; otherwise, when solving for $u^*$, its value typically goes to infinity.
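For finite $R$, the infinite-horizon solution summarized above can be checked directly with standard numerical solvers. The following is a minimal sketch, assuming SciPy is available; the matrices are an arbitrary illustrative choice (a double integrator), not a system from our experiments.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

F = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # illustrative double-integrator dynamics
G = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(F, G, Q, R)        # PF + F^T P - P G R^{-1} G^T P + Q = 0
K = np.linalg.solve(R, G.T @ P)             # optimal feedback: u* = -K x
residual = P @ F + F.T @ P - P @ G @ K + Q  # note that G K = G R^{-1} G^T P
print(np.allclose(residual, 0.0), K)
\end{verbatim}
SciPy's \texttt{solve\_discrete\_are} plays the same role for the discrete-time equation given below.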
This divergence is reasonable, since the condition $R=0$ would imply that the control is free and that we may use as large a control as we want, which clearly does not produce finite control strengths. However, we may look at the conditions under which the control $u^*$ is precisely zero, and interpret the infinite control strengths as pushing the state $x$ to these stationary points where $u^*$ is zero. This justifies our treatment of the control forces in Chapter \ref{control quadratic potentials} where we have assumed $R=0$. It can be seen that in this case, the time evolution of the system, i.e. the matrix $F$, is no longer relevant for the control problem, and the optimal control is determined by the loss.\\ For completeness, we present the results for discrete linear systems with discrete controls, i.e., $x_n$ and $u_n$, without proof. The proof follows the same lines as the continuous case above, and interested readers are referred to Ref. \cite{LinearQuadraticControl}.\\ \begin{equation} x_{t+1}=Fx_{t}+Gu_{t}, \end{equation} \begin{equation} V(x_{t_0}, u_{(\cdot)})=\sum_{t=t_0}^{\infty}\left (u_t^{\dagger}Ru_t+x_{t+1}^{\dagger}Qx_{t+1}\right ),\quad R>0,\ Q\ge0, \end{equation} \begin{equation} V^*(x_{t_0})=x^{\dagger}_{t_0}Px_{t_0},\qquad P\ge0, \end{equation} \begin{equation} u^*=-\left (G^{\dagger}SG+R\right )^{-1}G^{\dagger}SFx,\qquad S = P+Q, \end{equation} \begin{equation}\label{discrete Riccati equation} S=F^{\dagger}\left[S-SG\left(G^{\dagger}SG+R\right)^{-1}G^{\dagger}S\right]F+Q. \end{equation}\\ The last equation, Eq. (\ref{discrete Riccati equation}), is called the discrete-time algebraic Riccati equation (DARE). \section{Linear-Quadratic Control with Gaussian Noise}\label{linear quadratic Gaussian appendix section} When an additive Gaussian noise is introduced into the time-evolution equation, the optimal control strategy in Eqs. (\ref{LQR control law}) and (\ref{continuous Riccati equation}) is not changed. In this section we prove this result.\\ \subsection{Definitions of the Optimal Control and Control Cost} When a Gaussian noise is introduced, the time-evolution equation (\ref{linear control equation}) becomes\\ \begin{equation}\label{linear control Gaussian} dx=Fx\,dt+Gu\,dt + \hat{Q}^{\frac{1}{2}}\,dW, \end{equation}\\ where $dW$ is a multi-dimensional Wiener increment with variance $dt$ in each dimension, and $\hat{Q}$ is a positive-semidefinite covariance matrix.\\ Without noise, as long as the system is controllable, it is always possible to use $u$ to steer the state $x$ to $0$ and then leave both $x$ and $u$ zero to stop the accumulation of the control cost. This shows that control strategies with finite control costs exist even in the limit $T\to\infty$, and therefore the optimal cost $V^*$ must also be finite. However, this is not true when noise is present. When the system contains noise, the expectation values of the state typically converge to some non-zero steady values. Therefore, the expected control cost $V$ is proportional to the control time $T$ in the long run, and we have $\lim\limits_{T\to\infty}V\to\infty$ and $\lim\limits_{T\to\infty}\frac{1}{T}V\to c$ for some constant $c$. Consequently, we can consider neither a minimization of $\lim\limits_{T\to\infty}V$ nor of $\lim\limits_{T\to\infty}\frac{1}{T}V$ to deduce the optimal control, since $\lim\limits_{T\to\infty}\frac{1}{T}V$ is never affected by any short-time change of the control $u$.
In this case, in order to define the optimal control for an infinite time horizon $T$, we investigate the convergence of the optimal control strategy $u^*$ in the limit of large $T$, rather than the convergence of the control cost $V^*$. Therefore we start from a finite $T$ to deduce $V^*$ and $u^*$.\\ In the presence of a Wiener increment, the derivation of the HJB equation (\ref{HJB}) must be modified. In particular, the Taylor expansion of $V^*(x(t+dt),t+dt)$ should be taken to second order in $dx$. Note that the control cost $V$ is now an expectation. The Taylor expansion is\\ \begin{equation}\label{control cost stochastic expansion} \begin{split} V^*(x(t+dt),t+dt)&=E\left[V^*+\partial_tV^*dt+\partial_xV^*dx+\frac{1}{2}\partial_x^2V^*dx^2\right]\\ &=V^*+\partial_tV^*dt+\partial_xV^*(Fx\,dt+Gu\,dt)+\frac{1}{2}\text{tr}\left(\partial_x^2V^*\hat{Q}\right)\,dt, \end{split} \end{equation} and the HJB equation (\ref{HJB}) becomes\\ \begin{equation}\label{HJB stochastic} -\partial_tV^*(x(t),t)=\min_u\left \{\partial_xV^*(x(t),t)(Fx+Gu)+\frac{1}{2}\text{tr}\left(\partial_x^2V^*(x(t),t)\hat{Q}\right) +(u^{\dagger}Ru+x^{\dagger}Qx) \right \}. \end{equation}\\ As discussed in Section \ref{quadratic optimal control appendix}, if the solution $V^*$ to the above equation exists, it must be unique. Therefore, we try to construct such a solution by referring to the results obtained in the previous section.\\ \subsection{The LQG Optimal Control}\label{LQG optimal result appendix} As explained above, we expect $V$ to be proportional to the time $T$. Since Eq. (\ref{HJB stochastic}) differs from the deterministic case only by the term $\dfrac{1}{2}\text{tr}\left(\partial_x^2V^*(x(t),t)\hat{Q}\right)$, without any change of the optimal control we expect this term to be a constant, which is indeed the case for a $V^*$ quadratic in $x$. Its value is $\dfrac{1}{2}\text{tr}\left(2P(t)\hat{Q}\right)=\text{tr}\left(P(t)\hat{Q}\right)$. Recall that $P(t)$ is constrained by the Riccati differential equation (\ref{LQR finite time}) with its boundary condition, and is independent of the control $u$. Therefore, the minimization over $u$ in Eq. (\ref{HJB stochastic}) is not affected by the term $\text{tr}\left(P(t)\hat{Q}\right)$. Assuming that $\partial_xV^*$ is the same as in the deterministic case, we can proceed similarly and reach\\ \begin{equation}\label{HJB stochastic makeup 1} -\partial_tV^*(x(t),t) = x^{\dagger}(PF+F^{\dagger}P)x + x^{\dagger}Qx-x^{\dagger}PGR^{-1}G^{\dagger}Px + \text{tr}\left(P(t)\hat{Q}\right), \end{equation}\\ where we have used the optimal control $u^*=-R^{-1}G^{\dagger}Px$ as before.\\ The above formula clearly suggests an additional term in $V^*$ to represent the effect of $\text{tr}\left(P\hat{Q}\right)$, so that the original quadratic term $x^{\dagger}P(t)x$ can be kept unaltered. It can be constructed as follows:\\ \begin{equation}\label{HJB stochastic makeup 2} V^*(x,t)=x^{\dagger}P(t)x+\int_{t}^{T}\text{tr}\left(P(s)\hat{Q}\right)ds, \end{equation}\\ where $P(t)$ satisfies the finite-time Riccati equation (\ref{LQR finite time}). For this constructed $V^*$, Eq.~(\ref{HJB stochastic makeup 1}) is clearly satisfied, and therefore the HJB equation (\ref{HJB stochastic}) is also satisfied, so that $u^*$ is indeed the optimal control. As a final step, we confirm the convergence behaviour of $u^*$ at large $T$.\\ The convergence of $u^*$ at large $T$ by definition depends on the convergence of the matrix $P(t)$.
However, this was already shown in Section \ref{quadratic optimal control appendix}. We know that in the deterministic case $P(t)$ loses its dependence on $t$ in the limit $T\to\infty$, and the finite-time Riccati equation that governs the evolution of $P(t)$ is the same as the equation for the stochastic case here. Therefore, in the limit of large $T$, $P(t)$ also loses its time-dependence in the stochastic problem, and $u^*$ converges. The matrix $P$ finally satisfies the continuous-time algebraic Riccati equation (\ref{continuous Riccati equation}), which is time-independent. This completes our proof. \section{Stochastic It\^o--Taylor Expansion} Let $W(t)$ be a randomly chosen Wiener process with time argument $t\in [0,T]$, satisfying\\ \begin{equation} W(t+dt)-W(t)\sim\mathcal{N}(0,dt), \end{equation}\\ and let the quantity $X_t$ evolve according to the It\^o stochastic differential equation\\ \begin{equation}\label{defining equation of X_t} dX_t = a(X_t)\,dt+b(X_t)\,dW, \end{equation}\\ where $a$ and $b$ are smooth functions, differentiable up to the orders required whenever their derivatives appear in the following.\\ We consider a function $f(X_t)$ of the variable $X_t$. By the It\^o formula,\\ \begin{equation}\label{Ito formula} df(X_t) = \left(a(X_t)\frac{\partial}{\partial x}f(X_t) + \frac{1}{2}b^2(X_t)\frac{\partial^2}{\partial x^2}f(X_t) \right)dt + b(X_t)\frac{\partial}{\partial x}f(X_t)\,dW_t, \end{equation}\\ where the partial derivatives are taken with respect to the argument of the function $f(\cdot)$, and $dW_t\equiv W(t+dt)-W(t)$. All time dependences are indicated as subscripts. The It\^o formula can be considered as a Taylor expansion of $f(\cdot)$ in its argument, keeping the first- and second-order terms, subject to the relation $dW^2=dt$.\\ First, we formally rewrite the stochastic differential equation in its integral form:\\ \begin{equation}\label{integral form of stochastic equation} X_t = X_{0}+\int_{0}^{t}a(X_s)\,ds+\int_{0}^{t}b(X_s)\,dW_s, \end{equation}\\ which is just the previous stochastic differential equation for $dX_t$. Similarly we have\\ \begin{equation}\label{integral form of f} f(X_t) = f(X_0)+\int_{0}^{t}\left(a(X_s)f'(X_s) + \frac{1}{2}b^2(X_s)f''(X_s) \right)ds + \int_{0}^{t}b(X_s)f'(X_s)\,dW_s, \end{equation}\\ where differentiation with respect to the argument is written with primes, which we will also write in the form $f^{(n)}$. We then apply the above equation to $a(X_t)$ and $b(X_t)$, and substitute them into Eq.
(\ref{integral form of stochastic equation}), obtaining \\ \begin{equation}\label{first stochastic expansion} \begin{split} X_t &= X_{0}+\int_{0}^{t}a(X_{s_1})\,d{s_1}+\int_{0}^{t}b(X_{s_1})\,dW_{s_1}\\ &=X_{0}\\ +&\int_{0}^{t}\left(a(X_0)+\int_{0}^{s_1}\left(a(X_{s_2})a'(X_{s_2}) + \frac{1}{2}b^2(X_{s_2})a''(X_{s_2}) \right)ds_2 + \int_{0}^{s_1}b(X_{s_2})a'(X_{s_2})\,dW_{s_2}\right)\,d{s_1}\\ +&\int_{0}^{t}\left(b(X_0)+\int_{0}^{s_1}\left(a(X_{s_2})b'(X_{s_2}) + \frac{1}{2}b^2(X_{s_2})b''(X_{s_2}) \right)d{s_2} + \int_{0}^{s_1}b(X_{s_2})b'(X_{s_2})\,dW_{s_2}\right)\,dW_{s_1}\\ &=X_{0}+\int_{0}^{t}a(X_0)\,d{s_1}+\int_{0}^{t}b(X_0)\,dW_{s_1} \\ &+\int_{0}^{t}\int_{0}^{s_1}\left(aa' + \frac{1}{2}b^2a'' \right)ds_2\,d{s_1} + \int_{0}^{t}\int_{0}^{s_1}a'b\ dW_{s_2}\,d{s_1}\\ &+\int_{0}^{t}\int_{0}^{s_1}\left(ab' + \frac{1}{2}b^2b'' \right)d{s_2}\,dW_{s_1} + \int_{0}^{t}\int_{0}^{s_1}bb'\ dW_{s_2}\,dW_{s_1}\\ &=X_{0}+a(X_0)\,t+b(X_0)(W_t-W_0) + R, \end{split} \end{equation}\\ where $R$ stands for the remaining terms, and $dW_{s_2}$ and $dW_{s_1}$ are the Wiener increments for the same Wiener process $W_t$. It can easily be realized that the terms $\left(aa' + \frac{1}{2}b^2a'' \right)$ and $a'b$ which depend on $X_{s_2}$ can be further expanded by Eq. (\ref{integral form of f}), and this results in integrations with a constant $f(X_0)$ plus a remaining factor that scales with orders of $t$. This recursive expansion produces the It\^o--Taylor expansion. For simplicity, we define two operators to represent this substitutive recursion:\\ \begin{equation}\label{differential operator L} L^0:=a\,\frac{\partial}{\partial x}+\frac{1}{2}b^2\frac{\partial^2}{\partial x^2}, \qquad L^1=b\,\frac{\partial}{\partial x}, \end{equation}\\ where the functions $a$ and $b$ take the same arguments as the function that is differentiated by them. Then Eq. (\ref{integral form of f}) can be written into\\ \begin{equation}\label{integral f simpler} f(X_t) = f(X_0)+\int_{0}^{t}L^0f(X_s)ds + \int_{0}^{t}L^1f(X_s)\,dW_s. \end{equation}\\ To demonstrate the recursive expansion, we expand Eq. (\ref{first stochastic expansion}) for a second time:\\ \begin{equation}\label{second stochastic expansion} \begin{split} X_t &=X_{0}+a(X_0)\,t+b(X_0)(W_t-W_0)+\int_{0}^{t}\int_{0}^{s_1}L^0a(X_{s_2})\ ds_2\,d{s_1} + \int_{0}^{t}\int_{0}^{s_1}L^1a(X_{s_2})\ dW_{s_2}\,d{s_1}\\ &+\int_{0}^{t}\int_{0}^{s_1}L^0b(X_{s_2})\ d{s_2}\,dW_{s_1} + \int_{0}^{t}\int_{0}^{s_1}L^1b(X_{s_2})\ dW_{s_2}\,dW_{s_1}\\ &=X_{0}+a(X_0)\,t+b(X_0)(W_t-W_0)+\int_{0}^{t}\int_{0}^{s_1}L^0a(X_0)\ ds_2\,d{s_1} + \int_{0}^{t}\int_{0}^{s_1}L^1a(X_0)\ dW_{s_2}\,d{s_1}\\ &+\int_{0}^{t}\int_{0}^{s_1}L^0b(X_0)\ d{s_2}\,dW_{s_1} + \int_{0}^{t}\int_{0}^{s_1}L^1b(X_0)\ dW_{s_2}\,dW_{s_1}\\ &+\int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^0L^0a(X_{s_3})\ d{s_3}\,ds_2\,d{s_1} + \int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^1L^0a(X_{s_3})\ dW_{s_3}\,ds_2\,d{s_1} \\ &+ \int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^0L^1a(X_{s_3})\ d{s_3}\,dW_{s_2}\,d{s_1} + \int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^1L^1a(X_{s_3})\ dW_{s_3}\,dW_{s_2}\,d{s_1}\\ &+\int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^0L^0b(X_{s_3})\ d{s_3}\,d{s_2}\,dW_{s_1} + \int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^1L^0b(X_{s_3})\ dW_{s_3}\,d{s_2}\,dW_{s_1}\\ &+\int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^0L^1b(X_{s_3})\ d{s_3}\,dW_{s_2}\,dW_{s_1} + \int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}L^1L^1b(X_{s_3})\ dW_{s_3}\,dW_{s_2}\,dW_{s_1},\\ \end{split} \end{equation} where the terms other than triple integrals can be evaluated directly. 
Obviously, this involves permutations of multiple $t$ and $W$ integrals. For notational simplicity, we define an ordered list structure to denote the permutations as\\ \begin{equation}\label{index list} \alpha=(j_1,j_2,\dots,j_l),\quad j_i=0,1\,,\quad i=1,2,\dots,l \end{equation}\\ and define the list manipulations\\ \begin{equation}\label{index list manipulation 1} \alpha * \tilde{\alpha} = (j_1,j_2,\dots,j_l, \tilde{j}_1,\tilde{j}_2,\dots,\tilde{j}_{\tilde{l}}), \end{equation} \begin{equation}\label{index list manipulation 2} -\alpha=(j_2,j_3,\dots,j_l) ,\quad\alpha-=(j_1,j_2,\dots,j_{l-1}), \quad()\equiv v, \end{equation}\\ that is, we use $v$ to denote the empty list. Then we define the multiple It\^o integrals as\\ \begin{equation}\label{multile Ito integral 1} I_v[f(\cdot)]_{0,t}:=f(t), \end{equation} \begin{equation}\label{multile Ito integral 2} I_{\alpha*(0)}[f(\cdot)]_{0,t}:=\int_{0}^{t}I_{\alpha}[f(\cdot)]_{0,s}ds,\quad I_{\alpha*(1)}[f(\cdot)]_{0,t}:=\int_{0}^{t}I_{\alpha}[f(\cdot)]_{0,s}dW_s. \end{equation}\\ Thus they are defined recursively. We give the following examples for illustration.\\ \begin{equation}\label{multile Ito integral examples} \begin{split} I_{(0)}[f(\cdot)]_{0,t}&=\int_{0}^{t}f(s)ds,\\ I_{(1,1)}[f(\cdot)]_{0,t}&=\int_{0}^{t}\int_{0}^{s_1}f(s_2)\ dW_{s_2}\,dW_{s_1},\\ I_{(1,1,0)}[f(\cdot)]_{0,t}&=\int_{0}^{t}\int_{0}^{s_1}\int_{0}^{s_2}f(s_3)\ dW_{s_3}\,dW_{s_2}\,ds_1. \end{split} \end{equation}\\ Therefore, among the indices in the parentheses $(\cdots)$, $1$ stands for integrating with respect to $W$ and $0$ stands for integrating with respect to $t$, and all the integrations are multiple integrals. Note that the sequence of integrations is carried out from left to right. From now on, when we do not write the $[f(\cdot)]_{0,t}$ part in the expression $I_{\alpha}[f(\cdot)]_{0,t}$, i.e. when we write simply $I_{\alpha}$, we mean $I_{\alpha}[1]_{0,t}$, which no longer involves the function $f$ and depends only on $t$ (and the realization of the Wiener process).\\ Also for simplicity, we use the list structure to denote multiple actions of $L^0$ and $L^1$:\\ \begin{equation}\label{coefficient functions} f_v:=f,\quad f_{(j)*\alpha}:=L^j f_\alpha,\quad j=0,1\,. \end{equation}\\ For example, we write $L^0L^0L^1f$ as $f_{(0,0,1)}$. Eq. (\ref{integral f simpler}) can then be written and expanded as\\ \begin{equation}\label{integral f using indices} \begin{split} f(X_t)=&f(X_0)+I_{(0)}[f_{(0)}(\cdot)]_{0,t}+I_{(1)}[f_{(1)}(\cdot)]_{0,t}\\ \\ =&f(X_0)+I_{(0)}\left[f_{(0)}(X_0)+I_{(0)}[L^0f_{(0)}(\circ)]_{0,\cdot}+I_{(1)}[L^1f_{(0)}(\circ)]_{0,\cdot}\right]_{0,t}\\ &+I_{(1)}\left[f_{(1)}(X_0)+I_{(0)}[L^0f_{(1)}(\circ)]_{0,\cdot}+I_{(1)}[L^1f_{(1)}(\circ)]_{0,\cdot}\right]_{0,t}\\ \\ =&f(X_0)+I_{(0)}f_{(0)}(X_0)+I_{(0)}\left[I_{(0)}[f_{(0,0)}(\circ)]_{0,\cdot}+I_{(1)}[f_{(1,0)}(\circ)]_{0,\cdot}\right]_{0,t}\\ &+I_{(1)}f_{(1)}(X_0)+I_{(1)}\left[I_{(0)}[f_{(0,1)}(\circ)]_{0,\cdot}+I_{(1)}[f_{(1,1)}(\circ)]_{0,\cdot}\right]_{0,t}\\ \\ =&f(X_0)+I_{(0)}f_{(0)}(X_0)+I_{(0,0)}[f_{(0,0)}(\cdot)]_{0,t}+I_{(1,0)}[f_{(1,0)}(\cdot)]_{0,t}\\ &+I_{(1)}f_{(1)}(X_0)+I_{(0,1)}[f_{(0,1)}(\cdot)]_{0,t}+I_{(1,1)}[f_{(1,1)}(\cdot)]_{0,t}. \end{split} \end{equation}\\ This is Eq. (\ref{first stochastic expansion}). Because the multiple integrals $I_\alpha$ decrease, for a small time scale $t$, with the length of $\alpha$, the terms are supposed to decrease in value when the expansion is taken to higher orders. Note that a single $1$ index in $\alpha$ contributes order $\frac{1}{2}$, while a single $0$ index contributes order $1$.
When we expand this It\^o--Taylor expansion, the number of terms increases exponentially with the order. We now state the It\^o--Taylor expansion:\\ \begin{equation}\label{Ito-Taylor expansion} f(X_t)=\sum_{\alpha\in\mathcal{A}}I_\alpha f_\alpha(X_0)+\sum_{\alpha\in\mathcal{B}(\mathcal{A})}I_\alpha [f_\alpha(\cdot)]_{0,t}, \end{equation} \begin{equation}\label{expansion index lists} \alpha\in\mathcal{A}\text{ and }\alpha\ne v \Rightarrow-\alpha\in\mathcal{A},\quad \mathcal{B}(\mathcal{A})=\{\alpha:\alpha\notin\mathcal{A}\text{ and }-\alpha\in\mathcal{A}\}, \end{equation}\\ where $\mathcal{A}$ and $\mathcal{B}$ are respectively called a hierarchical set and a remainder set.\\ When making a numerical approximation, we drop the remainder part and keep the $\mathcal{A}$ part only, and we say the approximation is up to order $k$ if $\mathcal{B}$ only contains terms of order $k+\frac{1}{2}$ regarding time $t$. We take $t$ in the above equation to be the time step $dt$ of the approximation of $X_t$ in the defining equation (\ref{defining equation of X_t}), so that we approximate $dX_t$ up to order $(dt)^k$.\\ To calculate the multiple integrals $I_\alpha$, we need an important relation:\\ \begin{equation}\label{relation between Ito multiple integrals} \begin{split} I_{(1)}I_{\alpha}=W_tI_{(j_1,\dots,j_l)}=&\sum_{i=0}^{l}I_{(j_1,\dots,j_i,1,j_{i+1},\dots,j_l)}+\sum_{i=1}^{l}\delta_{j_i,1}I_{(j_1,\dots,j_{i-1},0,j_{i+1},\dots,j_l)},\\ I_{(0)}I_{\alpha}&=tI_{(j_1,\dots,j_l)}=\sum_{i=0}^{l}I_{(j_1,\dots,j_i,0,j_{i+1},\dots,j_l)}. \end{split} \end{equation}\\ That is, when we multiply the two integrals, we insert the index of the first integral into the index list of the second one, i.e. $\alpha$, summing over the different insertion positions; in addition, if the first integral is a Wiener integral, it cancels each Wiener index $1$ in the list $\alpha$ in turn and produces a $0$ in its place. We first show the $I_{(0)}$ part:\\ \begin{equation} d(I_{(0)}I_\alpha)=d(tI_{(j_1,\dots,j_l)})=dt\,I_{(j_1,\dots,j_l)}+tI_{(j_1,\dots,j_{(l-1)})}dW^{j_l}, \end{equation} \begin{equation} I_{(0),t}I_{\alpha,t}=\int^{t}d(I_{(0),s}I_{\alpha,s})=I_{(j_1,\dots,j_l,0),t}+\int^{t} sI_{(j_1,\dots,j_{(l-1)}),s}dW^{j_l}_s, \end{equation}\\ where we write $dW^0$ as $dt$. Recursively we have\\ \begin{equation} \begin{split} \int^{t} sI_{(j_1,\dots,j_{(l-1)}),s}dW^{j_l}_s&=\int^{t} I_{(j_1,\dots,j_{(l-1)},0),s}dW^{j_l}_s+\int^{t}\int^{s_1} sI_{(j_1,\dots,j_{(l-2)}),s}dW^{j_{l-1}}_{s_2}dW^{j_l}_{s_1}\\ &=I_{(j_1,\dots,j_{(l-1)},0,j_l),t}+\int^{t}\int^{s_1} s_2I_{(j_1,\dots,j_{(l-2)}),s_2}dW^{j_{l-1}}_{s_2}dW^{j_l}_{s_1}, \end{split} \end{equation}\\ and therefore, by induction and the fact that $I_v=1$, we have\\ \begin{equation} I_{(0)}I_{\alpha}=\sum_{i=0}^{l}I_{(j_1,\dots,j_i,0,j_{i+1},\dots,j_l)}. \end{equation}\\ The same process applies to the case $I_{(1)}=W_t$, with the only difference being that\\ \begin{equation} d(WI_{(j_1,\dots,j_l)})=dW\,I_{(j_1,\dots,j_l)}+WI_{(j_1,\dots,j_{(l-1)})}dW^{j_l} +\delta_{j_l,1}dt\,I_{(j_1,\dots,j_{(l-1)})}, \end{equation}\\ due to the fact that $dW^2=dt$. The final result, Eq. (\ref{relation between Ito multiple integrals}), follows straightforwardly. \section{Order 1.5 Strong Scheme}\label{numerical update rule appendix} In our numerical simulation, we use an order 1.5 strong approximation.
The word \textit{strong} means that the simulated stochastic variable approaches a certain realized trajectory with an error of order 1.5, and this directly relates to the use of the It\^o--Taylor expansion. For consistency with Ref. \cite{NumericalSimulationofStochasticDE}, we rewrite $X_t$ as $Y_n$, where $n$ denotes the $n$-th time step. The update rule of $Y_n$ is essentially\\ \begin{equation}\label{1.5 Taylor} Y_{n+1}=Y_n+aI_{(0)}+bI_{(1)}+a_{(0)}I_{(0,0)}+b_{(0)}I_{(0,1)}+a_{(1)}I_{(1,0)}+b_{(1)}I_{(1,1)}+b_{(1,1)}I_{(1,1,1)}, \end{equation}\\ where we have used the notation of Eq. (\ref{coefficient functions}). For completeness we present the terms built from $a$ and $b$:\\ \begin{equation}\label{derivatives appendix} \begin{split} a_{(0)}=aa'+\frac{1}{2}b^2a'',&\quad b_{(0)}=ab'+\frac{1}{2}b^2b'',\\ a_{(1)}=ba',\quad b_{(1)}=bb',&\quad b_{(1,1)}=b((b')^2+bb''). \end{split} \end{equation}\\ Then we need to evaluate the integral factors $I$. According to Eq. (\ref{relation between Ito multiple integrals}), we have\\ \begin{equation} \begin{split} I_{(0)}&=dt,\quad I_{(1)}={} dW,\quad I_{(0,0)}=\frac{1}{2}dt^2,\\ dW\,dW=I_{(1)}I_{(1)}&=2I_{(1,1)}+I_{(0)},\quad dW\,dt=I_{(1)}I_{(0)}=I_{(1,0)}+I_{(0,1)},\\ &dWI_{(1,1)}={}3I_{(1,1,1)}+I_{(1,0)}+I_{(0,1)}, \end{split} \end{equation}\\ where we have used the symbol $dt$ to denote our iteration time step, which is sometimes also written as $\Delta t$ or $\Delta$. Thus both $dt$ and $dW$ are finite, and do not involve integrated values, i.e. $dW^2\ne dt$. Then we reach the result:\\ \begin{equation}\label{numerical step definition} \begin{split} I_{(1,1)}&=\frac{1}{2}(dW^2-dt),\quad I_{(1,0)}=:dZ,\quad I_{(0,1)}=dW\,dt-dZ,\\ I_{(1,1,1)}&=\frac{1}{3}(\frac{dW}{2}(dW^2-dt)-dW\,dt)=dW(\frac{1}{6}dW^2-\frac{1}{2}dt)=\frac{dW}{2}\left(\frac{dW^2}{3}-dt\right), \end{split} \end{equation}\\ where we have to define an additional random variable $dZ$. Here $dZ$ satisfies\\ \begin{equation}\label{dZ} dZ\sim\mathcal{N}\left(0,\frac{1}{3}dt^3\right),\quad E(dZ\,dW)=\frac{1}{2}dt^2, \end{equation}\\ which is a Gaussian variable correlated with $dW$. Its properties are discussed in Ref. \cite{NumericalSimulationofStochasticDE}. Then we can write everything explicitly:\\ \begin{equation}\label{1.5 Taylor complete} \begin{split} Y_{n+1}={}&Y_n+a\,dt+b\,dW+\frac{1}{2}bb'(dW^2-dt)+\frac{1}{2}\left(aa'+\frac{1}{2}b^2a''\right)dt^2\\ &+a'b\,dZ+\left(ab'+\frac{1}{2}b^2b''\right)(dW\,dt-dZ)\\ &+\frac{1}{2}b\left(bb''+(b')^2\right)dW\left(\frac{1}{3}dW^2-dt\right). \end{split} \end{equation}\\ This is the order 1.5 strong Taylor scheme, and we omit function arguments when they are just $Y_n$. To obtain stable and precise numerical results, the deterministic second-order term with $dt^2$ is also included, while the stochastic terms of the second order are not. Next, we need to avoid the explicit calculation of derivatives and use numerically evaluated values instead. The scheme then becomes\\ \begin{equation}\label{order 1.5 explicit} \begin{split} Y_{n+1}={}&Y_n+b\,dW+\frac{1}{4}\left(a(Y_+)+2a+a(Y_-)\right)dt+\frac{1}{4\sqrt{dt}}\left(b(Y_+)-b(Y_-)\right)(dW^2-dt)\\ &+\frac{1}{2\sqrt{dt}}\left(a(Y_+)-a(Y_-)\right)dZ+\frac{1}{2dt}\left(b(Y_+)-2b+b(Y_-)\right)(dW\,dt-dZ)\\ &+\frac{1}{4dt}\left(b(\Phi_+)-b(\Phi_-)-b(Y_+)+b(Y_-)\right)dW\left(\frac{1}{3}dW^2-dt\right), \end{split} \end{equation} \begin{equation} Y_\pm = Y_n+a\,dt\pm b\sqrt{dt},\quad \Phi_\pm = Y_+ \pm b(Y_+)\sqrt{dt}, \end{equation}\\ which can be confirmed by expanding $Y_\pm$ and $\Phi_\pm$ up to order 1 of $dt$.
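As a compact summary of this derivative-free update rule, the following sketch implements Eq. (\ref{order 1.5 explicit}) for a scalar equation, with $dZ$ sampled jointly with $dW$ according to Eq. (\ref{dZ}). The drift and diffusion functions at the bottom are arbitrary test functions, not the equation simulated in our experiments.
\begin{verbatim}
import numpy as np

def order15_step(y, a, b, dt, rng):
    """One derivative-free order 1.5 strong step for dX = a(X) dt + b(X) dW."""
    u1, u2 = rng.standard_normal(2)
    dW = u1 * np.sqrt(dt)
    dZ = 0.5 * dt**1.5 * (u1 + u2 / np.sqrt(3.0))  # Var(dZ)=dt^3/3, E[dZ dW]=dt^2/2
    yp = y + a(y) * dt + b(y) * np.sqrt(dt)        # Y_+
    ym = y + a(y) * dt - b(y) * np.sqrt(dt)        # Y_-
    pp = yp + b(yp) * np.sqrt(dt)                  # Phi_+
    pm = yp - b(yp) * np.sqrt(dt)                  # Phi_-
    return (y + b(y) * dW
            + (a(yp) + 2.0 * a(y) + a(ym)) * dt / 4.0
            + (b(yp) - b(ym)) * (dW**2 - dt) / (4.0 * np.sqrt(dt))
            + (a(yp) - a(ym)) * dZ / (2.0 * np.sqrt(dt))
            + (b(yp) - 2.0 * b(y) + b(ym)) * (dW * dt - dZ) / (2.0 * dt)
            + (b(pp) - b(pm) - b(yp) + b(ym)) * dW * (dW**2 / 3.0 - dt) / (4.0 * dt))

rng = np.random.default_rng(0)
y = 1.0
for _ in range(1000):   # test equation: dX = -X dt + 0.1 X dW
    y = order15_step(y, lambda x: -x, lambda x: 0.1 * x, 1.0e-3, rng)
\end{verbatim}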
To confirm Eq. (\ref{order 1.5 explicit}), note that the coefficients of the stochastic terms in Eq. (\ref{1.5 Taylor complete}) are multiplied by factors of order at least $\frac{1}{2}$ in $dt$, so that a first-order expansion suffices for them; only the deterministic $dt$ term, which must reproduce the expansion up to $(dt)^2$, requires a second-order expansion:\\ \begin{equation} \begin{split} \frac{dt}{4}\left(a(Y_+)+2a+a(Y_-)\right)&=\frac{dt}{4}\left(4a+2a'a\,dt+a''b^2\,dt\right)=a\,dt+\frac{dt^2}{2}\left(aa'+\frac{1}{2}b^2a''\right),\\ \frac{1}{2\sqrt{dt}}\left(b(Y_+)-b(Y_-)\right)&=\frac{1}{2\sqrt{dt}}\left(2bb'\sqrt{dt}\right)=bb',\qquad \frac{1}{2\sqrt{dt}}\left(a(Y_+)-a(Y_-)\right)=a'b,\\ \frac{1}{2dt}\left(b(Y_+)-2b+b(Y_-)\right)&=\frac{1}{2dt}\left(2ab'\,dt+b''b^2\,dt\right)=ab'+\frac{1}{2}b^2b'',\\ \frac{1}{2dt}\big(b(\Phi_+)-b(\Phi_-)-b(Y_+)&+b(Y_-)\big)=\frac{1}{2dt}\left(2b'(Y_+)b(Y_+)\sqrt{dt}-2bb'\sqrt{dt}\right)\\ &\qquad\qquad=\frac{1}{2dt}\left(2\left(b'+b''b\sqrt{dt}\right)\left(b+b'b\sqrt{dt}\right)\sqrt{dt}-2bb'\sqrt{dt}\right)\\ &\qquad\qquad=b(b')^2+b^2b''. \end{split} \end{equation}\\ Therefore the update rule Eq. (\ref{order 1.5 explicit}) is correct. However, this explicit scheme alone is not sufficient. In an actual simulation, the components of the simulated state vector rotate in the complex plane, and the high-frequency components rotate faster and accumulate error more quickly. Because the discretization error tends to increase the norm of the values, it accumulates and grows exponentially, which is especially severe for the high-frequency components near the energy cutoff. Therefore, where possible we use an implicit method, which takes the update target $Y_{n+1}$ among its arguments; this resolves the high-frequency divergence by forcing the error not to increase the norm, and it typically shrinks the components with large error so that they neither accumulate nor propagate. The implicit update rule is the following:\\ \begin{equation}\label{order 1.5 implicit} \begin{split} Y_{n+1}={}&Y_n+b\,dW+\frac{1}{2}\left(a(Y_{n+1})+a\right)dt+\frac{1}{4\sqrt{dt}}\left(b(Y_+)-b(Y_-)\right)(dW^2-dt)\\ &+\frac{1}{2\sqrt{dt}}\left(a(Y_+)-a(Y_-)\right)\left(dZ-\frac{1}{2}dW\,dt\right)+\frac{1}{2dt}\left(b(Y_+)-2b+b(Y_-)\right)(dW\,dt-dZ)\\ &+\frac{1}{4dt}\left(b(\Phi_+)-b(\Phi_-)-b(Y_+)+b(Y_-)\right)dW\left(\frac{1}{3}dW^2-dt\right), \end{split} \end{equation} where $dZ,\ Y_\pm$ and $\Phi_\pm$ are the same as before. Note that there is an additional term $$\dfrac{1}{2\sqrt{dt}}\left(a(Y_+)-a(Y_-)\right)\left(-\frac{1}{2}dW\,dt\right),$$ which equals $-\dfrac{a'b}{2}dW\,dt$. Its role is to cancel the extra contribution of the $b\,dW$ term inside $Y_{n+1}$, which appears when we expand $Y_{n+1}$ by the It\^o--Taylor expansion (see Eq. (\ref{1.5 Taylor complete})):\\ \begin{equation}\label{Y_n+1} Y_{n+1}=Y_n+a\,dt+b\,dW+\frac{1}{2}bb'(dW^2-dt)+\frac{1}{2}\left(aa'+\frac{1}{2}b^2a''\right)dt^2+\dots, \end{equation} \begin{equation} a(Y_{n+1})=a+a'\left(a\,dt+b\,dW+\frac{1}{2}bb'(dW^2-dt)\right)+\frac{1}{2}a''b^2\,dW^2+\dots, \end{equation} \begin{equation} \frac{dt}{2}a(Y_{n+1})=\frac{dt}{2}a+\frac{dt^2}{2}a'a+\frac{dt\,dW}{2}a'b+\frac{dt(dW^2-dt)}{4}a'bb'+\frac{dt\,dW^2}{4}a''b^2+\dots, \end{equation}\\ where we have dropped terms of order higher than 2. By taking $dW^2=dt$, we recover the order 1.5 It\^o--Taylor expansion (\ref{1.5 Taylor complete}), as in the explicit update scheme case.
In our numerical simulation of Eq. (\ref{position measurement evolution}), we can split the function that governs the deterministic evolution as $a=a_1+a_2$, where $a_1$ is the Hamiltonian part $a_1(Y)=HY$, which is linear and can be evaluated implicitly: keeping only these terms for illustration,\\ \begin{equation} \begin{split} Y_{n+1}=Y_n+\frac{dt}{2}(HY_{n+1}+HY_n)\quad&\Rightarrow\quad \left(I-\frac{dt}{2}H\right)Y_{n+1}=\left(I+\frac{dt}{2}H\right)Y_n\\ &\Rightarrow\quad Y_{n+1}=\left(I-\frac{dt}{2}H\right)^{-1}\left(I+\frac{dt}{2}H\right)Y_n. \end{split} \end{equation}\\ The remaining part $a_2$, however, is nonlinear. For our simulated differential equation, we can evaluate $a_1(Y_{n+1})$ implicitly but not $a_2(Y_{n+1})$, and we therefore use a mixture of the explicit and implicit schemes. Because the terms $$\frac{1}{2}a(Y_{n+1})dt-\dfrac{1}{2\sqrt{dt}}\left(a(Y_+)-a(Y_-)\right)\cdot\frac{1}{2}dW\,dt$$ in the implicit scheme Eq. (\ref{order 1.5 implicit}) replace the term $$\frac{1}{4}\left(a(Y_+)+a(Y_-)\right)dt$$ in the explicit scheme Eq. (\ref{order 1.5 explicit}), we apply them separately to $a_1$ and $a_2$ in our simulated equation, that is,\\ \begin{equation}\label{partial implicit} \frac{1}{2}a_1(Y_{n+1})dt-\dfrac{1}{2\sqrt{dt}}\left(a_1(Y_+)-a_1(Y_-)\right)\cdot\frac{1}{2}dW\,dt+\frac{1}{4}\left(a_2(Y_+)+a_2(Y_-)\right)dt. \end{equation}\\ Substituting this expression for the corresponding part $\frac{1}{4}\left(a(Y_+)+a(Y_-)\right)dt$ of Eq. (\ref{order 1.5 explicit}), we obtain the update rule used in our experiments. On the other hand, when the simulated state is not a vector but a density matrix, this method is hard to apply, because there seems to be no readily available method to quickly solve equations of the form $Y=-i[H,Y]dt+C$ with a banded matrix $H$.\\ Due to the simplicity of evaluating $a_1$, we add one more term concerning $a_1$ to our update rule. First, by linearity $a'=a'_1+a'_2$, with $a''_1=0$ and $a'_1$ a constant linear map. We also know that the values concerning $a_1$ are typically larger than those concerning $a_2$ and $b$. Therefore, we add the deterministic third-order term $\frac{1}{6}dt^3(a'_1)^2a$ to the update rule, which corresponds to the third-order term $\frac{dt^3}{6}\,d^3Y/dt^3$ of the deterministic Taylor expansion, and which indeed also appears in the It\^o--Taylor expansion. Because the implicit method uses $\dfrac{dt}{2}a_1(Y_{n+1})$, which already includes a term $\dfrac{dt^3}{4}a'_1a'_1a$, for implicit methods we instead add $-\dfrac{1}{12}dt^3(a'_1)^2a$ to correct its value. This turns out to reduce the numerical error, as it acts to suppress the growth of the norm. It is especially important when an implicit method is not used, because truncating the deterministic Taylor expansion at the neighbouring orders below or above 3 results in update rules that increase the norm, which would make the values diverge exponentially.
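As an illustration of the implicit linear step, the following Python sketch treats the Schr\"odinger part alone, with a hypothetical tridiagonal finite-difference Hamiltonian; here $a_1(Y)=-iHY$, i.e.\ the $H$ written above absorbs the factor $-i$, and a banded solver is used:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_banded

# Hypothetical banded (tridiagonal) Hamiltonian: H = -(1/2) d^2/dx^2 + V(x)
# by finite differences, in units hbar = m = 1.
n, dx, dt = 256, 0.1, 1e-3
x = (np.arange(n) - n // 2) * dx
V = 0.5 * x ** 2
main = 1.0 / dx ** 2 + V                 # diagonal of H
off = -0.5 / dx ** 2 * np.ones(n - 1)    # off-diagonals of H

def implicit_linear_step(psi):
    """Solve (I + i dt/2 H) psi_{n+1} = (I - i dt/2 H) psi_n,
    the Crank-Nicolson form of the implicit rule above."""
    rhs = psi - 0.5j * dt * (main * psi)       # (I - i dt/2 H) psi
    rhs[:-1] = rhs[:-1] - 0.5j * dt * off * psi[1:]
    rhs[1:] = rhs[1:] - 0.5j * dt * off * psi[:-1]
    # banded representation of (I + i dt/2 H) for solve_banded
    ab = np.zeros((3, n), dtype=complex)
    ab[0, 1:] = 0.5j * dt * off                # superdiagonal
    ab[1, :] = 1.0 + 0.5j * dt * main          # diagonal
    ab[2, :-1] = 0.5j * dt * off               # subdiagonal
    return solve_banded((1, 1), ab, rhs)

psi = np.exp(-x ** 2).astype(complex)          # Gaussian test state
psi = implicit_linear_step(psi)
\end{verbatim}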
\section{Background}\label{introBackground} \subsection{Quantum Control}\label{quantum control} In general, a control problem is to find a control protocol that maximizes a score, or minimizes a cost, which encodes a prescribed target of control. The control, output by the controller, is a time sequence that influences the evolution of the controlled system. When the evolution of the system is deterministic, the problem can be formulated as \cite{LinearQuadraticControl}\\ \begin{equation}\label{control} u^*=\argmin_u \left[\int_{0}^{T} L(u,x,t)\, dt + L_T(u,x)\right],\quad dx=f(x,u)dt, \end{equation}\\ where $u$ is the control variable, $L$ stands for the loss, $T$ is the total time considered in the control, $L_{T}$ is a specific loss for the final state, and $f$ represents the rule of time evolution of the system; both $x$ and $u$ depend on time $t$. The goal of a control problem is to find this $u^*$, which is called the optimal control. If $u$ does not use information obtained from the controlled system $x$ during the control, the control is called \textit{open-loop} \cite{ControlSystems}; if it depends on information obtained from the controlled system, it is called \textit{closed-loop}, or \textit{feedback}, control. Feedback control necessarily involves measurement, and in the quantum case it is therefore measurement-feedback control. For classical systems, the control problem can often be solved rather straightforwardly, while for quantum systems this is not the case. The main reason lies in the complexity of describing the controlled system. To describe a classical system, a few numbers representing the relevant physical quantities suffice, while for a quantum system we need many more parameters to describe the superposition among different components, and for a many-body state we even need exponentially many parameters. This difficulty inhibits the straightforward analysis of quantum control problems, except for a few cases where the quantum system can be simplified.
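One well-known solvable special case of Eq. (\ref{control}) is the linear-quadratic setting, which also underlies the LQG theory used later in this thesis. As a point of reference, the following Python sketch (a textbook discrete-time construction with illustrative matrices, not our controlled system) computes the optimal feedback gains by a backward Riccati recursion:
\begin{verbatim}
import numpy as np

# Toy discrete-time analogue of Eq. (control): minimize
# sum_t (x_t^T Q x_t + u_t^T R u_t) subject to x_{t+1} = A x_t + B u_t.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.01]])
T = 50

P = Q.copy()                  # terminal cost, the analogue of L_T
gains = []
for _ in range(T):            # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()               # gains[t] is the optimal feedback at step t

# closed-loop rollout from x_0 = (1, 0): u_t = -K_t x_t
x = np.array([1.0, 0.0])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
\end{verbatim}
The continuous-time, noisy counterpart of this construction is the LQG control reviewed in Appendix A.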
This quantum control problem did not attract as much attention as it deserved until the development of controllable artificial quantum systems in the last two decades \cite{ColdAtomQuantumControl}. Examples of such controllable systems are quantum dots in semiconductors and photonic crystals, trapped-ion systems, superconducting qubits, optical lattices, nitrogen-vacancy (NV) centers, coupled optical systems, cavity optomechanical systems and so on \cite{QuantumDots,QuantumSpinDots,QuantumSimulation,TrappedIon,NVCenter,Superconducting,OpticalQuantumComputation,CavityOptomechanics}. They are used as platforms for the simulation of quantum dynamics or considered as potential candidates for quantum computation devices. However, none of them is perfect. They always come with sources of noise and contain small error terms that cannot be controlled in a straightforward manner. For example, superconducting qubits use the lowest two energy levels of a small superconducting circuit as the logical $|0\rangle$ and $|1\rangle$ states for quantum computation, but there is always a non-zero probability for the state to jump to energy levels higher than $|0\rangle$ and $|1\rangle$ and go out of control. To find a control strategy that suppresses such problems, typically approximations and assumptions are made to simplify the situation, often involving perturbative expansions or effective Hamiltonians on a reduced subspace, and then a control is calculated. For the superconducting-qubit example, the control pulses are optimized so that leakage to the one or two nearest energy levels above the operating qubit levels is suppressed. Concerning decoherence, dynamical decoupling is a good example of such a control, which considers the fast-control limit and a series expansion \cite{DynamicDecoupling}. Another example is the spin-echo technique, which specifically deals with time-independent inhomogeneous imperfections \cite{SpinEcho}. All these methods come with important assumptions and approximations. Moreover, it should be mentioned that usually the analysis does not straightforwardly give the optimal control as defined in Eq. (\ref{control}), but only gives hints from which reasonable control protocols can be designed by hand.\\ When the above analysis-based methods do not work, numerical search algorithms provide an alternative. They assume that the control is a series of pulses, or a superposition of trigonometric waves, parametrized by a sequence of parameters; they then consider small variations of these parameters to get closer to the control target, such as a target fidelity, and iterate to gradually improve the control parameters. These algorithms include QOCT \cite{QuantumOptimalControlTheory}, CRAB \cite{CRAB} and GRAPE \cite{GRAPE}. However, since these methods are either effectively or directly based on gradients, in complicated situations they can easily be trapped in local optima and fail to give satisfactory solutions, as exemplified in Ref. \cite{EvolutionaryQuantumControl}. Nevertheless, they have been shown to be useful in many simple practical scenarios where no other means are available to find a control, even including simple chaotic systems \cite{ChaosControl}. One weakness of these methods is that they can only be used as open-loop controls, for which the starting point and the endpoint of the control are prescribed beforehand. In practice it is almost impossible to optimize the control while simultaneously conditioning it on all possible measurement outcomes.\\ On the whole, there is no universal approach to quantum control, and most current methods are ad hoc for specific situations. If the controlled system becomes more complicated and involved, it is therefore much more difficult to find a satisfactory control using the above strategies, and it is desirable to find a more general strategy that overcomes the difficulty of analysing a complex system in order to obtain a control. \\ \subsection{Deep Learning} Deep learning, namely machine learning with deep neural networks, has become popular and extensively used in recent years, especially for tasks that were previously considered difficult for AI. For instance, deep learning has established new records in many problem-solving contests \cite{ImageNetClassification,ZeroResourceSpeech} and defeated human champions in games \cite{chess-like}, and it is still being actively researched and is developing rapidly. Deep learning will be introduced more formally and in detail in Chapter 3, and therefore we only give a brief introduction here to present the general idea.\\ Generally speaking, deep learning uses a deep neural network as a powerful function approximator that can learn patterns from previous data to give predictions on new data, as illustrated in Fig. \ref{fig:deeplearning}. It learns by modifying its internal parameters to fit given data-answer pairs provided as training data. Due to the complexity and universality of deep neural networks, it has turned out that this simple learning procedure can make the neural network correctly learn various complex relations between the data and the answers.
For example, based on this simple method, deep learning can be used to recognize objects, modify images and perform translation \cite{Translation, ImagenetSota, DeepPS}, and to evaluate positional advantages in chess \cite{chess-like}. It is generally believed that neural networks can learn almost any function, as long as the function is sufficiently smooth and not pathological. However, as a drawback, deep learning always gives approximate solutions and does not explain its reasoning; it is typically not precise, and so far it remains largely a black-box technique that we cannot explain well \cite{DeepLearningBook}.\\ \begin{figure}[tb] \centering \subfloat{ \begin{minipage}[c]{0.11\linewidth} \centering \includegraphics[width=\linewidth]{chapter1/deeplearning1} \end{minipage}}\qquad {\LARGE $\Rightarrow$}\qquad \subfloat{ \begin{minipage}[c]{0.144\linewidth} \centering \includegraphics[width=\linewidth]{chapter1/deeplearning2} \end{minipage}} \caption{Working of a deep learning system. It learns by fitting its internal connection parameters (weights and biases) to the given data-answer pairs, such that the computation of the network correctly relates each datum to its corresponding answer over the whole training dataset. After training, it is used to make predictions on new data.} \label{fig:deeplearning} \end{figure} \section{Combination of Deep Learning and Quantum Control} \label{introDeepLearningControl} To use deep learning for a control problem, the reinforcement learning scheme needs to be implemented, which will be explained in detail in Chapter 3. Reinforcement learning with deep learning uses a neural network to evaluate which control options are good and which are bad during the control. The network explores its environment, which represents the behaviour of the controlled system, to accumulate experience, learns from the accumulated experience, and makes its control decisions so as to maximize the control target. Overall, it automatically explores the different possibilities of its environment, can learn the underlying rules of the environment, and often avoids local optima. Therefore, it can be seen as an alternative to the gradient-based quantum control algorithms of Section \ref{quantum control}. One advantage of reinforcement-learning-based control is that it can deal with both open-loop control and closed-loop control in the same way. Since the neural network takes information about the controlled system as input to produce a control output, it does not matter if the controlled system changes suddenly due to measurement backaction: if the system changes, the control output from the neural network changes accordingly, and that is all. The versatility of AI makes all kinds of control scenarios possible without the need for human design, which is difficult with conventional methods alone. \\ \begin{figure}[thb] \centering \includegraphics[width=0.3\linewidth]{chapter1/The-cart-pole-system}\qquad\qquad \includegraphics[width=0.3\linewidth]{"chapter1/inverted_potential"} \caption{The classical cart-pole system and the controlled inverted-potential quantum system with measurement. In either case the system is unstable, and the particle tends to fall away from the center. The target of control is to move the cart, or to apply an external force, so that the particle stays at the center.
For the quantum case, position measurement is necessary to prevent the wavefunction from spreading, and at the same time it serves as a source of random noise.} \label{fig:cart-pole-system} \end{figure} Existing research on quantum control based on deep reinforcement learning is scarce, and almost all of it considers only discrete systems composed of spins and qubits, mostly focusing on error correction or noise-resistant manipulation under specific noise models \cite{GateControlDeepLearning, ErrorCorrectionDeepLearning, SpinControlDeepLearning}, which clearly serve practical purposes. Moreover, most of these works involve only deterministic evolution of the states. In our research, we consider a system in continuous position space subject to measurement, which is yet to be investigated, and we use deep reinforcement learning to control the system and compare its control strategy with existing conventional controls, in order to gain insight into what is learned by the AI and how it may outperform existing methods. \\ Specifically, we consider a particle in a 1D quadratic potential under continuous position measurement. The measurement introduces stochasticity into the system and makes the setting more realistic. When the potential is upright, i.e. its minimum lies at the center, the system is just a usual harmonic oscillator; when the potential is inverted, the system is essentially an inverted pendulum and becomes analogous to the standard cart-pole system \cite{CartPole}, which is a benchmark for reinforcement learning control (Fig. \ref{fig:cart-pole-system}). In the former case, the target of control is to cool down the harmonic oscillator, which amounts to ground-state cooling, an important real problem in experiments \cite{MeasurementFeedbackControlOptomechanic}. In the latter case, the target of control is to keep the particle at the center of the potential, which amounts to stabilizing an unstable system. In both cases, the controller uses an external force exerted on the particle to control it. We train a neural network following the strategy of reinforcement learning, and compare its performance on the two tasks with that of the optimal control obtained from linear-quadratic-Gaussian (LQG) control theory \cite{LinearQuadraticControl}. Next, we extend the problem to an anharmonic setting by making the potential quartic, and repeat the above procedure. In this case an optimal control strategy is not known, and we therefore use suboptimal strategies, Gaussian approximations and local linear approximations of the control to derive several control protocols from a conventional point of view, and we compare their performances with the reinforcement learning control. We also compare the behaviour of the controls by inspecting their outputs, and we discuss the properties of the underlying quantum systems to gain insight into the controllers' behaviour. \\ \section{Outline}\label{outline} The present thesis is organized as follows.\\ In Chapter \ref{continuous quantum measurement}, we present a review of continuous measurement on quantum systems. We give the formulation of a general measurement, and formally derive the stochastic differential equations that govern the evolution of a quantum state subjected to continuous measurement, where the evolution is called a quantum trajectory.
We discuss both jump and diffusive trajectories, and give the equation used in the control problem we investigate.\\ In Chapter \ref{deep reinforcement learning}, we present a review of deep reinforcement learning. We start from the basics of machine learning, introduce deep learning with its motivation and uses, and then introduce reinforcement learning, especially the particular type called Q-learning, which is used in our research. Finally, we discuss the implementation of deep learning for a reinforcement learning problem, i.e. deep reinforcement learning.\\ In Chapter \ref{control quadratic potentials}, we describe the quadratic control problems introduced in the previous section. We first analyse the problems to show that they can be solved by standard LQG control, then describe our problem setting and our learning system in detail, and present the results of the reinforcement learning alongside those of the optimal control. We compare the results, and also directly compare the output of the deep learning system with that of the optimal control. We find that both the final performances and the control behaviours of the two are similar, which implies that the AI correctly learned the optimal control, although small traces of imperfection remain in the AI's behaviour. We discuss these results. \\ In Chapter \ref{control quartic potentials}, we describe the quartic anharmonic control problems. We follow the same line of reasoning as in the quadratic case and show that the quartic case cannot be simplified in the same way, and that the system exhibits intrinsically quantum mechanical behaviour that cannot be modelled classically. We then discuss possible control strategies based on existing ideas and compare their performances with our trained reinforcement learning controllers, organising the results and discussions in the same way as in Chapter \ref{control quadratic potentials}. We find that, when properly configured, the reinforcement learning controller can outperform all of our derived control strategies, which demonstrates the superiority and versatility of reinforcement learning.\\ In Chapter \ref{summary}, we present the conclusions of this thesis and their implications, and discuss future perspectives.\\ Some technical details are discussed in the appendices. Appendix A reviews the linear-quadratic-Gaussian (LQG) control theory used in Chapters \ref{control quadratic potentials} and \ref{control quartic potentials}. Appendix B explains the numerical methods implemented in our numerical simulation of the quantum systems. Appendix C presents the detailed techniques and configurations adopted in our reinforcement learning algorithm. \section{General Model of Quantum Measurement}\label{measurement model} In the postulates of quantum mechanics \cite{NielsenChuang}, a general measurement is described by a set of linear operators $\{M_m\}$, with $m$ denoting the measurement outcomes.
These operators act on the state space of the measured quantum system and satisfy the completeness condition\\ \begin{equation}\label{completeness} \sum_{m}M^{\dagger}_{m}M_{m}=I, \end{equation}\\ such that the unconditioned post-measurement quantum state, summed over measurement outcomes, is trace-preserving:\\ \begin{equation}\label{quantum operation} \rho'=\mathcal{E}\left(\rho\right)=\sum_{m}M_m\rho M^{\dagger}_m\quad\Rightarrow\quad\text{tr}\left(\rho'\right)=\text{tr}\left(\rho\sum_{m}M^{\dagger}_{m}M_{m}\right)=\text{tr}(\rho), \end{equation}\\ where $\rho$ is the measured quantum state and $M_m\rho M^{\dagger}_m$ represents the state after a measurement outcome $m$ is observed. This trace-preservation condition ensures that the total probability of all measurement outcomes is one. To obtain a normalized state after a certain measurement outcome, the state is divided by its outcome probability and becomes\\ \begin{equation}\label{postmeasurement state} \rho_m=\dfrac{M_m\rho M^{\dagger}_m}{\text{tr}\left(M_m\rho M^{\dagger}_m\right)}\,, \end{equation}\\ where the trace $\text{tr}\left(M_m\rho M^{\dagger}_m\right)$ is the probability of measurement outcome $m$ for the state $\rho$.\\ The simplest and most standard measurement is the projection measurement $\{P_m\}$, satisfying Eq. (\ref{completeness}) and\\ \begin{equation}\label{projector1} \forall m,\quad\forall n \in \mathbb{Z}^{+},\quad (P_m)^{n} = P_m,\quad P^{\dagger}_m = P_m, \end{equation}\\ and\\ \begin{equation}\label{projector2} P_i P_j = \delta_{ij}P_i,\qquad \delta_{ij}=\left\{ \begin{array}{rl} 0 & \text{if } i \ne j;\\ 1 & \text{if } i = j, \end{array} \right. \end{equation} such that the operators are projectors. These projector properties ensure that after a measurement outcome $m$ is obtained, an immediately repeated measurement yields the outcome $m$ again and leaves the state unchanged. This is the simplest and most basic quantum measurement.
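As a concrete illustration of Eqs. (\ref{completeness})--(\ref{postmeasurement state}), the following minimal Python sketch implements a two-outcome measurement on a qubit that is \textit{not} projective (the operators here are chosen purely for illustration):
\begin{verbatim}
import numpy as np

# A two-outcome weak measurement of sigma_z on a qubit.
theta = 0.3
M0 = np.array([[np.cos(theta), 0], [0, np.sin(theta)]])
M1 = np.array([[np.sin(theta), 0], [0, np.cos(theta)]])
# completeness condition: M0^dag M0 + M1^dag M1 = I
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

rho = np.full((2, 2), 0.5)                      # the state |+><+|
for M in (M0, M1):
    p = np.trace(M @ rho @ M.conj().T).real     # outcome probability
    post = M @ rho @ M.conj().T / p             # normalized post-measurement state
    print(p, post.round(3))
\end{verbatim}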
Now we show that it is possible to extend the projection measurement to a general measurement $\{M_m\}$ as in Eqs. (\ref{completeness}) to (\ref{postmeasurement state}) by using an indirect measurement scheme. \begin{figure}[t] \centering \begin{tikzpicture} \draw[thick] (0,0) rectangle (1.2,2.3); \node (U) at (0.6,1.15) {{\large $U$}}; \node (e) at (-1.5,0.4) {{\large $\rho_e$}}; \node (rho) at (-1.5,1.9) {{\large $\rho$}}; \draw[thick] (e) -- (0,0.4); \draw[thick] (rho) -- (0,1.9); \draw[thick] (2.7,0) rectangle (3.7,0.8); \draw[thin] (3.62,0.2) arc (40:140:0.55); \draw[thin,->] (3.2,0.05) -- (3.4,0.6); \draw[thick] (1.2,0.4) -- (2.7,0.4); \node (m) at (4.2,0.4) {{\large $m$}}; \node (rhom) at (3.75,1.9) {{\large $M_m\rho M^{\dagger}_m$}}; \draw[thick] (1.2,1.9) -- (rhom); \end{tikzpicture} \caption{Indirect measurement model. We let a meter interact with a measured quantum state, and we measure the meter to obtain a direct measurement result $m$.} \label{fig:indirect measurement} \end{figure}\\ Suppose we want to measure a state $\rho$. We prepare a meter state $\rho_e$, which is known and not entangled with $\rho$, let it interact with $\rho$ through a unitary evolution $U$, and then measure the state of the meter using the projection measurement, as schematically illustrated in Fig. \ref{fig:indirect measurement}. For a measurement outcome $m$ on the meter, the unnormalized post-measurement state of the initial $\rho$ becomes\\ \begin{equation}\label{indirect measurement1} \tilde{\rho}_m=\text{tr}_e \left((I\otimes P_m)U(\rho \otimes \rho_e)U^{\dagger}(I\otimes P_m)\right)=\text{tr}_e \left((I\otimes P_m)U(\rho \otimes \rho_e)U^{\dagger}\right). \end{equation}\\ For simplicity, we assume that $\rho_e$ is pure, i.e. $\rho_e=|\psi_e\rangle\langle\psi_e|$, and we decompose $P_m$ as $P_m=\sum_{i}|\psi_i\rangle\langle\psi_i|$. The above result can then be written as\\ \begin{equation}\label{indirect measurement2} \tilde{\rho}_m=\sum\nolimits_{i}\langle\psi_i|U\left(\rho \otimes |\psi_e\rangle\langle\psi_e|\right) U^{\dagger}|\psi_i\rangle\,, \end{equation}\\ where the bras and kets act on the meter space. If we define $M_{m,i}\equiv\langle\psi_i|U|\psi_e\rangle$, it becomes\\ \begin{equation}\label{indirect measurement3} \tilde{\rho}_m=\sum_{i}M_{m,i}\,\rho M^{\dagger}_{m,i}\,, \end{equation}\\ which can be considered as a measurement with operator set $\{M_{(m,i)}\}$ and measurement outcomes $(m,i)$ as in Eqs. (\ref{completeness}) to (\ref{postmeasurement state}), where the information in the index $i$ is discarded. If the projector $P_m$ projects onto a single basis state $|\psi_m\rangle\langle\psi_m|$, i.e. $i$ has only one choice, this results in the measurement operator set $\{M_{m}\}$, with $\rho_m=\dfrac{M_m\rho M^{\dagger}_m}{\text{tr}\left(M_m\rho M^{\dagger}_m\right)}\,$. The completeness condition (\ref{completeness}) can be deduced from the completeness of the projectors $\{P_m\}$. Conversely, for a given set of measurement operators $\{M_{m}\}$ that satisfies the completeness condition (\ref{completeness}), there exists a unitary $U$ such that $\langle\psi_m|U|\psi_e\rangle=M_{m}$, which allows us to implement the measurement $\{M_{m}\}$ through an indirect measurement with a direct projection measurement on the meter \cite{NielsenChuang}.\\
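As a quick numerical check of this construction (a standard example: $U$ a CNOT with the system as control and the meter as target, and the meter prepared in $|0\rangle$), the induced operators $M_m=\langle m|U|0\rangle$ are exactly the projectors of a projective measurement of the system:
\begin{verbatim}
import numpy as np

# CNOT on |system, meter>, basis order 00, 01, 10, 11.
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])
U4 = CNOT.reshape(2, 2, 2, 2)   # U4[i_sys, i_meter, j_sys, j_meter]

Ms = [U4[:, m, :, 0] for m in (0, 1)]   # M_m = <m|_meter U |0>_meter
print(Ms[0])   # |0><0| on the system
print(Ms[1])   # |1><1| on the system
# completeness condition holds:
assert np.allclose(sum(M.conj().T @ M for M in Ms), np.eye(2))
\end{verbatim}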
\section{Continuous Limit of Measurement}\label{continuous measurement} \subsection{Unconditional State Evolution} We consider the continuous limit of repeated measurements in infinitesimal time steps. When the measurement outcomes are not taken into account, the state evolution is deterministic, with $\rho\to\mathcal{E}(\rho)$ for each measurement performed. We require this deterministic evolution to be continuous, that is,\\ \begin{equation}\label{continuous} 0<\left\|\lim_{dt\to 0}\dfrac{\mathcal{E}(\rho)-\rho}{dt}\right\|<\infty,\quad \mathcal{E}(\rho)=\sum_{m}M_m\rho M^{\dagger}_m, \end{equation}\\ where $\mathcal{E}$ necessarily depends on $dt$.\\ We first consider a binary measurement $\{M_0,M_1\}$ with two measurement outcomes. Due to the requirements $\sum_{m}M^{\dagger}_{m}M_{m}=I$ and $\left(\mathcal{E}(\rho)-\rho\right)\sim dt$, we set $M_0=I-\frac{R\,dt}{2}$ with $R$ Hermitian, such that\\ \begin{equation}\label{binary default} M_0 \rho M^{\dagger}_0=\rho-\frac{dt}{2}\{R,\rho\}+O(dt^2), \end{equation}\\ with $M_1$ satisfying\\ \begin{equation}\label{binary non-default} M_1 = L_1\,\sqrt{dt},\quad L^{\dagger}_1L_1=R, \end{equation}\\ so that the condition $\sum_{m}M^{\dagger}_{m}M_{m}=I$ holds to first order in $dt$. In this way, all requirements for a continuous measurement are satisfied \cite{QuantumMeasurement}. Then similarly, we may add more operators $M_i$ to the operator set $\{M_m\}$, satisfying\\ \begin{equation}\label{multiple measurement outcomes} M_i=L_i\,\sqrt{dt},\quad i=1,2,\cdots,m\ ,\quad M_0=I-\frac{dt}{2}\sum^{m}_{i=1}L^{\dagger}_iL_i, \end{equation}\\ which produces the Lindblad equation\\ \begin{equation}\label{Lindblad} \frac{d\rho}{dt}=-\frac{i}{\hbar}[H,\rho]+\sum_{i}\gamma_i\left(L_i\rho L^{\dagger}_i-\frac{1}{2}\{L^{\dagger}_iL_i,\rho\}\right), \end{equation}\\ where a self-evolution term $[H,\rho]$ with Hamiltonian $H$ has been taken into account, $\{\cdot,\cdot\}$ is the anticommutator, and $\gamma_i$ characterizes the strength of the measurement (it can be absorbed into $L_i$ by the rescaling $L_i\to\sqrt{\gamma_i}L_i$). The above results can also be derived from the indirect measurement model, by repeating a weak unitary interaction between the state and the meter followed by a projection measurement on the meter \cite{OpenQuantumSystemsAngelRivas}.\\
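For illustration, the following Python sketch integrates the Lindblad equation (\ref{Lindblad}) with a simple Euler step for a single decaying qubit ($H=0$, $L=\sigma_-$, $\gamma=1$; the parameters are illustrative only, and a higher-order integrator would be used in practice):
\begin{verbatim}
import numpy as np

L = np.array([[0.0, 1.0], [0.0, 0.0]])                   # sigma_-
gamma, dt = 1.0, 1e-3
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # excited state

def lindblad_rhs(rho):
    # dissipator: L rho L^dag - (1/2){L^dag L, rho}; H = 0 here
    D = (L @ rho @ L.conj().T
         - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return gamma * D

for step in range(2000):        # integrate up to t = 2
    rho = rho + dt * lindblad_rhs(rho)

# excited-state population decays as e^{-gamma t}
print(rho[1, 1].real, np.exp(-gamma * 2.0))
\end{verbatim}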
\subsection{Quantum Trajectory Conditioned on Measurement Outcomes}\label{Quantum Trajectory section} When the measurement outcomes of a continuous measurement are observed, the quantum state conditioned on the outcomes follows a quantum trajectory. This quantum trajectory can be considered as a stochastic process, and it is not necessarily continuous.\\ The probability of obtaining a measurement outcome $i$ in an infinitesimal measurement of duration $dt$ is\\ \begin{equation}\label{measurement outcome 1} p_i(dt)=\text{tr}(L_i\rho L^{\dagger}_i)\,dt\,, \end{equation}\\ which vanishes with $dt$. Therefore, in an infinitesimal length of time, measurement outcomes other than the outcome 0 can only appear with vanishingly small probabilities. We therefore take the limit in which all measurement outcomes except the outcome 0 are sparse in time, that is, two or more of them do not occur in the same infinitesimal time interval $dt$. If we denote the number of occurrences of measurement outcome $i$ in $dt$ by $dN_i(t)$, these obey\\ \begin{equation}\label{measurement events} dN_i(t)= 0\ \text{or}\ 1,\quad dN_idN_j=\delta_{ij}\,dN_i,\quad E\left[dN_i(t)\right]=\text{tr}\left(L_i\rho(t)L^{\dagger}_i\right)dt\,. \end{equation}\\ We now write the stochastic differential equation with these random variables to describe the state evolution conditioned on the measurement outcomes \cite{QuantumMeasurement}. For a pure state $|\psi\rangle$, we have\\ \begin{equation}\label{jump evolution pure} \begin{split} d|\psi(t)\rangle&=\left[\sum_i dN_i(t)\left(\frac{L_i}{\sqrt{\langle L^{\dagger}_i L_i\rangle}}-1\right)+[1-\sum_i dN_i(t)]\left(\frac{M_0}{\sqrt{\langle M_0^{\dagger}M_0\rangle}}-1\right)\right]|\psi(t)\rangle\\ &=\left[\sum_i dN_i(t)\left(\frac{L_i}{\sqrt{\langle L^{\dagger}_i L_i\rangle}}-1\right)+1\cdot\left(\frac{1-\frac{dt}{2}\sum^{m}_{i=1}L^{\dagger}_iL_i}{\sqrt{\langle 1-dt\sum^{m}_{i=1}L^{\dagger}_iL_i\rangle}}-1\right)\right]|\psi(t)\rangle\\ &=\left[\sum_i dN_i(t)\left(\frac{L_i}{\sqrt{\langle L^{\dagger}_i L_i\rangle}}-1\right)-dt\left(\frac{1}{2}\sum^{m}_{i=1}L^{\dagger}_iL_i-\frac{1}{2} \sum^{m}_{i=1}\langle L^{\dagger}_iL_i\rangle\right)\right]|\psi(t)\rangle, \end{split} \end{equation}\\ where the prefactor $[1-\sum_i dN_i(t)]$ has been replaced by $1$ in the second line, since non-zero events of $dN_i$ are rare and the difference contributes only at higher order. When the Hamiltonian of the state is taken into account, Eq. (\ref{jump evolution pure}) becomes\\ \begin{equation}\label{jump evolution with Hamiltonian} d|\psi(t)\rangle=\left[\sum_i dN_i(t)\left(\frac{L_i}{\sqrt{\langle L^{\dagger}_i L_i\rangle}}-1\right)-dt\left(\frac{i}{\hbar}H+\frac{1}{2}\sum^{m}_{i=1}L^{\dagger}_iL_i-\frac{1}{2} \sum^{m}_{i=1}\langle L^{\dagger}_iL_i\rangle\right)\right]|\psi(t)\rangle. \end{equation}\\ This is called a nonlinear stochastic Schr\"odinger equation (SSE). For a general mixed state, the equation is\\ \begin{align}\label{jump evolution with Hamiltonian mixed} d\rho=-dt\frac{i}{\hbar}(H_{\text{eff}}\rho-\rho H^{\dagger}_{\text{eff}})&+dt\sum^{m}_{i=1}\langle L^{\dagger}_iL_i\rangle\rho+\sum^{m}_{i=1}dN_i\left(\frac{L_i\rho L^{\dagger}_i}{\langle L^{\dagger}_iL_i\rangle}-\rho\right),\\ H_{\text{eff}}&:=H-\frac{i\hbar}{2}\sum_{i=1}^{m}L^{\dagger}_i L_i. \end{align}\\ Note that the expectation value $\langle\cdot\rangle$ depends on the current state $\rho(t)$ or $|\psi(t)\rangle$, which makes the equations nonlinear in the state through the $\langle\cdot\rangle\rho$ and $\langle\cdot\rangle|\psi(t)\rangle$ terms.
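A minimal Python sketch of one time step of this jump unravelling, Eq. (\ref{jump evolution with Hamiltonian}), for a single decay channel acting on a pure state (we set $H=0$ and $\hbar=1$ for brevity; illustrative only) is:
\begin{verbatim}
import numpy as np

def jump_sse_step(psi, L, dt, rng):
    Lpsi = L @ psi
    p_jump = (np.linalg.norm(Lpsi) ** 2) * dt   # E[dN] = <L^dag L> dt
    if rng.random() < p_jump:
        return Lpsi / np.linalg.norm(Lpsi)       # quantum jump
    # no-jump evolution under the effective non-Hermitian Hamiltonian
    psi = psi - 0.5 * dt * (L.conj().T @ (L @ psi))
    return psi / np.linalg.norm(psi)

rng = np.random.default_rng(1)
L = np.array([[0.0, 1.0], [0.0, 0.0]])           # sigma_-
psi = np.array([0.0, 1.0], dtype=complex)        # excited state
for _ in range(5000):
    psi = jump_sse_step(psi, L, 1e-3, rng)
\end{verbatim}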
Physically, when the operators $L_i$ are far from the identity and change the quantum state substantially, the rare non-zero $dN_i$ events are called quantum jumps, meaning that the state changes suddenly during its evolution. In the opposite limit, in which the operators $L_i$ are close to the identity and non-zero $dN_i$ events occur frequently, the state can evolve smoothly. This is called the diffusive limit, and there are multiple ways to achieve it. For instance, suppose we have discrete measurement outcomes $0,1,2$ such that\\ \begin{equation}\label{diffusive limit discrete} L_1=\sqrt{\frac{\Gamma}{2}}(I+l\,\hat{a}),\quad L_2=\sqrt{\frac{\Gamma}{2}}(I-l\,\hat{a}),\quad M_0=I-\frac{\Gamma\,dt}{2}\left(I+l^2\hat{a}^{\dagger}\hat{a}\right), \end{equation} \begin{equation}\label{diffusive limit discrete condition} l\to 0,\quad\Gamma\to\infty,\quad\Gamma l^2=\frac{\gamma}{2}, \end{equation}\\ where $\gamma$ is a constant. In this case, the frequency of the measurement outcomes $i=1,2$ becomes high:\\ \begin{equation}\label{frequent detection} \begin{split} E\left[\delta N_{1,2}(t)\right]=\text{tr}\left(L_{1,2}\,\rho L^{\dagger}_{1,2}\right)dt\cdot\frac{\delta t}{dt}&=\frac{\Gamma}{2}\left(1\pm l\langle\hat{a}+\hat{a}^{\dagger}\rangle+l^2\langle\hat{a}^{\dagger}\hat{a}\rangle\right)\delta t\\ &\approx\frac{\Gamma}{2}\left(1\pm l\langle\hat{a}+\hat{a}^{\dagger}\rangle\right)\delta t, \end{split} \end{equation}\\ where $\delta N_i(t)$ denotes the number of occurrences of measurement outcome $i$ in a small time interval $\delta t$, and we assume that the state does not change much during this interval. Under this assumption, $\delta N_i(t)$ follows a Poisson distribution over the interval $\delta t$, and for $\Gamma\to\infty$ its mean is large, so that it can be approximated as Gaussian:\\ \begin{equation}\label{Wiener process} \delta N_{1,2}(t)=\frac{\Gamma}{2}\left(1\pm l\langle\hat{a}+\hat{a}^{\dagger}\rangle\right)\delta t+\sqrt{\frac{\Gamma}{2}}\left(1\pm\frac{{l}}{2}\langle\hat{a}+\hat{a}^{\dagger}\rangle\right)\delta W_{1,2}(t), \end{equation} \begin{equation}\label{Wiener Gaussian} \delta W_{1,2}(t)\sim\mathcal{N}\left(0,\delta t\right), \end{equation}\\ where $\Gamma\delta t$ is large, and $\delta W_i(t)$ is a Wiener increment satisfying $\delta W_i\delta W_j=\delta_{ij}\delta t$. To proceed, we first calculate $d\rho$:\\ \begin{equation}\label{d rho} \begin{split} d\rho=&-dt\frac{i}{\hbar}(H_{\text{eff}}\rho-\rho H^{\dagger}_{\text{eff}})+dt\sum^{2}_{i=1}\langle L^{\dagger}_iL_i\rangle\rho+\sum^{2}_{i=1}dN_i\left(\frac{L_i\rho L^{\dagger}_i}{\langle L^{\dagger}_iL_i\rangle}-\rho\right)\\ =&-dt\frac{i}{\hbar}[H,\rho]-dt{\frac{\Gamma}{2} }\left\{I+l^2\hat{a}^\dagger\hat{a},\rho\right\}+\Gamma\left(1+l^2\langle\hat{a}^{\dagger}\hat{a}\rangle\right)dt\,\rho\\ &+dN_1\left(\frac{\rho+l\hat{a}\rho+l\rho\hat{a}^{\dagger}+l^2\hat{a}\rho\hat{a}^{\dagger}}{1+l\langle\hat{a}+\hat{a}^{\dagger}\rangle+l^2\langle\hat{a}^{\dagger}\hat{a}\rangle}-\rho\right)+dN_2\left(\frac{\rho-l\hat{a}\rho-l\rho\hat{a}^{\dagger}+l^2\hat{a}\rho\hat{a}^{\dagger}}{1-l\langle\hat{a}+\hat{a}^{\dagger}\rangle+l^2\langle\hat{a}^{\dagger}\hat{a}\rangle}-\rho\right)\\ =&-dt\frac{i}{\hbar}[H,\rho]-dt{\frac{\Gamma l^2}{2} }\left\{\hat{a}^\dagger\hat{a},\rho\right\}+\Gamma l^2\langle\hat{a}^{\dagger}\hat{a}\rangle dt\,\rho\\ &+dN_1\left(l\hat{a}\rho+l\rho\hat{a}^{\dagger}+l^2\hat{a}\rho\hat{a}^{\dagger}-l\langle\hat{a}+\hat{a}^{\dagger}\rangle{\rho}+{l^2}\langle\hat{a}+\hat{a}^{\dagger}\rangle^2{\rho}-l^2\langle\hat{a}^{\dagger}\hat{a}\rangle\rho-l^2\langle\hat{a}+\hat{a}^{\dagger}\rangle(\hat{a}\rho+\rho\hat{a}^{\dagger})\right)\\ &+dN_2\left(-l\hat{a}\rho-l\rho\hat{a}^{\dagger}+l^2\hat{a}\rho\hat{a}^{\dagger}+l\langle\hat{a}+\hat{a}^{\dagger}\rangle{\rho}+{l^2}\langle\hat{a}+\hat{a}^{\dagger}\rangle^2{\rho}-l^2\langle\hat{a}^{\dagger}\hat{a}\rangle\rho-l^2\langle\hat{a}+\hat{a}^{\dagger}\rangle(\hat{a}\rho+\rho\hat{a}^{\dagger})\right)\\ =&-dt\frac{i}{\hbar}[H,\rho]-dt{\frac{\Gamma l^2}{2} }\left\{\hat{a}^\dagger\hat{a},\rho\right\}+\Gamma l^2\langle\hat{a}^{\dagger}\hat{a}\rangle dt\,\rho+(dN_1-dN_2)\left(l\hat{a}\rho+l\rho\hat{a}^{\dagger}-l\langle\hat{a}+\hat{a}^{\dagger}\rangle{\rho}\right)\\ &+(dN_1+dN_2)(l^2\hat{a}\rho\hat{a}^{\dagger}-l^2\langle\hat{a}^{\dagger}\hat{a}\rangle\rho-l^2\langle\hat{a}+\hat{a}^{\dagger}\rangle(\hat{a}\rho+\rho\hat{a}^{\dagger})+{l^2}\langle\hat{a}+\hat{a}^{\dagger}\rangle^2{\rho}), \end{split} \end{equation} where we have expanded the denominators up to $O(l^2)$. To accumulate the increments over the $\dfrac{\delta t}{dt}\to\infty$ steps, we substitute Eq. (\ref{Wiener process}) into the above. It can easily be checked that most of the terms cancel, and to the leading order in $l$ the result becomes\\ \begin{equation}\label{delta rho} \begin{split} \delta\rho=&-\delta t\frac{i}{\hbar}[H,\rho]-\delta t\frac{\Gamma }{2}\sum^{m}_{i=1}\left[-2l^2\hat{a}\rho\hat{a}^{\dagger}+l^2\{\hat{a}^\dagger\hat{a},\rho\}\right]+\sqrt{\Gamma}l\sum_{i=1}^{m}\left[(\hat{a}-\langle\hat{a}\rangle)\rho+\rho(\hat{a}^{\dagger}-\langle\hat{a}^{\dagger}\rangle)\right]\delta W\\ =&\left[-\frac{i}{\hbar}[H,\rho]-\frac{\gamma }{4}\sum^{m}_{i=1}\left(\{\hat{a}^\dagger\hat{a},\rho\}-2\hat{a}\rho\hat{a}^{\dagger}\right)\right]\delta t+\sqrt{\frac{\gamma }{2}}\sum_{i=1}^{m}\left[(\hat{a}-\langle\hat{a}\rangle)\rho+\rho(\hat{a}^{\dagger}-\langle\hat{a}^{\dagger}\rangle)\right]\delta W, \end{split} \end{equation} where the calculation follows the rules of It\^o calculus, and we have used a single Wiener increment $\delta W=(\delta W_1-\delta W_2)/\sqrt{2}$ to represent the fluctuating part of $(\delta N_1 - \delta N_2)$.
The above result confirms our initial assumption that $\rho$ does not change much during a sufficiently small time interval $\delta t$: the diverging quantities cancel, and only the non-diverging $\gamma$ terms multiplying $\delta t$ and $\delta W$ remain, which scale with the length of the chosen time interval $\delta t$.\footnote{As can be seen from our derivation, the function multiplying $\delta W$ should be evaluated at time $t$, and not at time $t+\dfrac{\delta t}{2}$. This point is crucial in stochastic calculus, and calculus with this convention is called It\^o calculus. Another caveat is that stochastic differential equations converge in $\sqrt{dt}$ rather than in $dt$, which differs from ordinary differential equations and becomes important when one tries to prove the above result from a rigorous mathematical point of view.} Rewriting $\delta t$ as $dt$, we obtain the final result\\ \begin{equation}\label{stochastic total equation} d\rho=\left[-\frac{i}{\hbar}[H,\rho]-\frac{\gamma }{4}\sum^{m}_{i=1}\left(\{\hat{a}^\dagger\hat{a},\rho\}-2\hat{a}\rho\hat{a}^{\dagger}\right)\right]dt+\sqrt{\frac{\gamma }{2}}\sum_{i=1}^{m}\left[(\hat{a}-\langle\hat{a}\rangle)\rho+\rho(\hat{a}^{\dagger}-\langle\hat{a}^{\dagger}\rangle)\right]dW, \end{equation}\\ where $dW\sim\mathcal{N}\left(0,dt\right)$. For a pure state it is\\ \begin{equation}\label{stochastic total equation pure} \begin{split} d|\psi\rangle=&\left[-\frac{i}{\hbar}H-\frac{\gamma}{4}\sum_{i=1}^{m}\left(\hat{a}^{\dagger}\hat{a}-\hat{a}\langle\hat{a}+\hat{a}^{\dagger}\rangle+\frac{1}{4}\langle\hat{a}+\hat{a}^{\dagger}\rangle^2\right)\right]dt|\psi\rangle\\ &+\sqrt{\frac{\gamma}{2}}\sum_{i=1}^{m}\left(\hat{a}-\frac{1}{2}\langle\hat{a}+\hat{a}^{\dagger}\rangle\right)dW|\psi\rangle. \end{split} \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{chapter2/homodyne} \caption{Setup of homodyne detection. An intense laser field (called a local oscillator) is superposed with a measured signal through a beam splitter. The two output fields are detected by two detectors which convert incident photons into electric currents, and the currents are fed into a balanced detector. Reproduced from Ref. \cite{ehek2015}.} \label{fig:homodyne} \end{figure}\\ In real experiments, the above model describes homodyne detection with a strong local oscillator, illustrated in Fig. \ref{fig:homodyne}. The observed signal is the differential photocurrent $dN_1-dN_2$, which has a mean value of $\Gamma l \langle\hat{a}+\hat{a}^{\dagger}\rangle\,dt$ and a standard deviation of $\sqrt{\Gamma dt}$. The random variable $dW$ represents the deviation of the observed signal from its mean value. This experimental setting also explains why we need the two measurement outcomes $dN_1$ and $dN_2$: if we took only one measurement outcome and blocked the other branch of the light going out of the beam splitter, the blocked branch would come back to the source signal and disturb the Hamiltonian of the measured system drastically. \\ In cases where the measurement outcomes are not discrete but real-valued, as in a direct position measurement, the measured system can also follow a diffusive quantum trajectory, to which the above analysis does not directly apply.
In such cases, we may assume that the measurements are weak and performed repeatedly, so that they average to a total measurement; according to the central limit theorem, we can then approximately describe the measurement effects and results as a Gaussian random process:\\ \begin{equation}\label{Gaussian measurement} M_q=\left(\frac{\gamma dt}{\pi}\right)^\frac{1}{4}e^{-\frac{\gamma dt}{2}(\hat{x}-q)^2}, \end{equation}\\ where we have assumed that the measured physical quantity is the position of a particle. Note that the completeness condition is automatically satisfied by the property of the Gaussian integral: \begin{equation}\label{Gaussian measurement completeness} \int_{-\infty}^{\infty}M^{\dagger}_qM_q\, dq=I\,. \end{equation}\\ Therefore it is a valid measurement. In this case, the measurement outcomes are themselves Gaussian and can be modelled as $\left(\langle\hat{x}\rangle+\dfrac{dW}{\sqrt{2\gamma}dt}\right)$, as given in Ref. \cite{DIOSI1988419}. The derivation proceeds by explicitly calculating $M_q\rho M^{\dagger}_q$ in a straightforward expansion, and the final result is exactly the same as before, provided that $\hat{a}$ and $\hat{a}^{\dagger}$ are replaced by $\hat{x}$. The result for a pure state is\\ \begin{equation}\label{position measurement evolution} d|\psi\rangle=\left[\left(-\frac{i}{\hbar}H-\frac{\gamma}{4}(\hat{x}-\langle\hat{x}\rangle)^2\right)dt+\sqrt{\dfrac{\gamma}{2}}(\hat{x}-\langle\hat{x}\rangle)dW\right]|\psi\rangle, \end{equation}\\ where $dW$ is a Wiener increment as before. This is the equation that we use in the simulation of quantum systems in our research. Expressed in terms of the density matrix, which admits mixed states, the equation becomes\\ \begin{equation}\label{position measurement mixed} d\rho=-\frac{i}{\hbar}[H,\rho]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]dt+\sqrt{\frac{\gamma}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}dW. \end{equation}\\ To model incomplete information about the measurement outcomes, we define a measurement efficiency parameter $\eta$, satisfying $0\le\eta\le1$, which represents the fraction of measurement outcomes that are actually obtained. In the above equations, the measurement outcomes are represented by a Wiener increment $dW$, which can be considered as a value accumulated over a small time interval. We therefore rewrite the Wiener increment as a sum of smaller Wiener increments representing repeated weak measurement results, and zero out a portion of those results to obtain the total incomplete measurement outcome, i.e.,\\ \begin{equation} dW=\sum_{i=1}^N dW_i,\qquad dW_i\sim\mathcal{N}\left (0,\frac{dt}{N}\right ), \end{equation}\\ where we have discretized the time in units of $dt$ into $N$ steps to obtain $N$ Wiener increments. The original condition $dW\sim\mathcal{N}(0,{dt})$ is clearly satisfied. After removing a portion $1-\eta$ of the measurement results, the total $dW$ becomes\\ \begin{equation} dW=\sum_{i=1}^{\eta N} dW_i,\qquad dW_i\sim\mathcal{N}\left (0,\frac{dt}{N}\right ), \end{equation}\\ so that $dW\sim\mathcal{N}(0,{\eta\, dt})$, and the time-evolution equation is\\ \begin{equation}\label{position measurement incomplete information} d\rho=-\frac{i}{\hbar}[H,\rho]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]dt+\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}dW,\qquad dW\sim\mathcal{N}(0,{dt}), \end{equation}\\ where we have rescaled $dW$ so that it is again a standard Wiener increment.
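For concreteness, a bare-bones Euler--Maruyama sketch of Eq. (\ref{position measurement evolution}) on a position grid is shown below (harmonic potential, $\hbar=m=1$, periodic finite-difference Laplacian and illustrative parameters; our production runs use the order 1.5 scheme of Appendix B instead):
\begin{verbatim}
import numpy as np

n, dx, dt, gamma = 128, 0.2, 1e-4, 1.0
x = (np.arange(n) - n // 2) * dx
V = 0.5 * x ** 2
rng = np.random.default_rng(2)

def H_apply(psi):
    # kinetic term by central finite differences: -(1/2) psi'' + V psi
    lap = (np.roll(psi, 1) + np.roll(psi, -1) - 2 * psi) / dx ** 2
    return -0.5 * lap + V * psi

psi = np.exp(-x ** 2 / 2).astype(complex)
psi /= np.linalg.norm(psi)
for _ in range(1000):
    xm = np.sum(np.abs(psi) ** 2 * x)           # <x>
    dW = rng.standard_normal() * np.sqrt(dt)
    dpsi = (-1j * H_apply(psi)
            - 0.25 * gamma * (x - xm) ** 2 * psi) * dt \
           + np.sqrt(gamma / 2) * (x - xm) * psi * dW
    psi = psi + dpsi
    psi /= np.linalg.norm(psi)                  # renormalize each step
\end{verbatim}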
\section{Deep Learning}\label{deep learning} \subsection{Machine Learning} Generally speaking, learning refers to a process in which the unpredictable becomes predictable, by building a model that correctly relates different pieces of relevant information. Machine learning aims to automate this process. Although humans often learn via a sequence of logical reasoning and validation, machines, up to now, do not have a good enough common-knowledge base to learn by creative logical reasoning. To compensate for this deficiency, a machine learning system usually starts from a predefined set of possible models, which represents the conceivable relations among the different pieces of information that it is going to learn. This set of conceivable relations is formally called the \textit{hypothesis space}. The system then learns from provided example data by looking for the model in its hypothesis space that fits the observed data best, and finally uses the found model as the relation among the pieces of information in order to make predictions on new data. As a result, the learned model is almost always only an approximate solution to the underlying problem. Nevertheless, it still works well enough in cases where a given problem cannot be modelled precisely but can be approximated easily.\\ To give a formal definition of machine learning, according to Tom M.~Mitchell \cite{MachineLearningBook}, ``A computer program is said to learn from experience $E$ with respect to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$." \\ \subsection{Feedforward Neural Networks} As discussed, the choice of the hypothesis space of a machine learning system is crucial for its performance. Because different problems have different properties, before the emergence of deep learning researchers considered various approximate models to describe the corresponding real-life problems, including text-voice transformation, language translation, image recognition, etc., and researchers specializing in different machine learning tasks therefore usually worked separately. However, the deep neural network, as a general hypothesis space, outperformed all previous results in 2012 \cite{ImageNetClassification} and started the deep learning boom. Below we introduce the deep neural network model, or more precisely the deep feedforward neural network, following the line of thought of the previous paragraph. \\ When we attempt to model a relation between two quantities, the simplest guess is a linear relation. Although real-world problems are typically high-dimensional and more complex, we may hold on to this linearity even in a multidimensional setting, and assume\\ \begin{equation}\label{Linear} \boldsymbol{y}=\textbf{M}\boldsymbol{x}+\boldsymbol{b}\, , \end{equation}\\ where we model the relation between $\boldsymbol{y}$ and $\boldsymbol{x}\,$. Here $\boldsymbol{y}$ and $\boldsymbol{x}$ are vectors and $\textbf{M}$ is a matrix; $\boldsymbol{b}$~is an additional bias term, a small compromise away from the purely linear guess. $\textbf{M}$ and $\boldsymbol{b}$ are learned by fitting the model to existing training data pairs $\{(\boldsymbol{x},\boldsymbol{y})_i\}$. This process of learning is called \textit{linear regression} \cite{DeepLearningBook}.
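As a minimal illustration of Eq. (\ref{Linear}) on toy data (the bias is absorbed into the matrix by appending a constant feature), linear regression can be solved in closed form by least squares:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 3))                  # 100 samples of x
M_true, b_true = np.array([[1.0, -2.0, 0.5]]), np.array([0.3])
Y = X @ M_true.T + b_true + 0.01 * rng.standard_normal((100, 1))

Xa = np.hstack([X, np.ones((100, 1))])             # absorb b into M
sol, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
M_fit, b_fit = sol[:3].T, sol[3]
print(M_fit.round(2), b_fit.round(2))              # recovers M and b
\end{verbatim}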
Obviously, the simple linear (or affine) model above does not work for realistic complex problems, as it cannot model nonlinear relations. Therefore, we apply a simple nonlinear function $\sigma$ after the linear map, called an \textit{activation function}. The name is an analogy to the activation function that controls the firing of neurons in neuroscience. For simplicity, this function is a scalar function applied to a vector element-wise, acting on every component of the vector separately. The model may then be constructed as\\ \begin{equation}\label{multilayer perceptron} \boldsymbol{y}=f(\boldsymbol{x})=\textbf{M}_2\cdot\sigma(\textbf{M}_1\boldsymbol{x}+\boldsymbol{b}_1)+\boldsymbol{b}_2\, . \end{equation}\\ In practice, there is almost no constraint on the activation function, as long as it is nonlinear. The two most commonly used functions are the ReLU (Rectified Linear Unit) and the sigmoid, which are shown in Fig. \ref{activation functions}.\\ \begin{figure}[htb] \centering \subfloat[ReLU\quad$\sigma(x)=\max(0,x)$]{\label{ReLU} \begin{minipage}[c]{0.35\linewidth} \centering $\sigma(x)$\\ \includegraphics[width=\linewidth]{chapter3/ReLU} \end{minipage}}$x$\qquad\qquad \subfloat[Sigmoid\quad$\sigma(x)=\frac{1}{1+e^{-x}}$]{\label{sigmoid} \begin{minipage}[c]{0.35\linewidth} \centering $\sigma(x)$\\ \includegraphics[width=\linewidth]{chapter3/logistic} \end{minipage}}$x$ \caption{Two frequently used activation functions in deep learning.} \label{activation functions} \end{figure} The ReLU function simply zeros all negative values and keeps all positive values, and the sigmoid is a smooth transition between zero and one, also known as the standard logistic function. Because we use the ReLU as the activation function $\sigma$ most of the time, we assume the ReLU in what follows unless otherwise mentioned. \\ An immediate result is that the function $f$ in Eq. (\ref{multilayer perceptron}) is universal, in the sense that it can approximate an arbitrary continuous mapping from $\boldsymbol{x}$ to $\boldsymbol{y}\,$, provided that the parameters $\textbf{M}_1,\boldsymbol{b}_1,\textbf{M}_2,\boldsymbol{b}_2$ have sufficiently many dimensions and are complicated enough.\\ This claim can be shown easily for the ReLU as $\sigma\,$, and then similarly for other activation functions. For simplicity we first consider one-dimensional $x$ and $y\,$. First we note that the ReLU effectively just bends a line, and that $\textbf{M}$ and $\boldsymbol{b}$ can be used to replicate, rotate and shift the straight line $y={x}\,$; the function in Eq.~(\ref{multilayer perceptron}) is then constituted of three successive processes: (1) copying the line $y={x}\,$, with customizable rotations and shifts by $\textbf{M}_1,\boldsymbol{b}_1\,$, (2) bending each resultant line by $\sigma $, and (3) rotating, shifting and summing all the lines by $\textbf{M}_2,\boldsymbol{b}_2\,$. Therefore, with correctly chosen $\textbf{M}_1,\boldsymbol{b}_1,\textbf{M}_2,\boldsymbol{b}_2\,$, this function can construct arbitrary piecewise-linear curves with finitely many bend points. Thus, it can approximate arbitrary continuous functions from $x$ to $y$ with arbitrary precision, provided that the parameters $\textbf{M}_1,\boldsymbol{b}_1,\textbf{M}_2,\boldsymbol{b}_2$ are appropriately chosen. For higher-dimensional $\boldsymbol{x}$ and $\boldsymbol{y}$, instead of bent lines the function $f$ constructs polygonal pieces in the $\boldsymbol{x}$ space, and the universality follows similarly.
This universality argument also holds for other types of nonlinear activation functions besides the ReLU, and can be shown by similar constructive arguments.\\ Now, consider using the function in Eq. (\ref{multilayer perceptron}) as our hypothesis space for machine learning. Given data points $\{(\boldsymbol{x},\boldsymbol{y})_i\}$, if we follow the universality argument and fit the parameters in Eq. (\ref{multilayer perceptron}) so that $f$ reproduces the relation from $\boldsymbol{x}$ to $\boldsymbol{y}$ over the whole dataset $\{(\boldsymbol{x},\boldsymbol{y})_i\}$, then, as can also be seen from the universality argument, $f$ becomes an interpolation between the data points $\{\boldsymbol{x}_i\}$: for any unseen data point $\boldsymbol{x}$ absent from the training set $\{(\boldsymbol{x},\boldsymbol{y})_i\}$, to predict its corresponding $\boldsymbol{y}\,$, $f$ finds its nearest neighbours in the data set $\{\boldsymbol{x}_i\}$ and predicts its $\boldsymbol{y}$ as a linear interpolation of those neighbours' $\boldsymbol{y}$ values. This is a direct consequence of the polygon argument in the above paragraph, and it implies that using this parametric function as the hypothesis space is still simple and naive: it cannot easily learn complex relations between $\boldsymbol{x}$ and $\boldsymbol{y}$ unless we have numerous data points in the training set to represent all possibilities of $\boldsymbol{x}$. Thus, we need further improvement so that the function can learn complex relations more easily.\\ As mentioned earlier, the nonlinear function $\sigma$ between linear mappings has a biological analogue in the activation of neurons. In neuroscience, the activation of a single neuron is influenced linearly by its input if the input is above an activation threshold, and if the input is below the threshold, there is no activation. This behaviour is exactly modelled by the ReLU function following a linear mapping, where the linear mapping is connected to the input neurons. Although each individual neuron functions simply, when many are connected they may collectively show extremely complex behaviour. Motivated by this observation, we apply the functions $f_i(\boldsymbol{x}):=\sigma(\textbf{M}_i\boldsymbol{x}+\boldsymbol{b}_i)$ sequentially, in the form $f_i\circ f_j \circ f_k\cdots\circ f_l$, to build a deep neural network. In this case, the output vector of every $f_i$ represents a layer of neurons, with each scalar in it representing a single neuron, and every layer is connected to the previous layer through the weight matrix $\textbf{M}\,$. Note that the dimensions of the $f_i$ need not be equal. This artificial network of neurons is called a feedforward neural network, since its information flows in one direction only and does not loop back to previous neurons. It can be written as follows:\\ \begin{equation}\label{feedforward neural network} \boldsymbol{y}=f(\boldsymbol{x})=\textbf{M}_n\left(f_{n-1}\circ f_{n-2} \circ f_{n-3}\cdots\circ f_1\left(\boldsymbol{x}\right)\right)+\boldsymbol{b}_n\,, \end{equation}\\ where\\ \begin{equation}\label{feedforward neural network2} f_i(\boldsymbol{x})=\sigma\left(\textbf{M}_i\boldsymbol{x}+\boldsymbol{b}_i\right),\qquad\sigma(x)=\max(0,x)\,. \end{equation}\\ Intuitively speaking, although one layer $f_i$ only bends and folds the line a few times, successive $f_i\,$s can fold on top of existing foldings and finally make the represented function complex, yet with a certain regular shape.
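A direct transcription of Eqs. (\ref{feedforward neural network}) and (\ref{feedforward neural network2}) into Python (illustrative layer widths; the scaled random initialization is a common convention, not required by the definition) reads:
\begin{verbatim}
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

def init_layer(n_in, n_out, rng):
    M = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
    return M, np.zeros(n_out)

rng = np.random.default_rng(4)
dims = [8, 32, 32, 2]   # input, two hidden layers, output
layers = [init_layer(dims[i], dims[i + 1], rng) for i in range(3)]

def forward(xvec):
    h = xvec
    for M, b in layers[:-1]:
        h = relu(M @ h + b)      # hidden layers f_i
    M, b = layers[-1]
    return M @ h + b             # top layer is linear

y = forward(rng.standard_normal(8))
\end{verbatim}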
This equation, Eq. (\ref{feedforward neural network}), is the deep feedforward neural network that we use in deep learning as our hypothesis space. For completeness, we recapitulate and define some relevant terms below.\\ The neural network is a function $f$ that gives an output $\boldsymbol{y}$ when provided with an input $\boldsymbol{x}$, as in Eq. (\ref{feedforward neural network}). The \textit{depth} of a deep neural network refers to the number $n$ in Eq. (\ref{feedforward neural network}), i.e. the number of linear mappings involved. $f_i$ stands for the $i$-th layer; the activation values of the $i$-th layer are the output of $f_i\,$, and the \textit{width} of layer $f_i$ refers to the dimension of its output vector, i.e. the number of scalars (or neurons) involved. These scalars are called \textit{units} or \textit{hidden units} of the layer. When there is no constraint on the matrix $\textbf{M}\,$, all units of adjacent layers are connected by generally non-zero entries of $\textbf{M}$, and this case is called \textit{fully connected}. Usually, we call the $n$-th layer the \textit{top} layer and the first layer the \textit{bottom}. Concerning the output $\boldsymbol{y}\,$, in Eq. (\ref{feedforward neural network}) it is a general real-valued vector, but if we know that some preconditions on the properties of $\boldsymbol{y}$ exist, we may add an \textit{output function} on top of the network $f$ to constrain $\boldsymbol{y}$ so that the preconditions are satisfied. This is especially useful for classification tasks, where the neural network is supposed to output a probability distribution over discrete choices; in that case the softmax function, i.e. normalization of the exponentiated outputs by the partition function, is used as the output function to change the unnormalized output into a distribution. \\ \subsection{Training a Neural Network} In the last section we defined our feedforward neural network $f$ with parameters $\{\textbf{M}_i,\boldsymbol{b}_i\}_{i=1}^{n}\,$. Differently from the cases of Eqs. (\ref{Linear}) and (\ref{multilayer perceptron}), it is not directly clear how to find appropriate parameters $\{\textbf{M}_i,\boldsymbol{b}_i\}_{i=1}^{n}$ that fit $f$ to a given dataset $\{(\boldsymbol{x},\boldsymbol{y})_i\}$. As a first step, we need a measure to evaluate how well a given $f$ fits the dataset. For this purpose, a \textit{loss function} is used, which measures the difference between $f(\boldsymbol{x}_i)$ and $\boldsymbol{y}_i$ on a dataset $S\equiv\{(\boldsymbol{x},\boldsymbol{y})_i\}$; a larger loss implies a lower performance. In the simplest case, the L2 loss is used:\\ \begin{equation}\label{L2 loss} L=\frac{1}{|S|}\sum_{(\boldsymbol{x}_i,\boldsymbol{y}_i)\in S}||{f}(\boldsymbol{x}_i)-\boldsymbol{y}_i||^2\,, \end{equation}\\ where $||\cdot||$ is the usual L2 norm on vectors. This loss is also termed the \textit{mean squared error} (MSE), i.e. the average of the squared error $||{f}(\boldsymbol{x}_i)-\boldsymbol{y}_i||^2$. It is widely used in machine learning problems as a fundamental measure of difference.\\ With a properly chosen loss function, the original problem of finding an $f$ that best fits the dataset $S$ reduces to finding an $f$ that minimizes the loss, which is an optimization problem in the parameter space $\{\textbf{M}_i,\boldsymbol{b}_i\}_{i=1}^{n}\,$. This optimization problem is in general non-convex and hard to solve.
Therefore, instead of looking for a global minimum in the parameter space, we only look for a local minimum which hopefully has a loss low enough to accomplish the learning task well. This is done via \textit{gradient descent}: we calculate the gradient of the loss with respect to all the parameters, modify all the parameters along the negative gradient with a small step size to decrease the loss, and repeat this process. Denoting the parameters by $\boldsymbol{\theta}\,$, the iteration process is given as below:\\ \begin{equation}\label{gradient descent} \boldsymbol{\theta}'=\boldsymbol{\theta} -\epsilon\,\nabla_{\boldsymbol{\theta}}L\left(\boldsymbol{\theta},\{(\boldsymbol{x},\boldsymbol{y})_i\}\right)\,, \end{equation}\\ where $\epsilon$ is the iteration step size, called the \textit{learning rate}. With a small enough learning rate, the above iteration steadily decreases $L$ and converges to a local minimum. This process of finding a solution is called \textit{training} in machine learning, and before training we initialize the parameters randomly. In practice, although Eq. (\ref{L2 loss}) uses the whole training set $S$ to define $L\,$, during training we only sample a minibatch of data points from $S$ to evaluate the gradient; this is called \textit{stochastic gradient descent} (SGD), and the sampling significantly improves efficiency. \\ As can be seen, both training and evaluation of a neural network require a great amount of matrix computation. In a typical modern neural network, the number of parameters is on the order of tens of millions, and the required amount of computation is huge. Therefore, the potential of this neural network strategy did not attract much attention until the technology of GPU (Graphics Processing Unit)-based parallelized computation became available in recent years \cite{CUDA}, making it possible to train modern neural networks in hours or a few days, which would previously have taken many months on CPUs. This technological development makes large-scale deep learning possible and is one important reason for the deep learning boom in recent years.\\ In real cases, the training iteration usually does not follow Eq. (\ref{gradient descent}) exactly. This is because this iteration strategy can cause the iteration to go back and forth inside a valley-shaped region, or be disturbed by local noise on gradients, or be blocked by barriers in the searched parameter space, etc. To alleviate these problems, alternative algorithms have been developed as improved versions of the basic gradient descent, including Adam \cite{Adam}, RMSprop \cite{RMSprop}, and gradient descent with momentum \cite{DeepLearningBook}. Basically, these algorithms employ two strategies. The first is to give the iteration step an inertia, so that the step is influenced not only by the current gradient but also by all previous gradients, whose effect decays exponentially with every step. This is called the \textit{momentum} method, and the so-called momentum coefficient, usually set to 0.9, represents how much of the inertia is preserved per step. This training method is ubiquitous in deep learning. The second strategy is normalization of parameter gradients, such that the average iteration step for each parameter becomes roughly constant rather than proportional to the magnitude of the gradient. In sparse gradient cases, this normalization strategy dramatically speeds up training; RMSprop and Adam adopt it.
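As a concrete illustration of the two strategies, the following is a minimal sketch in Python/NumPy; it is not taken from any library, and the hyperparameter values are typical illustrative defaults rather than tuned ones.\\
\begin{verbatim}
import numpy as np

def momentum_step(theta, grad, v, lr=1e-3, beta=0.9):
    # inertia: v <- beta*v - lr*grad; old gradients decay by beta per step
    v = beta * v - lr * grad
    return theta + v, v

def rmsprop_step(theta, grad, sq_avg, lr=1e-3, rho=0.99, eps=1e-8):
    # a running average of squared gradients normalizes the per-parameter
    # step, so its magnitude is roughly constant, not proportional to |grad|
    sq_avg = rho * sq_avg + (1.0 - rho) * grad ** 2
    return theta - lr * grad / (np.sqrt(sq_avg) + eps), sq_avg
\end{verbatim}
Adam essentially combines both strategies, keeping running averages of both the gradient and its square.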
To summarize, we first train a neural network to fit a given dataset $S=\{(\boldsymbol{x},\boldsymbol{y})_i\}$ by minimizing a predefined loss, and we then use the trained network to predict $\boldsymbol{y}$ for new unseen data points $\boldsymbol{x}\,$. Concerning practical applications, fully connected neural networks as described above are commonly used for regression tasks, whose target is to fit a real-valued, often multivariable, function. For image classification, motivated by the fact that a pixel in an image is most related to nearby pixels, we put the units of a neural network layer into a pixelized space as well, and connect a unit only to adjacent units of neighbouring layers. To extract nonlocal information as an output, we downsample the pixelized units. This structure is called the \textit{convolutional neural network}, and it is the current state-of-the-art method for image classification tasks \cite{ImagenetSota}. Many other neural network structures exist, but we do not discuss them here since they are not directly relevant to our study. Although deep neural networks work extremely well for various tasks, the precise reason for their success is not yet known and remains an important open question. \\ \section{Reinforcement Learning}\label{reinforcement learning} \subsection{Problem Setting} In a reinforcement learning (RL) task, we do not have a training dataset beforehand; instead, we let the AI interact with an environment to accumulate experience and learn from the accumulated experience, with the target set to maximize a predefined reward. Because the AI learns by exploring the environment to perform better and better, this learning process is called reinforcement learning, in contrast to supervised and unsupervised learning, in which the AI learns from a pre-existing dataset. \\ Reinforcement learning is important when we do not understand an environment well but can simulate it for an AI to interact with and gain experience from. Examples of reinforcement learning tasks include video games \cite{AtariEnvironment}, modern e-sports games \cite{Starcraft2}, chess-like games \cite{chess-like}, design problems \cite{MaterialDesign} and physical optimization problems in quantum control \cite{ErrorCorrectionReinforcement, GateOptimization}. In all these situations, the environment that the AI interacts with is modelled by a Markov decision process (MDP) or a game in game theory, where a state represents the current situation of the environment, and the AI takes the state as input and outputs an action which influences the future evolution of the environment state. The goal of the AI is to maximize the expected total reward in the environment, such as the score in games, the winning probability in chess, and the fidelity of quantum gates. This setting is illustrated in figure \ref{fig:agent-environment-interaction-in-reinforcement-learning}. Note that the environment state evolves and the AI acts at discrete time steps. \\ \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{chapter3/Agent-Environment-interaction-in-Reinforcement-Learning} \caption{Setting of reinforcement learning, where an AI agent interacts with an environment. Subscripts $t$ and $t+1$ denote successive time steps. Reproduced from Ref.
\cite{reinforcementLearningFigure}.} \label{fig:agent-environment-interaction-in-reinforcement-learning} \end{figure} \subsection{Q-learning}\label{Q learning} There are many learning strategies for a reinforcement learning task. The most basic one is brute-force search, which tests all possible action choices under all circumstances to find the most beneficial strategy in the sense of the total expected reward. This strategy can be used to solve small-scale problems such as tic-tac-toe (figure \ref{fig:tic-tac-toe-game}). \begin{figure}[hbt] \centering \includegraphics[width=0.09\linewidth]{chapter3/Tic-tac-toe-game} \caption{A tic-tac-toe game example, reproduced from Ref. \cite{tic-tac-toe}.} \label{fig:tic-tac-toe-game} \end{figure}\\ However, this strategy is applicable to tic-tac-toe only because the space of game states is so small that it can be easily enumerated. In most cases, brute-force search is not possible, and we need better methods, often heuristic ones, to achieve reinforcement learning. In this section we discuss the most frequently used method, Q-learning \cite{Q-learning}, which is used in our research in the next few chapters.\\ Q-learning is based on a function $Q^\pi(s,a)$, defined as the expected future reward at a state $s$ when taking an action $a$, provided with a certain policy $\pi$ that decides future actions. Expected rewards in future steps are discounted exponentially by $\gamma$ according to how far they are away from the present:\\ \begin{equation}\label{Q function} Q^\pi(s_t,a_t)=r(s_t,a_t)+\mathbb{E}_{{(a_{t+i}\sim\pi(s_{t+i}),\, s_{t+i})}_{i=1}^{\infty}}\left[\sum_{i=1}^{\infty}\gamma^i\, r(s_{t+i},a_{t+i})\right],\quad 0<\gamma<1\,, \end{equation}\\ where $r(s,a)$ is the reward, and the expectation is taken over future trajectories $(s_{t+i},a_{t+i})_{i=1}^{\infty}\,$. Note that the environment evolution $(s_{t},a_{t})\mapsto s_{t+1}$ is solely determined by the environment and cannot be controlled.\\ This $Q$ function has a very important recursive property, namely\\ \begin{equation}\label{Q recursive} Q^{\pi}(s_t,a_t)=r(s_t,a_t)+\gamma\,\mathbb{E}_{a_{t+1}\sim\pi{(s_{t+1})},\,s_{t+1}}\left[Q^{\pi}(s_{t+1},a_{t+1})\right] \end{equation}\\ which follows directly from Eq. (\ref{Q function}) for non-divergent $Q\,$s. If the action policy $\pi^*$ is optimal, $Q^{\pi^*}$ satisfies the following Bellman equation (\ref{Bellman equation}) \cite{OptimalControlTheoryIntroduction}:\\ \begin{equation}\label{Bellman equation} Q^{\pi^*}(s_t,a_t)=r(s_t,a_t)+\gamma\,\mathbb{E}_{s_{t+1}}\left[\max_{a_{t+1}}Q^{\pi^*}(s_{t+1},a_{t+1})\right]\,, \end{equation}\\ which is straightforward to show from Eq. (\ref{Q recursive}), using the fact that the policy $\pi^*$ takes every action to maximize $Q\,$; otherwise it could not be optimal, as $Q$ would be increased by taking a maximum. \\ We then look for such a $Q$ function. It can be obtained by the following iteration:\\ \begin{equation}\label{Q iteration} Q'(s_t,a_t)=r(s_t,a_t)+\gamma\,\mathbb{E}_{s_{t+1}}\left[\max_{a_{t+1}}Q(s_{t+1},a_{t+1})\right]\,. \end{equation}\\ After sufficiently many iterations, $Q'$ converges to $Q^{\pi^*}\,$ \cite{Q-learning}.
This is due to the discount factor $0<\gamma<1$ and a non-diverging reward $r(s,a)\,$: after $n$ iterations of the above equation, the initial choice of $Q$ enters only through the final term $\mathbb{E}_{s_{t+1}}\left[\max_{a_{t+1}}Q(s_{t+1},a_{t+1})\right]$ multiplied by $\gamma^n$, which diminishes exponentially, and the remaining value of $Q'$ is purely determined by the rewards $r(s,a)\,$. Since the converged $Q'$ is not influenced by the choice of $Q$ at the start of the iterations, it must be unique, and therefore it is also the $Q^{\pi^*}$ in Eq. (\ref{Bellman equation}); the converged $Q'$ is thus both unique and optimal. This is rigorously proved in Ref. \cite{Q-learning}. After obtaining $Q^{\pi^*}\,$, we simply take the action $a_{t}$ that maximizes $Q^{\pi^*}(s_{t},a_{t})$ for a state $s_{t}\,$, and this results in the optimal policy $\pi^*\,$.\\ An important caveat here is that the optimality above cannot be separated from the discount factor $\gamma\,$, which applies an exponential discount to future rewards. This $\gamma$ is necessary for the convergence of $Q\,$, but it also results in a ``time horizon" beyond which the optimal policy does not consider future rewards. However, the goal of solving the reinforcement learning problem is to maximize the total accumulated reward, which corresponds to $\gamma=1\,$. Ideally, $\pi^*$ converges for $\gamma\to 1\,$; however, this is not always true, and different $\gamma\,$s represent strategies that have different amounts of foresight. In addition, a large $\gamma$ such as 0.9999 often makes the learning difficult and hard to proceed. Therefore, in practice, Q-learning usually does not achieve the absolute optimality that corresponds to $\gamma=1$, except when the reinforcement learning task is well-bounded so that $\gamma\to1$ does not cause any problem.\\ \subsection{Implementation of Deep Q-Network Learning}\label{deep reinforcement learning implementation} The Q-learning strategy that uses a deep neural network to approximate the $Q$ function is called \textit{deep Q-network} (DQN) learning. It simply constructs a deep feedforward neural network to represent the $Q$ function, with the state $s$ as input and one output per action choice $a\,$, giving the evaluated $Q$ value of that action. Note that the action $a$ is now a choice from a finite set, not a continuous variable. The training loss is defined so as to minimize the absolute value of the \textit{temporal difference} error (TD error), which comes from Eq. (\ref{Q iteration}):\\ \begin{equation}\label{TD loss} L=Q(s_t,a_t)-\left(r(s_t,a_t)+\gamma\,\max_{a_{t+1}}Q(s_{t+1},a_{t+1})\right)\,, \end{equation}\\ where $(s_t,a_t,r,s_{t+1})$ is a sampled piece of experience of the AI interacting with the environment. Because such transitions are sampled during training, the expectation over $s_{t+1}$ in Eq. (\ref{Q iteration}) is approximated by the samples and is not evaluated explicitly. \\ Up to now we have obtained all the necessary pieces to implement deep reinforcement learning. First, we initialize a feedforward neural network with random parameters. Second, we use this neural network as our $Q$ function and take the action that maximizes $Q(s_{t},a_{t})$ (possibly with modifications). Third, we sample from what the AI has experienced, i.e. $(s_t,a_t,r,s_{t+1})$ data tuples, calculate the error (Eq. \ref{TD loss}), and use gradient descent to minimize its absolute value. Fourth, we repeat the second and third steps until the AI performs well enough.
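For concreteness, a minimal sketch of the third step in Python/PyTorch is shown below. It is an illustration, not our actual implementation: it minimizes the squared TD error (which has the same minimizer as the absolute error), uses a single network for both terms of Eq. (\ref{TD loss}), and the network sizes are arbitrary; $s$, $a$, $r$, $s'$ are assumed to be batched tensors.\\
\begin{verbatim}
import torch
import torch.nn as nn

n_states, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_training_step(s, a, r, s_next):
    # one gradient step on the TD error for a sampled batch (s, a, r, s')
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s_t, a_t)
    with torch.no_grad():                 # the target term is held fixed
        target = r + gamma * q_net(s_next).max(dim=1).values
    loss = ((q_sa - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}
In practical algorithms, the target term is computed by a separate, periodically updated copy of the network, one of the modifications discussed below.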
This system can be divided into three parts, shown diagrammatically in figure \ref{fig:reinforcementlearningsystem}. Note that the second and the third steps above can be parallelized and executed simultaneously. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{chapter3/reinforcementLearningSystem} \caption{Reinforcement learning system, reproduced from Ref. \cite{TCLoss}.} \label{fig:reinforcementlearningsystem} \end{figure}\\ In real scenarios, many technical modifications are applied to the above learning procedure in order to make the learning more efficient and to improve the performance. These advanced learning strategies include separating the trained $Q$ function from the $Q$ function used to calculate the target term of the loss \cite{DQN}, taking random actions on occasion \cite{DQN}, the prioritized experience sampling strategy \cite{prioritizedSampling}, double Q-networks that separately decide actions and $Q$ values \cite{DoubleDQN}, the duel network structure that separately learns the average $Q$ value and the $Q$-value change due to each action \cite{DuelDQN}, and random variables inside network layers that induce organized random exploration of the environment \cite{NoisyDQN}. These are the most important recent developments, and they improve the performance and stability of deep Q-learning, as discussed in Appendix \ref{experiment details appendix}. These techniques are incorporated together in Ref. \cite{RainbowDQN}, and the resulting algorithm is called \textit{Rainbow DQN}. We follow this algorithm in our reinforcement learning research in the next chapters. The hyperparameter settings and details of our numerical experiments are provided in the corresponding chapters and Appendix \ref{experiment details appendix}. \section{Analysis of Control of a Quadratic Potential}\label{analysis of quadratic} \subsection{Gaussian Approximation}\label{quadra Gaussian approx} The state of a particle in a quadratic potential under position measurement is known to be well approximated by a Gaussian state \cite{LinearStochasticSchrodingerEquation, CoherentStatesByDecoherence}. We discuss this result in detail in this section, and derive a sufficient representation of the evolution equation of the particle only in terms of the first and second moments of its Wigner distribution in the $(x,p)$ phase space. This significantly simplifies the problem and reduces it to an almost classical situation.\\ By the term \textit{Gaussian state}, we mean that the one-particle state has a Gaussian-shaped Wigner distribution in phase space. We show that a Gaussian-shaped Wigner distribution always keeps its Gaussian property when evolving in a quadratic potential. First, we consider the problem in the Heisenberg picture and evaluate the time evolution of the operators $\hat{x}$ and $\hat{p}$:\\ \begin{equation}\label{differential equation for x p} \frac{d}{dt}\hat{x}=\frac{i}{\hbar}[H,\hat{x}],\qquad \frac{d}{dt}\hat{p}=\frac{i}{\hbar}[H,\hat{p}], \end{equation} \begin{equation} H=\frac{k}{2}\hat{x}^2+\frac{\hat{p}^2}{2m},\qquad k\in\mathbb{R},\quad m \in\mathbb{R}^+, \end{equation}\\ where $k$ may be either positive or negative; we do not assume a particular sign of $k$ in the following calculations. We substitute $H$ into Eq.
(\ref{differential equation for x p}) and obtain\\ \begin{equation} \frac{d}{dt}\hat{x}=\frac{\hat{p}}{m},\qquad \frac{d}{dt}\hat{p}=-k\hat{x}, \end{equation}\\ which constitute a system of differential equations:\\ \begin{equation} \frac{d}{dt} \left(\begin{array}{c} \hat{x}\\ \hat{p} \end{array}\right) = \left(\begin{array}{cc} 0 & \frac{1}{m}\\ -k & 0 \end{array}\right) \left(\begin{array}{c} \hat{x}\\ \hat{p} \end{array}\right). \end{equation}\\ This is solved by eigendecomposition:\\ \begin{equation} \left(\begin{array}{c} \hat{x}(t)\\ \hat{p}(t) \end{array}\right) = e^{tA}\left(\begin{array}{c} \hat{x}(0)\\ \hat{p}(0) \end{array}\right),\qquad A:= \left(\begin{array}{cc} 0 & \frac{1}{m}\\ -k & 0 \end{array}\right) \end{equation}\\ \begin{equation} A=Q\Lambda Q^{-1}=\left(\begin{array}{cc} -\frac{1}{\sqrt{-mk}} & \frac{1}{\sqrt{-mk}}\\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} -\sqrt{\frac{-k}{m}} & 0\\ 0 & \sqrt{\frac{-k}{m}} \end{array}\right)\left(\begin{array}{cc} -\frac{\sqrt{-mk}}{2} & \frac{1}{2}\\ \frac{\sqrt{-mk}}{2} & \frac{1}{2} \end{array}\right) \end{equation}\\ \begin{equation} M:=e^{tA}=Qe^{t\Lambda}Q^{-1}=Q\left(\begin{array}{cc} e^{-t\sqrt{\frac{-k}{m}}} & 0\\ 0 & e^{t\sqrt{\frac{-k}{m}}} \end{array}\right)Q^{-1}, \end{equation}\\ \begin{equation}\label{operator evolution} \left(\begin{array}{c} \hat{x}(t)\\ \hat{p}(t) \end{array}\right) =M\left(\begin{array}{c} \hat{x}(0)\\ \hat{p}(0) \end{array}\right). \end{equation}\\ For simplicity, we define\\ \begin{equation} \lambda:=\sqrt{\frac{-k}{m}}\equiv i\sqrt{\frac{k}{m}}, \end{equation}\\ and then the matrix $M$ can be written as\\ \begin{equation}\label{operator evolution matrix} M=\frac{1}{2}\left(\begin{array}{cc} e^{t\lambda}+e^{-t\lambda} & \frac{1}{\sqrt{-mk}}(e^{t\lambda}-e^{-t\lambda})\\ \sqrt{-mk}(e^{t\lambda}-e^{-t\lambda}) & e^{t\lambda}+e^{-t\lambda} \end{array}\right), \end{equation}\\ which is a symplectic matrix.\\ Therefore, free evolution in a quadratic potential transforms the $\hat{x}$ and $\hat{p}$ operators through a symplectic transformation. Next we show that this results in the same symplectic transformation of the Wigner distribution in terms of the phase-space coordinates $(x,p)$. To show this, we use the characteristic-function definition of the Wigner distribution \cite{GaussianQuantumInformation}, as follows:\\ \begin{equation}\label{Wigner definition1} W(\textbf{x})=\int_{\mathbb{R}^2}\frac{d^2\boldsymbol{\xi}}{(2\pi)^2}\exp(-i\textbf{x}^T \boldsymbol{\Omega}\boldsymbol{\xi})\chi(\boldsymbol{\xi}), \end{equation}\\ \begin{equation}\label{Wigner definition2} \textbf{x}=\left(\begin{array}{c} x\\ p \end{array}\right),\quad \boldsymbol{\Omega}=\left(\begin{array}{cc} 0 & 1\\ -1 & 0 \end{array}\right),\quad \textbf{x},\boldsymbol{\xi}\in\mathbb{R}^2, \end{equation}\\ \begin{equation}\label{Wigner definition3} \chi(\boldsymbol{\xi})= \text{tr}(\rho D(\boldsymbol{\xi})),\qquad D(\boldsymbol{\xi}):=\exp({i\hat{\textbf{x}}^T\boldsymbol{\Omega}\boldsymbol{\xi}}),\quad \hat{\textbf{x}}:= \left(\begin{array}{c} \hat{x}\\ \hat{p} \end{array}\right), \end{equation}\\ where $\rho$ is a quantum state, $\chi$ is called the Wigner characteristic function and $D$ is the Weyl operator. The Wigner distribution is essentially a Fourier transform of the Wigner characteristic function, and both the characteristic function $\chi$ and the Wigner distribution $W$ contain complete information about the state $\rho$.
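As a quick numerical sanity check of Eq. (\ref{operator evolution matrix}), one can verify that $M=e^{tA}$ satisfies $M^T\boldsymbol{\Omega}M=\boldsymbol{\Omega}$ for either sign of $k$; the following minimal Python sketch uses arbitrary illustrative parameter values.\\
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

m, k, t = 1.0, -2.0, 0.7                  # arbitrary values; k of either sign
A = np.array([[0.0, 1.0 / m],
              [-k,  0.0]])
Omega = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
M = expm(t * A)                           # M = e^{tA}
print(np.allclose(M.T @ Omega @ M, Omega))  # True: M is symplectic
print(np.isclose(np.linalg.det(M), 1.0))    # True: |M| = 1
\end{verbatim}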
Now we consider the time evolution of $\rho$, or equivalently, the time evolution of $\hat{\textbf{x}}$. According to Eqs. (\ref{operator evolution}) and (\ref{operator evolution matrix}) and the symplectic property of $M$, we have\\ \begin{equation} \begin{split} \chi(\boldsymbol{\xi},t)=\text{tr}(\rho D(\boldsymbol{\xi},t))&=\text{tr}(\rho \exp({i\hat{\textbf{x}}^T M^T\boldsymbol{\Omega}\boldsymbol{\xi}}))\\ &=\text{tr}(\rho \exp({i\hat{\textbf{x}}^T M^T\boldsymbol{\Omega}MM^{-1}\boldsymbol{\xi}}))\\ &=\text{tr}(\rho \exp({i\hat{\textbf{x}}^T \boldsymbol{\Omega}M^{-1}\boldsymbol{\xi}}))\\ &=\chi(M^{-1}\boldsymbol{\xi},0). \end{split} \end{equation}\\ Note that the matrix $M$ has a determinant equal to 1, i.e. $|M|=1$, and therefore it is always invertible. Then the Wigner distribution is\\ \begin{equation} \begin{split} W(\textbf{x},t)&=\int_{\mathbb{R}^2}\frac{d^2\boldsymbol{\xi}}{(2\pi)^2}\exp(-i\textbf{x}^T \boldsymbol{\Omega}\boldsymbol{\xi})\chi(\boldsymbol{\xi},t)\\ &=\int_{\mathbb{R}^2}\frac{d^2\boldsymbol{\xi}}{(2\pi)^2}\exp(-i\textbf{x}^T \boldsymbol{\Omega}\boldsymbol{\xi})\chi(M^{-1}\boldsymbol{\xi},0)\\ &=\int_{\mathbb{R}^2}\frac{d^2(M\boldsymbol{\xi})}{(2\pi)^2|M|}\exp(-i\textbf{x}^T \boldsymbol{\Omega}M\boldsymbol{\xi})\chi(\boldsymbol{\xi},0)\\ &=\int_{\mathbb{R}^2}\frac{d^2(M\boldsymbol{\xi})}{(2\pi)^2}\exp(-i\textbf{x}^T{M^{-1}}^T \boldsymbol{\Omega}\boldsymbol{\xi})\chi(\boldsymbol{\xi},0)\\ &=W(M^{-1}\textbf{x},0), \end{split} \end{equation}\\ where we have changed the integration variable $\boldsymbol{\xi} \gets M\boldsymbol{\xi}$ in the third line and used the unboundedness of the integration region $\mathbb{R}^2$ in the last line.\\ We see that the value of the Wigner distribution initially at the phase-space point $\textbf{x}$ is found at the point $M\textbf{x}$ after time $t$, i.e. $W(M\textbf{x},t)=W(\textbf{x},0)$. This is merely a linear transformation of the phase-space coordinates: only the position, orientation and widths of the distribution may change, while its functional form is preserved. Thus, a Gaussian distribution always stays Gaussian. The same holds for Hamiltonians that additionally include the terms $(\hat{x}\hat{p}+\hat{p}\hat{x})$, $\hat{x}$ and $\hat{p}$, which can be proved similarly. \\ Then, if at some instants we perform weak position measurements on the state $\rho$ that can be approximated as Gaussian, as in Section \ref{Quantum Trajectory section}, the Gaussianity of the Wigner distribution along the position coordinate $x$ increases; combined with the rotation and movement of the distribution in phase space, the Gaussianity of the whole distribution increases monotonically, so that in the long term the distribution is always rounded to an approximately Gaussian one. The main point is that non-Gaussianity never emerges by itself in quadratic potentials under position measurement.\\ \subsection{Effective Description of Time Evolution} In this case, the state can be fully described by its means and covariances as a Gaussian distribution in phase space.
Those quantities are\\ \begin{equation}\label{mean and covariance matrix} \begin{split} \langle\hat{x}\rangle,\quad\langle\hat{p}\rangle,\quad &V_x:=\langle\hat{x}^2\rangle-\langle\hat{x}\rangle^2,\quad V_p:=\langle\hat{p}^2\rangle-\langle\hat{p}\rangle^2,\\ C&:=\frac{1}{2}\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle-\langle\hat{x}\rangle\langle\hat{p}\rangle, \end{split} \end{equation}\\ i.e., five real values in total.\\ Therefore, when describing the time evolution of the state under continuous measurement, we may instead describe only the time evolution of the above five quantities, which is considerably simpler. We now derive their evolution equations.\\ We use the evolution equation for a state under continuous position measurement, Eq. (\ref{position measurement incomplete information}), as a starting point:\\ \begin{equation} d\rho=-\frac{i}{\hbar}[H,\rho]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]dt+\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}dW, \end{equation}\\ \begin{equation} dW\sim\mathcal{N}(0,dt),\quad \gamma>0,\quad\eta\in[0,1]. \end{equation}\\ Recall that $dW$ is a Wiener increment, $\gamma$ and $\eta$ represent the measurement strength and efficiency respectively, and we use the It\^o formulation. We can now evaluate the time evolution of the quantities in Eq. (\ref{mean and covariance matrix}).\\ \begin{equation}\label{calculate x} \begin{split} d\langle\hat{x}\rangle=\text{tr}(\hat{x}\,d\rho)&=\text{tr}\left(\frac{i}{\hbar}\rho[H,\hat{x}]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]\hat{x}dt+\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}\hat{x}\,dW\right)\\ &=\text{tr}\left(\rho\frac{\hat{p}}{m}dt-0+\sqrt{\frac{\gamma\eta}{2}}(2\hat{x}^2\rho-2\langle \hat{x}\rangle\hat{x}\rho)\,dW\right)\\ &=\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}(\langle\hat{x}^2\rangle-\langle \hat{x}\rangle^2)\,dW\\ &=\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}V_x\,dW. \end{split} \end{equation}\\ \begin{equation}\label{calculate p} \begin{split} d\langle\hat{p}\rangle=\text{tr}(\hat{p}\,d\rho)&=\text{tr}\left(\frac{i}{\hbar}\rho[H,\hat{p}]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]\hat{p}dt+\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}\hat{p}\,dW\right)\\ &=\text{tr}\left(\rho(-k\hat{x})dt-\frac{\gamma}{4}\rho(\hat{x}\hat{x}\hat{p}-2\hat{x}\hat{p}\hat{x}+\hat{p}\hat{x}\hat{x})dt+\sqrt{2\gamma\eta}\left(\rho\frac{\hat{x}\hat{p}+\hat{p}\hat{x}}{2}-\rho\langle \hat{x}\rangle\hat{p}\right)\,dW\right)\\ &=-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}C\,dW. \end{split} \end{equation}\\ \begin{equation}\label{calculate Vx1} \begin{split} dV_x&=\text{tr}(\hat{x}^2\,d\rho)-d(\langle \hat{x}\rangle^2)=\text{tr}(\hat{x}^2\,d\rho)-2\langle \hat{x}\rangle d\langle \hat{x}\rangle-(d\langle \hat{x}\rangle)^2\\ &=\text{tr}(\hat{x}^2\,d\rho)-2\langle \hat{x}\rangle \left(\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}V_x\,dW\right)-2\gamma\eta V_x^2\,dt. \end{split} \end{equation}\\ Since the expression is lengthy, we calculate $\text{tr}(\hat{x}^2\,d\rho)$ separately. In the following calculations we use symmetry properties of $\rho$ as a Gaussian state. First we use $\text{tr}((\hat{x}-\langle\hat{x}\rangle)^3\rho)=0$, which means that the skewness of a Gaussian distribution is zero. It leads to \begin{equation}\label{skewness Gaussian} \text{tr}(\hat{x}^3\rho)=3\langle\hat{x}^2\rangle\langle\hat{x}\rangle-2\langle\hat{x}\rangle^3=3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3.
\end{equation}\\ \begin{equation}\label{calculate Vx2} \begin{split} \text{tr}(\hat{x}^2\,d\rho)&=\text{tr}\left(\frac{i}{\hbar}\rho[H,\hat{x}^2]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]\hat{x}^2dt+\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}\hat{x}^2\,dW\right)\\ &=\text{tr}\left(\rho\frac{\hat{x}\hat{p}+\hat{p}\hat{x}}{m}dt+\sqrt{2\gamma\eta}\left(\rho{\hat{x}^3}-\rho\langle \hat{x}\rangle\hat{x}^2\right)\,dW\right)\\ &=\frac{\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle}{m}dt+\sqrt{2\gamma\eta}\left(3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3-\langle \hat{x}\rangle (V_x+\langle \hat{x}\rangle^2)\right)\,dW\\ &=\frac{\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle}{m}dt+2\sqrt{2\gamma\eta}V_x\langle\hat{x}\rangle\,dW, \end{split} \end{equation}\\ \begin{equation}\label{calculate Vx3} \begin{split} dV_x&=\text{tr}(\hat{x}^2\,d\rho)-2\langle \hat{x}\rangle \left(\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}V_x\,dW\right)-2\gamma\eta V_x^2\,dt\\ &=\left(\frac{2C}{m}-2\gamma\eta V_x^2\right)dt. \end{split} \end{equation}\\ Next we calculate $V_p$ in a similar manner.\\ \begin{equation}\label{calculate Vp1} \begin{split} dV_p&=\text{tr}(\hat{p}^2\,d\rho)-d(\langle \hat{p}\rangle^2)\\ &=\text{tr}(\hat{p}^2\,d\rho)-2\langle \hat{p}\rangle \left(-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}C\,dW\right)-2\gamma\eta C^2\,dt. \end{split} \end{equation}\\ We need the following symmetry property:\\ \begin{equation}\label{calculate Vp2} \begin{split} &\text{tr}((\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)\rho)=0\\ \Rightarrow\quad&\text{tr}\left(\frac{\hat{x}\hat{p}^2+\hat{p}^2\hat{x}}{2}\rho\right)=\text{tr}((\hat{p}\hat{x}\hat{p})\rho)=2C\langle\hat{p}\rangle+V_p\langle\hat{x}\rangle+\langle\hat{p}\rangle^2\langle\hat{x}\rangle, \end{split} \end{equation}\\ \begin{equation}\label{calculate Vp3} \begin{split} \text{tr}(\hat{p}^2\,d\rho)&=\text{tr}\left(\frac{i}{\hbar}\rho[H,\hat{p}^2]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]]\hat{p}^2dt+\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}\hat{p}^2\,dW\right)\\ &=\text{tr}\left(-k\rho(\hat{x}\hat{p}+\hat{p}\hat{x})dt-\frac{\gamma}{4}\rho(\hat{x}\hat{x}\hat{p}^2-2\hat{x}\hat{p}^2\hat{x}+\hat{p}^2\hat{x}\hat{x})dt\right .\\ &\qquad\qquad\qquad\qquad\qquad\left .+\sqrt{2\gamma\eta}\left(\rho\frac{(\hat{x}\hat{p}^2+\hat{p}^2\hat{x})}{2}-\rho\langle \hat{x}\rangle\hat{p}^2\right)\,dW\right)\\ &=-k\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle dt+\text{tr}\left(-\frac{\gamma}{4}\rho(2i\hbar\, \hat{x}\hat{p}-2i\hbar\, \hat{p}\hat{x})\right)dt+\sqrt{2\gamma\eta}\, 2C\langle\hat{p}\rangle\,dW\\ &=-k\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle dt+\frac{\gamma}{2}\hbar^2 dt+\sqrt{2\gamma\eta}\, 2C\langle\hat{p}\rangle\,dW, \end{split} \end{equation}\\ \begin{equation}\label{calculate Vp4} \begin{split} dV_p&=\text{tr}(\hat{p}^2\,d\rho)-2\langle \hat{p}\rangle \left(-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}C\,dW\right)-2\gamma\eta C^2\,dt\\ &=(-2kC-2\gamma\eta C^2+\frac{\gamma}{2}\hbar^2)dt.
\end{split} \end{equation}\\ Finally, we calculate the covariance $C$.\\ \begin{equation}\label{calculate C1} \begin{split} dC=\frac{1}{2}\text{tr}\left((\hat{x}\hat{p}+\hat{p}\hat{x})\,d\rho\right)&-d(\langle\hat{x}\rangle\langle\hat{p}\rangle)\\ =\frac{1}{2}\text{tr}\left((\hat{x}\hat{p}+\hat{p}\hat{x})\,d\rho\right)&-\langle\hat{x}\rangle d\langle\hat{p}\rangle-\langle\hat{p}\rangle d\langle\hat{x}\rangle-d\langle\hat{x}\rangle d\langle\hat{p}\rangle\\ =\frac{1}{2}\text{tr}\left((\hat{x}\hat{p}+\hat{p}\hat{x})\,d\rho\right)&-\langle\hat{x}\rangle(-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}C\,dW)\\ &- \langle\hat{p}\rangle\left(\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}V_x\,dW\right)-2\gamma\eta V_xC\,dt. \end{split} \end{equation}\\ Here we need the following symmetry:\\ \begin{equation}\label{calculate C2} \begin{split} &\text{tr}((\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\rho)=0\\ \Rightarrow\quad&\text{tr}\left(\frac{\hat{p}\hat{x}^2+\hat{x}^2\hat{p}}{2}\rho\right)=\text{tr}((\hat{x}\hat{p}\hat{x})\rho)=2C\langle\hat{x}\rangle+V_x\langle\hat{p}\rangle+\langle\hat{x}\rangle^2\langle\hat{p}\rangle. \end{split} \end{equation}\\ \begin{equation}\label{calculate C3} \begin{split} \frac{1}{2}\text{tr}\left((\hat{x}\hat{p}+\hat{p}\hat{x})\,d\rho\right)={}&\frac{1}{2}\text{tr}\left(\frac{i}{\hbar}\rho[H,(\hat{x}\hat{p}+\hat{p}\hat{x})]dt-\frac{\gamma}{4}[\hat{x},[\hat{x},\rho]](\hat{x}\hat{p}+\hat{p}\hat{x})dt\right.\\ &\quad+\left.\sqrt{\frac{\gamma\eta}{2}}\{\hat{x}-\langle \hat{x}\rangle,\rho\}(\hat{x}\hat{p}+\hat{p}\hat{x})\,dW\right)\\ ={}&\frac{1}{2}\text{tr}\left(\rho\left(\frac{2\hat{p}^2}{m}-2k\hat{x}^2\right)dt-0+\sqrt{2\gamma\eta}\left(2\hat{x}\hat{p}\hat{x}\rho-(\hat{x}\hat{p}+\hat{p}\hat{x})\langle \hat{x}\rangle\rho\right)\,dW\right)\\ ={}&\left(\frac{V_p+\langle \hat{p}\rangle^2}{m}-k(V_x+\langle \hat{x}\rangle^2)\right)dt\\ &\quad+\frac{1}{2}\left(\sqrt{2\gamma\eta}\left(4C\langle\hat{x}\rangle+2V_x\langle\hat{p}\rangle+2\langle\hat{x}\rangle^2\langle\hat{p}\rangle-2C\langle \hat{x}\rangle-2\langle\hat{p}\rangle\langle\hat{x}\rangle^2\right)\,dW\right)\\ ={}&\left(\frac{V_p+\langle \hat{p}\rangle^2}{m}-k(V_x+\langle \hat{x}\rangle^2)\right)dt+\sqrt{2\gamma\eta}\left(C\langle\hat{x}\rangle+V_x\langle\hat{p}\rangle\right)\,dW,\\ \end{split} \end{equation}\\ \begin{equation}\label{calculate C4} \begin{split} dC&=\frac{1}{2}\text{tr}\left((\hat{x}\hat{p}+\hat{p}\hat{x})\,d\rho\right)-\langle\hat{x}\rangle(-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}C\,dW)\\ &\qquad\qquad- \langle\hat{p}\rangle\left(\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}V_x\,dW\right)-2\gamma\eta V_xC\,dt\\ &=\left(\frac{V_p}{m}-kV_x-2\gamma\eta V_xC\right)dt. \end{split} \end{equation}\\ The results are summarized as follows:\\ \begin{equation}\label{mean and covariance evolution} \begin{split} d\langle\hat{x}\rangle&=\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}\,V_x\,dW,\\ d\langle\hat{p}\rangle&=-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}\,C\,dW,\\ dV_x&=\left(\frac{2C}{m}-2\gamma\eta V_x^2\right)dt,\\ dV_p&=(-2kC-2\gamma\eta C^2+\frac{\gamma}{2}\hbar^2)dt,\\ dC&=\left(\frac{V_p}{m}-kV_x-2\gamma\eta V_xC\right)dt. \end{split} \end{equation}\\ Our results coincide with those presented in Ref. \cite{HarmonicOscillatorControl}, and we have also verified them through numerical calculation.
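As an illustration of such a numerical check (not our actual verification code), the following Python sketch integrates Eq. (\ref{mean and covariance evolution}) with a simple Euler--Maruyama scheme, using arbitrary parameter values in units with $\hbar=1$; the covariances settle to constant values, anticipating the steady-state solution derived below.\\
\begin{verbatim}
import numpy as np

m, k, gamma, eta, hbar = 1.0, 1.0, 0.5, 0.8, 1.0   # illustrative values
dt, n_steps = 1e-4, 200000
rng = np.random.default_rng(1)

x, p, Vx, Vp, C = 0.0, 0.0, 1.0, 1.0, 0.0          # arbitrary initial moments
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))              # Wiener increment
    x, p = (x + p / m * dt + np.sqrt(2 * gamma * eta) * Vx * dW,
            p - k * x * dt + np.sqrt(2 * gamma * eta) * C * dW)
    Vx, Vp, C = (Vx + (2 * C / m - 2 * gamma * eta * Vx**2) * dt,
                 Vp + (-2 * k * C - 2 * gamma * eta * C**2
                       + gamma * hbar**2 / 2) * dt,
                 C + (Vp / m - k * Vx - 2 * gamma * eta * Vx * C) * dt)
print(Vx, Vp, C)   # constants, independent of the noise realization
\end{verbatim}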
From the above equations, we can see that only the average position and momentum are perturbed by the stochastic term $dW$, while the covariances form a closed set of equations and evolve deterministically. Therefore, we can calculate their steady values:\\ \begin{equation}\label{covariances solution} \begin{split} dV_x=dV_p&=dC=0\cdot dt,\qquad V_x,V_p>0.\\ \Rightarrow C=\frac{-k+\sqrt{k^2+\gamma^2\eta\hbar^2}}{2\gamma\eta}&,\ V_x=\sqrt{\frac{C}{m\gamma\eta}},\ V_p=2C\sqrt{m\gamma\eta C}+k\sqrt{\frac{mC}{\gamma\eta}}. \end{split} \end{equation}\\ In numerical simulations we observe that a state always evolves into this steady shape, and due to this convergence we simply fix the covariances at the above values. Then, the degrees of freedom decrease considerably, and the only remaining ones are the two real quantities $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$, the means of the Gaussian distribution in phase space.\\ Equivalently, we may say that the degrees of freedom of the state are represented by a displacement operator $D(\alpha)$, which displaces the state from the origin of phase space, i.e. from $\langle\hat{x}\rangle=\langle\hat{p}\rangle=0$. We denote the state centered at the origin of phase space by $\rho_0$, and we have \\ \begin{equation}\label{displaced means} \langle\hat{x}\rangle_{D\rho_0D^\dagger}=\sqrt{\frac{\hbar}{2m\omega}}(\alpha+\alpha^*),\quad\langle\hat{p}\rangle_{D\rho_0D^\dagger}=i\sqrt{\frac{\hbar m\omega}{2}}(\alpha^*-\alpha), \end{equation}\\ \begin{equation}\label{displacement definition} D(\alpha)=e^{\alpha\hat{a}^\dagger-\alpha^*\hat{a}},\quad \alpha\in\mathbb{C}, \end{equation} \begin{equation}\label{annihilator definition} \hat{a}:=\sqrt{\frac{m\omega}{2\hbar}}(\hat{x}+\frac{i}{m\omega}\hat{p}),\quad\omega:=\sqrt{\frac{|k|}{m}}, \end{equation}\\ where $\hat{a}$ is the annihilation operator. It has the following properties:\\ \begin{equation}\label{annihilation operator properties} [\hat{a},\hat{a}^\dagger]=1,\quad \hat{x}=\sqrt{\frac{\hbar}{2m\omega}}(\hat{a}^\dagger+\hat{a}),\quad \hat{p}=i\sqrt{\frac{\hbar m\omega}{2}}(\hat{a}^\dagger-\hat{a}), \end{equation}\\ and\\ \begin{equation}\label{annihilation operator displaced} D^\dagger(\alpha)\hat{a}D(\alpha)=\hat{a}+\alpha,\quad D^\dagger(\alpha)\hat{a}^\dagger D(\alpha)=\hat{a}^\dagger+\alpha^*, \end{equation}\\ \begin{equation}\label{number operator displaced} \hat{n}:=\hat{a}^\dagger\hat{a},\quad D^\dagger(\alpha)\hat{n}D(\alpha)=\hat{n}+(\alpha\hat{a}^\dagger+\alpha^*\hat{a}) + |\alpha|^2, \end{equation}\\ where $\hat{n}$ is the number operator. In the above calculations we did not assume the sign of $k$, but here it is necessary to use the positive coefficient $|k|$ to define the operators appropriately, and therefore $\omega$ is always positive.\\ For the state $\rho_0$, we have $\langle\hat{x}\rangle_{\rho_0}=\langle\hat{p}\rangle_{\rho_0}=0$, and therefore $\langle\alpha\hat{a}^\dagger+\alpha^*\hat{a}\rangle_{\rho_0}=0$. Then we have\\ \begin{equation}\label{number operator calculation} \langle\hat{n}\rangle_{D\rho_0D^\dagger}=\langle\hat{n}\rangle_{\rho_0} + |\alpha|^2.
\end{equation}\\ Therefore, we can use $|\alpha|^2$ to express the expectation value of the operator $\left(\frac{\hat{p}^2}{2m}+\frac{|k|}{2}\hat{x}^2\right)$ for the state $D\rho_0D^\dagger$: \begin{equation}\label{energy displaced} \left\langle\frac{\hat{p}^2}{2m}+\frac{|k|}{2}\hat{x}^2\right\rangle_{D\rho_0D^\dagger}=\hbar\omega\left(\langle\hat{n}\rangle_{D\rho_0D^\dagger}+\frac{1}{2}\right)=\hbar\omega\left(\langle\hat{n}\rangle_{\rho_0}+\frac{1}{2}+|\alpha|^2\right), \end{equation}\\ where $\langle\hat{n}\rangle_{\rho_0}$ is a constant determined by the covariances in Eq. (\ref{covariances solution}). We also have $\langle\hat{x}\rangle_{D\rho_0D^\dagger}$ and $\langle\hat{p}\rangle_{D\rho_0D^\dagger}$ to represent the real and imaginary parts of $\alpha$, so we obtain the following formula:\\ \begin{equation}\label{energy representation} \left\langle\frac{\hat{p}^2}{2m}+\frac{|k|}{2}\hat{x}^2\right\rangle_{D\rho_0D^\dagger}=\hbar\omega\left(\langle\hat{n}\rangle_{\rho_0}+\frac{1}{2}\right)+\left(\frac{\langle\hat{p}\rangle^2_{D\rho_0D^\dagger}}{2m}+\frac{|k|}{2}\langle\hat{x}\rangle^2_{D\rho_0D^\dagger}\right). \end{equation}\\ Now, if we want to evaluate $\left\langle\frac{\hat{p}^2}{2m}+\frac{|k|}{2}\hat{x}^2\right\rangle$ for the state $\rho$, we can, up to the constant term $\hbar\omega\left(\langle\hat{n}\rangle_{\rho_0}+\frac{1}{2}\right)$, simply replace the operators $\hat{p}$ and $\hat{x}$ by the means $\langle\hat{p}\rangle$ and $\langle\hat{x}\rangle$. \\ As we can see, this system turns out to be very simple. This can be understood from the fact that, concerning free evolution, a non-negative Wigner distribution behaves in phase space in exactly the same way as the corresponding classical distribution unless the Hamiltonian contains terms that are more than quadratic or non-analytic, since its evolution equation then reduces to the Liouville equation \cite{WignerDistributionEvolution, StatisticalMechanicsGibbs}. In addition, the position measurement only shrinks and squeezes the distribution in the $x$ direction, which does not introduce negativity into the distribution \cite{SqueezedStates}, and therefore the distribution evolves almost classically. The only quantumness in this system is the measurement backaction represented by $dW$ and the uncertainty principle, which introduces the constant term in $dV_p$ (see Eq. (\ref{mean and covariance evolution})).\\ \subsection{Optimal Control}\label{quadra optimal control} Having simplified the system, we now consider its control. The system is summarized by the following:\\ \begin{equation}\label{equation sets of quadra} \begin{split} d\langle\hat{x}\rangle&=\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}\,V_x\,dW,\\ d\langle\hat{p}\rangle&=-k{\langle \hat{x}\rangle}dt+\sqrt{2\gamma\eta}\,C\,dW,\\ C&=\frac{-k+\sqrt{k^2+\gamma^2\eta\hbar^2}}{2\gamma\eta},\\ V_x=\sqrt{\frac{C}{m\gamma\eta}}&,\quad V_p=2C\sqrt{m\gamma\eta C}+k\sqrt{\frac{mC}{\gamma\eta}}, \end{split} \end{equation}\\ where the only independent degrees of freedom are $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$. We consider using an external force to control the system, which amounts to an additional term $F_\text{con}\hat{x}$ added to the total Hamiltonian $H$. The time evolution then becomes\\ \begin{equation}\label{controlled quadra} \begin{split} d\langle\hat{x}\rangle&=\frac{\langle \hat{p}\rangle}{m}dt+\sqrt{2\gamma\eta}\,V_x\,dW,\\ d\langle\hat{p}\rangle&=(-k{\langle \hat{x}\rangle}-F_{\text{con}})dt+\sqrt{2\gamma\eta}\,C\,dW,\\ \end{split} \end{equation}\\ where, with this sign convention, a positive $F_{\text{con}}$ exerts a force in the negative $x$ direction.
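For reference, the steady covariances appearing in Eq. (\ref{equation sets of quadra}) are easily evaluated numerically; the following small Python sketch, with arbitrary parameter values, also checks the standard uncertainty relation $V_xV_p-C^2\ge\hbar^2/4$ satisfied by Gaussian states.\\
\begin{verbatim}
import numpy as np

def steady_covariances(m, k, gamma, eta, hbar=1.0):
    # steady-state V_x, V_p, C of the moment equations
    C = (-k + np.sqrt(k**2 + gamma**2 * eta * hbar**2)) / (2 * gamma * eta)
    Vx = np.sqrt(C / (m * gamma * eta))
    Vp = 2 * C * np.sqrt(m * gamma * eta * C) + k * np.sqrt(m * C / (gamma * eta))
    return Vx, Vp, C

Vx, Vp, C = steady_covariances(m=1.0, k=1.0, gamma=0.5, eta=0.8)
print(Vx * Vp - C**2 >= 0.25)   # uncertainty relation holds (hbar = 1)
\end{verbatim}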
We can confirm that the equations for $V_x$, $V_p$ and $C$ are unchanged, either explicitly or by interpreting the additional $F_{\text{con}}\hat{x}$ term in the Hamiltonian as a shift of the operator $\hat{x}$ by the amount $\frac{F_{\text{con}}}{k}$, which clearly does not affect the covariances.\\ When $k>0$, the Hamiltonian $H=\frac{\hat{p}^2}{2m}+\frac{k}{2}\hat{x}^2$ represents a harmonic oscillator, and here we consider controlled cooling of this system. Since the system involves only one particle, cooling amounts to decreasing its energy from an arbitrarily chosen initial state. Because we assume continuous measurement of this system, the previous analysis and simplification apply.\footnote{We do not consider a measurement strength that varies in time.} The target of the control is to minimize the energy $\langle H\rangle$, which according to Eq. (\ref{energy representation}) amounts to minimizing the functional $\left(\frac{\langle\hat{p}\rangle^2}{2m}+\frac{k}{2}\langle\hat{x}\rangle^2\right)$ with $k>0$, under the above time-evolution equations (\ref{controlled quadra}). As we consider a general cooling task, we minimize the time-averaged total energy. We call this minimized functional the \textit{loss}, sometimes also called a control score, and denote it by $L$:\\ \begin{equation}\label{control loss} L:=\lim\limits_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\frac{\langle\hat{p}\rangle^2}{2m}+\frac{k}{2}\langle\hat{x}\rangle^2\right)dt. \end{equation}\\ Here the infinite time limit is not crucial; it is introduced for translational invariance of the control in time and for sufficient long-term planning of the control.\\ When the system is noise-free and deterministic, that is, \begin{equation}\label{deterministic controlled quadra} \begin{split} d\langle\hat{x}\rangle&=\frac{\langle \hat{p}\rangle}{m}dt,\\ d\langle\hat{p}\rangle&=(-k{\langle \hat{x}\rangle}-F_{\text{con}})dt, \end{split} \end{equation} the $L$ above can converge to 0 for any control that eventually cools the particle, owing to the factor $\left (\lim\limits_{T\to\infty}\frac{1}{T}\right )$, and therefore it is not a proper measure of the loss that we wish to minimize. Therefore, we use definition (\ref{control loss}) only when the system contains noise, and for the deterministic case we redefine the loss as\\ \begin{equation}\label{deterministic loss} L:=\lim\limits_{T\to\infty}\int_{0}^{T}\left(\frac{\langle\hat{p}\rangle^2}{2m}+\frac{k}{2}\langle\hat{x}\rangle^2\right)dt, \end{equation}\\ which is not essentially different from Eq. (\ref{control loss}) but is well-behaved when we try to minimize it. Since the two definitions differ only by this mathematical subtlety, we do not distinguish them when unnecessary; which one is being considered is clear from the context.\\ As introduced in Section \ref{quantum control}, we now seek the optimal strategy for controlling the variable $F_{\text{con}}$ such that $L$ is minimized. For the deterministic system Eq. (\ref{deterministic controlled quadra}), the minimization of $L$ can be achieved in a simple manner by borrowing some ideas from physics.
First, we note that the time-evolution equations for $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ are effectively classical: a classical particle with position $x$ and momentum $p$ satisfying $x_{t=0}=\langle\hat{x}\rangle_{t=0}$ and $p_{t=0}=\langle\hat{p}\rangle_{t=0}$ has the same time evolution of $(x,p)$ as that of $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$ of the underlying quantum system, which can also be seen from the Ehrenfest theorem for quadratic potentials \cite{QuantumText}. Then, looking at the functional $L$ as expressed in Eq. (\ref{deterministic loss}), one may recall the action and the Hamilton principle: a classical trajectory of the mechanical variables $(x,p)$ minimizes the total action, which is the time integral of the Lagrangian and is very similar in form to Eq. (\ref{deterministic loss}). Therefore, if we construct a Lagrangian $\mathcal{L}$ in terms of $x$ and $p$ such that minimization of $\left(\int\mathcal{L}\,dt\right)$ corresponds to minimization of the loss $L$, then the trajectory of $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$ that minimizes the loss $L$ is obtained as the classical trajectory of $(x,p)$. After we obtain the desired trajectory, an external control is applied to keep the quantities $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$ on that trajectory. This completes a simple derivation of the so-called linear-quadratic optimal control for our system.\\ Following this argument, we define $\mathcal{L}=\frac{m\dot{x}^2}{2}+\frac{kx^2}{2}=\mathcal{T}-\mathcal{V}$, where the classical kinetic energy is $\mathcal{T}=\frac{p^2}{2m}=\frac{m\dot{x}^2}{2}$ and the potential is $\mathcal{V}=-\frac{kx^2}{2}$. Note that a Lagrangian must be of the form $\mathcal{L}(\boldsymbol{q},\dot{\boldsymbol{q}},t)$ for the Hamilton principle to hold \cite{ClassicalMechanics}. The action $\left(\int\mathcal{L}\,dt\right)$ is equal to the loss $L$, and therefore a classical particle travelling in the potential $\mathcal{V}$ has mechanical variables $(x,p)$ which minimize $L$ when they are substituted by $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$. This substitution is legitimate because $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ are constrained by the same relation as $x$ and $p$, i.e. $\frac{d}{dt}\langle\hat{x}\rangle=\frac{\langle \hat{p}\rangle}{m}$ (Eq. \ref{deterministic controlled quadra}) and $\frac{d}{dt}{x}=\frac{p}{m}$, which ensures that when $x$ is substituted by $\langle \hat{x} \rangle$, $p$ is substituted by $\langle\hat{p}\rangle$. From the viewpoint of optimization, the minimization of $L$ is performed under the constraint $\frac{d}{dt}\langle\hat{x}\rangle=\frac{\langle \hat{p}\rangle}{m}$, which is exactly the constraint $\frac{d}{dt}x=\frac{p}{m}$ of the classical Lagrangian. Therefore, all necessary conditions are indeed satisfied: a trajectory of $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$ that minimizes $L$ must be a classical physical trajectory of $(x,p)$ for the Lagrangian $\mathcal{L}$, and we should use the control to realize such a trajectory.\\ Next, we consider which trajectories of $\mathcal{L}$ can be used to minimize $L$.
Because the potential $\mathcal{V}=-\frac{kx^2}{2}$ is unstable, high at the center and low on both sides, a classical particle has a divergent total action $\left(\int^\infty\mathcal{L}\,dt\right)$ unless it precisely stops at the top of the potential with zero momentum, in which case the action is non-divergent. Therefore, we look specifically at the conditions under which the action can be non-divergent. In order to stop precisely at the top of the potential $\mathcal{V}$, first, the particle's velocity and position must have opposite signs, so that it moves towards the center; second, its total energy must be exactly zero, so that it comes to rest exactly when reaching the top, i.e. $\frac{p^2}{2m}-\frac{kx^2}{2}=0$. Therefore, the trajectory of the particle's $(x,p)$ satisfies\\ \begin{equation}\label{optimal trajectory} p=-\sqrt{mk}\,x. \end{equation}\\ This is the main result of our optimal control.\\ Then, whenever the above condition is not satisfied for our state with $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$, we apply the control $F_\text{con}$ to influence the evolution of $\langle\hat{p}\rangle$ so as to restore the condition. If $F_\text{con}$ is unbounded, we can modify $\langle\hat{p}\rangle$ within an infinitesimal length of time so that the condition is quickly satisfied, and then keep a moderate strength of $F_\text{con}$ so that $\langle\hat{p}\rangle$ keeps satisfying it. This is the optimal control minimizing $L$ when the system variables evolve according to Eq. (\ref{deterministic controlled quadra}), which is deterministic and includes no noise.\\ The important but difficult final step is to show that, when the measurement backaction noise is included as in Eq. (\ref{controlled quadra}), the above control strategy is still optimal. This is called the \textit{separation theorem} in the context of control theory, and it is not straightforward to prove. Therefore, we resort to the standard linear-quadratic-Gaussian (LQG) control theory \cite{LinearQuadraticControl} and prove it in the context of control theory in Sec.~\ref{linear quadratic Gaussian appendix section} in the appendices. Since the reasoning follows a different line of thought, we do not discuss it further here.\footnote{A more general and rigorous proof can be found in Ref.~\cite{FeedbackControlOfLinearStochasticSystems}.} \\ Regarding the case of $k<0$, in which the system amounts to an inverted oscillator, we consider the minimization of a loss defined as \begin{equation}\label{inverted potential optimal control loss} L=\int\left(\frac{\langle\hat{p}\rangle^2}{2m}+\frac{|k|}{2}\langle\hat{x}\rangle^2\right)dt \end{equation} so that when it is minimized, both the position and the momentum are kept close to zero, and the particle therefore stays stable near the origin of the $x$ coordinate. This makes the problem the same as before and produces the same optimal trajectory condition, that is, \begin{equation}\label{optimal trajectory inverted} p=-\sqrt{m|k|}\,x, \end{equation} and we use this as the conventional optimal control strategy for the inverted harmonic potential problem.\\ \section{Numerical Experiments}\label{quadra experiments} In this section, we describe the settings of our numerical experiments on simulated quantum control in quadratic potentials under continuous position measurement. Detailed settings of the specific deep reinforcement learning techniques and the corresponding hyperparameters are given in Appendix \ref{experiment details appendix}.
\\ \subsection{Problem Settings} First, we describe the settings of the simulated quantum system and of the control.\\ \subsubsection{Loss Function} As discussed in the previous sections, we set the target of the control to be minimizing the energy of the particle for the harmonic oscillator, and keeping the particle near the center for the inverted harmonic potential. Because the problem is stochastic, the minimized quantity is actually the expectation of the loss, written as follows:\\ \begin{equation}\label{harmonic optential original loss} E[L_1] = E\left [\lim\limits_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left(\frac{\langle\hat{p}\rangle^2}{2m}+\frac{k}{2}\langle\hat{x}\rangle^2\right)dt\right ], \end{equation}\\ \begin{equation}\label{inverted potential original loss} E[L_2] = E\left [\frac{1}{T}\int_{0}^{T}g\left (\langle\hat{x}\rangle,\langle\hat{p}\rangle\right )\,dt\right ], \end{equation} where $E[\cdot]$ denotes the expectation value over trajectories of the stochastic variables, and $g$ is a function which judges whether the particle is away from the center (falling out) or near the center (staying stable): $g$ is 1 when the particle is away, and 0 otherwise. In our numerical simulation of the particle, we stop the simulation when the particle is already away and take $g=1$ afterwards. Accordingly, we use a large but finite $T$ rather than taking the infinite limit; otherwise the time average would approach 1, since the unbounded noise eventually drives any trajectory away from the center. Concerning the controls, we use deep reinforcement learning, as reviewed in Chapter \ref{deep reinforcement learning}, to learn to minimize these two quantities. However, the optimal control discussed in the last section applies only to loss functions of quadratic form, and does not directly apply to Eq. (\ref{inverted potential original loss}). In order to obtain an optimal-control strategy for the inverted potential problem, we artificially define a different, quadratic loss function to which optimal control theory applies; this is done in Eq. (\ref{inverted potential optimal control loss}). We use the optimal control derived from that artificially defined loss and compare its performance with the deep reinforcement learning that directly learns the original loss. \\ \subsubsection{Simulation of the Quantum System} Although we have a set of equations (\ref{controlled quadra}) which effectively describes the time evolution of the system, we still decide to numerically simulate the original quantum state in its Hilbert space. This is because we want to confirm that reinforcement learning can directly learn from a numerically simulated continuous-space quantum system without the need for simplification, while the numerical error and the computational cost remain acceptable. Once this is confirmed, we may carry our strategy to other problems that are more difficult and cannot be simplified. Another reason for simulating the original quantum state is that we input the quantum state directly to the neural network so that it learns from the state as well, and we therefore need the state itself.\\ To simulate the quantum system, we express the state in terms of the energy eigenbasis of the harmonic potential $V=\frac{|k|}{2}\hat{x}^2$. This simulation strategy is precise and efficient, because the state is Gaussian and can thus be expressed in terms of squeezing and displacement operators, which result in exponentially small values in the high-energy components of the harmonic eigenbasis.
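As an illustration of this representation (not our actual simulation code), the truncated operators can be built in the number basis as in the following Python sketch, in units with $\hbar=m=\omega=1$ and with the cutoff used below:\\
\begin{verbatim}
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0
N = 131                                  # keep number states |0>, ..., |130>

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n)|n-1>
x_op = np.sqrt(hbar / (2 * m * omega)) * (a.conj().T + a)
p_op = 1j * np.sqrt(hbar * m * omega / 2) * (a.conj().T - a)

psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                             # initial state: the ground state
x_mean = np.real(psi.conj() @ x_op @ psi)    # <x> = 0 for the ground state
\end{verbatim}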
We set the energy cutoff of our simulated space at the 130-th excited state, and whenever the component on the 120-th excited state exceeds a norm of $10^{-5}$, we judge that the numerical error is going to be high and we stop the simulation. We call this \textit{failing}, because the controller has failed to keep the state stable around the center; otherwise the high-energy components would not be large. This is used as our criterion to judge whether a control is successful or not.\\ To reduce the computational cost, we only consider pure states, and the time-evolution equation is Eq. (\ref{position measurement evolution}), i.e., \begin{equation*} d|\psi\rangle=\left[\left(-\frac{i}{\hbar}H-\frac{\gamma}{4}(\hat{x}-\langle\hat{x}\rangle)^2\right)dt+\sqrt{\dfrac{\gamma}{2}}(\hat{x}-\langle\hat{x}\rangle)dW\right]|\psi\rangle, \end{equation*} where the Hamiltonian $H$ is \begin{equation}\label{Hamiltonian quadra} H=\frac{\hat{p}^2}{2m}+\frac{k}{2}\hat{x}^2+F_{\text{con}}\hat{x}. \end{equation} Also, due to the time-evolution equations of $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ (Eq.~(\ref{controlled quadra})), incomplete information on the measurement outcomes, i.e. $\eta<1$, does not change the optimal control strategy, since the strategy depends only on $k$ and $m$. Thus the optimal control is the same for a pure state with complete information and for a mixed state with partial measurement information. As shown in Eq.~(\ref{controlled quadra}), the difference between incomplete and complete measurement information in this problem lies only in the size of the additive noise, which does not essentially affect the system behaviour. Therefore, for simplicity, we do not experiment on mixed states with incomplete measurement results.\\ To numerically simulate the time-evolution equation, we discretize time into time steps and update the state iteratively. The implemented numerical update scheme is a mixed explicit-implicit 1.5-order strong convergence scheme for It\^o stochastic differential equations with additional 2nd- and 3rd-order corrections of the deterministic terms. It is nontrivial and is described in Section \ref{numerical update rule appendix} of Appendix \ref{numerical simulation appendix}. We numerically verified that our method has small numerical error: specifically, the covariances of our simulated state differ from the calculated ones (see Eq. (\ref{equation sets of quadra})) by an amount of $10^{-4}\sim 10^{-6}$ when the state is stable, and by an amount of $10^{-3}\sim 10^{-4}$ when the simulated system fails, always below $10^{-2}$. Therefore, we believe that our numerical simulation of this stochastic system is sufficiently accurate. An example of the simulated system is plotted in Fig.~\ref{fig:sampleharmonic}.\\ To initialize the simulation, the state is set to the ground state of the harmonic eigenbasis at the beginning, and it then evolves under the position measurement and the control. The state leaves the ground state because of the measurement backaction, and it continuously gains energy if no control force is applied; thus, to keep the state at a low energy, it is necessary to make use of the control. When the total simulation time exceeds a preset threshold $t_{\text{max}}$ or when the simulation fails, we stop the simulation and restart a new one. Following the usual convention of reinforcement learning, we call one simulation from start to end an \textit{episode}.
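For illustration only, a first-order (Euler--Maruyama) discretization of the above time-evolution equation, using the truncated operators of the previous sketch, could look as follows; our actual scheme is the higher-order one described in Appendix \ref{numerical simulation appendix}.\\
\begin{verbatim}
import numpy as np

def sse_euler_step(psi, H, x_op, gamma, dt, rng, hbar=1.0):
    # one Euler-Maruyama step of the pure-state evolution equation above
    x_mean = np.real(psi.conj() @ x_op @ psi)
    dx = x_op - x_mean * np.eye(len(psi))          # (x - <x>)
    dW = rng.normal(0.0, np.sqrt(dt))
    dpsi = ((-1j / hbar) * H - 0.25 * gamma * dx @ dx) @ psi * dt \
           + np.sqrt(0.5 * gamma) * (dx @ psi) * dW
    psi = psi + dpsi
    return psi / np.linalg.norm(psi)   # renormalize against discretization error
\end{verbatim}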
\\ \begin{figure}[tb] \centering\subfloat[]{ \includegraphics[width=0.48\linewidth]{chapter4/sample_harmonic}} \subfloat[]{ \includegraphics[width=0.48\linewidth]{chapter4/sample_inverted}} \caption{Examples of controlled wavefunctions $\psi$ in the problems of cooling a harmonic oscillator (a) and stabilizing an inverted oscillator (b), plotted in $x$ space, together with schematic plots of the controlled potential (grey) and the probability density distribution (red). The real and imaginary parts of the wavefunctions are plotted in blue and orange, respectively. The unit of the horizontal axis is $\sqrt{\frac{\hbar}{m\omega}}$.} \label{fig:sampleharmonic} \end{figure} \subsubsection{Constraints of Control Force} In practice, the applied external control force $F_{\text{con}}$ must be finite and bounded. Because this external force, as shown in Eq. (\ref{Hamiltonian quadra}), is equivalent to shifting the center of the potential by an amount $\dfrac{F_{\text{con}}}{k}$, we compare the width of the wavefunction with the distance of the potential shift and keep them of the same order of magnitude. The wavefunctions in our experiments have standard deviations in position space of around $0.67$ and $0.80$ in units of $\sqrt{\frac{\hbar}{m\omega}}$ for the harmonic and the inverted oscillator problems, and therefore widths of around $2.7$ and $3.2$. The allowed shifts of the potential for these two problems are correspondingly set to $[-5,+5]$ and $[-10,+10]$\footnote{We do not penalize large control forces to prevent divergent controls, as is done in the usual linear-quadratic control theory. This is because we likewise do not penalize any control choice of the neural network output, and we want to compare the two strategies fairly.}, all in units of $\sqrt{\frac{\hbar}{m\omega}}$. In our numerical experiments, we find that the inverted problem is quite unstable. On the one hand, the noise, being Gaussian, is intrinsically unbounded and can overcome any finite control force; on the other hand, in the inverted potential any deviation from the center makes the system harder to control, since the particle tends to move away, and once it has deviated too much, the bounded control force may no longer be strong enough. This is why we set the allowed control force for the inverted problem to be larger. In the harmonic oscillator case, no matter how far the noise pushes the particle from the center, the particle always comes back through its oscillation, and a control force can always act favourably on the particle to reduce its momentum as desired. This is why we set the allowed control force for the harmonic oscillator problem to be smaller. In our experiments, we found that these allowed ranges of control force are sufficient to demonstrate efficient cooling and control of the particle, as shown in Section \ref{quadra performance}.\\ To make the control practical, we do not allow the controller to change its control force too many times during one oscillation period of the system, i.e. within one time period $\dfrac{2\pi}{\omega}$, where $\omega=\sqrt{\dfrac{|k|}{m}}$. This condition is imposed for both the harmonic and the inverted oscillator problems. We let the controller output 36 control forces per oscillation period $\dfrac{2\pi}{\omega}$, which amounts to 18 controls in half an oscillation period, i.e.
the time for the particle to move from one side to the other and change direction, or 9 controls in a quarter period. This specific number is chosen only for divisibility with respect to the time step of the simulated quantum system. A control force $F_{\text{con}}$ is applied to the system as a constant until the next control force is output by the controller. The control force on the system, regarded as a function of time, is therefore a sequence of step functions, and we call each constant step a \textit{control step}, in contrast to the time step of the numerical simulation.\\ \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth]{chapter4/harmonic_control_sequence} \caption{An example of the optimal control $\frac{1}{k}F_{\text{con}}$ and the average position $\langle\hat{x}\rangle$ in the problem of cooling a harmonic oscillator, plotted against the time $t$ of the system. The units are consistent with Table~\ref{quadra parameter settings}. It can be seen that the control force $F_{\text{con}}$ is almost random, with a mean of zero, and that $\langle\hat{x}\rangle$ fluctuates around zero.} \label{fig:harmoniccontrolsequence} \end{figure} Next, we consider the implementation of the optimal controls. As the control forces are bounded and discretized in time, we use variations of the original continuous optimal control that satisfy these constraints, which is common practice in control theory. First, because each force is applied as a constant during a control step, the control target is adapted so that the particle lies on the desired optimal trajectory at the end of each control step. We do so by solving for the control force $F_{\text{con}}$ using the time-evolution equation (\ref{deterministic controlled quadra}) and the current state $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$, with the target state $(\langle\hat{x}\rangle+d\langle\hat{x}\rangle,\langle\hat{p}\rangle+d\langle\hat{p}\rangle)$ satisfying the optimal trajectory condition in Eq.~(\ref{optimal trajectory inverted}). This is solved by taking $dt$ in Eq.~(\ref{deterministic controlled quadra}) to be the duration of a control step and expanding $d\langle\hat{x}\rangle$ and $d\langle\hat{p}\rangle$ up to ${dt}^2$; this strategy is fairly accurate since we repeat it 18 times per half oscillation period. The resulting control $F_{\text{con}}$ is linear in the system variables $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$, and it is called a linear control. Finally, we bound $F_{\text{con}}$ by simply clipping it to the required bounds, discarding the excess beyond the constraints. An example of the resulting control is shown in Fig.~\ref{fig:harmoniccontrolsequence}.\\
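A minimal sketch of this clipped linear control is as follows. For brevity it keeps only the first order in the control-step duration (the implementation described above expands to second order), and uses the deterministic equations $d\langle\hat{x}\rangle=\langle\hat{p}\rangle\,dt/m$ and $d\langle\hat{p}\rangle=(-k\langle\hat{x}\rangle-F_{\text{con}})\,dt$ implied by Eq.~(\ref{Hamiltonian quadra}), together with a target trajectory of the form $\langle\hat{p}\rangle=-\sqrt{m|k|}\,\langle\hat{x}\rangle$; the precise target condition for the inverted problem is the one of Eq.~(\ref{optimal trajectory inverted}).
\begin{verbatim}
import numpy as np

def linear_control(x, p, m, k, dt_c, F_min, F_max):
    """Force that puts (<x>,<p>) on the trajectory <p> = -sqrt(m|k|) <x>
    at the end of one control step of length dt_c (first order in dt_c),
    then clipped to the allowed range.  From d<x> = <p>/m dt and
    d<p> = (-k<x> - F) dt one finds F = (c x + p)/dt_c + c p/m - k x,
    with c = sqrt(m|k|); note that F is linear in (x, p)."""
    c = np.sqrt(m * abs(k))
    F = (c * x + p) / dt_c + c * p / m - k * x
    return float(np.clip(F, F_min, F_max))
\end{verbatim}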
\subsubsection{Parameter Settings} Many parameters of the simulated quantum system are redundant and can be rescaled to produce the same quantum evolution. Therefore, most of the parameters are arbitrary, and we summarize our settings in Table \ref{quadra parameter settings}.\\ \begin{table}[hbt] \centering \begin{tabular}{cccccc} \toprule $\omega$ ($\omega_c$) & $m$ ($m_c$) & $k$ ($m_c\omega^2_c$) & $dt$ ($\frac{1}{\omega_c}$) & $\eta$ & $\gamma$ ($\frac{m_c\omega_c^2}{\hbar}$) \\[8pt] $\pi$ & $\dfrac{1}{\pi}$ & $\pm\pi$ & $\dfrac{1}{720}, \dfrac{1}{1440}$ & 1 & $\pi,2\pi$ \\ \bottomrule \end{tabular}\\[10pt] \begin{tabular}{cccc} \toprule $n_{\text{max}}$ & $F_{\text{con}}$ ($\sqrt{\hbar m_c\omega_c^{3}}$) & $N_\text{con}$ & $t_{\text{max}}$ ($\frac{1}{\omega_c}$) \\[8pt] 130 & $[-5\pi,+5\pi]$, $[-10\pi,+10\pi]$ & 18 & 100 \\ \bottomrule \end{tabular}\\ \caption{Parameter settings of the simulation of quadratic potentials under position measurement. Units are shown in parentheses. The physical quantities $m_c$ and $\omega_c$ are used as references, and parameters without units are dimensionless. Where two values are separated by a comma, they give the specific settings for the harmonic (left) and the inverted (right) oscillator problems. } \label{quadra parameter settings} \end{table}\\ In Table \ref{quadra parameter settings}, $\eta$ denotes the measurement efficiency, which is 1 because we only consider pure states; $\gamma$ is the measurement strength, which we set so that the size of the wavefunction is about the same as that of the ground-state wavefunction of a harmonic oscillator. For the harmonic oscillator $\gamma=\pi$, and for the inverted oscillator $\gamma=2\pi$; we expect qualitatively similar results even if $\gamma$ is changed. In both the harmonic and the inverted problems we simulate the time evolution of the state only up to time $t_{\text{max}}$. The other parameters are $n_{\text{max}}$, the high-energy cutoff; $dt$, the simulation time step; $F_{\text{con}}$, the control force, shown as its allowed range; and $N_\text{con}$, the number of controls output by the controller per unit time $\frac{1}{\omega_c}$. The oscillation period of the harmonic system here is exactly $2\times\frac{1}{\omega_c}$, which is our time scale. For example, each simulation episode lasts for time $100\times\frac{1}{\omega_c}$, i.e., 50 oscillation periods of the harmonic oscillator.\\ \subsubsection{Evaluation of Performance}\label{performance evaluation} In this subsection, we explain how we evaluate the performances of different controllers. The performances are evaluated with our numerical simulation, using the controllers to output control forces at every control step. Because the goals of the harmonic and the inverted oscillator problems are different, we evaluate them by different methods.\\ For the problem of cooling a harmonic oscillator, we run the numerical simulation with control for 1000 episodes\footnote{Sometimes we simulate for more episodes.}, sample the expected energy of the state, expressed as the phonon number $\langle\hat{n}\rangle$, 4000 times across these episodes, and calculate the sample mean and the estimated standard error of the mean. We first initialize the state and let it evolve under control for 20 oscillation periods, i.e. time $40\times\frac{1}{\omega_c}$, and then sample $\langle\hat{n}\rangle$ once per time $15\times\frac{1}{\omega_c}$ until the end of the episode. This removes the effect of initialization and reduces correlations between samples.
The value $\langle\hat{n}\rangle$ is evaluated at the control steps, for consistency with the optimal control target, which aims to put the state on the desired trajectory at the end of each control step. In this way we numerically evaluate the expected energy of the controlled particle, i.e., the expected value of the loss function in Eq.~(\ref{harmonic optential original loss}).\\ For the inverted oscillator problem, we use a different measure of performance. We use failure events to judge whether the controlled particle stays near the center or moves away: if the simulated system fails (i.e., the component on the eigenstate $n=120$ exceeds a norm of $10^{-5}$), the criterion function $g$ in Eq.~(\ref{inverted potential original loss}) equals 1; otherwise $g$ equals 0. In our numerical simulation, failure events correspond to the center of the wavefunction sitting near $x=8$ or $x=9$ in units of $\sqrt{\frac{\hbar}{m\omega}}$, while the control force is allowed to move the center of the potential at most to $x=10$. For simplicity, we treat failure within a given time interval as a Bernoulli variable (``fail'' or ``not fail''), and we use the failure probability in one episode as our measure of control performance. To evaluate a controller, we run simulations of 1000 episodes to estimate its failure probability, and the variance of the estimate is calculated with the formula for binomial distributions: for $N$ episodes and an estimated failure probability $\hat{p}$, the standard error is $\sqrt{\hat{p}(1-\hat{p})/N}$.\\
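Both performance estimates reduce to textbook statistics; a short sketch (with hypothetical helper names) is:
\begin{verbatim}
import numpy as np

def phonon_statistics(n_samples):
    """Sample mean of <n> and the standard error of the mean."""
    n = np.asarray(n_samples, dtype=float)
    return n.mean(), n.std(ddof=1) / np.sqrt(len(n))

def failure_statistics(n_failed, n_episodes):
    """Failure probability and its binomial standard error."""
    p = n_failed / n_episodes
    return p, np.sqrt(p * (1.0 - p) / n_episodes)
\end{verbatim}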
\subsection{Reinforcement Learning Implementation}\label{quadraReinforcementImplementation} In this section we describe how neural networks are trained to learn control strategies. We use deep Q-learning, and since we mainly follow Section \ref{deep reinforcement learning implementation}, which is explained there in detail, we only discuss the settings specific to the problems considered here. \\ We use PyTorch \cite{Pytorch} as our deep learning library; any settings not mentioned in this thesis are left at their default values. For example, we use the default random initialization for neural networks provided by PyTorch, and we do not manually set random seeds for the main process. As required, we sample random numbers in the main process to set the random seeds of the subprocesses, which interact with the environment as reinforcement learning actors to accumulate experiences. \\ \subsubsection{Neural Network Architecture} The network architecture is largely determined by the structure of its input, and therefore we first need to decide what the input to the neural network is. We consider three cases. The first is to use the quintuple $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle,V_x,V_p,C\right )$ as the input, because this quintuple completely describes a Gaussian state, and it is in fact already more than enough to determine the optimal control. The second is to use the raw values of the wavefunction as the input, i.e. the real and imaginary parts of all of its components on the chosen eigenbasis. The third is to use the measurement outcomes as the input. One or two measurement outcomes are not sufficient to determine the properties of the current state, so we need to input many outcomes that are sequential in time. In this case, the neural network is either a convolutional or a recurrent neural network; otherwise it cannot process so many sequentially ordered values. Roughly speaking, we need the neural network to learn an appropriate coarse-graining strategy for such a large number of measurement outcomes. \\ For the first and second cases, the neural network is a 4-layer fully connected feedforward network with internal layer sizes (512, 256, 256+128, 21+1), where the final two layers are separated into two branches of computation as suggested in Ref.~\cite{DuelDQN} (see Sec.~\ref{Duel DQN section}). The 21 outputs predict the deviations of the action values from their mean for the different control choices, and the remaining single output predicts the mean action value. For the third input case, the neural network is a 6-layer feedforward network whose first three layers are 1-dimensional convolutional layers, followed by fully connected layers. The (kernel size, stride) pairs of the three convolutional layers are (13,5), (11,4) and (9,4), with (32, 64, 64) filters\footnote{We found experimentally that larger layers do not necessarily perform better, probably due to increased noise. Also, we cannot apply batch normalization between the convolutional layers, because in a reinforcement learning task the training target is not static.}. The three fully connected layers have hidden sizes (256, 256+128, 21+1), similar to the network of the previous cases. Following Ref.~\cite{NoisyDQN}, the last two layers of our networks are constructed as noisy layers (see Sec.~\ref{Noisy DQN}). Note that the measurement outcomes alone are not sufficient to predict the state; the control forces previously exerted on the state are also needed. We therefore input the forces and the measurement outcomes in parallel, as a time sequence. The input sequence contains the force and measurement-outcome history of the last $6\times\frac{1}{\omega_c}$ of time for the cooling harmonic oscillator problem and of the last $4\times\frac{1}{\omega_c}$ for the inverted oscillator problem, a difference due to the different measurement strengths $\gamma$ in the two problems.\\ As discussed in Section \ref{reinforcement learning}, to implement Q-learning we need to define the action choices of the control, which are supposed to form a discrete set. However, so far we only have an allowed interval of control force values. Therefore, we discretize this interval into 21 equispaced values as our 21 control force choices, and then perform Q-learning as introduced in Section \ref{reinforcement learning}, in the same way for the harmonic and the inverted oscillator problems.\\
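A minimal PyTorch sketch of the dueling network for the quintuple input case is shown below; for brevity, the noisy layers used for the last two layers are replaced here by ordinary linear layers.
\begin{verbatim}
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling Q-network for the input (<x>, <p>, Vx, Vp, C).
    Layer sizes follow the text, (512, 256, 256+128, 21+1); the noisy
    layers of the actual networks are replaced by nn.Linear here."""
    def __init__(self, n_actions=21):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(5, 512), nn.ReLU(),
                                   nn.Linear(512, 256), nn.ReLU())
        self.adv_head = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                                      nn.Linear(256, n_actions))
        self.val_head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                      nn.Linear(128, 1))

    def forward(self, s):
        h = self.trunk(s)
        adv = self.adv_head(h)
        val = self.val_head(h)
        # Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)  (dueling combination)
        return val + adv - adv.mean(dim=-1, keepdim=True)

q = DuelingQNet()
print(q(torch.zeros(1, 5)).shape)   # torch.Size([1, 21])
\end{verbatim}
Subtracting the mean advantage makes the decomposition into a state value and action advantages identifiable, as in Ref.~\cite{DuelDQN}.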
\subsubsection{Training Settings} As we use many deep reinforcement learning techniques, the hyperparameter settings specific to those techniques are collected in Appendix \ref{experiment details appendix}. In this subsection we only describe the settings and strategies relevant to the discussion presented so far.\\ First, to optimize the neural network with the training loss, we use the RMSprop algorithm \cite{RMSprop} with an initial learning rate of $2\times10^{-4}$ and a momentum of 0.9, and the training minibatch size is 512. The discount coefficient $\gamma$ that serves as the time horizon of Q-learning in Eq.~(\ref{Q iteration}) (not to be confused with the measurement strength) is 0.99, which means that future rewards are discounted by a factor of 0.99 per control step. As $\frac{1}{1-0.99}=100$ and we have 36 controls per oscillation, the time horizon of the controller is around $10\times\frac{1}{\omega_c}$. We assume that this is sufficient for learning the tasks, i.e. that a control does not influence the behaviour of the state after 5 periods of oscillation.\\ The reward of reinforcement learning is set to the control loss with a negative sign. For the cooling harmonic oscillator problem, because the value of $\langle\hat{n}\rangle$ decreases to a small number when the state is cooled, we multiply the loss by 5, and we also shift it by 0.4 so that it is closer to zero and can be either negative or positive. However, for the second and third input cases, where the learning is more difficult, we find that such large loss values make the learning process noisy and unstable; as a result $\langle\hat{n}\rangle$ never decreases to a small value and the learning does not proceed. Therefore, for the second and third input cases we only rescale the loss by 2 and shift it by 1. In addition, we stop the simulation whenever $\langle\hat{n}\rangle>10$ for the second input case or $\langle\hat{n}\rangle>20$ for the third. With these modifications, the learned loss values remain reasonably small and the learning proceeds. Since the learned states always have low energy, in the wavefunction input case we only use the components on the first 40 energy eigenstates as the input. \\ Regarding the inverted oscillator problem, we find experimentally that the learning is unstable if we give a negative reward only at the moment when the simulation fails. This is probably due to the stochasticity of the system: the noise may push a state at the edge of failure either to fail or to turn back, which makes the training loss very large because of the unexpected behaviours of the states. A reward of this kind is said to be sparse. To alleviate this problem, we add a small portion of negative $\langle\hat{n}\rangle$ to the reward to facilitate the learning, multiplied by a factor of 0.02, while the original reward of a failure event is $-10$.\\ We use a replay memory to store the accumulated experiences of our reinforcement learning actors, with a size corresponding to a few hundred to a few thousand episodes of length $t_{\text{max}}$. Because the input sizes of the three input cases differ and computer memory is limited, the replay buffer sizes also differ: they are 6000, 4000 and 400 episodes for the three cases, respectively. As a compromise, when new experiences are stored into the replay memory, we first evict the unimportant experiences, i.e. those with low training loss values, so that important pieces of memory are preserved. We also use prioritized sampling during training as in Ref.~\cite{prioritizedSampling}, which is described in more detail in Secs.~\ref{Prioritized Replay} and \ref{Prioritized replay setting}.\\ To encourage the exploration of different control possibilities, the reinforcement learning actors take actions with the $\epsilon$-greedy strategy \cite{DQN}: an action is taken randomly with a small probability $\epsilon$, and with probability $1-\epsilon$ the action with the highest expected return is taken (see Sec.~\ref{target network section}). This probability $\epsilon$ is 40\% at the beginning of training, then decreases rapidly to 2\%, and after several further stages of decrease it is suppressed to 0.01\% near the end of training.\\
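In code, the action selection and the decay schedule take roughly the following form (the stage boundaries below are illustrative placeholders; only the values 40\%, 2\% and 0.01\% follow the text):
\begin{verbatim}
import numpy as np

def epsilon_greedy(q_values, eps, rng):
    """Random action with probability eps, otherwise the greedy one.
    rng is a numpy Generator, e.g. np.random.default_rng()."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def epsilon_schedule(step,
                     stages=((0, 0.40), (20_000, 0.02), (200_000, 0.0001))):
    """Piecewise-constant decay; the boundaries here are made up."""
    eps = stages[0][1]
    for boundary, value in stages:
        if step >= boundary:
            eps = value
    return eps
\end{verbatim}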
The number of simulated episodes used for training is around 10000 for the problem of cooling a harmonic oscillator and around 30000 for the problem of stabilizing an inverted oscillator. During the first few thousand episodes, the simulation of the inverted problem in fact fails quickly, within an episode length of around $1\sim2\times\frac{1}{\omega_c}$, and only the later episodes last longer, which shows that this is indeed a difficult problem. \\ \section{Comparison of Reinforcement Learning and Optimal Control}\label{quadra comparison} \subsection{Performances}\label{quadra performance} \begin{table}[tb] \centering \begin{tabular}{cp{8em}p{7.8em}p{8.8em}} \toprule & & cooling harmonic oscillators & stabilizing inverted oscillators \\ \midrule \multirow{3}{3.7em}{Network Input} & $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle,V_x,V_p,C\right )$ & $0.3252\pm0.0024$ & $85.4\%\pm1.1\%$ \\ \cmidrule{2-4} & wavefunction & $0.3266\pm0.0040$ & $73.3\%\pm1.4\%$ \\ \cmidrule{2-4} & measurement outcomes & $0.4505\pm0.0053$ & $0.0\%$\linebreak {\small ($7.2\%\pm1.2\%$ provided with examples)} \\ \midrule \multicolumn{2}{c}{optimal control} & $0.3265\pm0.0032$ & $89.8\%\pm1.0\%$ \\ \bottomrule \end{tabular} \caption{Performance results for the problems of cooling harmonic oscillators and stabilizing inverted oscillators. The numbers behind the $\pm$ signs are the estimated standard deviations of the reported means. As discussed in Sec.~\ref{performance evaluation}, the values reported for the cooling problem are the average phonon numbers $\langle\hat{n}\rangle$, and those for the inverted problem are the success rates per episode, i.e., one minus the probability of failure.} \label{quadra results} \end{table} \subsubsection{Cooling Harmonic Oscillators} We now compare the performances of the trained neural networks described above with those of our derived optimal controls. For the problem of cooling a harmonic oscillator, the training was completed within one day on a Titan X GPU with 10-process parallelization running in Python, for each of the three input cases. The resulting performances in terms of $\langle\hat{n}\rangle$ are listed in Table \ref{quadra results} and compared with the optimal control. The performances, except for the measurement-outcome input case, agree within one standard error, and we therefore conclude that these controllers perform approximately the same. For completeness, we also calculate analytically the theoretical optimal performance for this problem when unbounded control is available. Assuming that $\langle\hat{p}\rangle=-\sqrt{mk}\langle\hat{x}\rangle$ always holds\footnote{From the perspective of the LQG theory, this condition is obtained in the small-control-cost limit.}, the time evolution of $\langle\hat{x}\rangle$ becomes an Ornstein–Uhlenbeck process, for which the stationary expectation $E\left[\langle\hat{x}\rangle^2\right]$ can be computed (for an Ornstein–Uhlenbeck process $dx=-\theta x\,dt+\sigma\,dW$, the stationary variance is $\sigma^2/2\theta$), and the loss given in Eq.~(\ref{harmonic optential original loss}) can then be evaluated. The result is $\langle\hat{n}\rangle\approx0.2565$, which is smaller than the values above. This discrepancy comes from the finite duration of our allowed control steps.
If the number of control steps $N_{\text{con}}$ per time $\frac{1}{\omega_c}$ is increased to 72, the performance of our optimal control is evaluated to be $\langle\hat{n}\rangle\approx0.2739\pm0.0047$, which is closer to the theoretical value and shows that our optimal control strategy is indeed valid. Therefore, by the comparison with the optimal control in Table \ref{quadra results}, we can argue that our reinforcement learning is successful. Note that $\langle\hat{n}\rangle$ can never be reduced to zero, because the position measurement squeezes the wavefunction in space and the term $\langle\hat{n}\rangle_{\rho_0}$ in Eq.~(\ref{energy representation}) is non-zero. \\ As for the measurement-outcome-based input case, the lower performance may result from the intrinsic stochasticity of the measurement outcomes, which disturbs the process of learning. In deep learning practice, random noise injected into training data is typically found to decrease the final performance, and the measurement input case may be affected similarly, possibly through the interplay among the noise, the deep network structure and the gradient-based optimization. There seems to be no straightforward solution, and it may simply be difficult for reinforcement learning to learn from measurement outcomes alone.\\ \subsubsection{Stabilizing Inverted Oscillators} Regarding the inverted harmonic potential problem, as the system is considerably more stochastic, a neural network is trained for three days for each input case. The resulting performances, given in Table \ref{quadra results}, are represented as success rates, i.e., one minus the probability of a failure event occurring in an episode. We find that only the first input case achieves a performance comparable to the optimal control, and that the performance roughly decreases with the increasing difficulty of learning across the three input cases. To confirm that the measurement-outcome-based network can really learn, we add 100 episodes controlled by the optimal control into its replay memory as examples, so that during training the network learns both from its own experiences and from the examples given by the optimal control. In this case, the network performed better, achieving a success rate of $7.2\%$ at the end of training, which implies that this network can in principle accomplish the given task, though with much more difficulty. \\ To make a fairer comparison between the optimal control and the reinforcement learning controller, we consider a discretized version of the optimal control whose control forces are discretized in the same way as for the reinforcement learning controller: we round the outputs of the optimal control to the nearest of the 21 equispaced force choices used by the neural network, and then apply those forces as the control. The performance of this discretized controller is evaluated to be $89.3\%\pm1.0\%$, within one standard deviation of the continuous controller, which demonstrates that the discretization does not significantly decrease the controller's performance, and that the performance of the reinforcement learning is not restricted by the discretization.\\
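Both this discretized variant and the bang-bang variant considered next amount to a simple mapping applied to the continuous optimal control force, sketched as:
\begin{verbatim}
import numpy as np

def snap_to_grid(F, F_max, n_choices=21):
    """Round a continuous force to the nearest of n_choices equispaced
    values in [-F_max, +F_max], the grid the Q-network chooses from."""
    grid = np.linspace(-F_max, F_max, n_choices)
    return float(grid[np.argmin(np.abs(grid - F))])

def bang_bang(F, F_max):
    """Keep only the direction of the continuous control: output the
    maximal force to the left or to the right."""
    return F_max if F > 0 else -F_max
\end{verbatim}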
Finally, we consider an even more strongly discretized control strategy, the bang-bang protocol, which always outputs one of the extremal forces: in our case, either the maximal force to the left or the maximal force to the right. If the optimal control outputs a force to the left, its bang-bang variant outputs the maximal force to the left, and vice versa. We then test the performance of this bang-bang variant. Its performance is $71.0\%\pm1.4\%$, which is significantly lower. The bang-bang protocol always applies the control force in the correct direction, but its magnitude is larger than that of the optimal control, and the controlled system is therefore perturbed more strongly. This shows that the inverted oscillator system is quite sensitive: increased noise and disturbance reduce its stability. This bang-bang protocol is clearly a suboptimal control strategy, and both the best reinforcement learning control and our optimal control outperform it. \\ \subsection{Response to Different Inputs}\label{quadraInputResponse} To see in detail what the reinforcement learning has learned, we plot its output control force against different inputs $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle\right )$, fixing $\left (V_x,V_p,C\right )$ at their average values. The control force as a function of $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle\right )$ is shown in Fig.~\ref{fig:harmonic response small} for the harmonic oscillator problem and in Fig.~\ref{fig:inverted response small} for the inverted oscillator problem, compared with the optimal controls. The leftmost and rightmost tails are the regions where the state is both far from the center and moving away quickly, in which case it is almost certain to fail; the central cliff of the plots is where the state stays most of the time. If the cliff were vertical, the control strategy would reduce to the bang-bang protocol.\\ \begin{figure}[tb!] \centering \includegraphics[width=0.4\linewidth]{"chapter4/baseline_harmonic"}\qquad\quad \includegraphics[width=0.4\linewidth]{"chapter4/DQN190705_harmonic_small"} \caption{The control force plotted against the $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle\right )$ input for the problem of cooling a harmonic oscillator. The left panel shows the force of the optimal control, and the right one shows the control force of a trained reinforcement learning actor.} \label{fig:harmonic response small} \end{figure} \begin{figure}[tb!] \centering \includegraphics[width=0.4\linewidth]{"chapter4/baseline_inverted"}\qquad\quad \includegraphics[width=0.4\linewidth]{"chapter4/DQN190705_inverted_small"} \caption{The control force plotted against the $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle\right )$ input for the problem of stabilization in an inverted harmonic potential. The left panel shows the optimal control, and the right one shows our trained reinforcement learning actor.} \label{fig:inverted response small} \end{figure} From these figures we see that the reinforcement learning controllers have learned controls qualitatively the same as the optimal controls, and we may therefore say that they have learned the underlying properties of the system. However, some defects exist in their control decisions, which may be attributed to noise and insufficient training. In particular, for the inverted oscillator problem, although the controller has 21 control choices, it prefers to output only 4 or 5 of them around the central cliff in Fig. \ref{fig:inverted response small}, even though all control choices are equispaced. This is shown more clearly in Fig. \ref{fig:inverted response big}. We suspect this to be an artefact of reinforcement learning.
The reinforcement learning actor may have discovered early in training when to use some of its control choices, and then converged to them rather than learning to use the others. This artefact might be suppressed by refining the training procedure, and the final performance could then potentially increase. One reason for this artefact is that even confirming that the performance of the neural network is lower than that of the optimal control requires one thousand episodes of simulation, which amounts to $2\times10^6$ control steps; similarly, it is not easy for the reinforcement learning itself to discover that its strategy can still be improved. Also, the time horizon of the control is set to $\frac{1}{10}$ of an episode, as discussed earlier, and it may therefore be hard for the reinforcement learning to learn a strategy that is optimal over such long-time controls.\\ \section{Conclusion}\label{quadra conclusion} In this chapter, we have shown that the problem of controlling a particle near the center of a quadratic potential under position measurement has optimal control solutions, and that a deep-reinforcement-learning-based controller can learn such a problem and control the system with a performance comparable to the optimal control. Moreover, the training of the reinforcement learning stays within a practical computational budget, a property that is not specific to the quadratic case. We expect that similar reinforcement-learning-based controllers can be trained along the same lines for other potentials for which no optimal or otherwise satisfactory control strategies are known, carrying over the same settings where possible. Therefore, we naturally move to the case of quartic potentials in the next chapter.\\ \begin{figure}[b!] \centering \includegraphics[width=0.9\linewidth]{chapter4/DQN190705_inverted} \caption{The control force of the trained reinforcement learning plotted against the $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle\right )$ input for the inverted harmonic potential problem. This is a finer and enlarged version of the right panel of Fig. \ref{fig:inverted response small}.} \label{fig:inverted response big} \end{figure} \section{Analysis of a Quartic Potential} In this section we show that a quartic potential cannot be simplified in the same way as a quadratic potential, by revisiting the arguments presented in Section \ref{analysis of quadratic}. To obtain reasonable control strategies nevertheless, we consider approximations to the time evolution of the system and present the controls derived from them, which serve as a comparison group of conventional control strategies. To make the properties of a quartic potential clear, we illustrate the behaviour of a particle inside a quartic potential with figures before going into the next section.\\ \subsection{Breakdown of Gaussian Approximation and Effective Description}\label{quartic breakdown} Following Section \ref{analysis of quadratic}, we consider the time evolution of the operators $\hat{x}$ and $\hat{p}$ in the Heisenberg picture for the Hamiltonian with a quartic potential:\\ \begin{equation} H=\frac{\hat{p}^2}{2m}+\lambda\hat{x}^4, \end{equation} \begin{equation}\label{quartic operator evolution} \frac{d}{dt}\hat{x}=\frac{i}{\hbar}[H,\hat{x}]=\frac{\hat{p}}{m},\qquad \frac{d}{dt}\hat{p}=\frac{i}{\hbar}[H,\hat{p}]=-4\lambda\hat{x}^3, \end{equation}\\ which involves a third-order term in $\hat{x}$. Therefore, the time evolution is not a simple linear transformation between the operators $\hat{x}$ and $\hat{p}$.
It is nonlinear, and therefore the shape of the Wigner distribution is not preserved. Moreover, since the Hamiltonian is more than quadratic, the time-evolution equation of the Wigner distribution differs from the classical Liouville equation in phase space, and it is known that no trajectory description of the Wigner distribution exists \cite{WignerAnharmonic}, which signals a non-classical effect.\\ We then consider the time evolution of the average momentum $\langle\hat{p}\rangle$. As shown in Eq. \nolinebreak(\ref{quartic operator evolution}), we have\\ \begin{equation}\label{quartic expected momentum evolution} \frac{d}{dt}\langle\hat{p}\rangle=-4\lambda\langle\hat{x}^3\rangle, \end{equation}\\ and if the Gaussian approximation holds, $\langle\hat{x}^3\rangle$ can be calculated as in Eq. (\ref{skewness Gaussian}) by exploiting the zero skewness of a Gaussian. We therefore check whether the skewness of a Gaussian state can always remain zero in a quartic potential. The time evolution of the skewness is\\ \begin{equation} \begin{split} d\left \langle\left (\hat{x}-\langle\hat{x}\rangle\right )^3\right \rangle&= \text{tr}\left (d\rho(\hat{x}-\langle\hat{x}\rangle)^3+\rho\, d(\hat{x}-\langle\hat{x}\rangle)^3\right )\\ &=\text{tr}\left (\frac{i}{\hbar}\rho\left [H,(\hat{x}-\langle\hat{x}\rangle)^3\right ] dt-3\rho\, (\hat{x}-\langle\hat{x}\rangle)^2 d\langle\hat{x}\rangle\right)\\ &=\text{tr}\left (\frac{i}{\hbar}\rho\left [\frac{\hat{p}^2}{2m},(\hat{x}-\langle\hat{x}\rangle)^3\right ] dt- \rho\,\dfrac{3\langle\hat{p}\rangle}{m} (\hat{x}-\langle\hat{x}\rangle)^2 dt\right )\\ &=\frac{3\left \langle(\hat{x}-\langle\hat{x}\rangle)\hat{p}(\hat{x}-\langle\hat{x}\rangle)\right\rangle}{m} dt- \dfrac{3\langle\hat{p}\rangle\langle(\hat{x}-\langle\hat{x}\rangle)^2\rangle}{m} dt\\ &=\frac{3\left \langle(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\right\rangle}{m} dt, \end{split} \end{equation}\\ which means that the skewness of the phase-space Wigner distribution in the $x$ direction depends on another skewness of it in the $(x,p)$ plane. We then calculate the time evolution of $\left \langle(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\right\rangle$:\\ \begin{equation} \begin{split} d\left \langle(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\right\rangle&=\text{tr} (d\rho(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)\\ &\qquad\quad+\rho\, d(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle) )\\ &=\frac{2\left \langle(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)\right\rangle }{m}dt\\ &\qquad\quad-4\lambda\left \langle\hat{x}^3(\hat{x}-\langle\hat{x}\rangle)^2\right \rangle dt+4\lambda \langle\hat{x}^3\rangle\left\langle(\hat{x}-\langle\hat{x}\rangle)^2\right \rangle dt\\ &=\frac{2\left \langle(\hat{p}-\langle\hat{p}\rangle)(\hat{x}-\langle\hat{x}\rangle)(\hat{p}-\langle\hat{p}\rangle)\right\rangle }{m}dt\\ &\qquad\quad-4\lambda\left \langle(\hat{x}^3-\langle\hat{x}^3\rangle)(\hat{x}-\langle\hat{x}\rangle)^2\right \rangle dt. \end{split} \end{equation}\\ The first term above is still a skewness, but the second term is nontrivial, because $\hat{x}^3-\langle\hat{x}^3\rangle\neq\left (\hat{x}-\langle\hat{x}\rangle\right )^3$.
This term is generally non-zero, as we now show by evaluating it for a Gaussian state. Expanding the product and taking expectations, we obtain\\ \begin{equation}\label{5th moment 1} \left \langle(\hat{x}^3-\langle\hat{x}^3\rangle)(\hat{x}-\langle\hat{x}\rangle)^2\right \rangle=\langle\hat{x}^5\rangle-2\langle\hat{x}^4\rangle\langle\hat{x}\rangle+2\langle\hat{x}^3\rangle\langle\hat{x}\rangle^2-\langle\hat{x}^3\rangle\langle\hat{x}^2\rangle. \end{equation}\\ Using the fact that a Gaussian distribution is centrally symmetric and therefore has vanishing odd central moments, similarly to the skewness, we have $\langle(\hat{x}-\langle\hat{x}\rangle)^5\rangle=0$ and therefore\\ \begin{equation} \langle\hat{x}^5\rangle=5\langle\hat{x}^4\rangle\langle\hat{x}\rangle-10\langle\hat{x}^3\rangle\langle\hat{x}\rangle^2+10\langle\hat{x}^2\rangle\langle\hat{x}\rangle^3-4\langle\hat{x}\rangle^5, \end{equation}\\ and Eq.~(\ref{5th moment 1}) becomes\\ \begin{equation}\label{5th moment 2} \left \langle(\hat{x}^3-\langle\hat{x}^3\rangle)(\hat{x}-\langle\hat{x}\rangle)^2\right \rangle = 3\langle\hat{x}^4\rangle\langle\hat{x}\rangle-8\langle\hat{x}^3\rangle\langle\hat{x}\rangle^2+10\langle\hat{x}^2\rangle\langle\hat{x}\rangle^3-4\langle\hat{x}\rangle^5-\langle\hat{x}^3\rangle\langle\hat{x}^2\rangle. \end{equation}\\ Using the fact that the excess kurtosis of a Gaussian distribution is zero, i.e. $\dfrac{\langle(\hat{x}-\langle\hat{x}\rangle)^4\rangle}{V^2_x}\nolinebreak =\nolinebreak3$, where $V_x$ is the variance of $x$, we have\\ \begin{equation}\label{kurtosis Gaussian} \langle\hat{x}^4\rangle=4\langle\hat{x}^3\rangle\langle\hat{x}\rangle-6\langle\hat{x}^2\rangle\langle\hat{x}\rangle^2+3\langle\hat{x}\rangle^4+3V_x^2, \end{equation}\\ and Eq. (\ref{5th moment 2}) becomes\\ \begin{equation}\label{5th moment 3} \begin{split} \left \langle(\hat{x}^3-\langle\hat{x}^3\rangle)(\hat{x}-\langle\hat{x}\rangle)^2\right \rangle &= 4\langle\hat{x}^3\rangle\langle\hat{x}\rangle^2-8\langle\hat{x}^2\rangle\langle\hat{x}\rangle^3+5\langle\hat{x}\rangle^5+9V_x^2\langle\hat{x}\rangle-\langle\hat{x}^3\rangle\langle\hat{x}^2\rangle\\ &= 4\left (3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3\right )\langle\hat{x}\rangle^2-8\left (V_x+\langle\hat{x}\rangle^2\right )\langle\hat{x}\rangle^3+5\langle\hat{x}\rangle^5\\ &\qquad\quad+9V_x^2\langle\hat{x}\rangle-\left (3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3\right )\left (V_x+\langle\hat{x}\rangle^2\right )\\ &= 6V_x^2\langle\hat{x}\rangle, \end{split} \end{equation}\\ where we used $\langle\hat{x}^3\rangle=3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3$ from Eq. (\ref{skewness Gaussian}) and $\langle\hat{x}^2\rangle=V_x+\langle\hat{x}\rangle^2$. Therefore, whenever the mean $\langle\hat{x}\rangle$ of the Gaussian distribution is not zero, the state gradually loses its Gaussianity in the quartic potential, and the Wigner distribution cannot remain Gaussian.\\ Besides the failure of the Gaussian approximation, a quartic system is known to be hard to analyse, since it corresponds to the 1-dimensional $\phi^4$ theory \cite{anharmonicphi4}. We therefore give up on analysing the system explicitly.\\
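The Gaussian moment identity above is easy to cross-check numerically; the following snippet (a sanity check, not part of the simulation code) samples a Gaussian distribution and compares the sample estimate of the left-hand side of Eq.~(\ref{5th moment 3}) with $6V_x^2\langle\hat{x}\rangle$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu, V = 1.3, 0.7                       # arbitrary mean and variance
x = rng.normal(mu, np.sqrt(V), 10_000_000)
lhs = np.mean((x**3 - np.mean(x**3)) * (x - np.mean(x))**2)
print(lhs, 6.0 * V**2 * mu)            # agree to sampling accuracy
\end{verbatim}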
\subsection{Approximate Controls}\label{quartic controllers} Although the state can no longer be described by only a few parameters, we still hope to obtain reasonable control strategies for it. In this subsection we discuss possible strategies.\\ If the system evolves deterministically, we can always find an appropriate control by simulating the state under the control and repeatedly refining the control so that the system evolves into a desired state. This local-search approach to control is widely used and has many variations and names, as introduced in Chapter \ref{Introduction} \cite{QuantumOptimalControlTheory, CRAB, GRAPE, EvolutionaryQuantumControl}. In engineering, famous examples include differential dynamic programming (DDP) \cite{DDP} and the iterative linear-quadratic-Gaussian method (ILQG) \cite{ILQG}. However, these methods have strong limitations. First, since they search over possible controlled evolutions of a given state, they must search once for each of $N$ different given states; if the state is not known in advance, the search has to be carried out immediately after the state is given, which is time-consuming and impractical as a realistic control strategy. Second, as these methods are based on local search, they can hardly be applied to systems containing noise: they investigate one specific trajectory of the system and refine that trajectory by back-and-forth iteration, which results in a control that is inapplicable to other realizations of the noise. For a different random realization of the noise, the controller has no idea how to control the system, and the control generally fails. The control problem in this setting is called a stochastic optimal control (SOC) problem \cite{AICO}. To deal with this, the ILQG algorithm treats the effect of noise as small deviations from the expected trajectory and handles the deviations perturbatively, but as mentioned in the original paper, this only works for small noise that does not significantly affect the overall evolution of the system \cite{ILQG}. In our case, the quartic system has noise-induced behaviour which makes an actual trajectory drastically different from the expected trajectory in the long term, and this perturbative ILQG method can therefore hardly apply. On the other hand, machine learning strategies such as the graphical-model-based approximate inference control (AICO) are often used to handle such stochastic control problems \cite{AICO}. Overall, if we want to cool a particle in a quartic potential under position measurement starting from a random initial state, there is in fact no existing well-established method, other than machine learning, that can exploit the properties of the system to control it.\\ In this situation, to obtain control strategies other than machine learning as a comparison group, we give up looking for globally or locally optimal solutions in the usual sense, and instead consider suboptimal controls based on approximations of the system. As before, we use an external force to control the system, i.e. a linear term $F_{\text{con}}\hat{x}$ added to the system Hamiltonian and parametrized by the control parameter $F_{\text{con}}$. \subsubsection{Damping Controller} As a first control strategy, we consider the time derivative of the system energy, which is\\ \begin{equation} \begin{split} dE=d\langle H\rangle=\text{tr}(d\rho\,H)&=\text{tr}(-\frac{i}{\hbar}[H+F_{\text{con}}\hat{x},\rho]H\,dt)=-\frac{F_{\text{con}}\langle\hat{p}\rangle}{m}dt,\\[6pt] &\quad H=\frac{\hat{p}^2}{2m}+\lambda\hat{x}^4. \end{split} \end{equation}\\ Therefore, whenever $\langle\hat{p}\rangle>0$ holds, a positive $F_{\text{con}}$ decreases the energy, and whenever $\langle\hat{p}\rangle<0$ holds, a negative $F_{\text{con}}$ does, just as for a classical system. This amounts to a steepest-descent method. In the numerical simulation, because we must use control steps of finite duration, the action of this controller should be set to remove the particle's momentum at each control step, i.e.
satisfying $\langle\hat{p}\rangle+d\langle\hat{p}\rangle=0$ with $dt$ taken to be the duration of the control step. In the numerical simulation, however, we find that such a strategy prevents the state from moving toward the center and keeps it at a high energy. Therefore, we use a damping strategy instead, i.e., $\langle\hat{p}\rangle+d\langle\hat{p}\rangle=(1-\zeta)\langle\hat{p}\rangle$ with $0\le\zeta<1$. We found experimentally that $\zeta=0.5$ gives the best performance in our problem setting, and we therefore use this value. We call this the damping controller. Note that for a quadratic potential this strategy is not optimal; it overdamps the system. \subsubsection{Quadratic Controller} As a second control strategy, we apply the usual linear-quadratic-Gaussian controller to the quartic system. Linear-quadratic-Gaussian controllers are often used in real situations where some quantity is to be minimized and the strict optimality of the control is not a concern \cite{LQGasApproximation}, and we simply follow this practice. To apply the linear-quadratic-Gaussian (LQG) theory we need to set a quadratic control target, define system variables, and linearise the system. After these steps, the LQG theory produces a controller which is optimal for the transformed system, and we use this controller to control the original system. In analogy to the harmonic problem in Chapter \ref{control quadratic potentials}, we use $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ as our system variables, which contain only partial information on the system, and we define the loss to be minimized as $L=\int \left (\frac{k}{2}\langle\hat{x}\rangle^2+\frac{\langle\hat{p}\rangle^2}{2m}\right )dt$, where the parameter $k$ is determined by a search, testing the controller's performance. Following the arguments presented in Chapter \ref{control quadratic potentials}, with discretized control steps the control force is determined such that $(\langle\hat{x}\rangle+d\langle\hat{x}\rangle,\langle\hat{p}\rangle+d\langle\hat{p}\rangle)$ satisfies the optimal trajectory condition $\langle\hat{p}\rangle=-\sqrt{km}\langle\hat{x}\rangle$ to first order in $dt$, with $dt$ being the duration of a control step. We call this the quadratic controller, because it only considers the observables that are sufficient in the quadratic problem. \subsubsection{Gaussian Approximation Controller} As a third control strategy, we use a Gaussian approximation of the state and establish the correspondence between $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$ of the quantum state and $(x,p)$ of a classical particle, as done for the quadratic system. By Eq. (\ref{quartic expected momentum evolution}), the evolution of $\langle\hat{p}\rangle$ depends on $\langle\hat{x}^3\rangle$, which involves the skewness and is not defined for a classical point-like particle. However, under a Gaussian approximation the state has zero skewness and $\langle\hat{x}^3\rangle=3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3$, as shown in Eq. (\ref{skewness Gaussian}). If we further assume that $V_x$ is constant, we obtain a classical particle with $(x,p)$ corresponding to $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$, where $p$ evolves according to $\dfrac{dp}{dt}=-12\lambda V_xx-4\lambda x^3$, i.e. in a static potential $V=6\lambda V_x x^2+\lambda x^4$.
Then, following the same Lagrangian argument as in Section \ref{quadra optimal control}, we derive an optimal control protocol for the classical particle in this quartic potential. In this way we can go beyond the quadratic problem. Using Eq. (\ref{kurtosis Gaussian}), the loss under the above Gaussian approximation is\\ \begin{equation}\label{quartic Gaussian approx loss} \begin{split} L&=\int\left (\frac{\langle\hat{p}^2\rangle}{2m}+\lambda\langle\hat{x}^4\rangle\right )\,dt\\ &=\int\left (\frac{\langle\hat{p}\rangle^2+V_p}{2m}+\lambda(4\langle\hat{x}^3\rangle\langle\hat{x}\rangle-6\langle\hat{x}^2\rangle\langle\hat{x}\rangle^2+3\langle\hat{x}\rangle^4+3V_x^2)\right )\,dt\\ &=\int\left (\frac{\langle\hat{p}\rangle^2+V_p}{2m}+\lambda(6V_x\langle\hat{x}\rangle^2+\langle\hat{x}\rangle^4+3V_x^2)\right )\,dt, \end{split} \end{equation}\\ where we take $V_p$ and $V_x$ to be constants. These assumptions are expected to hold when the position measurement on the system is strong, so that the Gaussian measurement backaction dominates the evolution of the shape of the state and the quartic potential has little effect on it. \\ To treat the loss function as a classical action, we interpret the integrand in Eq. (\ref{quartic Gaussian approx loss}) as a Lagrangian, with the potential term being $\mathcal{V}=-6\lambda V_x x^2-\lambda x^4$; the constant terms $\frac{V_p}{2m}$ and $3\lambda V_x^2$ do not affect the optimization. Following the same argument as in Section \ref{quadra optimal control}, the optimal trajectory should satisfy\\ \begin{equation} p=-\sqrt{2m(6\lambda V_x+\lambda x^2)}x, \end{equation}\\ so that the particle stops exactly at the top of the potential. We call this controller the Gaussian approximation controller.\\ To implement it in the numerical simulation, we proceed as before and put the state $(\langle\hat{x}\rangle,\langle\hat{p}\rangle)$ on the optimal trajectory at the end of each control step. Since the variance $V_x$ actually changes in time, we calculate the current variance at each step to determine the control.\\
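For concreteness, the three controllers of this subsection can be summarized by the following force rules, sketched to first order in the control-step duration. The sign convention $d\langle\hat{p}\rangle=(-4\lambda\langle\hat{x}^3\rangle-F_{\text{con}})\,dt$ follows from the Hamiltonian with the control term $F_{\text{con}}\hat{x}$, the Gaussian relation $\langle\hat{x}^3\rangle=3V_x\langle\hat{x}\rangle+\langle\hat{x}\rangle^3$ is used where indicated, and the first-order solves are our simplification of the implementation described above.
\begin{verbatim}
import numpy as np

def damping_force(x, p, Vx, lam, dt_c, zeta=0.5):
    """Damping controller: remove a fraction zeta of <p> per control step,
    <p> + d<p> = (1 - zeta) <p>, with d<p> = (-4 lam <x^3> - F) dt and
    <x^3> = 3 Vx <x> + <x>^3 (Gaussian approximation)."""
    x3 = 3.0 * Vx * x + x**3
    return zeta * p / dt_c - 4.0 * lam * x3

def quadratic_force(x, p, m, k, dt_c):
    """Quadratic (LQG-style) controller: pretend the system is harmonic
    with a tuned k and drive (<x>,<p>) toward <p> = -sqrt(km) <x>."""
    c = np.sqrt(k * m)
    return (c * x + p) / dt_c + c * p / m - k * x

def gaussian_force(x, p, Vx, m, lam, dt_c):
    """Gaussian-approximation controller: drive the state onto
    p = -sqrt(2m(6 lam Vx + lam x^2)) x, the trajectory that stops
    exactly at the top of the inverted effective potential."""
    x3 = 3.0 * Vx * x + x**3
    xn = x + p / m * dt_c
    p_target = -np.sqrt(2.0 * m * (6.0 * lam * Vx + lam * xn**2)) * xn
    return (p - p_target) / dt_c - 4.0 * lam * x3
\end{verbatim}
In the actual experiments, each of these forces is additionally clipped and snapped to the discrete force grid before being applied.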
\subsection{Behaviour of a Quartic Potential}\label{quartic potential demonstration} Before proceeding to the next section, we use examples from our numerical simulation to demonstrate the typical behaviour of a particle in a quartic potential. In the simulation, we initialize the state as a Gaussian wave packet near the center of the potential and let it evolve in time. First we investigate the deterministic case, with no position measurement imposed. The time evolution is plotted in Fig. \ref{wave evolution}. At the beginning, the wave is localized in space and oscillates inside the potential, with its center of mass moving back and forth. However, the wave slowly spreads over the bottom of the potential and delocalizes, and its center of mass gradually ceases to oscillate, as shown in Fig. \ref{quartic deterministic center of mass}.\clearpage \begin{figure}[p] \begin{tikzpicture} \node[inner sep=0pt] (wave evolution1) at (0,0) {\centering \subfloat[]{ \includegraphics[width=0.47\linewidth]{chapter5/Figure_2}}}; \node[inner sep=0pt] (wave evolution2) at (8.2,0) {\centering\subfloat[]{ \includegraphics[width=0.47\linewidth]{chapter5/Figure_3}}}; \draw[-stealth,thick] (wave evolution1.east) -- (wave evolution2.west); \node[inner sep=0pt] (wave evolution3) at (0,-6.8) {\centering\subfloat[]{ \includegraphics[width=0.47\linewidth]{chapter5/Figure_4}}}; \draw[-stealth,thick] (wave evolution2.south west) -- (wave evolution3.north east); \node[inner sep=0pt] (wave evolution4) at (8.2,-6.8) {\centering\subfloat[]{ \includegraphics[width=0.47\linewidth]{chapter5/Figure_5}}}; \draw[-stealth,thick] (wave evolution3.east) -- (wave evolution4.west); \node[inner sep=0pt] (wave evolution5) at (0,-13.8) {\centering\subfloat[]{ \includegraphics[width=0.47\linewidth]{chapter5/Figure_6}}}; \draw[-stealth,thick] (wave evolution4.south west) -- (wave evolution5.north east); \node[inner sep=0pt] (wave evolution6) at (8.2,-13.8) {\centering\subfloat[]{ \includegraphics[width=0.47\linewidth]{chapter5/Figure_7}}}; \draw[-stealth,thick] (wave evolution5.east) -- (wave evolution6.west); \end{tikzpicture} \caption{Deterministic time evolution in a quartic potential $V$ of a particle initialized as a Gaussian wave packet in (a); panels (a) to (f) are ordered chronologically, as indicated by the arrows. The grey arrows inside the panels show the propagation directions of the density wave, whose center-of-mass motion becomes more and more obscured as time elapses. The particle gradually delocalizes and loses its forward-backward oscillation in position space. In the plots, blue and orange curves show the real and imaginary parts of the wavefunction, red curves show the probability density distribution, and grey curves show the quartic potential.} \label{wave evolution} \end{figure}\clearpage \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{chapter5/Figure_1} \caption{Time evolution of the average position $\langle\hat{x}\rangle$ of the wavefunction in Fig. \ref{wave evolution} (orange curve). The units of time and position are consistent with those in the main text. As the state delocalizes, its center of mass gradually ceases to oscillate.} \label{quartic deterministic center of mass} \end{figure} This behaviour, different from that in quadratic potentials, is mainly due to the irregular shape of the potential. The bottom of the quartic potential is a plateau, where the wavefunction evolves almost freely. However, the two sides of the potential rise abruptly, so the wave front is strongly reflected at both ends and interferes with the tail of the wavefunction, as can be observed in the probability density plots in Figs. \ref{wave evolution}(b) and \ref{wave evolution}(c). As this process repeats, the wavefunction becomes more and more delocalized and tends to resemble a wavefunction in an infinite-well potential, and its center-of-mass motion disappears. This is contrary to the quadratic case, where the shape of the wavefunction is preserved and no interference emerges. \begin{figure}[b!] \centering \includegraphics[width=0.6\linewidth]{chapter5/Figure_8b} \caption{Time evolution of the average position $\langle\hat{x}\rangle$ of the wavefunction initialized as in Fig. \ref{quartic deterministic center of mass} but subject to continuous position measurement.
This figure is to be compared with Fig. \ref{quartic deterministic center of mass}. At the beginning, the wavefunction is initialized with a small variance in position and is affected less by the measurement; as it delocalizes, it experiences a stronger measurement effect and noise, and its center-of-mass oscillation is increasingly perturbed by the noise. In the probability density distribution of the wavefunction, a rough ``envelope'' can be observed, which signifies the center of mass.} \label{quartic stochastic center of mass} \end{figure}\\ On the other hand, when weak position measurement is imposed, the properties of the system change, as illustrated in Fig. \ref{quartic stochastic center of mass}. The position measurement shrinks the wavefunction and makes it imbalanced on the central plateau of the potential, and therefore the center of the wavefunction slowly oscillates under the noise, in contrast to the case of Fig. \ref{quartic deterministic center of mass}; without measurement, no such long-term oscillatory behaviour is observed. For this reason, the simple controllers described in Section \ref{quartic controllers} can indeed successfully cool the quartic system when position measurement is imposed. Otherwise the situation would be much less clear, because without the measurement-induced effects both $\langle\hat{x}\rangle$ and $\langle\hat{p}\rangle$ tend to shrink to zero even while the energy of the system remains high.\\ \section{Numerical Experiments}\label{quartic experiments} In this section we discuss our experimental settings for simulating the quartic system. Because the reinforcement learning implementation is almost the same as in the quadratic case in Chapter \ref{control quadratic potentials}, we only briefly discuss the parameter settings and choices. \begin{figure}[tb] \centering \subfloat[]{\label{quartic harmonic components} \includegraphics[width=0.48\linewidth]{chapter5/Figure_10}}\ \subfloat[]{\label{quadratic harmonic components} \includegraphics[width=0.47\linewidth]{chapter5/Figure_11}} \caption{Fractions of the components in the harmonic-oscillator eigenbasis for states in (a) quartic and (b) quadratic potentials. The states are subject to continuous position measurement. The ordinate is plotted on a log scale.} \label{components in harmonic basis} \end{figure}\\ \subsection{Simulation of the Quantum System}\label{quartic simulation} The numerical simulation of the quartic potential is carried out in discretized real space, because we find that, once the wavefunction has become non-Gaussian, it is inefficient to simulate it in the eigenbasis of a harmonic oscillator. In Fig.~\ref{components in harmonic basis}(a) we plot the norms of the high-energy components of the wavefunction when it is simulated in the eigenbasis of a harmonic oscillator, and compare them with those of a state evolving in a quadratic potential, plotted in Fig.~\ref{components in harmonic basis}(b). As the figure shows, for a state in a quadratic potential, which is effectively Gaussian, the norms of the components on the high-energy part of the harmonic basis decrease exponentially, seen as a straight line in the log-scale plot of Fig.~\ref{components in harmonic basis}(b). However, the components of a state in a quartic potential do not decrease exponentially, as shown in Fig.
\ref{components in harmonic basis}(a), and thus simulating the state using the harmonic basis is both inaccurate and inefficient. Therefore, we switch to the usual finite-difference time-domain (FDTD) method, which discretizes both space and time to simulate the wave function. Our parameters and choices are described as follows.\\ We use a central difference method to evaluate the derivatives of the wavefunction in position space, i.e.\ to approximate the operators $\dfrac{\partial}{\partial x}$ and $\dfrac{\partial^2}{\partial x^2}$. The central difference method we use involves 9 points in total, and has $O(d^8)$ errors for the first- and second-order derivative evaluations, with $d$ being the discretization grid size. The numerical evaluations of the derivatives are found to be fairly accurate by comparing the results with analytically calculated derivatives of Gaussian wave packets, with an error of around $10^{-5}$. Except for the discretization, for simplicity we keep most of our settings the same as those in the quadratic problems in Section~\ref{quadra experiments}, including the time step, the control step, the control force and the mass, and also the reference quantities $m_c$ and $\omega_c$ that define the units of mass and frequency. However, some other parameter settings need to be changed. As a starting point, we need to determine the quartic potential coefficient $\lambda$. To make the quartic system of a similar size to the quadratic system, we set the values of the harmonic potential and the quartic potential to be equal around position $x=3.5\left (\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$\footnote{The expressions in parentheses are physical units.}, and therefore we take the value $\lambda=\dfrac{\pi}{25}\left (\frac{m^2_c\omega^3_c}{\hbar}\right )$. Next, to avoid the numerical divergence problem that results from high-energy modes on the discretized space grid and large potentials, we restrict our simulated space to be from $x=-8.5\left (\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$ to $x=+8.5\left (\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$, and we discretize it using a step of $d=0.1\left (\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$. Note that because the quartic potential increases fast, the largest value of the simulated potential exceeds that of the harmonic potential at position $x=20\left(\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$, and therefore a properly controlled and cooled wavefunction does not have enough energy to come close to the border of the space, which we have also confirmed in our numerical simulation.\\ Even with such a coarse grid and moderate potential values, the numerical simulation still diverges after tens of thousands of time steps. To make minimal additional modifications to handle this divergence problem, we add higher-order correction terms of the Hamiltonian to our state-update equation of the numerical simulation. This is easily done by adding more Taylor-expansion terms $\sum_n\frac{1}{n!}(-{dt}\frac{i}{\hbar}H)^n|\psi\rangle$ with $dt$ being our simulation time step. In our case, we sum up to the $5$-th order correction, and the numerical divergence disappears. It is important to note that such a correction does not necessarily increase the precision of the simulation, because when adding these additional terms, we do not consider their interactions with other non-Hamiltonian terms that would result in values of similar orders. On the other hand, the space discretization also limits our simulation precision.
The additional higher-order corrections of the Hamiltonian are only intended to alleviate the numerical divergence, not to increase the overall precision. To avoid large numerical errors, we declare that the simulation has failed when its energy exceeds $20\ (\hbar\omega_c)$ or when the norm of the wavefunction exceeds $10^{-5}$ near the border of the space.\footnote{Specifically, we evaluate the value of the wavefunction on the 5th leftmost point and the 5th rightmost point of our space grid. This is because a point too close to the border is affected by the error coming from the finite-difference-based derivative estimation, and the wavefunction may not evolve correctly there.} When these conditions are not violated, we numerically confirmed that our simulation of the wavefunction has an error below $0.05\%$\footnote{This is confirmed by simulating the same wavefunction using finer discretization steps and then comparing the simulation results. We only evaluated the deterministic evolution of the state, i.e., $\gamma=0$.} for a simulation time of $10\times\frac{1}{\omega_c}$, i.e., the time horizon of the reinforcement learning control.\\ The parameters used are summarized as follows:\\ \begin{table}[hbt] \centering \begin{tabular}{ccccccc} \toprule $\lambda$ $\left (\frac{m^2_c\omega^3_c}{\hbar}\right )$ & $m$ ($m_c$) & $d$ $\left (\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$ & $x$ $\left (\sqrt{\frac{\hbar}{m_c\omega_c}}\right )$ & $dt$ $\left (\frac{1}{\omega_c}\right )$ & $\eta$ & $\gamma$ $\left( \frac{m_c\omega_c^2}{\hbar}\right) $ \\[8pt] $\dfrac{\pi}{25}$ & $\dfrac{1}{\pi}$ & $0.1$ & $[-8, +8]$ & $\dfrac{1}{1440}$ & 1 & $0.01\pi$ \\ \bottomrule \end{tabular}\\[8pt] \begin{tabular}{ccc} \toprule $F_{\text{con}}$ ($(m_c\hbar)^\frac{1}{2}\omega_c^\frac{3}{2}$) & $N_\text{con}$ & $t_{\text{max}}$ $\left (\frac{1}{\omega_c}\right )$ \\[8pt] $[-5\pi,+5\pi]$ & 18 & 100 \\ \bottomrule \end{tabular}\\ \caption{The parameter settings used in our quartic system simulations. The units are specified in parentheses, and we use the same reference quantities $m_c$ and $\omega_c$ as in Table \ref{quadra parameter settings}. When there is no unit, the quantity is dimensionless.} \label{quartic parameter settings} \end{table} In Table \ref{quartic parameter settings} above, the parameter $\lambda$ denotes the strength of our quartic potential, $m$ is the mass, $d$ is the discretization step of position space, $x$ represents the positions considered in our simulation, $dt$ is the time step, $\eta$ is the measurement efficiency, and $\gamma$ is the measurement strength. This measurement strength is much smaller than the ones used in the quadratic problems, because we want the wavefunction to be sufficiently non-local in our quartic potential. For the control part, $F_{\text{con}}$ is the external control force, $N_\text{con}$ is the number of control steps in a unit time $\frac{1}{\omega_c}$, and $t_{\text{max}}$ is the total time of a simulation episode. When we implement our derived conventional controllers as in Sec. \ref{quartic controllers}, we always discretize the control forces in the same way as the neural network controller before applying the forces, which is a strategy already mentioned in Sec.~\ref{quadra performance} concerning the discretized optimal control. \\
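To make the simulation scheme described above concrete, the following minimal Python sketch (our own illustration, not the code used to produce the results of this chapter) constructs the 9-point, $O(d^8)$ central-difference kinetic term and advances the state by one deterministic step using the truncated Taylor expansion of the evolution operator. The grid values mirror Table \ref{quartic parameter settings}, and the periodic wrap at the stencil boundaries is an extra assumption that is harmless only because the wavefunction is required to vanish near the borders.
\begin{verbatim}
import numpy as np

# Dimensionless units: hbar = m_c = omega_c = 1, as in the main text.
hbar, m = 1.0, 1.0 / np.pi
d = 0.1                                  # spatial discretization step
x = np.arange(-8.0, 8.0 + d / 2, d)      # simulated positions
lam = np.pi / 25                         # quartic potential coefficient
V = lam * x**4
dt = 1.0 / 1440                          # simulation time step

# 9-point central-difference stencil for the second derivative, O(d^8).
c2 = np.array([-1/560, 8/315, -1/5, 8/5, -205/72,
               8/5, -1/5, 8/315, -1/560]) / d**2

def apply_H(psi):
    """H psi = -(hbar^2/2m) psi'' + V psi on the grid (periodic wrap)."""
    d2 = sum(c * np.roll(psi, k - 4) for k, c in enumerate(c2))
    return -(hbar**2) / (2 * m) * d2 + V * psi

def taylor_step(psi, order=5):
    """One deterministic update: sum_n (1/n!) (-i dt H / hbar)^n psi."""
    term, out = psi.copy(), psi.copy()
    for n in range(1, order + 1):
        term = (-1j * dt / hbar) * apply_H(term) / n
        out = out + term
    return out
\end{verbatim}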
A final issue here is the initialization of the wavefunction. This is not a significant issue in the case of quadratic potentials, where the state quickly evolves into a Gaussian state with fixed covariances and follows simple time-evolution equations. However, as illustrated in Fig.~\ref{wave evolution}, a freely evolving wavefunction in a quartic potential is typically non-Gaussian, and therefore, in order to initialize a ``typical" state in the quartic potential, we use a two-step initialization strategy. First, we initialize the state as a Gaussian wave packet as in Fig.~\ref{wave evolution}(a). Second, we let the state evolve in the potential for a time $15\times\frac{1}{\omega_c}$ under the position measurement. If the resulting state has an energy below $18\ (\hbar\omega_c)$, we accept it as an initialized state that can be used in our simulation; otherwise we repeat the above initialization process to obtain other states. The initialization energy depends on the initial Gaussian wave packet, whose wave-number is randomly chosen from the interval $[-0.4,+0.4]\ \left (\sqrt{\frac{m_c \omega_c}{\hbar}}\right )$, and whose mean and standard deviation are set to 0 and $1\ \left (\sqrt{\frac{m_c\omega_c}{\hbar}}\right )$, respectively. The units are the same as those in Table~\ref{quartic parameter settings}, Fig.~\ref{wave evolution} and Fig.~\ref{quartic deterministic center of mass}. This initialization strategy produces initial state energies that range from $1$ to $18\ (\hbar\omega_c)$, with an average around $10\ (\hbar\omega_c)$. \\ When evaluating a controller's performance, since the oscillation behaviour is much slower than in the quadratic case, we sample the energy of the controlled state once per time $20\times\frac{1}{\omega_c}$ after the episode has run for a time $40\times\frac{1}{\omega_c}$, i.e., using 3 samples in each simulated episode. We then calculate the sample mean and the standard error.\\ \subsection{Implementation of Reinforcement Learning} The reinforcement learning system is implemented essentially in the same way as described in Sec.~\ref{quadraReinforcementImplementation} for the quadratic problems. One difference here is that we do not consider measurement-outcome-based controllers, since we know that they would result in lower performance, as discussed in Sec.~\ref{quadra comparison}.\\ As in the case of quadratic problems, we consider two cases for the neural network inputs. While the second case is simply to input the wavefunction, the first case is no longer to input the quintuple $\left (\langle \hat{x}\rangle,\langle \hat{p}\rangle,V_x,V_p,C\right )$, because we know that these five values are not sufficient to describe the state. To proceed further, we input higher-order moments of the phase-space quasi-distribution of the state beyond the covariances. Namely, we input the third-order central moments $\langle \left (\hat{x}-\langle\hat{x}\rangle\right )^3\rangle$, $\langle \left (\hat{x}-\langle\hat{x}\rangle\right )\left (\hat{p}-\langle\hat{p}\rangle\right )\left (\hat{x}-\langle\hat{x}\rangle\right )\rangle$, $\langle \left (\hat{p}-\langle\hat{p}\rangle\right )\left (\hat{x}-\langle\hat{x}\rangle\right )\left (\hat{p}-\langle\hat{p}\rangle\right )\rangle$ and $\langle \left (\hat{p}-\langle\hat{p}\rangle\right )^3\rangle$ into the neural network, which are called skewnesses, and similarly, we input all fourth- and fifth-order central moments. The resulting input contains 20 values in total, and this is used as the first input case for our quartic problem.
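As an illustration of how such moment inputs can be computed from the discretized wavefunction, the following sketch (ours, with hypothetical function names; the actual implementation may differ) evaluates central position moments directly from $|\psi|^2$, and a central momentum moment by repeated application of the momentum operator. The mixed, symmetrized $x$--$p$ moments listed above follow the same pattern with alternating operator applications.
\begin{verbatim}
import numpy as np

def central_x_moments(psi, x, d, orders=(3, 4, 5)):
    """Central position moments <(x - <x>)^k> of the state on the grid."""
    prob = np.abs(psi)**2 * d            # discretized |psi|^2 dx
    mean_x = np.sum(prob * x)
    return {k: np.sum(prob * (x - mean_x)**k) for k in orders}

def central_p_moment(psi, d, k=3, hbar=1.0):
    """<(p - <p>)^k> via repeated application of p = -i hbar d/dx.

    np.gradient is only a low-order stencil; it stands in for the
    9-point derivative used in the actual simulation."""
    P = lambda phi: -1j * hbar * np.gradient(phi, d)
    mean_p = np.real(np.sum(np.conj(psi) * P(psi)) * d)
    phi = psi.copy()
    for _ in range(k):                   # apply (p - <p>) k times
        phi = P(phi) - mean_p * phi
    return np.real(np.sum(np.conj(psi) * phi) * d)
\end{verbatim}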
For the wavefunction input case, as we know that the wavefunction near the borders of the simulated position space is less relevant and contains numerical error, we do not input wavefunction values near the border. Specifically, regarding the wavefunction amplitudes defined on the spatial grid, we discard 15 wavefunction values on both the left and the right border, and use the rest as the neural network input, separating the complex amplitudes into their real and imaginary parts. \\ The reward is defined to be $-2$ times the expected energy of the state in the quartic potential, shifted by $2$. As in the quadratic case in Sec.~\ref{quadraReinforcementImplementation}, the simulation is stopped when the energy is too high, with the threshold energy being $20\ (\hbar\omega_c)$. This keeps both the learned loss values and the numerical simulation error of the system reasonably small, as already mentioned in Sec.~\ref{quartic simulation}. The number of simulation episodes for training is set to 12000, and the replay memory sizes of the two input cases are set to contain 6000 and 2000 episodes of length $t_{\text{max}}$, respectively.\\ \section{Comparison of Reinforcement Learning and Conventional Control}\label{quartic performance section} In this section we present the performances of the reinforcement learning controllers and of the controllers described in Sec.~\ref{quartic controllers}. The results are summarized in Table~\ref{quartic results}. Unlike the quadratic case, we do not directly compare the behaviours of reinforcement learning controllers and conventional controllers as in Sec.~\ref{quadraInputResponse}, because the relevant inputs have become much more complicated and high-dimensional and can hardly be plotted.\\ \begin{table}[tb] \centering \begin{tabular}{cp{8em}p{6.5em}} \toprule & & cooling quartic oscillators \\ \midrule \multirow{2}{3.7em}{Network Input} & \raggedright phase space distri-\linebreak bution moments \linebreak (up to the fifth) & $0.7393\pm0.0003$ \tabularnewline \cmidrule{2-3} & wavefunction & $0.7575\pm0.0009$ \\ \midrule \multirow{3}{3.7em}{Derived Controls} & damping control & $0.7716\pm0.0024$ \\ \cmidrule{2-3} & quadratic control & $0.8211 \pm 0.0031$ \\ \cmidrule{2-3} & Gaussian approximation & $0.7562\pm0.0010$ \\ \midrule \multicolumn{2}{l}{Ground State Energy} & $0.7177$ \\ \bottomrule \end{tabular} \caption{Performance results for the problem of cooling a quartic oscillator. The numbers after the $\pm$ signs show the estimated standard deviations of the reported means. The values are the average energies of the controlled states in units of $\hbar\omega_c$.} \label{quartic results} \end{table} The damping controller uses a damping factor of 0.5, and the quadratic controller uses a parameter $k=2\times\lambda\cdot\frac{\hbar}{m_c\omega_c}$, both determined by a rough grid search over the possible values. The ground-state energy is calculated in our discretized and bounded position space by direct diagonalization of the system Hamiltonian.\footnote{For completeness, the first three excited states have energies of 2.5718, 5.0463 and 7.8816 in units of $\hbar\omega_c$. It can be seen that the controlled states are very close to the ground state.} In Table \ref{quartic results}, it can be seen that the first input case of the reinforcement learning controller performs best, clearly better than all the other controllers by many standard deviations.
Also, we find that the energy variance of its controlled states is much smaller than that of the other controllers, which means that the control is very stable; this results in the extremely small standard error given in Table \ref{quartic results}. Compared with the first input case, the second input case performs worse, probably because the neural network cannot easily evaluate the relevant physical observables from the wavefunction, as physical observables are quadratic in the wavefunction. This difficulty typically induces noise and small errors in the AI's decisions, and can slightly lower its final performance. Indeed, we find a larger variance of the energies of the controlled states, which supports this interpretation.\\ \section{Conclusion}\label{quartic conclusion} In this chapter, we have shown that the problem of cooling a particle in a one-dimensional quartic potential can be learned and solved well by a deep-reinforcement-learning-based controller, and that the controller can outperform all conventional non-machine-learning strategies that we have considered, which serves as an example of the application of deep reinforcement learning to physics. We have also found that when the relevant information in the input cannot be extracted by the neural network precisely and easily, the neural network typically performs slightly worse. This suggests that adapting and modifying the neural network architecture for specific physical problems may increase the final performance of the AI, which is a possible direction of future research on applying deep learning to physics. \\ On the other hand, we find that this cooling problem for quartic potentials does not demonstrate a significantly superior performance of deep reinforcement learning. This is probably due to the fact that all the controllers have cooled the states to energies that are very close to the ground-state energy, and, as can be observed in the experiments, all the resulting wavefunctions become very similar in shape, which suggests that a perturbative solution to the problem can probably work very well after all, and that the system eventually becomes quite static and simple. Therefore, the complexity of the properties of the quartic potential does not necessarily imply that conventional strategies cannot cool the system well, and in order to demonstrate a significantly superior performance of deep reinforcement learning, we need problems that are more complicated than this cooling task.
\chapter*{Abstract}\label{abstract} \addcontentsline{toc}{chapter}{\numberline{}Abstract} \input{abstract/abstract} \chapter*{Acknowledgements}\label{acknowledgements} \addcontentsline{toc}{chapter}{\numberline{}Acknowledgements} \input{abstract/acknowledgements} \tableofcontents \chapter{Introduction}\label{Introduction} \pagenumbering{arabic} \input{chapter1/chapter1} \chapter{Continuous Measurement on Quantum Systems}\label{continuous quantum measurement} \input{chapter2/chapter2} \chapter{Deep Reinforcement Learning}\label{deep reinforcement learning} \input{chapter3/chapter3} \chapter{Control in Quadratic Potentials}\label{control quadratic potentials} \input{chapter4/chapter4} \chapter{Control in Quartic Potentials}\label{control quartic potentials} \input{chapter5/chapter5} \chapter{Conclusions and Future Perspectives}\label{summary} \input{chapter6/chapter6} \renewcommand\chaptername{Appendix} \begin{appendices} \chapter{Linear-Quadratic-Gaussian Control}\label{LQG appendix} \input{"appendices/LQG_control"} \chapter{Numerical Simulation of Stochastic Differential Equation}\label{numerical simulation appendix} \input{"appendices/numerical_simulation"} \chapter{Details of the Reinforcement Learning Algorithm}\label{experiment details appendix} \input{"appendices/experimental_details"} \end{appendices}
\section{Introduction} Optical imaging is without any doubt one of the main tools to investigate galaxies and dark matter through weak and strong gravitational lensing. Because of the large available data sets, it is crucial to extract all information available in noisy data and to simulate images precisely, in order to calibrate the various methods and properly deal with possible biases. There is thus a pressing need to extract clean galaxy images from data. In particular, several studies have shown how all methods used to measure the ellipticity of galaxies require realistic simulations for their calibration \citep{viola11,bartelmann12,refrejer12,2012arXiv1204.5147M,massey13,gorvich16,bruderer16}. This issue is becoming pressing because of the stringent requirements posed by upcoming wide-field surveys such as the ESA Euclid space mission \citep{laureijs09} and the Large Synoptic Survey Telescope \citep{kaiser02}, among others. Galaxy models based on simple analytical recipes, for example the Sersic profile \cite{1968adga.book.....S}, have been widely used to this end \citep{heymans06,great08,kitching12}. These models have proven sufficient for ground-based observations, but more accurate simulations are now necessary to include the complex morphologies of spiral, irregular and cuspy galaxies. Also in the strong gravitational lensing regime, accurate galaxy models are now needed to investigate the use of substructures of strongly magnified galaxies to better constrain the mass distribution of lenses such as galaxy clusters \citep{meneghetti08,zitrin15}. For these reasons, galaxies observed with HST have been modelled with shapelets \citep{refregier03,refregier03b,massey03,massey07} to achieve noise-free images. Even if this approach deals with complex morphologies, artifacts may arise because of the oscillatory behavior of the shapelets. Moreover, even smooth galaxies such as ellipticals are not very well reconstructed by this approach because of their radial slope, which is not well matched by the Gaussian from which the Hermite polynomials of the shapelets are derived. Also, image cut-outs have been extracted from HST data \citep{rowe14,mandelbaum15}, but these stamps are affected by instrumental noise, limiting their applicability. In this paper, we present a method to retrieve and reconstruct clean galaxy images in a model-independent way, which also preserves the statistical properties of the reconstructed sample. We do this using Expectation Maximization Principal Components Analysis \cite{bailey12}, which we use to derive a set of orthonormal basis functions optimized for the specific data set to be processed. Other studies used standard PCA \citep{Jolliffe86} to model elliptical galaxies \citep{joseph14J}. However, these are galaxies with smooth morphology, and this method cannot deal with weights and missing data. In contrast, the procedure discussed in this work allows us to process astronomical images with masked areas and pixel-dependent variance, and it allows us to introduce regularization terms when deriving the principal components, to impose smoothness on the basis. \begin{figure} \centering \includegraphics[width=1.0\hsize]{./fig/image-galaxy-split} \caption{This schematic representation shows how a postage-stamp of a galaxy image is rearranged in the form of a vector, $\vec{d}_i$.
All rows of pixels in the matrix composing the image are simply concatenated.} \label {fig:subaru-train-cov} \end{figure} \begin{figure*} \centering \includegraphics[width=0.33\hsize]{./fig/magDistribution_str-gal_clash} \hfill \includegraphics[width=0.33\hsize]{./fig/magError_str-gal_clash} \hfill \includegraphics[width=0.33\hsize]{./fig/mag-size_str-gal_clash} \caption {Statistical properties of the sources detected in the simulation reproducing a Subaru image with 20 minutes of exposure time. A comparison with the real data is provided. From left to right, the panels show the magnitude distribution of the objects, their photometric error as a function of magnitude, and their size versus magnitude relation. } \label{fig:sim-stat} \end{figure*} As a test case, we analyzed $7038$ galaxies extracted from the \cite{rafelski15} catalog with redshifts up to $z<4.0$ and magnitudes up to $m_{F775W}<30$. The catalog contains all photometric information, including the photometric redshifts of the objects. We modeled these galaxies in all 5 optical bands extracted from the Hubble eXtreme Deep Field \citep[XDF hereafter,][]{illingworth13}. We tested the quality of the models by comparing their brightness moments against those measured with the weak-lensing Shapelens library \citep{2011MNRAS.412.1552M}. Moreover, we show how to use these models to construct realistic simulations of astronomical images. The structure of this paper is as follows: in Sec.~(\ref{sec:empca}) we derive the EMPCA basis, which is then used in Sec.~(\ref{sec:models}) to create the models of the galaxy images, together with the description of the analysis of the XDF data set; a simple sky simulation based on our models is presented in Sec.~(\ref{sec:simulations}); and the conclusions are given in Sec.~(\ref{sec:conclusions}). \section{A linear model for galaxy images}\label{sec:empca} In this section, we discuss how to model the images of individual galaxies to obtain a noise-free reconstruction. Let us consider the case in which we have one single object placed in the center of a postage-stamp. This cut-out can be modeled as \begin{equation} \label{eqn:datamodel} d(\vec{x})=g(\vec{x})+n(\vec{x}) \;, \end{equation} where $g(\vec{x})$ is the contribution of the object we are interested in, $n(\vec{x})$ that of the noise (e.g. photon noise, read-out noise, dark current), and $\vec{x}\in\mathbb{R}^{2}$ denotes the position in the image. For simplicity, we assume the noise to be uncorrelated with standard deviation $\sigma$, defined by $\langle n_i\,n_j \rangle =\sigma^2\,\delta_{ij}$. This 2-dimensional image $d(\vec{x})$, consisting of $n=n_x \times n_y$ pixels, can be represented as a data vector $\vec{d} \in \mathbb{R}^{n}$ whose elements $d(x_i)=d_i$ are the intensities of the $i$-th pixels. A visual impression of how pixels are rearranged into a vector is shown in Figure~(\ref{fig:subaru-train-cov}). The most general linear model to describe this data element is \begin{equation} \label{eqn:model} \tilde{d}(\vec{x})=\sum_{k=1}^M a_k \phi_k(\vec{x}) \;, \end{equation} where $\left\{\vec{\phi}_k \;\in\; \mathbb{R}^{n} \;\mid\; k=1,...,M\right\}$ is a collection of $M$ vectors. The goal now is to define an optimal set of vectors $\phi_k$ capturing the relevant signal and to sort them depending on their information content (power), such that each vector contains more information than the following one.
Once this is achieved, the sum can be split into two terms, \begin{equation}\label{eq:model-split} d(\vec{x})=\sum_{k=1}^M a_k \phi_k(\vec{x}) + \sum_{k=M+1}^{n} a_k \phi_k(\vec{x}) = \tilde{g}(\vec{x}) + \tilde{n}(\vec{x}) \;, \end{equation} where now $\tilde{g}(\vec{x})$ is the model of the object we are interested in, and $\tilde{n}(\vec{x})$ a term containing most of the noise and a small, and hopefully negligible, amount of information. The number of components, $M$, fixes the amount of information which is going to be kept in the model and the amount of noise which is going to be suppressed. Some information loss is inevitable, since otherwise one could fully recover the real image of the object, which is obviously an impossible task. A common way to achieve this decomposition is provided by the Principal Component Analysis \cite[PCA hereafter,][]{Jolliffe86}, which takes advantage of the entire data set, i.e. the postage-stamps of all galaxies in the sample, $\left\{\vec{d}_j \;\in\; \mathbb{R}^n \;\mid\; j=1,...,s\right\}$, and consists of finding the set of vectors, $\vec{\phi}_k\; \in\;\mathbb{R}^{n}$, minimizing the quantity \begin{equation}\label{eq:chisq} \chi^2 = \sum_{ij}^{n,s} \left(d_{ij}-\sum^n_{k=1}a_{kj}\phi_{ki}\right)^2 \;. \end{equation} In other words, we are looking for the model, based on all coefficients, $a_{jk}$, and vectors, $\vec{\phi}_k$, which best fits all images at once. Here and throughout the paper, the index $i$ runs over the number of pixels, $j$ over the number of galaxies, and $k$ over the number of components which, in the case of the PCA, is equal to the number of pixels. The coefficients of the $j$-th galaxy are derived with the scalar product $a_{kj}=\sum_id_{ji}\phi_{ki}$ or by linear fitting if needed. Equation~(\ref{eq:chisq}) can be easily generalized to account for correlated noise. \begin{figure} \centering \includegraphics[width=0.98\hsize]{./fig/simulation-subru-basis_gal_no_lab} \caption{First six principal components, $\vec{\phi}_i$, derived from the noisy simulated image. The components are rearranged as images. From the data emerge the main features related to dipolar and quadrupolar structures as well as radially symmetric ones.} \label{fig:subaru-basis} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\hsize]{./fig/simulation-subru-object_gal_neg_cut} \caption{One of the galaxies randomly extracted from the simulation, for which we show the signal and noise splitting: the original postage-stamp, $\vec{d}_j$, in the left panel; the model, $\vec{\tilde{g}}_j$, in the center; and the residuals, $\vec{\tilde{n}}_j$, on the right. } \label{fig:subaru-object} \end{figure} Usually, the principal components are found by diagonalizing the centered covariance matrix of the data and ordering its eigenvectors by decreasing eigenvalues, $\lambda_k$. Another way to find the solution for the minimum of Equation~(\ref{eq:chisq}) is through the Expectation Maximization Principal Components Analysis (EMPCA hereafter). This is an iterative algorithm which finds one basis component at a time. To find the first component $\vec{\phi}_1$, we start with a random vector (all its $n$ components have a random value) with which we compute the first coefficient of all objects through the scalar product previously defined, $a_{1j}=\sum_id_{ji}\phi_{1i}$.
These coefficients are then used to update $\vec{\phi}_1$, \begin{equation} \phi_1^{new}(\vec{x}) = \frac{\sum_j^s a_{1j} d_j(\vec{x})}{\sum_j^s a_{1j}^2} \;, \end{equation} which is renormalized for convenience and which now fits the data set better. The refined $\vec{\phi}_1^{new}$ vector is subsequently used to compute a new set of $a_{1j}$ coefficients necessary for the next iteration, and so on. This procedure converges toward the unique absolute minimum of the $\chi^2$ function, as demonstrated by \cite{srebro03}. In our case, this iterative process is stopped when the variation of the principal component, $\vec{\phi}_1$, is smaller than a certain value, $|\Delta\phi_1|<\epsilon$, which we set to $\epsilon = 10^{-6}$. The next principal component, $\vec{\phi}_{k+1}$, is found by applying the same procedure to the part of the signal which has not been captured by the previous $k$ components, \begin{equation} \phi_{k+1}^{new}(\vec{x}) = \frac{\sum_j^s a_{k+1 j}\, \tilde{n}_{j}(\vec{x})}{\sum_j^s a_{k+1 j}^2} \;. \end{equation} Here, $\tilde{n}_{j}(\vec{x}) = d_j(\vec{x})-\tilde{g}_{j}(\vec{x})$ is the residual part of the signal for which this component is evaluated, and $\tilde{g}_{j}(\vec{x})=\sum_{l=1}^k a_{lj} \phi_l(\vec{x})$ is the updated model of the $j$-th galaxy. In analogy to the derivation of the first component, the coefficients of the $j$-th galaxy, $a_{k+1\,j}=\sum_id_{ji}\phi_{k+1\,i}$, are based on the previous iteration, and $\vec{\phi}_{k+1}^{new}$ is renormalized each time. With this procedure we compute at the same time the principal components, $\vec{\phi}_k$, the noise, $\tilde{\vec{n}}_j$, and the signal estimate, $\tilde{\vec{g}}_j$, we are aiming at (see Equation~\ref{eq:model-split}). The procedure ensures the orthogonality of the principal components. The further advantages of this method with respect to a geometrical derivation of the principal components are that it allows one to take advantage of weights, masks and the noise covariance by including them in the $\chi^2$ function, and to impose further conditions on the basis, such as a regularization term to obtain smooth basis components \citep{bailey12}. \begin{figure} \centering \includegraphics[width=0.98\hsize]{./fig/simulation-subru-eigenvalues_gal} \caption{Power of the principal components for the simulation with Sersic galaxies (continuous line) and for the real Subaru image mimicked by the simulation (dashed line). The power has been normalized with respect to the total variance, $\sigma^2=\sum_k^n\lambda_k$. In both cases, the amplitude drops rapidly with the order. } \label{fig:subaru-eigenvalues} \end{figure} Here, we have derived smooth principal components by bilateral smoothing during their construction. This is an edge-preserving algorithm with an adaptive smoothing scale, which leaves unaffected those regions of the basis showing steep gradients that would otherwise be blurred. The bilateral smoothing implemented is based on the product of two Gaussians, one acting on angular scales as a normal convolution would, and another based on the local luminosity gradient, such that areas with ``sharp'' features do not get smoothed (if the gradient is small the angular convolution takes the lead, while it is made ineffective otherwise). Alternatively, we used a different regularization scheme based on a Savitzky--Golay filter and obtained similar results.
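A minimal NumPy sketch of this expectation--maximization update could look as follows (our own illustration, without weights or noise covariance; the \texttt{smooth} hook stands in for the bilateral or Savitzky--Golay regularization applied during construction, as emphasized in the next paragraph):
\begin{verbatim}
import numpy as np

def empca_basis(D, M, n_iter=200, eps=1e-6, smooth=None):
    """D: (s, n) array of postage-stamp vectors d_j.
    Returns an (M, n) array of orthonormal components phi_k,
    found one at a time on the residuals, as in the text."""
    R = D.astype(float).copy()            # signal not yet captured
    basis = []
    for _ in range(M):
        phi = np.random.randn(R.shape[1])
        phi /= np.linalg.norm(phi)
        for _ in range(n_iter):
            a = R @ phi                    # coefficients a_kj
            phi_new = (a @ R) / (a @ a)    # update of the component
            if smooth is not None:
                phi_new = smooth(phi_new)  # regularize during construction
            phi_new /= np.linalg.norm(phi_new)
            done = np.linalg.norm(phi_new - phi) < eps
            phi = phi_new
            if done:
                break
        basis.append(phi)
        R -= np.outer(R @ phi, phi)        # deflate the captured signal
    return np.vstack(basis)
\end{verbatim}
Given the basis, the model of a stamp $\vec{d}_j$ is then simply $\tilde{\vec{g}}_j = \sum_{k\le M} (\vec{d}_j\cdot\vec{\phi}_k)\,\vec{\phi}_k$.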
It is important to introduce this regularization during the basis construction and not thereafter (by smoothing the final basis or the models), to preserve the orthonormality of the basis and the information present in the data, and to make the basis more stable against noise fluctuations. Additionally, we reduced the scatter in the galaxy morphologies by rotating the input images by multiples of $\pi/2$, so that their position angles range between 0 and $90$ degrees. We chose such discrete rotations instead of aligning the galaxies along their major axes to avoid the introduction of any pixel correlations. \begin{figure*} \centering \includegraphics[width=0.48\hsize]{./fig/photo-distribution} \includegraphics[width=0.48\hsize]{./fig/redshift-distribution} \caption{Magnitude (left panel) and redshift distributions (right panel) of the sources with a maximum $\chi^2_{mod}$ of 4 for the BPZ redshift. The right panel shows the number of galaxies in the bins, split according to their F775W magnitude.} \label {fig:cat-photo-z} \end{figure*} In the following, we focus on the caveats of this approach which one should keep in mind to use it properly. First of all, the basis is derived from noisy data, and this will of course have an impact on it. This is relevant at higher orders, where the EMPCA has to deal with smaller and less prominent features approaching the noise regime, as is clear from the discussion of Eq.~(\ref{eq:model-split}). Second, the number of galaxies used to produce the basis is limited, and a small sample will not be representative of the entire population. On the other hand, wide-field surveys provide us with large data sets which even allow us to split the sample into subsets of galaxies with similar properties (such as size or ellipticity) and further reduce the data scatter for a better optimization of the basis. It is also preferable to compute the basis for the sample which has to be processed and not on another independent set of images, even one of comparable size and quality. After all, the basis components are evaluated by finding the optimal model of the specific data at hand. Third, the regularization used to impose smoothness on the basis might decrease the level of high-frequency features present in the data, if one wants to use it at all. In any case, one can always adapt the regularization which best suits the case at hand. One final remark about the point spread function (PSF): the PSF is not an issue for this approach because it aims at the image as it is. However, some care has to be taken if the models are going to be used for weak-lensing purposes, because the PSF ellipticity and distortions may leave an imprint on the derived basis and thus on the models based on it. It is easy to cope with this issue with the procedure adopted in this work: in fact, the rotation applied to the postage-stamps prevents any anisotropy which may be induced by the PSF. \section {Defining the number of components}\label{sec:sn-separation} As discussed in Section~(\ref{sec:empca}), the number of basis components, $M$, defines the amount of noise which is going to be suppressed and the amount of information which is going to be kept in the model. Its value depends on the specific task at hand. Various schemes for the definition of $M$ have been proposed in the literature, for instance through the analysis of the scree graph \citep{cattell66,cattell77} or the log-eigenvalue diagram (LEV) \citep{Farmer71,maryon79,beltrando90}.
These approaches are not very stable because they largely depend on specific features of these diagrams, which may not be well defined in certain cases. Other approaches rely on a $\chi^2$ statistic, \begin{equation}\label{eq:chisq_new} \chi_j^2 = \sum_{i=1}^n (d_{ij}-\tilde{g}_{ij})^2 \, \end{equation} applied to each individual galaxy (not the overall set!), which better suits our needs \citep{ferre90}. Below, we discuss two criteria aiming at different goals. \begin{enumerate} \item {\it Defining $M$ to minimize the model variance}: if we search for the model $\tilde{g}$ with the minimum variance, it is necessary to minimize the number of basis components. Here we seek the smallest $M$, which will be specific to each individual object, by including one component at a time until a certain convergence criterion is reached, for instance until we obtain a reduced $\chi^2$ close to unity or until the $\chi^2$ changes by less than a certain threshold (a minimal sketch is given after this list). Formally this criterion reads $\left\{\, \mbox{min}\{k\} \in \mathbb{N} : \chi^2_k -\chi^2_{k+1}\le t \,\right\}$, where $t$ is the threshold to be set. A note of caution is in order here: by construction, the $\chi^2$ is monotonically decreasing with the order (at order $n$ the $\chi^2$ will be zero), and under- or over-fitting might be an issue. Moreover, the basis components are sorted by their information content (the higher the order, the less relevant the component), but this sorting is based on the statistics of all objects in the sample and may not be proper for a specific object. It may happen that for a specific object one of the higher-order components is more relevant than lower-order components, and the convergence process may stop before this component is reached. \item {\it Defining $M$ to maximize model fidelity}: to ensure that no valuable signal is missed, it is necessary to include all relevant components, for instance by visually inspecting the basis or by finding the $M$ for which the expectation value of the global $\chi^2$ is minimal. This $M$ cannot be evaluated directly, but it can be approximated by \begin{equation}\label{eq:expectationval} \hat{f}_q=\sum_{k=q+1}^n \hat\lambda_k + \sigma^2 \left[ 2nq-n^2+2(n-q)+4\sum_{l=1}^q\sum_{k=q+1}^n\frac{\hat\lambda_l}{\hat{\lambda}_l-\hat{\lambda}_k} \right] \;, \end{equation} under the assumption of uncorrelated noise. Here, $\lambda_k$ is the power related to the $k$-th component. The variance can be estimated as $\sigma^2_q=1/(n-q)\sum_{k=q+1}^n\lambda_k$ and $\hat{\lambda}_k=\frac{n-1}{n}\lambda_k$ \citep{ferre90}. With this criterion, $M$ is the same for all objects because it is based on the statistics of the entire data sample, in contrast to what we discussed in point (i), where the $\chi^2$ was evaluated for each individual object. This approach returns the highest ``fidelity'' because the largest number of sensible components is used. \end{enumerate}
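As referenced in point (i), a minimal sketch of that stopping rule for a single postage-stamp could read as follows (our own illustration; whether the threshold $t$ is applied to the raw or to the reduced $\chi^2$ is left open in the text, so the normalization below is an assumption):
\begin{verbatim}
import numpy as np

def choose_M(d, basis, t=0.005):
    """Criterion (i): include components one at a time and stop when
    the chi^2 improvement drops below t. d: (n,) stamp vector;
    basis: (n_comp, n) orthonormal EMPCA components."""
    a = basis @ d                        # coefficients a_k
    model = np.zeros_like(d, dtype=float)
    chi2_prev = np.sum(d**2)             # chi^2 of the empty model
    for k in range(basis.shape[0]):
        model += a[k] * basis[k]
        chi2 = np.sum((d - model)**2)
        if chi2_prev - chi2 <= t:
            return k + 1                 # number of components kept
        chi2_prev = chi2
    return basis.shape[0]
\end{verbatim}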
\section {Simulation of ground-based observations}\label{sec:simulation_ground} To test the quality of the model reconstruction, we produced a set of simulations with {\it EasySky}. {\it EasySky} allows one to use any object contained in a postage-stamp image: Sersic galaxies, galaxies with a single or double Sersic component (bulge plus disc) in the same fashion as the Great08 \citep{great08,kitching12} simulations, and stars with a Moffat profile and arbitrary ellipticity (if stars are to be included). The objects can be placed in various ways: randomly across the whole field-of-view, on a regular grid with stars on one side and galaxies on the other, or on a regular grid with the stars located equidistantly from the galaxies. The galaxies can be (1) randomly rotated, (2) kept with their semi-major axes aligned along one direction, (3) kept within the same quadrant, or (4) produced in pairs rotated by 90 degrees with respect to each other. The latter configuration is a useful feature for weak-lensing calibrations. The fluxes and sizes of both galaxies and stars can be kept fixed or randomly distributed following a given luminosity function to better reproduce the data to be simulated. The image can be convolved with an arbitrary kernel, and a simple shear distortion can be applied to the galaxies. \begin{figure} \centering \includegraphics[width=1.0\hsize]{./fig/basis_4scales} \caption {The first four principal components for four different sets of galaxies. The rows, from top to bottom, refer to galaxies with semi-major axis, $A$, within the ranges $5<A<7$, $18<A<20$, $50<A<60$, and $70<A<80$ pixels, respectively. The larger the galaxies are, the larger is the typical size of the principal components and the larger is their morphological complexity.} \label{fig:basis} \end{figure} Here, we describe a simple but quite realistic synthetic image which we used to show the signal-to-noise splitting of Eq.~(\ref{eq:model-split}). We included $10\,000$ galaxies located on a $100\times100$ regular grid, but with the centroids displaced by a random shift within $1.5$ pixels. The galaxies are characterized by a Sersic profile \citep{1968adga.book.....S} with a fixed index $n=2$. They are convolved with a PSF described by a Moffat function with $\beta=4.8$, $\mbox{FWHM}=4.45$ pixels \citep{moffat69}, and a complex ellipticity $g_{PSF}=-0.019-i\,0.007$, adapted to the fiducial values adopted in Great08 \citep{great08}. The noise variance, the source fluxes, scale radii and ellipticities have been randomly distributed so as to resemble those of a stacked image obtained with the OmegaCam mounted on the Subaru telescope and 20 minutes of total exposure time in the $i^\prime$ filter. A comparison of the magnitude distribution, the photometric errors and the size-magnitude relation between the simulated images and a real Subaru image is shown in Fig.~(\ref{fig:sim-stat}). The simulation has been processed as a real image would be, i.e. detecting the sources, separating galaxies from stars with SExtractor \citep{1996A&AS..117..393B}, and creating postage-stamps sized $60\times60$ pixels for each object based on the measured astrometry. In this case, we did not apply any rotation to the galaxy images for simplicity. This will be done in the more sophisticated simulation discussed in Sec.~(\ref{sec:simulations}) below.
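For concreteness, a single stamp of this kind can be generated along the following lines (a simplified, circular sketch of ours, not the {\it EasySky} code itself; ellipticity, flux calibration and noise are omitted, and the approximation used for the Sersic constant $b_n$ is a standard one, not taken from the paper):
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def sersic_stamp(n=2.0, r_e=5.0, size=60):
    """Circular Sersic profile on a size x size pixel grid."""
    b_n = 2.0 * n - 1.0 / 3.0                 # common approximation for b_n
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    r = np.hypot(x, y)
    return np.exp(-b_n * ((r / r_e)**(1.0 / n) - 1.0))

def moffat_psf(fwhm=4.45, beta=4.8, size=25):
    """Circular Moffat PSF with the quoted fiducial parameters."""
    alpha = fwhm / (2.0 * np.sqrt(2.0**(1.0 / beta) - 1.0))
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    psf = (1.0 + (np.hypot(x, y) / alpha)**2)**(-beta)
    return psf / psf.sum()

# PSF-convolved, noise-free galaxy stamp:
stamp = fftconvolve(sersic_stamp(), moffat_psf(), mode='same')
\end{verbatim}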
The principal components, $\vec{\phi}_i$, obtained for this sample are shown in Fig.~(\ref{fig:subaru-basis}), where for visualization purposes only we inverted the process sketched in Fig.~(\ref{fig:subaru-train-cov}) to rearrange the vectors in the form of an image. It is interesting to note how the data deliver principal components with radially symmetric profiles ($w_1$ and $w_4$), dipolar ($w_2$ and $w_3$), and quadrupolar ($w_5$ and $w_6$) structures. Higher modes show hexapoles and more complex structures. It is easy to interpret these shapes: for instance, the circularly symmetric components take care of the average brightness profile, while the dipolar ones account for a large fraction of the object's ellipticity. In Fig.~(\ref{fig:subaru-eigenvalues}), we show the power of the principal components normalized by the total variance, $\sigma^2=\sum_k^n\lambda_k$. The continuous line represents the simulation, and the dashed line the real Subaru image mocked by the simulation. In both cases, the power of the components drops rapidly with their order, and the same kinks are visible in the curves. The drop in power is less dramatic for the real data because of the more complex morphology of the galaxies. At orders higher than $15$, there is a clear plateau for the simulated images, because here we enter the regime of uncorrelated random noise containing no further features. The real image lacks such a plateau because its noise is correlated, since the image results from stacking. \begin{figure} \centering \includegraphics[width=1.0\hsize]{./fig/models_neg2c} \caption {Four galaxies extracted from the XDF sample. From left to right, the columns show: the original image (with other objects present in the postage-stamp already removed), the model generated with the EMPCA, the model residuals, and the segmentation of the image we derived from the model. The images of the smaller galaxies have been enlarged for visualization purposes.} \label{fig:models} \end{figure} An example of the signal-to-noise splitting, $d(\vec{x})= \tilde{g}(\vec{x}) + \tilde{n}(\vec{x})$, discussed in Sec.~(\ref{sec:empca}), is displayed in Fig.~(\ref{fig:subaru-object}) where, from left to right, we show the original data $d(\vec{x})$, the model $\tilde{g}(\vec{x})$, and the residuals $\tilde{n}(\vec{x})$. The maximum order $M$ has been determined with the criterion $\left\{\, \mbox{min}\{k\} \in \mathbb{N} : \chi^2_k -\chi^2_{k+1}\le t \,\right\}$, setting $t=0.005$. As expected, the galaxy model is compact, i.e. it vanishes at a certain distance from the center of the galaxy, most of the noise is removed from the image, and the residuals are fully uncorrelated. A more quantitative assessment of the reconstruction quality, based on the brightness moments of the images, is discussed in Sec.~(\ref{sec:simulations}), where we deal with galaxies with complex morphologies. \section{Modeling the XDF galaxies}\label{sec:models} We now come to the full analysis of real data. Here, we processed the ACS/WFC stacked images of the Hubble eXtreme Deep Field \citep[XDF, hereafter, see][]{illingworth13}, which covers an area of $10.8$ arcmin$^2$ down to $\sim30$ AB magnitude ($5\sigma$). The images are drizzled with a scale of $0''.03$ pixel$^{-1}$ and have been obtained with the F435W, F606W, F775W, F814W, and F850LP filters for a total exposure time of $1177$ks. We used all objects listed in the UVUDF catalog which are classified as galaxies \citep{rafelski15}. To avoid artifacts and truncated objects, we discarded the objects in areas affected by the ghosts and halos of stars or close to the edges of the survey. In this way, we selected 8543 galaxies from an effective area of $9.20$ arcmin$^2$. We further cut the sample by rejecting all galaxies with an F775W magnitude larger than $30$, ending up with 7038 objects. The redshift and magnitude distributions of the sources with a maximum $\chi^2_{mod}$ of 4 for the BPZ redshift are shown in Fig.~(\ref{fig:cat-photo-z}).
\begin{figure} \centering \includegraphics[width=1.0\hsize]{./fig/xdf_empca_stamps_color_v2} \caption {A collection of galaxy models obtained with the EMPCA and based on the F435W, F606W, and F775W filters. The first and second top panels show the same objects displayed in Fig.~(\ref{fig:models}). The size of each box is $5.4$ arcsec.} \label{fig:color-stamps} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\hsize]{./fig/XDF_acsfwfc_compare_noisless-xdf-96min-cfhtlens2_wide_neg} \caption {Four simulations realized with {\it EasySky} displaying a field of $0.65\times0.65$ arcmin$^2$ observed with the F775W filter without noise (upper left panel), with XDF-like noise (upper right panel), with noise equivalent to one HST orbit (bottom left), and with a CFHTLens-quality image with a seeing of $0.7$ arcsec (bottom right panel). } \label {fig:comparison-clean-dirty} \end{figure} When computing the principal components, we split the sample into groups of galaxies with similar size and evaluate the basis for each of these subsamples separately. This yields a collection of basis sets, one per sub-sample, each optimized for galaxies of that specific size. If we used all galaxies at once, the features captured by the EMPCAs would be distributed over a larger number of components, because the range of sizes they have to reproduce would then be larger. To further reduce the amount of scatter in the data, we rotated the galaxies by $90$ deg, whenever necessary, to align them within the same quadrant. We did not align the galaxies' major axes along the same direction, to avoid the introduction of additional correlation among the pixels for a marginal improvement that could barely be justified. We finally take advantage of all bands by including all of them in the training sets. This further reduces the noise and enriches the number of features which can be reproduced with the same basis set. In Fig.~(\ref{fig:basis}), we plot the first four principal components computed for four different sub-sets of galaxies with semi-major axis, $A$, within the ranges $5<A<7$, $18<A<20$, $50<A<60$, and $70<A<80$ pixels for the first, second, third and fourth rows, respectively. The semi-major axis, $A=\sqrt{I (1-e) / \pi}$, was derived from the isophotal area, $I$, and the object ellipticity, $e$, which turned out to be a suitable choice for our purpose. As expected, the larger the galaxies in the training sample are, the larger is the typical scale of the EMPCA. Additionally, one can see that larger galaxies show principal components with more complex features because of their larger variety in morphology and substructure. Having computed the basis for each sub-sample, we use them to create the galaxy models as discussed in Sec.~(\ref{sec:empca}). Since some of the variance due to the noise is still present in the model, which is unavoidable in general, we set to zero all pixels with amplitude less than $t=\sigma/5$, where $\sigma$ is the pixel variance of the original image. This removes those areas of the image which, in fact, do not show any evidence of signal. The code to perform the overall analysis is called {\it EasyEMPCA}.
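This final cleaning step amounts to a simple cut (a sketch of ours; {\it EasyEMPCA} itself may implement it differently, and we read $\sigma$ here as the pixel noise level of the stamp):
\begin{verbatim}
import numpy as np

def clean_model(model, sigma, frac=0.2):
    """Zero all model pixels with amplitude below frac * sigma
    (frac = 1/5 in the text, sigma taken from the original stamp)."""
    out = model.copy()
    out[np.abs(out) < frac * sigma] = 0.0
    return out
\end{verbatim}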
\begin{figure*} \centering \includegraphics[width=0.48\hsize]{./fig/moments_F435W_nocut_2data_order16.png} \includegraphics[width=0.48\hsize]{./fig/moments_F606W_nocut_2data_order16.png} \includegraphics[width=0.48\hsize]{./fig/moments_F775W_nocut_2data_order16.png} \includegraphics[width=0.48\hsize]{./fig/moments_F850LP_nocut_2data_order16.png} \caption {Dispersion around the true value of the brightness moments of the models reconstructed with EMPCA (blue) and as measured with Shapelens (red).} \label {fig:moments} \end{figure*} Figure~(\ref{fig:models}) displays four galaxies belonging to the same four samples used to create the basis shown in Fig.~(\ref{fig:basis}). The images have been rescaled to better visualize their details. For each object we show, from left to right, the original image, the model, the residuals, and the segmentation used to remove nearby objects. The color images of the same galaxies are shown in the top left and central panels of Fig.~(\ref{fig:color-stamps}), together with additional examples. The color stamps have a field size of $5.4$ arcsec, are based on the F435W, F606W, and F775W filters, and show once more the range of sizes and morphologies which can be modeled. In this case, the galaxies are visualized without any rescaling. The residuals are compatible with the image noise except for very concentrated features, which are slightly missed because of the regularization scheme we adopted when constructing the basis functions which the models are based on (see Fig.~\ref{fig:models}). \section {Testing the brightness moments}\label{sec:simulations} In this section, we ``feed'' {\it EasySky} with the postage-stamps of the galaxy models created in Sec.~(\ref{sec:models}) and extracted from the XDF Survey with {\it EasyEMPCA}. In this case, the galaxies have been arranged on a regular grid, randomly flipped, and rotated by multiples of $90$ degrees. Their flux has not been changed, to produce an image as close as possible to the original data. Such image permutations do not affect the original quality of the stamps because no interpolation is involved in this process, as would happen when applying arbitrary rotations. In Fig.~(\ref{fig:comparison-clean-dirty}), we show a realization of a portion of $0.65 \times 0.65$ arcmin$^2$ field-of-view in the F775W band. The noise-free image is shown in the top left panel, while the other panels show three images with different levels of noise and resolution to resemble the XDF survey (upper right panel), one orbit exposure with HST (bottom left), and a CFHTLens stacked image with a seeing of $0.7$ arcsec (bottom right panel). We can now process these simulations like real images to verify and quantify the accuracy of the galaxy reconstructions. To do so, we first detect the objects with SExtractor and then apply {\it EasyEMPCA} with the complete procedure to derive the basis and the models (see Sec.~\ref{sec:empca}). It is important to note that even if the mock galaxies used in the simulations have been produced with the EMPCA, we took care to create a sample of galaxies which is as independent as possible from the original sample. This is why the galaxy images have been randomly flipped, rotated, and split into training sets differing, in number and components, from the training set used to create the simulations in the first place. Furthermore, the noise in the simulated images is not the same as that in the real data.
To have a quantitative assessment of the reconstruction quality, we measured the brightness moments, \begin{equation} G_{ij} = \int_{-\infty}^\infty d(\vec{x})\, x_1^i\, x_2^j \,\mathrm{d}^2x \;, \end{equation} of all galaxies in the noisy simulation, to quantify the deviations with respect to those expected in the noise-free images. In Fig.~(\ref{fig:moments}), we plot the scatter in the brightness moments, $\Delta G_{ij}=G_{ij}^{measured}-G_{ij}^{true}$, up to second order (as required for weak-lensing measurements). The moments, except $G_{00}$, have been normalized by the flux. We finally processed the same images with a well-established weak-lensing method for a direct comparison. To this end, we used the Shapelens library, which measures the brightness moments by iteratively matching the data to an elliptical weight function to maximize the signal-to-noise ratio \citep{2011MNRAS.412.1552M}. The brightness moments of the models based on the EMPCA (blue contours) are reproduced with an accuracy comparable to that achieved by Shapelens (red contours), proving the quality of the models.
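For reference, the unweighted moments defined above can be computed for a postage-stamp as follows (a sketch of ours; the pixel grid is centered on the stamp, and the flux normalization follows the convention stated above):
\begin{verbatim}
import numpy as np

def brightness_moments(stamp, order=2):
    """G_ij = sum d(x) x1^i x2^j for i + j <= order, with x measured
    from the stamp center; moments other than G_00 are flux-normalized."""
    ny, nx = stamp.shape
    x2, x1 = np.meshgrid(np.arange(ny) - (ny - 1) / 2.0,
                         np.arange(nx) - (nx - 1) / 2.0, indexing='ij')
    flux = stamp.sum()
    G = {}
    for i in range(order + 1):
        for j in range(order + 1 - i):
            m = np.sum(stamp * x1**i * x2**j)
            G[(i, j)] = m if (i == 0 and j == 0) else m / flux
    return G
\end{verbatim}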
\section{Conclusions}\label{sec:conclusions} We have described how optical images of galaxies can be fitted with an optimized linear model based on the Expectation Maximization Principal Components Analysis (EMPCA). This method relies on the data alone, avoiding any assumption regarding the morphology of the objects to be modeled, even if they have complex or irregular shapes. As a test case, we have analyzed the galaxies listed in the \cite{rafelski15} catalog, which covers the Hubble eXtreme Deep Field (XDF). We selected objects with magnitude $m_{F775W}<30$, far from the field edges and free of overlapping artifacts caused by the few stars present across the field. We collected 7038 postage-stamps of galaxies with redshifts up to $z=4.0$ and derived noise-free models for them. We have shown how the modeled galaxies represent the entire collection well, from small to large and from regular to irregular. Two codes have been implemented to this end: {\it EasySky} to create the simulations and {\it EasyEMPCA} to model the galaxies. The residuals appear uncorrelated except at very sharp features, because of the regularization scheme we adopted during the basis construction. To further verify the quality of the reconstructions, we simulated a set of galaxy images, with and without noise, covering the entire spectrum of shapes and luminosities of the objects present in the XDF. We processed the simulations with the same procedure applied to a real data set: we detected the objects with SExtractor, derived the EMPCA basis and fitted the data with the linear model based on this basis. We then measured the brightness moments up to second order of the model reconstructions and compared them to those of the noise-free simulations. The quality of the reconstructions competes very well with a well-established method to measure galaxy brightness moments, namely the iterative adaptive scheme implemented in Shapelens. The procedure discussed in this paper can be used to derive the properties of galaxies, such as their fluxes and shapes, or to create reliable simulations of optical images. In this respect, the accuracy of such simulations is gaining importance for the lensing community. For instance, in the strong-lensing regime, they are necessary to understand how substructures in strongly magnified galaxies can be used to access additional information on the lensing mass distribution, such as in galaxy clusters. In the weak-lensing regime, all methods to measure the ellipticities of galaxies require precise simulations for their calibration, on which the bias of such measurements, and of all quantities derived from them, depends. The method we discussed in this work appears to be a promising solution to create such simulations. \section*{Acknowledgments} We would like to thank Marc A. Rafelski and Anton M. Koekemoer for kindly providing us with the segmentation map of the UVUDF data, and Massimo Meneghetti for useful discussions. This work was supported by the Transregional Collaborative Research Centre TRR 33.
\section{Introduction and main results} Spaces with a rich family of retractions often occur both in topology and in functional analysis. For example, systems of retractions were used by Amir and Lindenstrauss to characterize Eberlein compact spaces \cite{amirLind} or by Gul$'$ko \cite{gul} to prove that a compact space $K$ is Corson whenever $\C_p(K)$ has the Lindel\"of $\Sigma$-property. This line of research continued for a long time (for a survey see e.g. \cite{kalendaSurvey} and Chapter 19 of \cite{kubisKniha}). The optimal notion of an indexed system of retractions in this area was defined in \cite{kubisSmall}. We slightly generalize this notion: we consider countably compact spaces, not only compact ones. \begin{definition}A \textit{retractional skeleton} in a countably compact space $X$ is a family of continuous retractions $\mathfrak{s} = \{r_s\}_{s\in\Gamma}$, indexed by an up-directed partially ordered set $\Gamma$, such that \begin{enumerate}[\upshape (i)] \item $r_s[X]$ is a metrizable compact for each $s\in\Gamma$, \item $s,t\in\Gamma$, $s\leq t \Rightarrow r_s = r_s\circ r_t = r_t\circ r_s$, \item given $s_0 < s_1 < \cdots$ in $\Gamma$, $t = \sup_{n\in\omega}s_n$ exists and $r_t(x) = \lim_{n\to\infty}r_{s_n}(x)$ for every $x\in X$, \item for every $x\in X$, $x = \lim_{s\in\Gamma}r_s(x)$. \end{enumerate} We say that $D(\mathfrak{s}) = \bigcup_{s\in\Gamma}r_s[X]$ is the \textit{set induced by the retractional skeleton} $\mathfrak{s}$ in $X$.\\ If $D(\mathfrak{s}) = X$ then we say that $\mathfrak{s}$ is a \textit{full retractional skeleton}. \end{definition} Let us point out that condition (i) in the definition of a retractional skeleton is equivalent to \begin{itemize} \item[(i')] $r_s[X]$ has a countable network for each $s\in\Gamma$. \end{itemize} Indeed, any metrizable compact has a countable network. Conversely, if $r$ is any retraction on $X$, then $r[X]$ is a closed subset of $X$, hence it is countably compact. If it has a countable network, it is Lindel\"of and hence compact. Finally, a compact space with a countable network is metrizable. Another type of structured system of retractions was recently introduced in \cite{roj14}. \begin{definition} A space $X$ is \emph{monotonically retractable} if we can assign to any countable set $A\subset X$ a retraction $r_A$ and a countable family $\N(A)$ of subsets of $X$ such that the following conditions are fulfilled: \begin{itemize} \item $A\subset r_A[X]$. \item The assignment $A\mapsto \N(A)$ is $\omega$-monotone, i.e., \begin{itemize} \item[(i)] if $A\subset B$ are countable subsets of $X$, then $\N(A)\subset\N(B)$; \item[(ii)] if $(A_n)$ is an increasing sequence of countable subsets of $X$, then $\N(\bigcup_{n=1}^\infty A_n)=\bigcup_{n=1}^\infty \N(A_n)$. \end{itemize} \item $\N(A)$ is a network of $r_A$, i.e. $r_A^{-1}(U)$ is the union of a subfamily of $\N(A)$ for any open set $U\subset X$. \end{itemize} \end{definition} Our main result is the following characterization of monotonically retractable countably compact spaces using the notion of a retractional skeleton. \begin{theorem}\label{t:charactFullSkeleton} A countably compact space is monotonically retractable if and only if it has a full retractional skeleton. \end{theorem} Since a compact space has a full retractional skeleton if and only if it is Corson \cite[Theorem 3.11]{cuthCMUC}, we get the following positive answer to Question 6.1 of \cite{tkaRoj}.
\begin{corollary}\label{t:charactCorson} A compact space is monotonically retractable if and only if it is Corson. \end{corollary} As another corollary we obtain the following result from \cite{roj14}. \begin{corollary}\label{c:roj14} Any first countable countably compact subspace of an ordinal is monotonically retractable. \end{corollary} Indeed, it is enough to observe that any first countable countably compact subspace of an ordinal admits a full retractional skeleton. To do that one can use the formula from \cite[Example 6.4]{kubisSmall}. Theorem~\ref{t:charactFullSkeleton} will be proved in Section 4; it is an immediate consequence of Theorem~\ref{t:main}. Corollary~\ref{t:charactCorson} is in fact easier; it follows already from Proposition~\ref{p:charactCorson}. Further, we apply our results to prove the following `noncommutative' analogues of the results of \cite{kalendaCharact}. These theorems provide answers to Problem 1 of \cite{cuthCMUC} and Problem 1 of \cite{kubisSkeleton}. The topological property sought in the quoted problems is `to be monotonically Sokolov'. This class of spaces was introduced and studied in \cite{tkaRoj}; we recall the definition in the next section. \begin{theorem}\label{t:answerCuth}Let $K$ be a compact space and $D$ be a dense subset of $K$. Then the following two conditions are equivalent: \begin{enumerate}[\upshape (i)] \item $D$ is induced by a retractional skeleton in $K$. \item $D$ is countably compact and $(\C(K),\tau_p(D))$ is monotonically Sokolov. \end{enumerate} \end{theorem} This theorem will be proved in the last section. The next one is its Banach-space counterpart. A projectional skeleton is the Banach-space analogue of a retractional skeleton; these notions are dual in a sense. For exact definitions and details see \cite{kubisSkeleton} or \cite{cuthSimul}. \begin{theorem}\label{t:answerKubis}Let $E$ be a Banach space and $D\subset E^*$ a norming subspace. Then the following two conditions are equivalent: \begin{enumerate}[\upshape (i)] \item $D$ is induced by a projectional skeleton. \item $D$ is weak$^*$ countably closed and $(E,\sigma(E,D))$ is monotonically Sokolov. \end{enumerate} \end{theorem} This theorem will be proved in the last section, using its more precise version, Theorem~\ref{t:skeletonIffXSokolov}. \section{Preliminaries} In this section we collect basic notation, terminology and some known facts which will be used in the sequel. We denote by $\omega$ the set of all natural numbers (including $0$), by $\en$ the set $\omega\setminus\{0\}$. If $X$ is a set then $\exp(X) = \{Y\setsep Y\subset X\}$. We denote by $[X]^{\leq\omega}$ the set of all countable subsets of $X$. All topological spaces are assumed to be Tychonoff. Let $T$ be a topological space. \begin{itemize} \item $\tau(T)$ denotes the topology of $T$ and $\tau(x,T) = \{U\in\tau(T)\setsep x\in U\}$ for any $x\in T$. \item A subset $S\subset T$ is said to be countably closed if $\ov{C}\subset S$ for every countable subset $C\subset S$. It is easy to check that a countably closed subset of a countably compact space is countably compact. \item A family $\N$ of subsets of $T$ is said to be a \emph{network} of $T$ if any open set in $T$ is the union of a subfamily of $\N$. \item If $A\subset T$, then a family $\N$ of subsets of $T$ is said to be an \emph{external network} of $A$ in $T$ if for any $a\in A$ and $U\in\tau(a,T)$ there exists $N\in\N$ such that $a\in N\subset U$.
\item If $Y$ is a topological space and $f:T\to Y$ is a continuous map, then a family $\N$ of subsets of $T$ is said to be a \emph{network of $f$} if for any $x\in T$ and $U\in\tau(f(x),Y)$ there exists $N\in\N$ such that $x\in N$ and $f[N]\subset U$. \item $\beta T$ denotes the \v{C}ech-Stone compactification of $T$. \end{itemize} For any topological spaces $X$ and $Y$ the set of continuous functions from $X$ to $Y$ is denoted by $\C(X,Y)$. We write $\C(X)$ instead of $\C(X,\er)$ and $\C_b(X)$ for the set of all bounded functions from $\C(X)$. By $C_p(X)$ we denote the space $\C(X)$ equipped with the topology of pointwise convergence (i.e., the topology inherited from $\er^X$). Moreover, if $D\subset X$ is dense, we denote by $\tau_p(D)$ the topology of the pointwise convergence on $D$ (i.e. the weakest topology on $\C(X)$ such that $f\mapsto f(d)$ is continuous for every $d\in D$). We shall consider Banach spaces over the field of real numbers. If $E$ is a Banach space and $A\subset E$, we denote by $\sspan{A}$ the linear hull of $A$. $B_E$ is the closed unit ball of $E$; i.e., the set $\{x\in E:\; \|x\| \leq 1\}$. $E^*$ stands for the (continuous) dual space of $E$. For a set $A\subset E^*$ we denote by $\ov{A}^{w^*}$ the weak$^*$ closure of $A$. Given a set $D\subset E^*$ we denote by $\sigma(E,D)$ the weakest topology on $E$ such that each functional from $D$ is continuous. A set $D\subset E^*$ is \textit{$r$-norming} if $\|x\| \leq r\cdot \sup\{|x^*(x)|:\;x^*\in D\cap B_{E^*}\}$ for every $x\in E$. We say that a set $D\subset E^*$ is norming if it is $r$-norming for some $r\geq 1$. The following definitions come from \cite{tkaRoj}. \begin{definition}Let $X, Y$ be sets, $\O\subset\exp(X)$ closed under countable increasing unions, $\N\subset\exp(Y)$ and $f:\O\to\N$. We say that $f$ is \emph{$\omega$-monotone} if \begin{enumerate}[\upshape (i)] \item $f(A)$ is countable for every countable $A\in\O$; \item if $A\subset B$ and $A,B\in\O$ then $f(A)\subset f(B)$; \item if $\{A_n\setsep n\in\omega\}\subset \O$ and $A_n\subset A_{n+1}$ for every $n\in\omega$ then $f(\bigcup_{n\in\omega}A_n) = \bigcup_{n\in\omega}f(A_n)$. \end{enumerate} \end{definition} \begin{definition}A space $T$ is \emph{monotonically Sokolov} if we can assign to any countable family $\F$ of closed subsets of $T$ a continuous retraction $r_\F:T\to T$ and a countable external network $\N(\F)$ for $r_\F(T)$ in $T$ such that $r_\F(F)\subset F$ for every $F\in\F$ and the assignment $\N$ is $\omega$-monotone. \end{definition} In the following statement we sum up some properties of monotonically retractable and monotonically Sokolov spaces which we will use later. They follow from results of \cite{tkaRoj}. \begin{fact}\label{f:mrs} Let $X$ be a topological space. Then: \begin{enumerate}[\upshape (a)] \item $X$ is monotonically retractable if and only if $\C_p(X)$ is monotonically Sokolov. \item $X$ is monotonically Sokolov if and only if $\C_p(X)$ is monotonically retractable. \item Any closed subspace of a monotonically retractable (resp. monotonically Sokolov) space is monotonically retractable (resp. monotonically Sokolov). \item A countable product of monotonically Sokolov spaces is monotonically Sokolov. \item Any monotonically Sokolov space is Lindel\"of. \item Any monotonically retractable space is normal and $\omega$-monolithic (i.e., any separable subset has a countable network). \item Any monotonically retractable space has countable tightness.
\item Any countably compact subset of a monotonically retractable space is closed (and hence monotonically retractable). \end{enumerate} \end{fact} \begin{proof} The assertions (a) and (b) are proved in \cite[Theorem 3.5]{tkaRoj}, the assertions (c)--(f) follow from \cite[Theorems 3.4 and 3.6]{tkaRoj}. Let us show (g). Let $X$ be a monotonically retractable space. Consequently, $Y = \C_p(X)$ is monotonically Sokolov by (a) and by (d) $Y^n$ is monotonically Sokolov for every $n\in\en$. It follows by (e) that $Y^n$ is Lindel\"of for every $n\in\en$ and, by \cite[Theorem II.1.1]{archangelskii}, $\C_p(Y) = \C_p(\C_p(X))$ has countable tightness. Since $X$ embeds in $\C_p(\C_p(X))$, it must have countable tightness. Finally, let us prove (h). Let $X$ be a monotonically retractable space and $A\subset X$ countably compact. Fix $a\in \ov{A}$. Since $X$ has countable tightness (by (g)), there is a countable set $S\subset A$ with $a\in \ov{S}$. By (f) $\ov{S}\cap A$ has a countable network, hence it is Lindel\"of. Since $A$ is countably compact, $\ov{S}\cap A$ is countably compact and Lindel\"of; hence, compact. It follows that $\ov{S}\cap A$ is closed in $X$ and $a\in A$. \end{proof} In the following we summarize some easy facts concerning sets induced by a retractional skeleton. \begin{fact}\label{f:basics}Let $X$ be a countably compact space and let $D$ be a set induced by a retractional skeleton in $X$. Then: \begin{enumerate}[\upshape (i)] \item $D$ is countably closed in $X$. \item $D$ is sequentially compact. \item If $X$ is compact, then $X=\beta D$. \end{enumerate} \end{fact} \begin{proof}Let $\mathfrak{s} = \{r_s\}_{s\in\Gamma}$ be a retractional skeleton in $X$ with $D = D(\mathfrak{s})$. Whenever $A\subset D$ is countable, there is $s\in\Gamma$ with $A\subset r_s[X]$; hence, $\ov{A}\subset r_s[X]$ is a metrizable compact and (i) and (ii) follow. The assertion (iii) is proved in \cite[Theorem 32]{kubisSkeleton}. \end{proof} \section{The method of elementary models} The purpose of this section is to briefly remind the reader of some basic facts concerning the method of elementary models. This is a set-theoretical method which can be used in various branches of mathematics. A. Dow in \cite{dow} illustrated the use of this method in topology; P. Koszmider in \cite{kos05} used it in functional analysis. Later, inspired by \cite{kos05}, W. Kubi\'s in \cite{kubisSkeleton} used it to construct retractional (resp. projectional) skeletons in certain compact (resp. Banach) spaces. In \cite{cuth} the method has been slightly simplified and specified. We briefly recall some basic facts. More details may be found e.g. in \cite{cuth} and \cite{cuthKalenda}. First, let us recall some definitions. Let $N$ be a fixed set and $\phi$ a formula in the language of $ZFC$. Then the {\em relativization of $\phi$ to $N$} is the formula $\phi^N$ which is obtained from $\phi$ by replacing each quantifier of the form ``$\forall x$'' by ``$\forall x\in N$'' and each quantifier of the form ``$\exists x$'' by ``$\exists x\in N$''. If $\phi(x_1,\ldots,x_n)$ is a formula with all free variables shown (i.e., a formula whose free variables are exactly $x_1,\ldots,x_n$) then $\phi$ is said to be {\em absolute for $N$} if \[ \forall a_1,\ldots,a_n\in N\quad (\phi^N(a_1,\ldots,a_n) \leftrightarrow \phi(a_1,\ldots,a_n)). \] A list of formulas, $\phi_1,\ldots,\phi_n$, is said to be {\em subformula closed} if every subformula of a formula in the list is also contained in the list.
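Let us illustrate these notions on a simple example of our own (it is not taken from the literature quoted above). Consider the formula
\[
\phi(x) = \exists y\,(y\in x),
\qquad\text{with relativization}\qquad
\phi^N(x) = \exists y\in N\,(y\in x).
\]
Then $\phi$ is absolute for $N$ if and only if every nonempty set $a\in N$ contains an element belonging to $N$. Moreover, any subformula closed list containing $\phi$ must also contain the subformula $y\in x$.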
The method is based mainly on the following theorem (a proof can be found in \cite[Chapter IV, Theorem 7.8]{Kunen}). \begin{theorem}\label{T:countable-model} Let $\phi_1, \ldots, \phi_n$ be any formulas and $Y$ any set. Then there exists a set $M \supset Y$ such that $\phi_1, \ldots, \phi_n \text{ are absolute for } M$ and $|M| \leq \max(\omega,|Y|)$. \end{theorem} Since the set from Theorem~\ref{T:countable-model} will often be used, the following notation is useful. \begin{definition} Let $\phi_1, \ldots, \phi_n$ be any formulas and $Y$ be any countable set. Let $M \supset Y$ be a countable set such that $\phi_1, \ldots, \phi_n$ are absolute for $M$. Then we say that $M$ is an \emph{elementary model for $\phi_1,\ldots,\phi_n$ containing $Y$}. This is denoted by $M \prec (\phi_1,\ldots,\phi_n; Y)$. \end{definition} The fact that a certain formula is absolute for $M$ will always be used in order to satisfy the assumption of the following lemma from \cite[Lemma 2.3]{cuthRmoutilZeleny}. Using this lemma we can force the model $M$ to contain all the needed objects created (uniquely) from elements of $M$. \begin{lemma}\label{l:unique-M} Let $\phi(y,x_1,\ldots,x_n)$ be a formula with all free variables shown and $Y$ be a countable set. Let $M$ be a fixed set, $M \prec (\phi, \exists y \colon \phi(y,x_1,\ldots,x_n);\; Y)$, and $a_1,\ldots,a_n \in M$ be such that there exists a set $u$ satisfying $\phi(u,a_1,\ldots,a_n)$. Then there exists $u \in M$ such that $\phi(u,a_1,\ldots,a_n)$. \end{lemma} \begin{proof}Let us give here the proof just for the sake of completeness. Using the absoluteness of the formula $\exists u\colon \phi(u,x_1,\ldots,x_n)$ there exists $u\in M$ satisfying $\phi^M(u,a_1,\ldots,a_n)$. Using the absoluteness of $\phi$ we get that for this $u\in M$ the formula $\phi(u,a_1,\ldots,a_n)$ holds. \end{proof} It would be very laborious and pointless to use only the basic language of set theory. For example, having a function $f$, we often write $y = f(x)$ and we know that this is a shortcut for a formula with free variables $x$, $y$, and $f$. Indeed, consider the formula \[ \varphi(x,y,z) = \forall a (a\in z \leftrightarrow (a=x\vee a=y)). \] Then $\varphi(x,y,z)$ is true if and only if $z = \{x,y\}$. Recall that $y = f(x)$ means $\{\{x\},\{x,y\}\}\in f$. Hence, $y = f(x)$ if and only if the following formula is true \[ \forall z (\forall a (a\in z \leftrightarrow \varphi(x,x,a)\vee \varphi(x,y,a)) \Rightarrow z\in f). \] Therefore, in the following text we use this extended language of set theory as we are used to. We shall also use the following convention. \begin{convention} Whenever we say ``\emph{for any suitable model $M$ (the following holds \dots)}'' we mean that ``\emph{there exists a list of formulas $\phi_1,\ldots,\phi_n$ and a countable set $Y$ such that for every $M \prec (\phi_1,\ldots,\phi_n;Y)$ (the following holds \dots)}''. \end{convention} By using this new terminology we lose the information about the formulas $\phi_1,\ldots,\phi_n$ and the set $Y$. However, this is not important in applications. Let us recall several further results about elementary models (all the proofs are based on Lemma \ref{l:unique-M} and they can be found in \cite[Proposition 2.9, 2.10 and 3.2]{cuth} and \cite[Lemma 4.8]{cuthCMUC}).
\begin{lemma}\label{l:predp} There are formulas $\theta_1,\dots,\theta_m$ and a countable set $Y_0$ such that any $M\prec(\theta_1,\ldots,\theta_m;\; Y_0)$ satisfies the following conditions: \begin{itemize} \item If $f\in M$ is a mapping, then $\dom(f)\in M$, $\rng(f)\in M$ and $f[M]\subset M$. \item If $A$ is finite, then $A\in M$ if and only if $A\subset M$. \item If $A\in M$ is a countable set, then $A\subset M$. \item If $A,B\in M$, then $A\cup B\in M$. \end{itemize} \end{lemma} Moreover, we will need to find suitable models in a ``monotonic way''. Thus, the following lemma from \cite[Lemma 4]{cuthKalenda} will be useful as well. \begin{lemma}\label{l:BasicSkolem} Let $\phi_1,\ldots,\phi_n$ be a subformula closed list of formulas and let $R$ be a set such that $\phi_1,\ldots,\phi_n$ are absolute for $R$. Then there exists a function $\psi:[R]^{\leq\omega}\to [R]^{\leq\omega}$ such that \begin{enumerate}[\upshape (i)] \item For every $A\in[R]^{\leq\omega}$, $\psi(A)\prec (\phi_1,\ldots,\phi_n; A)$. \item The mapping $\psi$ is $\omega$-monotone. \end{enumerate} \end{lemma} \begin{definition}We say that the function from Lemma \ref{l:BasicSkolem} is a \emph{Skolem function for $\phi_1,\ldots,\phi_n$ and $R$}. \end{definition} \section{Proof of the main result} In this section we are going to prove our main result, Theorem~\ref{t:charactFullSkeleton}. This is the content of the equivalence (i)$\Leftrightarrow$(iii) from the following theorem. We add one more equivalent condition, formulated with the use of elementary models, because via this condition the proof will be done. \begin{theorem}\label{t:main}Let $X$ be a countably compact space. Then the following are equivalent: \begin{itemize} \item[(i)] $X$ has a full retractional skeleton. \item[(ii)] For any suitable model $M$, $\C(X)\cap M$ separates the points of $\ov{X\cap M}$. \item[(iii)] $X$ is monotonically retractable. \end{itemize} \end{theorem} Recall that a compact space is Corson if and only if it has a full retractional skeleton; see e. g. \cite[Theorem 3.11]{cuthCMUC}. Hence, in case $X$ is compact, the equivalence (i)$\Leftrightarrow$(ii) comes from \cite[Theorem 7]{ban91} (for a simplified and generalized version see \cite[Theorem 30]{kubisSkeleton}; a more detailed proof, which best suits our situation, can be found in \cite[Theorem 4.9]{cuthCMUC}). The rest of this section will be devoted to the proof of Theorem~\ref{t:main}. We start by proving the implication (iii)$\Rightarrow$(ii). This is the content of the following proposition -- note that it holds even without assuming countable compactness of $X$. Let us also remark that this already provides a proof of Corollary~\ref{t:charactCorson}. \begin{proposition}\label{p:charactCorson} Let $X$ be a monotonically retractable space. Then, for any suitable model $M$, $\C_b(X)\cap M$ separates the points of $\ov{X\cap M}$. \end{proposition} \begin{proof} Suppose that for any countable set $A\subset X$, we have a retraction $r_A:X\to X$ and a family $\N(A)$ that witness the monotone retractability of $X$. Fix formulas $\phi_1,\ldots,\phi_n$ containing the formulas $\theta_1,\dots,\theta_m$ from Lemma \ref{l:predp} and the formula (and its subformulas) marked by $(*)$ in the proof below, and a countable set $Y$ containing the set $Y_0$ from Lemma \ref{l:predp} and the set $\{\C_b(X),\N\}$. Fix $M\prec(\phi_1,\ldots,\phi_n;\; Y)$. Put $A = \bigcup\{B\in[X]^{\leq\omega}\setsep B\in M\}$. This is a countable set.
\begin{claim}\label{claim1}$\N(A)\subset M$.\end{claim} \begin{proof}By Lemma \ref{l:predp}, the set $\{B\in[X]^{\leq\omega}\setsep B\in M\}$ is closed under finite unions. Hence, there exists an increasing sequence $(B_n)_{n\in\en}$ with $A = \bigcup_{n\in\en} B_n$ and $B_n\in M$ for every $n\in\en$. Since the assignment $\N$ is $\omega$-monotone, we have $\N(A) = \bigcup_{n\in\en}\N(B_n)$. Fix $n\in\en$. By Lemma~\ref{l:predp}, $\N(B_n)\in M$ and $\N(B_n)\subset M$. Consequently, $\N(A)\subset M$.\end{proof} \begin{claim}\label{claim2}$\ov{X\cap M}\subset r_A[X]$.\end{claim} \begin{proof}Fix $x\in X\cap M$. Then $\{x\}\in [X]^{\leq\omega}\cap M$ (by Lemma~\ref{l:predp}); hence, $x \in A$. Thus, $X\cap M\subset A$ and $\ov{X\cap M}\subset \ov{A}\subset r_A[X]$.\end{proof} Fix $x,y\in\ov{X\cap M}$, $x\neq y$. By Claim \ref{claim2}, $x,y\in r_A[X]$. Find sets $U\in\tau(x,r_A[X])$ and $V\in\tau(y,r_A[X])$ such that $\ov{U}\cap\ov{V} = \emptyset$. Since $\N(A)$ is a network of $r_A$, we can find $N_x\in\N(A)$ and $N_y\in\N(A)$ with $x\in N_x\subset r_A^{-1}[U]$ and $y\in N_y\subset r_A^{-1}[V]$. Note that $\ov{N_x}\cap\ov{N_y} = \emptyset$ and recall that $X$ is normal by Fact~\ref{f:mrs}. By Claim \ref{claim1}, $N_x, N_y\in M$. Hence, by Lemma \ref{l:unique-M} and the absoluteness of the formula (and its subformula) \[ \exists f\in\C_b(X)\quad(\forall a\in N_x: f(a) = 0 \wedge \forall b\in N_y: f(b) = 1),\eqno{(*)} \] there is $f\in\C_b(X)\cap M$ with $f(x) = 0 \neq 1 = f(y)$. Thus, $\C_b(X)\cap M$ separates the points of $\ov{X\cap M}$. \end{proof} We continue by proving the equivalence (i)$\Leftrightarrow$(ii) from Theorem~\ref{t:main}. In fact, this equivalence is essentially known due to the following result: \begin{lemma}\label{l:generskel} {\rm (\cite[Theorem 30]{kubisSkeleton}, see also \cite[Theorem 4.9]{cuthCMUC})} Let $K$ be a compact space and $X\subset K$ a dense countably compact subset. The following assertions are equivalent: \begin{itemize} \item[(i)] $X$ is induced by a retractional skeleton in $K$. \item[(ii)] For any suitable model $M$, $\C(K)\cap M$ separates the points of $\ov{X\cap M}$. \end{itemize} \end{lemma} In fact, the quoted results use the slightly stronger assumption that $X$ is countably closed in $K$. However, if $X$ is induced by a retractional skeleton, it is automatically countably closed by Fact~\ref{f:basics}, so this assumption is not used for (i)$\Rightarrow$(ii). For the opposite implication, by \cite[Theorem 4.9]{cuthCMUC} it follows from (ii) that $X$ is contained in a set $Y$ induced by a retractional skeleton. Now it follows easily from \cite[Theorem 32]{kubisSkeleton} that $X=Y$. The proof of the equivalence (i)$\Leftrightarrow$(ii) from Theorem~\ref{t:main} will be done by reducing the situation to the use of Lemma~\ref{l:generskel}. More precisely, let us consider $K=\beta X$. We will show that the validity of assertion (i) in Theorem~\ref{t:main} is equivalent to the validity of (i) in Lemma~\ref{l:generskel} and similarly for the respective assertions (ii). We begin with the assertions (ii). The key tool to do that is the following easy lemma. \begin{lemma}\label{l:separatePointsInCompactification} Let $L$ be a compact space and $A\subset L$ a dense countably compact subset. Let $S\subset \C(L)$ be a countable set separating the points of $A$. Then $S$ separates the points of $L$. \end{lemma} \begin{proof}Arguing by contradiction, let $x_1,x_2\in L$ be such that $x_1\neq x_2$ and $f(x_1) = f(x_2)$ for every $f\in S$.
Find $g\in\C(L)$ with $g(x_1)\neq g(x_2)$. Denote, for $i\in\{1,2\}$, \[ A_i = \bigcap_{h\in S\cup\{g\}}\{t\in L\setsep h(x_i) = h(t)\}. \] Then $A_1, A_2$ are nonempty $G_\delta$ sets and $A_1\cap A_2 = \emptyset$. Hence, e. g. by \cite[Lemma 1.11]{kalendaSurvey}, there are $y_1\in A\cap A_1$ and $y_2\in A\cap A_2$. This is a contradiction because $S$ does not separate the points $y_1\neq y_2$ from $A$. \end{proof} Let us now show the equivalence of the respective assertions (ii) from Theorem~\ref{t:main} and Lemma~\ref{l:generskel}. Suppose that the assertion (ii) from Theorem~\ref{t:main} holds. Let $M$ be a suitable model which moreover contains the extension map $f\mapsto \beta f$, $f\in \C(X)$, and the restriction map $f\mapsto f\restriction_X$, $f\in\C(\beta X)$. Then $\C(X)\cap M=\{f\restriction_X:f\in \C(\beta X)\cap M\}$. Hence the validity of assertion (ii) from Lemma~\ref{l:generskel} follows from the previous lemma applied to $L=\overline{X\cap M}^{\beta X}$, $A=\overline{X\cap M}^X$ and $S=\{f\restriction_L:f\in \C(\beta X)\cap M\}$. The converse implication can be proved in the same way; only the final use of the previous lemma is not necessary. The equivalence of the respective assertions (i) is the content of the following proposition. \begin{proposition}\label{p:rSkeletonInBetaX} Let $X$ be a countably compact space. Then $X$ has a full retractional skeleton if and only if it is induced by a retractional skeleton in $\beta X$. Moreover, if $\{r_s\}_{s\in\Gamma}$ is a full retractional skeleton in $X$, then there is a retractional skeleton $\{R_s\}_{s\in\Gamma}$ in $\beta X$ inducing $X$ such that $R_s\restriction_X = r_s$ for every $s\in\Gamma$. \end{proposition} \begin{proof} We start by proving the `if' part. Let $\mathfrak{s} = \{R_s\}_{s\in\Gamma}$ be a retractional skeleton in $\beta X$ with $D(\mathfrak{s}) = X$. Then $\mathfrak{s'} = \{R_s\restriction_X\}_{s\in\Gamma}$ is a full retractional skeleton in $X$. Indeed, since $X$ is induced by $\mathfrak{s}$, we have, for each $s\in\Gamma$, $R_s[\beta X]\subset X$. Since $R_s$ is a retraction, we get $R_s[\beta X]=R_s[R_s[\beta X]]\subset R_s[X]$, hence $R_s[X]=R_s[\beta X]$. It follows that the ranges of $R_s\restriction_X$ cover $X$. Further, it is immediate that (i)--(iv) from the definition of a retractional skeleton are satisfied. This finishes the proof of the `if' part. To show the `only if' part let $\mathfrak{s} = \{r_s\}_{s\in\Gamma}$ be a full retractional skeleton in $X$. Fix $s\in\Gamma$. We extend the retraction $r_s:X\to\beta X$ to a continuous function $R_s:\beta X\to\beta X$. Then $R_s$ is a retraction because $R_s\circ R_s = R_s$ on a dense subset $X$; hence, $R_s\circ R_s = R_s$ on $\beta X$. Moreover, $r_s[X] = R_s[\beta X]$ because $R_s[X] = r_s[X]$ is compact and dense in $R_s[\beta X]$; hence, $R_s[\beta X] = r_s[X] \subset X$ is a metrizable compact. Now, it is immediate that $\mathfrak{s'} = \{R_s\}_{s\in\Gamma}$ is a system of retractions on $\beta X$ satisfying (i), (ii) from the definition of a retractional skeleton and $D(\mathfrak{s'}) = X$. In order to verify that (iii) from the definition of a retractional skeleton holds, let us fix a sequence $s_0 < s_1 < \cdots$ in $\Gamma$ with $t = \sup_{n\in\omega}s_n\in\Gamma$ and $x\in\beta X$. Then $R_t(x)\in X$. Therefore, $R_t(x) = r_t(R_t(x)) = \lim_{n\to\infty} r_{s_n}(R_t(x)) = \lim_{n\to\infty} R_{s_n}(x)$. Finally, let us fix $x\in\beta X$. It remains to show that $\lim_{s\in\Gamma} R_s(x) = x$.
Arguing by contradiction, let $y\in\beta X$ be a cluster point of the net $\{R_s(x)\}_{s\in\Gamma}$ with $y\neq x$. Fix $U\in\tau(x,\beta X)$ and $V\in\tau(y,\beta X)$ with $\ov{U}\cap\ov{V} = \emptyset$ and $s_0\in\Gamma$. We inductively find sequences $\{s_n\}_{n\in\en}$, $\{s'_n\}_{n\in\en}$ of indices from $\Gamma$, $\{x_n\}_{n\in\en}$ of points from $X$ and $\{U_n\}_{n\in\en}$ of sets from $\tau(x,\beta X)$ such that, for every $n\in\en$, \begin{itemize} \item $\ov{U_1}\subset U$, \item $s_n\leq s'_n\leq s_{n+1}$, $\ov{U_{n+1}}\subset U_n$, \item $R_{s_n}(U_n)\subset V$, \item $x_n\in U_n\cap X$ and \item $R_{s'_n}(x_n) = x_n$. \end{itemize} Let us describe the inductive process. Let $s'_{n-1}$ and $U_{n-1}$ be defined (we put $s'_0 = s_0$ and $U_0 = U$ if $n=1$). Find $s_n\geq s'_{n-1}$ with $R_{s_n}(x)\in V$ and $U_n\in\tau(x,\beta X)$ with $\ov{U_n}\subset U_{n-1}$ and $R_{s_n}(U_n)\subset V$. Since $X$ is dense in $\beta X$ we can find $x_n\in X\cap U_n$. Finally, we choose $s'_n$ to be such that $s'_n\geq s_n$ and $R_{s'_n}(x_n) = x_n$. By Fact~\ref{f:basics}, $X$ is sequentially compact and so we may without loss of generality assume that the sequence $\{x_n\}_{n\in\en}$ converges to some $z\in X$. For $t = \sup_{n\in\omega}s_n$ we get \[ R_t(z) = \lim_{n\to\infty} R_t(x_n) = \lim_{n\to\infty} R_t(R_{s'_n}(x_n)) = \lim_{n\to\infty} R_{s'_n}(x_n) = \lim_{n\to\infty} x_n = z. \] Hence, $R_t(z) = z\in\ov{U}$. Moreover, $x_k\in U_n$ for every $k\geq n$; hence, $z\in\bigcap_{n\in\en}\ov{U_n}$ and \[ R_t(z)\in R_t\left[\bigcap_{n\in\en} \ov{U_n}\right] = R_t\left[\bigcap_{n\in\en} U_n\right] \subset\ov{V}, \] which contradicts $\ov{U}\cap\ov{V} = \emptyset$. \smallskip The `moreover' part follows immediately from the construction. \end{proof} To complete the proof of Theorem~\ref{t:main} we will show (ii)$\Rightarrow$(iii). The idea is to use a Skolem function given by (ii) to create the required $\omega$-monotone mapping. We begin with the following lemma which gives a formula for a network of a given retraction. \begin{lemma}\label{l:networkForRetract} Let $X$ be a space and $r:X\to X$ a retraction with a compact range. Let $S\subset\C(X)$ be a subset separating the points of $r[X]$ such that $f\circ r = f$ for every $f\in S$. Then \begin{multline*}\N(S) = \{f_1^{-1}(I_1)\cap\dots\cap f_k^{-1}(I_k)\setsep f_1,\dots,f_k\in S,\\ I_1,\dots, I_k\mbox{ are open intervals with rational endpoints}\}\end{multline*} is a network of $r$. \end{lemma} \begin{proof}Fix $y\in X$ and $U\in\tau(r(y),r[X])$. For any $z\in r[X]\setminus U$ there is a function $f_z\in S$ such that $f_z(y)\ne f_z(z)$. We can find disjoint open intervals $I_z$ and $J_z$ with rational endpoints such that $f_z(y)\in I_z$ and $f_z(z)\in J_z$. The open sets $f_z^{-1}(J_z)$, $z\in r[X]\setminus U$, cover the compact set $r[X]\setminus U$, so there are $z_1,\dots,z_k\in r[X]\setminus U$ such that $\bigcup_{i=1}^k f_{z_i}^{-1}(J_{z_i})\supset r[X]\setminus U$. Then \[ y\in \bigcap_{i=1}^k f_{z_i}^{-1}(I_{z_i})\subset r^{-1}[U], \] which completes the proof. \end{proof} We continue with the following lemma, which we use to slightly improve the model provided by the assumption (ii). \begin{lemma}\label{l:retrakce} Let $X$ be a countably compact space satisfying the assumption (ii) from Theorem~\ref{t:main}. Then for any suitable model $M$ the following holds: \begin{itemize} \item[(i)] $\C(X)\cap M$ separates the points of $\overline{X\cap M}$. \item[(ii)] $\overline{X\cap M}$ is compact.
\item[(iii)] There is a retraction $r:X\to X$ with $r[X]=\ov{X\cap M}$ such that $f\circ r= f$ for each $f\in\C(X)\cap M$. \end{itemize} \end{lemma} \begin{proof} In this proof we will use the identification of any $n\in\omega$ with the set $\{0,\dots,n-1\}$. Further, denote by $\B$ the set of all open intervals with rational endpoints and by $\B^{<\omega}$ the set of all functions whose domain is some $n\in\omega$ and whose values are in $\B$. Fix formulas $\phi_1,\ldots,\phi_n$ containing all the formulas $\theta_1,\dots,\theta_m$ from Lemma \ref{l:predp}, the formulas provided by the assertion (ii) from Theorem~\ref{t:main} and the formulas (and their subformulas) marked by $(*)$ in the proof below, and a countable set $Y$ containing the set $Y_0$ from Lemma \ref{l:predp}, the set provided by the assertion (ii) from Theorem~\ref{t:main} and the set $\{X, \C(X), \B, \B^{<\omega}\}$. Fix $M\prec(\phi_1,\ldots,\phi_n;\; Y)$. Set $A=\C(X)\cap M$. By the assumptions, $A$ separates the points of $\ov{X\cap M}$; hence the assertion (i) is fulfilled. Let us define the mapping $\Phi:X\to \er^A$ by the formula \[ \Phi(x)(f)=f(x), \quad f\in A,x\in X. \] Then $\Phi$ is continuous, hence $\Phi[X]$ is countably compact. Since $A$ is countable, we deduce that $\Phi[X]$ is a metrizable compact. Moreover, $\Phi$ is a closed mapping (any closed $F\subset X$ is countably compact, so $\Phi[F]$ is countably compact, hence compact and thus closed). By the already proved condition (i) the mapping $\Phi$ is one-to-one when restricted to $\ov{X\cap M}$, so it is a homeomorphism of $\ov{X\cap M}$ onto its image. In particular, $\ov{X\cap M}$ is compact, which proves the assertion (ii). The next step is to prove that $\Phi[X]=\Phi[\ov{X\cap M}]$. Since $\Phi$ is a closed mapping, it is enough to show that $\Phi[X\cap M]$ is dense in $\Phi[X]$. To do that fix $x_0\in X$ and $U$ an open set in $\er^A$ containing $\Phi(x_0)$. It follows that there is a finite set $F\subset A$ and intervals $I_f\in \B$, $f\in F$, such that \[ \Phi(x_0)\in \{z\in \er^A\setsep z(f)\in I_f\mbox{ for }f\in F\}\subset U, \] which means \[ x_0\in\{x\in X\setsep f(x)\in I_f\mbox{ for }f\in F\}\subset \Phi^{-1}(U). \] Since $F\subset A\subset M$ and $F$ is finite, Lemma~\ref{l:predp} yields $F\in M$. Further, by absoluteness of the formula \[ \exists n\in\omega\; \exists \eta\quad (\eta \mbox{ is a mapping of $n$ onto }F)\eqno{(*)} \] and its subformulas, there are $n\in\omega$ and an onto mapping $\eta:n\to F$ in $M$. Let us further define the mapping $\zeta:n\to\B$ by $\zeta(i)=I_{\eta(i)}$. Since $\zeta\in\B^{<\omega}$, and $\B^{<\omega}\in M$ is countable, it follows from Lemma~\ref{l:predp} that $\zeta\in M$. Finally, by absoluteness of the formula \[ \exists x\in X\; \forall i\in n \quad(\eta(i)(x)\in \zeta(i)) \eqno{(*)} \] and its subformulas, there is $x\in X\cap M$ such that $\eta(i)(x)\in\zeta(i)$ for each $i\in n$, in other words $f(x)\in I_f$ for each $f\in F$, hence $\Phi(x)\in U$. This completes the proof that $\Phi[X\cap M]$ is dense in $\Phi[X]$, hence $\Phi[X]=\Phi[\ov{X\cap M}]$. Finally, set $r=\left(\Phi\restriction_{\ov{X\cap M}}\right)^{-1}\circ \Phi$. It is clear that $r$ is a continuous retraction on $X$ with the range $\ov{X\cap M}$. Further, if $f\in A$ and $x\in X$, then \[ (f\circ r)(x)=f(r(x))=\Phi(r(x))(f)=\Phi\left(\left(\Phi\restriction_{\ov{X\cap M}}\right)^{-1}\left( \Phi(x)\right)\right)(f)=\Phi(x)(f)=f(x), \] hence $f\circ r=f$, which completes the proof of (iii).
\end{proof} Finally, we give the proof of the missing implication of Theorem~\ref{t:main}. \smallskip Let us assume that the assertion (ii) holds. We will show that (iii) holds as well. For any countable $S\subset \C(X)$ let us define $\N(S)$ by the formula given in Lemma~\ref{l:networkForRetract}. Note that the assignment $S\mapsto \N(S)$ is $\omega$-monotone. Let a subformula closed list of formulas $\phi_1,\ldots,\phi_n$ and a countable set $Y$ be the ones provided by Lemma~\ref{l:retrakce}. By Theorem~\ref{T:countable-model} there exists a set $R\supset X\cup Y$ such that $\phi_1,\ldots,\phi_n$ are absolute for $R$. Let $\psi$ be a Skolem function for $\phi_1,\ldots,\phi_n$ and $R$; see Lemma~\ref{l:BasicSkolem}. For every $A\in[X]^{\leq\omega}$, we put $M(A) = \psi(A\cup Y)$. By Lemma~\ref{l:BasicSkolem}, $M(A)\prec(\phi_1,\ldots,\phi_n;Y)$, $M(A)\supset A$ and the assignment $A\mapsto M(A)$ is $\omega$-monotone. Let $r_A$ be the retraction assigned to $M(A)$ by Lemma~\ref{l:retrakce} and $\O(A) = \N(\C(X)\cap M(A))$. Then $\O(A)$ is a countable network of $r_A$ and $A\subset M(A)\cap X\subset \ov{M(A)\cap X} = r_A[X]$. Finally, the assignment $A\mapsto \O(A)$ is $\omega$-monotone because it is a composition of $\omega$-monotone mappings $A\mapsto \C(X)\cap M(A)$ and $\N$. This completes the proof. \section{A function-space characterization of compact spaces with retractional skeleton} The aim of this section is to prove Theorem~\ref{t:answerCuth} and Theorem~\ref{t:answerKubis}. In fact, instead of the latter we prove a more precise version, namely Theorem~\ref{t:skeletonIffXSokolov} below. Let us start with Theorem~\ref{t:answerCuth}. It answers \cite[Problem 1]{cuthCMUC} and can be viewed as a noncommutative analogue of \cite[Theorem 2.1]{kalendaCharact}. Let us comment on it a bit. Let $K$ be a compact space and $D\subset K$ a dense subset. Then $D$ is induced by a commutative retractional skeleton in $K$ (here, \emph{commutative} means that each two retractions from the skeleton commute, not only the compatible pairs) if and only if $D$ is a ``$\Sigma$-subset'' of $K$; see e.g. \cite[p. 56]{cuthCMUC}. In \cite[Theorem 2.1]{kalendaCharact} it is proved that $D$ is a $\Sigma$-subset of $K$ if and only if $D$ is countably compact and $(\C(K),\tau_p(D))$ is primarily Lindel\"of (see {\cite[Definition 1.2]{kalendaCharact}}). To characterize sets induced by a possibly noncommutative retractional skeleton, we replace the property of being primarily Lindel\"of by that of being monotonically Sokolov. \begin{proof}[Proof of Theorem \ref{t:answerCuth}] Assume that $D$ is induced by a retractional skeleton in $K$. One can notice that, by Fact~\ref{f:basics}, $D$ is countably compact and $K=\beta D$. Hence, $D$ has a full retractional skeleton (by the trivial implication of Proposition~\ref{p:rSkeletonInBetaX}). By Theorem~\ref{t:main}, $D$ is monotonically retractable; hence, by Fact~\ref{f:mrs}, $\C_p(D)$ is monotonically Sokolov. Taking into account that $\beta D = K$, $\C_p(D)$ is homeomorphic to $(\C(K),\tau_p(D))$. Thus, $(\C(K),\tau_p(D))$ is monotonically Sokolov. For the converse implication, let us assume that $D$ is countably compact and $(\C(K),\tau_p(D))$ is monotonically Sokolov. By Fact~\ref{f:mrs} the space $\C_p(\C(K),\tau_p(D))$ is monotonically retractable. The mapping $\Psi:D\to\C_p(\C(K),\tau_p(D))$ defined by \[ \Psi(d)(f)=f(d),\quad f\in \C(K), d\in D \] is continuous (by the very definition of the respective topologies) and one-to-one.
Further, for any closed $F\subset D$ its image $\Psi(F)$ is countably compact and hence closed in $\C_p(\C(K),\tau_p(D))$ (by Fact~\ref{f:mrs}). It follows that $\Psi$ is a homeomorphism of $D$ onto a closed subset of $\C_p(\C(K),\tau_p(D))$, thus $D$ is monotonically retractable. So, by Theorem~\ref{t:main}, $D$ has a full retractional skeleton. It follows that $D$ is countably closed in $K$ (indeed, it follows from the definitions that the closure in $D$ of any countable subset of $D$ is compact). Further, $(\C(K),\tau_p(D))$ is Lindel\"of by Fact~\ref{f:mrs}. Hence, by \cite[Proposition 2.13]{kalendaCharact}, $\beta D = K$. By Proposition~\ref{p:rSkeletonInBetaX}, $D$ is induced by a retractional skeleton in $K$.\end{proof} Let us now formulate the more precise version of Theorem~\ref{t:answerKubis} which we will prove. It is the following theorem which can be viewed as a noncommutative version of \cite[Theorem 2.7]{kalendaSurvey} (which is a precise formulation of \cite[Theorem 2.3]{kalendaCharact}). \begin{theorem}\label{t:skeletonIffXSokolov} Let $E$ be a Banach space and $D$ a dense subset of $(B_{E^*},w^*)$. Then the following are equivalent: \begin{enumerate}[\upshape (i)] \item $\sspan(D)$ is induced by a 1-projectional skeleton in $E$ and $\sspan(D)\cap B_{E^*} = D$. \item $D$ is a convex symmetric set induced by a retractional skeleton in $(B_{E^*},w^*)$. \item $D$ is weak$^*$ countably compact and $(E,\sigma(E,D))$ is monotonically Sokolov. \end{enumerate} \end{theorem} A \emph{projectional skeleton} in a Banach space $E$ is an indexed system of bounded linear projections on $E$ with the properties of a retractional skeleton, except for the first property which is replaced by the assumption that the projections have separable ranges. By metrizability the last condition implies that the ranges of projections cover $E$. A $1$-projectional skeleton is a projectional skeleton formed by norm one projections. The subspace generated by a projectional skeleton is the union of ranges of adjoint projections. The adjoint projections of a $1$-projectional skeleton on $E$ form a retractional skeleton on $(B_{E^*},w^*)$. For exact definitions and explanations see \cite{kubisSkeleton} or \cite{cuthSimul}. We will deal with projectional skeletons via retractional skeletons using the following lemma. In its proof we use the notation from \cite{kubisSkeleton}. \begin{lemma}\label{l:projectionalIffRetractional}Let $E$ be a Banach space and $D\subset E^*$ a 1-norming subspace of $E^*$. Then $D$ is induced by a 1-projectional skeleton if and only if $D\cap B_{E^*}$ is induced by a retractional skeleton in $(B_{E^*},w^*)$. \end{lemma} \begin{proof} The `only if' part is easy and is proved in \cite[Theorem 4.2]{cuthSimul}. Let us prove the `if' part. Suppose that $D\cap B_{E^*}$ is induced by a retractional skeleton in $(B_{E^*},w^*)$. By \cite[Theorem 4.2]{cuthSimul}, $D$ is a subset of a set $D(\mathfrak{s})$ induced by a 1-projectional skeleton. Thus, by the `only if' part, $D(\mathfrak{s})\cap B_{E^*}$ is induced by a retractional skeleton. We have $D\cap B_{E^*}\subset D(\mathfrak{s})\cap B_{E^*}$, both induced by a retractional skeleton. Consequently, by \cite[Lemma 3.2]{cuthCMUC}, $D\cap B_{E^*} = D(\mathfrak{s})\cap B_{E^*}$ and $D = D(\mathfrak{s})$ is induced by a 1-projectional skeleton. \end{proof} Now we proceed to the proof of Theorem~\ref{t:skeletonIffXSokolov}.
We follow the line of the proof of \cite[Theorem 2.3]{kalendaCharact}; instead of ``$\Sigma$-subset'' we use the notion of a set induced by a retractional skeleton and instead of ``homeomorphic to a closed coordinatewise bounded subset of some $\Sigma(\Gamma)$'' we use spaces with a full retractional skeleton. Thus, some technical details must be handled in a slightly different way. Namely, we need the following analogue of \cite[Lemma 2.18]{kalendaCharact}. \begin{lemma}\label{l:kalendaBetaA}Let $E$ be a Banach space and $A\subset (B_{E^*},w^*)$ be a dense, convex and symmetric set with a full retractional skeleton. If $(E,\sigma(E,A))$ is Lindel\"of, then $B_{E^*} = \beta A$. \end{lemma} \begin{proof} The proof is identical to the proof of \cite[Lemma 2.18]{kalendaCharact} which goes through a technical \cite[Lemma 2.17]{kalendaCharact}. There, instead of a set with a full retractional skeleton (resp. a set induced by a retractional skeleton), a set ``homeomorphic to a closed coordinatewise bounded subset of some $\Sigma(\Gamma)$'' (resp. a ``$\Sigma$-subset'') is considered. Thus, it is enough to observe that sets with a full retractional skeleton (resp. sets induced by a retractional skeleton) have the topological properties needed in the proofs. Namely, it is enough to use the following properties: \begin{itemize} \item If $D$ has a full retractional skeleton, it is countably closed in each superspace; in particular $A$ is countably closed in $(B_{E^*},w^*)$. \item If $D$ has a full retractional skeleton, it is induced by a retractional skeleton in $\beta D$ (Proposition~\ref{p:rSkeletonInBetaX}). \item If $D$ is induced by a retractional skeleton in a compact space $K$ and $F\subset D$ is relatively closed, then $F$ is induced by a retractional skeleton in $\ov{F}^K$ (\cite[Lemma 3.5]{cuthCMUC}) and hence $\ov{F}^K=\beta F$ (Fact~\ref{f:basics}). \item If $D_i$ are sets induced by a retractional skeleton in compact spaces $K_i$ for $i = 1,\ldots,n$, then $D_1\times\ldots\times D_n$ is induced by a retractional skeleton in $K_1\times\ldots\times K_n$ (see the proof of \cite[Theorem 31]{kubisSkeleton}). \end{itemize} \end{proof} \begin{proof}[Proof of Theorem \ref{t:skeletonIffXSokolov}] The implication (i)$\Rightarrow$(ii) follows immediately from Lemma~\ref{l:projectionalIffRetractional}. Let us assume that (ii) holds. By Fact~\ref{f:basics}, $D$ is weak$^*$ countably compact. Put $K = (B_{E^*},w^*)$. By Theorem~\ref{t:answerCuth}, $(\C(K),\tau_p(D))$ is monotonically Sokolov. Consider the $\sigma(E,D)$-$\tau_p(D)$ homeomorphic embedding $I:E\to\C(B_{E^*},w^*)$ defined by $I(x)(x^*) = x^*(x)$, $x\in E$, $x^*\in E^*$. It is a standard fact, see e. g. \cite[Lemma 4.4]{cuthSimul}, that $I(E)$ is $\tau_p(D)$-closed in $\C(B_{E^*},w^*)$. By Fact~\ref{f:mrs} $I(E)$ is monotonically Sokolov and hence $(E,\sigma(E,D))$ is monotonically Sokolov. It remains to prove (iii)$\Rightarrow$(i). Let us assume that (iii) holds. Since $(E,\sigma(E,D))$ is monotonically Sokolov, $\C_p(E,\sigma(E,D))$ is monotonically retractable due to Fact~\ref{f:mrs}. Observe that $\sspan(D)\subset \C(E,\sigma(E,D))$ and that the inclusion map $i:(\sspan(D),w^*)\to \C_p(E,\sigma(E,D))$ is a homeomorphism onto its image. Since $D$ is weak$^*$ countably compact, it is a countably compact subset of $\C_p(E,\sigma(E,D))$, so $D$ is closed in the latter space by Fact~\ref{f:mrs}. Hence $D$ is monotonically retractable. If we put $A = \sspan(D)\cap B_{E^*}\subset\C_p(E,\sigma(E,D))$, then $D$ is dense in $A$ (since it is weak$^*$ dense in $B_{E^*}$).
It follows that $D=A$. By Theorem~\ref{t:main} $A$ has a full retractional skeleton. By Lemma~\ref{l:kalendaBetaA}, $\beta A = B_{E^*}$ and hence $A$ is induced by a retractional skeleton in $B_{E^*}$ by Proposition~\ref{p:rSkeletonInBetaX}. Finally, by Lemma~\ref{l:projectionalIffRetractional}, $\sspan(D)$ is induced by a 1-projectional skeleton in $E$. \end{proof} We finish by showing how Theorem~\ref{t:answerKubis} follows from Theorem~\ref{t:skeletonIffXSokolov}. As above, for details concerning projectional skeletons we refer to \cite{kubisSkeleton}, where all the facts needed in the proof below may be found. \begin{proof}[Proof of Theorem \ref{t:answerKubis}] Let $\langle E,\|\cdot\|\rangle$ be a Banach space and $D\subset E^*$ a norming subspace. Then there is an equivalent norm $|\cdot|$ on $E$ such that $D$ becomes 1-norming; see e. g. \cite[Proposition 1]{kubisSkeleton}. Then $D\cap B_{\langle E,|\cdot|\rangle^*}$ is weak$^*$ dense in $B_{\langle E,|\cdot|\rangle^*}$. So, it follows from Theorem~\ref{t:skeletonIffXSokolov} that $D$ is induced by a $1$-projectional skeleton in $E$ if and only if $D\cap B_{\langle E,|\cdot|\rangle^*}$ is weak$^*$ countably compact and $(E,\sigma(E,D))$ is monotonically Sokolov. Since the topology $\sigma(E,D)$ does not depend on the choice of an equivalent norm and any subspace induced by a projectional skeleton is weak$^*$ countably closed, the assertion (ii) holds if and only if $D$ is induced by a $1$-projectional skeleton in $\langle E,|\cdot|\rangle$. Now, it is enough to show that $D$ is induced by a projectional skeleton in $\langle E,\|\cdot\|\rangle$ if and only if $D$ is induced by a 1-projectional skeleton in $\langle E,|\cdot|\rangle$. If $D$ is induced by a 1-projectional skeleton in $\langle E,|\cdot|\rangle$, the same system of projections is a projectional skeleton in $\langle E,\|\cdot\|\rangle$ and induces $D$. Conversely, let $D$ be induced by a projectional skeleton in $\langle E,\|\cdot\|\rangle$. Then, by \cite[Theorem 15]{kubisSkeleton}, $D$ generates projections in $E$ and there exists a 1-projectional skeleton $\mathfrak{s}$ in $\langle E,|\cdot|\rangle$ such that $D\subset D(\mathfrak{s})$. Since the projectional skeleton in $\langle E,\|\cdot\|\rangle$ inducing $D$ remains a projectional skeleton in $\langle E,|\cdot|\rangle$, \cite[Corollary 19]{kubisSkeleton} implies $D= D(\mathfrak{s})$. \end{proof}
\section{Decoupling methods} \label{decoupling_methods} Even though Problem \ref{semi_discrete_problem} is discretized in time, it is still coupled across the interface, which makes it impossible to solve the subproblems independently. To deal with this obstacle, we use an iterative approach on each of the subintervals $I_n$ and introduce decoupling strategies. For a fixed time interval $I_n$, every iteration of a decoupling method consists of the following steps: \begin{enumerate} \item Using the solution of the solid subproblem from the previous iteration $\vec{U}_{s, k}^{(i - 1)}$, we set the boundary conditions on the interface at the time $t_n$, solve the fluid problem and get the solution $\vec{U}_{f, k}^{(i)}$. \item Similarly, we use the solution $\vec{U}_{f, k}^{(i)}$ for setting the boundary conditions of the solid problem and obtain an intermediate solution $\widetilde{\vec{U}}_{s, k}^{(i)}$. \item We apply a decoupling function to the intermediate solution $\widetilde{\vec{U}}_{s, k}^{(i)}$ and acquire $\vec{U}_{s, k}^{(i)}$. \end{enumerate} This procedure is visualized by \[ \vec{U}_{s, k}^{(i - 1)} \xrightarrow[\text{subproblem}]{\text{fluid}} \vec{U}_{f, k}^{(i)} \xrightarrow[\text{subproblem}]{\text{solid}} \widetilde{\vec{U}}_{s, k}^{(i)} \xrightarrow[\text{function}]{\text{decoupling}} \vec{U}_{s, k}^{(i)}. \] The main challenge emerges from the transition between $\widetilde{\vec{U}}_{s, k}^{(i)}$ and $\vec{U}_{s, k}^{(i)}$. In the next subsections, we present two techniques: the relaxation method, described in Section \ref{relaxation}, and the shooting method, described in Section \ref{shooting}. The definition of Problem~\ref{decoupled_problem} makes precise how the intermediate solution $\widetilde{\vec{U}}_{s,k}^{(i)}$ is obtained from $\vec{U}_{s, k}^{(i - 1)}$. \begin{problem} For a given $\vec{U}_{s, k}^{(i - 1)} \in X_{s, k}^n$, find $\vec{U}_{f, k}^{(i)} \in X_{f, k}^n$ and $\widetilde{\vec{U}}_{s, k}^{(i)} \in X_{s, k}^n$ such that: \begin{flalign*} B_f^n &\left(\begin{array}{l} \vec{U}_{f, k}^{(i)} \\ \vec{U}_{s, k}^{(i - 1)} \end{array}\right)(\boldsymbol{\Phi}_{f, k}) = F_f^n(\boldsymbol{\Phi}_{f, k}) \\ B_s^n & \left(\begin{array}{l} \vec{U}_{f, k}^{(i)} \\ \widetilde{\vec{U}}_{s, k}^{(i)} \end{array}\right)(\boldsymbol{\Phi}_{s, k}) = F_s^n(\boldsymbol{\Phi}_{s, k}) \end{flalign*} for all $\boldsymbol{\Phi}_{f, k} \in Y_{f, k}^n$ and $\boldsymbol{\Phi}_{s, k} \in Y_{s, k}^n$. \label{decoupled_problem} \end{problem} \begin{remark} \normalfont Even though in Problem \ref{decoupled_problem} we demand $\vec{U}_{s, k}^{(i - 1)} \in X_{s, k}^n$, in fact, assuming we already know $\vec{U}_{s, k}(t_{n - 1})$, it is sufficient to set $\left(\vec{U}_{s,k}^{(i - 1)}(t_n)\right)\Big|_{\Gamma}$. The semi-discrete fluid operator (\ref{b_f^n}) is coupled with the solid operator (\ref{b_s^n}) only across the interface~$\Gamma$. Additionally, the interpolation operator (\ref{interpolation_operator_primal}) constructs values over the whole time interval $I_n$ based only on values in the points $t_{n - 1}$ and $t_n$. \label{boundary_values} \end{remark} \subsection{Relaxation method} \label{relaxation} The first of the presented methods uses a simple interpolation operator and is an example of a fixed-point method. It iterates the solution of the two subproblems, each taking the interface values from the last iterate of the other; a sketch of the resulting loop is given below.
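The following minimal sketch shows one such fixed-point sweep on a single macro time-step in Python; the routines \texttt{solve\_fluid}, \texttt{solve\_solid} and \texttt{interface\_norm} are hypothetical placeholders for the two subproblem solves of Problem~\ref{decoupled_problem} and for the $l^{\infty}$ norm on $\Gamma$, not part of our actual implementation.
\begin{verbatim}
def undamped_sweep(U_s, tol=1e-12, max_iter=100):
    # U_s plays the role of U_{s,k}^{(0)} = U_{s,k}(t_{n-1})
    for i in range(max_iter):
        U_f = solve_fluid(U_s)        # step 1: fluid subproblem
        U_s_tilde = solve_solid(U_f)  # step 2: solid subproblem
        if interface_norm(U_s_tilde - U_s) < tol:
            return U_f, U_s_tilde     # fixed point reached
        U_s = U_s_tilde               # step 3: decoupling function,
                                      # here simply the identity
    raise RuntimeError("fixed-point iteration did not converge")
\end{verbatim}
Taking the new iterate equal to the intermediate solid solution, as in this sketch, corresponds to the choice $\tau = 1$ in the relaxation function defined next.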
For reasons of stability, such explicit partitioned iteration usually requires the introduction of a damping parameter. Here, we only consider fixed damping parameters. \begin{definition}[Relaxation Function] Let $\vec{U}_{s, k}^{(i -1)} \in X_{s, k}^n$ be given and let $\widetilde{\vec{U}}_{s, k}^{(i)} \in X_{s, k}^n$ be the corresponding solid solution of Problem \ref{decoupled_problem}. Then for $\tau \in [0, 1]$ the relaxation function $R: X_{s, k}^n \to X_{s, k}^n$ is defined as: \begin{equation*} R(\vec{U}_{s, k}^{(i -1)})\coloneqq \tau \widetilde{\vec{U}}_{s, k}^{(i)} + (1 - \tau)\vec{U}_{s, k}^{(i -1)}. \end{equation*} \end{definition} Assuming that we already know the value $\vec{U}_{s, k}(t_{n -1})$, we pose \begin{equation*}\left\{ \begin{aligned} \vec{U}_{s, k}^{(0)}(t_n)&\coloneqq \vec{U}_{s, k}(t_{n - 1}), \\ \vec{U}_{s, k}^{(i)}(t_n)&\coloneqq R(\vec{U}_{s,k}^{(i-1)})(t_n). \end{aligned}\right. \end{equation*} The stopping criterion is based on checking how far the computed solution is from the fixed point. We evaluate the $l^{\infty}$ norm of $\left( \widetilde{\vec{U}}_{s, k}^{(i + 1)}(t_n) - \vec{U}_{s, k}^{(i)}(t_n) \right)\Big|_{\Gamma}$ and once this norm is sufficiently small for some index $i_{\text{stop}}$, we set $$\vec{U}_{k}(t_n) \coloneqq \left(\begin{matrix} \vec{U}_{f, k}^{(i_{\text{stop}})} \\ \vec{U}_{s, k}^{(i_{\text{stop}})} \end{matrix}\right)(t_n).$$ \subsection{Shooting method} \label{shooting} Here we present another iterative method, in which we define a root-finding problem on the interface. We use the Newton method with a matrix-free GMRES method to approximate the action of the inverse of the Jacobian. \begin{definition}[Shooting Function] Let $\vec{U}_{s, k}^{(i -1)} \in X_{s, k}^n$ be given and let $\widetilde{\vec{U}}_{s, k}^{(i)} \in X_{s, k}^n$ be the corresponding solid solution of Problem \ref{decoupled_problem}. Then the shooting function $S: (X_{s, k}^n)^2 \to (L^2(\Gamma))^2$ is defined as: \begin{equation} S(\vec{U}_{s, k}^{(i -1)})\coloneqq \left(\vec{U}_{s,k}^{(i - 1)}(t_n) - \widetilde{\vec{U}}_{s,k}^{(i)}(t_n) \right)\Big|_{\Gamma}. \label{shooting_function} \end{equation} \end{definition} Our aim is to find a root of the function (\ref{shooting_function}). To do so, we employ the Newton method \begin{equation*} S'(\vec{U}_{s,k}^{(i - 1)})\vec{d} = -S(\vec{U}_{s,k}^{(i - 1)}). \end{equation*} In each iteration of the Newton method, the greatest difficulty is computing and inverting the Jacobian $S'(\vec{U}_{s,k}^{(i - 1)})$. Instead of approximating all entries of the Jacobian matrix, we consider an approximation of the matrix-vector product only. Since the Jacobian matrix-vector product can be interpreted as a directional derivative, one can use the approximation \begin{equation} S'(\vec{U}_{s,k}^{(i - 1)})\vec{d} \approx \frac{S(\vec{U}_{s,k}^{(i - 1)} + \varepsilon \vec{d} ) - S(\vec{U}_{s,k}^{(i - 1)} )}{\varepsilon}. \label{jacobian_operator} \end{equation} In principle, the vector $\vec{d}$ is not known. Thus, the formula above cannot be used to solve the system directly. However, it is possible to use this technique with iterative solvers which only require the computation of matrix-vector products. Because we did not want to assume much structure on the operator (\ref{jacobian_operator}), we chose the matrix-free GMRES method. Such matrix-free Newton-Krylov methods are frequently used if the Jacobian is not available or too costly for evaluation~\cite{KnollKeyes2004}.
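To make this concrete, the following is a minimal self-contained sketch of one such matrix-free Newton step in Python, assuming SciPy is available; the function \texttt{S} is a placeholder for the shooting function (\ref{shooting_function}), hiding the fluid and solid subproblem solves, and interface values are assumed to be stored in a NumPy array. It is an illustration under these assumptions, not a transcript of our actual implementation.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_step(S, u, eps=1e-7):
    # u ... current interface iterate U_{s,k}^{(i-1)}(t_n)|_Gamma
    # S ... shooting function: interface values -> interface residual
    r = S(u)                       # residual S(U_{s,k}^{(i-1)})

    def jac_vec(d):                # finite-difference Jacobian-vector
        return (S(u + eps * d) - r) / eps

    J = LinearOperator((r.size, r.size), matvec=jac_vec)
    d, info = gmres(J, -r)         # matrix-free solve of S'(u) d = -S(u)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return u + d                   # updated interface values
\end{verbatim}
The quotient implemented in \texttt{jac\_vec} is exactly the approximation (\ref{jacobian_operator}); GMRES accesses the Jacobian only through this matrix-vector product.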
Once $\vec{d}$ is computed, we set \begin{equation} \begin{cases} \vec{U}_{s, k}^{(0)}(t_n)\big|_{\Gamma}: = \vec{U}_{s, k}(t_{n - 1})\big|_{\Gamma}, \\ \vec{U}_{s, k}^{(i)}(t_n)\big|_{\Gamma}\coloneqq \vec{U}^{(i - 1)}_{s, k}(t_n)\big|_{\Gamma} + \vec{d}. \end{cases} \end{equation} Here, we stop iterating when the $l^{\infty}$ norm of $S(\vec{U}_{s, k}^{(i)})$ is sufficiently small and then we take $$\vec{U}_{k}(t_n)\big|_{\Gamma} \coloneqq \left(\begin{matrix} \vec{U}_{f, k}^{(i_{\text{stop}})} \\ \widetilde{\vec{U}}_{s, k}^{(i_{\text{stop}})} \end{matrix}\right)(t_n)\big|_{\Gamma}.$$ We note that the method presented here is similar to the one presented in \cite{Degroote2009}, where the authors also introduced a root-finding problem on the interface and solved it with a quasi-Newton method. The main difference lies in the approximation of the inverse of the Jacobian. Instead of using a matrix-free linear solver, there the Jacobian is approximated by solving a least-squares problem. \subsection{Numerical comparison of the performance} \label{comparison} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=14, ymin=1.6782839494474103e-18, ymax=7.2335262037580485e-06, xtick={0,5,10}, ytick={1.0e-16, 1.0e-12, 1.0e-8}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 5.063468341658501e-06) (2.0, 1.5164739991123107e-06) (3.0, 4.541735599230415e-07) (4.0, 1.360218687206842e-07) (5.0, 4.073761844344486e-08) (6.0, 1.2200638101940217e-08) (7.0, 3.6540077493352834e-09) (8.0, 1.09435037943872e-09) (9.0, 3.2775047633182807e-10) (10.0, 9.815903496973628e-11) (11.0, 2.9397962946332215e-11) (12.0, 8.804490045335847e-12) (13.0, 2.636884962462999e-12) (14.0, 7.897291452087574e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 7.2335262037580485e-06) (6.0, 1.6782839494474103e-18) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=14, ymin=1.6143131874927976e-18, ymax=7.2336889009813875e-06, xtick={0,5,10}, ytick={1.0e-16, 1.0e-12, 1.0e-8}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 5.063582229714357e-06) (2.0, 1.516508024155365e-06) (3.0, 4.5418372513637254e-07) (4.0, 1.3602490562847677e-07) (5.0, 4.073852572897223e-08) (6.0, 1.2200909154588915e-08) (7.0, 3.654088726135697e-09) (8.0, 1.0943745710675214e-09) (9.0, 3.277577034094575e-10) (10.0, 9.816119403036871e-11) (11.0, 2.939860797104603e-11) (12.0, 8.804682734489964e-12) (13.0, 2.6369425288130414e-12) (14.0, 7.897463161900205e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 7.2336889009813875e-06) (6.0, 1.6143131874927976e-18) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title 
style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=14, ymin=3.1299108171965233e-17, ymax=7.234364854672545e-06, xtick={0,5,10}, ytick={1.0e-16, 1.0e-12, 1.0e-8}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 5.0640553973980905e-06) (2.0, 1.516686459736926e-06) (3.0, 4.5424819164532013e-07) (4.0, 1.3604752339998525e-07) (5.0, 4.074629352738889e-08) (6.0, 1.2203533968598318e-08) (7.0, 3.6549644338047178e-09) (8.0, 1.0946637381482126e-09) (9.0, 3.27852382732407e-10) (10.0, 9.819197494872247e-11) (11.0, 2.9408554182419644e-11) (12.0, 8.807880058075925e-12) (13.0, 2.637965706152789e-12) (14.0, 7.900728285347777e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 7.234364854672545e-06) (6.0, 3.1299108171965233e-17) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \end{multicols} \caption{Performance of decoupling methods for Configuration \ref{configuration_1} in one macro time-step in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{ n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_one_timestep_configuration_1} \end{figure} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=20, ymin=2.5981947607924254e-17, ymax=0.007656308303176444, xtick={0, 10, 20}, ytick={1.0e-16, 1.0e-12, 1.0e-8, 1.0e-4}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 0.005359415811535416) (2.0, 0.0016050925565451867) (3.0, 0.00048070951497492957) (4.0, 0.00014396779939445696) (5.0, 4.311694922066177e-05) (6.0, 1.2913105301192662e-05) (7.0, 3.8673490456006905e-06) (8.0, 1.1582333328392495e-06) (9.0, 3.468795986062422e-07) (10.0, 1.0388706259737299e-07) (11.0, 3.111316455267489e-08) (12.0, 9.318090293869872e-09) (13.0, 2.790677516931567e-09) (14.0, 8.35780819315297e-10) (15.0, 2.503082492867421e-10) (16.0, 7.496490104927928e-11) (17.0, 2.2451248778689066e-11) (18.0, 6.723954703037041e-12) (19.0, 2.0137587739115056e-12) (20.0, 6.030839615136963e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 0.007656308303176444) (6.0, 1.3362380439487633e-12) (11.0, 2.5981947607924254e-17) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=20, ymin=3.8236446003307725e-17, ymax=0.007656309743571194, xtick={0, 10, 20}, ytick={1.0e-16, 1.0e-12, 1.0e-8, 1.0e-4}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 0.005359416820499795) (2.0, 0.0016050927552661936) (3.0, 0.0004807095435034251) (4.0, 0.00014396779865734116) (5.0, 4.3116946220042164e-05) (6.0, 
1.2913103569907667e-05) (7.0, 3.867348277715173e-06) (8.0, 1.1582330281757546e-06) (9.0, 3.468794850001077e-07) (10.0, 1.0388702187616845e-07) (11.0, 3.111315033098777e-08) (12.0, 9.318085433252181e-09) (13.0, 2.7906758946635684e-09) (14.0, 8.357802788856557e-10) (15.0, 2.503080787801899e-10) (16.0, 7.4964826958197e-11) (17.0, 2.245124566513367e-11) (18.0, 6.723933459764385e-12) (19.0, 2.0137405319378455e-12) (20.0, 6.030960440037066e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 0.007656309743571194) (6.0, 1.3908079328133397e-12) (11.0, 3.8236446003307725e-17) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{semilogyaxis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Evaluations of the decoupling function}}, ylabel={\large{Error on the interface in $l^{\infty}$ norm}}, xmin=0, xmax=20, ymin=5.225568472448191e-16, ymax=0.008832788810731775, xtick={0, 10, 20}, ytick={1.0e-16, 1.0e-12, 1.0e-8, 1.0e-4}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1.0, 0.006182952167512061) (2.0, 0.001851894094775634) (3.0, 0.0005546722347904078) (4.0, 0.00016613331350273555) (5.0, 4.9759619095894025e-05) (6.0, 1.4903812813475945e-05) (7.0, 4.463933820745542e-06) (8.0, 1.3370206885870224e-06) (9.0, 4.004594325296193e-07) (10.0, 1.1994411584002287e-07) (11.0, 3.5925215834573944e-08) (12.0, 1.0760187730531687e-08) (13.0, 3.2228516215462864e-09) (14.0, 9.652967270027322e-10) (15.0, 2.8912230173214463e-10) (16.0, 8.659680184574186e-11) (17.0, 2.5937321917881572e-11) (18.0, 7.768221735936033e-12) (19.0, 2.3266147157063255e-12) (20.0, 6.973162229222618e-13) }; \addplot[ color=black, mark=square, ] coordinates { (1.0, 0.008832788810731775) (6.0, 2.2599031490117177e-11) (11.0, 5.225568472448191e-16) }; \legend{\large{Relaxation}, \large{Shooting}} \end{semilogyaxis} \end{tikzpicture} \end{center} \end{multicols} \caption{Performance of decoupling methods for Configuration \ref{configuration_2} in one macro time-step in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{ n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_one_timestep_configuration_2} \end{figure} In Figures~\ref{comparison_one_timestep_configuration_1} and \ref{comparison_one_timestep_configuration_2} we present the comparison of the performance of both methods based on the number of micro time-steps. We assumed that the micro time-steps have a uniform size. We performed the simulations in the case of no micro time-stepping ($L_n = 1$, $M_n = 1$), micro time-stepping in the fluid subdomain ($M_n = 10$, $L_n = 1$) and the solid subdomain ($M_n = 1$, $L_n = 10$). Figure~\ref{comparison_one_timestep_configuration_1} shows results for the right hand side according to Configuration~\ref{configuration_1}. Figure~\ref{comparison_one_timestep_configuration_2} corresponds to Configuration~\ref{configuration_2}. We investigated one macro time-step $I_2 = [0.02, 0.04]$. We set the relaxation parameter to $\tau = 0.7$. Both methods are very robust concerning the number of micro time-steps. The relaxation method, as expected, has a linear convergence rate. In both cases, despite the nested GMRES method, the performance of the shooting method is much better. For Configuration~\ref{configuration_1}, the relaxation method needs 13 iterations to converge. 
The shooting method needs only 2 iterations of the Newton method (which is the reason why each of the graphs in Figure~\ref{comparison_one_timestep_configuration_1} displays only two evaluations of the error) and overall requires 6 evaluations of the decoupling function. In the case of Configuration~\ref{configuration_2}, both methods need more iterations to reach the same level of accuracy. The number of iterations of the relaxation method increases to 20 while the shooting method needs 3 iterations of the Newton method and 11 evaluations of the decoupling function. \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 15) (2, 14) (3, 14) (4, 14) (5, 14) (6, 15) (7, 15) (8, 14) (9, 15) (10, 15) (11, 15) (12, 14) (13, 14) (14, 15) (15, 14) (16, 15) (17, 15) (18, 14) (19, 14) (20, 14) (21, 15) (22, 15) (23, 15) (24, 15) (25, 14) (26, 14) (27, 14) (28, 15) (29, 15) (30, 15) (31, 14) (32, 14) (33, 14) (34, 15) (35, 15) (36, 15) (37, 14) (38, 15) (39, 14) (40, 15) (41, 15) (42, 15) (43, 15) (44, 14) (45, 14) (46, 14) (47, 15) (48, 15) (49, 14) (50, 15) }; \addplot[ color=black, mark=square, ] coordinates { (1, 6) (2, 6) (3, 6) (4, 6) (5, 6) (6, 6) (7, 6) (8, 6) (9, 6) (10, 6) (11, 5) (12, 6) (13, 6) (14, 6) (15, 6) (16, 6) (17, 6) (18, 6) (19, 6) (20, 6) (21, 6) (22, 5) (23, 6) (24, 6) (25, 6) (26, 6) (27, 6) (28, 6) (29, 6) (30, 6) (31, 6) (32, 6) (33, 6) (34, 6) (35, 6) (36, 6) (37, 6) (38, 6) (39, 6) (40, 6) (41, 6) (42, 5) (43, 6) (44, 6) (45, 6) (46, 6) (47, 6) (48, 6) (49, 6) (50, 5) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 15) (2, 14) (3, 14) (4, 14) (5, 14) (6, 15) (7, 15) (8, 14) (9, 15) (10, 15) (11, 15) (12, 14) (13, 14) (14, 15) (15, 14) (16, 15) (17, 15) (18, 14) (19, 14) (20, 14) (21, 15) (22, 15) (23, 15) (24, 15) (25, 14) (26, 14) (27, 14) (28, 15) (29, 15) (30, 15) (31, 14) (32, 14) (33, 14) (34, 15) (35, 15) (36, 15) (37, 14) (38, 15) (39, 14) (40, 15) (41, 15) (42, 15) (43, 15) (44, 14) (45, 14) (46, 14) (47, 15) (48, 15) (49, 14) (50, 15) }; \addplot[ color=black, mark=square, ] coordinates { (1, 6) (2, 6) (3, 6) (4, 6) (5, 6) (6, 6) (7, 6) (8, 6) (9, 6) (10, 6) (11, 5) (12, 6) (13, 6) (14, 6) (15, 6) (16, 6) (17, 6) (18, 6) (19, 6) (20, 6) (21, 6) (22, 5) (23, 6) (24, 6) (25, 6) (26, 6) (27, 6) (28, 6) (29, 6) (30, 6) (31, 6) (32, 6) (33, 6) (34, 6) (35, 6) (36, 6) (37, 6) (38, 6) (39, 6) (40, 6) (41, 6) (42, 5) (43, 6) (44, 6) (45, 6) (46, 6) (47, 6) (48, 6) (49, 6) (50, 5) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{center} 
\begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=north east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 15) (2, 14) (3, 14) (4, 14) (5, 14) (6, 15) (7, 14) (8, 14) (9, 15) (10, 15) (11, 14) (12, 15) (13, 14) (14, 15) (15, 15) (16, 15) (17, 15) (18, 14) (19, 14) (20, 14) (21, 15) (22, 15) (23, 14) (24, 14) (25, 14) (26, 15) (27, 14) (28, 15) (29, 15) (30, 14) (31, 14) (32, 14) (33, 14) (34, 15) (35, 15) (36, 15) (37, 15) (38, 14) (39, 14) (40, 14) (41, 15) (42, 15) (43, 14) (44, 14) (45, 14) (46, 15) (47, 15) (48, 14) (49, 15) (50, 14) }; \addplot[ color=black, mark=square, ] coordinates { (1, 6) (2, 6) (3, 6) (4, 5) (5, 6) (6, 6) (7, 6) (8, 6) (9, 6) (10, 6) (11, 6) (12, 6) (13, 6) (14, 6) (15, 6) (16, 6) (17, 6) (18, 6) (19, 6) (20, 6) (21, 6) (22, 6) (23, 6) (24, 6) (25, 6) (26, 6) (27, 6) (28, 6) (29, 5) (30, 6) (31, 6) (32, 6) (33, 6) (34, 6) (35, 6) (36, 6) (37, 6) (38, 6) (39, 6) (40, 6) (41, 5) (42, 5) (43, 6) (44, 6) (45, 6) (46, 6) (47, 6) (48, 5) (49, 5) (50, 5) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \end{multicols} \caption{Number of evaluations of the decoupling functions for Configuration \ref{configuration_1} needed for convergence on the time interval $I = [0, 1]$ for $N = 50$ in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{ n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_whole_timeline_configuration_1} \end{figure} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textcolor{white}{adjust to the next line}} \\ \large{\textbf{No micro time-stepping}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=south east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 21) (2, 20) (3, 20) (4, 20) (5, 20) (6, 21) (7, 21) (8, 21) (9, 20) (10, 20) (11, 21) (12, 21) (13, 21) (14, 21) (15, 21) (16, 20) (17, 20) (18, 21) (19, 21) (20, 21) (21, 21) (22, 21) (23, 20) (24, 21) (25, 21) (26, 21) (27, 20) (28, 21) (29, 20) (30, 20) (31, 21) (32, 21) (33, 21) (34, 20) (35, 21) (36, 21) (37, 20) (38, 21) (39, 21) (40, 21) (41, 20) (42, 21) (43, 21) (44, 21) (45, 21) (46, 21) (47, 20) (48, 20) (49, 20) (50, 21) }; \addplot[ color=black, mark=square, ] coordinates { (1, 10) (2, 11) (3, 11) (4, 11) (5, 11) (6, 11) (7, 11) (8, 11) (9, 11) (10, 11) (11, 11) (12, 11) (13, 11) (14, 11) (15, 11) (16, 11) (17, 11) (18, 11) (19, 11) (20, 11) (21, 11) (22, 11) (23, 11) (24, 11) (25, 11) (26, 11) (27, 11) (28, 11) (29, 11) (30, 11) (31, 11) (32, 11) (33, 11) (34, 11) (35, 11) (36, 11) (37, 11) (38, 11) (39, 11) (40, 11) (41, 11) (42, 11) (43, 11) (44, 11) (45, 11) (46, 11) (47, 11) (48, 11) (49, 11) (50, 11) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{multicols}{2} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the fluid subdomain}}}, xlabel={\large{Macro 
time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=south east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 21) (2, 20) (3, 20) (4, 20) (5, 20) (6, 21) (7, 21) (8, 21) (9, 20) (10, 20) (11, 21) (12, 21) (13, 21) (14, 21) (15, 21) (16, 20) (17, 20) (18, 21) (19, 21) (20, 21) (21, 21) (22, 21) (23, 20) (24, 21) (25, 21) (26, 21) (27, 20) (28, 21) (29, 20) (30, 20) (31, 21) (32, 21) (33, 21) (34, 20) (35, 21) (36, 21) (37, 20) (38, 21) (39, 21) (40, 21) (41, 20) (42, 21) (43, 21) (44, 21) (45, 21) (46, 21) (47, 20) (48, 20) (49, 20) (50, 21) }; \addplot[ color=black, mark=square, ] coordinates { (1, 10) (2, 11) (3, 11) (4, 11) (5, 11) (6, 11) (7, 11) (8, 11) (9, 11) (10, 11) (11, 11) (12, 11) (13, 11) (14, 11) (15, 11) (16, 11) (17, 11) (18, 11) (19, 11) (20, 11) (21, 11) (22, 11) (23, 11) (24, 11) (25, 11) (26, 11) (27, 11) (28, 11) (29, 11) (30, 11) (31, 11) (32, 11) (33, 11) (34, 11) (35, 11) (36, 11) (37, 11) (38, 11) (39, 11) (40, 11) (41, 11) (42, 11) (43, 11) (44, 11) (45, 11) (46, 11) (47, 11) (48, 11) (49, 11) (50, 11) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale = 0.55] \begin{axis}[ title style={align=center}, title={\large{\textbf{Micro time-stepping}}\\\large{\textbf{in the solid subdomain}}}, xlabel={\large{Macro time-step}}, ylabel={\large{Evaluations of the decoupling function}}, xmin=0, xmax=50, ymin=0, ymax=25, xtick={10, 20, 30, 40}, ytick={0, 5, 10, 15, 20, 25}, legend pos=south east, ymajorgrids=true, grid style=dashed, ] \addplot[ color=black, mark=o, ] coordinates { (1, 21) (2, 20) (3, 20) (4, 20) (5, 20) (6, 21) (7, 21) (8, 21) (9, 20) (10, 21) (11, 21) (12, 21) (13, 21) (14, 21) (15, 21) (16, 20) (17, 21) (18, 21) (19, 21) (20, 21) (21, 21) (22, 18) (23, 21) (24, 21) (25, 21) (26, 21) (27, 21) (28, 20) (29, 21) (30, 21) (31, 21) (32, 21) (33, 21) (34, 21) (35, 20) (36, 21) (37, 21) (38, 21) (39, 21) (40, 21) (41, 18) (42, 21) (43, 21) (44, 21) (45, 21) (46, 21) (47, 20) (48, 21) (49, 21) (50, 21) }; \addplot[ color=black, mark=square, ] coordinates { (1, 11) (2, 11) (3, 11) (4, 11) (5, 11) (6, 11) (7, 11) (8, 11) (9, 11) (10, 11) (11, 11) (12, 11) (13, 11) (14, 11) (15, 11) (16, 11) (17, 11) (18, 11) (19, 11) (20, 11) (21, 11) (22, 11) (23, 11) (24, 11) (25, 11) (26, 11) (27, 11) (28, 11) (29, 11) (30, 11) (31, 11) (32, 11) (33, 11) (34, 11) (35, 11) (36, 11) (37, 11) (38, 11) (39, 11) (40, 11) (41, 11) (42, 11) (43, 11) (44, 11) (45, 11) (46, 11) (47, 11) (48, 11) (49, 11) (50, 11) }; \legend{\large{Relaxation}, \large{Shooting}} \end{axis} \end{tikzpicture} \end{center} \end{multicols} \caption{Number of evaluations of the decoupling functions for Configuration \ref{configuration_2} needed for convergence on the time interval $I = [0, 1]$ for $N = 50$ in the case of $M_{n} = 1$ and $L_{n} = 1$ (top), $M_{n} = 10$ and $L_{ n} = 1$ (left), $M_{n} = 1$ and $L_{n} = 10$ (right).} \label{comparison_whole_timeline_configuration_2} \end{figure} In Figures~\ref{comparison_whole_timeline_configuration_1} and \ref{comparison_whole_timeline_configuration_2} we show the number of evaluations of the decoupling function needed to reach the stopping criteria throughout the complete time interval $I = [0, 1]$ for $N = 50$. 
Similarly, we performed the simulations in the cases of no micro time-stepping and of micro time-stepping in the fluid or the solid subdomain. We considered both Configuration~\ref{configuration_1} and~\ref{configuration_2}. In the case of Configuration~\ref{configuration_1}, the number of evaluations of the decoupling function using the relaxation method varied between 14 and 15. For the shooting method, this value was mostly equal to 6, with a few exceptions where only 5 evaluations were needed. For Configuration~\ref{configuration_2}, the relaxation method needed between 18 and 21 iterations, while for the shooting method the count was almost constant at 11. For each configuration, the graphs corresponding to no micro time-stepping and to micro time-stepping in the fluid subdomain are identical, while introducing micro time-stepping in the solid subdomain resulted in slight variations. For both decoupling methods, the independence of the performance from the number of micro time-steps extends to the whole time interval $I$.
\section{Goal oriented estimation}
\label{goal_oriented_estimation}
In the preceding sections, we formulated the semi-discrete problem enabling the usage of different time-step sizes in the fluid and solid subdomains, and in Section~\ref{decoupling_methods} we presented methods designed to efficiently solve such problems. However, so far the choice of the step sizes was purely arbitrary. In this section, we present an easily localized error estimator, which can be used as a criterion for the adaptive choice of the time-step size. For the construction of the error estimator, we use the dual weighted residual (DWR) method~\cite{BeckerRannacher2001}. Given a differentiable functional $J: X \to \mathbb{R}$, our aim is to approximate $J(\vec{U}) - J(\vec{U}_k)$, where $\vec{U}$ is the solution to Problem~\ref{continuous_problem} and $\vec{U}_k$ is the solution to Problem~\ref{semi_discrete_problem}. The goal functional $J: X \to \mathbb{R}$ is split into two parts $J_f: X_f \to \mathbb{R}$ and $J_s:X_s \to \mathbb{R}$ which refer to the fluid and solid subdomains, respectively,
$$J(\vec{U}) \coloneqq J_f(\vec{U}_f) + J_s(\vec{U}_s). $$
The DWR method embeds the computation of $J$ into an optimal control framework: it is equivalent to solving the following optimization problem
\begin{equation*} J(\vec{U}) = \min!, \quad B(\vec{U})(\boldsymbol{\Phi}) = F(\boldsymbol{\Phi}) \textnormal{ for all }\boldsymbol{\Phi} \in X, \end{equation*}
where
\begin{flalign*} B(\vec{U})(\boldsymbol{\Phi}) & \coloneqq B_f(\vec{U})(\boldsymbol{\Phi}_f) + B_s(\vec{U})(\boldsymbol{\Phi}_s), \\ F(\boldsymbol{\Phi}) & \coloneqq F_f(\boldsymbol{\Phi}_f) + F_s(\boldsymbol{\Phi}_s). \end{flalign*}
Solving this problem corresponds to finding stationary points of the Lagrangian $\mathcal{L}: X \times (X \oplus Y_k) \to \mathbb{R}$,
\begin{equation*} \mathcal{L}(\vec{U}, \vec{Z}) \coloneqq J(\vec{U}) + F(\vec{Z}) - B(\vec{U})(\vec{Z}). \end{equation*}
We cannot take $X \times X$ as the domain of $\mathcal{L}$ because we operate in a nonconforming set-up, that is, $Y_k \not\subset X$. Because the form $B$ describes a linear problem, finding stationary points of $\mathcal{L}$ is equivalent to solving the following problem:
\begin{problem} For a given $\vec{U} \in X$ being the solution of Problem \ref{continuous_problem}, find $\vec{Z} \in X$ such that:
\begin{flalign*} B(\boldsymbol{\Xi})(\vec{Z}) = J'_{\vec{U}}(\boldsymbol{\Xi}) \end{flalign*}
for all $\boldsymbol{\Xi} \in X$.
\label{lagrangian_problem}
\end{problem}
The solution $\vec{Z}$ is called an \textit{adjoint solution}. By $J'_{\vec U}(\boldsymbol{\Xi})$ we denote the G\^ateaux derivative of $J(\cdot)$ at $\vec U$ in the direction of the test function $\boldsymbol{\Xi}$.
\subsection{Adjoint problem}
\label{adjoint_problem}
\subsubsection{Continuous variational formulation}
\label{adjoint_continuous_variational_formulation}
As the first step in decoupling Problem~\ref{lagrangian_problem}, we would like to split the form $B$ into forms corresponding to the fluid and solid subproblems. However, we cannot fully reuse the forms (\ref{a_f}) and (\ref{a_s}) because of the interface terms: the forms have to be sorted with respect to the test functions. Thus, after defining the abbreviations
\[ \begin{aligned} \boldsymbol{\Xi}_f &\coloneqq \left(\begin{matrix} \xi_f \\ \eta_f \end{matrix}\right), \quad& \boldsymbol{\Xi}_s &\coloneqq \left(\begin{matrix} \xi_s \\ \eta_s \end{matrix}\right), \quad& \boldsymbol{\Xi} &\coloneqq \left(\begin{matrix} \boldsymbol{\Xi}_f \\ \boldsymbol{\Xi}_s \end{matrix}\right), \\ \vec{Z}_f &\coloneqq \left(\begin{matrix} z_f \\ y_f \end{matrix}\right), & \vec{Z}_s& \coloneqq \left(\begin{matrix} z_s \\ y_s \end{matrix}\right), & \vec{Z} &\coloneqq \left(\begin{matrix} \vec{Z}_f \\ \vec{Z}_s \end{matrix}\right) \end{aligned} \]
we choose the splitting
\begin{equation*} B(\boldsymbol{\Xi})(\vec{Z})\coloneqq \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) + \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}), \end{equation*}
where
\begin{subequations} \begin{flalign*} \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \coloneqq & - \int_I \langle \eta_f, \partial_t z_f \rangle_f \diff t + \int_I \widetilde{a}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \diff t + (\eta_f(T), z_f(T))_f, \\ \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}) \coloneqq & - \int_I \langle \eta_s, \partial_t z_s \rangle_s \diff t - \int_I \langle \xi_s, \partial_t y_s \rangle_s \diff t + \int_I \widetilde{a}_s(\boldsymbol{\Xi}_s)(\vec{Z}) \diff t \\ & \qquad + (\eta_s(T), z_s(T))_s + (\xi_s(T), y_s(T))_s \nonumber \end{flalign*} \end{subequations}
and
\begin{subequations} \begin{flalign*} \widetilde{a}_f(\boldsymbol{\Xi}_f)(\vec{Z}) \coloneqq & \; (\nu \nabla \eta_f, \nabla z_f)_f + (\beta \cdot \nabla \eta_f, z_f)_f + (\nabla \xi_f, \nabla y_f)_f \\ &\qquad - \langle \partial_{\vec{n}_f} \xi_f, y_f \rangle_{\Gamma} + \frac{\gamma}{h} \langle \xi_f, y_f \rangle_{\Gamma} - \langle \nu \partial_{\vec{n}_f} \eta_f, z_f \rangle_{\Gamma} + \frac{\gamma}{h}\langle \eta_f, z_f \rangle_{\Gamma} \nonumber \\ & \qquad + \langle \nu \partial_{\vec{n}_f} \eta_f, z_s \rangle_{\Gamma}, \nonumber \\ \widetilde{a}_s(\boldsymbol{\Xi}_s)(\vec{Z}) \coloneqq & \; (\lambda \nabla \xi_s, \nabla z_s)_s + (\delta \nabla \eta_s, \nabla z_s)_s - (\eta_s, y_s)_s \\ &\qquad - \frac{\gamma}{h} \langle \xi_s, y_f \rangle_{\Gamma} - \frac{\gamma}{h} \langle \eta_s, z_f \rangle_{\Gamma} - \langle \delta \partial_{\vec{n}_s} \eta_s, z_s \rangle_{\Gamma}. \nonumber \end{flalign*} \end{subequations}
We have applied integration by parts in time, which reveals that the adjoint problem runs backward in time.
That leads to the formulation of a continuous adjoint variational problem:
\begin{problem} For a given $\vec{U} \in X$ being the solution of Problem \ref{continuous_problem}, find $\vec{Z} \in X$ such that:
\[ \begin{aligned} \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}) &= (J_f)'_{\vec{U}}(\boldsymbol{\Xi}_f), \\ \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}) &= (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_s) \end{aligned} \]
for all $\boldsymbol{\Xi}_f \in X_f $ and $\boldsymbol{\Xi}_s \in X_s $.
\label{adjoint_continuous_problem}
\end{problem}
\subsubsection{Semi-discrete Petrov-Galerkin formulation}
\label{adjoint_semi_discrete_petrov_galerkin_formulation}
The semi-discrete formulation of the adjoint problem is similar to that of the primal problem. The main difference is that this time the trial functions are piecewise constant in time, $\vec{Z}_k \in Y_k$, while the test functions are piecewise linear in time, $\boldsymbol{\Xi}_{f, k} \in X_{f, k}$, $\boldsymbol{\Xi}_{s, k} \in X_{s, k}$. After rearranging the terms with respect to the test functions on every interval $I_n$, we arrive at the scheme
\begin{subequations} \begin{flalign*} \widetilde{B}_f^n(\boldsymbol{\Xi}_{f, k})(\vec{Z}_k) = & \ \frac{k_{f, n}^{M_{n}}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_n))(i_n^f\vec{Z}_{k}(t_n)) \\ & \quad+ \sum_{m = 1}^{M_{n} - 1} \bigg\{ (\eta_{f, k}(t^m_{f, n}), z_{f, k}(t^m_{f, n}) - z_{f, k}(t^{m + 1}_{f, n}))_f \nonumber \\ & \qquad \qquad +\frac{k^m_{f, n}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_{f, n}^m))(i_n^f\vec{Z}_{k}(t_{f, n}^{m})) \nonumber\\ & \qquad \qquad + \frac{k^{m + 1}_{f, n}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_{f, n}^m))(i_n^f\vec{Z}_{k}(t_{f, n}^{m + 1})) \bigg\} \nonumber \\ & \quad+ (\eta_{f, k}(t_{n - 1}), z_{f, k}(t_{n - 1}) - z_{f, k}(t_{f, n}^{1}))_f \nonumber\\ &\quad+ \frac{k^1_{f, n}}{2}\widetilde{a}_f(\boldsymbol{\Xi}_{f, k}(t_{n - 1}))(i_n^f\vec{Z}_{k}(t_{f, n}^{1})), \nonumber \\ \widetilde{B}^n_s(\boldsymbol{\Xi}_{s, k})(\vec{Z}_k) = & \ \frac{k_{s, n}^{L_n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t_n))(i_n^s\vec{Z}_{k}(t_n)) \\ & \quad+ \sum_{l = 1}^{L_n - 1} \bigg\{ (\eta_{s, k}(t_{s, n}^l), z_{s, k}(t_{s, n}^l) - z_{s, k}(t_{s, n}^{l + 1}))_s \nonumber \\ & \qquad \qquad + (\xi_{s,k}(t_{s, n}^l), y_{s, k}(t_{s, n}^l) - y_{s, k}(t_{s, n}^{l + 1}))_s \nonumber \\ & \qquad \qquad + \frac{k^l_{s, n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t_{s, n}^l))(i_n^s\vec{Z}_{k}(t_{s, n}^l)) \nonumber\\ & \qquad \qquad + \frac{k^{l + 1}_{s, n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t^l_{s, n}))(i_n^s\vec{Z}_{k}(t^{l + 1}_{s, n})) \bigg\} \nonumber \\ & \quad+ (\eta_{s, k}(t_{n - 1}), z_{s, k}(t_{n - 1}) - z_{s, k}(t^1_{s, n}))_s \nonumber \\ & \quad + (\xi_{s, k}(t_{n - 1}), y_{s, k}(t_{n - 1}) - y_{s, k}(t_{s, n}^1))_s \nonumber \\ & \quad + \frac{k^1_{s, n}}{2}\widetilde{a}_s(\boldsymbol{\Xi}_{s, k}(t_{n - 1}))(i_n^s\vec{Z}_{k}(t_{s, n}^1)). \nonumber \end{flalign*} \end{subequations}
Note that the adjoint problem does not have a designated initial value at the final time $T$. Instead, the starting value is implicitly defined by the variational formulation.
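This backward-in-time structure, and the fact that the value at $T$ follows from the scheme itself, can be seen in a scalar toy computation. The following sketch (all data hypothetical, plain backward Euler in place of the Petrov-Galerkin scheme above) solves a primal problem forward and the corresponding discrete adjoint backward; at $n = N$, the adjoint recursion is exactly the relation that fixes the otherwise unprescribed value at the final time.
\begin{verbatim}
import numpy as np

# Toy scalar analogue: primal u' = a*u + f by backward Euler (dG(0)),
# goal J(u) = sum_n k * j(u_n) with j(u) = u^2. All data hypothetical.
a, T, N = -1.0, 1.0, 100
k = T / N
f = lambda t: np.sin(2.0 * np.pi * t)
u = np.zeros(N + 1)
for n in range(1, N + 1):            # primal sweep: forward in time
    u[n] = (u[n - 1] + k * f(n * k)) / (1.0 - k * a)

jprime = lambda v: 2.0 * v           # derivative of the goal density
z = np.zeros(N + 2)                  # z[N + 1] = 0 closes the recursion
for n in range(N, 0, -1):            # adjoint sweep: backward in time
    # for n = N this line *is* the implicit terminal condition
    z[n] = (z[n + 1] + k * jprime(u[n])) / (1.0 - k * a)
\end{verbatim}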
The final schemes are constructed as sums over the macro time intervals $I_n$ plus contributions at the final time $T$:
\begin{subequations} \begin{flalign*} \widetilde{B}_f(\boldsymbol{\Xi}_{f, k})(\vec{Z}_k) = & \sum_{n = 1}^{N} \widetilde{B}_f^n(\boldsymbol{\Xi}_{f, k})(\vec{Z}_k) + (\eta_{f, k}(T), z_{f, k}(T))_f,\\ \widetilde{B}_s(\boldsymbol{\Xi}_{s, k})(\vec{Z}_{k}) = & \sum_{n = 1}^{N}\widetilde{B}_s^n(\boldsymbol{\Xi}_{s, k})(\vec{Z}_{k}) + (\eta_{s, k}(T), z_{s, k}(T))_s + (\xi_{s, k}(T), y_{s,k}(T))_s. \end{flalign*} \end{subequations}
With that at our disposal, we can formulate a semi-discrete adjoint variational problem:
\begin{problem} For a given $\vec{U} \in X$ being the solution of Problem \ref{continuous_problem}, find $\vec{Z}_k \in Y_k$ such that:
\[ \begin{aligned} \widetilde{B}_f(\boldsymbol{\Xi}_{f, k})(\vec{Z}_{k}) &= (J_f)'_{\vec{U}}(\boldsymbol{\Xi}_{f, k}), \\ \widetilde{B}_s(\boldsymbol{\Xi}_{s,k})(\vec{Z}_k) &= (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_{s,k}) \end{aligned} \]
for all $\boldsymbol{\Xi}_{f, k} \in X_{f, k}$ and $\boldsymbol{\Xi}_{s, k} \in X_{s, k}$.
\label{adjoint_semi_discrete_problem}
\end{problem}
After formulating the problem in a semi-discrete manner, the decoupling methods from Section~\ref{decoupling_methods} can be applied.
\subsection{A posteriori error estimate}
\label{aposteriori_error}
We define the primal residual, split into parts corresponding to the fluid and solid subproblems,
\begin{equation*} \rho(\vec{U})(\boldsymbol{\Phi}) \coloneqq \rho_f(\vec{U})(\boldsymbol{\Phi}_f) + \rho_s(\vec{U})(\boldsymbol{\Phi}_s), \end{equation*}
where
\begin{flalign*} \rho_f(\vec{U})(\boldsymbol{\Phi}_f) & \coloneqq F_f(\boldsymbol{\Phi}_f) - B_f(\vec{U})(\boldsymbol{\Phi}_f), \\ \rho_s(\vec{U})(\boldsymbol{\Phi}_s) & \coloneqq F_s(\boldsymbol{\Phi}_s) - B_s(\vec{U})(\boldsymbol{\Phi}_s). \nonumber \end{flalign*}
Similarly, we establish the adjoint residual resulting from the adjoint problem,
\begin{equation*} \rho^*(\vec{Z})(\boldsymbol{\Xi})\coloneqq \rho_f^*(\vec{Z})(\boldsymbol{\Xi}_f) + \rho_s^*(\vec{Z})(\boldsymbol{\Xi}_s), \end{equation*}
with
\begin{flalign*} \rho_f^*(\vec{Z})(\boldsymbol{\Xi}_f) & \coloneqq (J_f)'_{\vec{U}}(\boldsymbol{\Xi}_f) - \widetilde{B}_f(\boldsymbol{\Xi}_f)(\vec{Z}), \\ \rho_s^*(\vec{Z})(\boldsymbol{\Xi}_s) & \coloneqq (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_s)- \widetilde{B}_s(\boldsymbol{\Xi}_s)(\vec{Z}). \nonumber \end{flalign*}
Becker and Rannacher \cite{BeckerRannacher2001} introduced the a posteriori error representation:
\begin{multline} J(\vec{U}) - J(\vec{U}_k) = \frac{1}{2} \min_{\boldsymbol{\Phi}_k \in Y_{k}}\rho(\vec{U}_k)(\vec{Z} - \boldsymbol{\Phi}_k) + \frac{1}{2}\min_{\boldsymbol{\Xi}_k \in X_k}\rho^*(\vec{Z}_k)(\vec{U} - \boldsymbol{\Xi}_k)\\ + \mathcal{O}(|\vec{U} - \vec{U}_k|^3, |\vec{Z} - \vec{Z}_k|^3). \label{estimator} \end{multline}
This identity can be used to derive an a posteriori error estimate. Two steps of approximation are required: first, the third-order remainder is neglected; second, the approximation errors $\vec Z-\boldsymbol{\Phi}_k$ and $\vec U-\boldsymbol{\Xi}_k$, the \emph{weights}, are replaced by the interpolation errors $\vec Z-i_k\vec Z$ and $\vec U-i_k \vec U$, which are in turn replaced by discrete reconstructions, since the exact solutions $\vec U, \vec Z\in X$ are not available. See \cite{MeidnerRichter2014} and \cite{SchmichVexler2008} for a discussion of different reconstruction schemes. Due to these approximation steps, the estimator is not exact and does not provide rigorous bounds.
The estimator consists of a primal and an adjoint component. Each of them is split again into a fluid and a solid counterpart,
\begin{equation} \sigma_k \coloneqq \theta_{f, k} + \theta_{s, k} + \vartheta_{f, k} + \vartheta_{s, k}. \label{residualds_formula} \end{equation}
The primal estimators are derived from the primal residuals, where $\vec{U}_k$ and $\vec{Z}_k$ are the solutions to Problems~\ref{semi_discrete_problem} and~\ref{adjoint_semi_discrete_problem}, respectively:
\begin{flalign*} \theta_{f,k} & \coloneqq \frac{1}{2} \rho_f(\vec{U}_k)(\vec{Z}_{f, k}^{(1)} - \vec{Z}_{f, k}), \\ \theta_{s,k} & \coloneqq \frac{1}{2} \rho_s(\vec{U}_k)(\vec{Z}_{s, k}^{(1)} - \vec{Z}_{s, k}). \nonumber \end{flalign*}
The adjoint reconstructions $\vec{Z}_{f, k}^{(1)}$ and $\vec{Z}_{s, k}^{(1)}$ approximating the exact solution are constructed from $\vec{Z}_k$ using linear extrapolation (see Figure~\ref{reconstruction}, right),
\begin{flalign*} \vec{Z}_{f, k}^{(1)}\big|_{I_{f, n}^m} \coloneqq & \frac{t - \bar{t}^{m + 1}_{f, n}}{\bar{t}^{m - 1}_{f, n} - \bar{t}^{m + 1}_{f, n}}\vec{Z}_{f, k}(t^{m - 1}_{f, n}) + \frac{t - \bar{t}^{m - 1}_{f, n}}{\bar{t}^{m + 1}_{f, n} - \bar{t}^{m - 1}_{f, n}}\vec{Z}_{f, k}(t^{m + 1}_{f, n}), \\ \vec{Z}_{s, k}^{(1)}\big|_{I_{s, n}^m} \coloneqq & \frac{t - \bar{t}^{m + 1}_{s, n}}{\bar{t}^{m - 1}_{s, n} - \bar{t}^{m + 1}_{s, n}}\vec{Z}_{s, k}(t^{m - 1}_{s, n}) + \frac{t - \bar{t}^{m - 1}_{s, n}}{\bar{t}^{m + 1}_{s, n} - \bar{t}^{m - 1}_{s, n}}\vec{Z}_{s, k}(t^{m + 1}_{s, n}), \nonumber \end{flalign*}
with the interval midpoints
\begin{equation}\label{midpoints} \bar{t}^m_{f, n} = \frac{t^m_{f, n} + t^{m - 1}_{f, n}}{2},\qquad \bar{t}^m_{s, n} = \frac{t^m_{s, n} + t^{m - 1}_{s, n}}{2}. \end{equation}
The adjoint estimators are based on the adjoint residuals:
\begin{flalign*} \vartheta_{f,k} & \coloneqq \frac{1}{2}\rho_f^*(\vec{Z}_k)(\vec{U}_{f, k}^{(2)} - \vec{U}_{f, k}), \\ \vartheta_{s,k} & \coloneqq \frac{1}{2}\rho_s^*(\vec{Z}_k)(\vec{U}_{s, k}^{(2)} - \vec{U}_{s, k}). \nonumber \end{flalign*}
The primal reconstructions $\vec{U}_{f, k}^{(2)}$ and $\vec{U}_{s, k}^{(2)}$ are extracted from $\vec{U}_k$ using quadratic reconstruction. The reconstruction is performed on the micro time mesh level on local patches consisting of two neighboring micro time-steps (see Figure~\ref{reconstruction}, left). In general, the patch structure does not have to coincide with the micro and macro time mesh structure: two micro time-steps belonging to the same local patch do not have to lie in the same macro time-step. Additionally, we require that two micro time-steps from the same local patch have the same length.
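For concreteness, the two reconstructions may be sketched as follows; the arrays, index conventions and the trailing assembly comment are placeholders, not the implementation used for this paper.
\begin{verbatim}
import numpy as np

def z_linear(t, m, tt, Z):
    """Adjoint reconstruction Z^(1) on the micro interval (tt[m-1], tt[m]).
    tt: micro time nodes; Z[j]: piecewise-constant adjoint value on the
    j-th micro interval, associated with its midpoint tbar(j)."""
    tbar = lambda j: 0.5 * (tt[j] + tt[j - 1])
    a, b = tbar(m - 1), tbar(m + 1)
    return (t - b) / (a - b) * Z[m - 1] + (t - a) / (b - a) * Z[m + 1]

def u_quadratic(t, m, tt, U):
    """Primal reconstruction U^(2) on a patch of two equal micro steps
    (tt[m-1], tt[m+1]); U[j]: primal nodal value at tt[j]."""
    c = np.polyfit(tt[m - 1:m + 2], U[m - 1:m + 2], 2)  # interpolation
    return np.polyval(c, t)

# The weights Z^(1) - Z_k and U^(2) - U_k then enter the four residual
# evaluations; summing the halved contributions yields sigma_k.
\end{verbatim}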
\begin{figure}[t] \begin{center} \begin{tikzpicture}[scale=0.9] \draw (-2.75, -0.75) -- (2.75, -0.75); \draw[ultra thick] (0,0) parabola (2,2); \draw[ultra thick] (0,0) parabola (-2,2); \draw[fill = black] (0,0) circle (0.075cm); \draw[fill = black] (2,2) circle (0.075cm); \draw[fill = black] (-2,2) circle (0.075cm); \draw (-2, -0.65) -- (-2, -0.85); \draw (0, -0.65) -- (0, -0.85); \draw (2, -0.65) -- (2, -0.85); \node at (0, - 1.25) {$t^{m}_{f, n}$}; \node at (-2, - 1.25) {$t^{m - 1}_{f, n}$}; \node at (2, - 1.25) {$t^{m + 1}_{f, n}$}; \draw[dashed] (-2,-0.75) -- (-2,2); \draw[dashed] (0,-0.75) -- (0,0); \draw[dashed] (2,-0.75) -- (2,2); \draw (-2,2) -- (0,0) -- (2,2); \draw (-2.75, 1.5) -- (-2, 2); \draw (2, 2) -- (2.75, 0.5); \node at (2,2.5) {$\vec{U}_{f, k}(t^{m + 1}_{f, n})$}; \node at (-2,2.5) {$\vec{U}_{f, k}(t^{m - 1}_{f, n})$}; \node at (0,1.15) {$\vec{U}_{f, k}(t^{m}_{f, n})$}; \end{tikzpicture}\hspace{0.5cm} \begin{tikzpicture}[scale=0.9] \draw (0.5, 0.0) -- (7.5, 0.0); \draw (1.0, -0.1) -- (1.0, 0.1); \draw (3.0, -0.1) -- (3.0, 0.1); \draw (5.25, -0.1) -- (5.25, 0.1); \draw (7.25, -0.1) -- (7.25, 0.1); \draw (2.0, -0.05) -- (2.0, 0.05); \draw (6.25, -0.05) -- (6.25, 0.05); \node at (1.0, - 0.5) {$ t^{m - 2}_{f, n}$}; \node at (2.0, - 0.5) {$\bar{t}^{m - 1}_{f, n}$}; \node at (3.0, - 0.5) {$ t^{m - 1}_{f, n}$}; \node at (5.25, - 0.5) {$ t^{m}_{f, n}$}; \node at (6.25, - 0.5) {$\bar{t}^{m + 1}_{f, n}$}; \node at (7.25, - 0.5) {$t^{m + 1}_{f, n}$}; \draw[fill = black] (3,0.75) circle (0.075cm); \draw[fill = black] (2,0.75) circle (0.075cm); \draw[fill = black] (6.25,2.875) circle (0.075cm); \draw[fill = black] (7.25,2.875) circle (0.075cm); \draw (2, 0.75) -- (3, 1.25); \draw (5.25, 2.375) -- (6.25, 2.875); \draw[ultra thick] (3, 1.25) -- (5.25, 2.375); \draw[ultra thick] (3, 1.15) -- (3, 1.35); \draw[ultra thick] (5.25, 2.275) -- (5.25, 2.475); \draw[dashed] (3, 1.25) -- (3, 0); \draw (1, 0.75) -- (3, 0.75); \draw[dashed] (2, 0.75) -- (2, 0); \draw[dashed] (6.25,2.875) -- (6.25, 0); \draw[dashed] (7.25,2.875) -- (7.25, 0); \draw (5.25,2.875) -- (7.25,2.875); \draw[dashed] (5.25, 2.375) -- (5.25, 0); \node at (8.4,2.85) {$\vec{Z}_{f, k}(t^{m + 1}_{f, n})$}; \node at (1.5,1.35) {$\vec{Z}^{(1)}_{f, k}(\bar{t}^{m - 1}_{f, n})$}; \node at (4.15,0.75) {$\vec{Z}_{f, k}(t^{m - 1}_{f, n})$}; \node at (6,3.45) {$\vec{Z}^{(1)}_{f, k}(\bar{t}^{m + 1}_{f,n})$}; \end{tikzpicture} \end{center} \caption{Reconstruction of the primal solution $\vec{U}_{f, k}^{(2)}$ (left) and the adjoint solution $\vec{Z}_{f, k}^{(1)}$ (right).} \label{reconstruction} \end{figure} We compute the effectivity of the error estimate using \begin{equation*} \textnormal{eff}_k \coloneqq \frac{\sigma_k}{J(\vec{U}_{\textnormal{exact}}) - J(\vec{U}_k)} , \end{equation*} where $J(\vec{U}_{\textnormal{exact}})$ can be approximated by extrapolation in time. \subsection{Adaptivity} \label{adaptivity} The residuals (\ref{residualds_formula}) can be easily localised by restricting them to a specific subinterval \begin{alignat*}{2} & \theta_{f, k}^{n, m}\coloneqq \theta_{f, k}|_{I_{f, n}^{m}}, \qquad && \theta_{s, k}^{n, m}\coloneqq \theta_{s, k}|_{I_{s, n}^{m}}, \\ & \vartheta_{f, k}^{n, m}\coloneqq \vartheta_{f, k}|_{I_{f, n}^{m}}, \qquad && \vartheta_{s, k}^{n, m}\coloneqq \vartheta_{s, k}|_{I_{s, n}^{m}}. 
\end{alignat*}
After defining the global numbers of subintervals $M\coloneqq \sum_{n = 1}^N M_n$ and $L\coloneqq \sum_{n = 1}^N L_n$, we can compute the average size of the local components,
\begin{equation} \bar{\sigma}_{k}\coloneqq \frac{1}{2M} \sum_{n = 1}^{N} \sum_{m = 1}^{M_n} \left( |\theta_{f, k}^{n, m}| + |\vartheta_{f, k}^{n, m}| \right) + \frac{1}{2L} \sum_{n = 1}^{N} \sum_{l = 1}^{L_n} \left( | \theta_{s, k}^{n, l}| + |\vartheta_{s, k}^{n, l}|\right). \label{partial_average} \end{equation}
In this way, we obtain the refinement criteria
\begin{flalign} & \left( \left|\theta_{f, k}^{n, m} \right| \geq \bar{\sigma}_k \textnormal{ or } \left|\vartheta_{f, k}^{n, m} \right| \geq \bar{\sigma}_k \right) \Longrightarrow \textnormal{ refine } I_{f, n}^m, \label{refine_criterium} \\ & \left( \left|\theta_{s, k}^{n, l} \right| \geq \bar{\sigma}_k \textnormal{ or } \left|\vartheta_{s, k}^{n, l} \right| \geq \bar{\sigma}_k \right) \Longrightarrow \textnormal{ refine } I_{s, n}^l. \nonumber \end{flalign}
Taking into account the time interval partitioning structure, we arrive at the following algorithm:
\begin{enumerate}
\item Mark subintervals using the refinement criteria (\ref{refine_criterium}).
\item Adjust the local patch structure: in case only one subinterval of a patch is marked, mark the other one as well (see Figure~\ref{local_patches}).
\begin{figure}[t] \centering \begin{tikzpicture}[scale = 0.8] \draw (-2.75, -0.75) -- (2.75, -0.75); \draw[ultra thick] (0,0) parabola (2,2); \draw[ultra thick] (0,0) parabola (-2,2); \draw[fill = black] (0,0) circle (0.075cm); \draw[fill = black] (2,2) circle (0.075cm); \draw[fill = black] (-2,2) circle (0.075cm); \draw (-2, -0.65) -- (-2, -0.85); \draw (0, -0.65) -- (0, -0.85); \draw (2, -0.65) -- (2, -0.85); \node at (0, - 1.25) {$ t^{m}_{f, n}$}; \node at (-2, - 1.25) {$t^{m - 1}_{f, n}$}; \node at (2, - 1.25) {$ t^{m + 2}_{f, n}$}; \node at (1, - 1.25) {$ t^{m + 1}_{f, n}$}; \draw[dashed] (-2,-0.75) -- (-2,2); \draw[dashed] (0,-0.75) -- (0,0); \draw[dashed] (2,-0.75) -- (2,2); \draw (1, -0.65) -- (1, -0.85); \draw[->] (1.0, -2.1) -- (1.0, -1.65); \node at (1.0, -2.35) {refine}; \draw[thick, ->] (2.75,0.75) .. controls (3.25,1.25) and (4.25,1.25) .. (4.75,0.75); \draw (4.75, -0.75) -- (9.75, -0.75); \draw[ultra thick] (7.5,0) parabola (9.5,2); \draw[ultra thick] (7.5,0) parabola (5.5,2); \draw[fill = black] (7.5,0) circle (0.075cm); \draw[fill = black] (9.5,2) circle (0.075cm); \draw[fill = black] (5.5,2) circle (0.075cm); \draw (5.5, -0.65) -- (5.5, -0.85); \draw (7.5, -0.65) -- (7.5, -0.85); \draw (9.5, -0.65) -- (9.5, -0.85); \node at (7.5, - 1.25) {$ t^{m + 1}_{f, n}$}; \node at (5.5, - 1.25) {$t^{m - 1}_{f, n}$}; \node at (9.5, - 1.25) {$t^{m + 3}_{f, n}$}; \draw[dashed] (5.5,-0.75) -- (5.5,2); \draw[dashed] (7.5,-0.75) -- (7.5,0); \draw[dashed] (9.5,-0.75) -- (9.5,2); \draw (6.5, -0.65) -- (6.5, -0.85); \draw (8.5, -0.65) -- (8.5, -0.85); \node at (6.5, - 1.25) {$ t^{m}_{f, n}$}; \draw[->] (8.5, -2.1) -- (8.5, -1.65); \node at (8.5, -2.35) {refine}; \node at (8.5, - 1.25) {$ t^{m + 2}_{f, n}$}; \draw[->] (6.5, -2.1) -- (6.5, -1.65); \node at (6.5, -2.35) {refine}; \end{tikzpicture} \caption{An example of preserving the local patch structure during the marking procedure: if one time-step of a patch is marked for refinement, the other time-step belonging to the same patch will also be refined.} \label{local_patches} \end{figure}
\item Perform the refinement in time.
\item Adjust the macro time-step structure: in case a fluid and a solid micro time-node coincide in the interior of a macro time-step, split the macro time-step into two macro time-steps at this point (see Figure~\ref{splitting}).
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{Figure1.pdf} \caption{An example of the splitting mechanism for macro time-steps. On the left, we show the mesh before refinement: middle (in black) the macro nodes, top (in blue) the fluid nodes and bottom (in red) the solid nodes with subcycling. In the center sketch, we refine the first macro interval once within the fluid domain. Since one node is shared between fluid and solid, we refine the macro mesh to resolve subcycling. The final configuration is shown on the right.} \label{splitting} \end{figure}
\end{enumerate}
\section{Numerical results}
\label{numerical_results}
\subsection{Fluid subdomain functional}
\label{fluid_functional}
For the first example, we chose to test the derived error estimator on a goal functional concentrated in the fluid subproblem,
\begin{equation*} J_f(\vec{U})\coloneqq \int_{0}^T \nu\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\nabla v_f, \nabla v_f\right)_f \diff t, \quad J_s(\vec{U}) \coloneqq 0, \end{equation*}
where $\widetilde{\Omega}_f = (2, 4) \times (0, 1)$ is the right half of the fluid subdomain. For this example, we also took the right hand side concentrated in the fluid subdomain, presented in Configuration~\ref{configuration_1}. As the time interval, we chose $I = [0, 1]$. Then we have
$$(J_f)'_{\vec{U}}(\boldsymbol{\Xi}_f) = \int_{0}^T 2\nu\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\nabla v_f, \nabla \eta_f\right)_f \diff t.$$
Since the functional is quadratic in $v_f$, we use a 2-point Gaussian quadrature for the integration in time, which integrates the piecewise-quadratic integrand exactly. With~(\ref{midpoints}), the quadrature points read as
\[ g_{f, n}^{m, 1} \coloneqq \bar t_{f,n}^m + \frac{t_{f, n}^m - t_{f, n}^{m - 1}}{2 \sqrt{3}},\quad g_{f, n}^{m, 2} \coloneqq \bar t_{f,n}^m - \frac{t_{f, n}^m - t_{f, n}^{m - 1}}{2 \sqrt{3}}.
\]
With these at hand, we can formulate the discretization of the functional (we insert the quadrature weight $k_{f, n}^m/2$ of each Gauss point and separate the two arguments of the inner products):
\begin{equation}\label{functional:fluid} \begin{aligned} (J_f)'_{\vec{U}}&(\boldsymbol{\Xi}_{f,k}) = \sum_{n = 1}^N\sum_{m = 1}^{M_n}\sum_{q=1}^2 \frac{k_{f, n}^m}{2}\, 2\nu\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\, j_{f,n}^m\nabla v_{f, k}(g_{f, n}^{m, q}), j_{f,n}^m\nabla\eta_{f, k}(g_{f, n}^{m, q})\right)_f \\ &= \sum_{n = 1}^N\sum_{q=1}^2 \bigg\{\nu(-g_{f, n}^{1, q} + t_{f, n}^1)\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\, j_{f,n}^1\nabla v_{f, k}(g_{f, n}^{1, q}), \nabla\eta_{f, k}(t_{f, n}^0)\right)_f \\ & + \sum_{m = 1}^{M_n - 1} \Big\{\nu(-g_{f, n}^{m + 1, q} + t_{f, n}^{m + 1})\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\, j_{f,n}^{m + 1}\nabla v_{f, k}(g_{f, n}^{m + 1, q}), \nabla\eta_{f, k}(t_{f, n}^m)\right)_f \\ & \qquad \qquad+ \nu(g_{f, n}^{m, q} - t_{f, n}^{m - 1})\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\, j_{f,n}^{m}\nabla v_{f, k}(g_{f, n}^{m, q}), \nabla\eta_{f, k}(t_{f, n}^m)\right)_f\Big\} \\ &+ \nu(g_{f, n}^{M_n, q} - t_{f, n}^{M_n - 1})\left(\mathbbm{1}_{\widetilde{\Omega}_f}(\vec{x})\, j_{f,n}^{M_n}\nabla v_{f, k}(g_{f, n}^{M_n, q}), \nabla\eta_{f, k}(t_{f, n}^{M_n})\right)_f \bigg\}, \end{aligned} \end{equation}
where the nodal interpolation in time is defined as
\[ j_{f, n}^m\nabla v_{f, k}(t) \coloneqq \frac{t_{f, n}^m - t}{k_{f, n}^m} \nabla v_{f, k}(t_{f, n}^{m - 1}) + \frac{t - t_{f, n}^{m - 1}}{k_{f, n}^m}\nabla v_{f, k}(t_{f, n}^m). \]
In Table~\ref{fluid_residuals_uniform_equal} we show results of the a posteriori error estimator on a sequence of uniform time meshes. Here, we considered the case without any micro time-stepping, that is, the time-step sizes in the fluid and solid subdomains are uniformly equal. That gives $N$ time-steps in the fluid domain and $N$ in the solid domain. Table~\ref{fluid_residuals_uniform_equal} contains the partial residuals $\theta_{f,k},\theta_{s,k},\vartheta_{f,k}$ and $\vartheta_{s,k}$, the overall estimate $\sigma_k$, the extrapolated error $\widetilde{J}-J(\vec U_k)$ and the effectivities $\textnormal{eff}_k$. The values of the goal functional on the three finest meshes were used for extrapolation in time. As a result, we obtained the reference value $\widetilde{J} = 6.029469 \cdot 10^{-5}$. Except for the coarsest mesh, the estimator is very accurate and the effectivities are almost 1. On finer meshes, the values of $\theta_{f,k}$ and $\vartheta_{f, k}$ are very close to each other, which is due to the linearity of the coupled problem~\cite{BeckerRannacher2001}. A similar phenomenon happens for $\theta_{s, k}$ and $\vartheta_{s, k}$. The residuals are concentrated in the fluid subdomain, which suggests using smaller time-step sizes in this subdomain.
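As an aside, the time quadrature entering~(\ref{functional:fluid}) can be realized compactly, as in the following sketch. The function \texttt{inner} is a placeholder for the spatial inner product with both gradient factors interpolated linearly in time; only the time loop over one macro step is shown, and the names and the grid argument \texttt{times} are illustrative assumptions.
\begin{verbatim}
import numpy as np

def gauss2(t0, t1):
    # two-point Gauss-Legendre nodes on (t0, t1), cf. the points g^{m,q}
    mid, half = 0.5 * (t0 + t1), 0.5 * (t1 - t0)
    return (mid - half / np.sqrt(3.0), mid + half / np.sqrt(3.0))

def jprime_fluid(times, inner, nu=0.001):
    # times: micro nodes t^0_{f,n}, ..., t^{M_n}_{f,n} of one macro step;
    # inner(t): placeholder for the spatial product of the interpolated
    # gradients, (1_{Omega_f~} j grad v(t), j grad eta(t))_f
    val = 0.0
    for m in range(len(times) - 1):
        t0, t1 = times[m], times[m + 1]
        for g in gauss2(t0, t1):
            val += 0.5 * (t1 - t0) * 2.0 * nu * inner(g)  # weight k/2
    return val
\end{verbatim}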
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{%
\begin{tabular}{c|cccc|ccc} \toprule $N$ & $\theta_{f, k}$ & $\theta_{s, k}$ & $\vartheta_{f, k}$ & $\vartheta_{s, k}$ & $\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$ \\ \midrule 50 & $3.62\cdot 10^{-8}$ & $5.01 \cdot 10^{-10}$ & $1.05 \cdot 10^{-7}$ &$5.03 \cdot 10^{-10}$ & $1.42 \cdot 10^{-7}$ & $8.06 \cdot 10^{-8}$& 1.76 \\ 100 & $9.66 \cdot 10^{-9}$ & $1.37 \cdot 10^{-10}$ & $9.96 \cdot 10^{-9}$ & $1.40 \cdot 10^{-10}$ & $1.99 \cdot 10^{-8}$ & $2.05 \cdot 10^{-8}$ & 0.97 \\ 200 & $2.48 \cdot 10^{-9}$ & $3.00 \cdot 10^{-11}$ & $2.52 \cdot 10^{-9}$ & $3.02 \cdot 10^{-11}$ & $5.07 \cdot 10^{-9}$ & $5.22 \cdot 10^{-9}$ & 0.97 \\ 400 & $6.28 \cdot 10^{-10}$ & $9.44 \cdot 10^{-12}$ & $6.33 \cdot 10^{-10}$ & $9.56 \cdot 10^{-12}$& $1.28 \cdot 10^{-9}$ & $1.31 \cdot 10^{-9}$ & 0.98 \\ 800 & $1.58 \cdot 10^{-10}$ & $2.02 \cdot 10^{-12}$ & $1.58 \cdot 10^{-10}$ & $2.06 \cdot 10^{-12}$& $3.20 \cdot 10^{-10}$ & $3.28 \cdot 10^{-10}$ & 0.98 \\ \bottomrule \end{tabular} }
\caption{Residuals and effectivities for the fluid subdomain functional in the case of uniform time-stepping with $M_n = L_n = 1$ for all $n$.}
\label{fluid_residuals_uniform_equal} \end{center} \end{table}
Table~\ref{fluid_residuals_uniform_refined} collects results for another sequence of uniform time meshes. In this case, each of the macro time-steps in the fluid domain is split into two micro time-steps of the same size. That results in $2N$ time-steps in the fluid domain and $N$ in the solid domain. The performance is still highly satisfactory. The residuals remain mostly concentrated in the fluid subdomain. Additionally, after comparing Tables~\ref{fluid_residuals_uniform_equal} and~\ref{fluid_residuals_uniform_refined}, one can see that corresponding values of $\theta_{f, k}$ and $\vartheta_{f, k}$ are the same (the value for $N = 800$ in Table~\ref{fluid_residuals_uniform_equal} matches the one for $N = 400$ in Table~\ref{fluid_residuals_uniform_refined}, etc.). Overall, introducing micro time-stepping improves the performance and reduces the extrapolated error $\widetilde{J} - J(\vec{U}_k)$ more efficiently.
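The extrapolation formula behind the reference value $\widetilde{J}$ is not spelled out above; one standard realization, assuming a geometrically decaying error in the mesh sequence with functional values $J_1, J_2, J_3$ on the three finest meshes, is Aitken's $\Delta^2$ extrapolation,
\[ \widetilde{J} \approx J_3 - \frac{(J_3 - J_2)^2}{J_3 - 2 J_2 + J_1}, \]
which requires only the three computed functional values.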
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{%
\begin{tabular}{c|cccc|ccc} \toprule $N$ & $\theta_{f, k}$ & $\theta_{s, k}$ & $\vartheta_{f, k}$ & $\vartheta_{s, k}$ & $\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$ \\ \midrule 50 & $9.66 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $9.96 \cdot 10^{-9}$ &$5.01 \cdot 10^{-10}$ &$2.06 \cdot 10^{-8}$ & $2.17 \cdot 10^{-8}$& 0.95\\ 100 & $2.48 \cdot 10^{-9}$ & $1.37 \cdot 10^{-10}$ & $2.52 \cdot 10^{-9}$ & $1.39 \cdot 10^{-10}$&$5.28 \cdot 10^{-9}$ & $5.45 \cdot 10^{-9}$ & 0.97 \\ 200 & $6.28 \cdot 10^{-10}$ & $2.99 \cdot 10^{-11}$ & $6.33 \cdot 10^{-10}$ & $3.01 \cdot 10^{-11}$&$1.32 \cdot 10^{-9}$ & $1.43 \cdot 10^{-9}$ & 0.92 \\ 400 & $1.58 \cdot 10^{-10}$ & $9.44 \cdot 10^{-12}$ & $1.58 \cdot 10^{-10}$ & $9.56 \cdot 10^{-12}$&$3.35 \cdot 10^{-10}$ & $3.58 \cdot 10^{-10}$ & 0.94 \\ \bottomrule \end{tabular} }
\caption{Residuals and effectivities for the fluid subdomain functional in the case of uniform time-stepping with $M_n = 2$ and $L_n = 1$ for all $n$.}
\label{fluid_residuals_uniform_refined} \end{center} \end{table}
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{%
\begin{tabular}{ccc|cccc|ccc} \toprule $N$ & $M$ & $L$ & $\theta_{f, k}$ & $\theta_{s, k}$ & $\vartheta_{f, k}$ & $\vartheta_{s, k}$& $\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ &$\textnormal{eff}_k$\\ \midrule 50 & 56 & 50 & $3.08 \cdot 10^{-8}$ & $5.01 \cdot 10^{-10}$ & $3.16 \cdot 10^{-8}$ & $5.04 \cdot 10^{-10}$ & $6.34 \cdot 10^{-8}$ & $6.64 \cdot 10^{-8}$ & 0.95\\ 50 & 100 & 50 & $9.66 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $9.96 \cdot 10^{-9}$ & $5.01 \cdot 10^{-10}$& $2.06 \cdot 10^{-8}$ & $2.17 \cdot 10^{-8}$ & 0.95\\ 50 & 110 & 50 & $8.21 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $8.32 \cdot 10^{-9}$ & $5.02 \cdot 10^{-10}$& $1.75 \cdot 10^{-8}$ & $1.84 \cdot 10^{-8}$ & 0.95\\ 50 & 156 & 50 & $5.08 \cdot 10^{-9}$ & $4.99 \cdot 10^{-10}$ & $5.18 \cdot 10^{-9}$ & $4.97 \cdot 10^{-10}$ & $1.13 \cdot 10^{-8}$ & $1.20 \cdot 10^{-8}$ & 0.94\\ \bottomrule \end{tabular}}
\caption{Residuals and effectivities for the fluid subdomain functional in the case of adaptive time-stepping.}
\label{fluid_residuals_adaptive} \end{center} \end{table}
In Table~\ref{fluid_residuals_adaptive} we present findings in the case of adaptive time mesh refinement. We chose an initial configuration of uniform time-stepping without micro time-stepping for $N = 50$ and applied a sequence of adaptive refinements. On every level of refinement, the total number of time-steps is $M + L$. One can see that, since the error is concentrated in the fluid domain, only time-steps corresponding to this space domain were refined. Again, the effectivities are very good. The extrapolated error $\widetilde{J} - J(\vec{U}_k)$ is reduced even more efficiently.
\subsection{Solid subdomain functional}
\label{solid_functional}
For the sake of symmetry, for the second example we chose a functional concentrated on the solid subdomain,
\begin{equation*} J_f(\vec{U}) \coloneqq 0, \quad J_s(\vec{U}) \coloneqq \int_{0}^T \lambda\left(\mathbbm{1}_{\widetilde{\Omega}_s}(\vec{x})\nabla u_s, \nabla u_s\right)_s \diff t, \end{equation*}
where $\widetilde{\Omega}_s = (2, 4) \times (-1, 0)$ is the right half of the solid subdomain. This time we set the right hand side according to Configuration~\ref{configuration_2}. Again, $I = [0, 1]$.
The derivative reads as
\[ (J_s)'_{\vec{U}}(\boldsymbol{\Xi}_s) = \int_{0}^T 2\lambda\left(\mathbbm{1}_{\widetilde{\Omega}_s}(\vec{x})\nabla u_s, \nabla \xi_s\right)_s \diff t, \]
and allows for a discretization according to~(\ref{functional:fluid}). Similarly, Table~\ref{solid_residuals_uniform_equal} gathers results for a sequence of uniform meshes without any micro time-stepping ($N + N$ micro time-steps). The last three solutions are used for extrapolation in time, which gives $\widetilde{J} = 3.458826 \cdot 10^{-4}$. Also for this example, the effectivity is very satisfactory. On the finest discretization, the effectivity slightly declines. This might come from the limited accuracy of the reference value. Once more, on finer meshes, the fluid residuals $\theta_{f, k}$, $\vartheta_{f, k}$ and the solid residuals $\theta_{s, k}$, $\vartheta_{s, k}$ have similar values. This time, the residuals are concentrated in the solid subdomain and, in this case, the discrepancy is somewhat larger.
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{
\begin{tabular}{c|cccc|ccc} \toprule $N$ & $\theta_{f, k}$ & $\theta_{s, k}$ &$\vartheta_{f, k}$ & $\vartheta_{s, k}$&$\sigma_{k}$ &$\widetilde{J} - J(\vec{U}_k)$ &$\textnormal{eff}_k$ \\ \midrule 50 & $2.03 \cdot 10^{-10}$ & $2.66 \cdot 10^{-6}$ & $1.93 \cdot 10^{-10}$ &$1.03 \cdot 10^{-5}$ & $1.30 \cdot 10^{-5}$ & $2.49 \cdot 10^{-5}$& 0.52 \\ 100 & $4.53 \cdot 10^{-11}$ & $2.59 \cdot 10^{-6}$ & $4.26 \cdot 10^{-11}$ & $2.67 \cdot 10^{-6}$& $5.26 \cdot 10^{-6}$ & $4.77 \cdot 10^{-6}$ & 1.10 \\ 200 & $1.28 \cdot 10^{-11}$ & $5.18 \cdot 10^{-7}$ & $1.26 \cdot 10^{-11}$ & $5.21 \cdot 10^{-7}$& $1.04 \cdot 10^{-6}$ & $9.80 \cdot 10^{-7}$ & 1.06 \\ 400 & $3.30 \cdot 10^{-12}$ & $1.17 \cdot 10^{-7}$ & $3.29 \cdot 10^{-12}$ & $1.17 \cdot 10^{-7}$& $2.34 \cdot 10^{-7}$ & $2.23 \cdot 10^{-7}$ & 1.05 \\ 800 & $8.32 \cdot 10^{-13}$ & $2.82 \cdot 10^{-8}$ & $8.32 \cdot 10^{-13}$ & $2.80 \cdot 10^{-8}$& $5.62 \cdot 10^{-8}$ & $5.07 \cdot 10^{-8}$ & 1.11 \\ \bottomrule \end{tabular}}
\caption{Residuals and effectivities for the solid subdomain functional in the case of uniform time-stepping with $M_n = L_n = 1$ for all $n$.}
\label{solid_residuals_uniform_equal} \end{center} \end{table}
\begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{Figure2.pdf} \end{center} \caption{Adaptive meshes for the solid functional. Top: uniform initial mesh; middle: 2 steps of adaptive refinement; bottom: 4 steps. Each plot shows the macro mesh (middle), the fluid mesh (top, in blue) and the solid mesh (bottom, in red).} \label{fig:refine} \end{figure}
In Table~\ref{solid_residuals_uniform_refined} we display outcomes for a sequence of uniform meshes where each of the macro time-steps in the solid subdomain is split into two micro time-steps. That gives $N + 2N$ time-steps. Introducing micro time-stepping does not have a negative impact on the effectivity and significantly saves computational effort. Corresponding values of $\theta_{s,k}$ and $\vartheta_{s, k}$ in Tables \ref{solid_residuals_uniform_equal} and \ref{solid_residuals_uniform_refined} are almost the same. The residuals remain mostly concentrated in the solid subdomain.
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{%
\begin{tabular}{c|cccc|ccc} \toprule $N$ &$\theta_{f, k}$ &$\theta_{s, k}$ & $\vartheta_{f, k}$ &$\vartheta_{s, k}$&$\sigma_{k}$ & $\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$ \\ \midrule 50 & $4.13 \cdot 10^{-10}$ & $2.61 \cdot 10^{-6}$ & $1.91 \cdot 10^{-9}$ &$2.68 \cdot 10^{-6}$ & $5.29 \cdot 10^{-6}$ & $4.68 \cdot 10^{-6}$& 1.13\\ 100 & $8.69 \cdot 10^{-11}$ & $5.20 \cdot 10^{-7}$ & $-3.72 \cdot 10^{-11}$ & $5.23 \cdot 10^{-7}$& $1.04 \cdot 10^{-6}$ & $9.54 \cdot 10^{-7}$ & 1.09 \\ 200 & $1.80 \cdot 10^{-11}$ & $1.17 \cdot 10^{-7}$ & $1.40 \cdot 10^{-12}$ & $1.17 \cdot 10^{-7}$ & $2.34 \cdot 10^{-7}$ & $2.16 \cdot 10^{-7}$ & 1.08 \\ 400 & $3.94 \cdot 10^{-12}$ & $2.82 \cdot 10^{-8}$ & $1.87 \cdot 10^{-12}$ & $2.80 \cdot 10^{-8}$ & $5.62 \cdot 10^{-8}$ & $4.90 \cdot 10^{-8}$ & 1.15 \\ \bottomrule \end{tabular}}
\caption{Residuals and effectivities for the solid subdomain functional in the case of uniform time-stepping with $M_n = 1$ and $L_n = 2$ for all $n$.}
\label{solid_residuals_uniform_refined} \end{center} \end{table}
Following the fluid example, in Table~\ref{solid_residuals_adaptive} we show calculation results in the case of adaptive time mesh refinement. Here as well, we took the uniform time-stepping without micro time-stepping for $N = 50$ as the initial configuration; the total number of time-steps is $M + L$. Except for the last entry, only the time-steps corresponding to the solid domain were refined. On the finest mesh, the effectivity deteriorates. However, adaptive time-stepping is still the most effective in reducing the extrapolated error $\widetilde{J} - J(\vec{U}_k)$.
\begin{table}[t] \begin{center} \resizebox{\textwidth}{!}{%
\begin{tabular}{ccc|cccc|ccc} \toprule $N$ & $M$ & $L$ & $\theta_{f, k}$ &$\theta_{s, k}$ &$\vartheta_{f, k}$ &$\vartheta_{s, k}$& $\sigma_{k}$ &$\widetilde{J} - J(\vec{U}_k)$ & $\textnormal{eff}_k$\\ \midrule 50 & 50 & 88 & $3.77 \cdot 10^{-10}$ & $6.57 \cdot 10^{-6}$ & $6.72 \cdot 10^{-8}$ & $6.91 \cdot 10^{-6}$ & $1.35 \cdot 10^{-5}$ & $1.06 \cdot 10^{-5}$ & 1.28\\ 50 & 50 & 166 & $5.17 \cdot 10^{-10}$ & $1.35 \cdot 10^{-6}$ & $7.16 \cdot 10^{-8}$ & $1.38 \cdot 10^{-6}$ & $2.80 \cdot 10^{-6}$ & $2.52 \cdot 10^{-6}$ & 1.11\\ 50 & 50 & 286 & $5.80 \cdot 10^{-10}$ & $4.54 \cdot 10^{-7}$ & $4.16 \cdot 10^{-8}$ & $4.56 \cdot 10^{-7}$ & $9.52 \cdot 10^{-7}$ & $7.34 \cdot 10^{-7}$ & 1.30\\ 54 & 54 & 400 & $5.70 \cdot 10^{-10}$ & $1.19 \cdot 10^{-7}$ & $4.12 \cdot 10^{-8}$ & $1.19 \cdot 10^{-7}$ & $2.81 \cdot 10^{-7}$ & $1.10 \cdot 10^{-7}$ & 2.55\\ \bottomrule \end{tabular}}
\caption{Residuals and effectivities for the solid subdomain functional in the case of adaptive time-stepping.}
\label{solid_residuals_adaptive} \end{center} \end{table}
Finally, we show in Figure~\ref{fig:refine} a sequence of adaptive meshes that result from this adaptive refinement strategy. In the top row, we show the initial mesh with 50 macro steps and no further splitting in fluid and solid. For a better presentation, we only show the subinterval $[0.1,0.4]$. In the middle plot, we show the mesh after 2 steps of adaptive refinement and in the bottom row after 4 steps of adaptive refinement. Each plot shows the macro mesh, the fluid mesh (above) and the solid mesh (below). As expected, this example leads to a subcycling within the solid domain. For a finer approximation, the fluid problem also requires some local refinement.
Whenever possible, we avoid excessive subcycling by refining the macro mesh as described in Section~\ref{adaptivity}.
\section*{Abstract}
\label{abstract}
We consider the dynamics of a parabolic and a hyperbolic equation coupled on a common interface and develop time-stepping schemes that can use different time-step sizes for each of the subproblems. The problem is formulated in a strongly coupled (monolithic) space-time framework. Coupling two different step sizes monolithically gives rise to large algebraic systems of equations in which multiple states of the subproblems must be solved at once. For efficiently solving these algebraic systems, we borrow ideas from the partitioned regime and present two decoupling methods, namely a partitioned relaxation scheme and a shooting method. Furthermore, we develop an a posteriori error estimator serving as a means for an adaptive time-stepping procedure. The goal is to optimally balance the time-step sizes of the two subproblems. The error estimator is based on the dual weighted residual method and relies on the space-time Galerkin formulation of the coupled problem. As an example, we take a linear set-up with the heat equation coupled to the wave equation. In numerical test cases, we demonstrate the efficiency of the solution process and we validate the accuracy of the a posteriori error estimator and its use for controlling the time-step sizes.
\section{Introduction}
\label{introduction}
In this work, we consider surface-coupled multiphysics problems that are inspired by fluid-structure interaction (FSI) problems~\cite{Richter2017}. We couple the heat equation with the wave equation through an interface, on which typical FSI coupling conditions of Dirichlet-Neumann type act. Despite its simplicity, each of the subproblems exhibits different temporal dynamics, a feature also found in FSI. The solution of the heat equation, as a parabolic problem, has smoothing properties; it can thus be characterized as a problem with slow temporal dynamics. The wave equation, on the other hand, is an example of a hyperbolic equation with highly oscillatory solutions. FSI problems are characterized by two specific difficulties: the coupling of an equation of parabolic type with one of hyperbolic type gives rise to regularity problems at the interface. Further, the added mass effect~\cite{CausinGerbeauNobile2005}, which is present for problems coupling materials of a similar density, calls for discretization and solution schemes that are strongly coupled. This is the monolithic approach for modeling FSI, in contrast to partitioned approaches, where each of the subproblems is treated and solved as a separate system. While the monolithic approach allows for a more rigorous mathematical setting and the use of large time steps, the partitioned approach allows the use of fully optimized, separate techniques for both of the subproblems. Most realizations for FSI, such as the technique described here, have to be regarded as a blend of both philosophies: while the formulation and discretization are monolithic, ideas of partitioned approaches are borrowed for solving the algebraic problems. Since the two problems feature distinct time scales, the use of multirate time-stepping schemes with adapted step sizes for fluid and solid suggests itself.
For parabolic problems, the concept of multirate time-stepping was discussed in \cite{Dawson1991}, \cite{Blum1992} and \cite{Faille2009}. In the hyperbolic setting, it was considered in~\cite{BergerMarsha1985}, \cite{Collino2003part1}, \cite{Collino2003part2} and \cite{Piperno2006}. In the context of fluid-structure interactions, such subcycling methods are used in aeroelasticity~\cite{Piperno1997}, where explicit time integration schemes are used for the flow problem and implicit schemes for the solid problem~\cite{DeMoerlooseetal2018}. In the low Reynolds number regime, common in hemodynamics, the situation is different. Here, implicit and strongly coupled schemes are required by the added mass effect. Hence, large time steps can be applied for the flow problem, but smaller time steps might be required within the solid. A comparison of benchmark problems in fluid dynamics (Sch\"afer, Turek '96~\cite{SchaeferTurek1996}) and in FSI~\cite{HronTurek2006} shows that the FSI problem demands a much smaller step size, although the problem configurations and the resulting nonstationary dynamics are very similar, with oscillating solutions of nearly the same period~\cite{RichterWick2015_time}. We will derive a monolithic variational formulation for FSI-like problems that can handle different time-step sizes in the two subproblems. Implicit coupling of two problems with different step sizes gives rise to very large systems in which multiple states must be solved at once. In Section~\ref{decoupling_methods} we study two approaches for an efficient solution of these coupled systems: a simple partitioned relaxation scheme and a shooting-like approach. Next, in Section~\ref{goal_oriented_estimation} we present a posteriori error estimators based on the dual weighted residual method~\cite{BeckerRannacher2001} for automatically identifying optimal step sizes for the two subproblems. Numerical studies on the efficiency of the time adaptation procedure are presented in Section \ref{numerical_results}. \section{Conclusion} In this paper, we have developed a multirate scheme and a temporal error estimator for a coupled problem that is inspired by fluid-structure interactions. The two subproblems, the heat equation and the wave equation, feature different temporal dynamics, so that balanced approximation properties and stability demands call for different step sizes. We introduced a monolithic variational Galerkin formulation for the coupled problem and then used a partitioned framework for solving the algebraic systems. Using different time-step sizes for the subproblems couples multiple states within each macro time-step, which would require an enormous computational effort if tackled directly. To address this, we discussed two decoupling methods: first, a simple relaxation scheme that alternates between the fluid and the solid problem; and second, a shooting-type method, in which we defined a root-finding problem on the interface and used a matrix-free Newton-Krylov method to approximate its zero. Both methods successfully decoupled our specific example and showed good robustness with respect to different subcycling of the multirate scheme in the fluid or solid domain. However, the shooting method converged faster and required fewer evaluations of the variational formulation. As the next step, we introduced a goal-oriented error estimator based on the dual weighted residual method to estimate errors with respect to functional evaluations.
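To make the shooting idea concrete, the following is a minimal, self-contained Python sketch under strong simplifying assumptions: the fluid and solid solvers are replaced by scalar implicit-Euler toy models on a single macro step, and all names and parameter values are illustrative rather than taken from our implementation. The interface unknown $g$ is the solid state at the macro node, and the root-finding problem $F(g) = S(g) - g = 0$ is handed to SciPy's matrix-free Newton-Krylov solver.
\begin{verbatim}
import numpy as np
from scipy.optimize import newton_krylov

# Toy stand-ins for the subproblem solvers on one macro step (0, K]:
# "fluid": u' = -u + w with M substeps; "solid": w' = v, v' = -lam*w + u
# with L substeps (illustrative parameters only).
K, M, L, lam = 0.1, 4, 16, 1000.0
u0, w0, v0 = 1.0, 0.0, 0.0

def solve_fluid(w_end):
    # implicit Euler; the solid state w is interpolated linearly in time
    k, u, traj = K / M, u0, []
    for m in range(1, M + 1):
        w = w0 + (m * k / K) * (w_end - w0)
        u = (u + k * w) / (1.0 + k)
        traj.append(u)
    return np.array(traj)

def solve_solid(u_traj):
    # implicit Euler for the damped wave system; the fluid trace is
    # interpolated to the solid micro nodes
    k, w, v = K / L, w0, v0
    fluid_nodes = np.linspace(K / M, K, M)
    for l in range(1, L + 1):
        u = np.interp(l * k, fluid_nodes, u_traj)
        v = (v + k * (-lam * w + u)) / (1.0 + k * k * lam)
        w = w + k * v
    return w

def residual(g):
    # shooting residual on the interface: F(g) = S(g) - g
    return np.array([solve_solid(solve_fluid(g[0])) - g[0]])

g = newton_krylov(residual, np.array([w0]), f_tol=1e-12)
print("matched interface value at t_n:", g[0])
\end{verbatim}
The Jacobian-vector products needed by the Krylov solver are approximated by finite differences of $F$, so only evaluations of the two subproblem solvers are required, mirroring the matrix-free character of the method described above.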
The monolithic space-time Galerkin formulation allowed us to split the residual errors into contributions from the fluid and the solid problem. Several numerical results for two different goal functionals show very good effectivity of the error estimator. Finally, we localized the error estimator, which let us derive an adaptive refinement scheme for choosing distinct, near-optimal time meshes for each subproblem. In future work, it remains to extend the methodology to nonlinear problems, in particular to fully coupled fluid-structure interactions. \section{Acknowledgements} Both authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 314838170, GRK 2297 MathCoRe. TR further acknowledges support by the Federal Ministry of Education and Research of Germany (project number 05M16NMA). \bibliographystyle{ieeetr} \section{Presentation of the model problem} \label{presentation_of_the_problem} Let us consider the time interval $I = [0, T]$ and two rectangular domains $\Omega_f = (0, 4) \times (0, 1)$ and $\Omega_s = (0, 4) \times (-1, 0)$. The interface is defined as $\Gamma \coloneqq \overline{\Omega}_f \cap \overline{\Omega}_s = (0,4)\times \{0\}$. The remaining boundary segments are $\Gamma_f^1 \coloneqq \{0 \} \times (0, 1)$, $\Gamma_f^2 \coloneqq (0, 4) \times \{ 1\}$, $\Gamma_f^3 \coloneqq \{4 \} \times (0, 1)$ and $\Gamma_s^1 \coloneqq \{0 \} \times (-1, 0)$, $\Gamma_s^2 \coloneqq (0, 4) \times \{ -1\}$, $\Gamma_s^3 \coloneqq \{4 \} \times (-1, 0)$. The domain is illustrated in Figure \ref{domain}. In the domain $\Omega_f$ we pose the heat equation, parameterized by the diffusion parameter $\nu > 0$, with an additional transport term controlled by $\beta \in \mathds{R}^2$. In the domain $\Omega_s$ we pose the wave equation. By $\sqrt{\lambda}$ we denote the propagation speed and by $\delta \geq 0$ a damping parameter. On the interface, we impose both kinematic and dynamic coupling conditions. The former guarantees the continuity of displacement and velocity along the interface; the latter establishes the balance of normal stresses. The exact values of the parameters read \[ \nu = 0.001,\quad \beta = \left(\begin{matrix} 2 \\ 0 \end{matrix}\right),\quad \lambda = 1000,\quad \delta = 0.1 \] and the complete set of equations is given by \begin{equation*} \begin{cases} \partial_t v_f - \nu \Delta v_f + \beta \cdot \nabla v_f = g_f,\quad - \Delta u_f = 0 & \textnormal{in } I \times \Omega_f, \\ \partial_t v_s - \lambda \Delta u_s - \delta \Delta v_s = g_s, \quad \partial_t u_s = v_s & \textnormal{in } I \times \Omega_s, \\ u_f = u_s,\quad v_f = v_s,\quad \lambda \partial_{\vec{n}_s} u_s = -\nu \partial_{\vec{n}_f}v_f & \textnormal{on } I \times \Gamma, \\ u_f = v_f = 0 & \textnormal{on } I \times \Gamma_f^2, \\ u_s = v_s = 0 & \textnormal{on } I \times (\Gamma_s^1\cup \Gamma_s^3), \\ u_f(0) = v_f(0) = 0 & \textnormal{in } \Omega_f, \\ u_s(0) = v_s(0) = 0 & \textnormal{in } \Omega_s. \\ \end{cases} \end{equation*} We use the symbols $\vec{n}_f$ and $\vec{n}_s$ to distinguish between the outward normal vectors of the two subdomains. The external forces are products of functions of space and time, ${g_f(\vec{x}, t) \coloneqq g_f^1(\vec{x})g^2(t)}$ and ${g_s(\vec{x}, t) \coloneqq g_s^1(\vec{x})g^2(t)}$, where $g_f^1$, $g_s^1$ are the spatial components and $g^2$ is the temporal component. We will consider two configurations of the right-hand side.
In Configuration \ref{configuration_1}, the right-hand side is concentrated in $\Omega_f$, where the space component is an exponential function centered around $\left(\frac{1}{2}, \frac{1}{2} \right)$. For Configuration \ref{configuration_2}, we take a space component concentrated in $\Omega_s$, with an exponential function centered around $\left(\frac{1}{2}, -\frac{1}{2} \right)$. \begin{configuration}\label{configuration_1} \begin{alignat*}{2} g_f^1(\vec{x})\coloneqq &e^{-\left((x_1 - \frac{1}{2})^2 + (x_2 - \frac{1}{2})^2\right)}, \quad && \vec{x} \in \Omega_f \\ g_s^1(\vec{x})\coloneqq &0, \quad && \vec{x} \in \Omega_s \end{alignat*} \end{configuration} \begin{configuration}\label{configuration_2} \begin{alignat*}{2} g_f^1(\vec{x})\coloneqq &0,\quad && \vec{x} \in \Omega_f \\ g_s^1(\vec{x})\coloneqq &e^{-\left((x_1 - \frac{1}{2})^2 + (x_2 + \frac{1}{2})^2\right)}, \quad && \vec{x} \in \Omega_s \end{alignat*} \end{configuration} For both configurations, we chose the same time component $g^2(t) \coloneqq \mathbbm{1}_{\big[\lfloor t \rfloor, \lfloor t \rfloor + \frac{1}{10}\big)}(t)$ for $t \in I$, illustrated in Figure \ref{g_2}. \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 1.0] \draw (0.0, 1.0) -- (4.0, 1.0) -- (4.0, -1.0); \draw (4.0, -1.0) -- (0.0, -1.0) -- (0.0, 1.0); \draw[dashed] (0.0, 0.0) -- (4.0, 0.0); \node at (0.35, 0.25) {$\Omega_f$}; \node at (-0.35, 0.5) {$\Gamma_f^1$}; \node at (-0.35, -0.5) {$\Gamma_s^1$}; \node at (4.35, 0.5) {$\Gamma_f^3$}; \node at (4.35, -0.5) {$\Gamma_s^3$}; \node at (0.35, -0.75) {$\Omega_s$}; \node at (2.0, 1.35) {$\Gamma_f^2$}; \node at (2.0, 0.35) {$\Gamma$}; \node at (2.0, - 1.35) {$\Gamma_s^2$}; \end{tikzpicture} \caption{View of the domain split into ``fluid'' $\Omega_f$ and ``solid'' $\Omega_s$ along the common interface~$\Gamma$. } \label{domain} \end{center} \end{figure} \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale = 0.85] \draw (-0.5, 0) -- (5.5, 0); \draw[ultra thick] (0,0.5) -- (0.5, 0.5); \draw[ultra thick] (0.5,0) -- (5, 0); \draw[fill = black] (0, 0.5) circle (0.075cm); \draw[fill = white] (0.5, 0.5) circle (0.075cm); \draw[fill = black] (0.5, 0) circle (0.075cm); \draw[fill = white] (5, 0) circle (0.075cm); \node at (0, -0.3) {0}; \node at (0.5, -0.3) {0.1}; \node at (5, -0.3) {1}; \draw[white] (0.0, 1.0) -- (1.0, 1.0); \end{tikzpicture} \caption{Function $g^2$ on $I=[0,T]$ for $T = 1$. } \label{g_2} \end{center} \end{figure} Since our example may be regarded as a simplified FSI problem, we will use the corresponding nomenclature in the text: we refer to the domain $\Omega_f$ as the \textit{fluid domain} and to the problem posed there as the \textit{fluid problem}, and similarly we use the phrases \textit{solid domain} and \textit{solid problem}. \subsection{Continuous variational formulation} As a first step, let us introduce a family of Hilbert spaces which will later serve as trial and test spaces for our variational problems, \begin{equation*} X(V) = \left\{ v \in L^2(I, V) |\; \partial_t v \in L^2(I, V^*) \right\}. \end{equation*} Because we would like to incorporate the Dirichlet boundary conditions on $\Gamma_f^2$ and on $\Gamma_s^1$, $\Gamma_s^3$ into the solution spaces, for $\Upsilon \subset \partial \Omega$ we define \begin{equation*} H^1_0(\Omega; \Upsilon) = \left\{v \in H^1(\Omega)|\; v_{|\Upsilon} = 0 \right\}. \end{equation*} We denote the dual space by $H^{-1}(\Omega) \coloneqq \left(H^1_0(\Omega;\Upsilon)\right)^*$.
For our example, we choose $H_f \coloneqq H^1_0(\Omega_f; \Gamma_f^2)$ and $H_s \coloneqq H^1_0(\Omega_s;\Gamma_s^1\cup\Gamma_s^3)$ as spatial function spaces. We take $X_f \coloneqq (X(H_f))^2$, $X_s\coloneqq (X(H_s))^2$ and $X = X_f \times X_s$ as space-time trial and test function spaces. We use the following notation for inner products and duality pairings: \begin{alignat*}{2} & (u, \varphi)_f\coloneqq (u, \varphi)_{L^2(\Omega_f)}, \quad && \langle u, \varphi \rangle_f \coloneqq \langle u, \varphi \rangle_{H^{-1}(\Omega_f) \times H_f}, \\ & (u, \varphi)_s\coloneqq (u, \varphi)_{L^2(\Omega_s)}, && \langle u, \varphi \rangle_s \coloneqq \langle u, \varphi \rangle_{H^{-1}(\Omega_s) \times H_s}, \\ & && \langle u, \varphi \rangle_{\Gamma} \coloneqq \langle u, \varphi \rangle_{H^{-\frac{1}{2}}(\Gamma) \times H^{\frac{1}{2}}(\Gamma)}. \end{alignat*} To shorten the notation, we introduce the abbreviations \[ \begin{aligned} \vec{U}_f &\coloneqq \left(\begin{matrix} u_f \\ v_f \end{matrix}\right),& \vec{U}_s &\coloneqq \left(\begin{matrix} u_s \\ v_s \end{matrix}\right), & \vec{U} &\coloneqq \left(\begin{matrix} \vec{U}_f \\ \vec{U}_s \end{matrix}\right),\\ \boldsymbol{\Phi}_f &\coloneqq \left(\begin{matrix} \varphi_f \\ \psi_f \end{matrix}\right),& \boldsymbol{\Phi}_s &\coloneqq \left(\begin{matrix} \varphi_s \\ \psi_s \end{matrix}\right), & \boldsymbol{\Phi} &\coloneqq \left(\begin{matrix} \boldsymbol{\Phi}_f \\ \boldsymbol{\Phi}_s \end{matrix}\right). \end{aligned} \] After these preliminaries, we are ready to construct a continuous variational formulation of the problem. We define the operators describing the fluid and the solid problem, \begin{subequations} \begin{flalign} B_f(\vec{U})(\boldsymbol{\Phi}_f) \coloneqq &\int_I \langle \partial_t v_f, \varphi_f \rangle_f \diff t + \int_I a_f(\vec{U})(\boldsymbol{\Phi}_f) \diff t + (v_f(0), \varphi_f(0))_f, \label{b_f} \\ B_s(\vec{U})(\boldsymbol{\Phi}_s) \coloneqq &\int_I \langle \partial_t v_s, \varphi_s \rangle_s \diff t + \int_I \langle \partial_t u_s, \psi_s \rangle_s \diff t + \int_I a_s(\vec{U})(\boldsymbol{\Phi}_s) \diff t \label{b_s} \\ &\qquad + (v_s(0), \varphi_s(0))_s + (u_s(0), \psi_s(0))_s, \nonumber\\ F_f(\boldsymbol{\Phi}_f) \coloneqq &\int_I ( g_f, \varphi_f )_f \diff t, \nonumber\\ F_s(\boldsymbol{\Phi}_s) \coloneqq &\int_I ( g_s, \varphi_s )_s \diff t \nonumber \end{flalign} \end{subequations} with \begin{subequations} \begin{flalign} a_f(\vec{U})(\boldsymbol{\Phi}_f) & \coloneqq (\nu \nabla v_f, \nabla \varphi_f)_f + (\beta \cdot \nabla v_f, \varphi_f)_f + (\nabla u_f, \nabla \psi_f )_f \label{a_f} \\ &\qquad - \langle \partial_{\vec{n}_f} u_f, \psi_f \rangle_{\Gamma} + \frac{\gamma}{h}\langle u_f - u_s, \psi_f \rangle_{\Gamma} \nonumber \\ &\qquad - \langle \nu \partial_{\vec{n}_f} v_f, \varphi_f \rangle_{\Gamma} + \frac{\gamma}{h} \langle v_f - v_s, \varphi_f \rangle_{\Gamma}, \nonumber \\ a_s(\vec{U})(\boldsymbol{\Phi}_s) & \coloneqq (\lambda \nabla u_s, \nabla \varphi_s)_s + (\delta \nabla v_s, \nabla \varphi_s)_s - (v_s, \psi_s )_s \label{a_s} \\ & \qquad + \langle \nu \partial_{\vec{n}_f} v_f, \varphi_s \rangle_{\Gamma} - \langle \delta \partial_{\vec{n}_s} v_s, \varphi_s \rangle_{\Gamma}. \nonumber \end{flalign} \end{subequations} All Laplacian terms were integrated by parts and the coupling conditions were added to the forms: the kinematic coupling condition is incorporated into the fluid problem, while the dynamic coupling condition becomes part of the solid problem.
The Dirichlet boundary conditions over the interface $\Gamma$ were formulated in a weak sense using Nitsche's method \cite{Nitsche1971}. We set the penalty parameter to $\gamma = 1000$, while $h$ denotes the spatial mesh size. In compact form, the variational problem reads: \begin{problem} Find $\vec{U} \in X$ such that \begin{flalign*} & B_f(\vec{U})(\boldsymbol{\Phi}_f) = F_f(\boldsymbol{\Phi}_f), \\ & B_s(\vec{U})(\boldsymbol{\Phi}_s) = F_s(\boldsymbol{\Phi}_s) \end{flalign*} for all $\boldsymbol{\Phi}_f \in X_f $ and $\boldsymbol{\Phi}_s \in X_s $. \label{continuous_problem} \end{problem} \subsection{Semi-discrete Petrov-Galerkin formulation} \label{semi-discrete_formulation} One of the main challenges emerging from the discretization of Problem \ref{continuous_problem} is the construction of a suitable partitioning of the time interval. Our main objectives are: \begin{enumerate} \item \textbf{Handling coupling conditions} \\ For the time interval $I = [0, T]$ we introduce a coarse time mesh which is shared by both of the subproblems, \[ 0 = t_0 < t_1 < ... < t_N = T,\quad k_n = t_n - t_{n - 1}, \quad I_n = (t_{n - 1}, t_n]. \] We will refer to this mesh as the \textit{macro time mesh}. \item \textbf{Allowing for different time-step sizes (possibly non-uniform) in both subproblems} \\ For each of the subintervals $I_n = (t_{n - 1}, t_{n}]$ we create two distinct submeshes, one for each subproblem, $$ t_{n - 1} = t_{f, n}^0 < t_{f, n}^1 < ... < t_{f, n}^{M_n} = t_n, \quad k_{f, n}^m = t_{f, n}^{m} - t_{f, n}^{m - 1}, \quad I_{f, n}^m = (t_{f, n}^{m - 1}, t_{f, n}^{m}],$$ $$ t_{n - 1} = t_{s, n}^0 < t_{s, n}^1 < ... < t_{s, n}^{L_n} = t_n, \quad k_{s, n}^l = t_{s, n}^{l} - t_{s, n}^{l - 1}, \quad I_{s, n}^l = (t_{s, n}^{l - 1}, t_{s, n}^{l}].$$ We will refer to these meshes as \textit{micro time meshes}. \end{enumerate} We define the grid sizes $$k_f \coloneqq \max_{n = 1,...,N} \max_{m = 1,...,M_{n}}k_{f, n}^m,\quad k_s \coloneqq \max_{n = 1,...,N} \max_{l = 1,...,L_{n}}k_{s, n}^l,$$ $$k \coloneqq \max \{k_f, k_s \}.$$ As trial spaces, we choose spaces consisting of piecewise linear functions in time, \[ \begin{aligned} X^{1, n}_{f, k}& = \left\{ v \in C(\bar{I_n}, L^2(\Omega_f))|\; v|_{I_{f, n}^m} \in \mathcal{P}_1(I_{f, n}^m, H_f)\text{ for } m = 1,...,M_{n}\right\}, \\ X^1_{f, k}& = \left\{ v \in C(\bar{I}, L^2(\Omega_f))|\; v|_{I_n} \in X^{1, n}_{f, k} \text{ for } n = 1,...,N\right\}, \\ X^{1, n}_{s, k}& = \left\{ v \in C(\bar{I_n}, L^2(\Omega_s))|\; v|_{I_{s, n}^l} \in \mathcal{P}_1(I_{s, n}^l, H_s)\text{ for } l = 1,...,L_{n}\right\}, \\ X^1_{s, k} &= \left\{ v \in C(\bar{I}, L^2(\Omega_s))|\; v|_{I_n} \in X^{1, n}_{s, k} \text{ for } n = 1,...,N \right\}, \end{aligned} \] whereas we take spaces of piecewise constant functions as test spaces, \[ \begin{aligned} Y^{0, n}_{f, k} &= \left\{ v \in L^2(I_n, L^2(\Omega_f))|\; v|_{I_{f, n}^m} \in \mathcal{P}_0(I_{f, n}^m, H_f) \text{ for } m = 1,...,M_{n} \right. \\ &\hspace{6cm} \left. \text{ and }v(t_{n - 1}) \in L^2(\Omega_f)\right\}, \\ Y^{0}_{f, k}& = \left\{ v \in L^2(I, L^2(\Omega_f))|\; v|_{I_{n}} \in Y^{0, n}_{f, k}\text{ for } n = 1,...,N\right\}, \\ Y^{0, n}_{s, k} &= \left\{ v \in L^2(I_n, L^2(\Omega_s))|\; v|_{I_{s, n}^l} \in \mathcal{P}_0(I_{s, n}^l, H_s)\text{ for }l = 1,...,L_{n} \right.\\ &\hspace{6cm} \left. \text{ and }v(t_{n - 1}) \in L^2(\Omega_s)\right\}, \\ Y^{0}_{s, k} &= \left\{ v \in L^2(I, L^2(\Omega_s))|\; v|_{I_{n}} \in Y^{0, n}_{s, k}\text{ for } n = 1,...,N\right\}.
\end{aligned} \] By $\mathcal{P}_r(I,H)$ we denote the space of polynomials of degree at most $r$ with values in $H$. To shorten the notation, we set \[ \begin{aligned} X_{f, k}^n&\coloneqq \left(X_{f, k}^{1, n} \right)^2,\quad& X_{s, k}^n &\coloneqq \left(X_{s, k}^{1, n} \right)^2, \quad& X_k^n &\coloneqq X_{f, k}^n \times X_{s, k}^n, \\ X_{f, k} &\coloneqq \left(X_{f, k}^1 \right)^2,& X_{s, k}&\coloneqq \left(X_{s, k}^1 \right)^2,& X_k & \coloneqq X_{f, k} \times X_{s, k}, \\ Y_{f, k}^n&\coloneqq \left(Y_{f, k}^{0, n} \right)^2,& Y_{s, k}^n& \coloneqq \left(Y_{s, k}^{0, n} \right)^2,& Y_k^n& \coloneqq Y_{f, k}^n \times Y_{s, k}^n, \\ Y_{f, k}& \coloneqq \left(Y_{f, k}^0 \right)^2,& Y_{s, k}& \coloneqq \left(Y_{s, k}^0 \right)^2,& Y_k& \coloneqq Y_{f, k} \times Y_{s, k}. \end{aligned} \] We do not assume that the interior points of the fluid and solid micro time-meshes coincide; i.e., for $n = 1, ..., N$, $m = 1, ..., M_{n} - 1$, $l = 1,...,L_{n} - 1$ we may have $t_{f, n}^{m} \neq t_{s, n}^{l}$. As a consequence, a function defined on the fluid micro time-mesh cannot be evaluated directly at the points of the solid micro time-mesh, and vice versa. To resolve this, we introduce the nodal interpolation operators \[ i_n^f:X^n \to X_f^n \times \mathcal{P}_1(I_n, X_s^n),\quad i_n^s:X^n \to \mathcal{P}_1(I_n, X_f^n) \times X_s^n, \] where $X^n \coloneqq X\Big|_{I_n}$, $X_f^n \coloneqq X_f\Big|_{I_n}$, $X_s^n \coloneqq X_s\Big|_{I_n}$ and \begin{equation} \label{interpolation_operator_primal} \begin{aligned} i_n^f \vec{U}(t) &\coloneqq \left(\begin{matrix} \vec{U}_f(t) \\ \frac{t_n - t}{k_n}\vec{U}_s(t_{n - 1}) + \frac{t - t_{n - 1}}{k_n}\vec{U}_s(t_n) \end{matrix}\right), \\ i_n^s \vec{U}(t) &\coloneqq \left(\begin{matrix} \frac{t_n - t}{k_n}\vec{U}_f(t_{n - 1}) + \frac{t - t_{n - 1}}{k_n}\vec{U}_f(t_n) \\ \vec{U}_s(t)\end{matrix}\right). \end{aligned} \end{equation} Since the operators $B_f$ and $B_s$ are linear, the resulting scheme is equivalent to the Crank-Nicolson scheme, up to the numerical quadrature of $F_f$; see also~\cite{ErikssonEstepHansboJohnson1995,Thomee1997}.
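The action of these interpolation operators is elementary; the following Python sketch (with illustrative names and toy data, not our implementation) shows the evaluation of $i_n^f$ from \eqref{interpolation_operator_primal}: the fluid component is kept as-is, while the solid component is replaced by its linear interpolant between the macro nodes $t_{n-1}$ and $t_n$.
\begin{verbatim}
import numpy as np

def i_f(t, U_f, t_prev, t_n, U_s_prev, U_s_n):
    """Evaluate (i_n^f U)(t) on the macro step I_n = (t_prev, t_n]:
    the fluid state is used directly, the solid state is replaced by
    the linear interpolant between its values at the macro nodes."""
    k_n = t_n - t_prev
    U_s = ((t_n - t) / k_n) * U_s_prev + ((t - t_prev) / k_n) * U_s_n
    return U_f(t), U_s

# toy usage on I_n = (0, 0.1] with two solid degrees of freedom
U_f = lambda t: np.array([np.sin(t), np.cos(t)])      # resolved fluid state
U_s_prev, U_s_n = np.zeros(2), np.array([0.1, 1.0])   # solid macro-node values
print(i_f(0.05, U_f, 0.0, 0.1, U_s_prev, U_s_n))
\end{verbatim}
The operator $i_n^s$ acts mirror-symmetrically, interpolating the fluid component instead.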
Taking trial functions $\vec{U}_k \in X_k$ that are piecewise linear in time and test functions $\boldsymbol{\Phi}_{f, k} \in Y_{f, k}$, $\boldsymbol{\Phi}_{s, k} \in Y_{s, k}$ that are piecewise constant in time, we can construct operators on each of the macro time-steps $I_n = (t_{n - 1}, t_n]$: \begin{equation}\label{b_f^n} \begin{aligned} B_f^n(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) \coloneqq & \sum_{m = 1}^{M_{n}} \bigg\{ (v_{f, k}(t_{f, n}^m) - v_{f, k}(t_{f, n}^{m - 1}), \varphi_{f, k}(t_{f, n}^{m}))_f \\ & \qquad + \frac{k_{f, n}^m}{2}a_f(i_n^f \vec{U}_{k}(t_{f, n}^m))(\boldsymbol{\Phi}_{f, k}(t_{f, n}^m)) \\ & \qquad + \frac{k_{f, n}^m}{2}a_f(i_n^f\vec{U}_{k}(t_{f, n}^{m - 1}))(\boldsymbol{\Phi}_{f, k}(t_{f, n}^m))\bigg\}, \end{aligned} \end{equation} \begin{equation}\label{b_s^n} \begin{aligned} B_s^n(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) \coloneqq & \sum_{l = 1}^{L_{n}} \bigg\{ (v_{s, k}(t_{s, n}^{l}) - v_{s, k}(t_{s, n}^{l - 1}), \varphi_{s, k}(t_{s, n}^{l}))_s \\ & \qquad + (u_{s, k}(t_{s, n}^{l}) - u_{s, k}(t_{s, n}^{l - 1}), \psi_{s, k}(t_{s, n}^{l}))_s \\ & \qquad + \frac{k_{s, n}^l}{2}a_s(i_n^s\vec{U}_{k}(t_{s, n}^l))(\boldsymbol{\Phi}_{s, k}(t_{s, n}^l)) \\ & \qquad + \frac{k_{s, n}^l}{2}a_s(i_n^s\vec{U}_{k}(t_{s, n}^{l - 1}))(\boldsymbol{\Phi}_{s, k}(t_{s, n}^l)) \bigg\}, \end{aligned} \end{equation} \begin{equation*} \begin{aligned} F_f^n(\boldsymbol{\Phi}_{f, k}) \coloneqq & \sum_{m = 1}^{M_{n}} \left(\int_{I_{f, n}^m}g_f(t) \diff t, \varphi_{f, k}(t_{f, n}^m) \right)_f,\\ F_s^n(\boldsymbol{\Phi}_{s, k}) \coloneqq & \sum_{l = 1}^{L_{n}} \left(\int_{I_{s, n}^l}g_s(t) \diff t, \varphi_{s, k}(t_{s, n}^l) \right)_s. \end{aligned} \end{equation*} The forms on the whole time interval $I= [0, T]$ are then obtained by summing these operators over the subintervals and adding the initial condition terms: \begin{subequations} \begin{flalign*} B_f(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) = & \sum_{n = 1}^{N} B_f^n(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) + (v_{f, k}(t_0), \varphi_{f, k}(t_0))_f,\\ B_s(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) = & \sum_{n = 1}^{N}B_s^n(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) + (v_{s, k}(t_0), \varphi_{s, k}(t_0))_s + (u_{s, k}(t_0), \psi_{s, k}(t_0))_s, \\ F_f(\boldsymbol{\Phi}_{f, k}) = & \sum_{n = 1}^{N} F_f^n(\boldsymbol{\Phi}_{f, k}), \\ F_s(\boldsymbol{\Phi}_{s, k}) = & \sum_{n = 1}^{N} F_s^n(\boldsymbol{\Phi}_{s, k}). \end{flalign*} \end{subequations} With these at hand, we can pose the semi-discrete variational problem: \begin{problem} Find $\vec{U}_k \in X_k$ such that \begin{flalign*} & B_f(\vec{U}_k)(\boldsymbol{\Phi}_{f, k}) = F_f(\boldsymbol{\Phi}_{f, k}), \\ & B_s(\vec{U}_k)(\boldsymbol{\Phi}_{s, k}) = F_s(\boldsymbol{\Phi}_{s, k}) \end{flalign*} for all $\boldsymbol{\Phi}_{f, k} \in Y_{f, k}$ and $\boldsymbol{\Phi}_{s, k} \in Y_{s, k}$. \label{semi_discrete_problem} \end{problem}
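For orientation, the partitioned relaxation scheme mentioned in the abstract (and studied in Section~\ref{decoupling_methods}) can be sketched in a few lines of Python. The interface map $S$ below is a hypothetical linear stand-in for the composition ``solve the fluid problem with given solid interface data, then solve the solid problem with the resulting fluid trace''; the matrix, the right-hand side and the relaxation parameter are illustrative only.
\begin{verbatim}
import numpy as np

A = np.array([[0.0, 0.4], [0.3, 0.0]])   # toy coupling of two interface dofs
b = np.array([1.0, -0.5])

def S(g):
    # stand-in for one macro step: fluid solve with solid data g,
    # then solid solve, returning the updated interface values
    return A @ g + b

def relaxation(g, omega=0.7, tol=1e-12, max_iter=100):
    # relaxed fixed-point iteration g <- g + omega * (S(g) - g)
    for it in range(max_iter):
        r = S(g) - g
        if np.linalg.norm(r) < tol:
            return g, it
        g = g + omega * r
    return g, max_iter

g, iters = relaxation(np.zeros(2))
print("converged interface values:", g, "after", iters, "iterations")
\end{verbatim}
At the fixed point $S(g) = g$, the two subproblem solves are mutually consistent, which is precisely the coupling required by Problem~\ref{semi_discrete_problem} on each macro step.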
\section{Introduction} Ionising radiation from OB stars influences the surrounding interstellar medium (ISM) on parsec scales. As the gas surrounding a high-mass star is heated, it expands, forming an HII region. The consequence of this expansion is twofold: on the one hand, gas is removed from the centre of the potential, preventing further gravitational collapse and perhaps even disrupting the parent molecular cloud. On the other hand, gas is swept up and compressed beyond the ionisation front, producing high-density regions that may be susceptible to gravitational collapse (i.e. the ``collect and collapse'' model, Elmegreen et al. 1995). Furthermore, pre-existing, marginally gravitationally stable clouds may also be driven to collapse by the advancing ionisation front (i.e. ``radiation-driven implosion'', Bertoldi 1989). Finally, ionising radiation has also been suggested as a driver of small-scale turbulence in a cloud (Gritschneder et al. 2009b). Observations (e.g. Deharveng, these proceedings) and theory (e.g. Dale et al. 2005, 2007, Gritschneder et al. 2009b) often present examples of positive and negative feedback; however, the net effect on the global star formation efficiency is still under debate. From a theoretical point of view, different groups have performed a number of numerical experiments demonstrating that the efficacy and direction of photoionisation feedback are very sensitive to the specific initial conditions, in particular to the location of the ionising source(s) and to whether the cloud is initially bound or unbound. This suggests that a parameter space study may be necessary to assess which environmental variables affect the direction in which feedback proceeds. Several authors in these proceedings discuss the results of recent ionisation feedback simulations (see oral contributions by Arthur, Bisbas, Gritschneder and Walch, and poster contributions by Choudhury, Cornwall, Miao, Motoyama, Rodon and Tremblin). As the field matures and the codes become more sophisticated, it becomes important to assess the accuracy and limitations of the methods currently employed. The computational demand of treating the radiation transfer (RT) and photoionisation (PI) problem within a large-scale hydrodynamical simulation has led to the development of approximate algorithms that drastically simplify the physics of RT and PI. In this review we will describe some of the most common approximations employed by current RT+PI implementations, highlighting some potentially important shortcomings. We will then present the results of our ongoing efforts to test current implementations against the 3D Monte Carlo code {\sc mocassin} (Ercolano et al. 2003, 2005, 2008), which includes all the necessary microphysics and solves the ionisation, thermal and statistical equilibrium in detail. \section{Some Common Approximations} The importance of studying the photoionisation process as part of hydrodynamical star formation simulations has long been recognised. Until very recently, however, due to the complexity and the computational demand of the problem, the evolution of ionised gas regions had only been studied in rather idealised systems (e.g. Yorke et al. 1989; Garcia-Segura \& Franco 1996), with simulations often lacking in resolution and dimensionality. In recent years, however, the situation has been rapidly improving, with more sophisticated implementations of ionising radiation both in grid-based codes (e.g. Mellema et al. 2006; Peters et al.
2010) and Smoothed Particle Hydrodynamics (SPH) codes (e.g. Kessel-Deynet \& Burkert 2000; Miao et al. 2006; Dale et al. 2007; Gritschneder et al. 2009; Bisbas et al. 2009). Klessen et al. (2009) and Mac Low et al. (2007) present recent reviews of the numerical methods employed. While the new codes can achieve higher resolutions and can treat more realistic geometries, the treatment of RT and PI is still rather crude in most cases. Even in the current era of parallel computing, an exact solution of the RT and PI problem in three dimensions within SPH calculations is still prohibitive. Some common approximations include the following: \begin{enumerate} \item Monochromatic radiation field: in order to avoid the burden of frequency-resolved RT calculations, monochromatic calculations are often carried out, where all the ionising flux is assumed to be at 13.6~eV (i.e. the H ionisation potential). This approximation is often implicit in the choice of a single value for the gas opacity, and it is of course implicit in Str\"omgren-type calculations. Implicit or explicit monochromatic fields have the serious drawback that the ionisation and temperature structure of the gas cannot be calculated. \item Ionisation and thermal balance: the balance equations are not solved, {\it or} simple heating/cooling functions are employed, {\it or} the temperature is taken to be a simple function of an approximate ionisation fraction. When monochromatic fields are employed, it is not possible to calculate the terms needed to solve the balance equations, and idealised temperature distributions must be used. \item On-the-spot (OTS) approximation (no diffuse field): the OTS approximation is described in detail by Osterbrock \& Ferland (2006, page 24). In the OTS approximation the diffuse component of the radiation field is ignored, under the assumption that any ionising photon emitted by the gas will be reabsorbed close to where it was emitted, hence not contributing to the net ionisation of the nebula. This is not a bad approximation in the case of reasonably dense homogeneous or smoothly varying density fields, but it is certain to fail in the highly inhomogeneous star-forming gas, where the ionisation and temperature structure of regions that lie behind high-density clumps and filaments is often dominated by the diffuse field. \item Steady-state calculations (instantaneous ionisation): the ionisation structure and the gas temperature of a photoionised region are often obtained by simultaneously solving the {\it steady-state} thermal balance and ionisation equilibrium equations. This approximation is valid when the atomic physics timescales are shorter than the dynamical timescales and than the timescale on which the ionising field changes. In this case, the photoionisation problem is completely decoupled from the dynamics, and it can be solved for a given gas density distribution obtained as a snapshot at a given time in the evolution of a cloud. This is a fair assumption for the purpose of studying ionisation feedback on large scales, as most of the gas will be in equilibrium. Non-equilibrium effects, however, should still be kept in mind when interpreting the spectra of regions close to the ionisation front or where shocks are present.
\end{enumerate} \section{How good are the approximations?} In cases where steady-state calculations are appropriate, it is possible to test the effects of approximations (i)--(iii) in the above list by comparing the temperature distributions obtained by the hydro+ionisation codes against those obtained by a specialised photoionisation code, such as {\sc mocassin}, for density snapshots at several times in the hydrodynamical simulations. {\sc mocassin} is a fully three-dimensional photoionisation and dust radiative transfer code that employs a Monte Carlo approach to the fully frequency-resolved transfer of radiation. The code includes all the microphysical processes that influence the gas ionisation and thermal balance, as well as those that couple the gas and dust phases. In the case of an HII region ionised by an OB star, the dominant heating process for typical gas abundances is H photoionisation, balanced by cooling via collisionally excited line emission (dominant), recombination line emission, and free-bound and free-free emission. The atomic database of {\sc mocassin} includes opacity data from Verner et al. (1996), energy levels, collision strengths and transition probabilities from Version 5.2 of the CHIANTI database (Landi et al. 2006, and references therein), and the improved hydrogen and helium free-bound continuous emission data of Ercolano \& Storey (2006). Dale et al. (2007, DEC07) performed detailed comparisons against {\sc mocassin}'s solution for the temperature structure of a complex density field ionised by a newly born massive star located at the convergence of high-density accretion streams. They found that the two codes were in fair agreement on the ionised mass fractions in high-density regions, while low-density regions proved problematic for the DEC07 algorithm. The temperature structure, however, was poorly reproduced by the DEC07 algorithm, highlighting the need for more realistic prescriptions. For more details see DEC07. More recently, we have used the {\sc mocassin} code to calculate the temperature and ionisation structure of the turbulent ISM density fields presented by Gritschneder et al. (2009b, hereafter: G09b). The SPH particle fields were obtained with the {\sc iVine} code (Gritschneder et al. 2009a) and mapped onto a regular 128$^3$ Cartesian grid. In order to compare with {\sc iVine}, which calculates the RT along parallel rays, the stellar field in {\sc mocassin} was forced to be plane-parallel, while the subsequent RT was performed in three dimensions, thus allowing for an adequate representation of the diffuse field. The incoming stellar field was set to the value used by G09b ($Q_H^0$~=~5$\times$10$^9$ ionising photons per second) and a blackbody spectrum of 40~kK was assumed. We ran H-only simulations (referred to as ``H-only'') and simulations with typical HII region abundances (referred to as ``Metals''). The elemental abundances are as follows, given as number densities with respect to hydrogen: He/H = 0.1, C/H = $2.2\times10^{-4}$, N/H = $4.0\times10^{-5}$, O/H = $3.3\times10^{-4}$, Ne/H = $5.0\times10^{-5}$, S/H = $9.0\times10^{-6}$. The resulting {\sc mocassin} temperature and ionisation structure grids were compared to those obtained by {\sc iVine} in order to address the following questions: \begin{enumerate} \item Are the global ionisation fractions accurate? \item How accurate is the gas temperature distribution? \item What is the effect of the diffuse field? \item How can the algorithm be improved?
\end{enumerate} \begin{figure*} \begin{center} \includegraphics[width=18.0cm]{ercolano_fig1.eps} \caption{Surface density of electrons projected in the z-direction for the G09b turbulent ISM simulation at t~=~0.5~Myr. {\it Left:} {\sc iVine}; {\it Middle:} {\sc mocassin} H-only; {\it Right:} {\sc mocassin} nebular abundances.} \end{center} \label{f:sigmane} \end{figure*} \subsection{Global Properties} Figure~1 shows the surface density of electrons projected in the z-direction for the G09b turbulent ISM simulation at t~=~0.5~Myr. No significant differences are noticeable in the integrated ionisation structure, implying that the global ionisation structure is correctly determined by {\sc iVine}. This is also confirmed by the comparison of the total ionised mass fractions: at t~=~0.5~Myr, {\sc iVine} obtains a total ionised mass fraction of 13.9\%, while {\sc mocassin} ``H-only'' and ``Metals'' obtain 15.6\% and 14.0\%, respectively. The agreement at other time snapshots is equally good (e.g. at t~=~250~kyr {\sc iVine} obtains 9.1\% and {\sc mocassin} ``Metals'' 9.5\%). It may at first appear curious that the agreement should be better between {\sc iVine} and {\sc mocassin} ``Metals'' than with {\sc mocassin} ``H-only'', given that only H ionisation is considered in {\sc iVine}. This is, however, simply explained by the fact that {\sc iVine} adopts an ``ionised gas temperature'' ($T_{hot}$) of 10~kK, which is close to a {\it typical} HII region temperature, with {\it typical} gas abundances. The removal of metals in the ``H-only'' simulations causes the temperature to rise to values close to 17~kK, because cooling becomes much less efficient without the collisionally excited lines of oxygen, carbon, etc. The hotter temperatures in the ``H-only'' models translate directly into slower recombination, as the recombination coefficient is proportional to the inverse square root of the temperature (going from 10~kK to 17~kK thus slows recombination by a factor of $\sqrt{17/10} \approx 1.3$). As a result of the slower recombination, the ``H-only'' grids have a slightly larger degree of ionisation. \subsection{Ionisation and temperature structure} \begin{figure*} \begin{center} \includegraphics[width=18.0cm]{ercolano_fig2.eps} \caption{Density and temperature maps for the z = 25 slice of the G09b turbulent ISM simulation at t~=~0.5~Myr. {\it Top left:} gas density map; {\it Top right:} electron temperature $T_e$ as calculated by {\sc iVine}; {\it Bottom left:} electron temperature $T_e$ as calculated by {\sc mocassin} with H-only; {\it Bottom right:} electron temperature $T_e$ as calculated by {\sc mocassin} with nebular abundances.} \end{center} \label{f:te} \end{figure*} Accurate gas temperatures are of prime importance, as this is how feedback from ionising radiation impacts the hydrodynamics of the system. In Figure~2 we compare the electron temperatures $T_e$ calculated by {\sc iVine} and {\sc mocassin} (``H-only'' and ``Metals'') in a z-slice of the t~=~0.5~Myr grid. The top-left panel shows the number density [cm$^{-3}$] map for the selected slice. The large shadow regions behind the high-density clumps are immediately evident in both figures. These shadows are largely reduced in the {\sc mocassin} calculations as a result of diffuse-field ionisation. The diffuse field is softer than the stellar field, and therefore temperatures in the shadow regions are lower. The higher temperatures in the shadow regions of the {\sc mocassin} ``Metals'' model are a consequence of the helium Lyman radiation and the heavy-element free-bound contribution to the diffuse field.
The rise in gas temperature shown in the {\sc mocassin} results at larger distances from the star is not surprising; it is a simple consequence of radiation hardening and of the recombination of some of the dominant cooling ions. \section{Towards more realistic algorithms} \begin{figure*} \begin{center} \includegraphics[width=18.cm]{ercolano_fig3.eps} \caption{Density slice at 250~kyr for the OTS {\sc iVine} (left) and the diffuse-field {\sc iVine} (right).} \end{center} \label{f:divine} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=8.5cm]{ercolano_fig4.eps} \caption{Turbulence spectrum obtained for the standard OTS {\sc iVine} (solid lines), the control run with no ionising radiation (dotted line) and the diffuse-field {\sc iVine} (dashed lines).} \end{center} \label{f:turb} \end{figure} As {\sc iVine} solves the transfer along plane-parallel rays, it currently has no means of bringing ionisation (and hence heating) to regions that lie behind high-density clumps. This creates a large temperature (pressure) gradient between neighbouring direct and diffuse-field dominated regions, which may have important implications for the dynamics, particularly with respect to turbulence calculations. The same problem is faced by all codes that employ the OTS approximation and thus ignore the diffuse-field contributions. In order to investigate whether the error introduced by the OTS approximation has any consequences for the dynamical evolution of the system and for the turbulence spectrum, we propose here a simple zeroth-order strategy to include the effects of the diffuse field in {\sc iVine}, which can be readily extended to other codes. It consists of the following steps: (i) identify the diffuse-field dominated (shadow) regions; (ii) study the realistic temperature distribution in the shadow regions using fully frequency-resolved three-dimensional photoionisation calculations performed with {\sc mocassin}, and parameterise the gas temperature in the shadow regions as a function of (e.g.) gas density; (iii) implement the temperature parameterisation in {\sc iVine} and update the gas temperatures in the shadow regions at every dynamical time step accordingly. We note that this approach allows environmental variables, such as the hardness of the stellar field and the metallicity of the gas, to be accounted for in the SPH calculation, since their effect on the temperature distribution is folded into the parameterisation obtained with {\sc mocassin}. Figure~3 shows a slice of the density structure at 250~kyr for the standard {\sc iVine} (left panel) compared to a first attempt at a diffuse-field implementation in {\sc iVine} (right panel). These preliminary results indicate that the diffuse field affects the evolution of structures, promoting the detachment of clumps from the pillars, presumably by excavating them from behind. Slightly higher densities are achieved in some of the clumps in the diffuse-field simulation at this age, suggesting that the formation of stars may thus be accelerated. The turbulence spectra obtained in simulations with and without the diffuse field are rather similar, as shown in Figure~4, where the specific kinetic energy is plotted as a function of wave number for the control run with no ionisation at all (dotted lines), the OTS {\sc iVine} (solid lines) and the diffuse-field {\sc iVine} (dashed lines).
There is, however, tentative evidence for less efficient driving at the smaller scales, due to the fact that the large temperature gradients created by the OTS approximation at the shadow regions are removed when the diffuse field is considered. We stress that the results presented here are to be considered only a first exploratory step to establish whether diffuse-field effects are likely to play a role in the dynamical evolution of a turbulent medium. More detailed comparisons will be presented in a forthcoming article (Ercolano \& Gritschneder 2010, in prep). \section{Conclusions} We have presented a review of current implementations of photoionisation algorithms in star formation hydrodynamical simulations, highlighting some of the most common approximations that are employed in order to simplify the radiative transfer and photoionisation problems. We discuss the robustness of the temperature fields obtained by such methods in light of recent tests against detailed 3D photoionisation calculations for complex density distributions typical of star-forming regions. We conclude that while the global ionised mass fractions obtained by the simplified methods are roughly in agreement, the temperature fields are poorly represented. In particular, the OTS approximation may lead to unrealistic shadow regions and extreme temperature gradients that affect the dynamical evolution of the system and, to a lesser extent, its turbulence spectrum. We propose a simple strategy to provide a more realistic description of the temperature distribution, based on parameterisations obtained with a dedicated photoionisation code, {\sc mocassin}, which includes frequency-resolved 3D radiative transfer and all the microphysical processes needed for an accurate calculation of the temperature distribution of ionised regions. This computationally inexpensive method allows one to include the thermal effects of the diffuse field, as well as to account for environmental variables such as gas metallicity and the hardness of the stellar spectrum.
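To illustrate step (iii) of the strategy outlined above, a minimal Python sketch follows; the power-law form and its coefficients are placeholders standing in for an actual fit to the {\sc mocassin} shadow-region temperatures, and the function and variable names are hypothetical.
\begin{verbatim}
import numpy as np

# hypothetical fit parameters for T(rho) in shadow regions
# (placeholders, not fitted values)
T0, rho0, slope = 3000.0, 100.0, -0.2

def shadow_temperature(rho):
    """Parameterised gas temperature [K] for diffuse-field dominated gas."""
    return T0 * (rho / rho0) ** slope

def update_temperatures(T, rho, in_shadow):
    """Overwrite T for particles flagged as shadowed; to be called at
    every dynamical time step of the SPH run."""
    T = T.copy()
    T[in_shadow] = shadow_temperature(rho[in_shadow])
    return T

# toy usage: three particles, the last one lying in a shadow region
T = np.array([10000.0, 10000.0, 100.0])
rho = np.array([10.0, 50.0, 300.0])
print(update_temperatures(T, rho, np.array([False, False, True])))
\end{verbatim}
Since the parameterisation is evaluated per particle, the cost of this update is negligible compared to the hydrodynamical step itself.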
\section{Introduction} Let $p$ be a fixed prime. We study homology and cohomology groups with $\mathbb{F}_p$-coefficients associated to towers of spectra \begin{equation} \label{equ_introtower} Y = \holim_{n\to-\infty} Y_n \to \cdots \to Y_{n-1} \to Y_n \to \cdots \,, \end{equation} where each $Y_n$ is bounded below and of finite type over~$\mathbb{F}_p$, and $Y$ is equal to the homotopy inverse limit of the tower. By a result of Caruso, May and Priddy \cite{CMP1987}, there exists an \emph{inverse limit of Adams spectral sequences} that calculates the homotopy groups of the $p$-completion $Y\sphat_p = F(S^{-1}\!/p^\infty, Y)$ of~$Y$, where $F(-, -)$ denotes the function spectrum and $S^{-1}\!/p^\infty$ is the Moore spectrum with homology $\mathbb{Z}/p^\infty$ in degree~$-1$. The $E_2$-term for this spectral sequence is given by the $\Ext$-groups of the direct limit of cohomology groups \begin{equation} \label{equ_introcolim} H_c^*(Y; \mathbb{F}_p) = \colim_{n\to-\infty} H^*(Y_n; \mathbb{F}_p) \,, \end{equation} arising from the tower~\eqref{equ_introtower}, considered as a module over the mod~$p$ Steenrod algebra $\mathscr{A}$. We shall refer to this colimit as the \emph{continuous cohomology groups} of $Y$. \subsection{The algebraic Singer construction} A natural question is: how well do we understand the structure of \eqref{equ_introcolim} as an $\mathscr{A}$-module? There is an interesting example of a tower of spectra where this question has the answer: very well. In fact, this question appeared in the study of Segal's Burnside ring conjecture for cyclic groups of prime order. At the heart of W.~H.~Lin's proof of the case $p=2$, published in \cite{LDMA1980}, lies a careful study of the $\mathscr{A}$-module $$ P = H_c^*(\mathbb{R} P^\infty_{-\infty}; \mathbb{F}_2) = \colim_{m\to\infty} H^*(\mathbb{R} P^\infty_{-m}; \mathbb{F}_2) \,, $$ and its associated $\Ext$-groups $\Ext^{*,*}_\mathscr{A}(P, \mathbb{F}_2)$. It turns out that $P = P(x, x^{-1}) = \mathbb{F}_2[x, x^{-1}]$ is isomorphic to the so-called \emph{Singer construction} $R_+(M)$ on the trivial $\mathscr{A}$-module $M=\mathbb{F}_2$, up to a degree shift. The Singer construction has an explicit description as a module over $\mathscr{A}$. More importantly, it has the property that there is a natural $\mathscr{A}$-module homomorphism $\epsilon \: R_+(M) \to M$ that induces an isomorphism $$ \epsilon^* \: \Ext^{*,*}_\mathscr{A}(M, \mathbb{F}_2) \overset\cong\to \Ext^{*,*}_\mathscr{A}(R_+(M), \mathbb{F}_2) \,. $$ \subsection{The topological Singer construction} Our objective is to present a topological realization and a useful generalization of these results. Specifically, for a bounded below spectrum $B$ of finite type over~$\mathbb{F}_p$, we construct a tower of spectra $$ (B^{\wedge p})^{tC_p} = \holim_{n\to-\infty} (B^{\wedge p})^{tC_p}[n] \to \cdots \to (B^{\wedge p})^{tC_p}[n{-}1] \to (B^{\wedge p})^{tC_p}[n] \to \cdots $$ as in~\eqref{equ_introtower}. Here $C_p$ is the cyclic group of order~$p$, and $B^{\wedge p}$ is a $C_p$-equivariant model of the $p$-th smash power of $B$. We call $R_+(B) = (B^{\wedge p})^{tC_p}$ the \emph{topological Singer construction} on $B$, and prove in Theorem~\ref{thm_topmodelsalg} that there is a natural isomorphism $$ R_+(H^*(B; \mathbb{F}_p)) \cong H_c^*(R_+(B); \mathbb{F}_p) = \colim_{n\to-\infty} H^*((B^{\wedge p})^{tC_p}[n]; \mathbb{F}_p) $$ of $\mathscr{A}$-modules. 
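As a concrete illustration of the algebraic object involved (a standard computation, recalled here for the reader's convenience): for $p=2$ the $\mathscr{A}$-module structure of $P = \mathbb{F}_2[x, x^{-1}]$ is given by the familiar formula $\Sq^i(x^j) = \binom{j}{i} x^{i+j}$, now extended to all $j \in \mathbb{Z}$ with the binomial coefficient reduced mod~$2$ via the expansion of $(1+x)^j$. For example, the total square of $x^{-1}$ is
$$ \Sq(x^{-1}) = x^{-1}(1+x)^{-1} = \sum_{i \ge 0} x^{i-1} \,, $$
so that $\Sq^i(x^{-1}) = x^{i-1}$ is nonzero for every $i \ge 0$, in contrast to the unstable module $H^*(\mathbb{R} P^\infty; \mathbb{F}_2)$, where $\Sq^i$ vanishes on $x^j$ whenever $i > j \ge 0$.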
Furthermore, we define a natural stable map $\epsilon_B \: B \to (B^{\wedge p})^{tC_p}$, and prove in Proposition~\ref{prop_epsilonB} that it realizes the $\mathscr{A}$-module homomorphism $\epsilon \: R_+(H^*(B; \mathbb{F}_p)) \to H^*(B; \mathbb{F}_p)$ in continuous cohomology. The Segal conjecture for $C_p$ follows as a special case of this, when $B=S$ is the sphere spectrum, since $S^{\wedge p}$ is a model for the genuinely $C_p$-equivariant sphere spectrum. More generally, we prove in Theorem~\ref{thm_fixedpoints} that for any bounded below spectrum $B$ with $H_*(B; \mathbb{F}_p)$ of finite type the canonical map $$ \Gamma \: (B^{\wedge p})^{C_p} \longrightarrow (B^{\wedge p})^{hC_p} \,, $$ relating the $C_p$-fixed points and $C_p$-homotopy fixed points for $B^{\wedge p}$, becomes a homotopy equivalence after $p$-completion. In \cite{BBLR}*{1.7}, with M.~B{\"o}kstedt and R.~Bruner, we deduce from this that there are $p$-adic equivalences $\Gamma_n \: (B^{\wedge p^n})^{C_{p^n}} \longrightarrow (B^{\wedge p^n})^{hC_{p^n}}$ for all $n\ge1$. \subsection{Outline of the paper} In \textsection \ref{sec_limitsofspectra}, we discuss towers of spectra as above, and the associated limit systems obtained by applying homology or cohomology with $\mathbb{F}_p$-coefficients. When dealing with towers of ring spectra it is convenient to work in homology, while for the formation of the $\Ext$-groups mentioned above it is convenient to work in cohomology. In order to be able to switch back and forth between cohomology and homology we discuss linear topologies arising from filtrations, and continuous dualization. Then we see how an $\mathscr{A}$-module structure in cohomology dualizes to an $\mathscr{A}_*$-comodule structure in a suitably completed sense. In \textsection \ref{sec_algsinger} we recall the algebraic Singer construction on an $\mathscr{A}$-module $M$, in its cohomological form, and study a dual homological version, defined for $\mathscr{A}_*$-comodules $M_*$ that are bounded below and of finite type. We give the details for the form of the Singer construction that is related to the group $C_p$, since most references only consider the smaller version related to the symmetric group $\Sigma_p$. Then, in \textsection \ref{sec_tate}, we define a specific tower of spectra $\{ X^{tG}[n] \}_n$ with homotopy inverse limit equivalent to the Tate construction $X^{tG} = [\widetilde{EG} \wedge F(EG_+, X)]^G$ on a $G$-spectrum $X$. We consider the associated (co-)homological Tate spectral sequences, and compare our approach of working with homology groups to earlier papers that focused directly on homotopy groups. In \textsection \ref{sec_singer}, we specialize to the case when $X = B^{\wedge p}$. This is also where we discuss the genuinely $C_p$-equivariant model of the spectrum $B^{\wedge p}$, given by the $p$-fold smash product of (symmetric) spectra introduced by B{\"o}kstedt \cite{B1} in his definition of topological Hochschild homology. It is for this particular $C_p$-equivariant model that we can define the natural stable map $\epsilon_B \: B \to R_+(B)$ realizing Singer's homomorphism $\epsilon$. \subsection{Notation} Spectra will usually be named $B$, $X$ or $Y$. Here $B$ will be a bounded below spectrum or $S$-algebra of finite type over~$\mathbb{F}_p$. Spectra denoted by $X$ will be equipped with an equivariant structure. 
The main examples we have in mind are the $p$-fold smash product $X = B^{\wedge p}$ treated here, and the topological Hochschild homology spectrum $X = \THH(B)$ treated in the sequel~\cite{LR2}. When dealing with generic towers of spectra, we will use $Y$. The example of main interest is the Tate construction $Y = X^{tG}$ on some $G$-equivariant spectrum $X$. We write $\mathscr{A}$ for the mod~$p$ Steenrod algebra and $\mathscr{A}_*$ for its $\mathbb{F}_p$-linear dual. We will work with left modules over $\mathscr{A}$ and left comodules under $\mathscr{A}_*$. In the body of the paper we write $H_*(B) = H_*(B; \mathbb{F}_p)$ and $H^*(B) = H^*(B; \mathbb{F}_p)$, for brevity. Unlabeled $\Hom$ means $\Hom_{\mathbb{F}_p}$, and $\otimes$ means $\otimes_{\mathbb{F}_p}$. \subsection{History and notation of the Singer construction} The Singer construction appeared originally for $p=2$ in \cite{S1980} and \cite{S1981}, and for $p$ odd in \cite{LS1982}. The work presented here concentrates on its relation to the calculations of Lin and Gunawardena and their work on the Segal conjecture for groups of prime order. A published account for the case of the group of order $2$ is found in \cite{LDMA1980}. A further study appears in \cite{AGM1985}, where a more conceptual definition of the Singer construction is given. In W.~Singer's paper \cite{S1980}, the following problem is posed: Let $M$ be an unstable $\mathscr{A}$-module and let $\Delta = \mathbb{F}_2\{\Sq^r \mid r\in \mathbb{Z}\}$. There is a map of graded $\mathbb{F}_2$-vector spaces $$ d\:\Delta\otimes M\to M $$ taking $\Sq^r\otimes m$ to $\Sq^r(m)$ for $r\ge0$, and to $0$ for $r<0$. Does there exist a natural $\mathscr{A}$-module structure on the source of $d$ rendering this map an $\mathscr{A}$-linear homomorphism? Singer answers this question affirmatively, by using an idea of Wilkerson \cite{W1977} to construct the $\mathscr{A}$-module that he denotes $R_+(M)$, an $\mathscr{A}$-module map $d \: R_+(M) \to M$ of degree~$1$, and an isomorphism $R_+(M) \cong \Delta \otimes M$, also of degree~$1$, that makes the two maps called $d$ correspond. In the end, the construction does not depend on $M$ being unstable. In Li and Singer's paper \cite{LS1982}, the odd-primary version of this problem is solved, with $\Delta = \mathbb{F}_p\{ \beta^i \SP^r \mid i \in \{0,1\}, r \in \mathbb{Z}\}$. Starting with that paper there is a degree shift in the notation: $R_+(M)$ now denotes the suspension of $R_+(M)$ from Singer's original paper, so that the $\mathscr{A}$-module map $d \: R_+(M) \to M$ is of degree~$0$. In connection with the Segal conjecture, Adams, Gunawardena and Miller \cite{AGM1985} published an algebraic account of the Singer construction, for all primes $p$. They write $T'(M)$ for the $\mathscr{A}$-module denoted $R_+(M)$ in \cite{S1980} and \cite{S1981}, and let $T''(M) = \Sigma T'(M)$ be its suspension, denoted $R_+(M)$ in \cite{LS1982}. For the trivial $\mathscr{A}$-module $\mathbb{F}_p$, $T''(\mathbb{F}_p)$ is isomorphic to the Tate homology $\widehat{H}_{-*}(\Sigma_p; \mathbb{F}_p)$, which can be obtained by localization from $\Sigma H^*(\Sigma_p; \mathbb{F}_p)$. Hence $T'(\mathbb{F}_p) = \widehat{H}^*(\Sigma_p; \mathbb{F}_p)$ is a localized form of $H^*(\Sigma_p; \mathbb{F}_p)$. Adams, Gunawardena and Miller are not only concerned with the Segal conjecture for the groups of prime order, but also for the elementary abelian groups $(C_p)^n$, so the cross product in cohomology is important for them. They therefore prefer $T'$ over $T''$.
In fact, they really work with an extended functor $T(M) = T(\mathbb{F}_p) \otimes_{T'(\mathbb{F}_p)} T'(M)$, where $T(\mathbb{F}_p) = \widehat{H}^*(C_p; \mathbb{F}_p)$. In our context, for the cohomological study of towers of ring spectra it will be the coproduct in Tate homology that is most important, which is why we prefer $T''$ over $T'$. Again, our emphasis is on $C_p$ instead of $\Sigma_p$, so that the Singer-type functor we shall work with is the extension $$ \widehat{H}_{-*}(C_p; \mathbb{F}_p) \otimes_{T''(\mathbb{F}_p)} T''(M) $$ of $T''(M)$, which is $(p-1)$ times larger than the $T''(M) = R_+(M)$ of Li and Singer. This is the functor we shall denote $R_+(M)$, so that $R_+(\mathbb{F}_p) = \widehat{H}_{-*}(C_p; \mathbb{F}_p)$, and $R_+(M) = \Sigma T(M)$ in the notation of \cite{AGM1985}. The connection between the Singer construction and the continuous cohomology of a tower of spectra, displayed below as \eqref{equ_inverseextendedpowers} for $G=\Sigma_p$, was found by Haynes Miller, and explained in \cite{BMMS}*{II.5.1}. There the functor denoted $R_+$ is the same as in \cite{LS1982} for odd $p$, shifted up one degree from \cite{S1981} for $p=2$. In the following we will make use of the fact that the Singer construction on an $\mathscr{A}$-module $M$ comes equipped with the homomorphism of $\mathscr{A}$-modules $\epsilon\: R_+(M)\to M$. In Singer's work \cite{S1981}, this map has degree~$+1$ (and was named $d$), whereas in \cite{LS1982} and \cite{BMMS} it has degree~$0$. We choose to follow the latter conventions, because this is the functor that describes our continuous (co-)homology groups with no shift of degrees. Furthermore, the homomorphism $\epsilon$ will be realized by a map of spectra, and will therefore be of degree zero. We choose to write $R_+(M)$ instead of $T(M)$ or any of its variants, because the letter $T$ is heavily overloaded by the presence of $\THH$, the Tate construction and the circle group $\mathbb{T}$. To add to the confusion, the letter $T$ is also used in Singer's work \cite{S1981}, but with a different meaning than the $T$ appearing in \cite{AGM1985}. \subsection{Acknowledgments} This work started out as a part of the first author's PhD thesis at the University of Oslo, supervised by the second author. The first author wishes to express his deepest gratitude to Prof.~Rognes for offering ideas, help and support during the work on his thesis. Thanks are also due to R.~Bruner, G.~Carlsson, I.~C.~Borge, S.~Galatius, T.~Kro, M.~B{\"o}kstedt, M.~Brun and B.~I.~Dundas for interest, comments and discussions. \section{Limits of spectra} \label{sec_limitsofspectra} We introduce our conventions regarding towers of spectra and their associated (co-)homology groups. Our motivation is the result of Caruso, May and Priddy, saying that there is an inverse limit of Adams spectral sequences arising from such towers. The input for this inverse limit of Adams spectral sequences gives us the definition of the continuous (co-)homology groups. \subsection{Inverse limits of Adams spectral sequences} \begin{dfn} Let $R$ be a (Noetherian) ring. A graded $R$-module $M_*$ is \emph{bounded below} if there is an integer $\ell$ such that $M_* = 0$ for all $* < \ell$. It is of \emph{finite type} if it is finitely generated over $R$ in each degree. A spectrum $B$ is \emph{bounded below} if its homotopy $\pi_*(B)$ is bounded below as a graded abelian group.
It is of \emph{finite type over~$\mathbb{F}_p$} if its mod~$p$ homology $H_*(B) = H_*(B; \mathbb{F}_p)$ is of finite type as a graded $\mathbb{F}_p$-vector space. The spectrum $B$ is of \emph{finite type over $\widehat\mathbb{Z}_p$} if its homotopy $\pi_*(B)$ is of finite type as a graded $\widehat\mathbb{Z}_p$-module. \end{dfn} Let $\{Y_n\}_{n\in\mathbb{Z}}$ be a sequence of spectra, with maps $f_n \: Y_{n-1} \to Y_n$ for all integers~$n$. Assume that each $Y_n$ is bounded below and of finite type over~$\mathbb{F}_p$, and let $Y$ be the homotopy inverse limit of this system: \begin{equation} \label{equ_tower} Y \longrightarrow \cdots \longrightarrow Y_{n-1} \overset{f_n}\longrightarrow Y_n \longrightarrow \cdots \end{equation} In general, $Y$ will neither be bounded below nor of finite type over~$\mathbb{F}_p$. For each $n$ there is an Adams spectral sequence $\{E^{*,*}_r(Y_n)\}_r$ with $E_2$-term $$ E_2^{s,t}(Y_n) = \Ext_{\mathscr{A}}^{s,t}(H^*(Y_n), \mathbb{F}_p) \Longrightarrow \pi_{t-s}((Y_n)\sphat_p) \,, $$ converging strongly to the homotopy groups of the $p$-completion of $Y_n$. Each map in the tower~\eqref{equ_tower} induces a map of Adams spectral sequences $f_n \: \{E^{*,*}_r(Y_{n-1})\}_r \to \{E^{*,*}_r(Y_n)\}_r$. For every $r$ let $$ E^{*,*}_r(\underline{Y}) = \lim_{n\to-\infty} E^{*,*}_r(Y_n) \,, $$ and similarly for the $d_r$-differentials. We now state and prove a slightly sharper version of \cite{CMP1987}*{7.1}. \begin{prop} \label{prop_CMPss} Let $\{Y_n\}_n$ be a tower of spectra such that each $Y_n$ is bounded below and of finite type over~$\mathbb{F}_p$, and let $Y = \holim_n Y_n$. Then the bigraded groups $\{E^{*,*}_r(\underline{Y})\}_r$ are the terms of a spectral sequence, with $E_2$-term $$ E^{s,t}_2(\underline{Y}) \cong \Ext_{\mathscr{A}}^{s,t}(\colim_{n\to-\infty} H^*(Y_n), \mathbb{F}_p) \Longrightarrow \pi_{t-s}(Y\sphat_p) \,, $$ converging strongly to the homotopy groups of the $p$-completion of $Y$. \end{prop} The difference between this statement and the statement in \cite{CMP1987} lies in the hypothesis on the $Y_n$: we do not assume that $Y_n$ is $p$-complete, and weaken the condition that $Y_n$ should be of finite type over $\widehat\mathbb{Z}_p$ to the condition that $H_*(Y_n; \mathbb{F}_p)$ should be of finite type. We refer to the spectral sequence $\{E^{*,*}_r(\underline{Y})\}_r$ as the \emph{inverse limit of Adams spectral sequences} associated to the tower $\{Y_n\}_n$. \begin{proof} For any bounded below spectrum $B$ of finite type over~$\mathbb{F}_p$, the Adams spectral sequence converges strongly to $\pi_*(B\sphat_p)$. Since the $E_2$- and $E_\infty$-terms of this spectral sequence are of finite type, the abelian groups $\pi_*(B\sphat_p)$ are compact and Hausdorff in the topology given by the Adams filtration. The category of compact Hausdorff abelian groups is an abelian category, as is the category of discrete abelian groups. The Pontryagin duality functor assigns to each abelian group $G$ its character group $\Hom(G, \mathbb{T})$, where $\mathbb{T} = S^1$ is the circle group. It induces a contravariant equivalence between the category of compact Hausdorff abelian groups and the category of discrete abelian groups. The functor taking a filtered diagram of discrete abelian groups to its colimit is well-known to be exact. It follows that the functor taking a filtered diagram of compact Hausdorff abelian groups to its inverse limit is also an exact functor. 
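A standard example may help to fix ideas: the Pontryagin dual of the discrete group $\mathbb{Z}/p^\infty = \colim_n \mathbb{Z}/p^n$ is the compact group of $p$-adic integers, since
$$ \Hom(\colim_n \mathbb{Z}/p^n, \mathbb{T}) \cong \lim_n \Hom(\mathbb{Z}/p^n, \mathbb{T}) \cong \lim_n \mathbb{Z}/p^n = \widehat\mathbb{Z}_p \,, $$
so duality converts the (exact) formation of filtered colimits of discrete groups into the formation of filtered limits of compact Hausdorff groups.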
In particular, passing to filtered inverse limits commutes with the formation of kernels, images, cokernels and homology, in the abelian category of compact Hausdorff abelian groups. We now adapt the proof of \cite{CMP1987}*{7.1}, using this version of the exactness of the inverse limit functor. First, we construct a double tower diagram of spectra \begin{equation} \label{equ_towAdamsres} \xymatrix{ \dots \ar[r] & Z_s \ar[r] \ar[d] & \dots \ar[r] & Z_0 = Y \ar[d] \\ & \vdots \ar[d] & & \vdots \ar[d] \\ \dots \ar[r] & Z_{n-1,s} \ar[r] \ar[d] & \dots \ar[r] & Z_{n-1,0} = Y_{n-1} \ar[d]^{f_n} \\ \dots \ar[r] & Z_{n,s} \ar[r] & \dots \ar[r] & Z_{n,0} = Y_n } \end{equation} where the $n$-th row is an Adams resolution of $Y_n$, such that each $Z_{n,s}$ is a bounded below spectrum of finite type over~$\mathbb{F}_p$. The $n$-th row can be obtained in a functorial way by smashing $Y_n$ with a fixed Adams resolution for the sphere spectrum $S$. The top row consists of the homotopy limits $Z_s = \holim_n Z_{n,s}$ for all $s\ge0$. By construction, the homotopy cofiber $K_{n,s}$ of the map $Z_{n,s+1} \to Z_{n,s}$ is a wedge sum of suspended copies of the Eilenberg--Mac\,Lane spectrum $H = H\mathbb{F}_p$, also bounded below and of finite type over~$\mathbb{F}_p$. The exact couple $$ \xymatrix{ \pi_*(Z_{n,s+1}) \ar[r]^i & \pi_*(Z_{n,s}) \ar[d]^j \\ & \pi_*(K_{n,s}) \ar@{-->}[ul]^k } $$ generates the Adams spectral sequence $\{E_r^{*,*}(Y_n)\}_r$, with $E_1^{s,t}(Y_n) = \pi_{t-s}(K_{n,s})$. The dashed arrow has degree~$-1$. Now consider the $p$-completion of diagram~\eqref{equ_towAdamsres}. The $n$-th row becomes \begin{equation} \label{equ_nthrowp} \dots \to (Z_{n,s})\sphat_p \to \dots \to (Z_{n,0})\sphat_p = (Y_n)\sphat_p \end{equation} and the homotopy cofiber of $(Z_{n,s+1})\sphat_p \to (Z_{n,s})\sphat_p$ is the $p$-completion of $K_{n,s}$, which is just $K_{n,s}$ again. There is therefore a second exact couple \begin{equation} \label{equ_excplnp} \xymatrix{ \pi_*((Z_{n,s+1})\sphat_p) \ar[r]^i & \pi_*((Z_{n,s})\sphat_p) \ar[d]^j \\ & \pi_*(K_{n,s}) \ar@{-->}[ul]^k } \end{equation} for each $n$, which generates the same spectral sequence as the first one. Furthermore, in the second exact couple all the (abelian) homotopy groups are compact Hausdorff, since the spectra $Z_{n,s}$ and $K_{n,s}$ are all bounded below and of finite type over~$\mathbb{F}_p$. The $p$-completion of the top row in~\eqref{equ_towAdamsres} is the homotopy inverse limit over~$n$ of the $p$-completed rows~\eqref{equ_nthrowp}. The exactness of filtered limits in the category of compact Hausdorff abelian groups now implies that there are isomorphisms $\pi_*((Z_s)\sphat_p) \cong \lim_n \pi_*((Z_{n,s})\sphat_p)$ for all $s$. Furthermore, the inverse limit over $n$ of the exact couples~\eqref{equ_excplnp} defines a third exact couple (of compact Hausdorff abelian groups) \begin{equation} \label{equ_excpllim} \xymatrix{ \pi_*((Z_{s+1})\sphat_p) \ar[r]^i & \pi_*((Z_s)\sphat_p) \ar[d]^j \\ & \displaystyle{\lim_n} \, \pi_*(K_{n,s}) \ar@{-->}[ul]^k \rlap{\,,} } \end{equation} which generates the spectral sequence $\{E_r^{*,*}(\underline{Y})\}_r$ that we are after. Here $E_1^{s,t}(\underline{Y}) \cong \lim_n E_1^{s,t}(Y_n)$, and by induction on $r$ the same isomorphism holds for each $E_r$-term, since $E_{r+1}^{*,*}$ is the homology of $E_r^{*,*}$ with respect to the $d_r$-differentials, and we have seen that the formation of these limits commutes with homology. In particular, each abelian group $E_r^{s,t}(\underline{Y})$ is compact Hausdorff.
The identification of the $E_2$-term for $s=0$ amounts to the isomorphism $$ \lim_n \Hom_{\mathscr{A}}(H^*(Y_n), N) \cong \Hom_{\mathscr{A}}(\colim_n H^*(Y_n), N) $$ for $N = \Sigma^t \mathbb{F}_p$. The general case follows, since we can compute $\Ext^s_{\mathscr{A}}$ by means of an injective resolution of $\mathbb{F}_p$. We must now check the convergence of this spectral sequence, which we (and \cite{CMP1987}) do following Boardman \cite{B1999}. The Adams resolution for each $Y_n$ is constructed so that $$ \lim_s \pi_*((Z_{n,s})\sphat_p) = \Rlim_s \pi_*((Z_{n,s})\sphat_p) = 0 \,. $$ These two conditions ensure that the Adams spectral sequence for $Y_n$ converges conditionally \cite{B1999}*{5.10}. The standard interchange of limits isomorphism gives $$ \lim_s \pi_*((Z_s)\sphat_p) \cong \lim_n \lim_s \pi_*((Z_{n,s})\sphat_p) = 0 \,. $$ Moreover, the exactness of the inverse limit functor in this case implies that also the derived limit vanishes: $$ \Rlim_s \pi_*((Z_s)\sphat_p) = 0 \,. $$ Hence the inverse limit Adams spectral sequence generated by~\eqref{equ_excpllim} is conditionally convergent to $\pi_*(Y\sphat_p)$. This is a half-plane spectral sequence with entering differentials, in the sense of Boardman. For such spectral sequences, strong convergence follows from conditional convergence together with the vanishing of the groups $$ RE^{s,t}_\infty = \Rlim_r E_r^{s,t}(\underline{Y}) \,, $$ see \cite{B1999}*{5.1, 7.1}. Again, the vanishing of this $\Rlim$ is ensured by the exactness of $\lim$ for the compact Hausdorff abelian groups $E_r^{s,t}(\underline{Y})$. \end{proof} \subsection{Continuous (co-)homology} \label{sec_duality} The spectral sequence in Proposition~\ref{prop_CMPss} is central to the proof of the Segal conjecture for groups of prime order and will be the foundation for the present work. Our work will, in analogy with Lin's proof of the Segal conjecture, focus on the properties of the $E_2$-term of the above spectral sequence. \begin{dfn} \label{dfn_continuous} Let $\{Y_n\}_n$ be a tower of spectra such that each $Y_n$ is bounded below and of finite type over~$\mathbb{F}_p$, and let $Y = \holim_n Y_n$. Define the \emph{continuous cohomology} of $Y$ as the colimit $$ H_c^*(Y) = \colim_{n\to-\infty} H^*(Y_n) \,. $$ Dually, define the \emph{continuous homology} of $Y$ as the inverse limit $$ H_*^c(Y) = \lim_{n\to-\infty} H_*(Y_n) \,. $$ \end{dfn} Note that we choose to suppress from the notation the tower of which $Y$ is a homotopy inverse limit, even if the continuous cohomology groups do depend on the choice of inverse system. For example, let $p=2$ and let $Y=S\sphat_2$ be the $2$-completed sphere spectrum. Since $Y$ is bounded below and of finite type over $\mathbb{F}_2$, we may express $Y$ as the homotopy limit of the constant tower of spectra. But by W.~H.~Lin's theorem, $S\sphat_2 \simeq \holim_n \Sigma \mathbb{R} P^\infty_n$, where each $\Sigma\mathbb{R} P^\infty_n$ is also bounded below and of finite type over $\mathbb{F}_2$. Now $\colim_n H^*(\Sigma \mathbb{R} P^\infty_n) = \Sigma P(x, x^{-1}) = R_+(\mathbb{F}_2)$ is much larger than $H^*(S\sphat_2) = \mathbb{F}_2$. By the universal coefficient theorem and our finite type assumptions, the $\mathbb{F}_p$-linear dual of $H^*_c(Y)$ is naturally isomorphic to $H_*^c(Y)$. The continuous homology of $Y$ will often not be of finite type, so its dual is in general not isomorphic to the continuous cohomology.
However, if we take into account the linear topology on the inverse limit, given by the kernel filtration induced from the tower, we do get that the continuous dual of the continuous homology is isomorphic to the continuous cohomology. We discuss this in \textsection \ref{subsec_dualization}. Note that the continuous cohomology is a direct limit of bounded below $\mathscr{A}$-modules. The direct limit might of course not be bounded below, but we do get a natural $\mathscr{A}$-module structure on $H^*_c(Y)$ in the category of all $\mathscr{A}$-modules. Dually, the continuous homology is an inverse limit of bounded below $\mathscr{A}_*$-comodules, but the inverse limit might be neither bounded below nor an $\mathscr{A}_*$-comodule in the usual, algebraic, sense. Instead we get a completed coaction of $\mathscr{A}_*$ $$ H_*^c(Y) \to \mathscr{A}_* \mathbin{\widehat{\otimes}} H_*^c(Y) \,, $$ where $\mathbin{\widehat{\otimes}}$ is the tensor product completed with respect to the above-mentioned linear topology on the continuous homology. We discuss this in \textsection \ref{subsec_limitsofcomodules}. \subsection{Filtrations} For every $n\in \mathbb{Z}$, let $A^n$ be a graded $\mathbb{F}_p$-vector space and assume that these vector spaces fit into a sequence \begin{equation} \label{equ_cohomtower} 0 \longrightarrow \cdots \longrightarrow A^n \longrightarrow A^{n-1} \longrightarrow \cdots \longrightarrow A^{-\infty} \end{equation} with trivial inverse limit, and colimit denoted by $A^{-\infty}$. We assume further that each $A^n$ is of finite type. Let $A_n = \Hom(A^n, \mathbb{F}_p)$ be the dual of $A^n$. The diagram above dualizes to a sequence \begin{equation} \label{equ_homtower} A_{-\infty} \longrightarrow \cdots \longrightarrow A_{n-1} \longrightarrow A_n \longrightarrow \cdots \longrightarrow 0 \end{equation} with inverse limit $$ A_{-\infty} = \lim_n A_n = \lim_n \Hom(A^n, \mathbb{F}_p) \cong \Hom(\colim_n A^n, \mathbb{F}_p) = \Hom(A^{-\infty}, \mathbb{F}_p) $$ isomorphic to the dual of $A^{-\infty}$, and trivial colimit. The last fact follows from the assumption that $A^n$ is finite dimensional in each degree. Indeed, $$ \lim_n A^n \cong \lim_n \Hom(A_n,\mathbb{F}_p) \cong \Hom(\colim_n A_n,\mathbb{F}_p) $$ and thus $\lim_n A^n=0$ implies that $\colim_n A_n$ is trivial, since the latter injects into its double dual. Furthermore, the derived inverse limits $\Rlim_n A^n$ and $\Rlim_n A_n$ are zero, again because $A^n$ and $A_n$ are degreewise finite. Adapting Boardman's notation \cite{B1999}*{5.4}, we define filtrations of the colimit of~\eqref{equ_cohomtower} and the inverse limit of \eqref{equ_homtower} using the corresponding sequential limit systems. \begin{dfn} \label{def_filtr111} For each $n \in \mathbb{Z}$, let $$ F^nA^{-\infty} = \im(A^{n} \to A^{-\infty}) $$ and $$ F_nA_{-\infty} = \ker(A_{-\infty} \to A_n) \,. $$ Then \begin{equation} \label{equ_cohomfiltration} \cdots \subset F^nA^{-\infty} \subset F^{n-1}A^{-\infty} \subset \cdots \subset A^{-\infty} \end{equation} and \begin{equation} \label{equ_homfiltration} \cdots\subset F_{n-1}A_{-\infty}\subset F_nA_{-\infty}\subset \cdots \subset A_{-\infty} \end{equation} define a decreasing (resp.~increasing) sequence of subspaces of $A^{-\infty}$ (resp.~$A_{-\infty}$). \end{dfn} The filtration~\eqref{equ_cohomfiltration} clearly exhausts $A^{-\infty}$. Since each $A^n$ and $\ker(A^n \to A^{-\infty})$ is of finite type, the right derived limits $\Rlim_n A^n$ and $\Rlim_n \ker(A^n \to A^{-\infty})$ are both zero. 
By assumption $\lim_n A^n = 0$, hence both $\lim_n F^nA^{-\infty}$ and $\Rlim_n F^nA^{-\infty}$ vanish. In other words, the filtration~\eqref{equ_cohomfiltration} is Hausdorff and complete, so that the canonical map $A^{-\infty} \to \lim_n (A^{-\infty}/F^nA^{-\infty})$ is an isomorphism. Completeness says that Cauchy sequences converge in the linear topology given by the filtration, while the Hausdorff property says that their limits are unique. For the filtration~\eqref{equ_homfiltration}, the proof of \cite{B1999}*{5.4(b)} shows, without any hypotheses, that the filtration is Hausdorff and complete. It also shows that the filtration is exhaustive, since the colimit of~\eqref{equ_homtower} is trivial. We collect these facts in the following lemma. \begin{lemma} \label{lem_filtration} Assume that the inverse limit $\lim_n A^n$ in~\eqref{equ_cohomtower} is trivial and that each $A^n$ is of finite type. Then both filtrations given in Definition~\ref{def_filtr111} (of $\colim_n A^n = A^{-\infty}$ resp.~its dual $A_{-\infty}$) are exhaustive, Hausdorff and complete. \end{lemma} \subsection{Dualization} \label{subsec_dualization} The dual of the inverse limit $A_{-\infty}$ of \eqref{equ_homtower} is the double dual of the colimit $A^{-\infty}$ of~\eqref{equ_cohomtower}. It contains this colimit in a canonical way, but is often strictly bigger, since $A^{-\infty}$ need not be of finite type. To remedy this, we take into account the linear topology on the limit induced by the inverse system, and dualize by considering the continuous $\mathbb{F}_p$-linear dual. In this topology on $A_{-\infty}$, an open neighborhood basis of the origin is given by the collection of subspaces $\{F_nA_{-\infty}\}_n$. A continuous homomorphism $A_{-\infty} \to \mathbb{F}_p$ is thus an $\mathbb{F}_p$-linear function whose kernel contains $F_nA_{-\infty}$ for some $n$. The set of these forms an $\mathbb{F}_p$-vector space $\Hom^c(A_{-\infty}, \mathbb{F}_p)$, which we call the \emph{continuous dual} of $A_{-\infty}$. \begin{lemma} \label{lem_contdual} There is a natural isomorphism $$ \Hom(A^{-\infty}, \mathbb{F}_p) \cong A_{-\infty}\,. $$ Give $A_{-\infty}$ the linear topology induced by the system of neighborhoods $\{F_nA_{-\infty}\}_n$. Then there is a natural isomorphism $$ \Hom^c(A_{-\infty}, \mathbb{F}_p) \cong A^{-\infty}\,. $$ \end{lemma} \begin{proof} The first isomorphism has already been explained. For the second, we wish to compute $$ \Hom^c(A_{-\infty}, \mathbb{F}_p) \cong \colim_n \Hom(A_{-\infty}/F_nA_{-\infty}, \mathbb{F}_p) \,. $$ The dual of the image $F^nA^{-\infty} = \im(A^n \to A^{-\infty})$ is the image \begin{equation} \label{equ_fna8dual} \Hom(F^nA^{-\infty}, \mathbb{F}_p) \cong \im(A_{-\infty} \to A_n) \cong A_{-\infty}/F_nA_{-\infty} \,, \end{equation} and $F^nA^{-\infty}$ is of finite type, so the canonical homomorphism $$ F^nA^{-\infty} \overset{\cong}{\to} \Hom(A_{-\infty}/F_nA_{-\infty}, \mathbb{F}_p) $$ into its double dual is an isomorphism. Passing to the colimit as $n \to -\infty$ we get the desired isomorphism, since $\colim_n F^nA^{-\infty} \cong A^{-\infty}$. \end{proof} \subsection{Limits of $\mathscr{A}_*$-comodules} \label{subsec_limitsofcomodules} Until now, the objects of our discussion have been graded vector spaces over~$\mathbb{F}_p$. We will now add more structure, and assume that \eqref{equ_cohomtower} is a diagram of modules over the Steenrod algebra $\mathscr{A}$.
It follows that the finite terms $A_n$ in the dual tower \eqref{equ_homtower} are comodules under the dual Steenrod algebra~$\mathscr{A}_*$. We need to discuss in what sense these comodule structures carry over to the inverse limit $A_{-\infty}$. Let $M_*$ be a graded vector space, with a linear topology given by a system $\{U_\alpha\}_\alpha$ of open neighborhoods of the origin, with each $U_\alpha$ a graded subspace of $M_*$. We say that $M_*$ is \emph{complete Hausdorff} if the canonical homomorphism $M_* \to \lim_\alpha (M_*/U_\alpha)$ is an isomorphism. Let $V_*$ be a graded vector space, bounded below and given the discrete topology. By the \emph{completed tensor product} $V_* \mathbin{\widehat{\otimes}} M_*$ we mean the limit $\lim_\alpha (V_* \otimes (M_*/U_\alpha))$, with the linear topology given by the kernels of the surjections $V_* \mathbin{\widehat{\otimes}} M_* \to V_* \otimes (M_*/U_\alpha)$. The completed tensor product is complete Hausdorff by construction. Given a second graded vector space $W_*$, discrete and bounded below, there is a canonical isomorphism $(V_* \otimes W_*) \mathbin{\widehat{\otimes}} M_* \cong V_* \mathbin{\widehat{\otimes}} (W_* \mathbin{\widehat{\otimes}} M_*)$. \begin{dfn} \label{dfn_completecomodule} Let $M_*$ be a complete Hausdorff graded $\mathbb{F}_p$-vector space. We say that $M_*$ is a \emph{complete $\mathscr{A}_*$-comodule} if there is a continuous graded homomorphism $\nu \: M_* \to \mathscr{A}_* \mathbin{\widehat{\otimes}} M_*$ such that the diagrams \[ \xymatrix{ M_* \ar[r]^-\nu \ar[dr]_{\cong} & \mathscr{A}_* \mathbin{\widehat{\otimes}} M_* \ar[d]^{\epsilon\mathbin{\widehat{\otimes}}1} \\ & \mathbb{F}_p \mathbin{\widehat{\otimes}} M_* } \] and \begin{equation} \label{equ_contcomod} \xymatrix{ M_* \ar[r]^-\nu \ar[d]_\nu & \mathscr{A}_* \mathbin{\widehat{\otimes}} M_* \ar[dr]^{\psi\mathbin{\widehat{\otimes}} 1} \\ \mathscr{A}_* \mathbin{\widehat{\otimes}} M_* \ar[r]^-{1\mathbin{\widehat{\otimes}} \nu} & \mathscr{A}_* \mathbin{\widehat{\otimes}} (\mathscr{A}_* \mathbin{\widehat{\otimes}} M_*) \ar[r]^\cong & (\mathscr{A}_* \otimes \mathscr{A}_*) \mathbin{\widehat{\otimes}} M_* } \end{equation} commute. Here $\epsilon \: \mathscr{A}_* \to \mathbb{F}_p$ and $\psi \: \mathscr{A}_* \to \mathscr{A}_* \otimes \mathscr{A}_*$ denote the counit and coproduct in the dual Steenrod algebra, respectively. Let $N_*$ be another complete $\mathscr{A}_*$-comodule and let $f \: N_* \to M_*$ be a continuous graded homomorphism. Then $f \in \Hom_{\mathscr{A}_*}^c(N_*, M_*)$ if the diagram $$ \xymatrix{ N_* \ar[r]^-\nu \ar[d]_f & \mathscr{A}_* \mathbin{\widehat{\otimes}} N_* \ar[d]^{1 \mathbin{\widehat{\otimes}} f} \\ M_* \ar[r]^-\nu & \mathscr{A}_* \mathbin{\widehat{\otimes}} M_* } $$ commutes. Hence there is an equalizer diagram \begin{equation} \label{equ_homaequalizer} \xymatrix{ \Hom_{\mathscr{A}_*}^c(N_*, M_*) \to \Hom^c(N_*, M_*) \ar@<0.6ex>[rr]^-{f\mapsto (1\mathbin{\widehat{\otimes}} f) \circ \nu} \ar@<-0.6ex>[rr]_-{f\mapsto \nu\circ f} && \Hom^c(N_*, \mathscr{A}_* \mathbin{\widehat{\otimes}} M_*)\,. } \end{equation} \end{dfn} \begin{lemma} Suppose given a sequence of graded $\mathbb{F}_p$-vector spaces, as in \eqref{equ_cohomtower}, with each $A^n$ bounded below and of finite type. Suppose also that $A^n$ is an $\mathscr{A}$-module and that $A^n \to A^{n-1}$ is $\mathscr{A}$-linear, for each finite $n$.
Then, with notation as above, $A^{-\infty}$ is an $\mathscr{A}$-module, and the topological $\mathbb{F}_p$-vector space $A_{-\infty}$ is a complete $\mathscr{A}_*$-comodule. \end{lemma} \begin{proof} The category of $\mathscr{A}$-modules is closed under direct limits, so the first claim of the lemma is immediate. For each $n$ we get a commutative diagram \[ \xymatrix{ \mathscr{A} \otimes A^n \ar@{->>}[r] \ar[d]_{\lambda^n} & \mathscr{A} \otimes F^nA^{-\infty} \ar@{ >->}[r] \ar[d] & \mathscr{A} \otimes A^{-\infty} \ar[d]^\lambda \\ A^n \ar@{->>}[r] & F^nA^{-\infty} \ar@{ >->}[r] & A^{-\infty} \rlap{\,,} } \] where the vertical arrows are the $\mathscr{A}$-module action maps. For every finite $n$, the dual of the $\mathscr{A}$-module action map $\lambda^n \: \mathscr{A} \otimes A^n \to A^n$ defines an $\mathscr{A}_*$-comodule coaction map $\nu_n \: A_n = \Hom(A^n, \mathbb{F}_p) \to \Hom(\mathscr{A} \otimes A^n, \mathbb{F}_p) \cong \Hom(\mathscr{A}, \mathbb{F}_p) \otimes \Hom(A^n, \mathbb{F}_p) = \mathscr{A}_* \otimes A_n$, where the middle isomorphism uses that $\mathscr{A}$ and $A^n$ are bounded below and of finite type over~$\mathbb{F}_p$. Similarly, the dual of the diagram above gives a commutative diagram \[ \xymatrix{ \mathscr{A}_* \otimes A_n & \mathscr{A}_* \otimes A_{-\infty}/F_nA_{-\infty} \ar@{ >->}[l] & \Hom(\mathscr{A} \otimes A^{-\infty}, \mathbb{F}_p) \ar@{->>}[l] \\ A_n \ar[u]^{\nu_n} & A_{-\infty}/F_nA_{-\infty} \ar@{ >->}[l] \ar[u] & A_{-\infty} \ar@{->>}[l] \ar[u] \rlap{\,,} } \] where we use the identification from~\eqref{equ_fna8dual}. Passing to limits over $n$, we get the diagram \[ \xymatrix{ \displaystyle{\lim_n} \, (\mathscr{A}_* \otimes A_n) & \mathscr{A}_* \mathbin{\widehat{\otimes}} A_{-\infty} \ar[l]_-\cong & \Hom(\mathscr{A} \otimes A^{-\infty}, \mathbb{F}_p) \ar[l]_-\cong \\ \displaystyle{\lim_n} \, A_n \ar[u]^{\lim_n \nu_n} & A_{-\infty} \ar[l]_-\cong \ar[u]_\nu & A_{-\infty} \ar[l]_-{=} \ar[u]_{\Hom(\lambda, \mathbb{F}_p)} \rlap{\,.} } \] The middle vertical coaction map $\nu$ is continuous, as it is realized as the inverse limit of a homomorphism of towers. It is clear that the upper left hand horizontal map is injective, but we claim that it is also surjective. To see this, let $Z_n$ be the cokernel of $A_{-\infty}/F_nA_{-\infty} \hookrightarrow A_n$. We know from Lemma~\ref{lem_filtration} that $\lim_n (A_{-\infty}/F_nA_{-\infty}) \cong \lim_n A_n$ and $\Rlim_n (A_{-\infty}/F_nA_{-\infty}) = 0$, so $\lim_n Z_n = 0$. This implies that $\lim_n (\mathscr{A}_* \otimes Z_n) = 0$, since there are natural injective maps $\mathscr{A}_* \otimes Z_n \hookrightarrow \Hom(\mathscr{A}, Z_n)$, and $\lim_n (\mathscr{A}_* \otimes Z_n) \hookrightarrow \lim_n \Hom(\mathscr{A}, Z_n) \cong \Hom(\mathscr{A}, \lim_n Z_n) = 0$. Now $\mathscr{A}_* \otimes Z_n$ is the cokernel of $\mathscr{A}_* \otimes A_{-\infty}/F_nA_{-\infty} \hookrightarrow \mathscr{A}_* \otimes A_n$, hence in the limit $\mathscr{A}_* \mathbin{\widehat{\otimes}} A_{-\infty} \to \lim_n (\mathscr{A}_* \otimes A_n)$ is surjective. The commutativity of the diagrams in Definition~\ref{dfn_completecomodule} is immediate since they are obtained as the inverse limits of the corresponding diagrams involving $A_n$ and $\nu_n$. Thus $A_{-\infty}$ is a complete $\mathscr{A}_*$-comodule. \end{proof} \begin{cor} \label{cor_HcY} Let $\{Y_n\}_n$ be a tower of spectra as in~\eqref{equ_tower}, each bounded below and of finite type over $\mathbb{F}_p$, with homotopy limit $Y$. 
Then the continuous cohomology $H_c^*(Y) = \colim_n H^*(Y_n)$ is an $\mathscr{A}$-module, the continuous homology $H^c_*(Y) = \lim_n H_*(Y_n)$ is a complete $\mathscr{A}_*$-comodule, and there are natural isomorphisms $\Hom(H_c^*(Y), \mathbb{F}_p) \cong H^c_*(Y)$ and $\Hom^c(H^c_*(Y), \mathbb{F}_p) \cong H_c^*(Y)$, in the respective categories. \qed \end{cor} \section{The algebraic Singer constructions} \label{sec_algsinger} Classically, the algebraic Singer construction is an endofunctor on the category of modules over the Steenrod algebra. In \textsection \ref{subsec_cohomalgsinger} we recall its definition, and a key property proved by Adams, Gunawardena and Miller. We then dualize the construction in \textsection \ref{subsec_homalgsinger}. Later, we will see how the algebraic Singer construction arises in its cohomological (resp.~homological) form as the continuous cohomology (resp.~continuous homology) of a certain tower of truncated Tate spectra. This tower of spectra induces a natural filtration on the Singer construction. We introduce this filtration in purely algebraic terms in the present section, and will show in \textsection \ref{subsec_tatessforsinger} that the algebraic and topological definitions agree. \subsection{The cohomological Singer construction} \label{subsec_cohomalgsinger} \begin{dfn} \label{dfn_rplus} Let $M$ be an $\mathscr{A}$-module. The \emph{Singer construction} $R_+(M)$ on $M$ is a graded $\mathscr{A}$-module given additively by the formulas $$ \Sigma^{-1} R_+(M) = P(x,x^{-1}) \otimes M $$ for $p=2$, and $$ \Sigma^{-1} R_+(M) = E(x) \otimes P(y, y^{-1}) \otimes M $$ for $p$ odd. Here $\deg(x)=1$, $\deg(y)=2$, and $\Sigma^{-1}$ denotes desuspension by one degree. The action of the Steenrod algebra is given, for $r \in \mathbb{Z}$ and $a \in M$, by the formula \begin{equation} \label{equ_singerops} \Sq^s(x^r \otimes a) = \sum_j \binom{r-j}{s-2j} \, x^{r+s-j} \otimes \Sq^j(a) \end{equation} for $p=2$, and the formulas \begin{align*} \label{equ_oddsingeroperations} \SP^s(y^r \otimes a) &= \sum_j \binom{r-(p-1)j}{s-pj} \, y^{r+(p-1)(s-j)} \otimes \SP^j(a) \\ &\qquad + \sum_j \binom{r-(p-1)j-1}{s-pj-1} \, x y^{r+(p-1)(s-j)-1} \otimes \beta\SP^j(a) \\ \SP^s(x y^{r-1} \otimes a) &= \sum_j \binom{r-(p-1)j-1}{s-pj} \, x y^{r+(p-1)(s-j)-1} \otimes \SP^j(a) \end{align*} and \begin{align*} \beta(y^r \otimes a) &= 0 \\ \beta(x y^{r-1} \otimes a) &= y^r \otimes a \end{align*} for $p$ odd. \end{dfn} This is the form of the Singer construction that is related to the cyclic group $C_p$. The cohomology of the classifying space of this group is $H^*(BC_p) \cong E(x) \otimes P(y)$ for $p$ odd, with $\deg(x)=1$, $\deg(y)=2$ and $\beta(x) = y$, as above. The natural $\mathscr{A}$-module structure on $H^*(BC_p)$ extends to the localization $H^*(BC_p)[y^{-1}] = E(x) \otimes P(y, y^{-1})$, and letting $M = \mathbb{F}_p$ we get that $\Sigma^{-1} R_+(\mathbb{F}_p)$ is isomorphic to $H^*(BC_p)[y^{-1}]$ as an $\mathscr{A}$-module. The case $p=2$ is similar. For instance, for $M = \mathbb{F}_2$ concentrated in degree zero, formula \eqref{equ_singerops} reduces to $\Sq^s(x^r) = \binom{r}{s} \, x^{r+s}$, the familiar action on $P(x, x^{-1}) = H^*(BC_2)[x^{-1}]$. When $p$ is odd there is a second form of the Singer construction, related to the symmetric group $\Sigma_p$. Following \cite{LS1982}*{p.~272} we identify $H^*(B\Sigma_p)$ with the subalgebra $E(u) \otimes P(v)$ of $H^*(BC_p)$ generated by $u = -x y^{p-2}$ and $v = - y^{p-1}$, with $\deg(u) = 2p-3$ and $\deg(v) = 2p-2$. The smaller form of the Singer construction then corresponds to the direct summand $E(u) \otimes P(v, v^{-1}) \otimes M$ of index~$(p-1)$ in $E(x) \otimes P(y, y^{-1}) \otimes M$.
Explicit formulas for the action of the Steenrod operations on the smaller form of the Singer construction are given in \cite{S1981}*{(3.2)}, \cite{LS1982}*{\textsection 2} and \cite{BMMS}*{p.~47}. In our work, we are only concerned with the version of the Singer construction related to the group $C_p$. The exact form of the formulas in Definition~\ref{dfn_rplus} is justified by Theorem~\ref{thm_TopSinger} below. \subsubsection{The cohomological $\epsilon$-map} \label{subsec_algepsilon} An important property of $R_+(M)$ is that there exists a natural homomorphism $\epsilon \: R_+(M) \to M$ of $\mathscr{A}$-modules. In Singer's original definition for $p=2$, the map is given by the formula \begin{equation} \label{equ_epsilon_even_formula} \epsilon(\Sigma x^{r-1} \otimes a) = \begin{cases} \Sq^r(a) & \text{for $r\ge0$,} \\ 0 & \text{for $r<0$.} \end{cases} \end{equation} For $p$ odd, the $\mathscr{A}$-submodule spanned by elements of the form $\Sigma y^{(p-1)r} \otimes a$ or $\Sigma x y^{(p-1)r-1} \otimes a$ is a direct summand in $R_+(M)$. The homomorphism $\epsilon$ is given by first projecting onto this direct summand and then composing with the map \begin{equation} \label{equ_epsilon_odd_formula} \begin{split} \Sigma y^{(p-1)r} \otimes a &\mapsto -(-1)^r \beta \SP^r(a) \\ \Sigma x y^{(p-1)r-1} \otimes a &\mapsto (-1)^r \SP^r(a) \end{split} \end{equation} for $r\ge0$, still mapping to $0$ for $r<0$. See \cite{BMMS}*{p.~50}. It is clear that $\epsilon$ is surjective. We recall the key property of $\epsilon$. Adams, Gunawardena and Miller \cite{AGM1985} make the following definition. \begin{dfn} An $\mathscr{A}$-module homomorphism $L \to M$ is a \emph{$\Tor$-equivalence} if the induced map \begin{equation} \Tor^{\mathscr{A}}_{*,*}(\mathbb{F}_p, L) \to \Tor^{\mathscr{A}}_{*,*}(\mathbb{F}_p, M) \end{equation} is an isomorphism. \end{dfn} The relevance of this condition is: \begin{prop}[\cite{AGM1985}*{1.2}] \label{prop_extiso} If $L \to M$ is a $\Tor$-equivalence, then for any $\mathscr{A}$-module~$N$ that is bounded below and of finite type the induced map \begin{equation} \Ext_\mathscr{A}^{*,*}(M, N)\to \Ext_{\mathscr{A}}^{*,*}(L, N) \end{equation} is an isomorphism. \end{prop} Here is their key result, proved in \cite{AGM1985}*{1.3}. \begin{thm}[Gunawardena, Miller] \label{thm_extiso} The Singer homomorphism $\epsilon \: R_+(M) \to M$ is a $\Tor$-equivalence. \end{thm} We will later encounter instances of $\mathscr{A}$-module homomorphisms $R_+(M) \to M$ induced by maps of spectra. It is often possible to determine such homomorphisms by the following corollary. \begin{cor} \label{cor_uniquehom} Let $M$, $N$ be $\mathscr{A}$-modules such that $N$ is bounded below and of finite type. Then $$ \epsilon^* \: \Hom_{\mathscr{A}}(M, N) \to \Hom_{\mathscr{A}}(R_+(M), N) $$ is an isomorphism, so any $\mathscr{A}$-linear homomorphism $f \: R_+(M) \to N$ factors as $g \circ \epsilon$ for a unique $\mathscr{A}$-linear homomorphism $g \: M \to N$: $$ \xymatrix{ R_+(M) \ar[rr]^-\epsilon \ar[dr]_f && M \ar[dl]^g \\ & N } $$ \end{cor} \begin{proof} This is clear from Theorem~\ref{thm_extiso} and Proposition~\ref{prop_extiso}. \end{proof} \begin{remark} A special case of this occurs when $M = N$ is a cyclic $\mathscr{A}$-module. Then $$ \mathbb{F}_p \cong \Hom_{\mathscr{A}}(M, M) \cong \Hom_{\mathscr{A}}(R_+(M), M) \,, $$ so any $\mathscr{A}$-linear homomorphism $R_+(M) \to M$ is equal to a scalar multiple of $\epsilon$.
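For instance, $\mathbb{F}_p$ itself is cyclic as an $\mathscr{A}$-module, so any $\mathscr{A}$-linear homomorphism $R_+(\mathbb{F}_p) \to \mathbb{F}_p$ is a scalar multiple of $\epsilon$. For $p=2$ this is the situation arising in Lin's theorem, cf.~\textsection \ref{sec_duality}.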
\end{remark} \subsection{The homological Singer construction} \label{subsec_homalgsinger} Before we define the homological version of the Singer construction on an $\mathscr{A}_*$-comodule $M_*$, we need to discuss a natural filtration on the cohomological Singer construction. For a bounded below $\mathscr{A}$-module $M$ of finite type over $\mathbb{F}_p$, let $$ F^nR_+(M) = \mathbb{F}_2\{ \Sigma x^r \otimes a \mid r \in \mathbb{Z}, \ \deg(a) = q, \ 1+r-q \ge n\} $$ for $p=2$, and $$ F^nR_+(M) = \mathbb{F}_p\{\Sigma x^i y^r \otimes a \mid i \in \{0,1\}, \ r \in \mathbb{Z}, \ \deg(a) = q, \ 1+i+2r-(p-1)q \ge n\} $$ for $p$ odd. In each case $a$ runs through an $\mathbb{F}_p$-basis for $M$. For example, for $M = \mathbb{F}_2$ this gives $F^nR_+(\mathbb{F}_2) = \mathbb{F}_2\{\Sigma x^r \mid r \ge n-1\} \cong H^*(\Sigma \mathbb{R} P^\infty_{n-1})$, cf.~the example in \textsection \ref{sec_duality}. Then \begin{equation} \label{equ_rplusfiltration} \cdots \subset F^nR_+(M)\subset F^{n-1}R_+(M)\subset \cdots \subset R_+(M) \end{equation} is an exhaustive filtration of $R_+(M)$, which is clearly Hausdorff. Because $M$ is bounded below and of finite type, each $F^nR_+(M)$ is bounded below and of finite type, so $\Rlim_n F^nR_+(M)$ is trivial. Hence the filtration is complete. For reasons made clear in Corollary~\ref{cor_comparefiltrations}, we will refer to this filtration as the \emph{Tate filtration}. When $M$ is the cohomology of a bounded below spectrum of finite type over~$\mathbb{F}_p$, we will see how \eqref{equ_rplusfiltration} is induced from topology. In this case, it will be immediate that the filtration is one of $\mathscr{A}$-modules. For a general $\mathscr{A}$-module $M$, this can be checked directly using the explicit formulas in Definition~\ref{dfn_rplus}. For instance, for $p=2$ the summand $\Sigma x^{r+s-j} \otimes \Sq^j(a)$ of $\Sq^s(\Sigma x^r \otimes a)$ in \eqref{equ_singerops} has Tate filtration $(1+r-q)+(s-2j)$, and the coefficient $\binom{r-j}{s-2j}$ vanishes for $s-2j<0$, so the action of $\Sq^s$ can only increase the filtration and therefore preserves $F^nR_+(M)$. We are now in the situation discussed in the previous section, with $A^n = F^nR_+(M)$ and $A^{-\infty} = R_+(M)$. Letting $F^nR_+(M)_* = \Hom(F^nR_+(M), \mathbb{F}_p) = A_n$ we get an inverse system \begin{equation} \label{equ_singhomfilt} \cdots \to F^{n-1}R_+(M)_* \to F^nR_+(M)_* \to \cdots \end{equation} as in~\eqref{equ_homtower}, dual to the direct system \eqref{equ_rplusfiltration}. We are interested in the inverse limit $A_{-\infty} = \lim_n A_n$, with the linear topology given by this tower of surjections. Recall Definition~\ref{dfn_completecomodule} of a complete $\mathscr{A}_*$-comodule. \begin{dfn} \label{dfn_homologysinger} Let $M_*$ be a bounded below $\mathscr{A}_*$-comodule of finite type. Its dual $M = \Hom(M_*, \mathbb{F}_p)$ is a bounded below $\mathscr{A}$-module of finite type, and $M_* \cong \Hom(M, \mathbb{F}_p)$. We define the \emph{homological Singer construction} on $M_*$ to be the complete $\mathscr{A}_*$-comodule given by $$ R_+(M_*) = \Hom(R_+(M), \mathbb{F}_p) \,. $$ It is isomorphic to the inverse limit $\lim_n F^nR_+(M)_*$. \end{dfn} A more explicit description can be given. For $p=2$ the $\mathbb{F}_2$-linear dual of \[ \widehat{H}_{-*}(C_2; \mathbb{F}_2) \cong \Sigma H^*(C_2; \mathbb{F}_2)[x^{-1}] = \Sigma P(x, x^{-1}) \] is isomorphic to the ring of Laurent polynomials $\widehat{H}^{-*}(C_2; \mathbb{F}_2) = P(u, u^{-1})$, where $\deg(u)=-1$ and $u^{-r}$ is dual to $\Sigma x^{r-1}$. For $p$ odd, the $\mathbb{F}_p$-linear dual of \[ \widehat{H}_{-*}(C_p; \mathbb{F}_p) \cong \Sigma H^*(C_p; \mathbb{F}_p)[y^{-1}] = \Sigma E(x) \otimes P(y, y^{-1}) \] is isomorphic to $\widehat{H}^{-*}(C_p; \mathbb{F}_p) = E(u) \otimes P(t, t^{-1})$, where $\deg(u)=-1$, $\deg(t)=-2$ and $u^{1-i} t^{-r}$ is dual to $\Sigma x^i y^{r-1}$. These notations are compatible with those from \cite{BM1994}.
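As a consistency check in the simplest case, let $p=2$ and $M = \mathbb{F}_2$, so that $M_* = \mathbb{F}_2$. Then $R_+(\mathbb{F}_2) = \Sigma P(x, x^{-1})$ is one-dimensional in each degree, so no completion phenomena occur, and Definition~\ref{dfn_homologysinger} gives $$ R_+(M_*) = \Hom(\Sigma P(x, x^{-1}), \mathbb{F}_2) \cong P(u, u^{-1}) = \widehat{H}^{-*}(C_2; \mathbb{F}_2) \,, $$ with $u^{-r}$ dual to $\Sigma x^{r-1}$, as above.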
We get the following identifications: \[ F^nR_+(M)_* \cong \mathbb{F}_2\{ u^r \otimes \alpha \mid r \in \mathbb{Z}, \ \deg(\alpha) = q, \ r+q \le -n\} \] for $p=2$, and $$ F^nR_+(M)_* \cong \mathbb{F}_p\{ u^i t^r\otimes \alpha \mid i \in \{0,1\}, \ r \in \mathbb{Z}, \ \deg(\alpha) = q, \ i+2r+(p-1)q \le -n\} $$ for $p$ odd. In each case $\alpha$ ranges over an $\mathbb{F}_p$-basis for $M_*$. The maps of~\eqref{equ_singhomfilt} are given by the obvious projections. Thus, $R_+(M_*)$ is isomorphic to the graded vector space of formal series \[ \sum_{r=-\infty}^\infty u^r \otimes \alpha_r \] for $p=2$, and \[ \sum_{r=-\infty}^\infty t^r \otimes \alpha_{0,r} + \sum_{r=-\infty}^\infty u t^r \otimes \alpha_{1,r} \] for $p$ odd. In each of these sums $r$ is bounded below, but not above, since $M_*$ is bounded below. Using the linear topology on $R_+(M_*)$ given by the kernel filtration coming from~\eqref{equ_singhomfilt}, we may reformulate this as follows: Let $$ \Lambda = \widehat{H}^{-*}(C_p; \mathbb{F}_p) = \begin{cases} P(u, u^{-1}) & \text{for $p=2$,} \\ E(u) \otimes P(t,t^{-1}) & \text{for $p$ odd.} \end{cases} $$ Consider $\Lambda\otimes M_*\subset R_+(M_*)$. For every $n$ the composition $\Lambda\otimes M_* \subset R_+(M_*) \to F^nR_+(M)_*$ is surjective, so the completed tensor product $\Lambda\mathbin{\widehat{\otimes}} M_*$ (for the linear topology on $\Lambda$ derived from the grading) is canonically isomorphic to $R_+(M_*)$. \subsubsection{The homological $\epsilon_*$-map} Let $$ \epsilon_* \: M_* \to R_+(M_*) $$ be the dual of $\epsilon\: R_+(M)\to M$. Then $\epsilon_*$ is a continuous homomorphism of complete $\mathscr{A}_*$-comodules. Continuity is trivially satisfied since the source of $\epsilon_*$ has the discrete topology. Dualizing~\eqref{equ_epsilon_even_formula} and \eqref{equ_epsilon_odd_formula}, we see that $\epsilon_*$ is given by the formulas \begin{equation} \label{equ_dualSingerEvaluation_even} \epsilon_*(\alpha) = \sum_{r=0}^\infty u^{-r} \otimes \Sq_*^r(\alpha) \end{equation} for $p=2$, and \begin{equation} \label{equ_dualSingerEvaluation_odd} \epsilon_*(\alpha) = \sum_{r=0}^\infty (-1)^r t^{-(p-1)r} \otimes \SP^r_*(\alpha) - \sum_{r=0}^\infty (-1)^r u t^{-(p-1)r-1} \otimes (\beta\SP^r)_*(\alpha) \end{equation} for $p$ odd. This expression may be compared with \cite{AGM1985}*{(3.6)}. It is clear that $\epsilon_*$ is injective. \begin{lemma} \label{lem_homiso} Let $M$ and $N$ be bounded below $\mathscr{A}$-modules of finite type, and let $M_*$ and $N_*$ be the dual $\mathscr{A}_*$-comodules. Then $$ \epsilon_*\: \Hom_{\mathscr{A}_*}(N_*, M_*) \to \Hom^c_{\mathscr{A}_*}(N_*, R_+(M_*)) $$ is an isomorphism, so any continuous $\mathscr{A}_*$-comodule homomorphism $f_* \: N_* \to R_+(M_*)$ factors as $f_* = \epsilon_* \circ g_*$ for a unique $\mathscr{A}_*$-comodule homomorphism $g_* \: N_* \to M_*$. \end{lemma} \begin{proof} Notice that $\Hom_{\mathscr{A}_*}(N_*, M_*) = \Hom^c_{\mathscr{A}_*}(N_*, M_*)$ and $\mathscr{A}_* \otimes N_* = \mathscr{A}_* \mathbin{\widehat{\otimes}} N_*$, since $M_*$ and $N_*$ are discrete. 
Applying $\Hom(-, \mathbb{F}_p)$ to a commutative square \[ \xymatrix{ \mathscr{A} \otimes R_+(M) \ar[r]^-\lambda \ar[d]_{1 \otimes f} & R_+(M) \ar[d]^f \\ \mathscr{A} \otimes N \ar[r]^-\lambda & N } \] we get a commutative square \[ \xymatrix{ \mathscr{A}_* \mathbin{\widehat{\otimes}} R_+(M_*) & R_+(M_*) \ar[l]_-\nu \\ \mathscr{A}_* \otimes N_* \ar[u]^{1\mathbin{\widehat{\otimes}} f_*} & N_* \ar[l]_-\nu \ar[u]_{f_*} } \] of continuous homomorphisms, where $R_+(M_*)$ and $\mathscr{A}_* \mathbin{\widehat{\otimes}} R_+(M_*)$ have the limit topologies, while $N_*$ and $\mathscr{A}_* \otimes N_*$ are discrete. Applying $\Hom^c(-, \mathbb{F}_p)$ to the latter square we recover the first, by Lemma~\ref{lem_contdual}. Hence the right hand vertical map in the commutative square \[ \xymatrix{ \Hom_{\mathscr{A}}(M, N) \ar[r]^-{\epsilon^*}_-\cong \ar[d]_\cong & \Hom_{\mathscr{A}}(R_+(M), N) \ar[d]^\cong \\ \Hom_{\mathscr{A}_*}(N_*, M_*) \ar[r]^-{\epsilon_*} & \Hom_{\mathscr{A}_*}^c(N_*, R_+(M_*)) } \] is an isomorphism. It is easy to see that the left hand vertical map is an isomorphism, and the upper horizontal map is an isomorphism by Corollary~\ref{cor_uniquehom}. \end{proof} \subsubsection{Various remarks on the homological Singer construction} The following remarks are not necessary for our immediate applications, but we include them to shed some light on the coaction $\nu \: R_+(M_*) \to \mathscr{A}_* \mathbin{\widehat{\otimes}} R_+(M_*)$ and the dual Singer map $\epsilon_* \: M_* \to R_+(M_*)$, and their relations to the completions introduced so far. Dualizing~\eqref{equ_singerops}, we get that the dual Steenrod operations on classes $u^r \otimes \alpha$ in $\Lambda \otimes M_* \subset R_+(M_*)$ are given by \begin{equation} \label{equ_dualSinger} \Sq_*^s(u^r \otimes \alpha) = \sum_j \binom{-r-s-1}{s-2j} u^{r+s-j} \otimes \Sq_*^j(\alpha) \end{equation} for $p=2$, and similarly for $p$ odd. This sum is finite, since $M_*$ is assumed to be bounded below, so we have the following commutative diagram: \begin{equation} \label{diag_subtensor} \xymatrix{ R_+(M_*) \ar[r]^-\nu & \mathscr{A}_* \mathbin{\widehat{\otimes}} R_+(M_*) \\ \Lambda \otimes M_* \ar@{ >->}[u] \ar[r] & \mathscr{A}_* \mathbin{\widehat{\otimes}} (\Lambda \otimes M_*) \ar@{ >->}[u] } \end{equation} Two remarks are in order. First, unless $M_*$ is finite dimensional, $\Lambda \otimes M_*$ is not complete with respect to the subspace topology from $R_+(M_*)$. Hence $\Lambda \otimes M_*$ is then not a complete $\mathscr{A}_*$-comodule in the sense explained above. Second, there are elements $u^r \otimes \alpha$ in $\Lambda \otimes M_*$ with the property that $\Sq^s_*(u^r \otimes \alpha)$ is nonzero for infinitely many $s$, and similarly for $p$ odd. For example, $\Sq^s_*(u^{-1} \otimes \alpha)$ contains the term \[ \binom{-s}{s} u^{s-1} \otimes \alpha = \binom{2s-1}{s} u^{s-1} \otimes \alpha \] for $j=0$, according to~\eqref{equ_dualSinger}. This equals $u^{s-1} \otimes \alpha$ whenever $s = 2^e$ is a power of~$2$, so $\nu(u^{-1} \otimes \alpha)$ is an infinite sum. Hence $\Lambda \otimes M_*$ is not an algebraic $\mathscr{A}_*$-comodule, either. We will now identify the image of the homological version of the Singer map $$ \epsilon_* \: M_* \to R_+(M_*) $$ with the maximal algebraic $\mathscr{A}_*$-comodule contained in $R_+(M_*)$.
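To see what is being asserted, note that by \eqref{equ_dualSingerEvaluation_even} a typical element in the image of $\epsilon_*$ is the formal series $$ \epsilon_*(\alpha) = 1 \otimes \alpha + u^{-1} \otimes \Sq^1_*(\alpha) + u^{-2} \otimes \Sq^2_*(\alpha) + \cdots $$ (in the notation for $p=2$), which usually lies outside $\Lambda \otimes M_*$. Nonetheless, its coaction $\nu(\epsilon_*(\alpha)) = (1 \mathbin{\widehat{\otimes}} \epsilon_*)(\nu(\alpha))$ is a finite sum, since $M_*$ is an algebraic $\mathscr{A}_*$-comodule. The result below states that, conversely, every element of $R_+(M_*)$ whose coaction is a finite sum arises in this way.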
\begin{dfn} Given a complete $\mathscr{A}_*$-comodule $N_*$, let $N_*^{\alg} \subseteq N_*$ be given by the pullback \[ \xymatrix{ N_*^{\alg} \ar[rr]^-{\nu|N_*^{\alg}} \ar@{ >->}[d] && \mathscr{A}_* \otimes N_* \ar@{ >->}[d] \\ N_* \ar[rr]^-{\nu} && \mathscr{A}_* \mathbin{\widehat{\otimes}} N_* } \] in graded $\mathbb{F}_p$-vector spaces. In other words, $N_*^{\alg}$ consists of the $\alpha \in N_*$ whose coaction $\nu(\alpha) = \sum_I \Sq^I_* \otimes \alpha_I$ (in the notation for $p=2$) is a finite sum, rather than a formal infinite sum. Here $I$ runs over the admissible sequences, so that $\{\Sq^I_*\}_I$ is a basis for $\mathscr{A}_*$, and $\alpha_I = \Sq^I_*(\alpha)$. \end{dfn} \begin{lemma} The restricted coaction map $\nu|N_*^{\alg}$ factors (uniquely) through the inclusion $\mathscr{A}_* \otimes N_*^{\alg} \subseteq \mathscr{A}_* \otimes N_*$, hence defines a map \[ \nu^{\alg} \: N_*^{\alg} \to \mathscr{A}_* \otimes N_*^{\alg} \] that makes $N_*^{\alg}$ an $\mathscr{A}_*$-comodule in the algebraic sense. \end{lemma} \begin{proof} The composite \[ \xymatrix{ N_*^{\alg} \ar[r]^-{\nu|N_*^{\alg}} & \mathscr{A}_* \otimes N_* \ar[r]^-{1 \otimes \nu} & \mathscr{A}_* \otimes (\mathscr{A}_* \mathbin{\widehat{\otimes}} N_*) } \] factors as \[ \xymatrix{ N_*^{\alg} \ar[r]^-{\nu|N_*^{\alg}} & \mathscr{A}_* \otimes N_* \ar[r]^-{\psi \otimes 1} & \mathscr{A}_* \otimes \mathscr{A}_* \otimes N_* \subseteq \mathscr{A}_* \otimes (\mathscr{A}_* \mathbin{\widehat{\otimes}} N_*) } \] by coassociativity~\eqref{equ_contcomod} of the complete coaction. Hence $\nu|N_*^{\alg}$ factors through the pullback $\mathscr{A}_* \otimes N_*^{\alg}$ in \[ \xymatrix{ \mathscr{A}_* \otimes N_*^{\alg} \ar[rr]^-{1 \otimes \nu|N_*^{\alg}} \ar@{ >->}[d] && \mathscr{A}_* \otimes \mathscr{A}_* \otimes N_* \ar@{ >->}[d] \\ \mathscr{A}_* \otimes N_* \ar[rr]^-{1 \otimes \nu} && \mathscr{A}_* \otimes (\mathscr{A}_* \mathbin{\widehat{\otimes}} N_*) \rlap{\,.} } \] Algebraic counitality and coassociativity of the lifted map $\nu^{\alg}$ follow from the corresponding complete properties of $\nu$. \end{proof} The following identification stems from a conversation with M.~B{\"o}kstedt. \begin{prop} The image of the injective homomorphism $\epsilon_* \: M_* \to R_+(M_*)$ equals the maximal algebraic sub $\mathscr{A}_*$-comodule $R_+(M_*)^{\alg} \subset R_+(M_*)$. \end{prop} \begin{proof} Let $L_*$ be any algebraic $\mathscr{A}_*$-comodule. Given any $\alpha \in L_*$, with coaction $\nu(\alpha) = \sum_I \Sq^I_* \otimes \alpha_I$, let $\langle \alpha \rangle \subseteq L_*$ be the graded vector subspace spanned by the $\alpha_I = \Sq^I_*(\alpha)$. Here we are using the notation appropriate for $p=2$; the case $p$ odd is completely similar. Since $\nu(\alpha)$ is a finite sum, $\langle \alpha \rangle$ is a finite dimensional subspace. Furthermore, it is a sub $\mathscr{A}_*$-comodule, since $\nu(\alpha_I) = \sum_J \Sq^J_* \otimes \Sq^J_*(\alpha_I)$ and $\Sq^J_*(\alpha_I) = (\Sq^I \Sq^J)_*(\alpha)$ is a finite sum of terms $\Sq^K_*(\alpha) = \alpha_K$. Now consider the case $L_* = R_+(M_*)^{\alg}$. It is clear that $\epsilon_*(M_*) \subseteq R_+(M_*)^{\alg}$, since $M_*$ is an algebraic $\mathscr{A}_*$-comodule and $\epsilon_*$ respects the coaction. Let $\alpha \in R_+(M_*)^{\alg}$ be any element, and consider the linear span \[ N_* = \epsilon_*(M_*) + \langle \alpha \rangle \subseteq R_+(M_*)^{\alg} \,. 
\] It is bounded below and of finite type, so by Lemma~\ref{lem_homiso} there is a unique lift $g_*$ \[ \xymatrix{ M_* \ar[rr]^-{\epsilon_*} & & R_+(M_*) \\ & N_* \ar[ul]^{g_*} \ar@{ >->}[ur]_{f_*} } \] of the inclusion $f_* \: N_* \to R_+(M_*)$. Hence $N_* \subseteq \epsilon_*(M_*)$, so in fact $\alpha \in \epsilon_*(M_*)$. \end{proof} \section{The Tate construction} \label{sec_tate} We recall the Tate construction of Greenlees, and its relation with homotopy orbit and homotopy fixed point spectra. We then show how it can be expressed as the homotopy inverse limit of bounded below spectra, in two equivalent ways. This lets us make sense of the continuous (co-)homology groups of the Tate construction. We then describe the homological Tate spectral sequences. There are two types, one converging to the continuous homology of the Tate construction and one converging to the continuous cohomology. The terms of these spectral sequences will be linearly dual to each other, but, as already noted in \textsection \ref{sec_duality}, their target groups will only be dual in a topologized sense. The main properties of these spectral sequences are summarized in Propositions~\ref{prop_cohomological}, \ref{prop_homological} and~\ref{prop_homological2}. \subsection{Equivariant spectra and various fixed point constructions} We review some notions from stable equivariant homotopy theory, in the framework of Lewis--May spectra \cite{LMS}. Let $G$ be a compact Lie group, quite possibly finite, and let $\mathscr{U}$ be a complete $G$-universe. We fix an identification $\mathscr{U}^G = \mathbb{R}^\infty$, and write $i \: \mathbb{R}^\infty \to \mathscr{U}$ for the inclusion. Let $G\S\mathscr{U}$ be the category of genuine $G$-spectra, and let $G\S\mathbb{R}^\infty$ be the category of naive $G$-spectra. Similarly, let $\S\mathbb{R}^\infty$ be the category of (non-equivariant) spectra. The restriction of universe functor $i^* \: G\S\mathscr{U} \to G\S\mathbb{R}^\infty$ has a left adjoint, the extension of universe functor $i_* \: G\S\mathbb{R}^\infty \to G\S\mathscr{U}$, see \cite{LMS}*{\textsection II.1}. The functor $\S\mathbb{R}^\infty \to G\S\mathbb{R}^\infty$, giving a spectrum the trivial $G$-action, has a left adjoint taking a naive $G$-spectrum $Y$ to the orbit spectrum $Y/G$, as well as a right adjoint taking $Y$ to the fixed point spectrum $Y^G$. For a genuine $G$-spectrum~$X$, the orbit spectrum $X/G = (i^* X)/G$ and fixed point spectrum $X^G = (i^* X)^G$ are defined by first restricting to the underlying naive $G$-spectra. Let $EG$ be a free, contractible $G$-CW complex. Let $c \: EG_+ \to S^0$ be the collapse map that sends $EG$ to the non-base point of $S^0$, and let $\widetilde{EG}$ be its mapping cone, so that we have a homotopy cofiber sequence \begin{equation} \label{equ_fundamentalsequence} EG_+ \overset{c}\to S^0 \to \widetilde{EG} \end{equation} of based $G$-CW complexes. The $n$-skeleton $\widetilde{EG}{}^{(n)}$ of $\widetilde{EG}$ is then the mapping cone of the restricted collapse map $EG^{(n-1)}_+ \to S^0$, for each $n\ge0$. We may and will assume that each skeleton $EG^{(n-1)}$ is a finite $G$-CW complex. \begin{dfn} For each naive $G$-spectrum $Y$ let $Y_{hG} = (EG_+ \wedge Y)/G$ be the \emph{homotopy orbit spectrum}, and let $Y^{hG} = F(EG_+, Y)^G$ be the \emph{homotopy fixed point} spectrum. For each genuine $G$-spectrum $X$ let $$ X_{hG} = (EG_+ \wedge i^* X)/G = (i^* X)_{hG} $$ and $$ X^{hG} = F(EG_+, X)^G = (i^* X)^{hG} $$ be defined by first restricting to the $G$-trivial universe. 
Furthermore, let $$ X^{tG} = [\widetilde{EG} \wedge F(EG_+, X)]^G $$ be the \emph{Tate construction} on $X$. This is the spectrum denoted $\widehat{\mathbb{H}}(G, X)$ by B{\"o}kstedt and Madsen \cite{BM1994} and $t_G(X)^G$ by Greenlees and May \cite{GM1995}. \end{dfn} The Segal conjecture is concerned with the map $\Gamma \: X^G \to X^{hG}$ induced by $F(c, 1) \: X \cong F(S^0, X) \to F(EG_+, X)$ by passing to fixed points. By smashing the cofiber sequence~\eqref{equ_fundamentalsequence} with $F(c, 1)$ and passing to $G$-fixed points, we can embed this map in the following diagram, consisting of two horizontal cofiber sequences: $$ \xymatrix{ [EG_+ \wedge X]^G \ar[r] \ar[d]_\simeq & X^G \ar[r] \ar[d]^{\Gamma} & [\widetilde{EG} \wedge X]^G \ar[d]^{\hat\Gamma} \\ [EG_+ \wedge F(EG_+, X)]^G \ar[r] & F(EG_+, X)^G \ar[r] & [\widetilde{EG} \wedge F(EG_+, X)]^G } $$ The adjunction counit $\epsilon \: i_* i^* X \to X$ and the map $F(c, 1)$ are both $G$-maps and non-equivariant equivalences. By the $G$-Whitehead theorem, both maps $$ 1\wedge\epsilon \: i_*(EG_+ \wedge i^*X) = EG_+ \wedge i_* i^* X \to EG_+ \wedge X $$ and $$ 1 \wedge F(c,1) \: EG_+ \wedge X \to EG_+ \wedge F(EG_+, X) $$ are genuine $G$-equivalences. Hence we have the equivalence indicated on the left. Furthermore, there is an Adams transfer equivalence \begin{equation} \label{equ_adamstransfer} \tilde\tau \: (\Sigma^{\ad G} EG_+ \wedge i^* X)/G \overset{\simeq}\longrightarrow [i_*(EG_+ \wedge i^* X)]^G \,, \end{equation} where $\ad G$ denotes the adjoint representation of $G$. See \cite{LMS}*{\textsection II.2} and \cite{GM1995}*{Part~I} for further details. In the cases of interest to us, when $G$ is discrete or abelian, the adjoint representation is trivial so that $\Sigma^{\ad G} = \Sigma^{\dim G}$. Hence we may rewrite the diagram above as the following \emph{norm--restriction} diagram \begin{equation} \label{equ_fund1} \xymatrix{ \Sigma^{\dim G} X_{hG} \ar[rr]^-N \ar[d]_{=} && X^G \ar[rr]^-R \ar[d]^{\Gamma} && [\widetilde{EG} \wedge X]^G \ar[d]^{\hat\Gamma} \\ \Sigma^{\dim G} X_{hG} \ar[rr]^-{N^h} && X^{hG} \ar[rr]^-{R^h} && X^{tG} } \end{equation} for any genuine $G$-spectrum $X$. We note that the adjunction counit $\epsilon \: i_* i^* X \to X$ induces equivalences $(i_* i^* X)_{hG} \simeq X_{hG}$ and $(i_* i^* X)^{hG} \simeq X^{hG}$, hence $(i_* i^* X)^{tG} \simeq X^{tG}$, so the Tate construction on $X$ only depends on the naive $G$-spectrum underlying $X$. The spectra in the lower row have been studied by means of spectral sequences converging to their homotopy groups, e.g.~in \cite{BM1994}, \cite{HM1997}, \cite{R1999}, \cite{AR2002} and \cite{HM2003}. These spectral sequences arise by choosing a filtration of $EG$, in the case of the homotopy orbit and fixed point spectra, and a filtration of $\widetilde{EG}$ introduced by Greenlees \cite{G1987}, in the case of the Tate spectrum $X^{tG}$. We shall instead be concerned with the spectral sequences that arise by applying homology in place of homotopy. \subsection{Tate cohomology and the Greenlees filtration of $\widetilde{EG}$} We recall the definition of the Tate cohomology groups from \cite{CE}*{XII.3}, and the associated Tate homology groups. Let $G$ be a finite group, let $\mathbb{F}_pG = \mathbb{F}_p[G]$ be its group algebra, and let $(P_*, d_*)$ be a \emph{complete resolution} of the trivial $\mathbb{F}_pG$-module $\mathbb{F}_p$ by free $\mathbb{F}_pG$-modules.
This is a commutative diagram $$ \xymatrix{ \cdots \ar[r] & P_1 \ar[r]^{d_1} & P_0 \ar[r]^{d_0} \ar@{->>}[d] & P_{-1} \ar[r]^{d_{-1}} & P_{-2} \ar[r] & \cdots \\ && \mathbb{F}_p \ar@{ >->}[ur] } $$ of $\mathbb{F}_pG$-modules, where the $P_n$'s are free and the horizontal sequence is exact. The image of $d_0$ is identified with $\mathbb{F}_p$, as indicated. \begin{dfn} Given an $\mathbb{F}_pG$-module $M$ the \emph{Tate cohomology} and \emph{Tate homology groups} are defined by $$ \widehat{H}^n(G; M) = H^n(\Hom_{\mathbb{F}_pG}(P_*, M)) $$ and $$ \widehat{H}_n(G; M) = H_n(P_* \otimes_{\mathbb{F}_pG} M) \,, $$ respectively, where $(P_*, d_*)$ is a complete $\mathbb{F}_pG$-resolution. (To form the balanced tensor product, we turn $P_*$ into a complex of right $\mathbb{F}_pG$-modules by means of the group inverse.) These groups are independent of the chosen complete $\mathbb{F}_pG$-resolution, and there are isomorphisms $$ \widehat{H}^n(G; M) \cong \widehat{H}_{-n-1}(G; M) $$ and $$ \Hom(\widehat{H}_n(G; M), \mathbb{F}_p) \cong \widehat{H}^n(G; \Hom(M, \mathbb{F}_p)) $$ for all integers $n$. Note that we do not follow the shifted grading convention for Tate homology given in \cite{GM1995}*{11.2}. \end{dfn} For example, for $G = C_p$ with generator $g$ and norm element $N = 1 + g + \dots + g^{p-1}$, the $2$-periodic sequence $$ \cdots \longrightarrow \mathbb{F}_pC_p \overset{g-1}\longrightarrow \mathbb{F}_pC_p \overset{N}\longrightarrow \mathbb{F}_pC_p \overset{g-1}\longrightarrow \mathbb{F}_pC_p \longrightarrow \cdots $$ (with $d_0 = N$) is a complete resolution of $\mathbb{F}_p$, and since both $g-1$ and $N$ induce the zero map on $\Hom_{\mathbb{F}_pC_p}(\mathbb{F}_pC_p, \mathbb{F}_p) \cong \mathbb{F}_p$, we get $\widehat{H}^n(C_p; \mathbb{F}_p) \cong \mathbb{F}_p$ for all integers $n$. The topological analogue of a complete resolution is a bi-infinite filtration of $\widetilde{EG}$, in the category of $G$-spectra, which was introduced by Greenlees \cite{G1987}. We recall the details of the construction. For brevity we shall not distinguish notationally between a based $G$-CW complex and its suspension $G$-CW spectrum. For integers $n\ge0$ we let $\tF{n} = \widetilde{EG}{}^{(n)}$ be (the suspension spectrum of) the $n$-skeleton of $\widetilde{EG}$, while $\tF{-n} = D(\tF{n}) = F(\widetilde{EG}{}^{(n)}, S)$ is its functional dual. These definitions agree for $n=0$, as $\tF{0} = S$ is the sphere spectrum. Splicing the skeleton filtration of $\widetilde{EG}$ with its functional dual, we get the finite terms in the following diagram \begin{equation} \label{equ_greenleesfiltration} D(\widetilde{EG}) \to \dots \to \tF{-1} \to \tF{0} = S \to \tF{1} \to \tF{2} \to \dots \to \widetilde{EG} \,, \end{equation} which we call the \emph{Greenlees filtration}. Both $\widetilde{EG} \simeq \hocolim_n \tF{n}$ and $D(\widetilde{EG}) \simeq \holim_n \tF{n}$ are non-equivariantly contractible. Applying homology to this filtration gives a spectral sequence with $E^1_{s,t}=H_{s+t}(\tF{s} / \tF{s-1})$ that converges to $H_*(\widetilde{EG}, D(\widetilde{EG})) = 0$. It is concentrated on the horizontal axis, since $\tF{n}/\tF{n-1}$ is a finite wedge sum of $G$-free $n$-sphere spectra $G_+ \wedge S^n$ for each integer $n$. Hence the spectral sequence collapses at the $E^2$-term, and we get a long exact sequence \begin{equation} \label{equ_geometricresolution} \xymatrix{ \dots \ar[r] & H_2(\tF{2}/\tF{1}) \ar[r]^-{d^1_{2,0}} & H_1(\tF{1}/\tF{0}) \ar[r]^-{d^1_{1,0}} \ar@{->>}[d] & H_0(\tF{0}/\tF{-1}) \ar[r] & \dots \\ && H_0(S) \ar@{ >->}[ru] } \end{equation} of finitely generated free $\mathbb{F}_pG$-modules. Letting $$ P_n = H_{n+1}(\tF{n+1} / \tF{n}) $$ and $d_n = d^1_{n+1,0}$ for all integers $n$ yields a complete resolution $(P_*, d_*)$ of $\mathbb{F}_p = H_0(S)$. \subsection{Continuous homology of the Tate construction} \label{sec_tatefiltration} Let $G$ be a finite group and let $X$ be a genuine $G$-spectrum. By means of the Greenlees filtration, we may filter the Tate construction $X^{tG}$ by a tower of spectra.
\begin{dfn} For each integer $n$ let $\widetilde{EG}/\tF{n-1}$ be the homotopy cofiber of the map $\tF{n-1} \to \widetilde{EG}$, and define \begin{align*} X^{tG}[-\infty,n{-}1] &= [\tF{n-1} \wedge F(EG_+, X)]^G \\ X^{tG}[n] = X^{tG}[n, \infty] &= [\widetilde{EG}/\tF{n-1} \wedge F(EG_+, X)]^G \,. \end{align*} \end{dfn} Smashing the cofiber sequence $\tF{n-1} \to \widetilde{EG} \to \widetilde{EG}/\tF{n-1}$ with $F(EG_+, X)$ and taking $G$-fixed points, we get a cofiber sequence $$ X^{tG}[-\infty,n{-}1] \to X^{tG} \to X^{tG}[n, \infty] $$ for each integer $n$. The maps $\tF{n-1} \to \tF{n}$ in the Greenlees filtration~\eqref{equ_greenleesfiltration} induce maps between these cofiber sequences, which combine to the ``finite $n$ parts'' of the following horizontal tower of vertical cofiber sequences: \begin{equation} \label{equ_tatetowers} \xymatrix{ {*} \ar[r] \ar[d] & \dots \ar[r] & X^{tG}[-\infty,n{-}1] \ar[r] \ar[d] & X^{tG}[-\infty,n] \ar[r] \ar[d] & \dots \ar[r] & X^{tG} \ar[d]^{=} \\ X^{tG} \ar[r]^{=} \ar[d]_{=} & \dots \ar[r]^{=} & X^{tG} \ar[r]^{=} \ar[d] & X^{tG} \ar[r]^{=} \ar[d] & \dots \ar[r]^{=} & X^{tG} \ar[d] \\ X^{tG} \ar[r] & \dots \ar[r] & X^{tG}[n,\infty] \ar[r] & X^{tG}[n{+}1,\infty] \ar[r] & \dots \ar[r] & {*} } \end{equation} \begin{lemma} \label{lem_twotowers} Let $X$ be a $G$-spectrum. Then $$ \holim_{n\to-\infty} X^{tG}[-\infty,n] \simeq * \qquad\text{and}\qquad \hocolim_{n\to\infty} X^{tG}[n,\infty] \simeq * $$ so $$ \holim_{n\to-\infty} X^{tG}[n,\infty] \simeq X^{tG} \qquad\text{and}\qquad \hocolim_{n\to\infty} X^{tG}[-\infty,n] \simeq X^{tG} \,. $$ \end{lemma} \begin{proof} For negative $n$, $\tF{n} = D(\tF{m})$ for $m=-n$, and there is a $G$-equivariant equivalence $\nu \: D(\tF{m}) \wedge Z \overset{\simeq}\longrightarrow F(\tF{m}, Z)$ for any $G$-spectrum $Z$, since the finite $G$-CW spectrum $\tF{m}$ is dualizable \cite{LMS}*{III.2.8}. Hence \begin{multline*} \holim_{n\to-\infty} X^{tG}[-\infty,n] = \holim_{n\to-\infty} \, [\tF{n} \wedge F(EG_+, X)]^G \\ = \holim_{m\to\infty} \, [D(\tF{m}) \wedge F(EG_+, X)]^G \simeq \holim_{m\to\infty} F(\tF{m} \wedge EG_+, X)^G \\ \cong F(\hocolim_{m\to\infty} \tF{m} \wedge EG_+, X)^G \simeq F(\widetilde{EG} \wedge EG_+, X)^G \,, \end{multline*} which is contractible because $\widetilde{EG} \wedge EG_+$ is $G$-equivariantly contractible. For the second claim we use that $\widetilde{EG}/\tF{n}$ is a free $G$-CW spectrum. Indeed, for $n\ge0$, $\widetilde{EG}/\tF{n} \simeq \Sigma (EG/EG^{(n-1)})$. Thus, by the $G$-Whitehead theorem and the Adams transfer equivalence~\eqref{equ_adamstransfer} we have \begin{multline*} \hocolim_{n\to\infty} X^{tG}[n{+}1,\infty] = \hocolim_{n\to\infty} \, [\widetilde{EG}/\tF{n} \wedge F(EG_+, X)]^G \\ \simeq \hocolim_{n\to\infty} \, [\widetilde{EG}/\tF{n} \wedge X]^G \simeq \hocolim_{n\to\infty} \, [\widetilde{EG}/\tF{n} \wedge i_* i^* X]^G \\ \simeq \hocolim_{n\to\infty} \, (\widetilde{EG}/\tF{n} \wedge i^* X)/G \cong (\hocolim_{n\to\infty} \widetilde{EG}/\tF{n} \wedge i^* X)/G \,, \end{multline*} which is contractible since $\hocolim_{n\to\infty} \, (\widetilde{EG}/\tF{n})$ is $G$-equivariantly contractible. The remaining claims follow, since the homotopy limit of a tower of fiber sequences is a fiber sequence, and the homotopy colimit of a sequence of cofiber sequences is a cofiber sequence. \end{proof} Hereafter we abbreviate $X^{tG}[n, \infty]$ to $X^{tG}[n]$. We will refer to the lower horizontal tower $\{X^{tG}[n]\}_n$ in~\eqref{equ_tatetowers} as the \emph{Tate tower}.
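To make the Tate tower concrete, let $p=2$, $G = C_2$ and $X = S$ with the trivial action, and take $EG = S^\infty$ with the antipodal action and its usual free $G$-CW structure, so that $EG^{(n-1)} = S^{n-1}$. Since $\widetilde{EG}/\tF{n-1}$ is $G$-free, the argument in the proof of Lemma~\ref{lem_twotowers} identifies $$ S^{tC_2}[n] \simeq (\widetilde{EG}/\tF{n-1} \wedge i^* S)/C_2 \simeq \Sigma\, (\mathbb{R} P^\infty/\mathbb{R} P^{n-2}) = \Sigma \mathbb{R} P^\infty_{n-1} $$ for $n\ge1$, the suspended stunted projective space with cells in dimensions $n-1$ and above, while for $n\le0$ one obtains stunted projective spectra with cells in negative dimensions. Up to reindexing, the Tate tower for $S^{tC_2}$ is thus the inverse system appearing in Lin's theorem, cf.~\textsection \ref{sec_duality}.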
The following two lemmas should be compared with the sequences~\eqref{equ_cohomtower} and~\eqref{equ_homtower}. \begin{lemma} \label{lem_otherlimitsvanish} Let $X$ be a $G$-spectrum. Then $$ \lim_{n\to\infty} H^*(X^{tG}[n]) = \Rlim_{n\to\infty} H^*(X^{tG}[n]) = \colim_{n\to\infty} H_*(X^{tG}[n]) = 0 \,. $$ \end{lemma} \begin{proof} In cohomology, we have a Milnor $\lim$-$\Rlim$ short exact sequence $$ 0 \to \Rlim_n H^{*-1}(X^{tG}[n]) \to H^*(\hocolim_n X^{tG}[n]) \to \lim_n H^*(X^{tG}[n]) \to 0 \,. $$ By Lemma~\ref{lem_twotowers} the middle term is zero, hence so are the other two terms. In homology, we have the isomorphism $$ \colim_{n\to\infty} H_*(X^{tG}[n]) \cong H_*(\hocolim_{n\to\infty} X^{tG}[n]) \,. $$ By the same lemma the right hand side is zero. \end{proof} \begin{lemma} Suppose that $X$ is bounded below and of finite type over $\mathbb{F}_p$. Then each spectrum $X^{tG}[n]$ is bounded below and of finite type over $\mathbb{F}_p$. Hence $$ \Rlim_{n\to-\infty} H_*(X^{tG}[n]) = 0 \,. $$ \end{lemma} \begin{proof} Let $X^{tG}[n,m] = [\tF{m}/\tF{n-1} \wedge F(EG_+, X)]^G$. For $m \ge n$ there is a cofiber sequence $$ X^{tG}[n,m{-}1] \to X^{tG}[n,m] \to \bigvee \Sigma^m X \,, $$ with one copy of $\Sigma^m X$ in the wedge sum for each of the finitely many $G$-free $m$-cells in $\widetilde{EG}$. Since the connectivity of $\Sigma^m X$ grows to infinity with $m$, the first claim of the lemma follows by induction on $m$ and passage to the colimit over $m$, using that $X^{tG}[n] \simeq \hocolim_m X^{tG}[n,m]$ by the same freeness argument as in the proof of Lemma~\ref{lem_twotowers}, so that the homology groups stabilize in each degree. The derived limit of any tower of finite groups is zero, which gives the second conclusion. \end{proof} We use the lower horizontal tower $\{X^{tG}[n]\}_n$ in~\eqref{equ_tatetowers} to define the continuous (co-)homology of $X^{tG}$, as in Definition~\ref{dfn_continuous} and Corollary~\ref{cor_HcY}. \begin{dfn} Let $G$ be a finite group and $X$ a $G$-spectrum whose underlying non-equivariant spectrum is bounded below and of finite type over $\mathbb{F}_p$. By the \emph{continuous cohomology} of $X^{tG}$ we mean the $\mathscr{A}$-module $$ H_c^*(X^{tG}) = \colim_{n\to-\infty} H^*(X^{tG}[n]) \,. $$ By the \emph{continuous homology} of $X^{tG}$ we mean the complete $\mathscr{A}_*$-comodule $$ H^c_*(X^{tG}) = \lim_{n\to-\infty} H_*(X^{tG}[n]) \,. $$ There is a natural isomorphism $\Hom(H_c^*(X^{tG}), \mathbb{F}_p) \cong H^c_*(X^{tG})$ of complete $\mathscr{A}_*$-comodules, as well as a natural isomorphism $\Hom^c(H^c_*(X^{tG}), \mathbb{F}_p) \cong H_c^*(X^{tG})$ of $\mathscr{A}$-modules. \end{dfn} \begin{remark} Different $G$-CW structures on $EG$ will give rise to different Greenlees filtrations and Tate towers, but by cellular approximation any two choices give pro-isomorphic towers in homology. The continuous homology, as a complete $\mathscr{A}_*$-comodule, is therefore independent of the choice. Likewise for continuous cohomology. \end{remark} We end this subsection by giving a reformulation of the Tate construction, known as Warwick duality \cite{G1994}*{\textsection 4}, which will be used in \textsection \ref{subsec_topsinger} when making a topological model for the Singer construction. \begin{prop}[\cite{GM1995}*{2.6}] \label{prop_tate2} There is a natural chain of equivalences $$ \Sigma F(\widetilde{EG}, EG_+ \wedge X)^G \simeq [\widetilde{EG} \wedge F(EG_+, X)]^G = X^{tG} \,.
$$ \end{prop} \begin{proof} We have a commutative diagram of $G$-spectra \begin{equation} \label{equ_abs} \xymatrix{ EG_+ \wedge X \ar[r]^-{F(c,1\wedge1)} & F(EG_+, EG_+\wedge X) \\ EG_+ \wedge EG_+ \wedge X \ar[u]^{c\wedge1\wedge1}_\simeq \ar[r] \ar[d]_{1\wedge F(c,c\wedge1)}^\simeq & F(EG_+, EG_+\wedge X) \ar[u]_{=} \ar[d]^{F(1,c\wedge1)}_\simeq \\ EG_+ \wedge F(EG_+, X) \ar[r]^-{c\wedge1} & F(EG_+,X) \,. } \end{equation} The maps labeled $\simeq$ are $G$-equivalences. The proposition follows by taking horizontal (homotopy) cofibers and fixed points. \end{proof} We now strengthen this to a statement about towers. \begin{lemma} \label{lem_tatecomparison} For each integer $m$ there is a natural chain of equivalences $$ \Sigma F(\tF{m}, EG_+ \wedge X)^G \simeq [\widetilde{EG}/\tF{-m} \wedge F(EG_+, X)]^G = X^{tG}[1{-}m] $$ connecting the tower $$ \Sigma F(\widetilde{EG}, EG_+ \wedge X)^G \to \dots \to \Sigma F(\tF{m+1}, EG_+ \wedge X)^G \to \Sigma F(\tF{m}, EG_+ \wedge X)^G \to \dots $$ to the Tate tower $$ X^{tG} \to \dots \to X^{tG}[-m] \to X^{tG}[1{-}m] \to \dots \,. $$ \end{lemma} \begin{proof} We give the proof for the more interesting case $m\ge0$, leaving the case $m<0$ to the reader. Let $i_m \: EG^{(m-1)}_+ \to EG_+$ be the inclusion, let $c_m \: EG^{(m-1)}_+ \to S^0$ be the restricted collapse map, and let $\delta_m \: EG^{(m-1)}_+ \to EG^{(m-1)}_+ \wedge EG_+$ be the diagonal. We have a commutative diagram of $G$-spectra \begin{equation} \label{equ_notsougly} \xymatrix{ F(EG_+, EG_+ \wedge X) \ar[rr]^-{F(i_m,1\wedge1)} && F(EG^{(m-1)}_+, EG_+ \wedge X) \\ F(EG_+, EG_+ \wedge X) \ar[u]^{=} \ar[rr]^-{F(c_m\wedge1,1\wedge1)} \ar[d]_{F(1,c\wedge1)}^\simeq && F(EG^{(m-1)}_+ \wedge EG_+, EG_+ \wedge X) \ar[u]_{F(\delta_m,1\wedge1)}^\simeq \ar[d]^{F(1\wedge1,c\wedge1)}_\simeq \\ F(EG_+, X) \ar[rr]^-{F(c_m,1)} && F(EG^{(m-1)}_+, F(EG_+, X)) \\ F(EG_+, X) \ar[rr]^-{Dc_m\wedge1} \ar[u]^{=} && D(EG^{(m-1)}_+) \wedge F(EG_+, X) \ar[u]_{\nu}^\simeq \,. } \end{equation} On the right hand side, the middle vertical map makes use of the identification $F(EG^{(m-1)}_+ \wedge EG_+, X) \cong F(EG^{(m-1)}_+, F(EG_+, X))$, while the lower vertical map $\nu$ is an equivalence because $EG^{(m-1)}_+$ is dualizable. The left hand side of~\eqref{equ_notsougly} matches the right hand side of~\eqref{equ_abs}. Combining these two diagrams, and taking horizontal (homotopy) cofibers, we get a chain of $G$-equivalences connecting the cofiber of $$ F(c_m,1\wedge1) \: EG_+ \wedge X \to F(EG^{(m-1)}_+, EG_+ \wedge X) $$ to the cofiber of $$ (Dc_m \circ c)\wedge1 \: EG_+ \wedge F(EG_+, X) \to D(EG^{(m-1)}_+) \wedge F(EG_+, X) \,. $$ The cofiber in the upper row is $G$-equivalent to $\Sigma F(\tF{m}, EG_+ \wedge X)$, since $\tF{m}$ is the mapping cone of $c_m$. The cofiber in the lower row is $G$-equivalent to $\widetilde{EG}/\tF{-m} \wedge F(EG_+, X)$, because of the following commutative diagram with horizontal and vertical cofiber sequences: $$ \xymatrix{ && D(\tF{m}) \ar[r]^{=} \ar[d] & \tF{-m} \ar[d] \\ EG_+ \ar[rr]^-c \ar[d]_{=} && S \ar[r] \ar[d]^{Dc_m} & \widetilde{EG} \ar[d] \\ EG_+ \ar[rr]^-{Dc_m \circ c} && D(EG^{(m-1)}_+) \ar[r] & \widetilde{EG}/\tF{-m} \\ } $$ The lemma follows by passage to $G$-fixed points. It is clear that these equivalences are compatible for varying $m\ge0$.
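As a consistency check, note that for $m=0$ we have $\tF{0} = S^0$ and $\widetilde{EG}/\tF{0} \simeq \Sigma EG_+$, so the asserted equivalence reads $$ \Sigma (EG_+ \wedge X)^G \simeq [\Sigma EG_+ \wedge F(EG_+, X)]^G = X^{tG}[1] \,; $$ indeed, both sides identify with $\Sigma \, (EG_+ \wedge i^* X)/G$ by the $G$-Whitehead theorem and the Adams transfer equivalence~\eqref{equ_adamstransfer}, as in the proof of Lemma~\ref{lem_twotowers}.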
\end{proof} \begin{cor} The continuous (co-)homology of $X^{tG}$ may be computed from the tower $$ X^{tG} \to \dots \to \Sigma F(\tF{m+1}, EG_+ \wedge X)^G \to \Sigma F(\tF{m}, EG_+ \wedge X)^G \to \dots $$ as $$ H_c^*(X^{tG}) \cong \colim_{m\to\infty} \Sigma H^*(F(\tF{m}, EG_+ \wedge X)^G) $$ and $$ H^c_*(X^{tG}) \cong \lim_{m\to\infty} \Sigma H_*(F(\tF{m}, EG_+ \wedge X)^G) \,. $$ \end{cor} \subsection{The (co-)homological Tate spectral sequences} Let $G$ be a finite group and $X$ a $G$-spectrum. The cofiber sequence $\tF{s}/\tF{s-1} \to \widetilde{EG}/\tF{s-1} \to \widetilde{EG}/\tF{s}$ induces a cofiber sequence $$ [\tF{s}/\tF{s-1} \wedge F(EG_+, X)]^G \to X^{tG}[s] \overset{i}\longrightarrow X^{tG}[s{+}1] $$ for every integer~$s$. The left hand term is equivalent to $$ [\tF{s}/\tF{s-1} \wedge X]^G \simeq (\tF{s}/\tF{s-1} \wedge i^* X)/G $$ since $\tF{s}/\tF{s-1}$ is $G$-free. Applying cohomology, we get an exact couple of $\mathscr{A}$-modules \begin{equation} \label{equ_cohomtatecouple} \xymatrix{ A^{s+1,*} \ar[r]^-i & A^{s,*} \ar[d] \\ & \widehat{E}_1^{s,*} \ar@{-->}[ul] } \end{equation} with $$ A^{s,t} = H^{s+t}(X^{tG}[s]) \qquad\text{and}\qquad \widehat{E}_1^{s,t} = H^{s+t}((\tF{s}/\tF{s-1} \wedge i^* X)/G) \,. $$ By Lemma~\ref{lem_otherlimitsvanish}, $\lim_s A^s = \Rlim_s A^s = 0$, so this spectral sequence converges conditionally to the colimit $H_c^*(X^{tG})$, in the first sense of \cite{B1999}*{5.10}. Applying homology instead, we get an exact couple of algebraic $\mathscr{A}_*$-comodules \begin{equation} \label{equ_homtatecouple} \xymatrix{ A_{s,*} \ar[r]^-i & A_{s+1,*} \ar@{-->}[dl] \\ \widehat{E}^1_{s,*} \ar[u] } \end{equation} with $$ A_{s,t} = H_{s+t}(X^{tG}[s]) \qquad\text{and}\qquad \widehat{E}^1_{s,t} = H_{s+t}((\tF{s}/\tF{s-1} \wedge i^* X)/G) \,. $$ By Lemma~\ref{lem_otherlimitsvanish}, $\colim_s A_s = 0$, so this spectral sequence converges conditionally to the limit $H^c_*(X^{tG})$, in the second sense of \cite{B1999}*{5.10}. We can rewrite the $\widehat{E}^1$-term as $$ \widehat{E}^1_{s,t} \cong H_s(\tF{s}/\tF{s-1}) \otimes_{\mathbb{F}_pG} H_t(X) = P_{s-1} \otimes_{\mathbb{F}_pG} H_t(X) \,, $$ and the $d^1$-differential is induced by the differential in the complete resolution $(P_*, d_*)$, so $$ \widehat{E}^2_{s,t} \cong \widehat{H}_{s-1}(G; H_t(X)) \cong \widehat{H}^{-s}(G; H_t(X)) \,. $$ Dually, the $\widehat{E}_1$-term is $$ \widehat{E}_1^{s,t} \cong \Hom(P_{s-1} \otimes_{\mathbb{F}_pG} H_t(X), \mathbb{F}_p) \cong \Hom_{\mathbb{F}_pG}(P_{s-1}, H^t(X)) $$ and $$ \widehat{E}_2^{s,t} \cong \widehat{H}^{s-1}(G; H^t(X)) \cong \widehat{H}_{-s}(G; H^t(X)) \,. $$ \begin{dfn} Let $G$ be a finite group and $X$ a $G$-spectrum. The \emph{cohomological Tate spectral sequence} of $X$ is the conditionally convergent spectral sequence $$ \widehat{E}_2^{s,t} = \widehat{H}_{-s}(G; H^t(X)) \Longrightarrow H_c^{s+t}(X^{tG}) $$ associated with the exact couple of $\mathscr{A}$-modules~\eqref{equ_cohomtatecouple}. Dually, the \emph{homological Tate spectral sequence} of $X$ is the conditionally convergent spectral sequence $$ \widehat{E}^2_{s,t} = \widehat{H}^{-s}(G; H_t(X)) \Longrightarrow H^c_{s+t}(X^{tG}) $$ associated with the exact couple of $\mathscr{A}_*$-comodules~\eqref{equ_homtatecouple}.
\end{dfn} \begin{remark} There is a natural isomorphism $\widehat{E}_r^{s,t} \cong \Hom(\widehat{E}^r_{s,t}, \mathbb{F}_p)$ for all finite $r$, $s$ and $t$, so that the $d_r$-differential $d_r^{s,t} \: \widehat{E}_r^{s,t} \to \widehat{E}_r^{s+r,t-r+1}$ is the linear dual of the $d^r$-differential $d^r_{s+r,t-r+1} \: \widehat{E}^r_{s+r,t-r+1} \to \widehat{E}^r_{s,t}$. In this sense the cohomological Tate spectral sequence is dual to the homological Tate spectral sequence. \end{remark} To get strong convergence, we need $X$ to be bounded below in the cohomological case, and bounded below and of finite type over $\mathbb{F}_p$ in the homological case. \begin{prop} \label{prop_cohomological} Let $G$ be a finite group and $X$ a $G$-spectrum. Assume that $X$ is bounded below. Then $X^{tG}$ is the homotopy inverse limit of a tower $\{X^{tG}[s]\}_s$ of bounded below spectra, and the cohomological Tate spectral sequence $$ \widehat{E}_2^{s,t}(X) = \widehat{H}_{-s}(G; H^t(X)) \Longrightarrow H^{s+t}_c(X^{tG}) $$ converges strongly to the continuous cohomology of $X^{tG}$ as an $\mathscr{A}$-module. \end{prop} \begin{proof} When $H^*(X)$ is bounded below, the cohomological Tate spectral sequence has exiting differentials in the sense of Boardman, so the spectral sequence is automatically strongly convergent by \cite{B1999}*{6.1}. In other words, the filtration of $H^{s+t}_c(X^{tG})$ by the sub $\mathscr{A}$-modules $$ F^s H^*_c(X^{tG}) = \im ( H^*(X^{tG}[s]) \to H^*_c(X^{tG}) ) $$ is exhaustive, complete and Hausdorff, and there are $\mathscr{A}$-module isomorphisms $$ F^s H^*_c(X^{tG}) / F^{s+1} H^*_c(X^{tG}) \cong \widehat{E}_\infty^{s,*} \,. $$ \end{proof} \begin{prop} \label{prop_homological} Let $G$ be a finite group and $X$ a $G$-spectrum. Assume that $X$ is bounded below and of finite type over~$\mathbb{F}_p$. Then $X^{tG}$ is the homotopy inverse limit of a tower $\{X^{tG}[s]\}_s$ of bounded below spectra of finite type over~$\mathbb{F}_p$, and the homological Tate spectral sequence $$ \widehat{E}^2_{s,t}(X) = \widehat{H}^{-s}(G; H_t(X)) \Longrightarrow H_{s+t}^c(X^{tG}) $$ converges strongly to the continuous homology of $X^{tG}$ as a complete $\mathscr{A}_*$-comodule. \end{prop} \begin{proof} When $H_*(X)$ is bounded below, the homological Tate spectral sequence has entering differentials in the sense of Boardman. The derived limit $R\widehat{E}^\infty_{*,*} = \Rlim_r \widehat{E}^r_{*,*}$ vanishes since $H_*(X)$, and thus $\widehat{E}^2_{*,*}$, is finite in each (bi-)degree. Hence the spectral sequence is strongly convergent by \cite{B1999}*{7.4}. In other words, the filtration of $H^c_{s+t}(X^{tG})$ by the complete sub $\mathscr{A}_*$-comodules $$ F_s H^c_*(X^{tG}) = \ker ( H^c_*(X^{tG}) \to H^c_*(X^{tG}[s]) ) $$ is exhaustive, complete and Hausdorff, and there are algebraic $\mathscr{A}_*$-comodule isomorphisms $$ F_{s+1} H^c_*(X^{tG}) / F_s H^c_*(X^{tG}) \cong \widehat{E}^\infty_{s,*} \,. $$ \end{proof} \subsubsection{Homotopy vs.~Homology} We used the lower tower in~\eqref{equ_tatetowers}: \begin{equation} \label{equ_inversetower} \xymatrix{ X^{tG} \ar[r] & \dots \ar[r] & X^{tG}[n] \ar[r] & X^{tG}[n{+}1] \ar[r] & \dots \ar[r] & {*} } \end{equation} to define our homological Tate spectral sequence, by applying homology with $\mathbb{F}_p$-coefficients.
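A standard computation may help orient the reader; it is recorded here for later reference, with the generators normalized to match those used in \textsection\ref{subsec_tatessforsinger}. For $G = C_p$ acting trivially on $H_*(X)$, the $\widehat{E}^2$-term is $\widehat{H}^{-s}(C_p; \mathbb{F}_p) \otimes H_t(X)$, where $$ \widehat{H}^*(C_2; \mathbb{F}_2) \cong P(u, u^{-1}) \qquad\text{and}\qquad \widehat{H}^*(C_p; \mathbb{F}_p) \cong E(u) \otimes P(t, t^{-1}) \quad\text{for $p$ odd,} $$ with $u$ of cohomological degree~$1$ and $t$ of cohomological degree~$2$.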
When studying the \emph{homotopy groups} of the Tate construction, it has been customary to apply $\pi_*(-)$ to the upper tower in~\eqref{equ_tatetowers}: \begin{equation} \label{equ_directtower} \xymatrix{ {*} \ar[r] & \dots \ar[r] & X^{tG}[-\infty, n{-}1] \ar[r] & X^{tG}[-\infty, n] \ar[r] & \dots \ar[r] & X^{tG} } \end{equation} Applying a homological functor to these two towers of spectra gives two different exact couples with isomorphic spectral sequences. If we are working with homotopy, we get two spectral sequences converging to the same groups. Using~\eqref{equ_directtower} yields a spectral sequence converging to the colimit $$ \colim_n \pi_*(X^{tG}[-\infty,n]) \cong \pi_*(X^{tG}) \,, $$ while using~\eqref{equ_inversetower} yields an isomorphic spectral sequence converging to the inverse limit \begin{equation} \label{equ_nc9} \lim_n \pi_*(X^{tG}[n]) \cong \pi_*(X^{tG}) \,. \end{equation} The latter isomorphism assumes that $X$ is bounded below and of suitably finite type, so that the right derived limit $\Rlim_n \pi_*(X^{tG}[n]) = 0$. For example, it suffices if $\pi_*(X)$ is of finite type over $\mathbb{Z}$ or $\widehat{\mathbb{Z}}_p$. When working with homology with $\mathbb{F}_p$-coefficients, instead of homotopy groups, the failure of the analogue of the isomorphism in~\eqref{equ_nc9} makes the situation more interesting. Applying $H_*(-)$ to the tower \eqref{equ_directtower} will produce a sequence of homology groups whose inverse limit (as $n \to -\infty$) is not trivial in general. This means that the associated spectral sequence will not be conditionally convergent to the direct limit $$ \colim_n H_*(X^{tG}[-\infty,n]) \cong H_*(X^{tG})\,. $$ In fact, we have seen that the (isomorphic) homological Tate spectral sequence, arising from~\eqref{equ_inversetower}, converges strongly to $$ \lim_n H_*(X^{tG}[n]) = H_*^c(X^{tG})\,, $$ which is only rarely isomorphic to $H_*(X^{tG})$, since inverse limits and homology do not commute in general. We end this discussion by noticing that the continuous homology groups of the Tate construction on $X$ can be thought of as the homotopy groups of the Tate construction on $H \wedge X$, where $H = H\mathbb{F}_p$ is the Eilenberg--Mac\,Lane spectrum of~$\mathbb{F}_p$. In other words, continuous homology of $X^{tG}$ is a special case of homotopy. \begin{prop} \label{prop_homotopyoftate} For any bounded below $G$-spectrum $X$ of finite type over~$\mathbb{F}_p$ there is a natural isomorphism $$ \pi_*(H\wedge X)^{tG} \cong H_*^c(X^{tG}) \,, $$ where $H$ has the trivial $G$-action. \end{prop} \begin{proof} For all integers~$m$ we have \begin{align*} (H\wedge X)^{tG}[1{-}m] &\simeq \Sigma F(\tF{m}, EG_+\wedge H\wedge X)^G \\ &\simeq H \wedge \Sigma F(\tF{m}, EG_+\wedge X)^G \simeq H \wedge X^{tG}[1{-}m] \,. \end{align*} The first and last equivalences follow from Lemma~\ref{lem_tatecomparison}, while the middle equivalence follows from the fact that $\tF{m}$ is $G$-equivariantly dualizable. Thus, we have that $\pi_*(H\wedge X)^{tG}[n]\cong H_*(X^{tG}[n])$ for all integers~$n$. For a general $G$-spectrum $X$, we then have the following surjective map: \begin{align} \label{equ_sfinx} \pi_* (H\wedge X)^{tG} &\to \lim_{n\to-\infty} \pi_* (H\wedge X)^{tG}[n]\\ &\cong \lim_{n\to-\infty} H_*(X^{tG}[n]) = H_*^c(X^{tG}) \,.
\notag \end{align} Since $X$ was assumed to be bounded below and of finite type over~$\mathbb{F}_p$, the groups in the first inverse limit system are all of finite type over $\mathbb{F}_p$, so their $\Rlim$ vanishes and the map in~\eqref{equ_sfinx} is an isomorphism. \end{proof} The previous proposition and discussion tell us that the continuous homology of $X^{tG}$ can be considered either as the direct limit $\colim_n \pi_*(H\wedge X)^{tG}[-\infty, n]$ or as the inverse limit $\lim_n \pi_*(H\wedge X)^{tG}[n,\infty]$. In both cases, the filtrations of the two groups given by their defining towers agree. \subsection{Multiplicative structure} \label{sec_multstructure} We now discuss how the treatment in \cite{HM2003}*{\textsection 4.3} of multiplicative structure in the homotopical Tate spectral sequence carries over to the homological Tate spectral sequence. Let $X$ be a bounded below $G$-equivariant ring spectrum of finite type over~$\mathbb{F}_p$. We assume that the unit map $\eta \: S \to X$ and the multiplication map $\mu\: X \wedge X \to X$ are equivariant with respect to the trivial $G$-action on $S$ and the diagonal $G$-action on $X \wedge X$. By \cite{GM1995}*{3.5}, the homotopy cartesian square \begin{equation} \label{equ_ringsquare} \xymatrix{ X^G \ar[r]^-R \ar[d]_{\Gamma} & [\widetilde{EG} \wedge X]^G \ar[d]^{\hat\Gamma} \\ X^{hG} \ar[r]^-{R^h} & X^{tG} } \end{equation} in~\eqref{equ_fund1} is a diagram of ring spectra and ring spectrum maps. Up to homotopy there is a unique $G$-equivalence $f \: \widetilde{EG} \wedge \widetilde{EG} \overset{\simeq}\to \widetilde{EG}$, compatible with the inclusion $S^0 \to \widetilde{EG}$ and the homeomorphism $S^0 \wedge S^0 \cong S^0$. Using $f$, the multiplication map on $X^{tG}$ is given by the composition $$ \xymatrix{ [\widetilde{EG} \wedge F(EG_+, X)]^G \wedge [\widetilde{EG}\wedge F(EG_+, X)]^G \ar[d] \\ [\widetilde{EG} \wedge \widetilde{EG} \wedge F(EG_+ \wedge EG_+, X \wedge X)]^G \ar[d]^{f \wedge F(\Delta, \mu)} \\ [\widetilde{EG} \wedge F(EG_+, X)]^G \,. } $$ The other multiplication maps arise by replacing $\widetilde{EG}$, $EG_+$ or both by $S^0$. The unit map to $X^G$ is adjoint to the $G$-map $S \to X$, since $S$ has the trivial $G$-action. The other unit maps arise by composition with the maps in~\eqref{equ_ringsquare}. The non-equivariant ring spectrum structure on the Eilenberg--Mac\,Lane spectrum $H$ makes $H \wedge X$ a naively $G$-equivariant ring spectrum, so $(H \wedge X)^{tG}$ is a ring spectrum. The induced graded ring structure on $\pi_* (H \wedge X)^{tG}$ then gives a graded ring structure on the continuous homology $H^c_*(X^{tG})$, by the isomorphism of Proposition~\ref{prop_homotopyoftate}. \begin{prop} \label{prop_homological2} Let $G$ be a finite group and $X$ a $G$-equivariant ring spectrum. Assume that $X$ is bounded below and of finite type over~$\mathbb{F}_p$. Then the homological Tate spectral sequence $$ \widehat{E}^2_{s,t} = \widehat{H}^{-s}(G; H_t(X)) \Longrightarrow H^c_{s+t}(X^{tG}) $$ is a strongly convergent $\mathscr{A}_*$-comodule algebra spectral sequence, whose product at the $\widehat{E}^2$-term is given by the cup product in Tate cohomology and the Pontryagin product on $H_*(X)$. \end{prop} \begin{proof} Using the Greenlees filtration, we may filter $(H\wedge X)^{tG}$ by the tower~\eqref{equ_directtower}.
This produces a homotopical Tate spectral sequence with $\widehat{E}^2$-term $$ \widehat{E}^2_{s,t} = \widehat{H}^{-s}(G; \pi_{t}(H\wedge X)) = \widehat{H}^{-s}(G; H_t(X)) \,, $$ converging strongly to the homotopy $\pi_{s+t}(H\wedge X)^{tG} \cong H^c_{s+t}(X^{tG})$. By the proof of Proposition~\ref{prop_homotopyoftate} it is additively isomorphic to the homological Tate spectral sequence of Proposition~\ref{prop_homological}. By \cite{HM2003}*{4.3.5}, it is also an algebra spectral sequence, with differentials being derivations with respect to the product. The $\widehat{E}^\infty$-term is the associated graded of the multiplicative filtration of $\pi_{s+t}(H\wedge X)^{tG}$ given by the images $$ \im(\pi_* (H\wedge X)^{tG}[-\infty,s] \to \pi_* (H\wedge X)^{tG}) \,. $$ \end{proof} \section{The topological Singer construction} \label{sec_singer} \subsection{Realizing the Singer construction as continuous cohomology} As observed by Miller, and presented by Bruner, May, McClure and Steinberger in \cite{BMMS}*{\textsection II.5}, there is a particular inverse system of spectra whose continuous cohomology realizes the Singer construction in the version related to the symmetric group $\Sigma_p$. We go through the adjustments needed to realize the version of the Singer construction related to the subgroup $C_p$ generated by the cyclic permutation $(1~2~\cdots~p)$. Let $B$ be a non-equivariant spectrum that is bounded below and of finite type over~$\mathbb{F}_p$. For each subgroup $G \subseteq \Sigma_p$ there is an extended power construction \cite{BMMS}*{\textsection I.5} $$ D_G(B) = EG \ltimes_G B^{(p)} \,, $$ well defined in the stable homotopy category. Here $B^{(p)}$ denotes the external $p$-th smash power of $B$, and $G$ permutes the $p$ copies of $B$. It follows from \cite{BMMS}*{I.2.4} that $D_G(B)$ is bounded below and of finite type over~$\mathbb{F}_p$. More precisely, we have the following calculation. \begin{lemma}[\cite{BMMS}*{I.2.3}] \label{lem_grouphomology} There is a natural isomorphism $$ H_*(D_G(B))\cong H_*(G; H_*(B)^{\otimes p}) \,, $$ where $G$ permutes the $p$ copies of $H_*(B)$. \end{lemma} The $p$-fold diagonal map $S^1 \to S^1 \wedge \dots \wedge S^1 \cong S^p$ induces maps $\Sigma D_G(B) \to D_G(\Sigma B)$ as in \cite{BMMS}*{\textsection II.3}. Applied to desuspensions of $B$, these assemble to an inverse system \begin{equation} \label{equ_inverseextendedpowers} \dots \longrightarrow \Sigma^{n+1} D_G(\Sigma^{-n-1} B) \overset{\Sigma^n\Delta}\longrightarrow \Sigma^n D_G(\Sigma^{-n} B) \longrightarrow \dots \overset{\Delta}\longrightarrow D_G(B) \end{equation} in the stable homotopy category. This is a tower of bounded below spectra of finite type over~$\mathbb{F}_p$, so it makes sense to talk about its associated continuous cohomology. We now follow \cite{BMMS}*{\textsection II.5}, but focus on the case $G = C_p$ instead of $G = \Sigma_p$. There is an additive isomorphism $$ H_*(D_{C_p}(B)) \cong \mathbb{F}_p\{ e_0 \otimes \alpha_1 \otimes \dots \otimes \alpha_p \} \oplus \mathbb{F}_p\{ e_j \otimes \alpha^{\otimes p} \} $$ where the $\alpha_i$ and $\alpha$ range over a basis for $H_*(B)$, the $\alpha_i$ are not all equal, and only one representative is taken from each $C_p$-orbit of the tensors $\alpha_1 \otimes \dots \otimes \alpha_p$. The grading is determined by $\deg(e_j) = j$. 
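For instance, take $B = S^0$, so that $D_{C_p}(S^0) = EC_p \ltimes_{C_p} S^0 \simeq \Sigma^\infty BC_{p+}$. Since $H_*(S^0)$ has a single basis element, there are no tensors $\alpha_1 \otimes \dots \otimes \alpha_p$ with the $\alpha_i$ not all equal, and the basis reduces to the classes $e_j \otimes 1^{\otimes p}$ with $j \ge 0$, recovering the fact that $H_*(BC_p; \mathbb{F}_p)$ is one-dimensional in each non-negative degree.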
Dually, there is an isomorphism $$ H^*(D_{C_p}(B)) \cong \mathbb{F}_p\{ w_0 \otimes a_1 \otimes \dots \otimes a_p\} \oplus \mathbb{F}_p\{ w_j \otimes a^{\otimes p} \} $$ where the $a_i$ and $a$ range over the dual basis for $H^*(B)$, and $w_j \otimes a^{\otimes p}$ is dual to $e_j \otimes \alpha^{\otimes p}$ when $a$ is dual to $\alpha$. It follows that \begin{equation} \label{equ_cohomsuspextpower} H^*(\Sigma^n D_{C_p}(\Sigma^{-n} B)) \cong \mathbb{F}_p\{ \Sigma^n w_0 \otimes \Sigma^{-n} a_1 \otimes \dots \otimes \Sigma^{-n} a_p\} \oplus \mathbb{F}_p\{ \Sigma^n w_j \otimes (\Sigma^{-n} a)^{\otimes p} \} \,. \end{equation} By \cite{BMMS}*{II.5.6}, the map $\Delta$ in~\eqref{equ_inverseextendedpowers} is given in cohomology as $$ \Delta^*( w_j \otimes a^{\otimes p} ) = (-1)^{j+1} \alpha(q) \cdot \Sigma w_{j+p-1} \otimes (\Sigma^{-1} a)^{\otimes p} $$ where $\deg(a) = q$, $m = (p-1)/2$ and $\alpha(q) = -(-1)^{mq} m!$. For $p=2$ the numerical coefficient should be read as $1$. The other classes $w_0 \otimes a_1 \otimes \dots \otimes a_p$ map to zero. It follows that \begin{equation} \label{equ_cohosigmandelta} (\Sigma^n\Delta)^*(\Sigma^n w_j \otimes (\Sigma^{-n} a)^{\otimes p}) = (-1)^{j+1} \alpha(q{-}n) \cdot \Sigma^{n+1} w_{j+p-1} \otimes (\Sigma^{-n-1} a)^{\otimes p} \,. \end{equation} The action of the Steenrod algebra $\mathscr{A}$ on $H^*(D_{C_p}(B))$ is given by the Nishida relations. Together with the explicit formula above for the maps $(\Sigma^n\Delta)^*$, this determines the direct limit of cohomology groups as an $\mathscr{A}$-module. Miller observed that this direct limit can be described in closed form by the Singer construction on the $\mathscr{A}$-module $H^*(B)$, up to a single degree shift. We now give the $C_p$-equivariant extension of the $\Sigma_p$-equivariant case discussed in \cite{BMMS}*{II.5.1}. \begin{thm} \label{thm_TopSinger} For each spectrum $B$ that is bounded below and of finite type over~$\mathbb{F}_p$, there is a natural isomorphism of $\mathscr{A}$-modules $$ \omega \: \colim_{n\to\infty} H^*(\Sigma^n D_{C_p}(\Sigma^{-n} B)) \overset{\cong}\longrightarrow \Sigma^{-1} R_+(H^*(B)) \,. $$ \end{thm} \begin{proof} For $p=2$ the isomorphism is given by $$ \omega(\Sigma^n w_{j+n} \otimes (\Sigma^{-n} a)^{\otimes 2}) = x^{j+q} \otimes a $$ where $\deg(a) = q$. For $p$ odd, the isomorphism is given by $$ \omega(\Sigma^n w_{2(r+mn)} \otimes (\Sigma^{-n} a)^{\otimes p}) = (-1)^{q-n} \nu(q{-}n)^{-1} \cdot y^{r+mq} \otimes a $$ and $$ \omega(\Sigma^n w_{2(r+mn)-1} \otimes (\Sigma^{-n} a)^{\otimes p}) = (-1)^{q} \nu(q{-}n)^{-1} \cdot x y^{r+mq-1} \otimes a \,, $$ where $\deg(a) = q$, $m = (p-1)/2$ and $\nu(2j+\epsilon) = (-1)^j (m!)^\epsilon$ for $\epsilon \in \{0, 1\}$. It follows from~\eqref{equ_cohosigmandelta} and the relation $\alpha(q) \nu(q-1)^{-1} = \nu(q)^{-1}$ that these homomorphisms are compatible under $(\Sigma^n\Delta)^*$. Then it is clear from~\eqref{equ_cohomsuspextpower} that $\omega$ is an additive isomorphism. It also commutes with the Steenrod operations on the extended powers, described in \cite{BMMS}*{II.5.5}, and the explicitly defined Steenrod operations on the Singer construction, given in Definition~\ref{dfn_rplus}. After some computation, the thing to check is that $\alpha(q) (-1)^{q+1} \nu(q+1)^{-1} = (-1)^q \nu(q)^{-1}$, which follows from the relation above and $\nu(q+1) = -\nu(q-1)$. 
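As an illustration of the bookkeeping, take $p = 3$, so that $m = 1$, $\alpha(q) = -(-1)^q$ and $\nu(2j+\epsilon) = (-1)^j$. For $q = 2j$ the relation $\alpha(q) \nu(q{-}1)^{-1} = \nu(q)^{-1}$ reads $$ \alpha(2j) \, \nu(2j{-}1)^{-1} = (-1) \cdot (-1)^{j-1} = (-1)^j = \nu(2j)^{-1} \,, $$ and the case of odd $q$ is checked in the same way.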
\end{proof} \subsection{The relationship between the Tate and Singer constructions} \label{subsec_topsinger} We now show that the inverse system~\eqref{equ_inverseextendedpowers} for $G = C_p$ and $B$ a bounded below spectrum can be realized, up to a single suspension, as the Tate tower in~\eqref{equ_tatetowers} for a $C_p$-spectrum $X = B^{\wedge p}$. As a naive $C_p$-spectrum, $B^{\wedge p}$ is equivalent to the $p$-fold smash product $B \wedge \dots \wedge B$, with the $C_p$-action given by cyclic permutation of the factors. The genuinely equivariant definition of $B^{\wedge p}$ is obtained by specialization from B{\"o}kstedt's definition in \cite{B1}, \cite{BHM1993} of the topological Hochschild homology $\THH(B)$ of a symmetric ring spectrum $B$, in the sense of~\cite{HSS2000}. Namely, $B^{\wedge p} = (\sd_p \THH(B))_0 = \THH(B)_{p-1}$ equals the $0$-simplices of the $p$-fold edgewise subdivision of $\THH(B)$, which in turn equals the $(p-1)$-simplices of $\THH(B)$. The ring structure on $B$ is only relevant for the simplicial structure on $\THH(B)$, and is not needed for the formation of its $(p-1)$-simplices. However, it is necessary to assume that the spectrum $B$ is realized as a symmetric spectrum. We now review the relevant definitions, in order to compare the B{\"o}kstedt-style smash powers of symmetric spectra with the external powers of Lewis--May spectra. From now on, let $\mathscr{U}$ be the complete $C_p$-universe $$ \mathscr{U} = \mathbb{R}^\infty \oplus \dots \oplus \mathbb{R}^\infty = (\mathbb{R}^\infty)^p $$ with $C_p$-action given by cyclic permutation of summands. The inclusion $i \: \mathbb{R}^\infty \to \mathscr{U}$ is the diagonal embedding $\Delta \: \mathbb{R}^\infty \to (\mathbb{R}^\infty)^p$. Let $\mathfrak{A} = \{\mathbb{R}^n \mid n\ge0\}$ be the sequential indexing set in $\mathbb{R}^\infty$, and let $\mathfrak{A}^p = \{\mathbb{R}^n \oplus \dots \oplus \mathbb{R}^n = (\mathbb{R}^n)^p \mid n\ge0\}$ be the associated diagonal indexing set \cite{LMS}*{\textsection VI.5} in $\mathscr{U}$. Recall that a prespectrum $D$ is \emph{$\Sigma$-cofibrant} in the sense of \cite{LMS}*{I.8.7}, hence good in the sense of Hesselholt and Madsen~\cite{HM1997}*{Def.~A.1}, if each structure map $\Sigma^{W-V} D(V) \to D(W)$ is a cofibration for $V \subseteq W$ in the indexing set. There is a functorial thickening $D^\tau$ of prespectra, which produces $\Sigma$-cofibrant, hence good, prespectra, and there is a natural spacewise equivalence $D^\tau \to D$, see \cite{HM1997}*{Lem.~A.1}. All of this works just as well equivariantly. Let $B$ be a symmetric spectrum of topological spaces, with $n$-th space $B_n$ for each $n\ge0$. Recall that $B$ is $S$-cofibrant in the sense of \cite{HSS2000}*{5.3.6} if the natural map $\nu_n \: L_n B \to B_n$ is a cofibration for each $n$, where the latching space $L_n B$ is the $n$-th space in the spectrum $B \wedge \bar S$. Here $\bar S$ is the symmetric spectrum with $0$-th space $*$ and $n$-th space $S^n$, for $n>0$. We prefer to follow the terminology in \cite{Schwede}*{III.1.2} and refer to the $S$-cofibrant symmetric spectra as being \emph{flat}. Every symmetric spectrum is level equivalent, hence stably equivalent, to a flat symmetric spectrum, so any spectrum may be modeled by a flat symmetric spectrum. Each symmetric spectrum~$B$ has an underlying sequential prespectrum indexed on $\mathfrak{A}$, with $B(\mathbb{R}^n) = B_n$ equal to the $n$-th space of $B$.
The structure map $\sigma \: B(\mathbb{R}^{n-1}) \wedge S^1 \to B(\mathbb{R}^n)$ factors as $B_{n-1} \wedge S^1 \overset{\iota_n}\longrightarrow L_n B \overset{\nu_n}\longrightarrow B_n$, where $\iota_n$ is always a cofibration. Hence the underlying prespectrum of a flat symmetric spectrum is $\Sigma$-cofibrant. \begin{dfn} \label{dfn_bsmashb} Let $B$ be a symmetric spectrum, with $n$-th space $B_n$ for each $n\ge0$, and let $I$ be B{\"o}kstedt's category of finite sets $\mathbf{n} = \{1, 2, \dots, n\}$ for $n\ge0$ and injective functions. Let $B^{\wedge p}_{\pre}$ be the $C_p$-equivariant prespectrum with $V$-th space $$ B^{\wedge p}_{\pre}(V) = \hocolim_{(\mathbf{n}_1, \dots, \mathbf{n}_p) \in I^p} \Map(S^{n_1} \wedge \dots \wedge S^{n_p}, B_{n_1} \wedge \dots \wedge B_{n_p} \wedge S^V) $$ for each finite dimensional $V \subset \mathscr{U}$. Here $C_p$ acts by cyclically permuting the $\mathbf{n}_i$, the $S^{n_i}$ and the $B_{n_i}$, as well as acting on $S^V$. Let $$ B^{\wedge p} = L((B^{\wedge p}_{\pre})^\tau) $$ be the genuine $C_p$-spectrum in $C_p\S\mathscr{U}$ obtained by spectrification from the functorial good thickening $(B^{\wedge p}_{\pre})^\tau$ of this prespectrum. The natural maps $$ B^{\wedge p}_{\pre}(V) \overset{\simeq}\longleftarrow (B^{\wedge p}_{\pre})^{\tau}(V) \overset{\simeq}\longrightarrow B^{\wedge p}(V) $$ are $C_p$-equivariant equivalences by the proof of \cite{HM1997}*{Prop.~2.4}. \end{dfn} \begin{dfn} Let $B$ be a prespectrum indexed on $\mathfrak{A}$. The $p$-fold external smash product $B^{(p)}$ is the spectrification in $C_p\S\mathscr{U}$ of the $C_p$-equivariant prespectrum $$ B^{(p)}_{\pre}((\mathbb{R}^n)^p) = B(\mathbb{R}^n) \wedge \dots \wedge B(\mathbb{R}^n) = B(\mathbb{R}^n)^{\wedge p} $$ indexed on $\mathfrak{A}^p$. When $B$ is $\Sigma$-cofibrant, so is $B^{(p)}_{\pre}$, hence the $V$-th space of the spectrification is given by the colimit $$ B^{(p)}(V) = \colim_{V \subseteq (\mathbb{R}^n)^p} \Map(S^{(\mathbb{R}^n)^p - V}, B(\mathbb{R}^n)^{\wedge p}) \,. $$ Here the colimit runs over the $n \in \mathbb{N}_0$ such that $V \subseteq (\mathbb{R}^n)^p$, and $(\mathbb{R}^n)^p - V$ denotes the orthogonal complement of $V$ in $(\mathbb{R}^n)^p$. Suspension by $V$ induces an isomorphism $$ B^{(p)}(V) \cong \colim_{n \in \mathbb{N}_0} \Map((S^n)^{\wedge p}, B(\mathbb{R}^n)^{\wedge p} \wedge S^V) \,, $$ with inverse given by suspension by $(\mathbb{R}^m)^p - V$, followed by $p$ instances of the stabilization $B(\mathbb{R}^n) \wedge S^m \to B(\mathbb{R}^{n+m})$, for $m$ sufficiently large. \end{dfn} We say that a symmetric spectrum $B$ is \emph{convergent} if there are integers $\lambda(n)$ that grow to infinity with $n$, such that $B_n$ is $((n/2) + \lambda(n))$-connected and the structure map $\Sigma B_n \to B_{n+1}$ is $(n+\lambda(n))$-connected, for all sufficiently large~$n$. These hypotheses suffice for the use of B{\"o}kstedt's approximation lemma in the following proof. Every symmetric spectrum is stably equivalent to a convergent one. \begin{lemma} \label{lem_comparepthpowers} Let $B$ be a flat, convergent symmetric spectrum. There is a natural chain of weak equivalences of naive $C_p$-spectra $$ i^* B^{(p)} \simeq i^* B^{\wedge p} \,. 
$$ \end{lemma} \begin{proof} For every finite dimensional $V \subset \mathbb{R}^\infty$ we have a natural chain of $C_p$-equivariant maps $$ \xymatrix{ \displaystyle{\colim_{n \in \mathbb{N}_0}} \Map((S^n)^{\wedge p}, B(\mathbb{R}^n)^{\wedge p} \wedge S^V) \\ \displaystyle{\hocolim_{n \in \mathbb{N}_0}} \Map((S^n)^{\wedge p}, B(\mathbb{R}^n)^{\wedge p} \wedge S^V) \ar[u]_\simeq \ar[d]^\simeq \\ \displaystyle{\hocolim_{(n_1, \dots, n_p) \in \mathbb{N}_0^p}} \Map(S^{n_1} \wedge \dots \wedge S^{n_p}, B_{n_1} \wedge \dots \wedge B_{n_p} \wedge S^V) \ar[d]^\simeq \\ \displaystyle{\hocolim_{(\mathbf{n}_1, \dots, \mathbf{n}_p) \in I^p}} \Map(S^{n_1} \wedge \dots \wedge S^{n_p}, B_{n_1} \wedge \dots \wedge B_{n_p} \wedge S^V) } $$ connecting $B^{(p)}(V)$ to $B^{\wedge p}_{\pre}(V)$. The upper map is a weak equivalence because $B$ is flat, hence $\Sigma$-cofibrant. The middle map is a weak homotopy equivalence because the diagonal $\mathbb{N}_0 \to \mathbb{N}_0^p$ is (co-)final. The lower map is a weak equivalence by convergence, the fact that $\mathbb{N}_0^p$ is filtering, and B{\"o}kstedt's approximation lemma \cite{B1}*{1.5}, see \cite{B2000}*{2.5.1} for a published proof. Applying the thickening construction and spectrifying, we get the desired chain of naively $C_p$-equivariant weak equivalences. \end{proof} \begin{dfn} For $p=2$, let $\mathbb{R}(1)$ be the sign representation of $C_2$. For $p$ odd, let $\mathbb{C}(1)$ be the standard rank~$1$ representation of $C_p \subset S^1$, and let $\mathbb{C}(i)$ be its $i$-th tensor power. For all primes $p$, let $W \subset \mathbb{R}^p$ be the orthogonal complement of the diagonal copy of $\mathbb{R}$. Then $W \cong \mathbb{R}(1)$ for $p=2$, and $W \cong \mathbb{C}(1) \oplus \dots \oplus \mathbb{C}(m)$ for $p$ odd, where $m = (p-1)/2$. Let $EC_p = S(\infty W)$, $\widetilde{EC}_p = S^{\infty W}$, and give $EC_p$ a $C_p$-CW structure so that $$ \widetilde{EC}_p^{((p-1)n)} = S^{nW} $$ for all $n\ge0$. Then $\tF{(p-1)n} = S^{nW}$ for all integers~$n$. \end{dfn} \begin{prop} \label{prop_taterewrite} Let $B$ be a flat and convergent symmetric spectrum, and give $EC_p$ a free $C_p$-CW structure as in the definition above. There is a natural weak equivalence $$ (B^{\wedge p})_{hC_p} = (EC_{p+} \wedge i^* B^{\wedge p})/C_p \simeq EC_p \ltimes_{C_p} B^{(p)} = D_{C_p}(B) \,. $$ More generally, there are weak equivalences $$ (B^{\wedge p})^{tC_p}[1{-}(p{-}1)n] \simeq \Sigma^{1+n} D_{C_p}(\Sigma^{-n} B) $$ for all $n\ge0$, which are compatible with the $(p-1)$-fold composites of maps in the Tate tower~\eqref{equ_tatetowers} for $X = B^{\wedge p}$ $$ \dots \to (B^{\wedge p})^{tC_p}[1{-}(p{-}1)(n{+}1)] \to (B^{\wedge p})^{tC_p}[1{-}(p{-}1)n] \to \dots \,, $$ and the suspension of the inverse system~\eqref{equ_inverseextendedpowers} for $G = C_p$ $$ \dots \to \Sigma^{1+n+1} D_{C_p}(\Sigma^{-n-1} B) \to \Sigma^{1+n} D_{C_p}(\Sigma^{-n} B) \to \dots \,. $$ \end{prop} \begin{proof} By the untwisting theorem \cite{LMS}*{VI.1.17} and Lemma~\ref{lem_comparepthpowers} there are $C_p$-equivariant equivalences $$ EC_p \ltimes B^{(p)} \simeq EC_{p+} \wedge i^* B^{(p)} \simeq EC_{p+} \wedge i^* B^{\wedge p} \,, $$ since $EC_p$ is $C_p$-free. The first claim follows by passage to $C_p$-orbit spectra. 
More generally, there are $C_p$-equivariant equivalences $$ \Sigma^{1+n} EC_p \ltimes (\Sigma^{-n} B)^{(p)} \simeq \Sigma F(S^{nW}, EC_p \ltimes B^{(p)}) \simeq \Sigma F(S^{nW}, EC_{p+} \wedge i^* B^{\wedge p}) $$ by \cite{LMS}*{VI.1.5}, since $S^n \wedge S^{nW} \cong (S^n)^{\wedge p}$. Passing to $C_p$-orbits, and using the Adams transfer equivalence~\eqref{equ_adamstransfer}, we get the equivalences $$ \Sigma^{1+n} D_{C_p}(\Sigma^{-n} B) \simeq \Sigma F(S^{nW}, EC_{p+} \wedge B^{\wedge p})^{C_p} \,. $$ The right hand side is a model for $(B^{\wedge p})^{tC_p}[1{-}(p{-}1)n]$, by Lemma~\ref{lem_tatecomparison}, since $\tF{(p-1)n} = S^{nW}$. The stabilization of the left hand side given by $\Delta \: S^1 \to S^p$ is compatible under all of these equivalences with the stabilization of the right hand side given by the inclusion $S^0 \to S^W$, again by \cite{LMS}*{VI.1.5}. \end{proof} \begin{dfn} For each symmetric spectrum $B$ let the \emph{topological Singer construction} on $B$ be the spectrum $$ R_+(B) = (B^{\wedge p})^{tC_p} \,. $$ \end{dfn} The topological Singer construction realizes the algebraic Singer constructions, in the following sense. \begin{thm} \label{thm_topmodelsalg} Let $B$ be a symmetric spectrum that is bounded below and of finite type over $\mathbb{F}_p$. There are natural isomorphisms $$ \omega \: H^*_c(R_+(B)) \overset{\cong}\longrightarrow R_+(H^*(B)) $$ and $$ \omega_* \: R_+(H_*(B)) \overset{\cong}\longrightarrow H_*^c(R_+(B)) $$ of $\mathscr{A}$-modules and complete $\mathscr{A}_*$-comodules, respectively. \end{thm} \begin{proof} We may replace $B$ by a stably equivalent flat and convergent symmetric spectrum, without changing the (co-)homology of $B^{\wedge p}$ and $R_+(B)$. By Proposition~\ref{prop_taterewrite} there are $\mathscr{A}$-module isomorphisms $$ H^*((B^{\wedge p})^{tC_p}[1{-}(p{-}1)n]) \cong \Sigma^{1+n} H^*(D_{C_p}(\Sigma^{-n} B)) $$ for each $n$, which by Theorem~\ref{thm_TopSinger} induce $\mathscr{A}$-module isomorphisms $$ H^*_c((B^{\wedge p})^{tC_p}) \cong R_+(H^*(B)) $$ after passage to colimits. The dual $\mathscr{A}_*$-comodule isomorphisms then induce complete $\mathscr{A}_*$-comodule isomorphisms $$ H^c_*((B^{\wedge p})^{tC_p}) \cong \Hom(R_+(H^*(B)), \mathbb{F}_p) = R_+(H_*(B)) $$ after passage to limits. \end{proof} \subsection{The topological Singer $\epsilon$-map} \label{subsec_topepsilon} We now turn to the construction of a stable map $\epsilon_B \: B \to R_+(B)$ that realizes the Singer homomorphism $\epsilon$ on passage to cohomology. The homotopy fixed point property for the $C_p$-equivariant spectrum $B^{\wedge p}$ will then follow easily. Let $B$ be a symmetric spectrum that is bounded below and of finite type over~$\mathbb{F}_p$. The $C_p$-spectrum $X = B^{\wedge p}$ introduced in Definition~\ref{dfn_bsmashb} is then also bounded below and of finite type over~$\mathbb{F}_p$. We shall make use of parts of the Hesselholt--Madsen proof that $\THH(B)$ is a \emph{cyclotomic spectrum}. By the first half of the proof of \cite{HM1997}*{Prop.~2.1} there is a natural equivalence $$ \bar s_{C_p} \: [\widetilde{EC}_p \wedge B^{\wedge p}]^{C_p} \overset{\simeq}\longrightarrow \Phi^{C_p}(B^{\wedge p}) \,, $$ where $\Phi^{C_p}(X)$ denotes the \emph{geometric fixed point spectrum} of $X$. Furthermore, by the simplicial degree~$k=p-1$ part of the proof of \cite{HM1997}*{Prop.~2.5} there is a natural equivalence $$ r'_{C_p} \: \Phi^{C_p}(B^{\wedge p}) \overset{\simeq}\longrightarrow B \,. 
$$ If $B$ is a ring spectrum, both of these equivalences are ring spectrum maps. \begin{dfn} Let $\epsilon_B \: B \to R_+(B)$ be the natural stable map given by the composite $$ B \overset{\simeq}\longleftarrow \Phi^{C_p}(B^{\wedge p}) \overset{\simeq}\longleftarrow [\widetilde{EC}_p \wedge B^{\wedge p}]^{C_p} \overset{\hat\Gamma}\longrightarrow (B^{\wedge p})^{tC_p} = R_+(B) \,. $$ If $B$ is a ring spectrum then $\epsilon_B$ is a ring spectrum map. \end{dfn} With this notation we can rewrite the homotopy cartesian square in~\eqref{equ_fund1} for $X = B^{\wedge p}$ as follows: \begin{equation} \label{equ_fundbsmashb} \xymatrix{ (B^{\wedge p})^{C_p} \ar[r]^-R \ar[d]_{\Gamma} & B \ar[d]^{\epsilon_B} \\ (B^{\wedge p})^{hC_p} \ar[r]^-{R^h} & R_+(B) } \end{equation} We thank M.~B{\"o}kstedt for a helpful discussion on the following two results. \begin{lemma} \label{lem_suspensionepsilon} The stable map $\epsilon_B \: B \to R_+(B)$ commutes with suspension, in the sense that $\epsilon_{\Sigma B} = \Sigma \epsilon_B$. \end{lemma} \begin{proof} Consider the commutative diagram: $$ \xymatrix{ \Sigma B \ar[d]_{=} & \Sigma [\widetilde{EC}_p \wedge B^{\wedge p}]^{C_p} \ar[l]_-{\simeq} \ar[r]^-{\Sigma\hat\Gamma} \ar[d]^{\Delta} & \Sigma R_+(B) \ar[d]^{\Delta} \\ \Sigma B & [\widetilde{EC}_p \wedge (\Sigma B)^{\wedge p}]^{C_p} \ar[l]_-{\simeq} \ar[r]^-{\hat\Gamma} & R_+(\Sigma B) } $$ The vertical maps labeled $\Delta$ are induced by the diagonal inclusion $S^1 \to S^p$, which on the right hand side is the same map as was used in the interpretation (Proposition~\ref{prop_taterewrite}) of $R_+(B)$ as the homotopy limit of the inverse system of suspended extended power constructions. Hence these maps are weak equivalences. \end{proof} \begin{prop} \label{prop_epsilonB} Let $B$ be a bounded below spectrum of finite type over~$\mathbb{F}_p$. Then the homomorphism $$ (\epsilon_B)^* \: H_c^*(R_+(B)) \to H^*(B) $$ induced on continuous cohomology by the spectrum map $\epsilon_B \: B \to R_+(B)$ is equal to Singer's homomorphism $$ \epsilon_{H^*(B)} \: R_+(H^*(B)) \to H^*(B) $$ associated to the $\mathscr{A}$-module $H^*(B)$. \end{prop} \begin{proof} By Corollary~\ref{cor_uniquehom} there is a unique $\mathscr{A}$-module homomorphism $g_B \: H^*(B) \to H^*(B)$ that makes the square $$ \xymatrix{ R_+(H^*(B)) \ar[d]_{\epsilon_{H^*(B)}} & H_c^*(R_+(B)) \ar[d]^{(\epsilon_B)^*} \ar[l]_-{\omega}^-{\cong} \\ H^*(B) \ar[r]^-{g_B} & H^*(B) } $$ commute. We must show that $g_B$ equals the identity. First consider the case $B = H$. The homological Tate spectral sequence $$ \widehat{E}^2_{*,*}(H) = \widehat{H}^{-*}(C_p; H_*(H)^{\otimes p}) \Longrightarrow H^c_*(R_+(H)) $$ is an algebra spectral sequence (Proposition~\ref{prop_homological2}), and $\epsilon_H \: H \to R_+(H)$ is a ring spectrum map, so the image of $1 \in H_0(H)$ under $(\epsilon_H)_*$ is represented by $1 \otimes 1^{\otimes p}$ in $\widehat{H}^0(C_p; H_0(H)^{\otimes p})$. Hence $(\epsilon_H)^*$ maps the dual class represented by $1 \otimes 1^{\otimes p}$ in $\widehat{H}_0(C_p; H^0(H)^{\otimes p})$ to $1 \in H^0(H)$. Now $\omega(1 \otimes 1^{\otimes p}) = \Sigma xy^{-1} \otimes 1 \in R_+(\mathscr{A})$ and $\epsilon_{\mathscr{A}}(\Sigma xy^{-1} \otimes 1) = 1 \in \mathscr{A}$, so $g_H(1) = 1$. (We replace $xy^{-1}$ by $x^{-1}$ for $p=2$.) Since $g_H$ is an $\mathscr{A}$-module homomorphism, it must be equal to the identity map of $\mathscr{A} = H^*(H)$. The case $B = \Sigma^n H$ then follows by Lemma~\ref{lem_suspensionepsilon}.
In the general case any element in $H^n(B)$ is represented by a map $f \: B \to \Sigma^n H$, which induces an $\mathscr{A}$-module homomorphism $f^* \: \Sigma^n \mathscr{A} \to H^*(B)$. By naturality of the isomorphism $\omega$, the Singer homomorphism $\epsilon$, and the spectrum map $\epsilon$, we get a diagram $$ \xymatrix{ R_+(\Sigma^n \mathscr{A}) \ar[ddd]_{\epsilon_{\Sigma^n \mathscr{A}}} \ar[dr]_-{R_+(f^*)} &&& H_c^*(R_+(\Sigma^n H)) \ar[ddd]^{(\epsilon_{\Sigma^n H})^*} \ar[lll]_-{\omega}^-{\cong} \ar[dl]^-{R_+(f)^*} \\ & R_+(H^*(B)) \ar[d]_{\epsilon_{H^*(B)}} & H_c^*(R_+(B)) \ar[d]^{(\epsilon_B)^*} \ar[l]_-{\omega}^-{\cong} \\ & H^*(B) \ar[r]_-{g_B} & H^*(B) \\ \Sigma^n \mathscr{A} \ar[rrr]^-{=} \ar[ur]^-{f^*} &&& H^*(\Sigma^n H) \ar[ul]_-{f^*} } $$ where the left hand, upper and right hand trapezoids all commute. The inner square commutes by construction, and the outer square commutes by the case $B = \Sigma^n H$. Since $\epsilon_{\Sigma^n \mathscr{A}}$ is surjective, it follows that the lower trapezoid also commutes. Hence $g_B$ equals the identity on the class $f^*(\Sigma^n 1) \in H^n(B)$. Since $n$ and $f$ were arbitrary, this proves that $g_B$ equals the identity on all of $H^*(B)$. \end{proof} The following theorem generalizes the Segal conjecture for $C_p$. \begin{thm} \label{thm_fixedpoints} Let $B$ be a bounded below spectrum of finite type over~$\mathbb{F}_p$. Then the natural maps $$ \epsilon_B \: B \to R_+(B) = (B^{\wedge p})^{tC_p} $$ and $$ \Gamma\: (B^{\wedge p})^{C_p} \to (B^{\wedge p})^{hC_p} $$ are $p$-adic equivalences of spectra. \end{thm} \begin{proof} The map $\epsilon_B$ induces a map of spectral sequences $$ E_2^{*,*}(B) = \Ext_{\mathscr{A}}^{*,*}(H^*(B), \mathbb{F}_p) \to \Ext_{\mathscr{A}}^{*,*}(H_c^*(R_+(B)), \mathbb{F}_p) = E_2^{*,*}(R_+(B)) $$ where the first is the Adams spectral sequence of $B$, and the second is the inverse limit of Adams spectral sequences associated to the Tate tower $\{(B^{\wedge p})^{tC_p}[n]\}_n$, as in Proposition~\ref{prop_CMPss}. This map of spectral sequences converges strongly to the homomorphism $$ \pi_*(\epsilon_B)\sphat_p \: \pi_*(B\sphat_p) \to \pi_*(R_+(B)\sphat_p) \,. $$ By Propositions~\ref{prop_epsilonB} and~\ref{prop_extiso} and Theorem~\ref{thm_extiso}, the map of $E_2$-terms is an isomorphism, hence so are the maps of $E_\infty$-terms and of the abutments. In other words, $\epsilon_B$ is a $p$-adic equivalence. The corresponding assertion for $\Gamma$ follows immediately, since diagram~\eqref{equ_fundbsmashb} is homotopy cartesian. \end{proof} \subsection{The Tate spectral sequence for the topological Singer construction} \label{subsec_tatessforsinger} We conclude by relating the homological Tate spectral sequence for $B^{\wedge p}$ to the Tate filtration on the homological Singer construction for $H_*(B)$, and likewise in cohomology. \begin{prop} Let $B$ be a bounded below spectrum of finite type over~$\mathbb{F}_p$. The homological Tate spectral sequence $$ \widehat{E}^2_{*,*} = \widehat{H}^{-*}(C_p; H_*(B)^{\otimes p}) \Longrightarrow H^c_*((B^{\wedge p})^{tC_p}) $$ converging to $H^c_*(R_+(B)) \cong R_+(H_*(B))$ collapses at the $\widehat{E}^2$-term. Hence the $\widehat{E}^2 = \widehat{E}^\infty$-term is given by $$ \widehat{E}^\infty_{*,*} = P(u, u^{-1}) \otimes \mathbb{F}_2\{\alpha^{\otimes 2}\} $$ for $p=2$, and by $$ \widehat{E}^\infty_{*,*} = E(u) \otimes P(t, t^{-1}) \otimes \mathbb{F}_p\{\alpha^{\otimes p}\} $$ for $p$ odd. In each case $\alpha$ runs through an $\mathbb{F}_p$-basis for $H_*(B)$.
For $p=2$ and any $r \in \mathbb{Z}$, $\alpha \in H_q(B)$, the element $u^r \otimes \alpha \in R_+(H_*(B))$ is represented in the Tate spectral sequence by $$ u^{r+q} \otimes \alpha^{\otimes 2} \in \widehat{E}^\infty_{-r-q, 2q} \,. $$ For $p$ odd and any $i \in \{0,1\}$, $r \in \mathbb{Z}$ and $\alpha \in H_q(B)$, the element $u^i t^r \otimes \alpha \in R_+(H_*(B))$ is represented in the Tate spectral sequence by $$ (-1)^q \nu(q)^{-1} \cdot u^i t^{r+mq} \otimes \alpha^{\otimes p} \in \widehat{E}^\infty_{-i-2r-(p-1)q, pq} \,, $$ where $m=(p-1)/2$ and $\nu(2j+\epsilon) = (-1)^j (m!)^\epsilon$ for $\epsilon \in \{0,1\}$. \end{prop} \begin{proof} Consider first the case $B=S^q$. Then the result is trivial for dimensional reasons. The $\widehat{E}^2$-term is concentrated in bidegrees $(*, pq)$, so there is no room for differentials, and there are no extension problems to be solved. The formula for the homological Tate spectral sequence representative for $u^i t^r \otimes \alpha$ will follow by dualization from the cohomological case, given below. Let $H$ be a model for the mod~$p$ Eilenberg--Mac\,Lane spectrum as a commutative symmetric ring spectrum, and form the spectrum $H^{\wedge p}$ in $C_p\S\mathscr{U}$ as in Definition~\ref{dfn_bsmashb}. The iterated multiplication on $H$ then induces a naively $C_p$-equivariant map $H^{\wedge p} \to H$. Let $f \: S^q \to H \wedge B$ represent a class $\alpha \in H_q(B)$, and consider the naively $C_p$-equivariant composite map $$ f^p \: H\wedge S^{pq} \to H\wedge (H\wedge B)^{\wedge p} \simeq H\wedge H^{\wedge p}\wedge B^{\wedge p} \to H \wedge H \wedge B^{\wedge p} \to H\wedge B^{\wedge p} \,. $$ On homotopy groups it induces the homomorphism $H_*(S^q)^{\otimes p} \to H_*(B)^{\otimes p}$ that takes $\iota_q^{\otimes p}$ to $\alpha^{\otimes p}$, where $\iota_q = \Sigma^q 1$ is the fundamental class in $H_q(S^q)$. By applying the Tate construction to this map, we get a map of spectra $$ (f^p)^{tC_p} \: (H \wedge S^{pq})^{tC_p} \to (H \wedge B^{\wedge p})^{tC_p} $$ and an associated map of homotopical Tate spectral sequences, converging to an $\mathbb{F}_p$-linear map \begin{equation} \label{equ_univmap} H^c_*(R_+(S^q)) \to H^c_*(R_+(B)) \end{equation} by Proposition~\ref{prop_homotopyoftate}. For $p$ odd, this map is given at the level of $\widehat{E}^2$-terms as sending $u^i t^r \otimes \iota_q^{\otimes p}$ to $u^i t^r \otimes \alpha^{\otimes p}$, and similarly for $p=2$. The statement of the proposition then follows by naturality. \end{proof} Note that the $\mathbb{F}_p$-linear map~\eqref{equ_univmap} is not a homomorphism of $\mathscr{A}_*$-comodules, because $f^p$ was formed using the product on $H$. \begin{prop} Let $B$ be a bounded below spectrum of finite type over~$\mathbb{F}_p$. The cohomological Tate spectral sequence $$ \widehat{E}_2^{*,*} = \widehat{H}_{-*}(C_p; H^*(B)^{\otimes p}) \Longrightarrow H_c^*((B^{\wedge p})^{tC_p}) $$ converging to $H_c^*(R_+(B)) \cong R_+(H^*(B))$ collapses at the $\widehat{E}_2$-term, so that $$ \widehat{E}_\infty^{*,*} = \Sigma P(x, x^{-1}) \otimes \mathbb{F}_2\{a^{\otimes 2}\} $$ for $p=2$, and $$ \widehat{E}_\infty^{*,*} = \Sigma E(x) \otimes P(y, y^{-1}) \otimes \mathbb{F}_p\{a^{\otimes p}\} $$ for $p$ odd. In each case $a$ runs through an $\mathbb{F}_p$-basis for $H^*(B)$. For $p=2$ and any $r \in \mathbb{Z}$, $a \in H^q(B)$, the element $\Sigma x^r \otimes a \in R_+(H^*(B))$ is represented in the Tate spectral sequence by $$ \Sigma x^{r-q} \otimes a^{\otimes 2} \in \widehat{E}_\infty^{1+r-q, 2q} \,.
$$ For $p$ odd and any $i \in \{0,1\}$, $r \in \mathbb{Z}$ and $a \in H^q(B)$, the element $\Sigma x^i y^r \otimes a \in R_+(H^*(B))$ is represented in the Tate spectral sequence by $$ (-1)^q \nu(q) \cdot \Sigma x^i y^{r-mq} \otimes a^{\otimes p} \in \widehat{E}_\infty^{1+i+2r-(p-1)q, pq} \,. $$ \end{prop} \begin{proof} This follows by dualization from the homological case. In the special case $B = S^q$, $\Sigma x^i y^r \otimes a^{\otimes p} \in H_c^*(R_+(B)) \cong \widehat{H}^{-*}(C_p; H^*(S^q)^{\otimes p})$ is represented by $(-1)^q \nu(q)^{-1} \cdot \Sigma x^i y^{r+mq} \otimes a \in R_+(H^*(S^q))$, by the explicit isomorphism given in the proof of Theorem~\ref{thm_TopSinger}. The formula for the cohomological Tate spectral sequence representative follows. \end{proof} \begin{cor} \label{cor_comparefiltrations} The Tate filtration $$ \{F^nR_+(H^*(B))\}_n $$ of the Singer construction $R_+(H^*(B))$ corresponds, under the isomorphism $R_+(H^*(B)) \cong H_c^*(R_+(B))$, to the Boardman filtration $$ \{F^nH_c^*(R_+(B))\}_n $$ of $H_c^*(R_+(B))$. \end{cor} \begin{proof} For each integer~$n$, the Boardman filtration $F^nH_c^*(R_+(B))$ equals the image of $H^*((B^{\wedge p})^{tC_p}[n])$ in $H^*((B^{\wedge p})^{tC_p})$, which is the part of $H^*((B^{\wedge p})^{tC_p})$ represented in filtrations $\ge n$ at the $\widehat{E}_\infty$-term. This corresponds to the part of the Singer construction $R_+(H^*(B))$ spanned by the monomials $\Sigma x^r \otimes a$ with $1+r-q \ge n$ for $p=2$, and by the monomials $\Sigma x^i y^r \otimes a$ with $1+i+2r-(p-1)q \ge n$ for $p$ odd, which precisely equals the $n$-th term $F^nR_+(H^*(B))$ of the Tate filtration, as defined in \textsection \ref{subsec_homalgsinger}. \end{proof} \begin{bibdiv} \begin{biblist} \bib{AGM1985}{article}{ author={Adams, J. F.}, author={Gunawardena, J. H.}, author={Miller, H.}, title={The Segal conjecture for elementary abelian $p$-groups}, journal={Topology}, volume={24}, date={1985}, number={4}, pages={435--460}, } \bib{AR2002}{article}{ author={Ausoni, Ch.}, author={Rognes, J.}, title={Algebraic $K$-theory of topological $K$-theory}, journal={Acta Math.}, volume={188}, date={2002}, number={1}, pages={1--39}, } \bib{B1999}{article}{ author={Boardman, J. M.}, title={Conditionally convergent spectral sequences}, conference={ title={Homotopy invariant algebraic structures}, address={Baltimore, MD}, date={1998}, }, book={ series={Contemp. Math.}, volume={239}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={1999}, pages={49--84}, } \bib{BHM1993}{article}{ author={B{\"o}kstedt, M.}, author={Hsiang, W. C.}, author={Madsen, I.}, title={The cyclotomic trace and algebraic $K$-theory of spaces}, journal={Invent. Math.}, volume={111}, date={1993}, number={3}, pages={465--539}, } \bib{BM1994}{article}{ author={B{\"o}kstedt, M.}, author={Madsen, I.}, title={Topological cyclic homology of the integers}, note={$K$-theory (Strasbourg, 1992)}, journal={Ast\'erisque}, number={226}, date={1994}, pages={7--8, 57--143}, } \bib{B1}{article}{ author={B{\"o}kstedt, M.}, title={Topological Hochschild homology}, note={Bielefeld preprint}, date={ca.~1986}, } \bib{BBLR}{article}{ author={B{\"o}kstedt, M.}, author={Bruner, R. R.}, author={Lun{\o}e-Nielsen, S.}, author={Rognes, J.}, title={On cyclic fixed points of spectra}, note={arXiv:0712.3476 preprint}, date={2007}, } \bib{B2000}{article}{ author={Brun, M.}, title={Topological Hochschild homology of ${\bf Z}/p^n$}, journal={J. Pure Appl.
Algebra}, volume={148}, date={2000}, number={1}, pages={29--76}, } \bib{BMMS}{book}{ author={Bruner, R. R.}, author={May, J. P.}, author={McClure, J. E.}, author={Steinberger, M.}, title={$H_\infty $ ring spectra and their applications}, series={Lecture Notes in Mathematics}, volume={1176}, publisher={Springer-Verlag}, place={Berlin}, date={1986}, pages={viii+388}, } \bib{CE}{book}{ author={Cartan, H.}, author={Eilenberg, S.}, title={Homological algebra}, publisher={Princeton University Press}, place={Princeton, N. J.}, date={1956}, pages={xv+390}, } \bib{CMP1987}{article}{ author={Caruso, J.}, author={May, J. P.}, author={Priddy, S. B.}, title={The Segal conjecture for elementary abelian $p$-groups. II. $p$-adic completion in equivariant cohomology}, journal={Topology}, volume={26}, date={1987}, number={4}, pages={413--433}, } \bib{G1987}{article}{ author={Greenlees, J. P. C.}, title={Representing Tate cohomology of $G$-spaces}, journal={Proc. Edinburgh Math. Soc. (2)}, volume={30}, date={1987}, number={3}, pages={435--443}, } \bib{G1994}{article}{ author={Greenlees, J. P. C.}, title={Tate cohomology in commutative algebra}, journal={J. Pure Appl. Algebra}, volume={94}, date={1994}, number={1}, pages={59--83}, } \bib{GM1995}{article}{ author={Greenlees, J. P. C.}, author={May, J. P.}, title={Generalized Tate cohomology}, journal={Mem. Amer. Math. Soc.}, volume={113}, date={1995}, number={543}, pages={viii+178}, } \bib{HM1997}{article}{ author={Hesselholt, L.}, author={Madsen, I.}, title={On the $K$-theory of finite algebras over Witt vectors of perfect fields}, journal={Topology}, volume={36}, date={1997}, number={1}, pages={29--101}, } \bib{HM2003}{article}{ author={Hesselholt, L.}, author={Madsen, I.}, title={On the $K$-theory of local fields}, journal={Ann. of Math. (2)}, volume={158}, date={2003}, number={1}, pages={1--113}, } \bib{HSS2000}{article}{ author={Hovey, M.}, author={Shipley, B.}, author={Smith, J.}, title={Symmetric spectra}, journal={J. Amer. Math. Soc.}, volume={13}, date={2000}, number={1}, pages={149--208}, } \bib{LMS}{book}{ author={Lewis, L. G., Jr.}, author={May, J. P.}, author={Steinberger, M.}, author={McClure, J. E.}, title={Equivariant stable homotopy theory}, series={Lecture Notes in Mathematics}, volume={1213}, note={With contributions by J. E. McClure}, publisher={Springer-Verlag}, place={Berlin}, date={1986}, pages={x+538}, } \bib{LS1982}{article}{ author={Li, H. H.}, author={Singer, W. M.}, title={Resolutions of modules over the Steenrod algebra and the classical theory of invariants}, journal={Math. Z.}, volume={181}, date={1982}, number={2}, pages={269--286}, } \bib{LDMA1980}{article}{ author={Lin, W. H.}, author={Davis, D. M.}, author={Mahowald, M. E.}, author={Adams, J. F.}, title={Calculation of Lin's Ext groups}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={87}, date={1980}, number={3}, pages={459--469}, } \bib{LR2}{article}{ author={Lun{\o}e-Nielsen, S.}, author={Rognes, J.}, title={The Segal conjecture for topological Hochschild homology of complex cobordism}, note={arXiv:1010.???? preprint}, date={2010}, } \bib{R1999}{article}{ author={Rognes, J.}, title={Topological cyclic homology of the integers at two}, journal={J. Pure Appl. Algebra}, volume={134}, date={1999}, number={3}, pages={219--286}, } \bib{Schwede}{book}{ author={Schwede, S.}, title={An untitled book project about symmetric spectra}, eprint={http://www.math.uni-bonn.de/people/schwede/SymSpec.pdf}, note={(in preparation)}, date={2007}, } \bib{S1980}{article}{ author={Singer, W. 
M.}, title={On the localization of modules over the Steenrod algebra}, journal={J. Pure Appl. Algebra}, volume={16}, date={1980}, number={1}, pages={75--84}, } \bib{S1981}{article}{ author={Singer, W. M.}, title={A new chain complex for the homology of the Steenrod algebra}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={90}, date={1981}, number={2}, pages={279--292}, } \bib{W1977}{article}{ author={Wilkerson, C.}, title={Classifying spaces, Steenrod operations and algebraic closure}, journal={Topology}, volume={16}, date={1977}, number={3}, pages={227--237}, } \end{biblist} \end{bibdiv} \end{document}
\section{Challenges Discovered} \label{sec:challenges} In this section, we present evidence of the two issues, namely the Cardinality Mismatch and high client scan latencies. We compare the severity of these issues across the two frequency bands, identify the causes behind them, and measure the impact of those causes. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[scale=0.5]{FPMAP-Cardinality.pdf} \caption{Offline Phase} \label{Fig:FPMAP-Cardinality} \end{subfigure}% \qquad \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[scale=0.5]{ReportingAPs.pdf} \caption{Online Phase} \label{Fig:ReportingAPs} \end{subfigure} \caption{Cardinalities observed during the offline and online phases. In both phases, the cardinalities are lower for $5$ GHz. During the online phase, there is a substantial decrease in the cardinality for both bands as compared to the offline phase.}\label{Fig:cardinality_issues} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[scale=0.5]{ClientCloseToAP-ScanVsNoScan.pdf} \caption{Client Close To AP} \label{Fig:ClientCloseToAP-ScanVsNoScan} \end{subfigure}% \qquad \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[scale=0.5]{ClientFarFromAP-ScanVsNoScan.pdf} \caption{Client Far From AP} \label{Fig:ClientFarFromAP-ScanVsNoScan} \end{subfigure}% \caption{Variations in RSSI for scanning and non-scanning frames in two scenarios -- ($a$) client close to the AP and ($b$) client far from the AP. In both cases the RSSI of scanning frames varies far less than that of non-scanning frames.}\label{Fig:rssi_mismatch_scanvsnoscan} \end{figure*} \begin{figure}[t] \centering \includegraphics[scale=0.6]{Hist-RSSIs-ScanFreq.pdf} \caption{The frequency of scanning in both bands increases as the RSSI decreases. $2.4$ GHz experiences higher scan frequencies.} \label{Fig:Hist-RSSIs-ScanFreq} \end{figure} \subsection{Evidence of the Issues} The Cardinality Mismatch arises from the dynamic power and client management performed by a centralized controller, as well as from client-side power management. Given the dynamic nature of these management policies, it is not possible to estimate their implications for the Cardinality Mismatch, and thereby for the localization errors. We take an empirical approach to see whether we can ($a$) determine the severity of these implications for the Cardinality Mismatch and the localization error, and ($b$) identify which of the implicated factors are tunable. Figure~\ref{Fig:cardinality_issues} plots the differences in cardinality between the offline and online phases for $2.4$ and $5$ GHz. Figure~\ref{Fig:FPMAP-Cardinality} shows the cardinality observed in our offline fingerprints. Figure~\ref{Fig:ReportingAPs} shows the cardinality observed during the online phase. While the maximum cardinality is $16$ during the offline phase, it is merely $6$ in the online phase. This shows the extent of the Cardinality Mismatch. In the online phase, only $1$ AP reports for a client $80$\% of the time in $5$ GHz, and $40$\% of the time in $2.4$ GHz. Any fingerprint-based algorithm will be adversely affected by such a large difference in cardinality. For each band, we quantify the extent of the Cardinality Mismatch. Overall, across all the cardinalities, $2.4$ GHz has $57.30\%$ mismatches and $5$ GHz has $30.6\%$ mismatches.
The $5$ GHz band is more adversely affected by the Cardinality Mismatch issue because it experiences lower cardinality, which increases the chances of a mismatch. $2.4$ GHz always sees higher cardinality than $5$ GHz, both during the offline and online phases. This is because -- ($a$) signals in $2.4$ GHz travel farther than those in $5$ GHz, and ($b$) a larger number of scanning frames are transmitted in $2.4$ GHz. Unlike data frames, scanning frames are broadcast and hence heard by more APs. As the number of scanning frames increases, more APs hear them and respond, thereby increasing the cardinality. Besides, the RSSI variation for scanning frames is smaller than that for data frames. To validate this, we perform a controlled experiment with a stationary client and collect the client's traffic using a sniffer. The client has an ongoing data transmission, and periodic scanning is triggered every $15$ seconds. From the sniffed packet capture, we extract per-frame RSSI. The experiment is repeated for two scenarios -- ($a$) the client is close to the AP and ($b$) the client is far from the AP. With these two scenarios, we emulate the client behavior for high and low RSSI from the AP. Figure~\ref{Fig:rssi_mismatch_scanvsnoscan} shows the RSSI measurements in the two cases. In the first scenario, when the client is close to an AP, the RSSI of scanning frames varies by up to $10$ dB while that of non-scanning frames varies by up to $50$ dB. Similarly, in the second scenario, when the client is far from the AP, the RSSI of scanning frames varies by up to $5$ dB while that of non-scanning frames varies by up to $30$ dB. Both experiments validate that the RSSI from scanning frames varies far less than that from non-scanning frames. This means the online RSSI from scanning frames matches the offline fingerprints more closely and is a much more reliable indicator of the client's position. We want to study how clients in their default configuration behave in \emph{real} networks; therefore we do not modify the default behavior of the client driver in any way. We repeated the experiment with Samsung, Nexus, Xiaomi, and iPhone devices. Next, we study the effect of the band on the frequency of scanning. We collect WiFi traffic for $6$ hours with sniffers listening on the channels in operation at that time in both bands. Data from $200$ WiFi clients is recorded. Figure~\ref{Fig:Hist-RSSIs-ScanFreq} shows the plot. For both the $2.4$ and $5$ GHz bands, the scanning frequency increases as RSSI reduces. Overall, the frequency is lower for $5$ GHz, even though most clients in our network ($\approx 2\times$ as many) associate in $5$ GHz. The higher the frequency of scanning, the lower the chance of a Cardinality Mismatch. Our comparative analysis of the two bands revealed that the instances of frame loss and poor connection quality, which trigger scanning, are much rarer in $5$ GHz due to lower interference. The analysis of the scanning behavior of our clients reveals that -- ($a$) the $90^{th}$ percentile of scan intervals is on the order of a few thousand seconds, which is a lot for fingerprint-based solutions, ($b$) $5$ GHz is the least preferred band for scanning, and ($c$) clients rarely scan both frequency bands. Hence, we rule out the possibility that the reduced range of $5$ GHz is what results in fewer scanning frames.
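The per-frame RSSI comparison from the controlled experiment above is straightforward to reproduce. Below is a minimal Python sketch, assuming the sniffer capture has already been parsed into (frame type, RSSI) pairs; the parsing step and field names are illustrative assumptions.
\begin{verbatim}
def rssi_spread(frames, frame_type):
    """Spread (max - min, in dB) of RSSI for one class of frames."""
    rssi = [r for t, r in frames if t == frame_type]
    return max(rssi) - min(rssi)

frames = [("scan", -52), ("scan", -55), ("scan", -50),
          ("data", -48), ("data", -75), ("data", -60)]
print("scanning spread:", rssi_spread(frames, "scan"))  # small
print("data spread:", rssi_spread(frames, "data"))      # large
\end{verbatim}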
\subsection{Causes Behind the Issues} Next, we study the combined effect of the frequency of scanning, \textit{i.e.}, the number of scans per hour, and the transmission distance on the cardinality.
For this, we consider clients configured in one of the four states discussed in Section~\ref{subsubsec:gt_collection}. Note that each state implicitly controls the amount of scanning. We do not manually control the scanning behavior, so as to imitate the real world. Figure~\ref{fig:ISAT} shows the interval between two consecutive scans, \textit{i.e.}, the inter-scan arrival time, observed for each client state when the client moves from one landmark to another. We find that the median scan intervals are $15$-$20$ seconds for a client whose WiFi is used intermittently or continuously, while they are $26$-$47$ seconds for a client whose WiFi is disconnected or not actively used. However, in all cases, the $90^{th}$ percentile values are in thousands of seconds, which signifies that clients scan infrequently in reality.
\begin{figure}[t] \centering \includegraphics[scale=0.6]{ISAT.pdf} \caption{Inter Scan Arrival Time for a client in four states -- Disconnected, Inactive, Intermittent, and Active. The intermittent and active states have median scanning times of $16$-$19$ seconds, while they increase to $26$-$47$ seconds for the inactive and disconnected clients. Notice that the upper-quartile inter scan arrival times go up to thousands of seconds, which means clients mostly do not scan, hence the reduced cardinality.} \label{fig:ISAT} \end{figure}
\begin{table*}[h!] \centering \small \begin{tabular}{m{4cm}m{2.4cm}m{1.7cm}m{3cm}m{1cm}} \toprule \textbf{Phone} & \textbf{Disconnected} & \textbf{Inactive} & \textbf{Intermittent} & \textbf{Active}\\ \hline iPhone $6$ (iOS $10.3$) & Random & \ding{55} & \ding{55} & \ding{55}\\ Nexus $5$X (Android $7.1$) & $100$-$300$s & \ding{55} & Screen Off$\rightarrow$On & Once\\ Galaxy S$7$ (Android $6.0$) & $100$-$2600$s & $1200$s & Screen Off$\rightarrow$On & \ding{55}\\ Galaxy S$3$ (Android $4.0$) & $240$s & $300$-$1300$s & Screen Off$\rightarrow$On & \ding{55}\\ Moto G$4$ (Android $7.1$) & $100$-$150$s & $10$-$1000$s & Screen Off$\rightarrow$On & $15$s\\ SonyXperia (Android $6.0$) & Once & $5$-$30$s & Screen Off$\rightarrow$On & Once\\ \bottomrule \end{tabular} \caption{Scanning behavior of the stationary clients. Most devices either do not trigger scans or perform minimal scans while in the Disconnected, Inactive, or Active states. It may take up to $2600$ seconds for devices to trigger a scan. This latency reduces the localization accuracy. An exception is the Intermittent state, in which all the Android devices trigger scans as and when the screen is turned on; otherwise the devices stay silent. The reduced number of active scans across the states results in inaccurate localization.} \label{tbl:scan_behavior} \end{table*}
We evaluate the scanning behavior of stationary clients by closely monitoring $6$ different phone models. Table~\ref{tbl:scan_behavior} summarizes the measurements and confirms our observations of the reduced scans. Unlike mobile clients, stationary clients tend to scan much less. The clients trigger a scan almost every time the screen is turned on while in the Intermittent state. This is especially true for the Android clients. However, scanning is infrequent in all the other states, even in the Active state. Such behavior results in reduced cardinality and therefore inaccurate localization.
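The inter-scan arrival time analysis above reduces to computing gaps between consecutive scan timestamps per client. A minimal sketch, assuming scan events have already been extracted as per-client epoch timestamps; the names are illustrative:
\begin{verbatim}
from statistics import median

def inter_scan_times(scan_timestamps):
    """Gaps (seconds) between consecutive scans of one client."""
    ts = sorted(scan_timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

def percentile(values, p):
    """Nearest-rank percentile, e.g. p=90 for the 90th percentile."""
    v = sorted(values)
    k = max(0, min(len(v) - 1, round(p / 100 * len(v)) - 1))
    return v[k]

gaps = inter_scan_times([0, 16, 35, 60, 2100])
print(median(gaps), percentile(gaps, 90))  # median small, tail huge
\end{verbatim}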
\begin{figure*} \scriptsize \centering \begin{tabular}{cc} \includegraphics[scale=0.5]{SOffDc_24GHz.pdf}& \hspace{0.1in} \includegraphics[scale=0.5]{SOffDc_5GHz.pdf} \\(a) \textbf{State: Disconnected - $2.4$ GHz} & (b) \textbf{State: Disconnected - $5$ GHz}\\ \includegraphics[scale=0.5]{SOffCo_24GHz.pdf}& \hspace{0.1in} \includegraphics[scale=0.5]{SOffCo_5GHz.pdf} \\(c) \textbf{State: Inactive - $2.4$ GHz} & (d) \textbf{State: Inactive - $5$ GHz} \\ \includegraphics[scale=0.5]{SOffOn_24GHz.pdf}& \hspace{0.1in} \includegraphics[scale=0.5]{SOffOn_5GHz.pdf} \\(e) \textbf{State: Intermittent - $2.4$ GHz } & (f) \textbf{State: Intermittent - $5$ GHz} \\ \includegraphics[scale=0.5]{SOnOn_24GHz.pdf}& \hspace{0.1in} \includegraphics[scale=0.5]{SOnOn_5GHz.pdf} \\ (g) \textbf{State: Active - $2.4$ GHz} & (h) \textbf{State: Active - $5$ GHz}\\ \end{tabular} \caption{The number of APs reporting in the absence and presence of scanning for the $4$ client states -- Disconnected, Inactive, Intermittent, and Active. When a client is disconnected, the APs report only when the client scans (Figures ($a$) and ($b$)). For the remaining $3$ states, APs report irrespective of the scanning status; however, the number of APs is consistently higher when the client scans. $2.4$ GHz always has more APs reporting than $5$ GHz. } \label{Fig:cardinality_scanvsnoscan_statewise} \end{figure*}
In the absence of client scans, APs receive only non-scanning frames. For each of the four client states, we study how many APs report that client, \textit{i.e.,} the cardinality. With this analysis, we are able to compare the cardinality in the presence and absence of scans, for both the $2.4$ and $5$ GHz bands. We see that as the frequency of scanning increases, more APs respond and the cardinality increases. We expand the result for each state of the client in Figure~\ref{Fig:cardinality_scanvsnoscan_statewise}. The cardinality is consistently higher for $2.4$ GHz than for $5$ GHz, and it further increases in the presence of scans. This implies that a higher frequency of scanning likely reduces the Cardinality Mismatch, and vice versa.
\begin{figure}[t] \centering \includegraphics[scale=0.6]{24vs5.pdf} \caption{Variations in RSSI in $2.4$ GHz and $5$ GHz for a stationary client as measured at the server. Notice that $5$ GHz is relatively more stable than $2.4$ GHz.} \label{Fig:RSSIFluctuations-24vs5} \end{figure}%
\begin{table}[t] \begin{subtable}{.5\textwidth} \scriptsize \centering \begin{tabular}{@{} m{1.5cm}m{2cm}m{1.5cm}m{2cm} @{}} \toprule \textbf{Frames} & \textbf{Transmission Distance} & \textbf{RSSI Variation} & \textbf{Frequency of Transmission}\\ \hline Scanning & \emph{High} -- \cmark & \emph{Low} -- \cmark & \emph{Low} -- \ding{55} \\ Non-Scanning & \emph{Low} -- \ding{55} & \emph{High} -- \ding{55} & \emph{High} -- \cmark \\ \bottomrule \end{tabular} \end{subtable} \\ \begin{subtable}{.5\textwidth} \scriptsize \centering \begin{tabular}{@{} m{1.5cm}m{2cm}m{1.5cm}m{2cm} @{}} \toprule \textbf{Band} & \textbf{Transmission Distance} & \textbf{RSSI Variation} & \textbf{Frequency of Scanning}\\ \hline $2.4$ GHz & \emph{High} -- \cmark & \emph{High} -- \ding{55} & \emph{High} -- \cmark \\ $5$ GHz & \emph{Low} -- \ding{55} & \emph{Low} -- \cmark & \emph{Low} -- \ding{55} \\ \bottomrule \end{tabular} \end{subtable} \caption{A summary of the causes and their impact (\cmark - Reduces Localization Errors, \ding{55} - Increases Localization Errors). The causes conflict with each other, making server-side localization non-trivial.} \label{tbl:observations_summary} \end{table}
However, a downside for $2.4$ GHz is that its frames (both scanning and non-scanning) have higher variation in RSSI. This means that even though the extent of the Cardinality Mismatch is lower, the RSSI will differ more in $2.4$ GHz. To confirm this, we analyze the RSSI from a stationary client by enabling its association in one band at a time and disabling the other band altogether. We use the RSSI recorded at the RTLS server for a duration of $1$ hour. As shown in Figure~\ref{Fig:RSSIFluctuations-24vs5}, even with more scanning information available, $2.4$ GHz is more prone to RSSI fluctuations than $5$ GHz. The reasons for this behavior are ($a$) the range of $2.4$ GHz is almost double that of $5$ GHz and ($b$) the smaller number of non-overlapping channels makes it more susceptible to interference. Therefore, RSSI from $2.4$ GHz results in predicting distant and transient locations. We validated this at different locations with four other device models. To summarize, there is a significant extent of both Cardinality Mismatch and High Client Scan Latency.
There is a difference in the extent of the issues for the two classes of frames and the two bands of operation. While scanning frames have a longer transmission distance and less variation in RSSI, clients send them infrequently. The factors favoring $2.4$ GHz are its longer transmission distance and higher frequency of scanning. However, the low variation in RSSI works in favor of $5$ GHz. We summarize these observations in Table~\ref{tbl:observations_summary}.
\begin{figure*}[t] \centering \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[scale=0.3]{images/Baseline-SameFloor.jpg} \caption{Same Floor} \label{Fig:Baseline-SameFloor} \end{subfigure}% \qquad \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[scale=0.3]{images/Baseline-FloorErrors.jpg} \caption{Different Floor} \label{Fig:Baseline-FloorErrors} \end{subfigure}% \caption{Localization errors reported with the baseline fingerprinting algorithm. The errors are measured as Same Floor errors in meters and Different Floor errors in percentage of floors for cardinalities ranging from $1$ to $6$. Cardinalities \textgreater $3$ are Not Applicable to $5$ GHz due to the cardinality mismatch.}\label{Fig:accuracy_results_baseline} \end{figure*}
\subsection{Impact of Causes on Localization Errors} We now evaluate the impact of the causes on the localization errors. We implemented server-side localization using the well-known fingerprint-based method~\cite{radar}. Since we use server-side processing, we do not require any client-side modification. Our proposals make no assumptions about the hardware or operating system of the clients or the controller. Although each adaptation of the fingerprint-based technique in the existing body of work may yield different errors, our exercise gives us a baseline that cuts across all the adaptations. The $2.4$ and $5$ GHz bands differ in transmission distance, variation in RSSI, and frequency of scanning. We measure the localization errors for both bands. We report localization errors for each value of the cardinality in the online phase to understand how the error varies as a function of the cardinality. We consider a multi-storey building; hence, a client may be erroneously localized to a different floor, irrespective of its current position. Therefore, we measure the errors in terms of ($a$) Different Floor and ($b$) Same Floor errors. The Different Floor error is the percentage of total records for which a wrong floor is estimated.
These wrong-floor estimates are seen at the higher percentiles. For the rest of the records, the Same Floor error is the distance in meters between the actual and the predicted landmark on the floor. The errors at the higher percentiles are essential for security applications; for example, an error of a floor in localizing a suspect can make or break the evidence. We want to minimize both errors. Figure~\ref{Fig:accuracy_results_baseline} shows the results for the baseline fingerprinting-based localization. Overall, we observe that the errors are high for low cardinalities, and the errors for $2.4$ GHz are larger than those for $5$ GHz. This is despite the fact that more APs hear clients in $2.4$ GHz and scanning frames are transmitted more frequently there. We conclude that the variation in RSSI, including that induced by transmission power control, has a significant impact on the cardinality and therefore on the localization errors.
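The two error metrics defined above are simple to compute once each online record is paired with its ground-truth landmark. A minimal sketch, assuming a landmark is represented as a (floor, x, y) tuple; this representation is an illustrative assumption:
\begin{verbatim}
from math import hypot

def floor_error_pct(records):
    """Different Floor error: % of records on the wrong floor.
    records: list of (predicted, actual) landmark tuples."""
    wrong = sum(1 for pred, true in records if pred[0] != true[0])
    return 100.0 * wrong / len(records)

def same_floor_errors(records):
    """Same Floor error: distance (m) between predicted and actual
    landmarks, over records where the floor was correct."""
    return [hypot(p[1] - t[1], p[2] - t[2])
            for p, t in records if p[0] == t[0]]

records = [((2, 0.0, 0.0), (2, 3.0, 4.0)),  # same floor, 5 m off
           ((3, 0.0, 0.0), (2, 0.0, 0.0))]  # wrong floor
print(floor_error_pct(records), same_floor_errors(records))
\end{verbatim}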
\begin{figure*}[t] \centering \begin{subfigure}[t]{\linewidth} \centering \includegraphics[scale=0.3]{images/LocalizationErrors75th-24GHz.jpg} \, \includegraphics[scale=0.3]{images/StartPercentileOfFloorError24GHz.jpg} \caption{2.4 GHz} \label{Fig:LocalizationErrors75th-2.4GHz} \end{subfigure}% \begin{subfigure}[t]{\linewidth} \centering \includegraphics[scale=0.3]{images/LocalizationErrors75th-5GHz.jpg} \, \includegraphics[scale=0.3]{images/StartPercentileOfFloorError5GHz.jpg} \caption{5 GHz} \label{Fig:LocalizationErrors75th-5GHz} \end{subfigure}% \caption{Bifurcation of localization errors with and without scanning frames for $2.4$ and $5$ GHz. Localizing a client with only scanning frames reduces errors in $2.4$ GHz, while using only non-scanning frames reduces errors in $5$ GHz.} \label{Fig:localizationerrors_scanNoScan} \end{figure*}
In order to understand the significance of scanning frames for localization errors, we study the spectrum of errors caused by each type of frame -- scanning and non-scanning. The results in Figure~\ref{Fig:accuracy_results_baseline} combine both types of frames; we now bifurcate the results for each frame type. The analysis is categorized as -- ($a$) both scanning and non-scanning frames, ($b$) only scanning frames, and ($c$) only non-scanning frames. Figure~\ref{Fig:localizationerrors_scanNoScan} shows the localization errors on the same floor and different floors for both frequency bands. Scanning frames not only reduce the same-floor localization errors in $2.4$ GHz, but floor errors also start only at the $85^{th}$ percentile. With non-scanning frames, floor errors start as low as the $62^{nd}$ percentile. The reason is that scanning frames, as opposed to non-scanning frames, do not incorporate transmit power control, implying little or no RSSI variation, which in turn helps the online and offline fingerprints match closely. This improves accuracy in $2.4$ GHz. Non-scanning frames, in contrast, increase the localization errors in $2.4$ GHz. The reason is that non-scanning frames fetch RSSI only from the AP to which the client is associated and from APs on the same or overlapping channels as the AP of association. A client may connect to a farther AP because ($a$) in $2.4$ GHz, a farther AP can temporarily have better RSSI, ($b$) algorithms such as load balancing cause overloaded APs, which may be nearer to the client, to not respond while farther APs do, and ($c$) sticky clients do not disassociate from an AP with low RSSI, a limitation of client drivers. The results for $5$ GHz contrast with those for $2.4$ GHz. Non-scanning frames reduce the localization errors in $5$ GHz. Importantly, the $5$ GHz band hardly encounters floor errors, as seen in Figure~\ref{Fig:LocalizationErrors75th-5GHz}. The primary reason is the $2\times$ smaller range of $5$ GHz ($15$ meters) compared to $2.4$ GHz ($30$ meters). Non-scanning frames in $5$ GHz fetch RSSI from the AP to which the client is associated, which is the strongest AP. Hence, the RSSI from this AP has minimal variation due to reduced interference. Therefore, it matches the fingerprint maps closely and reduces errors. However, the scanning frames in $5$ GHz do not fetch RSSI from the AP to which the client is associated, that is, the strongest AP.
RSSI from other APs, with which the client is not associated, may or may not match the offline fingerprints closely. Hence, the accuracy suffers. We also analyze the localization errors when a mix of scanning and non-scanning frames is used for localizing the client. In this case, the errors range between the numbers reported for only scanning and only non-scanning frames. Overall, we see that $5$ GHz outperforms $2.4$ GHz, irrespective of which type of frame is considered for the localization. However, the localization accuracy in $2.4$ GHz is severely affected by the minimal scanning. Thus, preferring scanning frames in $2.4$ GHz and non-scanning frames in $5$ GHz is beneficial for improving localization accuracy.
\begin{figure*}[t] \centering \begin{subfigure}[t]{\linewidth} \centering \includegraphics[scale=0.3]{images/MaximumNoOfAPs-SF.jpg} \, \includegraphics[scale=0.3]{images/APwithMaxRSSI-SF.jpg} \, \includegraphics[scale=0.3]{images/APofAssoc-SF.jpg} \caption{Same Floor Errors} \label{Fig:heuristics-samefloor} \end{subfigure}% \\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[scale=0.3]{images/MaximumNoOfAPs-FloorErrors.jpg} \, \includegraphics[scale=0.3]{images/APwithMaxRSSI-FloorErrors.jpg} \, \includegraphics[scale=0.3]{images/APofAssoc-FloorErrors.jpg} \caption{Different Floor Errors} \label{Fig:heuristics-differentfloor} \end{subfigure}% \caption[Localization errors with three floor detection heuristics.]{The localization errors with three floor detection heuristics. The lowest localization errors are seen for the heuristic -- AP of Association. On average, across all the cardinalities, the AP of Association heuristic reduces the same floor and different floor errors by $58$\% and $78$\% in $2.4$ GHz, and by $3.8$\% and $46$\% in $5$ GHz, respectively. Note that cardinalities greater than $3$ are not applicable in $5$ GHz.\label{Fig:localizationerrors_heuristics}} \end{figure*}
\subsection{Reducing the Localization Errors} \label{sec:solution} The challenge in large-scale deployments is that clients or APs cannot be modified. Hence, the odds of installing mobile apps that can trigger scans are very low. Furthermore, to save bandwidth, the latest phones avoid triggering scans until absolutely necessary. Therefore, even if the scanning frames have the potential to improve the localization accuracy in $2.4$ GHz, their frequency is not in one's control. Since the identified causes in Table~\ref{tbl:observations_summary} conflict with each other, getting rid of the two issues -- Cardinality Mismatch and High Client Scan Latency -- is not trivial. For improvement, we take the position of making the best use of whatever RSSI we receive during the online phase. We use heuristics to select APs from the online fingerprint to reduce localization errors. We know the location of each AP. We use this information to shortlist the APs from the online fingerprint. The algorithm first selects a floor and then shortlists all the APs located on that floor. We use the shortlisted APs to find a match with the offline fingerprints. For selecting the floor, we explore three heuristics -- ($a$) Maximum Number of APs -- the floor from which the maximum number of APs report the client, ($b$) AP with Maximum RSSI -- the floor from which the strongest RSSI is received, and ($c$) AP of Association -- the floor of the AP with which the client is presently associated; we sketch these heuristics below.
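A minimal Python sketch of the three heuristics and the AP shortlisting, assuming the online fingerprint is a list of (AP, RSSI) reports, \texttt{ap\_floor} maps each AP to its floor, and \texttt{assoc\_ap} is the AP of association; these names are illustrative, not our production interface:
\begin{verbatim}
from collections import Counter

def floor_max_aps(reports, ap_floor):
    """(a) Floor from which the most APs report the client."""
    return Counter(ap_floor[ap] for ap, _ in reports).most_common(1)[0][0]

def floor_max_rssi(reports, ap_floor):
    """(b) Floor of the AP reporting the strongest RSSI."""
    ap, _ = max(reports, key=lambda r: r[1])
    return ap_floor[ap]

def floor_of_association(assoc_ap, ap_floor):
    """(c) Floor of the AP the client is associated with."""
    return ap_floor[assoc_ap]

def shortlist(reports, ap_floor, floor):
    """Keep only APs on the selected floor before fingerprint matching."""
    return [(ap, rssi) for ap, rssi in reports if ap_floor[ap] == floor]

ap_floor = {"ap1": 1, "ap2": 2, "ap3": 2}
reports = [("ap1", -70), ("ap2", -55), ("ap3", -65)]
floor = floor_max_rssi(reports, ap_floor)   # -> 2
print(shortlist(reports, ap_floor, floor))  # keeps ap2, ap3
\end{verbatim}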
Figure~\ref{Fig:localizationerrors_heuristics} shows how the localization errors vary for the three heuristics at the $85^{th}$ percentile. There is clearly an improvement in both Same Floor and Different Floor errors. Floor detection with Maximum Number of APs gives the least improvement. In fact, up to cardinality $4$ it performs worse than the Baseline. A cause behind this is that distant APs respond, especially in $2.4$ GHz with its longer transmission distance, and thus the localization errors increase. Next come floor detection with the AP with Maximum RSSI and with the AP of Association. These APs are typically closest to the client, except when the controller performs load balancing and transmit power control. There is a marginal improvement for $5$ GHz. Since Figure~\ref{Fig:localizationerrors_heuristics} only shows the data for the $85^{th}$ percentile, we plot the CDF of the error for cardinality $=1$ in Figure~\ref{Fig:Card1-Accuracy-CDF} for $2.4$ GHz. We see that the error reduces at all percentiles. We see similar results for other cardinalities but do not include them due to space constraints.
\begin{figure}[t] \centering \includegraphics[scale=0.6]{Card1-24GHz-Accuracy.pdf} \caption{Localization errors with three floor detection heuristics -- ($a$) Maximum Number of APs, ($b$) AP with Maximum RSSI, and ($c$) AP of Association at $Cardinality$=$1$ for $2.4$ GHz. AP of Association performs the best for both bands. Maximum Number of APs performs worse than the Baseline, while the other two significantly reduce the errors.} \label{Fig:Card1-Accuracy-CDF} \end{figure}
We compare our results with SignalSLAM~\cite{signalSLAM}, which was deployed in a public space (a mall), since we have similar deployments; we have similar observations in our other venues too. Their $90^{th}$ percentile error is about $15$ meters, and we perform similarly. In fact, their AP-visibility algorithm has a $90^{th}$ percentile of $24.3$ meters; we perform better than this in $5$ GHz. Given the complexity of the algorithm SignalSLAM incorporates and the amount of sensing it needs, we believe our approach is preferable even if it concedes a few meters of accuracy, particularly because it is simple and scalable.
\section{Discussion} \label{sec:discussion} We now discuss the practical challenges encountered while localizing clients in real deployments and the limitations of our solution.
\subsection{Challenges Of Real Deployments} Real deployments present a myriad of practical challenges that hamper the efficiency and accuracy of an empirical study. For instance, there can be sudden and unexpected crowd movement, which is known to increase signal variations. Furthermore, network administrators replace old APs or deploy new APs as and when required. These administrative decisions are not under our control. However, such changes severely affect the offline fingerprints and change the floor heat maps, ultimately affecting location accuracy. Preparing fingerprints for an entire campus with several thousand landmarks is already tedious; such developments make re-fingerprinting even more cumbersome. Beyond insufficient measurements and latency issues, various contextual dynamics make a fingerprint-based system erroneous. The primary reason is that such dynamic changes result in significant fluctuations in RSSI measurements, which affect the distance calculation of the localization algorithms.
These fluctuations can happen quite frequently, as there are many different factors affecting RSSI between an AP and its clients, such as crowds blocking the signal path, AP-side power control for load balancing, and client-side power control to save battery. In Section~\ref{sec:challenges} we shed light on most of these factors. Lastly, all MAC addresses in our system are anonymized. We do not perform device-to-person mapping, to preserve user privacy.
\subsection{Limitations} A major limitation of this work is that we have not considered an exhaustive set of devices. Given the multitude of device vendors, it is practically impossible to consider every device for this kind of in-depth analysis. We did, however, cover a recent set of devices, including iPhone and Android devices. The second limitation is that even though we collected data for both lightly loaded (semester off, very few students on campus) and heavily loaded (semester on, most students on campus) days, we tested our observations on the lightly loaded dataset but only on a subset of the heavily loaded days. We do not yet know the system's behavior during heavily loaded days in its entirety. Specifically, the increased load, in terms of the number of clients and the traffic, is expected to increase interference and thus signal variations. This study remains future work. The third limitation of this work is that we do not consider the effect of MAC address randomization algorithms, which make clients untrackable. There is an active field of research that suggests ways to map randomized MACs to actual MACs~\cite{MACRandomization}, but given the complexity we do not employ these techniques.
\subsection{Pre-processing of the Data} Now, we present the details of the data collection and its processing.
\begin{figure}[t] \centering \includegraphics[scale=0.3]{FloorMap.jpg} \caption{Floor map of the school where we collected ground truth data.} \label{Fig:floormap} \end{figure}
\subsubsection{\textbf{Collection of the Ground Truth}} \label{subsubsec:gt_collection} We collect the ground truth data for online fingerprints. We want to correlate the data collection with real-world usage scenarios. Therefore, we choose the four most common states of WiFi devices according to their WiFi association status and data transmission. The states are -- ($i$) Disconnected, ($ii$) WiFi Associated -- ($ii.a$) never actively used by the user, ($ii.b$) intermittently used, and ($ii.c$) actively used. These states implicitly modulate the scanning frequency. We use a separate phone for each state; thus, we use $4$ Samsung Galaxy S$7$ phones to record ground truth for each landmark. State ($a$): the client is disconnected. In this state, WiFi is turned on but not associated with any AP, and the screen remains off throughout the data collection. Therefore, the only traffic generated by this client is scanning traffic, with no data traffic. We ensure that this client does not perform MAC address randomization, which most recent devices do in the unassociated state~\cite{MACRandomization}. State ($b$): the client is associated but inactive. WiFi is turned on and associated, but the screen remains off throughout the data collection. State ($c$): the client is associated and used intermittently. WiFi is on, the client is associated, and the user uses the device intermittently. This is one of the most common states for mobile devices, and previous research~\cite{sniffer-based-inference} reports that scanning is triggered whenever the screen of the device is lit up.
State ($d$): the client is associated and actively used. WiFi is on, the client is associated, and a YouTube video plays throughout the data collection. This state generates the most data traffic, \textit{i.e.,} non-scanning traffic. Each client stayed at a landmark for about a minute before moving to the next landmark. We manually noted down the start and end times for every landmark at the granularity of seconds. We did this exercise for the $3203$ landmarks of our university and collected $86$ hours' worth of data, which accounts for $54,096$ files carrying $274$ GB of data. The amount of time to localize a client is $40$ seconds; processing the entire dataset would take $\approx 100$ days. Therefore, given the size of the entire dataset, we present our analysis of $200$ landmarks, which accounts for $3121$ files with $15.3$ GB of data. Figure~\ref{Fig:floormap} shows the floor map of one of the schools whose data we use for our analysis. Our aim is to demonstrate the challenges associated with fingerprint\hyp{}based localization. These challenges apply to all solutions that employ fingerprint-based localization, irrespective of the type of device present in the network. The variation of RSSI with device heterogeneity is well known~\cite{RSSIVariation} and will further exacerbate the problems identified in this paper. We collect ground truth with only one device model so that we can highlight the issues without the complications added by heterogeneous devices.
\subsubsection{\textbf{Pre-processing of the RTLS Data Feeds}} Our code reads every feed to extract the details of the APs reporting a particular client. RTLS data feeds may contain stale records for a client. Therefore, we filter the raw RTLS data feed for the latest values, keeping records with an age of at most $15$ seconds and an RSSI of at least $-72$ dBm. The threshold for age is a heuristic to take the most recent readings. The threshold for RSSI is based on the fact that a client loses association when RSSI is below $-72$ dBm.
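The feed filter described above is a simple predicate over the fields of Table~\ref{tbl:rtls_feeds}. A minimal Python sketch, assuming each feed has been parsed into a dictionary; the parsing step and key names are illustrative assumptions:
\begin{verbatim}
MAX_AGE_S = 15      # heuristic: keep only the most recent sightings
MIN_RSSI_DBM = -72  # clients lose association below this RSSI

def filter_feeds(feeds):
    return [f for f in feeds
            if f["age"] <= MAX_AGE_S and f["rssi"] >= MIN_RSSI_DBM]

feeds = [{"ap": "ap1", "age": 3,  "rssi": -60},   # kept
         {"ap": "ap2", "age": 40, "rssi": -55},   # stale, dropped
         {"ap": "ap3", "age": 2,  "rssi": -80}]   # too weak, dropped
print(filter_feeds(feeds))
\end{verbatim}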
For our analysis, we classify MAC layer frames in two classes: ($a$) \emph{Scanning Frames} -- high-power, low bit-rate probe requests -- and ($b$) \emph{Non-Scanning Frames} -- all other MAC layer frames. Offline fingerprints are derived from the scanning frames, which are known to provide accurate distance estimates as they are transmitted at full power. In the offline phase, a client is configured to scan continuously. However, in the online phase we have no control over the scanning behavior of the client, resulting in a mix of scanning and non-scanning frames. Therefore, while localizing with the fingerprints, the RSSIs available for matching come from both categories of frames. RTLS data feeds do not report the type of frame and do not have a one-to-one mapping of MAC layer frames to feeds. Therefore, we devise a probabilistic approach to identify these frames. We design a set of controlled experiments, where we configure the client in one of two settings at a time: ($a$) send scanning frames only and ($b$) send non-scanning frames only. These two settings are mutually exclusive. We collect the traffic from the client with a sniffer as well as the corresponding RTLS data feeds. Then, we compare both logs -- sniffer and RTLS -- to confirm the frame types and the corresponding data rates. Our analysis reveals that when a client is associated and sending non-scanning frames, the AP to which it is associated reports the client as \emph{associated}. The data rates in the RTLS data feeds vary among the $802.11g$ rates, e.g., $1$, $2$, $5.5$, ..., $54$ Mbps. Even though our network deployment is dual-band and supports the latest $802.11$ standards including $802.11ac$, the rates reported in the RTLS data feeds follow $802.11g$. We do not have any visibility into the controller's algorithm to deduce the reason for this mismatch in the reported data rates. However, when a client sends scanning frames, all the APs that can see the client report the client as \emph{unassociated}, and the reported data rate is fixed at $1$, $6$, or $24$ Mbps, as per the configured probe response rate. We use these facts to differentiate non-scanning and scanning RTLS data feeds. We believe this approach correctly infers scanning frames because ($a$) the data rates are fixed at $1$, $6$, or $24$ Mbps, ($b$) when an associated client scans, other APs report that client as unassociated, and ($c$) an unassociated client can only send either scanning or association frames. However, our approach may still misclassify frames in the following cases -- ($a$) when an associated client scans and the AP to which it is associated reports it: this AP reports the client as associated with a data rate of $1$, $6$, or $24$ Mbps, and since these rates may also arise from non-scanning frames, we identify such feeds as non-scanning; ($b$) when an unassociated client sends association or authentication frames: here too the rates overlap with the scanning data rates and the association status is reported as unassociated, so we incorrectly identify non-scanning frames as scanning frames. However, these cases are rare. In all other cases, our approach is deterministically correct.
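The labelling rules above can be expressed as a small classifier over the association status and the reported data rate. A minimal Python sketch; the exact probe response rates are configuration-dependent, so the rate set here is an assumption matching our deployment:
\begin{verbatim}
PROBE_RATES_MBPS = {1, 6, 24}  # configured probe response rates

def classify_feed(feed):
    """Label an RTLS feed as 'scanning' or 'non-scanning'."""
    if (feed["assoc"] == "unassociated"
            and feed["rate"] in PROBE_RATES_MBPS):
        return "scanning"
    # The AP of association may report probe traffic at these rates
    # too; per the discussion above, we conservatively label every
    # 'associated' feed as non-scanning.
    return "non-scanning"

print(classify_feed({"assoc": "unassociated", "rate": 6}))  # scanning
print(classify_feed({"assoc": "associated", "rate": 24}))   # non-scanning
\end{verbatim}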
\section{Conclusion}\label{sec:conclusion} To conclude, we presented two major issues that need to be addressed to perform server-side localization. We validated these challenges with a large dataset from a production WLAN deployed across a university campus. We discussed the causes of these challenges and their impact. We proposed heuristics that handle the challenges and reduce the localization errors. Our findings apply to all server-side localization algorithms that use fingerprinting techniques. Most of this work provides real-world evidence of ``where'' and ``what'' may go wrong when practically localizing clients in a device-agnostic manner.
\section{Introduction} \label{sec:introduction} There has been a long and rich history of WiFi-based indoor localization research~\cite{centaur, will, zee, localizeWithoutInfra, virtualCompass, placeLab, surroundSense, unsupervisedIndoorLoc, pushTheLimit, horus, monalisa, minimizingcalibration, radar, wigem, localizeWithoutPreDeployment, apsUsingAoA, zeroStartupCosts, phaser, arraytrack, spotFi, pinpoint, sensorToA, caesar, sail, cupid, wifiImaging, humanActivity, reusing60GHz, seeThroughWalls, mtrack}. However, in spite of several breakthroughs, there are very few real-world deployments of WiFi-based indoor localization systems in public spaces. The reasons for this are many-fold, with three of the most common being -- ($a$) the high cost of deployment, ($b$) arguably, the lack of a compelling business use, and ($c$) the inability of existing solutions to seamlessly work with all devices.
In fact, current solutions impose a tradeoff between universality, accuracy, and energy. For example, client-based solutions that combine inertial tracking with WiFi scanning offer significantly better accuracy but require a mobile application, which will likely drain the battery faster and which only a fraction of visitors will download~\cite{localizationSurvey}. In this paper, we present our experiences with deploying and operating a WiFi-based indoor localization system across the entire campus of a small Asian university. It is worth noting that the environment is very densely occupied, by $\approx$ $10,000$ students and $1,500$ faculty and staff. The system has been in production for more than four years. It is deployed at multiple venues, including two universities (Singapore Management University and the University of Massachusetts, Amherst) and four different public spaces (a mall, a convention center, an airport, and the Sentosa resort)~\cite{livelabs_mobisys, livelabs_hotmobile}. These venues use the localization system for various real-time analytics, such as group detection, occupancy detection, and queue detection, while taking care of user privacy. Our goal is to highlight challenges and propose easy-to-integrate solutions to build a universal indoor localization system -- one that can spot-localize all WiFi-enabled devices on campus without any client- or infrastructure-side modifications. The scale and the nature of this real environment present a unique set of challenges -- ($a$) the infrastructure, \textit{i.e.,} the controller and APs, does not allow any changes, ($b$) devices cannot be modified in any way, \textit{i.e.,} no explicit/implicit participation for data generation, no app downloads, and no chipset changes, and ($c$) the only available data is RSSI measurements from APs, which are centrally controlled by the controller, exposed via a \emph{Real-Time Location Service} (RTLS) interface~\cite{rtls}. It is worth noting that in the face of these challenges we have to rule out more sophisticated state-of-the-art schemes, such as fine-grained CSI measurements~\cite{spotFi}, Angle-of-Arrival~\cite{apsUsingAoA}, Time-of-Flight~\cite{sail}, SignalSLAM~\cite{signalSLAM}, or Inertial Sensing~\cite{inertialSensors}. Given the challenges, we adopt an offline fingerprint-based approach to compute each device's location. Fingerprints have been demonstrated to be more accurate than model-based approaches in densely crowded spaces~\cite{FPvsmodel} and are hence widely preferred. Our localization software processes the RSSI updates using the well-known ``classical'' fingerprint-based technique~\cite{radar}. Given the wide usage of this approach, our experiences and results apply to a majority of localization algorithms. Our primary contribution is to detail the cases where such a conventional approach succeeds and where it fails. We highlight the related challenges for making the approach work in current, large-scale WiFi networks, and then develop appropriate solutions to overcome the observed challenges. We collect three weeks of detailed ground truth data ($\approx 200$ landmarks) in our large-scale deployment and carefully construct a set of experimental studies to show two unique challenges associated with a server-side localization approach -- \emph{Cardinality Mismatch} and \emph{High Client Scan Latency}. The three weeks of data are representative of our four years of data.
($a$) \emph{Cardinality Mismatch:} We define cardinality in terms of the set of APs reporting for a client located at a specific landmark. We first show that the cardinality during the online phase is \emph{often} quite different from the cardinality in the offline phase. Note that this divergence is in the \emph{set} of reporting APs, and not merely a mismatch in the values of the RSSI vectors. Intuitively, this upends the very premise of fingerprint-based systems: that the cardinality seen at any landmark is the same during the offline and online phases. This phenomenon arises from the dynamic power and client management performed by a centralized controller in all commercial-grade WiFi networks (for example, those provided by Aruba, Cisco, and other vendors) to achieve outcomes such as ($i$) minimizing overall interference (shifting neighboring APs to alternative channels), ($ii$) enhancing throughput (shifting clients to alternative APs), and ($iii$) reducing energy consumption (shutting down redundant APs during periods of low load). ($b$) \emph{High Client Scan Latency:} Most localization systems use client-side localization techniques where clients actively scan the network when they need a location fix. Specialized mobile apps trigger active scans at the client devices. However, when using server-side localization, the location system has no way to induce scans from client devices. This is because such a localization system is deployed in public places, e.g., universities and malls, where changes at the client devices are not feasible. Hence, the system can only ``see'' clients when clients scan as part of their normal behavior. However, as we show in Section~\ref{sec:challenges}, most clients today, by default, prefer not to scan. We analyze different usage modes of the devices. Our observations reveal that mobile clients trigger scans only during handover, while stationary clients trigger scans only when their screen is turned on. These phenomena do not exist in the small-scale deployments often used in past pilot studies, where each AP is configured independently. In large-scale deployments, where it is fairly common to use controller-managed WLANs with a large number of devices, these phenomena invariably persist to a great extent. To exemplify, in our deployment we noticed cardinality mismatches in $57.30$\% of instances in $2.4$ GHz and $30.60$\% in $5$ GHz. We saw the $90^{th}$ percentile of the client scan interval to be $20$ minutes. While localizing with fingerprint-based solutions in such environments, these phenomena translate to either \emph{minimal} or, even worse, \emph{no} matching APs, resulting in substantial delays between client location updates and ``teleporting'' of clients across the venue. It is important to note that not only the schedule of these controller algorithms but also their distribution across the offline and online phases is non-deterministic. This is attributed to the fundamental fact that the dynamics of WiFi networks, such as load and interference, are non-deterministic in most cases and that the controller algorithm is a black box to us. Furthermore, the differences in signal propagation and scanning behavior between $2.4$ and $5$ GHz contribute to these problems. We believe that we are the first to present the challenges of server-side localization as well as their mitigation. Our proposals are device-agnostic, simple, and easily integrable with any large-scale WiFi deployment to efficiently localize devices.
\noindent \textbf{Key Contributions:} \begin{itemize} \item We identify and describe two novel and fundamental problems associated with a server-side localization framework. In particular, we provide evidence for the (i) ``Cardinality Mismatch'' and (ii) ``High Client Scan Latency'' problems, and explain why these problems are becoming progressively more significant in commercial WiFi deployments. We discuss why these problems are non-trivial to solve given that no client- or infrastructure-side changes are allowed. Our entire analysis covers both frequency bands -- $2.4$ and $5$ GHz. \item We provide valuable insights about the causes of these problems with extensive evaluations based on the ground truth data collected over three weeks for $200$ landmarks. Motivated by the real-world usage of clients, we study their four states -- Disconnected, Inactive, Intermittent, and Active. For each of these states, we analyze -- (i) their scanning behavior while mobile as well as stationary and (ii) the impact of these states on the achieved cardinality in the presence and absence of scans. We demonstrate the impact on the localization errors of considering ``only'' scanning frames as compared to ``only'' non-scanning frames. Furthermore, we compare our results with the real-world scenario of a mix of both categories of frames. \item We propose heuristics to improve the accuracy of the localization in the face of these problems. We see an improvement from a minimum of $35.40$\% to a maximum of $100$\%. We show an improvement in the higher percentiles over SignalSLAM~\cite{signalSLAM}. This shows that our lessons learned have the potential to improve existing localization algorithms. \item We describe our experiences with deploying, managing, and improving a fingerprint-based WiFi localization system, which has been operational since $2013$ across the entire campus of Singapore Management University. We not only focus on the final ``best solution'' that uses RTLS data feeds, but also discuss the challenges and pitfalls encountered over the years. \end{itemize}
\noindent \textbf{Paper Organization:} We discuss the related work in Section~\ref{sec:related_works}. We present the system architecture and the details of data collection in Section~\ref{sec:system-details}. We introduce the challenges, their evidence, and the proposed solutions in Section~\ref{sec:challenges}. We discuss the challenges of localizing clients in real-world deployments and the limitations of our proposed solutions in Section~\ref{sec:discussion}. We conclude in Section~\ref{sec:conclusion}.
\section{Related Work} \label{sec:related_works} In this section, we discuss existing solutions and their limitations for indoor localization. \textbf{Fingerprint vs. Model-based Solutions:} The oldest localization techniques use either a fingerprint-based~\cite{minimizingcalibration, radar, surroundSense, monalisa, horus, zee, littleHumanIntervention, pushTheLimit} or a model-based~\cite{localizeWithoutPreDeployment, zeroConfig, wigem, selfCalibratingLocalization} approach, or a combination of both~\cite{diversityInLocalization}. Overall, fingerprint-based solutions tend to have much higher accuracy than other approaches, albeit with high setup and maintenance costs~\cite{FPvsmodel}. The fingerprint-based approach was pioneered by Radar~\cite{radar} and has spurred much follow-on research.
For example, Horus~\cite{horus} uses a probabilistic technique to construct statistical radio maps, which can infer locations with centimeter-level accuracy. PinLoc~\cite{monalisa} incorporates physical layer information into location fingerprints. Liu \textit{et al.}~\cite{pushTheLimit} improved accuracy by adopting a peer-assisted approach based on peer-to-peer acoustic ranging estimates. Another thread of research in this line aims to reduce the fingerprinting effort, for example, using crowdsourcing~\cite{littleHumanIntervention, will, zee, localizationUsingPriorInfo} and down-sampling~\cite{minimizingcalibration}. The mathematical signal propagation model approach~\cite{wigem, localizeWithoutPreDeployment} has the benefit of easy deployment (no need for fingerprints), although its accuracy suffers when the environment layout or crowd dynamics change~\cite{selfCalibratingLocalization}. Systems such as EZ improve the accuracy by additionally using GPS to guide the model construction. \textbf{Client vs. Infrastructure-based solutions:} There is a rich history of client-based indoor location solutions, to name a few, SignalSLAM~\cite{signalSLAM}, SurroundSense~\cite{surroundSense}, UnLoc~\cite{unsupervisedIndoorLoc}, and many others~\cite{centaur, will, zee, localizeWithoutInfra, virtualCompass, placeLab}. All of them share the commonality that they extract sensor signals (of various types) from client devices to localize. The location algorithms usually run on the device itself; however, it is also possible to run the algorithm on a server and use the signals from multiple clients to achieve better performance~\cite{pushTheLimit}. Overall, client-based solutions have very high accuracy (centimeter resolution in some cases~\cite{horus, monalisa}). An alternative is to pull signal measurements directly from the WiFi infrastructure, similar to what our solution does. The research community has only lightly explored this approach since it requires full access to the WLAN controllers, which are usually proprietary. Our main competitors are the commercial WiFi providers themselves. In particular, both Cisco~\cite{cisco} and Aruba~\cite{aruba} offer location services. These solutions use server-side tracking coupled with model-based approaches (to eliminate the fingerprint setup overhead). \textbf{Other Solutions:} There are several other solutions, complementary to the signal-strength-based technique. Time-based solutions~\cite{pinpoint, sensorToA, caesar, sail, cupid} use the arrival time of signals to estimate the distance between client and AP, while angle-based solutions~\cite{apsUsingAoA, zeroStartupCosts, phaser, arraytrack, spotFi} utilize angle-of-arrival information, estimated from an antenna array, to locate mobile users. Recently, the notion of passive location tracking~\cite{wifiImaging, humanActivity, reusing60GHz, seeThroughWalls, mtrack} has been proposed, which does not assume people carry devices. In large and crowded venues, however, the feasibility and accuracy of such passive tracking are still open questions. Other systems include light-based localization~\cite{epsilon, pharos, localizeUsingLights} and acoustic-based localization~\cite{swordfight, audioCapture, sonoloc, acousticOccupany}.
\textbf{Limitations of the above solutions:} These solutions can achieve higher accuracy, but they have at least one of the following limitations -- ($a$) the need for customized hardware, which cannot be deployed at large scale, ($b$) the installation of a client application, making them hard to scale, ($c$) rooting the client OS -- Android or iOS -- which limits their generalizability, ($d$) high energy consumption, ($e$) high error rates in dense networks, and ($f$) being proprietary and expensive to deploy (especially solutions from vendors like Cisco and Aruba). To summarize, even though several excellent solutions are available, their scalability is still in question. Therefore, we advocate a server-side localization approach with fingerprints. Our aim is not to compare the efficacy of different approaches, but to address the challenges of practical and widely deployed device-agnostic indoor localization using today's WiFi standards and hardware, for example, the use of the $5$ GHz band and a controller-based architecture.
\section{System Architecture and Data Collection} \label{sec:system-details} In this section, we present the details of the system architecture and the dataset.
\subsection{Background \& Deployment} \label{subsec:background} This work began in $2013$, when we started deploying a WiFi-based localization solution across the entire campus. It has since gone through many major and minor evolutions. However, in this paper, we focus our evaluation and results on just one venue -- a university -- as we have full access to that venue. Our university campus has seven schools in different buildings. Five buildings have six floors; the remaining two have five and three floors respectively, with a floor area of $\approx 70,000$ $m^2$. Landmarks, characterized by water sprinklers deployed every three meters on a given floor, denote particular locations. There are $3203$ landmarks across the thirty-eight floors of the seven schools. The WLAN deployment includes $750$+ dual-band APs, centrally controlled by eleven WiFi controllers, with $\approx 4000$ associated clients per day.
\begin{figure}[t] \centering \includegraphics[scale=0.45]{images/LocationSystem.pdf} \caption{Block diagram of the Indoor Localization System. Note that lines between AP and C denote the coverage area of the AP and not the association. \textit{Legend: AP - Access Point, C - Client, WLAN - Wireless LAN, RTLS - Real-Time Location System}} \label{Fig:LocationSystem} \end{figure}
\begin{table}[t] \centering \begin{tabular}{@{} lm{8cm} @{}} \toprule \textbf{Field} & \textbf{Description} \\ \hline Timestamp & AP Epoch time (milliseconds) \\ Client MAC & SHA$1$ of original MAC address \\ Age & \#Seconds since the client was last seen at an AP\\ Channel & Band ($2.4$/$5$ GHz) on which client was seen \\ AP & MAC address of the Access Point \\ Association Status & Client's association status (associated/unassociated) \\ Data Rate & MAC layer bit-rate of last transmission by the client\\ RSSI & Average RSSI for duration when client was seen \\ \bottomrule \end{tabular} \caption{Details of RTLS data feeds} \label{tbl:rtls_feeds} \end{table}
\subsection{System Architecture} Figure \ref{Fig:LocationSystem} presents the primary building blocks of the system. The system is bootstrapped with APs configured by the WLAN controller to send RTLS data feeds every $5$ seconds to the RTLS server. Most commercial WLAN infrastructures allow such a configuration. Once configured, APs bypass the WLAN controller and report RTLS data feeds directly to our Location Server.
Table~\ref{tbl:rtls_feeds} presents all the fields contained in an RTLS data feed per client. The reported RSSI value is not on a per-frame basis, but a value summarized over multiple received frames. The Location Server analyzes these RTLS data feeds for the signal strengths reported by different APs to estimate the location of a client. Note that the APs do not report the type of frames. They gather information from their current channel of operation and also scan other channels to collect data. Vendors know the microscopic details of what the APs measure~\cite{aruba_location}; however, as end-users we do not have access to any more information than what is specified. Nevertheless, even this information at large scale gives us a view of the entire network from a single vantage point. \subsection{Recording of the Fingerprints} \label{subsec:fingerprinting} We define a fingerprint as a vector of RSSI values reported by APs for a given client. We consider two types of fingerprints -- offline and online. An offline fingerprint is collected and stored in a database before the process of localization is bootstrapped, while an online fingerprint is collected in real time. \textbf{Offline Fingerprinting} A two-dimensional offline fingerprint map is prepared for each landmark on a per-floor basis. The client devices used for fingerprinting were dual-band Android phones, which were associated with the network and actively scanned for APs. For each landmark, the device collected data for $5$ minutes. While the clients scan their vicinity, APs collate RSSI reports for the client and send their measurements as RTLS data feeds to the Location Server. For a given landmark $L_{i}$, an offline fingerprint takes the following form: \begin{equation} <L_{i},B,AP_{1}:RSSI_{1};...;AP_{n}:RSSI_{n};> \label{eq:fpmap_offline} \vspace*{-0.05in} \end{equation} We maintain fingerprints for both the $2.4$ and $5$ GHz frequency bands. Band $B$ in the above equation takes the value of the band being recorded. The vectors are stored in a database on the Location Server. \textbf{Online Fingerprinting} Localization of a client is done with online fingerprints. An online fingerprint takes the same syntax as the offline fingerprints in Equation~\ref{eq:fpmap_offline}, except for the landmark, as shown below: \begin{equation} <B,AP_{1}:RSSI_{1};...;AP_{m}:RSSI_{m};> \label{eq:fpmap_online} \vspace*{-0.05in} \end{equation} We then match this online fingerprint against the offline fingerprints of each landmark to calculate the distance in signal space, as discussed in~\cite{radar}. The landmark with the minimum distance in signal space is reported as the probable location of the client.
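To make this pipeline concrete, the following minimal Python sketch (our own illustration, not the deployed code; record field names follow Table~\ref{tbl:rtls_feeds} but are otherwise hypothetical) aggregates RTLS feed records into per-client online fingerprints and matches them against the offline map using Euclidean distance in signal space, with an assumed floor value for APs missing from one of the two vectors:
\begin{verbatim}
import math
from collections import defaultdict

MISSING_RSSI = -100.0  # assumed floor for APs absent from a fingerprint

def build_online_fingerprints(records, max_age_s=30):
    """Group RTLS records into {(client_mac, band): {ap_mac: rssi}},
    dropping sightings older than max_age_s seconds."""
    fps = defaultdict(dict)
    for r in records:
        if r["age"] <= max_age_s:
            fps[(r["client_mac"], r["channel"])][r["ap"]] = r["rssi"]
    return fps

def signal_distance(online, offline):
    """Euclidean distance in signal space between two {ap: rssi} maps."""
    aps = set(online) | set(offline)
    return math.sqrt(sum(
        (online.get(ap, MISSING_RSSI) - offline.get(ap, MISSING_RSSI)) ** 2
        for ap in aps))

def localize(online_fp, fingerprint_db):
    """Report the landmark whose offline fingerprint (for the same band)
    is nearest to the online fingerprint in signal space."""
    return min(fingerprint_db,
               key=lambda lm: signal_distance(online_fp, fingerprint_db[lm]))
\end{verbatim}
In practice, the search would be restricted to offline fingerprints recorded on the same band $B$ as the online fingerprint.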
\section{Introduction} The techniques used to age-date starbursts are based on their radiative properties, which are determined by their massive stellar content. At ultraviolet wavelengths, the spectrum is dominated by absorption features, many of them formed in the stellar winds of massive stars. The profiles of these lines are a function of the evolutionary state of the starburst (Leitherer, Robert, \& Heckman 1995; Gonz\'alez Delgado, Leitherer, \& Heckman 1997a). This technique has been successfully applied to age-date young starbursts ($\leq$10 Myr) (see Leitherer and Robert in this conference). The optical spectrum of starbursts is dominated by nebular emission lines formed in the surrounding interstellar medium of the starburst, which is photoionized by radiation from the massive stars. This emission-line spectrum depends on the density and chemical composition of the gas, and on the radiation field from the ionizing stellar cluster; therefore, it also depends on the evolutionary state of the starburst (Garc\'\i a-Vargas, Bressan \& D\'\i az 1995). A consistency test between these two techniques was performed on the prototypical starburst galaxy NGC 7714 (Gonz\'alez Delgado et al 1999). The age of the nuclear starburst derived from the wind resonance ultraviolet lines and from the optical emission-line spectrum is 5 Myr. Around the Balmer jump, the spectra of starbursts can also show absorption features formed in the photospheres of O, B and A stars. The spectra of these stars are characterized by strong H and HeI absorption lines and only very weak metallic lines. However, the detection of these stellar features at optical wavelengths in the spectra of starburst galaxies is difficult because the H and HeI absorption features are coincident with the nebular emission lines that mask the absorption. Even so, the higher-order terms of the Balmer series and some of the HeI lines are detected in absorption in many starburst galaxies and even in the spectra of giant HII regions (e.g. NGC 604, Terlevich et al 1996). These features can be seen in absorption because the strength of the Balmer series in emission decreases rapidly with decreasing wavelength, whereas the equivalent width of the stellar absorption lines is roughly constant with wavelength. Since the strengths of the Balmer and HeI absorption lines show a strong dependence on the effective temperature and gravity, these lines are also an age indicator for starburst and post-starburst galaxies (D\'\i az 1988; Olofsson 1995). In this contribution, evolutionary synthesis models are presented that predict the profiles of the H Balmer and HeI absorption lines of a single-metallicity stellar population up to 1 Gyr old. The models are used to predict the age of a super-star cluster in the starburst galaxy NGC 1569. \section{Description of the models} A stellar library of synthetic spectra which covers the main H Balmer (H$\beta$, H$\gamma$, H$\delta$, H8, H9, H10, H11, H12 and H13) and HeI absorption lines (HeI $\lambda$4922, HeI $\lambda$4471, HeI $\lambda$4388, HeI $\lambda$4144, HeI $\lambda$4121, HeI $\lambda$4026, HeI $\lambda$4009 and HeI $\lambda$3819) has been implemented in the evolutionary synthesis code Starburst99 (Leitherer et al 1999). Evolutionary models are computed for burst and continuous star formation up to 1 Gyr, and for different assumptions about the stellar initial mass function. The stellar library is generated using a set of programs developed by Hubeny and collaborators (Hubeny 1988; Hubeny, Lanz \& Jeffery 1995) in three stages.
For T$_{eff}\geq$ 25000 K, the code TLUSTY is used to compute NLTE stellar atmosphere models. These models, together with the Kurucz (1993) LTE stellar atmosphere models (for T$_{eff}\leq$ 25000 K), are used as input to SYNSPEC, the program that solves the radiative transfer equation. Finally, the synthetic spectrum is obtained after performing the rotational (100 km s$^{-1}$ is assumed) and instrumental convolutions. The final sampling of the spectra is 0.3 \AA. The metallicity is solar. \section{Model results} The Balmer and HeI line profiles are sensitive to the age (Figure 1), except during the first 4 Myr of the evolution, when the equivalent widths of these lines are constant. The equivalent widths of the Balmer lines range from 2 to 16 \AA\ and those of the HeI lines from 0.2 to 1.2 \AA. The strength of the lines is maximal when the cluster is a few hundred Myr old (for the Balmer lines) and a few tens of Myr old (for the HeI lines). In the continuous star formation scenario, the strength of the Balmer and HeI lines increases monotonically with time until 500 Myr and 100 Myr, respectively. However, the lines are weaker than in the burst models due to the dilution of the Balmer and HeI lines by the contribution from very massive stars. \begin{figure} \psfig{file=gonzalez_f1.ps,height=4.5cm} \caption[fig]{Synthetic spectra from 3700 to 4200 \AA\ predicted for an instantaneous burst of 10$^6$ M$\odot$ formed following a Salpeter IMF between M$_{low}$= 1 M$\odot$ and M$_{up}$= 80 M$\odot$, at ages of 3, 50 and 500 Myr.} \end{figure} As anticipated, the higher-order terms of the Balmer series have equivalent widths similar to H$\beta$; this suggests that the higher-order terms of the Balmer series are more useful for age-dating starbursts than H$\beta$. To show which lines are better age indicators, the equivalent widths of the Balmer and HeI lines in emission are estimated. Photoionization models using CLOUDY (Ferland 1997) are computed to predict HeI $\lambda$4471/H$\beta$, assuming that the gas is spherically distributed around the ionizing cluster with a constant electron density. Figure 2 shows that H$\delta$ and the higher-order terms of the Balmer series and HeI are dominated by the stellar absorption component if an instantaneous burst is older than $\simeq$ 5 Myr. If star formation proceeds continuously, after 30 Myr and 100 Myr the strengths of the stellar absorptions equal those of the nebular emission lines H8 and H$\delta$, respectively. HeI $\lambda$4471 is very little affected by the absorption; however, the equivalent width of the emission line HeI $\lambda$3819 equals that of the stellar absorption line at 20 Myr of continuous star formation, and after this time HeI $\lambda$3819 is dominated by the stellar absorption. \begin{figure} \psfig{file=gonzalez_f2.eps,height=4.5cm} \caption[fig]{Equivalent widths of the Balmer (a) and HeI (b) lines for an instantaneous burst. The nebular emission lines are plotted as full lines and the stellar absorption lines as dotted lines.} \end{figure} \section{Dating a super-star cluster in NGC 1569} HST images of the starburst galaxy NGC 1569 suggest that the SSC B has an age of 15-300 Myr (O'Connell et al 1994), while ground-based optical spectra suggest an age of 10 Myr (Gonz\'alez Delgado et al 1997b). The latter result is based on the analysis of the optical spectral energy distribution. Since the Balmer lines are partially filled with nebular emission, the fitting has to be done based on the wings of the absorption features.
Figure 3 plots the observed lines and the synthetic models for bursts of 10 and 50 Myr at Z$\odot$/4 metallicity (assuming a Salpeter IMF, M$_{low}$=1 M$\odot$ and M$_{up}$=80 M$\odot$). The profiles indicate that the Balmer lines are more compatible with a 10 Myr old burst than with a 50 Myr old one. Ages older than 10 Myr produce profiles which are wider than the observed ones. This comparison shows that this technique can discriminate well between a young and an intermediate-age population. \begin{figure} \psfig{file=gonzalez_f3.ps,height=4.5cm} \caption[fig]{Normalized optical spectrum of the SSC B in NGC 1569 (full line). The synthetic normalized spectra of an instantaneous burst 10 Myr (dotted line) and 50 Myr old (dashed line), formed following a Salpeter IMF at Z$\odot$/4 metallicity, are plotted.} \end{figure} \begin{references} \reference D\'\i az, A.I. 1988, MNRAS, 231, 57 \reference Ferland, G.J. 1997, Hazy, a Brief Introduction to CLOUDY, University of Kentucky, Department of Physics and Astronomy Internal Report \reference Garc\'{\i}a-Vargas, M.L., Bressan, A., \& D\'\i az, A. 1995, A\&AS, 112, 35 \reference Gonz\'alez Delgado, R.M., et al. 1999, ApJ, 513, 707 \reference Gonz\'alez Delgado, R.M., Leitherer, C., \& Heckman, T. 1997a, ApJ, 489, 601 \reference Gonz\'alez Delgado, R.M., et al. 1997b, ApJ, 483, 705 \reference Hubeny, I. 1988, Comput. Phys. Commun., 52, 103 \reference Hubeny, I., Lanz, T., \& Jeffery, C.S. 1995, SYNSPEC--A User's Guide \reference Kurucz, R.L. 1993, CD-ROM 13, ATLAS9 Stellar Atmosphere Programs and 2 km/s Grid (Cambridge: Smithsonian Astrophys. Obs.) \reference Leitherer, C., Robert, C., \& Heckman, T.M. 1995, ApJS, 99, 173 \reference Leitherer, C., et al. 1999, ApJS, in press \reference O'Connell, R., Gallagher, J., \& Hunter, D. 1994, ApJ, 433, 65 \reference Olofsson, K. 1995, A\&A, 111, 57 \reference Terlevich, E., et al. 1996, MNRAS, 279, 1219 \end{references} \end{document}
\section{Introduction} Gradient-based optimization methods, such as SGD with momentum or Adam~\citep{kingma2014adam}, have become standard tools in the deep learning toolbox. While they are very effective for optimizing differentiable parameters, there is an interest in developing other efficient learning techniques that are complementary to gradient-based optimization. Evolution Strategies~\citep[ES,][]{Wierstra2014} is one such technique; it has been used as an SGD alternative and has shown promising results on small-scale problems in reinforcement learning~\citep{igel2003neuroevolution, salimans2017evolution} and supervised learning~\citep{mandischer2002comparison, Lehman2017, Zhang2017, varelas2018comparative}. ES is a black-box method and does not require the parameters to be differentiable. As such, it can potentially be applied to a much larger family of models than standard SGD. Additionally, as ES only requires inference, it allows for training neural nets on inference-only hardware (although we do not make use of this property here). The goal of this paper is to explore how ES can be scaled up to larger and more complex models, from both an algorithmic and an implementation perspective, and how it can be used in combination with SGD for training sparse neural networks. We begin by investigating whether it is \emph{possible} to train CIFAR-10\xspace classification ConvNets with ES (\cref{sec:snes}). This is a harder task than training MNIST classification MLPs \citep[as in][]{Zhang2017}, and to address it with ES we develop a more efficient execution model, which we call semi-updates. As a result, ES reaches competitive accuracy compared to SGD on CIFAR-10\xspace, although this comes at the cost of significantly worse computational efficiency. In~\cref{sec:hybrid} we then turn our attention to more practical applications (text-to-speech raw audio generation) and the use of ES alongside SGD for training large sparse models. Instead of relying on hand-crafted methods for training such models by pruning weights~\citep{NarangDSE17, Zhu2017}, we employ ES to learn the weight sparsity masks (i.e.\ which weights should be used in the final sparse model), while SGD is responsible for learning the values of the weights and is performed in parallel with ES. It turns out that, unlike the ES for weights, the ES for sparsity patterns needs significantly fewer parameter samples, which makes this approach computationally feasible. Beyond the scientific interest in the feasibility of such hybrid learning techniques, one practical advantage of this approach is that it enables joint training of weights together with their sparsity mask, thus avoiding the need to first train a dense model (which might be too large to fit into memory) and then prune it. The experiments on a state-of-the-art sparse text-to-speech model show that ES achieves comparable performance to SGD with pruning. In summary, our contributions are three-fold: (i) we propose a new execution model for ES which allows us to scale it up to more complex ConvNets on CIFAR-10\xspace; (ii) we perform an empirical analysis of the hyper-parameter selection for ES training; (iii) we show how ES can be used in combination with SGD for hybrid training of sparse models, where the non-differentiable sparsity masks are learned by ES, and the differentiable weights by SGD.
\subsection*{Related work} Prior works on hybrid SGD-ES methods mostly use ES for the network structure and SGD to train the differentiable parameters of each sampled architecture \cite{yao1999evolving, igel2009genesis, real2018regularized}. This severely limits the size of the models for which this method is practical, as the network has to be retrained for each architecture sample. Instead of meta-learning the model, our goal is to optimize the model's non-differentiable and differentiable parameters \emph{jointly}. Recent works study the relation between SGD gradients and ES updates, either on synthetic landscapes \cite{Lehman2017} or on MNIST models \cite{Zhang2017}, while \cite{maheswaranathan2018guided} uses SGD gradients to guide the sampler's search direction. Instead of combining them, we use SGD and ES for two different training tasks -- SGD for the continuous, differentiable parameters and ES for modelling the sparsity mask distribution. Sparse models, where a subset of the model parameters are set to exactly zero, offer faster execution and compact storage in sparse-matrix formats. This typically reduces the model performance but allows deployment to resource-constrained environments \cite{kalchbrenner2018efficient, Theis2018FasterGP}. Training unstructured sparse supervised models can be traced back at least to \cite{Cun90optimalbrain}, which used second-order information to prune weights. More tractable efforts can be traced at least to \cite{Strom97sparseconnection}, which pruned weights based on their magnitude and then retrained. That was more recently extended in \citet{HanLMPPHD16}, where the pruning and retraining process was carried out in multiple rounds. The pruning process was further refined in \cite{NarangDSE17}, where the pruning occurred gradually during the first pass of training, so that finding a pruned model took no longer than training the original dense model. Finally, the number of hyper-parameters involved in the procedure was reduced in \cite{Zhu2017}, and \citet{bellec2017deep} uses a random walk to obtain the sparsity mask. Additionally, a number of Bayesian approaches have been proposed, such as Variational Dropout (VD) \cite{pmlr-v70-molchanov17a} or L0 regularization \cite{louizos2018learning}. VD uses the reparametrization trick, while we learn the multinomial distribution directly with ES. As indicated in \citet{gale19state}, VD and L0 regularization tend to perform no better than \cite{NarangDSE17}, which we compare against. \section{Preliminaries} \subsection{Natural Evolution Strategies}\label{ss:nes} Black-box optimization methods are characterized by treating the objective function $f(\theta)$ as a black box, and thus do not require it to be differentiable with respect to its parameters $\theta$. Broadly speaking, they fall into three categories: population-based methods, like genetic algorithms, that maintain and adapt a set of parameters; distribution-based methods that maintain and adapt a distribution over the parameters $\theta \sim \pi$; and Bayesian optimization methods that model the function $f$, using for example a Gaussian Process. \emph{Natural evolution strategies} \citep[NES,][]{Wierstra2008,Wierstra2014} are an instance of the second type.
NES proceeds by sampling a batch of $\esgst$ parameter vectors $\{\theta_i \sim \pi; 1 \leq i \leq \esgsm \}$, evaluating the objective for each ($f_i = f(\theta_i)$) and then updating the parameters of the distribution $\pi$ to maximize the objective $J = \mathbb{E}_{\theta \sim \pi}[f(\theta)]$, that is, the expectation of $f$ under $\pi$. The attribute `natural' refers to the fact that NES follows the \emph{natural gradient} w.r.t.\ $J$ instead of the steepest (vanilla) gradient. In its most common instantiation, NES is used to optimize unconstrained continuous parameters $\theta \in \mathbb{R}^d$, and employs a Gaussian distribution $\pi = \mathcal{N}(\mean, \mathrm{Cov})$ with mean $\mean \in \mathbb{R}^d$ and covariance matrix $\mathrm{Cov} \in \mathbb{R}^{d \times d}$. For large $d$, estimating, updating and sampling from the full covariance matrix is too expensive, so here we use its diagonal approximation $\mathrm{Cov} = \operatorname{diag}(\stds^2)$ with element-wise variance terms $\stds^2 \in \mathbb{R}^d$; this variant is called Separable NES \citep[SNES,][]{snes}, and is similar to \citet{ros2008simple}. The sampled parameters are constructed as $\theta_i = \mean + \stds \odot \z_i$, where $\odot$ is the element-wise (Hadamard) product and $\z_i \sim \mathcal{N}(0,\mathbf{I})$ are standard normal samples. The natural gradient with respect to $(\mean, \stds)$ is given by: \begin{eqnarray}\label{eq:snes} \nabla_{\mean} J = \sum_{1 \leq i \leq n} u_i \z_i \notag, ~~ \nabla_{\stds} J = \sum_{1 \leq i \leq n} u_i \left(\z_i^2 - 1 \right) \end{eqnarray} and the updates are \begin{eqnarray}\label{eq:snes-update} \mean \leftarrow \mean + \eta_{\mean} \stds \odot \nabla_{\mean} J & & \stds \leftarrow \stds \odot \exp(\frac{1}{2} \eta_{\stds} \nabla_{\stds} J ) \end{eqnarray} where $\eta_{\mean}$ and $\eta_{\stds}$ are the learning rates of the distribution's mean and variance parameters respectively, $\z_i^2$ is taken element-wise, and $u_i$ is a transformation of the fitness $f_i$ (fitness shaping, \cref{ss:fitness-shaping}). Note that the multiplicative update on $\stds$ is guaranteed to preserve positive variances. The more restricted case of constant variance ($\eta_{\stds}=0$), where the natural gradient coincides with the vanilla gradient, was advocated by~\citet{salimans2017evolution}, and is sometimes referred to as simply `Evolution Strategies' (ES), even though it is subtly different from classic Evolution Strategies~\citep{rechenberg1973evolutionsstrategie}: it matches the classic non-elitist $(1,\esgsm)$-ES only if the fitness shaping function is a top-$1$ threshold function. \subsection{Fitness shaping functions}\label{ss:fitness-shaping} It is often desirable to make evolution strategies more robust and scale-invariant by transforming the raw evaluation $f_i$ into a normalized utility $u_i$; this mapping is called \emph{fitness shaping}~\citep{hansen2001completely,Wierstra2014}. A common choice is based on the rank of $f_i$ within the current batch, where $\nu$ is a hyper-parameter: \begin{equation}\label{eq:fshaping} u_i = \frac{\max\left(0, \log\left(\frac{\esgsm}{\nu} + 1\right) - \log(\operatorname{rank}(f_i))\right)} {\sum_{j=1}^\esgsm \max\left(0, \log\left(\frac{\esgsm}{\nu} + 1\right) - \log(j)\right)} - \frac{1}{\esgsm} \end{equation} For $\nu = 2$, which we use in our experiments, this method assigns a constant utility to the lowest $50\%$ of the fitnesses.
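For concreteness, the following is a minimal NumPy sketch (our illustration; all names are placeholders) of one SNES generation, combining the rank-based shaping of \cref{eq:fshaping} with the updates of \cref{eq:snes,eq:snes-update}:
\begin{verbatim}
import numpy as np

def utilities(fitness, nu=2.0):
    """Rank-based fitness shaping u_i; rank 1 is the best sample."""
    n = len(fitness)
    ranks = np.empty(n)
    ranks[np.argsort(fitness)[::-1]] = np.arange(1, n + 1)
    raw = np.maximum(0.0, np.log(n / nu + 1.0) - np.log(ranks))
    return raw / raw.sum() - 1.0 / n

def snes_step(mu, sigma, f, n, eta_mu=1.0, eta_sigma=None):
    """One SNES generation: sample, evaluate, natural-gradient update."""
    d = mu.size
    if eta_sigma is None:
        eta_sigma = (3 + np.log(d)) / (5 * np.sqrt(d))
    z = np.random.randn(n, d)
    fit = np.array([f(mu + sigma * z_i) for z_i in z])
    u = utilities(fit)
    mu = mu + eta_mu * sigma * (u @ z)                      # sum_i u_i z_i
    sigma = sigma * np.exp(0.5 * eta_sigma * (u @ (z**2 - 1.0)))
    return mu, sigma
\end{verbatim}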
\section{Scaling up SNES\xspace for supervised models}\label{sec:snes} In this section we propose a novel method of distributing the evaluation of SNES\xspace updates. In \cref{s:semi-updates} we present the semi-updates method, which allows for better distribution of random number generation, fitness evaluation, and update computation among multiple workers. This allows us to train large supervised models with millions of parameters without creating a bottleneck on the master that holds the distribution parameters. Further on, in \cref{ss:es-experiments}, we investigate various hyper-parameter selections for obtaining competitive results. \subsection{Speeding up SNES\xspace with semi-updates}\label{s:semi-updates} In practice, SNES\xspace training of a model with $d$ parameters requires a large number of weight samples per generation (generation size $\esgsm$), which grows with the dimension of its parameters $\theta \in \mathbb{R}^{\esdimm}$. A \emph{standard} execution model for SNES\xspace is to draw the parameter samples $\{\theta_{1 \dots \esgsm}\}$ at the master process, which maintains the distribution parameters $(\mean, \stds)$, and to distribute them among $\esbsm$ worker processes. Even with smaller models, the main bottleneck is generating the $\esdimm \cdot \esgsm$ random values which are needed for computing the weighted sums (\cref{eq:snes}). One possible improvement is to send only the initial distribution parameters $(\mean, \stds)$ and a random seed to the \esbst workers, letting each worker generate $\esgsm / \esbsm$ parameter samples and exchange a set of $\esgsm / \esbsm$ fitness scalars, similar to \cite{salimans2017evolution}; we refer to this method as \emph{batched} execution. This significantly reduces the amount of data communicated between the workers; however, the worker which performs the ES update still needs to generate \emph{all} \esgst random parameter samples, which is slow for large generations and models. We propose a \emph{semi-updates} execution model. It is similar to the batched method in that each worker obtains only the distribution parameters $(\mean, \stds)$; however, instead of sending back the fitness scalars for each sample, each worker computes the update on its batch of $\esgsm / \esbsm$ parameter samples according to Equations~\ref{eq:snes} and~\ref{eq:snes-update}, and sends it back to the master. Even though the standard ES execution model performs fitness shaping based on the rank of \emph{all} the parameter samples of a generation, doing it on only the $\esgsm / \esbsm$ parameter samples within each worker has surprisingly little effect on the final performance. The main advantage of this method is that each worker now has to generate only $\esgsm / \esbsm$ parameter samples, while the master performs only a simple average over the $\esbsm$ semi-updates. However, contrary to the batched method, the new distribution parameters have to be communicated to each worker. \subsection{Experiments}\label{ss:es-experiments} In this section we perform experiments with different variants of the SNES\xspace algorithm. In \cref{ss:semi-upd-exp} we investigate the processing speed of the different execution models defined in \cref{s:semi-updates}. Later on, in \cref{ss:snes-cifar-exp}, we investigate what is needed to learn ConvNet models for CIFAR-10\xspace classification with SNES\xspace. Finally, in \cref{ss:es-nsamples}, we investigate the dependence of accuracy on the generation size.
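Schematically, one semi-updates round can be summarized as follows (a sketch reusing \texttt{utilities} from the previous listing; the master--worker communication and seeding scheme are left abstract and are not the exact implementation):
\begin{verbatim}
import numpy as np

def worker_semi_update(mu, sigma, f, n_local, seed):
    """Each worker regenerates only its own n/w samples and performs
    fitness shaping over those local samples only."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_local, mu.size))
    fit = np.array([f(mu + sigma * z_i) for z_i in z])
    u = utilities(fit)
    return u @ z, u @ (z**2 - 1.0)      # local (grad_mu, grad_sigma)

def master_aggregate(mu, sigma, semi_updates, eta_mu, eta_sigma):
    """The master only averages the w semi-updates, applies them, and
    re-broadcasts the new (mu, sigma) to the workers."""
    g_mu = np.mean([g for g, _ in semi_updates], axis=0)
    g_sigma = np.mean([g for _, g in semi_updates], axis=0)
    mu = mu + eta_mu * sigma * g_mu
    sigma = sigma * np.exp(0.5 * eta_sigma * g_sigma)
    return mu, sigma
\end{verbatim}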
We use SNES\xspace for supervised classification models and the objective function $f$ is the mini-batch log-likelihood\footnote{It is possible to use accuracy; however, it under-performs due to quantization of the mini-batch accuracy.}. For all experiments we optimize all model parameters $\theta \in \mathbb{R}^\esdimm$ with a single normal distribution. The distribution parameters are updated with learning rates $\eta_{\mean}=1$ and $\eta_{\stds}= (3 + \ln{\esdimm}) / (5 \sqrt{\esdimm})$ \citep[p.~33]{Wierstra2014}. The mean $\mean$ is initialized using the truncated normal initializer \cite{glorot2010understanding}, and the variances $\stds^2$ are initialized to $1$. We perform experiments on the MNIST~\citep{lecun-98} and CIFAR-10\xspace~\citep{krizhevsky2009learning} datasets using \mbox{ConvNet} models, which are described in \cref{ss:models}. The selected models differ mainly in the number of parameters, which is controlled by the number of hidden representations. For the CIFAR-10\xspace models, we train on random $24 \times 24$ crops and flips from the $32 \times 32$ training images. At test time, we process the full $32 \times 32$ test images and average-pool the activations before the classification layer. To reduce the parameter count, we use the Separable Convolution~\citep{chollet16xception}. \subsection{Selected MNIST and CIFAR models}\label{ss:models} In \cref{tab:mnist_models} we provide details of the investigated MNIST and CIFAR models. For the MNIST models, $\mathrm{C}_{\times n}^{M}$ stands for $M$ convolution filters of size $n \times n$, $\mathrm{P}$ for max pooling, and $\mathrm{F}_{a}^{b}$ for a fully connected layer with $b$ input features and $a$ output features. The MNIST-30k\xspace model is identical to the model by \citet{Zhang2017}. Each CIFAR-10\xspace model consists of a set of $3\times3$ separable or standard convolutions where the number of filters per layer is specified in the column `Layers'. Layers with stride 2 are denoted as $s2$. The convolutional layers are followed by global average pooling and a fully connected layer with 10 outputs. The column `Sep' specifies whether separable or dense convolutions are used. \begin{table}[h!] \centering \scriptsize \caption{MNIST models (left) and CIFAR-10\xspace models (right) used in the experiments. ACC @10k and @80k denote the SGD performance on the test set after 10k or 80k steps of training, respectively. } \label{tab:mnist_models} \setlength{\tabcolsep}{1pt} \begin{minipage}[t]{.45\linewidth} \begin{tabular}{l | r l | c } \toprule Name & \# Params & Layout & ACC @10k \\ \midrule MNIST-30k\xspace & $28\,938$ & \makecell{ $\mathrm{C}_{\times5}^{16}$, $\mathrm{P}_{\times2}$, $\mathrm{C}_{\times2}^{32}$,\\ $\mathrm{P}_{\times2}$, $\mathrm{F}_{10}^{1568}$} & $99.12\%$\\ \midrule MNIST-500k\xspace & $454\,922$ & \makecell{ $\mathrm{C}_{\times5}^{32}$, $\mathrm{P}_{\times2}$, $\mathrm{C}_{\times2}^{64}$, \\ $\mathrm{P}_{\times2}$, $\mathrm{F}_{128}^{3136}$, $\mathrm{F}_{10}^{128}$} & $99.28\%$\\ \midrule MNIST-3M\xspace & $3\,274\,634$ & \makecell{ $\mathrm{C}_{\times5}^{32}$, $\mathrm{P}_{\times2}$, $\mathrm{C}_{\times2}^{64}$, \\ $\mathrm{P}_{\times2}$, $\mathrm{F}_{1024}^{3136}$, $\mathrm{F}_{10}^{1024}$} & $99.33\%$\\ \bottomrule \end{tabular} \end{minipage} \begin{minipage}[t]{.45\linewidth} \begin{tabular}{l c c | c | r r } \toprule Name & Sep.
& Layers & Params & \makecell{ACC\\@10k} & \makecell{ACC\\ @80k} \\ \midrule CIF-300k\xspace & YES & \makecell{ $64$, $64$, \\ $128_{s2}$, $128$, $128$,\\ $256_{s2}$, $256$, $256$, \\ $512_{s2}$} & $358\,629$ & $87.7$ & $91.7$\\ \midrule CIF-900k\xspace & YES & \makecell{ CIF-300k\xspace \\ + $512$, $512$ } & $895\,973$ & $88.1$ & $92.01$\\ \midrule CIF-8M\xspace & NO & \makecell{ CIF-300k\xspace \\ + $512$, $512$} & $7\,790\,794$ & $84.4$ & $94.6$\\ \bottomrule \end{tabular} \end{minipage} \end{table} \subsubsection{Semi-updates}\label{ss:semi-upd-exp} In \cref{fig:snes_scaling_emp}-left we compare the speed of the different execution modes introduced in \cref{s:semi-updates}. The median time per generation is computed on the MNIST-500k\xspace model on 110 workers, where each worker has up to 10 CPU cores available; thus, for each generation size $\esgsm$, the amount of computational resources is constant. As can be seen, semi-updates provide a significant speedup. \begin{figure} \centering \setlength{\tabcolsep}{1pt} \begin{minipage}[t][][t]{.33\linewidth} \includegraphics[width=1\linewidth]{./figures/speed_uber_mnist_log.pdf} \end{minipage}% \begin{minipage}[t][][t]{.33\linewidth} \includegraphics[width=1\linewidth]{figures/snes_tinysep_fixvarb.pdf} \end{minipage}% \begin{minipage}[t][][t]{.33\linewidth} \includegraphics[width=1\linewidth]{figures/snes_tinysep_bn.pdf} \end{minipage} \caption{ Left - Seconds per generation of the MNIST-500k\xspace model (lower is better) for different execution modes, trained on 110 workers with varying generation sizes $\esgsm$ (x-axis in log scale). Batches and semi-updates are computed over $\esbsm = \esgsm / 100$. The proposed semi-updates scheme achieves the highest training speed and allows for larger generation sizes. Center - Performance of SNES\xspace on CIF-300k\xspace for different training data batch regimes. VarB computes the fitness on a variable batch for each SNES sample; FixB computes the fitness on a single batch fixed for the entire SNES generation; WFixB fixes the training batch only within each worker, and different workers use different batches. Right - Performance of SNES\xspace on CIF-300k\xspace with and without Batch Normalisation. } \label{fig:snes_scaling_emp} \end{figure} \subsection{Training on CIFAR-10\xspace with SNES\xspace}\label{ss:snes-cifar-exp} We have found that using SNES\xspace for training models on the CIFAR-10\xspace dataset requires a careful selection of hyper-parameters. To achieve 99\% test set accuracy on MNIST with SNES\xspace, it is typically sufficient to use a large number of parameter samples \esgst. But for CIFAR-10\xspace, it turned out to be challenging to achieve performance comparable to SGD. Due to the computational complexity, we perform most of the hyper-parameter selection experiments on a relatively small model with approx.\ 300k parameters -- CIF-300k\xspace -- and train it for 10k steps. It uses a parameter-efficient architecture based on separable convolutions~\citep{chollet16xception}. This model is bigger than MNIST-30k\xspace, but smaller than MNIST-3M\xspace. Each experiment is run on 50 Tesla P4 GPU workers and the average processing time is 20s per generation. For all the experiments, the results are computed over a generation size of $20\,000$. Each generation is by default evaluated on a batch of 256 training images. \paragraph{Batch normalization} Somewhat surprisingly, batch normalization (BN)~\citep{ioffe2015batch} is crucial for obtaining competitive results with SNES\xspace.
BN is often seen as an architectural block that simplifies training with SGD, but it turns out to be even more important for SNES\xspace training, as shown in \cref{fig:snes_scaling_emp}-right. A similar effect has also been observed for RL ES models \citep{salimans2017evolution, Mania2018}. \paragraph{Fixed versus variable batch} For the MNIST experiments, using a different batch of training examples for each sample of a generation provides better results than using the same fixed batch, as first noted in \cite{Zhang2017}. However, this does not hold for the CIFAR-10\xspace experiments, where we have consistently observed that it is important to evaluate each SNES\xspace sample within a generation on the same training batch, as shown in \cref{fig:snes_scaling_emp}-middle. We hypothesize that this could be due to the higher complexity of the CIFAR-10\xspace classification task compared to MNIST, which leads to increased variance of the fitness values when using different batches. With semi-updates, fitness shaping is performed individually by each worker, so the training batch only needs to be fixed within each worker for each semi-update. In fact, a fixed batch per semi-update, WFixB, obtains slightly better performance than FixB (\cref{fig:snes_scaling_emp}-middle) due to more training data per generation. It also allows for a simpler implementation, as the training data does not have to be synchronized among the workers. \subsection{Convergence and generation size}\label{ss:es-nsamples} Finally, we test the SNES\xspace performance versus the number of parameter samples \esgst per generation. In all experiments which we have performed with SNES\xspace, this has proven to be the most important parameter for improving performance, as also observed in \cite{Zhang2017}. For the MNIST models, we have run the training for 10k steps. Results are shown in \cref{tab:snes_nsamples}-left. For the MNIST-30k\xspace model, it is possible to achieve a slightly better accuracy of $99\%$ at 10k steps vs $98.7\%$ in \cite{Zhang2017}. For MNIST-3M\xspace we are able to achieve higher performance ($98.81\%$ test set acc.) than \cite{Zhang2017}, mainly due to a larger number of training steps, which was facilitated by the more efficient semi-updates execution model. \begin{table}[h] \centering \caption{Test set accuracy after 10k and 20k training steps for the MNIST (left) and CIFAR-10\xspace (right) datasets, respectively, with SGD and with SNES\xspace at various generation sizes \esgst.} \label{tab:snes_nsamples} \footnotesize \setlength{\tabcolsep}{2pt} \begin{tabular}{c c c} \begin{tabular}{c c c c c c c}\toprule & SGD &\multicolumn{5}{ c }{Generation size \esgst} \\ Model & Acc & 1k & 5k & 10k & 20k & 50k \\ \midrule MNIST-30k\xspace & 99.04 & 98.57 & 98.76 & 99.13 & 99.18 & 99.16 \\ MNIST-500k\xspace & 99.28 & 97.36 & 98.87 & 98.84 & 99.19 & 99.08 \\ MNIST-3M\xspace & 99.33 & 96.54 & 98.46 & 98.74 & 98.60 & 98.81 \\ \bottomrule \end{tabular} & ~ & \begin{tabular}{c c c c c c}\toprule & SGD &\multicolumn{4}{ c }{Generation size \esgst} \\ Model & Acc & 10k & 50k & 100k & 200k \\ \midrule CIF-300k\xspace & 88.7 & 80.03 & 86.32 & 88.17 & 89.48 \\ CIF-900k\xspace & 88.79 & 81.48 & 87.08 & 88.47 & 89.08 \\ \bottomrule \end{tabular} \end{tabular} \end{table} In \cref{tab:snes_nsamples}-right, we show the accuracy obtained with SNES\xspace for the CIFAR-10\xspace models. The models use batch normalization and a fixed mini-batch of 256 training images per worker (WFixB).
Similarly to the MNIST models, it is possible to reach performance comparable to SGD with a sufficient number of parameter samples. \paragraph{Number of training samples per evaluation} Similarly to SGD \cite{Smith2017}, SNES\xspace performance can be improved by evaluating the fitness function on more training examples (a larger batch size), as can be seen in \cref{tab:snes_batchsize}. We hypothesize that this is due to the reduced variance of the fitness values. However, in our experiments the generation size $\esgsm$ tended to have a larger effect on the final performance. \begin{table}[h] \footnotesize \caption{Performance of SNES\xspace on CIF-300k\xspace versus the size of the training mini-batch.} \label{tab:snes_batchsize} \centering \begin{tabular}{c| c c c} \toprule Batch Size & 128 & 256 & 512 \\ \midrule Val Acc & $78.61\%$ & $81.12\%$ & $83.03\%$ \\ Gen [s] & 13.31 & 20.05 & 33.3 \\ \bottomrule \end{tabular} \end{table} \subsection{Discussion} The empirical results show that with the right algorithmic and engineering changes, it is possible to scale up SNES\xspace and tackle tasks of higher complexity. We believe that with sufficient effort SNES\xspace can be scaled to even larger models than we have shown. Compared to standard SGD, SNES\xspace only needs to infer the model, which has the potential to enable training neural nets on inference-only or inference-optimized hardware. \section{Hybrid ES\xspace for sparse supervised models}\label{sec:hybrid} \begin{algorithm}[h] \footnotesize \KwIn{fitness $f$, diff. params. $\theta_{init}$, sparsity control $k$, temperature $\tau$, learning rates $\eta_{\bm{l}}, \eta_{\theta}$, steps number $S$} $\bm{l} \leftarrow 0$ \; \For{Step = 1 \dots $S$}{ \For{i = 1 \dots n}{ sample mask $\mathbf{m}_i \sim \mathcal{C}(\mathbf{p})$ (sampled $k$ times)\; $\theta_i \leftarrow \mathbf{m}_i \odot \theta$\; evaluate the fitness $f(\theta_i)$;~compute $\nabla_{\theta_i} f$\; } compute utilities $u_i$\; compute $\nabla_{\theta} f = \frac{1}{n}\sum_{i=1}^n \nabla_{\theta_i} f$\; update differentiable weights $\theta = \theta + \eta_{\theta} \nabla_{\theta} f$\; compute $\nabla_{\bm{l}} J = \frac{1}{\tau} \sum_{i=1}^{n} u_i \cdot (1 - \mathbf{p}) \odot \mathbf{m}_i$\; update mask distribution $\bm{l} \leftarrow \bm{l} + \eta_{\bm{l}} \nabla_{\bm{l}} J$\; } \caption{C-ES\xspace, a hybrid of ES and SGD}\label{alg:sgd_es} \end{algorithm} We have shown that it is possible to use ES\xspace for learning the differentiable parameters of large supervised models. However, a key advantage of black-box optimization algorithms is that they are able to train models with non-differentiable parameters. In this section we show that it is indeed possible to combine conventional SGD optimization with ES\xspace for learning the weight sparsity masks of sparse neural networks. We use SGD to train the differentiable weights, and ES\xspace for learning the masks at the same time. This leads to \emph{C-ES\xspace}, a hybrid ES\xspace-SGD scheme which works as follows. At each training step, a generation of mask samples is drawn from the sparsity mask distribution. Each mask sample zeroes out a subset of the model weights, and the resulting sparse weights are then evaluated to obtain the mask fitness, which is used to compute the mask update with ES\xspace. Simultaneously, each worker performs gradient descent w.r.t.\ its current non-zero weights, and the weight gradients are then averaged across all workers to perform a step of SGD with momentum. The algorithm is specified in \cref{alg:sgd_es}.
This method, similarly to Drop-Connect \citep{wan2013regularization}, randomly zeroes model parameters. However, we replace the constant uniform distribution with a distribution optimized by ES. \subsection{Sparsity mask distributions} Mask samples $\mathbf{m}_i \in \{0,1\}^d, i = 1 \dots n$, are modelled with a multinomial distribution. To sample a mask, we repeatedly draw $\catnsm$ indices from a categorical distribution $\mathcal{C}(\mathbf{p})$, where $\catnsm$ controls the number of non-masked weights. We model the distribution probabilities $\mathbf{p}$ with a softmax function $p_j = \operatorname{exp}(\lval{j}/\tau) / \Sigma_c \operatorname{exp}(\lval{c}/\tau)$, where $\tau$ is a temperature and $\bm{l} \in \mathbb{R}^\esdimm$ is a vector of distribution parameters, learned with ES\xspace. For each sample $j \sim \mathcal{C}(\mathbf{p})$, we set $m_j \leftarrow 1$, i.e.\@\xspace we sample which model parameters are retained (not zeroed out), and the model is evaluated with $f(\mathbf{m \odot \theta})$. We approximate the derivatives of the multinomial distribution with the derivatives of the PMF of $\mathcal{C}$, $g(x = j \vert \mathbf{p}) = p_j$: \begin{equation} \frac{\partial \ln g(x \vert \mathbf{p})}{\partial l_j} = \begin{cases} 1 - p_j &\mbox{if } x=j \\ 0 &\mbox{if } x\neq j \end{cases} \end{equation} and an ES update from $\esgsm$ mask samples is approximated as: \begin{equation} \nabla_{\bm{l}} J \leftarrow \sum_{i=1}^{n} u_i \cdot \frac{1 - \mathbf{p}}{\tau} \odot \mathbf{m}_i \end{equation} where $u_i$ is the utility of sample $i$. We do not use natural gradients. The sparsity mask distribution parameters are updated as $\bm{l} \leftarrow \bm{l} + \eta_{\bm{l}} \cdot \nabla_{\bm{l}} J$ with learning rate $\eta_{\bm{l}}$. An alternative way of modelling weight sparsity is to use a separate Bernoulli distribution for each differentiable parameter \citep{Williams1992}. However, this does not allow for controlling the overall sparsity of the network, as the sparsity of each weight is learned independently of the others. Although it might be possible to control the sparsity of Bernoulli-parameterized masks using a special form of regularization as in~\citep{louizos2018learning}, in this work we opted for modelling the sparsity masks with a multinomial distribution as described above, as it allows for direct control over the sparsity. Details about sampling without replacement are given in the following section. \subsection{Sampling from multinomial distributions}\label{ss:cat-samplers} A mask $\mathbf{m}$, sampled from a multinomial distribution, is an outcome of $\catnsm$ draws from a categorical distribution $\mathcal{C}$. We have observed that the implementation of the mask sampler is crucial. Not only does it affect the ability to scale to large models, but it also has a significant influence on the final performance. The standard multinomial sampler, referred to as MS-wR\xspace, implements sampling from the categorical distribution with replacement. For this method, as the number of sampler invocations $\catnsm$ increases, fewer unique indices are sampled due to the increased number of collisions, i.e.\@\xspace $\| \mathbf{m} \|_1 < \catnsm$. Fixed sparsity can be achieved with various methods. As a baseline which achieves $\| \mathbf{m} \|_1 = \catnsm$, we sample $\catnsm - \| \mathbf{m} \|_1$ additional unique non-zero indices, uniformly from the non-masked ones. This method is referred to as MS-wR+u\xspace.
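Putting the pieces together, the following is a minimal NumPy sketch of the mask distribution and of one C-ES\xspace step of \cref{alg:sgd_es} (our illustration, reusing \texttt{utilities} from the SNES listing; it uses plain MS-wR\xspace sampling with replacement, and \texttt{grad\_f} is a placeholder for the fitness gradient w.r.t.\ the weights):
\begin{verbatim}
import numpy as np

def softmax(l, tau):
    e = np.exp((l - l.max()) / tau)
    return e / e.sum()

def sample_mask(p, k, rng):
    """MS-wR: k draws from Categorical(p) with replacement, so collisions
    may leave fewer than k ones in the mask."""
    m = np.zeros(p.size)
    m[rng.choice(p.size, size=k, p=p)] = 1.0
    return m

def update_logits(l, masks, fitness, tau, eta_l):
    """ES step on the mask-distribution parameters l (no natural gradient)."""
    p = softmax(l, tau)
    u = utilities(np.asarray(fitness))
    grad = sum(u_i * (1.0 - p) * m_i / tau for u_i, m_i in zip(u, masks))
    return l + eta_l * grad

def ces_step(theta, l, f, grad_f, n, k, tau, eta_theta, eta_l, rng):
    """One hybrid step: ES on the mask logits, plus a gradient-ascent step
    on the fitness for the (masked) differentiable weights."""
    p = softmax(l, tau)
    masks = [sample_mask(p, k, rng) for _ in range(n)]
    fits = [f(m * theta) for m in masks]
    g = np.mean([m * grad_f(m * theta) for m in masks], axis=0)
    return theta + eta_theta * g, update_logits(l, masks, fits, tau, eta_l)
\end{verbatim}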
The MS-wR+u\xspace method does not sample exactly from the original distribution but gives uniform mass to the remaining samples. However, in our experiments sampling exactly $\catnsm$ indices without replacement tends to yield better results. Unfortunately, we have not found a computationally efficient GPU implementation of this, so instead we consider two different approximations. The first, MS-woRb\xspace, splits $\catnsm$ into $m$ batches. For each batch we sample $\catnsm/m$ indices using MS-wR\xspace and remove them from the distribution. This method has the advantage that for $m = \catnsm$ it converges to exact sampling without replacement, while for small $m$ it is more computationally efficient. However, unless $m = \catnsm$, it does not guarantee $\| \mathbf{m} \|_1 = \catnsm$. In the other approximation, MS-tN\xspace, we sample $M \times \catnsm$ indices from the categorical distribution (where $M>1$ is a sufficiently large number), accumulate the sampled indices in a histogram and use its top-$\catnsm$ indices to form the mask. To break possible ties, we add small uniform noise $\mathcal{U}(0, 10^{-3})$ to the histogram\footnote{Stable sort, used in many top-N implementations, is by definition biased towards lower indices.}. \paragraph{Fast sampler implementations} An efficient implementation of the random sampler for $\mathcal{C}(\mathbf{p})$ is crucial for the execution speed. The current multinomial sampler in Tensorflow is based on Gumbel-softmax sampling \citep{maddison2017concrete, jang2017gumbel}. The downside is that it requires generating $k \esdimm$ random values and the same amount of storage. When $\esdimm$ is in the order of millions and $k$ itself is a tenth of that size (for $90\%$ sparse models), the product $\frac{\esdimm^2}{10}$ is prohibitively large, both in terms of computation cost and memory. A more efficient strategy is to use inverse CDF sampling. It computes the CDF of the distribution using a prefix sum \cite{blelloch1990pre, harris2007parallel} in $O(\esdimm)$ time, then generates $k$ random numbers and performs a sorted binary search into the CDF in $O(k \log \esdimm)$, with storage requirements of only $O(k + \esdimm)$. We employ CUB\footnote{\url{https://nvlabs.github.io/cub/}} to implement fast prefix sums, and use existing fast GPU RNG generators to implement a fast GPU binary search similar to the one in Thrust~\cite{bell2012thrust}. \subsection{Experimental results} In this section we evaluate C-ES\xspace for training sparse feed-forward and recurrent models. First, we deploy the proposed method on the feed-forward CIFAR-10\xspace classification models CIF-300k\xspace and CIF-8M\xspace (see \cref{ss:models}), and sparsify all layers apart from the last, fully connected layer. We then show that C-ES\xspace is also applicable to a different kind of model, and use it to train SparseWaveRNN\xspace~\cite{kalchbrenner2018efficient}, a state-of-the-art recurrent text-to-speech model. \subsubsection{Feed-forward models}\label{ss:exps-ff} In this section we show results for the feed-forward image classification models. First we investigate hyper-parameter selection, and then we compare the results against the pruning baseline \cite{NarangDSE17}, which progressively masks model parameters whose magnitude is close to zero. By default, in all the following experiments with C-ES\xspace training, the models are trained from the start with an initial sparsity of $50\%$.
The ES learning rate is set to $\eta_{\bm{l}} = 0.1$, the softmax temperature to $\tau = 3$, and the generation size to $\esgsm=9$. We use a single C-ES\xspace distribution for all weights. At test time, the mask is formed by the top-$\catnsm$ indices of $\bm{l}$. \paragraph{Sampling methods} In \cref{tab:cates_samplers} we show results for the different sampling methods introduced in \cref{ss:cat-samplers}. This experiment is performed on CIF-300k\xspace for $k = \esdimm / 2$ ($50\%$ sparsity), trained for 10k training steps. As can be seen, MS-wR\xspace, which does not ensure constant sparsity, reaches the worst performance. With MS-wR+u\xspace, where we sample the additional indices from a uniform distribution, the final accuracy is considerably improved. The variants of MS-woRb\xspace increase the accuracy even further, and the best performing method is MS-tN\xspace. We believe this is because the method amplifies higher probabilities and keeps a constant number of non-zero weights, while still sampling model parameters with lower mask probabilities from the distribution. \begin{table}[h] \centering \footnotesize \caption{Performance versus sampling method after 10k training steps, $\esgsm=200$. Dense test accuracy $87.65\%$.} \label{tab:cates_samplers} \setlength{\tabcolsep}{3pt} \begin{tabular}{c| c c | c c c | c c c} \toprule Test Acc & \multirow{2}{*}{wR} & \multirow{2}{*}{wR+u} & \multicolumn{3}{c|}{woRb, m=} & \multicolumn{3}{c}{tN, M=} \\ $[\%]$ & & & $3$ & $4$ & $5$ & $2$ & $3$ & $5$ \\ \midrule CIF-300k\xspace & $61.8$ & $74.2$ & $83.5$ & $83.4$ & $83.7$ & $85.6$ & $87.6$ & $88.6$ \\ \bottomrule \end{tabular} \end{table} In all experiments of the main text, we use the MS-tN\xspace approximation with $M=5$. \paragraph{C-ES\xspace and generation size} Empirically, we have observed that the generation size has a surprisingly limited effect on C-ES\xspace training, as can be seen in \cref{tab:cates_cifsep_gs}. As ConvNets are often trained with DropOut \cite{srivastava14a} or its close variant DropConnect \cite{wan2013regularization} as a form of regularization, they are known to be robust against randomly dropped weights or activations. We hypothesize that because of this robustness, finding a sparsity mask distribution for the current local optimum of the differentiable weights, towards which the SGD converges, is a considerably simpler task. Conveniently, a small $\esgsm$ allows us to make use of standard pipelines for multi-GPU SGD training. \begin{table}[h] \centering \footnotesize \caption{Test set accuracy after 20k training steps of C-ES\xspace for 50\% sparsity with different generation sizes \esgst. } \label{tab:cates_cifsep_gs} \setlength{\tabcolsep}{4pt} \begin{tabular}{c | c c c c c c} \toprule Test Acc & \multicolumn{6}{c}{Generation Size \esgst} \\ $[\%]$ & 2 & 5 & 10 & 50 & 100 & 200 \\ \midrule CIF-300k\xspace & 87.33 & 88.13 & 88.48 & 88.54 & 88.32 & 88.72 \\ \bottomrule \end{tabular} \end{table} \paragraph{Comparison to pruning} In this section we compare C-ES\xspace against a more conventional pruning-based method for training sparse models~\cite{NarangDSE17}. For C-ES\xspace, the update of the distribution is computed over 9 parameter samples per generation. We use a similar sparsity schedule for both algorithms: the models are trained for $2000$ steps with the initial sparsity ($0\%$ for pruning and $50\%$ for C-ES\xspace), and then the sparsity is increased monotonically until it reaches the final sparsity at step 50k.
Overall, training is performed for 80k steps, using SGD with momentum, weight decay, and a learning rate schedule\footnote{$0.1$ for 40k steps, then $0.01$, and $0.001$ for the last 20k steps.}. The results of the C-ES\xspace and pruning methods, together with the execution speed, are summarized in \cref{tab:cifar_cates}-left. Generally, C-ES\xspace provides results competitive with the pruning baseline. As C-ES\xspace allows us to train sparse models from the first training step, we test different initial sparsities in \cref{tab:cifar_cates}-right, which shows that C-ES\xspace is remarkably stable across initial sparsities, even for models trained from scratch with only $10\%$ dense weights. In this experiment we additionally compare against a model trained with a fixed mask distribution -- ``FixMask'' -- where we set the ES learning rate to zero. It shows that training sparse over-parameterised models, such as CIF-8M\xspace, is possible even with a fixed sparsity mask; however, it fails for models with fewer parameters, where learning the mask is important. \begin{table}[h] \centering \caption{Test set accuracy of C-ES\xspace trained on CIFAR-10\xspace with a constant initial sparsity of $50\%$ (left) and a constant target sparsity of $90\%$ (right). Results after 80k steps. FPS for the C-ES\xspace method is measured on 3 NVIDIA Tesla P100 GPUs. } \label{tab:cifar_cates} \footnotesize \setlength{\tabcolsep}{2pt} \begin{minipage}[t]{.49\linewidth} \begin{tabular}{l l c c c c c} \toprule & & \multicolumn{4}{c }{Final sparsity} & \\ Model & Method & $0\%$ & $50\%$ & $80\%$ & $90\%$ & FPS \\ \midrule \multirow{2}{*}{CIF-300k\xspace} & Pruning & \multirow{2}{*}{91.73} & 90.98 & 88.02 & 80.62 & 30.6\\ & C-ES\xspace & & 89.13 & 84.98 & 79.58 & 3.4\\ \midrule \multirow{2}{*}{CIF-900k\xspace} & Pruning & \multirow{2}{*}{92.01} & 90.67 & 88.37 & 81.74 & 25.8\\ & C-ES\xspace & & 89.01 & 86.1 & 81.94 & 2.6\\ \midrule \multirow{2}{*}{CIF-8M\xspace} & Pruning & \multirow{2}{*}{94.6} & 93.32 & 92.64 & 92.24 & 20.7\\ & C-ES\xspace & & 93.94 & 92.87 & 90.83 & 1.32\\ \bottomrule \end{tabular} \end{minipage}~~% \begin{minipage}[t]{.49\linewidth} \begin{tabular}{ c c c c c c c} \toprule & & \multicolumn{5}{ c }{Initial sparsity} \\ Model & Method & $10\%$ & $30\%$ & $50\%$ & $70\%$ & $90\%$ \\ \midrule \multirow{2}{*}{CIF-300k\xspace} & C-ES\xspace & 80.15 & 79.46 & 79.86 & 80.23 & 79.58 \\ & FixMask\xspace & 69.84 & 64.22 & 71.28 & 71.24 & 65.79 \\ \midrule \multirow{2}{*}{CIF-900k\xspace} & C-ES\xspace & 80.87 & 81.73 & 81.94 & 82.61 & 83.76\\ & FixMask\xspace & 69.69 & 72.91 & 74.78 & 74.67 & 74.74 \\ \midrule \multirow{2}{*}{CIF-8M\xspace} & C-ES\xspace & 90.21 & 90.16 & 90.56 & 90.96 & 91.77 \\ & FixMask\xspace & 90.01 & 90.01 & 90.37 & 90.59 & 90.46 \\ \bottomrule \end{tabular} \end{minipage} \end{table} \subsubsection{Recurrent models} In this section we show results on a large sparse recurrent network, SparseWaveRNN\xspace~\cite{kalchbrenner2018efficient}, for the text-to-speech task. We trained it on a dataset of 44 hours of North American English speech recorded by a professional speaker. The generation is conditioned on conventional linguistic features and pitch information. All compared models synthesize raw audio at 24 kHz in 16-bit format. The evaluation is carried out on a held-out test set. We perform experiments on two models -- one with 448 (WR448\xspace) and another with 1792 (WR1792\xspace) hidden state variables.
As in~\cite{kalchbrenner2018efficient}, we do not sparsify the 1D convolutions at the network input, which have approximately $8M$ parameters. In total, WR448\xspace has $420k$ and WR1792\xspace $23.7M$ masked parameters. For all experiments with C-ES\xspace training, the models are trained with an initial sparsity of $50\%$ and generation size $\esgsm = 8$ on 8 GPUs. The sparsity starts to increase after 40k steps and reaches the final sparsity after 251k steps. Model parameters are trained with the ADAM optimizer and a constant learning rate of $2 \cdot 10^{-4}$. Otherwise, we use the same C-ES\xspace hyper-parameters as in \cref{ss:exps-ff}. We use a separate mask distribution for each parameter tensor, which offers slightly better execution speed. As noted in~\cite{kalchbrenner2018efficient}, in practical applications it might be beneficial to have the sparsity pattern in the form of contiguous blocks, so we train the models with different sparsity block widths (width $1$ corresponds to an unconstrained sparsity pattern, as in the experiments above). This is implemented by using the same sparsity mask value for several contiguous weights. \begin{table}[h] \centering \caption{NLL on the test set at 300k steps (lower is better) of the recurrent Sparse WaveRNN models. Left - Comparison of pruning and C-ES\xspace on various models and block widths. Processing speed (FPS) is measured on 8 NVIDIA Tesla P100 GPUs, for block width 1. Right - C-ES\xspace versus FixMask trained on WR1792\xspace for different initial sparsities, block widths, and a target sparsity of $97\%$.} \label{tab:res_wavernn} \footnotesize \setlength{\tabcolsep}{2pt} \begin{minipage}[t]{.49\linewidth} \begin{tabular}{l l | c | c | c c c | c } \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & SGD & \multirow{2}{*}{Spar.} & \multicolumn{3}{c |}{Block Width} & \multirow{2}{*}{FPS} \\ & & NLL & & 1 & 2 & 16 & \\\midrule \multirow{2}{*}{WR448\xspace} & Pruning & \multirow{2}{*}{5.72} & \multirow{2}{*}{$50\%$} & 5.81 & 5.78 & 5.75 & 1.16 \\ & C-ES\xspace & & & 6.02 & 5.91 & 5.80 & 1.06 \\ \midrule \multirow{2}{*}{WR1792\xspace} & Pruning & \multirow{2}{*}{5.43} & \multirow{2}{*}{$97\%$} & 5.49 & 5.48 & 5.52 & 0.49 \\ & C-ES\xspace & & & 5.64 & 5.61 & 5.56 & 0.45 \\ \bottomrule \end{tabular} \end{minipage}% \begin{minipage}[t]{.49\linewidth} \centering \setlength{\tabcolsep}{2pt} \begin{tabular}{ c l | c c c c} \toprule & & \multicolumn{3}{ c }{Initial sparsity} \\ Model & Method & $20\%$ & $50\%$ & $80\%$ & $90\%$ \\ \midrule \multirow{2}{*}{WR1792\xspace BW1} & C-ES\xspace & 5.66 & 5.64 & 5.63 & 5.63 \\ & FixMask\xspace & 5.73 & 5.72 & 9.54 & 9.54 \\ \midrule \multirow{2}{*}{WR1792\xspace BW16} & C-ES\xspace & 5.58 & 5.56 & 5.56 & 5.57 \\ & FixMask\xspace & 5.65 & 5.65 & 9.54 & 9.54 \\ \bottomrule \end{tabular} \end{minipage}% \end{table} The results are shown in \cref{tab:res_wavernn}. The computational overhead of C-ES\xspace is approximately equal to the generation size; however, C-ES\xspace is easily parallelized over multiple computational devices. With each mask sample evaluated on a single GPU, the execution speed is comparable to pruning, even though more than 57M random numbers have to be generated per training step on each GPU. In all cases, the C-ES\xspace method is competitive with the pruning baseline. In general, it performs better for larger block widths due to the reduced number of C-ES\xspace parameters.
However, contrary to pruning, C-ES\xspace allows us to train the sparse model from scratch, as shown in \cref{tab:res_wavernn}-right, which opens up possibilities for accelerated sparse training, even though we have not investigated this further here. Contrary to the feed-forward models, a fixed mask distribution (``FixMask'') does not work well for high initial sparsities. Additionally, C-ES\xspace might allow for optimizing non-differentiable factors such as execution speed. \section{Conclusion} In this work we have investigated the applicability of Evolution Strategies to training more complex supervised models. We have shown that, using the appropriate ``tricks of the trade'', it is possible to train such models to an accuracy comparable to SGD. Additionally, we have shown that hybrid ES is a viable alternative for training sparsity masks, allowing sparse models to be trained from scratch in the same time as dense models when the ES samples are parallelized across multiple computation devices. Considering that ES is often seen as a prohibitively slow method only applicable to small problems, the significance of our results is that ES should be seriously considered as a complementary tool in the DL practitioner's toolbox, one which could be useful for training non-differentiable parameters (sparsity masks and beyond) in combination with SGD. We hope that our results, albeit not the state of the art, will further reinvigorate interest in ES and black-box methods in general. \paragraph{Acknowledgements} We would like to thank David Choi and Jakub Sygnowski for their help developing the infrastructure used by this work.
\section{Introduction} \label{intro} The so-called non-dissipative transport effects have been widely discussed recently \cite{Landsteiner:2012kd,semimetal_effects7,Gorbar:2015wya,Miransky:2015ava,Valgushev:2015pjn,Buividovich:2015ara,Buividovich:2014dha,Buividovich:2013hza}. In high energy physics these effects are expected to be observed in non-central heavy ion collisions, when the fireballs are in the presence of both magnetic field and rotation~\cite{ref:HIC}. Such effects have also been considered for the recently discovered Dirac and Weyl semimetals \cite{semimetal_effects6,semimetal_effects10,semimetal_effects11,semimetal_effects12,semimetal_effects13,Zyuzin:2012tv,tewary,16}. Their family was expected to include, in particular, the chiral separation effect (CSE) \cite{Metl}, the chiral magnetic effect (CME) \cite{CME,Kharzeev:2015znc,Kharzeev:2009mf,Kharzeev:2013ffa}, the chiral vortical effect (CVE) \cite{Vilenkin}, and the anomalous quantum Hall effect (AQHE) \cite{TTKN,Hall3DTI,Zyuzin:2012tv}. It is widely believed that all of the phenomena mentioned above have the same origin: the chiral anomaly. For example, a direct ``derivation'' of the CME from the standard expression for the chiral anomaly has been presented \cite{Zyuzin:2012tv,CME,SonYamamoto2012}. Nevertheless, a more accurate consideration demonstrates that the original equilibrium version of the CME does not exist. In \cite{Valgushev:2015pjn,Buividovich:2015ara,Buividovich:2014dha,Buividovich:2013hza} this was demonstrated using numerical lattice methods. In the context of condensed matter theory the absence of the CME was reported for a particular model of a Weyl semimetal \cite{nogo}. The same conclusion has been obtained using the conjectured no-go Bloch theorem \cite{nogo2}. An analytical proof of the absence of the equilibrium CME was presented in \cite{Z2016_1,Z2016_2} using the technique of Wigner transformation \cite{Wigner,star,Weyl,berezin} applied to lattice models\footnote{The CME may, though, survive in a certain form as a non-equilibrium kinetic phenomenon; see, for example, \cite{ZrTe5}.}. The same technique allowed one to reproduce the known results on the AQHE \cite{Z2016_1} and to confirm the existence of the equilibrium CSE \cite{KZ2017}, both in the framework of quantum field theory and in solid state physics. Below we review the technique mentioned above, which expresses the response of various non-dissipative currents to external magnetic or electric fields through topological invariants in momentum space. In the framework of the naive non-regularized quantum field theory the CSE was discussed, for example, in \cite{Gorbar:2015wya}, using a technique similar to the one used for the consideration of the CME \cite{CME} in Dirac semimetals and of the AQHE in Weyl semimetals \cite{Zyuzin:2012tv}. However, the more refined calculations reported here confirm the existence of the CSE and AQHE in the mentioned systems and demonstrate the absence of the equilibrium CME \cite{Z2016_1,Z2016_2,KZ2017}. It is worth mentioning that the investigation of momentum space topology was previously developed mainly within condensed matter theory. It allows one to relate the gapless boundary fermions to the bulk topology of topological insulators, and to describe the stability of the Fermi surfaces and Fermi points, as well as the fermion zero modes on vortices (for a review see \cite{Volovik2003,Volovik:2011kg}).
It is worth mentioning that the momentum space topology of QCD has been discussed recently in \cite{Z2016_3}, while the whole Standard Model of fundamental interactions has been considered as a topological material in \cite{Volovik:2016mre}. \section{Lattice models in momentum space} \label{SectWigner} Here we briefly consider lattice models in momentum space, following the methodology of \cite{Z2016_1,Z2016_2}. In the absence of the external gauge field the partition function of the theory defined on the infinite lattice is \begin{equation} Z = \int D\bar{\psi}D\psi \, {\rm exp}\Big( - \int_{\cal M} \frac{d^D {p}}{|{\cal M}|} \bar{\psi}^T({p}){\cal G}^{-1}({ p})\psi({p}) \Big)\label{Z1} \end{equation} Here $|{\cal M}|$ is the volume of momentum space $\cal M$, and $D$ is the dimensionality of space-time. $\bar{\psi}$ and $\psi$ are the Grassmann-valued fields defined in momentum space $\cal M$. The Green function $\cal G$ is specific to the given system. For the frequently used model of $3+1$D Wilson fermions it is given by \begin{equation} {\cal G}({p}) = - i \Big(\sum_{k}\gamma^{k} g_{k}({p}) - i m({p})\Big)^{-1}\label{G10} \end{equation} where $\gamma^k$ are Euclidean Dirac matrices, while $g_k({p})$ and $m({p})$ are real-valued functions ($k = 1,2,3,4$) given by \begin{equation} g_k({p}) = {\rm sin}\, p_k, \quad m({p}) = m^{(0)} + \sum_{a=1,2,3,4} (1 - {\rm cos}\, p_a)\label{gWilson} \end{equation} The fields in coordinate space are related to the fields in momentum space by $\psi({r}) = \int_{\cal M} \frac{d^D {p}}{|{\cal M}|} e^{i {p}{r}} \psi({p})$. This allows us to define formally the values of the fields at any other values of $r$, not only at the lattice sites. The partition function takes the form \begin{equation} Z = \int D\bar{\psi}D\psi \, {\rm exp}\Big( - \sum_{{r}_n} \bar{\psi}^T({r}_n)\Big[{\cal G}^{-1}(-i\partial_{r})\psi({ r})\Big]_{{r}={r}_n} \Big)\label{Z2} \end{equation} Here the sum in the exponent is over the discrete coordinates ${r}_n$, while the operator $-i\partial_{r}$ acts on the function $\psi({r})$ of the continuous variable defined above. \section{Introduction of the gauge field} In the presence of a constant external gauge field corresponding to the potential $A(x)$, we may represent the partition function (up to terms irrelevant in the low energy effective theory) as follows \cite{Z2016_1,Z2016_2} \begin{eqnarray} Z = \int D\bar{\psi}D\psi \, {\rm exp}\Big( - \int_{\cal M} \frac{d^D {p}}{|{\cal M}|} \bar{\psi}^T({p})\hat{\cal Q}(i{\partial}_{p},{p})\psi({p}) \Big)\label{Z4} \end{eqnarray} Here \begin{equation} \hat{\cal Q} = {\cal G}^{-1}({p} - {A}(i{\partial}^{}_{p}))\label{calQM} \end{equation} while the pseudo-differential operator ${A}(i\partial_{p})$ is defined as follows. First, we represent the original gauge field ${A}({r})$ as a series in powers of the coordinates ${r}$. Next, the variable ${r}$ is substituted in this expansion by the operator $i\partial_{p}$. Besides, in Eq. (\ref{calQM}) each product of the components of ${p} - {A}(i{\partial}^{}_{p})$ is substituted by the symmetric combination (for details see \cite{Z2016_1}). The electric current is the response of the effective action to a variation of the external electromagnetic field.
This gives \begin{eqnarray} j^k({R}) &=& \int_{\cal M} \frac{d^D {p}}{(2\pi)^D} \, {\rm Tr} \, \tilde{G}({R},{p}) \frac{\partial}{\partial p_k}\Big[\tilde{G}^{(0)}({R},{p})\Big]^{-1}\label{j423} \end{eqnarray} where the Wigner transformation of the Green function is \begin{equation} \tilde{G}({R},{p}) = \sum_{{r}={r}_n} e^{-i {p} {r}} G({R}+{r}/2,{R}-{r}/2)\label{Wl2} \end{equation} while the Green function itself is \begin{eqnarray} G({r}_1,{r}_2)&=& \frac{1}{Z}\int D\bar{\Psi}D\Psi \,\bar{\Psi}({r}_2)\Psi({r}_1) {\rm exp}\Big(-\sum_{{ r}_n}\Big[ \bar{\Psi}({r}_n)\Big[{\cal G}^{-1}(-i\partial_{r} \nonumber\\&& - {A}({r}))\Psi({r})\Big]_{{r}={r}_n}\Big]\Big)\nonumber \end{eqnarray} At the same time $ \tilde G^{(0)}({R},{p}) = {\cal G}({p}-{A}({R}))\label{Q0} $. In \cite{Z2016_1} the following expression was derived for the linear response of the electric current to the external electromagnetic field: \begin{eqnarray} j^{(1)k}({R}) &= & \frac{1}{4\pi^2}\epsilon^{ijkl} {\cal M}_{l} A_{ij} ({R}), \label{calM}\\ {\cal M}_l &=& \int_{} \,{\rm Tr}\, \nu_{l} \,d^4p \label{Ml} \\ \nu_{l} & = & - \frac{i}{3!\,8\pi^2}\,\epsilon_{ijkl}\, \Big[ {\cal G} \frac{\partial {\cal G}^{-1}}{\partial p_i} \frac{\partial {\cal G}}{\partial p_j} \frac{\partial {\cal G}^{-1}}{\partial p_k} \Big] \label{nuG} \end{eqnarray} Here the tensor $\cal M$ is a topological invariant in momentum space, i.e. it is not changed if the system is modified smoothly. This representation allows one to prove that the equilibrium CME does not exist for the system of massless fermions; the proof is presented in \cite{Z2016_2}. At nonzero chiral chemical potential $\mu_5$ we may consider the following expression for the fermion Green function in the absence of the gauge field: \begin{equation} {\cal G}^{}({\bf p}) = \Big(\sum_{k}\gamma^{k} g_{k}({\bf p}) + i\gamma^4 \gamma^5 \mu_5 - i m({\bf p})\Big)^{-1}\label{G2} \end{equation} where $g_k({\bf p})$ and $m({\bf p})$ are real-valued functions, $k = 1,2,3,4$. The function $m$ is defined in such a way that $\cal G$ has a pole corresponding to the massless fermion. We may substitute ${\cal G}$ of Eq. (\ref{G2}) into Eq. (\ref{calM}) while dealing with the linear response to the external magnetic field. In the non-marginal cases, including the ordinary regularization of QFT using Wilson fermions, we are able to bring the system by a smooth transformation from the state with nonzero $\mu_5$ and vanishing fermion mass to the state with vanishing $\mu_5$ and nonzero fermion mass. During such a modification no singularity of the Green function is encountered. Therefore, the value of the topological invariant ${\cal M}_4$ responsible for the CME is not changed. It may easily be calculated for the system of massive fermions, and this calculation gives ${\cal M}_4 =0$, which proves the absence of the corresponding CME current.
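As a numerical illustration of Eqs. (\ref{G10}) and (\ref{gWilson}), the following minimal sketch evaluates the Wilson fermion Green function at a given lattice momentum. The Hermitian Euclidean gamma-matrix representation chosen here is a standard one, and the parameter values are our own illustrative assumptions rather than anything fixed by the text.
\begin{verbatim}
import numpy as np

# Pauli matrices and a Hermitian Euclidean gamma representation
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z2, -1j * si], [1j * si, Z2]]) for si in s]
gamma.append(np.block([[Z2, I2], [I2, Z2]]))  # gamma^4

def wilson_green(p, m0=0.0):
    # G(p) = -i (sum_k gamma^k sin(p_k) - i m(p))^{-1}, Eq. (G10)
    g = [np.sin(pk) for pk in p]
    m = m0 + sum(1.0 - np.cos(pk) for pk in p)   # Eq. (gWilson)
    Ginv = sum(gk * gam for gk, gam in zip(g, gamma)) - 1j * m * np.eye(4)
    return -1j * np.linalg.inv(Ginv)

G = wilson_green(p=(0.3, 0.1, -0.2, 0.5))
\end{verbatim}
Momentum space integrals such as Eq. (\ref{Ml}) can then be approximated by summing such evaluations over a grid covering $\cal M$.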
\section{AQHE in $3+1$D systems} \label{SectHall3d} From the above expressions it follows that the Hall current is given by \cite{Z2016_1} \begin{equation} {j}^k_{Hall} = \frac{1}{4\pi^2}\,{\cal M}^\prime_l\,\epsilon^{jkl}E_j,\label{HALLj3d} \end{equation} where ${\cal M}^\prime_l = i{\cal M}_l/2$ is \begin{eqnarray} {\cal M}^\prime_l &=& \frac{1}{3!\,4\pi^2}\,\epsilon_{ijkl}\,\int_{} \,\,d^4p\,{\rm Tr} \Big[ {\cal G} \frac{\partial {\cal G}^{-1}}{\partial p_i} \frac{\partial {\cal G}}{\partial p_j} \frac{\partial {\cal G}^{-1}}{\partial p_k} \Big] \label{nuGHall} \end{eqnarray} Notice that, in the case of a non-interacting condensed matter system with the Green function ${\cal G}^{-1} = i \omega - \hat{H}({\bf p})$ expressed through the Hamiltonian $\hat{H}$, we may derive (similarly to the $2+1$D case considered above) the representation of the components of the topological invariant ${\cal M}^{\prime}_l$ with $l\ne 4$ through the Berry curvature $ {\cal F}_{ij}$: \begin{eqnarray} {\cal M}^\prime_l &=& \frac{\epsilon^{ijl}}{4\pi}\sum_{\rm occupied}\, \int d^3p\, {\cal F}_{ij} \end{eqnarray} Here the sum is over the occupied branches of the spectrum. The formalism developed in \cite{Z2016_1} allows one to demonstrate that for a wide class of Weyl semimetals the contribution of a pair of Weyl fermions to the AQHE electric current is given by \begin{equation} {j}^k_{Hall} = \frac{\beta}{2\pi^2}\,\epsilon^{jk3}E_j,\label{HALLj3dp} \end{equation} Here $E$ is the electric field, $\beta$ is the distance in momentum space between the Weyl fermions of opposite chirality, and the third axis is directed along the line connecting them. Thus the previously obtained result on the AQHE in Weyl semimetals is confirmed. Moreover, the same method allows one to predict the existence of topological insulators with the AQHE, in which the meaning of the constant $\beta$ is the length of the inverse lattice vector (proportional to $1/a$, where $a$ is the lattice spacing). \section{Linear response of the axial current to an external magnetic field} In the continuum theory the naive expression for the axial current is $\langle \bar{\psi} \gamma^\mu \gamma^5 \psi\rangle$. Several different definitions for a particular lattice regularization may give this expression in the naive continuum limit. In \cite{KZ2017} it has been proposed to define the axial current in lattice models as follows \begin{eqnarray} j^{5k}({R}) &=& \int_{\cal M} \frac{d^D {p}}{(2\pi)^D} \, {\rm Tr} \,\gamma^5\, \tilde{G}({R},{p}) \frac{\partial}{\partial p_k}\Big[\tilde{G}^{(0)}({R},{p})\Big]^{-1}\label{j5423} \end{eqnarray} where \begin{eqnarray} &&\tilde G^{(0)}({R},{p}) = {\cal G}({p}-{A}({R}))\label{Q01} \end{eqnarray} One can easily check that in the naive continuum limit this definition gives $\langle \bar{\psi} \gamma^k \gamma^5 \psi\rangle$. Let us regularize the expressions for the massless fermions using the finite temperature version of the lattice theory. For periodic boundary conditions in the spatial directions and anti-periodic boundary conditions in the imaginary time direction, the lattice momenta are \begin{equation}\label{disc} p_i \in (0, 2\pi);\, p_4=\frac{2\pi}{N_t }(n_4+1/2) \end{equation} where $i=1,2,3$ and $n_4=0,...,N_t-1$. The temperature is $T = 1/N_t$ in lattice units of $1/a$, where $a$ is the lattice spacing. Thus the imaginary frequencies are discrete, $p_4 = \omega_{n}=2\pi T (n+1/2)$ with $n= 0, 1, \dots, N_t-1$,
while the axial current is expressed via the Green functions as follows: \begin{eqnarray} j^{5k}&=&-\frac{i}{2}T\sum_{n=0}^{N_t-1}\int \frac{d^3p}{(2\pi)^3}{\rm Tr}\, \gamma^5 ({\cal G}(\omega_{n},\textbf{p})\partial_{p_{i}}{\cal G}^{-1}(\omega_{n},\textbf{p})\nonumber \\&&\partial_{p_{j}}{\cal G}(\omega_{n},\textbf{p})\partial_{p_{k}}{\cal G}^{-1}(\omega_{n},\textbf{p}))F_{ij}\label{calN} \end{eqnarray} Next, we introduce the chemical potential in the standard way, $\omega_{n} \to \omega_{n}-i\mu$. If $\gamma^5$ anti-commutes with the Green function in a small vicinity of its poles, then the term in the axial current linear in $\mu$ is \cite{KZ2017} \begin{equation} j^{5k}= \frac{{\cal N}\,\epsilon^{ijk}}{4\pi^2} F_{ij} \mu\label{jmuH} \end{equation} in the low temperature limit $T\to 0$. Here \begin{eqnarray} {\cal N}&=&\frac{\epsilon_{ijk}}{12}\int_{\Sigma}\frac{d^3p}{(2\pi)^2}{\rm Tr}\, \gamma^5 {\cal G}(\omega_{},\textbf{p})\partial_{p_{i}}{\cal G}^{-1}(\omega_{},\textbf{p})\nonumber \\&&\partial_{p_{j}}{\cal G}(\omega_{},\textbf{p}) \partial_{p_{k}}{\cal G}^{-1}(\omega_{},\textbf{p})\label{calN1} \end{eqnarray} while $\Sigma$ is a 3D hypersurface of infinitesimally small volume that encloses the singularities of the Green function concentrated at the Fermi surfaces (or Fermi points). The advantage of this representation is that Eq. (\ref{calN1}) is a topological invariant. In particular, in \cite{KZ2017} it has been demonstrated, using the regularization with Wilson and overlap fermions, that if the lattice model describes one massless Dirac fermion, then \begin{equation} {\cal N}= 1 \end{equation} In this way the validity of the naive result for the CSE obtained earlier has been established. In the presence of nonzero mass $m^{(0)}$ the situation changes. At $\mu \ge m^{(0)}$ the Fermi surface appears, and it contributes to the chiral current through Eq. (\ref{calN}). However, the result is not given by the simple expression of Eq. (\ref{jmuH}). \section{Conclusions} \label{sec-1} Above we reviewed the application of momentum space topology to the analysis of anomalous transport. This methodology works equally well both in lattice-regularized relativistic quantum field theory and in lattice models of solid state physics. In particular, it appears that the response of the electric current to an external electric field is expressed through a topological invariant in momentum space. Its nonzero value leads to the appearance of the quantum Hall effect. This allows one to calculate the AQHE conductivity for a wide class of condensed matter systems. The same technique allows one to prove the absence of the equilibrium CME in the lattice-regularized quantum field theory: the corresponding conductivity is proportional to a topological invariant in momentum space that vanishes for systems with finite chiral chemical potential. For systems of massless fermions a further development of this technique allows one to express through a topological invariant the terms in the axial current proportional to the external magnetic field and to the ordinary chemical potential. This is the topological invariant in momentum space responsible for the stability of the Fermi point. In this way we describe the axial current of the CSE, which is relevant for the description of the fireballs that appear in non-central heavy ion collisions. We show that in the considered cases the corresponding non-dissipative currents are proportional to momentum space topological invariants.
They are not changed when the system is deformed smoothly, without passing through a phase transition. The corresponding conductivities for complicated systems may therefore be calculated within simpler systems related to the original ones by a smooth deformation. On the technical side, our methodology is based on the derivative expansion applied to the Wigner transform of the two-point Green functions. We introduce the slowly varying external gauge field directly into the momentum space formulation of the lattice models. The external gauge field appears as a pseudo-differential operator ${\bf A}(i\partial_{\bf p})$. This way of incorporating the external field into the theory is useful for analytical derivations and allows us to obtain the remarkable relation mentioned above between the momentum space topological invariants and the non-dissipative currents. We started our consideration from Eq. (\ref{Z1}), thus neglecting the contributions of interactions. However, even a complicated interacting system may be described in a certain approximation by the same Eq. (\ref{Z1}). Then the function $\cal G$ incorporates all contributions of interactions to the two-point fermion Green functions. In this approach we neglect the contributions to the physical observables of fermion Green functions with more than two external lines. MZ kindly acknowledges numerous discussions with G.E. Volovik. Both authors are grateful to M.N. Chernodub for useful discussions. The work of ZK was supported by Russian Science Foundation Grant No. 16-12-10059.
\section{Introduction} In recent years, the propagation of the superparticle [1] and of the superstring [2] in the presence of external background superfields has been intensively investigated [3--7]. It was shown in the original paper [3] that the requirement of Siegel's local fermionic symmetry [8] in the $d=10$ superparticle action (a correct inclusion of interaction has to preserve the local symmetries of a theory) led to some constraints on the background. Namely, the full set of SYM constraints arose in the case of a SYM background superfield, and some part of the full set of supergravity constraints appeared in the case of a curved background. Different attempts to obtain the full set of supergravity constraints proceeding from the consideration of the superparticle in a curved background were proposed in Refs. [4, 9]. The straightforward way to obtain the superparticle model in a curved background includes three steps [3]. First of all, the flat action and gauge transformations are to be written in terms of the flat supervielbein. Second, the coordinates and supervielbein are set to be the coordinates and supervielbein of a curved superspace. Third, since the resulting action doesn't possess any local symmetry except the reparametrization invariance, it is necessary to demand that the $k$-transformations (arising at the first and second steps) be a symmetry of the superparticle action. One can note, however, that this procedure doesn't guarantee that the gauge transformations arising in such a way are of the most general form. In the present paper we consider the (1,0) superparticle in (1,0) curved superspace. Proceeding from the minimal set of constraints which must be imposed on the background geometry for a correct inclusion of interaction, we find the most general form of Siegel's local fermionic transformations for the theory. The algebraic structure of the arising transformations is investigated, and the requirements leading to the full set of (1,0) supergravity constraints are presented. We begin with a brief consideration of the model in flat superspace in Sec. 2. The full gauge algebra of the theory is shown to be closed off-shell, as opposed to the case of other dimensions. In Sec. 3 we examine the model in curved superspace within the Hamiltonian formalism and find the minimal set of constraints on the background necessary for a correct inclusion of interaction. We do this using Dirac's constraint formalism and requiring a correct number of first- and second-class constraints for the model. Our analysis here is closely related to that of Ref. [4]. Next, in the Lagrangian formalism we reconstruct the most general form of Siegel's transformations which are consistent with these restrictions on the background. It is interesting to note that the arising transformations contain non-trivial contributions including the torsion superfield and don't coincide with the direct generalization of the flat Siegel transformations. In Sec. 4 the full gauge algebra is evaluated and shown to be closed off-shell and nontrivially deformed as compared to the flat one. Further, we construct the formulation leading to the full set of (1,0) supergravity constraints. The full set follows from the requirement that the direct generalization of the flat gauge transformations is realized in the model. One can note the similarity with the (1,0) chiral superparticle [10]. In the conclusion the possibilities of extending the results to other dimensions are discussed.
\section{Off-shell closure of local symmetries} We use the real representation for the $\Gamma^M$-matrices in $d=2$ and light-cone coordinates for the bosonic variables. (1,0) superspace is parametrized by the coordinates $z^M=(x^+,x^-,\theta^\alpha )$, where $\theta^\alpha \equiv \left(\begin{array}{l}\theta^{(+)}\\ 0 \end{array} \right)$ is a Majorana--Weyl spinor. In these notations the action for the (1,0) superparticle in flat superspace has the following form: $$ S = \int {\rm d}\tau \,e^{-1}\Pi^+{\dot x}^- \eqno{(1)}$$ where we denoted $\Pi^+={\dot x}^+-{\rm i}\dot\theta^{(+)} \theta^{(+)}$ and $e$ is an auxiliary einbein field. The theory possesses global (1,0) supersymmetry $$ \delta\theta^{(+)}=\epsilon^{(+)}, \qquad \delta x^+ = {\rm i} \theta^{(+)}\epsilon^{(+)}. \eqno{(2)}$$ The local symmetries of the model include the standard reparametrization invariance with a bosonic parameter $\alpha (\tau )$, $$ \begin{array}{l} \delta_\alpha x^+ =\alpha{\dot x}^+, \qquad \delta_\alpha x^- =\alpha{\dot x}^-,\\ \delta_\alpha \theta^{(+)} =\alpha{\dot\theta}^{(+)}, \qquad \delta_\alpha e =(\alpha e)^{\cdot},\end{array} \eqno{(3)}$$ and Siegel's transformations with a fermionic parameter $k^{(-)} (\tau )$ $$ \begin{array}{l}\delta_k\theta^{(+)}=\Pi^+k^{(-)}, \qquad \delta_kx^-=0,\\ \delta_kx^+={\rm i}\delta_k\theta^{(+)}\theta^{(+)}, \qquad \delta_ke = 2{\rm i}ek^{(-)}\dot\theta^{(+)}.\end{array} \eqno{(4)}$$ An interesting property of the superparticle in (1,0) flat superspace is the off-shell closure of the gauge algebra for the model (this is not so in other dimensions). To check this assertion, consider the following transformations with a bosonic parameter $\xi^-$ $$ \delta_\xi x^-=-\xi^-\Pi^+{\dot x}^-, \qquad \delta_\xi e = \xi^-e^2 (e^{-1}\Pi^+)^{\cdot}. \eqno{(5)}$$ It is a trivial symmetry of the action in the sense that it vanishes on-shell and, consequently, doesn't remove any degrees of freedom of the theory. In the presence of this symmetry, however, the full gauge algebra turns out to be closed and has the following form: $$ \begin{array}{ll} [\delta_{k_1},\delta_{k_2}] = \delta_\alpha + \delta_\xi + \delta_{k_3}, &\qquad \alpha = 2{\rm i}k_2^{(-)}k_1^{(-)}\Pi^+,\cr {}&\qquad \xi^-=2{\rm i}k_2^{(-)}k_1^{(-)},\cr {}&\qquad k_3^{(-)}=2{\rm i}k_2^{(-)}k_1^{(-)}\dot\theta^{(+)};\cr [\delta_k,\delta_\xi ]=\delta_{\xi_1}, &\qquad \xi_1^-=2{\rm i}\xi^-k^{(-)}\dot\theta^{(+)};\cr [\delta_{\xi_1},\delta_{\xi_2}]=\delta_{\xi_3}, &\qquad \xi_3^-=(\xi_1^-\dot\xi_2^--\xi_2^-\dot\xi_1^-)\Pi^+;\cr [\delta_k,\delta_\alpha ]=\delta_{k_1}, &\qquad k_1^{(-)} = \alpha{\dot k}^{(-)}-\dot\alpha k^{(-)};\cr [\delta_\xi ,\delta_\alpha ]=\delta_{\xi_1}, &\qquad \xi_1^-=\alpha \dot\xi^--2\xi^-\dot\alpha ;\cr [\delta_{\alpha_1},\delta_{\alpha_2}]=\delta_{\alpha_3}, &\qquad \alpha_3=\alpha_2\dot\alpha_1-\alpha_1\dot\alpha_2. \end{array} \eqno{(6)}$$ An application of the Dirac procedure to the model (1) leads to the following constraint system $$ P_e\approx 0, \qquad P_{(+)}+{\rm i}\theta^{(+)}P_+\approx 0, \qquad P_+P_- \approx 0 \eqno{(7)}$$ where ($P_e,P_+,P_-,P_{(+)}$) are the canonically conjugate momenta for the variables ($e,x^+,x^-,\theta^{(+)}$), respectively. A specific feature of $d=2$ lies in the fact that the last condition implies two possibilities:\\ a) $P_+=0,~P_-\neq 0$ and, consequently, all constraints are first class;\\ b) $P_-=0,~P_+\neq 0$, whence we conclude that $P_e\approx 0$ and $P_- \approx 0$ are first class while $P_{(+)}+{\rm i}\theta^{(+)}P_+ \approx 0$ is second class.
\section{The model in a curved background} Introducing a set of basis one-forms $e^A={\rm d}z^M{e_M}^A$, where ${e_M}^A$ is the supervielbein, the extension of the (1,0) superparticle action to curved superspace can be written in the form $$ S=\int{\rm d}\tau\,e^{-1}\,{\dot z}^M{e_M}^+{\dot z}^N{e_N}^- \eqno{(8)}$$ where $M=(m,(+))$ are coordinate indices and $A=(+,-,(+))$ are tangent space indices of (1,0) curved superspace. World indices appear on the coordinates $z^M=(x^m,\theta^{(+)})$ and the supervielbein only. It should be noted that in the presence of an arbitrary background superfield the action (8) doesn't possess any local symmetry except the reparametrization invariance ($\delta_\alpha e =(\alpha e)^{\cdot}$, $\delta_\alpha z^M=\alpha{\dot z}^M$). Thus it is not yet a superparticle model. Let us find what constraints on the background geometry follow from the requirement of a correct number of degrees of freedom for the theory or, what is the same, from the requirement of a correct number of first and second class constraints for the model. The momenta conjugate to $e$ and $z^M$ are $$ P_e = 0, \qquad P_M=e^{-1}{e_M}^+{\dot z}^N{e_N}^- +e^{-1}{e_M}^-{\dot z}^N{e_N}^+ \eqno{(9)}$$ whence one gets the primary constraints $$ P_e\approx 0, \qquad P_{(+)}\approx 0 \eqno{(10)}$$ where we denoted $P_A\equiv {e_A}^MP_M$. The canonical Hamiltonian is given by $$ H=P_e\lambda_e+eP_+P_- + \lambda^{(+)}P_{(+)} \eqno{(11)}$$ where $\lambda_e$, $\lambda^{(+)}$ are Lagrange multipliers enforcing the constraints (10). The graded Poisson bracket has the form $$ \{A,B\}=(-)^{\epsilon_A\epsilon_N}\frac{\vec\partial A}{\partial z^N}~ \frac{\vec\partial B}{\partial P^N}-(-)^{\epsilon_A\epsilon_B + \epsilon_B\epsilon_N}\frac{\vec\partial B}{\partial z^N}~ \frac{\vec\partial A}{\partial P^N} \eqno{(12)}$$ where $\epsilon_A$ is the parity of a function $A$. It is straightforward to check that $$ \{P_A,P_B\}={T_{AB}}^CP_C-\left({\omega_{AB}}^C-(-)^{\epsilon_A \epsilon_B}{\omega_{BA}}^C\right)P_C \eqno{(13)}$$ where ${T_{AB}}^C$ are the components of the torsion two-form $$ {T_{BC}}^A=(-)^{\epsilon_B(\epsilon_M+\epsilon_C)}{e_C}^M{e_B}^N\left( \partial_N{e_M}^A+(-)^{\epsilon_N(\epsilon_M+\epsilon_B)}{e_M}^B {\omega_{NB}}^A -\right. $$ $$ \qquad{}\left.-(-)^{\epsilon_N\epsilon_M}(\partial_M{e_N}^A + (-)^{\epsilon_M(\epsilon_N+\epsilon_B)}{e_N}^B{\omega_{MB}}^A)\right) \eqno{(14)}$$ and ${\omega_{MA}}^B$ are the components of the superconnection one-form which, in the case of $d=2$, can be chosen in the form [12] $$ {\omega_{NA}}^B=\omega_N{\delta_A}^BN_B, \qquad N_B=(N_+,N_-,N_{(+)}) = (1,-1,1/2). \eqno{(15)}$$ The preservation in time of the primary constraint $P_e\approx 0$ leads to the secondary one $$ P_+P_-\approx 0. \eqno{(16)}$$ As in the flat case, this condition implies two possibilities which should be considered separately. Because the inclusion of interaction has to preserve the dynamical content of a theory (the number of degrees of freedom), in the first case we must demand that all constraints be first class. This means that all brackets between the constraints have to vanish weakly. Taking into account Eq. (15) and the fact that ${T_{AB}}^C=-(-)^{\epsilon_A\epsilon_B} {T_{BA}}^C$, it is easy to get the following weak equations $$ \begin{array}{c} \{P_+,P_+\}=0,\qquad\{P_+,P_{(+)}\}\approx T_{+(+)}{}^-P_-, \\ \{P_{(+)},P_{(+)}\}\approx T_{(+)(+)}{}^-P_-\end{array} \eqno{(17)}$$ whence we conclude that it is necessary to require the following constraints on the background: $$ {T_{+(+)}}^-=0, \qquad {T_{(+)(+)}}^-=0. \eqno{(18)}$$
In the second case, the constraints $P_e\approx 0$, $P_-\approx 0$ are to be first class while the constraint $P_{(+)}\approx 0$ is to be second class. Evaluating the brackets between the constraints $$ \begin{array}{c} \{P_-,P_-\}=0,\qquad\{P_-,P_{(+)}\}\approx T_{-(+)}{}^+P_+, \\ \{P_{(+)},P_{(+)}\}\approx T_{(+)(+)}{}^+P_+\end{array} \eqno{(19)}$$ we conclude that they have the correct class if the following conditions $$ {T_{-(+)}}^+=0, \qquad {T_{(+)(+)}}^+=-2{\rm i}\Phi \eqno{(20)}$$ are fulfilled, where $\Phi$ is an arbitrary superfield that is nonvanishing everywhere (the factor $-2{\rm i}$ is taken for further convenience). Thus, a correct formulation of the (1,0) superparticle model in a curved background implies Eqs. (18) and (20). Now, let us reconstruct the local transformations which are consistent with Eqs. (18), (20) in the Lagrangian formalism. The following remark will be important here. Since ${e_M}^A(z)$ and $\omega_A(z)$ are background superfields, they transform under an arbitrary variation $\delta z^M$ only through their coordinate dependence ($\delta{e_M}^A=\delta z^N\partial_N {e_M}^A$). Thus, local Lorentz and gauge transformations are not consistent. Following the standard arguments [12], we accompany $\delta z^N$ by a compensating local Lorentz transformation $\delta {e_M}^A= \delta z^N\partial_N{e_M}^A+\delta_L{e_M}^A$ with a $\delta z^N$-dependent parameter $L$, such that the resulting $\delta{e_M}^A$ is a Lorentz vector. The results are $$ \begin{array}{l}\delta{e_M}^A=\delta z^N{\rm D}_N{e_M}^A\equiv \delta z^N\left(\partial_N{e_M}^A+(-)^{\epsilon_N(\epsilon_M+\epsilon_C)} {e_M}^C{\omega_{NC}}^A\right),\\ \delta\omega_A=\delta z^N{e_N}^B{\rm D}_B\omega_A-\partial_A (\delta z^N{e_N}^B\omega_B).\end{array} \eqno{(21)}$$ Armed with this note, we have for any variation $\delta z^N$ $$ \delta ({\dot z}^A)={\cal D}(\delta z^A)-\delta z^C{\dot z}^B{T_{BC}}^A \eqno{(22)}$$ where ${\dot z}^A\equiv{\dot z}^N{e_N}^A$, $\delta z^A\equiv \delta z^N {e_N}^A$ and ${\cal D}(\delta z^A)$ is the covariantized derivative $$ {\cal D}(\xi^A)=\dot\xi^A+\xi^B{\dot z}^M{\omega_{MB}}^A. \eqno{(23)}$$ Now, it is straightforward to check that the most general (modulo $\alpha$-reparametrizations) transformations leaving Eq. (8) invariant and being consistent with Eqs. (18) and (20) have the form $$ \begin{array}{l}\delta_kz^M{e_M}^{(+)}={\dot z}^M{e_M}^+k^{(-)},\cr \delta_kz^M{e_M}^a=0,\cr \delta_ke=2{\rm i}e\Phi k^{(-)}{\dot z}^M{e_M}^{(+)} - e\delta_kz^M {e_M}^{(+)}(T_{+(+)}{}^++T_{-(+)}{}^-).\end{array} \eqno{(24)}$$ Thus, Eqs. (24) present the most general form of Siegel's $k$-symmetry for the (1,0) superparticle in (1,0) curved superspace. Note that these transformations contain nontrivial contributions including the torsion superfield and don't coincide with the direct generalization of the flat Siegel transformations. The equations of motion for the (1,0) superparticle in a curved background have the form $$ \begin{array}{l}{\cal D}(e^{-1}{\dot z}^-)+e^{-1}{\dot z}^-{\dot z}^B {T_{B+}}^+ +e^{-1}{\dot z}^+{\dot z}^B{T_{B+}}^-=0,\\ {\cal D}(e^{-1}{\dot z}^+)+e^{-1}{\dot z}^-{\dot z}^B {T_{B-}}^+ +e^{-1}{\dot z}^+{\dot z}^B{T_{B-}}^-=0,\\ {\dot z}^-{\dot z}^B{T_{B(+)}}^++{\dot z}^+{\dot z}^B{T_{B(+)}}^-=0,\\ {\dot z}^M{e_M}^+{\dot z}^N{e_N}^-=0\end{array} \eqno{(25)}$$ and it is implied that the conditions (18) or (20) (in accordance with the case) are fulfilled. \section{Gauge algebra and the full set of (1,0) supergravity constraints} As was shown in Sec. 2, the full gauge algebra for the (1,0) superparticle in flat superspace turned out to be closed.
What happens to the algebra when the model is considered in a curved background? Introducing the following transformations with a bosonic parameter $\xi^-$ $$ \begin{array}{l} \delta_\xi z^M=-\xi^-{\dot z}^+{\dot z}^-{e_-}^M,\\ \delta_\xi e=\xi^-e^2{\cal D}(e^{-1}{\dot z}^+)+\xi^-e{\dot z}^+{\dot z}^- {T_{+-}}^+ +\\ +\xi^-e{\dot z}^+{\dot z}^{(+)}{T_{(+)-}}^- + \xi^-e{\dot z}^+{\dot z}^+{T_{+-}}^-,\end{array} \eqno{(26)}$$ it is straightforward to check that the commutator of two $k$-transformations has the form $$ [\delta_{k_1},\delta_{k_2}]=\delta_\alpha +\delta_{k_3}+\delta_\xi , \eqno{(27)}$$ \begin{eqnarray*} \alpha &=& 2{\rm i}{\dot z}^+k_2^{(-)}k_1^{(-)}\Phi ,\qquad \xi^- = 2{\rm i}k_2^{(-)}k_1^{(-)}\Phi ,\\ k_3^{(-)}&=&2{\rm i}k_2^{(-)}k_1^{(-)}{\dot z}^{(+)}\Phi - 2 k_2^{(-)} k_1^{(-)}{\dot z}^+{T_{+(+)}}^+ -\\ {}&-& k_2^{(-)}k_1^{(-)}{\dot z}^+{T_{(+)(+)}}^{(+)}. \end{eqnarray*} To get Eq. (27) it is necessary to use the constraints (18), (20) and the following consequences of the Bianchi identities (which are solved in the presence of (18), (20)): $$ \begin{array}{l} 2{\rm D}_{(+)}[{T_{+(+)}}^++{T_{-(+)}}^-]+2{\rm i}\partial_+\Phi +2{\rm i}\Phi [{T_{+-}}^-+2{T_{+(+)}}^{(+)}]+\\ \quad{}+{T_{(+)(+)}}^{(+)}[{T_{+(+)}}^+ + {T_{-(+)}}^-]=0,\\ \partial_-\Phi +\Phi [{T_{+-}}^++2{T_{-(+)}}^{(+)}]=0,\\ \partial_{(+)}\Phi +\Phi [{T_{+(+)}}^++{T_{(+)(+)}}^{(+)}]=0. \end{array} \eqno{(28)}$$ Taking into account Eqs. (25), it is easy to check that the transformations (26) are a trivial symmetry of the action (8) without imposing new constraints on the background. The remaining commutators in the algebra are $$ \begin{array}{l}[\delta_{\xi_1},\delta_{\xi_2}]=\delta_{\xi_3},\qquad \xi_3^-= (\xi_1^-\dot\xi_2^--\xi_2^-\dot\xi_1^-){\dot z}^+,\cr [\delta_k,\delta_\xi ]=\delta_{k_1}+\delta_{\xi_1},\cr k_1^{(-)}=\xi^-{\dot z}^+{\dot z}^-k^{(-)}[{T_{(+)-}}^{(+)}- {T_{+-}}^+],\cr \xi_1^-=2{\rm i}\Phi\xi^-k^{(-)}{\dot z}^{(+)}-\xi^-k^{(-)}{\dot z}^+ {T_{+(+)}}^+,\end{array} \eqno{(29)}$$ where we have used the constraints (18), (20) and the following consequences of the Bianchi identities: $$ \begin{array}{rcl} R_{+(+)}&=&{\rm D}_{(+)}{T_{+-}}^-+{\rm D}_+{T_{-(+)}}^- +{T_{(+)+}}^{(+)}{T_{(+)-}}^-+{T_{(+)+}}^+{T_{+-}}^-,\\ R_{(+)-}&=&{\rm D}_{(+)}{T_{-+}}^++{\rm D}_-{T_{+(+)}}^+ +{T_{(+)-}}^{(+)}{T_{(+)+}}^+ -2{\rm i}\Phi{T_{-+}}^{(+)} +{T_{(+)-}}^-{T_{-+}}^+,\\ R_{(+)(+)}&=&2{\rm i}\Phi{T_{+-}}^--2{\rm D}_{(+)}{T_{(+)-}}^-- {T_{(+)(+)}}^{(+)}{T_{(+)-}}^-,\\ \multicolumn{3}{l}{\partial_-\Phi + \Phi [{T_{+-}}^++2{T_{-(+)}}^{(+)}]=0.} \end{array} \eqno{(30)}$$ Thus, the full gauge algebra turns out to be closed and nontrivially deformed as compared to the flat one (6). As was noted above, the set (18), (20) is the minimal one which implies a correct inclusion of interaction. In the general case, we can add some additional constraints to Eqs. (18), (20), and the resulting model will be the superparticle in the presence of a more rigid (restricted) background. Let us now examine one such possibility.
Consider the transformations which are the direct generalization of the flat ones $$ \begin{array}{l} \delta_kz^M{e_M}^{(+)}={\dot z}^M{e_M}^+k^{(-)},\\ \delta_kz^M{e_M}^a=0,\\ \delta_ke=2{\rm i}ek^{(-)}{\dot z}^M{e_M}^{(+)};\end{array} \eqno{(31)}$$ $$ \begin{array}{l}\delta_\xi z^M{e_M}^-=-\xi^-{\dot z}^M{e_M}^+{\dot z}^N {e_N}^-,\\ \delta_\xi e=\xi^-e^2{\cal D}(e^{-1}{\dot z}^M{e_M}^+).\end{array} \eqno{(32)}$$ The requirement of the invariance of the action (8) under Eq. (31) leads to the following constraints on the background: $$ \begin{array}{l} {T_{(+)(+)}}^+=-2{\rm i}, \qquad {T_{(+)(+)}}^-= {T_{+(+)}}^- = {T_{-(+)}}^+ =0,\\ {T_{+(+)}}^+ + {T_{-(+)}}^-=0\end{array} \eqno{(33)}$$ while invariance under Eq. (32) implies the conditions $$ {T_{(+)-}}^- = {T_{(+)-}}^+ = {T_{+-}}^+ = {T_{+-}}^- =0. \eqno{(34)}$$ Taking into account the symmetry properties of the torsion, the constraints (33) and (34) can be written in the following compact form $$ {T_{(+)(+)}}^a = -2{\rm i}\delta^{a+}, \qquad {T_{ab}}^c = {T_{a(+)}}^b =0. \eqno{(35)}$$ Because the conditions (18), (20) are present among Eqs. (35), the model (8), (31), (32) is a superparticle model. Using the Bianchi identities, it is easy to show that the set (35) is equivalent to the full set of (1,0) supergravity constraints [12]. In the presence of Eqs. (35) the equations of motion for the model take the form $$ \begin{array}{l}{\dot z}^M{e_M}^+{\dot z}^N{e_N}^-=0,\qquad {\cal D} (e^{-1}{\dot z}^N{e_N}^+)=0,\\ {\dot z}^N{e_N}^-{\dot z}^M{e_M}^{(+)}=0, \qquad {\cal D} (e^{-1}{\dot z}^N{e_N}^-)=0,\end{array} \eqno{(36)}$$ and, consequently, the $\xi^-$-symmetry is absent on-shell just as in the flat case. The full gauge algebra of the theory turns out to be closed and not deformed as compared to the flat one $$ \begin{array}{ll}[\delta_{k_1},\delta_{k_2}]=\delta_\alpha + \delta_{k_3} + \delta_\xi, &\quad \alpha = 2{\rm i}k_2^{(-)}k_1^{(-)}{\dot z}^+,\cr {}&\quad k_3^{(-)}=2{\rm i}k_2^{(-)}k_1^{(-)}{\dot z}^{(+)},\cr {}&\quad \xi^-=2{\rm i}k_2^{(-)}k_1^{(-)};\cr [\delta_k,\delta_\xi ]=\delta_{\xi_1}, &\quad \xi_1^-=2{\rm i}k^{(-)}\xi^-{\dot z}^{(+)};\cr [\delta_{\xi_1},\delta_{\xi_2}]=\delta_{\xi_3}, &\quad \xi_3^-=(\xi_1^-\dot\xi_2^--\xi_2^-\dot\xi_1^-){\dot z}^+.\end{array} \eqno{(37)}$$ Thus, the full set of supergravity constraints follows from the requirement that the direct generalization of the flat transformations (31), (32) is realized in the model. In this case, the algebra is not deformed as compared to the flat one. One can note the similarity of this approach with the light-like integrability conditions of Ref. [9]. \section{Conclusion} In the present paper we have considered the (1,0) superparticle model in (1,0) curved superspace. It was shown that a correct inclusion of interaction implies a set of constraints on the background which, in the general case, doesn't coincide with the full set of (1,0) supergravity constraints. The Lagrangian gauge transformations which are consistent with these constraints contain nontrivial contributions including the torsion superfield and don't coincide with the direct generalization of the flat gauge transformations to a curved background. The gauge algebra of the theory was shown to be closed and nontrivially deformed as compared to the flat one. The requirements leading to the full set of (1,0) supergravity constraints were proposed. As was shown in Sec. 2, the full gauge algebra of the theory turned out to be closed. In other dimensions the algebra is open.
In the first order formalism, however, the algebra turns out to be closed again [13], and one can directly generalize the analysis of the present paper to other dimensions. For example, in the case of (1,1) superspace the situation looks quite analogous, and the results will be published later. We hope as well that a similar analysis can be carried out for the superstring case too. \bigskip \centerline{Acknowledgments} \bigskip This work is supported in part by ISF Grant No M2I000 and European Community Grant No INTAS-93-2058. \bigskip \centerline{References} \bigskip \noindent 1. L. Brink and J. Schwarz, Phys. Lett. B {\bf 100}, 310 (1981).\\ 2. M. Green and J. Schwarz, Phys. Lett. B {\bf 136}, 367 (1984).\\ 3. E. Witten, Nucl. Phys. B {\bf 266}, 245 (1986).\\ 4. J.A. Shapiro and C.C. Taylor, Phys. Lett. B {\bf 181}, 67 (1986); {\bf 186}, 69 (1987).\\ 5. M.T. Grisaru, P. Howe, L. Mezincescu, B.E.W. Nilsson and P.K. Townsend, Phys. Lett. B {\bf 162}, 166 (1985).\\ 6. J.J. Atick, A. Dhar and B. Ratra, Phys. Lett. B {\bf 169}, 54 (1986).\\ 7. E. Bergshoeff, E. Sezgin and P.K. Townsend, Phys. Lett. B {\bf 169}, 191 (1986).\\ 8. W. Siegel, Phys. Lett. B {\bf 128}, 397 (1983).\\ 9. E. Bergshoeff, P.S. Howe, C.N. Pope, E. Sezgin and E. Sokatchev, Nucl. Phys. B {\bf 354}, 113 (1991).\\ 10. A.A. Deriglazov, Int. J. Mod. Phys. A {\bf 8}, 1093 (1993).\\ 11. L. Brink, M. Henneaux and C. Teitelboim, Nucl. Phys. B {\bf 293}, 505 (1987).\\ 12. M. Evans, J. Louis and B.A. Ovrut, Phys. Rev. D {\bf }, 3045 (1987).\\ 13. G. Sierra, Class. Quant. Grav. {\bf 3}, L67 (1986). \end{document}
\section{Introduction} Lattices of coupled dynamical systems have been studied in many different contexts: as discrete versions of partial differential equations of evolution type \cite{BuSi}, \cite{Radu}, as models of neuronal networks \cite{aron}, \cite{huerta} and phase-locked loops \cite{nekor}, and in statistical mechanics \cite{fpu}. The particular problem of synchronization and emergence of coherent behavior in lattices of diffusively coupled, chaotic, continuous time dynamical systems has received much attention recently due to its applications in neuroscience \cite{abarbanel}, \cite{huerta}, and chaotic synchronization \cite{josic}, \cite{heagy}. Analytical results in the literature suggest that the coupling strength necessary for the appearance of coherent behavior in networks of chaotic systems grows with the size of the network \cite{lin},\cite{nekor},\cite{hale}. For instance, estimates of Afraimovich and Lin \cite{lin} suggest that the coupling strength necessary to synchronize a lattice of generalized forced Duffing systems grows as a high power of the size of the lattice. Since stable, coherent states in lattices may frequently be viewed as instances of generalized synchronization \cite{josic}, one might guess that the appearance of dynamically stable states of this type also requires that the coupling grow with the number of oscillators in the system. This would make it unlikely that such networks are of importance in nature, since the coupling strength in realistic systems cannot be varied by orders of magnitude. In contrast to these theoretical results, the numerical experiments of Huerta, et al. \cite{huerta} indicate that partial synchronization may appear in certain networks of chaotic systems at a nearly constant coupling strength, regardless of the size of the network. In particular, \cite{huerta} considered a system with about $10^4$ coupled neurons. Each neuron was modeled by a system of three nonlinear, ordinary differential equations which, when uncoupled, underwent chaotic motion. They observed a variety of different stable, coherent motions, including a number of states that exhibited roughly periodic patterns. It is the goal of this paper to show rigorously that such stable, coherent states can occur in arbitrarily large systems of coupled chaotic oscillators at a coupling strength which is independent of the size of the system. Since the Lorenz system is one of the paradigms of chaotic flows, we have chosen a ring of diffusively coupled Lorenz equations for our investigations. The systems in the ring evolve according to the equations \begin{eqnarray}\label{coupledlorenz} x_i' = \sigma (y_i-x_i) + d_x \Delta x_i \nonumber \\ y_i' = r x_i - y_i - x_i z_i + d_y \Delta y_i \label{eq1} \\ z_i' = - \beta z_i + x_i y_i + d_z \Delta z_i \nonumber \end{eqnarray} where $\Delta x_i = x_{i-1} - 2 x_i + x_{i+1}$, with the index $i$ taken modulo $n$, is the discretized Laplacian with periodic boundary conditions. The constants $\sigma = 10, r = 27$, and $\beta = 8/3$ are chosen so that each uncoupled system is in the chaotic regime. Following the work of \cite{huerta}, we will focus on the case $d_x \neq 0$, $d_y = d_z = 0$. As shown in \cite{josic}, such a ring is synchronized identically for sufficiently large values of $d_x$. We will be concerned with the case when $d_x$ is sufficiently large for coherent spatial structures to emerge in the chain, but insufficient to synchronize the ring.
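For readers who wish to reproduce the phenomenology described below, a minimal integration sketch of Eq. (\ref{coupledlorenz}) is given here. The ring size, the coupling value, and the use of SciPy's \texttt{solve\_ivp} are our own illustrative choices, not the setup of the original experiments (which used XPP).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, beta = 10.0, 27.0, 8.0 / 3.0
n, d_x = 32, 12.0   # illustrative ring size and x-coupling strength

def ring_rhs(t, u):
    # u stores (x_0..x_{n-1}, y_0..y_{n-1}, z_0..z_{n-1})
    x, y, z = u[:n], u[n:2*n], u[2*n:]
    lap = np.roll(x, 1) - 2.0 * x + np.roll(x, -1)  # periodic Laplacian
    dx = sigma * (y - x) + d_x * lap
    dy = r * x - y - x * z
    dz = -beta * z + x * y
    return np.concatenate([dx, dy, dz])

u0 = np.random.default_rng(1).normal(size=3 * n)
sol = solve_ivp(ring_rhs, (0.0, 200.0), u0, rtol=1e-8, atol=1e-8)
\end{verbatim}
Varying $d_x$ in such a sketch reproduces the qualitative transition from incoherent to coherent behavior discussed in the next section.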
The paper is organized as follows: Typical spatial structures observed in numerical experiments are described in section \ref{numerics}. In particular, we give examples of stable stationary states, traveling waves, and breathers. In section \ref{steadystate} a general framework for the study of fixed states in chains and rings of dynamical systems is discussed. We show that the existence of such states can be reduced to the study of periodic solutions of reversible dynamical systems on ${\Bbb{R}}^n$. We then apply these results to two different cases, including the chains of coupled Lorenz oscillators (\ref{coupledlorenz}), to show that a large number of fixed states can be expected in such rings. The conditions under which these states are stable are investigated in section \ref{stabilityofsteady}. We then show how Floquet theory can be used to reduce the stability question in lattices of arbitrarily many oscillators to the study of the spectrum of a matrix of fixed size (but depending on a parameter -- the ``quasi-momentum''). Finally, in section \ref{example} we implement this program to rigorously construct an example of a spatially periodic stable fixed state which remains stable for rings of arbitrary size. \section{Numerical Experiments} \label{numerics} Numerical experiments were performed on a Sun SuperSparc using the numerical integration program XPP. Several different integration methods were used to check the validity of the results. These included fixed and variable step Runge-Kutta methods of varying order and the Gear method. At very low values of the coupling constant $d_x$, the individual systems in the ring behaved as if they were uncoupled. As the coupling strength was increased, the behavior of the ring became more coherent. Since the discrete Laplacian acts to synchronize neighboring systems in the ring, these results agree with our expectations. As was emphasized in the introduction, the onset of coherent behavior occurred at coupling strengths which seemed to be independent of the number of oscillators in the ring. Some of the typical structures observed in these numerical experiments are discussed below. \subsection{Breathers} If some of the systems in the ring are nearly stationary, while others are undergoing oscillations, we say that the oscillating systems form a breather. Since we are considering rings of finite size, it is not expected that any single system in the ring is stationary, unless the entire ring is stationary. An example of a system with two breathers is shown in figure~\ref{breath}. It is interesting to note that in these states each of the individual systems undergoes oscillations that are nearly periodic, and that there is a strong spatial dependence of the motion. Loosely speaking, the motion of each individual system is trapped in a region close to one of the ``lobes'' of the Lorenz attractor. No stable states in which all systems in the ring are trapped in the vicinity of only \emph{one} ``lobe'' have been observed. Similar breathers have been studied in coupled map lattices in \cite{bunimovich}, while occurrences of nearly periodic behavior with pronounced spatial patterns in lattices of chaotically bursting neurons have been observed numerically by M.I. Rabinovich, et al. \cite{rabinovich}. Another interesting feature of breathers, which is also shared with other types of solutions of this network, is that frequently nonadjacent systems synchronize without synchronizing with the oscillators that lie between them.
The synchrony is usually not exact; however, this may be due to numerical errors in the calculations. Due to the many symmetries of the system, this sometimes seems to be due to the stability of an invariant manifold corresponding to a partially synchronized state in the chain, as discussed in \cite{josic}. In other cases the state does not appear to be symmetric, and it is unclear what the mechanism behind this synchronization is. It is important to note that some of the qualitative features of the dynamics remain unaffected by the size of, or the coupling strength in, the ring. Figure \ref{timese} shows typical time series of the $x$ variable at four different coupling values in a ring of 32 oscillators. Note the smooth envelope and regularity of the oscillations, as well as the agreement of the timescales among the different time series. These general features remain unchanged as the number of systems in the ring is varied, as long as the coupling is below the synchronization threshold and above the value necessary for the appearance of coherent behavior. \subsection{Stable Stationary States} In numerical experiments with rings of 8 to 52 systems, stable steady states were observed for a variety of initial conditions for $d_x > 10$. Typical stable fixed states are shown in figure \ref{sensitive}. For certain values of the coupling it appears that all initial conditions lead to one of the stationary states of the system. The basins of attraction of the stationary states seem to be intertwined in a complicated way in this case, as shown in figure \ref{sensitive}. The same is true for other stable states of the system. Small changes in the initial conditions and parameters, as well as changes in the integration method, can result in very different asymptotic behavior of the system. This situation is similar to that of attractors with riddled basins of attraction \cite{ashwin1}, and was also observed in numerical simulations of networks of chaotically bursting neurons \cite{huerta}. \subsection{Traveling Pulses} Traveling pulses were observed in the ring only when $d_x \neq 0$ and $d_y \neq 0$. A typical traveling pulse is shown in figure \ref{wave}. The pulse oscillates as it propagates along the chain, and is thus a periodic, rather than a fixed, state in a moving coordinate frame. This can be seen in the time series in figure \ref{wave}. \section{A General Framework for the Study of Steady States}\label{steadystate} System (\ref{coupledlorenz}) is a special case of a lattice discrete or continuous time dynamical system \begin{equation}\label{lattice} (u_j)' = F(\{ u_j \}^s) \end{equation} where $j \in \Bbb{Z}^D$, $u_j \in \Bbb{R}^p$ and $\{u_j\}^s = \{u_i : |i-j| \leq s\}$. In the following discussion we will use the convention that in $u_i^k$ the subscript $i \in \Bbb{Z}^D$ denotes the position in the lattice, while the superscript $1 \leq k \leq p$ denotes the component of the vector $u_i$. Such systems have been studied by many authors \cite{AfChow}, \cite{BuSi}, \cite{Radu} as discrete versions of partial differential equations of evolution type. We will mainly be concerned with systems continuous in time with $D=1$ and $s=1$. These are simply chains of systems coupled to their nearest neighbors. The special case of \emph{rings}, that is, chains of finite size with periodic boundary conditions, will be the focus of our attention. A state in a ring of $n$ systems corresponds to a state of period $n$ in the spatial variable in an infinite chain.
A steady state of a chain given in (\ref{lattice}) is determined by \begin{eqnarray} F(\{u_j\}^s) = F(u_{j-1}, u_j, u_{j+1}) = 0 \label{continuous}\\ F(\{u_j\}^s) = F(u_{j-1}, u_j, u_{j+1}) = u_j\label{discrete} \end{eqnarray} in the continuous, respectively discrete, case. We will study systems that satisfy the following condition: {\bf Condition 1} If we write $F: \Bbb{R}^{3p} \rightarrow \Bbb{R}^p$ as $F(\chi, \eta, \zeta) = (F^1(\chi, \eta, \zeta),$ $ F^2(\chi, \eta, \zeta),$ $ \dots, $ $F^p(\chi, \eta, \zeta))$, then det $\!\left[ \frac{\partial F^i}{\partial \zeta^j}(\chi,\eta,\zeta) \right]_{i,j} \neq 0$. By the Implicit Function Theorem, in the case of continuous time, if det $\!\left[ \frac{\partial F^i}{\partial \zeta^j}(\chi,\eta,\zeta) \right]_{i,j}$ $ \neq 0$ for all values of $\chi,\eta,\zeta$, then either there exists a point $(a,b,c) \in \Bbb{R}^{3p}$ such that $F(a,b,c) = 0$, and therefore a function $G(\eta,\chi)$ such that $F(\chi,\eta,G(\eta,\chi))=0$, or $F(\chi,\eta,\zeta) = 0$ has no solutions and there are no fixed points of (\ref{lattice}). If such a function $G$ exists and we define $u_{j+1} = G(u_j, u_{j-1})$, $x_j = u_j$ and $y_j = u_{j-1}$, this leads to the following dynamical system on $\Bbb{R}^{2p}$: \begin{eqnarray}\label{fixed} x_{j+1} = G(x_j , y_j) \\ y_{j+1} = x_j \nonumber \end{eqnarray} The steady states of (\ref{lattice}) are given by the $x$-coordinates of the orbits of (\ref{fixed}). In general, the function $G$ may not be unique, in which case more than one system of the form (\ref{fixed}) is needed to determine all the fixed points of the chain. An equivalent argument holds in the case of discrete time. Since the function $G$ can assume any form, not much can be said about such systems in general. The subclass of chains of systems with the following type of coupling is easier to analyze. \begin{definition} A nearest neighbor coupling of a chain of systems is said to be \emph{symmetric} if $F(u_{j-1}, u_j, $ $u_{j+1})$ $= F(u_{j+1}, u_j, u_{j-1})$. \end{definition} This simply means that a system in the chain is coupled to its left and right neighbors in the same way. For instance, the couplings $\Delta u_j = u_{j-1} - 2 u_j + u_{j+1}$ and $\Psi u_j = u_{j-1} u_{j+1} + u_j$ are of this type. The stationary states of symmetrically coupled chains are related to the following class of dynamical systems: \begin{definition}\label{reversible} Given a diffeomorphism $\Phi:\Bbb{R}^{2p} \rightarrow \Bbb{R}^{2p}$ and an involution $R: \Bbb{R}^{2p} \rightarrow \Bbb{R}^{2p}$ such that the dimension of the fixed point set of $R$ is $p$, we say that $\Phi$ is \emph{R-reversible} if \begin{equation} R \circ \Phi = \Phi^{-1} \circ R \end{equation} The dynamical system defined by $x_{i+1} = \Phi(x_i)$ is also said to be R-reversible. \end{definition} The relation between symmetrically coupled chains and reversible systems is given in the following \begin{theorem} \label{rever} The fixed states of a symmetrically coupled chain of systems satisfying Condition 1 correspond to the orbits of a dynamical system of the form (\ref{fixed}) which is $R$-reversible and volume preserving. For such a system $R(x,y) = (y,x)$, with $x,y \in \mathbb{R}^p$. \end{theorem} {\bf Proof:} The arguments for the continuous and discrete cases are virtually identical, so only the first will be considered.
Equation (\ref{continuous}), together with the assumption that the coupling is symmetric, implies that \begin{equation} F(u_{j-1}, u_j, u_{j+1}) = F(u_{j+1}, u_j, u_{j-1})= 0 \end{equation} This leads to the recursion equations \begin{eqnarray} u_{j+1} = G(u_j, u_{j-1}) & u_{j-1} = G(u_j, u_{j+1}) \end{eqnarray} and the two dynamical systems \begin{equation} \label{dyn} \begin{aligned} x_{j+1} &= G(x_j , y_j) \\ y_{j+1} &= x_j \end{aligned} \qquad \qquad \begin{aligned} z_{j-1} &= G(z_j , w_j)\\ w_{j-1} &= z_j \end{aligned} \end{equation} where $x_j = z_j = u_j$, $y_j = u_{j-1}$ and $w_j = u_{j+1}$. The action of these systems is shown schematically in figure \ref{action}. Let $R(x,y) = (y,x)$ and $\Phi(x,y) = (G(x,y),x)$, so that the dynamical system in (\ref{dyn}) is generated by $\Phi$. By the definition of this diffeomorphism \begin{equation} \begin{split} R \circ \Phi \circ R \circ \Phi(x_i,y_i) & = R \circ \Phi (y_{i+1}, x_{i+1}) = R \circ \Phi (z_i, w_i) \\ & = R (z_{i-1}, w_{i-1}) = R(y_i, x_i) = (x_i, y_i) \end{split} \end{equation} Since $R$ is an involution with Fix$(R) = \{ (x,y) : x=y \}$, the diffeomorphism $\Phi$ and the dynamical system it induces on the plane are $R$-reversible. To prove that the map $\Phi$ is volume preserving, notice that \begin{equation} D\Phi = \begin{bmatrix} D_1G & D_2G \\ I & 0 \end{bmatrix} \end{equation} By the definition of the function $G$ we know that $F(\chi,\eta,G(\eta,\chi))=0$, so that differentiating with respect to $\chi$ leads to \begin{equation} D_1F + D_3F D_2G =0 \end{equation} $D_3F$ is an invertible matrix by Condition 1, so that $D_2G = - (D_3F)^{-1} D_1F$. By assumption the system is symmetric, so that $D_1F =D_3F$ at all points in $\Bbb{R}^{3p}$ and hence $D_2G = -I$. Since \begin{equation} \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix} \begin{bmatrix} D_1G & -I \\ I & 0 \end{bmatrix} = \begin{bmatrix} I & 0 \\ -D_1G & I \end{bmatrix} \end{equation} and \begin{equation} \det \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix} = \det \begin{bmatrix} I & 0 \\ -D_1G & I \end{bmatrix}=1 \end{equation} it follows that $\det D\Phi =1$ and the diffeomorphism $\Phi$ is volume preserving. $\diamondsuit$ $R$-reversible systems have many properties that facilitate their study. We will make use of several of these, summarized in the following proposition adapted from \cite{Dev}. \renewcommand{\theenumi}{\alph{enumi}} \begin{proposition}\label{devaney} Let $R$ and $\Phi$ be as in Definition (\ref{reversible}). \begin{enumerate} \item If $p \in$ Fix($R$) and $\Phi^k(p) \in$ Fix($R$), then $\Phi^{2k}(p) = p$. Such periodic points will be referred to as \emph{symmetric periodic points}. \item Let $p \in$ Fix($R$) be a fixed point of $\Phi$; then $R(W^u(p)) = W^s(p)$ and $R(W^s(p)) = W^u(p)$, so that if $q \in W^u(p) \cap$ Fix($R$) then $q$ is a homoclinic point. \item If $p \in$ Fix($R$) is a fixed point of $\Phi$ such that $W^u(p)$ intersects Fix($R$) transversally at a point $q$, then there exist infinitely many symmetric periodic points in any neighborhood of $p$.
\end{enumerate} \end{proposition}

\section{Steady States in the Case of Discrete Laplacian Coupling} \label{section.steady}

For a chain of systems of the form $u' = f(u)$ coupled through a discrete Laplacian $\Delta u_j = (u_{j-1} - 2u_j + u_{j+1})$, equation (\ref{lattice}) takes the form \begin{equation}\label{lap} (u_j)' = f(u_j) + d (u_{j-1} - 2u_j + u_{j+1}) \end{equation} and the dynamical system (\ref{fixed}) takes the form \begin{equation}\label{laplacefixed} \begin{aligned} x_{i+1} &= -\frac{1}{d} f(x_i) - y_i + \left(2 + \frac{c}{d}\right) x_i \\ y_{i+1} &= x_i \end{aligned} \end{equation} where $c=0$ in the case of continuous time and $c=1$ in the case of discrete time. The following proposition follows immediately from these definitions.

\begin{proposition}\label{lapfixed} The fixed points of (\ref{laplacefixed}) are of the form $x_i = y_i = u_0$, where $u_0$ is any solution of the equation $f(u) = 0$ in the continuous time case and of $f(u)= u$ in the discrete time case. Thus all fixed points of (\ref{laplacefixed}) lie on Fix$(R)= \{ (x,y) : x=y \}$ and are in 1-1 correspondence with the steady states of an uncoupled system in the chain. \end{proposition}

The fixed points of (\ref{fixed}) correspond to states of the chain that are constant in space and time, and proposition~\ref{lapfixed} shows that the only such states $\{u_j\}_{j \in \Bbb{Z}}$ in the case of a discrete Laplacian coupling are given by $u_j = u_0$ for all $j \in \Bbb{Z}$, where $u_0$ is a fixed point of an uncoupled system of the chain $u' = f(u)$. Since the fixed points of (\ref{laplacefixed}) are $(u_0, u_0) \in$ Fix($R$), this allows us to make use of proposition \ref{devaney}.c.\footnote{The chains under consideration are a special case of chains of the form \begin{equation}\label{special} (u_j)' = f(u_j) + g(u_{j-1}, u_j, u_{j+1}) \nonumber \end{equation} where $g(u_{j-1}, u_j, u_{j+1})$ is a symmetric coupling vanishing on the linear submanifold of $\Bbb{R}^{3p}$ defined by $u_{j-1} = u_j = u_{j+1}$. Such systems can be analyzed using the approach of this section.} Moreover, we can apply Theorem~\ref{rever} directly in this case to conclude that the map defining the fixed points is $R$-reversible and volume preserving.

{\bf Remark:} It is interesting to note that if the fixed states of a chain of one dimensional systems are determined by a dynamical system of the form (\ref{laplacefixed}) on $\Bbb{R}^2$, and $(x_0, y_0)$ is a fixed point of (\ref{laplacefixed}), then the discriminant of the characteristic polynomial of $D\Phi(x_0,y_0)$ is \begin{equation} \left(-\frac{1}{d}f'(x_0)+ 2+\frac{c}{d}\right)^2 -4. \end{equation} Since det~$D \Phi (x_0,y_0) =1$ we can conclude the following: \begin{enumerate} \item In the case of discrete time the eigenvalues of $D \Phi (x_0,y_0)$ are on the unit circle when $f'(x_0) > 1$ and $d>(f'(x_0)-1)/4$, or $f'(x_0) <1$ and $d<(f'(x_0)-1)/4$. \item In the case of continuous time, the eigenvalues of $D \Phi (x_0,y_0)$ are on the unit circle for sufficiently large \emph{positive} $d$ when $f'(x_0) >0$, and for sufficiently large \emph{negative} $d$ when $f'(x_0) <0$. \end{enumerate} By a theorem of Birkhoff (see for example \cite{moser}), since $\Phi$ is area preserving, the dynamical system (\ref{laplacefixed}) will generically have infinitely many periodic orbits in any neighborhood of $(x_0, y_0)$. Any such periodic orbit will correspond to a fixed state of (\ref{lap}) which is periodic in space, i.e. in the variable $j$.
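The conditions in this remark are easy to check numerically. The following minimal Python sketch (our illustration, not part of the original computations) builds $D\Phi(x_0,y_0)$ for a scalar chain ($p=1$) and tests whether its eigenvalues lie on the unit circle; the slope $f'(x_0)=2$ is an illustrative value, corresponding for instance to the left branch of the tent map considered below.

\begin{verbatim}
import numpy as np

def dphi(fprime, d, c):
    # Jacobian of (x, y) -> (-f(x)/d - y + (2 + c/d) x, x)
    # at a fixed point x_0 with f'(x_0) = fprime.
    return np.array([[-fprime / d + 2.0 + c / d, -1.0],
                     [1.0, 0.0]])

def on_unit_circle(fprime, d, c, tol=1e-9):
    lam = np.linalg.eigvals(dphi(fprime, d, c))
    return bool(np.all(np.abs(np.abs(lam) - 1.0) < tol))

fp = 2.0                                  # illustrative slope f'(x_0)
print(on_unit_circle(fp, d=0.3, c=1))     # d > (fp-1)/4 = 0.25: True
print(on_unit_circle(fp, d=0.2, c=1))     # d < 0.25: False (real pair)
print(np.linalg.det(dphi(fp, 0.3, 1)))    # det = 1, as in Theorem 1
\end{verbatim}

Since the determinant is always $1$, the eigenvalues form either a complex conjugate pair on the unit circle or a real pair $\lambda, 1/\lambda$; the test above simply detects which case occurs.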
The condition det $D \Phi (x_0,y_0) =1$ is not sufficient to guarantee that $D \Phi (x_0,y_0)$ will have eigenvalues on the unit circle if (\ref{laplacefixed}) is not a dynamical system on the plane. Additional information needs to be considered to conclude the existence of such periodic orbits in this case. \vspace{.5cm}

{\bf Example 1:} Consider a chain of tent maps coupled through a discrete Laplacian described by the equation $u_j' = f(u_j) + \Delta u_j$ with \begin{equation} f(u_j) = \begin{cases} 2 u_j & \text{if $u_j < 1/2$}, \\ -2 u_j +2 & \text{if $u_j \geq 1/2$}. \end{cases} \end{equation} This is a discrete dynamical system whose fixed states are determined by a homeomorphism of the plane given in (\ref{laplacefixed}) with $c=1$. The only fixed points of (\ref{laplacefixed}) are $(0,0)$ and $(2/3, 2/3)$ by proposition~\ref{lapfixed}. Since in this case the dynamical system (\ref{laplacefixed}) is linear in each of the two halves of the plane $x < 1/2$ and $x \geq 1/2$, the analysis of the dynamics around the fixed points is straightforward. A direct calculation shows that in the region $x < 1/2$ there is a family of invariant ellipses whose major axes lie on the diagonal $D=\{(x,y) \in \mathbb{R}^2: x=y \}$ and on which the action of (\ref{laplacefixed}) is a rotation. As $d$ is increased, these ellipses become more eccentric.

The fixed states of the chain corresponding to these orbits are not dynamically stable. If $\hat{u}=\{\hat{u}_j\}_{j \in \Bbb{Z}}$ is a fixed state of the chain such that $\hat{u}_j < 1/2$ for all $j$, then around this state the dynamics of the chain is described by \begin{equation} u_j' = 2u_j + d \Delta u_j. \end{equation} The spectrum of the operator $2 + d \Delta$ is the interval $[2-4d, 2]$, and so $\hat{u}$ cannot be stable and would not be observed in numerical experiments.

Besides these invariant ellipses, this system will also typically have infinitely many periodic orbits, and will exhibit complicated behavior, as the following argument shows. A simple calculation shows that the point $p = (2/3, 2/3)$ is hyperbolic for all values of $d>0$. Since the dynamics is piecewise linear, the stable and unstable manifolds can be calculated explicitly. $W^u(p)$ will consist of a ray contained in the half plane $x>1/2$ and a second, more complicated part which is constructed as follows. The first section of $W^u(p)$ is a line $A$, as shown in figure \ref{ellman}. The part of $A$ contained in the half plane $x<1/2$ is rotated along the invariant ellipses around the origin to the line $A'$ under forward iteration. The subsequent images of $A'$ are lines that are rotated further. Therefore $\Phi^{n_0}(A')$ intersects the diagonal $D$ for some $n_0$. Whenever the angle of this intersection is not a right angle, $W^u(p)$ will intersect $W^s(p)$ transversely, since by proposition \ref{devaney}.b the stable manifold $W^s(p)$ is the reflection of the unstable manifold $W^u(p)$ through the diagonal. This shows that complicated dynamics of (\ref{laplacefixed}) can be expected, and implies the existence of infinitely many periodic as well as spatially chaotic states in the chain of tent maps.

{\bf Example 2:} Next we consider the chain of diffusively coupled Lorenz systems (\ref{eq1}) with $d_y = d_z = 0$.
This system does not satisfy Condition 1; however, after setting the right--hand sides of equations (\ref{eq1}) to 0 and using the second and third equations in (\ref{eq1}) to eliminate the variables $y_i$ and $z_i$, the following equations for a fixed state are obtained: \begin{align} \sigma (\hat{x}_i - \frac{\beta r \hat{x}_i}{\beta + (\hat{x}_i)^2}) - & d (\hat{x}_{i-1} - 2 \hat{x}_i + \hat{x}_{i+1}) = 0 \label{steady} \\ \hat{y}_i = & \frac{\beta r \hat{x}_i}{\beta + (\hat{x}_i)^2} \\ \hat{z}_i = & \frac{r (\hat{x}_i)^2}{\beta + (\hat{x}_i)^2}. \end{align} The function on the left-hand side of equation (\ref{steady}) satisfies Condition 1, and we can proceed as in the previous example. Equation (\ref{steady}) defines the following dynamical system on the plane: \begin{equation} \label{lorenzfixed} \begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} = F \left( \begin{bmatrix} x_i \\ y_i \end{bmatrix} \right) = \begin{bmatrix} \frac{\sigma}{d} (x_i - \frac{\beta r x_i}{\beta + (x_i)^2}) + 2 x_i - y_i \\ x_i \end{bmatrix} \end{equation} Orbits of period $n$ of this system are in 1--1 correspondence with the steady states in a ring of $n$ Lorenz systems. The fixed points of (\ref{lorenzfixed}) are $(0,0)$ and $(\pm \sqrt{\beta (r-1)},\pm \sqrt{\beta (r-1)}) \approx (\pm 8.327,\pm 8.327)$, all of which lie on Fix($R$). The following definition from \cite{Dev} will be used in the remainder of the argument.

\begin{definition} A compact region $H$ in the plane is called \emph{overflowing} in the $y$-direction for a function $F$ if the image of any point $(x_0, y_0) \in$ int($H$) lies strictly above the line $y=y_0$. \end{definition}

Notice that any point in the interior of $H$ either leaves $H$ or is asymptotic to a periodic point on the boundary of $H$.

\begin{theorem} The map (\ref{lorenzfixed}) has a homoclinic point and infinitely many periodic points for all $0<d \leq 20$. \end{theorem}

{\bf Proof:} Let $p$ denote the fixed point $(-\sqrt{ \beta (r-1)},$ $-\sqrt{ \beta (r-1)})$ of (\ref{lorenzfixed}), $G$ the ray $y = -\sqrt{ \beta (r-1)}$ with $x > -\sqrt{ \beta (r-1)}$, and $D = \{(x,y): x=y\}$ the diagonal in $\Bbb{R}^2$. The image of the triangular region $W$ bounded by $D$ and $G$ lies between the two cubics \begin{eqnarray} F(D) = \{(x,y): x = \frac{\sigma}{d} (y - \frac{\beta r y}{\beta + y^2}) + y\} \\ F(G) = \{(x,y): x = \frac{\sigma}{d} (y - \frac{\beta r y}{\beta + y^2}) + 2 y + \sqrt{ \beta (r-1)} \} \end{eqnarray} Consider the region $H = F(W) \cap W$ depicted in figure \ref{wedge2}. Let $(x_0, y_0) \in$ int($H$) and consider the vertical line segment $l$ passing through $(x_0, y_0)$ and connecting $G$ and $D$. The image of $l$ is a horizontal line segment connecting $F(G)$ and $F(D)$, contained in the line $y = x_0$. Any point of this line segment lies above the line $y = y_0$. Therefore $H$ is overflowing in the $y$-direction. Since int($F(H) -$ int($H$)) and int($H$) lie on opposite sides of $D$, any point in $H$ must have an iterate which either lies on $D$ or crosses $D$. A direct calculation shows that one branch of $W^u(p)$ enters $H$. Since $H$ is overflowing in the $y$-direction, $W^u(p)$ must either cross $D$ or be asymptotic to the fixed point $(0,0)$, creating a saddle-saddle connection. The second possibility can be excluded as follows: The eigenvector of the linearization of $F$ around $(0,0)$ corresponding to the stable direction is $\mathbf{v}_1= (1, \frac{d}{d-130+ 2 \sqrt{65^2 -65d}})$.
The vector $\mathbf{v}_2=(1, \frac{d-260}{d})$ is tangent to $F(D)$ at $(0,0)$. A direct computation shows that for $0<d<20$ the vector $\mathbf{v}_1$ points to the left of $\mathbf{v}_2$, and so the tangent to $W^s(0,0)$ at $(0,0)$ does not point into $H$. This situation is depicted schematically in figure \ref{vectors} and excludes the possibility that $W^u(p)$ is asymptotic to $(0,0)$. By proposition~\ref{devaney}.b, the fact that $W^u(p)$ meets $D$ implies the existence of a homoclinic point. By proposition \ref{devaney}.c, in the case of $R$-reversible systems an infinite number of periodic points exists whenever $W^u(p)$ crosses $D$ transversely at a homoclinic point. If this point of intersection is not transverse, a transverse intersection of $W^u(p)$ and $D$ can be produced nearby by an argument given in \cite[p. 261]{Dev}. $\diamondsuit$

Numerical investigations suggest that the theorem remains true for arbitrarily large values of $d$. This theorem shows that rings of Lorenz systems coupled through the first variable can be expected to have many periodic stationary states. Numerical computations of $W^u(p)$ suggest that even more complicated stationary states can be expected (see figure \ref{stablemanofp}). A priori it is unclear whether any of these states should be stable. Numerical investigations show, however, that not only are some of them stable, but their basins of attraction occupy a large portion of phase space. We next address the problem of stability of these fixed states.

\section{Stability of Steady States in a Ring} \label{stabilityofsteady}

The study of stable fixed states in a chain of oscillators is not new. Such states are described in \cite{erm}, \cite{aron} as instances of oscillator death in a chain of neural oscillators, and in \cite{nekor} as synchronous states of phase lock loops, while conditions for the stability of complex stationary states in general lattices are given in \cite{AfrChow2}. The situation presented here is different in that an explicit periodic stationary state is analyzed in a case where the methods from the theory of parabolic partial differential equations used in \cite{erm} are not applicable, and the conditions proposed in \cite{AfrChow2} cannot be verified. In addition, since the coupling strength is neither very large nor very small, there is no obvious perturbative approach to the problem.

Even if some of the fixed states in a ring of $n$ systems are stable, it cannot be concluded that stable equilibria exist in arbitrarily long rings of such systems. If a stable fixed state is viewed as a special case of synchronization, one might guess from results of Afraimovich and Lin \cite{lin} that the coupling necessary to have stable steady states in longer rings increases as a power of the size of the ring. It has already been argued that this is not realistic from a physical viewpoint, and we will show below that it is not the case. In the following sections we will concentrate on the specific example of a ring of Lorenz systems coupled in the $x$ variable discussed in Example 2 of the last section. However, the ideas presented can be applied to any chain of symmetrically coupled systems for which Condition 1 holds.
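To make the map (\ref{lorenzfixed}) concrete, the following minimal Python sketch (our illustration) evaluates it, verifies that the nontrivial fixed points lie on Fix($R$), and checks the reversibility identity $R \circ F \circ R \circ F = \mathrm{id}$ numerically. The parameter values $\sigma = 10$, $r = 27$, $\beta = 8/3$ are an assumption on our part, inferred from the numerical values $\pm 8.327$ and $260$ quoted above.

\begin{verbatim}
import numpy as np

SIGMA, R_PAR, BETA, D = 10.0, 27.0, 8.0 / 3.0, 1.5   # assumed values

def f_map(v):
    # (x, y) -> (sigma/d (x - beta r x / (beta + x^2)) + 2x - y, x)
    x, y = v
    xn = (SIGMA / D) * (x - BETA * R_PAR * x / (BETA + x * x)) + 2 * x - y
    return np.array([xn, x])

def R(v):
    return v[::-1]                             # R(x, y) = (y, x)

x_star = np.sqrt(BETA * (R_PAR - 1.0))         # approx 8.327
p = np.array([x_star, x_star])
print(np.allclose(f_map(p), p))                # True: fixed point on Fix(R)

v = np.array([1.3, -0.7])
print(np.allclose(R(f_map(R(f_map(v)))), v))   # True: R-reversibility
\end{verbatim}

A period-$n$ orbit of this map, found for instance by a root search on $F^n - \mathrm{id}$, supplies the $x$-coordinates of a steady state of the ring of $n$ Lorenz systems, which is the viewpoint used in the next section.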
The linearization of each of the Lorenz equations around a point $\mathbf{\hat{v}_i} = (\hat{x}_i, \hat{y}_i, \hat{z}_i )$ when $d = 0$ is \begin{equation}\label{linlorenz} D_{\mathbf{\hat{v}_i}}f = \begin{bmatrix} -\sigma & \sigma & 0 \\ r - \hat{z}_i & -1 & - \hat{x}_i \\ \hat{y}_i & \hat{x}_i & - b \end{bmatrix} \end{equation} and hence linearizing the entire ring around the steady state $\hat{\mathbf{v}} = ( {\mathbf{\hat{v}_i}} )_{i=1}^n = (\hat{x}_1, \hat{y}_1, \hat{z}_1, \dots, \hat{x}_n, \hat{y}_n, \hat{z}_n )$ leads to the $3n \times 3n$ matrix \begin{equation} D_{\mathbf{\hat{v}}}F = \begin{bmatrix} D_{\mathbf{\hat{v}_1}}f - 2\Gamma & \Gamma & 0 & \dots & 0 & \Gamma \\ \Gamma & D_{\mathbf{\hat{v}_2}}f - 2\Gamma& \Gamma & \dots & 0 & 0 \\ \hdotsfor[3]{6} \\ \Gamma & 0 & 0 & \dots & \Gamma & D_{\mathbf{\hat{v}_n}}f -2\Gamma \end{bmatrix} \end{equation} where the matrix $\Gamma$ is defined as \begin{equation} \Gamma = \begin{bmatrix} d & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \end{equation} It is in general not possible to compute the eigenvalues of this matrix analytically.

As mentioned above, we are specifically interested in coherent states in networks with arbitrarily many oscillators. Given a periodic stationary state in a chain of $n$ oscillators, an easy way to obtain a periodic stationary state in arbitrarily long chains is to repeat this state. In particular, if $\mathbf{\hat{v}}$ is a fixed state in a ring of $n$ oscillators, then repeating these values $K$ times to obtain $\mathbf{\hat{v}}^{(K)} = \{\mathbf{\hat{v}}, \mathbf{\hat{v}}, \dots, \mathbf{\hat{v}} \}$ produces a steady state in a ring of $Kn$ oscillators. We refer to such a state as the $K$-{\em multiple} of the steady state $\mathbf{\hat{v}}$. An example of a stationary state and its 4-multiple is given in figure \ref{kmultiples}. We next derive conditions under which all $K$-multiples of a stable steady state are themselves stable, and demonstrate that these conditions hold in a particular case.

The linearization around the steady state $\mathbf{\hat{v}}^{(K)}$ leads to the following stability matrix: \begin{equation*} D_{\mathbf{\hat{v}}^{(K)}}F = \begin{bmatrix} M & E_d & 0 & \dots & 0 & E_u \\ E_u & M & E_d & \dots & 0 & 0 \\ \hdotsfor[3]{6} \\ E_d & 0 & 0 & \dots & E_u & M \end{bmatrix} \end{equation*} where \begin{equation*} M = \begin{bmatrix} D_{\mathbf{\hat{v}_1}}f - 2\Gamma & \Gamma & 0 & \dots & 0 & 0 \\ \Gamma & D_{\mathbf{\hat{v}_2}}f - 2\Gamma & \Gamma & \dots & 0 & 0 \\ \hdotsfor[3]{6} \\ 0 & 0 & 0 & \dots & \Gamma & D_{\mathbf{\hat{v}_n}}f -2\Gamma \end{bmatrix} \end{equation*} and $E_u$ and $E_d$ are $3n \times 3n$ matrices of the form \begin{align*} E_u = \begin{bmatrix} 0 & \dots & \Gamma \\ \hdotsfor[3]{3} \\ 0 & \dots & 0 \end{bmatrix} \qquad & E_d= \begin{bmatrix} 0 & \dots & 0\\ \hdotsfor[3]{3} \\ \Gamma & \dots & 0 \end{bmatrix} \end{align*} In general it is impossible to compute the spectrum of this $3Kn \times 3Kn$ matrix directly. However, in the present instance Bloch wave theory provides a way of expressing the eigenfunctions of a larger system with periodic structure in terms of the eigenfunctions of a smaller system. As usual, this reduction is accompanied by the introduction of an additional parameter into the equations. Here the Bloch wave approach will be used to reduce the problem of computing the eigenvalues of the $3Kn \times 3Kn$ matrix $D_{\mathbf{\hat{v}}^{(K)}}F$ for arbitrary $K$ to the computation of the eigenvalues of a $3n \times 3n$ matrix dependent on a parameter $t$.
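The reduction derived below can be checked numerically before it is proved. The following Python sketch (our sanity check, with random $3 \times 3$ blocks standing in for the Jacobians $D_{\mathbf{\hat{v}_i}}f - 2\Gamma$) builds the full $3Kn \times 3Kn$ ring matrix and compares its spectrum with the union of the spectra of reduced $3n \times 3n$ matrices carrying Bloch phases $e^{\pm 2\pi i q/nK}$, $q = 0, \dots, K-1$; these reduced matrices are the $\mathcal{D}(q)$ constructed in the next paragraphs.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, K, d = 4, 3, 1.5
Gamma = np.diag([d, 0.0, 0.0])      # coupling acts on the x variable only
A = [rng.standard_normal((3, 3)) for _ in range(n)]   # stand-in blocks

def ring(blocks, cplus, cminus):
    # Block ring: blocks on the diagonal, cplus to the right neighbor,
    # cminus to the left neighbor, with wraparound.
    N = len(blocks)
    M = np.zeros((3 * N, 3 * N), dtype=complex)
    for j in range(N):
        M[3*j:3*j+3, 3*j:3*j+3] = blocks[j]
        M[3*j:3*j+3, 3*((j+1) % N):3*((j+1) % N)+3] += cplus
        M[3*j:3*j+3, 3*((j-1) % N):3*((j-1) % N)+3] += cminus
    return M

big = ring([A[j % n] for j in range(K * n)], Gamma, Gamma)

eigs = []
for q in range(K):                  # phases in [0, 2*pi/n) suffice
    ph = np.exp(2j * np.pi * q / (n * K))
    eigs.append(np.linalg.eigvals(ring(A, ph * Gamma, ph.conjugate() * Gamma)))

a = np.sort_complex(np.linalg.eigvals(big))
b = np.sort_complex(np.concatenate(eigs))
print(np.allclose(a, b))            # True (eigenvalues well separated)
\end{verbatim}

The restriction of the phase to $[0, 2\pi/n)$ reflects the fact that shifting $q$ by a multiple of $K$ changes the phase factor by an $n$-periodic function, which can be absorbed into the envelope; the text below lets $q$ range over $1 \leq q \leq Kn$, which lists each family of eigenvalues more than once but is harmless for the stability argument.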
Let $\mathbf{e} = (\Psi(j), \Phi(j), \eta(j) )_{j = 1}^{Kn} = (\Psi(1), \Phi(1), \eta(1), \dots, \Psi(Kn), \Phi(Kn), \eta(Kn))$ where \begin{eqnarray} \Psi(j) = \exp \left( \frac{2 \pi i q}{nK} j \right) \tilde{\Psi}_q (j) \nonumber \\ \Phi(j) = \exp \left( \frac{2 \pi i q}{nK} j \right) \tilde{\Phi}_q (j) \\ \eta(j) = \exp \left( \frac{2 \pi i q}{nK} j \right) \tilde{\eta}_q (j) \nonumber \end{eqnarray} for $1 \leq q \leq Kn$, and $\tilde{\Psi}_q, \tilde{\Phi}_q,\tilde{\eta}_q$ are assumed to be $n$-periodic. We will show that the vector $\mathbf{e}$ is an eigenvector of $D_{\mathbf{\hat{v}}^{(K)}}F$ whenever the vector \begin{equation} \label{mathi} \mathbf{i} = (\tilde{\Psi}(j), \tilde{\Phi}(j), \tilde{\eta}(j) )_{j = 1}^n \end{equation} is an eigenvector of a particular $3n \times 3n$ matrix derived from $D_{\mathbf{\hat{v}}^{(K)}}F$. A direct calculation leads to \begin{align}\label{bloch} \left[D_{\mathbf{\hat{v}}^{(K)}}F\right] \mathbf{e} \, (3j-2) = & e^{ \frac{2 \pi i q}{nK} j } \bigl( d e^{ \frac{2 \pi i q}{nK} } \tilde{\Psi}_q (j+1 \bmod n) + de^{- \frac{2 \pi i q}{nK} } \tilde{\Psi}_q (j-1 \bmod n) - \nonumber \\ & 2d \tilde{\Psi}_q (j) + v_1(j) \tilde{\Psi}_q (j) + w_1(j) \tilde{\Phi}_q (j) + u_1(j) \tilde{\eta}_q (j) \bigr) \nonumber \\ \left[D_{\mathbf{\hat{v}}^{(K)}}F\right] \mathbf{e} \, (3j-1)= & e^{ \frac{2 \pi i q}{nK} j } (v_2(j) \tilde{\Psi}_q (j) + w_2(j) \tilde{\Phi}_q (j) + u_2(j) \tilde{\eta}_q (j))\\ \left[D_{\mathbf{\hat{v}}^{(K)}}F\right] \mathbf{e} \, (3j)= & e^{ \frac{2 \pi i q}{nK} j } (v_3(j) \tilde{\Psi}_q (j) + w_3(j) \tilde{\Phi}_q (j) + u_3(j) \tilde{\eta}_q (j)) \nonumber \end{align} for $1 \leq j \leq Kn$. Here $v_i(j), w_i(j), u_i(j)$ are given by equation (\ref{linlorenz}) as \begin{align*} \begin{bmatrix} v_1(j) & w_1(j) & u_1(j)\\ v_2(j) & w_2(j) & u_2(j)\\ v_3(j) & w_3(j) & u_3(j) \end{bmatrix} \quad = \quad & \begin{bmatrix} -\sigma & \sigma & 0 \\ r - \hat{z}_{j \bmod n} & -1 & - \hat{x}_{j \bmod n} \\ \hat{y}_{j \bmod n} & \hat{x}_{j \bmod n} & - b \end{bmatrix} \end{align*} and are thus $n$-periodic. We want to check when $\mathbf{e}$ is an eigenvector of $D_{\mathbf{\hat{v}}^{(K)}}F$, and so we set $\left[D_{\mathbf{\hat{v}}^{(K)}}F\right] \mathbf{e} = \lambda \mathbf{e}$. Using the expressions obtained in (\ref{bloch}) we get \begin{equation}\label{bloch2} \begin{split} & \exp \left( \frac{2 \pi i q}{nK} j \right) \biggl( d e^{ \frac{2 \pi i q}{nK} } \tilde{\Psi}_q (j+1 \bmod n) + d e^{- \frac{2 \pi i q}{nK} } \tilde{\Psi}_q (j-1 \bmod n) - \\ & 2d \tilde{\Psi}_q (j) + v_1(j) \tilde{\Psi}_q (j) + w_1(j) \tilde{\Phi}_q (j) + u_1(j) \tilde{\eta}_q (j), \;\; v_2(j) \tilde{\Psi}_q (j) + w_2(j) \tilde{\Phi}_q (j) + u_2(j) \tilde{\eta}_q (j), \\ & v_3(j) \tilde{\Psi}_q (j) + w_3(j) \tilde{\Phi}_q (j) + u_3(j) \tilde{\eta}_q (j) \biggr)_{j = 1}^{Kn} = \lambda \exp \left( \frac{2 \pi i q}{nK} j \right) \biggl( \tilde{\Psi}_q(j), \tilde{\Phi}_q(j), \tilde{\eta}_q(j) \biggr)_{j = 1}^{Kn}.
\end{split} \end{equation} After cancelling the common factor $\exp \left( \frac{2 \pi i q}{nK} j \right)$, all the functions appearing in (\ref{bloch2}) are $n$-periodic, so we can express equation (\ref{bloch2}) as $\mathcal{D}(q) \mathbf{i} = \lambda \mathbf{i}$, where $\mathbf{i}$ is defined in equation (\ref{mathi}) and \begin{equation} \mathcal{D}(q) = \begin{bmatrix}\label{dq} D_{\mathbf{\hat{v}_1}}f - 2\Gamma & E(q)^+ & 0 & \dots & 0 & E(q)^- \\ E(q)^- & D_{\mathbf{\hat{v}_2}}f - 2\Gamma& E(q)^+ & \dots & 0 & 0 \\ \hdotsfor[3]{6} \\ E(q)^+ & 0 & 0 & \dots & E(q)^- & D_{\mathbf{\hat{v}_n}}f -2\Gamma \end{bmatrix} \end{equation} with the matrices $E(q)^{\pm}$ defined as \begin{equation} E(q)^{\pm} = \begin{bmatrix} d e^{ \pm \frac{2 \pi i q}{nK}} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \end{equation} This implies that the eigenvalues of $D_{\mathbf{\hat{v}}^{(K)}}F$ are exactly the eigenvalues of the $3n \times 3n$ matrices $\mathcal{D}(q)$, $1 \leq q \leq Kn$, taken together. The $3n \times 3n$ matrix $\mathcal{D}(q)$ is easier to handle than the $3Kn \times 3Kn$ matrix $D_{\mathbf{\hat{v}}^{(K)}}F$.

To conclude that any $K$-multiple of a stable fixed state $\mathbf{v}_0$ of a ring of $n$ oscillators is stable, it is sufficient to show that all eigenvalues of the matrix $\mathcal{D}(q)$ have negative real part for all values of $q$ and $K$. To simplify the calculations we can replace the argument $q \in \Bbb{Z}$ by a continuous parameter $t \in \Bbb{R}$, by replacing the matrices $E(q)^{\pm}$ with the matrices \begin{equation*} E_t^{\pm} = \begin{bmatrix} d e^{ \pm it} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \end{equation*} in the expression for $\mathcal{D}$. If we can show that all eigenvalues of $\mathcal{D}(t)$ have negative real part for all $t \in \Bbb{R}$, then the same is true for $\mathcal{D}(q)$ for any values of $K$ and $q$ in $\Bbb{Z}$. The following lemma about the class of matrices of type (\ref{dq}) will be used in the next section.

\begin{lemma} \label{char} Let $E(t)^{\pm}$ denote the matrices $E_t^{\pm}$ defined above. The characteristic polynomial of a $Km \times Km$ matrix \begin{equation*} B = \begin{bmatrix} M_1 & E(t)^+ & 0 & \dots & 0& 0 & E(t)^- \\ E(t)^- & M_2 & E(t)^+ & \dots & 0 & 0 & 0 \\ \hdotsfor[3]{7} \\ 0 & 0 & 0 & \dots & E(t)^- & M_{K-1} & E(t)^+ \\ E(t)^+ & 0 & 0 & \dots & 0 & E(t)^- & M_K \end{bmatrix} \end{equation*} is of the form \begin{equation}\label{charpoly} p_M(t, \lambda) = \sum_{i=0}^{(K-1)m} (\alpha_i + \beta_i \cos Kt) \lambda^i + \sum_{i=(K-1)m+1}^{Km} \alpha_i \lambda^i . \end{equation} \end{lemma}

A proof of this lemma is given in the appendix.

\section{An Example} \label{example}

This section gives a computer assisted proof that any $K$-multiple of a particular stable fixed state in a ring of four Lorenz systems is stable. The proof involves the use of interval arithmetic and the following theorems about the convergence of the Newton-Raphson-Kantorovich method. The computations were performed using Mathematica's implementation of interval arithmetic.

\begin{theorem} \label{newton} Let $f(z)$ be a complex analytic function and assume $f(z_0) f'(z_0) \neq 0$ for some $z_0$. Define $h_0 = -f(z_0)/f'(z_0)$, the disc $K_0 = \{z : |z - z_0| \leq |h_0|\}$, and $M = \max_{K_0} |f''(z)|$. If $2|h_0| M \leq |f'(z_0)|$ then there is exactly one root of $f$ in the closed disc $K_0$.
\end{theorem}

\begin{theorem}[Kantorovich's Convergence Theorem] \label{kantorovich} Given a twice differentiable function $f : \Bbb{R}^n \rightarrow \Bbb{R}^n$ whose Jacobian is nonsingular at a point $x_0 = ( x_1^0, x_2^0, \dots, x_n^0 )$, let $[\Gamma_{ik}] = [\frac{\partial f_i}{\partial x_k}(x_0)]^{-1}$. Let $A, B, C$ be positive real numbers such that \begin{align} \max_i \sum_{k=1}^{n} |\Gamma_{ik}| \leq A \qquad & \max_i \sum_{k=1}^{n} |\Gamma_{ik} f_k(x_0)| \leq B \qquad C \leq \frac{1}{2 A B}. \end{align} Define the region $R = \{x \in \Bbb{R}^n : \max_i |x_i - x_i^0| \leq (AC)^{-1}(1 - \sqrt{1-2ABC}) \}$. If \begin{align} \max_{x \in R} \sum_{j=1}^{n} \sum_{k=1}^{n} \left| \frac{\partial^2 f_i}{ \partial x_j \partial x_k} \right| \leq C \qquad & i = 1, \dots,n \end{align} then the equation $f(x) = 0$ has a solution in $R$. \end{theorem}

The argument will proceed as follows: First a steady state of the ring is found using interval arithmetic. Next it is shown that the characteristic polynomial $p_{\mathcal{D}}(\lambda, t)$ of the matrix $\mathcal{D}(t)$ in (\ref{dq}) corresponding to this steady state has only roots with negative real part for $t = \frac{\pi}{2}$. Finally it is proved that $p_{\mathcal{D}}(\lambda, t)$ does not have roots on the imaginary axis for any $t \in \mathbb{R}$. Since the roots of a polynomial depend continuously on its coefficients, this implies that the roots of $p_{\mathcal{D}}(\lambda, t)$ cannot cross into the right half of $\mathbb{C}$. By the results of section \ref{stabilityofsteady} this implies that all $K$-multiples of the fixed state under consideration must be stable.

In the following, intervals of numbers will be denoted with overbars, to distinguish them from real numbers and to make the notation less cumbersome. For instance $\bar{\lambda}$ denotes an interval, and the interval $[3.16524, 3.16533]$ is denoted $\overline{3.165}$.

Numerical investigations with the program XPP show that a stable state in a ring of four oscillators coupled with $d=1.5$ occurs close to $\hat{\mathbf{x}}= (\hat{x}_1,\hat{x}_2, \hat{x}_3, \hat{x}_4 ) = (-6.4114408, 7.3696656, 8.1984129, 7.3696656)$. Notice that the orbit of (\ref{lorenzfixed}) describing this state is not symmetric in the sense of proposition \ref{devaney}, although it is mapped into itself under $R$. This numerically obtained solution is used as an initial guess in Mathematica's implementation of Newton's method to determine the roots $x^i_a$ of the set of equations \begin{equation} \label{stachain} \sigma(x_i - \frac{\beta r x_i}{\beta+ (x_i)^2}) - 1.5 ( x_{i-1} - 2 x_i + x_{i+1}) = 0 \qquad i = 1,2,3,4, \end{equation} where the indices are taken modulo 4, so that $x_0 = x_4$ and $x_5 = x_1$. Although Mathematica can be instructed to find roots to arbitrary precision, so far only floating point arithmetic has been used, so that none of the results are rigorous yet. At this point all the quantities in the computations are redefined as intervals rather than floating point numbers, and using interval arithmetic and theorem \ref{kantorovich} we find an interval around each $x^i_a$ in which a root of equations (\ref{stachain}) must lie. These bounds are now rigorous.

In section \ref{stabilityofsteady} it was shown that a fixed state and all of its $K$-multiples are stable if the polynomial $p_{\mathcal{D}}(\lambda, t)$ corresponding to that state has only roots with negative real part for all $t \in [0, \pi/2]$.
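To illustrate how theorem \ref{kantorovich} enters the computation, the following Python sketch (our illustration; the paper's actual computation uses Mathematica with interval arithmetic) evaluates the Kantorovich quantities $A$, $B$ and a bound $C$ for the steady-state equations of the four-oscillator ring at the numerically obtained point. The parameter values $\sigma = 10$, $r = 27$, $\beta = 8/3$, $d = 1.5$ are assumed, and the bound on the second derivatives is obtained by a grid scan, a non-rigorous stand-in for the interval bound.

\begin{verbatim}
import numpy as np

SIGMA, R_PAR, BETA, D = 10.0, 27.0, 8.0 / 3.0, 1.5   # assumed values

def phi(x):                        # scalar nonlinearity in (stachain)
    return SIGMA * (x - BETA * R_PAR * x / (BETA + x**2))

def g(x):                          # steady-state equations on the 4-ring
    return phi(x) - D * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))

def jac(x):
    dphi = SIGMA * (1.0 - BETA * R_PAR * (BETA - x**2) / (BETA + x**2)**2)
    J = np.diag(dphi + 2.0 * D)
    for i in range(4):
        J[i, (i - 1) % 4] -= D
        J[i, (i + 1) % 4] -= D
    return J

x0 = np.array([-6.4114408, 7.3696656, 8.1984129, 7.3696656])  # from XPP
Gam = np.linalg.inv(jac(x0))
A = np.max(np.abs(Gam).sum(axis=1))
B = np.max(np.abs(Gam) @ np.abs(g(x0)))
# Only the diagonal second derivatives phi''(x_i) are nonzero, so C is a
# bound on |phi''|; a grid scan replaces the rigorous interval bound here.
h, pts = 1e-5, np.linspace(-9.0, 9.0, 2001)
C = np.max(np.abs((phi(pts + h) - 2 * phi(pts) + phi(pts - h)) / h**2))
print(A, B, C, 2 * A * B * C)      # Kantorovich requires 2*A*B*C <= 1
\end{verbatim}

Since $x_0$ solves the equations to high accuracy, $B$ is tiny and the product $2ABC$ is far below $1$, so theorem \ref{kantorovich} localizes a true root in a very small box around $x_0$; the interval computation described in the text makes this rigorous.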
Using interval arithmetic it is shown that \begin{equation} \begin{split} \bar{p}_{\mathcal{D}}(\lambda, t)= & \overline{3.61524}\times 10^{12} - \overline{5.02753}\times 10^7 \cos 4t + (\overline{9.49583}\times 10^{11} - \overline{1.33199}\times 10^7 \cos 4t ) \,\lambda + \\ & (\overline{2.67802}\times 10^{11} - \overline{4.94338}\times 10^6 \cos 4t ) \,\lambda^2+ (\overline{4.90382}\times 10^{10} - \overline{771725} \cos 4t) \,\lambda^3 + \\ & (\overline{7.551}\times 10^9 - \overline{144880} \cos 4t) \, \lambda^4 + (\overline{9.81388}\times 10^8 - \overline{13673.3} \cos 4t) \,\lambda^5 + \\ & (\overline{1.03403}\times 10^8 - \overline{1560.66} \cos 4t) \,\lambda^6 + (\overline{9.39525}\times 10^6 - \overline{74.25} \cos 4t) \,\lambda^7 + \\ & (\overline{703005} - \overline{5.0625} \cos 4t) \,\lambda^8 + \overline{42026.7}\,\lambda^9 + \overline{2023.92}\,\lambda^{10} + \overline{66.6667}\,\lambda^{11} + \lambda^{12} \end{split} \end{equation} for the fixed state under consideration.

In the next step the roots of the polynomial $\bar{p}_{\mathcal{D}}(\lambda, \frac{\pi}{2})$ are found. To employ Mathematica's implementation of Newton's method we need a polynomial whose coefficients are floating point numbers rather than intervals. Floating point numbers inside the intervals which determine the coefficients of the polynomial $\bar{p}_{\mathcal{D}}(\lambda, \frac{\pi}{2})$ can be chosen to define a polynomial $p^a_{\mathcal{D}}(\lambda, \frac{\pi}{2})$ which approximates $p_{\mathcal{D}}(\lambda, \frac{\pi}{2})$. The roots $\{r^a_i\}_{i=1}^{12}$ of $p^a_{\mathcal{D}}(\lambda,\frac{\pi}{2} )$ are then found using Newton's method. The complex intervals\footnote{There are several ways in which complex intervals can be defined. In this case we use regions of the form $\bar{z} = \bar{x} + i \bar{y}$ where $\bar{x}$ and $\bar{y}$ are real intervals. These are simply rectangular regions in $\Bbb{C}$.} containing the roots of $\bar{p}_{\mathcal{D}}(\lambda,\frac{\pi}{2} )$ are found by using theorem \ref{newton}. We set $z_0$ equal to complex intervals around $r^a_i$, use complex interval arithmetic to find the disc $K_0$, and check the conditions of the theorem for each root $r^a_i$. The 4 real roots and 4 complex conjugate pairs obtained by this procedure are given in the table below. \vspace{5mm} \centerline{ \begin{tabular}{||c|c||} \hline $\overline{-18.481}$ & $\overline{-0.2463} \pm i \overline{9.769}$ \\ \hline $\overline{-17.173}$ & $\overline{-0.1961} \pm i \overline{9.317}$ \\ \hline $\overline{-15.437}$ & $\overline{-0.1345} \pm i \overline{9.164}$ \\ \hline $\overline{-14.238}$ & $\overline{-0.09054} \pm i \overline{8.622}$ \\ \hline \end{tabular} } \vspace{5mm} Since interval arithmetic was used in these calculations, these estimates are rigorous. The remainder of the argument shows that these roots will not cross the imaginary axis as $t$ varies.
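The disc test of theorem \ref{newton} is straightforward to reproduce. The following Python sketch (our illustration; floating point with a crude global bound on $|f''|$ stands in for the complex interval arithmetic of the paper) validates approximate polynomial roots in the way described above.

\begin{verbatim}
import numpy as np

def validate_root(coeffs, z0):
    # Theorem (newton): with h0 = -f(z0)/f'(z0) and M a bound for |f''|
    # on the disc K0 = {|z - z0| <= |h0|}, the condition
    # 2 |h0| M <= |f'(z0)| guarantees exactly one root of f in K0.
    p = np.polynomial.Polynomial(list(coeffs)[::-1])
    d1, d2 = p.deriv(1), p.deriv(2)
    h0 = -p(z0) / d1(z0)
    rad = abs(z0) + abs(h0)          # K0 lies inside |z| <= rad
    M = sum(abs(c) * rad**k for k, c in enumerate(d2.coef))
    return 2 * abs(h0) * M <= abs(d1(z0)), abs(h0)

coeffs = [1.0, -6.0, 11.0, -6.0]     # (z-1)(z-2)(z-3), highest degree first
for r in np.roots(coeffs):
    ok, radius = validate_root(coeffs, r + 1e-8)   # perturbed guesses
    print(r, ok, radius)
\end{verbatim}

Here the bound $M$ comes from the triangle inequality at radius $|z_0| + |h_0|$, which contains the disc $K_0$; with interval coefficients, as in the text, the same inequality yields a rigorous enclosure of each root.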
By lemma \ref{char} the characteristic polynomial $p_{\mathcal{D}}(\lambda, t)$ takes the following form when evaluated on the imaginary axis: \begin{equation} \begin{split} p_{\mathcal{D}}(i\mu, t) & = \sum_{j=0, j \text{ even}}^{12} (\alpha_j + \beta_j \cos 4t) \mu^j (-1)^{\frac{j}{2}} + i \sum_{j=1, j \text{ odd}}^{11} (\alpha_j + \beta_j \cos 4t) \mu^j (-1)^{\frac{j-1}{2}} = \\ & = p_{\mathcal{D}}^{\text{R}}(\mu, t) + i p_{\mathcal{D}}^{\text{I}}(\mu, t). \end{split} \end{equation} Since the roots of $p_{\mathcal{D}}(\lambda, t)$ depend continuously on the parameter $t$, if $p_{\mathcal{D}}(\lambda, t_1)$ has roots with positive real part for some $t_1$, then there must exist a $t_0$ such that $p_{\mathcal{D}}(\lambda, t_0)$ has a root on the imaginary axis. In other words, there must exist a $\mu_0$ such that \begin{equation} \label{inequality} p_{\mathcal{D}}^{\text{R}}(\mu_0, t_0) =p_{\mathcal{D}}^{\text{I}}(\mu_0, t_0)=0. \end{equation} We will use interval arithmetic to show that this cannot happen. The polynomial $p_{\mathcal{D}}(\lambda, t)$ is split into real and imaginary parts to avoid using complex interval arithmetic in the numerical calculations, since complex interval arithmetic leads to much rougher estimates than real interval arithmetic.

By Ger\v{s}gorin's theorem the eigenvalues of the matrix $\mathcal{D}(t)$ lie in the union of the discs $C_i= \{ z: |z-\mathcal{D}_{ii}| < R_i \}$, where $\mathcal{D}_{ij}$ are the entries of the matrix $\mathcal{D}(t)$ and $R_i = \sum_{j=1, j \neq i}^{12} |\mathcal{D}_{ij}|$. A direct computation shows that the intersection of $\cup_{i=1}^{12} C_i$ with the imaginary axis is contained in the interval $[-17 i, 17i]$, and it is therefore sufficient to show that $p_{\mathcal{D}}^{\text{R}}(\mu, t)$ and $p_{\mathcal{D}}^{\text{I}}(\mu, t)$ are not zero simultaneously for any value of $t$ and $\mu \in [-17,17]$. Since the coefficients of $\bar{p}_{\mathcal{D}}(\lambda, t)$ are $\frac{\pi}{2}$-periodic in $t$, it is sufficient to show that (\ref{inequality}) is not satisfied for any $\mu \in [-17,17]$ and $t \in [0, \frac{\pi}{2}]$. This is shown by subdividing these intervals into a sufficient number of subintervals $\bar{\mu}_n$ and $\bar{t}_m$, so that $[-17,17] \subset \cup_{n=1}^{N} \bar{\mu}_n$ and $[0,\frac{\pi}{2}] \subset \cup_{m=1}^{M} \bar{t}_m$, with the property that the intervals $\bar{p}_{\mathcal{D}}^{\text{R}}(\bar{\mu}_n, \bar{t}_m)$ and $\bar{p}_{\mathcal{D}}^{\text{I}}(\bar{\mu}_n, \bar{t}_m)$ do not contain zero simultaneously for any given pair of subintervals $\bar{\mu}_n$ and $\bar{t}_m$. Therefore the roots of $p_{\mathcal{D}}(\lambda, t)$ stay in the left half plane for all $t \in \mathbb{R}$, and all $K$-multiples of the fixed state under consideration are also stable. The paths of the roots of $p_{\mathcal{D}}(\lambda, t)$ as $t$ is varied are shown in figure \ref{eigenvalues}. The \emph{Mathematica} code used in these calculations is available at \emph{http://math.bu.edu/people/josic/research/code}.

{\bf Acknowledgment:} The authors' research was supported in part by NSF grant DMS-9803164. K.J. would like to thank the Department of Mathematics at Boston University for its hospitality while he conducted his research. The authors also thank J.P. Eckmann for useful discussions concerning this problem.
\section{Introduction} The study of tunneling in nanostructures has assumed an important role in the last few years in connection with advances in nanoelectronics. The problem of tunneling of wave packets through a potential barrier arises in many cases, for example, in the study of the action of femtosecond light pulses on coupled wells. This problem is also important because of possible applications of scanning tunneling microscopes irradiated with femtosecond pulses for simultaneously high-spatial and high-temporal resolution study of nanostructures.\cite{r1} Other interesting questions are the tunneling time in the ionization of a hydrogen atom by ultrashort laser pulses and the tunneling time for tunneling induced by the action of a laser pulse on low-lying nuclear energy levels.

In the present paper we investigate the no less interesting question of the time duration of tunneling in nanostructures. The tunneling time is of practical interest in this case because it makes it possible to estimate the response time of semiconductor components. In this connection we shall study the following problem: Let a laser pulse produce a wave packet of an excited electron near a tunneling barrier. The question is: When will the tunneling portion of the packet appear behind the barrier? The arrival of the wave packet can be detected by studying local variations of the optical properties using ultrashort probe pulses.

It is interesting that a number of effects which are absent in the stationary case are observed when a packet passes through a tunneling barrier. The tunneling time of a packet is determined in general not by the reciprocal of the probability of stationary tunneling; rather, it is related to quite complicated processes --- a change in the shape and behavior of the packet inside the barrier. Moreover, the transmission time through a barrier depends on the measured quantities, i.e. on the type of experiment.

The investigation of the question of the presence time of a tunneling particle under a barrier started quite a long time ago,\cite{r2,r3,r4} and many theoretical and experimental methods for measuring the tunneling time have been proposed. For example, there exist approaches where the peak of the packet or the average coordinate (the ``centroid'') is chosen as the observed quantity, while the reflection or transmission time is determined from their evolution. However, it has been shown in Refs. \onlinecite{r5} and \onlinecite{r6} and subsequent works that the peak of a wave packet incident on a potential barrier does not pass into the peak of the transmitted wave. In Ref. \onlinecite{r7} it was shown that on account of dispersion of the wave packet in momentum space the high-energy components reached the barrier before the other components. Since the tunneling probability increases with energy, these components made the main contribution to the transmitted part of the packet. The initial parameters could be chosen in a manner so that the transmitted part of the packet left the barrier long before the main peak, chosen as the observed quantity, appeared. This example demonstrates the breakdown of the causality principle on which this method is based, and therefore limits the applicability of the method. Moreover, it is difficult to conceive of an experimental method for measuring the arrival time of a packet according to its peak or ``centroid.''

There also exists a class of approaches that employ an ensemble of dynamical trajectories to determine the tunneling time.
These dynamical trajectories arise as a necessary apparatus of the description in the Feynman and Bohm interpretations of quantum mechanics. When Feynman trajectories were used,\cite{r8} the transmission time through a barrier was determined as a path integral over all possible trajectories that start from a prescribed point to the left of the barrier and arrive at a certain time at a point located to the right of the barrier. The integrand in the path integral contained the product of the classical presence time of a trajectory inside the barrier and a weighting factor $\exp [iS(x(t^{\prime }))/\hbar ]$, where $S(x(t^{\prime }))$ is the action associated with the trajectory $x(t^{\prime })$ under consideration. The calculated times possess real and imaginary parts because of the multiplication by a complex weighting factor, and the question arose of how these times should be associated with the physically observable quantities, which are always real. To explain the complex times, which also arise in other methods (for example, in the method of ``physical clocks'' (see below)), Sokolovski and Connor\cite{r9} examined so-called direct and indirect measurements. In indirect measurements, as in the case of the Feynman trajectory method, the measured quantities can be complex.

Approaches employing physical clocks have found quite wide application. Physical clocks are various additional degrees of freedom in the system that make it possible to determine the presence time of a particle in a given region. Three types of clocks have been investigated in theoretical works. Baz' and Rybachenko\cite{r10,r11} used spin precession in a weak uniform magnetic field applied inside a barrier. At first spin precession in a single plane was considered. Then Buttiker and Landauer\cite{r12} extended the analysis to three dimensions. During the tunneling the spin of a particle acquires a component along the direction of motion and along the magnetic field. It is obvious that the intensities of the detected components with spin polarization in these two directions will be proportional to the presence time of the particle in the region with the magnetic field, i.e. in the region of the barrier. It turned out that for a square barrier the tunneling times determined in this manner are identical to the real and imaginary parts of the complex tunneling time introduced via Feynman path integrals.\cite{r8} The extension of this method to the case of an arbitrary potential barrier was made in Ref. \onlinecite{r13}.

Buttiker and Landauer\cite{r14} considered as physical clocks an oscillating barrier in which the amplitude of the oscillations of the temporal component was much smaller than the barrier height. At low frequencies particles see an effective static barrier, since the transmission time through the barrier is much shorter than the period of the oscillations of the temporal component of the barrier. As the frequency increases, the delayed particles or wave-packet components see a slightly modified potential barrier. Finally, for some frequencies one or several periods of the oscillations influence the tunneling particles. The frequency at which a substantial difference from the adiabatic case corresponding to a stationary barrier appears determines the reciprocal of the interaction time with the barrier, or the transmission time through the barrier.

Martin and Landauer\cite{r15} chose as physical clocks the oscillating amplitude of the incident wave.
For this, a wave function consisting of a superposition of two plane waves with different energies was chosen to the left of the barrier. It is obvious that in this case the wave function to the right of the barrier will also be a superposition of the tunneled parts of the plane waves, which, however, possess different transmission amplitudes, since the amplitude depends on the energy. The transmitted wave function will reproduce the incident wave function if the amplitudes of the tunneled plane waves differ very little; this corresponds to the adiabatic case. The energy difference between the initial plane waves for which the wave function behind the barrier no longer reproduces the incident wave function makes it possible to determine the transmission time through a potential barrier. The main advantage of this method is that it is applicable for all types of potentials, but it employs two values of the energy, so that it is not clear to which energy the tunneling time obtained should be ascribed.

Do all clocks give the same measurement result? Of course not. However, in many cases these results are close to one another or identical.\cite{r13,r16,r17,r18} The main advantage of the approaches using physical clocks is that they strive to determine the tunneling time in terms of possible measurements in physical experiments.

The search for ``time operators'' and the study of their properties is no less popular.\cite{r19,r20,r21,r22,r23,r24} As first noted by Pauli,\cite{r25} the main difficulty is that a measurable hermitian time operator for a system Hamiltonian with a bounded spectrum does not exist. Various attempts have been made to construct operators that would describe the necessary properties of physical times. In order that the constructed operator satisfy the correspondence principle, relations from classical mechanics were taken as the basis for the operator construction. However, it is well known that the construction of an operator expression corresponding to a classical quantity is not unique, and its relation with the measurement process requires additional analysis.

In the present work the tunneling time is determined as the difference of the average ``arrival'' and ``presence'' times of a wave packet (see Sec. 3) before and after the barrier. The method of quantum molecular dynamics was used to calculate these times and to investigate the dynamics of a tunneling wave packet.\cite{r26,r27,r28} It is well known that molecular dynamics investigates the properties of classical systems in phase space. Therefore it is natural to extend this method to quantum systems in phase space. The evolution of a system in phase space can be described, for example, on the basis of the Wigner formalism of quantum mechanics by the Wigner--Liouville equation. To solve the Wigner--Liouville equation written in integral form it is convenient to rewrite the equation in the form of an iterational series. Each term of this series can be treated as a weighted contribution of a trajectory consisting of segments of classical trajectories separated by finite disturbances of the momentum. In what follows we shall call such a trajectory a quantum trajectory. The statistical ensemble of quantum trajectories makes it possible to calculate the sum of all terms in the series. The Monte Carlo method is used to take account of only the trajectories making the main contribution.
In the classical limit the quantum trajectories turn into classical trajectories, and the method of generalized molecular dynamics becomes identical to usual molecular dynamics. The principles of the method are presented in Sec. 2. The expressions for calculation of the distributions of the arrival and presence times of a wave packet are presented in Sec. 3 on the basis of the Wigner formalism of quantum mechanics. The simulation results are discussed in Sec. 4. The one-dimensional case is considered in this paper, but the method employed makes it possible to perform similar calculations for multidimensional and multiparticle systems, where it has serious advantages from the standpoint of computer time over, for example, the solution of the nonstationary Schr\"{o}dinger equation.

\section{Computational method}

To calculate the quantum-mechanical average of a quantity $A$ for a nonstationary state $\left| \psi \right\rangle$ in the Wigner formulation of quantum mechanics it is necessary to calculate an integral in phase space\cite{r29} \begin{equation} A\left( t\right) =\left\langle \psi \left| \hat{A}\left( t\right) \right| \psi \right\rangle =\int \!\!\!\int dqdp\,A\left( q,p\right) W\left( q,p,t\right) , \end{equation} where, by definition, the Weyl symbol $A(q,p)$ is introduced for the operator $\hat{A}$ and $W(q,p,t)$ is the Wigner function, which is the Fourier transform of the off-diagonal density-matrix element: \begin{eqnarray} A\left( q,p\right) &=&\int d\xi \exp \left( \frac{ip\xi }\hbar \right) \left\langle q+\frac \xi 2\left| \hat{A}\right| q-\frac \xi 2\right\rangle , \\ W\left( q,p,t\right) &=&\frac 1{2\pi \hbar }\int d\xi \exp \left( -\frac{ip\xi }\hbar \right) \psi ^{*}\left( q-\frac \xi 2,t\right) \psi \left( q+\frac \xi 2,t\right) . \end{eqnarray} Differentiating the Wigner function with respect to time, substituting the right-hand side of the Schr\"{o}dinger equation for the time derivatives of the function $\psi$, and integrating by parts, we obtain the Wigner--Liouville integrodifferential equation\cite{r30} \begin{equation} \frac{\partial W}{\partial t}+\frac pm\,\frac{\partial W}{\partial q}+F\left( q\right) \frac{\partial W}{\partial p}=\int\limits_{-\infty }^\infty dsW\left( p-s,q,t\right) \omega \left( s,q\right) . \end{equation} In this equation \begin{equation} \omega \left( s,q\right) =\frac 2{\pi \hbar ^2}\int dq^{\prime }V\left( q-q^{\prime }\right) \sin \left( \frac{2sq^{\prime }}\hbar \right) +F\left( q\right) \frac{d\delta \left( s\right) }{ds} \end{equation} takes account of the nonlocal contribution of the potential, and $F(q)=-\partial V(q)/\partial q$ is the classical force. In the classical limit, $\hbar \rightarrow 0$, Eq. (4) becomes the classical Liouville equation \begin{equation} \frac{\partial W}{\partial t}+\frac pm\,\frac{\partial W}{\partial q}=-F \left( q\right) \frac{\partial W}{\partial p}. \end{equation} Equation (4) can be written in an integral form.
For this, one introduces the dynamical trajectories $\{\bar{q}_\tau (\tau ;p,q,t),\bar{p}_\tau (\tau ;p,q,t)\},$ $\tau \in [0,t]$, starting from the point $(p,q)$ at time $\tau =t$: \begin{eqnarray} d\bar p/d\tau &=&F(\bar q(\tau )),\ \bar p_t(t;p,q,t)=p \nonumber \\ d\bar q/d\tau &=&\bar p(\tau )/m,\ \bar q_t(t;p,q,t)=q \label{s8} \end{eqnarray} An integral equation is obtained by substituting the right-hand sides of these equations into the Wigner--Liouville equation, whose left-hand side becomes a total differential; integrating over time, one obtains \begin{equation} W\left( p,q,t\right) =W^0(\bar{p}_0,\bar{q}_0)+\int\limits_0^td\tau \int\limits_{-\infty }^\infty dsW\left( \bar{p}_\tau -s,\bar{q}_\tau ,\tau \right) \omega \left( s,\bar{q}_\tau \right) . \end{equation} Here $W^0(\bar{p}_0,\bar{q}_0)=W(p,q,0)$ is the Wigner distribution function at zero time. The solution of Eq. (8) can be represented as an iterational series. For this we introduce the notation $\tilde{W}^{\tau _1}$ for the distribution function which evolves classically in the interval $[0,\tau _1]$, and the integral operator $K_{\tau _i}^{\tau _{i+1}}$ describing the evolution between the times $\tau _i$ and $\tau _{i+1}$. Now Eq. (8) can be represented in the form \begin{equation} W^t=\tilde{W}^t+K_\tau ^tW^\tau , \end{equation} where $\tilde{W}^t=W^0(\bar{p}_0,\bar{q}_0)$. The corresponding iterational series solving this equation can be written as \begin{equation} W^t=\tilde{W}^t+K_{\tau _1}^t\tilde{W}^{\tau _1}+K_{\tau _2}^tK_{\tau _1}^{\tau _2}\tilde{W}^{\tau _1}+K_{\tau _3}^tK_{\tau _2}^{\tau _3}K_{\tau _1}^{\tau _2}\tilde{W}^{\tau _1}+... \end{equation} Now, to calculate the quantum-mechanical average (1) it is necessary to calculate a linear functional of the Wigner distribution function \begin{eqnarray} A\left( t\right) &=&\int \!\!\!\int dqdp\,A\left( q,p\right) W\left( q,p,t\right) \nonumber \\ &=&\left( A|\tilde{W}^t\right) +\left( A|K_{\tau _1}^t\tilde{W}^{\tau _1}\right) +\left( A|K_{\tau _2}^tK_{\tau _1}^{\tau _2}\tilde{W}^{\tau _1}\right) +\left( A|K_{\tau _3}^tK_{\tau _2}^{\tau _3}K_{\tau _1}^{\tau _2}\tilde{W}^{\tau _1}\right) +... \end{eqnarray} Here the brackets $(...\mid ...)$ for the functions $A=A(p,q)$ and $\tilde{W}^t$ or $K_{\tau _i}^tK_{\tau _{i-1}}^{\tau _i}...K_{\tau _1}^{\tau _2}\tilde{W}^{\tau _1}$ indicate averaging over the entire phase space $\{p,q\}$.

The first term on the right-hand side of Eq. (10) gives the classically evolving initial distribution $W^0(\bar{p}_0,\bar{q}_0)$, i.e. the evolution of the distribution function without quantum corrections. However, even this first term of the iterational series describes not classical but rather quantum effects, and can contain arbitrary powers of the Planck constant, since a quantum initial state of the system is taken as the initial data. The remaining terms in the iterational series describe quantum corrections to the evolution. Each term of the iterational series (10) is a multiple integral. This multiple integral can be replaced by an integral sum, and each term of the integral sum can be represented as a contribution of trajectories of a definite topological type. These trajectories consist of segments of classical trajectories --- solutions of Eqs. (7) --- separated from one another by random perturbations of the momentum. All terms of the iterational series can be calculated in accordance with the theory of Monte Carlo methods for solving linear integral equations.
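This last remark can be made concrete on a toy problem. The following Python sketch (our illustration, unrelated to the physical kernel $\omega$) estimates the Neumann series of a simple linear integral equation $u = g + \lambda K u$ by random walks whose accumulated weights play the role of the factors collected along a quantum trajectory.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Toy equation u(x) = g(x) + lam * int_0^1 k(x,y) u(y) dy with
# g = 1, k = 1, lam = 0.5; the Neumann series sums to u = 2.
lam, p_cont = 0.5, 0.5

def estimate_u(n_walks=100000):
    total = 0.0
    for _ in range(n_walks):
        w = 1.0
        while True:
            total += w * 1.0             # score g = 1 at each visit
            if rng.random() < p_cont:    # continue to the next series term
                w *= lam / p_cont        # kernel weight / continuation prob
            else:
                break
    return total / n_walks

print(estimate_u())                      # approx 2.0
\end{verbatim}

Each visit of the walk contributes one term of the series, just as each momentum perturbation along a quantum trajectory raises by one the order of the term of (10) that the trajectory represents; the scheme developed for the present problem, described next, adds importance sampling and rejected jumps with compensating weights.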
For this purpose a Monte Carlo scheme was developed which provides the essential sampling of the terms of the iterational series (10). This essential sampling also decreases the computer time required to calculate the rest of the integrals appearing in each term of the iterational series. Let us consider the second term of the series (10). This term can be rewritten as \begin{eqnarray} K_{\tau _1}^t\tilde{W}^{\tau _1} &=&\int\limits_0^1d\tau _1\int ds_1\omega \left( s_1,\bar{q}_1\right) W^0(\bar{p}_0^1,\bar{q}_0^1)= \nonumber \\ &=&\int\limits_0^1d\tau _1\left[ B(\bar{q}_2)\left( 1+Q(\bar{q}_2)\right) \right] \theta \left( 1-\tau _2\right) r\left( \tau _2\right) \int ds_1P(s_1,\bar{q}_1^2) \nonumber \\ &\times &\left\{ \sigma \left( s_1,\bar{q}_1^2\right) t\tilde{Q}(\bar{q}_1^2)\theta \left( \tau _2-\tau _1\right) /C(\bar{q}_1^2)r(\tau _1)\right\} W^0(\bar{p}_0^1,\bar{q}_0^1), \end{eqnarray} where the substitution of variables $\tau \rightarrow \tau t$ was made for all terms of the iterational series (10). The quantity $r(\tau _1)$ is the probability of choosing a random time $\tau _1$ and $\theta$ is the theta function.

Once the second term of the series (10) is written in the form (12), it can be given the following probabilistic interpretation. We take advantage of the time-reversibility of the equations of classical dynamics (7) and start the construction of a trajectory at time $\tau =0$. At time $\tau _1$, for a trajectory representing an arbitrary term in the iterational series, a perturbation of the momentum of the trajectory by an amount $s_1$ can occur with probability $C(\bar{q}_1^2)$, and the probability of rejecting a momentum perturbation is $B(\bar{q}_2)$ $(C(\bar{q}_1^2)+B(\bar{q}_2)=1)$. The probability $B$ for rejecting momentum jumps was introduced to make the algorithm more flexible, so that depending on the degree of quantization of the system a transition from quantum to classical trajectories occurs automatically. Since we are considering a trajectory representing the second term in the iterational series, a perturbation of the momentum at the time $\tau _1$ was accepted. Now it is necessary to choose in the time interval $[\tau _1,1]$ a random value $\tau _2$ which is the time of the next attempt to perturb the momentum. After the perturbation of the momentum by the amount $s_1$ we must continue the generation of the trajectory up to the time $\tau _2$ in accordance with Hamilton's equations. At this time an attempt to perturb the momentum must be rejected for the second term of the iterational series, and we continue the generation of the trajectory up to the time $\tau =1$. The rejected attempt at a perturbation of the momentum is taken into account by multiplying the weighting function of the trajectory by a compensating factor, which stands in the braces on the right-hand side of the expression (12). The product of the Weyl symbol of the operator under consideration and the weighting function at different points along the trajectory gives the time dependence of the computed quantities. Averaging over a large ensemble of trajectories of this type gives the contribution of the second term of the iterational series. Similar expressions, but with a larger number of intermediate times on classical trajectories at which a perturbation of the momentum occurs, can also be written for the other terms in the series (10).
The number of the term in the iterational series (10) described by a given trajectory determines the number of momentum perturbations along the trajectory. The final expression used to calculate the linear functional (11) is \begin{eqnarray} \label{b34} A\left( t\right)&=&M\left\{ \alpha \left( A;T_{i}\right) \right\}= \sum_{p,q}\left( \triangle p\triangle q\right) \sum_{i=0}^\infty \sum_{j=0}^i\sum_{\tau_j}\sum_{s_j}\alpha \left( A;T_{i}\right) \times P\left(T_{i}\right) ;\nonumber\\ \alpha \left( A;T_{i}\right)&=&A\left(p,q\right)W^0(\bar{p}_0^1,\bar{q}_0^1)\Omega \left(T_{i}\right), \end{eqnarray} where the functions $P$ and $\Omega$ are, respectively, the probability of generating a quantum trajectory $T_i$ and the weighting function of this trajectory.

\section{Measured quantities}

The study of the evolution of a wave packet can be taken as the starting point for studying the temporal aspects of tunneling. The probability of observing a wave packet or particle at an arbitrary point $X$ is determined by the squared modulus $\left| \psi (X,t)\right| ^2$ of the wave function. In a nonstationary problem this probability depends on the time and determines the characteristic times of the wave-packet dynamics. If an ideal detector (i.e. one whose measurement does not disturb the wave function), sensitive to the presence of particles, is used in the experiment, then the average presence time measured by the detector at the point $X$ is \begin{equation} \tilde{t}_X=\frac{\displaystyle \int\limits_0^\infty dt\,t\left| \psi \left( X,t\right) \right| ^2}{\displaystyle \int\limits_0^\infty dt\left| \psi \left( X,t\right) \right| ^2}. \end{equation} A description of these times can be found in Refs. \onlinecite{r31}--\onlinecite{r32}. The distribution of presence times at the point $X$ is \begin{equation} \tilde{P}\left( t_X\right) =\frac{\left| \psi \left( X,t\right) \right| ^2}{\displaystyle \int\limits_0^\infty dt\left| \psi \left( X,t\right) \right| ^2}. \end{equation} To find the squared wave function $\left| \psi (X,t)\right| ^2$ it is sufficient to calculate the quantum-mechanical average of the operator $\delta(\hat{q}-X)$: \begin{eqnarray*} \langle \psi \left( t\right) \left| \delta \left( \hat{q}-X\right) \right| \psi \left( t\right) \rangle =\int dq\delta \left( q-X\right) \left| \psi \left( q,t\right) \right| ^2=\left| \psi \left( X,t\right) \right| ^2. \end{eqnarray*} In the Wigner representation this is equivalent to calculation of the integral \begin{equation} \langle \psi \left( t\right) \left| \delta (\hat{q}-X)\right| \psi \left( t\right) \rangle =\int \!\!\!\int dqdp\,\delta \left( q-X\right) W\left( q,p,t\right) =\int dp\,W(X,p,t). \end{equation} If the point $X$ is chosen to the right of the barrier, then this integral makes it possible to calculate the squared wave function which has tunneled through the barrier. The distribution of the ``presence'' times can be rewritten, in accordance with Eq. (16), as \begin{equation} P_X\left( t\right) =\frac{\left| \psi \left( X,t\right) \right| ^2}{\displaystyle \int\limits_0^\infty dt\left| \psi \left( X,t\right) \right| ^2}=\frac{\int dpW\left( X,p,t\right) }{\displaystyle \int\limits_0^\infty dt\int dpW\left( X,p,t\right) }.
\end{equation} To determine the average time when the wave packet passes through a detector at the point $X$ it is necessary to calculate the integral \begin{eqnarray} \left\langle t\left( X\right) \right\rangle =\int\limits_0^\infty dt\,tP_X\left( t\right) , \end{eqnarray} and the average transition time of a packet from the point $X_i$ to the point $X_f$ will be \begin{equation} \left\langle t_T\left( X_i,X_f\right) \right\rangle =\left\langle t\left( X_f\right) \right\rangle -\left\langle t\left( X_i\right) \right\rangle . \end{equation} If the points $X_i$ and $X_f$ are chosen on different sides of the potential barrier, then the expression (19) can be used to estimate the tunneling time.

The main drawback of the definition (17) is that, as a rule, detectors sensitive to the flux density and not to the probability density are used in physical experiments. Therefore a different quantity must be considered in order to compare theory and experiment. For this, the distribution of arrival times of a wave packet at a prescribed point was introduced in terms of the probability flux density:\cite{r34} \begin{equation} P_X\left( t\right) =\frac{\left\langle \psi \left( t\right) \left| \hat{J}\left( X\right) \right| \psi \left( t\right) \right\rangle }{\displaystyle \int\limits_0^\infty dt\langle \psi \left( t\right) \left| \hat{J}\left( X\right) \right| \psi \left( t\right) \rangle }, \end{equation} where \begin{equation} \hat{J}\left( X\right) =\frac 12\left[ \hat{p}\delta \left( \hat{q}-X\right) +\delta \left( \hat{q}-X\right) \hat{p}\right] . \end{equation} Of course, the definition (20) is not a true distribution function in the sense of probability theory, since this function can assume negative values at some points. Nonetheless the definition (20) will be a distribution function if there is no reverse flux through the point $X$, or if this flux is negligibly small. For this the point $X$ is chosen sufficiently far from the barrier. By measuring the distribution of the arrival times of a packet in front of and beyond the barrier, the transition time through a region much larger than the region of the potential barrier can be calculated. This time is analogous to the asymptotic phase times\cite{r35} and, besides the tunneling time and the packet--barrier interaction time, it also contains the transmission time through the region where the potential barrier is zero. These two contributions cannot be separated. Despite continuing discussions, this tunneling-time problem has still not been finally solved.\cite{r19,r20,r21,r22,r23,r24,r36,r37}

Another problem concerns the physical implementation of an experiment in which simultaneous detection of a packet in front of and beyond a barrier would not lead to substantial reduction of the wave function. For this reason, ordinarily, a different quantity --- the ``time delay'' --- is measured in experiments.\cite{r38,r39,r40,r41,r42} A time delay arises because of the presence of a barrier and is defined as the difference of the average arrival times of the tunneling and free packets: \begin{eqnarray} \Delta \tau _{arrival}(X)=\langle t_X\rangle ^{tun}-\langle t_X\rangle ^{free}. \end{eqnarray} The definition (20) for calculating the average arrival times gives a reasonable estimate of the time delays measured in an experiment.
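The definitions above are easy to evaluate for a free packet, for which everything is known analytically. The following Python sketch (our illustration, in units $\hbar = m = 1$ with parameters chosen only for clarity) propagates a Gaussian packet by Fourier transform and computes the average presence time, Eqs. (14) and (17), and the flux-weighted average arrival time, Eqs. (18) and (20), at a detector point $X$; the time delay (22) is the difference of two such arrival times, computed with and without the barrier.

\begin{verbatim}
import numpy as np

x = np.linspace(-400.0, 400.0, 4096)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
x0, k0, sx = -100.0, 1.0, 10.0        # packet center, momentum, width

psi0 = (2 * np.pi * sx**2) ** (-0.25) \
       * np.exp(-(x - x0) ** 2 / (4 * sx**2) + 1j * k0 * x)
psi0_k = np.fft.fft(psi0)

def psi(t):                            # free evolution in momentum space
    return np.fft.ifft(psi0_k * np.exp(-0.5j * k**2 * t))

X = 50.0
iX = np.argmin(np.abs(x - X))
ts = np.linspace(0.0, 300.0, 600)

dens, flux = [], []
for t in ts:
    p = psi(t)
    dens.append(np.abs(p[iX]) ** 2)    # |psi(X,t)|^2 as in Eq. (17)
    dpdx = (p[iX + 1] - p[iX - 1]) / (2 * dx)
    flux.append((np.conj(p[iX]) * dpdx).imag)   # J(X,t) as in Eq. (20)

dens, flux = np.array(dens), np.array(flux)
print(np.sum(ts * dens) / np.sum(dens))   # presence time
print(np.sum(ts * flux) / np.sum(flux))   # arrival time
\end{verbatim}

For this rightward-moving packet the flux at $X$ is positive, so (20) is a genuine distribution, and both averages come out close to the classical transit time $(X-x_0)/k_0 = 150$; the small difference between them illustrates why the two definitions must be distinguished for tunneling packets.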
The distribution of arrival times (20) can be rewritten in the Wigner formulation of quantum mechanics as
\begin{equation} P_X\left( t\right) =\frac{\displaystyle \int \!\!\!\displaystyle \int dq\,dp\,J_X\left( q,p\right) W\left( q,p,t\right) }{\displaystyle \int\limits_0^\infty dt\displaystyle \int \!\!\!\displaystyle \int dq\,dp\,J_X\left( q,p\right) W\left( q,p,t\right) }, \end{equation}
where the Weyl symbol of the current operator $\hat{J}(X)$ is
\begin{eqnarray} J_X\left( q,p\right) =\frac \hbar 2\sin \left( \frac{2p\left( X-q\right) }\hbar \right) \frac \partial {\partial q}\delta \left( q-X\right) . \end{eqnarray}
Substituting the expression (24) into Eq. (23) and integrating by parts over the variable $q$, we obtain
\begin{equation} P_X\left( t\right) =\frac{\displaystyle \int dp\,pW(X,p,t)}{\displaystyle \int\limits_0^\infty dt\displaystyle \int dp\,pW(X,p,t)}. \end{equation}
Comparing the expressions (17) and (25), it is easy to see that they differ only by the factor of the momentum $p$ appearing in the numerator and denominator of Eq. (25). This momentum appears there because it is the probability flux density that is measured.

\section{Simulation results}

We shall examine a series of experiments on the tunneling of an electron with the wave function
\begin{equation} \psi (x,0)=\frac 1{\left( 2\pi \sigma _x\right) ^{1/4}}\exp \left[ -\left( \frac{x-x_0}{2\sigma _x}\right) ^2+ik_0x\right] \end{equation}
through a Gaussian potential barrier
\[ V\left( x\right) =V_0\exp \left[ -\frac{(x-d)^2}{\sigma ^2}\right] . \]
The Wigner distribution function (3) corresponding to the initial wave function of the electron can be written as
\begin{equation} W(q,p,0)=2\exp \left[ -\frac{(q-x_0)^2}{2\sigma _x^2}\right] \exp \left[ -\frac{2\sigma _x^2(p-\hbar k_0)^2}{\hbar ^2}\right] . \end{equation}
The center $x_0=\left\langle \psi (x,0)\right| \hat{x}\left| \psi (x,0)\right\rangle $ of the wave packet at zero time was chosen far enough from the left-hand boundary of the barrier that the probability density beyond the barrier would be negligibly small compared with the transmission probability $\left| T\right| ^2$ through the barrier. Tunneling occurred through a ``wide'' ($\sigma =2.5$ nm --- this barrier parameter is characteristic of Al$_x$Ga$_{1-x}$As structures) and a ``narrow'' ($\sigma =0.5$ nm) Gaussian barrier of height $V_0=0.3$ eV centered at $d=0$. The electron kinetic energy was $E_0=\hbar ^2k_0^2/2m=V_0/2=0.15$ eV. We used the system of units in which $\hbar =m=V_0=1$. Distances were measured in units of the reduced de Broglie wavelength $\lambda =1/k_0$. In this system of units the parameters of the wave packet and barrier are: $E_0=0.5$, $\Delta k=0.04$ $(0.125)$, $\sigma _x=1/2\Delta k=12.5$ $(4)$, $x_0=-92.5$ $(-43)$, and $\sigma =5$ $(2.5$ nm$)$ or $\sigma =1$ $(0.5$ nm$)$.

\subsection{Evolution of the wave packet}

The interaction of a wave packet ($\hbar \Delta k=0.125$) with a narrow potential barrier ($\sigma =1$ $(0.5$ nm$)$) is shown in Figs. \ref{f1}a and b. These figures show the probability density $\left| \psi (x,t)\right| ^2$ (curves 1--5) of the reflected (Fig. \ref{f1}a) and tunneled (Fig. \ref{f1}b) wave packets at successive times $t=114$--$239$ fs. The probability density was calculated using Eq. (16), i.e., in terms of the Wigner distribution function. This integral was calculated along quantum and along classical trajectories.
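The initial data above are straightforward to set up numerically. The following Python sketch (ours; the grid ranges and sizes are assumptions) builds the initial Wigner function (27) and the Gaussian barrier in the reduced units $\hbar = m = V_0 = 1$, and recovers $\left|\psi(q,0)\right|^2$ as the momentum marginal of $W$, cf. Eq. (16):
\begin{verbatim}
# Minimal sketch: initial Wigner function (27) and Gaussian barrier on
# a phase-space grid (wide-barrier parameter set; grids are assumed).
import numpy as np

hbar, k0 = 1.0, 1.0              # reduced units, distances in 1/k0
sigma_x, x0 = 12.5, -92.5        # packet width and initial center
sigma_b, V0, d = 5.0, 1.0, 0.0   # barrier width, height, and center

q = np.linspace(-150.0, 50.0, 512)
p = np.linspace(0.0, 2.5, 256)
Q, P = np.meshgrid(q, p, indexing="ij")

# Eq. (27) for the Gaussian packet (26)
W0 = (2.0 * np.exp(-(Q - x0) ** 2 / (2.0 * sigma_x ** 2))
          * np.exp(-2.0 * sigma_x ** 2 * (P - hbar * k0) ** 2 / hbar ** 2))

V = V0 * np.exp(-(q - d) ** 2 / sigma_b ** 2)  # Gaussian barrier

# Eq. (16): |psi(q,0)|^2 as the momentum marginal of W
rho0 = W0.sum(axis=1) * (p[1] - p[0])
\end{verbatim}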
In the calculation over classical trajectories, only the high-energy components of a packet can pass above the barrier classically. This calculation corresponds to curve 1 in Fig. \ref{f1}c, and the evolution of the Wigner function is then described only by the first term of the series (10). In the formalism of quantum trajectories the passage of the components of a packet beyond the barrier is associated with random perturbations of the momentum, i.e., with a virtual change in energy. The results of this calculation correspond to curve 2 in Fig. \ref{f1}c. Now the quantum corrections introduced by all terms in the series (10) are taken into account in the evolution of the Wigner function. Of course, the calculation over quantum trajectories also takes account of the high-energy components that pass above the barrier, since these describe the contribution of the first term in the series (10). However, a comparison of curves 1 and 2 in Fig. \ref{f1}c shows that their role is negligible for a narrow barrier: most of the packet passes above the barrier on account of the virtual change in energy, described as random perturbations of the momentum of the quantum trajectories. A study of tunneling through a wide barrier leads to the opposite conclusion. Curves 1 and 2 in Fig. \ref{f1}d approximately coincide. This means that most of the packet has passed above the barrier, and the contribution of all terms in the series (10), except for the first term, is negligibly small. To avoid such a situation and to restore the importance of quantum effects, it is necessary to decrease the momentum uncertainty of the initial wave packet. In what follows, all calculations for a wide barrier are presented for the momentum uncertainty $\hbar \Delta k=0.04$.

\subsection{Average coordinate, average momentum, and their variances}

Figure \ref{f2}a shows the evolution of the average coordinate $\left\langle \psi (t)\right| \hat{X}\left| \psi (t)\right\rangle $ of the wave packet calculated over classical (curve 1) and quantum (curve 2) trajectories. In these two methods for calculating the average coordinate $\bar{X}$, no differences are observed before the interaction with the barrier (curves 1 and 2 coincide). This result can be explained as follows. In the method under discussion the quantum-mechanical properties enter at two points: in the properties of the initial state of a wave packet and in the evolution of the packet. Since the same initial data were chosen for the quantum and classical trajectories, the fact that $\bar{X}$ is the same must be explained by the evolution of the wave packet. Specifically, while the packet moves freely in front of the barrier, it is correctly described by classical trajectories as well. In this case the first term in the series (10) is sufficient to describe the evolution of the Wigner function. This result can also be obtained analytically by estimating the right-hand side of the Wigner--Liouville equation (4). For the initial Wigner function (27) and the Gaussian barrier which we have chosen, it is easy to show that the integral on the right-hand side of Eq. (4) decays exponentially with increasing distance from the barrier. In this case Eq. (4) becomes the classical Liouville equation, whose characteristics are ordinary classical trajectories. A difference in the behavior of curves 1 and 2 appears after the packet interacts with the barrier.
Now the classical trajectories are no longer characteristics and do not describe the evolution of the wave packet correctly. In Figs. \ref{f2}a and b the average coordinate and momentum calculated over quantum trajectories (curve 2) are greater than those calculated over classical trajectories (curve 1). This is due to the following circumstances. In the first place, since most of the packet is reflected, the average momentum changes sign after the packet is scattered by the barrier, as one can see from Fig. \ref{f2}b. In the second place, the classical trajectories (curve 1) do not take account of tunneling; they only take account of the negligible above-barrier transmission arising from the uncertainty in the momentum of a Gaussian wave packet. At the same time it is obvious that the tunneled part of the packet has positive momentum and moves in the direction opposite to the reflected part. Therefore its contribution to $\bar{X}$ and $\bar{P}$ has a different sign. This explains the difference between curves 1 and 2. In addition, the motion of the tunneled and reflected packets on different sides of the barrier also explains the more rapid increase of the coordinate variance in the quantum calculation (curve 2, Fig. \ref{f2}c) as compared with the classical calculation (curve 1), which takes into account only the spreading of the wave packet. The behavior of the packet width upon scattering by the barrier is shown in greater detail in the upper left-hand part of Fig. \ref{f2}c. The interaction of a packet with the barrier also leads to an interesting behavior of the momentum variance in Fig. \ref{f2}d. The constant values (curve 1) on the initial and final sections show the momentum variance in the incident and reflected wave packets, i.e., before and after the interaction with the barrier. The observed peak is due to the change in the sign of the momentum of the packet and to the fact that different components reach the barrier and are reflected from it at different times. The increase in the momentum variance (curve 2) on the final section is explained by the appearance of a tunneled packet with positive momentum in the quantum computational method, while the total average momentum is negative.

\subsection{Distributions of arrival and presence times. Momentum distribution function}

The results of the calculation of the unnormalized presence-time distribution (17) at different points in front of the barrier, inside the barrier, and beyond the barrier are presented in Figs. \ref{f3}a and b (curves 1--5). Figures \ref{f4}a and b show the analogous results for the unnormalized distribution (20) of the arrival times. Curves 1 in Figs. \ref{f3}a and \ref{f4}a show the behavior of the probability density and flux, reflecting the fact that the incident and reflected wave packets pass through the detector at different times. Curve 2 in Fig. \ref{f4}a shows the behavior of the flux measured at a certain point to the left of the barrier center. The tunneling and high-energy components present in the initial packet can classically reach this point. An interesting result is obtained for the probability flux density in Fig. \ref{f4}b (curves 3--5). The flux measured at the barrier center (curve 3) is much less than the flux at the right-hand boundary of the barrier (curve 4) and far to the right of the barrier (curve 5). This means that interference of the tunneling components of the wave packet which move in opposite directions occurs inside the barrier.
Some of these components pass completely through the barrier, while others are reflected inside the barrier and do not reach its right-hand boundary. Interference of the reflected and transmitted components leads to the observed decrease in the flux amplitude at the barrier center (curve 3) and at the right-hand boundary (compare curves 4 and 5). It is interesting that the investigation of tunneling using classical trajectories in complex time reveals a similar effect.\cite{r37} There it was found that the transition through a barrier occurs as a series of attempts, many of which are unsuccessful because of reflections in different regions under the barrier. A comparison of the presence- and arrival-time distributions in Figs. \ref{f3}b and \ref{f4}b shows that they are almost identical. The computed average presence and arrival times (18) are also identical (the difference is less than 1 fs). As we have already stated, the distribution of arrival times (20) is not a true distribution function and, as one can see from Fig. \ref{f4}a (curve 2), it is not suitable for calculating the average arrival time of a packet in front of the barrier. This makes it impossible to calculate the tunneling time as the difference (19) of the average arrival times of the packet in front of and beyond the barrier. Nonetheless the expression (19) can be used to estimate the tunneling time if the average presence time (14) is used instead of the average arrival time in front of the barrier. The tunneling time through the potential barrier is then $\tau _T(-0.67\sigma ,+0.67\sigma )=12$ fs, i.e., it is almost equal to the transmission time of a free packet through a similar region, $\tau _T^{class}(-0.67\sigma ,+0.67\sigma )=13.5$ fs. The time delays were measured at the points $x_4=0.67\sigma $ $(1.6$ nm) and $x_5=5\sigma $ (12 nm) and were found to be $\Delta \tau _{arrival}(x_4)=8$ fs and $\Delta \tau _{arrival}(x_5)\le 0.5$ fs. If these measurements were performed even farther to the right of the barrier, then $\Delta \tau _{arrival}(x)$ would become negative. Thus an interesting behavior is discerned: even though the tunneling wave packet is delayed by the barrier ($\Delta \tau _{arrival}(x_4)=8$ fs) and passes through the barrier in approximately the same time as a free packet, it appears earlier at a certain distance to the right of the barrier. This effect can be explained by the fact that the transmission probability through a Gaussian barrier increases with energy, so that packet components with larger momenta have a higher probability of ending up beyond the barrier. These components move more rapidly than a free packet and eventually overtake it; the time delays can then only be negative. This is confirmed by the momentum distribution function
\begin{equation} \frac{\langle \psi \left( t\right) \left| \delta \left( \hat{p}-p\right) \right| \psi \left( t\right) \rangle }{\langle \psi \left( t\right) |\psi \left( t\right) \rangle }, \end{equation}
calculated for the narrow (Fig. \ref{f5}a) and wide (Fig. \ref{f5}b) barriers, respectively, at the times $t=218$ and $385$ fs. At these characteristic times the distribution function no longer changes, since the interaction with the barrier has finished. It is evident from Fig. \ref{f5} that the average momentum of the tunneled wave packet (curve 2) is greater than the initial average momentum of the wave packet (curve 1). The peak observed in the momentum distribution function (curve 2 in Fig.
\ref{f5}a) is due to the packet components that had a sufficiently large momentum and passed above the barrier. It is evident that tunneling through a narrow potential barrier leads to a larger variance of the distribution function, while tunneling through a wide barrier substantially shifts the center of the distribution in the direction of large momenta (curve 2 in Fig. \ref{f5}b).

\section{Conclusions}

The quantum generalization of classical molecular dynamics was used to solve the Wigner--Liouville integral equation in the Wigner formulation of quantum mechanics. The method discussed for solving this equation does not require a large increase of computer time and makes it possible to avoid the computational difficulties that arise when solving the nonstationary Schr\"{o}dinger equation. This approach was used to solve the nonstationary problem of the tunneling of a finite wave packet, i.e., a problem in which it is important to take account of exponentially small quantum effects. The evolution of a wave packet, the behavior of the averages and variances of the coordinate and momentum, and the distributions of the presence and arrival times of the wave packet at different positions of an ideal detector were analyzed. The following results were obtained: 1) the tunneling time through a potential barrier is approximately of the same order of magnitude as the transmission time of a free wave packet over a similar distance; 2) the tunneling wave packet is delayed by the potential barrier, so that immediately beyond the barrier the time delay is positive; 3) measurement of negative time delays is possible only at sufficiently large distances from the barrier and is associated with a shift of the momentum distribution function; 4) a Gaussian barrier transmits mainly the high-energy components of a packet; the interaction with the barrier shifts the center of the momentum distribution function, so that the average momentum of the transmitted packet is larger than the initial average momentum of the entire packet; 5) tunneling through a narrow potential barrier leads to a larger variance of the momenta of the tunneled components, while tunneling through a wide barrier leads to an appreciable increase in the average momentum; and 6) the computational results for the probability flux density showed that the tunneling wave packet does not pass completely through the barrier: a portion of the packet under the barrier is reflected and does not reach its right-hand boundary.

This work was partially supported by grants from the Russian Fund for Fundamental Research and the Program ``Physics of Solid-State Nanostructures.''
\section{Introduction}\label{introduction}

In this paper, we consider nonlinear inverse problems of the form
\begin{equation}\label{Fx=y} F(x)=y \,, \end{equation}
where $F:D(F)\subset X \to Y$ is a continuously Fr\'echet-differentiable nonlinear operator between real Hilbert spaces $X$ and $Y$. Furthermore, we assume that instead of exact data $y$ we are only given noisy data $y^{\delta}$ which satisfy
\begin{equation}\label{cond_noise_delta} \norm{y-y^{\delta}} \leq \delta \,, \end{equation}
where $\delta$ denotes the noise level. Typically, inverse problems are also ill-posed, which means that they may have no or even multiple solutions, and in particular that a solution does not necessarily depend continuously on the data. This entails a number of difficulties, due to which one usually has to regularize the problem. During the last decades, a large number of different regularization approaches have been developed; see for example \cite{Engl_Hanke_Neubauer_1996,Kaltenbacher_Neubauer_Scherzer_2008,Schuster_Kaltenbacher_Hofmann_Kazimierski_2012} and the references therein. Two of the most popular methods, which also serve as the bases for a wide variety of other regularization approaches, are \emph{Tikhonov regularization} \cite{Tikhonov_1963,Tikhonov_Glasko_1969} and \emph{Landweber iteration} \cite{Landweber_1951}. In its most basic form, Tikhonov regularization determines a stable approximation $x_\alpha^\delta$ to the solution of \eqref{Fx=y} as the minimizer of the Tikhonov functional
\begin{equation}\label{Tikhonov} T_\alpha^\delta(x) := \norm{F(x) - y^{\delta}}_Y^2 + \alpha \norm{x-x_0}_X^2 \,, \end{equation}
where $\alpha \geq 0$ is a regularization parameter and $x_0$ is an initial guess. In order to obtain convergence of $x_\alpha^\delta$ to a solution $x_*$ of \eqref{Fx=y}, the regularization parameter $\alpha$ has to be suitably chosen. If the noise level $\delta$ from \eqref{cond_noise_delta} is known, then one can either use \emph{a-priori} parameter choice rules such as $\alpha \sim \delta$, or \emph{a-posteriori} rules such as the \emph{discrepancy principle}, which determines $\alpha$, for some $\tau > 1$, as the solution of the nonlinear equation
\begin{equation}\label{discrepancy_principle_alpha} \norm{F(x_\alpha^\delta)-y^{\delta}} = \tau \delta \,. \end{equation}
Unfortunately, in many practical applications, estimates of the noise level $\delta$ are either unavailable or unreliable, which renders the above parameter choice rules impractical. Hence, a number of so-called \emph{heuristic parameter choice rules} have been developed over the years; see for example \cite{Leonov_1978,Leonov_1991,Tikhonov_Glasko_1965,Hanke_Raus_1996,Reginska_1996,Hansen_OLeary_1993,Wahba_1990} and the references therein. Most of them determine a regularization parameter $\alpha_*$ via
\begin{equation}\label{heuristic_rule_alpha} \alpha_* \in \argmin_{\alpha \geq 0} \, \psi(\alpha,y^{\delta}) \,, \end{equation}
where $\psi : \R_0^+ \times Y \to \R \cup \{\infty\}$ is some lower semi-continuous functional.
For example, the following popular choices in turn define the \emph{heuristic discrepancy (HD) principle} \cite{Hanke_Raus_1996}, the \emph{Hanke-Raus (HR) rule} \cite{Hanke_Raus_1996,Raus_Haemarik_2018}, the \emph{quasi-optimality (QO) rule} \cite{Tikhonov_Glasko_1969}, and the \emph{simple L (LS) rule} \cite{Kindermann_Raik_2020}:
\begin{equation}\label{heuristic_rules_alpha} \begin{split} \psi_{\text{HD}}(\alpha,y^\delta) &:= \frac{1}{\sqrt{\alpha}}\norm{F(x_\alpha^\delta)-y^{\delta}} \,, \\ \psi_{\text{HR}}(\alpha,y^{\delta}) &:=\frac{1}{\alpha}\spr{ F(x^\delta_{\alpha,2})-y^{\delta},F(x_\alpha^\delta)-y^{\delta}}\,, \\ \psi_{\text{QO}}(\alpha,y^{\delta}) &:= \norm{x^\delta_{\alpha,2}-x_\alpha^\delta }\,, \\ \psi_{\text{LS}}(\alpha,y^{\delta}) &:=\spr{x_\alpha^\delta-x^\delta_{\alpha,2},x_\alpha^\delta} \,, \end{split} \end{equation}
where $x^\delta_{\alpha,2}$ denotes the so-called \emph{second Tikhonov iterate} \cite{Hanke_Groetsch_1998}, which is defined by
\begin{equation*} x^\delta_{\alpha,2} := \argmin_{x\in X}\Kl{\norm{F(x)-y^{\delta}}_Y^2+\alpha\norm{x-x_\alpha^\delta}_X^2 } \,. \end{equation*}
As is usually the case, rules of the form \eqref{heuristic_rule_alpha} are not restricted to Tikhonov regularization but can be used (possibly in a different form) in conjunction with various other regularization methods as well. Due to their practical success in the treatment of linear inverse problems, heuristic parameter choice rules have received extensive theoretical attention in recent years; see e.g.\ \cite{Kindermann_2011,Kindermann_2013,Kindermann_Neubauer_2008,Raus_Haemarik_2018,Haemarik_Palm_Raus_2011,Kindermann_Raik_Hamarik_Kangro_2019} and Section~\ref{sect_heuristics} for an overview. A potential drawback of using Tikhonov regularization is the need to minimize the Tikhonov functional \eqref{Tikhonov}. Although for linear operators this reduces to solving a linear operator equation, the case of nonlinear operators is much more involved, since one typically has to use iterative optimization algorithms for the minimization of \eqref{Tikhonov}. Moreover, in order to use either the discrepancy principle \eqref{discrepancy_principle_alpha} or a heuristic rule of the form \eqref{heuristic_rule_alpha}, this minimization usually has to be carried out repeatedly for many different values of $\alpha$, which can render it infeasible for many practical applications. Hence, a popular alternative which circumvents these issues is to directly use so-called \emph{iterative regularization methods}. As noted above, perhaps the most popular of these methods is Landweber iteration \cite{Engl_Hanke_Neubauer_1996,Kaltenbacher_Neubauer_Scherzer_2008}, which is defined by
\begin{equation}\label{Landweber} x_{k+1}^\delta = x_k^\delta + \omega F'(x_k^\delta)^*(y^{\delta}-F(x_k^\delta)) \,, \end{equation}
where $\omega > 0$ is a stepsize parameter. In order to obtain a convergent regularization method, Landweber iteration has to be combined with a suitable stopping rule such as the discrepancy principle, which now determines the stopping index ${k_*}$ by
\begin{equation}\label{discrepancy_principle} {k_*}:=\min\Kl{k\in\mathbb{N} \, \vert \, \norm{F(x_k^\delta)-y^{\delta}}\leq \tau\delta } \,, \end{equation}
for some parameter $\tau \geq 1$. Note that in contrast to \eqref{discrepancy_principle_alpha} for the choice of $\alpha$ in Tikhonov regularization, the discrepancy principle for Landweber iteration can be verified directly during the iteration, and does not require it to be run more than once.
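To fix ideas, the following Python sketch (ours, not taken from any of the cited implementations) realizes Landweber iteration \eqref{Landweber} with the discrepancy principle \eqref{discrepancy_principle} for a toy diagonal nonlinear operator $F(x) = x^3$, for which $F'(x)^*$ is simply multiplication by $3x^2$; all names and parameter values are our own assumptions:
\begin{verbatim}
# Minimal sketch: Landweber iteration with the discrepancy principle
# for the componentwise toy operator F(x) = x^3 (not the paper's code).
import numpy as np

def F(x):                          # toy forward operator
    return x ** 3

def Fp_adjoint(x, r):              # F'(x)^* r; diagonal, so self-adjoint
    return 3.0 * x ** 2 * r

def landweber(y_delta, x0, omega, delta, tau=2.0, k_max=5000):
    x = x0.copy()
    for k in range(k_max):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * delta:   # discrepancy principle
            return x, k
        x = x + omega * Fp_adjoint(x, r)       # Landweber step
    return x, k_max

rng = np.random.default_rng(0)
x_true = np.full(50, 2.0)
y = F(x_true)
noise = rng.standard_normal(y.size)
delta = 0.01 * np.linalg.norm(y)               # 1% relative noise
y_delta = y + delta * noise / np.linalg.norm(noise)

# omega is chosen so that omega * ||F'(x)||^2 <= 1 near the solution
x_rec, k_stop = landweber(y_delta, np.ones_like(x_true), 0.005, delta)
print(k_stop, np.linalg.norm(x_rec - x_true))
\end{verbatim}
The stepsize $\omega = 0.005$ is chosen such that $\omega \norm{F'(x)}^2 \leq 1$ holds in a neighbourhood of the solution, in line with the stepsize condition discussed in Section~\ref{sect_Landweber}.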
Analogously to \eqref{heuristic_rule_alpha}, most heuristic parameter choice (stopping) rules for Landweber iteration determine a stopping index ${k_*}$ via
\begin{equation}\label{heuristic_rule_k} {k_*} \in \argmin_{k\in\mathbb{N}} \psi(k,y^{\delta}) \,, \end{equation}
with $\psi:\mathbb{N}\times Y\to\R \cup\{\infty\}$ again being some lower semi-continuous functional~\cite{Kindermann_2011,Kindermann_Neubauer_2008,Palm_2010,Kindermann_2013,Hanke_Raus_1996}. At least conceptually, the regularization parameter $\alpha_*$ in Tikhonov regularization and the stopping index ${k_*}$ for Landweber iteration play inversely proportional roles, i.e., $\alpha \sim 1/k$. Hence, the heuristic rules from \eqref{heuristic_rule_alpha} now correspond to
\begin{equation}\label{heuristic_rules} \begin{split} \psi_{\text{HD}}(k,y^\delta) &:=\sqrt{k}\norm{F(x_k^\delta)-y^{\delta}} \,, \\ \psi_{\text{HR}}(k,y^{\delta}) &:=k \spr{y^{\delta}-F(x^\delta_{2k}), y^{\delta}-F(x_k^\delta)} \,, \\ \psi_{\text{QO}}(k,y^{\delta}) &:= \norm{x^\delta_{2k} - x_k^\delta} \,, \\ \psi_{\text{LS}}(k,y^{\delta}) &:= \spr{x_k^\delta, x^\delta_{2k} - x_k^\delta } \,. \end{split} \end{equation}
Note that the analogue of the second Tikhonov iterate $x^\delta_{\alpha,2}$ for the Landweber method corresponds to applying $k$ further steps of Landweber iteration with initial guess $x_k^\delta$, which is easily seen to be identical to simply doubling the iteration number, i.e., calculating $x^\delta_{2k}$. Similarly to the case of the discrepancy principle \eqref{discrepancy_principle}, these functionals can be evaluated during a single run of Landweber iteration; see the sketch after this paragraph. Hence, in particular in the nonlinear case, the combination of heuristic parameter choice rules with Landweber iteration (or other iterative regularization methods) suggests itself. Heuristic parameter choice rules are already fairly well-understood in the case of linear problems. Despite a number of obstacles such as \emph{Bakushinskii's veto} \cite{Bakushinskiy_1985}, many theoretical results on both the convergence and other aspects of these rules are already available; see, e.g.,~\cite{Kindermann_2011} and the references therein. Furthermore, numerical tests have been carried out for various different linear test problems \cite{Bauer_Lukas_2011,Haemarik_Palm_Raus_2011,Palm_2010}. In contrast, for the case of nonlinear problems, not much is known with respect to convergence theory, nor is there a unifying framework for heuristic rules; furthermore, there are no numerical performance studies for heuristic rules. Hence, the main motivation of this article is to close this gap. In particular, we consider the behaviour of the four rules given in \eqref{heuristic_rule_k}, i.e., the heuristic discrepancy principle, the Hanke-Raus rule, the quasi-optimality rule, and the simple L rule, using Landweber iteration and for different nonlinear test problems from practically relevant fields such as integral equations, parameter estimation, and tomography. The aim of this paper is two-fold: On the one hand, we want to demonstrate that heuristic parameter choice rules can indeed be used successfully not only for linear but also for nonlinear inverse problems. On the other hand, we want to provide some useful insight into potential difficulties and pitfalls which one might encounter, and what can be done about them.
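The following Python sketch (again ours, continuing the toy example above) indicates how the four functionals \eqref{heuristic_rules} can be evaluated from the stored iterates of a single Landweber run of length $2k_{\max}$, with $x^\delta_{2k}$ taking the place of the second Tikhonov iterate:
\begin{verbatim}
# Minimal sketch: the four heuristic functionals from one Landweber run
# (toy operator F(x) = x^3 as before; all names are our assumptions).
import numpy as np

def F(x):
    return x ** 3

def Fp_adjoint(x, r):
    return 3.0 * x ** 2 * r

def landweber_path(y_delta, x0, omega, n_steps):
    xs = [x0.copy()]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x + omega * Fp_adjoint(x, y_delta - F(x)))
    return xs                        # xs[k] is the k-th iterate

def heuristic_functionals(xs, y_delta, k_max):
    psi = {"HD": [], "HR": [], "QO": [], "LS": []}
    for k in range(1, k_max + 1):
        r_k = y_delta - F(xs[k])
        r_2k = y_delta - F(xs[2 * k])
        dx = xs[2 * k] - xs[k]
        psi["HD"].append(np.sqrt(k) * np.linalg.norm(r_k))
        psi["HR"].append(k * np.dot(r_2k, r_k))
        psi["QO"].append(np.linalg.norm(dx))
        psi["LS"].append(np.dot(xs[k], dx))
    return {name: np.asarray(v) for name, v in psi.items()}

def stopping_index(values, k_min=5):
    # crude guard against the spurious first local minimum (see
    # Section 2.3 below); the cut-off k_min is our own heuristic
    return k_min + 1 + int(np.argmin(values[k_min:]))

# usage: xs = landweber_path(y_delta, x0, omega, 2 * k_max), then
# k_star = stopping_index(heuristic_functionals(xs, y_delta, k_max)["QO"])
\end{verbatim}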
Since we also compare the heuristics with the discrepancy principle, the results in this article additionally provide a numerical study of the performance of the latter in the nonlinear case. The outline of this paper is as follows: in Section~\ref{sect_background}, we provide some general background on heuristic parameter choice rules and nonlinear Landweber iteration. In Section~\ref{sect_test_problems}, we then introduce the various problems on which we want to test the different heuristic parameter choice rules. The corresponding results are then presented in Section~\ref{sect_numerical_results}, which is followed by a short conclusion in Section~\ref{sect_conclusion}. \section{Landweber iteration and stopping rules}\label{sect_background} In this section, we recall some basic results on Landweber iteration and heuristic parameter choice rules. Since in this paper we are mainly interested in a numerical comparison of the different rules, here we only provide a general overview of some of the main results from the literature, focusing in particular on those aspects which are relevant to understand the numerical results presented below. \subsection{Landweber iteration for nonlinear problems}\label{sect_Landweber} One of the main differences to the linear case is that for nonlinear problems only local convergence can be established. For this, one needs to impose certain restrictions on the nonlinearity of the operator $F$, such as the \emph{tangential cone condition} \cite{Hanke_Neubauer_Scherzer_1995,Kaltenbacher_Neubauer_Scherzer_2008}, which is given by \begin{equation}\label{tangentialconecondition} \norm{F(x)-F(\tilde{x})-F'(x)(x-\tilde{x})} \leq \eta \norm{F(x)-F(\tilde{x})} \,, \qquad \eta< 1/2 \,. \end{equation} This condition has to hold locally in a neighbourhood of a solution $x^*$ of the problem, and the initial guess $x_0$ of the iteration has to be contained inside it. Furthermore, the parameter $\tau$ in the discrepancy principle \eqref{discrepancy_principle} has to satisfy \begin{equation}\label{cond_tau} \tau > 2\frac{1+\eta}{1-2\eta} \geq 2 \,, \end{equation} where the factor $2$ can be slightly improved by an expression depending on $\eta$ which tends to $1$ as $\eta \to 0$, thereby recovering the optimal bound in the linear case \cite{Hanke_2014}. If in addition the stepsize $\omega$ is chosen small enough such that locally there holds \begin{equation}\label{cond_omega} \omega\norm{F'(x)}^2 \leq 1 \,, \end{equation} then Landweber iteration combined with the discrepancy principle \eqref{discrepancy_principle} converges to $x^*$ as the noise level $\delta$ goes to $0$ \cite{Hanke_Neubauer_Scherzer_1995,Kaltenbacher_Neubauer_Scherzer_2008}. Furthermore, convergence to the minimum-norm solution $x^\dagger$ can be established given that in a sufficiently large neighbourhood there holds $N(F'(x^\dagger))\subset N(F'(x))$. In order to prove convergence rates, in addition to source conditions further restrictions on the nonlinearity of $F$ are necessary \cite{Engl_Hanke_Neubauer_1996,Kaltenbacher_Neubauer_Scherzer_2008}. 
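In practice, the operator norm in \eqref{cond_omega} is rarely available in closed form. A common workaround, sketched below in Python under our own naming conventions, is to estimate $\norm{F'(x_0)}^2$ by power iteration on $F'(x_0)^* F'(x_0)$, using only the directional derivative and its adjoint as black boxes:
\begin{verbatim}
# Minimal sketch: stepsize from the condition omega * ||F'(x0)||^2 <= 1
# via power iteration (our own helper, not a library routine).
import numpy as np

def estimate_stepsize(jvp, vjp, x0, iters=50, safety=0.9, seed=0):
    # jvp(x, v) = F'(x) v and vjp(x, w) = F'(x)^* w are user-supplied.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x0.shape)
    v /= np.linalg.norm(v)
    lam = 1.0
    for _ in range(iters):
        Av = vjp(x0, jvp(x0, v))     # apply F'(x0)^* F'(x0)
        lam = np.linalg.norm(Av)     # estimate of ||F'(x0)||^2
        v = Av / lam
    return safety / lam

# example with the toy cubic operator from before: F'(x) v = 3 x^2 v
jvp = lambda x, v: 3.0 * x ** 2 * v
vjp = lambda x, w: 3.0 * x ** 2 * w
omega = estimate_stepsize(jvp, vjp, np.ones(50))
print(omega)
\end{verbatim}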
Even though the tangential cone condition \eqref{tangentialconecondition} holds for a number of different applications (see e.g.~\cite{Kaltenbacher_Nguyen_Scherzer_2019} and the references therein), and even though attempts have been made to replace it by more general conditions \cite{Kindermann_2017}, these can still be difficult to prove for specific applications (cf., e.g., \cite{Kindermann_2022} for an analysis of the EIT problem and \cite{Hubmer_Sherina_Neubauer_Scherzer_2018} for an analysis of a parameter estimation problem in linear elastography). Furthermore, even if the tangential cone condition can be proven, the exact value of $\eta$ typically remains unknown. Since this also renders condition \eqref{cond_tau} impractical, the parameter $\tau$ in the discrepancy principle then has to be chosen manually; popular choices include $\tau = 1.1$ or $\tau = 2$. These work well in many situations, but are also known to fail in others (compare with Section~\ref{sect_numerical_results} below). In any case, this shows that for practical applications involving nonlinear operators, informed ``heuristic'' parameter choices remain necessary even if the noise level $\delta$ is known.

\subsection{Heuristic stopping rules}\label{sect_heuristics}

As mentioned in the introduction, in many practical situations one does not have knowledge of the noise level; applying the discrepancy principle with unreliable estimates of the noise level is then rarely fruitful. The remedy is the use of \emph{heuristic} (aka \emph{data-driven} or \emph{error-free}) rules, where the iteration is terminated at ${k_*}:=k(y^{\delta})$, which depends only on the measured data and not on the noise level. Heuristic rules for Tikhonov regularization have been analyzed extensively for linear problems \cite{Kindermann_2011,Kindermann_Neubauer_2008,Kindermann_2013}, but less so beyond the linear case. Some results for convex problems and nonlinear problems in Banach spaces can be found in \cite{Jin_Lorenz_2010,Jin_2016_02,Jin2017}. The numerical performance of these rules in the linear case has also been studied \cite{Bauer_Lukas_2011,Haemarik_Palm_Raus_2011,Palm_2010}, and for convex Tikhonov regularization in \cite{Kindermann_Mutimbu_Resmerita_2014}. However, for Landweber iteration for nonlinear problems, neither an analysis has been given nor has the numerical performance of heuristic stopping rules been investigated so far.

Let us briefly illustrate the rationale behind the heuristic minimization-based rules \eqref{heuristic_rule_k} (or \eqref{heuristic_rule_alpha}). It is well-known that the total error between the regularized solution $x_k^\delta$ and the exact solution $x^\dagger$ can be split into an approximation and a stability error:
\begin{equation}\label{eq_error_split} \norm{x_k^\delta- x^\dagger} \leq \norm{x_k - x^\dagger} + \norm{x_k-x_k^\delta} \,. \end{equation}
Here, $x_k$ denotes the regularized solution that would be obtained for exact data ($\delta = 0$). An ideal optimal parameter choice would be one that minimizes over $k$ the total error or the upper bound in \eqref{eq_error_split}. However, this ``oracle'' parameter choice is not possible, as in \eqref{eq_error_split} only the element $x_k^\delta$ is at hand, and neither the exact solution nor the exact data are available.
The idea of minimization-based heuristic rules is to construct a computable functional $\psi$ that estimates the total error sufficiently well, i.e., $\psi(k,y^{\delta}) \sim \norm{x_k^\delta- x^\dagger}$, such that a minimization over $k$ can be expected to yield a reasonable parameter choice ${k_*}$ as well. In analogy to the approximation/stability split, we may just as well use a similar splitting for $\psi$, i.e., $\psi(k,y^{\delta}) \leq \psi_a(k,y^{\delta}) + \psi_d(k,y^{\delta})$, and search for conditions such that the respective parts estimate the approximation and stability errors separately:
\begin{align} \psi_a(k,y^{\delta}) \sim \norm{x_k - x^\dagger} \,, \label{eq_psiapprox} \\ \psi_d(k,y^{\delta}) \sim \norm{x_k-x_k^\delta } \,. \label{eq_psistab} \end{align}
As briefly mentioned earlier, the pitfall for heuristic stopping rules manifests itself in the form of the so-called \emph{Bakushinskii veto}, the consequence of which is that a heuristic stopping rule cannot yield a convergent regularization scheme in the \emph{worst case} scenario \cite{Bakushinskiy_1985}, i.e., for all possible noise elements $y-y^{\delta}$. A direct consequence is that there \emph{cannot} exist a $\psi$ with the error-estimating capabilities mentioned above, in particular such that \eqref{eq_psistab} holds! However, there is a way to overcome the negative result of Bakushinskii by restricting the class of permissible noise elements $y-y^{\delta}$. In this way, one can prove convergence in a \emph{restricted noise case} scenario. (Of course, for this approach to be meaningful, the restrictions should be such that ``realistic'' noise is always permitted.) Some noise restrictions were used, e.g., in \cite{Glasko_Kriskin_1984,Hanke_Raus_1996}, although the restrictions there were implicit and very hard to interpret. (For instance, in \cite{Glasko_Kriskin_1984}, condition \eqref{eq_psistab} was essentially \emph{postulated} rather than derived from more lucid conditions.) A major step towards an understanding of heuristic rules was made in \cite{Kindermann_Neubauer_2008,Kindermann_2011}, when a full convergence analysis in the linear case with explicit and interpretable restrictions was given. These conditions, which were proven to imply \eqref{eq_psistab}, take the form of a \emph{Muckenhoupt}-type inequality: Let $(\sigma_i,u_i,v_i)$ be the singular system of the forward operator. Then, for $p \in \{1,2\}$, the $p$-Muckenhoupt inequality holds if there is a constant $C$ and a $t_0$ such that for all admissible noise $y-y^{\delta}$ it holds that
\begin{equation}\label{eq_mc1} \sum_{\sigma_i^2 \geq t} \frac{t}{\sigma_i^2} \abs{(y-y^{\delta},v_i)}^2 \leq C \sum_{\sigma_i^2 < t} \left(\tfrac{\sigma_i^2}{t}\right)^{p-1} \abs{(y-y^{\delta},v_i)}^2 \qquad \forall t \in (0,t_0). \end{equation}
It has been shown in \cite{Kindermann_Neubauer_2008,Kindermann_2011,Kindermann_Raik_2020} that for most of the classical regularization schemes and for the four above-mentioned rules this condition suffices for estimating the stability error \eqref{eq_psistab}, and thus convergence can be proven. The inequality \eqref{eq_mc1} has the interpretation of an ``irregularity'' condition on the noise vector $y-y^{\delta}$: by postulating \eqref{eq_mc1}, the noise must be distinguishable from smooth data error (which never satisfies \eqref{eq_mc1}). This, however, agrees with the common notion of noise.
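The inequality \eqref{eq_mc1} can also be examined numerically. The following Python sketch (ours; the spectrum and the noise models are purely illustrative) computes the smallest admissible constant $C$ over a grid of values of $t$ for white noise and for a smooth data error, the latter exhibiting the expected blow-up:
\begin{verbatim}
# Minimal sketch: numerical test of the p-Muckenhoupt inequality
# (illustrative spectrum and noise models; not from the paper).
import numpy as np

def muckenhoupt_constant(sigma, e, p, t_grid):
    # Smallest C such that the inequality holds for all t in t_grid
    # (np.inf if the right-hand side vanishes while the left does not).
    best = 0.0
    for t in t_grid:
        hi = sigma ** 2 >= t
        lhs = np.sum((t / sigma[hi] ** 2) * e[hi] ** 2)
        rhs = np.sum((sigma[~hi] ** 2 / t) ** (p - 1) * e[~hi] ** 2)
        best = max(best, np.inf if rhs == 0.0 else lhs / rhs)
    return best

rng = np.random.default_rng(1)
i = np.arange(1, 2001)
sigma = 1.0 / i                        # moderately ill-posed spectrum
noise = rng.standard_normal(i.size)    # white ("irregular") noise
smooth = sigma ** 2                    # smooth data error
t_grid = np.geomspace(10 * sigma[-1] ** 2, 0.5 * sigma[0] ** 2, 40)
print(muckenhoupt_constant(sigma, noise, 1, t_grid))   # moderate C
print(muckenhoupt_constant(sigma, smooth, 1, t_grid))  # very large C
\end{verbatim}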
\begin{remark}\label{rem_heuremark} The above heuristic rules require slightly different Muckenhoupt conditions, which leads to two groups of rules: For HD and HR, \eqref{eq_mc1} with $p = 1$ suffices, while for QO and LS, the condition with $p =2$ (which is a slightly stronger requirement) has to be postulated~\cite{Kindermann_Neubauer_2008,Kindermann_2011,Kindermann_Raik_2020}. Thus the former might be successful even if the latter fail. However, it has to be kept in mind that the error analysis shows that, as long as they can be successfully applied, QO and LS in general lead to smaller errors. \end{remark}

Indeed, it has been shown in \cite{Kindermann_Neubauer_2008} that the above-mentioned Muckenhoupt inequality is satisfied in typical situations, and in \cite{Kindermann_Pereverzyev_Philipenko_2018} it was shown in a stochastic setting that it is also satisfied for coloured Gaussian noise almost surely in many cases. Below, we discuss cases where the Muckenhoupt inequality might not be satisfied and where heuristic rules may fail. The above-mentioned noise restrictions are heavily rooted in the linear theory and in particular make use of spectral theory and the functional calculus of operators. In the case of a nonlinear operator, we are no longer afforded the luxury of having these tools available. Some alternative noise conditions in the nonlinear case have been established in~\cite{Jin_Lorenz_2010} for convex variational Tikhonov regularization, in \cite{Zhang_Jin_2018} for Bregman iteration in Banach spaces, and in~\cite{Liu_Real_Lu_Jia_Jin_2020} for general variational regularization. However, as of yet, these conditions have not been deciphered into a palatable explanation of when a rule will work and when it will not. An attempt was made in \cite{Kindermann_Raik_2019_02} to formulate an analogous Muckenhoupt-type inequality for convex Tikhonov regularization in the somewhat restrictive setting of a diagonal operator over $\ell^p$ spaces with $p\in(1,\infty)$. However, to the knowledge of the authors, neither a convergence analysis nor a numerical investigation of heuristic rules for nonlinear Landweber iteration seems to be available in the literature. In light of all of this, we have great incentive to investigate heuristic stopping rules for nonlinear Landweber iteration numerically and to compare them with the more tried and tested a-posteriori rules.

The Muckenhoupt inequality covers the convergence theory. When it comes to the practical capabilities of heuristic rules, convergence rates are also important. For this, efficient estimates of the approximation error as in \eqref{eq_psiapprox} are vital, and in the linear case, sufficient conditions for this have been established, this time in the form of conditions on the exact solution $x^\dagger$. For instance, using the singular system $(\sigma_i,u_i,v_i)$, the following {\em regularity condition} (for $p = 1$ or $p=2$, depending on the rule)
\begin{equation}\label{eq_reg_con} \sum_{\sigma_i^2 \leq t } \abs{(x^\dagger ,u_i)}^2 \leq C \sum_{\sigma_i^2 > t } \left(\tfrac{\sigma_i^2}{t}\right)^{p-1}\! r(\sigma_i^2,\alpha)^2 \abs{(x^\dagger ,u_i)}^2, \qquad \forall t \in (0,t_0) \,, \end{equation}
is sufficient for \eqref{eq_psiapprox} and, together with smoothness conditions, yields (optimal-order) convergence rates in many situations \cite{Kindermann_2011}.
Here, $r(\lambda,\alpha)$ is the residual spectral filter function of the regularization method, i.e., $r(\lambda,\alpha) = \frac{\alpha}{\alpha +\lambda}$ for Tikhonov regularization and $r(\lambda,\alpha) = (1-\lambda)^\frac{1}{\alpha}$ with $\alpha = k^{-1}$ for Landweber iteration. Note that, in contrast to the Muckenhoupt condition, the regularity condition depends strongly on the regularization method via $r(\lambda,\alpha)$. The rough interpretation of \eqref{eq_reg_con} is that the exact solution has coefficients $(x^\dagger ,u_i)$ that do not deviate too much from a given decay (which is encoded in a smoothness condition).

\subsection{Challenges and practical issues for heuristic rules}\label{subsect_challenges}

Next, let us point out possible sources of failure for heuristic rules and some peculiarities of the case of nonlinear Landweber regularization.
\begin{itemize} \setlength{\itemsep}{5pt}
\item \emph{Failure of the Muckenhoupt condition.} A general problem for heuristic rules, both in the linear and the nonlinear case, arises when the Muckenhoupt condition \eqref{eq_mc1} is not satisfied. This can happen for standard noise for super-exponentially ill-posed problems, for instance, for the backward heat equation (see \cite{Kindermann_2011,Kindermann_2013}). Less obvious but practically important is the case that the Muckenhoupt inequality might also fail to hold for standard noise if the problem is \emph{nearly well-posed}, i.e., when the singular values decay quite slowly (e.g., as $\sigma_i \sim i^{-\beta}$ with, say, $\beta < 1$). In this case, exact data can be quite irregular, as the operator is only mildly smoothing; it is then hard to distinguish between exact data and noise, and this is indeed a relevant possible source of failure.
\item \emph{The spurious first local minimum for Landweber iteration.} Recall that the effective performance of heuristic rules depends also on efficient estimates of the approximation error, and in particular on the regularity condition holding. As pointed out before, this strongly depends on the regularization method. We have extensive numerical evidence that for linear and nonlinear Landweber iteration, the approximation error (i.e., \eqref{eq_psiapprox}) is often only badly estimated by $\psi(k,y^{\delta})$ for the first few iterations (i.e., $k$ small), and that \eqref{eq_reg_con} holds only with a bad constant for large $t = k^{-1}$. In practical computations, this has the consequence that $\psi(k,y^{\delta})$ typically has an outstanding local minimum for small $k$. However, this local minimum is rarely the global minimum, which usually appears much later at larger $k$, and inexperienced users are often tempted to mistakenly take this local minimum for the global one in order to save having to compute later iterations. This happens quite often in the linear and the nonlinear case for Landweber iteration, but a similar problem for Tikhonov regularization is rarely observed. The deeper reason for this discrepancy is the different shape of the residual filter function for the two methods, which makes the regularity condition \eqref{eq_reg_con} more restrictive for Landweber iteration and for large $t$.
\item \emph{Discretization cut-off.} It is known that, due to discretization, the theoretical global minimum of $\psi(k,y^{\delta})$ for finite-dimensional problems is at $k= \infty$, which does not provide a correct stopping index.
Thus, in practical computations, we have to restrict the search space by fixing an upper bound for $k$ (or, for continuous regularization methods, a lower bound for $\alpha$). Some rules on how to do this, together with an accompanying analysis in the linear case, are given in \cite{Kindermann_2013}; however, for nonlinear problems no such investigation exists. This issue is relevant for very small noise levels or for coarse discretizations; in practice, one takes a pragmatic approach, assumes a reasonable upper bound for the iteration index, and looks for interior minima rather than global minima at the boundary of the search space.
\item \emph{Only local convergence in the nonlinear case.} The established convergence theory in the nonlinear case is a local one: one can only prove convergence when the initial guess is sufficiently close to the exact solution $x^\dagger$, and in the case where noise is present, the iteration usually diverges out of the neighborhood of $x^\dagger$ as $k\to \infty$. In particular, it is possible that $x_k^\delta$ ``falls'' out of the domain of the forward operator. As a consequence, it might happen that the functionals $\psi(k,y^{\delta})$ in \eqref{heuristic_rule_k} are not defined for very large $k$. By definition, however, one would have to compute a minimizer over all $k$, which is then not practically possible. (This is different from Tikhonov regularization, whose solution is always well-defined for any $\alpha$.) In practice, as a remedy, one would introduce an upper limit for the number of iterations up to which the functional $\psi(k,y^{\delta})$ is computed. Additionally, one could monitor the distance to the initial guess $\norm{x_k^\delta-x_0}$ and terminate if this becomes too large.
\end{itemize}

In this section, we have stated some practical aspects of heuristic rules; a deeper mathematical analysis (especially in the nonlinear case) is outside the scope of this article. For further aspects of heuristic stopping rules, both from a theoretical and a practical viewpoint, we refer the reader to \cite{Raik_20} and the references therein.

\section{Test problems}\label{sect_test_problems}

In this section, we introduce a number of test problems on which we evaluate the performance of the heuristic stopping rules described above. These nonlinear inverse problems belong to a variety of different problem classes, including integral equations, tomography, and parameter estimation. For each of them, we shortly review their background and describe their precise mathematical setting and relevant theoretical results below.

\subsection{Nonlinear Hammerstein operator}\label{subsect_Hammerstein}

A commonly used nonlinear inverse problem \cite{Hanke_Neubauer_Scherzer_1995, Neubauer_2000,Neubauer_2016, Neubauer_2017_2, Hubmer_Ramlau_2017, Hubmer_Ramlau_2018} for testing, in particular, the behaviour of iterative regularization methods is based on so-called nonlinear Hammerstein operators of the form
\begin{equation*} F \, : \, H^1[0,1] \to L_2[0,1] \,, \qquad F(x)(s) := \int_0^1 k(s,t) \gamma(x(t)) \,dt \,, \end{equation*}
with some given function $\gamma: \R\to \R$. Here, we look at a special instance of this operator, namely
\begin{equation}\label{def_Hammerstein} F(x)(s) := \int_0^s x(t)^3 \, dt \,, \end{equation}
for which the tangential cone condition \eqref{tangentialconecondition} holds locally around a solution $x^\dagger$, given that it is bounded away from zero (see e.g.\ \cite{Neubauer_2017_2}).
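For orientation, a possible discretization of \eqref{def_Hammerstein} on a uniform grid is sketched below in Python (our sketch, not the cited implementation; for simplicity the adjoint is taken with respect to $L_2[0,1]$, whereas the actual implementations cited here work in $H^1[0,1]$, where the adjoint involves an additional smoothing step):
\begin{verbatim}
# Minimal sketch: the Hammerstein operator F(x)(s) = int_0^s x(t)^3 dt
# and its linearization on a uniform midpoint grid (L^2 adjoint only).
import numpy as np

n = 128
h = 1.0 / n
t = (np.arange(n) + 0.5) * h            # midpoints of the subintervals

def F(x):                               # F(x)(s) = int_0^s x(t)^3 dt
    return np.cumsum(x ** 3) * h

def F_prime(x, v):                      # F'(x) v (s) = int_0^s 3 x^2 v dt
    return np.cumsum(3.0 * x ** 2 * v) * h

def F_prime_adjoint_L2(x, w):           # (F'(x)^* w)(t) = 3 x^2 int_t^1 w ds
    return 3.0 * x ** 2 * np.cumsum(w[::-1])[::-1] * h

x_dagger = 2.0 + (t - 0.5) / 10.0       # exact solution used below (Sect. 4)
y = F(x_dagger)

# discrete adjoint consistency check: <F'v, w> == <v, F'^* w>
v, w = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
lhs = np.sum(F_prime(x_dagger, v) * w) * h
rhs = np.sum(v * F_prime_adjoint_L2(x_dagger, w)) * h
print(abs(lhs - rhs))                   # ~ machine precision
\end{verbatim}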
Furthermore, the Fr\'echet derivative and its adjoint, which are required for the implementation of Landweber iteration, can be computed explicitly.

\subsection{Diffusion-coefficient estimation}\label{subsect_diffusion}

Another classic test problem \cite{Kaltenbacher_Neubauer_Scherzer_2008} in inverse problems is the estimation of the diffusion coefficient $a$ in the partial differential equation
\begin{equation*} - \div(a \nabla u) = f \,, \end{equation*}
from measurements of $u$, given knowledge of the source term $f$ and (Dirichlet) boundary conditions on $u$. For this test problem, we focus on the one-dimensional version
\begin{equation}\label{eq_diffusion_PDE} \begin{split} -(a(s)u_s(s))_s = f(s) &\,, \qquad s \in (0,1) \,, \\ u(0) = u(1) = 0 &\,, \end{split} \end{equation}
which leads to an inverse problem of the form \eqref{Fx=y} with the nonlinear operator
\begin{equation}\label{def_diffusion} \begin{split} F \, : \, D(F) := \{ a \in H^1[0,1] \, : \, a(s) \geq \underline{a} > 0 \} \quad &\to \quad L^2[0,1] \,, \\ a \quad &\mapsto \quad F(a) := u(a) \,, \end{split} \end{equation}
where $u(a)$ is the solution of \eqref{eq_diffusion_PDE} above. The computation of the Fr\'echet derivative of $F$ and its adjoint now requires solving PDEs of the form \eqref{eq_diffusion_PDE}. Furthermore, it was shown (see e.g.\ \cite{Kaltenbacher_Neubauer_Scherzer_2008}) that the tangential cone condition \eqref{tangentialconecondition} holds locally around a solution $a^\dagger \geq c > 0$.

\subsection{Acousto-Electrical Tomography}\label{subsect_AET}

Another PDE parameter estimation problem, this time from the field of tomography, is the hybrid imaging modality of acousto-electrical tomography (AET) \cite{Ammari_Bonnetier_Capdeboscq_Tanter_Fink_2008, Kuchment_Kunyansky_2010,Gebauer_Scherzer_2008, Zhang_Wang_2004}. Based on a modulation of electrical impedance tomography (EIT) by ultrasound waves, AET aims at reconstructing the spatially varying electrical conductivity distribution inside an object from electrostatic measurements of voltages and the corresponding current fluxes on its surface. Compared for example to EIT, reconstructions of high contrast and high resolution may be obtained. Mathematically, the problem amounts to reconstructing the spatially varying conductivity $\sigma$ from measurements of the power densities
\begin{equation*} E_j(\sigma) := \sigma \abs{ \nabla u_j(\sigma) }^2 \,, \end{equation*}
where the interior voltage potentials $u_j(\sigma)$ are the solutions of the elliptic PDEs
\begin{equation}\label{eq_AET_PDE} \begin{split} \text{div}(\sigma \nabla u_j) &= 0 \,, \quad \text{in}\, \Omega \,, \\ (\sigma \nabla u_j) \cdot \Vec{n} \vert_{\partial\Omega} &= g_j \,, \end{split} \end{equation}
where $\Omega \subset \R^N$, $N=2,3$, is a bounded and smooth domain, and $g_j$ models the current flux on the boundary $\partial \Omega$ in the direction of the outward unit normal $\Vec{n}$. Once again, this problem can be restated as an operator equation of the form \eqref{Fx=y} with a Fr\'echet-differentiable nonlinear operator. Its Fr\'echet derivative and the adjoint thereof can for example be found in \cite{Hubmer_Knudsen_Li_Sherina_2018}, and their evaluation again requires the solution of PDEs of the form \eqref{eq_AET_PDE} for different right-hand sides. Note that it is not known whether the tangential cone condition holds for AET (or EIT).
Furthermore, it is in general not possible to uniquely determine the conductivity $\sigma$ from a single power density measurement $E_j(\sigma)$ \cite{Bal_2013,Isakov_2006}. In addition, if $g_j = 0$ on some part $\Gamma \subset \partial \Omega$ of the boundary, then the problem becomes severely ill-posed. On the other hand, if $g_j \neq 0$ almost everywhere on $\partial \Omega$ and given a sufficient number (depending on the dimension $N$) of ``different'' power density measurements $E_j(\sigma)$, the conductivity $\sigma$ can be uniquely reconstructed \cite{Capdeboscq_Fehrenbach_Gournay_Kavian_2009, Bal_Bonnetier_Monard_Triki_2013, Monard_Bal_2012, Alberti_Capdeboscq_2018}. In this case, the problem behaves numerically close to well-posed, which is reflected in the behaviour of the heuristic parameter choice rules; cf.~Section~\ref{subsect_challenges}.

\subsection{SPECT}\label{subsect_SPECT}

Next, we look at Single Photon Emission Computed Tomography (SPECT), which is another example from the large field of tomography \cite{Natterer_2001, Dicken_1998, Dicken_1999, Ramlau_2003, Ramlau_Teschke_2006}. In this medical imaging problem, one aims at reconstructing the radioactivity distribution $f$ (activity function) and the attenuation map $\mu$, which is related to the density of the different tissues, from radiation measurements outside the examined body. The connection between these quantities is typically modelled by the attenuated Radon transform (ART), which is given by \cite{Natterer_2001}:
\begin{equation}\label{def_SPECT} F(f,\mu)(s,\omega) := \int_{\R} f(s \omega^\perp + t \omega) \exp\left(-\int_{t}^{\infty} \mu(s \omega^\perp + r \omega) \, dr\right) \, dt \,, \end{equation}
where $s \in \R$ and $\omega \in S^1$. With this, one again arrives at a problem of the form \eqref{Fx=y}, where $y$ is then the measured sinogram. The well-definedness and differentiability of the operator $F$ with respect to suitable Sobolev spaces has been studied in detail in \cite{Dicken_1998, Dicken_1999}. However, it is still unknown whether the tangential cone condition \eqref{tangentialconecondition} holds for this problem.

\subsection{Auto-Convolution}\label{subsect_conv}

As a final test example, we consider the problem of (de-)auto-convolution \cite{Buerger_Hofmann_2015,Fleischer_Hofmann_1996, Gorenflo_Hofmann_1994,Ramlau_2003}. Among the many inverse problems based on integral operators, auto-convolution is particularly interesting due to its importance in laser optics \cite{Anzengruber_Buerger_Hofmann_Steinmeyer_2016,Birkholz_Steinmeyer_Koke_Gerth_Buerger_Hofmann_2015, Gerth_2014}. Mathematically, it amounts to solving an operator equation of the form \eqref{Fx=y} with the operator
\begin{equation}\label{def_conv} F \, : \, L_2[0,1] \to L_2[0,1] \,, \qquad F(x)(s) := (x \ast x) (s) := \int_0^1 x(s-t)x(t) \, dt \,, \end{equation}
where the functions in $L_2[0,1]$ are interpreted as 1-periodic functions on $\R$. While deriving the Fr\'echet derivative of $F$ and its adjoint is straightforward, it is not known whether the tangential cone condition \eqref{tangentialconecondition} holds. However, for small enough noise levels $\delta$, the residual functional is locally convex around the exact solution $x^\dagger$, given that the latter has only finitely many non-zero Fourier coefficients \cite{Hubmer_Ramlau_2018}.
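Thanks to the circular convolution theorem, the periodic operator \eqref{def_conv} and its linearization $F'(x)v = 2(x \ast v)$ can be implemented efficiently via the FFT. The following Python sketch (ours, under our own naming; not the implementation used in the experiments) illustrates this, with the adjoint realized as a circular correlation:
\begin{verbatim}
# Minimal sketch: periodic auto-convolution and its linearization via
# the FFT; O(n log n) per application (grid size is an assumption).
import numpy as np

n = 256
h = 1.0 / n

def F(x):                    # (x * x)(s) for 1-periodic functions
    X = np.fft.rfft(x)
    return np.fft.irfft(X * X, n) * h

def F_prime(x, v):           # F'(x) v = 2 (x * v)
    return 2.0 * np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(v), n) * h

def F_prime_adjoint(x, w):   # F'(x)^* w = circular correlation with x
    return 2.0 * np.fft.irfft(np.conj(np.fft.rfft(x)) * np.fft.rfft(w), n) * h
\end{verbatim}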
\section{Numerical Results}\label{sect_numerical_results}

In this section, we present the results of using the four heuristic parameter choice rules defined in \eqref{heuristic_rules} to determine a stopping index for Landweber iteration applied to the different nonlinear test problems introduced in Section~\ref{sect_test_problems}. For each of these problems, we started from a known solution $x^\dagger$ in order to define the exact right-hand side $y$. Random noise corresponding to different noise levels $\delta$ was added to $y$ in order to create noisy data $y^{\delta}$, and a suitable stepsize $\omega$ for Landweber iteration was computed via \eqref{cond_omega} based on numerical estimates of $\norm{F'(x^\dagger)}$. Afterwards, we ran Landweber iteration for a predefined number of iterations $k_{\text{max}}$, which was chosen manually for each problem via a visual inspection of the error, residual, and heuristic functionals, such that all important features of the parameter choice rules were captured for this comparison. Following each application of Landweber iteration, we computed the values of the heuristic functionals $\psi$, as well as their corresponding minimizers ${k_*}$. As noted in Section~\ref{sect_background}, the functional values corresponding to the first few iterations have to be discarded in the search for the minimizers due to the spurious first local minimum (tacitly assuming that the noise level is small enough that a good stopping index appears later). For each of the different heuristic rules we then computed the resulting absolute error
\begin{equation}\label{abs_error} \norm{x_{k_*}^\delta - x^\dagger }\,, \end{equation}
and for comparison, for each problem we also computed the \emph{optimal stopping index}
\begin{equation}\label{def_kopt} k_{\text{opt}} := \argmin_{k\in\mathbb{N}} \norm{x_k^\delta-x^\dagger} \,, \end{equation}
together with the corresponding optimal absolute error. Furthermore, we also computed the stopping index $k_\text{DP}$ determined by the discrepancy principle \eqref{discrepancy_principle}, which can also be interpreted as the ``first'' minimizer of the functional
\begin{equation}\label{def_hrDP} \psi_\text{DP}(k) := \abs{ \norm{F(x_k^\delta) - y^{\delta}} - \tau \delta} \,. \end{equation}
As noted in Section~\ref{sect_Landweber}, since the exact value of $\eta$ in \eqref{tangentialconecondition} is unknown for our test problems, a suitable value for $\tau$ has to be chosen manually. Depending on the problem, we used either one of the popular choices $\tau = 1.1$ or $\tau = 2$, although, as we are going to see below, these are not necessarily the ``optimal'' ones. In any case, the corresponding results are useful reference points for the performance of the different heuristic parameter choice rules. Concerning the discretization and implementation of each of the numerical test problems, we refer to the subsequent sections and the references mentioned therein. All computations were carried out in Matlab on a notebook computer with an Intel(R) Core(TM) i7-8565U processor with 1.80GHz (8 cores) and 16 GB RAM, except for the acousto-electrical tomography problem, which was carried out in Python using the FEniCS library \cite{Alnaes_Blechta_2015} on a notebook computer with an Intel(R) Core(TM) i7-4810MQ processor with 2.80GHz (8 cores) and 15.3 GB RAM.

\subsection{Nonlinear Hammerstein operator}

First, we consider the nonlinear Hammerstein problem introduced in Section~\ref{subsect_Hammerstein}.
\subsection{Nonlinear Hammerstein operator} First, we consider the nonlinear Hammerstein problem introduced in Section~\ref{subsect_Hammerstein}. In order to discretize this problem, the interval $[0,1]$ is subdivided into $128$ subintervals, and the operator $F$ itself is discretized as described in~\cite{Neubauer_2000,Neubauer_2017_2}; cf.\ also \cite{Hubmer_Ramlau_2017,Hubmer_2015}. For the exact solution we choose $x^\dagger(s) = 2+(s-0.5)/10$ and compute the corresponding data $y$ by the application of the Hammerstein operator \eqref{def_Hammerstein}. For the initial guess we choose $x_0(s) = 1$, and in the discrepancy principle \eqref{discrepancy_principle} we use $\tau = 2$. The absolute error \eqref{abs_error} corresponding to different parameter choice rules and noise levels $\delta$ from $0.1\%$ to $2\%$ is depicted in Figure~\ref{fig_Hammerstein_results}. Typical plots of the heuristic functionals $\psi$ as well as the evolution of the absolute error over the iteration are depicted in Figure~\ref{fig_Hammerstein_functionals}. There, the marked points denote the corresponding stopping indices selected via the different rules. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{Hammerstein_n128_tau2_Error_Rules_SemiLogY.jpg} \caption{Numerical results for the nonlinear Hammerstein problem introduced in Section~\ref{subsect_Hammerstein}: Absolute error \eqref{abs_error} at the stopping indices ${k_*}$ determined by the discrepancy principle \eqref{discrepancy_principle}, the different heuristic parameter choice rules \eqref{heuristic_rules}, and the optimal stopping index $k_{\text{opt}}$ defined in \eqref{def_kopt}, each for different relative noise levels $\delta$.} \label{fig_Hammerstein_results} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{Hammerstein_Phi_Delta1Percent.jpg} \caption{Numerical results for the nonlinear Hammerstein problem introduced in Section~\ref{subsect_Hammerstein}: Heuristic functionals \eqref{heuristic_rules} and discrepancy functional \eqref{def_hrDP} for $\delta = 1\%$ relative noise (left). Corresponding evolution of the absolute error $\norm{x_k^\delta - x^\dagger}$ with marked points indicating the stopping indices chosen by the different rules (right).} \label{fig_Hammerstein_functionals} \end{figure} As can be seen from the left plot in Figure~\ref{fig_Hammerstein_functionals}, the heuristic functionals $\psi$ generally exhibit the shape expected from the theoretical considerations discussed above. For example, each of the functionals exhibits a spurious ``first'' local minimum within the first few iterations, as already discussed in Section~\ref{subsect_challenges}. Apart from this, each functional $\psi$ has a well-defined minimum reasonably close to the stopping index $k_{\text{DP}}$ determined by the discrepancy principle. However, for larger noise levels, this minimum vanishes for the HD and the HR rules, which is reflected in Figure~\ref{fig_Hammerstein_results} by their unsatisfactory constant absolute error (the rules select the spurious minimum in this case). In contrast, the QO and LS rules keep their general shape for all noise levels, and thus produce stable stopping indices, which are typically larger than those determined by the discrepancy principle. Since the evolution of the absolute error depicted in the right plot in Figure~\ref{fig_Hammerstein_functionals} flattens for larger iteration numbers (an effect of the discretization), the error curves for the QO and the LS rules remain rather constant on the logarithmic scale.
Curiously, and contrary to theoretical expectations, the absolute error curve for the discrepancy principle in Figure~\ref{fig_Hammerstein_results} exhibits a parabola shape. This indicates that, on the one hand, the chosen value of $\tau$ is too small, while on the other hand the discretization might be too coarse in comparison with the small noise levels. However, in practice the discretization is often fixed by practical limitations, while a proper value of $\tau$ satisfying \eqref{cond_tau} is typically impossible to determine, or unreasonably large. Hence, this first test already indicates the usefulness of heuristic parameter choice rules (especially the QO and LS rules) in comparison to the discrepancy principle, and shows some typical limitations which we now investigate further in the remaining test problems. \subsection{Diffusion-Coefficient estimation} Next, we consider the diffusion-coefficient estimation problem introduced in Section~\ref{subsect_diffusion}. For discretizing the problem we use a standard projection approach (see e.g.\ \cite{Engl_Hanke_Neubauer_1996}) onto a finite-dimensional subspace of $H^1[0,1]$ spanned by piecewise linear FEM hat functions defined on a uniform subdivision of $[0,1]$ into $50$ subintervals. For the exact solution we choose $x^\dagger(s) = 2 + s(1-s)$ and compute the corresponding data $y$ by applying the operator $F$ defined in \eqref{def_diffusion}, using a finer grid in order to avoid an inverse crime. For the initial guess we use $x_0(s) = 2.1$, and in the discrepancy principle \eqref{discrepancy_principle} we choose $\tau = 1.1$. As before, Figure~\ref{fig_Diffusion_results} depicts the absolute errors \eqref{abs_error} corresponding to the different parameter choice rules, now for noise levels $\delta$ between $0.1\%$ and $1\%$. A typical plot of the heuristic functionals $\psi$ and the evolution of the absolute error can be seen in Figure~\ref{fig_Diffusion_functionals}. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{Diffusion_Error_Rules_SemiLogY.jpg} \caption{Numerical results for the diffusion-coefficient estimation problem introduced in Section~\ref{subsect_diffusion}: Absolute error \eqref{abs_error} at the stopping indices ${k_*}$ determined by the discrepancy principle \eqref{discrepancy_principle}, the different heuristic parameter choice rules \eqref{heuristic_rules}, and the optimal stopping index $k_{\text{opt}}$ defined in \eqref{def_kopt}, each for different relative noise levels $\delta$.} \label{fig_Diffusion_results} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{Diffusion_Phi_Delta0d2Percent.jpg} \caption{Numerical results for the diffusion-coefficient estimation problem introduced in Section~\ref{subsect_diffusion}: Heuristic functionals \eqref{heuristic_rules} and discrepancy functional \eqref{def_hrDP} for $\delta = 0.2\%$ relative noise (left). Corresponding evolution of the absolute error $\norm{x_k^\delta - x^\dagger}$ with marked points indicating the stopping indices chosen by the different rules (right). } \label{fig_Diffusion_functionals} \end{figure} We observe again a slight superiority of the QO and LS rules over the HD and HR methods and even over the discrepancy principle, although the rates of these methods (the slopes of the plots in Figure~\ref{fig_Diffusion_results}) are comparable.
The HD and HR methods exhibit erratic behaviour for large $\delta$, which is explained by the lack of a clear minimum and the resulting sub-optimal selection of the spurious minimum. While the discrepancy principle follows a theoretically predicted rate quite nicely, the QO rule (and, to a lesser extent, the LS rule) has a jump in the error curve, which is explained by the occurrence of two local minima in the graph of Figure~\ref{fig_Diffusion_functionals}; at a certain $\delta$ the global minimum switches from one to the other. As before, we can report successful results for the discrepancy principle as well as for the heuristic rules. \subsection{Acousto-Electrical Tomography} Next, we consider the AET problem introduced in Section~\ref{subsect_AET}. For a detailed description of the problem setup, discretization, and implementation, we refer to \cite{Hubmer_Knudsen_Li_Sherina_2018}. In short, the unknown inclusion $\sigma^\dagger$ consists of three uniform disconnected inclusions (two circular, one crescent shaped) with values of $1.3$, $1.7$, and $2$, respectively, in an otherwise constant background of value $1$ over the circular domain $\Omega := \Kl{(r,\theta) \in [0,1) \times [0,2\pi]} \subset \R^2$. Furthermore, we use the boundary flux functions \begin{equation*} g_j(r,\theta) := \begin{cases} \sin(2j\pi\theta/\alpha) \,, & (r,\theta) \in \Gamma(\alpha) \,, \\ 0 \,, & \text{else} \,, \end{cases} \qquad \text{for} \, j = 1,2,3 \,, \end{equation*} where $\Gamma(\alpha) := \Kl{(r,\theta) \in \Kl{1} \times [0,\alpha]} \subset \partial \Omega$ for $\alpha \in [0,2\pi]$. Hence, if $\alpha = 2\pi$ then $g_j \neq 0$ almost everywhere on $\partial \Omega$. In the following, this case will be called \emph{$100\%$ boundary data}, while the case $\alpha = 3\pi/2$ is analogously called \emph{$75\%$ boundary data}. Note that in the $75\%$ boundary case, the inverse problem shows only mild instability, while in the $100\%$ case the problem behaves essentially like a well-posed problem. This can be quantified via the condition number of the discretized Fr\'echet derivative of the underlying nonlinear operator, which is equal to $385$ and $12$ in the $75\%$ and $100\%$ boundary data cases, respectively. The power density data $E_j(\sigma^\dagger)$ is created by solving the PDE \eqref{eq_AET_PDE} for $\sigma = \sigma^\dagger$, and for the initial guess we use $\sigma_0(r,\theta) = 1.5$. For completeness, note that on the definition space of the underlying nonlinear operator we use the same weighted inner product as described in \cite{Hubmer_Knudsen_Li_Sherina_2018}. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{AET-abs-err-75bdc.jpg} \caption{Numerical results for the AET problem introduced in Section~\ref{subsect_AET} with $75\%$ boundary data: Absolute error \eqref{abs_error} at the stopping indices ${k_*}$ determined by the discrepancy principle \eqref{discrepancy_principle}, the different heuristic parameter choice rules \eqref{heuristic_rules}, and the optimal stopping index $k_{\text{opt}}$ defined in \eqref{def_kopt}, each for different relative noise levels $\delta$.} \label{fig_AET_results_75} \end{figure} \begin{figure}[ht!]
\centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{AET-2p-err-75bdc.jpg} \caption{Numerical results for the AET problem introduced in Section~\ref{subsect_AET} with $75\%$ boundary data: Heuristic functionals \eqref{heuristic_rules} and discrepancy functional \eqref{def_hrDP} for $\delta = 2\%$ relative noise (left). Corresponding evolution of the absolute error $\norm{x_k^\delta - x^\dagger}$ with marked points indicating the stopping indices chosen by the different rules (right).} \label{fig_AET_functionals_75} \end{figure} First, we present results for the $75\%$ boundary data case, for which the resulting absolute errors \eqref{abs_error} corresponding to the different parameter choice rules can be found in Figure~\ref{fig_AET_results_75}. As before, characteristic plots of the heuristic functionals $\psi$ and the evolution of the absolute errors can be found in Figure~\ref{fig_AET_functionals_75}. First of all, consider the results of the discrepancy principle \eqref{discrepancy_principle}, here used with $\tau = 2$. While its corresponding stopping index is not too far off the optimal value, the steep shape of the absolute error curve nevertheless results in an overall large error. While this suggests using a smaller $\tau$, note that already in this case for $\delta = 1\%$ the discrepancy principle is not attainable within a reasonable number of iterations. Next, note that the HR rule fails, since its corresponding functional $\psi_\text{HR}$ is monotonically increasing. Of the remaining heuristic parameter choice rules, the LS rule gives the best results overall, determining a stopping index close to the optimal one. In contrast, the HD and QO rules stop the iteration relatively early and late, respectively, and thus lead to suboptimal absolute errors. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{AET-abs-err-100bdc.jpg} \caption{Numerical results for the AET problem introduced in Section~\ref{subsect_AET} with $100\%$ boundary data: Absolute error \eqref{abs_error} at the stopping indices ${k_*}$ determined by the discrepancy principle \eqref{discrepancy_principle}, the different heuristic parameter choice rules \eqref{heuristic_rules}, and the optimal stopping index $k_{\text{opt}}$ defined in \eqref{def_kopt}, each for different relative noise levels $\delta$.} \label{fig_AET_results_100} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{AET-2p-err-100bdc.jpg} \caption{Numerical results for the AET problem introduced in Section~\ref{subsect_AET} with $100\%$ boundary data: Heuristic functionals \eqref{heuristic_rules} and discrepancy functional \eqref{def_hrDP} for $\delta = 2\%$ relative noise (left). Corresponding evolution of the absolute error $\norm{x_k^\delta - x^\dagger}$ with marked points indicating the stopping indices chosen by the different rules (right).} \label{fig_AET_functionals_100} \end{figure} Next, we consider the results for the $100\%$ boundary data case, for which the resulting absolute errors \eqref{abs_error} corresponding to the different parameter choice rules can be found in Figure~\ref{fig_AET_results_100}. Characteristic plots of the heuristic functionals $\psi$ and the evolution of the absolute errors can now be found in Figure~\ref{fig_AET_functionals_100}.
Following our previous findings, we now consider the discrepancy principle \eqref{discrepancy_principle} with the choice $\tau = 1.1$. However, apart from the case of $\delta = 2\%$, the resulting stopping index is still far away from the optimal one, with the case of $\delta = 1\%$ being non-attainable as before. Next, note that the HR, QO, and LS rules all fail, with their corresponding functionals being either monotonically increasing or monotonically decreasing. We conjecture that this failure might be related to the fact that the Muckenhoupt condition is not satisfied due to a nearly well-posed situation; cf.~Remark~\ref{rem_heuremark}. As noted above, the $100\%$ boundary case behaves essentially like a well-posed problem, which is reflected in the evolution of the absolute error depicted in Figure~\ref{fig_AET_functionals_100} (right). Consequently, a suboptimal stopping index has little effect, and the resulting errors when using the QO and the LS rules are comparable to those of the HD rule, which in this setting is the only heuristic rule producing a well-defined stopping index. \subsection{SPECT} For the fifth test, we consider the nonlinear SPECT problem introduced in Section~\ref{subsect_SPECT}. For discretizing the problem we utilize the same approach used e.g.\ in \cite{Hubmer_Ramlau_2017,Ramlau_2003,Ramlau_Teschke_2006}, using $79$ uniformly spaced angles $\omega$ in the attenuated Radon transform \eqref{def_SPECT}. For the exact solution $(f^\dagger,\mu^\dagger)$, we choose the MCAT phantom \cite{Terry_Tsui_Perry_Hendricks_Gullberg_1990}, see also \cite{Hubmer_Ramlau_2017,Ramlau_2003,Ramlau_Teschke_2006}, and for the initial guess we use $(f_0,\mu_0) = (0,0)$. For the discrepancy principle \eqref{discrepancy_principle} we choose the rather large value $\tau = 10$, which was, however, found to lead to better results than the standard choices $\tau = 1.1$ or $\tau = 2$. The absolute errors \eqref{abs_error} corresponding to the different parameter choice rules are depicted in Figure~\ref{fig_SPECT_results}, in this case for the practically realistic noise levels $\delta$ between $1\%$ and $10\%$. Again, typical plots of the heuristic functionals $\psi$ as well as the evolution of the absolute error over the iteration are depicted in Figure~\ref{fig_SPECT_functionals}. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{SPECT_Error_Rules_SemiLogY.jpg} \caption{Numerical results for the nonlinear SPECT problem introduced in Section~\ref{subsect_SPECT}: Absolute error \eqref{abs_error} at the stopping indices ${k_*}$ determined by the discrepancy principle \eqref{discrepancy_principle}, the different heuristic parameter choice rules \eqref{heuristic_rules}, and the optimal stopping index $k_{\text{opt}}$ defined in \eqref{def_kopt}, each for different relative noise levels $\delta$.} \label{fig_SPECT_results} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{SPECT_Phi_Delta5Percent.jpg} \caption{Numerical results for the nonlinear SPECT problem introduced in Section~\ref{subsect_SPECT}: Heuristic functionals \eqref{heuristic_rules} and discrepancy functional \eqref{def_hrDP} for $\delta = 5\%$ relative noise (left).
Corresponding evolution of the absolute error $\norm{x_k^\delta - x^\dagger}$ with marked points indicating the stopping indices chosen by the different rules (right).} \label{fig_SPECT_functionals} \end{figure} The plots in these figures show that all parameter choice rules work well for this problem, that the error rates follow the optimal one, and that the heuristic functionals plotted in Figure~\ref{fig_SPECT_functionals} have a clear minimum. Furthermore, the HD and HR rules are slightly superior to the other rules in terms of the resulting absolute error. Note that as for the nonlinear Hammerstein problem, the error graph for the discrepancy principle again exhibits a slight parabola shape, which could again be related either to a too small choice of $\tau$, or to a too coarse discretization of the problem. \subsection{Auto-Convolution} For the final test, we consider the auto-convolution problem introduced in Section~\ref{subsect_conv}. For the discretization of this problem, which is based on standard FEM hat functions on a uniform subdivision of the interval $[0,1]$ into $60$ subintervals, we refer to \cite{Hubmer_Ramlau_2018}. For the exact solution we choose $x^\dagger(s) = 10 + \sqrt{2} \sin(2 \pi s)$, from which we compute the corresponding data $y$ by applying the operator $F$ as defined in \eqref{def_conv}. For the initial guess we use $x_0(s) = 10 + \tfrac{1}{4}\sqrt{2} \sin(2 \pi s)$, and in the discrepancy principle we choose $\tau = 1.1$. Since the initial guess is rather close to the exact solution, we now consider noise levels $\delta$ between $0.01 \%$ and $0.1\%$. The corresponding absolute errors \eqref{abs_error} for the different parameter choice rules are depicted in Figure~\ref{fig_Autoconvolution_results}, while a typical plot of the heuristic functionals $\psi$ and the evolution of the absolute error can be seen in Figure~\ref{fig_Autoconvolution_functionals}. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{Autoconvolution_Error_Rules_SemiLogY.jpg} \caption{Numerical results for the auto-convolution problem introduced in Section~\ref{subsect_conv}: Absolute error \eqref{abs_error} at the stopping indices ${k_*}$ determined by the discrepancy principle \eqref{discrepancy_principle}, the different heuristic parameter choice rules \eqref{heuristic_rules}, and the optimal stopping index $k_{\text{opt}}$ defined in \eqref{def_kopt}, each for different relative noise levels $\delta$.} \label{fig_Autoconvolution_results} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth, trim = {6.5cm 1.5cm 6cm 2cm}, clip = true]{Autoconvolution_Phi_Delta0d05Percent.jpg} \caption{Numerical results for the auto-convolution problem introduced in Section~\ref{subsect_conv}: Heuristic functionals \eqref{heuristic_rules} and discrepancy functional \eqref{def_hrDP} for $\delta = 0.05\%$ relative noise (left). Corresponding evolution of the absolute error $\norm{x_k^\delta - x^\dagger}$ with marked points indicating the stopping indices chosen by the different rules (right).} \label{fig_Autoconvolution_functionals} \end{figure} We observe that for this problem the HD rule gives the best results overall, yielding stopping indices close to the optimal $k_{\text{opt}}$ for all considered noise levels. Furthermore, from Figure~\ref{fig_Autoconvolution_functionals} we can see that the functional $\psi_\text{HR}$ also exhibits a clearly distinguishable minimum.
However, since the absolute error drops steeply between the corresponding stopping index and the optimal one, the resulting absolute error when using the HR rule is significantly higher. In contrast, both the QO and the LS functionals tend towards $-\infty$ as the iteration number increases. This coincides with the fact that the absolute error in the iteration stays more or less constant after having reached its minimum value at $k_{\text{opt}}$. Consequently, the LS and the QO rules both stop with ${k_*} = k_{\text{max}}$, which in this case by chance leads to generally very good absolute errors. However, the shape of the QO and LS functionals contradicts the theory, as their graphs are expected to diverge for $k\to\infty$. This observation is a hint that the Muckenhoupt condition is not satisfied here for the QO and LS rules, while it might be for the HD and HR rules; cf.~Remark~\ref{rem_heuremark}. This may also be connected to the fact that the residual functional $x \mapsto \norm{F(x)-y^{\delta}}^2$ is locally convex around our chosen solution $x^\dagger$, see e.g.\ \cite[Proposition~5.2]{Hubmer_Ramlau_2018}, and thus the auto-convolution problem may behave nearly like a well-posed problem; cf.~Section~\ref{subsect_challenges}. \section{Summary and Conclusion}\label{sect_conclusion} In the previous section, we presented and discussed the results of the different parameter choice rules applied to the test problems introduced in Section~\ref{sect_test_problems}. As a summary, we now present our findings in Table~\ref{tabtab}, where we classify the results, in a rather informal style, together with comments and the suspected background. \begin{table}[ht!] \caption{Summary of performance for various examples and stopping rules.}\label{tabtab} \begin{center} \begin{tabular}{|m{3.3cm}|c||m{0.9cm}|m{1.5cm}|m{1.5cm}|m{1.5cm}|m{1.5cm}|} \multicolumn{1}{c|}{Example} &Indicator &$\delta$-rule &\multicolumn{4}{c}{Heuristic Rules} \\ \hline & &\rule{0mm}{2.3ex}DP & HD & HR & QO & LS \\ \hline \hline \multirow{2}{*}{\makecell{\\[-7mm] Hammerstein\\ equation \\ Illposedness: Mild \\ TCC: Yes }} & $k_*\sim k_{opt}$ & {\small Good} & \multicolumn{2}{c|}{\makecell{$\delta$ small: {\small Good} \\ $\delta$ large: {\small Bad} }} & \multicolumn{2}{c|}{{\small Average}} \\ \cline{2-7} & Error: & {\small Good} & \multicolumn{2}{l|}{\makecell{$\delta$ small: {\small Good} \\ $\delta$ large: {\small Bad} }} & \multicolumn{2}{c|}{{\small Average}} \\ \hline \hline \multirow{2}{*}{\makecell{\\[-7mm] Diffusion\\ Estimation \\ Illposedness: Mild \\ TCC: Yes }} & $k_*\sim k_{opt}$ & {\small Good} & \multicolumn{2}{c|}{\makecell{$\delta$ small: {\small Good} \\ $\delta$ large: {\small Bad} }} & \multicolumn{2}{c|}{{\small Good}} \\ \cline{2-7} & Error: & {\small Good} & \multicolumn{2}{l|}{\makecell{$\delta$ small: {\small Good} \\ $\delta$ large: {\small Bad} }} & \multicolumn{2}{c|}{{\small Good}} \\ \hline \hline \multirow{2}{*}{\makecell{\\[-3mm] Acousto-Electric \\ Tomography $75\%$ \\ Illposedness: Mild \\ TCC: Unknown }} & $k_*\sim k_{opt}$ & \makecell{ {\small Good} \\ } & \multicolumn{2}{c|}{{\small Bad}} & {\small Average} & {\small Excellent} \\ \cline{2-7} & Error: & \makecell{ {\small Good} \\ } & {\small Good} & {\small Good} & {\small Good} & {\small Excellent} \\ \hline \hline \multirow{2}{*}{\makecell{\\[-3mm] Acousto-Electric\\ Tomography $100\%$ \\ Near-Wellposed \\ TCC: Unknown }} & $k_*\sim k_{opt}$ & \makecell{ {\small Good} \\ } & {\small Good} & \multicolumn{3}{c|}{{\small Bad}} \\ \cline{2-7} & Error: & \makecell{
{\small Good} \\ } & {\small Good} & {\small Average} & {\small Good} & {\small Excellent} \\ \hline \hline \multirow{2}{*}{\makecell{\\[-6mm] \\ SPECT \\ Illposedness: Mild \\ TCC: Unknown }} & $k_*\sim k_{opt}$ & \makecell{ {\small Good} \\ } & \multicolumn{4}{c|}{{\small Good}} \\ \cline{2-7} & Error: & \makecell{ {\small Good} \\ } & {\small Good} & {\small Excellent} & {\small Good} & {\small Good} \\ \hline \hline \multirow{2}{*}{\makecell{\\[-6mm] Auto-\\ Convolution \\ Illposedness: Mild \\ TCC: Unknown }} & $k_*\sim k_{opt}$ & \makecell{ {\small Good} \\ } & \multicolumn{2}{c|}{{\small Good}} & \multicolumn{2}{c|}{{\small Bad}} \\ \cline{2-7} & Error: & \makecell{ {\small Good} \\ } & {\small Excellent} & {\small Average} & {\small Average} & {\small Excellent} \\ \hline \end{tabular} \end{center} \end{table} In the first column of Table~\ref{tabtab}, we indicate the (suspected) type of ill-posedness of the various test problems and whether the tangential cone condition (TCC) is known to hold. Furthermore, in the second column the performance of the parameter choice rules is classified according to two indicators: The row with $k_* \sim k_{\text{opt}}$ indicates whether the stopping index selected by the respective parameter choice rule is close to the optimal index, and whether the heuristic functionals behave as desired, i.e., have a clear minimum. The row with ``Error'' indicates how close the resulting absolute error is to the optimal error. These performance indicators are stated in a colloquial manner, classified as ``Excellent'', ``Good'', ``Average'', or ``Bad''. Some conclusions can be drawn from these results: First of all, the discrepancy principle works well in all cases, even when the tangential cone condition is not known to hold, as discussed above. However, it requires a proper choice of the parameter $\tau$, and the strange parabola shape in Figure~\ref{fig_Hammerstein_results} is, e.g., attributed to $\tau$ having been selected too small. Next, we observe that the heuristic rules work well in many cases, but not always. Furthermore, our numerical results show that it is not possible to determine an overall ``best'' heuristic parameter choice rule for all test problems. This suggests that in practice one should always first conduct a series of simulations using multiple different parameter choice rules for any given problem, instead of blindly selecting any single rule among them. From the computational point of view, the authors personally prefer the HD rule, which does not require computing and storing both iterates $x_k$ and $x_{2k}$ during the iteration. However, the HD rule also fails at times, and the LS rule in particular has been found to be a useful alternative which often showed an excellent performance with respect to the resulting absolute error. One issue with the QO and LS rules is that their corresponding functionals $\psi$ do not exhibit a clear minimum in the case of well-posed or nearly well-posed problems (e.g., AET with $100\%$ boundary data and Auto-Convolution). However, this can be attributed to the fact that the Muckenhoupt condition is stronger for these rules and hence might not be satisfied; cf.~Remark~\ref{rem_heuremark}. On the other hand, even if the QO and LS rules fail in finding a good approximation of $k_{opt}$, the resulting error is not too dramatic, since the near well-posedness then only leads to moderate errors.
Finally, our numerical studies also have implications for the further analytic study and development of (novel) heuristic stopping rules for nonlinear Landweber iteration. In particular, our findings indicate that, apart from the relation between the smoothness of the solution and the noise (i.e., noise conditions), the type of ill-posedness and especially the type of nonlinearity of the problem also have to be taken into account. The authors believe that it will in general be impossible to design and analyse a single heuristic parameter choice rule which performs well for all different nonlinear inverse problems. Rather, one may have to consider different heuristic rules depending on the type of nonlinearity and ill-posedness of each specific problem. This then poses an interesting challenge for future research. \section{Support} The authors were funded by the Austrian Science Fund (FWF): F6805-N36 (SH), F6807-N36 (ES), and P30157-N31 (SK and KR). \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Statistical inference on graphs is a burgeoning field of research in machine learning and statistics, with numerous applications in social networks, neuroscience, and other domains. Many of the graphs in application domains are large and complex but nevertheless are believed to be composed of multiple smaller-scale communities. Thus, an essential task in graph inference is detecting/identifying local (sub)communities. The resulting problem of community detection on graphs is well-studied (see the survey \cite{Fortunato2010Community}), with many available techniques including those based on maximizing modularity and likelihood \cite{Bickel2009,Newman2004,Snijders1997Estimation,Celisse}, random walks \cite{pons05:_comput,rosvall08:_maps}, spectral clustering \cite{mcsherry,rohe2011spectral,chaudhuri12:_spect,sussman12,rinaldo_2013,von2007tutorial}, and semidefinite programming \cite{hajek,abbe}. It is widely known that under suitable models --- such as the popular stochastic blockmodel and its variants \cite{holland,karrer2011stochastic,lyzinski15_HSBM} --- one can consistently recover the underlying communities as the number of observed nodes increases, and furthermore, there exist deep and beautiful phase transition phenomena with respect to the statistical and computational limits for recovery. Another important question in graph inference is, subsequent to community detection, that of characterizing the nature and/or structure of these communities. One of the simplest and possibly most essential examples is determining the (community-specific) tendencies/probabilities for nodes to form links within and between communities. Consistent recovery of the underlying communities yields a straightforward and universally employed procedure for consistent estimation of these probabilities, namely averaging the number of edges within a community and/or between communities. This procedure, when interpreted in the context of estimating the parameters $\bm{\theta} = (\theta_1, \theta_2, \dots, \theta_r)$ of a collection of independent binomially distributed random variables $\{X_1,X_2, \dots, X_r\}$ with $X_i \sim \mathrm{Bin}(n_i, \theta_i)$, corresponds to maximum likelihood estimation with no restrictive assumption on the $\{\theta_i\}$. However, in the context of graphs and communities, there are often natural relationships among the communities; hence a graph with $K$ communities need not require $K(K+1)/2$ parameters to describe the within- and between-community connection probabilities. The above procedure is therefore potentially sub-optimal. Motivated by the above observation, our paper studies the asymptotic properties of three different estimators for $\mathbf{B}$, the matrix of edge probabilities among communities, of a stochastic blockmodel graph. Two estimators are based on maximum likelihood methods and the remaining estimator is based on spectral embedding. We show that, given an observed graph with adjacency matrix $\mathbf{A}$, the most commonly used estimator --- the MLE under no rank assumption on $\mathbf{B}$ --- is sub-optimal when $\mathbf{B}$ is not invertible. Moreover, when $\mathbf{B}$ is singular, the estimator based on spectrally embedding $\mathbf{A}$ is oftentimes better (i.e., has smaller mean squared error) than the MLE under no rank assumption, and is almost as efficient as the asymptotically (first-order) efficient MLE whose parametrization depends on $\mathrm{rk}(\mathbf{B})$.
Finally, when $\mathbf{B}$ is invertible, the three estimators are asymptotically first-order efficient. \subsection{Background} We now formalize the setting considered in this paper. We begin by recalling the notion of stochastic blockmodel graphs due to \cite{holland}. Stochastic blockmodel graphs and their variants, such as degree-corrected blockmodels and mixed membership models \cite{karrer2011stochastic,Airoldi2008}, are the most popular models for graphs with intrinsic community structure. In addition, they are widely used as building blocks for constructing approximations (see e.g., \cite{gao,wolfe13:_nonpar,airoldi13:_stoch,klopp}) of the more general latent position graph or graphon models \cite{Hoff2002,lovasz12:_large}. \label{subsec:background} \begin{definition} \label{def:SBM} Let $K \geq 1$ be a positive integer and let $\bm{\pi} \in \mathcal{S}_{K-1}$ be a non-negative vector in $\mathbb{R}^{K}$ with $\sum_{k} \pi_k = 1$; here $\mathcal{S}_{K-1}$ denotes the $K-1$ dimensional simplex in $\mathbb{R}^{K}$. Let $\mathbf{B} \in [0,1]^{K \times K}$ be symmetric. We say that $(\mathbf{A}, \bm{\tau}) \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ with sparsity factor $\rho$ if the following hold. First, $\bm{\tau} = (\tau_1, \dots, \tau_n)$ where the $\tau_i$ are i.i.d. with $\Pr[\tau_i = k] = \pi_k$. Then $\mathbf{A} \in \{0,1\}^{n \times n}$ is a symmetric matrix such that, conditioned on $\bm{\tau}$, for all $i \leq j$ the $\mathbf{A}_{ij}$ are independent Bernoulli random variables with $\mathbb{E}[\mathbf{A}_{ij}] = \rho \mathbf{B}_{\tau_i, \tau_j}$. We write $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ when only $\mathbf{A}$ is observed, i.e., $\bm{\tau}$ is integrated out from $(\mathbf{A}, \bm{\tau})$. \end{definition} For $(\mathbf{A}, \bm{\tau}) \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ or $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ with $\mathbf{A}$ having $n$ vertices and (known) sparsity factor $\rho$, the likelihoods of $(\mathbf{A}, \bm{\tau})$ and $\mathbf{A}$ are, respectively, \begin{gather} L(\mathbf{A}, \bm{\tau}; \mathbf{B}, \bm{\pi}) = \Bigl(\prod_{i=1}^{n} \pi_{\tau_i} \Bigr) \Bigl(\prod_{i \leq j} (\rho \mathbf{B}_{\tau_i, \tau_j})^{\mathbf{A}_{ij}} (1 - \rho \mathbf{B}_{\tau_i,\tau_j})^{1 - \mathbf{A}_{ij}} \Bigr), \\ L(\mathbf{A}; \mathbf{B}, \bm{\pi}) = \sum_{\bm{\tau} \in [K]^{n}} L(\mathbf{A}, \bm{\tau}; \mathbf{B}, \bm{\pi}). \end{gather} When we observe $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ with $\mathbf{B} \in [0,1]^{K \times K}$, where $\rho$ and $K$ are assumed known, a maximum likelihood estimate of $\bm{\pi}$ and $\mathbf{B}$ is given by \begin{equation} \label{eq:MLE_naive} (\hat{\mathbf{B}}^{(N)}, \hat{\bm{\pi}}) = \operatornamewithlimits{argmax}_{\mathbf{B} \in [0,1]^{K \times K}, \,\, \bm{\pi} \in \mathcal{S}_{K-1}} L(\mathbf{A}; \mathbf{B}, \bm{\pi}). \end{equation} The maximum likelihood estimate (MLE) $\hat{\mathbf{B}}^{(N)}$ in Eq.~\eqref{eq:MLE_naive} requires estimation of the $K(K+1)/2$ entries of $\mathbf{B}$ and is generally intractable as it requires marginalization over the latent vertex-to-block assignment vector $\bm{\tau}$. Another parametrization of $\mathbf{B}$ is via the eigendecomposition $\mathbf{B} = \mathbf{V} \mathbf{D} \mathbf{V}^{\top}$ where $d = \mathrm{rk}(\mathbf{B})$, $\mathbf{V}$ is a $K \times d$ matrix with $\mathbf{V}^{\top} \mathbf{V} = \mathbf{I}$ and $\mathbf{D}$ is diagonal.
This parametrization results in the estimation of $d(2K - d+1)/2 \leq K(K+1)/2$ parameters, with $Kd - d(d+1)/2$ parameters being estimated for $\mathbf{V}$ (as an element of the Stiefel manifold of orthonormal $d$ frames in $\mathbb{R}^{K}$) and $d$ parameters estimated for $\mathbf{D}$. Therefore, when $d = \mathrm{rk}(\mathbf{B})$ (in addition to $\rho$ and $K$) is also assumed known, another MLE of $\mathbf{B}$ and $\bm{\pi}$ is given by \begin{equation} \label{eq:MLE_rank} (\hat{\mathbf{B}}^{(M)}, \hat{\bm{\pi}}) = \operatornamewithlimits{argmax}_{\mathbf{B} \in [0,1]^{K \times K},\,\, \mathrm{rk}(\mathbf{B}) = d, \,\, \bm{\pi} \in \mathcal{S}_{K-1}} L(\mathbf{A}; \mathbf{B}, \bm{\pi}). \end{equation} The MLE parametrization in Eq.~\eqref{eq:MLE_naive} is the one that is universally used, see e.g., \cite{Choi2010,Celisse,Snijders1997Estimation,bickel_asymptotic_normality}; variants of MLE estimation such as maximization of the profile likelihood \cite{Bickel2009} or variational inference \cite{daudin} are also based solely on approximating the MLE in Eq.~\eqref{eq:MLE_naive}. In contrast, the MLE parametrization used in Eq.~\eqref{eq:MLE_rank} has, to the best of our knowledge, never been considered in the literature. We shall refer to $\hat{\mathbf{B}}^{(N)}$ and $\hat{\mathbf{B}}^{(M)}$ as the naive MLE and the true (rank-constrained) MLE, respectively. The estimator $\hat{\mathbf{B}}^{(N)}$ is asymptotically normal around $\mathbf{B}$; in particular, Lemma~1 (and its proof) in \cite{bickel_asymptotic_normality} yields the following result. \begin{theorem}[\cite{bickel_asymptotic_normality}] \label{thm:bickel_choi} Let $\mathbf{A}_n \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ for $n \geq 1$ be a sequence of stochastic blockmodel graphs with sparsity factors $\rho_n$. Let $\hat{\mathbf{B}}^{(N)}$ be the MLE of $\mathbf{B}$ obtained from $\mathbf{A}_n$ with $\rho_n$ assumed known. If $\rho_n \equiv 1$ for all $n$, then \begin{gather} \label{eq:naive_clt1} n (\hat{\mathbf{B}}^{(N)}_{kk} - \mathbf{B}_{kk}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \frac{2 \mathbf{B}_{kk} (1 - \mathbf{B}_{kk})}{\pi_k^2}\Bigr), \quad \text{for $k \in [K]$} \\ \label{eq:naive_clt2} n(\hat{\mathbf{B}}^{(N)}_{kl} - \mathbf{B}_{kl}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \frac{\mathbf{B}_{kl} (1 - \mathbf{B}_{kl})}{\pi_k \pi_l}\Bigr), \quad \text{for $k \in [K], l \in [K], k \not = l$} \end{gather} as $n \rightarrow \infty$, and the $K(K+1)/2$ random variables $\{n(\hat{\mathbf{B}}^{(N)}_{kl}- \mathbf{B}_{kl})\}_{k \leq l}$ are asymptotically independent. If, however, $\rho_n \rightarrow 0$ with $n \rho_n = \omega(\log{n})$, then \begin{gather} \label{eq:naive_clt3} n \sqrt{\rho_n} (\hat{\mathbf{B}}^{(N)}_{kk} - \mathbf{B}_{kk}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\Bigl(0, \frac{2 \mathbf{B}_{kk}}{\pi_k^2} \Bigr), \quad \text{for $k \in [K]$} \\ \label{eq:naive_clt4} n \sqrt{\rho_n} (\hat{\mathbf{B}}^{(N)}_{kl} - \mathbf{B}_{kl}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}\bigl(0, \frac{\mathbf{B}_{kl}}{\pi_k \pi_l}\Bigr), \quad \text{for $k \in [K], l \in [K], k \not = l$} \end{gather} as $n \rightarrow \infty$, and the $K(K+1)/2$ random variables $\{n \sqrt{\rho_n}(\hat{\mathbf{B}}^{(N)}_{kl} - \mathbf{B}_{kl})\}_{k \leq l}$ are asymptotically independent.
\end{theorem} Furthermore, $\hat{\mathbf{B}}^{(N)}$ is also purported to be optimal; Section~5 of \cite{bickel_asymptotic_normality} states that \begin{quotation} These results easily imply that classical optimality properties of these procedures\footnote{The procedures referred to here are the MLE $\hat{\mathbf{B}}^{(N)}$ and its variational approximation as introduced in \cite{daudin}.}, such as achievement of the information bound, hold. \end{quotation} We present a few examples in Section~\ref{sec:main_results} illustrating that $\hat{\mathbf{B}}^{(N)}$ is optimal only if $\mathbf{B}$ is invertible, and that in general, for singular $\mathbf{B}$, $\hat{\mathbf{B}}^{(N)}$ is dominated by the MLE $\hat{\mathbf{B}}^{(M)}$. Another widely used technique for estimating $\mathbf{B}$, and a computationally tractable alternative to $\hat{\mathbf{B}}^{(N)}$, is based on the spectral embedding of $\mathbf{A}$. More specifically, given $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ with known sparsity factor $\rho$, we consider the following procedure for estimating $\mathbf{B}$: \begin{enumerate} \item Assuming $d = \mathrm{rk}(\mathbf{B})$ is known, let $\mathbf{A} = \hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} + \hat{\mathbf{U}}_{\perp} \hat{\bm{\Lambda}}_{\perp} \hat{\mathbf{U}}^{\top}_{\perp}$ be the eigendecomposition of $\mathbf{A}$ where $\hat{\bm{\Lambda}}$ is the diagonal matrix containing the $d$ largest eigenvalues of $\mathbf{A}$ in modulus and $\hat{\mathbf{U}}$ is the $n \times d$ matrix whose columns are the corresponding eigenvectors of $\mathbf{A}$. \item Assuming $K$ is known, cluster the rows of $\hat{\mathbf{U}}$ into $K$ clusters using $K$-means, obtaining an ``estimate'' $\hat{\bm{\tau}}$ of $\bm{\tau}$. \item For $k \in [K]$, let $\hat{\bm{s}}_{k}$ be the vector in $\mathbb{R}^{n}$ where the $i$-th entry of $\hat{\bm{s}}_k$ is $1$ if $\hat{\tau}_i = k$ and $0$ otherwise and let $\hat{n}_k = |\{i \colon \hat{\tau}_i = k\}|$ be the number of vertices assigned to block $k$. \item Estimate $\mathbf{B}_{kl}$ by $\hat{\mathbf{B}}^{(S)}_{kl} = \frac{1}{\hat{n}_k \hat{n}_l \rho} \hat{\bm{s}}_k^{\top} \hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} \hat{\bm{s}}_{l}$. \end{enumerate} The above procedure assumes $d$ and $K$ are known; a concrete sketch of steps 1--4 is given below. When $d$ is unknown, it can be consistently estimated using the following approach: let $\hat{d}$ be the number of eigenvalues of $\mathbf{A}$ exceeding $4 \sqrt{\delta(\mathbf{A})}$ in modulus; here $\delta(\mathbf{A})$ denotes the maximum degree of $\mathbf{A}$. Then $\hat{d}$ is a consistent estimate of $d$ as $n \rightarrow \infty$. This follows directly from tail bounds for $\|\mathbf{A} - \mathbb{E}[\mathbf{A}]\|$ (see e.g., \cite{rinaldo_2013,oliveira2009concentration}) and Weyl's inequality. The estimation of $K$ is investigated in \cite{bickel13:_hypot,lei2014} among others, and is based on showing that if $\mathbf{A}$ is a $K$-block SBM, then there exists a consistent estimate $\widehat{\mathbb{E}[\mathbf{A}]} = (\hat{\mathbf{A}}_{ij})$ of $\mathbb{E}[\mathbf{A}]$ such that the matrix $\tilde{\mathbf{A}}$ with entries $\tilde{\mathbf{A}}_{ij} = (\mathbf{A}_{ij} - \hat{\mathbf{A}}_{ij})/\sqrt{(n-1)\hat{\mathbf{A}}_{ij} (1 - \hat{\mathbf{A}}_{ij})}$ has a limiting Tracy--Widom distribution, i.e., $n^{2/3}(\lambda_{1}(\tilde{\mathbf{A}}) - 2)$ converges in distribution to the Tracy--Widom law.
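For concreteness, the following minimal sketch (in Python, using scikit-learn's $K$-means for step 2; $d$, $K$, and $\rho$ are assumed known, as in steps 1--4 above) computes $\hat{\mathbf{B}}^{(S)}$ from an observed adjacency matrix:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def spectral_estimate_B(A, d, K, rho=1.0):
    # Step 1: truncated eigendecomposition keeping the d eigenvalues of
    # the symmetric matrix A that are largest in modulus.
    lam, U = np.linalg.eigh(A)
    top = np.argsort(np.abs(lam))[::-1][:d]
    U_hat, Lam_hat = U[:, top], np.diag(lam[top])
    # Step 2: cluster the rows of U_hat into K blocks via K-means.
    tau_hat = KMeans(n_clusters=K, n_init=10).fit_predict(U_hat)
    # Steps 3-4: block-average the rank-d representation of A.
    P_hat = U_hat @ Lam_hat @ U_hat.T
    B_hat = np.empty((K, K))
    for k in range(K):
        for l in range(K):
            s_k = (tau_hat == k).astype(float)
            s_l = (tau_hat == l).astype(float)
            B_hat[k, l] = (s_k @ P_hat @ s_l) / (s_k.sum() * s_l.sum() * rho)
    return B_hat, tau_hat
\end{verbatim}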
The above spectral embedding procedure (and related procedures based on the eigendecomposition of other matrices such as the normalized Laplacian) is also well-studied, see e.g., \cite{rinaldo_2013,sussman12,rohe2011spectral,mcsherry,joseph_yu_2015,bickel_sarkar_2013,fishkind2013consistent,athreya2013limit,tang_priebe_16,perfect,coja-oghlan} among others; however, the available results mostly focused on showing that $\hat{\bm{\tau}}$ consistently recovers $\bm{\tau}$. Since $\hat{\bm{\tau}}$ is almost surely an exact recovery of $\bm{\tau}$ in the limit, i.e., $\hat{\tau}_i = \tau_i$ for all $i$ as $n \rightarrow \infty$ (see e.g., \cite{perfect,mcsherry}), the estimate of $\mathbf{B}_{k \ell}$ given by $\tfrac{1}{\hat{n}_k \hat{n}_{\ell} \rho} \hat{\bm{s}}_k^{\top} \mathbf{A} \hat{\bm{s}}_{\ell}$ is a consistent estimate of $\mathbf{B}_{k \ell}$, and furthermore, coincides with $\hat{\mathbf{B}}^{(N)}_{k \ell}$ as $n \rightarrow \infty$. The quantity $\hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} - \mathbb{E}[\mathbf{A}]$ is also widely analyzed in the context of matrix and graphon estimation using universal singular value thresholding \cite{chatterjee,gao,xu_spectral,klopp}; the focus there had been on showing the minimax rates of convergence of $\tfrac{1}{n^2}\|\hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} - \mathbb{E}[\mathbf{A}]\|_{F}$ to $0$ as $n \rightarrow \infty$. These rates of convergence, however, do not translate to results on the limiting distribution of $\hat{\mathbf{B}}^{(S)}_{k \ell} - \mathbf{B}_{k \ell} = \tfrac{1}{\hat{n}_k \hat{n}_{\ell} \rho} \hat{\bm{s}}_k^{\top} \hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} \hat{\bm{s}}_{\ell} - \tfrac{1}{n_{k} n_{\ell} \rho} \bm{s}_k^{\top}\mathbb{E}[\mathbf{A}] \bm{s}_{\ell}$. In summary, formal comparisons between $\hat{\mathbf{B}}^{(N)}$ and $\hat{\mathbf{B}}^{(S)}$ are severely lacking. Our paper addresses this important void in the literature. The contributions of our paper are as follows. For stochastic blockmodel graphs with sparsity factors $\rho_n$ satisfying $n \rho_n = \omega(\sqrt{n})$, we establish asymptotic normality of $n \rho_n^{1/2} (\hat{\mathbf{B}}^{(S)} - \mathbf{B})$ in Theorem~\ref{THM:GEN_D} and Theorem~\ref{THM:GEN_D_SPARSE}. As a corollary of this result, we show that when $\mathbf{B}$ is of full rank, $n \sqrt{\rho_n}(\hat{\mathbf{B}}^{(S)} - \mathbf{B})$ has the same limiting distribution as $n \sqrt{\rho_n} (\hat{\mathbf{B}}^{(N)} - \mathbf{B})$ given in Eq.~\eqref{eq:naive_clt1} and Eq.~\eqref{eq:naive_clt2}, and that both estimators are asymptotically efficient; the two estimators $\hat{\mathbf{B}}^{(M)}$ and $\hat{\mathbf{B}}^{(N)}$ are identical in this setting. When $\mathbf{B}$ is singular, we show that $n \sqrt{\rho_n}(\hat{\mathbf{B}}^{(S)} - \mathbf{B})$ can have smaller variances than $n \sqrt{\rho_n} (\hat{\mathbf{B}}^{(N)} - \mathbf{B})$, and thus a bias-corrected $\hat{\mathbf{B}}^{(S)}$ can have smaller mean squared error than $\hat{\mathbf{B}}^{(N)}$, and furthermore, that the resulting bias-corrected $\hat{\mathbf{B}}^{(S)}$ can be almost as efficient as the asymptotically first-order efficient estimator $\hat{\mathbf{B}}^{(M)}$.
Finally, we also provide some justification of the potential necessity of the condition that the average degree satisfies $n \rho_n = \omega(\sqrt{n})$; in essence, as $\rho_n \rightarrow 0$, the bias incurred by the low-rank representation $\hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top}$ of $\mathbf{A}$ eventually overwhelms the reduction in variance that this representation affords. \section{Central limit theorem for $\hat{\mathbf{B}}^{(S)}_{k\ell}$} \label{sec:main_results} Let $(\mathbf{A}, \bm{\tau}) \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi}, \rho)$ be a stochastic blockmodel graph on $n$ vertices with sparsity factor $\rho$ and $\mathrm{rk}(\mathbf{B}) = d$. We first consider the setting wherein both $\bm{\tau}$ and $d$ are assumed known. The setting wherein $\bm{\tau}$ is unobserved and needs to be recovered will be addressed subsequently in Corollary~\ref{cor:in_practice}, while the setting when $d$ is unknown was previously addressed following the introduction of the estimator $\hat{\mathbf{B}}^{(S)}$ in Section~\ref{sec:intro}. When $\bm{\tau}$ is known, the spectral embedding estimate of $\mathbf{B}_{kl}$ (with $\rho$ assumed known) is $\hat{\mathbf{B}}_{kl}^{(S)} = \tfrac{1}{\rho n_k n_l} \bm{s}_k^{\top} \hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} \bm{s}_l$ where $\bm{s}_k$ is the vector whose elements $\{s_{ki}\}$ are such that $s_{ki} = 1$ if vertex $i$ is assigned to block $k$ and $s_{ki} = 0$ otherwise; here $n_k$ denotes the number of vertices $v_i$ assigned to block $k$. We then have the following non-degenerate limiting distribution of $\hat{\mathbf{B}}^{(S)} - \mathbf{B}$, which we present in two variants. The first variant, Theorem~\ref{THM:GEN_D}, applies to the setting where the average degree grows linearly with $n$, i.e., the sparsity factor $\rho_n \rightarrow c > 0$; without loss of generality, we can assume $\rho_n \equiv c = 1$. The second variant applies to the setting where the average degree grows sub-linearly in $n$, i.e., $\rho_n \rightarrow 0$ with $n \rho_n = \omega(\sqrt{n})$. For ease of exposition, these variants (and their proofs) will be presented using the following parametrization of stochastic blockmodel graphs as a sub-class of the more general random dot product graphs model \cite{young2007random,grdpg1}. \begin{definition}[Generalized random dot product graph] \label{def:grdpg} Let $d$ be a positive integer and $p \geq 1$ and $q \geq 0$ be such that $p + q = d$. Let $\mathbf{I}_{p,q}$ denote the diagonal matrix whose diagonal elements contain $p$ entries equaling $1$ and $q$ entries equaling $-1$. Let $\mathcal{X}$ be a subset of $\mathbb{R}^{d}$ such that $x^{\top} \mathbf{I}_{p,q} y \in[0,1]$ for all $x,y\in \mathcal{X}$. Let $F$ be a distribution taking values in $\mathcal{X}$. We say $(\mathbf{X},\mathbf{A}) \sim \mathrm{GRDPG}_{p,q}(F)$ with sparsity factor $\rho \in (0,1]$ if the following hold. First let $X_1, X_2, \dots, X_n \overset{\mathrm{i.i.d.}}{\sim} F$ and set $\mathbf{X}=[X_1 \mid \cdots \mid X_n]^\top\in \mathbb{R}^{n\times d}$. Then $\mathbf{A}\in\{0,1\}^{n\times n}$ is a symmetric matrix such that, conditioned on $\mathbf{X}$, for all $i \leq j$ the $A_{ij}$ are independent and \begin{equation} A_{ij} \sim \mathrm{Bernoulli}(\rho X_i^\top \mathbf{I}_{p,q} X_j). \end{equation} We therefore have \begin{equation} \Pr[\mathbf{A} \mid \mathbf{X}]=\prod_{i \leq j} (\rho X^{\top}_i \mathbf{I}_{p,q} X_j)^{A_{ij}}(1- \rho X^{\top}_i \mathbf{I}_{p,q} X_j)^{(1-A_{ij})}.
\end{equation} \end{definition} It is straightforward to show that any stochastic blockmodel graph $(\mathbf{A}, \bm{\tau}) \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ can also be represented as a (generalized) random dot product graph $(\mathbf{X}, \mathbf{A}) \sim \mathrm{GRDPG}_{p,q}(F)$ where $F$ is a mixture of point masses. Indeed, suppose $\mathbf{B}$ is a $K \times K$ matrix and let $\mathbf{B} = \mathbf{U} \bm{\Sigma} \mathbf{U}^{\top}$ be the eigendecomposition of $\mathbf{B}$. Then, denoting by $\nu_1, \nu_2, \dots, \nu_K$ the rows of $\mathbf{U} |\bm{\Sigma}|^{1/2}$, we can define $F = \sum_{k=1}^{K} \pi_k \delta_{\nu_k}$ where $\delta_{\nu_k}$ denotes the Dirac measure (point mass) at $\nu_k$; here $p$ and $q$ are given by the number of positive and negative eigenvalues of $\mathbf{B}$, respectively. \begin{theorem} \label{THM:GEN_D} Let $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi}, \rho_n)$ be a $K$-block stochastic blockmodel graph on $n$ vertices with sparsity factor $\rho_n = 1$. Let $\nu_1, \dots, \nu_K \in \mathbb{R}^{d}$ be such that $\mathbf{B}_{k \ell} = \nu_k^{\top} \mathbf{I}_{p,q} \nu_{\ell}$ and let $\Delta = \sum_{k} \pi_k \nu_k \nu_k^{\top}$. For $k \in [K]$ and $\ell \in [K]$, let $\theta_{k \ell}$ be given by \begin{equation} \label{eq:mu_kl} \begin{split} \theta_{k\ell} &= \sum_{r=1}^{K} \pi_r \bigl(\mathbf{B}_{kr} (1 - \mathbf{B}_{kr}) + \mathbf{B}_{\ell r}(1 - \mathbf{B}_{\ell r})\bigr)\nu_k^{\top} \Delta^{-1} \mathbf{I}_{p,q} \Delta^{-1} \nu_{\ell} \\ & - \sum_{r=1}^{K} \sum_{s=1}^{K} \pi_r \pi_s \mathbf{B}_{sr} (1 - \mathbf{B}_{sr}) \nu_s^{\top} \Delta^{-1} \mathbf{I}_{p,q} \Delta^{-1} (\nu_{\ell} \nu_k^{\top} + \nu_k \nu_{\ell}^{\top}) \Delta^{-1} \nu_s. \end{split} \end{equation} Now let $\zeta_{k \ell} = \nu_{k}^{\top} \Delta^{-1} \nu_{\ell}$. Define $\sigma_{kk}^2$ for $k \in [K]$ to be \begin{equation} \begin{split} \label{eq:sigma_kk} \sigma_{k k}^2 &= 4 \mathbf{B}_{kk}(1 - \mathbf{B}_{kk}) \zeta_{kk}^2 + 4 \sum_{r} \pi_r \mathbf{B}_{kr} (1 - \mathbf{B}_{kr}) \zeta_{kr}^2 \bigl(\tfrac{1}{\pi_k} - 2 \zeta_{kk}\bigr) \\ & + 2 \sum_{r} \sum_{s} \pi_r \pi_s \mathbf{B}_{rs} (1 - \mathbf{B}_{rs}) \zeta_{kr}^2 \zeta_{ks}^2 \end{split} \end{equation} and define $\sigma_{k \ell}^{2}$ for $k \in [K], \ell \in [K], k \not = \ell$ to be \begin{equation} \begin{split} \label{eq:sigma_kl2} \sigma_{k\ell}^2 &= \bigl(\mathbf{B}_{kk} (1 - \mathbf{B}_{kk}) + \mathbf{B}_{\ell \ell}(1 - \mathbf{B}_{\ell \ell}) \bigr)\zeta_{k \ell}^2 + 2 \mathbf{B}_{k \ell} (1 - \mathbf{B}_{k \ell}) \zeta_{kk} \zeta_{\ell \ell} \\ &+ \sum_{r} \pi_{r} \mathbf{B}_{k r}(1 - \mathbf{B}_{kr}) \zeta_{\ell r}^2 \bigl(\tfrac{1}{\pi_{k}} - 2 \zeta_{kk} \bigr) \\ &+ \sum_{r} \pi_r \mathbf{B}_{\ell r} (1 - \mathbf{B}_{\ell r}) \zeta_{kr}^{2} \bigl(\tfrac{1}{\pi_{\ell}} - 2 \zeta_{\ell \ell} \bigr) \\ & - 2 \sum_{r} \pi_r \bigl(\mathbf{B}_{k r} (1 - \mathbf{B}_{k r}) + \mathbf{B}_{\ell r} (1 - \mathbf{B}_{\ell r})\bigr) \zeta_{kr} \zeta_{r \ell} \zeta_{k \ell} \\ & + \frac{1}{2} \sum_{r} \sum_{s} \pi_r \pi_s \mathbf{B}_{rs} (1 - \mathbf{B}_{rs}) (\zeta_{k r} \zeta_{\ell s} + \zeta_{\ell r} \zeta_{k s})^2. \end{split} \end{equation} Then for any $k \in [K]$ and $\ell \in [K]$, \begin{equation} \label{eq:sbm_normal2} n \bigl(\hat{\mathbf{B}}^{(S)}_{k\ell} - \mathbf{B}_{k \ell} - \frac{\theta_{k\ell}}{n}\bigr) \overset{\mathrm{d}}{\longrightarrow} N(0, \sigma_{k \ell}^2) \end{equation} as $n \rightarrow \infty$.
\end{theorem} \begin{theorem} \label{THM:GEN_D_SPARSE} Let $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi}, \rho_n)$ be a $K$-block stochastic blockmodel graph on $n$ vertices with sparsity factor $\rho_n$. Let $\nu_1, \dots, \nu_K \in \mathbb{R}^{d}$ be such that $\mathbf{B}_{k \ell} = \nu_k^{\top} \mathbf{I}_{p,q} \nu_{\ell}$, let $\Delta = \sum_{k} \pi_k \nu_k \nu_k^{\top}$, and let $\zeta_{k \ell} = \nu_{k}^{\top} \Delta^{-1} \nu_{\ell}$ be as in Theorem~\ref{THM:GEN_D}. For $k \in [K]$ and $\ell \in [K]$, let $\tilde{\theta}_{k \ell}$ be given by \begin{equation} \begin{split} \label{eq:tilde_theta_kl} \tilde{\theta}_{k\ell} & = \sum_{r=1}^{K} \pi_r \bigl(\mathbf{B}_{kr} + \mathbf{B}_{\ell r} \bigr)\nu_k^{\top} \Delta^{-1} \mathbf{I}_{p,q} \Delta^{-1} \nu_{\ell} \\ &- \sum_{r=1}^{K} \sum_{s=1}^{K} \pi_r \pi_s \mathbf{B}_{sr} \nu_s^{\top} \Delta^{-1} \mathbf{I}_{p,q} \Delta^{-1} (\nu_{\ell} \nu_k^{\top} + \nu_k \nu_{\ell}^{\top}) \Delta^{-1} \nu_s, \end{split} \end{equation} let $\tilde{\sigma}_{kk}^{2}$ for $k \in [K]$ be \begin{equation} \begin{split} \label{eq:tilde_sigma_kk} \tilde{\sigma}_{kk}^{2} &= 4 \mathbf{B}_{kk} \zeta_{kk}^2 + 4 \sum_{r} \pi_r \mathbf{B}_{kr} \zeta_{kr}^2 \bigl(\tfrac{1}{\pi_k} - 2 \zeta_{kk}\bigr) \\ &+ 2 \sum_{r} \sum_{s} \pi_r \pi_s \mathbf{B}_{rs} \zeta_{kr}^2 \zeta_{ks}^2, \end{split} \end{equation} and let $\tilde{\sigma}_{k\ell}^{2}$ for $k \in [K]$, $\ell \in [K]$, $k \not = \ell$ be \begin{equation} \begin{split} \label{eq:tilde_sigma_kl} \tilde{\sigma}_{k\ell}^2 &= \bigl(\mathbf{B}_{kk} + \mathbf{B}_{\ell \ell} \bigr)\zeta_{k \ell}^2 + 2 \mathbf{B}_{k \ell} \zeta_{kk} \zeta_{\ell \ell} - 2 \sum_{r} \pi_r \bigl(\mathbf{B}_{k r} + \mathbf{B}_{\ell r} \bigr) \zeta_{kr} \zeta_{r \ell} \zeta_{k \ell} \\ & + \sum_{r} \pi_{r} \mathbf{B}_{k r} \zeta_{\ell r}^2 \bigl(\tfrac{1}{\pi_{k}} - 2 \zeta_{kk} \bigr) + \sum_{r} \pi_r \mathbf{B}_{\ell r} \zeta_{kr}^{2} \bigl(\tfrac{1}{\pi_{\ell}} - 2 \zeta_{\ell \ell} \bigr) \\ &+ \frac{1}{2} \sum_{r} \sum_{s} \pi_r \pi_s \mathbf{B}_{rs} (\zeta_{k r} \zeta_{\ell s} + \zeta_{\ell r} \zeta_{k s})^2. \end{split} \end{equation} If $\rho_n \rightarrow 0$ and $n \rho_n = \omega(\sqrt{n})$, then for any $k \in [K]$ and $\ell \in [K]$, \begin{equation} \label{eq:sbm_normal3} n \sqrt{\rho_n} \bigl(\hat{\mathbf{B}}^{(S)}_{k\ell} - \mathbf{B}_{k \ell} - \frac{\tilde{\theta}_{k\ell}}{n \rho_n}\bigr) \overset{\mathrm{d}}{\longrightarrow} N(0, \tilde{\sigma}_{k \ell}^2) \end{equation} as $n \rightarrow \infty$. \end{theorem} The proofs of Theorem~\ref{THM:GEN_D} and Theorem~\ref{THM:GEN_D_SPARSE} are given in the appendix. As a corollary of Theorem~\ref{THM:GEN_D} and Theorem~\ref{THM:GEN_D_SPARSE}, we have the following result for the asymptotic efficiency of $\hat{\mathbf{B}}^{(S)}$ whenever $\mathbf{B}$ is invertible (see also Theorem~\ref{thm:bickel_choi}). \begin{corollary} \label{cor:full-rank} Let $\mathbf{A} \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi}, \rho_n)$ be a $K$-block stochastic blockmodel graph on $n$ vertices with sparsity factor $\rho_n$. Suppose $\mathbf{B}$ is invertible.
Then for all $k \in [K], \ell \in [K]$, the quantities $\theta_{k \ell}$, $\tilde{\theta}_{k \ell}$, $\sigma_{k \ell}^2$, and $\tilde{\sigma}_{k \ell}^2$ as defined in Theorem~\ref{THM:GEN_D} and Theorem~\ref{THM:GEN_D_SPARSE} satisfy \begin{gather} \theta_{k \ell} = 0; \quad \tilde{\theta}_{k \ell} = 0 \\ \sigma_{kk}^2 = \frac{2 \mathbf{B}_{kk} (1 - \mathbf{B}_{kk})}{\pi_k^2}; \quad \sigma_{k\ell}^2 = \frac{\mathbf{B}_{k \ell} (1 - \mathbf{B}_{k\ell})}{\pi_k \pi_\ell} \,\, \text{if $k \not = \ell$} \\ \tilde{\sigma}_{kk}^2 = \frac{2 \mathbf{B}_{kk}}{\pi_k^2}; \quad \tilde{\sigma}_{k \ell}^2 = \frac{\mathbf{B}_{k \ell}}{\pi_k \pi_{\ell}} \,\, \text{if $k \not = \ell$}. \end{gather} Therefore, for all $k \in [K], \ell \in [K]$, if $\rho_n \equiv 1$, then \begin{equation} n (\hat{\mathbf{B}}^{(S)}_{k\ell} - \mathbf{B}_{k\ell}) \overset{\mathrm{d}}{\longrightarrow} N(0, \sigma_{k\ell}^2) \end{equation} as $n \rightarrow \infty$. If $\rho_n \rightarrow 0$ and $n \rho_n = \omega(\sqrt{n})$, then \begin{equation} n \rho_n^{1/2}(\hat{\mathbf{B}}^{(S)}_{k\ell} - \mathbf{B}_{k\ell}) \overset{\mathrm{d}}{\longrightarrow} N(0, \tilde{\sigma}_{k\ell}^2) \end{equation} as $n \rightarrow \infty$. $\hat{\mathbf{B}}^{(S)}_{k \ell}$ is therefore {\em asymptotically efficient} for all $k, \ell$. \end{corollary} \begin{proof}[Proof of Corollary~\ref{cor:full-rank}] The proof follows trivially from the observation that $\zeta_{r s} = \tfrac{1}{\pi_r}$ for $r = s$ and $\zeta_{rs} = 0$ otherwise. Indeed, $\Delta = \sum_{k} \pi_k \nu_k \nu_k^{\top} = \bm{\nu}^{\top} \mathbf{D} \bm{\nu}$ where $\bm{\nu}$ is the $K \times K$ matrix with rows $\nu_1^{\top}, \dots, \nu_K^{\top}$ satisfying $\bm{\nu} \mathbf{I}_{p,q} \bm{\nu}^{\top} = \mathbf{B}$, and $\mathbf{D} = \mathrm{diag}(\bm{\pi})$. Hence \begin{equation} \label{key:reduction} \zeta_{rs} = \nu_r^{\top} \Delta^{-1} \nu_{s} = \nu_r^{\top} \bm{\nu}^{-1} \mathbf{D}^{-1} (\bm{\nu}^{-1})^{\top} \nu_s = \frac{1}{\pi_r} \mathbbm{1}\{r = s\}. \end{equation} As an example, the expression for $\theta_{k \ell}$ in Eq.~\eqref{eq:mu_kl} reduces to \begin{equation*} \begin{split} \theta_{k\ell} &= \sum_{r=1}^{K} \pi_r \bigl(\mathbf{B}_{kr} (1 - \mathbf{B}_{kr}) + \mathbf{B}_{\ell r}(1 - \mathbf{B}_{\ell r})\bigr)\nu_k^{\top} \Delta^{-1} \mathbf{I}_{p,q} \Delta^{-1} \nu_{\ell} \\ &- \sum_{r=1}^{K} \sum_{s=1}^{K} \pi_r \pi_s \mathbf{B}_{sr} (1 - \mathbf{B}_{sr}) \nu_s^{\top} \Delta^{-1} \mathbf{I}_{p,q} \Delta^{-1} \Bigl(\tfrac{\mathbbm{1}\{s = k\}}{\pi_k} \nu_{\ell} + \tfrac{\mathbbm{1}\{s = \ell\}}{\pi_{\ell}} \nu_{k} \Bigr) \\ &= 0 \end{split} \end{equation*} for all $k \in [K], \ell \in [K]$. The expressions for $\sigma_{kk}^{2}$, $\tilde{\sigma}_{kk}^2$, $\sigma_{k \ell}^2$, and $\tilde{\sigma}_{k\ell}^2$ also follow directly from Eq.~\eqref{key:reduction}.
\end{proof} \begin{remark} As a special case of Theorem~\ref{THM:GEN_D}, we consider the two-block stochastic blockmodel with block probability matrix $\mathbf{B} = \Bigl[\begin{smallmatrix} p^2 & pq \\ pq & q^2 \end{smallmatrix} \Bigr]$ and block assignment probabilities $\bm{\pi} = (\pi_p, \pi_q), \pi_p + \pi_q = 1.$ Then $\Delta = \pi_p p^2 + \pi_q q^2$ and Eq.~\eqref{eq:mu_kl} reduces to \begin{gather*} \theta_{11} = \tfrac{2 \pi_q p^2 q^2}{\Delta^3} \bigl(\pi_p p^2 (1 - p^2) + (\pi_q - \pi_p) p q (1 - pq) - \pi_q q^2 (1 - q^2) \bigr), \\ \theta_{12} = \tfrac{pq}{\Delta^3} \bigl(\pi_p p^2 ( 1- p^2) (\pi_q q^2 - \pi_p p^2) + (\pi_p - \pi_q) pq (1 - pq) (\pi_p q^2 - \pi_q p^2) \\ + \pi_q q^2 ( 1- q^2) (\pi_p p^2 - \pi_q q^2) \bigr), \\ \theta_{22} = \tfrac{2 \pi_p p^2 q^2}{\Delta^3} \bigl(\pi_q q^2 (1 - q^2) + (\pi_p - \pi_q) p q (1 - pq) - \pi_p p^2 (1 - p^2) \bigr). \end{gather*} Meanwhile, we also have \begin{gather*} \sigma^{2}_{11} = \tfrac{8 p^6 (1 - p^2)}{\Delta^2} \bigl(1 - \tfrac{\pi_p p^2}{2 \Delta}\bigr)^2 + \tfrac{4 \pi_q p^3 q^3 (1 - pq)}{\pi_p \Delta^2} \bigl(1 - \tfrac{\pi_p p^2}{\Delta}\bigr)^2 + \tfrac{2 \pi_q^2 p^4 q^6 (1 - q^2)}{\Delta^4} \\ \sigma^{2}_{12} = \tfrac{2 \pi_q^2 p^4 q^6 (1 - p^2)}{\Delta^4} + \tfrac{\pi_p \pi_q pq (1 - pq)}{\Delta^4} \bigl(\tfrac{\pi_q q^4}{\pi_p} + \tfrac{\pi_p p^4}{\pi_q}\bigr)^2 + \tfrac{2 \pi_p^2 p^6 q^4 (1 - q^2)}{\Delta^4} \\ \sigma^2_{22} = \tfrac{8 q^6 (1 - q^2)}{\Delta^2} \bigl(1 - \tfrac{\pi_q q^2}{2 \Delta}\bigr)^2 + \tfrac{4 \pi_p p^3 q^3 (1 - pq)}{\pi_q \Delta^2} \bigl(1 - \tfrac{\pi_q q^2}{\Delta}\bigr)^2 + \tfrac{2 \pi_p^2 q^4 p^6 (1 - p^2)}{\Delta^4}. \end{gather*} The naive (MLE) estimator $\hat{\mathbf{B}}^{(N)}$ has asymptotic variances \begin{gather*} \mathrm{Var}[\hat{\mathbf{B}}^{(N)}_{11}] = \tfrac{2 p^2 (1 - p^2)}{\pi_p^2}; \quad \mathrm{Var}[\hat{\mathbf{B}}^{(N)}_{12}] = \tfrac{pq (1 - pq)}{\pi_p \pi_q}; \quad \mathrm{Var}[\hat{\mathbf{B}}^{(N)}_{22}] = \tfrac{2 q^2 (1 - q^2)}{\pi_q^2}. \end{gather*} We now evaluate the asymptotic variances for the true (rank-constrained) MLE. Suppose for simplicity that $n \pi_p$ vertices are assigned to block $1$ and $n \pi_q$ vertices are assigned to block $2$. Let $n_{11} = \tbinom{(n \pi_p + 1)}{2}$, $n_{12} = n^2 \pi_p \pi_q$ and $n_{22} = \tbinom{(n \pi_q + 1)}{2}$. Let $\mathbf{A}$ be given and assume for the moment that $\bm{\tau}$ is observed. Then the log-likelihood for $\mathbf{A}$ is equivalent to the log-likelihood for observing $m_{11} \sim \mathrm{Bin}(n_{11}, p^2)$, $m_{12} \sim \mathrm{Bin}(n_{12}, pq)$ and $m_{22} \sim \mathrm{Bin}(n_{22}, q^2)$ with $m_{11}$, $m_{12}$, and $m_{22}$ mutually independent. More specifically, ignoring terms of the form $\tbinom{n_{ij}}{m_{ij}}$, we have \begin{equation*} \begin{split} \ell(\mathbf{A} \mid p,q) &= m_{11} \log p^2 + (n_{11} - m_{11}) \log (1 - p^2) + m_{12} \log pq \\ &+ (n_{12} - m_{12}) \log (1 - pq) + m_{22} \log q^2 + (n_{22} - m_{22}) \log (1 - q^2). \end{split} \end{equation*} We therefore have \begin{equation*} \begin{split} \mathrm{Var}\Bigl(\tfrac{\partial \ell}{\partial p}\Bigr) &= \mathrm{Var}\Bigl(\tfrac{2m_{11}p}{p^2} - \tfrac{2(n_{11} - m_{11})p}{1 - p^2} + \tfrac{m_{12}q}{pq} - \tfrac{(n_{12} - m_{12})q}{1 - pq}\Bigr) = \tfrac{4n_{11}}{1 - p^2} + \tfrac{n_{12} pq}{p^2(1 - pq)}.
\end{split} \end{equation*} Similarly, \begin{gather*} \mathrm{Var}\Bigl(\tfrac{\partial \ell}{\partial q}\Bigr) = \tfrac{n_{12}pq}{q^2(1-pq)} + \tfrac{4n_{22}}{1 - q^2}; \quad \mathrm{Cov}\Bigl(\tfrac{\partial \ell}{\partial p}, \tfrac{\partial \ell}{\partial q} \Bigr) = \tfrac{n_{12}}{1 - pq}. \end{gather*} Next we note that $n_{11}/n^2 \rightarrow \pi_p^2/2$, $n_{12}/n^2 \rightarrow \pi_p \pi_q$ and $n_{22}/n^2 \rightarrow \pi_q^2/2$. We therefore have $$\frac{1}{n^2} \mathrm{Var}\Bigl[\Bigl(\tfrac{\partial \ell}{\partial p}, \tfrac{\partial \ell}{\partial q}\Bigr) \Bigr] \overset{\mathrm{a.s.}}{\longrightarrow} \mathcal{I} := \begin{bmatrix} \frac{2 \pi_p^2}{1 - p^2} + \frac{\pi_p \pi_q q}{p(1 - pq)} & \frac{\pi_p \pi_q}{1 - pq} \\ \frac{\pi_p \pi_q}{1 - pq} & \frac{2 \pi_q^2}{1 - q^2} + \frac{\pi_p \pi_q p}{q(1 - pq)} \end{bmatrix} $$ as $n \rightarrow \infty$. Let $(\hat{p}, \hat{q})$ be the MLE of $(p,q)$ (we emphasize that there is no closed-form formula for the MLE $\hat{p}$ and $\hat{q}$ in this setting) and let $\mathcal{J}$ be the Jacobian of $\mathbf{B}$ with respect to $p$ and $q$, i.e., $ \mathcal{J}^{\top} = \Bigl[\begin{smallmatrix} 2p & q & 0 \\ 0 & p & 2q \end{smallmatrix}\Bigr].$ The classical theory of maximum likelihood estimation (see e.g., Theorem~5.1 of \cite{point_estimation}) implies that $$ n\bigl( \mathrm{vech}(\hat{\mathbf{B}}^{(M)} - \mathbf{B}) \bigr) \overset{\mathrm{d}}{\longrightarrow} \mathrm{MVN}\bigl(\bm{0}, \mathcal{J} \mathcal{I}^{-1} \mathcal{J}^{\top} \bigr)$$ as $n \rightarrow \infty$, i.e., $\hat{\mathbf{B}}^{(M)}$ is asymptotically (first-order) efficient. \begin{figure}[tp!] \center \subfloat[$\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(N)})$]{ \includegraphics[width=0.48\textwidth]{ratio_mse_rank1.pdf}} \subfloat[$\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(M)})$]{ \includegraphics[width=0.48\textwidth]{ratio_mle.pdf}} \caption{The ratios $\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(N)})$ and $\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(M)})$ for values of $p \in [0.1, 0.9]$ and $q \in [0.1, 0.9]$ in a $2$-block rank-$1$ SBM. The labeled lines in each plot are the contour lines for the $\mathrm{MSE}$ ratios. The $\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})$ is computed using the bias-adjusted estimates $\{\hat{\mathbf{B}}^{(S)}_{ij} - \hat{\theta}_{ij}\}_{i \leq j}$ (see Corollary~\ref{cor:in_practice}). } \label{fig:ratio-plot} \end{figure} We now compare the naive estimator $\hat{\mathbf{B}}^{(N)}$, the ASE estimator $\hat{\mathbf{B}}^{(S)}$ and the true MLE $\hat{\mathbf{B}}^{(M)}$ for the case when $\pi_p = 0.5$ and $\rho_n \equiv 1$. Plots of the ratio of the MSE (equivalently the sum of variances $\sigma_{11}^2 + \sigma_{12}^2 + \sigma_{22}^2$) for $\hat{\mathbf{B}}^{(S)}$ (adjusted for the bias terms $\theta_{k \ell}$; see Corollary~\ref{cor:in_practice}) against the MSE (equivalently the sum of variances) of $\hat{\mathbf{B}}^{(N)}$ and the MSE of $\hat{\mathbf{B}}^{(M)}$ for $p \in [0.1,0.9]$ and $q \in [0.1,0.9]$ are given in Figure~\ref{fig:ratio-plot}. For this example, $\hat{\mathbf{B}}^{(S)}$ has smaller mean squared error than $\hat{\mathbf{B}}^{(N)}$ over the whole range of $p$ and $q$. In addition, $\hat{\mathbf{B}}^{(S)}$ has mean squared error almost as small as that of $\hat{\mathbf{B}}^{(M)}$ for a large range of $p$ and $q$.
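The asymptotic quantities entering this comparison can be evaluated pointwise in a few lines of code. The following sketch (the values of $p$, $q$ and $\pi_p$ are illustrative) assembles $\mathcal{I}$ and $\mathcal{J}$ as derived above and computes the asymptotic covariance $\mathcal{J} \mathcal{I}^{-1} \mathcal{J}^{\top}$ of the rank-constrained MLE alongside the naive variances:
\begin{verbatim}
import numpy as np

p, q, pi_p = 0.6, 0.3, 0.5   # illustrative parameter values
pi_q = 1.0 - pi_p

# Limiting Fisher information for (p, q), as derived above.
I = np.array([
    [2*pi_p**2/(1-p**2) + pi_p*pi_q*q/(p*(1-p*q)),
     pi_p*pi_q/(1-p*q)],
    [pi_p*pi_q/(1-p*q),
     2*pi_q**2/(1-q**2) + pi_p*pi_q*p/(q*(1-p*q))],
])

# Jacobian of vech(B) = (p^2, pq, q^2) with respect to (p, q).
J = np.array([[2*p, 0.0],
              [q,   p],
              [0.0, 2*q]])

mle_cov = J @ np.linalg.inv(I) @ J.T  # asymptotic covariance of the MLE

# Asymptotic variances of the naive estimator, for comparison.
naive = np.array([2*p**2*(1-p**2)/pi_p**2,
                  p*q*(1-p*q)/(pi_p*pi_q),
                  2*q**2*(1-q**2)/pi_q**2])

print(np.diag(mle_cov))
print(naive)
\end{verbatim}
By the information inequality, the diagonal of $\mathcal{J} \mathcal{I}^{-1} \mathcal{J}^{\top}$ is elementwise no larger than the naive variances, consistent with Figure~\ref{fig:ratio-plot}.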
\end{remark} \begin{remark} We next compare the three estimators $\hat{\mathbf{B}}^{(S)}$, $\hat{\mathbf{B}}^{(M)}$ and $\hat{\mathbf{B}}^{(N)}$ in the setting of stochastic blockmodels with $3 \times 3$ block probability matrix $\mathbf{B}$ where $\mathbf{B}$ is positive semidefinite and $\mathrm{rk}(\mathbf{B}) = 2$. The minimal parametrization of $\mathbf{B}$ requires $5$ parameters $(r_1,r_2,r_3,\theta,\gamma)$, namely \begin{gather*} \mathbf{B}_{11} = r_1^2; \quad \mathbf{B}_{22} = r_2^2; \quad \mathbf{B}_{33} = r_3^2; \\ \mathbf{B}_{12} = r_1 r_2 \cos \theta; \quad \mathbf{B}_{13} = r_1 r_3 \cos \gamma ; \quad \mathbf{B}_{23} = r_2 r_3 \cos(\theta - \gamma). \end{gather*} Now let $\bm{\pi} = (\pi_1, \pi_2, \pi_3)$ be the block assignment probability vector. Let $\mathbf{A} \sim \mathrm{SBM}(\bm{\pi}, \mathbf{B})$ be a graph on $n$ vertices and suppose for simplicity that the number of vertices in block $i$ is $n_i = n \pi_i$. Let $n_{ii} = n_i^{2}/2$ for $i=1,2,3$ and $n_{ij} = n_i n_j$ if $i \not = j$. Let $m_{ij}$ for $i \leq j$ be independent random variables with $m_{ij} \sim \mathrm{Bin}(n_{ij}, \mathbf{B}_{ij})$. Then, assuming $\bm{\tau}$ is known, the log-likelihood for $\mathbf{A}$ is \begin{equation*} \begin{split} \ell(\mathbf{A}) &= m_{11} \log(r_1^2) + (n_{11} - m_{11}) \log(1 - r_1^2) + m_{22} \log (r_2^2) + (n_{22} - m_{22}) \log (1 - r_2^2) \\ &+ m_{33} \log (r_3^2) + (n_{33} - m_{33}) \log (1 - r_3^2) + m_{12} \log (r_1 r_2 \cos \theta) \\ &+ (n_{12} - m_{12}) \log (1 - r_1 r_2 \cos \theta) + m_{13} \log (r_1 r_3 \cos \gamma) \\ & + (n_{13} - m_{13}) \log (1 - r_1 r_3 \cos \gamma) + m_{23} \log (r_2 r_3 \cos (\theta - \gamma)) \\ &+ (n_{23} - m_{23}) \log(1 - r_2 r_3 \cos(\theta - \gamma)). \end{split} \end{equation*} The Fisher information matrix $\mathcal{I}$ for $(r_1, r_2, r_3, \theta, \gamma)$ in this setting is straightforward, albeit tedious, to derive. \begin{figure}[tp!] \center \subfloat[$\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(N)})$]{ \includegraphics[width=0.48\textwidth]{ratio_3blocks_naive.pdf}} \subfloat[$\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(M)})$]{ \includegraphics[width=0.48\textwidth]{ratio_3blocks_mle.pdf}} \caption{The ratios $\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(N)})$ and $\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})/\mathrm{MSE}(\hat{\mathbf{B}}^{(M)})$ for values of $r_2 \in [0.1, 0.9]$ and $\theta \in [0.1, \tfrac{\pi}{2} - 0.1]$ in a $3$-block rank-$2$ SBM. The labeled lines in each plot are the contour lines for the $\mathrm{MSE}$ ratios. The $\mathrm{MSE}(\hat{\mathbf{B}}^{(S)})$ is computed using the bias-adjusted estimates $\{\hat{\mathbf{B}}^{(S)}_{ij} - \hat{\theta}_{ij}\}_{i \leq j}$ (see Corollary~\ref{cor:in_practice}). } \label{fig:ratio-plot2} \end{figure} Plots of the MSE (equivalently the sum of variances) for the bias-adjusted estimates $\hat{\mathbf{B}}^{(S)}$ against the MSE (equivalently the sum of variances) of the naive estimates $\hat{\mathbf{B}}^{(N)}$ and the true MLE estimates $\hat{\mathbf{B}}^{(M)}$ for a subset of the parameters of a $3$-block SBM with $\rho_n \equiv 1$ are given in Figure~\ref{fig:ratio-plot2}. In Figure~\ref{fig:ratio-plot2}, we fix $\bm{\pi} = (1/3,1/3,1/3)$, $r_3 = 0.7$, $\gamma = 0.5$, $r_1 = 1- r_2$, letting $r_2$ and $\theta$ vary in the intervals $[0.1,0.9]$ and $[0.1, \tfrac{\pi}{2} - 0.1]$, respectively.
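As an aside, the parametrization above can be realized concretely by viewing the blocks as latent positions $r_i(\cos a_i, \sin a_i)$ in the plane with angles $a = (0, \theta, \gamma)$. A minimal sketch (illustrative values only) constructs $\mathbf{B}$ this way and confirms that it is positive semidefinite with rank $2$:
\begin{verbatim}
import numpy as np

r = np.array([0.3, 0.5, 0.7])   # illustrative (r1, r2, r3)
theta, gamma = 0.8, 0.5         # illustrative angles

a = np.array([0.0, theta, gamma])
nu = r[:, None] * np.c_[np.cos(a), np.sin(a)]  # latent positions

B = nu @ nu.T                    # B_ij = r_i r_j cos(a_i - a_j)
print(B)
print(np.linalg.matrix_rank(B))  # 2
\end{verbatim}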
Once again, $\hat{\mathbf{B}}^{(S)}$ has smaller mean squared error than $\hat{\mathbf{B}}^{(N)}$ over the whole range of $r_2$ and $\theta$, and has mean squared error almost as small as that of $\hat{\mathbf{B}}^{(M)}$ for a large range of $r_2$ and $\theta$. \end{remark} We emphasize that the rank assumptions placed on $\mathbf{B}$ in the previous two examples are natural assumptions, i.e., there is no prima facie reason why $\mathbf{B}$ needs to be invertible, and hence procedures that can both estimate $\mathrm{rk}(\mathbf{B})$ and incorporate it in the subsequent estimation of $\mathbf{B}$ are equally flexible and generally more efficient. This is in contrast to other potentially more restrictive assumptions such as assuming that $\mathbf{B}$ is of the form $q \bm{1} \bm{1}^{\top} + (p - q) \mathbf{I}$ for $p > q$ (i.e., the planted-partitions model). Indeed, a $K$-block SBM from the planted-partitions model is parametrized by two parameters, irrespective of $K$, and as such the three estimators considered in this paper are provably sub-optimal for estimating the parameters of the planted-partitions model. \section{Discussion} Theorem~\ref{THM:GEN_D} and Theorem~\ref{THM:GEN_D_SPARSE} were presented in the context wherein the vertex-to-block assignments $\bm{\tau}$ are assumed known. For unknown $\bm{\tau}$, Lemma~\ref{LEM:PERFECT} (presented below) implies that $\hat{\bm{\tau}}$ obtained using $K$-means (or Gaussian mixture modeling) on the rows of $\hat{\mathbf{U}}$ is an exact recovery of $\bm{\tau}$, provided that $n \rho_n = \omega(\log^{2c} n)$ for the constant $c$ appearing in the lemma. The lemma implies Corollary~\ref{cor:in_practice}, which shows that we can replace the quantities $\theta_{k \ell}$ and $\tilde{\theta}_{k \ell}$ in Eq.~\eqref{eq:sbm_normal2} and Eq.~\eqref{eq:sbm_normal3} of Theorem~\ref{THM:GEN_D} and Theorem~\ref{THM:GEN_D_SPARSE} by consistent estimates $\hat{\theta}_{k \ell}$ without changing the resulting limiting distribution. We emphasize that it is essential for Corollary~\ref{cor:in_practice} that $\hat{\bm{\tau}}$ is an exact recovery of $\bm{\tau}$ in order for the limiting distributions in Eq.~\eqref{eq:sbm_normal2} and Eq.~\eqref{eq:sbm_normal3} to remain valid when $\hat{\theta}_{k \ell}$ is substituted for $\theta_{k \ell}$ and $\tilde{\theta}_{k \ell}$. Indeed, if there is even a single vertex that is mis-clustered by $\hat{\bm{\tau}}$, then $\hat{\theta}_{k \ell}$ as defined will introduce an additional (random) bias term in the limiting distribution of Eq.~\eqref{eq:sbm2_inpractice}. \begin{remark} For ease of exposition, bounds in this paper are often written as holding ``with high probability''. A random variable $\xi \in \mathbb{R}$ is $O_{\mathbb{P}}(f(n))$ if, for any positive constant $c > 0$ there exists a $n_0 \in \mathbb{N}$ and a constant $C > 0$ (both of which possibly depend on $c$) such that for all $n \geq n_0$, $|\xi| \leq C f(n)$ with probability at least $1 - n^{-c}$; moreover, a random variable $\xi \in \mathbb{R}$ is $o_{\mathbb{P}}(f(n))$ if for any positive constant $c > 0$ and any $\epsilon > 0$ there exists a $n_0 \in \mathbb{N}$ such that for all $n \geq n_0$, $|\xi| \leq \epsilon f(n)$ with probability at least $1 - n^{-c}$. Similarly, when $\xi$ is a random vector in $\mathbb{R}^{d}$ or a random matrix in $\mathbb{R}^{d_1 \times d_2}$, $\xi = O_{\mathbb{P}}(f(n))$ or $\xi = o_{\mathbb{P}}(f(n))$ if $\|\xi\| = O_{\mathbb{P}}(f(n))$ or $\|\xi\| = o_{\mathbb{P}}(f(n))$, respectively.
Here $\|x\|$ denotes the Euclidean norm of $x$ when $x$ is a vector and the spectral norm of $x$ when $x$ is a matrix. We write $\xi = \zeta + O_{\mathbb{P}}(f(n))$ or $\xi = \zeta + o_{\mathbb{P}}(f(n))$ if $\xi - \zeta = O_{\mathbb{P}}(f(n))$ or $\xi - \zeta = o_{\mathbb{P}}(f(n))$, respectively. \end{remark} \begin{lemma} \label{LEM:PERFECT} Let $(\mathbf{A}_n, \mathbf{X}_n) \sim \mathrm{GRDPG}_{p,q}(F)$ be a generalized random dot product graph on $n$ vertices with sparsity factor $\rho_n$. Let $\hat{\mathbf{U}}_n(i)$ and $\mathbf{U}_n(i)$ be the $i$-th row of $\hat{\mathbf{U}}_n$ and $\mathbf{U}_n$, respectively. Here $\hat{\mathbf{U}}_n$ and $\mathbf{U}_n$ are the matrices of eigenvectors of $\mathbf{A}_n$ and $\mathbf{X}_n \mathbf{X}_n^{\top}$ corresponding to the $p + q$ largest eigenvalues (in modulus) of $\mathbf{A}_n$ and $\mathbf{X}_n \mathbf{X}_n^{\top}$, respectively. Then there exist a (universal) constant $c > 0$ and a $d \times d$ orthogonal matrix $\mathbf{W}_n$ (here $d = p + q$) such that, for $n \rho_n = \omega(\log^{2c}(n))$, \begin{equation} \label{eq:perfect} \max_{i \in [n]} \|\mathbf{W}_n \hat{\mathbf{U}}_n(i) - \mathbf{U}_n(i)\| = O_{\mathbb{P}}\Bigl(\frac{\log^{c}{n}}{n \sqrt{\rho_n}}\Bigr). \end{equation} \end{lemma} The proof of Lemma~\ref{LEM:PERFECT} is given in the appendix. If $\mathbf{A}_n \sim \mathrm{SBM}(\mathbf{B}, \bm{\pi})$ with sparsity factor $\rho_n$ and $\mathbf{B}$ is $K \times K$, then the rows of $\mathbf{U}_n$ take on at most $K$ possible distinct values. Moreover, for any vertices $i$ and $j$ with $\tau_i \not = \tau_j$, $\|\mathbf{U}_n(i) - \mathbf{U}_n(j)\| \geq Cn^{-1/2}$ for some constant $C$ depending only on $\mathbf{B}$. Now if $n \rho_n = \omega(\log^{2c}(n))$, then Lemma~\ref{LEM:PERFECT} implies, for sufficiently large $n$, $$\|\mathbf{W}_n \hat{\mathbf{U}}_n(i) - \mathbf{U}_n(i)\| < \min_{j \colon \tau_j \not = \tau_i}\|\mathbf{W}_n \hat{\mathbf{U}}_n(i) - \mathbf{U}_n(j)\|; \quad \text{for all $i \in [n]$}.$$ Hence, since $\mathbf{W}_n$ is an orthogonal matrix, $K$-means clustering of the rows of $\hat{\mathbf{U}}_n$ yields an assignment $\hat{\bm{\tau}}$ that is indeed, up to a permutation of the block labels, an exact recovery of $\bm{\tau}$ as $n \rightarrow \infty$. We note that Lemma~\ref{LEM:PERFECT} is an extension of our earlier results on bounding the perturbation $\hat{\mathbf{U}}_n - \mathbf{U}_n \mathbf{W}$ using the $2 \to \infty$ matrix norm \cite{perfect,lyzinski15_HSBM,ctp_2_to_infty}. Lemma~\ref{LEM:PERFECT} is very similar in flavor to a few recent results by other researchers \cite{abbe2,mao_sarkar,belkin_2_infty} where eigenvector perturbations of $\mathbf{A}_n$ (compared to the eigenvectors of $\mathbf{X}_n \mathbf{X}_n^{\top}$) in the $\ell_{\infty}$ norm are established in the regime where $n \rho_n = \omega(\log^{c}(n))$ for some constant $c > 0$. \begin{corollary} \label{cor:in_practice} Assume the setting and notations of Theorem~\ref{THM:GEN_D}. Assume $K$ is known, and let $\hat{\bm{\tau}} \colon [n] \mapsto [K]$ be the vertex-to-cluster assignments obtained when the rows of $\hat{\mathbf{U}}$ are clustered into $K$ clusters. For $k \in [K]$, let $\hat{\bm{s}}_k \in \{0,1\}^{n}$ where the $i$-th entry of $\hat{\bm{s}}_k$ is $1$ if $\hat{\tau}_i = k$ and $0$ otherwise. Let $\hat{n}_k = |\{i \colon \hat{\tau}_i = k \}|$ and let $\hat{\pi}_k = \tfrac{\hat{n}_k}{n}$.
For $k \in [K]$, let $\hat{\nu}_k = \tfrac{1}{\hat{n}_k} \hat{\bm{s}}_k^{\top} \hat{\mathbf{U}} \hat{\bm{\Lambda}}^{1/2}$, let $\hat{\mathbf{B}}_{k \ell} = \hat{\mathbf{B}}_{k \ell}^{(S)} = \hat{\nu}_k^{\top} \mathbf{I}_{p,q} \hat{\nu}_{\ell}$, and let $\hat{\Delta} = \sum_{k} \hat{\pi}_k \hat{\nu}_k \hat{\nu}_k^{\top}$. For $k \in [K]$ and $\ell \in [K]$, let $\hat{\theta}_{k \ell}$ be given by \begin{equation} \label{eq:hat.mu_kl} \begin{split} \hat{\theta}_{k\ell} &= \sum_{r=1}^{K} \hat{\pi}_r \bigl(\hat{\mathbf{B}}_{kr} (1 - \hat{\mathbf{B}}_{kr}) + \hat{\mathbf{B}}_{\ell r}(1 - \hat{\mathbf{B}}_{\ell r})\bigr)\hat{\nu}_k^{\top} \hat{\Delta}^{-1} \mathbf{I}_{p,q} \hat{\Delta}^{-1} \hat{\nu}_{\ell} \\ & - \sum_{r=1}^{K} \sum_{s=1}^{K} \hat{\pi}_r \hat{\pi}_s \hat{\mathbf{B}}_{sr} (1 - \hat{\mathbf{B}}_{sr}) \hat{\nu}_s^{\top} \hat{\Delta}^{-1} \mathbf{I}_{p,q} \hat{\Delta}^{-1} (\hat{\nu}_{\ell} \hat{\nu}_k^{\top} + \hat{\nu}_k \hat{\nu}_{\ell}^{\top}) \hat{\Delta}^{-1} \hat{\nu}_s. \end{split} \end{equation} Then there exists a (sequence of) permutation(s) $\psi \equiv \psi_n$ on $[K]$ such that for any $k \in [K]$ and $\ell \in [K]$, \begin{equation} \label{eq:sbm2_inpractice} n(\hat{\mathbf{B}}^{(S)}_{\psi(k), \psi(\ell)} - \mathbf{B}_{k \ell} - \tfrac{\hat{\theta}_{k \ell}}{n}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}(0, \sigma_{k \ell}^2) \end{equation} as $n \rightarrow \infty$. \end{corollary} An almost identical result holds in the setting when $\rho_n \rightarrow 0$. More specifically, assume the setting and notations of Theorem~\ref{THM:GEN_D_SPARSE} and let $\hat{\nu}_k$, $\hat{\Delta}$ and $\hat{\mathbf{B}} = \hat{\mathbf{B}}^{(S)}$ be as defined in Corollary~\ref{cor:in_practice}. Now let $\hat{\theta}_{k \ell}$ be given by \begin{equation} \label{eq:hat.mu_kl2} \begin{split} \hat{\theta}_{k\ell} &= \sum_{r=1}^{K} \hat{\pi}_r \bigl(\hat{\mathbf{B}}_{kr} + \hat{\mathbf{B}}_{\ell r} \bigr)\hat{\nu}_k^{\top} \hat{\Delta}^{-1} \mathbf{I}_{p,q} \hat{\Delta}^{-1} \hat{\nu}_{\ell} \\ & - \sum_{r=1}^{K} \sum_{s=1}^{K} \hat{\pi}_r \hat{\pi}_s \hat{\mathbf{B}}_{rs} \hat{\nu}_s^{\top} \hat{\Delta}^{-1} \mathbf{I}_{p,q} \hat{\Delta}^{-1} (\hat{\nu}_{\ell} \hat{\nu}_k^{\top} + \hat{\nu}_k \hat{\nu}_{\ell}^{\top}) \hat{\Delta}^{-1} \hat{\nu}_s. \end{split} \end{equation} Then there exists a (sequence of) permutation(s) $\psi \equiv \psi_n$ on $[K]$ such that for any $k \in [K]$ and $\ell \in [K]$, \begin{equation} n \rho_n^{1/2} (\hat{\mathbf{B}}_{\psi(k), \psi(\ell)} - \mathbf{B}_{k \ell} - \tfrac{\hat{\theta}_{k \ell}}{n \rho_n}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}(0, \tilde{\sigma}_{k \ell}^2) \end{equation} as $n \rightarrow \infty$, $\rho_n \rightarrow 0$ and $n \rho_n = \omega(\sqrt{n}).$ Finally, we provide some justification for the necessity of the assumption $n \rho_n = \omega(\sqrt{n})$ in the statement of Theorem~\ref{THM:GEN_D_SPARSE}, even though Lemma~\ref{LEM:PERFECT} implies that $\hat{\bm{\tau}}$ is an exact recovery of $\bm{\tau}$ for $n \rho_n = \omega(\log^{2c}(n))$. Consider the case of $\mathbf{A}$ being an Erd\H{o}s-R\'{e}nyi graph on $n$ vertices with edge probability $p$. The estimate $\hat{p}$ obtained from the spectral embedding in this setting is $\tfrac{1}{n^2} \hat{\lambda} (\bm{1}^{\top} \hat{\bm{u}})^2$ where $\hat{\lambda}$ is the largest eigenvalue of $\mathbf{A}$, $\bm{1}$ is the all-ones vector, and $\hat{\bm{u}}$ is the associated (unit-norm) eigenvector. Let $\bm{e} = n^{-1/2} \bm{1}$.
We then have \begin{equation*} \begin{split} n (\hat{p} - p) &= \tfrac{1}{n} \hat{\lambda} (\bm{1}^{\top} \hat{\bm{u}})^2 - np = \hat{\lambda} \bigl((\bm{e}^{\top} \hat{\bm{u}})^2 - 1\bigr) + \hat{\lambda} - np. \end{split} \end{equation*} When $p$ remains constant as $n$ changes, the results of \cite{furedi1981eigenvalues} imply $\bm{e}^{\top} \hat{\bm{u}} = 1 - \tfrac{1-p}{2np} + O_{\mathbb{P}}(n^{-3/2})$ and $\hat{\lambda} - np = \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + (1-p) + O_{\mathbb{P}}(n^{-1/2})$, from which we infer \begin{equation*} \begin{split} n (\hat{p} - p) &= - (1-p) \tfrac{\hat{\lambda}}{np} + \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + (1-p) + O_{\mathbb{P}}(n^{-1/2}) \\ &= \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + O_{\mathbb{P}}(n^{-1/2}) \overset{\mathrm{d}}{\longrightarrow} \mathcal{N}(0, 2p(1-p)) \end{split} \end{equation*} since $\bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e}$ is a sum of $n(n+1)/2$ independent mean $0$ random variables with variance $p(1-p)$. On the other hand, if $p \rightarrow 0$ as $n$ increases, then Theorem~6.2 of \cite{erdos} (more specifically Eq.~(6.9) and Eq.~(6.26) of \cite{erdos}) implies \begin{gather} \label{eq:et_hatu} \bm{e}^{\top} \hat{\bm{u}} = 1 - \tfrac{1-p}{2np} + O_{\mathbb{P}}\bigl((np)^{-3/2} + \tfrac{\log^{c}n}{n \sqrt{p}}\bigr) \end{gather} and \begin{equation} \label{eq:lambda_ER} \begin{split} \hat{\lambda} - np &= \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + \tfrac{\bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}])^2 \bm{e}}{np} + O_{\mathbb{P}}\bigl((np)^{-1} + \tfrac{\log^{c}{n}}{n \sqrt{p}}\bigr) \\ &= \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + (1 - p) + O_{\mathbb{P}}\bigl((np)^{-1} + \tfrac{\log^{c}{n}}{n \sqrt{p}}\bigr). \end{split} \end{equation} The second equality in Eq.~\eqref{eq:lambda_ER} follows from Lemma~6.5 of \cite{erdos}, which states that $\bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}])^{k} \bm{e} = \bm{e}^{\top} \mathbb{E}[(\mathbf{A} - \mathbb{E}[\mathbf{A}])^{k}] \bm{e} + O_{\mathbb{P}}(\tfrac{(np)^{k/2} \log^{kc}(n)}{\sqrt{n}})$ for some universal constant $c > 0$ provided that $n p = \omega(\log{n})$. Hence \begin{equation} n(\hat{p} - p) = - (1-p) \tfrac{\hat{\lambda}}{np} + \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + (1-p) + O_{\mathbb{P}}((np)^{-1/2}). \end{equation} Once again, $\bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e}$ is a sum of $n(n+1)/2$ independent mean $0$ random variables with variance $p(1-p)$, but since $p \rightarrow 0$, the variance of each summand also vanishes as $n \rightarrow \infty$. In order to obtain a non-degenerate limiting distribution for $\bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e}$, it is necessary that we consider $p^{-1/2} \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e}$. This, however, leads to non-trivial technical difficulties. In particular, \begin{equation*} \begin{split} n p^{-1/2}(\hat{p} - p) &= p^{-1/2} \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} + (1-p)(\tfrac{\hat{\lambda} - np}{np^{3/2}}) + O_{\mathbb{P}}(n^{-1/2} p^{-1}) \\ &= p^{-1/2} \bm{e}^{\top} (\mathbf{A} - \mathbb{E}[\mathbf{A}]) \bm{e} \bigl(1 + \tfrac{1-p}{np}\bigr) + O_{\mathbb{P}}(n^{-1/2} p^{-1}) \end{split} \end{equation*} upon substituting the expansion for $(\hat{\lambda} - np)$ from Eq.~\eqref{eq:lambda_ER}. To guarantee that the $O_{\mathbb{P}}(n^{-1/2} p^{-1})$ term vanishes in the above expression, it might be necessary to require $n p = \omega(\sqrt{n})$.
That is to say, the expansions for $\bm{e}^{\top} \hat{\bm{u}}$ and $\hat{\lambda} - np$ in Eq.~\eqref{eq:et_hatu} and Eq.~\eqref{eq:lambda_ER} are not sufficiently refined. We surmise that to extend Theorem~\ref{THM:GEN_D_SPARSE}, even in the context of Erd\H{o}s-R\'{e}nyi graphs, to the setting wherein $n p = o(\sqrt{n})$, it is necessary to consider higher-order expansions for $\bm{e}^{\top} \hat{\bm{u}}$ and $\hat{\lambda} - np$. But this necessitates evaluating $\bm{e}^{\top} \mathbb{E}[(\mathbf{A} - \mathbb{E}[\mathbf{A}])^{k}] \bm{e}$ for $k \geq 3$, a highly non-trivial task; in particular, the regime $n p = \omega(\log{n})$ potentially requires evaluating $\mathbb{E}[\bm{e}^{\top}(\mathbf{A} - \mathbb{E}[\mathbf{A}])^{k} \bm{e}]$ for $k = O(\log{n})$. In a slightly related vein, \cite{zongmingma} evaluates $\mathrm{tr}\,\mathbb{E}[(\mathbf{A} - \mathbb{E}[\mathbf{A}])^{k}]$ in the case of Erd\H{o}s-R\'{e}nyi graphs and two-block planted partition SBM graphs. Exact recovery of $\bm{\tau}$ via $\hat{\bm{\tau}}$ is therefore not sufficient to guarantee control of $\hat{\mathbf{B}}^{(S)}_{k \ell} - \mathbf{B}_{k \ell} = \bm{s}_k^{\top} (\hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top} - \mathbb{E}[\mathbf{A}]) \bm{s}_{\ell}$. In essence, as $\rho_n \rightarrow 0$, the bias incurred by the low-rank approximation $ \hat{\mathbf{U}} \hat{\bm{\Lambda}} \hat{\mathbf{U}}^{\top}$ of $\mathbf{A}$ overwhelms the reduction in variance resulting from the low-rank approximation.
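The dense-regime limit above is also straightforward to probe by simulation. The following sketch (a toy check with illustrative values, not part of the formal argument) draws Erd\H{o}s-R\'{e}nyi graphs with constant $p$, computes the spectral estimate $\hat{p} = \hat{\lambda}(\bm{1}^{\top}\hat{\bm{u}})^2/n^2$, and compares the empirical variance of $n(\hat{p} - p)$ with the limiting variance $2p(1-p)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 400, 0.3, 200
stats = []
for _ in range(reps):
    # Symmetric adjacency matrix with loops, entries Bernoulli(p).
    M = (rng.random((n, n)) < p).astype(float)
    A = np.triu(M) + np.triu(M, 1).T
    # Spectral estimate hat_p = lambda_1 * (1^T u_1)^2 / n^2.
    w, V = np.linalg.eigh(A)
    lam, u = w[-1], V[:, -1]
    stats.append(n * (lam * u.sum()**2 / n**2 - p))

print(np.var(stats), 2 * p * (1 - p))  # both close to 0.42
\end{verbatim}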
\section{Introduction} \label{sec:intro} Diachronic \textit{Lexical Semantic Change (LSC)} detection, i.e., the automatic detection of word sense changes over time, is a flourishing new field within NLP \citep[i.a.]{Frermann:2016,Hamilton:2016,schlechtweg-EtAl:2017:CoNLL}.\footnote{An example for diachronic LSC is the German noun \textit{Vorwort} \citep{Paul02XXI}, which was mainly used in the meaning of `preposition' before $\approx$1800. Then \textit{Vorwort} rapidly acquired a new meaning `preface', which has been used nearly exclusively after 1850.} Yet, it is hard to compare the performances of the various models, and optimal parameter choices remain unclear, because up to now most models have been compared on different evaluation tasks and data. Presently, we do not know which model performs best under which conditions, and whether more complex model architectures gain performance benefits over simpler models. This situation hinders advances in the field and favors the unfelicitous drawing of statistical laws of diachronic LSC \citep{dubossarsky2017}. In this study, we provide the first large-scale evaluation of an extensive number of approaches. Relying on an existing German LSC dataset, we compare models regarding different combinations of semantic representations, alignment techniques and detection measures, while exploring various pre-processing and parameter settings. Furthermore, we introduce \textit{Word Injection} to LSC, a modeling idea drawn from term extraction, that overcomes the problem of vector space alignment. Our comparison of state-of-the-art approaches identifies best models and optimal parameter settings, and it suggests modifications to existing models which consistently show superior performance. Meanwhile, the detection of lexical sense divergences across time-specific corpora is not the only possible application of LSC detection models. In more general terms, they have the potential to detect sense divergences between corpora of any type, not necessarily time-specific ones. We acknowledge this observation and further explore a \textit{synchronic LSC detection} task: identifying domain-specific changes of word senses in comparison to general-language usage, which is addressed, e.g., in term identification and automatic term extraction \cite{drouin2004,perez2016measuring,Haetty18LaymenStudy}, and in determining social and dialectal language variations \cite{del2017semantic,hovy/purschke:emnlp18}.\footnote{An example for domain-specific synchronic LSC is the German noun \textit{Form}. In general-language use, \textit{Form} means `shape'/`form', while in the cooking domain the predominant meaning is the domain-specific `baking tin'.} For addressing the synchronic LSC task, we present a recent sense-specific term dataset \cite{haettySurel:2019} that we created analogously to the existing diachronic dataset, and we show that the diachronic models can be successfully applied to the synchronic task as well. This two-fold evaluation assures robustness and reproducibility of our model comparisons under various conditions. \section{Related Work} \label{sec:previous} \paragraph{Diachronic LSC Detection.} Existing approaches for diachronic LSC detection are mainly based on three types of meaning representations: (i) semantic vector spaces, (ii) topic distributions, and (iii) sense clusters.
In (i), semantic vector spaces, each word is represented as two vectors reflecting its co-occurrence statistics at different periods of time \citep[][]{Gulordava11,Kim14,Xu15,Eger:2016,HamiltonShiftDrift,Hamilton:2016,Hellrich16p2785,RosenfeldE18}. LSC is typically measured by the cosine distance (or some alternative metric) between the two vectors, or by differences in contextual dispersion between the two vectors \citep{kisselewetal:2016,schlechtweg-EtAl:2017:CoNLL}. (ii) Diachronic topic models infer a probability distribution for each word over different word senses (or topics), which are in turn modeled as distributions over words \citep[][]{Wang06,Bamman11p1,Wijaya11p35,Lau12p591,Mihalcea12,Cook14p1624,Frermann:2016}. LSC of a word is measured by calculating a novelty score for its senses based on their frequency of use. (iii) Clustering models assign all uses of a word into sense clusters based on some contextual property \citep{Mitra15p773}. Word sense clustering models are similar to topic models in that they map uses to senses. Accordingly, LSC of a word is measured similarly as in (ii). For an overview of diachronic LSC detection, see \citet{2018arXiv181106278T}. \paragraph{Synchronic LSC Detection.} We use the term synchronic LSC to refer to NLP research areas with a focus on how the meanings of words vary across domains or communities of speakers. Synchronic LSC per se is not widely researched; for meaning shifts across domains, there is strongly related research which is concerned with domain-specific word sense disambiguation \cite{maynard1998term, chen2006context, Taghipour15, daille2016ambiguity} or term ambiguity detection \cite{Baldwin13TermAmbiguity,wang2013automatic}. The only notable work explicitly measuring across-domain meaning shifts is \citet{ferrari2017detecting}, which is based on semantic vector spaces and cosine distance. Synchronic LSC across communities has been investigated as meaning variation in online communities, leveraging the large-scale data which has become available thanks to online social platforms \cite{del2017semantic,rotabi2017competition}. \paragraph{Evaluation.} Existing evaluation procedures for LSC detection can be distinguished into evaluation on (i) empirically observed data, and (ii) synthetic data or related tasks. (i) includes case studies of individual words \citep[][]{Sagi09p104,Jatowt:2014,HamiltonShiftDrift}, stand-alone comparison of a few hand-selected words \citep[][]{Wijaya11p35,Hamilton:2016,del2017semantic}, comparison of hand-selected changing vs. semantically stable words \citep[][]{Lau12p591,Cook14p1624}, and post-hoc evaluation of the predictions of the presented models \citep[][]{Cook10,Kulkarni14,del2016tracing,Eger:2016,ferrari2017detecting}. \citet{schlechtweg-EtAl:2017:CoNLL} propose a small-scale annotation of diachronic metaphoric change. Synthetic evaluation procedures (ii) include studies that simulate LSC \citep{Cook10,Kulkarni14,RosenfeldE18}, evaluate sense assignments in WordNet \citep{Mitra15p773,Frermann:2016}, identify text creation dates \citep[][]{Mihalcea12,Frermann:2016}, or predict the log-likelihood of textual data \citep[][]{Frermann:2016}. Overall, the various studies use different evaluation tasks and data, with little overlap. Most evaluation data has not been annotated. Models were rarely compared to previously suggested ones, especially if the models differed in meaning representations.
Moreover, for the diachronic task, synthetic datasets are used which do not reflect actual diachronic changes. \section{Task and Data} \label{sec:task} Our study makes use of the evaluation framework proposed in \citet{Schlechtwegetal18}, where diachronic LSC detection is defined as a comparison between word uses in two time-specific corpora. We further applied the framework to create an analogous synchronic LSC dataset that compares word uses across general-language and domain-specific corpora. The common, meta-level task in our diachronic+synchronic setup is, given two corpora $C_a$ and $C_b$, to rank the targets in the respective datasets according to their degree of relatedness between word uses in $C_a$ and $C_b$. \subsection{Corpora} \label{subsec:corpora} \paragraph{\textsc{DTA}} \citep{dta2017} is a freely available lemmatized, POS-tagged and spelling-normalized diachronic corpus of German containing texts from the 16th to the 20th century. \vspace{-1mm} \paragraph{\textsc{Cook}} is a domain-specific corpus. We crawled cooking-related texts from several categories (recipes, ingredients, cookware and cooking techniques) from the German cooking recipe websites \textit{kochwiki.de} and \textit{Wikibooks Kochbuch}\footnote{\url{de.wikibooks.org/wiki/Kochbuch}}. \vspace{-1mm} \paragraph{\textsc{SdeWaC}} \citep{faasseckart2012} is a cleaned version of the web-crawled corpus \textsc{deWaC} \citep{baroni2009wacky}. We reduced \textsc{SdeWaC} to \nicefrac{1}{8}th of its original size by selecting every 8th sentence for our general-language corpus. \vspace{+2mm}\\ Table \ref{tab:corpSizes} summarizes the corpus sizes after applying pre-processing. See Appendix \ref{sec:parameter} for pre-processing details. \begin{table}[] \centering \begin{adjustbox}{width=0.45\textwidth} \small \begin{tabular}{lrrrr} & \multicolumn{2}{c}{\textbf{Times}}&\multicolumn{2}{c}{\textbf{Domains}} \\ \hline & \multicolumn{1}{c}{\textsc{Dta18}} & \multicolumn{1}{c}{\textsc{Dta19}} & \multicolumn{1}{c}{\textsc{SdeWaC}} & \multicolumn{1}{c}{\textsc{Cook}} \\ \hline \textsc{L\textsubscript{all}} & 26M & 40M & 109M & 1M \\ \textsc{L/P\textsubscript{}} & 10M & 16M & 47M & 0.6M \\ \end{tabular} \end{adjustbox} \caption{Corpora and their approximate sizes.}\label{tab:corpSizes} \end{table} \subsection{Datasets\footnote{The datasets are available in Appendix \ref{sec:supplemental} and at \url{https://github.com/Garrafao/LSCDetection}.} and Evaluation} \label{subsec:datasets} \paragraph{Diachronic Usage Relatedness (DURel).} DURel is a gold standard for diachronic LSC consisting of 22 target words with varying degrees of LSC \citep{Schlechtwegetal18}. Target words were chosen from a list of attested changes in a diachronic semantic dictionary \citep{Paul02XXI}, and for each target a random sample of use pairs from the DTA corpus was annotated for meaning relatedness of the uses on a scale from 1 (unrelated meanings) to 4 (identical meanings), both within and across the time periods 1750--1799 and 1850--1899. The annotation resulted in an average Spearman's $\rho=0.66$ across five annotators and 1,320 use pairs. For our evaluation of diachronic meaning change we rely on the ranking of the target words according to their mean usage relatedness across the two time periods. \paragraph{Synchronic Usage Relatedness (SURel).} SURel is a recent gold standard for synchronic LSC \citep{haettySurel:2019} using the same framework as in DURel.
The 22 target words were chosen so as to exhibit different degrees of domain-specific meaning shifts, and use pairs were randomly selected from \textsc{SdeWaC} as general-language corpus and from \textsc{Cook} as domain-specific corpus. The annotation for usage relatedness across the corpora resulted in an average Spearman's $\rho=0.88$ across four annotators and 1,320 use pairs. For our evaluation of synchronic meaning change we rely on the ranking of the target words according to their mean usage relatedness between general-language and domain-specific uses. \paragraph{Evaluation.} The gold LSC ranks in the DURel and SURel datasets are used to assess the correctness of model predictions by applying Spearman's rank-order correlation coefficient $\rho$ as evaluation metric, as done in similar previous studies \citep{Gulordava11,schlechtweg-EtAl:2017:CoNLL,SchlechtwegWalde18}. As corpus data underlying the experiments we rely on the corpora from which the annotated use pairs were sampled: \textsc{DTA} documents from 1750--1799 as $C_a$ and documents from 1850--1899 as $C_b$ for the diachronic experiments, and the \textsc{SdeWaC} corpus as $C_a$ and the \textsc{Cook} corpus as $C_b$ for the synchronic experiments. \section{Meaning Representations\footnote{Find the hyperparameter settings in Appendix \ref{sec:parameter}. The scripts for vector space creation, alignment, measuring LSC and evaluation are available at \url{https://github.com/Garrafao/LSCDetection}.}} \label{sec:repres} Our models are based on two families of distributional meaning representations: semantic vector spaces (Section~\ref{subsec:spaces}), and topic distributions (Section~\ref{subsec:topic}). All representations are bag-of-words-based, i.e., each word representation reflects a weighted bag of context words. The contexts of a target word $w_i$ are the words surrounding it in an $n$-sized window: $w_{i-n}, ..., w_{i-1}, w_{i+1}, ..., w_{i+n}$. \subsection{Semantic Vector Spaces} \label{subsec:spaces} A semantic vector space constructed from a corpus $C$ with vocabulary $V$ is a matrix $M$, where each row vector represents a word $w$ in the vocabulary $V$ reflecting its co-occurrence statistics \citep{turney2010frequency}. We compare two state-of-the-art approaches to learn these vectors from co-occurrence data, (i) counting and (ii) predicting, and construct vector spaces for each time period and domain. \subsubsection{Count-based Vector Spaces} In a count-based semantic vector space the matrix $M$ is high-dimensional and sparse. The value of each matrix cell $M_{i,j}$ represents the number of co-occurrences of the word $w_i$ and the context $c_j$, $\#(w_i,c_j)$. In line with \citet{Hamilton:2016} we apply a number of transformations to these raw co-occurrence matrices, as previous work has shown that this improves results on different tasks \cite{Bullinaria2012,Levy2015}. \vspace{+1mm} \paragraph{Positive Pointwise Mutual Information (PPMI).} In PPMI representations the co-occurrence counts in each matrix cell $M_{i,j}$ are weighted by the positive mutual information of target $w_i$ and context $c_j$ reflecting their degree of association.
The values of the transformed matrix are \vspace{-3mm} \small \begin{equation*} { M}^{\textrm{PPMI}}_{i,j} = \max\left\lbrace\log\left(\frac{\#(w_i,c_j)\sum_c \#(c)^{\alpha}}{ \#(w_i)\#(c_j)^{\alpha}}\right)-\log(k),0\right\rbrace, \end{equation*} \normalsize where $k > 1$ is a prior on the probability of observing an actual occurrence of $(w_i, c_j)$ and $0 < \alpha < 1$ is a smoothing parameter reducing PPMI's bias towards rare words \citep{Levy:2014,Levy2015}. \vspace{+1mm} \paragraph{Singular Value Decomposition (SVD).} Truncated SVD finds the optimal rank-$d$ factorization of matrix $M$ with respect to L2 loss \citep{Eckart1936}. We use truncated SVD to obtain low-dimensional approximations of the PPMI representations by factorizing ${ M}^{\textrm{PPMI}}$ into the product of the three matrices ${ U}{ \Sigma}{ V}^\top$. We keep only the top $d$ elements of $\Sigma$ and obtain \vspace{-2mm} \begin{equation*} {M}^{\textrm{SVD}} = { U_d}{ \Sigma^{p}_{d}}, \end{equation*} where $p$ is an eigenvalue weighting parameter \cite{Levy2015}. The $i$th row of ${ M}^{\textrm{SVD}}$ corresponds to $w_i$'s $d$-dimensional representation. \vspace{+1mm} \paragraph{Random Indexing (RI).} RI is a dimensionality reduction technique based on the Johnson-Lindenstrauss lemma according to which points in a vector space can be mapped into a randomly selected subspace under approximate preservation of the distances between points, if the subspace has a sufficiently high dimensionality \citep{Johnson1984,Sahlgren2004}. We reduce the dimensionality of a count-based matrix $M$ by multiplying it with a random matrix $R$: \vspace{-2mm} \begin{equation*} {M}^{\textrm{RI}} = {M}{ R^{|\mathcal{V}| \times d}}, \end{equation*} where the $i$th row of ${M}^{\textrm{RI}}$ corresponds to $w_i$'s $d$-dimensional semantic representation. The choice of the random vectors corresponding to the rows in $R$ is important for RI. We follow previous work \citep{Basile2015} and use sparse ternary random vectors with a small number $s$ of randomly distributed $-1$s and $+1$s, all other elements set to 0, and we apply subsampling with a threshold $t$. \vspace{+1mm} \subsubsection{Predictive Vector Spaces} \vspace{+1mm} \paragraph{Skip-Gram with Negative Sampling (SGNS)} differs from count-based techniques in that it directly represents each word $w \in V$ and each context $c \in V$ as a $d$-dimensional vector by implicitly factorizing $M=WC^\top$ when solving \vspace{-3mm} \small \begin{equation*} \arg\max_\theta \sum_{(w,c)\in D} \log \sigma(v_c \cdot v_w) + \sum_{(w,c) \in D'} \log \sigma (-v_c \cdot v_w), \end{equation*} \normalsize where $\sigma(x) = \frac{1}{1+e^{-x}}$, $D$ is the set of all observed word-context pairs and $D'$ is the set of randomly generated negative samples \citep{Mikolov13a,Mikolov13b,GoldbergL14}. The optimized parameters $\theta$ are $v_{c_i}=C_{i*}$ and $v_{w_i}=W_{i*}$ for $w,c\in V$, $i\in 1,...,d$. $D'$ is obtained by drawing $k$ contexts from the empirical unigram distribution $P(c) = \frac{\#(c)}{|D|}$ for each observation of $(w,c)$, cf. \citet{Levy2015}. SGNS and PPMI representations are highly related in that the cells of the implicitly factorized matrix $M$ are PPMI values shifted by $\log(k)$ \citep{Levy:2014}. Hence, SGNS and PPMI share the hyper-parameter $k$. The final SGNS matrix is given by \vspace{-2mm} \begin{equation*} {M}^{\textrm{SGNS}} = W, \end{equation*} where the $i$th row of ${M}^{\textrm{SGNS}}$ corresponds to $w_i$'s $d$-dimensional semantic representation.
As in RI we apply subsampling with a threshold $t$. SGNS with particular parameter configurations has been shown to outperform transformed count-based techniques on a variety of tasks \cite{marcobaroni2014predict,Levy2015}. \vspace{+1mm} \subsubsection{Alignment} \vspace{+1mm} \paragraph{Column Intersection (CI).} In order to make the matrices $A$ and $B$ from time periods $a < b$ (or domains $a$ and $b$) comparable, they have to be aligned via a common coordinate axis. This is rather straightforward for count and PPMI representations, because their columns correspond to context words which often occur in both $A$ and $B$ \citep{Hamilton:2016}. In this case, the alignment for $A$ and $B$ is \begin{equation*} \begin{split} A_{*j}^{\textrm{CI}} = A_{*j}\textrm{~~ for all } c_j \in V_{a} \cap V_{b},\\ B_{*j}^{\textrm{CI}} = B_{*j}\textrm{~~ for all } c_j \in V_{a} \cap V_{b}, \end{split} \end{equation*} where $X_{*j}$ denotes the $j$th column of $X$. \vspace{+1mm} \paragraph{Shared Random Vectors (SRV).} RI offers an elegant way to align count-based vector spaces and reduce their dimensionality at the same time \citep{Basile2015}. Instead of multiplying count matrices $A$ and $B$ each by a separate random matrix $R_A$ and $R_B$ they may be multiplied both by the same random matrix $R$ representing them in the same low-dimensional random space. Hence, $A$ and $B$ are aligned by \begin{equation*} \begin{split} A^{\textrm{SRV}} = A R,\\ B^{\textrm{SRV}} = B R. \end{split} \end{equation*} We follow \citeauthor{Basile2015} and adopt a slight variation of this procedure: instead of multiplying both matrices by exactly the same random matrix (corresponding to an intersection of their columns) we first construct a shared random matrix and then multiply $A$ and $B$ by the respective sub-matrix. \vspace{+1mm} \paragraph{Orthogonal Procrustes (OP).} In the low-dimensional vector spaces produced by SVD, RI and SGNS the columns may represent different coordinate axes (orthogonal variants) and thus cannot directly be aligned to each other. Following \citet{Hamilton:2016} we apply OP analysis to solve this problem. We represent the dictionary as a binary matrix D, so that $D_{i,j} = 1$ if $w_i \in V_b$ (the $i$th word in the vocabulary at time $b$) corresponds to $w_j \in V_a$. The goal is then to find the optimal mapping matrix $W^{*}$ such that the sum of squared Euclidean distances between $B$'s mapping $B_{i*}W$ and $A_{j*}$ for the dictionary entries $D_{i,j}$ is minimized: \begin{equation*} W^{*} = \arg\min_{W} \sum_i\sum_j D_{i,j}\| B_{i*}W-A_{j*} \|^{2}. \end{equation*} Following standard practice we length-normalize and mean-center $A$ and $B$ in a pre-processing step \citep{artetxe2017acl}, and constrain $W$ to be orthogonal, which preserves distances within each time period. Under this constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product when finding the optimal rotational alignment \citep{Hamilton:2016,artetxe2017acl}. The optimal solution for this problem is then given by $W^{*}=UV^{\top}$, where $B^{\top} DA=U\Sigma V^{\top}$ is the SVD of $B^{\top} DA$. Hence, $A$ and $B$ are aligned by \begin{equation*} \begin{split} &A^{\textrm{OP}} = A,\\ &B^{\textrm{OP}} = B W^{*}, \end{split} \end{equation*} where $A$ and $B$ correspond to their preprocessed versions. We also experiment with two variants: $\textrm{OP}_{-}$ omits mean-centering \citep{Hamilton:2016}, which is potentially harmful as a better solution may be found after mean-centering.
$\textrm{OP}_{+}$ corresponds to OP with additional pre- and post-processing steps and has been shown to improve performance in research on bilingual lexicon induction \citep{artetxe2018aaai,artetxe2018acl}. We apply all OP variants only to the low-dimensional matrices. \vspace{+1mm} \paragraph{Vector Initialization (VI).} In VI we first learn $A^{\textrm{VI}}$ using standard SGNS and then initialize the SGNS model for learning $B^{\textrm{VI}}$ on $A^{\textrm{VI}}$ \citep{Kim14}. The idea is that if a word is used in similar contexts in $a$ and $b$, its vector will be updated only slightly, while more different contexts lead to a stronger update. \vspace{+1mm} \paragraph{Word Injection (WI).} Finally, we use the word injection approach by \citet{ferrari2017detecting} in which target words are substituted by a placeholder in one corpus before learning semantic representations, and a single matrix $M^{\textrm{WI}}$ is constructed for both corpora after mixing their sentences. The advantage of this approach is that all vector learning methods described above can be directly applied to the mixed corpus, and target vectors are constructed directly in the same space, so no post-hoc alignment is necessary. \vspace{+1mm} \subsection{Topic Distributions} \label{subsec:topic} \vspace{+1mm} \paragraph{Sense ChANge (SCAN).} SCAN models LSC of word senses via smooth and gradual changes in associated topics \citep{Frermann:2016}. The semantic representation inferred for a target word $w$ and time period $t$ consists of a $K$-dimensional distribution over word senses $\phi^{t}$ and a $V$-dimensional distribution over the vocabulary $\psi^{t,k}$ for each word sense $k$, where $K$ is a predefined number of senses for target word $w$. SCAN places parametrized logistic normal priors on $\phi^{t}$ and $\psi^{t,k}$ in order to encourage a smooth change of parameters, where the extent of change is controlled through the precision parameter $K^{\phi}$, which is learned during training. Although $\psi^{t,k}$ may change over time for word sense $k$, senses are intended to remain thematically consistent, as controlled by the word precision parameter $K^{\psi}$. This allows comparison of the topic distributions across time periods. For each target word $w$ we infer a SCAN model for two time periods $a$ and $b$ and take $\phi^{a}_w$ and $\phi^{b}_w$ as the respective semantic representations. \vspace{+1mm} \section{LSC Detection Measures} \label{sec:measures} LSC detection measures predict a degree of LSC from two time-specific semantic representations of a word $w$. They either capture the contextual similarity (Section~\ref{sec:similarity}) or changes in the contextual dispersion (Section~\ref{subsec:dispersion}) of $w$'s representations.\footnote{Find an overview of which measure was applied to which representation type in Appendix \ref{sec:parameter}.} \vspace{+1mm} \subsection{Similarity Measures} \label{sec:similarity} \vspace{+2mm} \paragraph{Cosine Distance (CD).} CD is based on cosine similarity, which measures the cosine of the angle between two non-zero vectors $\vec{x},\vec{y}$, irrespective of their magnitudes \cite{salton1986introduction}: \begin{equation*} cos(\vec{x},\vec{y}) = \frac{\vec{x} \cdot \vec{y}}{\sqrt{\vec{x} \cdot \vec{x}} \sqrt{\vec{y} \cdot \vec{y}}}. \end{equation*} The cosine distance is then defined as \begin{equation*} CD(\vec{x},\vec{y})=1-cos(\vec{x},\vec{y}). \end{equation*} CD's prediction for a degree of LSC of $w$ between time periods $a$ and $b$ is obtained by $CD(\vec{w}_a,\vec{w}_b)$.
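To make the combination of alignment and measure concrete, the following sketch (hypothetical inputs; \texttt{A} and \texttt{B} stand in for two separately learned low-dimensional spaces with identical row vocabularies, so the dictionary $D$ is the identity) combines OP alignment with CD, i.e., the SGNS+OP+CD configuration evaluated in Section~\ref{sec:results}:
\begin{verbatim}
import numpy as np

def op_align(A, B):
    # Length-normalize and mean-center both spaces (pre-processing),
    # then rotate B onto A via Orthogonal Procrustes: W* = U V^T
    # where B^T A = U S V^T.
    def prep(X):
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        return X - X.mean(axis=0)
    A, B = prep(A), prep(B)
    U, _, Vt = np.linalg.svd(B.T @ A)
    return A, B @ (U @ Vt)

def cd(x, y):
    return 1.0 - x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

# Toy usage: random matrices stand in for the two SGNS spaces.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 100))          # space for corpus C_a
B = A + 0.1 * rng.normal(size=A.shape)    # space for corpus C_b
A2, B2 = op_align(A, B)
print(cd(A2[0], B2[0]))  # predicted degree of LSC for word 0
\end{verbatim}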
\vspace{+1mm} \paragraph{Local Neighborhood Distance (LND).} LND computes a second-order similarity for two non-zero vectors $\vec{x},\vec{y}$ \citep{HamiltonShiftDrift}. It measures the extent to which $\vec{x}$'s and $\vec{y}$'s distances to their shared nearest neighbors differ. First, the cosine similarities of $\vec{x}$ and $\vec{y}$ with each vector in the union of the sets of their $k$ nearest neighbors $N_k(\vec{x})$ and $N_k(\vec{y})$ are computed and represented as vectors $\vec{s_x}$ and $\vec{s_y}$ whose entries are given by \begin{equation*} \begin{split} s_x(j) = \text{cos}(\vec{x},\vec{z}_j), \quad s_y(j) = \text{cos}(\vec{y},\vec{z}_j) \quad \forall \vec{z}_j\in N_{k}(\vec{x}) \cup N_{k}(\vec{y}). \end{split} \end{equation*} LND is then computed as the cosine distance between these two vectors: \begin{equation*} LND(\vec{x},\vec{y}) = CD(\vec{s_x}, \vec{s_y}). \end{equation*} LND does not require matrix alignment, because it measures the distances to the nearest neighbors in each space separately. It was claimed to capture changes in paradigmatic rather than syntagmatic relations between words \citep{HamiltonShiftDrift}. \paragraph{Jensen-Shannon Distance (JSD).} JSD computes the distance between two probability distributions $\phi_x,\phi_y$ of words $w_x, w_y$ \citep{Lin1991,DonosoS17}. It is the square root of the Jensen-Shannon divergence, a symmetrized and smoothed variant of the Kullback-Leibler divergence: \small \begin{equation*} JSD(\phi_x||\phi_y)=\sqrt{\frac{D_{KL}(\phi_x||M)+D_{KL}(\phi_y||M)}{2}}\,, \end{equation*} \normalsize where $M=(\phi_x+\phi_y)/2$. JSD is high if $\phi_x$ and $\phi_y$ assign different probabilities to the same events. \begin{table*}[] \center \small \hspace*{-10pt} \begin{tabular}{ c | c S c c c c | c } \hline \textbf{Dataset} & \textbf{Preproc} & \textbf{Win} & \specialcell{\textbf{Space}} & \specialcell{\textbf{Parameters}\\} & \specialcell{\textbf{Align}\\} & \specialcell{\textbf{Measure}} & \textbf{Spearman m (h, l)} \\ \hline \multirow{5}{*}{\textbf{DURel}} & \multirow{1}{*}{\textsc{L\textsubscript{all}}} &10& SGNS & k=1,t=None & $\textrm{OP}$ & CD & \textbf{0.866} \tiny (0.914, 0.816) \\ &\multirow{1}{*}{\textsc{L\textsubscript{all}}} &10& SGNS & k=5,t=None & $\textrm{OP}$ & CD & 0.857 \tiny (0.891, 0.830) \\ &\multirow{1}{*}{\textsc{L\textsubscript{all}}} &5& SGNS & k=5,t=0.001 & $\textrm{OP}$ & CD & 0.835 \tiny (0.872, 0.814) \\ &\multirow{1}{*}{\textsc{L\textsubscript{all}}} &10& SGNS & k=5,t=0.001 & $\textrm{OP}$ & CD & 0.826 \tiny (0.863, 0.768) \\ &\multirow{1}{*}{\textsc{L/P}} &2& SGNS & k=5,t=None & $\textrm{OP}$ & CD & 0.825 \tiny (0.826, 0.818) \\ \hline \multirow{5}{*}{\textbf{SURel}} & \textsc{L/P} &2& SGNS & k=1,t=0.001 & $\textrm{OP}$ & CD & \textbf{0.851} \tiny (0.851, 0.851) \\ & \textsc{L/P} &2& SGNS & k=5,t=None & $\textrm{OP}$ & CD & 0.850 \tiny (0.850, 0.850) \\ & \textsc{L/P} &2& SGNS & k=5,t=0.001 & $\textrm{OP}$ & CD & 0.834 \tiny (0.838, 0.828) \\ & \textsc{L/P} &2& SGNS & k=5,t=0.001 & $\textrm{OP}_{-}$ & CD & 0.831 \tiny (0.836, 0.817) \\ & \textsc{L/P} &2& SGNS & k=5,t=0.001 & $\textrm{OP}$ & CD & 0.829 \tiny (0.832, 0.823) \\ \hline \end{tabular} \caption{Best results of $\rho$ scores (Win=Window Size, Preproc=Preprocessing, Align=Alignment, k=number of negative samples, t=subsampling threshold, Spearman m(h,l): mean, highest and lowest results).} \label{tab:bestResults} \end{table*} \vspace{+2mm} \subsection{Dispersion Measures} \label{subsec:dispersion} \vspace{+2mm} \paragraph{Frequency Difference (FD).} The log-transformed relative frequency of a word $w$ for a corpus $C$ is defined by \begin{equation*} F(w,C)=\log \frac{|w\in C|}{|C|}. \end{equation*} FD of two
words $x$ and $y$ in two corpora $X$ and $Y$ is then defined by the absolute difference in F: \begin{equation*} FD(x,X,y,Y)= |F(x,X)-F(y,Y)|. \end{equation*} FD's prediction for $w$'s degree of LSC between time periods $a$ and $b$ with corpora $C_a$ and $C_b$ is computed as $FD(w,C_a,w,C_b)$ (and analogously for the measures below). \vspace{+1mm} \paragraph{Type Difference (TD).} TD is similar to FD, but based on word vectors $\vec{w}$ for words $w$. The normalized log-transformed number of context types of a vector $\vec{w}$ in corpus $C$ is defined by \begin{equation*} T(\vec{w},C) = \log \frac{|\{i \colon \vec{w}_i \neq 0\}|}{|C_T|}, \end{equation*} where $|C_T|$ is the number of types in corpus $C$. The TD of two vectors $\vec{x}$ and $\vec{y}$ in two corpora $X$ and $Y$ is the absolute difference in T: \begin{equation*} TD(\vec{x},X,\vec{y},Y)= |T(\vec{x},X)-T(\vec{y},Y)|. \end{equation*} \vspace{+1mm} \paragraph{Entropy Difference (HD).} HD relies on vector entropy as suggested by \citet{Santus:2014}. The entropy of a non-zero word vector $\vec{w}$ is defined by \begin{equation*} VH(\vec{w}) = -\sum_{i} \frac{\vec{w}_i}{\sum_{j} \vec{w}_j} \; \log \frac{\vec{w}_i}{\sum_{j} \vec{w}_j}. \end{equation*} VH is based on Shannon's entropy \citep{Shannon:1948}, which measures the unpredictability of $w$'s co-occurrences \citep{schlechtweg-EtAl:2017:CoNLL}. HD is defined as \begin{equation*} HD(\vec{x},\vec{y}) = |VH(\vec{x})-VH(\vec{y})|. \end{equation*} We also experiment with differences in H between topic distributions $\phi^{a}_w,\phi^{b}_w$, which are computed in a similar fashion, and with normalizing VH by dividing it by $\log(VT(\vec{w}))$, its maximum value, where $VT(\vec{w})$ denotes the number of non-zero entries (context types) of $\vec{w}$. \vspace{+2mm} \section{Results and Discussion} \label{sec:results} First of all, we observe that nearly all model predictions have a strong positive correlation with the gold rank. Table \ref{tab:bestResults} presents the overall best results across models and parameters.\footnote{For models with randomness we computed the average results of five iterations.} With $\rho=0.87$ for diachronic LSC (DURel) and $\rho=0.85$ for synchronic LSC (SURel), the models reach comparable and unexpectedly high performances on the two distinct datasets. The overall best-performing model is Skip-Gram with orthogonal alignment and cosine distance (SGNS+OP+CD). The model is robust in that it performs best on both datasets and produces very similar, sometimes identical, results across different iterations. \vspace{+1mm} \paragraph{Pre-processing and Parameters.} Regarding pre-processing, the results are less consistent: \textsc{L\textsubscript{all}} (all lemmas) dominates in the diachronic task, while \textsc{L/P} (lemma:pos of content words) dominates in the synchronic task. In addition, \textsc{L/P} pre-processing, which is already limited to content words, prefers shorter windows, while \textsc{L\textsubscript{all}} (pre-processing where the complete sentence structure is maintained) prefers longer windows. Regarding the preference of \textsc{L/P} for SURel, we attribute this to noise in the \textsc{Cook} corpus, which contains a lot of recipes listing ingredients and quantities with numerals and abbreviations that presumably contribute little information about context words. For instance, \textsc{Cook} contains $4.6\%$ numerals, while \textsc{DTA} only contains $1.2\%$ numerals.
Looking at the influence of subsampling, we find that it does not improve the mean performance for Skip-Gram (SGNS) (with $\rho=0.506$, without $\rho=0.517$), but clearly does for Random Indexing (RI) (with $\rho=0.413$, without $\rho=0.285$). \citet{Levy2015} found that SGNS prefers numerous negative samples ($k>1$), which is confirmed here: mean $\rho$ with $k=1$ is $0.487$, and mean $\rho$ with $k=5$ is $0.535$.\footnote{For PPMI we observe the opposite preference: mean $\rho$ with $k=1$ is $0.549$ and mean $\rho$ with $k=5$ is $0.439$.} This finding is also indicated in Table \ref{tab:bestResults}, where $k=5$ dominates the 5 best results on both datasets; yet, $k=1$ provides the overall best result on both datasets. \paragraph{Semantic Representations.} Table \ref{tab:spaces} shows the best and mean results for different semantic representations. SGNS is clearly the best vector space model, even though its mean performance does not exceed that of the other representations as clearly as its best performance does. Regarding count models, PPMI and SVD show the best results. SCAN performs poorly, and its mean results indicate that it is rather unstable. This may be explained by the particular way in which SCAN constructs context windows: it ignores asymmetric windows, thus reducing the number of training instances considerably, in particular for large window sizes. \vspace{+1mm} \begin{table}[htp] \center \small \begin{adjustbox}{width=0.8\linewidth} \begin{tabular}{ c | l c c } \hline \textbf{Dataset}&\textbf{Representation} &\textbf{best}&\textbf{mean}\\ \hline \multirow{6}{*}{\textbf{DURel}} & raw count & 0.639 & 0.395\\ & PPMI & 0.670 & 0.489\\ & SVD & 0.728 & 0.498\\ & RI & 0.601 & 0.374 \\ & SGNS & \textbf{0.866} & \textbf{0.502}\\ & SCAN & 0.327 & 0.156 \\ \hline \multirow{6}{*}{\textbf{SURel}} & raw count & 0.599 & 0.120\\ & PPMI & 0.791 & 0.500\\ & SVD & 0.639 & 0.300\\ & RI & 0.622 & 0.299 \\ & SGNS & \textbf{0.851} & \textbf{0.520} \\ & SCAN & 0.082 & -0.244 \\ \hline \end{tabular} \end{adjustbox} \caption{Best and mean $\rho$ scores across similarity measures (CD, LND, JSD) on semantic representations.} \label{tab:spaces} \end{table} \paragraph{Alignments.} The fact that our modification of \citet{Hamilton:2016} (SGNS+OP) performs best across datasets confirms our assumption that column-mean centering is an important preprocessing step in Orthogonal Procrustes analysis and should not be omitted. Additionally, the mean performance in Table \ref{tab:CompOP} shows that OP is generally more robust than its variants. $\textrm{OP}_{+}$ has the best mean performance on DURel, but performs poorly on SURel. \citet{artetxe2018aaai} show that the additional pre- and post-processing steps of $\textrm{OP}_{+}$ can be harmful in certain conditions. We tested the influence of the different steps and identified the non-orthogonal whitening transformation as the main reason for a performance drop of $\approx$20\%. In order to see how important the alignment step is for the low-dimensional embeddings (SVD/RI/SGNS), we also tested the performance without alignment (`None' in Table \ref{tab:CompOP}). As expected, the mean performance drops considerably. However, it remains positive, which suggests that the spaces learned in the models are not random but rather only slightly rotated variants of each other. Especially interesting is the comparison of Word Injection (WI), where one common vector space is learned, against the OP-models, where two separately learned vector spaces are aligned.
Although WI avoids (post-hoc) alignment altogether, it is consistently outperformed by OP, as shown in Table \ref{tab:CompOP} for the low-dimensional embeddings.\footnote{We see the same tendency for WI against random indexing with a shared random space (SRV), but variable results for count and PPMI alignment (CI). This contradicts the findings of \citet{Dubossarskyetal19}, who, however, use a different task and synthetic data.} We found that WI also profits from mean-centering in the pre-processing step: applying mean-centering to the WI matrices improves the performance of WI+SGNS+CD by 3\%. The results for Vector Initialization (VI) are unexpectedly low (mean $\rho=-0.017$ on DURel, mean $\rho=0.082$ on SURel). An essential parameter choice for VI is the number of training epochs for the initialized model. We experimented with 20 epochs instead of 5, but could not improve the performance. This contradicts the results obtained by \citet{Hamilton:2016}, who report a ``negligible'' impact of VI when compared to $\textrm{OP}_{-}$. We reckon that VI is strongly influenced by frequency: the more frequent a word is in corpus C$_b$, the more its vector will be updated after initialization on C$_a$. Hence, VI predicts more change for words with higher frequency in C$_b$.
\vspace{+1mm}
\begin{table}[t] \centering \begin{adjustbox}{width=0.45\textwidth} \begin{tabular}{c|r r r |r |r } \hline \textbf{Dataset}&\textbf{$\textrm{OP}$}&\textbf{$\textrm{OP}_{-}$}&\textbf{$\textrm{OP}_{+}$}&\textbf{WI}&\textbf{None} \\ \hline \textbf{DURel} &0.618&0.557&\textbf{0.621}&0.468&0.254\\ \textbf{SURel} &\textbf{0.590}&0.514&0.401&0.492&0.285\\ \hline \end{tabular} \end{adjustbox} \caption{Mean $\rho$ scores for CD across the alignments. Applies only to RI, SVD and SGNS.} \label{tab:CompOP} \end{table}
\vspace{-1mm}
\paragraph{Detection Measures.} Cosine distance (CD) dominates Local Neighborhood Distance (LND) for all vector space and alignment types (e.g., mean $\rho$ on DURel with SGNS+OP is $0.723$ for CD vs. $0.620$ for LND) and should hence generally be preferred if alignment is possible. Otherwise, LND or a variant of WI+CD should be used, as they show lower but robust results.\footnote{JSD was not included here, as it was only applied to SCAN and its performance thus strongly depends on the underlying meaning representation.} Dispersion measures in general exhibit a low performance, and previous positive results for them could not be reproduced \citep{schlechtweg-EtAl:2017:CoNLL}. It is striking that, contrary to our expectation, dispersion measures on SURel show a strong negative correlation (max. $\rho=-0.79$). We suggest that this is due to frequency particularities of the dataset: SURel's gold LSC rank has a rather strong negative correlation with the targets' frequency rank in the \textsc{Cook} corpus ($\rho=-0.51$). Moreover, because \textsc{Cook} is orders of magnitude smaller than \textsc{SdeWaC}, the normalized values computed by most dispersion measures are much higher in \textsc{Cook}. This also gives them a much higher weight in the final calculation of the absolute differences. Hence, the negative correlation in \textsc{Cook} propagates to the final results. This is supported by the fact that the only measure not normalized by corpus size (HD) has a positive correlation. As these findings show, the dispersion measures are strongly influenced by frequency and very sensitive to differences in corpus size.
\paragraph{Control Condition.} As we saw, dispersion measures are sensitive to frequency.
Similar observations have been made for other LSC measures \citep{dubossarsky2017}. In order to test for this influence within our datasets, we follow \citet{dubossarsky2017} in adding a control condition to the experiments in which sentences are randomly shuffled across corpora (time periods). For each target word we merge all sentences from the two corpora $C_a$ and $C_b$ containing it, shuffle them, split them again into two sets, holding their frequencies from the original corpora approximately stable, and merge these sets again with the original corpora. This significantly reduces the target words' mean degree of LSC between $C_a$ and $C_b$. Accordingly, the mean degree of LSC predicted by the models should decrease significantly if the models measure LSC (and not some other controlled property of the dataset such as frequency). We find that the mean prediction on a result sample (\textsc{L/P}, win=2) indeed decreases from $0.5$ to $0.36$ on DURel and from $0.53$ to $0.44$ on SURel. Moreover, shuffling should reduce the correlation of individual model predictions with the gold rank, as many items in the gold rank have a high degree of LSC, which is supposedly canceled out by the shuffling, hence randomizing the ranking. Testing this on a result sample (SGNS+OP+CD, \textsc{L/P}, win=2, k=1, t=None), as shown in Table \ref{tab:shuffle}, we find that it holds for DURel, with a drop from $\rho=0.816$ (ORG) to $0.180$ on the shuffled (SHF) corpora, but not for SURel, where the correlation remains stable ($0.767$ vs. $0.763$). We hypothesize that the latter may be due to SURel's frequency properties and find that downsampling all target words to approximately the same frequency in both corpora ($\approx 50$) reduces the correlation (+DWN). However, a rather high correlation remains ($0.576$). Presumably, other factors play a role: (i) Time-shuffling may not totally randomize the rankings, because words with a high degree of change still end up having slightly different meaning distributions in the two corpora than words with no change at all. Combined with the fact that the SURel rank is less uniformly distributed than DURel's, this may lead to a rough preservation of the SURel rank after shuffling. (ii) For words with a strong change, the shuffling creates two equally polysemous sets of word uses from two monosemous sets. The models may be sensitive to the different variances in these sets, and hence predict stronger change for more polysemous sets of uses. Overall, our findings demonstrate that much more work has to be done to understand the effects of time-shuffling as well as the sensitivity of LSC detection models to frequency and polysemy.
\begin{table}[t] \centering \begin{adjustbox}{width=0.65\linewidth} \begin{tabular}{c|r| rr } \hline \textbf{Dataset} & \textbf{ORG} & \textbf{SHF} & \textbf{+DWN} \\ \hline \textbf{DURel} & \textbf{0.816} & 0.180 & 0.372 \\ \textbf{SURel} & \textbf{0.767} & 0.763 & 0.576 \\ \hline \end{tabular} \end{adjustbox} \caption{$\rho$ for SGNS+OP+CD (\textsc{L/P}, win=2, k=1, t=None) before (ORG) and after time-shuffling (SHF) and downsampling to the same frequency (+DWN).} \label{tab:shuffle} \end{table}
\vspace{+1mm}
\section{Conclusion}
\label{sec:conclusion}
We carried out the first systematic comparison of a wide range of LSC detection models on two datasets which were reliably annotated for sense divergences across corpora. The diachronic and synchronic evaluation tasks we introduced were solved with impressively high performance and robustness.
We introduced \textit{Word Injection} to overcome the need for (post-hoc) alignment, but find that Orthogonal Procrustes yields better performance across vector space types. The overall best-performing approach on both datasets is to learn vector representations for different time periods (or domains) with SGNS, to align them with an orthogonal mapping, and to measure change with cosine distance. We further improved the performance of this approach by applying mean-centering as an important pre-processing step for rotational vector space alignment. \section*{Acknowledgments} The first author was supported by the Konrad Adenauer Foundation and the CRETA center funded by the German Ministry for Education and Research (BMBF) during the conduct of this study. We thank Haim Dubossarsky, Simon Hengchen, Andres Karjus, Barbara McGillivray, Cennet Oguz, Sascha Schlechtweg, Nina Tahmasebi and the three anonymous reviewers for their valuable comments. We further thank Michael Dorna and Bingqing Wang for their helpful advice. We also thank Lea Frermann for providing the code of SCAN and helping to set up the implementation.
\section{Introduction} The development of multifiber spectroscopy has made possible the simultaneous acquisition of tens of galaxy spectra. As a result, the study of the dynamics of clusters of galaxies is enjoying the availability of extensive and more complete new redshift data bases, which reveal the complexity of detail in cluster structure and help to solve the problems plaguing the interpretation of the kinematical data, such as projection effects, velocity anisotropies, substructure and secondary infall. In this paper, we present the results of an extensive observational effort to expand the kinematical data base for A2634, and a detailed analysis of the structure and dynamics of that cluster. A2634 is a nearby cluster, classified by Abell (1958) as of richness class 1. It was included by Dressler (1980a,b) in his study of the galaxy populations of 55 nearby clusters and, given its relative proximity ($z \sim 0.03$), it figures prominently in his cluster sample with 132 galaxies catalogued. Using Dressler's coarse subdivision into E, S0, S and Irr types, the fraction of E and S0 galaxies in the central region of A2634 is approximately $63\%$. This cluster and its close companion Abell 2666 (around 3$^\circ$ apart and at slightly lower redshift) are located in a region of complex topology in the background of the Pisces-Perseus supercluster (Batuski \& Burns 1985; Giovanelli, Haynes, \& Chincarini 1986a), which further complicates the kinematical analysis. Matthews, Morgan, \& Schmidt (1964) catalogued the central first-ranked galaxy, NGC 7720 (UGC 12716), as a cD, thus leading to the classification of A2634 as of Rood-Sastry type cD or Bautz-Morgan type I--II (Sastry \& Rood 1971; Bautz \& Morgan 1970). The association of NGC 7720 with the wide-angle tailed (WAT) radio source 3C 465 (Riley \& Branson 1973) and the absence of a cooling flow were used by Jones \& Forman (1984) to cast serious doubt on the possibility that this galaxy was at rest at the bottom of the cluster potential and to justify the nXD classification of A2634. Early radial velocity observations reporting velocity offsets of NGC 7720 with respect to the rest frame of the cluster of more than 300 $\rm km\,s^{-1}\,$ (Scott, Robertson, \& Tarenghi 1977) seemed to support the nonstationary nature of this galaxy (see, however, Zabludoff, Huchra, \& Geller 1990), although the significance of these measurements is affected by the small size of the samples involved. The need to elucidate the process leading to the formation of the central WAT radio source has motivated a recent thorough study of the cluster by Pinkney et al. (1993; hereinafter Pi93). From a sample of 126 redshifts of galaxies within one degree of the cluster center, Pi93 find a difference between the radial velocities of the cD galaxy and the whole cluster of $-219\pm 98$ $\rm km\,s^{-1}\,$, which they report as statistically significant. Nevertheless, in their analysis they also stress that the kinematical properties of the galaxies in the above sample and the elongation exhibited by the X-ray image of the central part of the cluster suggest that the northeast region of A2634 may harbor a dispersed subcluster currently undergoing merging with the primary unit. When they remove this contaminating region, the velocity offset of NGC 7720 drops to $-85\pm 91$ $\rm km\,s^{-1}\,$, a result consistent with the galaxy being stationary with respect to the cluster primary component.
Pi93 then conclude that the bending of the radio tails of 3C 465 is probably due to large-scale turbulent gas motions fueled by the ongoing merging process. The application of the Tully-Fisher (TF) technique (Tully \& Fisher 1977) to spiral galaxies in nearby clusters, which has led to estimates of $H_0$, has been used to sample the large-scale peculiar velocity field. Likewise, the analogous $D_n-\sigma$ relation (Dressler et al. 1987), which applies to early-type galaxies, has been used extensively in cluster fields. The results of the two techniques have not always been in agreement. Most notable is the case of the cluster A2634, where Lucey et al. (1991a) showed that the discrepancy between the estimates of the peculiar velocity of the cluster as inferred using the two techniques exceeds 3000 $\rm km\,s^{-1}\,$, a value much in excess of the estimated accuracies of the two methods. The source of this discrepancy could lie in biases that destroy the universality of the TF and $D_n-\sigma$ relations, or in problems associated with the definition of cluster membership. While the issue of universality of those relations has received significant attention, the problems arising from misplaced assignment of cluster membership have not been considered in comparable detail. A2634 is one of a sample of nearby clusters in each of which we are obtaining at least 50 redshift-independent distances via both the TF and the $D_n-\sigma$ methods. Our goals include not only the determination of a reliable set of cluster peculiar velocities, but also the disentanglement of possible environmental dependences of the TF and $D_n-\sigma$ relations from the blurring influence of poorly assessed cluster membership. In this paper, we present 174 new galaxy redshifts, which we combine with those in the public domain to carry out a detailed kinematical analysis in the region of A2634 and A2666. The analysis of the redshift-independent distances will be presented in a forthcoming work. The present paper is organized as follows. In \S~2, we present our new spectroscopic observations in a $6\deg\!\times 6\deg$ field centered on A2634, which also includes A2666, obtained at the Arecibo and Palomar Observatories. In \S~3, we describe the large-scale characteristics of the region in which A2634 and A2666 are located, outline the main clusters and groups that can be detected in the $6\deg\!\times 6\deg$ field around A2634, and define strict cluster membership criteria for A2634. In \S~4, we examine the spatial distribution and kinematics of the galaxies, and their dependence on morphological type, in a sample that contains all the cluster members within two degrees of the A2634 center, and in a more restricted, magnitude-limited sample within half a degree. For the latter sample, we investigate issues related to the kinematics of NGC 7720 and the existence of subclustering. In \S~5 we determine the main kinematical properties of A2666 and the other clusters and groups around A2634 identified in \S~3. The dynamical analysis of all groups and clusters is presented in \S~6. Mass estimates are given for A2634, A2666 and two background clusters, and the current dynamical state of the A2634\slash 2666 system is explored. In \S~7, we summarize the main results. Throughout the paper we assume $H_0=50$ $\rm km\,s^{-1}\,$ $\rm Mpc^{-1}$ and $q_0=1/2$. Celestial coordinates are all referred to the 1950 epoch.
\section{New Redshifts} The $6\deg\!\times 6\deg$ field centered on A2634 was visually surveyed on the Palomar Observatory Sky Survey (POSS) blue prints, identifying spiral galaxies with major diameter larger than about $0\farcm5$. Coordinates for these objects were measured with an accuracy of a few arcsec using a measuring machine developed by T. Herter. This search was carried out to obtain a list of candidates for future TF work. In addition, the inner square degree of this region was searched using the FOCAS software package, on digitized images of the Palomar Quick Survey, obtained with the kind assistance of D. Golombek of the Space Telescope Science Institute. This search produced a list of galaxy positions and rough indicative magnitudes which is estimated to be complete to a limiting magnitude near 16.5, although numerous fainter galaxies are included. About half of the galaxies in the first spiral sample were observed with the Arecibo 305m radio telescope in the 21 cm line. An effort was made to include all the galaxies in the inner region brighter than 16th magnitude among those targeted by an optical spectroscopic survey carried out with the 5m Hale telescope of the Palomar Observatory. Because the spectra were in great part obtained with a multifiber spectrograph, many targets were included for reasons of observational expedience rather than according to strict flux or size criteria. As a result, the limits of our spectroscopic survey are blurred, and can roughly be represented by a completeness function which, in the inner square degree, is 1 at $m_{pg} \sim 15.7$, 0.85 near 16.0, and drops below 0.5 at magnitudes fainter than 16.5. We report here new redshifts of 174 galaxies in the $6\deg\!\times 6\deg$ field centered on A2634, which also includes A2666. In a companion paper (Giovanelli et al. 1994), we present 37 additional redshifts that fall within that region, which were obtained for a separate but complementary study. \subsection{Hale 5m Telescope Observations} The red camera of the double spectrograph (Oke \& Gunn 1982) was used on September 25, 1992 to obtain single-slit spectroscopy for 41 galaxies in our catalog. A grating with $316\rm\, lines\,mm^{-1}$ was used to produce spectra in the range 4900--$7300\rm\,\AA$, with a typical resolution of $9.6\rm\,\AA$. The detector was a TI $800\times 800$ CCD chip. The data were reduced following standard procedures with available IRAF\footnote{IRAF (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.} tasks. Each frame was overscan- and bias-subtracted, then flat-fielded using appropriate dome flats. Wavelength calibration was performed using He-Ne-Ar lamp comparison spectra taken before and after each source observation. Heliocentric velocities were computed in two different ways: using the IRAF task {\it fxcor\/}, which is based on the cross-correlation algorithm described by Tonry \& Davis (1979), for pure absorption spectra, and measuring the central wavelength of the gaussian fit to the emission lines of spiral galaxies (typically between 3 and 7 lines were measured for each galaxy). Three different K giant stars were used as templates for the cross-correlation. The Norris multifiber spectrograph (Hamilton et al. 1993) was used on October 20, 1993 to observe a fainter sample of galaxies in the central region of A2634 and A2666.
The Norris spectrograph has a field of view of $20\arcmin$, within which 176 independent fibers can be placed for photon acquisition. At the front end, the fibers subtend $1\farcs6$ and can be placed as close as $16\arcsec$ apart. The fiber outputs are transferred through a grating element to a CCD detector. At the time of these observations, the Tektronix CCD had a reduced format of $1024\times 1024$, restricting the number of usable fibers to half the total and the field of view to a $10\arcmin\times 20\arcmin$ region. Coordinates for the target galaxies were measured on the digitized Palomar ``quick V'' survey plates at STScI\footnote{Astrometry was obtained using the Guide Stars Selection System Astrometric Support Program developed at the Space Telescope Science Institute, which is operated for NASA by the Association of Universities for Research in Astronomy, Inc.}. A $300\rm\,lines\,mm^{-1}$ grating was used to produce spectra in the range 4500--$7000\rm\,\AA$, with a resolution of $7.5\rm\,\AA$. Each frame was first overscan- and bias-subtracted; then one-dimensional spectra were extracted using appropriately sized apertures. These spectra were wavelength-calibrated using He-Ne-Ar comparison lamp spectra taken with the fibers deployed as for the galaxy acquisition, and extracted using the same apertures used for the object spectra. The sky subtraction was performed by subtracting from each object spectrum the median output of ten fibers positioned on blank sky regions, properly scaled to compensate for light transmission variations from fiber to fiber. After the final one-dimensional spectra were obtained, velocities were derived with precisely the same procedure described above for the single-slit spectra. Two different K giant stars were used as templates for the cross-correlation. The optical radial velocities and their associated errors are listed in Tables 1 (longslit data) and 2 (multifiber data). In column (1) the CGCG (Zwicky et al. 1963--68) identification (field number and ordinal number of galaxy in the field), or NGC number is given, when available. Otherwise, the galaxy name follows an internal catalog number designation. In columns (2) and (3), the right ascension and declination are listed; columns (4) and (5) give the heliocentric radial velocity and its associated error, as estimated combining the uncertainty in the calibration and the one derived from the measured signal-to-noise ratio of the cross-correlation function (``a'' in column [6]), or estimated from the dispersion in the single emission line velocity determinations, combined with the calibration uncertainty (``e'' in column [6]), both in $\rm km\,s^{-1}\,$. Column (7) lists the velocities found in the literature for some of these galaxies, with the references coded in column (8). \subsection{Arecibo Observations} New $21\rm\,cm$ observations of 89 galaxies in the A2634\slash 2666 region were carried out between 1990 and January 1994. In all cases the observational setup was as described in Giovanelli \& Haynes (1989). Since this sample includes galaxies significantly fainter than those customarily observed in $21\rm\,cm$ emission, integration times averaged 0.7 hours per object on source. Typical rms noise per averaged spectrum ranged between 0.3 and $0.9\rm\,mJy$. The later set of observations (July 1993 to January 1994, approximately $25\%$ of all runs) benefited from the completion of Phase I of the Arecibo telescope ongoing upgrading project (Hagfors et al.
1989), which includes the construction of a ground screen, 15 meters high and covering an area of $14{,}700\rm\,m^2$ around the main dish. The principal benefit of this device is a drastic reduction in the amount of ground radiation pickup, allowing a significant reduction of system temperature and an increase in telescope sensitivity for observations at zenith angles higher than $10\deg$. All observations were taken with a spectral resolution of approximately 8 $\rm km\,s^{-1}\,$, later reduced by smoothing by an amount dependent on the signal-to-noise ratio. Table 3 lists the galaxies for which new $21\rm\,cm$ spectra were obtained. In columns (1) and (2), the CGCG (Zwicky et al. 1963--68) identification (field number and ordinal number of galaxy in the field) or the NGC/IC number, and the UGC (Nilson 1973) number are given, when available. Otherwise, in column (1) the galaxy name follows an internal catalog number designation. The right ascension and declination are in columns (3) and (4); columns (5) and (6) give the heliocentric $21\rm\,cm$ radial velocity and its associated error, estimated from the signal-to-noise ratio and structural parameters of the line profile, both in $\rm km\,s^{-1}\,$. For those galaxies in common with Pi93 we list in column (7) the velocities given by these authors. About one third of the entries in Table 3 had been previously observed by us at Arecibo, but the data were of inferior quality. Listings in Table 3 supersede previous reports, which are coded in column (8). \subsection{Comparison with Previous Work} Optical observations for 20 galaxies and $21\rm\,cm$ line observations for 15 overlap with the list of observations presented by Pi93. The comparison between these two sets of redshift determinations reveals a generally good agreement and few large discrepancies, which usually involve the measurements affected by the largest uncertainties. As already pointed out by Pi93, in the case of low signal-to-noise ratio spectra (like their galaxies in the C2 category), the cross-correlation function typically has more than one peak, because of possible chance alignments of real and noise features in the object and template spectra. The choice of the wrong peak might then result in redshift determinations with arbitrarily large errors. We subscribe to the cautionary words of Pi93 about the redshift measurements inferred from the noisiest spectra, and notice that, likewise, velocities in Tables 1 and 2 with associated errors on the order of 200 $\rm km\,s^{-1}\,$ or larger should be considered as doubtful entries. In the following analysis the velocities listed in column (4) of Tables 1 and 2, and in column (5) of Table 3, are used for all the galaxies with multiple observations. \section{The Environment of A2634 and Cluster Membership} \subsection{The Large-Scale Environment} The cluster A2634 appears projected on the main ridge of the Pisces-Perseus supercluster (hereinafter referred to as PPS; see Fig.~1 of Wegner, Haynes, \& Giovanelli 1993), although it and its close companion A2666 are actually located on the higher redshift branch of the two into which the PPS splits near $\rm R.A. = 00^h 45^m$.
Figures 1 and 2 illustrate, respectively, the projected spatial distribution and the distribution in redshift space of the galaxies with measured radial velocities in our PPS catalog, pertaining to a region bounded by $\rm 22^h\le R.A.\le 1^h$, $\rm 20\deg \le Dec.\le 35\deg$ and $0\le cz_{hel}\le 16{,}000$ $\rm km\,s^{-1}\,$, which well describes the large-scale environment around A2634. The radial velocities included in Fig.~2 come from Giovanelli \& Haynes (1985, 1989, 1993), Giovanelli et al. (1986b), Wegner et al. (1993), Pi93, this paper, and Giovanelli et al. (1994). A2634 is the most conspicuous galaxy concentration dominating the center of Figs.~1 and 2; it is located in a region of high galactic density at a distance of $\sim 9000$ $\rm km\,s^{-1}\,$. Approximately $3\deg$ to the east is the lesser concentration represented by A2666, with the PPS ridge extending to the northeast. Several galaxy groups also appear prominent in the same figures. The substantial galaxy concentration associated with A2634 is partly due to the fact that in the cluster region radial velocities are measured to a fainter flux limit than in the rest of the field. It is clear from Figs.~1 and 2 that the topology of the region surrounding A2634 is quite intricate, perhaps more so than any other in the PPS region. Next, we concentrate on the more immediate neighborhood of A2634, namely the $6\deg\!\times 6\deg$ field around its center where our new redshifts have been measured. Figure 3 displays the positions of 663 galaxies with known redshift in that region. Of those, 237 are redshifts obtained from our previous surveys, 215 are listed in Pi93 and 211 are the new determinations presented in this paper and in Giovanelli et al. (1994). In Fig.~3, filled circles are used for objects with $m_{pg}\le 15.7$, for which the redshift survey is virtually complete in the whole field, while open circles are used for galaxies fainter than that limit. Note that the latter are clearly more concentrated towards the cluster inner regions due to the observational bias towards the core of A2634 mentioned before. In Figure 4 a radial velocity histogram between 0 and $45{,}000$ $\rm km\,s^{-1}\,$ of the galaxies in Fig.~3 is shown: the significant peak near 5500 $\rm km\,s^{-1}\,$ is associated with the foreground branch of the PPS; the peak near 8000 $\rm km\,s^{-1}\,$ includes principally galaxies in the redshift domain of A2666; the highest peak, near 9000 $\rm km\,s^{-1}\,$, is associated with A2634; and a well defined group around $11{,}700$ $\rm km\,s^{-1}\,$ appears detached from the A2634 regime. The feature around $18{,}000$ $\rm km\,s^{-1}\,$ includes the rich Abell cluster A2622 (centered $0\fdg9$ to the NW of A2634) and several more widely spread galaxies. The velocity peak near $37{,}000$ $\rm km\,s^{-1}\,$ is produced by a noticeable concentration of galaxies almost directly behind the A2634 center. Pi93 tentatively associate the X-ray clump seen in the {\it Einstein\/} IPC map of A2634 to the northwest of A2634 itself (see their Fig.~4) with this distant background cluster. In \S~5.2, we analyze this possibility in more detail. \subsection{Cluster Membership} As a first quantitative step in our analysis, we have assigned cluster/group membership to the galaxies in our $6\deg\!\times 6\deg$ sample. Usually, cluster membership is assigned using only a fixed range of velocities.
The velocity distribution of all galaxies projected within a chosen distance from the cluster center is obtained, and field galaxy contamination is eliminated, typically via a 3$\sigma$-clipping technique (Yahil \& Vidal 1977) or using procedures that take into account the gaps in the observed velocity distribution. At this point, all galaxies within two fixed velocity limits are assigned to the cluster. These methods do not consider the fact that, with the exception of the virialized inner cluster regions, where a roughly constant velocity dispersion is expected, the line-of-sight velocity dispersion of true cluster members decreases with increasing projected separation from the cluster center, whether the cluster is bound and isolated or whether a significant amount of secondary infall is present. This consideration will be taken into account in the subsequent membership assignment for the best-sampled cluster, A2634. Because of the proximity of A2634 and A2666, we first proceed to identify those galaxies in the neighborhood of A2634 that are more likely to be associated with A2666. To accomplish this, we compare the projected number density profiles of the two clusters at the position of any given galaxy, and assign it to A2666 if this cluster's density profile has a higher value than that of A2634 at that location. The projected number density profile of A2634 has been determined by applying the so-called ``direct method'' of Salvador-Sol\'e, Sanrom\'a, \& Gonz\'alez-Casado (1993a) to Dressler's (1980b) magnitude-limited sample of A2634 galaxies (see \S~4.3 below for details), which covers the inner region of the cluster up to a radial distance of $\sim 0\fdg5$ from its center (within this region a negligible contamination by galaxies belonging to A2666 is expected, as the two clusters are separated by $3\deg$). A good fit to the numerical solution given by this method is obtained using a modified Hubble law of the form \begin{equation}\label{den_2d} N(X)=N(0)[1+X^2]^{-\gamma+1/2}\;, \end{equation} where $N(0)$ is the central projected number density of galaxies, $\gamma$ a shape index, and $X=s/r_\ast$ the projected distance $s$ to the center of the cluster in units of the core radius $r_\ast$. The best-fitting values of the parameters $N(0)$, $\gamma$ and $r_\ast$ for A2634 are: $69\rm\,galaxies\,Mpc^{-2}$, 1.5, and $0.48\rm\,Mpc$, respectively. For A2666, the scarcity of observations makes the above procedure more uncertain; we therefore assume, for simplicity, that the values of the shape index and core radius of both clusters are the same and that they have the same mass-to-light ratio. Next, for each galaxy in the sample with heliocentric velocity in the range 6500--9500 $\rm km\,s^{-1}\,$ (which we adopt as the velocity boundaries of A2666; see \S~5.1), we compute the projected distances to A2634, $X_{34}$, and A2666, $X_{66}$, and assign A2666 membership if $N (X_{66}) > N (X_{34})$ or, equivalently, if \begin{equation} X_{66} < \biggl[{M_{66}\over{M_{34}}}(1+X_{34}^2 )-1\biggr]^{1/2}\;, \end{equation} where $M_{66}/M_{34}$ is the ratio of the masses of the two clusters, which we take equal to $1/7$ (see \S~6.1); a numerical sketch of this assignment rule is given below. For the center of A2634 we adopt the peak of the X-ray emission, $\rm R.A. = 23^h 35^m 54\fs 9$, $\rm Dec. = 26\deg 44\arcmin 19\arcsec$ (C. Jones 1993, private communication), while for A2666 we use the peak density of the galaxy distribution, $\rm R.A. = 23^h 48^m 24\fs 0$, $\rm Dec. = 26\deg 48\arcmin 24\arcsec$.
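Schematically, the assignment rule above amounts to the following check (a minimal sketch in Python, not the actual analysis code; the projected distances in core-radius units and the mass ratio are assumed to have been precomputed as described in the text):
\begin{verbatim}
import numpy as np

def assign_to_a2666(x34, x66, mass_ratio=1.0/7.0):
    # Assign A2666 membership when its projected density profile
    # exceeds that of A2634 at the galaxy position (see the
    # criterion equation above).
    # x34, x66: projected distances to the A2634 and A2666 centers,
    # in units of the common core radius r_* = 0.48 Mpc.
    bound = mass_ratio * (1.0 + x34**2) - 1.0
    # near the A2634 core the bound is negative: A2634 always wins
    return bound > 0.0 and x66 < np.sqrt(bound)
\end{verbatim}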
To convert angular separations to physical units, we calculate the cosmological distances of the clusters from their redshifts in the Cosmic Microwave Background (CMB) reference frame (see \S~4.2 and 5.1): $z_{CMB} (\rm A2634) = 0.0297$ and $z_{CMB} (\rm A2666) = 0.0256$. It is worthwhile to mention that the exact values adopted for the ratio of the cluster masses (or, equivalently, the ratio of central projected number densities), $\gamma$, and $r_\ast$ have a negligible practical influence on the results discussed later on, so any further refinement of this procedure is unnecessary. In Figure 5 we plot the heliocentric velocity as a function of the angular separation $\theta$ for all galaxies within $6\deg$ of the A2634 center. Unfilled symbols identify probable A2666 members, selected using the procedure described above. Since contamination from A2666 starts to be present at $\theta\gtrsim 2\deg$, we choose to restrict the following analysis to galaxies within this distance from the center of A2634. The solid lines plotted in Fig.~5 are the caustics associated with A2634, derived using the spherical infall model described in Reg\"os \& Geller (1989). In this simple model, the amplitude of the caustics depends only on $\Delta(r)$, the mass overdensity profile of the cluster, and the cosmological density parameter $\Omega_{0}$. To determine the former, we have followed the procedure outlined by Reg\"os \& Geller (1989), which relies on the assumption that the galaxies trace the mass distribution, i.e., that $\Delta(r)$ is identical to the {\it number\/} density enhancement of galaxies in the cluster: \begin{equation}\label{den_enh} \Delta(r)={\langle\rho\rangle\over \rho_u}-1={\langle n\rangle\over n_u}-1\;. \end{equation} In equation~(\ref{den_enh}), $\langle\rho\rangle$ and $\langle n\rangle$ are, respectively, the average mass and number density of cluster galaxies (corrected for field contamination) inside a radius $r$, whereas $\rho_u$ and $n_u$ are the mean mass density and the mean number density of galaxies in the universe. The spatial distribution of galaxies $n(r)$ has been estimated by inverting the analytical fit to the observed projected number density profile $N(r)$ given by eq.~(\ref{den_2d}). The mean number density of field galaxies $n_u$ has been calculated by integrating the Schechter (1976) luminosity function obtained from the CfA survey (de Lapparent, Geller, \& Huchra 1989) to the limiting magnitude of the Dressler sample, $m_v=16.0$. This has been converted to the $B(0)$ scale of the CfA survey assuming $m_{B(0)}-m_v=0.8$. We obtain $n_u=2.28\times 10^{-3}\rm\,galaxies\,Mpc^{-3}$. The caustics of Fig.~5 correspond to $\Omega_{0}=0.2$, 0.5, and 1, with the amplitude increasing with the value of $\Omega_{0}$. Clearly, no sharp boundary is observed in the velocity distribution of the galaxies in this sample that would mark the transition between the triple-valued and single-valued regions, which suggests that some of the simplifying assumptions implicit in the Reg\"os \& Geller model are probably too stringent (see, for instance, van Haarlem et al. 1993). But the caustics derived from the spherical infall model can still be used effectively to accomplish our purpose of defining strict cluster membership if they are supplemented by further inspection of both the spatial and velocity distributions of candidate A2634 members. Figure 6a shows an enlargement of Fig.~5, focusing on galaxies in the inner $2\deg$ around the center of A2634, while Figure 6b displays their corresponding sky distribution.
Two groups of galaxies between the $\Omega_{0}=0.5$ and $\Omega_{0}=1.0$ caustics can be separated from strict cluster members: one group probably located in the foreground of the cluster (hereinafter A2634--F) between 7000 and 8000 $\rm km\,s^{-1}\,$, and a second in the background (hereinafter A2634--B) between $11{,}000$ and $12{,}000$ $\rm km\,s^{-1}\,$, already identified as a detached group by Pi93. Both groups have small velocity dispersions, are separated by large gaps in radial velocity from other galaxies at a comparable distance from the cluster center, and appear to be concentrated in small areas of the sky, to the east of the A2634 core (approximately $1\fdg5$ to the southwest of the center of A2634 there is a possible third group of foreground objects with velocities around 7000 $\rm km\,s^{-1}\,$, but the evidence for its physical reality is poor). We assign separate dynamical identity to these two groups; additional justification for this choice is presented in \S~6.2. Accordingly, we choose to consider as {\it bona fide\/} A2634 members only those galaxies inside the $\Omega_{0}=0.5$ caustic. We caution again that this choice represents more an operational criterion for defining cluster membership than a statement on the value of $\Omega_0$. \subsection{Implications of Cluster Membership on Peculiar Motion Measurements} Lenient assignment of cluster membership and systematically different kinematic behavior among galaxies of different morphology may be factors in producing conflicting estimates of the peculiar velocity of a cluster with respect to a reference frame comoving with the Hubble flow. A2634 constitutes a glaring example of conflicting results: the peculiar velocity of the cluster as inferred by Lucey et al. (1991a), who applied the $D_n-\sigma$ technique to a sample of 18 early-type galaxies, is $-3400\pm 600$ $\rm km\,s^{-1}\,$, while that inferred from a sample of 11 spirals by Aaronson et al. (1986) applying the TF technique is essentially zero. In their analysis, Lucey et al. noticed that the Aaronson et al. sample could be divided into two velocity subsets, one associated with A2634 with 6 members and a CMB systemic velocity of $8692\pm 92$ $\rm km\,s^{-1}\,$, and the other associated with A2666 with 5 galaxies and $V_{CMB}=7195\pm 349$ $\rm km\,s^{-1}\,$, but the TF distances they inferred for these two subsets still implied negligible peculiar motions. We note, however, that according to the membership criteria discussed in \S~3.2 only 5 galaxies of the Aaronson et al. late-type sample are proper members of A2634, while 2 are peripheral members (they are within the $\Omega_0 = 0.5$ caustic, but at a distance greater than $2\deg$ from the A2634 center), 2 are members of the A2634--F group, and 2 are A2666 members. Clearly, the situation on the TF side is far from satisfactory. On the other hand, the origin of the discrepant $D_n-\sigma$ distance cannot be fully attributed to the presence of separate dynamical units in the region, or to ill-defined membership criteria, because the 18 early-type galaxies (9 E's and 9 S0's) used by Lucey et al. (1991a) for their distance determination are all A2634 members. A full discussion of other possible origins of that discrepancy is beyond the scope of this paper. However, we point out that a surface brightness bias similar to the one observed in the elliptical population of the Coma Cluster by Lucey, Bower \& Ellis (1991b), and discussed and dismissed for A2634 by Lucey et al.
(1991a), is not only expected in a cluster at a distance comparable to that of A2634, but should be large enough to produce a spurious peculiar motion of precisely the magnitude derived by these authors for A2634. Better understanding of this complicated scenario should be provided by TF and $D_n-\sigma$ distance determinations based on the larger samples of strict cluster members currently being analyzed. \section{Kinematics and Spatial Analysis of A2634} Following the recommendations of Beers, Flynn, \& Gebhardt (1990), we characterize the velocity distribution of the A2634 galaxies by means of the biweight estimators of central location, $V$ (i.e., systemic velocity, sometimes referred to in the literature as $C_{\rm BI}$), and scale, $\sigma$ (i.e., velocity dispersion, sometimes referred to in the literature as $S_{\rm BI}$), because of their robustness (i.e., insensitivity to the probabilistic model adopted for the observed data) and resistance (i.e., insensitivity to the presence of outliers). The errors associated with these estimates, $\epsilon_V$ and $\epsilon_\sigma$ respectively, correspond to $68\%$ bootstrap confidence intervals based on $10{,}000$ resamplings. The program ROSTAT, kindly provided by T. Beers, is used for all these calculations. Among the wide variety of statistical tests implemented in this program to assess the gaussianity of the empirical distribution, we quote here the results for the $W$-, $B_{1}$-, $B_{2}$-, and $A^2$-tests (definitions of these tests can be found, for instance, in Yahil \& Vidal 1977 and D'Agostino 1986). The scaled tail index TI (Rosenberger \& Gasko 1983; see also Bird \& Beers 1993) is applied to determine the amount of elongation of the empirical distribution relative to the gaussian (which is neutrally elongated by definition: TI=1.0). Distributions which have more strongly populated tails than the gaussian have TI$>$1.0, while those with underpopulated tails have TI$<$1.0. Finally, we also investigate the existence in the velocity distribution of weighted gaps of size 2.75 or larger, corresponding to a probability of occurrence in a normal distribution of less than $1\%$ (Beers et al. 1990). We refer the reader to the listed references for a detailed explanation of these statistical techniques. If the statistical sample is large enough, the analysis of the possible correlation between galaxy morphology and kinematics provides a tool that is helpful in assessing the dynamical and evolutionary state of a cluster of galaxies. Several studies (e.g., Kent \& Gunn 1982; Tully \& Shaya 1984; Sodr\'e, Capelato, \& Steiner 1989) show evidence for a higher velocity dispersion of the spiral population than that of the early types in clusters. This result is consistent with the idea that the spiral galaxies are currently infalling and have not suffered appreciable dynamical mixing with the virialized core. Should this picture turn out to be correct, it would imply that the spiral fraction in clusters is increasing with time, and/or that environmental effects are an important factor in determining galaxy evolution and the relation between morphology and density, transforming spiral galaxies into earlier morphological types. For this reason, we split each one of the samples selected for A2634 into two subsets of early- and late-type galaxies and comparatively investigate their kinematics and spatial distribution.
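For reference, the biweight estimators described above can be sketched as follows. This is a minimal sketch rather than the ROSTAT code; it uses the standard tuning constants $c=6$ for location and $c=9$ for scale of Beers et al. (1990):
\begin{verbatim}
import numpy as np

def biweight_location(v, c=6.0):
    # C_BI of Beers, Flynn & Gebhardt (1990)
    M = np.median(v)
    u = (v - M) / (c * np.median(np.abs(v - M)))
    w = np.abs(u) < 1.0
    num = ((v - M) * (1.0 - u**2)**2)[w].sum()
    den = ((1.0 - u**2)**2)[w].sum()
    return M + num / den

def biweight_scale(v, c=9.0):
    # S_BI; the quoted errors are then 68% bootstrap
    # intervals over many resamplings of v
    M = np.median(v)
    u = (v - M) / (c * np.median(np.abs(v - M)))
    w = np.abs(u) < 1.0
    num = ((v - M)**2 * (1.0 - u**2)**4)[w].sum()
    den = ((1.0 - u**2) * (1.0 - 5.0 * u**2))[w].sum()
    return np.sqrt(v.size * num) / abs(den)
\end{verbatim}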
The results of the kinematical analysis presented in this and in the following sections for all the clusters within the $6\deg\!\times 6\deg$ field are summarized in Table~4. The name of the cluster and of the subsample considered are listed in columns (1) and (2), and the number of galaxies in the sample is in column (3). Columns (4)--(7) list the heliocentric systemic velocity $V_{hel}$, the dispersion $\sigma$, and their associated $68\%$ bootstrap errors. Column (8) gives the value of the cluster Abell radius $r_A$ in degrees, calculated using cosmological distances. Columns (9) and (10) give the masses of the various clusters, as obtained in \S~6.1. All the relevant statistical results for the same samples, and their associated probabilities, are summarized in Table 5. In column (1) we list the name of the sample and in column (2) the number of galaxies in it. Columns (3)--(10) list the values of the test statistic and associated significance levels for the $W$-, $B_{1}$-, $B_{2}$- and $A^2$-tests, respectively. The significance levels refer to the probability that the empirical value of a given statistic could have arisen by chance from a parent gaussian distribution, i.e., the smaller the quoted probability, the more significant the departure from the null hypothesis. We choose to discuss only those cases where the gaussian hypothesis can be rejected at better (i.e., lower) than the $10\%$ significance level. Column (11) reports the value of the scaled tail index TI. The number of weighted gaps of size equal to or larger than 2.75 and the cumulative probability of finding this many highly significant weighted gaps somewhere in the distribution are listed in columns (12) and (13), respectively. \subsection{The Two Degree (TD) Sample} After the contamination due to nearby groups has been removed via the procedure described in \S~3.2, our sample of bona fide A2634 members within a radius of 2 degrees (hereinafter referred to as the ``TD sample'') is composed of 200 galaxies. The stripe density plot of heliocentric radial velocities and the corresponding histogram for this sample are shown in Figure~7. For this sample $V_{hel}=9249^{+55}_{-56}\,$ $\rm km\,s^{-1}\,$ and $\sigma =716^{+48}_{-35}\,$ $\rm km\,s^{-1}\,$, after applying relativistic and measurement error corrections (Danese, De Zotti, \& di Tullio 1980). In the direction of A2634, the motion of the Sun with respect to the Local Group (LG) of galaxies gives a correction $\Delta V_{LG}=V_{LG}-V_{hel}=244$ $\rm km\,s^{-1}\,$, while the corresponding correction to the CMB rest frame (Smoot et al. 1991) is $\Delta V_{CMB}=V_{CMB}-V_{hel}= -364$ $\rm km\,s^{-1}\,$. At the cosmological distance of A2634, one $r_A$ corresponds to 1.02 degrees. The visual inspection of Fig.~7 yields a suggestive but inconclusive indication of deviation of the velocity distribution from gaussianity. The verdict of the statistical tests, unsurprisingly, is equally ambiguous. The results of the $B_{1}$-, $B_{2}$- and $A^2$-tests do not indicate significant departures from normality. However, the sensitive $W$-test gives only a $7\%$ probability that the observed distribution could have arisen by chance from a parent gaussian population. The tail index TI indicates slight contamination of the tails of the distribution with respect to the gaussian. The velocity distribution also contains one highly significant weighted gap with a {\it per gap\/} probability of 0.006 (the cumulative probability is not statistically significant), indicated by the arrow in Fig.~7.
The size of the corresponding gap is 83 $\rm km\,s^{-1}\,$. The presence of this gap in the observed distribution suggests (but only weakly) that the underlying parent distribution may be kinematically complex. For the TD sample, its subdivision into two subsets of early- (E and S0) and late-type (S and Irr) galaxies reveals that the kinematical characteristics of the two populations are different. Figure~8a shows the stripe density plot of radial velocities and histogram for the early-type subsample, which has $V_{hel}=9276^{+67}_{-73}\,$ $\rm km\,s^{-1}\,$ and $\sigma=658^{+62}_{-42}\,$ $\rm km\,s^{-1}\,$. Despite the low value of the tail index (TI=0.91) and the existence of one highly significant gap of size 94 $\rm km\,s^{-1}\,$ (identified by the arrow in Fig.~8a) near 9000 $\rm km\,s^{-1}\,$, which weakly suggest possible bimodality in the parent distribution, none of the gaussianity tests reveals inconsistency with a normal parent population. On the contrary, the velocity distribution of the late-type subsample, represented in Figure~9a, is clearly non-gaussian (the most sensitive tests of normality, $W$ and $A^2$, reject the gaussian hypothesis at better than the $3\%$ level of significance) and bimodal. Two highly significant and very close gaps of 334 and 192 $\rm km\,s^{-1}\,$, with {\it per gap\/} probabilities of only 0.0005 and 0.002, respectively, can be seen on both sides of two galaxies with radial velocities of 8500 $\rm km\,s^{-1}\,$, splitting the distribution of velocities into two modes which peak around 7700 and 9300 $\rm km\,s^{-1}\,$ and are responsible for the high value of the tail index (TI=1.23). The 7700 $\rm km\,s^{-1}\,$ mode contains four times fewer galaxies than the 9300 $\rm km\,s^{-1}\,$ mode, whose location and scale approximately coincide with those of the velocity distribution of the early-type galaxies. This result implies that the low-velocity tail seen in the velocity histogram of the TD sample (Fig.~7) is largely dominated by late-type galaxies (notice that the 10 likely nonmembers discussed in the Pi93 study have already been removed from the present sample, as they all belong to the A2634--B and A2634--F groups). Although a two-sample Kolmogorov-Smirnov (KS) test gives a probability of $25\%$ for the null hypothesis that the early- and late-type subsamples are drawn from the same parent kinematical population, the systemic velocity of the late-type galaxies, $V_{hel}=9092^{+99}_{-126}\,$ $\rm km\,s^{-1}\,$, is marginally inconsistent, within the adopted uncertainties, with that of the early-type subsample. In the next section we shall see that the distributions of early- and late-type galaxies in the central regions of the cluster are even more markedly different. The results of the statistical analysis of these two subsamples are summarized in Table 5. The spatial distributions of the galaxies in the two subsamples are shown in Figures 8b and 9b, with coordinates measured with respect to the center adopted for A2634 (see \S~3.2). Crosses identify galaxies with radial velocities less than 8700 $\rm km\,s^{-1}\,$, while open circles identify those galaxies with velocities equal to or above this limit. Although there is no noticeable spatial segregation among the galaxies belonging to each of these two velocity subgroups, the comparison of the spatial distributions of the two subsamples reveals two remarkable aspects.
First, and in agreement with the morphology/density (Dressler 1980a) and morphology/clustercentric distance relations (Whitmore \& Gilmore 1991), the early-type subset is strongly concentrated towards the center of the cluster (median radial distance of 0.24 degrees), while the late-type galaxies are less strongly so (median radial distance of 0.45 degrees). In addition, the spatial distribution of the early-type population reveals an apparent scarcity of these galaxies at distances larger than $30\arcmin$ from the cluster center along the north-south direction, while that of the late-type population does not show any noticeable asymmetry. One possible interpretation of this apparent elongation is that the cluster has suffered a recent merger. Indeed, this possibility has already been suggested by Pi93, who drew attention to the elongation of the ICM, in a direction consistent with that of the galaxy component (see also Eilek et al. 1984 and our Fig.~17). These authors claim that the rough alignment of the position angle of the X-ray image with the direction of the axis of symmetry of the WAT radio source and the lack of a cooling flow may be explained by the recent occurrence of a cluster-subcluster merger along a line contained in the plane of the sky. In this scenario, the quasi-stationary WAT is shaped and powered by the ICM of the merging subunit, which provides the high relative velocities ($\gtrsim 1000$ $\rm km\,s^{-1}\,$) required by the ram pressure model for the bending to be possible. We point out, nonetheless, that for clusters immersed in a very large supercluster structure, like A2634, the tidal interaction caused by the supercluster may also account for the observed elongation of the galaxy component and the ICM, and their alignment with the first-ranked galaxy (Salvador-Sol\'e \& Solanes 1993). In the next three sections, we will study in detail the dynamical status of the galaxy component of A2634 and apply a series of statistical tests for the detection of significant spatial and kinematical substructure that can add further support to the merger hypothesis. Since an adequate study of substructure in clusters requires magnitude-limited samples with a substantial number of galaxies and free from field contamination, we will concentrate our subsequent analysis on Dressler's (1980b) magnitude-limited sample of the central region of A2634. \subsection{The Half Degree (HD) Sample} Within the inner $0\fdg5$ from the A2634 center (approximately 0.5 $r_A$), Dressler (1980b) has catalogued 132 galaxies brighter than $m_{v}\simeq 16$. Radial velocities are available for 113 of the galaxies in this flux-limited catalog (i.e., a completeness of $86\%$). Among these 113 galaxies, we find 99 cluster members and 14 outliers, which implies that we should expect to find about 2 more outliers among the 19 galaxies for which a radial velocity is unavailable. Hence the sample of $99+19=118$ cluster galaxies can be considered a magnitude-limited sample almost free of field contamination. We shall distinguish between the latter sample of 118 A2634 members within the central half degree region (hereinafter referred to as the ``HD sample'') and its subset of 99 galaxies with known redshift (hereinafter referred to as the ``HDR sample''). Figure~10 contains the stripe density plot and velocity histogram for the 99 galaxies in the HDR sample. For this sample $V_{hel}=9151^{+84}_{-91}\,$ $\rm km\,s^{-1}\,$ and $\sigma=800^{+61}_{-52}\,$ $\rm km\,s^{-1}\,$.
The results of the $W$- and $A^2$-tests indicate a more significant departure from the gaussian model than for the TD sample, giving probabilities of a gaussian parent population of only $2\%$ and $3\%$, respectively. The velocity distribution also appears relatively skewed, but the $B_1$-test cannot reject the gaussian hypothesis ($p(B_1)=0.29$). Similarly, the apparent bimodality of the velocity histogram is not supported by the presence of significant gaps in the velocity distribution. Figures 11 and 12 show, respectively, the velocity and spatial distributions of the subsets of early- and late-type galaxies. Although the velocity distribution of the E+S0 population has underpopulated tails (TI=0.89) and contains one highly significant gap of 146 $\rm km\,s^{-1}\,$ centered around 8770 $\rm km\,s^{-1}\,$, all the statistical tests give markedly non-significant rejection levels for the gaussian hypothesis (see Table 5). For this subsample $V_{hel}=9240^{+79}_{-89}\,$ $\rm km\,s^{-1}\,$ and $\sigma=661^{+74}_{-52}\,$ $\rm km\,s^{-1}\,$. In contrast, the velocity distribution of the late-type subsample is clearly non-gaussian (with the exception of the $B_1$-test, all tests reject the gaussian hypothesis at better than the $3\%$ significance level) and multimodal, with suggestive evidence of three possible kinematical subunits with similar numbers of members. The small number of galaxies, however, hampers the detection of highly significant gaps: only one gap of 548 $\rm km\,s^{-1}\,$ centered around 8400 $\rm km\,s^{-1}\,$ falls within this category, the corresponding cumulative probability of finding such a weighted gap somewhere in this distribution being $1\%$. Two of the modes are located in the lower and upper tails of the global velocity distribution, implying that only $\sim 1/3$ of the late-type galaxies have velocities close to the systemic velocity of the cluster. Accordingly, the main moments of the velocity distribution of this late-type subset, $V_{hel}=8897^{+206}_{-246}\,$ $\rm km\,s^{-1}\,$ and $\sigma=1030^{+116}_{-86}\,$ $\rm km\,s^{-1}\,$ (see also Table 4), have values that are incompatible, within the adopted uncertainties, with those of the early-type population. Apart from the dynamical complexity of A2634, the HD sample also reconfirms the presence of significant morphological segregation in this cluster. Figures 11b and 12b show that the early-type galaxies dominate in the densest regions, where the late-type galaxies are almost absent. Superposed on the spatial distribution of galaxies in both subsamples is the adaptive kernel density contour map (Silverman 1986; Beers 1992) drawn using the 118 galaxies in the HD sample. The adaptive kernel technique is a two-step procedure which, after applying a pilot smoothing to estimate the local density of galaxies, uses a smoothing window whose size decreases with increasing local density, so that the statistical noise in low-density regions can be suppressed without oversmoothing the high-density ones. Two noticeable subclumps can be seen in the figures, the location of the secondary subclump being consistent with the northeast elongation seen in the X-ray image of Pi93. To check if the apparent substructures seen in the density map correspond to kinematically different subunits, we have used different symbols for the galaxies according to three velocity ranges: 7000--8700 (crosses), 8700--9500 (circles) and 9500--$11{,}500$ $\rm km\,s^{-1}\,$ (asterisks).
In both the early- and late-type subsamples the velocities of the galaxies are totally independent of their sky positions. The adaptive kernel map obtained using only the galaxies in the HDR sample shows a close similarity in the form of the contours to the one presented here for the HD sample. The agreement between the two maps implies that the detected subclumps are not due to chance projections of field galaxies mistakenly included in the HD sample, and confirms a posteriori its fairness. The above results suggest that the early- and late-type galaxy populations of the HD sample have indeed very different spatial and kinematical properties (a two-sided KS test gives a probability of only $1\%$ that a difference as large as, or larger than, the one observed between the two velocity distributions can occur by chance). The spatial and velocity distributions of the early-type galaxies are compatible with this population being dynamically relaxed. On the other hand, the absence of spatial segregation among the different kinematical subunits of the late-type population and their narrow velocity dispersions suggest that this kinematical multimodality is more the signature of the infall of individual galaxies or small groups onto the cluster, taking place mainly along the line-of-sight, than that of a recent merger with a dispersed subcluster moving in the plane of the sky. It is also interesting to note that the correlation between velocity dispersion and X-ray temperature in clusters of galaxies recently published by Lubin \& Bahcall (1993) predicts a velocity dispersion for A2634 of $690\pm 140$ $\rm km\,s^{-1}\,$, very close to the value derived here for the early-type subsample. Accordingly, we adopt the values of the velocity centroid and dispersion of the early-type population in the HDR sample, i.e., $9240\pm 84$ $\rm km\,s^{-1}\,$ and $661 \pm 63$ $\rm km\,s^{-1}\,$, respectively (here the quoted uncertainties correspond to the average of the $68\%$ bootstrap errors), as representative of the whole cluster. Note that these values are in excellent agreement with those obtained by Pi93 for their restricted sample of 88 galaxies free from substructure and spatial/kinematical outliers. \subsection{The Velocity Offset of NGC 7720} The significance of the velocity offset of NGC 7720 with respect to the systemic velocity of A2634 can be determined by the expression (Teague, Carter, \& Gray 1990): \begin{equation} S=\mid \Delta V_{off}\mid /(\epsilon_{clus}^{2}+\epsilon_{cD}^{2})^{1/2}\;, \end{equation} where $\Delta V_{off}$ is the (relativistically correct) velocity offset of the cD, and $\epsilon_{clus}$ and $\epsilon_{cD}$ are the corresponding uncertainties in the velocity of, respectively, the cluster and the cD galaxy. Using the velocity centroid adopted for A2634 and a heliocentric velocity of NGC 7720 equal to $V_{\rm cD}=9154\pm 59$ $\rm km\,s^{-1}\,$ (Pi93), the corresponding velocity offset is $\Delta V_{off}=-83\pm 99$ $\rm km\,s^{-1}\,$ (the errors in the velocities of A2634 and NGC 7720 have been summed in quadrature). Thus, we have $S=0.84$, implying the (quasi-)stationarity of the cD relative to the cluster, in agreement with the result of Pi93.
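As a concrete check, the computation above reduces to a few lines of arithmetic. The sketch below reproduces, to within rounding, the numbers quoted; the only assumption is that the raw offset is referred to the cluster rest frame by dividing by $(1+z)$, which is the standard relativistic correction.

\begin{verbatim}
import numpy as np

C = 299792.458                       # speed of light [km/s]
v_clus, e_clus = 9240.0, 84.0        # adopted A2634 centroid [km/s]
v_cd,   e_cd   = 9154.0, 59.0        # NGC 7720 (Pi93) [km/s]

one_plus_z = 1.0 + v_clus / C
dv = (v_cd - v_clus) / one_plus_z          # ~ -83 km/s
e  = np.hypot(e_clus, e_cd) / one_plus_z   # ~  99 km/s
S  = abs(dv) / e                           # ~ 0.84
print(f"dV = {dv:.1f} +/- {e:.1f} km/s,  S = {S:.2f}")
\end{verbatim}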
\subsection{Analysis of Substructure} Both the density contours obtained via the adaptive kernel method and the shape of the velocity histograms (especially that of the late-type population) suggest the existence of substructure in the inner regions of A2634. We ascertain the statistical significance of such apparent substructure with the application of specifically designed tests. In subsections 4.4.1 and 4.4.2 we scrutinize the spatial correlation properties of the galaxy distribution, while in subsection 4.4.3 we concentrate on the detection of significant local deviations from the global kinematics of the cluster. \subsubsection{Spatial Tests. The SSG Test} We first consider the test developed by Salvador-Sol\'e et al. (1993a; hereinafter referred to as the SSG test), which has been shown to be well suited for the detection of small-scale substructure in systems with circular or elliptical self-similar symmetry and with a small number of particles. Let $s$ be the projected radial distance from the center of symmetry of a (circularly symmetrized) cluster and $N(s)$ the projected number density profile of galaxies. The SSG test produces two different estimates of $N(s)$: $N_{dec}(s)$ and $N_{dir}(s)$, which are, respectively, sensitive and insensitive to the existence of correlation among galaxy positions relative to the cluster background density; the difference between the two is interpreted as an indication of substructure. The estimate $N_{dec}(s)$ is obtained by inverting (``deconvolution method'') the relation: \begin{equation} \label{dec} \Sigma(s)=\pi s(N_{dec}\ast N_{dec})(s)\;, \end{equation} where $\Sigma(s) \delta s$ is the number of pairs of galaxies with observed separation between $s$ and $s + \delta s$, among the $N_{gal}(N_{gal}-1)/2$ pairs obtained from the $N_{gal}$ galaxies in the cluster sample. $(N_{dec}\ast N_{dec})(s)$ is the autocorrelation of $N_{dec}(s)$, which, for a radially symmetric function, is also equal to its self-convolution. The estimate $N_{dir}(s)$ is obtained by inverting (``direct method'') the relation: \begin{equation} \label{dir} \Pi(s)=2\pi sN_{dir}(s)\;, \end{equation} where $\Pi(s) \delta s$ is the number of galaxies at projected distances between $s$ and $s + \delta s$ from the center of symmetry of the galaxy distribution. Contrary to $N_{dec}$, $N_{dir}$ does not rely on the relative separations of galaxies; it is therefore insensitive to the existence of correlation in galaxy positions. In practice, in order to use the full positional information in the data, the inversion of equations (\ref{dec}) and (\ref{dir}) relies on the cumulative forms $\int_{s}^{\infty}\Sigma(x)\,dx$ and $\int_{s}^{\infty}\Pi(x)\,dx$, rather than the distributions $\Sigma(s)$ and $\Pi(s)$ themselves, so that \begin{equation}\label{n_dec} N_{dec}(s)={{\cal F}_1 \circ {\cal A}}\biggl[{{\cal A}\circ {\cal F}_{1}^{-1}} \biggl( 2\int_{s}^{\infty}\Sigma(x)\,dx\biggr)\biggr]^{1/2}\;, \end{equation} and \begin{equation}\label{n_dir} N_{dir}(s)={{\cal F}_1 \circ {\cal A}}\biggl[{{\cal A}\circ {\cal F}_{1}^{-1}} \biggl(\int_{s}^{\infty}\Pi(x)\,dx\biggr)\biggr]\;, \end{equation} where ${\cal F}_1$ and $\cal A$ stand, respectively, for the one-dimensional Fourier and Abel transformations (Bracewell 1978), and where the symbol ``$\circ$'' denotes the composition of functions. Before we proceed further, a caveat is necessary. The profile $N_{dec}$ cannot be inferred with an arbitrarily high spatial resolution. Because the observables are not continuous functions, radial symmetry is always broken at small enough scales. This causes the argument of the square root in eq.~(\ref{n_dec}) to take negative values due to statistical fluctuations, which requires that we filter out the highest spatial frequencies.
This results in a final $N_{dec}$ profile convolved with a Hamming window of smoothing size $\lambda_{min}$, corresponding to the minimum resolution-length that guarantees the fulfillment of the radial symmetry condition. Although this additional smoothing is unnecessary for $N_{dir}(s)$, it must also be applied to that profile in order not to introduce any bias in the subsequent comparison of the two profiles. The significance of substructure is estimated from the null hypothesis that $N_{dec}(s)$ arises from a poissonian realization of some unknown spatial distribution of galaxies that led to {\it the observed distribution\/} of radial distances. The probability of this being the case is calculated by means of the statistic \begin{equation}\label{chi2} \chi^2 = {(N_{dec}(0)-N_{dir}(0))^2 \over{2S^2(0)}}\;, \end{equation} for one degree of freedom. In eq.~(\ref{chi2}), $N_{dec}(0)$ and $N_{dir}(0)$ are the values of the profiles $N_{dec}$ and $N_{dir}$ at $s=0$, and $S^2(s)$ is the radial run of the variance of the $N_{dir}$ profiles of 100 simulated clusters convolved to the same resolution-length $\lambda_{min}$ as the observed profile. These simulated clusters are generated by the azimuthal scrambling of the observed galaxy positions around the center of symmetry of the cluster, i.e., by randomly shuffling between 0 and $2\pi$ the azimuthal angle of each galaxy, while maintaining its clustercentric distance $s$ unchanged. Figure 13a shows the projected density profiles $N_{dec}$ and $N_{dir}$ and their associated standard deviations for the galaxies in the circularized HD sample. It is readily apparent from this figure that both profiles are equal within the statistical uncertainties; this is confirmed by the inference from eq.~(\ref{chi2}) that the probability that the two profiles are the same is $60\%$. The resulting minimum resolution-length of $0.27\rm\,Mpc$ puts an upper limit on the half-coherence length of any possible clump that may remain undetected in the central regions of A2634. Notice that this value is much lower than the typical value of $0.6\rm\,Mpc$ inferred by Salvador-Sol\'e, Gonz\'alez-Casado, \& Solanes (1993b) for the scale-length of the clumps detected in the Dressler \& Shectman (1988a) clusters. \subsubsection{Spatial Tests. The SSGS Test} Another useful estimate of the significance of subclustering is that proposed by Salvador-Sol\'e et al. (1993b; hereinafter referred to as the SSGS test), which can be considered a modification of the SSG test and is based on the density in excess of neighbors around a random galaxy in a cluster. The quantity \begin{equation}\label{excess} N_{gal}^{-1}(N_{dir}\ast N_{dir})(s)\bar \xi(s)\;, \end{equation} represents the probability in excess of random, per unit volume, of finding one cluster galaxy in an infinitesimal volume $\delta V$ located at a distance $s$ from a random cluster galaxy. In eq.
(\ref{a2pcf}), the ``average two-point correlation function'' statistic $\bar \xi(s)$, a generalization for isotropic but inhomogeneous systems of the usual two-point correlation function, is given by the expression: \begin{equation}\label{a2pcf} \bar \xi(s)={(N_{dec}\ast N_{dec})(s)-(N_{dir}\ast N_{dir})(s)\over{(N_{dir} \ast N_{dir})(s)}}\;, \end{equation} with \begin{equation}\label{ndec2} (N_{dec}\ast N_{dec})(s)={{\cal F}_1\circ {\cal A}}\biggl[{{\cal A}\circ {\cal F}_{1}^{-1}}\biggl( 2\int_{s}^{\infty}\Sigma(x)\,dx\biggr)\biggr]\;, \end{equation} and \begin{equation}\label{ndir2} (N_{dir}\ast N_{dir})(s)={{\cal F}_1\circ {\cal A}}\biggl[{{\cal A}\circ {\cal F}_{1}^{-1}}\biggl(\int_{s}^{\infty}\Pi(x)\,dx\biggr)\biggr]^{2}\;. \end{equation} In this case, the statistical significance of substructure is obtained by checking the null hypothesis that $N_{dec}(s)$ arises from a poissonian realization of some unknown spatial distribution of galaxies, which is approximated by $N_{dir}(s)$. In practice, the presence of substructure is estimated by comparing the empirical function given by eq.~(\ref{excess}) with the mean and one standard deviation of the same function obtained from a large number of poissonian cluster simulations (i.e., both the radius and the azimuthal angle of each galaxy are chosen at random) that reproduce the profile $N_{dir}(s)$. In the SSGS test, the use of poissonian simulations, instead of angular scramblings, to estimate the statistical uncertainties translates into a partial loss of sensitivity in the detection of substructure when compared with the SSG test (see Salvador-Sol\'e et al. 1993b for further details). This is compensated, however, by the fact that the two functions $N_{dec}\ast N_{dec}$ and $N_{dir}\ast N_{dir}$ (eqs. [\ref{ndec2}] and [\ref{ndir2}], respectively) can be inferred with arbitrarily high spatial resolution, so there is no lower limit on the size of the clumps that can be detected, in contrast with the SSG test. Figure 13b shows the density in excess of neighbors (eq.~[\ref{excess}]) for the galaxies in the circularized HD sample, calculated between its center and twice its maximum radius. The dashed line and the vertical solid lines represent, respectively, the mean value of this function and its $1\sigma$-error calculated from 200 poissonian simulations. A low-pass Hamming filter leading to a resolution length of $0.05\rm\,Mpc$ has been applied to attenuate the statistical noise at galactic scales. As the ``signal'' is embedded in the ``noise'', it is clear from this figure that the SSGS test also fails to detect any significant substructure in the HD sample. Notice that both the SSG and the SSGS tests detect substructure in 50\% of the Dressler \& Shectman clusters (Salvador-Sol\'e et al. 1993a,b). \subsubsection{Kinematical Test} The Dressler \& Shectman (1988b) statistical test (hereinafter referred to as the DS test) complements the above spatial correlation tests because it is sensitive to kinematical substructure in the form of significant local deviations from the global distribution of radial velocities. The DS test is based on the comparison of the local velocity mean, $\bar V_{local}$, and velocity dispersion, $\sigma_{local}$, associated with each galaxy with measured radial velocity (calculated using that galaxy and its 10 nearest projected neighbors with measured velocities) with the mean velocity, $\bar V$, and velocity dispersion, $\sigma$, of the entire sample.
For each galaxy, the deviation from the global values is defined by \begin{equation} \label{delta} \delta^{2}={11\over{\sigma^{2}}}[(\bar V_{local}-\bar V)^{2}+(\sigma_{local}- \sigma)^{2}]\;. \end{equation} The observed cumulative deviation $\Delta_{obs}$, defined as the sum of the $\delta$'s for all the galaxies with measured radial velocities, is the statistic used to quantify the presence of substructure. To avoid making any hypothesis about the form of the velocity distribution of the parent population, this statistic is calibrated by Monte-Carlo simulations that randomly shuffle the velocities of the galaxies while keeping their observed positions fixed. In this way any existing correlation between velocities and positions is destroyed. The significance of subclustering is then given in terms of the fraction of simulated clusters whose cumulative deviation $\Delta_{sim}$ is larger than $\Delta_{obs}$. A visual judgment of the statistical significance of local deviations from the global kinematics for the galaxies in our HDR sample can be made by comparing the plots in Figures 14a--d. Fig.~14a shows the spatial distribution of the HDR sample galaxies (filled circles) superposed on the adaptive kernel density contour map of the HD sample (dashed lines). The coordinates are measured with respect to the center adopted for A2634. In Fig.~14b each galaxy is identified with a circle whose radius is proportional to $e^\delta$ (with $\delta$ given above). Hence, the larger the circle, the larger the deviation from the global values (note, however, that the definition of $\delta$ given by eq.~[\ref{delta}] is insensitive to the sign of the deviations from the mean cluster velocity). The superposition of the projected density contours (dashed lines) shows that, among the subclumps seen in the adaptive kernel map, the small density enhancement at plot coordinates (-15,3), and to a lesser extent the density enhancement at (8,12), are associated with apparently large local deviations from the global kinematics. The remaining figures show two of the 1000 Monte-Carlo models performed: Fig.~14c corresponds to the one whose $\Delta_{sim}$ is closest to the median of the $\Delta$'s of all the simulations, while Fig.~14d corresponds to the simulation whose $\Delta_{sim}$ is closest to the value of the upper quartile. The comparison of Fig.~14b with these last two figures reveals that the observed local deviations from the global kinematics in the HDR sample are indeed statistically insignificant. This is corroborated by the fact that the value of $\Delta_{sim}$ is larger than $\Delta_{obs}$ in more than $71\%$ of the Monte-Carlo models. Even if we run the same test for the subset of late-type galaxies, the value of $\Delta_{sim}$ is still larger than $\Delta_{obs}$ in $57\%$ of the simulated clusters. Based on the results of the above spatial and kinematical analysis, we consider the apparent clumpiness in the central regions of A2634 seen in the kernel map as statistically insignificant (i.e., consistent with poissonian fluctuations of the galaxy distribution). The kinematical test has also provided quantitative confirmation of the fact that the velocities of the late-type population are not segregated in the plane of the sky, although we cannot exclude the possibility that an important fraction of the spirals is located in loose groups superposed along the line-of-sight.
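For completeness, a compact implementation of the DS statistic and its Monte-Carlo calibration is sketched below (in Python, with illustrative function names of ours; it assumes arrays of sky positions and velocities, and local groups of $10+1$ galaxies as in the text).

\begin{verbatim}
import numpy as np

def ds_test(xy, v, n_nb=10, n_sim=1000, seed=0):
    # xy: (N, 2) projected positions; v: (N,) radial velocities
    def cum_delta(vel):
        vbar, sig = vel.mean(), vel.std(ddof=1)
        delta = np.empty(len(vel))
        for i in range(len(vel)):
            d = np.hypot(*(xy - xy[i]).T)
            nb = np.argsort(d)[:n_nb + 1]      # galaxy i plus 10 neighbours
            vloc, sloc = vel[nb].mean(), vel[nb].std(ddof=1)
            delta[i] = np.sqrt((n_nb + 1) / sig**2 *
                               ((vloc - vbar)**2 + (sloc - sig)**2))
        return delta.sum()

    d_obs = cum_delta(v)
    rng = np.random.default_rng(seed)
    # shuffling velocities over fixed positions destroys any v-x correlation
    d_sim = np.array([cum_delta(rng.permutation(v)) for _ in range(n_sim)])
    return d_obs, float((d_sim > d_obs).mean())
\end{verbatim}

A returned fraction as large as the values quoted above ($71\%$ and $57\%$) indicates no significant kinematical substructure.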
Therefore, if the merger scenario is preferred over other interpretative frameworks, one must conclude that the galaxy component of A2634 has relaxed faster after the collision than the gaseous one. Although N-body/hydrodynamical simulations of cluster formation have revealed the important role played by the shocks and turbulence generated in mergers in the evolution of the ICM, there is no general agreement on the timescales for the dynamical settlement of this component because of their dependence on the adopted initial conditions (e.g., Evrard 1990; Roettiger, Burns, \& Loken 1993), making it difficult to assess how likely such a scenario is. Perhaps, as suggested by the simulations of Schindler \& M\"uller (1993), the two-dimensional temperature distributions of the ICM, which are expected to be observable with the next generation of X-ray satellites, will provide a definitive answer to that question. \section{Kinematics and Spatial Analysis of the Other Clusters and Groups} In this section we investigate in some detail the spatial distribution and kinematics of the galaxies associated with A2666, the two clusters in the background of A2634, and the groups A2634--F and A2634--B. \subsection{A2666} For A2666, we limit our analysis to the galaxies located within $1\,r_A$ (1.16 degrees) of its center. In Figure 15, we show (a) the velocity distribution and (b) the spatial distribution of the 39 galaxies within this region that have velocities in the range 6500--9500 $\rm km\,s^{-1}\,$ and are not included in the TD sample of A2634. The spatial distribution (plot coordinates are given with respect to the adopted center for A2666; see \S~3.2) shows the existence of a central compact subunit containing a large fraction of the galaxies and a dispersed population significantly separated from the central condensation. The biweight location and scale of the whole sample are $V_{hel}=8118^{+81}_{-80}\,$ $\rm km\,s^{-1}\,$ and $\sigma=533^{+126}_{-98}\,$ $\rm km\,s^{-1}\,$. For A2666, $\Delta V_{LG}=239$ $\rm km\,s^{-1}\,$ and $\Delta V_{CMB}=-363$ $\rm km\,s^{-1}\,$. The velocity distribution appears to be marginally consistent with the gaussian hypothesis for all the statistical tests except the $A^2$-test, which rejects it at the $3\%$ level of significance. However, the most marked characteristic exhibited by the velocity histogram of these 39 galaxies is the presence of heavily populated tails (TI=1.56), strongly suggesting a complex velocity field. In an attempt to determine whether the shape of the velocity distribution is due to the presence of galaxies infalling towards the central subunit, we have identified galaxies with heliocentric velocities smaller than 7800 $\rm km\,s^{-1}\,$ with crosses, those in the interval 7800--8600 $\rm km\,s^{-1}\,$ around the main velocity peak with circles, and those with velocities larger than 8600 $\rm km\,s^{-1}\,$ with asterisks. The three kinematical subsets, however, appear well mixed in the sky. The solid part of the histogram in Fig.~15a shows the distribution of velocities of the 26 galaxies belonging to the central subunit (i.e., those within $0.5\,r_A$ of the cluster center). The velocity dispersion of the galaxies in this subsample is $\sigma=380^{+121}_{-78}\,$ $\rm km\,s^{-1}\,$, substantially smaller than that of the whole sample (the systemic velocity remains practically unchanged; see Table 4), but the tail index TI=1.75 is even larger.
None of the statistical tests can now reject the gaussianity of the parent distribution (see Table 5). The heavily populated tails of the velocity distribution of A2666 may also signal possible chance superpositions of objects not physically bound to the cluster. The presence of contaminating galaxies is to be expected both because of the supercluster in which A2634 and A2666 are embedded and because of the proximity of A2634 itself. Notice, for instance, that all four objects with radial velocity larger than 8600 $\rm km\,s^{-1}\,$ in Fig.~15a are well within the $\Omega_0 = 0.5$ caustic of Fig.~5, and therefore might be peripheral members of A2634. \subsection{The Distant Clusters} In \S~3.1, the presence of two background clusters in the A2634 region was briefly discussed. Figure 16 shows the sky distribution of the likely members of these two clusters within a $1\fdg5\!\times 1\fdg5$ region around the A2634 center. A total of 40 galaxies within this region have radial velocities in the range from $15{,}000$ to $21{,}000$ $\rm km\,s^{-1}\,$, which is dominated by the rich background cluster A2622. The biweight location and scale of this subset of galaxies are, respectively, $V_{hel}=18345^{+144}_{-150}\,$ $\rm km\,s^{-1}\,$ and $\sigma=942^{+165}_{-109}\,$ $\rm km\,s^{-1}\,$ (see also Table 4), and the velocity distribution appears to be fully consistent with a gaussian parent population (Table 5). A2622 appears to be embedded in a region of high galactic density, perhaps a supercluster, that extends several core radii to the southeast from its center. The peak density is located at $\rm R.A.\sim 23^h 32^m 40^s$, $\rm Dec.\sim 27\deg 4\arcmin$, approximately $0\fdg9$ to the NW of A2634. It coincides with one of the two secondary peaks of diffuse X-ray emission in the {\it ROSAT\/} PSPC X-ray map of A2634 shown in Figure 17, and is very close to the position of the galaxy that is probably the dominant galaxy of the cluster (the radio galaxy 4C 27.53; see Riley 1975). For the 31 galaxies with radial velocities in the range $35{,}000$--$41{,}000$ $\rm km\,s^{-1}\,$ spread over the same $1\fdg5\!\times 1\fdg5$ region, we find $V_{hel}=37093^{+192}_{-156}\,$ $\rm km\,s^{-1}\,$ and $\sigma=924^{+307}_{-265}\,$ $\rm km\,s^{-1}\,$. In this case, the $W$-, $B_1$- and $A^2$-tests reject the gaussian hypothesis at better than the $5\%$ level of significance, and the tail index, with a value of 1.51, signals the presence of heavily populated tails in the velocity distribution (similar results are obtained if the kinematical analysis is restricted to the 16 galaxies in the central subunit). These results suggest that this galaxy concentration (hereinafter CL37) has a complex velocity field, perhaps contaminated by outliers. Pi93 tentatively associated the secondary peak seen in their {\it Einstein\/} IPC image of A2634 (see also Fig.~17) with a probable background cluster at $\sim 37{,}000$ $\rm km\,s^{-1}\,$, but could not further elaborate on this idea for lack of sufficient redshift measurements. With twice as many redshifts, we can now confirm the presence of a rich cluster of galaxies with its peak density located less than $0\fdg2$ to the NW of A2634 and surrounded, like A2622, by several smaller galaxy concentrations at the same distance.
Based on CCD images that we have obtained for the central region of A2634 and on the association of this background cluster with a noticeable secondary peak of X-ray emission seen in both the {\it Einstein\/} and {\it ROSAT\/} images, we identify an elliptical galaxy at $\rm R.A. = 23^h 35^m 25\fs4$, $\rm Dec.= 26\deg 54\arcmin 36\arcsec$ and $cz_{hel}=37{,}322$ $\rm km\,s^{-1}\,$ as the most probable central galaxy of this cluster. The consistency of the association of these two background clusters with secondary peaks in the X-ray image of A2634 can be estimated from their predicted X-ray luminosities in the 2--$10\rm\,keV$ energy range, calculated using their observed velocity dispersions and the $L_X-\sigma$ relation (Edge \& Stewart 1991). For a cluster with a velocity dispersion on the order of 900 $\rm km\,s^{-1}\,$, $L_X \simeq 7\times 10^{44}\rm\,erg\,s^{-1}$, similar to the X-ray luminosity of a cluster like Coma. This predicts total X-ray fluxes of $4.5\times 10^{-11}\rm\,erg\,cm^{-2}\,s^{-1}$ for A2622 and of $1.0\times 10^{-11}\rm\,erg\,cm^{-2}\,s^{-1}$ for CL37, which have to be compared with the total flux of $1.2\times 10^{-11}\rm\,erg\,cm^{-2}\,s^{-1}$ measured for A2634 (David et al. 1993). The negligible dependence on cluster temperature of the fluxes measured in the {\it Einstein\/} 0.5--$3.0\rm\,keV$ and {\it ROSAT\/} 0.14--$2.24\rm\,keV$ energy bands implies that the predicted ratios between the X-ray fluxes of the background clusters and A2634 in the 2--$10\rm\,keV$ energy range should be similar to those calculated in the energy bands of both the {\it Einstein\/} IPC and {\it ROSAT\/} PSPC detectors. The expectation of a noticeable secondary peak associated with A2622 is, however, not corroborated by Fig.~17. That is largely due to the fact that the image was not flat-fielded, while the X-ray peak presumably coincident with A2622 is located close to both the edge of the {\it ROSAT\/} image and the shadow of the mirror supports. Therefore, we accept the positional coincidence as sufficient evidence for the identification of the two secondary X-ray peaks in Fig.~17, at $13\arcmin$ and $50\arcmin$ to the NW of the center of A2634, with the two background clusters CL37 and A2622, respectively. \subsection{The Groups A2634--F and A2634--B} In \S~3.2 we identified two groups in the neighborhood of A2634, labelled respectively A2634--F and A2634--B. The former has 18 members (open squares in Fig.~6) and a systemic velocity of $7546^{+65}_{-71}$ $\rm km\,s^{-1}\,$; A2634--B has 17 members (open triangles in Fig.~6) and a systemic velocity of $11{,}619^{+72}_{-90}$ $\rm km\,s^{-1}\,$. Both groups have very small velocity dispersions (244 and 186 $\rm km\,s^{-1}\,$, respectively), and are relatively spiral rich ($\sim 60\%$). The two groups appear concentrated in small areas on the plane of the sky, dominating the galaxy counts on the east side of the cluster. The galaxies associated with A2634--F are spread to the northeast of the cluster center, at a median distance of $\sim 1\fdg1$, perhaps forming two different subunits very close to each other, spatially and kinematically. A2634--B is slightly more concentrated (although Pi93 also suggest that it may contain two different subunits) and located slightly to the southeast of the cluster, at a median distance of $0\fdg 6$ from its center. It is likely that both groups represent separate dynamical entities in the vicinity of A2634.
In particular, all the galaxies in A2634--B would have been automatically discarded from cluster membership in the TD sample by the $3\,\sigma$-clipping method of Yahil \& Vidal (1977). In \S~6.2, we shall study their dynamical relation to the cluster. \section{Dynamical Analysis} \subsection{Mass Estimates} The virial theorem is customarily used as the standard tool to estimate the dynamical mass of galaxy clusters. Under the assumptions that the cluster is a spherically symmetric system in hydrostatic equilibrium and that the mass distribution closely follows that of the observed galaxies independently of their luminosity, the total gravitating mass of a cluster is given by \begin{equation} M_{\rm VT} = {3\pi \over G} \sigma^2 R_H\;, \end{equation} where $\sigma$ is the line-of-sight velocity dispersion of the galaxies, taken here as the biweight scale estimate, and $R_H$ is the cluster mean harmonic radius, defined as \begin{equation} \label{rharm} R_H = {D \over 2} N_{gal}(N_{gal}-1) \Bigl( \sum_i \sum_{j<i} {1 \over \theta_{ij}}\Bigr)^{-1}\;, \end{equation} where $D$ is the cosmological cluster distance, $\theta_{ij}$ is the angular separation between galaxies $i$ and $j$, and $N_{gal}$ is the total number of galaxies. An alternative approach is to use the ``projected mass estimator'' (Bahcall \& Tremaine 1981; Heisler, Tremaine, \& Bahcall 1985) \begin{equation} \label{pme} M_{\rm PM} = {32 \over{\pi GN_{gal}}} \sum_i V_{i}^2 R_i\;, \end{equation} where $V_i$ is the observed radial component of the velocity of galaxy $i$ with respect to the systemic velocity of the cluster, and $R_i$ is its projected separation from the cluster center. The numerical factor in front of eq.~(\ref{pme}) assumes an isotropic distribution of galaxy orbits. It is worth noting that the cluster masses obtained with these two methods may underestimate the actual ones if the distribution of matter is less concentrated than the light. Mass estimates using the above two methods and their $68\%$ uncertainties (computed by means of the bootstrap technique and a standard propagation of errors analysis) are listed in columns (9) and (10) of Table 4 for A2634, A2666 and the two distant clusters. We have computed dynamical masses for the TD and HDR samples of A2634, and for the galaxies within 1 and $0.5\,r_A$ from the center of A2666. In addition, because of the clearly different spatial and kinematical properties shown by the early- and late-type galaxies of A2634, we have also derived mass estimates for the two populations separately. For A2634, we can compare the different mass estimates for the HDR sample given in Table~4 with the value of $2.5\times 10^{14}\rm\,M_{\odot}$ calculated for the central cluster region from the observed X-ray gas distribution (Eilek et al. 1984). The best agreement is obtained with the mass estimates inferred for the early-type population, while the late-type galaxies yield excessively large values. This result adds further support to the idea, discussed in \S~4.2, that the spiral galaxies of A2634 represent a young cluster population, not yet in a relaxed dynamical state, and possibly with a different distribution of orbits than the early-type galaxies. A similar difference between the mass estimates of the early- and late-type galaxies is also present in the TD sample. Note that if the spiral galaxies were all falling onto the cluster along purely radial orbits, their average velocity would be on the order of $\sqrt{2}$ times the mean velocity of the relaxed cluster population.
This higher velocity would produce an overestimate of the cluster mass by a factor of two, very close to the observed difference between the mass estimates of the early- and late-type populations. On the other hand, a rough estimate of the radius of the virialized region for a typical rich cluster (Mayoz 1990) gives $r_{vir}=2.48\rm\,Mpc$, which corresponds to $0\fdg 8$ at the distance of A2634. This suggests that even some of the early-type galaxies in the TD sample might be part of an unrelaxed population. Consequently, we adopt as the best estimate of the virial mass of A2634 that given by the early-type population of the HDR sample. Similarly, the spatial distribution of the galaxies associated with A2666 (Fig.~15b) suggests that the best estimate of the virial mass of this cluster is given by the galaxies within $0.5\,r_A$ of its center. Table 4 also shows that the virial mass estimates for all the selected samples are affected by smaller uncertainties and yield smaller values than the projected mass estimator. Indeed, Monte Carlo simulations of clusters by Heisler et al. (1985) show that this second method tends to overestimate the dynamical masses if the samples are contaminated by interlopers. This is probably the case for A2666, where the projected mass estimates are one order of magnitude larger than the virial masses, and for the two background clusters, because the small size of the corresponding samples makes them more susceptible to contamination by outliers. Notice, for instance, that the ratio of masses between A2634 and A2666 ranges between 4 and 10 depending on the technique adopted. In the next section, we use only the virial mass estimates in the study of the dynamical state of the A2634\slash 2666 system. \subsection{Two-Body Analysis} We now investigate by means of simple energy considerations whether the two clusters A2634 and A2666 and the two small groups A2634--F and A2634--B in the vicinity of A2634 form a gravitationally bound system. In the framework of newtonian mechanics, a system of particles is gravitationally bound if it has a negative total energy or, equivalently, if $v^2/2<GM_{tot}/r$, where $v$ represents the (system averaged) global velocity dispersion, $r$ is the characteristic size of the system (e.g., eq.~[\ref{rharm}]) and $M_{tot}$ is the total system mass. In the particular case under study, as the masses of the groups A2634--F and A2634--B are negligible with respect to the masses of the two main clusters, the energetic analysis can be reduced to that of three independent two-body systems: the system A2634\slash 2666, and the two cluster/group systems A2634/2634--F and A2634/2634--B. For a system of two point masses the criterion for gravitational binding expressed in terms of observable quantities is written as (Beers, Geller, \& Huchra 1982): \begin{equation}\label{ener} V_{rel}^2 R_{p}\leq 2GM_{tot}\sin^{2}\alpha \cos \alpha\;, \end{equation} where $V_{rel}=v\sin\alpha$ is the relative velocity between the two components along the line-of-sight, $R_{p}= r\cos\alpha$ is their projected separation, and $\alpha$ is the angle between the plane of the sky and the line joining the centers of the two components. Notice that eq.~(\ref{ener}) defines the region of bound orbits in the $(\alpha,V_{rel})$-plane independently of the adopted value of $H_0$.
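The mass estimators of \S~6.1 and the binding criterion above involve only elementary operations. The following Python sketch collects them (with $G$ in units of $\rm Mpc\,(km\,s^{-1})^2\,M_\odot^{-1}$; function names are ours, and the worked example anticipates the A2634\slash 2666 numbers of the next paragraph, with $M_{tot}$ taken as the sum of the adopted virial masses):

\begin{verbatim}
import numpy as np

G = 4.30e-9   # Mpc (km/s)^2 / M_sun

def harmonic_radius(D, theta):
    # theta: condensed list of all pairwise angular separations [rad]
    n = (1 + np.sqrt(1 + 8 * len(theta))) / 2   # N_gal from N_gal(N_gal-1)/2 pairs
    return 0.5 * D * n * (n - 1) / np.sum(1.0 / theta)

def virial_mass(sigma, R_H):
    # M_VT = (3 pi / G) sigma^2 R_H
    return 3 * np.pi * sigma**2 * R_H / G

def projected_mass(V, R):
    # isotropic-orbit prefactor 32/pi (Heisler et al. 1985)
    return 32.0 / (np.pi * G * len(V)) * np.sum(V**2 * R)

def bound(V_rel, R_p, M_tot, alpha):
    # two-body binding criterion: V_rel^2 R_p <= 2 G M sin^2(a) cos(a)
    return V_rel**2 * R_p <= 2 * G * M_tot * np.sin(alpha)**2 * np.cos(alpha)

# A2634/A2666: bound for *any* projection angle alpha?
alpha = np.linspace(0.01, np.pi / 2 - 0.01, 500)
print(np.any(bound(1078.0, 9.1, 5.6e14, alpha)))   # -> False (unbound)
\end{verbatim}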
For the system A2634\slash 2666, the (relativistically correct) relative velocity between the two clusters is $V_{rel}=1078\pm105$ $\rm km\,s^{-1}\,$ (the quoted uncertainty is the rms of the average 68\% bootstrap errors associated with each cluster), while $R_p=9.1\rm\,Mpc$, computed from the angular separation of their respective centers at the average $z_{CMB}$ of these clusters. Figure 18a shows, as a function of $\alpha$, the limiting $V_{rel}$ for gravitational binding. Two different curves are drawn for two different values of the total mass of the clusters. The solid curve is drawn using our previous best estimates of the virial masses of the two clusters, while for the dotted line we use the highest values obtained in our calculations using the virial mass estimator (see Table~4). The region of bound orbits lies to the left of the curves. The vertical lines correspond to the observed value of $V_{rel}$ (solid) and its associated uncertainty (dashed). Fig.~18a suggests that these two clusters are currently gravitationally unbound. For the system A2634/2634--F, the relative velocity between the two components is $V_{rel}= 1632\pm 108$ $\rm km\,s^{-1}\,$, while for the system A2634/2634--B it is $V_{rel}= 2292\pm 117$ $\rm km\,s^{-1}\,$. From the estimated mean angular separations between the groups and the cluster we have $R_p=3.4\rm\,Mpc$ for A2634/2634--F and $R_p=1.9\rm\,Mpc$ for A2634/2634--B at the cosmic distance of A2634. Figures 18b and 18c show the limits between the bound and unbound orbit regions in the $(V_{rel}{,}\alpha)$-plane for these two systems. As in Fig.~18a, the calculations done with the best virial mass estimate of A2634 are represented by solid curves, while the dotted ones show the result of using the highest value given by the virial mass estimator. In each calculation the mass of the group is neglected with respect to the mass of the cluster. These figures show that the mass of A2634 is too small to form a bound system with the two groups A2634--F and A2634--B, a result that validates, a posteriori, the membership criterion adopted for A2634 in \S~3.2. Therefore, from the results of the preceding analysis, we conclude that it is unlikely that the whole system of clusters and groups around A2634 is gravitationally bound. \section{Conclusions} An extensive galaxy redshift survey around A2634 reveals a region of intricate topology at all scales. Besides the large-scale structure associated with the PPS, we are able to identify in the $6\deg$ wide field around A2634 several clusters and groups: (a) A2634; (b) A2666; (c) two rich and massive background clusters, also detected in the X-ray domain, located at respectively twice and four times the redshift of A2634; and (d) A2634--F and A2634--B, two spiral-rich groups near A2634 at $\sim 7500$ and $\sim 11{,}500$ $\rm km\,s^{-1}\,$, respectively. Simple energetic considerations suggest that the system formed by A2634, A2666, A2634--F and A2634--B is gravitationally unbound, despite the proximity of its members. We also show that the conflicting results on the motion of A2634 with respect to the CMB reported by Lucey et al. (1991a) may be in part {\it but not fully\/} ascribed to the complex structure of the region and the lenient assignment of cluster membership to the galaxies in the adopted samples. Other explanations, in addition to stricter TF and $D_n-\sigma$ sample selection criteria, are needed to solve the problem.
The dynamical complexity of this region is also reflected in the structure of the best sampled of its galaxy concentrations: A2634. While the spatial, kinematical and dynamical properties of the early-type population agree fairly well with those expected for a relaxed system, the spiral galaxies are not only less concentrated towards the cluster core, but they also display strong evidence for multimodality in their velocity distribution and dominate the high-velocity tails, suggesting their recent arrival at the cluster. In addition, spirals are virtually absent from the central parts of A2634, a result similar to that obtained by Beers et al. (1992) for A400. These results have two important implications. First, the velocity centroid of the (presumably virialized) innermost cluster region can be miscalculated and/or its velocity dispersion overestimated if (asymmetric) secondary infall plays an important role. It is therefore advisable to use only the early-type galaxy population to derive these two properties. Second, unless secondary infall is a recent event in the life of these clusters, late-type newcomers must be efficiently converted into early-type systems in order to explain the scarcity of the former types in the innermost cluster regions. Based on these considerations, we advocate the following choice of parameters for, respectively, A2634 and A2666: systemic velocities of 9240 and 8134 $\rm km\,s^{-1}\,$, velocity dispersions of 661 and 380 $\rm km\,s^{-1}\,$, and virial masses of $5.2\times 10^{14}$ and $0.4\times 10^{14}\rm\,M_{\odot}$. The clumpiness shown by the galaxy number density map of the central regions of A2634 and the multimodal velocity distribution of the late-type population are investigated as possible signatures of a recent collision of this cluster with a large subunit. Statistical tests for substructure find, however, no significant evidence of clumpiness in the galaxy component of A2634 that can corroborate the occurrence of a merger in the plane of the sky, although we cannot exclude the existence of smaller, loose groups of spirals, unlikely to be associated with dense ICMs, superposed along the line-of-sight. This result indicates that the structure and kinematics of A2634 reflect the continuous infall of individual galaxies or small groups onto the cluster along the line-of-sight, rather than the recent merger of two comparable subunits moving in the plane of the sky. \acknowledgments The authors would like to thank Timothy Beers for kindly providing the program ROSTAT used in the kinematical analysis and the software for the calculation of the adaptive kernel density maps. We are also indebted to Eduardo Salvador-Sol\'e and Guillermo Gonz\'alez-Casado, who developed the basic source code used in the two spatial tests of substructure, and to Daniel Golombek for assistance in extracting images from the ``Palomar Quick Survey'' at the STScI. Ginevra Trinchieri, Bill Forman and Alex Zepka provided valuable advice about the {\it ROSAT\/} image of A2634, which was retrieved from the {\it ROSAT\/} master data base maintained in the public domain by NASA. MS greatly benefited from many enlightening and entertaining discussions with Enzo Branchini. JMS acknowledges support by the United States-Spanish Joint Committee for Cultural and Educational Cooperation and the Direcci\'on General de Investigaci\'on Cient\'{\i}fica y T\'ecnica through Postdoctoral Research Fellowships. This work was supported by grants AST--9115459 to RG, and AST--9023450 and AST--9218038 to MPH. \newpage
\section{Results and Analysis} \label{sec:results} \begin{table}[t] \centering \begin{tabular}{@{}l r r r@{}} \toprule \textbf{Model} & \textbf{BLEU} & \textbf{ROUGE-L} & \textbf{METEOR} \\ \midrule \multicolumn{4}{c}{\textit{Encoder Input: Supporting Facts Sentences}}\\ \midrule NQG++$^\dagger$ & 11.50 & 32.01 & 16.96 \\ ASs2s$^\dagger$ & 11.29 & 32.88 & 16.78 \\ MP-GSA$^\dagger$ & 13.48 & 34.51 & 18.39 \\ SRL-Graph$^\dagger$ & 15.03 & 36.24 & 19.73 \\ DP-Graph$^\dagger$ & 15.53 & 36.94 & 20.15 \\ GATE$_{\text{NLL}}$ & \bf{19.33} & \bf{39.00} & \bf{22.21} \\ \midrule \midrule \multicolumn{4}{c}{\emph{Encoder Input: Full Document Context}}\\ \midrule TE$_{\text{NLL+CT}}$ & \bf{19.60} & \bf{39.23} & \bf{22.50}\\ GATE$_{\text{NLL}}$ & 17.13 & 38.13 & 21.34 \\ GATE$_{\text{NLL+CT}}$ & \bf{20.02} & \bf{39.49} & \bf{22.40} \\ \bottomrule \end{tabular} \caption{Results of multi-hop QG on HotpotQA. NQG++ is from~\citet{zhou2018nqg}, ASs2s is from~\citet{kim2019improving}, MP-GSA is from~\citet{zhao-etal-2018-paragraph}, SRL-Graph and DP-Graph are from~\citet{pan2020semantic}. $\dagger$ denotes that the results are taken from~\citet{pan2020semantic}. Best results in each section are highlighted in bold.} \label{tab:model_bleu} \end{table} We report the performance of our proposed transformer encoder (TE) and graph-augmented transformer encoder (GATE) models in Table~\ref{tab:model_bleu}, comparing with a number of recent QG approaches. In subsequent discussions, we will mostly use the BLEU values from this table to compare performance, but we also provide ROUGE-L and METEOR scores. \paragraph{Performance with supporting facts as input} We first consider a \emph{simplified version} of the task in which \emph{only the supporting facts are used during training and testing} (top section in Table~\ref{tab:model_bleu}). In other words, in this setting, we remove all sentences of the context documents that have not been annotated as supporting facts. This is an overly simplified setting, since supporting fact annotations are not always available at test time. However, this is the setting used in previous work on multi-hop QG \cite{pan2020semantic}, to which we directly compare. In this setting, we see that our GATE model scores 19.33 BLEU, an absolute gain of around 4 points over the previous best result. Note that as contrastive training is not applicable here, we just use the log-likelihood loss ($\sL_{\text{NLL}}$) for training. \begin{figure}[t] \centering \makebox[\linewidth][c]{\includegraphics[max width=1\linewidth, scale=1.0]{fig/context_length.pdf}} \caption{Length distribution of full document context and supporting facts sentences in HotpotQA. It reveals that the full document context is almost three times longer than the supporting facts.} \label{fig:cl} \end{figure} \paragraph{Performance with full context as input} In the more realistic setting in which the supporting facts are not available at test time, the model needs to process the full context. As the average document context is three times the size of the supporting facts in HotpotQA (Figure~\ref{fig:cl}), this setting is potentially much more challenging. As is evident from the results (bottom section in Table~\ref{tab:model_bleu}), the BLEU of our GATE model trained using only the likelihood objective, GATE$_{\text{NLL}}$, drops to 17.13, an absolute drop of nearly 2 points.
However, when trained using the composite objective (GATE$_{\text{NLL+CT}}$), the GATE model is able to obtain a BLEU of 20.02, which is actually higher than what GATE could achieve in the simplified setting. We suspect that the additional training signal from the longer contexts combined with the contrastive objective actually benefits the models. Moreover, we find that our TE model---which contains no graph augmentations---is also able to achieve very strong performance in this setting, achieving a BLEU of 19.60, which also substantially outperforms all previous methods. \subsection{Ablation Studies} \label{sub:ans_rep} \begin{table}[t] \centering \small \begin{tabular}{l r@{}} \toprule \textbf{Setting} & \textbf{BLEU} \\ \midrule GATE$_{\text{NLL+CT}}$ & 20.02 \\ \midrule -- \emph{contrastive training} & 17.13 \\ \midrule -- \emph{data filtering} & 14.50 \\ -- \emph{data filtering, contrastive training} & 11.90 \\ \midrule -- \emph{answer type ids} & 7.81 \\ \bottomrule \end{tabular} \caption{Ablation studies when the encoders' input is the full document context.} \label{tab:input_rep} \end{table} We perform ablation studies to understand what components are essential for strong performance on multi-hop QG (Table~\ref{tab:input_rep}). \paragraph{Contrastive Training} We see that contrastive training improves the BLEU score by 3 points, and it also helps the model attain around 75 F1 and 35 Exact Match scores in predicting the supporting facts sentences. This highlights that---besides being helpful in QG---contrastive training additionally imparts useful signals to the encoder such that its supporting facts predictions are \emph{interpretable}. \paragraph{Data Filtering} Next, we see that filtering out training questions longer than 30 words is a critical step in our training pipeline. If we use the full training data, the score drops by more than 5 points to 14.50 BLEU. We also observed the generations to be 1.6-1.8x longer than the reference questions. Furthermore, if we train with only the log-likelihood (teacher-forcing) objective on the entire, unfiltered training set, the performance further drops to 11.90 BLEU, signifying that question filtering and contrastive training are independently useful and essential. \paragraph{Answer Encoding} Finally, we show that effective encoding of the answer is of utmost importance in multi-hop QG, as the decoder needs to condition generation on both the context and the answer. As mentioned previously, the common approach in standard QG is to append the answer tokens after the context. We see that using this approach results in a drop of around 12 points to 7.81 BLEU, which is quite low. Our approach of marking the answer span in the context with answer type ids appears to be a much stronger methodology. \subsection{Complementarity of TE and GATE} \label{sub:ensemble} \begin{table}[t] \centering \begin{tabular}{@{}l r r r@{}} \toprule \textbf{Model} & \textbf{BLEU} & \textbf{ROUGE-L} & \textbf{METEOR} \\ \midrule \multicolumn{4}{c}{\emph{Encoder Input: Full Document Context}}\\ \midrule TE$_{\text{NLL+CT}}$ & 19.60 & 39.23 & 22.50 \\ GATE$_{\text{NLL+CT}}$ & 20.02 & 39.49 & 22.40 \\ Ensemble & \bf{21.34} & \bf{40.36} & \bf{23.24} \\ \bottomrule \end{tabular} \caption{Performance comparison of the TE and GATE models and their ensemble.} \label{tab:compl_str} \end{table} \paragraph{Model Ensemble} We notice that our graph-augmented model GATE seems to provide complementary strengths compared to the TE model, which is evident when we ensemble both models during decoding.
At every step, we compute the probability of the model combination using their linearly weighted probability scores as \begin{align*} p\left(q_{k} \mid c, q_{1:k-1}\right) &= \alpha \cdot p_{\text{TE}}\left(q_{k} \mid c, q_{1:k-1}\right) \\ &+ (1-\alpha) \cdot p_{\text{GATE}}\left(q_{k} \mid c, q_{1:k-1}\right), \end{align*} where $\alpha \in [0, 1]$ is a hyperparameter.\footnote{We find that $\alpha=0.5$ works the best.} We see that the ensemble of the TE and GATE models results in a score of 21.34 BLEU, which is an improvement of 1.7 points over the TE model. Ensembling also provides close to a 1 point gain in the ROUGE-L scores. This suggests that---while the gains from graph-augmentations are relatively small---there is complementary information in the explicit graph structures. \paragraph{GLEU Score Comparison} In order to further understand how the GATE model differs in performance from the TE model, we perform an analysis of the generated questions on the test set. We analyze the distribution of the difference in their question-level GLEU scores~\cite{wu2016google}\footnote{GLEU is a sentence-level metric and is calculated as the minimum of the precision and recall between the reference and the hypothesis.} and observe that on 397 (5.4\%) test examples, the GATE model achieves a GLEU score at least 20 points higher than the TE model, while on 377 (5.1\%) examples the TE model achieves at least 20 points higher. This complementary performance is the reason for the gain that we see in Table~\ref{tab:compl_str} when the two models are ensembled. \section{Standard vs Multi-Hop QG} \label{sec:ques_complexity} \begin{table}[t] \small \centering \begin{tabular}{@{}l r r r@{}} \toprule \textbf{QG task} & \textbf{Words} & \textbf{Entities} & \textbf{Predicates} \\ \midrule Standard (SQuAD) & 10.22 & 1.12 & 1.75 \\ Multi-Hop (HotpotQA) & 15.58 & 2.34 & 2.07 \\ \bottomrule \end{tabular} \caption{Comparison of question properties in standard and multi-hop QG datasets. We show the average number of words, entities, and predicates per question.} \label{tab:qg_comp2} \end{table} In this section, we present our results to illustrate the relative complexity of the standard and multi-hop QG tasks. For this analysis, we compare three properties of the expected output (\emph{i.e.}, the questions): total words, named entities, and predicates, as we believe these represent the \emph{sufficient statistics} of a question. As a benchmark dataset for standard QG, we use the development set of SQuAD~\cite{rajpurkar2018know}, and for multi-hop QG we use the development set of HotpotQA. We extract named entities using spaCy and predicates using Open IE~\cite{stanovsky-etal-2018-supervised}. From the results in Table~\ref{tab:qg_comp2}, we see that multi-hop questions are almost 1.5 times longer than standard ones and also contain twice the number of entities. These results suggest that in multi-hop QG the decoder needs to generate longer sequences containing more entity-specific information, making it considerably more challenging than standard QG. We also observe that multi-hop questions contain roughly 2 predicates in 15 words while standard questions contain 1.75 predicates in 10 words, i.e., there are fewer predicates per word in multi-hop questions compared with standard ones. This suggests that entity-specific information is more densely packed within multi-hop questions, as they are not expected to contain information about the latent (or bridge) entity.
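These counts are straightforward to reproduce. The sketch below uses spaCy for tokenization and named entities; as a stand-in for the Open IE extractor of \citet{stanovsky-etal-2018-supervised}, whose exact invocation we do not reproduce here, it counts verbs as a rough proxy for predicates, so its third statistic only approximates Table~\ref{tab:qg_comp2}.

\begin{verbatim}
import spacy

nlp = spacy.load("en_core_web_sm")

def question_stats(questions):
    """Average words, named entities and (proxy) predicates per question."""
    words = ents = preds = 0
    for doc in nlp.pipe(questions):
        words += sum(not tok.is_punct for tok in doc)
        ents  += len(doc.ents)
        preds += sum(tok.pos_ == "VERB" for tok in doc)  # Open IE proxy
    n = len(questions)
    return words / n, ents / n, preds / n

# illustrative multi-hop question from HotpotQA
print(question_stats([
    "What government position was held by the woman who portrayed "
    "Corliss Archer in the film Kiss and Tell?"
]))
\end{verbatim}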
\section{Training Details} \label{sec:train_det} We mostly follow the model training details outlined in~\cite{sachan-neubig-2018-parameter}, which we also describe here for convenience. The word embedding layer is initialized according to the Gaussian distribution $\mathcal{N}(0,{d}^{-1/2})$, while other model parameters are initialized using \emph{LeCun uniform initialization}~\cite{lecun1998efficient}. For optimization, we use Adam~\citep{kingma2014adam} with $\beta_1=0.9$, $\beta_2=0.997$, and $\epsilon=10^{-9}$. The learning rate is scheduled as: \begin{math} 2 \textit{d}^{-0.5} \mathrm{min}\left(\textit{step}^{-0.5},\textit{step}\cdot 16000^{-1.5}\right). \end{math} During training, each mini-batch contains $12,000$ source and target tokens. For regularization, we use label smoothing (with $\epsilon=0.1$)~\cite{pereyra2017regularizing} and apply dropout (with $p=0.1$)~\cite{srivastava2014dropout} to the word embeddings, attention coefficients, ReLU activations, and the output of each sublayer before the residual connection. For decoding, we use beam search with width $5$ and length normalization following~\cite{wu2016google} with $\alpha=1$. We also use $\lambda=0.5$ when performing joint NLL and contrastive training. \section{Conclusion} \label{sec:conclusion} In this work, we propose a series of strong transformer models for multi-hop QG. To effectively encode the context documents and the answer, we introduce answer type embeddings and a new sublayer to incorporate the extracted entity-centric graph. We also propose an auxiliary contrastive objective to identify the supporting facts and a data filtering approach to balance the training-test distribution mismatch. Experiments on the HotpotQA dataset show that our models outperform the current best approaches by a substantial margin of 5 BLEU points. Our analysis further reveals that graph-based components may not be the most critical factor in improving performance, but can provide complementary strengths to the transformer. \section{Experimental Setup} \label{sec:setup} \subsection{Dataset Preprocessing and Evaluation} \label{sub:preprocessing} We use the HotpotQA dataset~\cite{yang2018hotpotqa} for our experiments, as it is the only multi-hop QA dataset that contains questions in textual form.\footnote{We also explored WikiHop~\cite{welbl2018constructing}, but it contains questions in triple format and is thus outside the scope of this work.} HotpotQA is a large-scale crowd-sourced dataset constructed from Wikipedia articles and contains over 100K questions. We use its \emph{distractor} setting, which provides 2 gold and 8 distractor paragraphs per question. Following prior work on multi-hop QG, we limit the context size to the 2 gold paragraphs, as the distractor paragraphs are irrelevant to the generation task \cite{pan2020semantic}. The questions can be either \emph{bridge}- or \emph{comparison}-based. The answer span is not explicitly specified in the context documents; rather, the answer tokens are provided. Hence, we use approximate text-similarity algorithms to search for the best matching answer span in the context. For comparison questions whose answer is either \emph{yes} or \emph{no}, we append the answer to the context. To train and evaluate the models, we use the standard training and dev sets.\footnote{The test set is hidden for HotpotQA.} We pre-process the dataset by excluding examples with spurious annotations and filter out training instances whose question length is more than 30 words.
As the official dev set is used as a test set, we reserve 500 examples from the training set to be used as a dev set. Overall, our training set consists of 84,000 examples, and the test set consists of 7,399 examples. We follow the evaluation protocol of \citet{pan2020semantic} and report scores on standard automated evaluation metrics common in QG: BLEU~\cite{papineni2002bleu},\footnote{This is also known as BLEU-4.} ROUGE-L~\cite{lin2004rouge}, and METEOR~\cite{banerjee2005meteor}. \subsection{Training Protocols} For all the experiments, we follow the same training process. We encode the context and question words with subword units obtained by applying a \emph{unigram language model} as implemented in the open-source \textit{sentencepiece} toolkit~\cite{kudo2018sentencepiece}. We use 32,000 subword units, including 4 special tokens (\textit{<bos>}, \textit{<eos>}, \textit{<sep>}, \textit{<unk>}). The first three of these subword units were introduced in \S\ref{sec:methods}, while the \textit{<unk>} or \textit{unknown} token helps to scale to larger vocabularies and provides a mechanism to handle new tokens at test time. For all the experiments, we use a 2-layer transformer model with 8 attention heads, 512-D model size, and a 2048-D hidden layer.\footnote{This is the Transformer-\emph{base} setting from the original paper, apart from the number of layers.} Word embedding weights are shared between the encoder, decoder, and generation layer. For reproducibility, we describe model training details in Appendix~\ref{sec:train_det}. \section{Introduction} Motivated by the process of human inquiry and learning, the task of question generation (QG) requires a model to generate natural language questions in context. QG has wide applicability in automated dialog systems~\cite{mostafazadeh2016generating,woebot2017}, language assessment~\cite{settles2020asessment}, data augmentation~\cite{tang2017question}, and the development of annotated data sets for question answering (QA) research. \begin{figure}[t!] \centering \input{fig/task_intro.tex} \caption{A (truncated) example illustrating the multi-hop QG task. The inputs are the two documents, the answer, and the supporting facts. The model is expected to generate a multi-hop question such that it is answerable using both documents together. Entities and predicates relevant to generating the question are underlined in the documents; supporting facts are shown in color and their sentence ids are highlighted in bold.} \label{fig:intro-example} \vspace{-15pt} \end{figure} Most prior research on QG has focused on generating relatively simple {\em factoid-based} questions, where answering the question simply requires extracting a span of text from a single reference document~\cite{zhao2018paragraph,kumar2019question}. However, motivated by the desire to build NLP systems that are capable of more sophisticated forms of reasoning and understanding~\cite{kaushik2018much, sinha2019clutrr}, there is increasing interest in developing systems for {\em multi-hop} question answering and generation \cite{zhang2017variational, welbl2018constructing, yang2018hotpotqa, dhingra2020differentiable}, where answering the questions requires reasoning over the content of multiple text documents (see Figure~\ref{fig:intro-example} for an example). Unlike standard QG, generating multi-hop questions requires the model to understand the relationship between disjoint pieces of information in multiple context documents.
Compared to standard QG, multi-hop questions tend to be substantially longer, contain a higher density of named entities, and---perhaps most importantly---high-quality multi-hop questions involve complex chains of predicates connecting the mentioned entities (see Appendix~\S\ref{sec:ques_complexity} for supporting statistics). To address these challenges, existing research on multi-hop QG primarily relies on graph-to-sequence (G2S) methods~\cite{pan2020semantic,Yu:20}. These approaches extract graph inputs by augmenting the original text with structural information (e.g., entity annotations and dependency parses) and then apply graph neural networks (GNNs)~\cite{kipf2016semi,hamilton2017representation} to learn graph embeddings that are then fed to a sequence-based decoder. However, the necessity of these complex G2S approaches---which require designing hand-crafted graph extractors---is not entirely clear, especially when standard transformer-based sequence-to-sequence (S2S) models already induce a strong relational inductive bias~\cite{vaswani2017attention}. Since transformers have the inherent ability to reason about the relationships between the entities in the text, one might imagine that these models alone would suffice for the relational reasoning requirements of multi-hop QG. \xhdr{Present work} In this work, we show that, in fact, a standard transformer architecture is sufficient to outperform the prior state-of-the-art on multi-hop QG. We also propose and analyze a graph-augmented transformer encoder (GATE), which integrates explicit graph-structure information into the transformer model. GATE sets a new state-of-the-art and outperforms the best previous method by 5 BLEU points on the HotpotQA dataset~\cite{yang2018hotpotqa}. However, we show that the gains induced by the graph augmentations are relatively small compared to other improvements in our vanilla transformer architecture, such as an auxiliary contrastive objective and a data filtering approach, which improve our model by 7.9 BLEU points in ablation studies. Overall, our results suggest diminishing returns from incorporating hand-crafted graph structures for multi-hop reasoning and provide a foundation for stronger multi-hop reasoning systems based on transformer architectures. \noindent Our key contributions are summarized as follows: \begin{itemize}[leftmargin=*,itemsep=1pt,topsep=2pt,parsep=2pt] \item We propose a strong transformer-based approach for multi-hop QG, achieving new state-of-the-art performance without leveraging hand-crafted graph structures. \item We further show how graph augmentations can be integrated into the transformer architecture, leading to an overall increase of 5 BLEU points compared to previously published work. \item Detailed ablations and error analysis highlight essential challenges of multi-hop QG---such as distributional mismatches---that have largely gone unnoticed in previous work and reveal critical design decisions (e.g., for data filtering). \end{itemize} We hope that our work provides a strong foundation for future research on multi-hop QG while guiding the field towards the most promising avenues for future model improvements. \section{Methods} \label{sec:methods} In this section, we formalize the multi-hop question generation (QG) task and introduce a series of strong transformer-based models for this task. In particular, we first describe how we adapt the standard transformer architecture proposed by \citet{vaswani2017attention} to multi-hop QG (\S\ref{sub:te}).
Following this, we introduce an approach for augmenting a transformer with graph-structured information (\S\ref{sub:gate}), and, finally, we outline two techniques that are critical to achieving strong performance: an auxiliary contrastive objective (\S\ref{sec:composite_obj}) and a data filtering approach (\S\ref{sub:qlen_dist}). \paragraph{Problem Formulation} The input to the multi-hop QG task is a set of context documents $\{c_1,\ldots,c_k\}$ and an answer $a$. These documents can be long, containing multiple sentences, i.e., $c_j = [s_1,\ldots,s_n]$, where each $s_i = [w^{(i)}_1, \dots, w^{(i)}_t]$ is composed of a sequence of tokens. Sentences across different documents are linked through bridge entities, which are named entities that occur in multiple documents. The answer $a$ always spans one or multiple tokens in one document. The goal of multi-hop QG is to generate a question $q$ conditioned on the context and the answer, where answering this question requires reasoning about the content of more than one of the context documents. \subsection{Sequence-to-Sequence via Transformers}\label{sub:te} Our base architecture for multi-hop QG is a transformer-based sequence-to-sequence (S2S) model \cite{vaswani2017attention}. In particular, we formulate multi-hop QG as an S2S learning task, where the input sequence contains the concatenation of the context documents $[c_1,\ldots,c_k]$ and the provided answer $a$. In a transformer S2S model, both the encoder and decoder consist of self-attention and feed-forward sublayers, which are trained using teacher forcing and a negative log-likelihood loss \cite{williams1989tf}. We describe the basic self-attention and feed-forward sublayers below. In addition, we found that achieving strong performance with a transformer required careful design decisions in terms of how the input is annotated in the encoder and decoder, so we include a detailed description of our input annotation technique. \subsubsection{Transformer sublayers} \paragraph{Self-attention sublayer} The self-attention sublayer performs \emph{dot-product self-attention}. Let the input to the sublayer be token embeddings $x = (x_1,\ldots,x_{\textit{T}})$ and the output be $z = (z_1,\ldots,z_{\textit{T}})$, where $x_i, z_i \in \mathbb{R}^{d}$. First, the input is linearly transformed to obtain key ($k_i=x_i \boldsymbol{W_{\textit{K}}}$), value ($v_i=x_i \boldsymbol{W_{\textit{V}}}$), and query ($q_i=x_i \boldsymbol{W_{\textit{Q}}}$) vectors. Next, interaction scores ($s_{ij}$) between query and key vectors are computed by performing a dot-product operation $s_{ij} = q_i k_j^{\top}$. Then, attention coefficients ($\alpha_{ij}$) are computed by applying the softmax function over these interaction scores $\alpha_{ij} = \frac{\exp(s_{ij})}{\sum_{l=1}^{\textit{T}}\exp(s_{il})}.$ Finally, self-attention embeddings ($z_i$) are computed by the weighted combination of attention coefficients with value vectors, followed by a linear transformation $z_i = (\sum_{j=1}^T\alpha_{ij}v_j) \boldsymbol{W_{\textit{F}}}$. \paragraph{Feed-forward sublayer} In the feed-forward sublayer, we pass the embeddings of all the tokens through a two-layer $\mathrm{MLP}$ with $\mathrm{ReLU}$ activation: $h_i = \max(0,\:z_i \boldsymbol{W_{\textit{L}_1}} + b_1) \boldsymbol{W_{\textit{L}_2}} + b_2$, where $\boldsymbol{W_{\textit{L}_1}}\in\mathbb{R}^{d \times d'}$, $\boldsymbol{W_{\textit{L}_2}}\in\mathbb{R}^{d' \times d}$. These embeddings ($h_i$) are given as input to the next layer.
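As a reference, the following is a minimal single-head sketch of these two sublayers in NumPy; it follows the equations above (including the unscaled dot product) and deliberately omits the residual connections, dropout, and multi-head machinery of the full model:
\begin{verbatim}
import numpy as np

def self_attention(x, W_K, W_V, W_Q, W_F):
    # x: (T, d) token embeddings; all W_*: (d, d) projection matrices.
    k, v, q = x @ W_K, x @ W_V, x @ W_Q
    s = q @ k.T                           # interaction scores s_ij
    s = s - s.max(axis=1, keepdims=True)  # numerically stable softmax
    a = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)
    return (a @ v) @ W_F                  # weighted values, then linear map

def feed_forward(z, W_L1, b1, W_L2, b2):
    # Position-wise two-layer MLP with ReLU activation.
    return np.maximum(0.0, z @ W_L1 + b1) @ W_L2 + b2
\end{verbatim}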
In the above descriptions, all the weight matrices (denoted by $\boldsymbol{W_*}$) and biases (denoted by $b_*$) are trainable parameters. \subsubsection{Input annotations} \paragraph{Sentence and document annotations} As the sentences present in the document context are expected to play a crucial role in learning, we add additional annotations to the input in order to learn sentence-level embeddings. Learning these sentence-level embeddings adds a form of implicit regularization, and we also leverage these embeddings in our auxiliary contrastive loss (\S\ref{sec:composite_obj}). In particular, we add a \emph{sentence id token} after the last token of each sentence. We similarly use special tokens to represent each document. In practice, as the number of sentences varies between examples, we \emph{tie} the sentence id token embedding weights across all sentences and refer to the resulting token as the \textit{<sep>} token. This simple trick also makes the model more robust to the training-test distribution shift arising from differences in the number of sentences in the context. \paragraph{Annotating the answer span} To provide the answer tokens as input to the encoder, the prevalent technique in QG approaches is to append the answer tokens after the context tokens, with a delimiter token between them~\cite{dong2019unified}. However, we found this approach to substantially under-perform in multi-hop QG. A possible reason is that concatenating the answer tokens imparts a poor inductive bias to the decoder. To overcome this limitation, we define indicator \emph{answer type ids}, whose value is \texttt{1} for the answer span tokens (within the context) and \texttt{0} for the remaining tokens. We introduce a new embedding layer for the answer type ids, and the resulting \emph{answer type embeddings} are added to the context token embeddings. \paragraph{Delimiters in the decoder} Finally, for the decoder input, in addition to the question tokens, we define two special tokens: \textit{<bos>} and \textit{<eos>}. This is done to simplify the question generation step during the decoding process. To decode the question sequence during inference, we initially feed the \textit{<bos>} token to the decoder and stop the generation process when the \textit{<eos>} token is emitted. \subsection{Graph-Augmented Transformer Encoder}\label{sub:gate} Having described our basic transformer approach, we now discuss how this transformer can be augmented with explicit graph structure extracted from the input context. In addition to document-level structure such as paragraphs and sentences, the context also contains structural information such as \emph{entities} and the \emph{relations} among them, and a popular approach in the multi-hop setting is to use graph neural networks (GNNs) to encode this structural information \cite{pan2020semantic}. In this work, we augment the transformer architecture itself with the graph-structure information---an approach that we found to substantially outperform other graph-to-sequence alternatives. We refer to this approach as the graph-augmented transformer encoder (GATE) and contrast it with the basic transformer encoder (TE) discussed in the previous section.
\subsubsection{Graph Representation of Documents} To extract graph structure from the input context, we consider three types of nodes---\emph{named-entity mentions}, \emph{coreferent-entities}, and \emph{sentence-ids}---and we extract a \emph{multi-relational graph} with three types of relations over these nodes (Figure \ref{fig:graph-example}). First, we extract named entities present in the context and introduce edges between them.\footnote{We use the English NER model provided by the Spacy toolkit, which was trained on OntoNotes-5.0 and covers 18 classes.} Second, we extract coreferent words in a document and connect them with edges.\footnote{We use the coreference resolution model trained on OntoNotes-5.0 provided by Spacy.} Third, we introduce edges between all sentence nodes in the context. As entities comprise the nodes of this graph, we refer to it as the ``\emph{context-entity graph}''. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{fig/graph.pdf} \caption{Entity-centric graph corresponding to the example in Figure~\ref{fig:intro-example}. The sentence nodes are drawn as circles and the entities as rectangles. Entity edges are drawn in bold, coreference edges are dotted, and sentence edges are dashed.} \label{fig:graph-example} \end{figure} \subsubsection{GATE Sublayers} We leverage the context-entity graph by defining two new types of transformer sublayers: a {\em graph-attention sublayer} and a {\em fused-attention sublayer}. These two sublayers are intended to be used in sequence with each other and in conjunction with the usual self-attention and fully-connected sublayers of a transformer. \paragraph{Graph-attention sublayer} The graph-attention sublayer performs \emph{relational dot-product graph-attention}. The inputs to this sublayer are node embeddings\footnote{The node embeddings are obtained from the entity's token embeddings.} from the context-entity graph $v = (v_1,\ldots,v_{\textit{N}})$. Here, we aggregate information from the connected nodes instead of all the tokens. First, interaction scores ($\tilde{s}_{ij}$) are computed for all the edges by performing a dot product on the embeddings of adjacent nodes $\tilde{s}_{ij} = (v_i \boldsymbol{\widetilde{W}_{\textit{Q}}}) (\tilde{v}_j \boldsymbol{\widetilde{W}_{\textit{K}}} + \gamma_{ij})^{\top}$. In this step, we additionally account for the relation between the two nodes by learning an embedding $(\gamma \in \mathbb{R}^d)$ for each relation type~\cite{shaw2018self}, where $\gamma_{ij}$ denotes the embedding of the relation type between nodes $i$ and $j$. Next, we compute attention coefficients ($\tilde{\alpha}_{ij}$) for each node by applying the softmax function over the interaction scores of all its connecting edges $\tilde{\alpha}_{ij} = \frac{\exp(\tilde{s}_{ij})}{{\sum_{k\in \mathcal{N}_i}\exp{(\tilde{s}_{ik})}}}$, where $\mathcal{N}_\textit{i}$ refers to the set of nodes connected to the $i^{\textit{th}}$ node. Graph-attention embeddings ($\tilde{z}_i$) are computed by the attention-weighted aggregation of the value vectors, followed by a linear transformation $\tilde{z}_i = (\sum_{j\in \mathcal{N}_i}\tilde{\alpha}_{ij}(\tilde{v}_j \boldsymbol{\widetilde{W}_{\textit{V}}} + \gamma_{ij}))\boldsymbol{\widetilde{W}_{\textit{F}}}$. \paragraph{Fused-attention sublayer} After running both the graph-attention sublayer described above and the standard self-attention sublayer described in \S\ref{sub:te}, the context tokens that belong to the vertex set of the context-entity graph have two embeddings: $z_i$ from self-attention and $\tilde{z}_{i}$ from graph-attention.
To effectively integrate information from the sequence- and graph-views, we concatenate these two embeddings and apply a parametric function $f$, such as an $\mathrm{MLP}$ with $\mathrm{ReLU}$ non-linearity~\cite{glorot2011deep}, which we term the \emph{fused-attention sublayer}: $z_i = f(\:[z_i,\:\tilde{z}_i])$, where $z_i \in \mathbb{R}^d$. \subsection{Training Losses} \label{sec:composite_obj} To train our S2S transformer models, we combine two loss functions. The first is the standard S2S log-likelihood loss, while the second trains the model to detect useful sentences within the question context. \subsubsection{Negative Log-Likelihood Objective} The primary training signal for our sequence-to-sequence approach comes from a standard negative log-likelihood loss: \begin{align*} \sL_{\text{NLL}} = -\frac{1}{K} \sum_{k=1}^{K} \log{p\left(q_{k} \mid c, q_{1:k-1}\right)}, \end{align*} where the parametric distribution $p(q \mid c)$ models the conditional probability of the question ($q$) given the context ($c$) and $K$ is the number of question tokens. As is common practice in the literature, we use teacher forcing while training with this loss. \subsubsection{Auxiliary Contrastive Objective} To complement our standard likelihood objective, we also design a contrastive objective, which trains the model to detect the occurrence of {\em supporting facts} in the multi-document context. \paragraph{Supporting facts in multi-hop QA} One of the unique challenges of multi-hop QA is that the context contains a large number of irrelevant sentences, since multi-hop questions require reasoning over multiple long documents. Thus, it is common practice to annotate the sentences that are necessary to answer each question, called {\em supporting facts} \cite{yang2018hotpotqa}. Prior work on multi-hop QG leveraged these annotations by simply discarding all irrelevant sentences and training only on the supporting-fact sentences \cite{yang2018hotpotqa}. Instead, we propose a contrastive objective, which allows us to leverage these annotations during training while still receiving full document contexts at inference. \paragraph{Contrastive objective} Our contrastive learning setup utilizes the sentences contained in the supporting facts as positive examples ($y=1$), while we consider all the remaining sentences in the context to be negative examples ($y=0$). We use only the sentence id embedding for training (\emph{i.e.}, the embedding corresponding to the \textit{<sep>} token; see \S\ref{sub:te}) but not the words contained in the sentence. Let $h_i$ denote the sentence id embedding. The contrastive training loss is defined via the binary cross-entropy formulation as: \begin{align*} \sL_{\text{CT}} &= -\frac{1}{P+N}\left(\sum_{i=1}^P \mathbbm{1}(y_i=1)\log \sD\left(h_i \right)\right. \\ &+ \left. \sum_{j=1}^N \mathbbm{1}(y_j=0)\log\left(1 - \sD\left( {h}_j \right)\right)\right), \end{align*} where $\sD$ is a binary classifier consisting of a two-layer MLP with ReLU activation and a final sigmoid layer, and $P$ and $N$ are the numbers of positive and negative training sentences in the context documents, respectively. During evaluation, we predict the supporting facts using the binary classifier and also calculate the F1 and Exact Match (EM) scores from the predictions.
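A minimal sketch of this contrastive term follows; the callable \texttt{mlp} standing in for the classifier $\sD$ and the NumPy data layout are our illustrative assumptions:
\begin{verbatim}
import numpy as np

def contrastive_loss(sep_embeddings, labels, mlp):
    # sep_embeddings: (S, d) <sep> token embeddings for S sentences;
    # labels: (S,) array, 1 for supporting-fact sentences, 0 otherwise;
    # mlp: two-layer MLP + sigmoid classifier, returns (S,) probabilities.
    p = mlp(sep_embeddings)
    eps = 1e-12  # numerical safety inside the logs
    bce = labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps)
    return -bce.mean()  # averaged over the P + N sentences
\end{verbatim}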
This contrastive objective is added as a regularization term in addition to the main likelihood loss, leading to the following composite objective: \begin{align*} \sL = \lambda \sL_{\text{CT}} + (1-\lambda) \sL_{\text{NLL}}. \end{align*} \subsection{Data Filtering Approach}\label{sub:qlen_dist} \begin{figure}[t] \centering \makebox[\linewidth][c]{\includegraphics[max width=1\linewidth, scale=1.0]{fig/qlength.pdf}} \caption{Question length distribution by difficulty level in the HotpotQA training set. The plot reveals that many train-easy questions are much longer than train-medium and train-hard questions.} \label{fig:ques_length} \end{figure} The final key component of our transformer-based multi-hop QG model is a data filtering approach. This aspect of our model specifically addresses challenges arising from the question-length distribution in the standard HotpotQA benchmark \cite{yang2018hotpotqa}, which is the main multi-hop QA dataset analyzed in this work. Although this approach is motivated directly by the statistics of HotpotQA, we expect the general principle to apply to future multi-hop datasets as well. \paragraph{The question-length distribution in HotpotQA} The training set of HotpotQA consists of three categories: \emph{train-easy}, \emph{train-medium}, and \emph{train-hard}. Train-easy questions are essentially single-hop, i.e., they need only one context document to extract the answer, while both train-medium and train-hard questions are multi-hop, requiring multiple context documents. \emph{However, both the dev and test sets in HotpotQA mainly consist of hard multi-hop questions}. While the additional {\em train-easy} and {\em train-medium} examples have proved useful as training signals in the question-answering setting, our QG experiments reveal that naively using the provided training distribution of questions leads to a significant drop in BLEU scores on the development set. The reason for the lower BLEU scores is that the generated questions are almost 80\% longer than the reference questions, and thus are less precise. \paragraph{Filtering to avoid distributional mismatch} We speculate that the model generates long questions because of the \emph{negative exposure bias} it receives from train-easy questions being much longer than train-medium and train-hard ones. We plot the distribution of question lengths in the training set in Figure~\ref{fig:ques_length}. We observe that a significant number of train-easy questions are much longer than train-medium and train-hard ones---while most of the train-medium and train-hard questions are at most 30 words long, train-easy questions can be as long as 70 words. Thus, we match the training-dev question-length distribution by pruning training examples whose questions are longer than 30 words. According to our analysis above, most of these pruned questions are train-easy questions. Although one could adopt complex data-weighting techniques for this~\cite{zhiting2019nips}, we observed that simple hard filtering works well in practice in our case. \section{Related Work} \label{rw} Recent work on QG has mostly focused on generating one-hop questions using neural S2S models~\cite{du2017learning}, pre-trained transformers~\cite{dong2019unified}, reinforcement learning for query reformulation~\cite{buck18}, and reinforcement-learning-based G2S models~\cite{chen2019reinforcement}. Contemporary to our work, \cite{pan2020semantic,Yu:20} also propose approaches for multi-hop QG.
Similar to our work, these works incorporate an entity graph to capture information about entities and their contextual relations within, as well as across, multiple documents. In addition to modeling the entity graph, our approach also uses contrastive training with teacher forcing to allow the model to efficiently use the information presented in the supporting facts. In parallel, there have been advances in multi-hop question answering (as opposed to generation) models \cite{tu2019select,chen2019multihop,tu2020graph,groeneveld2020simple}. GNN models applied over extracted graph structures have led to improvements in this domain~\cite{de2019question, fang2019hierarchical,zhao2020transformer-xh}. Our work examines the complementary task of multi-hop QG and provides evidence that stronger transformer models can, in fact, achieve more competitive results in this domain than these GNN-based models that rely on explicit graph structure. Also related to our work is the recent line of work on graph-to-text transduction \cite{xu2018graph2seq,koncel2019text,zhu2019modeling,cai2019graph,chen2019reinforcement}. However, these works seek to generate text from a structured input, rather than the setting we examine, which takes context text as the input.
\section*{Acknowledgments} The authors would like to thank Andreas Winter for invaluable discussions on the problem of enumerating the extremal rays of polyhedral convex cones. This work was supported by NSF CAREER award CCF 1652560.
\title{Good Things Come to Those Who Swap Objects on Paths} \author{Matthias Bentert$^{1}$ \and Jiehua Chen$^{2}$ \and Vincent Froese$^1$ \and Gerhard J.\ Woeginger$^3$\\ {\small $^1$Algorithmics and Computational Complexity, Faculty~IV, TU Berlin, Berlin, Germany}\\ {\small \texttt{\{matthias.bentert,vincent.froese\}@tu-berlin}}\\ {\small $^2$University of Warsaw, Warsaw, Poland}\\ {\small \texttt{[email protected]}}\\ {\small $^3$RWTH Aachen University, Aachen, Germany}\\ {\small \texttt{[email protected]}}\\ } \date{} \maketitle \begin{abstract} We study a simple exchange market, introduced by Gourv\`{e}s, Lesca and Wilczynski~(IJCAI-17), where every agent initially holds a single object. The agents have preferences over the objects, and two agents may swap their objects if they both prefer the object of the other agent. The agents live in an underlying social network that governs the structure of the swaps: Two agents can only swap their objects if they are adjacent.
We investigate the {\textsc{Reachable Object}} problem, which asks whether a given starting situation can ever lead, by means of a sequence of swaps, to a situation where a given agent obtains a given object. Our results answer several central open questions on the complexity of {\textsc{Reachable Object}}. First, the problem is polynomial-time solvable if the social network is a path. Second, the problem is NP-hard on cliques and generalized caterpillars. Finally, we establish a three-versus-four dichotomy result for preference lists of bounded length: The problem is easy if all preference lists have length at most three, and the problem becomes NP-hard even if all agents have preference lists of length at most four. \end{abstract} \looseness=-1 \section{Introduction}\label{sec:intro} Resource allocation under preferences is a widely-studied problem arising in areas such as artificial intelligence and economics. We consider the case when resources are \emph{indivisible objects} and each agent, having preferences over the objects, is to receive exactly one object. In the standard scenario known as \emph{housing market}, each agent initially holds an object, and the task is to \emph{reallocate} the objects so as to achieve some desirable properties, such as Pareto optimality, fairness, or social welfare~\cite{shapley_cores_1974,roth_incentive_1982,AbrCecManMeh2005,SoeUen2010}. While a large body of research in the literature takes a \emph{centralized} approach that globally controls and reallocates an object to each agent, we pursue a \emph{decentralized} (or \emph{distributed}) strategy where any pair of agents may locally \emph{swap} objects as long as this leads to an improvement for both of them, i.e., they both receive a more preferred object~\cite{damamme_power_2015}. To capture the situation where not all agents are able to communicate and swap with each other, \citet{GouLesWil2017} introduced a variant of distributed object reallocation where the agents are embedded in an underlying social network so that agents can swap objects with each other only if \begin{inparaenum}[(i)] \item they are directly connected (socially tied) via the network and \item will be better off after the swap. \end{inparaenum} To study the distributed process of swap dynamics along the underlying network topology, the authors analyzed various computational questions. In particular, they study the \textsc{Reachable Object}{} problem, which asks whether a given agent can reach a desired object via some sequence of mutually profitable swaps between agents. Consider the following example (initial objects are drawn in boxes). If the underlying graph is complete, object~$x_3$ is reachable for agent~$1$ within one swap. However, if the graph is a cycle as shown below, then to let object~$x_3$ reach agent~$1$, agent~$3$ can swap with agent~$2$, and then agent~$2$ can swap object~$x_3$ with agent~$1$. 
\begin{center} \begin{tikzpicture}[scale=1, every node/.style={scale=0.9}] \node[circle,draw, label=below:1, inner sep=3pt] at (0,0) (1) {}; \node[circle,draw, label=below:2, inner sep=3pt] at (.8,.4) (2) {}; \node[circle,draw, label=below:3, inner sep=3pt] at (1.6,.4) (3) {}; \node[circle,draw, label=below:4, inner sep=3pt] at (2.4,0) (4) {}; \node[circle,draw, label=below:5, inner sep=3pt] at (1.6,-.4) (5) {}; \node[circle,draw, label=below:6, inner sep=3pt] at (0.8,-.4) (6) {}; \foreach \i / \j in {1/2, 2/3,3/4,4/5,5/6,6/1} { \draw (\i) -- (\j); } \node at (4.3,.3) (v1) {$1:x_3\succ x_4 \succ$ \fbox{$x_1$}\,,}; \node[right = 0pt of v1] (v2) {$2:x_1 \succ x_3 \succ x_4 \succ$ \fbox{$x_2$}\,,}; \node[below = 12pt of v1.west,anchor=west] (v3) {$3:x_1 \succ x_2\succ x_4 \succ$ \fbox{$x_3$}\,,}; \node[right = -1pt of v3] (v4) {$4: x_5\succ x_3 \succ$ \fbox{$x_4$}\,,}; \node[below = 12pt of v3.west, anchor=west] (v5){$5:x_6\succ x_3 \succ$ \fbox{$x_5$}\,,}; \node[right = -1pt of v5] (v6){$6:x_4\succ x_3 \succ$ \fbox{$x_6$}\,.}; \end{tikzpicture} \end{center} \noindent Showing that \textsc{Reachable Object}{} is NP-hard when the underlying graph is a tree and presenting a simple polynomial-time algorithm for a special restricted case on a path (where the given agent is an endpoint of the path), the authors explicitly left open the general case on paths and other special cases (including restricted input preferences). In this work, we answer several open questions~\cite{GouLesWil2017,SW18} and draw a comprehensive picture of the computational complexity of \textsc{Reachable Object}. \begin{compactitem}[$\bullet$] \item Our main contribution is a polynomial-time algorithm on paths (\Cref{thm:path}). This algorithm combines a multitude of structural observations in a nontrivial way and requires a sophisticated analysis. \item Second, we show NP-hardness on complete graphs even if all preference lists have length at most four~(\Cref{thm:NP-c-length-4}). We complement this hardness result by giving a linear-time algorithm for preference lists of length at most three~(\Cref{thm:preflength-three}). \item Moreover, we prove NP-hardness for generalized caterpillars (\Cref{thm:cater}) and thereby narrow the gap between tractable and intractable cases of the problem. \end{compactitem} The NP-hardness from \Cref{thm:NP-c-length-4} implies that the problem is already NP-hard even if the agents are allowed to swap without restrictions and no agent has more than three objects which she prefers to her initial one. The hardness reduction can be adapted to also show NP-hardness for the case where the maximum vertex degree of the graph is five and the preference lists have length at most four. \paragraph{Related Work.} \citet{GouLesWil2017} proposed the model of distributed object reallocation via swaps on social networks and showed that \textsc{Reachable Object}{} is NP-hard on trees. Moreover, they showed polynomial-time solvability on stars and for a special case on paths, namely when testing whether an object is reachable for an agent positioned on an endpoint of the path. They also indicated that the problem is polynomial-time solvable on paths when the agent and the object are at constant distance. Notably, they explicitly asked for a polynomial-time algorithm on paths in general and described the problem as being at the frontier of tractability, despite its simplicity.
Besides \textsc{Reachable Object}{}, \citet{GouLesWil2017} also considered the questions of the reachability of a particular allocation (the \textsc{Reachable Assignment} problem) and of the existence of a Pareto-efficient allocation; both are shown to be NP-hard. \citet{SW18} studied the parameterized complexity of \textsc{Reachable Object}{} with respect to parameters such as the maximum vertex degree of the underlying graph or the overall number of swaps allowed in a sequence. They showed several parameterized intractability results and also fixed-parameter tractable cases (none of which covers our results). Notably, in their conclusion they suggested studying restrictions on the preferences~(as we do in this paper). Other examples of recently studied problems regarding allocations of indivisible resources under social network constraints are envy-free allocations~\cite{BKN18,BCGLMW18}, Pareto-optimal allocations~\cite{IP18}, and two-sided stable matching~\cite{ArcVas2009,AnsBhaHoe2017}. See the work of \citet{GouLesWil2017} for more related work. \subsection{Preliminaries}\label{sec:prelim} Let $V = \{1,2,\ldots, n\}$ be a set of $n$ agents and $X=\{x_1,x_2,\ldots,x_n\}$ be a set of $n$ objects. Each agent~$i \in V$ has a \myemph{preference list over a subset~$X_i\subseteq X$} of the objects, which is a strict linear order on $X_i$. This list is denoted as $\succ_i$; we omit the subscript if the agent is clear from the context. For two objects~$x_j, x_{j'}\in X_i$, the notation~$x_j \succ_i x_{j'}$ means that agent~$i$ \myemph{prefers~$x_j$ to~$x_{j'}$}. A \myemph{preference profile~$\mathcal{P}$ for the agent set~$V$} is a collection~$(\succ_i)_{i\in V}$ of preference lists of the agents in $V$. An \myemph{assignment} is a bijection~$\sigma\colon V \to X$, where each agent~$i$ is assigned an object~$\sigma(i)\in X_i$. We say that \myemph{an assignment~$\sigma$ admits a rational trade for two agents~$i, i'\in V$} if $\sigma(i') \succ_i \sigma(i)$ and $\sigma(i) \succ_{i'} \sigma(i')$. We assume that the agents from $V$ form a social network such that pairs of adjacent agents can trade their objects. The social network is modeled by an undirected graph~$G=(V,E)$ whose vertex set is $V$ and whose edge set is $E$. We say that an assignment~$\sigma$ admits \myemph{a swap for two agents~$i$ and $i'$}, denoted as $\tau\!=\!\{\{i,\sigma(i)\},\{i',\sigma(i')\}\}$, if it admits a rational trade for $i$ and $i'$ and the vertices corresponding to~$i$ and $i'$ are adjacent in the graph, i.e., $\{i,i'\}\in E$. Accordingly, we also say that object~$\sigma(i)$ (resp.\ $\sigma(i')$) \myemph{passes through} edge~$\{i,i'\}$. By definition, an object can pass through an edge at most once. A \myemph{sequence of swaps} is a sequence~$(\sigma_0,\sigma_1,\ldots, \sigma_t)$ of assignments where for each index~$k \in \{0,1,\ldots, t-1\}$ there are two agents~$i,i' \in V$ for which $\sigma_k$ admits a swap such that \begin{inparaenum}[(1)] \item $\sigma_{k+1}(i)=\sigma_{k}(i')$, \item $\sigma_{k+1}(i')=\sigma_{k}(i)$, and \item for each remaining agent~$z\in V\setminus \{i,i'\}$ it holds that $\sigma_{k+1}(z)=\sigma_{k}(z)$. \end{inparaenum} We call an \myemph{assignment~$\sigma'$ reachable from another assignment~$\sigma$} if there is a sequence~$(\sigma_0,\sigma_1,\ldots, \sigma_t)$ of swaps such that $\sigma_0=\sigma$ and $\sigma_t=\sigma'$. The reachability relation defines a partial order on the set of all possible assignments.
Given an initial assignment~$\sigma_0$, we say that \myemph{an object~$x\in X$ is reachable for an agent~$i$} if there is an assignment~$\sigma$ which is reachable from $\sigma_0$ with $\sigma(i)=x$. We study the following computational problem~\cite{GouLesWil2017}, called \textsc{Reachable Object}{} (\RO), which has as input an agent set~$V$, an object set~$X$, a preference profile for $V$, an undirected graph~$G=(V,E)$ on $V$, an initial assignment~$\sigma_0$, an agent~$I\in V$, and an object~$x\in X$, and asks whether $x$ is reachable for $I$ from $\sigma_0$. \noindent Note that \RO{} is contained in NP~\cite{GouLesWil2017}. \begin{proposition}\label[proposition]{prop:RO-in-NP} \RO{} is in NP. \end{proposition} \begin{proof} Since each object can pass through each edge at most once, each object is involved in at most $O(n^2)$ swaps. Hence, a certificate for a \RO{} instance with $n$~agents is a sequence of swaps in which each object appears at most $O(n^2)$ times, i.e., the length of the sequence is $O(n^3)$. \end{proof} We consider simple undirected graphs~$G=(V,E)$ containing vertices~$V$ and edges~$E\subseteq\binom{V}{2}$. We assume the reader to be familiar with basic graph classes such as paths, cycles, trees, and complete graphs (cliques)~\cite{Diestel2012}. A \myemph{caterpillar} is a tree such that removing all leaves yields a path (i.e., all vertices are within distance at most one of a central path). We call the edges to the leaves attached to the central path \myemph{hairs} of \myemph{length}~1. A \myemph{generalized caterpillar} has hairs of length~$h\ge 1$. \section{Preferences of Length at Most Three}\label{sec:lenth=3} In this section, we provide a linear-time algorithm for \RO{} in the special case of preference lists of length at most three. The main idea is to reduce \RO{} to reachability in directed graphs. The approach is described in \cref{alg:length=3}. Take the example from the introduction, and delete object~$x_1$ from agent~$3$'s preference list and object~$x_3$ from agent~$2$'s preference list. Then, our algorithm finds the following swap sequence for object~$x_3$ to reach agent~$1$: $4\leftrightarrow 3$, $3\leftrightarrow 2$, $2\leftrightarrow 1$, $4\leftrightarrow 5$, $5\leftrightarrow 6$, $6\leftrightarrow 1$; here ``$i \leftrightarrow j$'' means that agent~$i$ swaps with agent~$j$ over the objects they currently hold. Throughout this section, we assume that each agent~$i \in \{1,\ldots, n\}$ initially holds object~$x_i$, i.e., $\sigma_0(i)=x_i$, and has either two or three objects in her preference list; we can ignore those agents who have only one object in their respective preference lists because they will never trade with anyone else. We aim to determine whether object~$x_n$, which is initially held by agent~$n$, is reachable for agent~$1$. To ease the reasoning, we define an equivalent and succinct notion of swap sequences which only focuses on the relevant swaps. Let $\tau=\{\{i,x\},\{j,y\}\}$ be a swap admitted by an assignment~$\sigma$. Then, we use the notation \myemph{$\sigma/\tau$} to denote the assignment that results from $\sigma$ by performing the swap~$\tau$.
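To make these definitions concrete, the following is a minimal sketch of the swap dynamics; the data layout (preference lists as Python lists, edges as a set of frozensets) and the helper names are our illustrative choices:
\begin{verbatim}
def prefers(pref_lists, agent, obj_a, obj_b):
    # True iff agent ranks obj_a strictly above obj_b; objects missing
    # from the preference list are treated as unacceptable.
    lst = pref_lists[agent]
    return obj_a in lst and (obj_b not in lst
                             or lst.index(obj_a) < lst.index(obj_b))

def admits_swap(sigma, edges, pref_lists, i, j):
    # A swap is admitted iff i and j are adjacent in the social network
    # and the trade is rational for both agents.
    return (frozenset((i, j)) in edges
            and prefers(pref_lists, i, sigma[j], sigma[i])
            and prefers(pref_lists, j, sigma[i], sigma[j]))

def apply_swap(sigma, i, j):
    # The assignment sigma / tau for tau = {{i, sigma(i)}, {j, sigma(j)}}.
    new_sigma = dict(sigma)
    new_sigma[i], new_sigma[j] = sigma[j], sigma[i]
    return new_sigma
\end{verbatim}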
In an overloading manner, we say that a \emph{sequence~$\phi = (\tau_1,\tau_2,\ldots,\tau_m)$ of swaps} is a \myemph{valid swap sequence for some start assignment~$\sigma_0$} if there exists a sequence of swaps~$(\sigma_0,\sigma_1,\ldots,\sigma_m)$~(see \cref{sec:prelim}) such that for each $z\in \{1,\ldots,m\}$, it holds that $\sigma_z=\sigma_{z-1}/\tau_z$. We first observe a property which allows us to exclusively focus on a specific valid swap sequence in which each swap involves swapping object~$x_q$, provided that object~$x_q$ reaches agent~$1$ during the swap sequence. \newcommand{\swappq}{ Let $\phi=(\tau_1,\tau_2,\ldots,\tau_m)$ be a valid swap sequence for $\sigma_0$ such that agent~$1$ obtains~$x_n$ in the last swap. Consider two objects~$x_p$ and $x_q$. If $\phi$ contains a swap~$\tau_{r}$ with $\tau_r=\{\{1,x_p\},\{k,x_q\}\}$ (for some agent~$k$), then let $\phi'=(\tau'_1,\tau'_2,\ldots,\tau'_s)$ be the subsequence of the prefix~$\phi_0=(\tau_1,\tau_2,\ldots,\tau_r)$ of $\phi$, up to~(and including) $\tau_r$, that consists of exactly those swaps~$\tau$ from $\phi_0$ with $\{k_\tau,x_q\}\in \tau$ for some agent~$k_\tau$. Define $a_{s+1}=p$ and $a_z$, $1\le z \le s$, such that $\{a_z,x_q\}\in \tau'_z$. The following statements hold. \begin{compactenum}[(i)] \item \label{swap-p-q:last-swap} $\tau'_s=\tau_r$. \item\label{swap-p-q:a1} $a_1=q$. \item\label{swap-p-q:prefer} For each $z\in \{1,\ldots, s\}$, agent~$a_z$ prefers~$x_{a_{z+1}}$ to~$x_{q}$. \item\label{swap-p-q:preference-list} For each~$z\in \{2,\ldots, s\}$, agent~$a_{z}$ has preference list $x_{a_{z+1}} \succ x_{q} \succ$\fbox{$x_{a_z}$}. \item \label{swap-p-q:swap-p-q} $\phi'$ is a valid swap sequence for $\sigma'_0$, where $\sigma'_0(i)=\sigma_0(i)$ for all $i\notin\{1,p\}$ while $\sigma_0'(1)=x_p$ and $\sigma_0'(p)=x_1$. \item\label{swap-p-q:no-xn} If agent~$1$ prefers $x_n$ to $x_q$, then no agent~$a_z$, $3\le z\le s$, has object~$x_n$ in her preference list. \end{compactenum} } \begin{lemma} \label[lemma]{lem:swap-p-q} \swappq \end{lemma} \begin{proof} Statement~\eqref{swap-p-q:last-swap}: By definition, the last swap~$\tau_r$ in $\phi_0$ contains $\{j,x_q\}$ for some agent~$j$. Since $\phi'$ contains exactly those swaps from $\phi_0$ that involve swapping object~$x_q$, it follows that the last swap in $\phi'$ must be $\tau_r$. Statement~\eqref{swap-p-q:a1} holds because agent~$q$ initially holds object~$x_q$ and thus, in order to make object~$x_q$ reach agent~$1$, agent~$q$ is the first agent to give away object~$x_q$. Now, we turn to statement~\eqref{swap-p-q:prefer}. Note that all agents~$a_z$ are distinct because, by the definition of rational trades, no agent will give away the same object~$x_q$ more than once. We show statement~\eqref{swap-p-q:prefer} by downward induction on $z$, $1\le z\le s$, starting with $z=s$. First of all, by the first statement, we know that $\tau'_s=\tau_r$, i.e., in this swap~$\tau'_s$ agent~$a_{s}$ swaps with agent~$1$ over objects~$x_q$ and $x_p=x_{a_{s+1}}$. Since $\tau'_s$ is also a rational trade, this means that agent~$a_{s}$ must prefer object~$x_{a_{s+1}}$ to object~$x_q$. For the induction assumption, let us assume that for each~$i \ge z$, agent~$a_i$ prefers $x_{a_{i+1}}$ to $x_q$. Now, we consider agent~$a_{z-1}$, and we aim to show that $a_{z-1}$ prefers $x_{a_{z}}$ to $x_q$. By definition, we know that $\tau'_{z-1}=\{\{a_{z-1},x_q\},\{a_z,y\}\}$ for some object~$y$, i.e., agent~$a_{z-1}$ gives object~$x_q$ to agent~$a_{z}$ in order to obtain another object~$y$.
Thus, agent~$a_{z-1}$ must prefer~$y$ to~$x_q$ and agent~$a_z$ must prefer~$x_q$ to~$y$. Since each agent has at most three objects in her preference list and since, by our induction assumption, $a_z$ already prefers $x_{a_{z+1}}$ to $x_q$, we infer that $y$ is the initial object of agent~$a_z$, that is, $y=x_{a_z}$. Next, to show statement~\eqref{swap-p-q:preference-list}, let us re-examine the preferences of the agents~$a_z$, $1\le z \le s$, implied by statement~\eqref{swap-p-q:prefer}. Since each $a_z$, $1\le z\le s$, prefers $x_{a_{z+1}}$ to $x_q$, the preference list of each agent~$a_{z}$, $2\le z\le s$, is $x_{a_{z+1}} \succ x_q \succ$ \fbox{$x_{a_z}$}; again, recall that each agent has at most three objects in her preference list. Now, we show that $\phi'$ is a valid swap sequence for $\sigma'_0$, i.e., there exists a sequence of assignments~$(\rho_0,\rho_1,\rho_2,\ldots,\rho_s)$ such that $\rho_0=\sigma'_0$ and for each $z\in \{1,\ldots, s\}$ it holds that $\rho_{z}=\rho_{z-1}/\tau'_z$. We prove this by showing the following: \begin{compactenum}[(1)] \item $\sigma'_0$ is an assignment and admits swap~$\tau'_1$. \item For each $z\in \{1,\ldots, s-1\}$, if $\sigma'_{z-1}$ is an assignment and admits swap~$\tau'_{z}$, then $\sigma'_{z}\coloneqq \sigma'_{z-1}/\tau'_{z}$ is also an assignment and admits swap~$\tau'_{z+1}$. \item $\sigma'_{s}\coloneqq \sigma'_{s-1}/\tau'_{s}$ is an assignment. \end{compactenum} Clearly,~$\sigma'_0$ is an assignment and admits swap~$\tau'_1$, which is $\tau'_1=\{\{a_1,x_q\}, \{a_2,x_{a_2}\}\}$. This means that $\sigma'_1$, defined as $\sigma'_1\coloneqq \sigma'_0/\tau'_1$, is an assignment. Now, consider~$z\in\{1,\ldots,s-2\}$, and assume that $\sigma'_{z-1}$ is an assignment and admits swap~$\tau'_{z}$, which is $\tau'_z=\{\{a_z,x_q\}, \{a_{z+1},x_{a_{z+1}}\}\}$. Thus,~$\sigma'_{z}\coloneqq \sigma'_{z-1}/\tau'_{z}$ is an assignment. By the definition of $T\coloneqq \{\tau'_1,\ldots, \tau'_{z}\}$, we infer that $\sigma'_z(a_{z+1})=x_q$ and $\sigma'_{z}(a_{z+2})=x_{a_{z+2}}$; the latter holds because no swap from $T$ has involved agent~$a_{z+2}$. Thus, by the preference lists of $a_{z+1}$ and $a_{z+2}$, it holds that $\sigma'_{z}$ admits swap~$\tau'_{z+1}$, which is $\tau'_{z+1}=\{\{a_{z+1},x_q\},\{a_{z+2}, x_{a_{z+2}}\}\}$. Finally, we obtain that $\sigma'_{s-1}\coloneqq \sigma'_{s-2}/\tau'_{s-1}$ is an assignment such that $\sigma'_{s-1}(a_{s}) =x_q$ and $\sigma'_{s-1}(1)=x_{a_{s+1}}=x_p$. By the preference lists of agent~$1$ and agent~$a_{s}$, it is clear that $\sigma'_{s-1}$ admits swap~$\tau'_s$. Define $\sigma'_s=\sigma'_{s-1}/\tau'_s$, which is clearly an assignment. Concluding,~$\phi'$ is a valid swap sequence for $\sigma'_0$. Finally, we show statement~\eqref{swap-p-q:no-xn}. Assume that agent~$1$ prefers object~$x_n$ to object~$x_q$, implying that $x_n\neq x_q$. We assume that $s\ge 4$, as otherwise the set~$\{3,\ldots,s-1\}$ is empty and we are done with the statement. Suppose, for the sake of contradiction, that some agent~$a_z$ with $3\le z\le s-1$ has object~$x_n$ in her preference list. By the preference list of $a_z$, we infer that the object~$x_n$ is either $x_{a_z}$ or $x_{a_{z+1}}$ because $x_n\neq x_q$. If $x_n=x_{a_z}$, then after the swap~$\tau'_z$, agent~$a_{z-1}$ will obtain $x_n$, which is her most preferred object---a contradiction to agent~$1$ receiving object~$x_n$ after $\phi'$.
If $x_n=x_{a_{z+1}}$, then after the swap~$\tau'_{z+1}$, agent~$a_{z}$ will obtain $x_n$, which is her most preferred object---a contradiction to agent~$1$ receiving object~$x_n$ after $\phi'$. \end{proof} Now we are ready to give the main algorithm for the case where the preference list of each agent has length at most three; it is based on solving reachability in a directed graph (\cref{alg:length=3}). \renewcommand\ArgSty{\normalfont} \begin{algorithm}[t] \DontPrintSemicolon \caption{Algorithm for \RO{} with preference list length at most three.} \label[algorithm]{alg:length=3} \small \SetKwInOut{Input}{Input} \Input{\small Agent set~$V$ with preference lists~$(\succ_i)_{i\in V}$ over object set~$X$, and the underlying graph~$(V,E)$} $D\!\coloneqq\! (V,F)$ with $F\!\coloneqq\! \{(i,j)\! \mid\! \{i,j\} \in E \wedge x_j \succ_i x_n \wedge x_n \succ_j x_j\}$.\label{line:D} \label{line:n-1} \If{$D$ admits a directed path~$P$ from $n$ to $1$} {\label{line:path-n-1} \textbf{return} yes} \lIf{$x_n\succ_1 x_w \succ_1 $ \fbox{$x_1$} for some~$x_w\neq x_n$} { $D_1\coloneqq (V,F_1)$ with $F_1\coloneqq \{(i,j) \mid \{i,j\} \in E \wedge x_j \succ_i x_w \wedge x_w \succ_j x_j\}$.\label{line:D1} \ForEach{Directed path~$P_1=(w,n,a_1,\ldots,a_s,1)$ from $w$ to $1$ in $D_1$ such that the first arc on~$P_1$ is $(w,n)$\label{line:D1-wn}}{ $D_2\coloneqq D-\{a_1,\ldots,a_s\}+\{(j,1)\mid x_w \succ_j x_n\}$ \label{line:D2} \lIf{$D_2$ admits a directed path~$P_2$ from $n$ to $1$ such that the first arc on $P_2$ is $(n,w)$} {\textbf{return} yes \label{line:1-w-n}} } \ForEach{Directed path~$P_1=(w,a_1,\ldots,a_s,1)$ from $w$ to $1$ in $D_1$ such that the first arc on~$P_1$ is \emph{not} $(w,n)$, i.e., $a_1\neq n$} { $D_3\coloneqq D-\{w,a_1,\ldots,a_s\}+\{(j,1)\mid x_w \succ_j x_n\}$ \label{line:D3} \lIf{$D_3$ admits a directed path~$P_3$ from $n$ to $1$ such that the first arc on $P_3$ is \emph{not} $(n,w)$} {\textbf{return} yes \label{line:1-no-w-n}} } } \textbf{return} no \end{algorithm} \newcommand{\lengththree}{% \RO{} for preference list length at most three can be solved in~$O(n+m)$~time, where~$m$ is the number of edges in the underlying graph.% } \begin{theorem}\label{thm:preflength-three} \lengththree \end{theorem} \begin{proof}[Proof sketch.] We claim that \cref{alg:length=3} solves our problem in linear time. Assume that object~$x_n$ is reachable for agent~$1$, i.e., there exists a valid swap sequence~$\phi=(\tau_1,\tau_2,\ldots,\tau_m)$ for $\sigma_0$ such that $\tau_m=\{\{1,x\}, \{j,x_n\}\}$ for some object~$x$ in the preference list of agent~$1$ and some agent~$j$ that has $x_n$ in her preference list. There are two cases for $x$: either $x=x_1$ or $x\neq x_1$. If $x=x_1$, then using $x_p=x_1$ and $x_q=x_n$, the sequence~$\phi'$ as defined in~\cref{lem:swap-p-q} is a valid swap sequence for $\sigma'_0$ with $\sigma'_0=\sigma_0$. By the properties in \cref{lem:swap-p-q}\eqref{swap-p-q:preference-list}, graph~$D$ as constructed in Line~\ref{line:D} must contain a path from $n$ to $1$. Thus, by Line~\ref{line:path-n-1}, our algorithm returns yes. If $x\neq x_1$, implying that the preference list of agent~$1$ is $x_n \succ x_w \succ $ \fbox{$x_1$} for some object~$x_w$ with $x=x_w$, then $\phi$ has a swap~$\tau_r$ such that $\tau_r=\{\{1,x_1\}, \{k,x_w\}\}$ for some agent~$k$. By \cref{lem:swap-p-q} (using $x_p=x_1$ and $x_q=x_w$), the sequence~$\phi'=(\tau'_1,\ldots,\tau'_s)$ as defined in \cref{lem:swap-p-q} is a valid swap sequence for $\sigma'_0$ with $\sigma'_0=\sigma_0$.
Let $a_z$, $1\le z \le s$, be the agents such that $\{a_z,x_w\} \in \tau'_z$. By \cref{lem:swap-p-q}\eqref{swap-p-q:a1}, we know that $\tau'_1=\{\{w,x_w\},\{a_2,x_{a_2}\}\} = \{\{a_1,x_{a_1}\},\{a_2,x_{a_2}\}\}$. There are two cases for $a_2$: either $a_2=n$ or $a_2\neq n$. If $a_2=n$, then by $\sigma'_0$ and the properties in \cref{lem:swap-p-q}\eqref{swap-p-q:preference-list}, the sequence~$\phi'$ defines a directed path~$(a_1,a_2),(a_2,a_3),\ldots,(a_{s-1},a_s),(a_s,1)$ in graph~$D_1$ as defined in Line~\ref{line:D1} such that $(a_1,a_2) = (w,n)$, so that the condition from Line~\ref{line:D1-wn} holds. By \cref{lem:swap-p-q}\eqref{swap-p-q:no-xn}, no agent from $\{a_3,a_4,\ldots,a_s\}$ is involved in a swap from $\phi$ which includes $x_n$. By the above properties of $\tau_m$ and $\phi$, it follows that, using $x_p=x_w$ and $x_q=x_n$, the sequence~$\phi''=(\rho_1,\rho_2,\ldots,\rho_{t})$ as defined in \cref{lem:swap-p-q} is a valid swap sequence for $\sigma''_0$ with $\sigma''_0(1)=x_w$, $\sigma''_0(w)=x_1$, and $\sigma''_0(i)=\sigma_0(i)$ for all $i\notin \{1,w\}$. Recall that $\phi'$ contains exactly those swaps from $\phi_0$ which involve swapping object~$x_w$. Thus, it must hold that $\rho_1=\tau'_1$. Let $b_z$, $1\le z\le t$, be the agents such that $\{b_z,x_n\}\in \rho_z$. Then, $b_1=n$ and $b_2=w$, and no agent from $\{b_3,\ldots,b_t\}$ is from $\{a_3,\ldots,a_s\}$. Thus, by $\sigma''_0$ and \cref{lem:swap-p-q}\eqref{swap-p-q:preference-list}, we have that $(b_1,b_2),(b_2,b_3), \ldots, (b_{t-1},b_t), (b_t,1)$ is a directed path in graph~$D_2$ as defined in Line~\ref{line:D2} such that $(b_1,b_2)=(n,w)$. Indeed, by Line~\ref{line:1-w-n}, our algorithm returns yes. Now, we turn to the other case, namely $a_2\neq n$. Again, by the properties of $\tau_m$ and $\phi$, it follows that, using $x_p=x_w$ and $x_q=x_n$, the sequence~$\phi''=(\rho_1,\rho_2,\ldots,\rho_{t})$ as defined in \cref{lem:swap-p-q} is a valid swap sequence for $\sigma''_0$ with $\sigma''_0(1)=x_w$, $\sigma''_0(w)=x_1$, and $\sigma''_0(i)=\sigma_0(i)$ for all $i\notin \{1,w\}$. Let $b_z$, $1\le z\le t$, be the agents such that $\{b_z,x_n\}\in \rho_z$. We aim to show that no swap from $\phi$ that involves swapping object~$x_n$ will involve an agent~$a_z$, $1\le z\le s$, i.e., $\{a_1,\ldots,a_s\}\cap \{b_1,\ldots,b_t\}=\emptyset$. By $\sigma'_0$ and the properties in \cref{lem:swap-p-q}\eqref{swap-p-q:preference-list}, it follows that $\phi'$ defines a directed path~$(a_1,a_2),\ldots,(a_{s-1},a_s),(a_s,1)$ in $D_1$ (Line~\ref{line:D1}) such that $a_2 \neq n$. By \cref{lem:swap-p-q}\eqref{swap-p-q:preference-list} and \cref{lem:swap-p-q}\eqref{swap-p-q:no-xn}, no agent from $\{a_2,a_3,a_4,\ldots,a_s\}$ is involved in any swap from $\phi$ which includes object~$x_n$. Neither will agent~$a_1$ be involved in any swap from $\phi$ which includes object~$x_n$, because of the following: after swap~$\tau'_1$, agent~$a_1$ obtains object~$x_{a_2}$, which is not $x_n$, so if she were involved in swapping $x_n$, then she would obtain $x_n$ as her most preferred object and never give away $x_n$. Thus, indeed we have $\{a_1,\ldots,a_s\}\cap \{b_1,\ldots,b_t\}=\emptyset$.
By $\sigma''_0$ and \cref{lem:swap-p-q}\eqref{swap-p-q:preference-list}, we have that $(b_1,b_2),(b_2,b_3), \ldots, (b_{t-1},b_t), (b_t,1)$ is a directed path in graph~$D_3$ as defined in Line~\ref{line:D3} such that $(b_1,b_2)\neq (n,w)$. Indeed, by Line~\ref{line:1-no-w-n}, our algorithm returns yes. For the converse direction, if our algorithm returns yes, then one of the three Lines~\ref{line:path-n-1}, \ref{line:1-w-n}, and~\ref{line:1-no-w-n} returns yes. It is not hard to check that the corresponding constructed path(s) indeed define(s) a desired valid swap sequence. As for the running time, our algorithm constructs at most four directed graphs~$D, D_1, D_2$, and~$D_3$, each of them with $O(n)$~arcs. We only show how to construct $D$ in $O(n+m)$~time; the construction of $D_1$ is analogous, and graphs~$D_2$ and $D_3$ are derived from $D$. To construct $D=(V,F)$, we go through each edge~$\{i,j\}$ with $i\neq n$ and $j \neq n$ in the underlying graph and do the following in $O(1)$ time; we tackle agent~$n$ separately. \begin{compactitem} \item Check whether the preference list of~$i$ is $x_j \succ x_n \succ $\fbox{$x_i$} and~$j$ prefers~$x_n$ over~$x_j$. If so, we add the arc~$(i,j)$. \item Check whether the preference list of~$j$ is $x_i \succ x_n \succ $\fbox{$x_j$} and~$i$ prefers~$x_n$ over~$x_i$. If so, we add the arc~$(j,i)$. \end{compactitem} Now, we consider agent~$n$. For each object~$x_j$ (there are at most two of them) that agent~$n$ prefers to her initial object~$x_n$, we do the following in $O(m)$~time. \begin{compactitem} \item Check whether $\{n,j\} \in E$. \item Check whether $j$ prefers $x_n$ to $x_j$. \end{compactitem} We add the arc~$(n,j)$ to~$D$ only if both checks are positive. In this way, each vertex from $V\setminus \{n\}$ has at most two in-arcs and at most one out-arc, and vertex~$n$ has at most two out-arcs but no in-arcs. Thus, $|F| \in O(n)$. For each of the graphs~$D$, $D_2$, and~$D_3$, each with $O(n)$ arcs, checking whether the specific path stated in the corresponding line exists can be done in $O(n)$~time. In total, the algorithm runs in $O(n+m)$~time. \end{proof} \section{Paths}\label{sec:paths} In this section we prove that \RO{} on paths is solvable in~$O(n^4)$ time. This answers an open question of \citet{GouLesWil2017}. The proof consists of two phases. In the first phase, we analyze sequences of swaps on paths and observe a crucial structure regarding the possible edges along which a pair of objects can be traded. In the second phase, we use this structure to reduce \RO{} on paths to \textsc{2-SAT}. Let~$P = (\{1,2,\ldots, n\},\{\{i,i+1\}\mid 1\le i < n\})$ be a path, and w.l.o.g.\ let~$\sigma_0(n) = x$ (see \Cref{fig:exampleInput}). Note that if there were further agents~$n+1,\ldots$ to the right of the initial holder of~$x$, then their initially assigned objects could never be part of any swap that is necessary for~$x$ to reach~$I$: if such a swap were necessary, then there would be a swap that moves~$x$ away from~$I$, that is, to an agent with a higher index, and therefore the agent that gave away~$x$ would then possess an object that she prefers over~$x$ and would never accept~$x$ again. Thus, object~$x$ could never reach~$I$. In the following, ``an object~$y$ moves to the right'' means that object~$y$ is given to an agent with a higher index than the agent that currently holds it. An object~$z$ is ``to the left (resp.\ right) of some other object~$a$'' when the agent that initially holds object~$z$ has a smaller index than the agent that initially holds object~$a$.
\begin{figure} \centering \begin{tikzpicture}[scale=0.8, every node/.style={scale=0.9}] \node[circle,draw, label=below:1] at (0,0) (v1) {}; \node[circle,draw, label=below:2] at (1,0) (v2) {}; \node at (2,0) (v3) {$\cdots$}; \node[circle,draw, label=below:$I$] at (3,0) (v4) {}; \node[circle,draw, label=below:$I+1$] at (4,0) (v5) {}; \node[circle,draw, label=below:$I+2$] at (5,0) (v6) {}; \node at (6,0) (v7) {$\cdots$}; \node[circle,draw, label=below:$n-1$] at (7,0) (v8) {}; \node[circle,draw, label=below:$n$, label=above:$x$] at (8.5,0) (v9) {}; \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- (v4); \draw (v4) -- node[above] {1} (v5); \draw (v5) -- node[above] {2} (v6); \draw (v6) -- node[above] {$\cdots$} (v7); \draw (v7) -- (v8); \draw (v8) -- node[above] {$n - I$} (v9); \end{tikzpicture} \caption{An example of a path where the name of the agent is denoted below its vertex. The target object~$x$ is the initial object of~$n$ and is depicted above the corresponding vertex~$n$. The edges to the ``right'' of~$I$ are enumerated in order to define types of objects.\label{fig:exampleInput}} \end{figure} We start with a helpful lemma that will be used multiple times. It states that for each pair of objects there is at most one edge along which these two objects can be swapped in order to pass each other. \newcommand{\swappos}{ For each agent~$i$ and each two distinct objects~$w$ and~$y$, there exists at most one edge, denoted as $\{j,j+1\}$, such that for each sequence~$\xi$ of swaps the following statement holds: If after~$\xi$ agent~$i$ holds object~$w$, and if objects~$w$ and~$y$ are swapped in~$\xi$, then objects~$w$ and~$y$ are swapped over the edge~$\{j,j+1\}$. Deciding whether such an edge exists and computing it if it exists takes~$O(n)$ time. } \begin{lemma} \label{lem:swapPos} \swappos \end{lemma} \begin{proof} Let~$a$ be the agent that initially holds~$y$ and let~$c$ be the agent that initially holds~$w$. Assume without loss of generality that $a < c$. By the definition of rational trades, no agent takes an object back that she gave away before. Hence, if~$w$ and~$y$ are swapped at some point, then they need to travel ``towards and never away from each other'' on the path before the swap occurs. Thus, we only need to consider the set of agents~$M = \{d \mid a \leq d \leq c\}$ as potential candidates for~$j$ and~$j+1$. We first find an agent~$e$ that has to get object~$y$ before she can get object~$w$. We consider two cases: either~$i \in M$ or~$i \notin M$ (see \Cref{fig:casedistinction}). \noindent \textbf{Case~1: $i\notin M$.}~That is, $i < a$ because $w$ and $y$ are swapped before $w$ reaches $i$, and~$M$ contains all agents on a connected subpath including agent~$c$. Then,~$w$ needs to pass agent~$a$ at some point in order for~$w$ to reach~$i$. Let $e=a$. \noindent \textbf{Case~2: $i\in M$.} That is, agent~$i$ is initially between objects~$w$ and~$y$. Since we are only interested in sequences of swaps until agent~$i$ gets object~$w$, we can assume w.l.o.g.\ that~$j \geq i$. We assume further that agent~$i$ prefers~$w$ over~$y$, as she would otherwise not accept~$w$ after giving away~$y$. Thus, agent~$i$ is the agent~$e$ we are looking for. Next we will show that there is only one edge to the right of agent~$e$ where~$w$ and~$y$ can be swapped. Consider the agent~$b \in M$ such that all agents in the set~$M_b = \{d \mid e < d < b\}$ prefer~$w$ over~$y$ and~$b$ prefers~$y$ over~$w$.
Note that~$b$ cannot obtain object~$y$ before object~$w$ as she would not accept~$w$ at any future point in time and therefore~$w$ will never pass~$b$. Thus, it holds that~$j+1 \leq b$. Any agent in~$M_b$ prefers~$w$ over~$y$ and therefore would not agree to swap~$w$ for~$y$. Thus, it holds that~$j+1 \geq b$ and therefore~$j+1 = b$. Observe that the agent~$b$ can be computed in linear time and is always the only candidate for the agent~$j+1$ we are looking for. \end{proof} \begin{figure} \centering \begin{tikzpicture} \node[draw,circle, label=below:{$i$}, fill=white] at (0,0) (i) {}; \node[left = 0pt of i] {Case~1:~}; \node[draw,circle, right = 1ex of i] (dummy4) {}; \node[right = 1ex of dummy4] (dummy3) {$\cdots$}; \node[draw,circle, right = 1ex of dummy3] (dummy2) {}; \node[draw,circle, label=below:{$e=a$}, label=above:$y$, fill=black!50!white, right = 1ex*2 of dummy2] (v1) {}; \node[draw,circle, pattern=dots, right = 1ex*2 of v1] (v2) {}; \node[right = 1ex of v2] (v3) {$\cdots$}; \node[draw,circle, pattern=dots, right = 1ex of v3] (v4) {}; \node[draw,circle, label=below:{$b$}, fill=black, right = 1ex*2 of v4] (v5) {}; \node[draw,circle, fill=white, right = 1ex*2 of v5] (v6) {}; \node[right = 1ex of v6] (v7) {$\cdots$}; \node[draw,circle, label=below:$c$, label=above:$w$, right = 1ex of v7] (v8) {}; \draw (dummy4) -- (i); \draw (dummy3) -- (dummy4); \draw (dummy3) -- (dummy2); \draw (dummy2) -- (v1); \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- (v4); \draw (v4) -- (v5); \draw (v5) -- (v6); \draw (v6) -- (v7); \draw (v7) -- (v8); \end{tikzpicture} \begin{tikzpicture} \node[draw,circle, label=below:{$a$}, label=above:$y$, fill=white] at (0,0) (i) {}; \node[left = 0pt of i] {Case~2:~}; \node[draw,circle, right = 1ex of i] (dummy4) {}; \node[right = 1ex of dummy4] (dummy3) {$\cdots$}; \node[draw,circle, right = 1ex of dummy3] (dummy2) {}; \node[draw,circle, label=below:{$e=i$}, fill=black!50!white, right = 1ex*2 of dummy2] (v1) {}; \node[draw,circle, pattern=dots, right = 1ex*2 of v1] (v2) {}; \node[right = 1ex of v2] (v3) {$\cdots$}; \node[draw,circle, pattern=dots, right = 1ex of v3] (v4) {}; \node[draw,circle, label=below:{$b$}, fill=black, right = 1ex*2 of v4] (v5) {}; \node[draw,circle, fill=white, right = 1ex*2 of v5] (v6) {}; \node[right = 1ex of v6] (v7) {$\cdots$}; \node[draw,circle, label=below:$c$, label=above:$w$, right = 1ex of v7] (v8) {}; \draw (dummy4) -- (i); \draw (dummy3) -- (dummy4); \draw (dummy3) -- (dummy2); \draw (dummy2) -- (v1); \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- (v4); \draw (v4) -- (v5); \draw (v5) -- (v6); \draw (v6) -- (v7); \draw (v7) -- (v8); \end{tikzpicture} \caption{The two cases considered in the proof of \cref{lem:swapPos}. The agent colored in black, $b$, prefers~$y$ to~$w$, while ``dotted'' agents prefer~$w$ over~$y$. The agent~$e$ in the proof is colored gray and also prefers~$w$ over~$y$.} \label{fig:casedistinction} \end{figure} We next define the \myemph{type} of an object. The type of an object is represented by the index of the edge where the object can possibly be swapped with~$x$. \begin{definition}\label{def:types} Define the \emph{index} of each edge~$\{I+t-1,I+t\}$ to be $t$, $t\ge 1$. For each object~$y$, if $y$ and $x$ can possibly be swapped at some edge~(see \cref{lem:swapPos}), then let the \myemph{type} of~$y$ be the index of this edge. Otherwise, the \myemph{type} of~$y$ is~$0$.
\end{definition} \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.85, every node/.style={scale=0.9}] \node[circle,draw, label=below:1, label=above:$a$] at (0,0) (v1) {}; \node[circle,draw, label=below:2, label=above:$b$] at (1,0) (v2) {}; \node[circle,draw, label=below:$I$, label=above:$c$] at (2,0) (v3) {}; \node[circle,draw, label=below:4, label=above:$d$] at (3,0) (v4) {}; \node[circle,draw, label=below:5, label=above:$x$] at (4,0) (v5) {}; \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- node[above] {1} (v4); \draw (v4) -- node[above] {2} (v5); \end{tikzpicture} \begin{tabular}{llll} $1:b \succ$ \fbox{$a$}\,, & $2: c \succ a \succ$ \fbox{$b$}\,, & $I:x \succ a \succ$ \fbox{$c$}\,,\\ $4: a \succ c \succ x \succ$ \fbox{$d$}\,, & $5: d \succ$ \fbox{$x$}\,. \end{tabular} \caption{An example for types (\cref{def:types}). Object~$b$ can never be swapped with~$x$ before $x$ reaches agent~$I$, and therefore its type is~$0$. The type of object~$d$ is~$2$ and the type of~$a$ and~$c$ is~$1$.} \label{fig:types} \end{figure} \noindent \Cref{fig:types} shows an example of types. Note that it is not possible that the edge found by \cref{lem:swapPos} has no index as we can assume that agent~$I$ prefers object~$x$ over all other objects. Clearly, in every sequence of swaps in which agent~$I$ gets~$x$, object~$x$ is swapped exactly once with an object of each type~$\geq 1$. \noindent Assume that $z$ is the object of type~$1$ that agent~$I$ swaps last to get object~$x$. Observe that since the underlying graph is a path, each object that initially starts between objects~$x$ and~$z$ must be swapped with exactly one of these two objects in every sequence of swaps that ends with agent~$I$ exchanging~$z$ for~$x$. In the algorithm we will try all objects of type~$1$ and check whether at least one of them yields a solution. This only adds a factor of~$n$ to the running time. Observe that object~$z$ needs to reach~$I$ from the left and hence \cref{lem:swapPos} applies for~$z$ and~$I$. We will use this fact to show that there are at most two possible candidate objects of each type. For this, we first define the \myemph{subtype} of an object. Roughly speaking, the subtype of~$y$ encodes whether~$y$ is left or right of the object of the same type that shall move to the right. \begin{definition}\label{def:subtypes} For each object~$y$ of type~$\alpha > 1$, let~$e$ be the edge where~$y$ and~$z$ can possibly be swapped. If $e$ does not exist, then set the type of all other objects of type~$\alpha$ to~$0$ and set the subtype of~$y$ to~$\ell$. If $e$ exists, then let~$h$ be the number of edges between the agent~$a$ initially holding~$y$ and the edge~$e$; if~$e$ is incident to~$a$, then let~${h=0}$. If $\alpha \leq h + 1$, then the \myemph{subtype} of $y$ is ``$r$'' (for right); otherwise the \myemph{subtype} of $y$ is ``$\ell$'' (for left). \end{definition} \Cref{fig:subtypes} shows an example of subtypes. Notice that if the edge~$e$ exists, then it is unique as stated in \cref{lem:swapPos}.
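The proof of \cref{lem:swapPos} is constructive, and together with \cref{def:types} it yields a direct way to compute types. The following Python sketch mirrors the case distinction of that proof; all names are ours, \texttt{holder} maps each object to the agent initially holding it, \texttt{prefs} is as in the earlier sketch, and for brevity we only cover the orientation $a < c$ and $i \le c$ used in the proof (the mirrored case is symmetric).

\begin{verbatim}
# Illustrative sketch of Lemma (swapPos): the unique edge {j, j+1} over
# which objects w and y can be swapped before agent i receives w, or
# None if no such edge exists.
def swap_edge(i, w, y, holder, prefs):
    def prefers(d, p, q):
        lst = prefs[d]
        return p in lst and q in lst and lst.index(p) < lst.index(q)

    a, c = holder[y], holder[w]
    assert a < c and i <= c    # orientation as in the proof
    if i < a:                  # Case 1: w must pass agent a to reach i
        e = a
    else:                      # Case 2: agent i lies between y and w
        if not prefers(i, w, y):
            return None        # i would never take w after giving away y
        e = i
    # Scan right of e for the first agent b preferring y over w; all
    # agents strictly between e and b prefer w over y and pass w along.
    for b in range(e + 1, c + 1):
        if prefers(b, y, w):
            return (b - 1, b)
    return None                # nobody right of e accepts y for w
\end{verbatim}

Under this encoding, the type of an object~$y$ is $j+1-I$ whenever the returned edge~$\{j,j+1\}$ lies to the right of agent~$I$, and~$0$ otherwise.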
\begin{figure}[t] \centering \begin{tikzpicture}[scale=0.9, every node/.style={scale=0.9}] \node[circle,draw, label=above:$z$, label=below:$1$] at (0,0) (v1) {}; \node[circle,draw, label=above:$a$, label=below:$2$] at (1,0) (v2) {}; \node[circle,draw, label=above:$b$, label=below:$3$] at (2,0) (v3) {}; \node[circle,draw, label=above:$c$, label=below:$4$] at (3,0) (v4) {}; \node[circle,draw, label=above:$d$, label=below:$5$] at (4,0) (v5) {}; \node at (5,0) (v6) {$\cdots$}; \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- (v4); \draw (v4) -- (v5); \draw (v5) -- (v6); \end{tikzpicture} \begin{tabular}{l@{\;\;}l@{\;\;}l@{\;\;}l} $1: a \succ$ \fbox{$z$}\,, & $2: c \succ b \succ z \succ$ \fbox{$a$}\,, & $3: z \succ c \succ$ \fbox{$b$}\,,\\ $4: d \succ z \succ b \succ$ \fbox{$c$}\,, & $5: z \succ$ \fbox{$d$}\,. \end{tabular} \caption{An example for subtypes (\cref{def:subtypes}). Preference lists only contain the objects that are depicted in the picture. Assume that objects~$a,b,c,$ and~$d$ are all of type~$2$. If~$c$ has to be swapped with~$z$ at any point, then it is required to move once to the left before being swapped with~$z$ as agent~$3$ prefers~$z$ over~$c$. Thus, the value~$h$ as defined in \cref{def:subtypes} is $1$. One can verify that $c$ has subtype~$r$. Objects~$a$,~$b$, and~$d$ have subtype~$\ell$.} \label{fig:subtypes} \end{figure} The following auxiliary result helps to identify at most two relevant objects of each type (one of each subtype). \newcommand{\objectsmovingaround}{ Consider an object~$y$ that is swapped with object~$z$ before object~$z$ reaches agent~$I$, let $\{i-1,i\}$ be the edge where objects~$z$ and~$y$ are swapped, and let~$j$ be the agent that initially holds~$y$. Then, for each object~$w$ it holds that \begin{inparaenum}[(i)] \item\label{prop:swapleft} if~$w$ is swapped with~$y$ before~$y$ is swapped with~$z$, then~$w$ has a type from $\{2,3,\ldots,j-i+1\}$, and \item\label{prop:smalltype} if~$w$ has a type from $\{2,3,\ldots,j-i+1\}$ and is swapped with object~$x$, then it has to be swapped with object~$y$ before~$y$ and~$z$ can be swapped. \end{inparaenum} } \begin{lemma}\label{lem:interval}\objectsmovingaround \end{lemma} \begin{proof} Let~$y$ be an object that is initially held by some agent~$j$ and that is swapped with~$z$ before~$x$ and~$z$ are swapped. Then,~$y$ has to be moved to the left and we can use \cref{lem:swapPos} to compute a unique edge~$\{i-1,i\}$ at which~$y$ and~$z$ are swapped. Throughout this proof we will implicitly use the fact that the relative order of all objects that move to the right can never change as otherwise an agent would regain an object she already gave away before and hence she would have made an irrational trade. Statement~\eqref{prop:swapleft}: Let~$w$ be an object that is swapped with~$y$ before~$y$ is swapped with $z$. Then~$w$ has to be moved to the right. Suppose towards a contradiction that~$w$ has type at least~$j-i+2$; note that no object can have type~$1$ as object~$z$ will be the type-$1$ object. Then, either there are at least~$j-i$ other objects (of types~$2,3,\ldots, j-i+1$) that initially start between~$y$ and~$z$ and that are moved to the right, or there is a type~$\alpha \in [2,j-i+1]$ such that no object of type~$\alpha$ that is moved to the right initially lies between~$y$ and~$z$. In the former case, there are at least~$j-i+1$ objects that are initially between~$y$ and~$z$ and that move to the right and hence~$y$ is moved to agent~$i-1$ before it can be swapped with~$z$, a contradiction.
In the latter case, note that after objects~$w$ and~$x$ are swapped, there is some~$\alpha \in [2,j-i+1]$ such that no object of type~$\alpha$ that is moved to the right lies between~$x$ and~$z$, and hence~$x$ cannot be swapped over the edge~$\{I+\alpha-1,I+\alpha\}$, again a contradiction. Thus,~$w$ has a type of at most~$j-i+1$. Statement~\eqref{prop:smalltype}: Let~$w$ have a type~$\alpha \in [2,j-i+1]$ and let~$w$ be moved to the right. Suppose towards a contradiction that~$w$ and~$y$ are not swapped. Since~$y$ is moved to the left and~$w$ is moved to the right,~$w$ has to start to the right of~$y$. Since the relative order of objects moving to the right cannot change and since object~$x$ is swapped with all objects moving to the right in decreasing order of their types, it follows that all objects that move to the right are initially ordered by their type. Hence, there are only objects of type~$\beta \in [2,\alpha-1]$ that initially start between~$w$ and~$z$ and that move to the right. Since~$y$ is initially left of~$w$, it holds that there are at most~$j-i-1$ objects that are initially between~$y$ and~$z$ and that are moved to the right. Thus,~$y$ can only reach agent~$i+1$ before it has to be swapped with~$z$. Since the edge computed by \cref{lem:swapPos} is unique,~$y$ and~$z$ cannot be swapped over~$\{i,i+1\}$, a contradiction. Thus,~$w$ and~$y$ have to be swapped before~$y$ and~$z$ can be swapped. \end{proof} \noindent Subtypes help to exclude all but two objects of each type. \begin{lemma} \label{lem:type} Given objects~$x$ and~$z$, there is an~$O(n^2)$-time preprocessing that excludes all but at most two objects of each type~$\alpha \geq 2$ as potential candidates for being swapped with~$x$. \end{lemma} \begin{proof} Consider a type~$\alpha \geq 2$ and all objects of type~$\alpha$. Compute the subtype of each of these objects. Exactly one of them is moved to the right and all others have to be swapped with~$z$ at some point. From \cref{lem:swapPos}, we know that all the objects that are moved to the left have a specific edge where they can possibly be swapped with~$z$. If such an edge does not exist for some object, then we know that this object has to move to the right and we can change the type of all other objects of type~$\alpha$ to~$0$. Note that there is no solution if such an edge does not exist for multiple objects of the same type. If such an edge exists for each object of type~$\alpha$, then we count the number~$h$ of swaps to the left that are needed for each object~$y$ of type~$\alpha$ to reach the edge at which it can be swapped with~$z$. Note that by \cref{lem:interval} each of these swaps happens with an object of type~$\beta \in [2,h+1]$. If these types include $\alpha$, that is, $\alpha \leq h+1$, then~$y$ has subtype~$r$ and otherwise it has subtype~$\ell$. If the subtype of~$y$ is~$r$ and if~$y$ is moved to the left, then by \cref{lem:interval}~$y$ has to be right of the object of type~$\alpha$ that moves to the right. Again by \cref{lem:interval}, if~$y$ is moved to the left and has subtype~$\ell$, then~$y$ has to be left of the object of type~$\alpha$ that is moved to the right. Consider the case where the objects of a given type are not ordered by their subtype (from left to right:~$\ell,\ell,\ldots,\ell,r,r,\ldots,r$). Then, for each object~$w$ of type~$\alpha$, there exists an object of type~$\alpha$ and subtype~$r$ to the left or an object of type~$\alpha$ and subtype~$\ell$ to the right.
In \cref{fig:subtypes}, for~$w = d$ there is~$c$ with subtype~$r$ left of~$w$ and for~$w \in \{a,b,c\}$ there is object~$d$ of subtype~$\ell$ right of~$w$. Thus, if we try to send~$w$ to the right, then the number of swaps to the left of some other object of type~$\alpha$ (objects~$c$ or~$d$ in \cref{fig:subtypes}) does not match the number of swaps needed to reach the edge where the objects can be swapped with~$z$. Hence, there is no solution. Now consider the case where the objects are ordered by subtype as indicated above. By the same argument as above there are only two possible objects of type~$\alpha$ that can possibly travel to the right: the last object of subtype~$\ell$ and the first object of subtype~$r$. We can therefore set the type of all other objects of type~$\alpha$ to~$0$. Let~$n_\alpha$ be the number of objects of type~$\alpha$. Since the subtype for each object of type~$\alpha$ can be computed in~$O(n)$ time, we obtain that the described preprocessing takes~$O(n_\alpha \cdot n)$ time for type~$\alpha$. After having computed the subtype of each object of type~$\alpha$, we iterate over all these objects and find the two specified objects or determine that the objects are not ordered by subtype in~$O(n)$ time. Hence the overall running time is in~$O(\sum_{\alpha>1} (n_\alpha \cdot n)) \subseteq O(n^2)$. The inclusion holds since each object (except for~$x$) has exactly one type. \end{proof} We are now in a position to present the heart of our proof. We will show how to choose an object of each type~$\alpha \geq 1$ such that moving those objects to the right and all other objects to the left leads to a swap sequence such that agent~$I$ gets object~$x$ in the end. Once we have chosen the correct objects, we can compute the final position of each object in linear time and then use the fact that any swap sequence that only sends objects in the ``correct'' direction is a valid sequence, since the relative order of all objects that travel to the left (respectively to the right) can never change. Such a selection leads to a solution if and only if for each pair of objects such that the right one moves to the left and the left one moves to the right, the two endpoints of the edge where they are to be swapped can agree on this swap. We mention that these insights were also used in Algorithm~1 by \citet{GouLesWil2017}. We will next focus on objects of type~$0$. Using \cref{lem:swapPos}, we can compute for each object~$y$ of type~$0$ the edge where~$y$ and~$z$ can be swapped. If such an edge does not exist, then there is no solution. Hence, we can again compute the number~$h$ of objects between~$y$ and~$z$ that need to move to the right. If any object~$w$ which is to the right of~$y$ has a type~$\beta \leq h+1$ or any object~$w'$ to the left of~$y$ has a type~$\beta' > h + 1$, then by \cref{lem:interval} these objects~$w$ and~$w'$ cannot be moved to the right and hence we can set their types to~$0$. Now suppose that there is an object~$y$ of type~$0$ between two objects~$v,w$, both of type~$\alpha$. Then, we can set the type of either~$w$ (if~$\alpha \leq h+1$) or~$v$ (if~$\alpha > h+1$) to~$0$. Hence, there is no object of type~$0$ between two objects of the same type and we are guaranteed that~$z$ can be swapped with all objects of type~$0$ regardless of which objects of each type are moved to the right.
Hence, it remains to study swaps (i) of~$z$ with objects of type at least~$2$ that move to the left, (ii) of objects that move to the right and objects of type at least~$2$ that move to the left, and (iii) of objects of type~$0$ and objects moving to the right. Before doing so, we need to define the last ingredient for our proof: \myemph{blocks}. \begin{definition}\label{def:blocks} A \myemph{block} is a minimal subset~$B\subseteq X$ of objects that contains all objects of all types in some interval~$[\alpha,\beta]$ with~$2 \le \alpha \le \beta$ such that all objects in~$B$ are initially held by agents on a subpath of the input path (see \cref{fig:blocks}). \end{definition} \begin{figure} \centering \begin{tikzpicture}[scale=0.75, every node/.style={scale=1}] \node at (-1,0) (v0) {$\cdots$}; \node[circle,draw, label=above:$z$] at (0,0) (v1) {}; \node[circle,draw, label=above:$2_\ell$] at (1,0) (v2) {}; \node[circle,draw, label=above:$3_\ell$] at (2,0) (v3) {}; \node[circle,draw, label=above:$2_r$] at (3,0) (v4) {}; \node[circle,draw, label=above:$4_\ell$] at (4,0) (v5) {}; \node[circle,draw, label=above:$3_r$] at (5,0) (v6) {}; \node[circle,draw, label=above:$4_r$] at (6,0) (v7) {}; \node[circle,draw, label=above:$0$] at (7,0) (v8) {}; \node[circle,draw, label=above:$5_\ell$] at (8,0) (v9) {}; \node[circle,draw, label=above:$5_r$] at (9,0) (v10) {}; \node at (10,0) (v11) {$\cdots$}; \draw (v0) -- (v1); \draw (v1) -- (v2); \draw (v2) -- (v3); \draw (v3) -- (v4); \draw (v4) -- (v5); \draw (v5) -- (v6); \draw (v6) -- (v7); \draw (v7) -- (v8); \draw (v8) -- (v9); \draw (v9) -- (v10); \draw (v10) -- (v11); \draw (0.5,1) -- (6.5,1) -- (6.5,-0.5) -- (0.5,-0.5) -- cycle; \draw (7.5,1) -- (9.5,1) -- (9.5,-0.5) -- (7.5,-0.5) -- cycle; \end{tikzpicture} \caption{An example of blocks~(\cref{def:blocks}). The left block contains all objects of type~$2,3$ and~$4$ and the right block contains the two objects of type~$5$.} \label{fig:blocks} \end{figure} We first prove that blocks are well defined and that each object of type~$\eta \geq 2$ is contained in exactly one block. \begin{lemma} \label{lem:block} Each object of type~$\eta \geq 2$ is contained in exactly one block; if there is no object of a higher type to its left, then this object has the highest type in its block; and all blocks can be computed in linear time. \end{lemma} \begin{proof} For any type~$\delta$, let~$\delta_\ell$ be the object of type~$\delta$ and subtype~$\ell$ and analogously let~$\delta_r$ be the object of type~$\delta$ and subtype~$r$. Recall that~$\delta_\ell$ is left of~$\delta_r$. If there is only one object~$w$ of type~$\delta$, then we say that~$\delta_\ell = \delta_r = w$. Observe that the objects in a block are initially held by agents of a subpath of the input path and that the subpaths of different blocks do not intersect. The ``first'' or ``leftmost'' block with respect to this subpath has to start with~$2_\ell$ as otherwise the leftmost object of the block (again with respect to its subpath) could never be chosen and hence its type could be set to~$0$. Analogously, the objects of a block with interval~$[\alpha,\beta]$ have to start with~$\alpha_\ell$ and end with~$\beta_r$, as otherwise the leftmost (respectively rightmost) object could never be chosen since there is no object of type~$\alpha$ (of type~$\beta$) to its left (respectively to its right) and its type is larger than~$\alpha$ (less than~$\beta$).
We will start with the block that contains type~$2$ in its interval and show that there is a unique type~$\gamma$ such that the block has interval~$[2,\gamma]$. Applying the same argumentation iteratively with~$\gamma + 1$, we show that each object of type~$\eta > 2$ is contained in exactly one block: Consider the smallest type~$\alpha$ that is not yet shown to be in a block and consider the objects~$\alpha_\ell$ and~$\alpha_r$. If these two objects are the same or are initially held by adjacent agents, then these object(s) form a block with interval~$[\alpha,\alpha]$ and since blocks are minimal sets of objects, they are not part of any other block. If initially there is an object between~$\alpha_\ell$ and~$\alpha_r$, then it cannot be an object of type~$\eta < \alpha$. This holds since it cannot be an object of type~$0$ as shown above, it cannot be~$z$ as~$z$ is the leftmost object that we consider, and by assumption it cannot be an object of type~$\eta \in [2,\alpha-1]$. The right neighbor of~$\alpha_\ell$ therefore has to be~$(\alpha+1)_\ell$ since if it were of type~$\eta > \alpha+1$, then it could never be chosen as there is no object of type~$\alpha +1$ to its left and hence its type could be set to~$0$. Analogously, note that~$(\alpha+1)_r$ has to initially be right of object~$\alpha_r$ as otherwise object~$\alpha_r$ could never be chosen. Hence we can continue with the objects~$(\alpha+1)_\ell$ and~$(\alpha+1)_r$. Again, there can be no objects of type~$\eta < \alpha$ between them. If there are only objects of type~$\alpha$ between them, then the block has interval~$[\alpha,\alpha+1]$ and otherwise we can continue with objects~$(\alpha+2)_\ell$ and~$(\alpha+2)_r$ and so on. Note that this chain stops exactly at the first type~$\eta$ where all objects of a higher type than~$\eta$ are initially right of~$\eta_r$ and this happens at the latest at~$\delta_r$, where~$\delta$ denotes the largest type. Since we need only a constant amount of computation for each object, all blocks can be computed in linear time. \end{proof} We next prove that for each block there are only two possibilities to choose objects of each type in the block that can lead to a solution. We start with an intermediate lemma. \newcommand{\ftwol}{% If in a block with interval~$[\alpha,\beta]$, we decide for some type~$\gamma \in [\alpha,\beta]$ to send object~$\gamma_r$ to the right, then we need to send all objects~$\delta_r$ of type~$\delta \in [\gamma,\beta]$ to the right. % } \begin{lemma}[$\star$] \label{lem:f2l} \ftwol \end{lemma} \begin{proof} Let~$B$ be a block with interval~$[\alpha,\beta]$ and let~$\gamma\in[\alpha,\beta]$ be some type and assume that~$\gamma_r$ is to be sent to the right. By \cref{lem:block}, we know that unless~$\gamma = \beta$, it holds that~$(\gamma+1)_\ell$ is to the left of~$\gamma_r$ and can therefore not be sent to the right. Thus, we also have to move~$(\gamma+1)_r$ to the right. This argument applies iteratively for all types in~$[\gamma,\beta]$. \end{proof} Based on \Cref{lem:f2l}, we prove the following. \newcommand{\twopos}{% There are at most two selections of objects in a block with interval~$[\alpha,\beta]$ that can lead to~$I$ getting~$x$. These selections can be computed in~$O(n\!\cdot\! (\beta-\alpha+1))$~time.% } \begin{lemma} \label{lem:2pos} \twopos \end{lemma} \begin{proof} Let~$B$ be a block with interval~$[\alpha,\beta]$. There are only two possibilities for type~$\alpha$: Either~$\alpha_\ell$ is moved to the right or~$\alpha_r$ is moved to the right.
If~$\alpha_r$ is moved to the right, then \cref{lem:f2l} states that we need to move~$\delta_r$ to the right for all~$\delta \in [\alpha,\beta]$. If we want to move~$\alpha_\ell$ to the right, then we know the final destination of~$\alpha_\ell$ and that~$\alpha_\ell$ and~$\alpha_r$ have to swap at some point. We can therefore use \cref{lem:swapPos} to compute the edge where~$\alpha_\ell$ and~$\alpha_r$ are swapped. From this we can compute the number~$h$ of objects between~$\alpha_\ell$ and~$\alpha_r$ that have to be moved to the right and hence we know that they have to be of types~$\alpha+1,\alpha+2,\ldots,\alpha+h$. Note that no object of subtype~$r$ between~$\alpha_\ell$ and~$\alpha_r$ can be moved to the right. Thus, there are two possibilities: The number~$h$ equals the number of objects of subtype~$r$ between~$\alpha_\ell$ and~$\alpha_r$ or not. In the former case we know that~$h$ objects of subtype~$\ell$ have to be moved to the right and then an object of subtype~$r$ (the respective object of subtype~$\ell$ is to the left of~$\alpha_r$). By \cref{lem:f2l}, all remaining objects of subtype~$r$ have to be moved to the right. In the latter case we can consider the highest type~$\eta$ that occurs between~$\alpha_\ell$ and~$\alpha_r$. If~$\eta = \beta$, then we only have to move objects of subtype~$\ell$ to the right and otherwise we can next look at the objects between~$\eta_\ell$ and~$\eta_r$ and apply the very same argument as before. Thus, there is only the choice at the very beginning whether to move~$\alpha_\ell$ or~$\alpha_r$ to the right. One possibility is to only move objects of subtype~$r$ to the right and the other can be computed in~$O(n \cdot (\beta-\alpha+1))$ time as there are~$O(\beta-\alpha+1)$ types in~$B$ and for each type we can compute the possible swap position in~$O(n)$ time. All other computations can be done in constant time per object. \end{proof} For both possible selections (see \Cref{lem:2pos}), we can determine in~$O((\beta-\alpha+1)^2 \cdot n)$ time whether this selection is \myemph{consistent}, i.e., any pair of objects in this block that needs to be swapped at some point can in fact be swapped, by computing for each pair of objects where they should be swapped in~$O(n)$ time using \cref{lem:swapPos} (observe that we know the final destination of the object moving to the right) and checking whether the two endpoints of this edge can agree on this. We further require that this selection is not in conflict with objects of type~$0$ in the sense that all of the objects that we move to the right can be swapped with all objects of type~$0$. Observe that objects of type~$0$ are initially not located between objects of the same type and therefore we know exactly where the objects of the selection and the objects of type~$0$ are swapped, independently of the selections for other blocks. By the definition of types, we know that~$x$ can always be swapped with the objects we moved to the right. Consider a possible selection of objects from some block~$B$ to move to the right. All other objects are moved to the left and hence have to be swapped with~$z$ at some edge. Since we know the number of objects to the left of~$B$ that are moved to the right (we do not know which objects these are but we know their number and types by the definition of blocks), we can compute for each of them the edge where they need to swap with~$z$.
If the swap of the considered object and~$z$ is not rational for the two endpoints of this edge, then the selection can never lead to a situation where~$I$ swaps~$z$ for~$x$ and we can therefore ignore this selection. Thus, it only remains to find a selection for each block such that the objects that are moved to the right and the objects that are moved to the left can be swapped (if the former one is initially to the left of the latter one). We say that these selections are \myemph{compatible}. \begin{definition}\label{def:compatible} Let~$B$ and $C$ be two blocks with intervals~$[\alpha,\beta]$ and~$[\gamma,\delta]$, respectively, and let~$\beta < \gamma$. Let~$s_B$ (resp.~$s_C$) be a selection of objects from~$B$ (resp.~$C$) to move to the right. We say that~$s_B$ and~$s_C$ are \myemph{compatible} if for all~$b \in s_B$ and all~$c \in C \setminus s_C$, the swap of~$b$ and~$c$ is rational for the two agents at the (unique) position where~$b$ and~$c$ can be swapped. Otherwise, we say that~$s_B$ and~$s_C$ are \myemph{in conflict}. \end{definition} \noindent Observe that we can compute a unique pair of agents that can possibly swap~$b$ and~$c$ since we know how many objects between~$b$ and~$c$ are moved to the right. This number is the sum of the number of objects in~$s_B$ right of~$b$, the number of objects in~$s_C$ left of~$c$, and $\gamma - \beta - 1$. Hence, computing whether these two selections are compatible takes~$O(|B| \cdot |C| \cdot n)$ time as the agents can be computed in constant time per pair of objects and checking whether these agents can agree on swapping takes~$O(n)$ time. It remains to find a selection for each block such that all of these selections are pairwise compatible. We solve this problem using a reduction to 2-SAT which is known to be linear-time solvable~\cite{APT79}. \begin{theorem} \label{thm:path} \RO{} on paths can be solved in~$O(n^4)$ time. \end{theorem} \begin{proof} For each object~$z$ of type~$1$ (there are $O(n)$ many), we do the following. First, compute the type, subtype and block of each object and use \cref{lem:2pos} to compute two possible selections for each block in overall~$O(n^2)$ time. Second, compute in~$O(n^3)$ time which pairs of selections are compatible with each other. Third, check in~$O(n^3)$ time whether these selections are consistent. If some selection~$s$ for a block~$B$ is not consistent or if there is some other block~$C$ such that~$s$ is not compatible with either selection for~$C$, then we know that we have to take the other possible selection~$s'$ for~$B$ and can therefore ignore all selections that are in conflict with~$s'$. If this rules out some selection, then we can repeat the process. After at most~$n$ rounds, each of which takes only~$O(n)$ time, we arrive at a situation where there are exactly two consistent selections for each block and the task is to find a set of pairwise compatible selections that include a selection for each block. We finally reduce this problem to a 2-SAT formula. We start with a variable~$v_B$ for each block~$B$ which is set to true if we move all objects of subtype $r$ to the right and false otherwise. For each pair~$s,s'$ of selections that are in conflict with one another, let~$B$ be the block of~$s$ and~$C$ be the block of~$s'$. Without loss of generality, let~$s$ and~$s'$ be the selections that are represented by~$v_B,v_C$ being set to true (otherwise swap~$\lnot v_B$ with~$v_B$ or~$\lnot v_C$ with~$v_C$ in the following clause).
Since we cannot select~$s$ and~$s'$ at the same time, we add a clause~$(\lnot v_B \lor \lnot v_C)$ to our 2-SAT formula. \noindent Observe that if there is a set of pairwise non-conflicting selections, then the 2-SAT formula is satisfied by the corresponding assignment of the variables, and if the formula is satisfied, then this assignment corresponds to a solution to the original \RO-instance. Since 2-SAT can be solved in linear time~\cite{APT79} and the constructed formula has~$O(n^2)$ clauses of constant size, our statement follows. \end{proof} \noindent We conclude by conjecturing that the case when the underlying graph is a cycle can also be solved in polynomial time. The idea is similar to the case of a path. The main difference is that some objects may be swapped with~$x$ twice. Since we ``guess'' the object~$z$ that is last swapped with~$x$ (and we can also ``guess'' the moving direction of~$x$ in a similar fashion), we can compute the first edge where~$x$ and~$z$ are swapped. This determines the number~$k$ of objects which initially start between~$x$ and~$z$ and are also required to swap twice with~$x$ in any solution. We then apply the same type-based arguments as in the proof for paths, but we incorporate the additional information that the objects of types~$1,\ldots,k+1$ that are swapped with~$x$ are also the objects with the last (largest)~$k+1$ types that are swapped with~$x$. \section{Preferences of Length at Most Four}\label{sec:complete-graphs} In this section we investigate the case where we do not impose any restriction on the underlying social network, i.e., it is a complete graph. We find that \RO{} remains NP-complete in this case. This implies that the computational hardness of the problem does not stem from restricting the possible swaps between agents by an underlying social network. Moreover, the hardness holds even if each agent has at most four objects in her preference list. To show NP-hardness, we reduce from a restricted NP-complete variant of the \textsc{3-SAT}{} problem~\cite{Tovey84}. In this variant, each clause has either $2$ or $3$ literals, and each variable appears once as a negative literal and either once or twice as a positive literal. We note that in the original NP-hardness reduction by \citet{GouLesWil2017}, the lengths of the preference lists are unbounded. \newcommand{\thmnpclengthfour}{ \RO{} is NP-complete on complete graphs, even if each preference list has length at most four. } \begin{theorem}\label[theorem]{thm:NP-c-length-4} \thmnpclengthfour \end{theorem} \begin{proof} We only focus on the hardness reduction as containment in NP is shown in~\cref{prop:RO-in-NP}. We reduce from the restricted \textsc{3-SAT}{} variant mentioned in the beginning of the section. The general idea of the reduction is to introduce for each literal of a clause a pair of \emph{private} clause agents through which the target object can pass if the corresponding literal is set to \texttt{true}, and to introduce for each variable some variable agents to ensure that no two pairs of private clause agents whose corresponding literals are complementary pass the target object along in the same sequence of swaps. In this way, we can identify a satisfying truth assignment if and only if there is a sequence of swaps that makes the target object reach our agent.
Let $\phi=(\mathcal{V}, \mathcal{C})$ be an instance of the restricted \textsc{3-SAT}{} problem with variables~$\mathcal{V}=\{v_1,\ldots, v_n\}$ and clauses~$\mathcal{C}=\{C_1,\ldots, C_m\}$. For each variable~$v_i\in \mathcal{V}$, let $\ensuremath{\mathsf{occ}}(i)$ be the number of occurrences of variable~$v_i$ (note that $\ensuremath{\mathsf{occ}}(i)\in\{2,3\}$), let $\nu(i)$ denote the index of the clause that contains the negative literal~$\ensuremath{\overline{v}}_i$, and let $\pi_1(i)$ and $\pi_2(i)$ be the indices of the clauses with $\pi_1(i) < \pi_2(i)$ that contain the positive literal~$v_i$; if $v_i$ appears only twice in $\phi$, then we simply neglect~$\pi_2(i)$. Now, we construct an instance of \RO{} as follows. \myparagraph{Agents and Initial Assignment~$\sigma_0$.} For each variable~$v_i\in \mathcal{V}$, introduce $\ensuremath{\mathsf{occ}}(i)\!-\!1$ \myemph{variable agents}, denoted as $X_i^{z}$ with initial objects~$x_i^{\ensuremath{\mathsf{occ}}(i)-z}$, $z\in \{1,\ldots, \ensuremath{\mathsf{occ}}(i)\!-\!1\}$. For each clause~$C_j \in \mathcal{C}$, introduce $2|C_j|+1$ \myemph{clause agents}, denoted as $A_j$, $B_j^{z}$, and $D_j^{z}$, $z\in \{1,\ldots, |C_j|\}$, where $|C_j|$ denotes the number of literals contained in~$C_j$. The initial objects of $B^z_j$ and $D_j^{z}$ are $b_j^z$ and $d_j^z$, respectively. The initial object of $A_1$ is our target object~$x$ and the initial object of $A_{j}$, $j\ge 2$, is $a_{j-1}$. Finally, our target agent~$I$ initially holds object~$a_m$. \myparagraph{Preference Lists.} For each clause~$C_j\in \mathcal{C}$, we use an arbitrary but fixed order of the literals in $C_j$ to define a bijective function~$\ensuremath{f}_j \colon C_j \to \{1,\ldots, |C_j|\}$, which assigns to each literal contained in $C_j$ a distinct number from $\{1,\ldots, |C_j|\}$. \begin{compactenum}[(1)] \item For each variable~$v_i\in \mathcal{V}$, let $j=\nu(i)$, $j'=\pi_1(i)$ (and $j''=\pi_2(i)$ if~$\ensuremath{\mathsf{occ}}(i)=3$) and do the following: \begin{compactenum}[(i)] \item If $\ensuremath{\mathsf{occ}}(i)=2$, then $X_i^1$ has preference list \begin{align*} d_{j'}^{\ensuremath{f}_{j'}(v_i)} \succ d_{j}^{\ensuremath{f}_{j}(\ensuremath{\overline{v}}_i)} \succ \fbox{$x_i^1$}\,. \end{align*} \item If $\ensuremath{\mathsf{occ}}(i) = 3$, then the preference lists of $X_i^1$ and $X_i^2$ are \begin{align*} X^1_i \colon & d_{j'}^{\ensuremath{f}_{j'}(v_i)} \succ x_i^1 \succ d_{j}^{\ensuremath{f}_{j}(\ensuremath{\overline{v}}_i)} \succ \fbox{$x_i^2$}\,,\\ X^2_i \colon & d_{j''}^{\ensuremath{f}_{j''}(v_i)} \succ x_i^2 \succ~\fbox{$x_i^1$}\,. \end{align*} \end{compactenum} \item We define an auxiliary function to identify an object for each clause~$C_j\in \mathcal{C}$ and each literal~$\ell\in C_j$: \begin{align*} \tau(C_j,\ell) \coloneqq \begin{cases} x_i^1, &\text{if } \ensuremath{\mathsf{occ}}(i)=2 \text{ and } \ell=\ensuremath{\overline{v}}_i \text{ for some variable } v_i, \\ x_i^2, &\text{if } \ensuremath{\mathsf{occ}}(i)=3 \text{ and } \ell=\ensuremath{\overline{v}}_i \text{ for some variable } v_i, \\ x_i^1, &\text{if } \ell = v_i \text{ and } j=\pi_1(i) \text{ for some variable } v_i,\\ x_i^2, &\text{if } \ell = v_i \text{ and } j=\pi_2(i)\text{ for some variable } v_i. \end{cases} \end{align*} The preference lists of the clause agents corresponding to $C_1$ are: \begin{align*} A_1 \colon b_1^1 \succ \cdots \succ b_1^{|C_1|} \succ \fbox{$x$}\,.
\end{align*} For each literal~$\ell \in C_1$, the preference lists of $B_1^{\ensuremath{f}_1(\ell)}$ and $D_1^{\ensuremath{f}_1(\ell)}$ are \begin{align*} B_1^{\ensuremath{f}_1(\ell)} \colon & \tau(C_1, \ell) \succ x \succ \fbox{$b_1^{\ensuremath{f}_1(\ell)}$},\text{ and }\\ D_1^{\ensuremath{f}_1(\ell)} \colon & a_1 \succ x \succ \tau(C_1, \ell) \succ \fbox{$d_1^{\ensuremath{f}_1(\ell)}$}\,. \end{align*} For each index~$j\in \{2,\ldots, m\}$ and each literal~$\ell \in C_j$, the preference lists of the clause agents corresponding to $C_j$ are \begin{align*} A_j \colon & b_{j}^1 \succ \cdots \succ b_j^{|C_j|} \succ \fbox{$a_{j-1}$},\\ B_j^{\ensuremath{f}_j(\ell)} \colon& \tau(C_j, \ell) \succ x \succ a_{j-1} \succ \fbox{$b_j^{\ensuremath{f}_j(\ell)}$}, \text{ and }\\ D_j^{\ensuremath{f}_j(\ell)} \colon& a_j \succ x \succ \tau(C_j, \ell) \succ \fbox{$d_j^{\ensuremath{f}_j(\ell)}$}\,. \end{align*} \item Let the preference list of our target agent~$I$ be $x \succ$ \fbox{$a_m$}. \end{compactenum} To finish the construction, we let the underlying graph be complete. One can verify that the constructed preference lists have length at most four. \myparagraph{The underlying graph~$G=(V,\binom{V}{2})$.} All agents are pairwise connected by an edge in the underlying graph. By the definition of rational trades, we can indeed delete all irrelevant edges~$\{u,v\}$ for which $u$ and $v$ will never agree to trade, i.e., for which there are no two objects~$i,j$ appearing in the preference lists of both~$u$ and~$v$ such that $u$ prefers~$i$ to~$j$ while $v$ prefers~$j$ to~$i$. By carefully examining the preference lists of the agents, we observe that only the following edges~$E$ are relevant. \begin{compactenum}[(1)] \item\label{edge:A-B-D} For each clause~$C_j$ the corresponding vertices form a \myemph{generalized star} with $A_j$ being the center and each leaf~$D_j^z$ having distance two to the center. Formally, for each clause~$C_j\in \mathcal{C}$ and for each two clause agents~$B_j^z$, $D_j^z$ with $1\le z \le |C_j|$ let $\{A_j, B_j^z\}, \{B_j^z, D_j^z\} \in E$. \item\label{edge:B-B} For each $j\in \{1,\ldots, m-1\}$, the two vertex sets~$\{D_{j}^{z} \mid 1\le z \le |C_{j}|\}$ and $\{B_{j+1}^{z'} \mid 1\le z' \le |C_{j+1}|\}$ form a complete bipartite graph. Formally, for each $j\in \{1,\ldots, m-1\}$, and for each two clause agents~$D_{j}^{z}$ and $B_{j+1}^{z'}$ with $1\le z \le |C_{j}|$ and $1\le z' \le |C_{j+1}|$, let $\{D_{j}^z, B_{j+1}^{z'}\}\in E$. \item To connect the clause agents and variable agents, for each variable~$v_i\in \mathcal{V}$, we do the following. \begin{compactenum}[(a)] \item If $\ensuremath{\mathsf{occ}}(i)=2$, then add $\{X_i^1, B_{j'}^{f_{j'}(v_i)}\}$ and $ \{X_i^1, B_{j}^{f_j(\ensuremath{\overline{v}}_i)}\}$ to~$E$, where $j=\nu(i)$ and $j'=\pi_1(i)$. \item If $\ensuremath{\mathsf{occ}}(i)=3$, then add $\{X_i^1, B_{j'}^{f_{j'}(v_i)}\}$, $\{X_i^1, B_{j}^{f_j(\ensuremath{\overline{v}}_i)}\}$, $ \{X_i^2, B_{j''}^{f_{j''}(v_i)}\}$, and $\{X_i^1,X_i^2\}$ to~$E$, where $j=\nu(i)$, $j'=\pi_1(i)$, and $j''=\pi_2(i)$. \end{compactenum} \item Our target agent~$I$ is adjacent to all clause agents~$D_m^z$, $z\in \{1, \ldots, |C_m|\}$.
\end{compactenum} \allowdisplaybreaks[1] \begin{example}\label{ex:NP-c-l-4} For an illustration of the construction, let us consider the following restricted \textsc{3-SAT}{} instance: \begin{alignat*}{4} \mathcal{V}=\{v_1, v_2, v_3, v_4\}, & \mathcal{C} = \{C_1=(v_2\vee v_3), C_2 =(v_1\vee \ensuremath{\overline{v}}_2 \vee \ensuremath{\overline{v}}_3), C_3 =(\ensuremath{\overline{v}}_1\vee v_2 \vee v_4), C_4 =(v_3\vee \ensuremath{\overline{v}}_4) \}. \end{alignat*} Our instance for \RO{} contains the following agents. \begin{align*} V=& \{A_1,B_1^1, B_1^2, D_1^1, D_1^2\} \cup \{A_2,B_2^1,B_2^2,B_2^3, D_2^1,D_2^2, D_2^3\} \cup \\ &\{A_3,B_3^1,B_3^2,B_3^3,D_3^1,D_3^2,D_3^3\}\cup \{A_4,B_4^1, B_4^2,D_4^1,D_4^2\} \cup\\ &\{X_1^1,X_2^1,X_2^2,X_3^1,X_3^2,X_4^1\} \cup \{I\}. \end{align*} The preference lists of these agents are \begin{table*} \begin{tabular}{r@{}lr@{}lr@{}lr@{}lr@{}} $A_1\colon$ & $b_1^1 \succ b_1^2 \succ$ \fbox{$x$}, &~~$A_2\colon$ &$b_2^1 \succ b_2^2 \succ b_2^3 \succ$ \fbox{$a_1$}, &~~$A_3\colon$ & $b_3^1 \succ b_3^2 \succ b_3^3 \succ$ \fbox{$a_2$}, &~~$A_4\colon$ & $b_4^1 \succ b_4^2 \succ$ \fbox{$a_3$},\\ $B_1^1\colon$ & $x_2^1 \succ x \succ$ \fbox{$b_1^1$}, & $B_2^1\colon$& $x_1^1 \succ x \succ a_1 \succ$ \fbox{$b_2^1$}, & $B_3^1\colon$ & $x_1^1 \succ x \succ a_2 \succ$ \fbox{$b_3^1$}, & $B_4^1\colon$& $x_3^2 \succ x \succ a_3 \succ$ \fbox{$b_4^1$},\\ $B_1^2\colon$& $x_3^1 \succ x \succ$ \fbox{$b_1^2$}, & $B_2^2\colon$& $x_2^2 \succ x \succ a_1 \succ$ \fbox{$b_2^2$}, & $B_3^2\colon$& $x_2^2 \succ x \succ a_2 \succ$ \fbox{$b_3^2$}, & $B_4^2\colon$& $x_4^1 \succ x \succ a_3 \succ$ \fbox{$b_4^2$},\\ & & $B_2^3\colon$& $x_3^2 \succ x \succ a_1 \succ$ \fbox{$b_2^3$},& $B_3^3\colon$& $x_4^1 \succ x \succ a_2 \succ$ \fbox{$b_3^3$},\\ $D_1^1\colon$& $a_1 \succ x\succ x_2^1 \succ$ \fbox{$d_1^1$}, & $D_2^1\colon$& $a_2\succ x \succ x_1^1 \succ$ \fbox{$d_2^1$}, & $D_3^1\colon$& $a_3\succ x \succ x_1^1 \succ$ \fbox{$d_3^1$}, & $D_4^1\colon$& $a_4\succ x \succ x_3^2 \succ$ \fbox{$d_4^1$}, \\ $D_1^2\colon$& $a_1\succ x \succ x_3^1 \succ$ \fbox{$d_1^2$}, & $D_2^2\colon$& $a_2\succ x \succ x_2^2 \succ$ \fbox{$d_2^2$}, & $D_3^2\colon$& $a_3\succ x \succ x_2^2 \succ$ \fbox{$d_3^2$}, & $D_4^2\colon$& $a_4\succ x \succ x_4^1 \succ$ \fbox{$d_4^2$},\\ && $D_2^3\colon$ & $a_2\succ x \succ x_3^2 \succ$ \fbox{$d_2^3$},& $D_3^3\colon$ & $a_3\succ x \succ x_4^1 \succ$ \fbox{$d_3^3$},\\ $I \colon$ & $x \succ$ \fbox{$a_4$}, \\[2ex] $X_1^1\colon$ & $d_2^1 \succ d_3^1 \succ$ \fbox{$x_1^1$}, & $X_2^1 \colon$ & $d_1^1\succ x_2^1 \succ d_2^2 \succ$ \fbox{$x_2^2$}, & $X_3^1 \colon$ & $d_1^2 \succ x_3^1 \succ d_2^3 \succ$ \fbox{$x_3^2$}, & $X_4^1\colon$ & $d_3^3 \succ d_4^2 \succ$ \fbox{$x_4^1$},\\ & & $X_2^2 \colon$ & $d_3^2\succ x_2^2 \succ$ \fbox{$x_2^1$}, & $X_3^2 \colon$ & $d_4^1 \succ x_3^2 \succ$ \fbox{$x_3^1$}. &&\hfill (of \cref{ex:NP-c-l-4})~$\diamond$ \end{tabular} \end{table*} \end{example} \noindent The underlying graph is complete. Nevertheless, only the edges as depicted in \cref{fig:example-NP-c-l-4} turn out to be relevant for swaps. 
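The bookkeeping of the reduction is entirely mechanical. As a sanity check, the following Python sketch generates the clause-agent preference lists from a formula; literals are encoded as pairs \texttt{(i, positive)} and objects as plain strings, which, like all names here, is our own illustrative convention and not part of the construction itself.

\begin{verbatim}
# Illustrative generator for the clause-agent preference lists.
# occ[i], pi1[i], pi2[i] are the occurrence data of variable v_i; the
# last entry of each list is the agent's boxed initial object.
def tau(j, lit, occ, pi1, pi2):
    i, positive = lit
    if not positive:                       # negative literal of v_i
        return f"x_{i}^1" if occ[i] == 2 else f"x_{i}^2"
    return f"x_{i}^1" if j == pi1[i] else f"x_{i}^2"

def clause_agent_prefs(clauses, occ, pi1, pi2):
    m = len(clauses)
    prefs = {}
    for j, C in enumerate(clauses, start=1):
        # A_j: b_j^1 > ... > b_j^{|C_j|} > [x or a_{j-1}]
        own = "x" if j == 1 else f"a_{j-1}"
        prefs[f"A_{j}"] = [f"b_{j}^{z}" for z in range(1, len(C) + 1)] + [own]
        for z, lit in enumerate(C, start=1):    # fixed order: f_j(lit) = z
            t = tau(j, lit, occ, pi1, pi2)
            mid = ["x"] if j == 1 else ["x", f"a_{j-1}"]
            prefs[f"B_{j}^{z}"] = [t] + mid + [f"b_{j}^{z}"]
            prefs[f"D_{j}^{z}"] = [f"a_{j}", "x", t, f"d_{j}^{z}"]
    prefs["I"] = ["x", f"a_{m}"]
    return prefs
\end{verbatim}

For the instance of \cref{ex:NP-c-l-4}, one can check that this reproduces, e.g., the list $x_1^1 \succ x \succ a_1 \succ$ \fbox{$b_2^1$} of agent~$B_2^1$.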
\begin{figure*}[t!h] \centering \begin{tikzpicture} \node[agentnode] (B21) {$B_2^1$}; \node[agentnode, below = .5 of B21] (B22) {$B_2^2$}; \node[agentnode, below = .5 of B22] (B23) {$B_2^3$}; \node[agentnode, right = 1 of B21] (D21) {$D_2^1$}; \node[agentnode, right = 1 of B22] (D22) {$D_2^2$}; \node[agentnode, right = 1 of B23] (D23) {$D_2^3$}; \gettikzxy{(B21)}{\xdta}{\ydta}; \gettikzxy{(D21)}{\xbta}{\ybta}; \gettikzxy{(B22)}{\xdtb}{\ydtb}; \gettikzxy{(B23)}{\xdtc}{\ydtc}; \node[agentnode] at (\xdta*0.2+\xbta*0.8, \ydta+.5) (A2) {$A_2$}; \node[agentnode] at (\xdta, \ydtc-.5*1.5) (X21) {$X_2^1$}; \node[agentnode] at (\xbta, \ydtc-.5*1.5) (X22) {$X_2^2$}; \node[agentnode] at (\xdta-55, \ydta/2+\ydtb/2) (D11) {$D_1^1$}; \node[agentnode] at (\xdta-55, \ydtc/2+\ydtb/2) (D12) {$D_1^2$}; \node[agentnode, left = 1 of D11] (B11) {$B_1^1$}; \node[agentnode, left = 1 of D12] (B12) {$B_1^2$}; \gettikzxy{(D11)}{\xxa}{\yya}; \gettikzxy{(B11)}{\xxb}{\yyb}; \node[agentnode] at (\xxa*0.8+\xxb*0.2, \yya+.5) (A1) {$A_1$}; \node[agentnode] at (\xxa*0.5+\xxb*0.5, \ydtc-.5*1.5) (X11) {$X_1^1$}; \gettikzxy{(D21)}{\xbta}{\ybta}; \gettikzxy{(D22)}{\xbtb}{\ybtb}; \gettikzxy{(D23)}{\xbtc}{\ybtc}; \node[agentnode] at (\xbta+55, \ybta) (B31) {$B_3^1$}; \node[agentnode] at (\xbta+55, \ybtb) (B32) {$B_3^2$}; \node[agentnode] at (\xbta+55, \ybtc) (B33) {$B_3^3$}; \node[agentnode, right = 1 of B31] (D31) {$D_3^1$}; \node[agentnode, right = 1 of B32] (D32) {$D_3^2$}; \node[agentnode, right = 1 of B33] (D33) {$D_3^3$}; \gettikzxy{(D31)}{\xxa}{\yya}; \gettikzxy{(B31)}{\xxb}{\yyb}; \node[agentnode] at (\xxa*0.8+\xxb*0.2, \yya+.5) (A3) {$A_3$}; \node[agentnode] at (\xxb, \ydtc-.5*1.5) (X31) {$X_3^1$}; \node[agentnode] at (\xxa, \ydtc-.5*1.5) (X32) {$X_3^2$}; \gettikzxy{(D31)}{\xbta}{\ybta}; \gettikzxy{(D32)}{\xbtb}{\ybtb}; \gettikzxy{(D33)}{\xbtc}{\ybtc}; \node[agentnode] at (\xbta+55, \ybta/2+\ybtb/2) (B41) {$B_4^1$}; \node[agentnode] at (\xbta+55, \ybtb/2+\ybtc/2) (B42) {$B_4^2$}; \node[agentnode, right = 1 of B41] (D41) {$D_4^1$}; \node[agentnode, right = 1 of B42] (D42) {$D_4^2$}; \gettikzxy{(D41)}{\xxa}{\yya}; \gettikzxy{(B41)}{\xxb}{\yyb}; \node[agentnode] at (\xxa*0.8+\xxb*0.2, \yya+.5) (A4) {$A_4$}; \node[agentnode] at (\xxa*0.5+\xxb*0.5, \ydtc-.5*1.5) (X41) {$X_4^1$}; \gettikzxy{(D41)}{\xbta}{\ybta}; \gettikzxy{(D42)}{\xbtb}{\ybtb}; \node[agentnode] at (\xbta+55, \ybta/2+\ybtb/2) (I) {${\,\,}I^{\;\;}$}; \foreach \j in {1,4} { \foreach \i in {1,2} { \draw (B\j\i) -- (D\j\i); \draw (A\j) -- (B\j\i); } } \foreach \j in {2,3} { \foreach \i in {1,2,3} { \draw (B\j\i) -- (D\j\i); \draw (A\j) -- (B\j\i); } } \foreach \i in {11,12} { \foreach \j in {21,22,23} { \draw (D\i) -- (B\j); } } \foreach \i in {21,22,23} { \foreach \j in {31,32,33} { \draw (D\i) -- (B\j); } } \foreach \i in {31,32,33} { \foreach \j in {41,42} { \draw (D\i) -- (B\j); } } \draw (I) -- (D41); \draw (I) -- (D42); \foreach \i / \j in {11/21, 11/31, 21/11,21/22, 22/32, 31/12,31/23,32/41,41/33,41/42} { \draw[red] (X\i) -- (D\j); } \draw[red] (X21) -- (X22); \draw[red] (X31) -- (X32); \draw[gray, rounded corners] ($(A1.north west)+(0,.3)$) -- ($(B11.north west)+(-.3,0)$) -- ($(B12.south west)+(-.3, -.3)$) -- ($(D12.south east)+(.3, -.3)$) -- ($(D11.north east)+(.3, 0)$) -- ($(A1.north east)+(.3*.3, .3)$) -- cycle; \draw[gray, rounded corners] ($(A2.north
west)+(0,.3)$) -- ($(B21.north west)+(-.3,0)$) -- ($(B23.south west)+(-.3, -.3)$) -- ($(D23.south east)+(.3, -.3)$) -- ($(D21.north east)+(.3, 0)$) -- ($(A2.north east)+(.3*.3, .3)$) -- cycle; \draw[gray, rounded corners] ($(A3.north west)+(0,.3)$) -- ($(B31.north west)+(-.3,0)$) -- ($(B33.south west)+(-.3, -.3)$) -- ($(D33.south east)+(.3, -.3)$) -- ($(D31.north east)+(.3, 0)$) -- ($(A3.north east)+(.3*.3, .3)$) -- cycle; \draw[gray, rounded corners] ($(A4.north west)+(0,.3)$) -- ($(B41.north west)+(-.3,0)$) -- ($(B42.south west)+(-.3, -.3)$) -- ($(D42.south east)+(.3, -.3)$) -- ($(D41.north east)+(.3, 0)$) -- ($(A4.north east)+(.3*.3, .3)$) -- cycle; \end{tikzpicture} \caption{Underlying graph with only relevant edges for the profile constructed according to the reduction given in \cref{thm:NP-c-length-4}.} \label{fig:example-NP-c-l-4} \end{figure*} Now, we show that instance~$\phi=(\mathcal{V}, \mathcal{C})$, with $n$ variables~$\mathcal{V}$ and $m$ clauses~$\mathcal{C}$, admits a satisfying truth assignment if and only if object~$x$, which agent~$A_1$ initially holds, is reachable for our agent~$I$, which initially holds $a_m$. For the ``only if'' part, assume that $\beta\colon \mathcal{V}\to \{\texttt{true},\texttt{false}\}$ is a satisfying assignment for~$\phi$. Intuitively, this satisfying assignment will guide us to find a sequence of swaps, making object~$x$ reach agent~$I$. First, for each variable~$v_i\in \mathcal{V}$, if $\ensuremath{\mathsf{occ}}(i)=3$, implying that there are two variable agents ($X_i^1$ and $X_i^2$) for~$v_i$, and if $\beta(v_i)=\texttt{true}$, then let agents~$X_i^1$ and~$X_i^2$ swap their initial objects so that $X_i^1$ and $X_i^2$ hold $x_i^1$ and $x_i^2$, respectively. For each clause~$C_j$, identify a literal, say~$\ell_j$, which satisfies $C_j$ under~$\beta$, and do the following. \begin{compactenum} \item Let agents~$A_j$ and~$B_j^{f_j(\ell_j)}$ swap their initial objects. \item Let agents~$D_j^{f_j(\ell_j)}$ and~$X_i^z$ swap their current objects such that \begin{compactenum}[(a)] \item if $\ell_j = \ensuremath{\overline{v}}_i$, then $z=1$ (note that in this case agent~$X_i^1$ is holding object~$x_i^1$ if $\ensuremath{\mathsf{occ}}(i)=2$, and is holding object~$x_i^2$ if $\ensuremath{\mathsf{occ}}(i)=3$), \item if $\ell_j = v_i$ and $j=\pi_1(i)$, then $z=1$ (note that in this case agent~$X_i^1$ is holding object~$x_i^1$), and \item if $\ell_j = v_i$ and $j=\pi_2(i)$, then $z=2$ (note that in this case agent~$X_i^2$ is holding object~$x_i^2$). \end{compactenum} \end{compactenum} After these swaps, agent~$B_1^{f_1(\ell_1)}$ is holding object~$x$. Each agent~$B_j^{f_j(\ell_j)}$, $2\le j \le m$, is holding object~$a_{j-1}$. Each agent~$D_j^{f_j(\ell_j)}$ is holding object~$\tau(C_j, \ell_j)$. Now, to let $x$ reach agent~$I$, we let it pass through agents~$B_1^{f_1(\ell_1)}, D_1^{f_1(\ell_1)}, B_2^{f_2(\ell_2)}, D_2^{f_2(\ell_2)}, \ldots, B_m^{f_m(\ell_m)}, D_m^{f_m(\ell_m)}$, and finally to~$I$; for readability we drop the superscripts~$f_j(\ell_j)$ in the following two steps. Formally, iterating from $j=1$ to $j=m-1$, we do the following: \begin{compactenum} \item Let agents~$B_j$ and~$D_j$ swap their current objects so that $D_j$ holds object~$x$. \item Let agents~$D_j$ and~$B_{j+1}$ swap their current objects so that $B_{j+1}$ holds object~$x$. \end{compactenum} After these swaps, agent~$B_m$ obtains object~$x$. Let agent~$D_m$ swap its object~$\tau(C_m, \ell_m)$ with agent~$B_m$ for object~$x$. Finally, let agent~$I$ swap its object~$a_m$ with agent~$D_m$ for object~$x$. This completes the proof for the ``only if'' part.
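The swap sequence just constructed can also be assembled mechanically from~$\beta$. The following Python sketch (all names and encodings are ours; \texttt{sat\_literal[j]} is a literal satisfying~$C_j$ under~$\beta$, and \texttt{f} encodes the numberings~$f_j$ as a dictionary over pairs of clause index and literal) lists the trades in the order used above; fed to the replay sketch from \cref{sec:paths} with the complete graph as edge set, every step would pass the rationality check.

\begin{verbatim}
# Illustrative assembly of the "only if" swap sequence from beta.
# beta maps variable index i to a bool; literals are pairs (i, positive).
def induced_sequence(m, beta, occ, pi1, sat_literal, f):
    seq = []
    for i in beta:                               # initial gadget swaps
        if occ[i] == 3 and beta[i]:
            seq.append((f"X_{i}^1", f"X_{i}^2"))
    for j in range(1, m + 1):                    # clause-local swaps
        (i, positive) = sat_literal[j]
        z = f[(j, sat_literal[j])]
        xz = 1 if (positive and j == pi1[i]) or not positive else 2
        seq.append((f"A_{j}", f"B_{j}^{z}"))
        seq.append((f"D_{j}^{z}", f"X_{i}^{xz}"))
    for j in range(1, m):                        # pass x along the chain
        zj = f[(j, sat_literal[j])]
        zj1 = f[(j + 1, sat_literal[j + 1])]
        seq.append((f"B_{j}^{zj}", f"D_{j}^{zj}"))
        seq.append((f"D_{j}^{zj}", f"B_{j+1}^{zj1}"))
    zm = f[(m, sat_literal[m])]
    seq += [(f"B_{m}^{zm}", f"D_{m}^{zm}"), (f"D_{m}^{zm}", "I")]
    return seq
\end{verbatim}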
For the ``if'' part, assume that there is a sequence of swaps~$(\sigma_0,\sigma_1,\ldots, \sigma_s)$ which makes object~$x$ reach agent~$I$, i.e.\ $\sigma_s(I)=x$. Now, we show how to construct a satisfying truth assignment for $\phi$. First, we observe the following properties which will help us to identify a literal for each clause such that setting it to \texttt{true} will satisfy the clause. \begin{claim}\label{claim:clause-literal} For each clause~$C_j\in \mathcal{C}$, there exist an assignment~$\sigma_r$, $1\le r < s$, and a literal~$\ell_j \in C_j$ such that $\sigma_r$ admits a swap for agents~$B_j^{f_j(\ell_j)}$ and $D_j^{f_j(\ell_j)}$, i.e., \begin{compactenum}[(1)] \item $\sigma_r(B_j^{f_j(\ell_j)}) = x$, \item $\sigma_r(D_j^{f_j(\ell_j)}) = \tau(C_j, \ell_j)$, \item $\sigma_{r+1}(B_j^{f_j(\ell_j)}) = \tau(C_j, \ell_j)$, and \item $\sigma_{r+1}(D_j^{f_j(\ell_j)}) = x$. \end{compactenum} \end{claim} \begin{proof} \renewcommand{\qedsymbol}{(of \cref{claim:clause-literal})~$\diamond$} In the initial assignment~$\sigma_0$, agent~$I$ holds object~$a_m$. To make object~$x$ reach agent~$I$, one can verify that agent~$I$ must have swapped with some clause agent~$D_m^z$ with~$z\in \{1,\ldots, |C_m|\}$ since agent~$I$ only prefers $x$ to $a_m$, and only agents from $\{D_m^t \mid 1\le t \le |C_m|\}$ are willing to swap $x$ for $a_m$. Let $\ell_m$ be the literal with $f_m(\ell_m)=z$; recall that $f_m$ is a bijection. In order for agent~$D_m^z$ to obtain object~$x$, by her preference list, she must be holding object~$\tau(C_m, \ell_m)$ and swap it for $x$ since no agent will swap with her for $d_m^z$. Observe that agent~$B_m^z$ is the only agent that prefers $\tau(C_m,\ell_m)$ to $x$. It follows that $B_m^z$ must have swapped with $D_m^z$ for object $\tau(C_m, \ell_m)$. This means that there must be an assignment~$\sigma_{r}$ in the sequence such that \begin{compactenum}[(1)] \item $\sigma_{r}(B_m^{z}) = x$, \item $\sigma_{r}(D_m^{z}) = \tau(C_m, \ell_m)$, \item $\sigma_{r+1}(B_m^{z}) = \tau(C_m, \ell_m)$, and \item $\sigma_{r+1}(D_m^{z}) = x$. \end{compactenum} Now, we show our statement through downward induction on the index~$j$ of the clause agents, $j\ge 2$. Assume that there is an assignment~$\sigma_{r_j}$ in the sequence and that $C_j$ contains a literal~$\ell_j$ such that \begin{compactenum}[(1)] \item $\sigma_{r_j}(B_j^{f_j(\ell_j)}) = x$, \item $\sigma_{r_j}(D_j^{f_j(\ell_j)}) = \tau(C_j, \ell_j)$, \item $\sigma_{r_j+1}(B_j^{f_j(\ell_j)}) = \tau(C_j, \ell_j)$, and \item $\sigma_{r_j+1}(D_j^{f_j(\ell_j)}) = x$. \end{compactenum} By this assumption, it follows that agent~$B_j^{f_j(\ell_j)}$ must have swapped with some other agent for object~$x$. Since agent~$B^{f_j(\ell_j)}_j$ prefers $x$ only to objects~$a_{j-1}$ and $b_j^{f_j(\ell_j)}$ and since no agent prefers~$b_j^{f_{j}(\ell_j)}$ to $x$, it follows that agent~$B_j^{f_j(\ell_j)}$ must have swapped with some other agent for $x$ while holding object~$a_{j-1}$. Since only agents from $\{D_{j-1}^{t}\mid 1\le t \le |C_{j-1}|\}$ prefer $a_{j-1}$ to $x$, it follows that $B_j^{f_j(\ell_j)}$ must have swapped with some agent~$D_{j-1}^{z_{j-1}}$ with $z_{j-1}\in \{1,\ldots, |C_{j-1}|\}$ for object~$x$. Let $\ell_{j-1}$ be the literal with $f_{j-1}(\ell_{j-1})=z_{j-1}$. To perform such a swap, however, agent~$D_{j-1}^{z_{j-1}}$ must first obtain object~$x$.
Similarly to the case of agent~$D_m^z$, agent~$D_{j-1}^{z_{j-1}}$ must at some point hold object~$\tau(C_{j-1}, \ell_{j-1})$ and swap it for object~$x$ since no agent will swap with her for~$d_{j-1}^{z_{j-1}}$. Observe that agent~$B^{z_{j-1}}_{j-1}$ is the only agent that prefers~$\tau(C_{j-1}, \ell_{j-1})$ to~$x$. It follows that agent~$D_{j-1}^{z_{j-1}}$, while holding object~$\tau(C_{j-1}, \ell_{j-1})$, swapped with agent~$B_{j-1}^{z_{j-1}}$ for object~$x$, i.e.\ there is an assignment~$\sigma_{r_{j-1}}$ in the sequence and $C_{j-1}$ contains a literal~$\ell_{j-1}$ such that \begin{compactenum}[(1)] \item $\sigma_{r_{j-1}}(B_{j-1}^{f_{j-1}(\ell_{j-1})}) = x$, \item $\sigma_{r_{j-1}}(D_{j-1}^{f_{j-1}(\ell_{j-1})}) = \tau(C_{j-1}, \ell_{j-1})$, \item $\sigma_{r_{j-1}+1}(B_{j-1}^{f_{j-1}(\ell_{j-1})}) = \tau(C_{j-1}, \ell_{j-1})$, and \item $\sigma_{r_{j-1}+1}(D_{j-1}^{f_{j-1}(\ell_{j-1})}) = x$. \qedhere \end{compactenum} \end{proof} \noindent By the above claim, we can now define a truth assignment~$\beta$ for $\phi$. \begin{align*} & \text{For all } v_i \in \mathcal{V}, ~~\text{ let } \beta(v_i) \coloneqq \begin{cases} \texttt{false}, &\text{if } D_j^{f_j(\ensuremath{\overline{v}}_i)} \text{ swapped with } B_j^{f_j(\ensuremath{\overline{v}}_i)} \text{ for } x, \text{ where } j=\nu(i), \\ \texttt{true}, &\text{otherwise.}\end{cases} \end{align*} Recall that in~$\phi$ each variable appears exactly once as a negative literal. Thus, our $\beta$ is a well-defined truth assignment. To show that~$\beta$ is indeed a satisfying assignment, suppose, towards a contradiction, that $\beta$ does not satisfy some clause~$C_j \in \mathcal{C}$. By \cref{claim:clause-literal}, let $\ell_j\in C_j$ be a literal such that $D_j^{f_j(\ell_j)}$, while holding object~$\tau(C_j, \ell_j)$, swapped with $B_j^{f_j(\ell_j)}$ for $x$. Observe that $\ell_j\in \{v_i, \ensuremath{\overline{v}}_i\}$ for some $v_i\in \mathcal{V}$. We distinguish between two cases, in each of which we will arrive at a contradiction. \myparagraph{Case 1: $\boldsymbol{\ell_j=\ensuremath{\overline{v}}_i}$.} This implies that $\ensuremath{\overline{v}}_i\in C_j$ and $j=\nu(i)$. Thus, $D_j^{f_j(\ensuremath{\overline{v}}_i)}$ swapped with $B_j^{f_j(\ensuremath{\overline{v}}_i)}$ for $x$. By our definition of $\beta$ it follows that $\beta(v_i)=\texttt{false}$, which satisfies $C_j$, a contradiction. \myparagraph{Case 2: $\boldsymbol{\ell_j=v_i}$.} This implies that $v_i\in C_j$. Since $C_j$ is not satisfied by $\beta$ it follows that $\ensuremath{\overline{v}}_i\notin C_j$ and $\beta(v_i)=\texttt{false}$. By our definition of $\beta$ it follows that $D_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$, while holding object~$\tau(C_{j'}, \ensuremath{\overline{v}}_i)$, swapped with $B_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$ for $x$, where $j'=\nu(i)$ and $j'\neq j$. If $\ensuremath{\mathsf{occ}}(i)=2$, implying that there is exactly one variable agent, namely $X_i^1$ for $v_i$, then by our definition of $\tau$ it follows that $\tau(C_j, v_i)=\tau(C_{j'}, \ensuremath{\overline{v}}_i)=x_i^1$. To be able to swap away object~$x_i^1$, agent~$D_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$ needs to obtain it from agent~$X_i^1$ since~$d_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$ is the only object to which agent~$D_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$ prefers $x_i^1$ and since~$X_i^1$ is the only agent who prefers~$d_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$ to~$x_i^1$.
This implies that agent~$D_{j}^{f_{j}(v_i)}$ did not obtain object~$x_i^1$ from agent~$X_i^1$, and hence never held object~$x_i^1$ at any point of the swap sequence. However, since $x_i^1$ is the only object that agent~$B_j^{f_j(\ell_j)}$ prefers to~$x$, it follows that agent~$D_j^{f_j(v_i)}$ cannot have swapped with $B_j^{f_j(\ell_j)}$ for $x$---a contradiction to our assumption above (before the case distinction). Analogously, if $\ensuremath{\mathsf{occ}}(i)=3$, implying that there are exactly two variable agents, namely $X_i^1$ and $X_i^2$ for $v_i$, then by the definition of $\tau$ it follows that $\tau(C_{j'}, \ensuremath{\overline{v}}_i)=x_i^2$. By a similar reasoning as in the case of $\ensuremath{\mathsf{occ}}(i)=2$, it follows that $X_{i}^1$ did not swap with any other agent for object~$x_i^1$, as this would require her to trade her initial object~$x_i^2$, which she has already given away for $d_{j'}^{f_{j'}(\ensuremath{\overline{v}}_i)}$. Consequently, agent~$D_j^{f_j(v_i)}$ would \myemph{not} have swapped with either agent~$X_i^1$ or agent~$X_i^2$ for object~$x_i^1$---a contradiction to our initial assumption that $D_j^{f_j(v_i)}$ swapped away $\tau(C_j, v_i)$, which is either $x_i^1$ or $x_i^2$. This completes the ``if'' part. \end{proof} \section{Generalized Caterpillars}\label{sec:caterpillars} We obtain NP-hardness for \RO{} on generalized caterpillars where each hair has length at most two and only one vertex has degree larger than two. This strengthens the NP-hardness of \RO{} on trees~\cite{GouLesWil2017} (their constructed tree is a generalized caterpillar where each hair has length three and there is only one vertex of degree larger than two). For the sake of completeness, we give a full proof including parts of the original proof by \citet{GouLesWil2017}. \newcommand{\nphcatepillars}{ \RO{} is NP-hard on generalized caterpillars where each hair has length at most two and only one vertex has degree larger than two. } \begin{theorem} \label[theorem]{thm:cater} \nphcatepillars \end{theorem} \begin{proof} We use the same notation as \citet{GouLesWil2017}. Let~$\phi = (\mathcal{V,C})$ be an instance of \textsc{Two positive one negative at most 3-Sat} with variable set~${\mathcal{V} = \{v_1,\ldots, v_n\}}$ and clause set~$\mathcal{C}=\{C_1,\ldots, C_m\}$. For each variable~$v_i$, let~$p_1^i,p_2^i$, and~$n^i$ denote the clauses in which~$v_i$ occurs first as a positive literal, second as a positive literal, and as a negative literal, respectively. Accordingly, we denote the respective literal occurrences of~$v_i$ by~$v_i^{p_1^i}$, $v_i^{p_2^i}$, and~$\bar{v}_i^{n^i}$. The instance of \RO{} is constructed as follows. For each variable~$v_i$, we add a ``variable gadget'' consisting of nine agents~$\bar{X}_i^{n^i}, X_i^{p_1^i}, X_i^{p_2^i}, D_i^1, D_i^2, P_i^1, P_i^2, N_i$, and~$H_i$. For each clause~$C_i$, we add one ``clause agent''~$C_i$. Finally, we add an additional agent~$T$, which starts with an object~$t$. We ask whether agent~$C_m$ can get object~$t$.
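To fix the notation with a small, purely hypothetical example: if $v_1$ occurred first positively in $C_2$, second positively in $C_5$, and negatively in $C_3$, then $p_1^1=2$, $p_2^1=5$, and $n^1=3$, and the corresponding literal occurrences would be denoted by~$v_1^{2}$, $v_1^{5}$, and~$\bar{v}_1^{3}$.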
\begin{figure} \centering \begin{tikzpicture}[scale=0.8, every node/.style={scale=0.95}] \node[circle, draw, label=above:$C_m$] (Cm) at (0,6) {}; \node[circle, draw, label=above:$C_{m-1}$] (Cm1) at (1,6) {}; \node (Cdots) at (2,6) {$\dots$}; \node[circle, draw, label=above:$C_{1}$] (C1) at (3,6) {}; \node[circle, draw, label=above:$T$] (T) at (4,6) {}; \node[circle, draw, label=left:$D_i^1$] (D1) at (-4.5,3) {}; \node[circle, draw, label=left:$X_i^{p_1^i}$] (X1) at (-3,3) {}; \node[circle, draw, label=left:$X_i^{p_2^i}$] (X2) at (-1.5,3) {}; \node[circle, draw, label=left:$\bar{X}_i^{n^i}$] (Xn) at (0,3) {}; \node[circle, draw, label=right:$H_i$] (H) at (1.5,3) {}; \node[circle, draw, label=right:$D_{i+1}^1$] (D1') at (3,3) {}; \node[] (dots) at (5,3) {$\dots$}; \node[circle, draw, label=left:$D_i^2$] (D2) at (-4.5,0) {}; \node[circle, draw, label=left:$P_i^1$] (P1) at (-3,0) {}; \node[circle, draw, label=left:$P_i^2$] (P2) at (-1.5,0) {}; \node[circle, draw, label=left:$N_i$] (N) at (0,0) {}; \node[circle, draw, label=right:$D_{i+1}^2$] (D2') at (3,0) {}; \draw (Cm) -- (Cm1); \draw (Cm1) -- (Cdots); \draw (Cdots) -- (C1); \draw (C1) -- (T); \draw (Cm) -- (D1); \draw (Cm) -- (X1); \draw (Cm) -- (X2); \draw (Cm) -- (Xn); \draw (Cm) -- (H); \draw (Cm) -- (D1'); \draw (Cm) -- (dots); \draw (D1) -- (D2); \draw (X1) -- (P1); \draw (X2) -- (P2); \draw (Xn) -- (N); \draw (D1') -- (D2'); \end{tikzpicture} \caption{Illustration of the construction in the proof of \cref{thm:cater} showing one variable gadget (bottom) for representing the assignment of one variable and the ``clause path'' (top) that verifies whether some assignment satisfies the input formula~$\phi$.} \label{fig:hair2} \end{figure} The underlying graph is depicted in \cref{fig:hair2}. The preference lists of the agents are as follows: \allowdisplaybreaks[1] \begin{align*} D_i^1 &: v_i \succ -_i \succ \fbox{$+_i$},\\ D_i^2 &: +_i \succ \fbox{$-_i$},\\ X_i^{p_1^i} &: c_{m-p_1^i+1} \succ x_i^{p_1^i} \succ +_i \succ \fbox{$++_i$},\\ P_i^1 &: +_i \succ \fbox{$x_i^{p_1^i}$},\\ X_i^{p_2^i} &: c_{m-p_2^i+1} \succ x_i^{p_2^i} \succ ++_i \succ \fbox{$p_i$},\\ P_i^2 &: ++_i \succ \fbox{$x_i^{p_2^i}$},\\ \bar{X}_i^{n^i} &: c_{m-n^i+1} \succ \bar{x}_i^{n^i} \succ -_i \succ \fbox{$n_i$},\\ N_i &: -_i \succ \fbox{$\bar{x}_i^{n^i}$},\\ H_n &: p_n \succ n_n \succ \fbox{$c_m$},\\ H_i &: p_i \succ n_i \succ \fbox{$v_{i+1}$} \quad (i<n),\\ C_m &: t \succ \{\ell_m\} \succ c_1 \succ \{\ell_{m-1}\} \succ \ldots \succ c_{m-1} \succ \{\ell_1\} \succ c_m \succ \\ &\ \ p_n \succ n_n \succ -_n \succ ++_n \succ +_n \succ v_n \succ\\ & \ \ p_{n-1} \succ n_{n-1} \succ \ldots \succ +_1 \succ \fbox{$v_1$},\\ C_j &: \{\ell_{j+1}\} \succ t \succ \{\ell_j\} \succ c_1 \succ \{\ell_{j-1}\} \succ \ldots \succ c_{j-1} \succ \{\ell_1\} \succ \fbox{$c_j$} \quad (j<m),\\ T &: \{\ell_1\} \succ \fbox{$t$}, \end{align*} where~$\{\ell_j\}$ is the set of all literal objects corresponding to the literals in~$C_j$ ranked in arbitrary order. We start by showing that the variable gadgets work properly, that is, each variable is either set to \texttt{true} or \texttt{false} and the different literal objects can be used in an arbitrary order. Consider a variable~$v_i$ and note the following: \begin{compactitem} \item $D_i^1$ can only give either~$+_i$ or~$-_i$ to~$C_m$, \item $N_i$ will only release~$\bar{x}_i^{n^i}$ in exchange for~$-_i$, \item $P_i^1$ will only release~$x_i^{p_1^i}$ in exchange for $+_i$, and \item $P_i^2$ will only release~$x_i^{p_2^i}$ in exchange for~$++_i$.
\end{compactitem} Furthermore, observe that we cannot use~$c_{m-p_1^i+1}$ to release~$++_i$ from~$X_i^{p_1^i}$, since agent~$C_m$ prefers~$c_{m-p_1^i+1}$ over~$++_i$. Hence, agent~$C_m$ can only receive either the ``negative token''~$\bar{x}_i^{n^i}$ or the positive ``token(s)''~$x_i^{p_1^i}$ and/or~$x_i^{p_2^i}$. Once~$D_i^1$ and~$D_i^2$ have decided on whether~$+_i$ or~$-_i$ is traded in exchange for~$v_i$, agent~$C_m$ trades either with~$X_i^{p_1^i}$, $X_i^{p_2^i}$, and~$H_i$ or with~$\bar{X}_i^{n^i}$ and~$H_i$, in this order. Afterwards, either~$X_i^{p_1^i}$ and~$P_i^1$ (and also~$X_i^{p_2^i}$ and~$P_i^2$) can swap or~$\bar{X}_i^{n^i}$ and~$N_i$ can swap their current objects. Thus, after fixing all variable gadgets, agent~$C_m$ holds the object~$c_m$ and can swap it with one of the agents~$X_i^{p_1^i},X_i^{p_2^i}$ or~$\bar{X}_i^{n^i}$. The remainder of the proof works exactly as the proof of \citeauthor{GouLesWil2017}~\cite[Theorem 1]{GouLesWil2017}. Once each variable is assigned a truth value, the path of clause agents verifies that each clause is satisfied by this assignment. To this end, first notice that each clause agent~$C_j$ (excluding~$C_m$) prefers~$t$ over her initial object~$c_j$ and only accepts to swap~$t$ for an object in~$\{\ell_{j+1}\}$. Hence, the only way to move~$t$ from~$T$ to~$C_{m-1}$ involves giving each agent~$C_i$ (again excluding~$C_m$) an object associated with a literal that satisfies the clause~$C_{i+1}$. Finally, observe that the first and last clauses also need to be satisfied in order for~$T$ and~$C_{m-1}$ to give away~$t$. Thus, if there is a sequence of swaps such that~$C_m$ gets~$t$ in the end, then~$\phi$ is satisfiable. If~$\phi$ is satisfiable, then there is a sequence of swaps such that~$C_m$ gets~$t$: We first iterate over all variable gadgets and set their value according to some satisfying assignment for~$\phi$. Once this is done, agent~$C_m$ has object~$c_m$ and can now trade this for an object corresponding to a literal that satisfies clause~$C_1$. This object is then passed to agent~$T$ such that~$C_m$ has object~$c_{m-1}$, which can again be swapped for an object representing a satisfying literal for clause~$C_2$. This object is then passed to agent~$C_1$ and this procedure is repeated until~$t$ reaches~$C_m$. \end{proof} \section{Conclusion} We investigated the computational complexity of \RO{} with respect to different restrictions regarding the underlying graph and the agent preferences. Our work narrows the gap between known tractable and intractable cases, leading to a more comprehensive understanding of the complexity of \RO{}. In particular, we settled the complexity with respect to the preference lengths. Several questions remain open: Can \RO{} be solved in polynomial time on caterpillars? Note that on stars \RO{} can be solved in polynomial time~\cite[Proposition~1]{GouLesWil2017}. Also, the complexity of \RO{} on graphs of maximum degree three is open. \citet[Theorem 4]{SW18} showed NP-hardness of \RO{} on graphs of maximum degree four, while our results imply polynomial-time solvability of \RO{} on graphs of maximum degree two. (Note that a graph of maximum degree two is the disjoint union of paths and cycles, and note that we provide an efficient algorithm for paths and sketch an adaptation for cycles at the end of~\cref{sec:paths}.)
Regarding preference restrictions, following the line of studying stable matchings on restricted domains~\cite{BreCheFinNie2017}, it would be interesting to know whether assuming a special preference structure can help in identifying tractable cases of our problem. Finally, one may combine resource allocation with social welfare, measured by the egalitarian or utilitarian cost~\cite{damamme_power_2015}, and study the parameterized complexity of finding a reachable assignment that meets these criteria, as has been done in the context of stable matchings~\cite{CHSYicalp-par-stable2018}. \paragraph*{Acknowledgments} The work on this paper started at the research retreat of the Algorithmics and Computational Complexity group, TU~Berlin, held at Darlingerode, Harz, in March~2018. \bibliographystyle{named}
\chapter{Alternative derivation of the SBE flow equations}
\label{app: SBE_fRG_app}
%
In this Appendix, we present an alternative derivation of the flow equations presented in Chap.~\ref{chap: Bos Fluct Norm}, obtained by introducing bosonic fields via three \emph{Hubbard-Stratonovich transformations} (HST). We re-write the bare interaction as
%
\begin{equation}
Un_{\uparrow}n_{\downarrow}=3Un_{\uparrow}n_{\downarrow} - 2 Un_{\uparrow}n_{\downarrow},
\end{equation}
%
and apply a different HST to each of the three interaction terms making up the first contribution, one for each physical channel. In formulas, this can be expressed as
%
\begin{equation}
\begin{split}
\mathcal{Z}_\text{Hubbard}&=\int \mathcal{D}\left(\psi,\Bar{\psi}\right)e^{-\mathcal{S}_\text{Hubbard}\left[\psi,\Bar{\psi}\right]} =\int\mathcal{D}\boldsymbol{\Phi}\mathcal{D}\left(\psi,\Bar{\psi}\right)e^{-\mathcal{S}_\text{bos}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]},
\end{split}
\label{eq_app_SBE_fRGZ}
\end{equation}
%
where $\boldsymbol{\Phi}=(\Vec{\phi}_m,\phi_c,\phi_p,\phi_p^*)$ collects all three bosonic fields, $\mathcal{S}_\mathrm{Hubbard}$ is the bare Hubbard action, and the bosonized action is given by
%
\begin{equation}
\begin{split}
\mathcal{S}_\text{bos}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right] = &-\int_{k,\sigma} \Bar{\psi}_{k,\sigma} \left(i\nu+\mu-\epsilon_k\right) \psi_{k,\sigma}\\
&-\frac{1}{2}\int_{q}\phi_c(-q) \frac{1}{U} \phi_c(q) -\frac{1}{2}\int_{q}\Vec{\phi}_m(-q) \cdot \frac{1}{U} \Vec{\phi}_m(q)\\
&+\int_{q}\phi^*_p(q)\frac{1}{U} \phi_p(q) +\int_{k,q,\sigma}\phi_c(q) \,\Bar{\psi}_{k+\frac{q}{2},\sigma}\psi_{k-\frac{q}{2},\sigma}\\
&+\int_{k,q}\sum_{\sigma,\sigma'}\Vec{\phi}_m(q)\cdot \,\Bar{\psi}_{k+\frac{q}{2},\sigma}\Vec{\sigma}_{\sigma\sigma'}\psi_{k-\frac{q}{2},\sigma'}\\
&+\int_{k,q}\left[\phi_p(q) \,\Bar{\psi}_{\frac{q}{2}+k,\uparrow}\Bar{\psi}_{\frac{q}{2}-k,\downarrow}+\phi^*_p(q) \,\psi_{\frac{q}{2}-k,\downarrow}\psi_{\frac{q}{2}+k,\uparrow}\right]\\
& -2U \int_0^\beta d\tau \sum_j n_{j,\uparrow}(\tau) n_{j,\downarrow}(\tau).
\label{eq_app_SBE_fRG: HS action}
\end{split}
\end{equation}
%
The remaining (not bosonized) $-2U$ term in $\mathcal{S}_\text{bos}$ avoids double counting of the bare interaction. We then introduce the RG scale via a regulator acting on the fermions. The regularized generating functional reads
%
\begin{equation}
\begin{split}
W^\Lambda\left[\eta,\Bar{\eta},\boldsymbol{J}\right]&=-\ln \int\mathcal{D}\boldsymbol{\Phi}\int \mathcal{D}\left(\psi,\Bar{\psi}\right) e^{-S^\Lambda_\text{bos}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]+\left(\Bar{\psi},\eta\right)+\left(\Bar{\eta},\psi\right) + \left(\boldsymbol{\Phi},\boldsymbol{J}\right)},
\label{eq_app_SBE_fRG: W Lambda}
\end{split}
\end{equation}
%
with
%
\begin{equation}
S^\Lambda_\text{bos}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]=S_\text{bos}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]+\int_{k,\sigma} \Bar{\psi}_{k,\sigma}\,R^\Lambda(k)\,\psi_{k,\sigma}.
\label{eq_app_SBE_fRG: reg Sbos}
\end{equation}
%
The initial conditions at $\L={\Lambda_\mathrm{ini}}$ depend on the formalism used. In the plain fRG, we impose $R^{\Lambda\rightarrow{\Lambda_\mathrm{ini}}}(k)\rightarrow \infty$, so that at the initial scale the effective action must equal $\mathcal{S}_\mathrm{bos}$.
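As a concrete illustration (one possible choice, not necessarily the one adopted elsewhere in this Thesis), the smooth frequency regulator of the $\Omega$-scheme~\cite{Husemann2009} fits into this framework: multiplying the bare fermionic propagator by $\nu^2/(\nu^2+\Lambda^2)$ corresponds, up to the sign conventions of Eq.~\eqref{eq_app_SBE_fRG: reg Sbos}, to an additive regulator
%
\begin{equation}
R^\Lambda(k)\propto\frac{\Lambda^2}{\nu^2}\left(i\nu+\mu-\epsilon_k\right),
\end{equation}
%
which indeed diverges for $\Lambda\to\infty$ (so that one can take ${\Lambda_\mathrm{ini}}=\infty$) and vanishes for $\Lambda\to 0$.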
By contrast, within the DMF\textsuperscript{2}RG, the regulator must fulfill
%
\begin{equation}
R^{\Lambda_{\text{ini}}}(k) = \epsilon_{\boldsymbol{k}} - \Delta_\text{AIM}\left(\nu\right),
\end{equation}
%
so that we have
%
\begin{equation}
\mathcal{S}^{{\Lambda_\mathrm{ini}}}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right] = \mathcal{S}_\text{AIM}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right],
\end{equation}
%
where $\mathcal{S}_\text{AIM}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]$ is the action of the self-consistent AIM, in which (local) bosonic fields have been introduced via HST. The initial conditions for the effective action therefore read
%
\begin{equation}
\Gamma^{{\Lambda_\mathrm{ini}}}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]=\Gamma_\text{AIM}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right],
\end{equation}
%
with $\Gamma_\text{AIM}\left[\psi,\Bar{\psi},\boldsymbol{\Phi}\right]$ the effective action of the self-consistent AIM. Expanding it in terms of 1PI functions, one recovers the initial conditions given in Sec.~\ref{eq_SBE_fRG: DMF2RG initial conditions}, where the screened interactions $D^X$ and the Yukawa couplings $h^X$ at the initial scale equal their local counterparts in the AIM. The formalism defined above allows for a straightforward inclusion of the bosonic fluctuations that, among other things, are responsible for the fulfillment of the Mermin-Wagner theorem. In fact, the present formalism can be extended by adding boson-boson interaction terms~\cite{Strack2008,Friederich2011,Obert2013}, which can suppress the divergence of the bosonic propagators at a finite scale.
%
\end{document}
\chapter{Bosonized flow equations in the SSB phase}
\label{app: fRG+MF app}
%
\section{Derivation of flow equations in the bosonic formalism}
%
In this section, we derive the flow equations used in Chap.~\ref{chap: fRG+MF}. We consider only those terms in which the center-of-mass momentum $q$ is fixed to zero by the topology of the corresponding diagram, or which depend on it only parametrically. These diagrams are the only ones necessary to reproduce the MF approximation. The flow equations will be derived directly from the Wetterich equation~\eqref{eq_methods: Wetterich eq RL}, with a slight modification, since we have to keep in mind that the bosonic field $\phi$ acquires a scale dependence due to the scale dependence of its expectation value. The flow equation reads (for real $\alpha^\Lambda$):
%
\begin{equation}
\partial_\Lambda\Gamma^\Lambda=\frac{1}{2}\widetilde{\partial}_\Lambda\text{Str}\ln\left[\mathbf{\Gamma}^{(2)\Lambda}+R^\Lambda\right]+\frac{\delta\Gamma^\Lambda}{\delta\sigma_{q=0}} \, \partial_\Lambda \alpha^\Lambda,
\label{eq_app_fRG+MF: Wetterich eq.}
\end{equation}
%
where $\mathbf{\Gamma}^{(2)\Lambda}$ is the matrix of the second derivatives of the effective action with respect to the fields, and the supertrace Str includes a minus sign when tracing over fermionic variables. The first equation we derive is the one for the flowing expectation value $\alpha^\Lambda$. This is obtained by requiring that the one-point function for $\sigma_q$ vanishes.
Taking the $\sigma_q$ derivative in Eq.~\eqref{eq_app_fRG+MF: Wetterich eq.} and setting the fields to zero, we have
%
\begin{equation}
\begin{split}
\partial_\Lambda\Gamma^{(0,1,0)\Lambda}(q=0)\equiv\partial_\Lambda\frac{\delta\Gamma^\Lambda}{\delta\sigma_{q=0}}\bigg\lvert_{\Psi,\overline{\Psi},\sigma,\pi=0} =-\int_k h^\Lambda_\sigma(k;0)\, \widetilde{\partial}_\Lambda F^\Lambda(k)+m^\Lambda_\sigma(0)\,\partial_\Lambda\alpha^\Lambda=0,
\end{split}
\label{eq_app_fRG+MF: alpha eq app}
\end{equation}
%
where we have defined
%
\begin{equation}
\Gamma^{(2n_1,n_2,n_3)\Lambda}=\frac{\delta^{(2n_1+n_2+n_3)}\Gamma^\Lambda}{\left(\delta\overline{\Psi}\right)^{n_1}\left(\delta\Psi\right)^{n_1}\left(\delta\sigma\right)^{n_2}\left(\delta\pi\right)^{n_3}}.
\end{equation}
%
From Eq.~\eqref{eq_app_fRG+MF: alpha eq app} we get the flow equation for $\alpha^\Lambda$:
%
\begin{equation}
\partial_\Lambda\alpha^\Lambda=\frac{1}{m^\Lambda_\sigma(0)}\int_k h^\Lambda_\sigma(k;0)\, \widetilde{\partial}_\Lambda F^\Lambda(k).
\label{eq_app_fRG+MF: alpha flow}
\end{equation}
%
The MF flow equation for the fermionic gap reads
%
\begin{equation}
\begin{split}
\partial_\Lambda \Delta^\Lambda(k)= \int_{k'}\mathcal{A}^\Lambda(k,k';0)\,\widetilde{\partial}_\Lambda F^\Lambda(k') +\partial_\Lambda\alpha^\Lambda\,h_\sigma^\Lambda(k;0),
\end{split}
\label{eq_app_fRG+MF: gap equation bosonic}
\end{equation}
%
with $\mathcal{A}^\Lambda$ being the residual two-fermion interaction in the longitudinal channel. The equation for the inverse propagator of the $\sigma_q$ boson is
%
\begin{equation}
\begin{split}
\partial_\Lambda m_\sigma^\Lambda(q)=\int_p h_\sigma^\Lambda(p;q)\left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{11}(p;q)\right] h_\sigma^\Lambda(p;q) +&\int_p \Gamma^{(2,2,0)\Lambda}(p,0,q)\,\widetilde{\partial}_\Lambda F^\Lambda(p)\\
+&\partial_\Lambda\alpha^\Lambda\,\Gamma^{(0,3,0)\Lambda}(q,0),
\label{eq_app_fRG+MF: full P sigma flow}
\end{split}
\end{equation}
%
where we have defined the bubble at finite momentum $q$ as
%
\begin{equation}
\Pi^\Lambda_{\alpha\beta}(k;q)=-\frac{1}{2}\Tr\left[\tau^\alpha\mathbf{G}^\Lambda(k)\tau^\beta\mathbf{G}^\Lambda(k-q)\right],
\end{equation}
%
$\Gamma^{(0,3,0)\Lambda}$ is an interaction among three $\sigma$ bosons, and $\Gamma^{(2,2,0)\Lambda}$ couples one fermion and two longitudinal bosonic fluctuations. The equation for the longitudinal Yukawa coupling is
%
\begin{equation}
\begin{split}
\partial_\Lambda h^\Lambda_\sigma(k;q)=\int_p\mathcal{A}^\Lambda(k,p;q)\left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{11}(p;q)\right]h^\Lambda_\sigma(p;q) +&\int_{p}\Gamma^{(4,1,0)\Lambda}(k,p,q,0)\,\widetilde{\partial}_\Lambda F^\Lambda(p)\\
+&\partial_\Lambda\alpha^\Lambda\,\Gamma^{(2,2,0)\Lambda}(k,q,0),
\label{eq_app_fRG+MF: full h_sigma flow}
\end{split}
\end{equation}
%
where $\Gamma^{(4,1,0)\Lambda}$ is a coupling among two fermions and one $\sigma$ boson.
%
The flow equation for the coupling $\mathcal{A}^\Lambda$ reads instead
%
\begin{equation}
\begin{split}
\partial_\Lambda\mathcal{A}^\Lambda(k,k';q)=&\int_p\mathcal{A}^\Lambda(k,p;q)\left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{11}(p;q)\right]\mathcal{A}^\Lambda(p,k';q)\\
+&\int_{p}\Gamma^{(6,0,0)\Lambda}(k,k',q,p,0)\,\widetilde{\partial}_\Lambda F^\Lambda(p) +\partial_\Lambda\alpha^\Lambda\,\Gamma^{(4,1,0)\Lambda}(k,k',q,q),
\label{eq_app_fRG+MF: full A flow}
\end{split}
\end{equation}
%
with $\Gamma^{(6,0,0)\Lambda}$ the three-fermion coupling.
We recall that in all the above flow equations we have considered only the terms in which the center-of-mass momentum $q$ enters parametrically. This means that we have assigned to the flow equation for $\mathcal{A}^\Lambda$ only contributions in the particle-particle channel, and that in all flow equations we have neglected the terms that contain a loop with the normal single-scale propagator $\widetilde{\partial}_\Lambda G^\Lambda(k)$. Within a reduced model, where the bare interaction is nonzero only for $q=0$ scattering processes, the mean-field approximation is exact, and one can prove that, due to the reduced phase space, only the diagrams that we have considered in our truncation of the flow equations survive~\cite{Salmhofer2004}. In order to treat the higher-order couplings $\Gamma^{(0,3,0)\Lambda}$, $\Gamma^{(2,2,0)\Lambda}$, $\Gamma^{(4,1,0)\Lambda}$, and $\Gamma^{(6,0,0)\Lambda}$, one can approximate their flow equations so as to make them integrable, in a way similar to Katanin's approximation for the three-fermion coupling. The integrated results are the fermionic loop integrals schematically shown in Fig.~\ref{fig_app_fRG+MF: katanin}. Skipping the details of the calculation, we just state that this approximation allows for absorbing the second and third terms on the right-hand side of Eqs.~\eqref{eq_app_fRG+MF: full P sigma flow},~\eqref{eq_app_fRG+MF: full h_sigma flow}, and~\eqref{eq_app_fRG+MF: full A flow} into the first one just by replacing $\widetilde{\partial}_\Lambda\Pi^\Lambda_{11}$ with its full derivative $\partial_\Lambda\Pi^\Lambda_{11}$. In summary:
%
\begin{subequations}
\begin{align}
&\partial_\Lambda m_\sigma^\Lambda(q)=\int_p h_\sigma^\Lambda(p;q)\left[\partial_\Lambda\Pi^\Lambda_{11}(p;q)\right] h_\sigma^\Lambda(p;q),\\
&\partial_\Lambda h^\Lambda_\sigma(k;q)=\int_p\mathcal{A}^\Lambda(k,p;q)\left[\partial_\Lambda\Pi^\Lambda_{11}(p;q)\right]h^\Lambda_\sigma(p;q),\\
&\partial_\Lambda\mathcal{A}^\Lambda(k,k';q)=\int_p\mathcal{A}^\Lambda(k,p;q)\left[\partial_\Lambda\Pi^\Lambda_{11}(p;q)\right]\mathcal{A}^\Lambda(p,k';q).
\end{align}
\end{subequations}
%
With a similar approach, one can derive the flow equations for the transverse couplings:
%
\begin{subequations}
\begin{align}
&\partial_\Lambda m_\pi^\Lambda(q)=\int_p h_\pi^\Lambda(p;q)\left[\partial_\Lambda\Pi^\Lambda_{22}(p;q)\right] h_\pi^\Lambda(p;q),\\
&\partial_\Lambda h^\Lambda_\pi(k;q)=\int_p\Phi^\Lambda(k,p;q)\left[\partial_\Lambda\Pi^\Lambda_{22}(p;q)\right]h_\pi^\Lambda(p;q),\\
&\partial_\Lambda\Phi^\Lambda(k,k';q)=\int_p\Phi^\Lambda(k,p;q)\left[\partial_\Lambda\Pi^\Lambda_{22}(p;q)\right]\Phi^\Lambda(p,k';q).
\end{align}
\end{subequations}
%
%
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{fig10_paper.png}
\caption{Feynman diagrams describing the Katanin-like approximation for the higher-order correlation functions. The conventions are the same as in Figs.~\ref{fig_fRG+MF: flow eqs} and~\ref{fig_fRG+MF: flow eqs gaps}.}
\label{fig_app_fRG+MF: katanin}
\end{figure}
%
\section{Calculation of the irreducible vertex in the bosonic formalism}
\label{app: irr V bosonic formalism}
%
In this appendix, we provide a proof of Eq.~\eqref{eq_fRG+MF: irr V bosonic formalism}, making use of matrix notation. If the full vertex can be decomposed as in Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit}
%
\begin{equation}
V=\mathcal{Q}+\frac{h [h]^T}{m},
\end{equation}
%
we can plug this relation into the definition of the irreducible vertex, Eq.~\eqref{eq_fRG+MF: irr V fermionic}.
With some algebra we obtain
%
\begin{equation}
\begin{split}
\widetilde{V}=\left[1+V\Pi\right]^{-1}V =\left[1+\frac{\widetilde{h} [h]^T}{m}\Pi\right]^{-1}\left[\widetilde{\mathcal{Q}}+\frac{\widetilde{h} [h]^T}{m}\right],
\end{split}
\label{eq_app_fRG+MF: V tilde appendix I}
\end{equation}
%
where in the last equality we have inserted a representation of the identity,
%
\begin{equation}
1=\left[1+\mathcal{Q}\Pi\right]\left[1+\mathcal{Q}\Pi\right]^{-1},
\end{equation}
%
in between the two matrices, and we have made use of the definitions~\eqref{eq_fRG+MF: reduced C tilde} and~\eqref{eq_fRG+MF: reduced yukawa}. The matrix to be inverted in the last line of Eq.~\eqref{eq_app_fRG+MF: V tilde appendix I} is a rank-one update of the identity and can therefore be inverted analytically, obtaining
%
\begin{equation}
\left[1+\frac{\widetilde{h} [h]^T}{m}\Pi\right]^{-1}=1-\frac{\widetilde{h} [h]^T}{\widetilde{m}}\Pi,
\end{equation}
%
where $\widetilde{m}$ is defined in Eq.~\eqref{eq_fRG+MF: reduced mass P tilde}. By plugging this result into Eq.~\eqref{eq_app_fRG+MF: V tilde appendix I}, we finally obtain
%
\begin{equation}
\widetilde{V}=\widetilde{\mathcal{Q}}+\frac{\widetilde{h} [\widetilde{h}]^T}{\widetilde{m}},
\end{equation}
which is the result of Eq.~\eqref{eq_fRG+MF: irr V bosonic formalism}.
%
\section{Algorithm for the calculation of the superfluid gap}
%
The formalism described in Sec.~\ref{sec_fRG+MF: bosonic flow and integration} allows us to formulate a minimal set of closed equations required for the calculation of the gap. We drop the $\Lambda$ superscript, assuming that we have reached the final scale. The gap can be computed using the Ward identity, so that we are left with a single self-consistent equation for $\alpha$, which is a single scalar quantity, and one for the momentum-dependent coupling $h_\pi$. The equation for $\alpha$ is Eq.~\eqref{eq_fRG+MF: alpha solution}. The transverse Yukawa coupling is calculated through Eq.~\eqref{eq_fRG+MF: h_pi}. The two equations are coupled, since the superfluid gap $\Delta=\alpha h_\pi$ appears on the right-hand side of both.\\
We propose an iterative loop to solve the above-mentioned equations. Starting from the initial conditions $\alpha^{(0)}=0$ and $h_\pi^{(0)}(k)=0$, we update the transverse Yukawa coupling at every loop iteration $i$ according to Eq.~\eqref{eq_fRG+MF: h_pi}, which can be reformulated in the following algorithmic form:
%
\begin{equation}
h_\pi^{(i+1)}(k)=\int_{k'} \left[M^{(i)}\right]^{-1}(k,k')\,\widetilde{h}^{\Lambda_s}(k'),
\label{eq_fRG+MF_app: h_pi loop}
\end{equation}
%
with the matrix $M^{(i)}$ defined as
%
\begin{equation}
M^{(i)}(k,k')=\delta_{k,k'}-\mathcal{\widetilde{Q}}^{\Lambda_s}(k,k')\,\Pi_{22}^{(i)}(k';\alpha^{(i)}),
\end{equation}
%
and the $22$-bubble rewritten as
%
\begin{equation}
\Pi_{22}^{(i)}(k;\alpha) = \frac{1}{G^{-1}(k)G^{-1}(-k)+\alpha^2\left[h_\pi^{(i)}(k)\right]^2},
\end{equation}
%
with $G(k)$ defined in Eq.~\eqref{eq_fRG+MF: G at Lambda_s}. Eq.~\eqref{eq_fRG+MF_app: h_pi loop} is not solved self-consistently at every loop iteration $i$, because we have chosen to evaluate its right-hand side with $h_\pi$ at the previous iteration. $\alpha^{(i+1)}$ is calculated by self-consistently solving
%
\begin{equation}
1=\frac{1}{\widetilde{m}^{\Lambda_s}}\int_k \widetilde{h}^{\Lambda_s}(k)\, \Pi_{22}^{(i+1)}(k;\alpha) \,h_\pi^{(i+1)}(k)
\label{eq_fRG+MF_app: alpha loop}
\end{equation}
%
for $\alpha$. The equation above is nothing but Eq.~\eqref{eq_fRG+MF: alpha solution}, where the trivial solution $\alpha=0$ has been factored out.
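For concreteness, the iteration just described can be sketched in a few lines of Python (a minimal sketch, not the actual implementation used in this Thesis: all quantities are assumed to be discretized on a finite momentum--frequency grid, the integration measure is assumed to be absorbed into \texttt{q\_tilde} and the $k$ sums, the variable names are placeholders, and the upper bracket of the root search is problem dependent):
%
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def pi22(alpha, h_pi, ginv2):
    # 22-bubble: 1 / (G^{-1}(k) G^{-1}(-k) + alpha^2 h_pi(k)^2)
    return 1.0 / (ginv2 + (alpha * h_pi) ** 2)

def solve_gap(q_tilde, h_tilde, m_tilde, ginv2, tol=1e-8, it_max=500):
    # q_tilde: (N, N) reduced interaction Q~, integration weights included
    # h_tilde: (N,) reduced Yukawa coupling h~; m_tilde: reduced mass m~
    # ginv2:   (N,) values of G^{-1}(k) G^{-1}(-k) on the grid
    n = h_tilde.size
    alpha, h_pi = 0.0, np.zeros(n)
    for _ in range(it_max):
        # h_pi update with alpha from the previous iteration:
        # M(k,k') = delta_{k,k'} - Q~(k,k') Pi22(k'),  h_pi = M^{-1} h~
        m_mat = np.eye(n) - q_tilde * pi22(alpha, h_pi, ginv2)[None, :]
        h_new = np.linalg.solve(m_mat, h_tilde)
        # scalar gap equation: 1 = (1/m~) sum_k h~(k) Pi22(k; a) h_pi(k)
        def f(a):
            return 1.0 - np.sum(h_tilde * pi22(a, h_new, ginv2) * h_new) / m_tilde
        a_max = 1.0e3  # upper bracket for the root search (problem dependent)
        alpha_new = brentq(f, 0.0, a_max) if f(0.0) * f(a_max) < 0.0 else 0.0
        if abs(alpha_new - alpha) < tol and np.max(np.abs(h_new - h_pi)) < tol:
            return alpha_new, h_new
        alpha, h_pi = alpha_new, h_new
    return alpha, h_pi
\end{verbatim}
%
Here \texttt{brentq} performs the one-dimensional root search for Eq.~\eqref{eq_fRG+MF_app: alpha loop}, returning $\alpha=0$ whenever no nontrivial root is bracketed.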
The loop consisting of Eqs.~\eqref{eq_fRG+MF_app: h_pi loop} and~\eqref{eq_fRG+MF_app: alpha loop} must be repeated until convergence is reached in $\alpha$ and, subsequently, in $h_\pi$.
%
This formulation of the self-consistent equations is not computationally lighter than the one in the fermionic formalism, but it is more easily controllable, as one can separate the frequency and momentum dependence of the gap (through $h_\pi$) from the strength of the order ($\alpha$). Moreover, since $h_\pi$ is updated via an explicit expression, namely Eq.~\eqref{eq_fRG+MF_app: h_pi loop}, which is in general a well-behaved function of $k$, the frequency and momentum dependence of the gap is assured to remain under control.
\end{document}
\chapter{Details on the RPA for spiral magnets}
\label{app: low en spiral}
%
In this Appendix, we report some details on the RPA calculation of the collective excitations in spiral magnets.
%
\section{Coherence factors}
%
The coherence factors entering the bare susceptibilities $\widetilde{\chi}^0_{ab}(\mathbf{q},\omega)$ in Eq.~\eqref{eq_low_spiral: chi0 expression} are defined as
%
\begin{equation}
\mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q})=\frac{1}{2}\Tr\left[\sigma^a u_\mathbf{k}^\ell\sigma^b u_{\mathbf{k}+\mathbf{q}}^{\ell'}\right],
\label{eq_app_spiral: coh fact def}
\end{equation}
%
with $u^\ell_\mathbf{k}$ given by (see Eq.~\eqref{eq_low_spiral: ukl def})
%
\begin{equation}
u_\mathbf{k}^\ell = \sigma^0 + \ell \, \frac{h_\mathbf{k}}{e_\mathbf{k}} \sigma^3 + \ell \, \frac{\Delta}{e_\mathbf{k}} \sigma^1.
\end{equation}
%
Here, $\ell$ and $\ell'$ label the quasiparticle bands, and $a$ and $b$ are the charge and spin indices. Performing the trace, we get the following expression for the charge-charge coherence factor
%
\begin{equation}
A^{00}_{\ell\ell'}(\mathbf{k},\mathbf{q}) = 1 + \ell\ell'\,\frac{h_\mathbf{k} h_{\mathbf{k}+\mathbf{q}} + \Delta^2}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}}},
\end{equation}
%
while for the mixed charge-spin ones we have
%
\begin{eqnarray}
A^{01}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& \ell \, \frac{\Delta}{e_\mathbf{k}} + \ell' \, \frac{\Delta}{e_{\mathbf{k}+\mathbf{q}}}, \\
A^{02}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& -i \ell\ell' \, \Delta \frac{h_\mathbf{k} - h_{\mathbf{k}+\mathbf{q}}}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}}}, \\
A^{03}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& \ell \, \frac{h_\mathbf{k}}{e_\mathbf{k}} + \ell' \, \frac{h_{\mathbf{k}+\mathbf{q}}}{e_{\mathbf{k}+\mathbf{q}}} .
\end{eqnarray}
%
The diagonal spin coherence factors are given by
%
\begin{eqnarray}
A^{11}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& 1 - \ell\ell'\,\frac{h_\mathbf{k} h_{\mathbf{k}+\mathbf{q}} - \Delta^2}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}}}, \\
A^{22}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& 1 - \ell\ell'\,\frac{h_\mathbf{k} h_{\mathbf{k}+\mathbf{q}} + \Delta^2}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}}}, \\
A^{33}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& 1 + \ell\ell'\,\frac{h_\mathbf{k} h_{\mathbf{k}+\mathbf{q}} - \Delta^2}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}}},
\label{eq_app_spiral: A33}
\end{eqnarray}
%
and the off-diagonal ones by
%
\begin{eqnarray}
A^{12}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& -i\ell \, \frac{h_\mathbf{k}}{e_\mathbf{k}} + i\ell' \, \frac{h_{\mathbf{k}+\mathbf{q}}}{e_{\mathbf{k}+\mathbf{q}}}, \\
A^{13}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& \ell\ell' \, \Delta \frac{h_\mathbf{k} + h_{\mathbf{k}+\mathbf{q}}}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}}}, \\
A^{23}_{\ell\ell'}(\mathbf{k},\mathbf{q}) &=& -i \ell \, \frac{\Delta}{e_\mathbf{k}} + i\ell' \, \frac{\Delta}{e_{\mathbf{k}+\mathbf{q}}}.
\end{eqnarray}
%
The remaining off-diagonal coherence factors can be easily obtained from the above expressions and the relation $\mathcal{A}^{ba}_{\ell\ell'}(\mathbf{k},\mathbf{q})=[\mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q})]^*$. The $\mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q})$ are purely imaginary if and only if exactly one of the two indices equals two, and real in all other cases. Thus, the exchange of $a$ and $b$ gives
%
\begin{equation}
\mathcal{A}^{ba}_{\ell\ell'}(\mathbf{k},\mathbf{q}) = p^a p^b \mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q})
\label{eq_app_spiral: coh fact exchange a,b}
\end{equation}
%
with $p^a=+1$ for $a=0,1,3$ and $p^{a=2}=-1$. Using $\xi_\mathbf{k}=\xi_{{-\mathbf{k}}}$, one obtains $h_{{-\mathbf{k}}-\mathbf{Q}}=-h_\mathbf{k}$, $g_{{-\mathbf{k}}-\mathbf{Q}}=g_\mathbf{k}$, $e_{{-\mathbf{k}}-\mathbf{Q}}=e_\mathbf{k}$, and $u^\ell_{{-\mathbf{k}}-\mathbf{Q}}=\sigma^1 u^\ell_{\mathbf{k}}\sigma^1$. From Eq.~\eqref{eq_app_spiral: coh fact def}, one sees that
%
\begin{equation}
A^{ab}_{\ell'\ell}({-\mathbf{k}}-\mathbf{Q}-\mathbf{q},\mathbf{q}) = \frac{1}{2}\Tr\left[\widetilde{\sigma}^b u^\ell_{\mathbf{k}+\mathbf{q}}\widetilde{\sigma}^a u^{\ell'}_{\mathbf{k}}\right],
\end{equation}
%
with $\widetilde{\sigma}^a=\sigma^1\sigma^a\sigma^1=s^a\sigma^a$, where $s^a=+1$ for $a=0,1$, and $s^a=-1$ for $a=2,3$. Using Eq.~\eqref{eq_app_spiral: coh fact exchange a,b}, one obtains
%
\begin{equation}
A^{ab}_{\ell'\ell}({-\mathbf{k}}-\mathbf{Q}-\mathbf{q},\mathbf{q}) = s^a s^b \mathcal{A}^{ba}_{\ell\ell'}(\mathbf{k},\mathbf{q})=s^{ab}\mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q}),
\label{eq_app_spiral: Aq symmetry}
\end{equation}
%
where
%
\begin{equation}
s^{ab}=s^a s^b p^a p^b = (1-2\delta_{a3})(1-2\delta_{b3}).
\end{equation}
%
%
\section{Symmetries of the bare susceptibilities}
%
In this Section, we prove the symmetries of the bare bubbles listed in Table~\ref{tab_low_spiral: symmetries}.
%
\subsection{Parity under frequency sign change}
%
We decompose expression~\eqref{eq_low_spiral: chi0 expression} into intraband and interband contributions
%
\begin{equation}
\begin{split}
\widetilde{\chi}_{ab}^0(\mathbf{q},z) = &-\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{\ell\ell}(\mathbf{k},\mathbf{q})\frac{f(E^\ell_\mathbf{k})-f(E^\ell_{\mathbf{k}+\mathbf{q}})}{E^\ell_\mathbf{k}-E^\ell_{\mathbf{k}+\mathbf{q}}+z}\\
&-\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{\ell,{-\ell}}(\mathbf{k},\mathbf{q})\frac{f(E^\ell_\mathbf{k})-f(E^{-\ell}_{\mathbf{k}+\mathbf{q}})}{E^\ell_\mathbf{k}-E^{-\ell}_{\mathbf{k}+\mathbf{q}}+z},
\end{split}
\end{equation}
%
with $z$ a generic complex frequency. Splitting the difference of the Fermi functions and making the variable change $\mathbf{k}\to{-\mathbf{k}}-\mathbf{Q}-\mathbf{q}$ in the integral in the second term, we obtain for the intraband term
%
\begin{equation}
\begin{split}
[\widetilde{\chi}_{ab}^0(\mathbf{q},z)]_\mathrm{intra}= &-\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{\ell\ell}(\mathbf{k},\mathbf{q})\frac{f(E^\ell_\mathbf{k})}{E^\ell_\mathbf{k}-E^\ell_{\mathbf{k}+\mathbf{q}}+z}\\
&-\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{\ell\ell}({-\mathbf{k}}-\mathbf{Q}-\mathbf{q},\mathbf{q})\frac{f(E^{\ell}_{{-\mathbf{k}}-\mathbf{Q}})}{E^\ell_{{-\mathbf{k}}-\mathbf{Q}}-E^\ell_{{-\mathbf{k}}-\mathbf{Q}-\mathbf{q}}-z}.
\end{split}
\end{equation}
%
Using Eq.~\eqref{eq_app_spiral: Aq symmetry} and $E^\ell_{{-\mathbf{k}}-\mathbf{Q}}=E^\ell_{\mathbf{k}}$, we obtain
%
\begin{equation}
[\widetilde{\chi}_{ab}^0(\mathbf{q},z)]_\mathrm{intra} = -\frac{1}{8} \sum_\ell\int_\mathbf{k} A^{ab}_{\ell\ell}(\mathbf{k},\mathbf{q}) f(E^\ell_\mathbf{k}) \left( \frac{1}{E^\ell_\mathbf{k}-E^\ell_{\mathbf{k}+\mathbf{q}}+z} + \frac{s^{ab}}{E^\ell_\mathbf{k}-E^\ell_{\mathbf{k}+\mathbf{q}}-z} \right) \, .
\end{equation}
%
Similarly, we rewrite the interband term as
%
\begin{equation}
\begin{split}
[\widetilde{\chi}_{ab}^0(\mathbf{q},z)]_\mathrm{inter}= &-\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{\ell,{-\ell}}(\mathbf{k},\mathbf{q}) \frac{f(E^\ell_\mathbf{k})}{E^\ell_\mathbf{k}-E^{-\ell}_{\mathbf{k}+\mathbf{q}}+z} \\
& -\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{{-\ell},\ell}({-\mathbf{k}}- \mathbf{q}-\mathbf{Q},\mathbf{q}) \frac{-f(E^\ell_{{-\mathbf{k}}-\mathbf{Q}})}{-(E^\ell_{{-\mathbf{k}}-\mathbf{Q}}-E^{-\ell}_{{-\mathbf{k}}-\mathbf{Q}-\mathbf{q}}-z)},
\end{split}
\end{equation}
%
where in the second term we have made the substitution $\ell\to{-\ell}$. Using again Eq.~\eqref{eq_app_spiral: Aq symmetry}, we get
%
\begin{equation}
[\widetilde{\chi}_{ab}^0(\mathbf{q},z)]_\mathrm{inter} = -\frac{1}{8}\sum_\ell\int_\mathbf{k} A^{ab}_{\ell,{-\ell}}(\mathbf{k},\mathbf{q}) f(E^\ell_\mathbf{k}) \left( \frac{1}{E^\ell_\mathbf{k}-E^{-\ell}_{\mathbf{k}+\mathbf{q}}+z} + \frac{s^{ab}}{E^\ell_\mathbf{k}-E^{-\ell}_{\mathbf{k}+\mathbf{q}}-z} \right).
\end{equation}
%
Summing up the interband and intraband terms, we obtain
%
\begin{equation}
\widetilde{\chi}^0_{ab}(\mathbf{q},-z) = s^{ab}\widetilde{\chi}^0_{ab}(\mathbf{q},z).
\label{eq_app_spiral: frequency change symm}
\end{equation}
%
In the physical case of retarded susceptibilities, that is, $z=\omega+i0^+$, we get
%
\begin{subequations}
\begin{align}
&\widetilde{\chi}^{0r}_{ab}(\mathbf{q},-\omega)=s^{ab}\widetilde{\chi}^{0r}_{ab}(\mathbf{q},\omega),\\
&\widetilde{\chi}^{0i}_{ab}(\mathbf{q},-\omega)=-s^{ab}\widetilde{\chi}^{0i}_{ab}(\mathbf{q},\omega),
\end{align}
\end{subequations}
%
with $\widetilde{\chi}^{0r}_{ab}$ and $\widetilde{\chi}^{0i}_{ab}$ defined in the main text.
%
\subsection{Parity under momentum sign change}
%
Performing the variable change $\mathbf{k}\to\mathbf{k}-\mathbf{q}/2$ in the definition of the bare susceptibility, we get
%
\begin{equation}
\widetilde{\chi}^0_{ab}(\mathbf{q},z) = -\frac{1}{8}\sum_{\ell\ell'} \int_\mathbf{k} A^{ab}_{\ell\ell'}\left(\mathbf{k}-\frac{\mathbf{q}}{2},\mathbf{q}\right) \frac{f(E^{\ell}_{\mathbf{k}-\frac{\mathbf{q}}{2}})-f(E^{\ell'}_{\mathbf{k}+\frac{\mathbf{q}}{2}})} {E^{\ell}_{\mathbf{k}-\frac{\mathbf{q}}{2}}-E^{\ell'}_{\mathbf{k}+\frac{\mathbf{q}}{2}}+z}.
\end{equation}
%
Using
%
\begin{equation}
A^{ab}_{\ell'\ell}\left(\mathbf{k}+\frac{\mathbf{q}}{2},-\mathbf{q}\right) = A^{ba}_{\ell\ell'}\left(\mathbf{k}-\frac{\mathbf{q}}{2},\mathbf{q}\right) = p^{a}p^{b}\,A^{ab}_{\ell\ell'}\left(\mathbf{k}-\frac{\mathbf{q}}{2},\mathbf{q}\right) \, ,
\end{equation}
%
we immediately see that
%
\begin{equation}
\label{eq_low_spiral:Aq->-q}
\widetilde{\chi}^0_{ab}(-\mathbf{q},-z) = p^{a}p^{b} \, \widetilde{\chi}^0_{ab}(\mathbf{q},z).
\end{equation}
%
Combining this result with Eq.~\eqref{eq_app_spiral: frequency change symm}, we obtain
%
\begin{equation}
\widetilde{\chi}^0_{ab}(-\mathbf{q},z) = p^{ab} \, \widetilde{\chi}^0_{ab}(\mathbf{q},z),
\end{equation}
%
with $p^{ab}$ defined as
%
\begin{equation}
p^{ab} = p^a p^b s^{ab} = s^a s^b = (1-2\delta_{a2})(1-2\delta_{b2})(1-2\delta_{a3})(1-2\delta_{b3}).
\end{equation}
%
In the case of retarded susceptibilities, that is, $z=\omega+i0^+$, we get
%
\begin{subequations}
\begin{align}
&\widetilde{\chi}^{0r}_{ab}(-\mathbf{q},\omega) = p^{ab} \widetilde{\chi}^{0r}_{ab}(\mathbf{q},\omega),\\
&\widetilde{\chi}^{0i}_{ab}(-\mathbf{q},\omega) = -p^{ab} \widetilde{\chi}^{0i}_{ab}(\mathbf{q},\omega).
\end{align}
\end{subequations}
%
%
\section{Calculation of \texorpdfstring{$\widetilde{\chi}^0_{33}(\pm\mathbf{Q},0)$}{chitilde033(+-Q,0)}}
%
In this Appendix we prove the relation \eqref{eq_low_spiral: chi33(q,0)} for $\widetilde{\chi}^0_{33}(-\mathbf{Q},0)$. The corresponding relation for $\widetilde{\chi}^0_{33}(\mathbf{Q},0)$ follows from the parity of $\widetilde{\chi}^0_{33}(\mathbf{q},\omega)$ under $\mathbf{q}\to-\mathbf{q}$.
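Explicitly, specializing the momentum-parity relations above to $a=b=3$ gives $p^{33}=s^3 s^3=(-1)(-1)=+1$, so that the static bubble is even, $\widetilde{\chi}^{0}_{33}(-\mathbf{q},0)=\widetilde{\chi}^{0}_{33}(\mathbf{q},0)$.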
Using the general expression~\eqref{eq_low_spiral: chi0 def} for the bare susceptibility and Eq.~\eqref{eq_app_spiral: A33} for the coherence factor $A^{33}_{\ell\ell'}(\mathbf{k},\mathbf{q})$, one obtains
%
\begin{eqnarray}
\widetilde{\chi}^0_{33}(-\mathbf{Q},0) &=& - \frac{1}{8} \int_\mathbf{k} \left[ 1 + \frac{h_\mathbf{k} h_{\mathbf{k}-\mathbf{Q}}-\Delta^2}{e_\mathbf{k} e_{\mathbf{k}-\mathbf{Q}}}\right] \left(\frac{f(E^+_\mathbf{k})-f(E^+_{\mathbf{k}-\mathbf{Q}})}{E^+_\mathbf{k}-E^+_{\mathbf{k}-\mathbf{Q}}} + \frac{f(E^-_\mathbf{k})-f(E^-_{\mathbf{k}-\mathbf{Q}})}{E^-_\mathbf{k}-E^-_{\mathbf{k}-\mathbf{Q}}}\right) \nonumber \\
&& - \frac{1}{8} \int_\mathbf{k} \left[1-\frac{h_\mathbf{k} h_{\mathbf{k}-\mathbf{Q}}-\Delta^2}{e_\mathbf{k} e_{\mathbf{k}-\mathbf{Q}}}\right] \left(\frac{f(E^+_\mathbf{k})-f(E^-_{\mathbf{k}-\mathbf{Q}})}{E^+_\mathbf{k}-E^-_{\mathbf{k}-\mathbf{Q}}} + \frac{f(E^-_\mathbf{k})-f(E^+_{\mathbf{k}-\mathbf{Q}})}{E^-_\mathbf{k}-E^+_{\mathbf{k}-\mathbf{Q}}}\right) \nonumber \\[2mm]
&=& - \frac{1}{4} \sum_{\ell=\pm}\int_\mathbf{k} \left\{\left[1-\frac{h_\mathbf{k} h_{-\mathbf{k}}+\Delta^2}{e_\mathbf{k} e_{-\mathbf{k}}}\right] \frac{f(E^\ell_\mathbf{k})}{E^\ell_\mathbf{k}-E^\ell_{-\mathbf{k}}} + \left[1+\frac{h_\mathbf{k} h_{-\mathbf{k}}+\Delta^2}{e_\mathbf{k} e_{-\mathbf{k}}}\right] \frac{f(E^\ell_\mathbf{k})}{E^\ell_\mathbf{k}-E^{-\ell}_{-\mathbf{k}}}\right\} \nonumber \\[2mm]
&=& \sum_{\ell=\pm} \int_\mathbf{k} \frac{(-\ell) f(E^\ell_\mathbf{k})}{4e_\mathbf{k}} \left\{\frac{2\ell e_\mathbf{k} (g_\mathbf{k}-g_{-\mathbf{k}})+2 h_\mathbf{k}(h_\mathbf{k} -h_{-\mathbf{k}})} {(E^\ell_\mathbf{k}-E^{-\ell}_{-\mathbf{k}})(E^\ell_\mathbf{k}-E^\ell_{-\mathbf{k}})}\right\} \, .
\end{eqnarray}
%
In the second equality we have used $h_{\mathbf{k}-\mathbf{Q}}=-h_{{-\mathbf{k}}}$, $e_{\mathbf{k}-\mathbf{Q}}=e_{{-\mathbf{k}}}$, and $E^\pm_{\mathbf{k}-\mathbf{Q}}=E^\pm_{{-\mathbf{k}}}$. It is easy to see that the linear combinations $g^-_\mathbf{k} = g_\mathbf{k} - g_{-\mathbf{k}}$, $h^\pm_\mathbf{k} = h_\mathbf{k} \pm h_{-\mathbf{k}}$, and $e^\pm_\mathbf{k} = e_\mathbf{k} \pm e_{-\mathbf{k}}$ obey the relations $h^-_\mathbf{k}h^+_\mathbf{k} = h_\mathbf{k}^2-h_{-\mathbf{k}}^2 = e_\mathbf{k}^2-e_{-\mathbf{k}}^2 = e^-_\mathbf{k}e^+_\mathbf{k}$, and $h^-_\mathbf{k} = -g^-_\mathbf{k}$. Using these relations, we finally get
%
\begin{eqnarray}
\widetilde{\chi}^0_{33}(-\mathbf{Q},0) &=& \sum_{\ell=\pm}\int_\mathbf{k} \frac{(-\ell) f(E^\ell_\mathbf{k})}{4e_\mathbf{k}} \left\{\frac{2\ell e_\mathbf{k} g^-_\mathbf{k} + 2h_\mathbf{k}h^-_\mathbf{k} }{(g^-_\mathbf{k} + \ell e^+_\mathbf{k})(g^-_\mathbf{k} + \ell e^-_\mathbf{k})} \right\} \nonumber \\
&=& \sum_{\ell=\pm}\int_\mathbf{k} \frac{(-\ell) f(E^\ell_\mathbf{k})}{4e_\mathbf{k}} \left\{\frac{2\ell e_\mathbf{k} g^-_\mathbf{k} + e^-_\mathbf{k}e^+_\mathbf{k} + (g^-_\mathbf{k})^2 }{(g^-_\mathbf{k} + \ell e^+_\mathbf{k})(g^-_\mathbf{k} + \ell e^-_\mathbf{k})} \right\} \nonumber \\
&=& \sum_{\ell=\pm}\int_\mathbf{k} \frac{(-\ell) f(E^\ell_\mathbf{k})}{4e_\mathbf{k}} = \int_\mathbf{k} \frac{f(E^-_\mathbf{k}) - f(E^+_\mathbf{k})}{4e_\mathbf{k}} \, .
\end{eqnarray}
%
\section{Expressions for \texorpdfstring{$\kappa_\alpha^{30}(\mathbf{0})$}{ka30(0)} and \texorpdfstring{$\kappa_\alpha^{31}(\mathbf{0})$}{ka31(0)}}
\label{app: ka30 and ka31}
%
In this appendix, we report explicit expressions for the off-diagonal paramagnetic contributions to the spin stiffness, namely $\kappa_\alpha^{30}(\mathbf{0})$ and $\kappa_\alpha^{31}(\mathbf{0})$.
For $\kappa_\alpha^{30}(\mathbf{0})$ we have, after having made the trace in \eqref{eq_low_spiral: paramagnetic contr Kernel} explicit,
%
\begin{equation}
\begin{split}
&\kappa_\alpha^{30}(\mathbf{0})=\lim_{\mathbf{q}\to\mathbf{0}}K_{\mathrm{para},\alpha 0}^{30}(\mathbf{q},\mathbf{q}',0)=\\&=-\frac{1}{4}\int_\mathbf{k} T\sum_{\nu_n} \left\{\left[G^2_\mathbf{k}(i\nu_n)+F^2_\mathbf{k}(i\nu_n)\right]\gamma^\alpha_\mathbf{k}-\left[\overline{G}^2_\mathbf{k}(i\nu_n)+F^2_\mathbf{k}(i\nu_n)\right]\gamma^\alpha_{\mathbf{k}+\mathbf{Q}}\right\}\delta_{\mathbf{q}',\mathbf{0}}\\
&=-\frac{1}{4}\int_\mathbf{k} T\sum_{\nu_n}\left\{\partial_{k_\alpha}\left[G_\mathbf{k}-\overline{G}_\mathbf{k}\right]+4F_\mathbf{k}^2\,\partial_{k_\alpha}h_\mathbf{k}\right\}\delta_{\mathbf{q}',\mathbf{0}},
\end{split}
\end{equation}
%
where we have made use of properties \eqref{eq_low_spiral: derivatives of G} in the last line. The first term vanishes when integrated by parts, while the Matsubara summation for the second yields
%
\begin{equation}
\begin{split}
\kappa_\alpha^{30}(\mathbf{0})= -\frac{\Delta^2}{4}\int_\mathbf{k} \left[\frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{e^3_\mathbf{k}}+\frac{f^\prime(E^+_\mathbf{k})+f^\prime(E^-_\mathbf{k})}{e_\mathbf{k}^2}\right](\partial_{k_{\alpha}}h_{\mathbf{k}}).
\end{split}
\end{equation}
%
For $\kappa_\alpha^{31}(\mathbf{0})$ we have
%
\begin{equation}
\begin{split}
\lim_{\mathbf{q}\to\mathbf{0}}K_{\mathrm{para},\alpha 0}^{31}&(\mathbf{q},\mathbf{q}',0)=\\ &=-\frac{1}{4}\int_\mathbf{k} T\sum_{\nu_n} \left[G_\mathbf{k}(i\nu_n)F_\mathbf{k}(i\nu_n)\gamma^\alpha_\mathbf{k}-\overline{G}_\mathbf{k}(i\nu_n)F_\mathbf{k}(i\nu_n)\gamma^\alpha_{\mathbf{k}+\mathbf{Q}}\right]\left(\delta_{\mathbf{q}',\mathbf{Q}}+\delta_{\mathbf{q}',-\mathbf{Q}}\right).
\end{split}
\end{equation}
%
Defining $\kappa_\alpha^{31}(\mathbf{0})=2K_{\mathrm{para},\alpha 0}^{31}(\mathbf{0},\mathbf{Q},0)$ (see Eq.~\eqref{eq_low_spiral: k31 def}), and performing the Matsubara sum, we obtain
%
\begin{equation}
\begin{split}
\kappa_\alpha^{31}(\mathbf{0})=-\frac{\Delta^2}{4} \int_\mathbf{k}\bigg\{ \left[\frac{h_\mathbf{k}}{e_\mathbf{k}}(\partial_{k_\alpha}g_\mathbf{k})+(\partial_{k_\alpha}h_\mathbf{k})\right]\frac{f'(E^+_\mathbf{k})}{e_\mathbf{k}} +\Big[&\frac{h_\mathbf{k}}{e_\mathbf{k}}(\partial_{k_\alpha}g_\mathbf{k})-(\partial_{k_\alpha}h_\mathbf{k})\Big]\frac{f'(E^-_\mathbf{k})}{e_\mathbf{k}}\\
&+\frac{h_\mathbf{k}}{e_\mathbf{k}^2}(\partial_{k_\alpha}g_\mathbf{k}) \frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{e_\mathbf{k}}\bigg\}.
\end{split}
\end{equation}
%
Furthermore, it is easy to see that $K_{\alpha 0}^{31}(\mathbf{0},\pm\mathbf{Q},0)=\mp iK_{\alpha 0}^{32}(\mathbf{0},\pm\mathbf{Q},0)$, which, together with Eq.~\eqref{eq_low_spiral: k32 def}, proves $\kappa_\alpha^{31}(\mathbf{0})=\kappa_\alpha^{32}(\mathbf{0})$. We remark that in the N\'eel limit both $\kappa_\alpha^{30}(\mathbf{0})$ and $\kappa_\alpha^{31}(\mathbf{0})$ vanish, as their integrands are odd under $\mathbf{k}\to\mathbf{k}+\mathbf{Q}$.
%
\section{Expressions for \texorpdfstring{$\widetilde{\chi}_0^{-a}(Q)$}{chit0-a(Q)}}
\label{app: chit0-a(Q)}
%
We report here the RPA expressions for the off-diagonal bare susceptibilities $\widetilde{\chi}_0^{-a}(Q)$, with $a=0,1,2$. They can all be obtained by computing the trace and the Matsubara summation in Eq.~\eqref{app: chit0-a(Q)}.
We obtain
%
\begin{subequations}
\begin{align}
&\widetilde{\chi}_0^{-0}(Q)=-\frac{1}{16}\int_\mathbf{k} \sum_{\ell,\ell'=\pm}\left[\ell\frac{\Delta}{e_\mathbf{k}}+\ell'\frac{\Delta}{e_{\mathbf{k}+\mathbf{Q}}}+\ell\ell'\frac{\Delta(h_{\mathbf{k}+\mathbf{Q}}-h_\mathbf{k})}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{Q}}}\right]F_{\ell\ell'}(\mathbf{k},\mathbf{Q},0),\\
&\widetilde{\chi}_0^{-1}(Q)=-\frac{1}{16}\int_\mathbf{k} \sum_{\ell,\ell'=\pm}\left[1+\ell\frac{h_\mathbf{k}}{e_\mathbf{k}}-\ell'\frac{h_{\mathbf{k}+\mathbf{Q}}}{e_{\mathbf{k}+\mathbf{Q}}}-\ell\ell'\frac{h_\mathbf{k} h_{\mathbf{k}+\mathbf{Q}}-\Delta^2}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{Q}}}\right]F_{\ell\ell'}(\mathbf{k},\mathbf{Q},0),\\
&\widetilde{\chi}_0^{-2}(Q)=+\frac{i}{16}\int_\mathbf{k} \sum_{\ell,\ell'=\pm}\left[1+\ell\frac{h_\mathbf{k}}{e_\mathbf{k}}-\ell'\frac{h_{\mathbf{k}+\mathbf{Q}}}{e_{\mathbf{k}+\mathbf{Q}}}-\ell\ell'\frac{h_\mathbf{k} h_{\mathbf{k}+\mathbf{Q}}+\Delta^2}{e_\mathbf{k} e_{\mathbf{k}+\mathbf{Q}}}\right]F_{\ell\ell'}(\mathbf{k},\mathbf{Q},0),
\end{align}
\end{subequations}
%
with $F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega)$ defined as in Eq.~\eqref{eq_low_spiral: Fll def}.
\end{document}
\chapter{Symmetries and flow equation of the vertex function}
\label{app: symm V}
%
In this Appendix, we present the symmetries and the explicit flow equation of the vertex function $V$.
%
\section{Symmetries of \texorpdfstring{$V$}{V}}
%
We start by considering the effect of the following symmetries on $V$: SU(2)-spin, lattice, and time-reversal (TRS) symmetries, translational invariance, remnants of antisymmetry (RAS), and complex conjugation (CC). More detailed discussions can be found in~\cite{Husemann2009,Rohringer2012,Wentzell2016}.
%
\subsection{Antisymmetry properties}{}
%
The two-particle vertex enters the effective action as
%
\begin{equation}
\frac{1}{(2!)^2}\sum_{\substack{x_1',x_2',\\x_1,x_2}}V(x_1',x_2',x_1,x_2)\overline{\psi}(x_1')\overline{\psi}(x_2')\psi(x_2)\psi(x_1),
\label{eq_app_V: gamma4}
\end{equation}
%
where $x=(\mathbf{k},\nu,\sigma)$ is a collective variable comprising the lattice momentum $\mathbf{k}$, a fermionic Matsubara frequency $\nu$, and the spin quantum number $\sigma$. From now on we label as $k$ the pair $(\mathbf{k},\nu)$. From Eq.~\eqref{eq_app_V: gamma4}, we immediately see that upon exchanging the dummy variables $x_1'$ and $x_2'$, or $x_1$ and $x_2$, the effective action acquires a minus sign because of the Grassmann algebra of the fields $\psi$ and $\overline{\psi}$. To keep the effective action invariant, the vertex must therefore obey
%
\begin{equation}
V(x_1',x_2',x_1,x_2) = -V(x_2',x_1',x_1,x_2) = -V(x_1',x_2',x_2,x_1).
\label{eq_app_V: Antisymmetry}
\end{equation}
%
%
\subsection{SU(2)-spin symmetry}{}
%
The SU(2)-spin symmetry acts on the fermionic fields as
%
\begin{subequations}
\begin{align}
&\psi_{k,\sigma}\to \sum_{\sigma'}U_{\sigma\sigma'}\,\psi_{k,\sigma'}, \\
&\overline{\psi}_{k,\sigma}\to \sum_{\sigma'}U^\dagger_{\sigma\sigma'}\,\overline{\psi}_{k,\sigma'},
\label{eq_app_V: SU(2) psi}
\end{align}
\end{subequations}
%
with $U\in\mathrm{SU(2)}$.
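For instance, a global spin rotation by an angle $\theta$ about the spin-$z$ axis corresponds to $U=e^{i\theta\sigma^3/2}=\mathrm{diag}\big(e^{i\theta/2},e^{-i\theta/2}\big)$, under which $\psi_{k,\uparrow}\to e^{i\theta/2}\,\psi_{k,\uparrow}$ and $\psi_{k,\downarrow}\to e^{-i\theta/2}\,\psi_{k,\downarrow}$.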
A vertex that is invariant under~\eqref{eq_app_V: SU(2) psi} can be expressed as (see also Eq.~\eqref{eq_methods: V SU(2) inv}) % \begin{equation} \begin{split} V_{\sigma_1'\sigma_2'\sigma_1\sigma_2}(k_1',k_2',k_1,k_2) = V(k_1',k_2',k_1,k_2)\delta_{\sigma_1'\sigma_1}\delta_{\sigma_2'\sigma_2} +\overline{V}(k_1',k_2',k_1,k_2)\delta_{\sigma_1'\sigma_2}\delta_{\sigma_2'\sigma_1}, \end{split} \end{equation} % where Eq.~\eqref{eq_app_V: Antisymmetry} forces the identity % \begin{equation} \overline{V}(k_1',k_2',k_1,k_2)=-V(k_2',k_1',k_1,k_2)=-V(k_1',k_2',k_2,k_1). \end{equation} % From now on we only consider symmetry properties of the vertex function $V=V_{\uparrow\downarrow\uparrow\downarrow}$. % \subsection{Time and space translational invariance}{} % The invariance of the system under time and space translations implies energy and momentum conservation, respectively. If these symmetries are fulfilled, then the vertex function can be written as % \begin{equation} V(k_1',k_2',k_1,k_2)=V(k_1',k_2',k_1)\, \delta\left(k_1'+k_2'-k_1-k_2\right). \end{equation} % % \subsection{Remnants of antisymmetry}{} % The vertex function $V$ is not antisymmetric under the exchange of the pair $(k_1',k_2')$ or $(k_1,k_2)$. It is however invariant under a \emph{simultaneous} exchange of them, that is, % \begin{equation} V(k_1',k_2',k_1,k_2)=V(k_2',k_1',k_2,k_1). \end{equation} % We call this symmetry remnants of antisymmetry (RAS). % \subsection{Time reversal symmetry}{} % A time reversal transformation exchanges the fermionic creation and annihilation operators. It acts on the Grassmann variables as % \begin{subequations} \begin{align} &\psi_{k,\sigma}\to i\overline{\psi}_{k,\sigma}, \\ &\overline{\psi}_{k,\sigma}\to i\psi_{k,\sigma}. \end{align} \end{subequations} % Since this is a symmetry of the bare action, the vertex function must obey % \begin{equation} V(k_1',k_2',k_1,k_2)=V(k_1,k_2,k_1',k_2'). \end{equation} % % \subsection{Lattice symmetries}{} % The square lattice considered in this Thesis is invariant under transformations belonging to the discrete group $C_4$. The latter are implemented on the fermionic fields as % \begin{subequations} \begin{align} &\psi_{(\mathbf{k},\nu),\sigma}\to \psi_{(R\mathbf{k},\nu),\sigma}, \\ &\overline{\psi}_{(\mathbf{k},\nu),\sigma}\to \overline{\psi}_{(R\mathbf{k},\nu),\sigma}, \end{align} \end{subequations} % with $R\in C_4$. If the lattice symmetries are not spontaneously broken, the vertex function obeys % \begin{equation} V(k_1',k_2',k_1,k_2)=V(R\,k_1',R\,k_2',R\,k_1,R\,k_2), \end{equation} % with $R\,k=(R\mathbf{k},\nu)$. For a more detailed discussion see Ref.~\cite{Platt2013}. % \subsection{Complex conjugation}{} % The complex conjugation (CC) transformation acts as % \begin{subequations} \begin{align} &\psi_{k,\sigma}\to i\mathcal{K}\overline{\psi}_{\widetilde{k},\sigma},\\ &\overline{\psi}_{k,\sigma}\to i\mathcal{K}\psi_{\widetilde{k},\sigma}, \end{align} \end{subequations} % where $\widetilde{k}=(\mathbf{k},-\nu)$, and the operator $\mathcal{K}$ transforms scalars into their complex conjugate. Since CC is a symmetry of the Hubbard action, the vertex function fulfills % \begin{equation} V(k_1',k_2',k_1,k_2)= \left[V(\widetilde{k}_1',\widetilde{k}_2',\widetilde{k}_1,\widetilde{k}_2)\right]^*. 
\end{equation}
%
%
\subsection{Channel decomposition}{}
%
Let us now analyze how the symmetries described above act on the physical channels into which the vertex function can be decomposed (see also Eq.~\eqref{eq_methods: channel decomp physical})
%
\begin{equation}
\begin{split}
V^\L(k_1',k_2',k_1) = &\lambda(k_1',k_2',k_1) \\
&+ \frac{1}{2}\mathcal{M}^{\L}_{k_{ph},k_{ph}'}(k_1-k_1') - \frac{1}{2}\mathcal{C}^{\L}_{k_{ph},k_{ph}'}(k_1-k_1') \\
&+ \mathcal{M}^{\L}_{k_{\overline{ph}},k_{\overline{ph}}'}(k_2'-k_1) \\
&- \mathcal{P}^{\L}_{k_{pp},k_{pp}'}(k_1'+k_2'),
\end{split}
\end{equation}
%
with $k_{ph}$, $k'_{ph}$, $k_{\overline{ph}}$, $k'_{\overline{ph}}$, $k_{pp}$, and $k'_{pp}$ defined as in Eq.~\eqref{eq_methods: k k' pp ph phx}. Combining RAS, TRS, CC, and, among the lattice symmetries, only the spatial inversion, we can prove that
%
\begin{subequations}
\begin{align}
&\mathcal{M}_{k,k'}(q)=\mathcal{M}_{k',k}(q),\\
&\mathcal{M}_{k,k'}(q)=\mathcal{M}_{k,k'}(-q),\\
&\mathcal{M}_{k,k'}(q) = \left[\mathcal{M}_{-k+q\mathrm{m}2,-k'+q\mathrm{m}2}(q)\right]^*,
\end{align}
\end{subequations}
%
with $q\mathrm{m}2=(\mathbf{0},2 (j\,\mathrm{mod}\,2)\pi T)$, where $j\in\mathbb{Z}$ is the bosonic Matsubara index of the frequency transfer in $q$. The same relations can be obtained for the charge channel
%
\begin{subequations}
\begin{align}
&\mathcal{C}_{k,k'}(q)=\mathcal{C}_{k',k}(q),\\
&\mathcal{C}_{k,k'}(q)=\mathcal{C}_{k,k'}(-q),\\
&\mathcal{C}_{k,k'}(q) = \left[\mathcal{C}_{-k+q\mathrm{m}2,-k'+q\mathrm{m}2}(q)\right]^*.
\end{align}
\end{subequations}
%
By contrast, for the pairing channel, we have
%
\begin{subequations}
\begin{align}
&\mathcal{P}_{k,k'}(q)=\mathcal{P}_{-k+q\mathrm{m}2,-k'+q\mathrm{m}2}(q),\\
&\mathcal{P}_{k,k'}(q)=\mathcal{P}_{k',k}(q),\\
&\mathcal{P}_{k,k'}(q) = \left[\mathcal{P}_{k,k'}(-q)\right]^*.
\end{align}
\end{subequations}
%
%
\subsection{SBE decomposition}
%
It is also useful to apply the symmetries described above to the SBE decomposition of the vertex function, introduced in Chap.~\ref{chap: Bos Fluct Norm}. In more detail, we study the symmetry properties of the screened interactions and Yukawa couplings, as the rest functions obey the same relations as their respective channels, described above.
%
\subsubsection{Magnetic channel}
%
The symmetries in the magnetic channel read as
%
\begin{subequations}
\begin{align}
&h^m_k(q) = h^m_k(-q),\\
&h^m_k(q) = \left[h^m_{-k+q\mathrm{m}2}(q)\right]^*,\\
&D^m(q)=D^m(-q),\\
&D^m(q)=\left[D^m(q)\right]^*.
\end{align}
\end{subequations}
%
%
\subsubsection{Charge channel}
%
Similarly, in the charge channel we have
%
\begin{subequations}
\begin{align}
&h^c_k(q) = h^c_k(-q),\\
&h^c_k(q) = \left[h^c_{-k+q\mathrm{m}2}(q)\right]^*,\\
&D^c(q)=D^c(-q),\\
&D^c(q)=\left[D^c(q)\right]^*.
\end{align}
\end{subequations}
%
%
\subsubsection{Pairing channel}
%
In the pairing channel we obtain
%
\begin{subequations}
\begin{align}
&h^p_k(q) = h^p_{-k+q\mathrm{m}2}(q),\\
&h^p_k(q) = \left[h^p_{k}(-q)\right]^*,\\
&D^p(q)=\left[D^p(-q)\right]^*.
\end{align}
\end{subequations}
%
%
\section{Explicit flow equations for physical channels}
%
In this section, we explicitly express the flow equations for the physical channels within the form-factor expansion introduced in Sec.~\ref{sec_fRG_MF: symmetric regime}.
% The flow equation for the $s$-wave projected magnetic channel reads as % \begin{equation} \partial_\L \mathcal{M}_{\nu,\nu'}(q)=-T\sum_\omega V^m_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,ph}_\omega(q)\right] V^m_{\omega\nu}(q), \end{equation} % with the bubble reading as % \begin{equation} \chi^{0,ph}_\nu(q)=\int_\mathbf{k} G\left((\mathbf{k},\nu)+\rnddo{q}\right)G\left((\mathbf{k},\nu)-\rndup{q}\right), \end{equation} % and the vertex $V^m$ as % \begin{equation} \begin{split} V^m_{\nu\nu'}(\mathbf{q},\Omega)= &U + \mathcal{M}_{\nu\nu'}(\mathbf{q},\Omega)\\ +&\int_\mathbf{k} \bigg\{ -\frac{1}{2}\mathcal{C}_{\rndup{\nu+\nu'}-\rndup{\Omega},\rndup{\nu+\nu'}+\rnddo{\Omega}}(\mathbf{k},\nu'-\nu)\\ &\phantom{\int_\mathbf{k} \Big\{} +\frac{1}{2}\mathcal{M}_{\rndup{\nu+\nu'}-\rndup{\Omega},\rndup{\nu+\nu'}+\rnddo{\Omega}}(\mathbf{k},\nu'-\nu)\\ &\phantom{\int_\mathbf{k} \Big\{} -\mathcal{S}_{\rndup{\nu-\nu'-\Omega},\rndup{\nu'-\nu-\Omega}}(\mathbf{k},\nu+\nu'-\Omega\mathrm{m}2)\\ &\phantom{\int_\mathbf{k} \Big\{} -\mathcal{D}_{\rndup{\nu-\nu'-\Omega},\rndup{\nu'-\nu-\Omega}}(\mathbf{k},\nu+\nu'-\Omega\mathrm{m}2) \,\frac{\cos k_x+\cos k_y}{2} \bigg\}. \end{split} \end{equation} % Similarly, in the charge channel, we have % \begin{equation} \partial_\L \mathcal{C}_{\nu,\nu'}(q)=-T\sum_\omega V^c_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,ph}_\omega(q)\right] V^c_{\omega\nu}(q), \end{equation} % with % \begin{equation} \begin{split} V^c_{\nu\nu'}(\mathbf{q},\Omega)= &U - \mathcal{C}_{\nu\nu'}(\mathbf{q},\Omega)\\ +&\int_\mathbf{k} \bigg\{ \frac{1}{2}\mathcal{C}_{\rndup{\nu+\nu'}-\rndup{\Omega},\rndup{\nu+\nu'}+\rnddo{\Omega}}(\mathbf{k},\nu'-\nu)\\ &\phantom{\int_\mathbf{k} \Big\{} +\frac{3}{2}\mathcal{M}_{\rndup{\nu+\nu'}-\rndup{\Omega},\rndup{\nu+\nu'}+\rnddo{\Omega}}(\mathbf{k},\nu'-\nu)\\ &\phantom{\int_\mathbf{k} \Big\{} -2\mathcal{S}_{\rndup{\nu-\nu'-\Omega},\rndup{\nu-\nu'+\Omega}}(\mathbf{k},\nu+\nu'-\Omega\mathrm{m}2)\\ &\phantom{\int_\mathbf{k} \Big\{} +\mathcal{S}_{\rndup{\nu-\nu'-\Omega},\rndup{\nu'-\nu-\Omega}}(\mathbf{k},\nu+\nu'-\Omega\mathrm{m}2)\\ &\phantom{\int_\mathbf{k} \Big\{} -2\mathcal{D}_{\rndup{\nu-\nu'-\Omega},\rndup{\nu-\nu'+\Omega}}(\mathbf{k},\nu+\nu'-\Omega\mathrm{m}2) \,\frac{\cos k_x+\cos k_y}{2}\\ &\phantom{\int_\mathbf{k} \Big\{} +\mathcal{D}_{\rndup{\nu-\nu'-\Omega},\rndup{\nu'-\nu-\Omega}}(\mathbf{k},\nu+\nu'-\Omega\mathrm{m}2) \,\frac{\cos k_x+\cos k_y}{2} \bigg\}. \end{split} \end{equation} % The flow equation for the $s$-wave pairing channel reads as % \begin{equation} \partial_\L \mathcal{S}_{\nu,\nu'}(q)=T\sum_\omega V^p_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,pp}_\omega(q)\right] V^p_{\omega\nu}(q), \end{equation} % with the particle-particle bubble given by % \begin{equation} \chi^{0,pp}_\nu(q)=\int_\mathbf{k} G\left(\rnddo{q}+(\mathbf{k},\nu)\right)G\left(\rndup{q}-(\mathbf{k},\nu)\right), \end{equation} % and the vertex $V^p$ as % \begin{equation} \begin{split} V^p_{\nu\nu'}(\mathbf{q},\Omega)= &U - \mathcal{S}_{\nu\nu'}(\mathbf{q},\Omega)\\ +&\int_\mathbf{k} \bigg\{ -\frac{1}{2}\mathcal{C}_{\rnddo{\Omega}-\rndup{\nu+\nu'},\rndup{\Omega}-\rndup{\nu+\nu'}}(\mathbf{k},\nu'-\nu)\\ &\phantom{\int_\mathbf{k} \Big\{} +\frac{1}{2}\mathcal{M}_{\rnddo{\Omega}-\rndup{\nu+\nu'},\rndup{\Omega}-\rndup{\nu+\nu'}}(\mathbf{k},\nu'-\nu)\\ &\phantom{\int_\mathbf{k} \Big\{} +\mathcal{M}_{\rndup{\nu-\nu'+\Omega},\rndup{\nu'-\nu+\Omega}}(\mathbf{k},-\nu-\nu'+\Omega\mathrm{m}2) \bigg\}. 
\end{split}
\end{equation}
%
Finally, in the $d$-wave pairing channel, we have
%
\begin{equation}
\partial_\L \mathcal{D}_{\nu,\nu'}(q)=T\sum_\omega V^d_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,pp,d}_\omega(q)\right] V^d_{\omega\nu}(q),
\end{equation}
%
where the $d$-wave pairing bubble is
%
\begin{equation}
\chi^{0,pp,d}_\nu(q)=\int_\mathbf{k}\,d_\mathbf{k}^2\, G\left(\rnddo{q}+(\mathbf{k},\nu)\right)G\left(\rndup{q}-(\mathbf{k},\nu)\right),
\end{equation}
%
with $d_\mathbf{k}=\cos k_x-\cos k_y$, and the vertex $V^d$ is given by
%
\begin{equation}
\begin{split}
V^d_{\nu\nu'}(\mathbf{q},\Omega)= &-\mathcal{D}_{\nu\nu'}(\mathbf{q},\Omega)\\
+&\int_\mathbf{k} \frac{\cos k_x+\cos k_y}{2}\bigg\{ +\frac{1}{2}\mathcal{C}_{\rnddo{\Omega}-\rndup{\nu+\nu'},\rndup{\Omega}-\rndup{\nu+\nu'}}(\mathbf{k},\nu'-\nu)\\
&\phantom{\int_\mathbf{k} \frac{\cos k_x+\cos k_y}{2}\Big\{} -\frac{1}{2}\mathcal{M}_{\rnddo{\Omega}-\rndup{\nu+\nu'},\rndup{\Omega}-\rndup{\nu+\nu'}}(\mathbf{k},\nu'-\nu)\\
&\phantom{\int_\mathbf{k} \frac{\cos k_x+\cos k_y}{2}\Big\{} -\mathcal{M}_{\rndup{\nu-\nu'+\Omega},\rndup{\nu'-\nu+\Omega}}(\mathbf{k},-\nu-\nu'+\Omega\mathrm{m}2) \bigg\}.
\end{split}
\end{equation}
%
%
\chapter*{Introduction}
\label{chap: intro}
\section*{Context and motivation}
Since the discovery of high-temperature superconductivity in copper-oxide compounds in the late 1980s~\cite{Bednorz1986}, the strongly correlated electron problem has gained considerable attention among condensed matter theorists. In fact, the conduction electrons lying within the stacked Cu-O planes, where the relevant physics is expected to take place, strongly interact with each other. These strong correlations produce a very rich phase diagram spanned by chemical doping and temperature~\cite{Keimer2015}: while the undoped compounds are antiferromagnetic Mott insulators, chemical substitution weakens the magnetic correlations and produces a so-called "superconducting dome" centered around optimal doping. Aside from these phases, many others have been found to coexist and compete with them, such as charge- and spin-density waves~\cite{Comin2014}, pseudogap~\cite{Proust2019}, and strange metal~\cite{Hussey2008}.

From a theoretical perspective, the early experiments on the cuprate materials immediately stimulated the search for a model able to describe at least some of the many competing phases. In 1987, Anderson proposed the single-band two-dimensional Hubbard model to describe the electrons moving in the copper-oxide planes~\cite{Anderson1987}. Although the real materials exhibit several bands with complex structures, Zhang and Rice suggested that the Cu-O hybridization produces a singlet whose propagation through the lattice can be described by a single-band model~\cite{Zhang1988}. While other models have been proposed, such as the $t$-$J$ model~\cite{Zhang1988}, which describes the motion of holes in a Heisenberg antiferromagnet and corresponds to the strong-coupling limit of the Hubbard model~\cite{Chao1977,Chao1978}, or a more complex three-band model~\cite{Emery1987,Emery1988}, the Hubbard model has gained the most popularity because of its (apparent) simplicity. The model was originally introduced by Hubbard~\cite{Hubbard1963,Hubbard1964}, Kanamori~\cite{Kanamori1963}, and Gutzwiller~\cite{Gutzwiller1963} to describe correlation phenomena in three-dimensional systems with partially filled $d$- and $f$-bands.
It consists of spin-$\frac{1}{2}$ electrons moving on a square lattice, with quantum mechanical hopping amplitudes $t_{jj'}$ between the sites labeled as $j$ and $j'$, and experiencing an on-site repulsive interaction $U$ (see Fig.~\ref{fig_intro: hubbard}).
%
\begin{figure}[t]
\centering
\includegraphics[width=0.275\textwidth]{Hubbard.png}
\caption{Pictorial representation of the Hubbard model on a square lattice. Here we consider hopping amplitudes $t$, $t'$ and $t^{\prime\prime}$ between nearest, next-to-nearest and third-neighboring sites, respectively. The onsite repulsive interaction only acts between opposite spin electrons, as, due to the Pauli principle, equal spin electrons can never occupy the same site.}
\label{fig_intro: hubbard}
\end{figure}
%
Despite its apparent simplicity, the competition of different energy scales (Fig.~\ref{fig_intro: hubbard scales}) in the Hubbard model leads to various phases, some of which are still far from being understood~\cite{Qin2021}. One of the key ingredients is the competition between the localization energy scale $U$ and the kinetic energy (given by the bandwidth $D=8t$), which instead tends to delocalize the electrons. This competition gives rise, at half filling, that is, when the single band is half occupied, to the celebrated Mott metal-insulator transition (MIT). At weak coupling the system is in a metallic phase, characterized by itinerant electrons. Above a given critical value of the onsite repulsion $U$, the localized state becomes energetically favorable over the metallic one, realizing a correlated (Mott) insulator. Capturing this kind of physics was one of the early successes of the dynamical mean-field theory (DMFT)~\cite{Metzner1989,Georges1992,Georges1996}.

Another important energy scale is given by the antiferromagnetic exchange coupling $J$. Indeed, at strong coupling the half-filled Hubbard model with only nearest-neighbor hopping amplitude $t$ can be mapped onto the antiferromagnetic Heisenberg model with coupling constant $J=4t^2/U$, where the electron spins are the only degrees of freedom, as the charge fluctuations get frozen out. Therefore, the ground state at half filling is a N\'eel antiferromagnet. This is true not only at strong, but also at weak coupling and for finite hopping amplitudes to sites beyond nearest neighbors. Indeed, a crossover takes place by varying the interaction strength. At small $U$, the instability to antiferromagnetism is driven by the Fermi surface (FS) geometry, with the wave vector $\mathbf{Q}=(\pi/a,\pi/a)$ being a nesting vector (where $a$ is the lattice spacing), that is, it maps some points (hot spots) of the FS onto other points on the FS. In the particular case of zero hopping amplitudes beyond nearest neighbors (sometimes called pure Hubbard model), the nesting becomes perfect, with every point on the FS being a hot spot, implying that even infinitesimally small values of the coupling $U$ produce an antiferromagnetic state. In the more general case a minimal interaction strength $U_c$ is required to destabilize the paramagnetic phase. The state characterized by magnetic order occurring on top of a metallic state goes under the name of Slater antiferromagnet~\cite{Slater1951}. By contrast, at strong coupling, local moments form on the lattice sites due to the freezing of charge fluctuations, and they order antiferromagnetically, too~\cite{AuerbachBook1994,GebhardBook1997}. At intermediate coupling, the system is in a state that interpolates between these two limits.
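The strong-coupling scale $J=4t^2/U$ quoted above can be checked with a standard second-order perturbation-theory estimate on a two-site toy model at half filling. For $U\gg t$, virtual hopping into the doubly occupied states lowers the energy of the spin singlet, while the triplet is Pauli-blocked:
%
\begin{equation*}
E_\mathrm{singlet}\simeq -\frac{4t^2}{U}, \qquad E_\mathrm{triplet}\simeq 0, \qquad J = E_\mathrm{triplet}-E_\mathrm{singlet}=\frac{4t^2}{U},
\end{equation*}
%
which reproduces the exchange coupling of the effective Heisenberg model.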
In the pure Hubbard model case, a canonical particle-hole transformation~\cite{Micnas1990} maps the repulsive half-filled Hubbard model onto the attractive one, in which the crossover mentioned above becomes the BCS-BEC crossover~\cite{Eagles1969,Leggett1980,Nozieres1985}, describing the evolution from a weakly coupled superconductor formed by loosely bound Cooper pairs to a strongly coupled one, where the electrons tightly bind, forming bosonic particles which undergo Bose-Einstein condensation\footnote{Actually, in the pure attractive Hubbard model at half filling, the charge density wave and superconducting order parameters combine to form an order parameter with SU(2) symmetry, which is the equivalent of the magnetization in the repulsive model.}.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{Hubbard_scales.png}
\caption{Hierarchy of energy scales in cuprate superconductors. Taken from Ref.~\cite{Metzner2012}.}
\label{fig_intro: hubbard scales}
\end{figure}
%
Upon small electron or hole doping, the antiferromagnetic order gets weakened but may survive, giving rise to an \emph{itinerant antiferromagnet}, with small Fermi surfaces consisting of hole or electron pockets. Depending on the parameters, doping may render the N\'eel antiferromagnetic state unstable, with the spins rearranging in order to maximize the charge carrier kinetic energy, leading to an \emph{incommensurate} spiral magnet with ordering wave vector $\mathbf{Q}\neq(\pi/a,\pi/a)$. The transition from a N\'{e}el to a spiral incommensurate antiferromagnet has been found not only at weak~\cite{Igoshev2010,Fresard1991,Chubukov1995}, but also at strong~\cite{Vilardi2018,Bonetti2020_I} coupling, as well as in the $t$-$J$ model~\cite{Shraiman1989,Kotov2004}, which describes the large-$U$ limit of the doped Hubbard model. At a given doping value, the (incommensurate) antiferromagnetic order finally ends, leaving room for other phases.

At \emph{finite temperature}, long-range antiferromagnetic ordering is prevented by the Mermin-Wagner theorem~\cite{Mermin1966}, but strong magnetic fluctuations survive, leaving their signature in the electron spectrum~\cite{Borejsza2004,Scheurer2018}. At finite doping, magnetic fluctuations generate an effective attractive interaction between the electrons, eventually leading to an instability towards a $d$-wave superconducting state, characterized by a gap that vanishes at the nodal points of the underlying Fermi surface (see left panel of Fig.~\ref{fig_intro: pseudogap exp}). At least in the weak coupling limit, the presence of a superconducting state is to be expected because, as pointed out by Kohn and Luttinger~\cite{Kohn1965}, as long as a sharp Fermi surface is present, every kind of (weak) interaction produces an attraction in a certain angular momentum channel, causing the onset of superconductivity. In other words, the Cooper instability always occurs in a Fermi liquid as soon as the interactions are turned on. At weak and moderate coupling, several methods have found $d$-wave superconducting phases and/or instabilities coexisting and competing with (incommensurate) antiferromagnetic ones. Among these methods, we list the fluctuation exchange approximation (FLEX)~\cite{Bickers1989,Bickers2004}, and the functional renormalization group (fRG)~\cite{Halboth2000,Halboth2000_PRL,Zanchi1998,Zanchi2000,Friederich2011,Husemann2009,Husemann2012}.
The FLEX approximation consists of a decoupling of the fluctuating magnetic and pairing channels, describing the $d$-wave pairing instability as a spin-fluctuation-mediated mechanism. On the other hand, the fRG~\cite{Metzner2012}, based on an exact flow equation~\cite{Wetterich1993,Berges2002}, provides an \emph{unbiased} treatment of \emph{all} the competing channels (including, for example, also charge fluctuations). The unavoidable truncation of the hierarchy of the flow equations, however, limits the applicability of this method to weak-to-moderate coupling values. Important progress has been made in this direction by replacing the \emph{bare} initial conditions with a converged DMFT solution~\cite{Taranto2014}, thereby "boosting"~\cite{Metzner2014} the fRG to strong coupling. One of the most challenging issues of the so-called DMF\textsuperscript{2}RG (DMFT+fRG) approach is the frequency dependence of the vertex function, representing the effective interaction felt by two electrons in the many-body medium, which has to be fully retained to properly capture strong coupling effects~\cite{Vilardi2019}.

Similarly to the fRG, the parquet approximation~\cite{Zheleznyak1997,Eckhardt2020} (PA) treats all fluctuations on an equal footing. Self-consistent parquet equations are hard to converge numerically, and this has prevented their application to physically relevant parameter regimes so far. A notable advancement in this direction has been brought by the development of the multiloop fRG, which, by means of an improved truncation of the exact flow equations, controlled by a parameter $\ell$ counting the number of loops present in the flow diagrams, has been shown to become equivalent to the PA in the limit $\ell\to\infty$~\cite{Kugler2018_PRB,Kugler2018_PRL}.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{pseudogap_exp.png}
\caption{Left panel: superconducting $d$-wave gap and pseudogap as functions of the lattice momentum $\mathbf{k}$. While the superconducting gap vanishes at a single $\mathbf{k}$ point, the pseudogap is zero over a finite portion of the bare Fermi surface. Right panel: pseudogap spectral function (top) exhibiting the characteristic Fermi arcs, and spectral function without pseudogap (bottom), marked by a large Fermi surface. Taken from Ref.~\cite{Keimer2015}.}
\label{fig_intro: pseudogap exp}
\end{figure}
%
Aside from antiferromagnetism and superconductivity, the Hubbard model hosts other intriguing phases, which have also been experimentally observed. One of these is the \emph{pseudogap} phase, characterized by the suppression of spectral weight at the antinodal points of the Fermi surface, forming so-called \emph{Fermi arcs} (see Fig.~\ref{fig_intro: pseudogap exp}). A full theoretical understanding of the mechanisms behind this behavior is still lacking, even though several works with various numerical methods~\cite{Vilk1997,Schaefer2015,VanLoon2018_II,Eckhardt2020,Simkovic2020,Hille2020_II} have found a considerable suppression of the spectral function close to the antinodal points in the Hubbard model. In all these works, the momentum-selective insulating nature of the computed self-energy seems to arise from strong antiferromagnetic fluctuations~\cite{Gunnarsson2015}.
This can be described, at least in the weak coupling regime, by inserting an Ornstein-Zernike formula for the spin susceptibility into a Fock-like diagram for the self-energy, that is (in imaginary frequencies),
%
\begin{equation*}
\chi^m(\mathbf{q},\Omega)\simeq \frac{Z_m}{\Omega^2+c_s^2(\mathbf{q}-\mathbf{Q})^2+(c_s\xi^{-1})^{2}},
\end{equation*}
%
with $c_s$ the spin wave velocity, $\mathbf{Q}$ the antiferromagnetic ordering wave vector, $\xi$ the magnetic correlation length, and $Z_m$ a constant. According to the analysis carried out by Vilk and Tremblay~\cite{Vilk1997}, a gap opens at the antinodal points when $\xi\gg v_F/(\pi T)$, with $v_F$ the Fermi velocity and $T$ the temperature. More recent studies speculate that the pseudogap is connected to the onset of topological order in a fluctuating (that is, without long-range order) antiferromagnet~\cite{Sachdev2016,Chatterjee2017_II,Scheurer2018,Wu2018,Sachdev2018,Sachdev2019}.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{stripes.png}
\caption{Pictorial representation of stripe order. The spins are ordered antiferromagnetically with a magnetization amplitude that gets modulated along one lattice direction. At the same time, the charge density also gets modulated, taking its maximum value where the magnetization density is minimal. Taken from Ref.~\cite{Tocchio2019}.}
\label{fig_intro: stripes}
\end{figure}
%
Numerical calculations have also shown the emergence of a stripe phase, where the antiferromagnetic order parameter shows a modulation along one lattice direction, accompanied by a charge modulation (see Fig.~\ref{fig_intro: stripes}). Stripe order can be understood as an instability of a spiral phase~\cite{Shraiman1989,Dombre1990,Chubukov2021}, that is, a uniform incommensurate antiferromagnetic phase. It can also be viewed as the result of phase separation occurring in a hole-doped antiferromagnet~\cite{Schulz1989,Emery1990}. Stripe phases have been observed in several works, with methods ranging from Hartree-Fock~\cite{Zaanen1989,Poilblanc1989} to the most recent density matrix renormalization group and quantum Monte Carlo studies of "Hubbard cylinders" at strong coupling~\cite{Zheng2017,Qin2020}. Stripe order is found to compete with other magnetic orders, such as uniform spiral magnetic phases~\cite{Shraiman1989}, as well as with $d$-wave superconductivity~\cite{White1999}.

Among the phases listed above, the pseudogap remains one of the most puzzling. Most of its properties can be described by assuming some kind of magnetic order (often N\'eel or spiral) that causes a reconstruction of the large Fermi surface into smaller pockets and the appearance of Fermi arcs in the spectral function. However, no signature of static, long-range order has been found in experiments performed in this regime. In this thesis (see Chapter~\ref{chap: pseudogap} in particular), we theoretically model the pseudogap phase as a short-range ordered magnetic phase, where spin fluctuations prevent symmetry breaking at finite temperature (and they may do so even in the ground state), while many features of the long-range ordered state are retained, such as transport properties, superconductivity, and the spectral function. This is achieved by \emph{fractionalizing} the electron into a fermionic "chargon" and a charge-neutral bosonic "spinon", carrying the spin quantum number of the original electron.
In this way, one can assume magnetic order for the chargon degrees of freedom, which eventually gets destroyed by the spinon fluctuations.
%
\section*{Outline}
%
This thesis is organized as follows:
%
\begin{itemize}
\item In Chapter~\ref{chap: methods} we provide a short introduction to the main methods used throughout this thesis to approach the many-body problem. These are the functional renormalization group (fRG) and the dynamical mean-field theory (DMFT). In particular, we discuss various truncations of the fRG flow equations and the limitation of their validity to weak coupling. We finally present the use of DMFT as an initial condition of the fRG flow to access nonperturbative regimes.
%
\item In Chapter~\ref{chap: spiral DMFT}, we present a study of transport coefficients across the transition between the pseudogap and the Fermi liquid phases of the cuprates. We model the pseudogap phase with long-range spiral magnetic order and perform nonperturbative computations in this regime via the DMFT. Subsequently, we extract an effective mean-field model, and, using the formulas of Ref.~\cite{Mitscherling2018}, we compute the transport coefficients, which we can compare with the experimental results of Ref.~\cite{Badoux2016}. The results of this chapter have appeared in the publication:
\begin{itemize}
\item P.~M.~Bonetti\footnotemark[1], J.~Mitscherling\footnotemark[1]\footnotetext[1]{Equal contribution}, D.~Vilardi, and W.~Metzner, Charge carrier drop at the onset of pseudogap behavior in the two-dimensional Hubbard model, \href{https://link.aps.org/doi/10.1103/PhysRevB.101.165142}{Phys.~Rev.~B \textbf{101}, 165142 (2020)}.
\end{itemize}
%
\item In Chapter~\ref{chap: fRG+MF} we present the fRG+MF (mean-field) framework, introduced in Refs.~\cite{Wang2014,Yamase2016}, which allows one to continue the fRG flow into a spontaneously symmetry-broken phase by means of a relatively simple truncation of the flow equations, which can be formally integrated, resulting in renormalized Hartree-Fock equations. After presenting the general formalism, we apply the method to study the coexistence and competition of antiferromagnetism and superconductivity in the Hubbard model at weak coupling, by means of a state-of-the-art parametrization of the frequency dependence, thus methodologically improving the results of Ref.~\cite{Yamase2016}. We conclude the chapter by reformulating the fRG+MF equations in a mixed boson-fermion representation, where the explicit introduction of a bosonic field allows for a systematic inclusion of the collective fluctuations on top of the MF. The results of this chapter have appeared in the following publications:
%
\begin{itemize}
\item D.~Vilardi, P.~M.~Bonetti, and W.~Metzner, Dynamical functional renormalization group computation of order parameters and critical temperatures in the two-dimensional Hubbard model, \href{https://link.aps.org/doi/10.1103/PhysRevB.102.245128}{Phys.~Rev.~B \textbf{102}, 245128 (2020)}.
\item P.~M.~Bonetti, Accessing the ordered phase of correlated Fermi systems: Vertex bosonization and mean-field theory within the functional renormalization group, \href{https://link.aps.org/doi/10.1103/PhysRevB.102.235160}{Phys.~Rev.~B \textbf{102}, 235160 (2020)}.
\end{itemize}
%
\item In Chapter~\ref{chap: Bos Fluct Norm}, we present a reformulation of the fRG flow equations that exploits the \emph{single boson exchange} (SBE) representation of the two-particle vertex, introduced in Ref.~\cite{Krien2019_I}.
The key idea of this parametrization is to represent the vertex in terms of processes each of which involves the exchange of a single boson, corresponding to a collective fluctuation, between two electrons, plus a residual interaction. On the one hand, this decomposition offers numerical advantages, greatly reducing the computational complexity of the vertex function; on the other hand, it provides physical insight into the collective excitations of the correlated system. The chapter contains a formulation of the flow equations and results obtained by applying this formalism to the Hubbard model at strong coupling, using the DMFT approximation as an initial condition of the fRG flow. The results of this chapter have appeared in:
%
\begin{itemize}
\item P.~M.~Bonetti, A.~Toschi, C.~Hille, S.~Andergassen, and D.~Vilardi, Single boson exchange representation of the functional renormalization group for strongly interacting many-electron systems, \href{https://doi.org/10.1103/PhysRevResearch.4.013034}{Phys.~Rev.~Research \textbf{4}, 013034 (2022)}.
\end{itemize}
%
\item In Chapter~\ref{chap: low energy spiral}, we analyze the low-energy properties of magnons in an itinerant spiral magnet. In particular, we show that the \emph{complete} breaking of the SU(2) symmetry gives rise to three Goldstone modes. For each of these, we present a low-energy expansion of the magnetic susceptibilities within the random phase approximation (RPA), and derive formulas for the spin stiffnesses and spectral weights. We also show that \emph{local Ward identities} guarantee that these quantities can equivalently be computed from the response to a gauge field. Moreover, we analyze the magnitude and the low-momentum and low-frequency dependence of the Landau damping of the Goldstone modes, due to their decay into particle-hole pairs. The results of this chapter have appeared in:
%
\begin{itemize}
\item P.~M.~Bonetti, and W.~Metzner, Spin stiffness, spectral weight, and Landau damping of magnons in metallic spiral magnets, \href{https://link.aps.org/doi/10.1103/PhysRevB.105.134426}{Phys.~Rev.~B \textbf{105}, 134426 (2022)}.
%
\item P.~M.~Bonetti, Local Ward identities for collective excitations in fermionic systems with spontaneously broken symmetries, \href{https://doi.org/10.48550/arXiv.2204.04132}{arXiv:2204.04132}, \textit{accepted in Physical Review B} (2022).
\end{itemize}
%
\item In Chapter~\ref{chap: pseudogap}, we formulate a theory for the pseudogap phase in high-$T_c$ cuprates. This is achieved by \emph{fractionalizing} the electron into a "chargon", carrying the original electron charge, and a charge-neutral "spinon", which is an SU(2) matrix providing a time- and space-dependent local spin reference frame. We then consider a magnetically ordered state for the chargons, where the Fermi surface gets reconstructed. Although the chargons display long-range order, symmetry breaking at finite temperature is prevented by spinon fluctuations, in agreement with the Mermin-Wagner theorem. We subsequently derive an effective theory for the spinons by integrating out the chargon degrees of freedom. The spinon dynamics is governed by a non-linear sigma model (NL$\sigma$M). By performing a large-$N$ expansion of the NL$\sigma$M derived from the two-dimensional Hubbard model at moderate coupling, we find a broad finite-temperature pseudogap regime.
At weak or moderate coupling $U$, however, spinon fluctuations are not strong enough to destroy magnetic long-range order in the ground state, except possibly near the edges of the pseudogap regime at large hole doping. The spectral functions in the hole-doped pseudogap regime have the form of hole pockets with a truncated spectral weight on the backside, similar to the experimentally observed Fermi arcs. The results of this chapter appear in
%
\begin{itemize}
\item P.~M.~Bonetti, and W.~Metzner, SU(2) gauge theory of the pseudogap phase in the two-dimensional Hubbard model, \href{https://doi.org/10.48550/arXiv.2207.00829}{arXiv:2207.00829} (2022).
\end{itemize}
%
\end{itemize}
%
\chapter{Methods}
\label{chap: methods}
\section{Functional renormalization group (fRG)}
The original idea of an exact flow equation for a generating functional dates back to Wetterich~\cite{Wetterich1993}, who derived it for a bosonic theory. Since then, the concept of a \emph{nonperturbative} renormalization group, that is, one distinct from the \emph{perturbative} Wilsonian approach~\cite{Wilson1975}, has been applied in many contexts, ranging from quantum gravity to statistical physics (see Ref.~\cite{Dupuis2021} for an overview). The first application of the Wetterich equation to correlated Fermi systems is due to Salmhofer and Honerkamp~\cite{Salmhofer2001}, in the context of the Hubbard model.

In this section, we present the functional renormalization group equations for the one-particle-irreducible (1PI) correlators of fermionic fields. The derivation closely follows Ref.~\cite{Metzner2012}, and we refer to it and to Refs.~\cite{Salmhofer1999,Berges2002,Kopietz2010} for further details.

\subsection{Generating functionals}
We start by defining the generating functional of \emph{connected} Green's functions as~\cite{NegeleOrland}
%
\begin{equation}
W\left[\eta,\overline{\eta}\right]=-\ln\int\!\mathcal{D}\Psi \mathcal{D}\overline{\Psi}\, e^{-\mathcal{S}\left[\Psi,\overline{\Psi}\right]+\left(\overline{\eta},\Psi\right)+\left(\overline{\Psi},\eta\right)},
\label{eq_methods: W functional}
\end{equation}
%
where the symbol $(\overline{\eta},\Psi)$ is a shorthand for $\sum_{x}\overline{\eta}(x)\,\Psi(x)$, with $x$ a collective variable grouping a set of suitable quantum numbers and imaginary time or frequency. The bare action $\mathcal{S}$ typically consists of a noninteracting one-body term $\mathcal{S}_0$,
%
\begin{equation}
\mathcal{S}_0\left[\Psi,\overline{\Psi}\right]= -\left(\overline{\Psi},G_0^{-1}\Psi\right),
\end{equation}
%
with $G_0$ the bare propagator, and a two-body interaction
%
\begin{equation}
\mathcal{S}_\mathrm{int}\left[\Psi,\overline{\Psi}\right]= \frac{1}{(2!)^2} \sum_{\substack{x_1',x_2',\\x_1,x_2}} \lambda(x_1',x_2';x_1,x_2)\, \overline{\Psi}(x_1')\overline{\Psi}(x_2')\Psi(x_2)\Psi(x_1),
\label{eq_methods: Sint}
\end{equation}
%
with $\lambda$ describing the two-body potential. Differentiating Eq.~\eqref{eq_methods: W functional} with respect to the source fields $\eta$ and $\overline{\eta}$, one obtains the correlation functions corresponding to connected Feynman diagrams. In general, we define the connected $m$-particle Green's function as
%
\begin{equation}
G^{(2m)}(x_1,\shortdots,x_m,x_1',\shortdots,x_m')=(-1)^m \frac{\delta^{(2m)} W\left[\eta,\overline{\eta}\right]}{\delta\overline{\eta}(x_1)\shortdots\delta\overline{\eta}(x_m)\delta\eta(x_m')\shortdots\delta\eta(x_1')}\Bigg\rvert_{\eta,\overline{\eta}=0}.
\label{eq_methods: m-particle Gfs}
\end{equation}
%
In particular, the $m=1$ case gives the interacting propagator. Another relevant functional is the so-called effective action, which generates all the 1PI correlators, that is, all correlators which cannot be divided into two distinct parts by removing a propagator line. It is defined as the Legendre transform of $W$,
%
\begin{equation}
\Gamma\left[\psi,\overline{\psi}\right] = W\left[\eta,\overline{\eta}\right] + \left(\overline{\eta},\psi\right)+\left(\overline{\psi},\eta\right),
\end{equation}
%
where the fields $\psi$ and $\overline{\psi}$ represent the expectation values of the original fields $\Psi$ and $\overline{\Psi}$ in the presence of the sources. They are related to $\eta$ and $\overline{\eta}$ via
%
\begin{subequations}
\begin{align}
&\psi = - \frac{\delta W}{\delta\overline{\eta}},\\
&\overline{\psi} = + \frac{\delta W}{\delta\eta},
\end{align}
\label{eq_methods: psi from G}
\end{subequations}
%
and the inverse relations read as
%
\begin{subequations}
\begin{align}
&\frac{\delta\Gamma}{\delta\psi} = -\overline{\eta},\\
&\frac{\delta\Gamma}{\delta\overline{\psi}} = +\eta.
\end{align}
\end{subequations}
%
Differentiating $\Gamma$, one obtains the 1PI $m$-particle correlators, that is,
%
\begin{equation}
\Gamma^{(2m)}(x_1,\shortdots,x_m,x_1',\shortdots,x_m')= \frac{\delta^{(2m)} \Gamma\left[\psi,\overline{\psi}\right]}{\delta\overline{\psi}(x_1')\shortdots\delta\overline{\psi}(x_m')\delta\psi(x_m)\shortdots\delta\psi(x_1)}\Bigg\rvert_{\psi,\overline{\psi}=0}.
\end{equation}
%
In particular, the $m=1$ case gives the inverse interacting propagator,
%
\begin{equation}
\Gamma^{(2)} = G^{-1} = G_0^{-1} - \Sigma,
\end{equation}
%
with $\Sigma$ the self-energy, and the $m=2$ case the so-called two-particle vertex or effective interaction. It is possible to derive~\cite{NegeleOrland} a particular relation between the $W$ and $\Gamma$ functionals, called the reciprocity relation. It reads as
%
\begin{equation}
\mathbf{\Gamma}^{(2)}\left[\psi,\overline{\psi}\right]=\left(\mathbf{G}^{(2)}\left[\eta,\overline{\eta}\right]\right)^{-1},
\label{eq_methods: reciprocity rel}
\end{equation}
%
with
%
\begin{equation}
\mathbf{G}^{(2)}\left[\eta,\overline{\eta}\right] = - \left( \begin{array}{cc} \frac{\delta^2W}{\delta\overline{\eta}(x)\delta\eta(x')} & -\frac{\delta^2W}{\delta\overline{\eta}(x)\delta\overline{\eta}(x')} \\ -\frac{\delta^2W}{\delta\eta(x)\delta\eta(x')} & \frac{\delta^2W}{\delta\eta(x)\delta\overline{\eta}(x')} \end{array} \right),
\end{equation}
%
and
%
\begin{equation}
\mathbf{\Gamma}^{(2)}\left[\psi,\overline{\psi}\right] = \left( \begin{array}{cc} \frac{\delta^2\Gamma}{\delta\overline{\psi}(x')\delta\psi(x)} & \frac{\delta^2\Gamma}{\delta\overline{\psi}(x')\delta\overline{\psi}(x)} \\ \frac{\delta^2\Gamma}{\delta\psi(x')\delta\psi(x)} & \frac{\delta^2\Gamma}{\delta\psi(x')\delta\overline{\psi}(x)} \end{array} \right).
\label{eq_methods: gamma2}
\end{equation}
%
\subsection{Derivation of the exact flow equation}
%
For single-band, translationally invariant systems, the bare propagator $G_0$ takes a simple form in momentum and imaginary frequency space:
%
\begin{equation}
G_0(\mathbf{k},\nu)= \frac{1}{i\nu -\xi_\mathbf{k}},
\end{equation}
%
where $\nu$ is a fermionic Matsubara frequency, taking the values $(2n+1)\pi T$ ($n\in\mathbb{Z}$) at finite temperature $T$, and $\xi_\mathbf{k}$ is the band dispersion relative to the chemical potential $\mu$.
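For later reference, a minimal numerical sketch (in Python) of this object on discrete grids is given below; the nearest-neighbor square-lattice dispersion and the grid sizes are purely illustrative assumptions, not tied to any specific calculation in this thesis:
%
\begin{verbatim}
import numpy as np

def matsubara_fermionic(n_max, T):
    """Fermionic Matsubara frequencies nu_n = (2n+1) pi T."""
    n = np.arange(-n_max, n_max)
    return (2 * n + 1) * np.pi * T

def dispersion(kx, ky, t=1.0, mu=0.0):
    """Nearest-neighbor square-lattice band relative to the
    chemical potential: xi_k = -2t (cos kx + cos ky) - mu."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu

def bare_propagator(nu, xi):
    """G_0(k, nu) = 1 / (i nu - xi_k) on the product grid."""
    return 1.0 / (1j * nu[:, None] - xi[None, :])

# example: 16 x 16 Brillouin-zone mesh, 64 frequencies at T = 0.1
k = np.linspace(-np.pi, np.pi, 16, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")
G0 = bare_propagator(matsubara_fermionic(32, 0.1),
                     dispersion(kx, ky).ravel())
\end{verbatim}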
At low temperatures $G_0$ exhibits a nearly singular structure at $\nu\sim 0$ and $\xi_\mathbf{k}=0$, which strongly influences the physics of the correlated system. This is a manifestation of the importance, at low temperatures, of the low-energy excitations, that is, those close to the Fermi surface. Therefore, one might be tempted to perform the integral in \eqref{eq_methods: W functional} step by step, including first the high-energy modes and then, gradually, the low-energy ones. This can be achieved by regularizing the propagator via a scale-dependent function, that is,
%
\begin{equation}
G_0^\L(\mathbf{k},\nu)=\frac{\Theta^\L(\mathbf{k},\nu)}{i\nu -\xi_\mathbf{k}},
\end{equation}
%
where $\Theta^\L(\mathbf{k},\nu)$ is a function that vanishes for $\nu\ll\L$ and/or $\xi_\mathbf{k}\ll\L$ and tends to one for $\nu\gg\L$ and/or $\xi_\mathbf{k}\gg\L$. In this way, one can define a scale-dependent action as
%
\begin{equation}
\mathcal{S}^\L\left[\Psi,\overline{\Psi}\right]=-\left(\overline{\Psi},Q_0^\L\Psi\right)+\mathcal{S}_\mathrm{int}\left[\Psi,\overline{\Psi}\right],
\end{equation}
%
with $Q_0^\L=(G_0^\L)^{-1}$, as well as a scale-dependent $W$-functional
%
\begin{equation}
W^\L\left[\eta,\overline{\eta}\right]=-\ln\int\!\mathcal{D}\Psi \mathcal{D}\overline{\Psi}\, e^{-\mathcal{S}^\L\left[\Psi,\overline{\Psi}\right]+\left(\overline{\eta},\Psi\right)+\left(\overline{\Psi},\eta\right)}.
\label{eq_methods: W functional Lambda}
\end{equation}
%
Differentiating Eq.~\eqref{eq_methods: W functional Lambda} with respect to $\L$, we obtain an exact flow equation for $W$:
%
\begin{equation}
\begin{split}
\partial_\L W^\L=e^{W^\L}\partial_\L e^{-W^\L} &=e^{W^\L}\int\!\mathcal{D}\Psi \mathcal{D}\overline{\Psi}\,\left(\overline{\Psi},\dot{Q}_0^\L\Psi\right) \,e^{-\mathcal{S}^\L\left[\Psi,\overline{\Psi}\right]+\left(\overline{\eta},\Psi\right)+\left(\overline{\Psi},\eta\right)}\\
&=e^{W^\L}\left(\frac{\delta}{\delta\eta},\dot{Q}_0^\L \frac{\delta}{\delta\overline{\eta}}\right)e^{-W^\L}\\
&=\left(\frac{\delta W^\L}{\delta\eta},\dot{Q}_0^\L\frac{\delta W^\L}{\delta\overline{\eta}}\right)+\tr\left[\dot{Q}_0^\L\frac{\delta^2 W^\L}{\delta\overline{\eta}\delta\eta}\right],
\end{split}
\label{eq_methods: flow equation W}
\end{equation}
%
with $\dot{Q}_0^\L$ a shorthand for $\partial_\L Q_0^\L$. Expanding $W^\L$ in powers of the source fields, one can derive the flow equations for the connected Green's functions in Eq.~\eqref{eq_methods: m-particle Gfs}. Since 1PI correlators are easier to handle, we exploit the above result to derive a flow equation for the effective action functional $\Gamma^\L$:
%
\begin{equation}
\partial_\L\Gamma^\L\left[\psi,\overline{\psi}\right]= \left(\partial_\L\overline{\eta}^\L,\psi\right)+ \left(\overline{\psi},\partial_\L\eta^\L\right)+ \partial_\L W^\L\left[\eta^\L,\overline{\eta}^\L\right],
\end{equation}
%
where $\eta^\L$ and $\overline{\eta}^\L$ are solutions of the implicit equations
%
\begin{subequations}
\begin{align}
&\psi =- \frac{\delta W^\L}{\delta\overline{\eta}},\\
&\overline{\psi} = \frac{\delta W^\L}{\delta\eta}.
\end{align}
\end{subequations}
%
Using the properties of the Legendre transform, we get
%
\begin{equation}
\partial_\L\Gamma^\L\left[\psi,\overline{\psi}\right]=\partial_\L W^\L\left[\eta^\L,\overline{\eta}^\L\right]\Big\rvert_{\eta^\L,\overline{\eta}^\L\,\mathrm{fixed}},
\end{equation}
%
which, combined with Eqs.~\eqref{eq_methods: psi from G}, \eqref{eq_methods: reciprocity rel}, and \eqref{eq_methods: flow equation W}, gives
%
\begin{equation}
\partial_\L\Gamma^\L\left[\psi,\overline{\psi}\right]= -\left(\overline{\psi},\dot{Q}_0^\L\psi\right)-\frac{1}{2}\tr\left[\dot{\mathbf{Q}}_0^\L\left(\mathbf{\Gamma}^{(2)\L}\right)^{-1}\right],
\label{eq_methods: Wetterich eq G0L}
\end{equation}
%
with $\mathbf{\Gamma}^{(2)\Lambda}$ the same as in Eq.~\eqref{eq_methods: gamma2}, and
%
\begin{equation}
\dot{\mathbf{Q}}_0^\L(x,x')= \left( \begin{array}{cc} \dot{Q}_0^\L(x,x') & 0 \\ 0 & -\dot{Q}_0^\L(x',x) \end{array} \right).
\end{equation}
%
Alternatively~\cite{Berges2002}, one can define the regularized bare propagator via a regulator $R^\L$:
%
\begin{equation}
G_0^\L=\frac{1}{G_0^{-1}-R^\L},
\end{equation}
%
and introduce the concept of the effective average action,
%
\begin{equation}
\Gamma_R^\L\left[\psi,\overline{\psi}\right]=\Gamma^\L\left[\psi,\overline{\psi}\right]-\left(\overline{\psi},R^\L\psi\right),
\end{equation}
%
so that the flow equation for $\Gamma_R^\L$ becomes
%
\begin{equation}
\begin{split}
\partial_\L\Gamma^\L_R\left[\psi,\overline{\psi}\right]=&-\frac{1}{2}\tr\left[\dot{\mathbf{R}}^\L\left(\mathbf{\Gamma}_R^{(2)\L}+\mathbf{R}^\L\right)^{-1}\right]\\
=&-\frac{1}{2}\widetilde{\partial}_\L\tr\ln\left[\mathbf{\Gamma}_R^{(2)\L}+\mathbf{R}^\L\right],
\end{split}
\label{eq_methods: Wetterich eq RL}
\end{equation}
%
with $\mathbf{R}^\L$ defined similarly to $\mathbf{Q}_0^\L$, and the symbol $\widetilde{\partial}_\L$ defined as $\displaystyle{\widetilde{\partial}_\L=\dot{R}^\L\frac{\delta}{\delta R^\L}}$. Eq.~\eqref{eq_methods: Wetterich eq G0L} (or \eqref{eq_methods: Wetterich eq RL}) is the so-called Wetterich equation and describes the exact evolution of the effective action functional. For the whole approach to make sense, it is necessary to completely remove the regularization of $G_0$ at the final scale $\L={\Lambda_\mathrm{fin}}$, that is, $G_0^{{\Lambda_\mathrm{fin}}}=G_0$, so that at the final scale the scale-dependent effective action coincides with the effective action of the many-body problem defined by the action $\mathcal{S}$. Furthermore, like any other first-order differential equation, Eq.~\eqref{eq_methods: Wetterich eq G0L} must be complemented with an initial condition at the initial scale ${\Lambda_\mathrm{ini}}$. If we choose the function $\Theta^\L$ such that $G_0^{{\Lambda_\mathrm{ini}}}= 0$, the integral in \eqref{eq_methods: W functional Lambda} is exactly given by the saddle-point approximation, and by Legendre transforming we get
%
\begin{equation}
\Gamma^{{\Lambda_\mathrm{ini}}}\left[\psi,\overline{\psi}\right]=\mathcal{S}\left[\psi,\overline{\psi}\right]+\left(\overline{\psi},\left[Q_0^{{\Lambda_\mathrm{ini}}}-G_0^{-1}\right]\psi\right) =\mathcal{S}^{{\Lambda_\mathrm{ini}}}\left[\psi,\overline{\psi}\right],
\end{equation}
%
or, in terms of the effective average action,
%
\begin{equation}
\Gamma^{{\Lambda_\mathrm{ini}}}_R\left[\psi,\overline{\psi}\right]=\mathcal{S}\left[\psi,\overline{\psi}\right].
\label{eq_methods: fRG Gamma ini}
\end{equation}
%
\subsection{Expansion in the fields}
%
A common approach to tackle Eq.~\eqref{eq_methods: Wetterich eq G0L} is to expand the effective action functional $\Gamma^\L$ in powers of the fields, where the coefficient of the $2m$-th power corresponds, up to a prefactor, to the $m$-particle vertex. We write $\mathbf{\Gamma}^{(2)\L}$ as
%
\begin{equation}
\mathbf{\Gamma}^{(2)\L}\left[\psi,\overline{\psi}\right] = \left(\mathbf{G}^\L\right)^{-1} - \widetilde{\mathbf{\Sigma}}^\L\left[\psi,\overline{\psi}\right],
\label{eq_methods: Gamma2=G-Sigma}
\end{equation}
%
where $\left(\mathbf{G}^\L\right)^{-1}$, the inverse of the interacting propagator, is the field-independent part of $\mathbf{\Gamma}^{(2)\L}$, and $\widetilde{\mathbf{\Sigma}}^\L$ vanishes for zero fields, that is, it is \emph{at least} quadratic in $\psi$, $\overline{\psi}$. We further notice that, as long as no pairing is present in the system, $\mathbf{G}^\L(x,x')$ can be expressed as $\mathrm{diag}\left(G^\L(x,x'),-G^\L(x',x)\right)$. Inserting \eqref{eq_methods: Gamma2=G-Sigma} into \eqref{eq_methods: Wetterich eq G0L} and writing
%
\begin{equation}
\left(\mathbf{\Gamma}^{(2)\L}\right)^{-1} = \left(1-\mathbf{G}^\L \widetilde{\mathbf{\Sigma}}^\L \right)^{-1}\mathbf{G}^\L = \mathbf{G}^\L + \mathbf{G}^\L\widetilde{\mathbf{\Sigma}}^\L\mathbf{G}^\L+ \mathbf{G}^\L\widetilde{\mathbf{\Sigma}}^\L\mathbf{G}^\L\widetilde{\mathbf{\Sigma}}^\L\mathbf{G}^\L + \dots,
\end{equation}
%
we get
%
\begin{equation}
\begin{split}
\partial_\L\Gamma^\L\left[\psi,\overline{\psi}\right]= -\left(\overline{\psi},\dot{Q}_0^\L\psi\right)-\tr\left[\dot{Q}_0^\L G^\L\right] +\frac{1}{2}\tr\left[\mathbf{S}^\L\left(\widetilde{\mathbf{\Sigma}}^\L+\widetilde{\mathbf{\Sigma}}^\L\mathbf{G}^\L\widetilde{\mathbf{\Sigma}}^\L+\dots\right)\right],
\end{split}
\label{eq_methods: flow equation expanded in the fields}
\end{equation}
%
where we have defined a \emph{single-scale propagator} $\mathbf{S}^\L=-\mathbf{G}^\L\dot{\mathbf{Q}}_0^\L\mathbf{G}^\L=\widetilde{\partial}_\L\mathbf{G}^\L$, which, in a normal system, reads as $\mathbf{S}^\L(x,x')=\mathrm{diag}\left(S^\L(x,x'),-S^\L(x',x)\right)$, with $S^\L=\widetilde{\partial}_\L G^\L$. Here, the symbol $\widetilde{\partial}_\L$ is intended as $\widetilde{\partial}_\L=\displaystyle{\dot{Q}_0^\L\frac{\delta}{\delta Q_0^\L}}=\displaystyle{\partial_\L\left(\frac{1}{\Theta^\L}\right)\frac{\delta}{\delta(1/\Theta^\L)}}$. If we now write
%
\begin{equation}
\begin{split}
\Gamma^\L\left[\psi,\overline{\psi}\right] = &\Gamma^{(0)\L} - \sum_{x,x'} \Gamma^{(2)\L}(x',x)\,\overline{\psi}(x')\psi(x) \\
&+ \frac{1}{(2!)^2}\sum_{\substack{x_1',x_2',\\x_1,x_2}} \Gamma^{(4)\L}(x_1',x_2',x_1,x_2)\,\overline{\psi}(x_1')\overline{\psi}(x_2')\psi(x_2)\psi(x_1)\\
&- \frac{1}{(3!)^2}\sum_{\substack{x_1',x_2',x_3',\\x_1,x_2,x_3}} \Gamma^{(6)\L}(x_1',x_2',x_3',x_1,x_2,x_3)\,\overline{\psi}(x_1')\overline{\psi}(x_2')\overline{\psi}(x_3')\psi(x_3)\psi(x_2)\psi(x_1) \\
& + \dots,
\end{split}
\end{equation}
%
and compare the coefficients in Eq.~\eqref{eq_methods: flow equation expanded in the fields}, we can derive the flow equations for all the different moments $\Gamma^{(2m)\L}$ of the effective action.
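In a numerical treatment, $G^\L$ and the single-scale propagator $S^\L$ are the elementary building blocks entering every diagram generated by this expansion. A minimal sketch (in Python) of their evaluation, assuming for illustration the smooth frequency regulator $\Theta^\L(\nu)=\nu^2/(\nu^2+\L^2)$, which is one possible choice among many:
%
\begin{verbatim}
import numpy as np

def theta(nu, lam):
    """Smooth frequency regulator: ~0 for |nu| << lam, ~1 above."""
    return nu**2 / (nu**2 + lam**2)

def dtheta_dlam(nu, lam):
    return -2.0 * lam * nu**2 / (nu**2 + lam**2)**2

def propagators(nu, xi, sigma, lam):
    """Return (G, S) at scale lam for a fixed self-energy sigma.
    Q0 = (G_0^Lambda)^{-1} = (i nu - xi) / Theta, and the
    single-scale propagator is S = -G Q0dot G; all quantities are
    diagonal in (k, nu), so plain array products suffice."""
    q0 = (1j * nu - xi) / theta(nu, lam)
    q0dot = -(1j * nu - xi) * dtheta_dlam(nu, lam) / theta(nu, lam)**2
    g = 1.0 / (q0 - sigma)
    return g, -g * q0dot * g
\end{verbatim}
%
Note that fermionic Matsubara grids never contain $\nu=0$, so the division by $\Theta^\L$ is well defined at any $T>0$.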
We remark that, since we are dealing with fermions, all the $\Gamma^{(2m)\L}$ vertices are \emph{antisymmetric} under the exchange of a pair of primed or non-primed indices, that is,
%
\begin{equation}
\begin{split}
&\Gamma^{(2m)\L}(x_1',\shortdots,x_{\overline{i}}',\shortdots,x_{\overline{j}}',\shortdots, x_m',x_1,\shortdots,x_i,\shortdots,x_j,\shortdots, x_m)\\
&=(-1)\Gamma^{(2m)\L}(x_1',\shortdots,x_{\overline{j}}',\shortdots,x_{\overline{i}}',\shortdots, x_m',x_1,\shortdots,x_i,\shortdots,x_j,\shortdots, x_m)\\
&=(-1)\Gamma^{(2m)\L}(x_1',\shortdots,x_{\overline{i}}',\shortdots,x_{\overline{j}}',\shortdots, x_m',x_1,\shortdots,x_j,\shortdots,x_i,\shortdots, x_m).
\end{split}
\end{equation}
%
The 0-th moment of the effective action, $\Gamma^{(0)\L}$, is given by $T^{-1}\Omega^\L$, with $T$ the temperature and $\Omega^\L$ the grand canonical potential~\cite{Metzner2012}, so that we have
%
\begin{equation}
\partial_\L\Omega^\L = -T\tr\left[\dot{Q}_0^\L G^\L\right].
\end{equation}
%
The flow equation for the 2nd moment reads as
%
\begin{equation}
\partial_\L\Gamma^{(2)\L} = \dot{Q}_0^\L - \Tr\left[S^\L \Gamma^{(4)\L}\right].
\end{equation}
%
Noticing that $\Gamma^{(2)\L}=(G^\L)^{-1}-\Sigma^\L$, we can extract the flow equation for the self-energy:
%
\begin{equation}
\partial_\L\Sigma^\L(x',x) = \sum_{y,y'} S^\L(y,y') \Gamma^{(4)\L}(x',y',x,y),
\label{eq_methods: flow eq Sigma xx'}
\end{equation}
%
whose initial condition, extracted from~\eqref{eq_methods: fRG Gamma ini}, reads as $\Sigma^{\Lambda_\mathrm{ini}}(x',x)=0$. Similarly, one can derive the evolution equation for the two-particle vertex $\Gamma^{(4)\L}$:
%
\begin{equation}
\begin{split}
\partial_\L\Gamma^{(4)\L}(x_1',x_2',x_1,x_2) =\hskip 2cm&\\
\sum_{\substack{y_1',y_2',\\y_1,y_2}} \bigg[P^\L(y_1',y_2',y_1,y_2) \Big( &+\Gamma^{(4)\L}(x_1',y_2',x_1,y_1)\Gamma^{(4)\L}(y_1',x_2',y_2,x_2)\\[-5mm]
&-\Gamma^{(4)\L}(x_1',y_1',y_2,x_2)\Gamma^{(4)\L}(y_2',x_2',x_1,y_1)\\
&-\frac{1}{2}\Gamma^{(4)\L}(x_1',x_2',y_1,y_2)\Gamma^{(4)\L}(y_1',y_2',x_1,x_2) \Big) \bigg]\\
& \hskip -4.75cm -\sum_{y,y'}S^\L(y,y')\Gamma^{(6)\L}(x_1',x_2',y',x_1,x_2,y),
\end{split}
\label{eq_methods: flow eq vertex xx'}
\end{equation}
%
with
%
\begin{equation}
P^\L(y_1',y_2',y_1,y_2) = S^\L(y_1,y_1')G^\L(y_2,y_2') + S^\L(y_2,y_2')G^\L(y_1,y_1').
\label{eq_methods: GS+SG}
\end{equation}
%
The initial condition for the two-particle vertex reads as $\Gamma^{(4){\Lambda_\mathrm{ini}}}=\lambda$, with $\lambda$ the bare two-particle interaction in Eq.~\eqref{eq_methods: Sint}. In Fig.~\ref{fig_methods: flow diagrams} a schematic representation of the flow equations for the self-energy and the two-particle vertex is shown.
%
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{diagrams_scheme.png}
\caption{Schematic representation of the flow equations for the self-energy (left) and vertex (right). The ticked lines represent single-scale propagators, and the dots over the symbols denote scale derivatives.}
\label{fig_methods: flow diagrams}
\end{figure}
%
%
\subsection{Truncations}
%
Inspecting Eqs.~\eqref{eq_methods: flow eq Sigma xx'} and, in particular, \eqref{eq_methods: flow eq vertex xx'}, we notice that the flow equation for the self-energy requires the knowledge of the two-particle vertex, whose flow equation involves $\Sigma^\L$ (through $G^\L$ and $S^\L$), $\Gamma^{(4)\L}$, and $\Gamma^{(6)\L}$.
Considering higher-order terms, one can prove that the right-hand side of the flow equation for $\Gamma^{(2\overline{m})\L}$ involves all the $\Gamma^{(2m)\L}$ with $m\leq\overline{m}+1$. This produces an infinite hierarchy of flow equations for the $m$-particle 1PI correlators that, for practical reasons, needs to be truncated at some given order. Since for most purposes the calculation of the self-energy and of the two-particle vertex is sufficient, the truncations usually amount to approximations for the three-particle vertex $\Gamma^{(6)\L}$.

The simplest truncation one can perform is the so-called \emph{1-loop} ($1\ell$) truncation, where the three-particle vertex is set to zero along the whole flow; in this way, the last term in Eq.~\eqref{eq_methods: flow eq vertex xx'} is simply discarded when computing the flow equation of the two-particle vertex.

Alternatively, one can \emph{approximately} integrate the flow equation for $\Gamma^{(6)}$, obtaining the loop diagram schematically shown in panel (a) of Fig.~\ref{fig_methods: integrated Gamma6}, and insert it into the last term of the flow equation for the vertex. One can then classify the resulting terms into two classes, depending on whether the corresponding diagram displays non-overlapping or overlapping loops (see (b) and (c) panels of Fig.~\ref{fig_methods: integrated Gamma6}). By considering only the former class, one can easily prove that these terms coming from $\Gamma^{(6)\L}$ can be reabsorbed into the first terms of Eq.~\eqref{eq_methods: flow eq vertex xx'} by replacing the single-scale propagator $S^\L$ with the full derivative of the Green's function $\partial_\L G^\L = S^\L + G^\L (\partial_\L \Sigma^\L) G^\L$ in Eq.~\eqref{eq_methods: GS+SG}, so that one can rewrite
%
\begin{equation}
P^\L(y_1',y_2',y_1,y_2) = \partial_\L \left[G^\L(y_1,y_1')G^\L(y_2,y_2')\right].
\label{eq_methods: P Katanin}
\end{equation}
%
This approximation is known under the name of the Katanin scheme~\cite{Katanin2004} and, when considering only one of the first three terms in Eq.~\eqref{eq_methods: flow eq vertex xx'} (with $P^\L$ as in Eq.~\eqref{eq_methods: P Katanin}), becomes equivalent to a \emph{Hartree-Fock} approximation for the self-energy, combined with a ladder resummation for the vertex~\cite{Salmhofer2004}.
%
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{gamma6.png}
\caption{(a) Feynman diagram representing the approximate integration of the flow equation for $\Gamma^{(6)\L}$. (b-c) Contributions to the vertex flow equations with non-overlapping (b) and overlapping (c) loops. Here, the ticked lines represent single-scale propagators $S^\L$.}
\label{fig_methods: integrated Gamma6}
\end{figure}
%
The more involved inclusion of diagrams with \emph{both} non-overlapping and overlapping loops leads to the \emph{2-loop} ($2\ell$) truncation, introduced by Eberlein~\cite{Eberlein2014}. Finally, Kugler and von Delft~\cite{Kugler2018_PRL,Kugler2018_PRB} have recently developed the so-called \emph{multiloop approximation}, which systematically takes into account, in an approximate fashion, contributions from higher-order 1PI vertices through a loop expansion. They also proved that in the limit of infinite loops this truncation becomes equivalent to the parquet approximation~\cite{Roulet1969,Bickers2004}, which is based on a diagrammatic approach rather than on a flow equation.

In the context of statistical physics, where one mainly deals with bosonic rather than fermionic fields, other \emph{nonperturbative} truncations are possible.
One can, for example, write the effective action as a one-body term (propagator) plus a local potential that only depends on the absolute value of the field, and then compute the flow of these two terms. In this way, one is able to include contributions from vertices with an arbitrary number of external legs. This approximation goes under the name of \emph{local potential approximation} (LPA). For a more detailed discussion on the LPA and its extensions, see Ref.~\cite{Berges2002} and references therein. % \subsection{Vertex flow equation} \label{subs_methods: vertex flow eq} % We now turn our attention to the first three terms of Eq.~\eqref{eq_methods: flow eq vertex xx'} and neglect the contribution from the three-particle vertex, in a $1\ell$ approximation. Following the order of Eq.~\eqref{eq_methods: flow eq vertex xx'}, we call them particle-hole ($ph$), particle-hole-crossed ($\overline{ph}$) and particle-particle ($pp$) channels, respectively. In Fig.~\ref{fig_methods: diagrams flow vertex} we show a diagrammatic representation of each term. % \begin{figure} \centering \includegraphics[width=0.9\textwidth]{diagrams_flow_vertex.png} \caption{Schematic representation of the first three terms in Eq.~\eqref{eq_methods: flow eq vertex xx'}, also referred to as $pp$, $ph$, and $\overline{ph}$ channels (see text). For each channel, another diagram with the "tick" on the other internal fermionic line exists.} \label{fig_methods: diagrams flow vertex} \end{figure} % If we now consider a rotationally and translationally invariant system of spin-$\frac{1}{2}$ fermions, we can choose the set of quantum numbers and imaginary frequency as $x=\{k,\sigma\}$, where $k=(\mathbf{k},\nu)$ is a collective variable encoding the spatial momentum and the frequency, and $\sigma=\uparrow,\downarrow$ is the spin projection. Under these assumptions, the propagator reads as % \begin{equation} G^\L(x',x)= G^\L_{\sigma'\sigma}(k',k) = G^\L(k)\delta_{\sigma\sigma'}\delta(k-k'), \end{equation} % and a similar relation holds for $S^\L$. Analogously, we can write the two-particle vertex as % \begin{equation} \Gamma^{(4)\L}(x_1',x_2',x_1,x_2) = \Gamma^{(4)\L}_{\sigma_1'\sigma_2'\sigma_1^{\phantom{'}}\sigma_2^{\phantom{'}}}(k_1',k_2',k_1)\,\delta(k_1'+k_2'-k_1-k_2), \label{eq_methods: V en mom cons} \end{equation} % where spin rotation invariance constrains the dependency of the vertex on spin-projections % \begin{equation} \Gamma^{(4)\L}_{\sigma_1'\sigma_2'\sigma_1^{\phantom{'}}\sigma_2^{\phantom{'}}}(k_1',k_2',k_1)= V^\L(k_1',k_2',k_1)\delta_{\sigma_1'\sigma_1^{\phantom{'}}}\delta_{\sigma_2'\sigma_2^{\phantom{'}}} +\overline{V}^\L(k_1',k_2',k_1)\delta_{\sigma_1'\sigma_2^{\phantom{'}}}\delta_{\sigma_2'\sigma_1^{\phantom{'}}}. \label{eq_methods: V SU(2) inv} \end{equation} % Finally, the antisymmetry properties of $\Gamma^{(4)\L}$ enforce $\overline{V}^\L(k_1',k_2',k_1)=-V^\L(k_2',k_1',k_1)$. Inserting \eqref{eq_methods: V en mom cons} and \eqref{eq_methods: V SU(2) inv} into \eqref{eq_methods: flow eq vertex xx'}, and with some straightforward calculations, we obtain a flow equation for $V^\L=\Gamma^{(4)\L}_{\uparrow\downarrow\uparrow\downarrow }$ % \begin{equation} \partial_\L V^\L(k_1',k_2',k_1) = \mathcal{T}_{ph}^\L(k_1',k_2',k_1)+\mathcal{T}_{\overline{ph}}^\L(k_1',k_2',k_1)+\mathcal{T}_{pp}^\L(k_1',k_2',k_1). 
\label{eq_methods: flow Eq V k}
\end{equation}
%
From now on, we define the symbol $\int_{k=(\mathbf{k},\nu)}=T\sum_\nu\int\frac{d^d\mathbf{k}}{(2\pi)^d}$ as a sum over the Matsubara frequencies together with an integral over the spatial momentum, where the latter runs over all of momentum space for continuum systems, or over the Brillouin zone for lattice systems. In the case of zero temperature, the sum $T\sum_\nu$ is replaced by an integral. The particle-hole, particle-hole-crossed, and particle-particle contributions in Eq.~\eqref{eq_methods: flow Eq V k} have been defined as
%
\begin{subequations}
\begin{align}
&\mathcal{T}_{ph}^\L(k_1',k_2',k_1) =\int_p P^\L(p,p+k_1-k_1') \big[2V^\L(k_1',p+k_1-k_1',k_1)V^\L(p,k_2',p+k_1-k_1') \nonumber\\[-2mm]
&\hskip6.5cm-V^\L(p+k_1-k_1',k_1',k_1)V^\L(p,k_2',p+k_1-k_1') \\
&\hskip6.5cm-V^\L(k_1',p+k_1-k_1',k_1)V^\L(k_2',p,p+k_1-k_1') \nonumber \big],\\
&\mathcal{T}_{\overline{ph}}^\L(k_1',k_2',k_1) =-\int_p P^\L(p,p+k_2'-k_1) V^\L(k_1',p+k_2'-k_1,p)V^\L(p,k_2',k_1),\\
&\mathcal{T}_{pp}^\L(k_1',k_2',k_1) =-\int_p P^\L(p,k_1'+k_2'-p) V^\L(k_1',k_2',p)V^\L(p,k_1'+k_2'-p,k_1),
\end{align}
\label{eq_methods: Tph Tphx Tpp}
\end{subequations}
%
respectively, and $P^\L(p,p')$ reads as
%
\begin{equation}
P^\L(p,p') = \widetilde{\partial}_\L \left[G^\L(p)G^\L(p')\right] = G^\L(p)S^\L(p') + S^\L(p)G^\L(p').
\end{equation}
%
An interesting feature of the decomposition in Eq.~\eqref{eq_methods: flow Eq V k} is that each of the three terms, $\mathcal{T}_{ph}^\L$, $\mathcal{T}_{\overline{ph}}^\L$, and $\mathcal{T}_{pp}^\L$, depends on a "bosonic" variable appearing as a sum or as a difference of two fermionic variables. One can therefore write the vertex function $V^\L$ as the sum of three terms, each of which depends on one of the above-mentioned bosonic momenta and on two fermionic ones, its flow equation being given by the $\mathcal{T}^\L$ term with the corresponding combination of momenta~\cite{Karrasch2008}. In formulas, we have
%
\begin{equation}
V^\L(k_1',k_2',k_1) = \lambda(k_1',k_2',k_1) + \phi^{(ph)\L}_{k_{ph},k_{ph}'}(k_1-k_1') + \phi^{(\overline{ph})\L}_{k_{\overline{ph}},k_{\overline{ph}}'}(k_2'-k_1) - \phi^{(pp)\L}_{k_{pp},k_{pp}'}(k_1'+k_2'),
\label{eq_methods: V ph phx pp}
\end{equation}
%
where $\lambda$ represents the bare two-particle interaction, and the last sign is a choice of convenience. Furthermore, we have defined
%
\begin{subequations}
\begin{align}
&k_{ph} = \rndup{k_1+k_1'}, \hskip 2cm k_{ph}' = \rndup{k_2+k_2'},\\
&k_{\overline{ph}} = \rndup{k_1+k_2'}, \hskip 2cm k_{\overline{ph}}' = \rndup{k_2+k_1'},\\
&k_{pp} = \rndup{k_1'-k_2'}, \hskip 2cm k_{pp}' = \rndup{k_1-k_2},
\end{align}
\label{eq_methods: k k' pp ph phx}
\end{subequations}
%
where, at finite $T$, the symbol $\rndupnotwo{k}$ rounds up the frequency component of $k$ to the closest fermionic Matsubara frequency, while at $T=0$ it has no effect. This apparently complicated parametrization of momenta has the goal of completely disentangling the dependencies of the various terms in \eqref{eq_methods: V ph phx pp} on the fermionic and bosonic variables.
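When Matsubara frequencies are stored as integer indices ($\nu_n=(2n+1)\pi T$, $\Omega_m=2m\pi T$), the rounding operations above, and the bosonic ones used below, reduce to simple integer arithmetic. A minimal sketch (in Python; the index conventions are an illustrative choice of ours):
%
\begin{verbatim}
def qhalf_up(m):
    """Bosonic Omega_m = 2 m pi T: index of Omega_m / 2 rounded up
    to the closest bosonic frequency, i.e. ceil(m / 2)."""
    return -((-m) // 2)

def qhalf_down(m):
    """As above, rounded down: floor(m / 2)."""
    return m // 2

def khalf_up(n1, n2):
    """Fermionic nu_n = (2 n + 1) pi T: index of
    (nu_{n1} + nu_{n2}) / 2 rounded up to the closest fermionic
    frequency, i.e. ceil((n1 + n2) / 2)."""
    return -((-(n1 + n2)) // 2)

# the two bosonic roundings recombine to the full transfer frequency
assert all(qhalf_up(m) + qhalf_down(m) == m for m in range(-8, 9))
\end{verbatim}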
The flow equations of these terms read as
%
\begin{subequations}
\begin{align}
&\partial_\L \phi^{(ph)\L}_{k,k'}(q) = \mathcal{T}_{ph}^\L\left(k-\rndup{q},k'+\rnddo{q},k+\rndup{q}\right),\\
&\partial_\L \phi^{(\overline{ph})\L}_{k,k'}(q) = \mathcal{T}_{\overline{ph}}^\L\left(k-\rndup{q},k'+\rnddo{q},k'-\rnddo{q}\right),\\
&\partial_\L \phi^{(pp)\L}_{k,k'}(q) = -\mathcal{T}_{pp}^\L\left(\rnddo{q}+k,\rndup{q}-k,\rnddo{q}-k'\right),
\end{align}
\end{subequations}
%
where here (at $T>0$) $\rndup{q}$ ($\rnddo{q}$) rounds up (down) the frequency component of $\frac{q}{2}$ to the closest \emph{bosonic} Matsubara frequency.
%
\subsection{Instability analysis}
\label{sec_methods: instability analysis}
%
One of the main reasons for the success of fRG applications to correlated fermions, and to the Hubbard model in particular, is that the method allows for an \emph{unbiased} analysis of the possible instabilities and competing orders of the system~\cite{Zanchi1998,Zanchi2000,Halboth2000,Halboth2000_PRL,Honerkamp2001}. Indeed, through the fRG flow one can detect the presence of an ordering tendency by looking at the evolution of the vertex as the scale $\L$ is lowered and the cutoff removed. In many cases $V^\L$ diverges at a finite scale $\L_\mathrm{cr}>{\Lambda_\mathrm{fin}}$, signaling the onset of some spontaneous symmetry breaking. Decomposition~\eqref{eq_methods: flow Eq V k}, though very practical from a computational point of view, does not generally reveal which kind of order tends to be realized at scales $\L<\L_\mathrm{cr}$. From this perspective, instead of \eqref{eq_methods: flow Eq V k}, one can perform a \emph{physical} channel decomposition, first introduced in the context of the fRG by Husemann \emph{et al.}~\cite{Husemann2009,Husemann2012}:
%
\begin{equation}
\begin{split}
V^\L(k_1',k_2',k_1) = &\lambda(k_1',k_2',k_1) \\
&+ \frac{1}{2}\mathcal{M}^{\L}_{k_{ph},k_{ph}'}(k_1-k_1') - \frac{1}{2}\mathcal{C}^{\L}_{k_{ph},k_{ph}'}(k_1-k_1') \\
&+ \mathcal{M}^{\L}_{k_{\overline{ph}},k_{\overline{ph}}'}(k_2'-k_1) \\
&- \mathcal{P}^{\L}_{k_{pp},k_{pp}'}(k_1'+k_2'),
\end{split}
\label{eq_methods: channel decomp physical}
\end{equation}
%
where $\mathcal{M}^\L=\phi^{(\overline{ph})\L}$, $\mathcal{C}^\L=-2\phi^{(ph)\L}+\phi^{(\overline{ph})\L}$, and $\mathcal{P}^\L=\phi^{(pp)\L}$ are referred to as the magnetic, charge, and pairing channels. Thanks to this decomposition, when a vertex divergence occurs, one can understand whether the system is trying to realize some kind of magnetic, charge, or superconducting (or superfluid) order, depending on which among $\mathcal{M}^\L$, $\mathcal{C}^\L$, or $\mathcal{P}^\L$ diverges. Furthermore, more information on the ordering tendency can be inferred by analyzing the combination of fermionic and bosonic momenta for which the channel takes extremely large (formally infinite) values. If, for example, in a 2D lattice system we detected $\mathcal{M}^{\L\to\L_\mathrm{cr}}_{k,k'}\left(q=\left(\left(\frac{\pi}{a},\frac{\pi}{a}\right),0\right)\right)\to\infty$ ($a$ is the lattice spacing), this would signal an instability towards antiferromagnetism. Similarly, $\mathcal{P}^{\L\to\L_\mathrm{cr}}_{((\mathrm{k}_x,\mathrm{k}_y),\nu),k'}(q=(\mathbf{0},0))\to+\infty$ together with $\mathcal{P}^{\L\to\L_\mathrm{cr}}_{((\mathrm{k}_y,\mathrm{k}_x),\nu),k'}(q=(\mathbf{0},0))\to-\infty$ would imply a tendency towards a superconducting state with $d$-wave symmetry.
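In a numerical implementation, this analysis can be automated by monitoring the channel maxima along the flow and stopping once a large (but otherwise arbitrary) threshold is exceeded. A schematic sketch (in Python, assuming the channels are stored as arrays over two fermionic and one bosonic grid index, which is one possible layout):
%
\begin{verbatim}
import numpy as np

def leading_instability(channels, threshold=1e3):
    """Given channels = {'magnetic': M, 'charge': C, 'pairing': P}
    with arrays X[ik, ikp, iq], return the name of the channel with
    the largest absolute value and the bosonic grid index where it
    peaks, or None while everything stays below the threshold."""
    name = max(channels, key=lambda c: np.abs(channels[c]).max())
    arr = np.abs(channels[name])
    if arr.max() < threshold:
        return None
    iq = np.unravel_index(arr.argmax(), arr.shape)[2]
    return name, iq
\end{verbatim}
%
The bosonic index at the peak then identifies the ordering wave vector and transfer frequency, along the lines of the examples just discussed.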
The flow equations for the physical channels read as: % \begin{subequations} \begin{align} &\partial_\L \mathcal{M}^\L_{k,k'}(q) = \mathcal{T}_{\overline{ph}}^\L\left(k-\rndup{q},k'-\rnddo{q},k'-\rnddo{q}\right),\\ &\partial_\L \mathcal{C}^\L_{k,k'}(q) = -2\mathcal{T}_{ph}^\L\left(k-\rndup{q},k'-\rnddo{q},k+\rndup{q}\right) \nonumber \\ &\hskip2.25cm+\mathcal{T}_{\overline{ph}}^\L\left(k-\rndup{q},k'-\rnddo{q},k'-\rnddo{q}\right), \\ &\partial_\L \mathcal{P}^\L_{k,k'}(q) = -\mathcal{T}_{pp}^\L\left(\rnddo{q}+k,\rndup{q}-k,\rnddo{q}-k'\right), \end{align} \end{subequations} % where \eqref{eq_methods: channel decomp physical} has to be inserted into \eqref{eq_methods: Tph Tphx Tpp}. In Appendix~\ref{app: symm V}, one can find the symmetry properties of the various channels. % \section{Dynamical mean-field theory (DMFT)} % While the fRG schemes are able to capture both long- and short-range correlation effects, their applicability is restricted to weakly interacting systems, as the unavoidable truncations can be justified only in this limit. In this section, we deal with a different approach, namely the dynamical mean-field theory (DMFT)~\cite{Georges1992,Georges1996}, which can be used to study even strongly interacting systems, but treats only \emph{local} (that is, extremely short-ranged) correlations. In the following, we restrict our attention to a particular class of lattice models exhibiting a purely local interaction, namely the Hubbard model: % \begin{equation} \mathcal{H} = \sum_{jj',\sigma=\uparrow,\downarrow}t_{jj'} c^\dagger_{j,\sigma}c_{j',\sigma} + U \sum_j n_{j,\uparrow} n_{j,\downarrow}-\mu\sum_{j,\sigma} n_{j,\sigma} , \label{eq_methods: Hubbard hamilt} \end{equation} % where $c^\dagger_{j,\sigma}$ ($c_{j,\sigma}$) creates (annihilates) a spin-$\frac{1}{2}$ electron at site $j$ with spin projection $\sigma$, $t_{jj'}$ represents the probability amplitude for an electron to hop from site $j$ to site $j'$, $U$ is the strength of the onsite interaction, $n_{j,\sigma}=c^\dagger_{j,\sigma}c_{j,\sigma}$, and $\mu$ is the chemical potential. In classical spin systems, such as the ferromagnetic Ising model, a \emph{mean-field} (MF) approximation consists in replacing all the spins surrounding a given site with a uniform background field, the Weiss field, whose value is determined by a self-consistency equation. Similarly, in lattice quantum many-body systems, one can focus on a single site and replace the neighboring ones with a \emph{dynamical} field, which still fully embodies quantum fluctuation effects~\cite{Georges1992}. Like the MF approximation for spin systems, DMFT is exact in the limit of large coordination number $z\to\infty$, or, equivalently, in the limit of infinite spatial dimensions~\cite{Metzner1989}. % \subsection{Self-consistency relation} % The key point of DMFT is to replace the action deriving from~\eqref{eq_methods: Hubbard hamilt}, $\mathcal{S}=\int_0^\beta\!d\tau[c^\dagger \partial_\tau c + \mathcal{H}]$, with a purely local one % \begin{equation} \mathcal{S}_\mathrm{imp} = -\int_0^\beta\!d\tau \int_0^\beta\!d\tau' \sum_\sigma c^\dagger_{0,\sigma}(\tau)\,\mathcal{G}_0^{-1}(\tau-\tau')\, c_{0,\sigma}(\tau') +U\int_0^\beta\!d\tau\, n_{0,\uparrow}(\tau)n_{0,\downarrow}(\tau), \label{eq_methods: local eff action} \end{equation} % where the label 0 in the fermionic operators stands for a given fixed site of the lattice and $U$ takes the same value as in the original Hubbard model.
This action is usually referred to as a (quantum) impurity problem, as it describes a 0+1 dimensional system. Here, the function $\mathcal{G}_0^{-1}$ plays the role of the Weiss field and has to be determined self-consistently. Since~\eqref{eq_methods: local eff action} is a \emph{local} approximation of~\eqref{eq_methods: Hubbard hamilt}, we require the local Green's function of the Hubbard model, that is % \begin{equation} G_{\mathrm{loc}}(\tau) = -\Big\langle \mathcal{T}\left\{c_{j,\sigma}(\tau)c^\dagger_{j,\sigma}(0)\right\} \Big\rangle, \end{equation} % with $\mathcal{T}\{\bullet\}$ the time ordering operator, to equal the one obtained from~\eqref{eq_methods: local eff action}, which, in imaginary frequency space, can be written as % \begin{equation} \mathcal{G}(\nu) = \frac{1}{\mathcal{G}_0^{-1}(\nu)-\Sigma_\mathrm{imp}(\nu)}, \end{equation} % with $\Sigma_\mathrm{imp}(\nu)$ the self-energy of the local action. Furthermore, the self-energy of the Hubbard model, $\Sigma_{jj'}(\tau)$, is approximated by a purely local function, that is, % \begin{equation} \Sigma_{jj'}(\tau) \simeq \Sigma_\mathrm{dmft}(\tau) \delta_{jj'}, \end{equation} % which becomes an exact statement in infinite dimensions $d\to\infty$, as shown in Ref.~\cite{Metzner1989} by means of diagrammatic arguments. In other words, we are requiring the Luttinger-Ward functional (see Ref.~\cite{Abrikosov1965}) $\Phi[G_{jj'}(\tau)]$ to be a functional of the local Green's function $G_{jj}(\tau)$ only, so that % \begin{equation} \Sigma_{jj'}(\tau) = \frac{\delta\Phi[G_{jj'}(\tau)]}{\delta G_{jj'}(\tau)} \simeq\frac{\delta\Phi[G_{jj}(\tau)]}{\delta G_{jj'}(\tau)} = \Sigma_\mathrm{dmft}(\tau)\delta_{jj'}. \end{equation} % Essentially, we are claiming that if we neglect the nonlocal ($j\neq j'$) elements of the self-energy, the remaining local part can be generated by the Luttinger-Ward functional of a purely local theory, which we choose to be the one defined by~\eqref{eq_methods: local eff action}. This leads us to conclude that $\Sigma_\mathrm{dmft}(\tau)=\Sigma_\mathrm{imp}(\tau)$. The self-consistency relation can therefore be expressed in the frequency domain as % \begin{equation} G_{jj}(\nu) = \int_{\mathbf{k}\in \mathrm{B.Z.}} \!\!\frac{d^d\mathbf{k}}{(2\pi)^d}\,\frac{1}{i\nu-\xi_\mathbf{k}-\Sigma_\mathrm{dmft}(\nu)} = \frac{1}{\mathcal{G}_0^{-1}(\nu)-\Sigma_\mathrm{dmft}(\nu)}, \label{eq_methods: DMFT self consist} \end{equation} % where $\xi_\mathbf{k}=\epsilon_\mathbf{k}-\mu$, with $\epsilon_\mathbf{k}$ the Fourier transform of the hopping matrix $t_{jj'}$ and $\mu$ the chemical potential. For a more detailed derivation of \eqref{eq_methods: DMFT self consist} and for a broader discussion, see Refs.~\cite{Georges1996,Georges1992}. Eq.~\eqref{eq_methods: DMFT self consist} closes the set of equations of the so-called DMFT loop. In essence, one starts with a guess for the Weiss field $\mathcal{G}_0^{-1}$, computes the self-energy of the action~\eqref{eq_methods: local eff action}, extracts a new $\mathcal{G}_0^{-1}$ from the self-consistency relation~\eqref{eq_methods: DMFT self consist}, and repeats this loop until convergence is reached, as shown in Fig.~\ref{fig_methods: DMFT sc loop}.
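For illustration, the loop can be summarized in a few lines of schematic Python (the impurity solver is left abstract, and all names are our own, hypothetical choices):
\begin{verbatim}
import numpy as np

def dmft_loop(g0_inv, solve_impurity, xi_k, nu, tol=1e-8, max_iter=100):
    """Iterate the DMFT self-consistency.  g0_inv: Weiss field on
    the Matsubara grid nu; xi_k: dispersion on a uniform k-grid;
    solve_impurity: returns the impurity self-energy Sigma(nu)."""
    for _ in range(max_iter):
        sigma = solve_impurity(g0_inv)              # solve the AIM
        # local lattice Green's function: k-average at each frequency
        g_loc = np.mean(1.0 / (1j*nu[:, None] - xi_k[None, :]
                               - sigma[:, None]), axis=1)
        g0_inv_new = 1.0 / g_loc + sigma            # Dyson equation
        if np.max(np.abs(g0_inv_new - g0_inv)) < tol:
            return sigma, g0_inv_new                # converged
        g0_inv = g0_inv_new
    return sigma, g0_inv
\end{verbatim}
The uniform $\mathbf{k}$-average stands for the Brillouin zone integral in Eq.~\eqref{eq_methods: DMFT self consist}; in practice one would also mix successive Weiss fields to stabilize the iteration.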
% \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{dmft_sc_loop.png} \caption{DMFT self-consistent loop with the AIM as impurity model.} \label{fig_methods: DMFT sc loop} \end{figure} % The main advantage of this computational scheme is that the action~\eqref{eq_methods: local eff action} is much easier to treat than the Hubbard model itself, and several reliable numerical methods (so-called impurity solvers) provide numerically exact solutions. Among those, we find quantum Monte Carlo (QMC) methods, originally adapted to quantum impurity problems by Hirsch and Fye~\cite{Hirsch1986}, exact diagonalization (ED)~\cite{Si1994,Caffarel1994,Rozenberg1994}, and the numerical renormalization group (NRG)~\cite{Wilson1975,Bulla2008}. The ED and NRG methods require the impurity action~\eqref{eq_methods: local eff action} to descend from a Hamiltonian $\mathcal{H}$. This is provided by the Anderson impurity model (AIM)~\cite{Anderson1961}, which describes an impurity coupled to a bath of noninteracting electrons. Its Hamiltonian is given by % \begin{equation} \mathcal{H}_\mathrm{AIM} = \sum_{\ell,\sigma} \varepsilon_\ell \, a^\dagger_{\ell,\sigma}a_{\ell,\sigma} +\sum_{\ell,\sigma}V_\ell\left[c^\dagger_{0,\sigma}a_{\ell,\sigma}+a^\dagger_{\ell,\sigma}c_{0,\sigma}\right] -\mu\sum_\sigma c^\dagger_{0,\sigma}c_{0,\sigma} +U n_{0,\uparrow}n_{0,\downarrow}, \label{eq_methods: AIM Hamilt} \end{equation} % where $a^\dagger_{\ell,\sigma}$ ($a_{\ell,\sigma}$) creates (annihilates) an electron on bath level $\ell$ with spin projection $\sigma$, $\varepsilon_\ell$ are the bath energy levels, $V_\ell$ represent the bath-impurity hybridization parameters, and $\mu$ is the impurity chemical potential. The set $\{\varepsilon_\ell,V_\ell\}$ is often referred to as the Anderson parameters. Expressing $\mathcal{H}_\mathrm{AIM}$ as a functional integral, and integrating over the bath electrons, one obtains the impurity action~\eqref{eq_methods: local eff action}, with the Weiss field given by % \begin{equation} \mathcal{G}_0^{-1}(\nu)=i\nu+\mu - \Delta(\nu), \end{equation} % where the hybridization function $\Delta(\nu)$ is related to $\varepsilon_\ell$ and $V_\ell$ by % \begin{equation} \Delta(\nu) = \sum_\ell \frac{|V_\ell|^2}{i\nu-\varepsilon_\ell}. \end{equation} % In the context of the AIM, the Weiss field is therefore expressed in terms of an optimally determined discrete set of Anderson parameters. % \subsection{DMFT two-particle vertex and susceptibilities} \label{subs_methods: DMFT susceptibilities} % For many studies, the knowledge of single-particle quantities such as the self-energy is not sufficient. The DMFT also provides a framework for the computation of two-particle quantities and response functions after the loop has converged and the optimal Weiss field (or Anderson parameters) has been found. The impurity two-particle Green's function is defined as % \begin{equation} G^{4,\mathrm{imp}}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\tau_1',\tau_2',\tau_1,\tau_2)= \Big\langle \mathcal{T}\left\{ c_{0,\sigma_1^{\phantom{'}}}(\tau_1) c_{0,\sigma_2^{\phantom{'}}}(\tau_2) c_{0,\sigma_1'}^\dagger(\tau_1') c_{0,\sigma_2'}^\dagger(\tau_2') \right\} \Big\rangle, \label{eq_methods: G4 DMFT} \end{equation} % and it is by definition antisymmetric under the exchange of $(\tau_1',\sigma_1')$ with $(\tau_2',\sigma_2')$ or $(\tau_1,\sigma_1^{\phantom{'}})$ with $(\tau_2,\sigma_2^{\phantom{'}})$.
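For an ED solver, the self-consistency is closed by projecting the new Weiss field onto the discrete Anderson parametrization, that is, by fitting $\{\varepsilon_\ell,V_\ell\}$. A minimal sketch of this step (our own, with hypothetical names and a uniform weight on all Matsubara frequencies; in practice the fit is often refined to emphasize low frequencies) could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def weiss_field(nu, mu, eps, V):
    """G0^{-1}(nu) = i*nu + mu - sum_l |V_l|^2 / (i*nu - eps_l)."""
    hyb = np.sum(np.abs(V)**2 / (1j*nu[:, None] - eps[None, :]), axis=1)
    return 1j*nu + mu - hyb

def fit_anderson(g0_inv_target, nu, mu, n_bath=4):
    """Fit bath levels and hybridizations to a target Weiss field."""
    def cost(x):
        eps, V = x[:n_bath], x[n_bath:]
        diff = weiss_field(nu, mu, eps, V) - g0_inv_target
        return np.sum(np.abs(diff)**2)
    x0 = np.concatenate([np.linspace(-1.0, 1.0, n_bath),
                         0.5*np.ones(n_bath)])
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x[:n_bath], res.x[n_bath:]
\end{verbatim}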
Fourier transforming it with respect to the four imaginary time variables, one obtains % \begin{equation} G^{4,\mathrm{imp}}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1,\nu_2) = G^{4,\mathrm{imp}}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)\,\beta\delta_{\nu_1'+\nu_2'-\nu_1-\nu_2}, \end{equation} % where $\beta=1/T$ is the inverse temperature, and the delta function of the frequencies arises because of time translation invariance. Removing the disconnected terms, one obtains the connected two-particle Green's function % \begin{equation} \begin{split} G^{4,c,\mathrm{imp}}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)=\,&G^{4,\mathrm{imp}}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)\\&-\beta\mathcal{G}(\nu_1')\mathcal{G}(\nu_2')\, \delta_{\nu_1',\nu_1}\, \delta_{\sigma_1',\sigma_1^{\phantom{'}}}\delta_{\sigma_2',\sigma_2^{\phantom{'}}}\\ &+\beta \mathcal{G}(\nu_1')\mathcal{G}(\nu_2')\delta_{\nu_1',\nu_2}\,\delta_{\sigma_1',\sigma_2^{\phantom{'}}}\delta_{\sigma_2',\sigma_1^{\phantom{'}}}, \end{split} \label{eq_methods: G2 conn} \end{equation} % with $\mathcal{G}(\nu)$ the single-particle Green's function of the impurity problem. The relation between the connected two-particle Green's function and the vertex is then given by~\cite{Rohringer2012} % \begin{equation} \begin{split} G^{4,c,\mathrm{imp}}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)= -\mathcal{G}(\nu_1')\mathcal{G}(\nu_2')V^\mathrm{imp}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)\mathcal{G}(\nu_1)\mathcal{G}(\nu_2), \end{split} \label{eq_methods: G2 1PI} \end{equation} % where $V^\mathrm{imp}$ is the impurity two-particle (1PI) vertex, and $\nu_2=\nu_1'+\nu_2'-\nu_1$ is fixed by energy conservation. Because of the spin-rotational invariance of the system, the spin dependence of the vertex can be simplified to % \begin{equation} \begin{split} &V^\mathrm{imp}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)= V^\mathrm{imp}(\nu_1',\nu_2',\nu_1)\delta_{\sigma_1',\sigma_1^{\phantom{'}}}\delta_{\sigma_2',\sigma_2^{\phantom{'}}} -V^\mathrm{imp}(\nu_2',\nu_1',\nu_1)\delta_{\sigma_1',\sigma_2^{\phantom{'}}}\delta_{\sigma_2',\sigma_1^{\phantom{'}}}. \end{split} \end{equation} % Furthermore, we can introduce three different notations for the two-particle vertex, depending on the use one wants to make of it. We define the particle-hole ($ph$), particle-hole-crossed ($\overline{ph}$), and particle-particle ($pp$) notations as: % \begin{subequations} \begin{align} &V^{\mathrm{imp},ph}_{\nu,\nu'}(\Omega)=V^\mathrm{imp}\left(\nu-\rndup{\Omega},\nu'+\rnddo{\Omega},\nu+\rnddo{\Omega}\right),\\ &V^{\mathrm{imp},\overline{ph}}_{\nu,\nu'}(\Omega)=V^\mathrm{imp}\left(\nu-\rndup{\Omega},\nu'+\rnddo{\Omega},\nu'-\rndup{\Omega}\right),\\ &V^{\mathrm{imp},pp}_{\nu,\nu'}(\Omega)=V^\mathrm{imp}\left(\rnddo{\Omega}+\nu,\rndup{\Omega}-\nu,\rnddo{\Omega}+\nu'\right), \end{align} \end{subequations} % where, as explained previously, $\rndupnotwo{\bullet}$ ($\rnddonotwo{\bullet}$) rounds half of its argument up (down) to the closest \emph{bosonic} Matsubara frequency. In Fig.~\ref{fig_methods: notations}, we show a pictorial representation of the different notations for the vertex function.
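Since the three notations are just different bookkeepings of the same function, converting between them is only a matter of frequency shifts. A schematic Python fragment (our own, reusing the rounding helpers sketched earlier; the callable V_imp is hypothetical) may make this explicit:
\begin{verbatim}
def V_ph(V_imp, nu, nu_p, Omega, T):
    # particle-hole notation of the impurity vertex
    O_up = round_up_bosonic_half(Omega, T)
    O_dn = round_down_bosonic_half(Omega, T)
    return V_imp(nu - O_up, nu_p + O_dn, nu + O_dn)

def V_pp(V_imp, nu, nu_p, Omega, T):
    # particle-particle notation of the impurity vertex
    O_up = round_up_bosonic_half(Omega, T)
    O_dn = round_down_bosonic_half(Omega, T)
    return V_imp(O_dn + nu, O_up - nu, O_dn + nu_p)
\end{verbatim}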
Within the QMC methods, the two-particle Green's function can be directly sampled from the impurity action~\eqref{eq_methods: local eff action} with the converged Weiss field $\mathcal{G}_0^{-1}$, while for an ED or an NRG solver, one has to employ the \emph{Lehmann representation} of $G^{4,\mathrm{imp}}$~\cite{Toschi2007}. Once the two-particle Green's function has been obtained, the vertex can be extracted via~\eqref{eq_methods: G2 conn} and~\eqref{eq_methods: G2 1PI}. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{notations.png} \caption{Schematic representation of the different notations for the two-particle vertex.} \label{fig_methods: notations} \end{figure} The computation of the susceptibilities or transport coefficients of the lattice system can be achieved, within DMFT, through the knowledge of the vertex function. For example, the charge/magnetic susceptibilities of a paramagnetic system can be expressed in terms of the \emph{generalized} susceptibility $\chi^{c|m}_{\nu\nu'}(\mathbf{q},\Omega)$ as % \begin{equation} \chi^{c|m}(\mathbf{q},\Omega)=T^2\sum_{\nu\nu'}\chi^{c|m}_{\nu\nu'}(\mathbf{q},\Omega). \end{equation} % The DMFT approximation for $\chi^{c|m}_{\nu\nu'}(\mathbf{q},\Omega)$ is obtained by solving the integral equation % \begin{equation} \begin{split} \chi^{c|m}_{\nu\nu'}(\mathbf{q},\Omega)=\beta\chi^0_\nu(\mathbf{q},\Omega)\delta_{\nu\nu'} -T\sum_{\nu''}\chi^0_\nu(\mathbf{q},\Omega)\,\widetilde{V}^{c|m}_{\nu\nu''}(\Omega)\,\chi^{c|m}_{\nu''\nu'}(\mathbf{q},\Omega), \end{split} \label{eq_methods: generalized chi DMFT} \end{equation} % where $\chi^0_\nu(\mathbf{q},\Omega)$ is given by % \begin{equation} \chi^0_\nu(\mathbf{q},\Omega)=-\int_\mathbf{k} G\left(\mathbf{k}+\frac{\mathbf{q}}{2},\nu+\rnddo{\Omega}\right) G\left(\mathbf{k}-\frac{\mathbf{q}}{2},\nu-\rndup{\Omega}\right), \label{eq_methods: chi0 generalized} \end{equation} % with $\int_\mathbf{k}=\int_{\mathbf{k}\in\mathrm{B.Z.}}\frac{d^d\mathbf{k}}{(2\pi)^d}$, and $G(\mathbf{k},\nu)$ the lattice propagator evaluated with the local self-energy $\Sigma_\mathrm{dmft}(\nu)$. Finally, in Eq.~\eqref{eq_methods: generalized chi DMFT}, $\widetilde{V}^{c|m}$ represents the two-particle irreducible (2PI) vertex in the charge/magnetic channel at the DMFT level. It can be obtained by inverting a Bethe-Salpeter equation, that is, % \begin{equation} V^{c|m}_{\nu\nu'}(\Omega) = \widetilde{V}^{c|m}_{\nu\nu'}(\Omega) + T\sum_{\nu''} \widetilde{V}^{c|m}_{\nu\nu''}(\Omega)\,\chi^{0,\mathrm{imp}}_{\nu''}(\Omega)\, V^{c|m}_{\nu''\nu'}(\Omega), \end{equation} % where $\chi^{0,\mathrm{imp}}$ must be evaluated similarly to~\eqref{eq_methods: chi0 generalized} with the \emph{local} (or impurity) Green's function, and % \begin{subequations} \begin{align} &V^c_{\nu\nu'}(\Omega)=V_{\uparrow\up\uparrow\up,\nu\nu'}^{\mathrm{imp},ph}(\Omega)+ V_{\uparrow\up\downarrow\down,\nu\nu'}^{\mathrm{imp},ph}(\Omega)=2V^{\mathrm{imp},ph}_{\nu\nu'}(\Omega)-V^{\mathrm{imp},\overline{ph}}_{\nu\nu'}(\Omega),\\ &V^m_{\nu\nu'}(\Omega)=V_{\uparrow\up\uparrow\up,\nu\nu'}^{\mathrm{imp},ph}(\Omega)- V_{\uparrow\up\downarrow\down,\nu\nu'}^{\mathrm{imp},ph}(\Omega)=-V^{\mathrm{imp},\overline{ph}}_{\nu\nu'}(\Omega). \end{align} \end{subequations} % In $d\to\infty$, even though the two-particle vertex is generally momentum-dependent, Eq.~\eqref{eq_methods: generalized chi DMFT} with a purely local 2PI vertex is exact, as can be proven by means of diagrammatic arguments~\cite{Georges1996}.
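Numerically, both the Bethe-Salpeter inversion and Eq.~\eqref{eq_methods: generalized chi DMFT} are matrix operations in the fermionic frequency indices at fixed $\Omega$, once the Matsubara sums are truncated to a finite grid. A minimal Python sketch of this linear algebra (our own, with hypothetical names) reads:
\begin{verbatim}
import numpy as np

def irreducible_vertex(V, chi0_imp, T):
    """Invert the BSE  V = Vt + T * Vt * diag(chi0_imp) * V
    for the 2PI vertex Vt at fixed bosonic frequency Omega.
    V: (N, N) array in (nu, nu'); chi0_imp: (N,) impurity bubble."""
    N = V.shape[0]
    M = np.eye(N) + T * (chi0_imp[:, None] * V)   # 1 + T chi0 V
    return V @ np.linalg.inv(M)

def susceptibility(Vt, chi0_q, T, beta):
    """Solve chi = beta*diag(chi0_q) - T*diag(chi0_q)*Vt*chi and
    sum over frequencies to get the physical susceptibility."""
    N = Vt.shape[0]
    M = np.eye(N) + T * (chi0_q[:, None] * Vt)
    chi = np.linalg.solve(M, beta * np.diag(chi0_q))
    return T**2 * chi.sum()
\end{verbatim}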
% \subsection{Strong coupling effects: the Mott transition} % One of the earliest successes of the DMFT was its ability, unlike weak-coupling theories, to correctly capture and describe the occurrence of a metal-to-insulator transition (MIT) in the Hubbard model, the so-called \emph{Mott transition}, named after Mott's early works~\cite{Mott1949} on this topic. In 1964, Hubbard~\cite{Hubbard1964} attempted to describe this transition within an \emph{effective band} picture. According to his view, the spectral function is composed of two ``domes'' which overlap in the metallic regime. As the interaction strength $U$ is increased, they move apart from each other, until, at the transition, they split into two separate bands, the so-called \emph{Hubbard bands} (see Fig.~\ref{fig_methods: Hubbard bands}). Despite this picture being \emph{qualitatively} correct in the insulating regime, it completely fails in reproducing the Fermi liquid properties of the metallic side. Conversely, before the advent of the DMFT, other approaches could properly capture the transition when approached from the metallic regime~\cite{Brinkman1970}, but failed in describing the insulating phase. % \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{Hubbard_bands.png} \caption{Schematic picture of Hubbard's attempt to describe the Mott transition. Taken from Ref.~\cite{Georges1996}.} \label{fig_methods: Hubbard bands} \end{figure} % Within the DMFT, since no assumptions are made on the strength of the on-site repulsion $U$, both sides of the transition can be studied qualitatively and quantitatively. In addition to the Fermi liquid regime and the insulating one, a new intermediate regime is predicted. In fact, for $U$ only \emph{slightly} smaller than the critical value (above which a gap in the excitation spectrum is generated), the spectral function already exhibits two evident precursors of the Hubbard bands, in between which, that is, at the Fermi level, a narrow peak appears (Fig.~\ref{fig_methods: MIT DMFT}). This feature, visible only at low temperatures, is a hallmark of the Kondo effect taking place. Indeed, in this regime, a local moment (spin) is already formed on the impurity site, as the charge excitations have been gapped out, and the (self-consistent) bath electrons screen it, leading to a singlet ground state. A broader discussion of the MIT as well as of the antiferromagnetic properties of the (half-filled) Hubbard model as predicted by the DMFT can be found in Ref.~\cite{Georges1996}. \begin{figure}[t] \centering \includegraphics[width=0.5 \textwidth]{MIT_DMFT.png} \caption{Evolution of the spectral function across the MIT, as predicted by DMFT. Between the metallic regime, marked by a large density of states at the Fermi level, and the insulating one, exhibiting a large charge gap, a Kondo peak is formed, signaling the onset of local moment screening. Taken from Ref.~\cite{Georges1996}.} \label{fig_methods: MIT DMFT} \end{figure} % \subsection{Extensions of the DMFT} % In this subsection we briefly list possible extensions of the DMFT that include the effects of \emph{nonlocal} correlations. For a more detailed overview, we refer to Ref.~\cite{Rohringer2018}.
First of all, we find approximations that allow for the treatment of \emph{short-range} correlations by replacing the single impurity atom with a cluster of a few sites, either in real space, as in the cluster DMFT (CDMFT)~\cite{Kotliar2001}, or in reciprocal space, as in the so-called dynamical cluster approximation (DCA)~\cite{Maier2005}. Even with a few cluster sites, these approximations are sufficient to capture the interplay of antiferromagnetism and superconductivity in the Hubbard model~\cite{Foley2019}. Furthermore, we find the dual boson approach~\cite{Rubtsov2012}, which extends the applicability of the DMFT to systems with nonlocal interactions by adding some local bosonic degrees of freedom to the impurity problem. The dual fermion theory~\cite{Rubtsov2008}, instead, allows for a \emph{perturbative} inclusion of nonlocal correlations on top of the DMFT, similarly to the \emph{vertex-based} approaches such as the dynamical vertex approximation (D$\Gamma$A)~\cite{Toschi2007} and the triply and quadruply irreducible local expansions (TRILEX and QUADRILEX)~\cite{Ayral2016}. % \section[The \texorpdfstring{DMF\textsuperscript2RG}{DMF2RG} approach]{Boosting the fRG to strong coupling: the \texorpdfstring{DMF\textsuperscript2RG}{DMF2RG} approach} % In this section, we introduce another extension of the DMFT for the inclusion of nonlocal correlations, namely its fusion with the fRG in the so-called DMF\textsuperscript{2}RG approach~\cite{Taranto2014,Metzner2014}. Alternatively, the DMF\textsuperscript{2}RG can be viewed as a development of the fRG that enlarges its domain of validity to strongly interacting systems. We start by defining a scale-dependent action as % \begin{equation} \mathcal{S}^\L\left[\psi,\overline{\psi}\right]=-\int_k\sum_\sigma\overline{\psi}_{k,\sigma}\left[G_0^\L(k)\right]^{-1}\psi_{k,\sigma} + U\int_0^\beta\!d\tau\,\sum_j n_{j,\uparrow}(\tau)n_{j,\downarrow}(\tau). \label{eq_methods: DMF2RG scale-dependent action} \end{equation} % We notice that by choosing $G_0^\L(k)=\mathcal{G}_0(\nu)$, \eqref{eq_methods: DMF2RG scale-dependent action} becomes the action of $N_s$ (the number of lattice sites) identical and uncoupled impurity problems. On the other hand, $G_0^\L(k)=(i\nu-\xi_\mathbf{k})^{-1}$ gives the Hubbard model action. The key idea of the DMF\textsuperscript{2}RG is therefore to set up an fRG flow that interpolates between the self-consistent AIM and the Hubbard model. The boundary conditions for $G^\L_0(k)$ thus read as % \begin{subequations} \begin{align} &G_0^{{\Lambda_\mathrm{ini}}}(k)=\mathcal{G}_0(\nu),\\ &G_0^{{\Lambda_\mathrm{fin}}}(k)=\frac{1}{i\nu-\xi_\mathbf{k}}. \end{align} \end{subequations} % Furthermore, one requires the DMFT solution to be conserved at each fRG step~\cite{Vilardi2019,Vilardi_Thesis}, that is % \begin{equation} \int_\mathbf{k} G^\L(k)\big\rvert_{\Sigma^\L(k)=\Sigma_\mathrm{dmft}(\nu)}= \int_\mathbf{k} \frac{1}{\left[G_0^\L(k)\right]^{-1}-\Sigma_\mathrm{dmft}(\nu)}= \frac{1}{\mathcal{G}_0^{-1}(\nu)-\Sigma_\mathrm{dmft}(\nu)}.
\label{eq_methods: DMFT conservation} \end{equation} % Possible cutoff schemes satisfying the boundary conditions and the conservation of the DMFT solution are, for example, % \begin{equation} \left[G_0^\L(k)\right]^{-1}=\Theta^\L(k)\left(i\nu-\xi_\mathbf{k}\right)+\Xi^\L(k)\mathcal{G}_0^{-1}(\nu), \end{equation} % or % \begin{equation} G_0^\L(k)=\frac{\Theta^\L(k)}{i\nu-\xi_\mathbf{k}}+\Xi^\L(k)\mathcal{G}_0(\nu), \end{equation} % where $\Theta^\L(k)$ is an arbitrarily chosen cutoff satisfying $\Theta^{{\Lambda_\mathrm{ini}}}(k)=0$ and $\Theta^{{\Lambda_\mathrm{fin}}}(k)=1$, while $\Xi^\L(k)$ is calculated at every step from~\eqref{eq_methods: DMFT conservation}. Obviously, at $\L={\Lambda_\mathrm{ini}}$ (when $\Theta^\L(k)=0$), one gets $\Xi^{{\Lambda_\mathrm{ini}}}(k)=1$, while at $\L={\Lambda_\mathrm{fin}}$, Eq.~\eqref{eq_methods: DMFT conservation} becomes the DMFT self-consistency condition, fulfilled by $G_0^\L(k)=(i\nu-\xi_\mathbf{k})^{-1}$, which returns $\Xi^{{\Lambda_\mathrm{fin}}}(k)=0$. The choice $\mathcal{S}^{{\Lambda_\mathrm{ini}}}[\psi,\overline{\psi}]=\sum_j\mathcal{S}_\mathrm{imp}[\psi_j,\overline{\psi}_j]$ imposes an initial condition for the fRG effective action, that is, % \begin{equation} \Gamma^{{\Lambda_\mathrm{ini}}}\left[\psi,\overline{\psi}\right] = \sum_j \Gamma_\mathrm{imp}\left[\psi_j,\overline{\psi}_j\right], \end{equation} % where $\Gamma_\mathrm{imp}$ is the effective action of the self-consistent impurity problem. Expanding it in powers of the fields, we get % \begin{equation} \begin{split} \Gamma_\mathrm{imp}\left[\psi,\overline{\psi}\right] = &-\int_\nu\sum_\sigma \overline{\psi}_{\nu,\sigma}\left[\mathcal{G}_0^{-1}(\nu)-\Sigma_\mathrm{dmft}(\nu)\right]\psi_{\nu,\sigma}\\ &+\frac{1}{(2!)^2}\int_{\nu_1',\nu_2',\nu_1}\,\,\sum_{\substack{\sigma_1',\sigma_2',\\\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}} \overline{\psi}_{\nu_1',\sigma_1'} \overline{\psi}_{\nu_2',\sigma_2'}\, V^\mathrm{imp}_{\sigma_1',\sigma_2',\sigma_1^{\phantom{'}},\sigma_2^{\phantom{'}}}(\nu_1',\nu_2',\nu_1)\, \psi_{\nu_1'+\nu_2'-\nu_1,\sigma_2^{\phantom{'}}}\psi_{\nu_1,\sigma_1^{\phantom{'}}}\\ &+\dots \end{split} \label{eq_methods: Gamma ini DMF2RG} \end{equation} % Within the DMF\textsuperscript{2}RG, the flow equations for the 1PI vertices remain unchanged, while their initial conditions can be read off from~\eqref{eq_methods: Gamma ini DMF2RG}: % \begin{subequations} \begin{align} &\Sigma^{{\Lambda_\mathrm{ini}}}(k) = \Sigma_\mathrm{dmft}(\nu),\\ &V^{\Lambda_\mathrm{ini}}(k_1,k_2,k_3)=V^\mathrm{imp}(\nu_1,\nu_2,\nu_3) = V^\mathrm{imp}_{\uparrow\downarrow\uparrow\downarrow}(\nu_1,\nu_2,\nu_3). \end{align} \end{subequations} % The development of the DMF\textsuperscript{2}RG has enabled the study of the doped Hubbard model at strong coupling, with particular focus on the (generally incommensurate) antiferromagnetic and ($d$-wave) superconducting instabilities~\cite{Vilardi2019,Vilardi_Thesis}. % \chapter{Charge carrier drop driven by spiral magnetism} \label{chap: spiral DMFT} In this chapter, we present a DMFT description of the so-called spiral magnetic state of the Hubbard model. This magnetically ordered phase is a candidate for the normal state of cuprate superconductors, emerging when superconductivity is suppressed by strong magnetic fields, as realized in a series of recent experiments~\cite{Badoux2016,Laliberte2016,Collignon2017,Proust2019}.
In particular, a sudden change in the charge carrier density, measured via the Hall number, is observed as the hole doping $p=1-n$ is varied across the value $p=p^*$, where the pseudogap phase is supposed to end. This observation is consistent with a drastic change in the Fermi surface topology, which can be described, among other scenarios, by a transition from a spiral magnet to a paramagnet~\cite{Eberlein2016,Chatterjee2017,Verret2017}. Other possible candidates for the phase appearing for $p<p^*$ are N\'eel antiferromagnetism~\cite{Storey2016,Storey2017}, charge density waves~\cite{Caprara2017,Sharma2018}, or nematic order~\cite{Maharaj2017}. The chapter is organized as follows. First of all, we define the spiral magnetic state and provide a DMFT description of it. Secondly, we present results for the spiral order parameter as a function of doping at low temperatures, together with an analysis of the evolution of the Fermi surfaces. Finally, we compare our results with the experimental findings by computing the transport coefficients, using the DMFT parameters as an input for the formulas derived in Ref.~\cite{Mitscherling2018}. This task has been carried out by J.~Mitscherling, who equally contributed to Ref.~\cite{Bonetti2020_I}, which contains the results presented in this chapter. \section{Spiral magnetism} % Spiral magnetic order is defined by a finite expectation value of the spin operator of the form % \begin{equation} \langle \vec{S}_j \rangle = m\hat{n}_j, \label{eq_spiral: <Sj>} \end{equation} % where $m$ is the amplitude of the onsite magnetization, and $\hat{n}_j$ is a unit vector indicating the magnetization direction on site $j$, which can be written as % \begin{equation} \hat{n}_j = \cos(\mathbf{Q}\cdot\mathbf{R}_j)\hat{v}_1 +\sin(\mathbf{Q}\cdot\mathbf{R}_j)\hat{v}_2, \label{eq_spiral: n_j} \end{equation} % with $\hat{v}_1$ and $\hat{v}_2$ two constant, mutually orthogonal unit vectors. The magnetization therefore lies in the plane spanned by $\hat{v}_1$ and $\hat{v}_2$, and its direction on two neighboring sites $j$ and $j'$ differs by an angle $\mathbf{Q}\cdot(\mathbf{R}_j-\mathbf{R}_{j'})$. The vector $\mathbf{Q}$ is a parameter which must be determined microscopically. In the square lattice Hubbard model it often takes the form, in units of the inverse lattice constant $a^{-1}$, $\mathbf{Q}=(\pi-2\pi\eta,\pi)$ or, in the case of a \emph{diagonal spiral}, $\mathbf{Q}=(\pi-2\pi\eta,\pi-2\pi\eta)$, where the parameter $\eta$ is called the incommensurability. If the system Hamiltonian exhibits SU(2) spin symmetry, as in the case of the Hubbard model, the vectors $\hat{v}_1$ and $\hat{v}_2$ can be chosen arbitrarily, and we thus choose $\hat{v}_1=\hat{e}_1\equiv(1,0,0)$ and $\hat{v}_2=\hat{e}_2\equiv(0,1,0)$. The magnetization pattern resulting from this choice on a square lattice for a specific value of $\mathbf{Q}$ is shown in Fig.~\ref{fig_spiral: spiral}. % \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{spiral.png} \caption{Magnetization pattern for a spiral magnetic state on a square lattice with lattice constant $a=1$, and $\mathbf{Q}=(\pi-2\pi\eta,\pi)$, with $\eta\simeq0.07$.} \label{fig_spiral: spiral} \end{figure} % Within the Hubbard model, where the fundamental degrees of freedom are electrons rather than spins, the spin operator is expressed as % \begin{equation} \vec{S}_j = \frac{1}{2}\sum_{s,s'=\uparrow,\downarrow}c^\dagger_{j,s}\vec{\sigma}_{ss'}c_{j,s'}, \label{eq_spiral: Sj Hubbard} \end{equation} % with $\vec{\sigma}$ the Pauli matrices.
Combining the above definition with \eqref{eq_spiral: <Sj>} and \eqref{eq_spiral: n_j}, one obtains the following expression for the onsite magnetization amplitude % \begin{equation} m = \frac{1}{2}\int_\mathbf{k} \Big\langle c^\dagger_{\mathbf{k},\uparrow}c_{\mathbf{k}+\mathbf{Q},\downarrow}+c^\dagger_{\mathbf{k}+\mathbf{Q},\downarrow}c_{\mathbf{k},\uparrow}\Big\rangle. \end{equation} % From the equation above, it is evident that spiral magnetism couples the single-particle states $(\mathbf{k},\uparrow)$ and $(\mathbf{k}+\mathbf{Q},\downarrow)$, for each momentum $\mathbf{k}$. It is thus convenient to use a Nambu-like basis $(c_{\mathbf{k},\uparrow},c_{\mathbf{k}+\mathbf{Q},\downarrow})$, for which the inverse bare Green's function reads as % \begin{equation} \mathbf{G}_0^{-1}(\mathbf{k},\nu)=\left( \begin{array}{cc} i\nu-\xi_{\mathbf{k}} & 0\\ 0 & i\nu-\xi_{\mathbf{k}+\mathbf{Q}} \end{array} \right), \end{equation} % with $\xi_\mathbf{k}$ the single-particle dispersion relative to the chemical potential $\mu$. Within the above definitions, the 2D N\'eel state is recovered by setting $\mathbf{Q}=\mathbf{Q}_\mathrm{AF}=(\pi,\pi)$. In the Hubbard model a spiral magnetic state (that is, one with $\mathbf{Q}$ close to $\mathbf{Q}_\mathrm{AF}$) has been found at finite doping by several methods: Hartree-Fock~\cite{Igoshev2010} and slave-boson mean-field~\cite{Fresard1991} calculations, as well as expansions in the hole density~\cite{Chubukov1995}, and moderate-coupling functional renormalization group~\cite{Yamase2016} calculations. Interestingly enough, normal state DMFT calculations have revealed that the ordering wave vector $\mathbf{Q}$ is related to the Fermi surface geometry not only at weak but also at strong coupling~\cite{Vilardi2018}. Furthermore, spiral states are found to emerge upon doping also in the $t$-$J$ model~\cite{Shraiman1989,Kotov2004}. % \section{DMFT for spiral states} % The single impurity DMFT equations presented in Chap.~\ref{chap: methods} can be easily extended to magnetically ordered states~\cite{Georges1996}. The particular case of spiral magnetism has been treated in Refs.~\cite{Fleck1999,Goto2016} for the square- and triangular-lattice Hubbard model, respectively. In the Nambu-like basis introduced previously, the self-consistency equation takes the form % \begin{equation} \int_\mathbf{k} \left[\mathbf{G}_0^{-1}(\mathbf{k},\nu)-\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)\right]^{-1}= \left[\boldsymbol{\mathcal{G}}_0^{-1}(\nu)-\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)\right]^{-1}, \end{equation} % where $\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)$ is the local self-energy, and $\boldsymbol{\mathcal{G}}_0(\nu)$ the bare propagator of the self-consistent AIM. The self-energy is a $2\times2$ matrix of the form % \begin{equation} \boldsymbol{\Sigma}_\mathrm{dmft}(\nu) = \left( \begin{array}{cc} \Sigma(\nu) & \Delta(\nu) \\ \Delta^*(-\nu) & \Sigma(\nu) \end{array} \right), \end{equation} % with $\Sigma(\nu)$ the normal self-energy, and $\Delta(\nu)$ the gap function. Since the impurity model lives in 0+1 dimensions, spontaneous symmetry breaking cannot occur there: with a diagonal Weiss field $\boldsymbol{\mathcal{G}}_0$, no off-diagonal elements of the self-energy can be generated. We therefore explicitly break the SU(2) symmetry in the impurity model, allowing for a non-diagonal bare propagator.
The corresponding AIM can then be written as (cf.~Eq.~\eqref{eq_methods: AIM Hamilt}) % \begin{equation} \begin{split} \mathcal{H}_\mathrm{AIM}=\sum_{\ell,\sigma} \varepsilon_\ell\, a^\dagger_{\ell,\sigma}a_{\ell,\sigma}+ \sum_{\ell,\sigma,\sigma'}\left[V_\ell^{\sigma\sigma'}c^\dagger_{0,\sigma}a_{\ell,\sigma'}+\mathrm{h.c.}\right] -\mu\sum_\sigma c^\dagger_{0,\sigma}c_{0,\sigma} +Un_{0,\uparrow}n_{0,\downarrow}, \end{split} \label{eq_spiral: spiral AIM} \end{equation} % with $V_\ell^{\sigma\sigma'}$ a Hermitian matrix describing spin-dependent hoppings. By means of a suitable global spin rotation around the axis perpendicular to the magnetization plane, one can impose $\Delta(\nu)=\Delta^*(-\nu)$, and therefore require the $V_\ell^{\sigma\sigma'}$ to be real symmetric matrices. The self-consistent loop will then return nonzero off-diagonal hoppings ($V_\ell^{\uparrow\downarrow}$ and $V_\ell^{\downarrow\uparrow}$), and therefore a finite gap function $\Delta(\nu)$, only if symmetry breaking occurs in the original lattice system. Integrating out the bath fermions in Eq.~\eqref{eq_spiral: spiral AIM}, one obtains the Weiss field % \begin{equation} \boldsymbol{\mathcal{G}}_0(\nu) = (i\nu+\mu)\mathbb{1}-\sum_\ell \frac{\boldsymbol{V}^\dagger_\ell \boldsymbol{V}^{\phantom{\dagger}}_\ell}{i\nu-\varepsilon_\ell}, \end{equation} % which, in general, exhibits off-diagonal elements. Using ED with four bath sites as impurity solver, we converge several loops for various values of $\mathbf{Q}$, and retain the solution that minimizes the grand-canonical potential. For its computation we use the formula~\cite{Georges1996} % \begin{equation} \frac{\Omega}{V} = \Omega_\mathrm{imp}-T\sum_\nu\int_\mathbf{k}\Tr\log\left[\mathbf{G}_0^{-1}(\mathbf{k},\nu)-\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)\right] +T\sum_\nu\Tr\log\left[\boldsymbol{\mathcal{G}}_0^{-1}(\nu)-\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)\right], \end{equation} % with $V$ the system volume, and $\Omega_\mathrm{imp}$ the impurity grand-canonical potential per unit volume, which can be computed within the ED solver as % \begin{equation} \Omega_\mathrm{imp} = -2T\sum_n \log \left(1+e^{-\beta \epsilon_n}\right), \end{equation} % where the factor 2 comes from the spin degeneracy, and $\epsilon_n$ are the eigenenergies of the AIM Hamiltonian. In the case of calculations performed at fixed density $n$ rather than at fixed chemical potential $\mu$, the function to be minimized is the free energy per unit volume $F/V=\Omega/V+\mu n$. In Fig.~\ref{fig_spiral: Free en vs eta} we show a typical behavior of $F/V$ as a function of the incommensurability $\eta$ for a $\mathbf{Q}=(\pi-2\pi\eta,\pi)$ spiral. We notice that the variation of microscopic parameters such as the hole doping $p=1-n$ can drive the system from a N\'eel state ($\eta=0$) to a spiral one ($\eta\neq0$).
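Schematically, the selection of $\mathbf{Q}$ thus reduces to an outer minimization over the incommensurability. A minimal Python sketch (our own; the callable dmft_spiral stands for a fully converged spiral-DMFT run and is hypothetical) reads:
\begin{verbatim}
import numpy as np

def optimal_incommensurability(dmft_spiral, etas, n):
    """Converge a spiral-DMFT loop for each Q = (pi-2*pi*eta, pi)
    and return the eta minimizing F/V = Omega/V + mu*n."""
    best_eta, best_F = None, np.inf
    for eta in etas:
        Q = (np.pi - 2.0*np.pi*eta, np.pi)
        omega_per_V, mu = dmft_spiral(Q, n)  # grand potential, chem. pot.
        F = omega_per_V + mu * n
        if F < best_F:
            best_eta, best_F = eta, F
    return best_eta, best_F

# e.g.: optimal_incommensurability(run, np.linspace(0.0, 0.12, 13), n=0.90)
\end{verbatim}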
% \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{freen_vs_eta_LSCO.png} \caption{Free energy per unit volume relative to its minimum value $F_\mathrm{min}$ at two different doping values, as a function of the incommensurability $\eta$ for a $\mathbf{Q}=(\pi-2\pi\eta,\pi)$ spiral.} \label{fig_spiral: Free en vs eta} \end{figure} % \section{Hubbard model parameters} % In order to mimic the behavior of real materials, namely YBa\textsubscript 2Cu\textsubscript 3O\textsubscript y (YBCO) and La\textsubscript{2-x}Sr\textsubscript xCuO\textsubscript4 (LSCO), we use hopping parameters ($t'$ and $t''$) calculated by downfolding \emph{ab initio} band structures onto the single-band Hubbard model~\cite{Andersen1995,Pavarini2001}. For LSCO we choose $t'=-0.17t$, $t''=0.05t$, and $U=8t$, while for YBCO we have $t'=-0.3t$, $t''=0.15t$, and $U=10t$. Furthermore, since YBCO is a bilayer compound, its band structure must be extended to % \begin{equation} \xi_{\mathbf{k},k_z}=\xi_\mathbf{k}-t^\perp_\mathbf{k}\cos k_z, \end{equation} % where $k_z\in\{0,\pi\}$ is the $z$-axis component of the momentum, and $t^\perp_\mathbf{k}$ is an interlayer hopping amplitude taking the form % \begin{equation} t^\perp_\mathbf{k}=t^\perp(\cos k_x - \cos k_y )^2, \end{equation} % with $t^\perp=0.15t$. The dispersion obtained with $k_z=0$ is often referred to as the \emph{bonding band}, and the one with $k_z=\pi$ as the \emph{antibonding band}. The self-consistency equation must then be modified to % \begin{equation} \frac{1}{2}\sum_{k_z=0,\pi}\int_\mathbf{k} \left[\mathbf{G}_0^{-1}(\mathbf{k},k_z,\nu)-\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)\right]^{-1}= \left[\boldsymbol{\mathcal{G}}_0^{-1}(\nu)-\boldsymbol{\Sigma}_\mathrm{dmft}(\nu)\right]^{-1}, \end{equation} % where the bare lattice Green's function is now given by % \begin{equation} \mathbf{G}_0^{-1}(\mathbf{k},k_z,\nu)=\left( \begin{array}{cc} i\nu-\xi_{\mathbf{k},k_z} & 0\\ 0 & i\nu-\xi_{\mathbf{k}+\mathbf{Q},k_z+Q_z} \end{array} \right), \end{equation} % with $Q_z=\pi$, that is, we require the interlayer dimers to be antiferromagnetically ordered. In the rest of this chapter, all quantities with dimensions of energy are given in units of the hopping $t$ unless explicitly stated otherwise. % \section{Order parameter and incommensurability} % In this section, we show results obtained from calculations at the lowest temperatures reachable by the ED algorithm with $N_s=4$ bath sites, namely $T=0.027t$ for LSCO, and $T=0.04t$ for YBCO. Notice that decreasing $T$ below these two values leads, at least for some dopings, to an unphysical decrease and eventual vanishing of the order parameter $m$. Lower temperatures could be reached by increasing $N_s$. However, the exponential scaling of the ED algorithm makes low-$T$ calculations computationally involved. We obtain homogeneous solutions for any doping, that is, for all values of $p$ shown, we have $\frac{\partial\mu}{\partial n}>0$. By contrast, in Hartree-Fock studies~\cite{Igoshev2010} phases with two different densities have been found over broad doping regions.
% \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{self_om_p10_LSCO_b_100.png} \caption{Diagonal (top) and off-diagonal (bottom) component of the DMFT self-energy as a function of the Matsubara frequency for LSCO parameters at $p=0.10$, and $T=0.04t$.} \label{fig_spiral: sigma & delta vs nu} \end{figure} % Unlike in static mean-field theory, where the off-diagonal self-energy is a simple number that can be chosen purely real, within DMFT it acquires a frequency dependence and, in general, an imaginary part. A particular case in which $\Delta(\nu)$ can be chosen as a purely real function of the frequency is the half-filled Hubbard model with only nearest neighbor hoppings ($t'=t''=0$), where a particle-hole transformation can map the antiferromagnetic state onto a superconducting one, for which it is always possible to choose a real gap function. In Fig.~\ref{fig_spiral: sigma & delta vs nu}, we plot the normal and anomalous self-energies as functions of the Matsubara frequency $\nu$. $\Sigma(\nu)$ displays a behavior qualitatively similar to that of the paramagnetic state, with a negative imaginary part, and a real part approaching the Hartree-Fock expression $Un/2$ for $\nu\to\infty$. The anomalous self-energy $\Delta(\nu)$ exhibits a sizable frequency dependence, with its real part interpolating between its value at the Fermi level, $\Delta\equiv\Delta(\nu\to0)$, and a Hartree-Fock-like expression $Um$, with $m$ the onsite magnetization. We notice that within the DMFT (local) charge and pairing fluctuations are taken into account, leading to an overall suppression of $\Delta$ compared to the Hartree-Fock result. This is the magnetic equivalent of the Gor'kov-Melik-Barkhudarov effect found in superconductors~\cite{Gorkov1961}. The observation that $\Delta<Um$ is another manifestation of these fluctuations. % \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{delta_vs_p_2panels.png} \caption{Magnetic gaps for LSCO (left panel) and YBCO (right panel) as functions of doping. For LSCO, we show results at $T=0.04t$ (squares), and $T=0.027t$ (diamonds). The dashed black lines represent estimates of the gaps at $T=0$ via a linear extrapolation, while the dashed gray lines indicate the doping above which electron pockets are present, together with hole pockets, in the Fermi surface.} \label{fig_spiral: delta vs p} \end{figure} % In Fig.~\ref{fig_spiral: delta vs p}, we show the extrapolated zero-frequency gap $\Delta$ as a function of the doping for the two materials under study. As expected, the gap is maximal at half filling, and decreases monotonically upon doping, until it vanishes continuously at $p=p^*$. Due to the mean-field character of the DMFT, the magnetic gap is expected to behave proportionally to $(p^*-p)^{1/2}$ for $p$ slightly below $p^*$ at finite temperature. Examining the temperature trend for LSCO (left panel of Fig.~\ref{fig_spiral: delta vs p}), we expect $p^*$ to increase upon lowering the temperature, and the approximately linear behavior of $\Delta$ to extend up to the critical doping, as indicated by the extrapolation in the figure. In principle, a weak first-order transition is also possible at $T=0$.
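As a side remark, the square-root behavior offers a simple way to locate the critical doping numerically: close to the transition, $\Delta^2$ is linear in $p$, so $p^*$ can be read off as the zero crossing of a linear fit. A minimal sketch (our own, assuming the mean-field form holds for the last few data points before the transition, with dopings sorted in ascending order):
\begin{verbatim}
import numpy as np

def estimate_pstar(p, delta):
    """Estimate p* from gap data delta(p), assuming
    delta ~ (p* - p)**0.5, i.e. delta**2 linear in p."""
    slope, intercept = np.polyfit(p[-3:], delta[-3:]**2, 1)
    return -intercept / slope   # zero of slope*p + intercept
\end{verbatim}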
% \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{eta_vs_p.png} \caption{Incommensurability $\eta$ as a function of doping for LSCO and YBCO parameters at $T=0.04t$.} \label{fig_spiral: eta vs p} \end{figure} % Within the parameter ranges under study, the ordering wave vector always takes the form $\mathbf{Q}=(\pi-2\pi\eta,\pi)$ (or a symmetry-related one), with the incommensurability $\eta$ varying with doping, as shown in Fig.~\ref{fig_spiral: eta vs p}. For both compounds we find that $\eta$ is smaller than $p$. Experimentally, the relation $\eta(p)\simeq p$ has been found to hold for LSCO for $0.06<p<0.12$, saturating to $\eta\simeq1/8$ for larger dopings~\cite{Yamada1998}. By contrast, experiments on YBCO have found $\eta(p)$ to be significantly smaller than $p$~\cite{Haug2010}. % \section{Fermi surfaces} % The onset of spiral magnetic order leads to a band splitting and therefore to a fractionalization of the Fermi surface. In the vicinity of the Fermi level, we can approximate the anomalous and normal self-energies as constants, $\Delta$ and $\Sigma_0\equiv\mathrm{Re}\Sigma(\nu\to0)$, which leads to a mean-field expression~\cite{Igoshev2010} for the quasiparticle bands reading as % \begin{equation} E^\pm_\mathbf{k}=\frac{\epsilon_\mathbf{k}+\epsilon_{\mathbf{k}+\mathbf{Q}}}{2}\pm\sqrt{\left(\frac{\epsilon_\mathbf{k}-\epsilon_{\mathbf{k}+\mathbf{Q}}}{2}\right)^2+\Delta^2}-\widetilde{\mu}, \end{equation} % with $\widetilde{\mu}=\mu-\Sigma_0$. The quasiparticle Fermi surfaces are then given by $E_\mathbf{k}^\pm=0$. In the case of the bilayer compound YBCO, there are two sets of Fermi surfaces, corresponding to the bonding and antibonding bands. We remark that the above expression for the quasiparticle dispersions holds only in the vicinity of the Fermi level, where the expansion of the DMFT self-energies is justified. % \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{FS_pstar.png} \caption{Fermi surfaces for LSCO and YBCO slightly below their critical doping at $T=0.04t$. The red lines indicate hole pockets, while the blue ones electron pockets. For YBCO, solid and dashed lines denote the bonding and antibonding bands, respectively.} \label{fig_spiral: FS p*} \end{figure} % In Fig.~\ref{fig_spiral: FS p*} the quasiparticle Fermi surfaces for LSCO and YBCO band parameters are shown for doping values slightly smaller than their respective critical doping $p^*$. In all cases, due to the small value of $\Delta$ in the vicinity of $p^*$, both electron and hole pockets are present. % \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{Zfactors.png} \caption{$Z$ factors as functions of the doping for LSCO and YBCO band parameters at $T=0.04t$. The dashed lines represent the $Z$ factors in the unstable paramagnetic phase.} \label{fig_spiral: Z factors} \end{figure} % The quasiparticle Fermi surface differs from the Fermi surface observed in photoemission experiments. The latter is determined by the poles of the diagonal elements of the Green's function, corresponding to peaks in the spectral function at zero frequency, $A(\mathbf{k},0)$.
Discarding the frequency dependence of the self-energies, the spectral functions in the vicinity of the Fermi level can be expressed as~\cite{Eberlein2016} % \begin{subequations} \begin{align} &A_\uparrow(\mathbf{k},\omega)=\sum_{\eta=\pm}\frac{\Delta^2}{\Delta^2+\left(\xi_{\mathbf{k}-\mathbf{Q}}-E^{-\eta}_{\mathbf{k}-\mathbf{Q}}\right)^2}\,\delta\!\left(\omega-E^{\eta}_{\mathbf{k}-\mathbf{Q}}\right),\\ &A_\downarrow(\mathbf{k},\omega)=\sum_{\eta=\pm}\frac{\Delta^2}{\Delta^2+\left(\xi_{\mathbf{k}}-E^{\eta}_{\mathbf{k}}\right)^2}\,\delta\!\left(\omega-E^{\eta}_{\mathbf{k}}\right), \end{align} \end{subequations} % where $\omega/t\ll1$, $\xi_\mathbf{k}=\epsilon_\mathbf{k}-\widetilde{\mu}$, and $\delta(x)$ denotes the Dirac delta function. The total spectral function, $A(\mathbf{k},\omega)=A_\uparrow(\mathbf{k},\omega)+A_\downarrow(\mathbf{k},\omega)$, is inversion symmetric ($A(-\mathbf{k},\omega)=A(\mathbf{k},\omega)$) for band dispersions obeying $\epsilon_{-\mathbf{k}}=\epsilon_\mathbf{k}$, while the quasiparticle bands are not~\cite{Bonetti2020_I}. Furthermore, the spectral weight on the Fermi surface is given by $\frac{\Delta^2}{\Delta^2+\xi_\mathbf{k}^2}$, which is maximal for momenta close to the ``bare'' Fermi surface, $\xi_\mathbf{k}=0$. At low temperatures and in the vicinity of the Fermi level, the main effect of the normal self-energy $\Sigma(\nu)$ is a renormalization of the quasiparticle weight by the $Z$ factor % \begin{equation} Z=\left[1-\frac{\partial\,\mathrm{Im}\Sigma(\nu)}{\partial\nu}\bigg\rvert_{\nu=0}\right]^{-1}\leq1, \end{equation} % where the derivative can be approximated by $\mathrm{Im}\Sigma(\pi T)/(\pi T)$ at finite temperatures. The $Z$ factor reduces the bare dispersion to $\bar{\xi}_\mathbf{k}=Z\xi_\mathbf{k}$, the magnetic gap to $\bar{\Delta}=Z\Delta$, and the quasiparticle energies to $\bar{E}^\pm_\mathbf{k}=ZE^\pm_\mathbf{k}$. Moreover, the quasiparticle contributions to the spectral function are suppressed by a global factor $Z$. The missing spectral weight is shifted to incoherent contributions at higher energies. The resulting spectral function then reads as % \begin{equation} \begin{split} A(\mathbf{k},\omega)=&Z\sum_{\eta=\pm}\left[ \frac{\bar{\Delta}^2}{\bar{\Delta}^2+\left(\bar{\xi}_{\mathbf{k}-\mathbf{Q}}-\bar{E}^{-\eta}_{\mathbf{k}-\mathbf{Q}}\right)^2}\,\delta\!\left(\omega-\bar{E}^{\eta}_{\mathbf{k}-\mathbf{Q}}\right) +\frac{\bar{\Delta}^2}{\bar{\Delta}^2+\left(\bar{\xi}_{\mathbf{k}}-\bar{E}^{\eta}_{\mathbf{k}}\right)^2}\,\delta\!\left(\omega-\bar{E}^{\eta}_{\mathbf{k}}\right) \right]\\ &+A^\mathrm{inc}(\mathbf{k},\omega). \end{split} \end{equation} % In Fig.~\ref{fig_spiral: Z factors}, we plot the $Z$ factors for LSCO and YBCO parameters computed at $T=0.04t$ as functions of the doping. The values computed for the (enforced) unstable paramagnetic solution are also shown for comparison (dashed lines). We notice that the $Z$ factors exhibit a rather weak doping dependence, and, depending on the material, take values between 0.2 and 0.4, with the strongest renormalization occurring for YBCO. We remark that for $p\to0$ the paramagnetic $Z$ factors are not expected to vanish, as the choice of parameters for both materials places them on the metallic side of the Mott transition at half filling. % \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{FS_spfun_LSCO.png} \caption{Quasiparticle Fermi surfaces (top) and spectral functions $A(\mathbf{k},0)$ (bottom) for LSCO parameters and for different doping values.
The spectral functions have been broadened by a constant scattering rate $\Gamma=0.025t$.} \label{fig_spiral: FS sp fun LSCO} \end{figure} % In Fig.~\ref{fig_spiral: FS sp fun LSCO}, we show the quasiparticle Fermi surfaces and spectral functions for various doping values across the spiral-to-paramagnetic transition. Electron pockets are present in the Fermi surface only in a narrow doping region below $p^*$ (see also Fig.~\ref{fig_spiral: delta vs p}). The spectral function exhibits visible peaks only on the inner sides of the pockets, as on the outer sides the spectral weight is strongly suppressed. Therefore, the Fermi surface observed in photoemission experiments smoothly evolves from Fermi arcs, characteristic of the pseudogap phase, to a large Fermi surface upon increasing doping. % \section{Application to transport experiments in cuprates} % \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{sigma_LSCO_YBCO.png} \caption{Longitudinal conductivity as a function of doping for LSCO (left panel) at $T=0.04t$ (solid line, squares), and $T=0.027t$ (dashed line, diamonds) and YBCO (right panel) at $T=0.04t$. The dashed-dotted lines represent extrapolations at $T=0$. The conductivity in the unstable paramagnetic phase (gray symbols) is shown for comparison.} \label{fig_spiral: sigma_xx LSCO YBCO} \end{figure} % Transport coefficients can in principle be computed within the DMFT. However, this involves a delicate analytic continuation from Matsubara to real frequencies. Furthermore, the quasiparticle lifetimes calculated within this approach are due to electron-electron scattering processes alone, while in real systems important contributions also come from phonons and impurities. We therefore compute the magnetic gap $\Delta$, the incommensurability $\eta$, and the $Z$ factor as functions of the doping $p$ within the DMFT, and plug them into a mean-field Hamiltonian, while taking estimates for the scattering rates from experiments. The mean-field Hamiltonian reads as % \begin{equation} \mathcal{H}_\mathrm{MF}=\int_\mathbf{k}\sum_\sigma\left(\bar{\epsilon}_\mathbf{k}-\mu\right) c^\dagger_{\mathbf{k},\sigma}c_{\mathbf{k},\sigma}+\int_\mathbf{k} \bar{\Delta}\left(c^\dagger_{\mathbf{k},\uparrow}c_{\mathbf{k}+\mathbf{Q},\downarrow}+c^\dagger_{\mathbf{k}+\mathbf{Q},\downarrow}c_{\mathbf{k},\uparrow}\right), \label{eq_spiral: H MF} \end{equation} % with $\bar{\epsilon}_\mathbf{k}=Z\epsilon_\mathbf{k}$. The chemical potential $\mu$ is then adapted such that the doping calculated from~\eqref{eq_spiral: H MF} coincides with the one computed within the DMFT. The scattering rate is implemented by adding a constant imaginary part $i\Gamma$ to the inverse retarded bare propagator, with $\Gamma$ fixed to $0.025t$. The transport coefficients are obtained by coupling the system to the U(1) electromagnetic gauge potential $\boldsymbol{A}(\mathbf{r},t)$ through the \emph{Peierls substitution}, that is % \begin{equation} t_{jj'}\to t_{jj'}\exp\left[ie\int_{\mathbf{R}_j}^{\mathbf{R}_{j'}}\boldsymbol{A}(\mathbf{r},t)\cdot d\mathbf{r}\right], \label{eq_spiral: Peierls subst} \end{equation} % with $t_{jj'}$ the hopping matrix, that is, the Fourier transform of $\bar{\epsilon}_\mathbf{k}$, and $e<0$ the electron charge.
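Before turning to the conductivities, we note that the chemical-potential matching described above is numerically a one-dimensional root-finding problem on the quasiparticle bands. A minimal Python sketch (our own, with hypothetical names; eps and epsQ are the renormalized dispersions at $\mathbf{k}$ and $\mathbf{k}+\mathbf{Q}$ on a uniform grid) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit   # expit(-E/T) is the Fermi function

def filling(mu, eps, epsQ, delta, T):
    """Filling n from the two quasiparticle bands E^pm(k)."""
    avg = 0.5*(eps + epsQ)
    rad = np.sqrt(0.25*(eps - epsQ)**2 + delta**2)
    return np.mean(expit(-(avg + rad - mu)/T)
                   + expit(-(avg - rad - mu)/T))

def match_mu(n_target, eps, epsQ, delta, T):
    """Adjust mu so that the mean-field filling equals n_target."""
    return brentq(lambda mu: filling(mu, eps, epsQ, delta, T)
                  - n_target, -20.0, 20.0)
\end{verbatim}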
The ordinary and Hall conductivities are defined by % \begin{equation} j^\alpha = \left[\sigma^{\alpha\beta}+\sigma_H^{\alpha\beta\gamma}B^\gamma\right]E^\beta, \end{equation} % with $j^\alpha$ the electrical current, and $E^\beta$ and $B^\gamma$ the electric and magnetic field, respectively. The Hall coefficient is then given by % \begin{equation} R_H=\frac{\sigma_H^{xyz}}{\sigma^{xx}\,\sigma^{yy}}, \end{equation} % and the Hall number by $n_H=1/(|e|R_H)$. Exact expressions for the conductivities of the Hamiltonian~\eqref{eq_spiral: H MF} can be obtained, and we refer to Ref.~\cite{Mitscherling2018} for a derivation and more details. These formulas go well beyond the independent band picture often used in the calculation of transport properties, as they include \emph{interband} and \emph{intraband} contributions on equal footing. For a broad discussion of these different terms in \emph{general} two-band models, we refer to Refs.~\cite{Mitscherling2020,Mitscherling_Thesis}. In Fig.~\ref{fig_spiral: sigma_xx LSCO YBCO}, we show the longitudinal conductivity as a function of doping for the two materials under study and for different temperatures, together with an extrapolation to zero temperature, obtained by inserting the guess for the doping dependence of $\Delta$ at $T=0$ sketched in Fig.~\ref{fig_spiral: delta vs p}. The expected drop at $p=p^*$ is particularly steep at $T>0$ due to the square-root onset of $\Delta(p)$, while it is smoother at $T=0$. Since in the present calculation the scattering rate does not depend on doping, the drop in $\sigma^{xx}$ is exclusively due to the Fermi surface reconstruction. % \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{nH_LSCO_YBCO.png} \caption{Hall number $n_H$ as a function of doping for LSCO (left panel) and YBCO (right panel). The symbol code is the same as in Fig.~\ref{fig_spiral: sigma_xx LSCO YBCO}. The black dashed lines correspond to the na\"ive expectations $n_H=p$ for the hole pockets and $n_H=1+p$ for a large Fermi surface.} \label{fig_spiral: nH LSCO YBCO} \end{figure} % The Hall number as a function of doping is plotted in Fig.~\ref{fig_spiral: nH LSCO YBCO} for different temperatures, together with an extrapolation to $T=0$. A pronounced drop is found for $p<p^*$, indicating once again a reduction of the charge carrier concentration. In the high-field limit $\omega_c\tau\gg1$, with $\omega_c$ the cyclotron frequency and $\tau\propto1/\Gamma$ the quasiparticle lifetime, the Hall number exactly equals the charge carrier density enclosed by the Fermi pockets. However, the experiments are performed in the \emph{low-field} limit $\omega_c\tau\ll 1$. In this limit, $n_H$ equals the charge carrier density \emph{only} for parabolic dispersions. For low doping, the Hall number approaches the value $p$, indicating that for small $p$ the hole pockets are well approximated by ellipses. In the paramagnetic phase emerging at $p>p^*$, $n_H$ is slightly above the na\"ive expectation $1+p$ for YBCO, while for LSCO it is completely off, a sign that in this regime the dispersion is far from parabolic. In fact, the large values of $n_H$ are a precursor of a divergence occurring at $p=0.33$, well above the van Hove doping at $p=0.23$. % \begin{figure}[b!]
\centering \includegraphics[width=0.5\textwidth]{syy_o_sxx.png} \caption{Ratio $\sigma^{yy}/\sigma^{xx}$ as a function of doping for LSCO (orange symbols) and YBCO (blue symbols) at $T=0.04t$.} \label{fig_spiral: sigma_yy/sigma_xx} \end{figure} % In Fig.~\ref{fig_spiral: sigma_yy/sigma_xx}, we show the ratio $\sigma^{yy}/\sigma^{xx}$ as a function of doping for LSCO and YBCO at $T=0.04t$. The breaking of the square lattice symmetry due to the onset of spiral order leads to an anisotropy, or nematicity, in the longitudinal conductivity. This behavior has also been experimentally observed in Ref.~\cite{Ando2002}, where, however, the measured values of the ratio were much larger than those of the present calculation. For a wave vector of the form $\mathbf{Q}=(\pi-2\pi\eta,\pi)$, the longitudinal conductivity in the $y$ direction is larger than the one in the $x$ direction. Lowering $p$ below $p^*$, the decrease in $\eta$ is compensated by an increase in $\Delta$, leading to an overall increase of the ratio $\sigma^{yy}/\sigma^{xx}$, until a point where the incommensurability becomes too small and the ratio decreases again, saturating to 1 for small values of the doping, where $\eta=0$. \chapter{fRG+MF approach to the Hubbard model} \label{chap: fRG+MF} % The aim of this chapter is to present a framework that allows one to continue the fRG flow into phases exhibiting spontaneous symmetry breaking (SSB). This can be achieved by means of a simplified truncation that neglects the interplay of the different channels below the critical scale ${\L_c}$, at which symmetry breaking occurs. This set of flow equations can be shown to be equivalent to a mean-field (MF) approach with renormalized couplings computed at RG scales ($\L$) larger than ${\L_c}$~\cite{Wang2014,Yamase2016}. Therefore, we call this approach fRG+MF. Neglecting the channel competition also for $\L>{\L_c}$ leads to the Hartree-Fock approximation for the order parameter~\cite{Salmhofer2004}. This chapter is divided into three parts. In the first one, we introduce the fRG+MF equations. In the second one, we perform an fRG+MF calculation of the phase diagram of the Hubbard model. We treat all the frequency dependencies of the two-particle vertex, thereby extending the static results of Ref.~\cite{Yamase2016}. The results of this part have been published in Ref.~\cite{Vilardi2020}. In the third part, we tackle the problem of reformulating the fRG+MF approach in a mixed boson-fermion representation, where the explicit presence of a bosonic field allows for a systematic inclusion of the collective fluctuations on top of the MF. Ref.~\cite{Bonetti2020_II} contains the results presented in this part. % % \section{fRG+MF formalism} \label{sec_fRG_MF: fRG+MF equations} % In this section, we derive the fRG+MF equations assuming that at $\L={\L_c}$ the particle-particle channel $\phi^{(pp)\L}$ diverges, signaling the onset of superconductivity for $\L<{\L_c}$, arising from the breaking of the global U(1) charge symmetry. Generalizations to other orders and symmetries are straightforward, as shown for example in Sec.~\ref{sec_fRG_MF: Phase diag} for the case of N\'eel and spiral antiferromagnetism. The fRG+MF approach derived here neglects order parameter fluctuations of any kind, thermal or quantum.
In order to deal with the breaking of the global U(1) symmetry, we introduce the Nambu spinors
%
\begin{equation}
\Psi_k=\left(
\begin{array}{c}
\psi_{k,\uparrow} \\
\overline{\psi}_{-k,\downarrow}
\end{array}
\right)
\hskip 1cm
\overline{\Psi}_k=\left(
\begin{array}{c}
\overline{\psi}_{k,\uparrow} \\
\psi_{-k,\downarrow}
\end{array}
\right),
\label{eq_fRG+MF: Nambu spinors}
\end{equation}
%
where $\psi_{k,\sigma}$ ($\overline{\psi}_{k,\sigma}$) is a Grassmann field corresponding to the annihilation (creation) of an electron, $k=(\mathbf{k},\nu)$ a collective variable comprising the lattice momentum and a fermionic Matsubara frequency, and $\sigma=\uparrow,\downarrow$ the spin quantum number.

\subsection{Flow equations and integration}

In the SSB phase, the vertex function $V$ acquires anomalous components due to the violation of particle number conservation. In particular, besides the normal vertex describing scattering processes with two incoming and two outgoing particles ($V_{2+2}$), in the superfluid phase components with three ($V_{3+1}$) or four ($V_{4+0}$) incoming or outgoing particles can also arise. We do not treat the $3+1$ components, since they are related to the coupling of the order parameter to charge fluctuations~\cite{Eberlein2013}, which do not play any role in a MF-like approximation for the superfluid state. It turns out to be useful to work with the linear combinations
%
\begin{equation}
\begin{split}
&V_\mathcal{A}=\Re\left\{V_{2+2}+V_{4+0}\right\},\\
&V_\Phi=\Re\left\{V_{2+2}-V_{4+0}\right\},
\label{eq_fRG+MF: A and Phi vertex combination}
\end{split}
\end{equation}
%
which represent two fermion interactions in the longitudinal and transverse order parameter channels, related to amplitude and phase fluctuations of the superfluid order parameter, respectively. In principle, a mixed longitudinal-transverse interaction can also appear, arising from the imaginary parts of the vertices in Eq.~\eqref{eq_fRG+MF: A and Phi vertex combination}, but it has no effect in the present MF approximation because it vanishes at zero center of mass frequency~\cite{Eberlein_Thesis}. Below the critical scale, $\Lambda <{\L_c}$, we consider a truncation of the effective action of the form
%
\begin{equation}
\begin{split}
\Gamma^{\Lambda}_{\text{SSB}}[\Psi,\overline{\Psi}]=-\int_{k} \overline{\Psi}_{k} \, \left[\mathbf{G}^{\Lambda}(k)\right]^{-1} \Psi_{k}\,\,
+&\int_{k,k',q}V^{\Lambda}_{\mathcal{A}}(k,k';q)\, S^1_{k,q}\,S^1_{k',-q}\\
+&\int_{k,k',q}V^{\Lambda}_{\Phi}(k,k';q)\, S^2_{k,q}\,S^2_{k',-q}
,
\end{split}
\label{eq_fRG+MF: fermionic SSB truncation}
\end{equation}
%
with the Nambu bilinears defined as
%
\begin{equation}
S^\alpha_{k,q}=\overline{\Psi}_{k+\rnddo{q}}\, \tau^\alpha \,\Psi_{k-\rndup{q}},
\label{eq_fRG+MF: fermion bilinear}
\end{equation}
%
where the Pauli matrices $\tau^\alpha$ are contracted with the Nambu spinor indexes. The fermionic propagator $\mathbf{G}^\Lambda(k)$ is given by the matrix
%
\begin{equation}
\left(
\begin{array}{cc}
Q_{0}^\Lambda(k)-\Sigma^\Lambda(k) & \Delta^\Lambda(k)\\
\Delta^\Lambda(k) & -Q_0^\Lambda(-k)+\Sigma^\Lambda(-k)
\end{array}
\right)^{-1},
\end{equation}
%
where $Q_{0}^\Lambda(k)=i\nu-\xi_\mathbf{k}+R^\Lambda(k)$, $\xi_\mathbf{k}$ is the single particle dispersion relative to the chemical potential, $R^\Lambda(k)$ the fRG regulator, $\Sigma^\Lambda(k)$ the normal self-energy, and $\Delta^\Lambda(k)$ the superfluid gap.
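To make the structure of $\mathbf{G}^\Lambda(k)$ concrete, the following minimal Python sketch constructs the Nambu propagator by inverting the $2\times 2$ matrix above at the end of the flow ($R^\Lambda=0$) with the normal self-energy neglected and a constant test gap; all numerical values are illustrative placeholders and do not correspond to the calculations presented in this chapter.
\begin{verbatim}
import numpy as np

T, t, Delta0 = 0.1, 1.0, 0.5           # temperature, hopping, test gap

def xi(kx, ky):                        # dispersion relative to mu (mu = 0 here)
    return -2.0*t*(np.cos(kx) + np.cos(ky))

def nambu_G(kx, ky, nu):
    """Invert the 2x2 inverse propagator; R^Lambda = 0, Sigma = 0."""
    Ginv = np.array([[1j*nu - xi(kx, ky), Delta0],
                     [Delta0,             1j*nu + xi(kx, ky)]])
    return np.linalg.inv(Ginv)

nu0 = np.pi*T                          # lowest fermionic Matsubara frequency
G = nambu_G(0.3, 0.2, nu0)
print("normal G:", G[0, 0], "anomalous F:", G[0, 1])
\end{verbatim}
The off-diagonal entry of the inverse is the anomalous propagator $F^\Lambda(k)$ encountered below.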
The initial conditions at the scale $\Lambda={\L_c}$ require $\Delta^{{\L_c}}$ to be zero and both $V^{{\L_c}}_\mathcal{A}$ and $V^{{\L_c}}_\Phi$ to equal the vertex $V^{{\L_c}}$ in the symmetric phase. We now introduce the MF approximation to the symmetry-broken state: we focus on the $q=0$ components of $V_\mathcal{A}$ and $V_\Phi$ and neglect all others. From now on we therefore keep only the $q=0$ terms. We also neglect the flow of the normal self-energy below ${\L_c}$. In order to simplify the presentation, we introduce a matrix-vector notation for the gaps and vertices. In particular, the functions $V_\mathcal{A}$ and $V_\Phi$ are matrices in the indices $k$ and $k'$, while the gap and the fermionic propagator behave as vectors. For example, in this notation an object of the type $\int_{k'}V_\mathcal{A}^\Lambda(k,k')\Delta^\Lambda(k')$ can be viewed as a matrix-vector product, $V_\mathcal{A}^\Lambda \Delta^\Lambda$. Within our MF approximation, we retain in the set of flow equations only the terms that involve the $q=0$ components of the functions $V_\mathcal{A}$ and $V_\Phi$. This means that in a generalization of Eq.~\eqref{eq_methods: flow eq vertex xx'} to the SSB phase, we consider only the particle-particle contributions. In formulas:
%
\begin{align}
&\partial_\Lambda V_\mathcal{A}^\Lambda=V_\mathcal{A}^\Lambda\left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{11}\right] V_\mathcal{A}^\Lambda+\Gamma^{(6)\Lambda} \circ \widetilde{\partial}_\Lambda G^\Lambda, \label{eq_fRG+MF: flow eq Va fermionic}\\
&\partial_\Lambda V_\Phi^\Lambda=V_\Phi^\Lambda \left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{22}\right] V_\Phi^\Lambda+\Gamma^{(6)\Lambda} \circ \widetilde{\partial}_\Lambda G^\Lambda, \label{eq_fRG+MF: flow eq Vphi fermionic}
\end{align}
%
where we have defined the bubbles
%
\begin{equation}
\Pi^\Lambda_{\alpha\beta}(k,k')=-\frac{1}{2}\Tr\left[\tau^\alpha\,\mathbf{G}^\Lambda(k)\,\tau^\beta\,\mathbf{G}^\Lambda(k)\right]\delta_{k,k'},
\end{equation}
%
with $\delta_{k,k'}=(2\pi)^2/T \,\delta(\mathbf{k}-\mathbf{k}')\delta_{\nu\nu'}$, and the trace runs over Nambu spin indexes. The last terms of Eqs.~\eqref{eq_fRG+MF: flow eq Va fermionic} and~\eqref{eq_fRG+MF: flow eq Vphi fermionic} involve the 6-particle interaction, which we treat here in the Katanin approximation; this allows us to replace the single-scale derivative $\widetilde{\partial}_\Lambda$, acting only on the regulator, with the full scale derivative $\partial_\Lambda$ in the bubbles~\cite{Katanin2004}. This approach is useful because it provides the exact solution of mean-field models, such as the reduced BCS model, in which the bare interaction is restricted to the zero center of mass momentum channel~\cite{Salmhofer2004}.
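For a single $k$ point, the bubbles defined above reduce to a trace over $2\times 2$ Nambu matrices. The self-contained sketch below evaluates $\Pi_{11}$ and $\Pi_{22}$ for the same toy propagator as before, and also verifies the relation $\Pi_{22}(k)=F^\Lambda(k)/\Delta^\Lambda(k)$ used later in the proof of the Goldstone theorem; parameters are again illustrative.
\begin{verbatim}
import numpy as np

T, t, Delta0 = 0.1, 1.0, 0.5
xi = lambda kx, ky: -2.0*t*(np.cos(kx) + np.cos(ky))

def nambu_G(kx, ky, nu):
    Ginv = np.array([[1j*nu - xi(kx, ky), Delta0],
                     [Delta0,             1j*nu + xi(kx, ky)]])
    return np.linalg.inv(Ginv)

tau = {1: np.array([[0, 1], [1, 0]], dtype=complex),
       2: np.array([[0, -1j], [1j, 0]], dtype=complex)}

def Pi(a, b, kx, ky, nu):
    """Pi_{ab}(k) = -1/2 Tr[tau^a G(k) tau^b G(k)]."""
    G = nambu_G(kx, ky, nu)
    return -0.5*np.trace(tau[a] @ G @ tau[b] @ G)

kx, ky, nu0 = 0.3, 0.2, np.pi*T
F = nambu_G(kx, ky, nu0)[0, 1]
print("Pi_11:", Pi(1, 1, kx, ky, nu0))
print(np.isclose(Pi(2, 2, kx, ky, nu0), F/Delta0))   # -> True
\end{verbatim}
With the Katanin substitution, these bubbles are the only input needed to integrate the flow.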
In this way, the flow equation~\eqref{eq_fRG+MF: flow eq Va fermionic} for the vertex $V_\mathcal{A}$, together with the initial condition $V_\mathcal{A}^{{\L_c}}=V^{{\L_c}}$, can be integrated analytically, yielding
%
\begin{equation}
\begin{split}
V_\mathcal{A}^\Lambda = &\left[1+V^{{\L_c}}(\Pi^{{\L_c}}-\Pi_{11}^\Lambda)\right]^{-1}V^{{\L_c}} =\left[1-\widetilde{V}^{{\L_c}}\Pi_{11}^\Lambda\right]^{-1}\widetilde{V}^{{\L_c}},
\end{split}
\label{eq_fRG+MF: Va solution fermionic}
\end{equation}
%
where
%
\begin{equation}
\Pi^{{\L_c}}(k,k')=G^{{\L_c}}(k)G^{{\L_c}}(-k)\delta_{k,k'},
\label{eq_fRG+MF: bubble at Lambda_s}
\end{equation}
%
is the (normal) particle-particle bubble at zero center of mass momentum,
%
\begin{equation}
G^{\Lambda}(k)=\frac{1}{Q_0^{\Lambda}(k)-\Sigma^{{\L_c}}(k)},
\label{eq_fRG+MF: G at Lambda_s}
\end{equation}
is the fermionic normal propagator, and
%
\begin{equation}
\widetilde{V}^{{\L_c}}=\left[1+V^{{\L_c}}\Pi^{{\L_c}}\right]^{-1}V^{{\L_c}}
\label{eq_fRG+MF: irr vertex fermionic}
\end{equation}
%
is the irreducible (normal) vertex in the particle-particle channel at the critical scale. The flow equation for the transverse vertex $V_\Phi$ admits a formal solution similar to the one in Eq.~\eqref{eq_fRG+MF: Va solution fermionic}, but the matrix $[1-\widetilde{V}^{{\L_c}}\Pi_{22}^\Lambda]$ is not invertible. We will come back to this point below.
%
\subsection{Gap equation}
%
Similarly to the flow equations for the vertices, in the flow equation of the superfluid gap we neglect the contributions involving the vertices at $q\neq 0$. We are then left with
%
\begin{equation}
\partial_\Lambda\Delta^\Lambda(k)=\int_{k'}V_\mathcal{A}^\Lambda(k,k')\,\widetilde{\partial}_\Lambda F^\Lambda(k'),
\label{eq_fRG+MF: gap flow equation}
\end{equation}
%
where
%
\begin{equation}
F^\Lambda(k)=\frac{\Delta^\Lambda(k)}{[G^\Lambda(k)\,G^\Lambda(-k)]^{-1}+\left[\Delta^\Lambda(k)\right]^2}
\label{eq_fRG+MF: F definition}
\end{equation}
%
is the anomalous fermionic propagator, with $G^\L$ defined as in Eq.~\eqref{eq_fRG+MF: G at Lambda_s}, and with the normal self-energy kept fixed at its value at the critical scale. By inserting Eq.~\eqref{eq_fRG+MF: Va solution fermionic} into Eq.~\eqref{eq_fRG+MF: gap flow equation} and using the initial condition $\Delta^{{\L_c}}=0$, we can analytically integrate the flow equation, obtaining the gap equation~\cite{Wang2014}
%
\begin{equation}
\Delta^\Lambda(k)=\int_{k'}\widetilde{V}^{{\L_c}}(k,k')\, F^\Lambda(k').
\label{eq_fRG+MF: gap equation fermionic}
\end{equation}
%
In the special case in which the contributions to the vertex flow equation from channels other than the particle-particle one are neglected also above the critical scale, the irreducible vertex is nothing but the bare interaction, and Eq.~\eqref{eq_fRG+MF: gap equation fermionic} reduces to the standard Hartree-Fock approximation to the SSB state.
%
\subsection{Goldstone Theorem}
\label{sec_fRG+MF: Goldstone fermi}
%
In this subsection we prove that the present truncation of the flow equations fulfills the Goldstone theorem. We return our attention to the transverse vertex $V_\Phi$. Its flow equation, Eq.~\eqref{eq_fRG+MF: flow eq Vphi fermionic}, can be (formally) integrated, too, together with the initial condition $V_\Phi^{{\L_c}}=V^{{\L_c}}$, giving
%
\begin{equation}
\begin{split}
V_\Phi^\Lambda = \left[1+V^{{\L_c}}(\Pi^{{\L_c}}-\Pi_{22}^\Lambda)\right]^{-1}V^{{\L_c}} =\left[1-\widetilde{V}^{{\L_c}}\Pi_{22}^\Lambda\right]^{-1}\widetilde{V}^{{\L_c}}.
\end{split} \label{eq_fRG+MF: Vphi solution fermionic} \end{equation} % However, by using the relation % \begin{equation} \Pi_{22}^\Lambda(k,k')=\frac{F^\Lambda(k)}{\Delta^\Lambda(k)}\,\delta_{k,k'}, \label{eq_fRG+MF: Pi22=F/delta} \end{equation} % one can rewrite the matrix in square brackets on the right hand side of Eq.~\eqref{eq_fRG+MF: Vphi solution fermionic} as % \begin{equation} \delta_{k,k'}-\widetilde{V}^{{\L_c}}(k,k')\,\frac{F^\Lambda(k')}{\Delta^\Lambda(k')}. \end{equation} % Multiplying this expression by $\Delta^\Lambda(k')$ and integrating over $k'$, we see that it vanishes if the gap equation~\eqref{eq_fRG+MF: gap equation fermionic} is obeyed. Thus, the matrix in square brackets in Eq.~\eqref{eq_fRG+MF: Vphi solution fermionic} has a zero eigenvalue with the superfluid gap as eigenvector. In matrix notation this property can be expressed as % \begin{equation} \left[ 1 - \widetilde{V}^{{\L_c}}\Pi^\Lambda_{22}\right]\Delta^\Lambda=0. \end{equation} % Due to the presence of this zero eigenvalue, the matrix $[1-\widetilde{V}^{{\L_c}}\Pi_{22}^\Lambda]$ is not invertible. This is nothing but a manifestation of the Goldstone theorem. Indeed, due to the breaking of the global U(1) symmetry, transverse fluctuations of the order parameter become massless at $q=0$, leading to the divergence of the transverse two fermion interaction $V_\Phi$. % \section{Interplay of antiferromagnetism and superconductivity} \label{sec_fRG_MF: Phase diag} % In this section, we present an application of the fRG+MF approach to the phase diagram of the two-dimensional Hubbard model. We parametrize the vertex function by \emph{fully} taking into account its frequency dependence. In Refs.~\cite{Husemann2012,Vilardi2017} the frequency dependence of the vertex function has been shown to be important, as a static approximation underestimates the size of magnetic fluctuations, while overestimating the $d$-wave pairing scale. The present dynamic computation therefore extends and improves the static results obtained in Ref.~\cite{Yamase2016}. % \subsection{Symmetric regime} \label{sec_fRG_MF: symmetric regime} % In the symmetric regime, that is, for $\L>{\L_c}$, we perform a weak-coupling fRG calculation within a 1-loop truncation. For the vertex function, we start from the parametrization in Eq.~\eqref{eq_methods: channel decomp physical}, and simplify the dependencies of the three channels on $\mathbf{k}$, $\mathbf{k}'$. We perform a form factor expansion in these dependencies and retain only the $s$-wave terms for the magnetic and charge channels, and $s$-wave and $d$-wave terms for the pairing one. In formulas, we approximate % \begin{subequations} \begin{align} &\mathcal{M}^\L_{k,k'}(q)=\mathcal{M}^\L_{\nu\nu'}(q),\\ &\mathcal{C}^\L_{k,k'}(q)=\mathcal{C}^\L_{\nu\nu'}(q),\\ &\mathcal{S}^\L_{k,k'}(q)=\mathcal{S}^\L_{\nu\nu'}(q) + d_\mathbf{k} d_{\mathbf{k}'}\mathcal{D}^\L_{\nu\nu'}(q), \end{align} \end{subequations} % where the $d$-wave form factor reads as $d_\mathbf{k}=\cos k_x-\cos k_y$, and $q=(\mathbf{q},\Omega)$ is a collective variable comprising a momentum and a bosonic Matsubara frequency. Furthermore, we set the initial two-particle vertex equal to the bare interaction $U$, that is, in Eq.~\eqref{eq_methods: channel decomp physical} we set $\lambda(k_1',k_2',k_1)=U$. 
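To make this form-factor parametrization concrete, the sketch below reassembles the momentum dependence of the pairing channel from its $s$-wave and $d$-wave coefficients; the coefficient values passed in are dummy placeholders standing in for the actual channel functions.
\begin{verbatim}
import numpy as np

d = lambda kx, ky: np.cos(kx) - np.cos(ky)       # d-wave form factor

def S_channel(kx, ky, kxp, kyp, S_w, D_w):
    """Momentum dependence rebuilt from the s-wave (S_w) and d-wave
    (D_w) coefficients at fixed frequencies and transfer momentum q."""
    return S_w + d(kx, ky)*d(kxp, kyp)*D_w

# pure d-wave component: antinodal momenta give the extremal values
print(S_channel(0.0, np.pi, np.pi, 0.0, 0.0, 1.0))   # -> -4.0
\end{verbatim}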
The parametrization of the vertex function described above has been used in Ref.~\cite{Vilardi2017}, with a slightly different notation, and we refer to this publication and to Appendix~\ref{app: symm V} for the flow equations for $\mathcal{M}^\L$, $\mathcal{C}^\L$, $\mathcal{S}^\L$, and $\mathcal{D}^\L$.
%
\subsection{Symmetry broken regime}
%
In the $\L<{\L_c}$ regime, at least one of the symmetries of the Hubbard Hamiltonian is spontaneously broken. The flow in the symmetric phase~\cite{Metzner2012} and other methods~\cite{Scalapino2012} indicate antiferromagnetism, of N\'eel type or incommensurate, and $d$-wave pairing as the leading instabilities. Among all possible incommensurate antiferromagnetic orderings, we restrict ourselves to spiral order, exhaustively described in Chap.~\ref{chap: spiral DMFT}, characterized by the order parameter $\langle \overline{\psi}_{k,\uparrow}\psi_{k+Q,\downarrow} \rangle$, with $Q=(\mathbf{Q},0)$, and $\mathbf{Q}$ the ordering wave vector. N\'eel antiferromagnetism is then recovered by setting $\mathbf{Q}=(\pi,\pi)$. Allowing for the formation of spiral order and $d$-wave pairing, the quadratic part of the effective action takes the form
%
\begin{equation}
\begin{split}
\Gamma^{(2)\L}\left[\psi,\overline{\psi}\right]=&-\int_k\sum_\sigma \overline{\psi}_{k,\sigma}\left[\left(G_0^\L(k)\right)^{-1}-\Sigma^\L(k)\right]\psi_{k,\sigma}\\
&-\int_k \left[\left(\Delta_m^\L(k^*)\right)^*m(k)+\Delta_m^\L(k)m^*(k)\right]\\
&-\int_k \left[\left(\Delta_p^\L(k^*)\right)^*p(k)+\Delta_p^\L(k)p^*(k)\right],
\end{split}
\end{equation}
%
where $\Delta_m^\L(k)$ and $\Delta_p^\L(k)$ are the spiral and pairing anomalous self-energies, respectively, and $k^*=(\mathbf{k},-\nu)$. We have defined the bilinears $m(k)$ and $p(k)$ as
%
\begin{subequations}
\begin{align}
&m(k) = \overline{\psi}_{k,\uparrow}\psi_{k+Q,\downarrow}, \hskip 1.05cm m^*(k)= \overline{\psi}_{k+Q,\downarrow}\psi_{k,\uparrow},\\
&p(k) = \psi_{k,\uparrow}\psi_{-k,\downarrow}, \hskip 1.5cm p^*(k)=\overline{\psi}_{-k,\downarrow}\overline{\psi}_{k,\uparrow}.
\end{align}
\end{subequations}
%
From now on, we neglect the normal self-energy $\Sigma^\L(k)$ both above and below ${\L_c}$. It is more convenient to employ a 4-component Nambu-like basis, reading as
%
\begin{equation}
\Psi_k = \left(
\begin{array}{c}
\psi_{k,\uparrow} \\
\overline{\psi}_{-k,\downarrow} \\
\psi_{k+Q,\downarrow}\\
\overline{\psi}_{-k-Q,\uparrow}
\end{array}
\right)\hskip2cm
\overline{\Psi}_k = \left(
\begin{array}{c}
\overline{\psi}_{k,\uparrow} \\
\psi_{-k,\downarrow} \\
\overline{\psi}_{k+Q,\downarrow}\\
\psi_{-k-Q,\uparrow}
\end{array}
\right).
\end{equation}
%
In this way, the quadratic part of the action can be expressed as
%
\begin{equation}
\Gamma^{(2)\L}\left[\Psi,\overline{\Psi}\right] = -\int_k \overline{\Psi}_k \left[\mathbf{G}^\L(k)\right]^{-1}\Psi_k,
\end{equation}
%
with
%
\begin{equation}
\left[\mathbf{G}^\L(k)\right]^{-1} = {\footnotesize\left(
\begin{array}{cccc}
\left[G_0^\L(k)\right]^{-1} & \Delta_p^\L(k) & \Delta_m^\L(k) & 0\\
\left[\Delta_p^\L(k^*)\right]^* & -\left[G_0^\L(-k)\right]^{-1} & 0 & -\Delta_m^\L(-k-Q)\\
\left[\Delta_m^\L(k^*)\right]^* & 0 & \left[G_0^\L(k+Q)\right]^{-1} & -\Delta_p^\L(-k-Q) \\
0 & -\left[\Delta_m^\L(-k^*-Q^*)\right]^* & -\left[\Delta_p^\L(-k^*-Q^*)\right]^* & -\left[G_0^\L(-k-Q)\right]^{-1}
\end{array}
\right)}.
\label{eq_fRG+MF: 4x4 G}
\end{equation}
%
The fRG+MF approach introduced in Sec.~\ref{sec_fRG_MF: fRG+MF equations} can be easily generalized to the present case.
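A minimal numerical sketch of Eq.~\eqref{eq_fRG+MF: 4x4 G} is given below: it builds the $4\times 4$ inverse propagator for static, real test gaps and inverts it, exposing the normal and anomalous components used in the following. The gap amplitudes and the wave vector are illustrative placeholders, not self-consistent values.
\begin{verbatim}
import numpy as np

t, T = 1.0, 0.1
Q = np.array([np.pi - 0.2*np.pi, np.pi])        # spiral wave vector (eta = 0.1)
xi  = lambda k: -2.0*t*(np.cos(k[0]) + np.cos(k[1]))
dff = lambda k: np.cos(k[0]) - np.cos(k[1])     # d-wave form factor
Dm, Dp0 = 0.4, 0.1                              # static, real test gaps

def Ginv4(k, nu):
    g  = lambda kk, nn: 1j*nn - xi(kk)          # [G_0]^{-1}, no regulator
    Dp = lambda kk: Dp0*dff(kk)
    return np.array([
        [g(k, nu),  Dp(k),       Dm,          0.0        ],
        [Dp(k),    -g(-k, -nu),  0.0,        -Dm         ],
        [Dm,        0.0,         g(k+Q, nu), -Dp(-k-Q)   ],
        [0.0,      -Dm,         -Dp(-k-Q),   -g(-k-Q,-nu)]], dtype=complex)

k0, nu0 = np.array([0.3, 0.2]), np.pi*T
G4 = np.linalg.inv(Ginv4(k0, nu0))
G, Fp, Fm = G4[0, 0], G4[0, 1], G4[0, 2]        # entries entering the bubbles
print(G, Fp, Fm)
\end{verbatim}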
Neglecting the $q\neq0$ ($q\neq Q$) contributions to the pairing (spiral) channel, the quartic part of the effective action reads as
%
\begin{equation}
\begin{split}
\Gamma^{(4)\L}\left[\psi,\overline{\psi}\right]=
&+\frac{1}{2}\int_{k,k'} V_m^\L(k,k')\left[m^*(k)m(k')+m^*(k')m(k)\right]\\
&+\frac{1}{2}\int_{k,k'} W_m^\L(k,k')\left[m^*(k)m^*(k')+m(k')m(k)\right]\\
&+\frac{1}{2}\int_{k,k'} V_p^\L(k,k')\left[p^*(k)p(k')+p^*(k')p(k)\right]\\
&+\frac{1}{2}\int_{k,k'} W_p^\L(k,k')\left[p^*(k)p^*(k')+p(k')p(k)\right],
\end{split}
\end{equation}
%
where $W_m^\L$ and $W^\L_p$ represent anomalous interaction terms in the SSB phase. At the critical scale, the normal interactions are given by
%
\begin{subequations}
\begin{align}
&V_m^{\L_c}(k,k')=V^{\L_c}_{\uparrow\downarrow\downarrow\uparrow}(k+Q,k',k),\\
&V_p^{\L_c}(k,k')=\frac{1}{2}\left[V^{\L_c}_{\uparrow\downarrow\uparrow\downarrow}(k,-k,k')-V^{\L_c}_{\uparrow\downarrow\downarrow\uparrow}(k,-k,k')\right],
\end{align}
\end{subequations}
%
where the pairing vertex has been projected onto the singlet component. One can then define the longitudinal and transverse interactions as
%
\begin{subequations}
\begin{align}
&\mathcal{A}_m^\L(k,k') = V_m^\L(k,k') + W_m^\L(k,k'),\\
&\Phi_m^\L(k,k') = V_m^\L(k,k') - W_m^\L(k,k'),\\
&\mathcal{A}_p^\L(k,k') = V_p^\L(k,k') + W_p^\L(k,k'),\\
&\Phi_p^\L(k,k') = V_p^\L(k,k') - W_p^\L(k,k').
\end{align}
\label{eq_fRG+MF: A e Phi m e p}
\end{subequations}
%
While $\Phi_m^\L$ and $\Phi_p^\L$ are decoupled from the flow equations for the gap functions within the fRG+MF approach, they are crucial for the fulfillment of the Goldstone theorem, as shown in Sec.~\ref{sec_fRG+MF: Goldstone fermi}. In line with the parametrization used in the symmetric regime, we approximate
%
\begin{subequations}
\begin{align}
&\Delta_m^\L(k) = \Delta_m^\L(\nu),\\
&\Delta_p^\L(k) = \Delta_p^\L(\nu)d_\mathbf{k},
\end{align}
\end{subequations}
%
and
%
\begin{subequations}
\begin{align}
&\mathcal{A}_m^\L(k,k')=\mathcal{A}_m^\L(\nu,\nu'),\\
&\mathcal{A}_p^\L(k,k')=\mathcal{A}_p^\L(\nu,\nu')d_\mathbf{k} d_{\mathbf{k}'}.
\end{align}
\end{subequations}
%
Notice that we have not considered an $s$-wave term for the pairing gap, because in the repulsive Hubbard model the interaction in the $s$-wave particle-particle channel is always repulsive. Within the matrix notation previously introduced, the flow equations for the longitudinal interactions read as
%
\begin{equation}
\partial_\L \mathcal{A}_X^\L = \mathcal{A}_X^\L \left[\partial_\L\Pi_X^\L\right] \mathcal{A}_X^\L,
\end{equation}
%
with $X=m,p$. The longitudinal bubbles are defined as
%
\begin{subequations}
\begin{align}
& \Pi_m^\L(\nu,\nu') = T\delta_{\nu\nu'}\int_\mathbf{k} \left\{G^\L(k)G^\L(k+Q)+\left[F_m^\L(k)\right]^2\right\},\\
& \Pi_p^\L(\nu,\nu') = T\delta_{\nu\nu'}\int_\mathbf{k} d_\mathbf{k}^2\left\{-G^\L(k)G^\L(-k)+\left[F_p^\L(k)\right]^2\right\},
\end{align}
\end{subequations}
%
with
%
\begin{subequations}
\begin{align}
&G^\L(k) = \left[\mathbf{G}^\L(k)\right]_{11},\\
&F_m^\L(k) = \left[\mathbf{G}^\L(k)\right]_{13},\\
&F_p^\L(k) = \left[\mathbf{G}^\L(k)\right]_{12},
\end{align}
\end{subequations}
%
where $\mathbf{G}^\L(k)$ is obtained by inverting Eq.~\eqref{eq_fRG+MF: 4x4 G}.
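For completeness, the sketch below assembles the longitudinal bubbles by a crude Brillouin-zone average of the propagator components, reusing the $4\times 4$ inversion of the previous sketch. As an assumption of this sketch, $G^\L(k+Q)$ is identified with the third diagonal entry of $\mathbf{G}^\L(k)$; grid size and gap values are placeholders.
\begin{verbatim}
import numpy as np

t, T = 1.0, 0.1
Q = np.array([np.pi - 0.2*np.pi, np.pi])
xi  = lambda k: -2.0*t*(np.cos(k[0]) + np.cos(k[1]))
dff = lambda k: np.cos(k[0]) - np.cos(k[1])
Dm, Dp0 = 0.4, 0.1

def Ginv4(k, nu):                       # as in the previous sketch
    g  = lambda kk, nn: 1j*nn - xi(kk)
    Dp = lambda kk: Dp0*dff(kk)
    return np.array([
        [g(k, nu),  Dp(k),       Dm,          0.0        ],
        [Dp(k),    -g(-k, -nu),  0.0,        -Dm         ],
        [Dm,        0.0,         g(k+Q, nu), -Dp(-k-Q)   ],
        [0.0,      -Dm,         -Dp(-k-Q),   -g(-k-Q,-nu)]], dtype=complex)

def bubbles(nu, N=16):
    """Brillouin-zone average of the integrands of Pi_m and Pi_p."""
    ks = -np.pi + 2.0*np.pi*(np.arange(N) + 0.5)/N
    Pm = Pp = 0.0j
    for kx in ks:
        for ky in ks:
            k  = np.array([kx, ky])
            Gp = np.linalg.inv(Ginv4(k, nu))      # G(k)
            Gm = np.linalg.inv(Ginv4(-k, -nu))    # G(-k)
            Pm += T*(Gp[0, 0]*Gp[2, 2] + Gp[0, 2]**2)
            Pp += T*dff(k)**2*(-Gp[0, 0]*Gm[0, 0] + Gp[0, 1]**2)
    return Pm/N**2, Pp/N**2

print(bubbles(np.pi*T))
\end{verbatim}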
As shown in Sec.~\ref{sec_fRG_MF: fRG+MF equations}, the flow equations for $\mathcal{A}_X^\L$ can be analytically solved, giving
%
\begin{equation}
\mathcal{A}_X^\L = \left[1-\widetilde{V}^{\L_c}_X\Pi_X^\L\right]^{-1}\widetilde{V}^{\L_c}_X,
\label{eq_fRG+MF: integrated Ax}
\end{equation}
%
with the irreducible vertex at the critical scale reading as
%
\begin{equation}
\widetilde{V}^{\L_c}_X = \left[1+V^{\L_c}_X\Pi_X^{\L_c}\right]^{-1}V^{\L_c}_X.
\label{eq_fRG+MF: irr V fermionic}
\end{equation}
%
The flow equations for the gap functions are given by
%
\begin{subequations}
\begin{align}
&\partial_\L\Delta_m^\L(\nu)=-T\sum_{\nu'}\int_\mathbf{k} \mathcal{A}_m^\L(\nu,\nu')\widetilde{\partial}_\L F_m^\L(\mathbf{k},\nu'),\\
&\partial_\L\Delta_p^\L(\nu)=-T\sum_{\nu'}\int_\mathbf{k} \mathcal{A}_p^\L(\nu,\nu')\widetilde{\partial}_\L F_p^\L(\mathbf{k},\nu')d_\mathbf{k},
\end{align}
\label{eq_fRG+MF: gap flow eqs}
\end{subequations}
%
with $\widetilde{\partial}_\L$ the single-scale derivative. The integration of the above equations returns
%
\begin{subequations}
\begin{align}
&\Delta_m^\L(\nu) = -T\sum_{\nu'}\int_\mathbf{k} \widetilde{V}_m^{\L_c}(\nu,\nu')F_m^\L(\mathbf{k},\nu'),\\
&\Delta_p^\L(\nu) = -T\sum_{\nu'}\int_\mathbf{k} \widetilde{V}_p^{\L_c}(\nu,\nu')F_p^\L(\mathbf{k},\nu')d_\mathbf{k}.
\end{align}
\end{subequations}
%
Since it is difficult to converge a direct solution of the above nonlinear integral equations when both order parameters are finite, we compute the gap functions from their flow equations~\eqref{eq_fRG+MF: gap flow eqs}, plugging in the integrated form of $\mathcal{A}_X^\L$ given in Eq.~\eqref{eq_fRG+MF: integrated Ax}. By means of global transformations, one can impose $[\Delta_m(-\nu)]^*=\Delta_m(\nu)$ and $[\Delta_p(-\nu)]^*=\Delta_p(\nu)$. Since the relation $\Delta_p(-\nu)=\Delta_p(\nu)$, which follows from the singlet nature of the pairing, is general, one can always remove the imaginary part of the pairing gap function. Notice that for the magnetic gap this is in general not possible. Concerning the spiral wave vector $\mathbf{Q}$, we fix it to the momentum at which the magnetic channel $\mathcal{M}^\L$ peaks (or even diverges) at $\L={\L_c}$.
%
\subsection{Order parameters}
%
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{gaps.png}
\caption{Gap function amplitudes at the lowest fermionic Matsubara frequency $\nu_0=\pi T$ as functions of the hole doping $p$ at $T=0.027t$. Static results are shown for comparison. The factor 2 in the pairing gap is due to the fact that $\mathrm{max}_\mathbf{k}(d_\mathbf{k})=2$.}
\label{fig_fRG+MF: gaps vs p}
\end{figure}
%
We run a fRG flow in the symmetric phase keeping, for each of the channel functions $\mathcal{M}^\L$, $\mathcal{C}^\L$, $\mathcal{S}^\L$, and $\mathcal{D}^\L$, about 90 values for each of the three frequency arguments and 320 patches in the Brillouin zone for the momentum dependence. The critical scale ${\L_c}$ has been determined as the scale where the largest of the channels exceeds the value of $400t$ ($t$ is the nearest neighbor hopping). The electron density $n$ has been calculated along the flow from the first diagonal entry of the matrix propagator~\eqref{eq_fRG+MF: 4x4 G} and kept fixed at each scale $\L$ by tuning the chemical potential $\mu$. The chosen Hubbard model parameters are $t'=-0.16t$, $t''=0$, and $U=3t$. The lowest temperature reached by our dynamical calculations is $T=0.027t$. When not explicitly stated, we use $t$ as our energy unit throughout this section.
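The matrix structure of Eqs.~\eqref{eq_fRG+MF: irr V fermionic} and~\eqref{eq_fRG+MF: integrated Ax} is conveniently handled by linear solves rather than explicit inversions. The toy sketch below, with random placeholder matrices standing in for the actual fRG output, also checks the equivalence of the two forms of the integrated flow equation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 8                                      # size of the frequency grid
V = 0.1*rng.standard_normal((n, n)); V = 0.5*(V + V.T)
Pi_c = np.diag(0.2*rng.random(n))          # bubble at the critical scale
Pi_L = np.diag(0.3*rng.random(n))          # bubble at a scale below it
I = np.eye(n)

V_irr = np.linalg.solve(I + V @ Pi_c, V)   # irreducible vertex
A_X   = np.linalg.solve(I - V_irr @ Pi_L, V_irr)
# equivalent single-step form of the integrated flow equation:
print(np.allclose(A_X, np.linalg.solve(I + V @ (Pi_c - Pi_L), V)))  # True
\end{verbatim}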
All quantities without the superscript $\L$ have to be understood as computed at the final scale $\L=0$.
%
%
In Fig.~\ref{fig_fRG+MF: gaps vs p}, we show the order parameters computed at the lowest Matsubara frequency $\nu_0=\pi T$, as functions of the hole doping $p=1-n$, and at fixed temperature $T=0.027t$. For the pairing gap, we consider its maximum in momentum space, that is, $\Delta_p(\nu)$ multiplied by a factor 2, coming from $\mathrm{max}_\mathbf{k}(d_\mathbf{k})=2$, occurring at $\mathbf{k}=(0,\pi)$ or symmetry-related momenta. While $\Delta_p(\nu)$ is purely real, $\Delta_m(\nu)$ has an imaginary part, whose continuation to real frequencies vanishes for $\nu\to 0$; $\mathrm{Im}\Delta_m(\nu_0)$ is therefore always small at low $T$. Magnetic order is found from half filling up to about $p=0.20$, with the size of the gap monotonically decreasing upon doping, and with spiral order replacing N\'eel order at about $p=0.14$. The ordering wave vector is always of the form $\mathbf{Q}=(\pi-2\pi\eta,\pi)$, or symmetry related, with the incommensurability $\eta$ exhibiting a sudden jump at the N\'eel-to-spiral transition. A sizable $d$-wave pairing gap is found for dopings between $0.08$ and $0.20$, coexisting with antiferromagnetic ordering, thereby confirming the previous static results obtained in Refs.~\cite{Reiss2007,Wang2014,Yamase2016}. From Fig.~\ref{fig_fRG+MF: gaps vs p} we deduce that the inclusion of dynamic effects enhances the order parameter magnitudes. This is due to multiple effects. First of all, the function $\mathcal{M}^\L_{\nu\nu'}(\mathbf{Q},0)$ has a minimum at $\nu=\nu'=\pm\pi T$, which in the static approximation is extended to the whole frequency range, leading to reduced magnetic correlations. Secondly, the static approximation largely overestimates the screening of the magnetic channel by the other channels for $\L>{\L_c}$~\cite{Vilardi2017,Tagliavini2019}. On the other hand, the function $\mathcal{D}^\L_{\nu\nu'}(0)$ is maximal at $\nu=\nu'=\pm\pi T$ and rapidly decays to zero for large $\nu$, $\nu'$. This implies that, in contrast to the magnetic channel, $d$-wave correlations are enhanced in the static limit~\cite{Husemann2012}. In the static approximation, however, as previously discussed, the magnetic fluctuations providing the seed for the pairing are weaker, so that overall the $d$-wave pairing gap is mildly enhanced when dynamical effects are included.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{gaps_vs_nu.png}
\caption{Frequency dependence of the real parts of the gap functions at $p=0.12$ and $T=0.027t$.}
\label{fig_fRG+MF: gaps vs nu}
\end{figure}
%
A typical behavior of the gap functions as functions of the Matsubara frequency $\nu$ is shown in Fig.~\ref{fig_fRG+MF: gaps vs nu}. Similarly to what has been discussed in Chap.~\ref{chap: spiral DMFT}, the magnetic gap interpolates between its value at the Fermi level and the constant Hartree-Fock-like expression $Um$, with $m$ the onsite magnetization, for $\nu\to\infty$. By contrast, the $d$-wave pairing gap is maximal for $\nu\to0$ and rapidly decays to zero for large frequencies, consistent with the fact that the Hartree-Fock approximation would yield no $d$-wave pairing at all in the Hubbard model. Finally, the magnetic gap function shows a generally small imaginary part (not shown) obeying $\mathrm{Im}\Delta_m(-\nu)=-\mathrm{Im}\Delta_m(\nu)$, which therefore extrapolates to zero for $\nu\to 0$.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.75 \textwidth]{flow_gap_A.png}
\caption{Scale dependence of the real part of the magnetic gap function $\mathrm{Re}\Delta_m(\nu)$ (blue dots) and longitudinal two-particle interaction $\mathrm{Re}\mathcal{A}_m(\nu,\nu')$ (red diamonds) at the lowest Matsubara frequency $\nu_0=\pi T$, at doping $p=0.12$, and temperature $T=0.027t$.}
\label{fig_fRG+MF: gap vs Lambda}
\end{figure}
%
In Fig.~\ref{fig_fRG+MF: gap vs Lambda} we show the behavior of the magnetic gap function and of the longitudinal magnetic interaction computed at $\nu_0=\pi T$ as functions of the fRG scale. In the symmetric phase, the effective interaction grows until it diverges at the critical scale $\L={\L_c}$. In the SSB regime, a magnetic order parameter forms, leading to a quick decrease of the longitudinal interaction, which saturates to a finite value at the end of the flow. By contrast, the transverse interaction (not shown) remains divergent for all $\L<{\L_c}$, in agreement with the Goldstone theorem. The flow of the analogous quantities in the pairing channel looks similar, but the divergence occurs at a scale smaller than ${\L_c}$, as the leading instability in the present parameter regime is always the magnetic one.
%
\subsection{Berezinskii-Kosterlitz-Thouless transition and phase diagram}
%
In this section, we compute the superfluid phase stiffness, which enables us to estimate the Berezinskii-Kosterlitz-Thouless (BKT), or simply Kosterlitz-Thouless (KT), transition temperature $T_\mathrm{KT}$ for the onset of superconductivity~\cite{Berezinskii1971,Kosterlitz1973}. $T_\mathrm{KT}$, together with the temperature for the onset of magnetism, $T^*$, allows us to draw a phase diagram for the Hubbard model at intermediate coupling. Coupling the system to an external U(1) electromagnetic gauge field $\boldsymbol{A}(\mathbf{r},t)$, for example via the Peierls substitution (see Eq.~\eqref{eq_spiral: Peierls subst}), one can compute the electromagnetic response kernel $K_{\alpha\alpha'}(\mathbf{q},\omega)$, defined via
%
\begin{equation}
j_\alpha(\mathbf{q},\omega)=-\sum_{\alpha'}K_{\alpha\alpha'}(\mathbf{q},\omega)A_{\alpha'}(\mathbf{q},\omega),
\end{equation}
%
with $j_\alpha$ the electromagnetic current. The superfluid stiffness is then given by
%
\begin{equation}
J_{\alpha\alpha'} = \frac{1}{(2e)^2}\lim_{\mathbf{q}\to\mathbf{0}}K_{\alpha\alpha'}(\mathbf{q},0),
\end{equation}
%
with $e$ the electron charge. If the \emph{global} U(1) charge symmetry is broken via the formation of a pairing gap, the limit in the equation above is finite, and one finds a finite stiffness. Writing the superconducting order parameter as $\Phi(x)=\sqrt{\alpha^2+\varrho(x)}e^{2ie\theta(x)}$, with $\alpha=\langle\Phi(x)\rangle\in \mathbb{R}$, and neglecting the amplitude fluctuations described by $\varrho(x)$, one can derive a long-wavelength classical effective action for the phase fluctuations only,
%
\begin{equation}
\mathcal{S}_\mathrm{eff}[\theta]= \frac{1}{2}\sum_{\alpha\alpha'}J_{\alpha\alpha'} \int \!d^2 \mathbf{x}\,[\nabla_\alpha \theta(\mathbf{x})] [\nabla_{\alpha'} \theta(\mathbf{x})],
\label{eq_fRG+MF: S BKT}
\end{equation}
%
where $\theta(\mathbf{x})\in[0,2\pi]$, and the superfluid stiffness plays the role of a coupling constant.
This action is well known to display a \emph{topological phase transition} at a finite temperature $T_\mathrm{KT}$, above which topological vortex configurations proliferate and reduce $J_{\alpha\alpha'}$ to zero, causing an exponential decay of the correlation function. By contrast, for $0<T<T_\mathrm{KT}$, the vortices are bound in pairs and form a \emph{quasi-long-range ordered} phase, marked by a power-law decay of the order parameter correlation function. The power-law exponent scales linearly with temperature and eventually vanishes at $T=0$, marking the onset of true off-diagonal long-range order (ODLRO). In agreement with the Mermin-Wagner theorem~\cite{Mermin1966}, the $0<T<T_\mathrm{KT}$ phase does not exhibit ODLRO, but it possesses an infinite correlation length, due to the slower-than-exponential decay of the correlation function. For isotropic systems, that is, when $J_{\alpha\alpha'}=J\delta_{\alpha\alpha'}$, the transition temperature can be computed by the universal formula~\cite{ChaikinLubensky}
%
\begin{equation}
T_\mathrm{KT}=\frac{\pi}{2}J(T_\mathrm{KT}).
\label{eq_fRG+MF: T KT}
\end{equation}
%
If the system is anisotropic, one can introduce transformed spatial coordinates
%
\begin{equation}
x'_{\alpha}=\sum_{\alpha'=1}^2\left[\boldsymbol J^{\frac{1}{2}}\right]_{\alpha\alpha'}x_{\alpha'},
\end{equation}
%
with $\boldsymbol J$ the stiffness tensor. Action~\eqref{eq_fRG+MF: S BKT} is then isotropic in the new coordinates, with a new stiffness given by $J_\mathrm{eff}=\sqrt{\mathrm{det}[\boldsymbol J]}$, coming from the Jacobian of the coordinate change. The KT temperature for an anisotropic system therefore reads as
%
\begin{equation}
T_\mathrm{KT}=\frac{\pi}{2}\sqrt{\mathrm{det}[\boldsymbol{J}(T_\mathrm{KT})]}.
\label{eq_fRG+MF: T KT anisotr}
\end{equation}
%
The authors of Ref.~\cite{Metzner2019} have derived formulas for the superfluid stiffness in a \emph{mean-field} state in which antiferromagnetism and superconductivity coexist. Since these equations have been derived in the static limit, we compute the superfluid phase stiffness by plugging into them the real parts of the gap functions calculated at the lowest Matsubara frequency. For a spiral state with $\mathbf{Q}=(\pi-2\pi\eta,\pi)$, we have $J_{xx}\neq J_{yy}$ and $J_{xy}=J_{yx}=0$, while for a N\'eel state, $J_{xx}=J_{yy}$ and $J_{xy}=J_{yx}=0$.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{phase_diag.png}
\caption{Doping-temperature phase diagram with the Kosterlitz-Thouless temperature, $T_\mathrm{KT}$, and the antiferromagnetic ($T^*$) and pairing ($T_p$) onset temperatures. The fading colors at low temperatures indicate that our dynamical fRG code is not able to access temperatures lower than $T=0.027t$.}
\label{fig_fRG+MF: phase diag}
\end{figure}
%
In Fig.~\ref{fig_fRG+MF: phase diag}, we plot the computed $T_\mathrm{KT}$ as a function of the hole doping $p$, together with $T^*$, that is, the lowest temperature at which the flow reaches $\L=0$ without encountering a divergence in the magnetic channel, and $T_p$, the temperature at which the pairing gap vanishes. We notice a large difference between $T_\mathrm{KT}$ and $T_p$, marking a sizable window ($T_\mathrm{KT}<T<T_p$) of the phase diagram where strong superconducting fluctuations open a gap in the spectrum, but the order parameter correlation function still decays exponentially, so that true superconducting properties are absent; they emerge only for $T<T_\mathrm{KT}$.
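Eq.~\eqref{eq_fRG+MF: T KT anisotr} is an implicit equation for $T_\mathrm{KT}$, which must be solved numerically. A minimal sketch of such a solver is shown below, with a made-up stiffness profile $J(T)$ standing in for the mean-field result; it is not the code used for Fig.~\ref{fig_fRG+MF: phase diag}.
\begin{verbatim}
import numpy as np

def J_tensor(T, Tp=0.15, J0=0.08):
    """Toy anisotropic stiffness vanishing at the pairing onset Tp."""
    Jxx = J0*max(Tp - T, 0.0)/Tp
    return np.diag([Jxx, 1.2*Jxx])           # mild x-y anisotropy

f = lambda T: T - 0.5*np.pi*np.sqrt(np.linalg.det(J_tensor(T)))
lo, hi = 1e-6, 0.15                           # f(lo) < 0 < f(hi)
for _ in range(60):                           # bisection on f(T) = 0
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
print("T_KT ~", 0.5*(lo + hi))
\end{verbatim}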
The Mermin-Wagner theorem prevents the formation of long-range order at finite temperature. We therefore expect that, upon including order parameter fluctuations, the antiferromagnetic state found for $T<T^*$ will be transformed into a \emph{pseudogap state} with short-range correlations. We can thus interpret $T^*$ as the temperature for the onset of pseudogap behavior. This topic is the subject of Chap.~\ref{chap: pseudogap}.
%
\section{Broken symmetry phase: bosonic formalism}
\label{sec_fRG+MF: bosonic formalism}
%
The SSB phase can also be accessed via the introduction of a bosonic field that describes the fluctuations of the order parameter~\cite{Baier2004,Strack2008,Friederich2011,Obert2013} and whose finite expectation value is related to the formation of anomalous components in the fermionic propagator. Similarly to Sec.~\ref{sec_fRG_MF: fRG+MF equations}, we focus here on superconducting order, while generalizations to other order parameters are straightforward. In order to introduce this bosonic field, we express the vertex at the critical scale in the following form:
%
\begin{equation}
V^{{\L_c}}(k,k';q)=\frac{h^{{\L_c}}(k;q)\,h^{{\L_c}}(k';q)}{m^{{\L_c}}(q)}+\mathcal{Q}^{{\L_c}}(k,k';q).
\label{eq_fRG+MF: vertex at Lambda crit}
\end{equation}
%
We assume from now on that the divergence of the vertex, related to the appearance of a massless mode, is absorbed into the first term, while the second one remains finite. In other words, we assume that at the critical scale ${\L_c}$, at which the vertex is formally divergent, the (inverse) bosonic propagator $m^{{\L_c}}(q)$ vanishes at zero frequency and momentum, while the Yukawa coupling $h^{{\L_c}}(k;q)$ and the residual two fermion interaction $\mathcal{Q}^{{\L_c}}(k,k';q)$ remain finite. In Sec.~\ref{sec_fRG+MF: vertex bosonization} we introduce a systematic scheme to extract the decomposition~\eqref{eq_fRG+MF: vertex at Lambda crit} from a given vertex at the critical scale.
%
\subsection{Hubbard-Stratonovich transformation and MF-truncation}
%
Since the effective action at a given scale $\Lambda$ can be viewed as a bare action with bare propagator $G_0-G_0^\Lambda$ (with $G_0^\Lambda$ the regularized bare propagator)\footnote{One can prove this by considering the effective interaction functional $\mathcal{V}$, as shown in Ref.~\cite{Metzner2012}.}, one can decouple the factorized (and singular) part of the vertex at ${\L_c}$ via a Gaussian integration, thus introducing a bosonic field. By adding source terms which couple linearly to this field and to the fermionic ones, one obtains the generating functional of connected Green's functions, whose Legendre transform at the critical scale reads as
%
\begin{equation}
\begin{split}
\Gamma^{{\L_c}}[\psi,\overline{\psi},\phi]=
&-\int_{k,\sigma} \overline{\psi}_{k,\sigma} \left[G^{{\L_c}}(k)\right]^{-1} \psi_{k,\sigma} -\int_{q} \phi^*_q \, m^{{\L_c}}(q)\, \phi_q\\
&+\int_{k,k',q}\mathcal{Q}^{{\L_c}}(k,k';q)\,\overline{\psi}_{k,\uparrow} \overline{\psi}_{q-k,\downarrow} \psi_{q-k',\downarrow} \psi_{k',\uparrow}\\
&+\int_{k,q}h^{{\L_c}}(k;q)\left[ \overline{\psi}_{k,\uparrow} \overline{\psi}_{q-k,\downarrow} \phi_q + \text{h.c.}\right],
\end{split}
\label{eq_fRG+MF: gamma lambda crit bos}
\end{equation}
%
where $\phi$ represents the expectation value (in the presence of sources) of the Hubbard-Stratonovich field. Note that we have not introduced an interaction between equal-spin fermions.
Indeed, since we are focusing on a spin singlet superconducting order parameter, within the MF approximation this interaction does not contribute to the flow equations. The Hubbard-Stratonovich transformation introduced in Eq.~\eqref{eq_fRG+MF: gamma lambda crit bos} is free of the so-called Fierz ambiguity, according to which different ways of decoupling the bare interaction can lead to different mean-field results for the gap (see, for example, Ref.~\cite{Baier2004}). Indeed, through the inclusion of the residual two fermion interaction, we are able to recover the same equations that one would get without bosonizing the interactions, as proven in Sec.~\ref{subsec_fRG+MF: equivalence bos and fer}. In essence, the only ambiguity lies in selecting what to assign to the bosonized part of the vertex and what to $\mathcal{Q}$; since both are kept all along the flow, the results do not depend on this choice.

We introduce Nambu spinors as in Eq.~\eqref{eq_fRG+MF: Nambu spinors} and decompose the bosonic field into its (flowing) expectation value plus longitudinal ($\sigma$) and transverse ($\pi$) fluctuations~\cite{Strack2008,Obert2013}:
%
\begin{subequations}
\begin{align}
&\phi_q=\alpha^\Lambda\,\delta_{q,0} + \sigma_q + i\, \pi_q, \\
&\phi^*_q=\alpha^\Lambda\,\delta_{q,0} + \sigma_{-q} - i\, \pi_{-q},
\end{align}
\end{subequations}
%
where we have chosen $\alpha^\Lambda$ to be real. For the effective action at $\Lambda<{\L_c}$ in the SSB phase, we use the following \textit{ansatz}
%
\begin{equation}
\begin{split}
\Gamma^{\Lambda}_\text{SSB}[\Psi,\overline{\Psi},\sigma,\pi]=\Gamma^\Lambda_{\Psi^2}+\Gamma^\Lambda_{\sigma^2}+\Gamma^\Lambda_{\pi^2}
+\Gamma^\Lambda_{\Psi^2\sigma} + \Gamma^\Lambda_{\Psi^2\pi} +\Gamma^\Lambda_{\Psi^4},
\end{split}
\label{eq_fRG+MF: bosonic eff action}
\end{equation}
%
where the first three quadratic terms are given by
%
\begin{subequations}
\begin{align}
&\Gamma^\Lambda_{\Psi^2}=-\int_{k} \overline{\Psi}_{k} \left[\mathbf{G}^{\Lambda}(k)\right]^{-1} \Psi_{k},\\
&\Gamma^\Lambda_{\sigma^2}=-\frac{1}{2}\int_q \sigma_{-q}\,m_\sigma^{\Lambda}(q)\, \sigma_q,\\
&\Gamma^\Lambda_{\pi^2}=-\frac{1}{2}\int_q \pi_{-q}\,m_\pi^{\Lambda}(q)\, \pi_q,
\end{align}
\end{subequations}
%
and the fermion-boson interactions are
%
\begin{subequations}
\begin{align}
&\Gamma^\Lambda_{\Psi^2\sigma}=\int_{k,q}h^{\Lambda}_\sigma(k;q)\left\{S^1_{k,-q}\,\sigma_q+ \text{h.c.} \right\},\\
&\Gamma^\Lambda_{\Psi^2\pi}=\int_{k,q}h^{\Lambda}_\pi(k;q)\left\{S^2_{k,-q}\,\pi_q+ \text{h.c.} \right\},
\end{align}
\end{subequations}
%
with $S^\alpha_{k,q}$ as in Eq.~\eqref{eq_fRG+MF: fermion bilinear}. The residual two fermion interaction term is written as
%
\begin{equation}
\begin{split}
\Gamma^\Lambda_{\Psi^{4}}= \int_{k,k',q}\mathcal{A}^{\Lambda}(k,k';q)\,S^1_{k,q}\,S^1_{k',-q} +\int_{k,k',q}\hskip - 5mm\Phi^{\Lambda}(k,k';q) \,S^2_{k,q}\,S^2_{k',-q}.
\end{split}
\end{equation}
%
Notice that in the above equation the terms $\mathcal{A}^\L$ and $\Phi^\L$ have a different physical meaning from those in Eq.~\eqref{eq_fRG+MF: A e Phi m e p}. While the former represent only a residual interaction term, the latter embody \emph{all} the interaction processes in the longitudinal and transverse channels. As in the fermionic formalism, in the truncation in Eq.~\eqref{eq_fRG+MF: bosonic eff action} we have neglected any type of longitudinal-transverse fluctuation mixing in the Yukawa couplings, bosonic propagators, and two fermion interactions, because such terms vanish identically at $q=0$.
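A simple way to see how a factorization of the form~\eqref{eq_fRG+MF: vertex at Lambda crit} can be extracted in practice is sketched below for a synthetic frequency-dependent channel: the separable part is built from the large-frequency asymptotics, anticipating the systematic scheme of Sec.~\ref{sec_fRG+MF: vertex bosonization}. All inputs are toy placeholders, and the largest frequencies of the grid serve as a proxy for the $\nu\to\infty$ limits.
\begin{verbatim}
import numpy as np

T, nmax = 0.1, 32
nu = np.pi*T*(2*np.arange(-nmax, nmax) + 1)     # fermionic Matsubara grid
# toy channel: separable part plus a rest decaying in all directions
yuk = 1.0 + 0.5/(1.0 + nu**2)
phi = yuk[:, None]*yuk[None, :] + 0.3*np.exp(-np.abs(nu)[:, None]
                                             - np.abs(nu)[None, :])

corner = phi[0, 0]                               # proxy for phi(inf, inf)
separable = phi[:, 0][:, None]*phi[0, :][None, :]/corner
rest = phi - separable                           # finite, decaying remainder
print("rest on the frequency boundary:", np.abs(rest[0, :]).max())
print("rest at small frequencies:", rest[nmax, nmax].round(3))
\end{verbatim}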
In the bosonic formulation, as in the fermionic one, the MF approximation retains only the $q=0$ components of the various terms appearing in the effective action and neglects all others. From now on we therefore keep only the $q=0$ terms. We will make use of the matrix notation introduced in Sec.~\ref{sec_fRG_MF: fRG+MF equations}, where the newly introduced Yukawa couplings behave as vectors and bosonic inverse propagators as scalars.
%
\subsection{Flow equations and integration}
\label{sec_fRG+MF: bosonic flow and integration}
%
Here we focus on the flow equations for the two fermion interactions, Yukawa couplings, and bosonic propagators in the longitudinal and transverse channels within a MF approximation, that is, we focus only on the Cooper channel ($q=0$) and neglect all diagrams containing internal bosonic lines or the couplings $\mathcal{A}^\L$, $\Phi^\L$ at $q\neq 0$. Furthermore, we introduce a generalized Katanin approximation to account for higher order couplings in the flow equations; it allows one to replace the single-scale derivatives in the bubbles with full scale derivatives. We refer to Appendix~\ref{app: fRG+MF app} for more details and a derivation of the latter. We now show that our reduced set of flow equations for the various couplings can be integrated. We first focus on the longitudinal channel; the flow equations in the transverse channel possess the same structure. The flow equation for the longitudinal bosonic mass (inverse propagator at $q=0$) reads as
%
\begin{equation}
\begin{split}
\partial_\Lambda m_\sigma^\Lambda=\int_{k,k'} h^\Lambda_\sigma(k) \left[\partial_\Lambda\Pi^\Lambda_{11}(k,k')\right] h^\Lambda_\sigma(k') \equiv \left[h^\Lambda_\sigma\right]^T\left[\partial_\Lambda\Pi^\Lambda_{11}\right] h^\Lambda_\sigma.
\end{split}
\label{eq_fRG+MF: flow P sigma}
\end{equation}
%
Similarly, the equation for the longitudinal Yukawa coupling is
%
\begin{equation}
\partial_\Lambda h^\Lambda_\sigma=\mathcal{A}^\Lambda\left[\partial_\Lambda\Pi^\Lambda_{11}\right]h^\Lambda_\sigma,
\label{eq_fRG+MF: flow h sigma}
\end{equation}
%
and the one for the residual two fermion longitudinal interaction is given by
%
\begin{equation}
\partial_\Lambda\mathcal{A}^\Lambda=\mathcal{A}^\Lambda\left[\partial_\Lambda\Pi^\Lambda_{11}\right]\mathcal{A}^\Lambda.
\label{eq_fRG+MF: A flow eq}
\end{equation}
%
The above flow equations are pictorially shown in Fig.~\ref{fig_fRG+MF: flow eqs}. The initial conditions at $\Lambda={\L_c}$ read, for both channels,
%
\begin{subequations}
\begin{align}
&m_\sigma^{{\L_c}}=m_\pi^{{\L_c}}=m^{{\L_c}},\\
&h_\sigma^{{\L_c}}=h_\pi^{{\L_c}}=h^{{\L_c}},\\
&\mathcal{A}^{{\L_c}}=\Phi^{{\L_c}}=\mathcal{Q}^{{\L_c}}.
\end{align}
\end{subequations}
%
We start by integrating the equation for the residual two fermion longitudinal interaction $\mathcal{A}^\L$. Eq.~\eqref{eq_fRG+MF: A flow eq} can be solved exactly as in the fermionic formalism, obtaining
%
\begin{equation}
\mathcal{A}^\Lambda = \left[1-\widetilde{\mathcal{Q}}^{{\L_c}}\Pi_{11}^\Lambda\right]^{-1}\widetilde{\mathcal{Q}}^{{\L_c}},
\label{eq_fRG+MF: A}
\end{equation}
%
where we have introduced the reduced residual two fermion interaction
%
\begin{equation}
\widetilde{\mathcal{Q}}^{{\L_c}}=\left[1+\mathcal{Q}^{{\L_c}}\Pi^{{\L_c}}\right]^{-1}\mathcal{Q}^{{\L_c}}.
\label{eq_fRG+MF: reduced C tilde}
\end{equation}
%
We are now in a position to insert this result into Eq.~\eqref{eq_fRG+MF: flow h sigma} for the Yukawa coupling, which can then be integrated as well. Its solution reads as
%
\begin{equation}
h_\sigma^\Lambda= \left[1-\widetilde{\mathcal{Q}}^{{\L_c}}\Pi_{11}^\Lambda\right]^{-1}\widetilde{h}^{{\L_c}},
\label{eq_fRG+MF: h_sigma}
\end{equation}
%
where the introduction of a ``reduced'' Yukawa coupling
%
\begin{equation}
\widetilde{h}^{{\L_c}}=\left[1+\mathcal{Q}^{{\L_c}}\Pi^{{\L_c}}\right]^{-1}h^{{\L_c}}
\label{eq_fRG+MF: reduced yukawa}
\end{equation}
%
is necessary. This Bethe-Salpeter-like equation for the Yukawa coupling is similar in structure to the parquetlike equations for the three-leg vertex derived in Ref.~\cite{Krien2019_II}. Finally, we can insert the two results of Eqs.~\eqref{eq_fRG+MF: A} and~\eqref{eq_fRG+MF: h_sigma} into the equation for the bosonic mass, whose integration provides
%
\begin{equation}
m_\sigma^\Lambda=\widetilde{m}^{{\L_c}}-\left[\widetilde{h}^{{\L_c}}\right]^T\Pi_{11}^\Lambda\,h_\sigma^\Lambda,
\label{eq_fRG+MF: P_sigma}
\end{equation}
%
where, following the definitions introduced above, the ``reduced'' bosonic mass is given by
%
\begin{equation}
\widetilde{m}^{{\L_c}}=m^{{\L_c}}+\left[\widetilde{h}^{{\L_c}}\right]^T\Pi^{{\L_c}}\,h^{{\L_c}}.
\label{eq_fRG+MF: reduced mass P tilde}
\end{equation}
%
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{fig1_paper.png}
\caption{Schematic representation of flow equations for the mass and the couplings in the longitudinal channel. Full lines represent Nambu matrix propagators, triangles the Yukawa coupling $h_\sigma$ and squares the residual interaction $\mathcal{A}$. The black dots over fermionic legs represent full derivatives with respect to the scale $\Lambda$.}
\label{fig_fRG+MF: flow eqs}
\end{figure}
%
In the transverse channel, the equations have the same structure and can be integrated in the same way. Their solutions read as
%
\begin{subequations}
\begin{align}
&\Phi^\Lambda = \left[1-\widetilde{\mathcal{Q}}^{{\L_c}}\Pi_{22}^\Lambda\right]^{-1}\widetilde{\mathcal{Q}}^{{\L_c}}, \label{eq_fRG+MF: Phi}\\
&h_\pi^\Lambda= \left[1-\widetilde{\mathcal{Q}}^{{\L_c}}\Pi_{22}^\Lambda\right]^{-1}\widetilde{h}^{{\L_c}}, \label{eq_fRG+MF: h_pi}\\
&m_\pi^\Lambda=\widetilde{m}^{{\L_c}}-\left[\widetilde{h}^{{\L_c}}\right]^T\Pi_{22}^\Lambda\,h_\pi^\Lambda. \label{eq_fRG+MF: goldstone mass}
\end{align}
\end{subequations}
%
Eq.~\eqref{eq_fRG+MF: goldstone mass} provides the mass of the transverse mode, which, according to the Goldstone theorem, must be zero. We will show below that this is indeed the case. The combinations
%
\begin{subequations}
\begin{align}
&\frac{h_\sigma^\Lambda \left[h_\sigma^\Lambda\right]^T}{m_\sigma^\Lambda}+\mathcal{A}^\Lambda,\\
&\frac{h_\pi^\Lambda \left[h_\pi^\Lambda\right]^T}{m_\pi^\Lambda}+\Phi^\Lambda,
\end{align}
\label{eq_fRG+MF: eff fer interactions}
\end{subequations}
%
obey the same flow equations, Eqs.~\eqref{eq_fRG+MF: flow eq Va fermionic} and~\eqref{eq_fRG+MF: flow eq Vphi fermionic}, as the vertices in the fermionic formalism, and share the same initial conditions. Therefore the solutions for these quantities coincide with the expressions~\eqref{eq_fRG+MF: Va solution fermionic} and~\eqref{eq_fRG+MF: Vphi solution fermionic}, respectively.
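This equivalence can be checked numerically. In the sketch below, random placeholder matrices stand in for the actual vertex data; the integrated bosonic quantities are combined as in Eq.~\eqref{eq_fRG+MF: eff fer interactions} and compared with the fermionic solution~\eqref{eq_fRG+MF: Va solution fermionic}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 6
h = rng.standard_normal((n, 1))              # Yukawa coupling at Lambda_c
m = 0.05                                     # small mass near criticality
Q = 0.1*rng.standard_normal((n, n)); Q = 0.5*(Q + Q.T)
V = h @ h.T/m + Q                            # factorized vertex
Pi_c  = np.diag(0.2*rng.random(n))           # bubble at the critical scale
Pi_11 = np.diag(0.3*rng.random(n))           # longitudinal bubble below it
I = np.eye(n)

# fermionic formalism
V_irr = np.linalg.solve(I + V @ Pi_c, V)
V_A   = np.linalg.solve(I - V_irr @ Pi_11, V_irr)

# bosonic formalism: reduced quantities, then integrated couplings
Qt = np.linalg.solve(I + Q @ Pi_c, Q)
ht = np.linalg.solve(I + Q @ Pi_c, h)
mt = m + (ht.T @ Pi_c @ h).item()
A  = np.linalg.solve(I - Qt @ Pi_11, Qt)
hs = np.linalg.solve(I - Qt @ Pi_11, ht)
ms = mt - (ht.T @ Pi_11 @ hs).item()
print(np.allclose(hs @ hs.T/ms + A, V_A))    # -> True
\end{verbatim}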
Within this equivalence, it is interesting to express the irreducible vertex $\widetilde{V}^{{\L_c}}$ of Eq.~\eqref{eq_fRG+MF: irr vertex fermionic} in terms of the quantities $\mathcal{Q}^{{\L_c}}$, $h^{{\L_c}}$, and $m^{{\L_c}}$ introduced in the factorization in Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit}:
%
\begin{equation}
\widetilde{V}^{{\L_c}}=\frac{\widetilde{h}^{{\L_c}}\left[\widetilde{h}^{{\L_c}}\right]^T}{\widetilde{m}^{{\L_c}}}+\widetilde{\mathcal{Q}}^{{\L_c}},
\label{eq_fRG+MF: irr V bosonic formalism}
\end{equation}
%
where $\widetilde{\mathcal{Q}}^{{\L_c}}$, $\widetilde{h}^{{\L_c}}$ and $\widetilde{m}^{{\L_c}}$ were defined in Eqs.~\eqref{eq_fRG+MF: reduced C tilde},~\eqref{eq_fRG+MF: reduced yukawa} and~\eqref{eq_fRG+MF: reduced mass P tilde}. For a proof see Appendix~\ref{app: fRG+MF app}. Relation~\eqref{eq_fRG+MF: irr V bosonic formalism} is of particular interest because it states that when the full vertex is expressed as in Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit}, the irreducible one obeys a similar decomposition, in which the bosonic propagator, Yukawa coupling, and residual two fermion interaction are replaced by their ``reduced'' counterparts. This relation holds even for $q\neq 0$.
%
\subsection{Ward identity for the gap and Goldstone theorem}

We now focus on the flow of the fermionic gap and of the bosonic expectation value, and derive a relation that connects them. Their flow equations are given by (see Appendix~\ref{app: fRG+MF app})
\begin{equation}
\partial_\Lambda \alpha^\Lambda=\frac{1}{m_\sigma^\Lambda}\left[h_\sigma^\Lambda\right]^T\widetilde{\partial}_\Lambda F^\Lambda,
\label{eq_fRG+MF: dalpha dLambda main text}
\end{equation}
%
and
%
\begin{equation}
\begin{split}
\partial_\Lambda \Delta^\Lambda = \partial_\Lambda \alpha^\Lambda\, h_\sigma^\Lambda+\mathcal{A}^\Lambda\widetilde{\partial}_\Lambda F^\Lambda = \left[\frac{h_\sigma^\Lambda \left[h_\sigma^\Lambda\right]^T}{m_\sigma^\Lambda}+\mathcal{A}^\Lambda\right]\widetilde{\partial}_\Lambda F^\Lambda,
\label{eq_fRG+MF: gap eq main text}
\end{split}
\end{equation}
%
with $F^\Lambda$ given by Eq.~\eqref{eq_fRG+MF: F definition}. In Fig.~\ref{fig_fRG+MF: flow eqs gaps} we show a pictorial representation.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{fig2_paper.png}
\caption{Schematic representation of flow equations for the bosonic expectation value $\alpha^\Lambda$ and fermionic gap $\Delta^\Lambda$. Aside from the slashed lines, representing Nambu matrix propagators with a scale derivative acting only on the regulator, the conventions for the symbols are the same as in Fig.~\ref{fig_fRG+MF: flow eqs}.}
\label{fig_fRG+MF: flow eqs gaps}
\end{figure}
%
Eq.~\eqref{eq_fRG+MF: dalpha dLambda main text} can be integrated, with the help of the previously obtained results for $\mathcal{A}$, $h_\sigma$ and $m_\sigma$, yielding
\begin{equation}
\alpha^\Lambda=\frac{1}{\widetilde{m}^{{\L_c}}}\left[\widetilde{h}^{{\L_c}}\right]^T F^\Lambda.
\label{eq_fRG+MF: alpha solution}
\end{equation}
%
In the last equality of Eq.~\eqref{eq_fRG+MF: gap eq main text}, as previously discussed, the object in square brackets equals the full vertex $V_\mathcal{A}$ of the fermionic formalism. Thus, the gap equation can be integrated, and the result is simply Eq.~\eqref{eq_fRG+MF: gap equation fermionic} of the fermionic formalism.
However, if we now insert the expression in Eq.~\eqref{eq_fRG+MF: irr V bosonic formalism} for the irreducible vertex into the ``fermionic'' form (Eq.~\eqref{eq_fRG+MF: gap equation fermionic}) of the gap equation, and use relation~\eqref{eq_fRG+MF: Pi22=F/delta}, we get
%
\begin{equation}
\Delta^\Lambda(k)=\alpha^\Lambda h_\pi^\Lambda(k).
\label{eq_fRG+MF: Ward Identity}
\end{equation}
%
This equation is the Ward identity for the mixed boson-fermion system related to the global U(1) symmetry~\cite{Obert2013}. In Appendix~\ref{app: fRG+MF app} we propose a self-consistent loop for the calculation of $\alpha$ and $h_{\pi}$ through Eqs.~\eqref{eq_fRG+MF: alpha solution} and~\eqref{eq_fRG+MF: h_pi}, and subsequently of the superfluid gap $\Delta$.
%
Let us now come back to the question of the Goldstone theorem. For the Goldstone boson to be massless, the right-hand side of Eq.~\eqref{eq_fRG+MF: goldstone mass} must vanish. We show that this is indeed the case. With the help of Eq.~\eqref{eq_fRG+MF: Pi22=F/delta}, we can reformulate the equation for the transverse mass in the form
%
\begin{equation}
\begin{split}
m^\Lambda_\pi = \widetilde{m}^{{\L_c}}-\int_k \widetilde{h}^{{\L_c}}(k)F^\Lambda(k)\frac{h^\Lambda_\pi(k)}{\Delta^\Lambda(k)} =\widetilde{m}^{{\L_c}}-\frac{1}{\alpha^{\Lambda}}\int_k \widetilde{h}^{{\L_c}}(k)F^\Lambda(k),
\end{split}
\end{equation}
%
where the Ward identity $\Delta=\alpha h_\pi$ was applied in the last equality. We see that the expression for the Goldstone boson mass vanishes when $\alpha$ obeys its self-consistent equation, Eq.~\eqref{eq_fRG+MF: alpha solution}. This proves that our truncation of the flow equations fulfills the Goldstone theorem.
\\
%
Constructing a truncation of the fRG flow equations which fulfills the Ward identities and the Goldstone theorem is, in general, a nontrivial task. In Ref.~\cite{Bartosch2009}, in which the order parameter fluctuations have been included on top of the Hartree-Fock solution, no distinction was made between the longitudinal and transverse Yukawa couplings, and the Ward identity~\eqref{eq_fRG+MF: Ward Identity} as well as the Goldstone theorem were enforced by calculating the gap and the bosonic expectation value from them rather than from their flow equations. Similarly, in Ref.~\cite{Obert2013}, in order for the flow equations to fulfill the Goldstone theorem, it was necessary to impose $h_\sigma=h_\pi$ and use only the flow equation of $h_\pi$ for both Yukawa couplings. Within the present approximation, due to the mean-field-like nature of the truncation, the Ward identity~\eqref{eq_fRG+MF: Ward Identity} and the Goldstone theorem are automatically fulfilled by the flow equations.
%
\subsection{Equivalence of bosonic and fermionic formalisms}
\label{subsec_fRG+MF: equivalence bos and fer}
%
As we have proven in the previous sections, within the MF approximation the fully fermionic formalism of Sec.~\ref{sec_fRG_MF: fRG+MF equations} and the bosonized approach introduced in the present section provide the same results for the superfluid gap and for the effective two fermion interactions. Notwithstanding this formal equivalence, the bosonic formulation relies on a further requirement: in Eqs.~\eqref{eq_fRG+MF: Phi} and~\eqref{eq_fRG+MF: h_pi} we assumed the matrix $\left[1-\widetilde{\mathcal{Q}}^{{\L_c}}\Pi_{22}^\Lambda\right]$ to be invertible. This is exactly equivalent to asserting that the residual two fermion interaction $\Phi$ remains finite.
Otherwise the Goldstone mode would lie in this coupling and not (only) in the Hubbard-Stratonovich boson. This cannot happen if the flow is stopped at the critical scale ${\L_c}$, at which the (normal) bosonic mass $m^\Lambda$ vanishes, but it can occur if one considers symmetry breaking in more than one channel, as we have done in Sec.~\ref{sec_fRG_MF: Phase diag}. In particular, if one allows the system to develop two different orders and stops the flow when the mass of one of the two associated bosons becomes zero, it can happen that, within a MF approximation for both order types, the appearance of a finite gap in the first channel renders the transverse two fermion residual interaction in the other channel divergent. In that case one can apply the technique of \textit{flowing bosonization}~\cite{Friederich2010,Friederich2011}, reassigning to the bosonic sector the (most singular part of the) two fermion interactions that are generated during the flow. It can be proven that this approach, too, gives the same results for the gap and the effective fermionic interactions in Eq.~\eqref{eq_fRG+MF: eff fer interactions} as the fully fermionic formalism.

\subsection{Vertex bosonization}
\label{sec_fRG+MF: vertex bosonization}

In this section we present a systematic procedure to extract the quantities in Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit} from a given vertex, within an approximate framework. Starting from the channel decomposition in Eq.~\eqref{eq_methods: channel decomp physical}, we simplify the treatment of the dependence of the various channels on the fermionic spatial momenta by expanding them in a complete basis of Brillouin zone form factors $\{f^\ell_\mathbf{k}\}$~\cite{Lichtenstein2017}
\begin{equation}
\begin{split}
\phi^\Lambda_X(k,k';q)=\sum_{\ell\ell'} \phi^{\Lambda}_{X,\ell\ell'}(\nu,\nu';q)f^\ell_{\mathbf{k}}\,f^{\ell'}_{\mathbf{k'}},
\end{split}
\label{eq_fRG+MF: form factor expansion}
\end{equation}
%
with $X=p$, $m$ or $c$, corresponding to the pairing, magnetic, and charge channels. For practical calculations the above sum is truncated to a finite number of form factors, and often only diagonal terms, $\ell=\ell'$, are considered. Within the form factor truncated expansion, one is left with the calculation of a finite number of channels that depend on a bosonic collective variable $q=(\mathbf{q},\Omega)$ and two fermionic Matsubara frequencies $\nu$ and $\nu'$. We will now show how to obtain the decomposition introduced in Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit} within the form factor expansion. We focus on only one of the three channels, depending on the type of order we are interested in, and factorize its dependence on the two fermionic Matsubara frequencies. We introduce the so-called channel asymptotics, that is, the functions that describe the channels at large $\nu$, $\nu'$. From now on, we adopt the shorthand $\lim_{\nu\rightarrow\infty}g(\nu)=g(\infty)$ for any function $g$ of $\nu$.
By considering only diagonal terms in the form factor expansion in Eq.~\eqref{eq_fRG+MF: form factor expansion}, we can write the $\ell=\ell'$ components of the channels as~\cite{Wentzell2016}:
%
\begin{equation}
    \begin{split}
        \phi_{X,\ell}^\Lambda(\nu,\nu';q)=\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)+\mathcal{K}_{X,\ell}^{(2)\Lambda}(\nu;q)
        +\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}(\nu';q)
        +\delta\phi^\Lambda_{X,\ell}(\nu,\nu';q),
    \end{split}
    \label{eq_fRG+MF: vertex asymptotics}
\end{equation}
%
with
%
\begin{subequations}
    \begin{align}
        &\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)=\phi_{X,\ell}^\Lambda(\infty,\infty;q)\\
        &\mathcal{K}_{X,\ell}^{(2)\Lambda}(\nu;q)=\phi_{X,\ell}^\Lambda(\nu,\infty;q)-\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)\\
        &\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}(\nu';q)=\phi_{X,\ell}^\Lambda(\infty,\nu';q)-\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)\\
        &\delta\phi^\Lambda_{X,\ell}(\nu,\infty;q)=\delta\phi^\Lambda_{X,\ell}(\infty,\nu';q)=0.
    \end{align}
    \label{eq_fRG+MF: asymptotics properties}
\end{subequations}
%
According to Ref.~\cite{Wentzell2016}, these functions are related to physical quantities. $\mathcal{K}_{X,\ell}^{(1)}$ turns out to be proportional to the respective susceptibility, and the combination $\mathcal{K}_{X,\ell}^{(1)}+\mathcal{K}_{X,\ell}^{(2)}$ (or $\mathcal{K}_{X,\ell}^{(1)}+\overline{\mathcal{K}}_{X,\ell}^{(2)}$) to the boson-fermion vertex, which describes both the response of the Green's function to an external field~\cite{VanLoon2018} and the coupling between a fermion and an effective boson. In principle one should be able to calculate the above quantities directly from the vertex (see Ref.~\cite{Wentzell2016} for the details) without performing any limit. However, it is well known that fRG truncations, in particular the 1-loop approximation, do not properly weigh all the Feynman diagrams contributing to the vertex, so that the diagrammatic calculation and the high frequency limit give two different results. To preserve the property in the last line of Eq.~\eqref{eq_fRG+MF: asymptotics properties}, we choose to perform the limits. We rewrite Eq.~\eqref{eq_fRG+MF: vertex asymptotics} in the following way:
%
\begin{equation}
    \begin{split}
        \phi_{X,\ell}^\Lambda(\nu,\nu';q)=
        &\frac{\left[\mathcal{K}_{X,\ell}^{(1)\Lambda}+\mathcal{K}_{X,\ell}^{(2)\Lambda}\right]\left[\mathcal{K}_{X,\ell}^{(1)\Lambda}+\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}\right]}{\mathcal{K}_{X,\ell}^{(1)\Lambda}}+\mathcal{R}_{X,\ell}^\Lambda\\
        =&\frac{\phi_{X,\ell}^\Lambda(\nu,\infty;q)\phi_{X,\ell}^\Lambda(\infty,\nu';q)}{\phi_{X,\ell}^\Lambda(\infty,\infty;q)}+\mathcal{R}_{X,\ell}^\Lambda(\nu,\nu';q),
    \end{split}
    \label{eq_fRG+MF: vertex separation}
\end{equation}
%
where we have made the frequency and momentum dependencies explicit only in the second line, and we have defined
%
\begin{equation}
    \mathcal{R}_{X,\ell}^\Lambda(\nu,\nu';q)=\delta\phi^\Lambda_{X,\ell}(\nu,\nu';q)-\frac{\mathcal{K}_{X,\ell}^{(2)\Lambda}(\nu;q)\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}(\nu';q)}{\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)}.
\end{equation}
%
From the definitions given above, it is obvious that the rest function $\mathcal{R}_{X,\ell}$ decays to zero in all frequency directions. Since the first term of Eq.~\eqref{eq_fRG+MF: vertex separation} is separable by construction, we choose to identify this term with the first one of Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit}.
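As a concrete illustration of this factorization, the following minimal Python sketch (our own; it approximates the $\nu\to\infty$ limits by the outermost sampled frequency, as is also done in the numerical implementation described below, and the toy channel is constructed by hand) splits a channel into its separable part and the rest function:
\begin{verbatim}
import numpy as np

def factorize(phi):
    """Split phi[nu, nu'] (fixed q) into the separable part of
    Eq. (vertex separation) and the rest function; the nu -> infinity
    limits are approximated by the outermost sampled frequency."""
    K1 = phi[-1, -1]                  # phi(inf, inf)
    left = phi[:, -1]                 # phi(nu, inf)  = K1 + K2(nu)
    right = phi[-1, :]                # phi(inf, nu') = K1 + K2bar(nu')
    separable = np.outer(left, right) / K1
    return separable, phi - separable

# Check on an exactly separable toy channel: the rest function vanishes.
nu = np.linspace(-30, 30, 61)
h = 1.0 + 1.0 / (1.0 + nu**2)         # toy coupling with h(inf) -> 1
phi = 4.0 * np.outer(h, h)            # single-boson-like channel
separable, rest = factorize(phi)
print(np.abs(rest).max())             # numerically zero
\end{verbatim}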
Indeed, in many cases the vertex divergence is manifest already in the asymptotic $\mathcal{K}_{X,\ell}^{(1)\Lambda}$, which we recall is proportional to the susceptibility of the channel. There are, however, situations in which the functions $\mathcal{K}^{(1)}$ and $\mathcal{K}^{(2)}$ are zero even close to an instability in the channel, an important example being the $d$-wave superconducting instability in the repulsive Hubbard model. In general, this occurs for those channels that, within a Feynman diagram expansion, cannot be constructed via a ladder resummation of the bare vertex. In the Hubbard model, due to the locality of the bare interaction, this happens for every $\ell\neq 0$, that is, for every term in the form factor expansion other than the $s$-wave contribution. In this case one should adopt a different approach and, for example, replace the limits to infinity in Eq.~\eqref{eq_fRG+MF: vertex separation} with fixed values of the Matsubara frequencies, e.g.\ $\pm \pi T$. In Chap.~\ref{chap: Bos Fluct Norm}, we will present an alternative approach to the vertex factorization, by means of a diagrammatic decomposition called \emph{single boson exchange} (SBE) decomposition~\cite{Krien2019_I}.
%
\subsection{Results for the attractive Hubbard model at half filling}
\label{sec_fRG+MF: results}
%
In this section we report some exemplary results of the equations derived within the bosonic formalism, for the two-dimensional attractive Hubbard model. We focus on the half-filled case. For pure nearest neighbor hopping with amplitude $-t$, the band dispersion $\xi_\mathbf{k}$ is given by
%
\begin{equation}
    \xi_\mathbf{k} = - 2 t \left( \cos k_x + \cos k_y \right) -\mu,
    \label{eq_fRG+MF: dispersion band}
\end{equation}
%
with $\mu=0$ at half filling. We choose the onsite attraction and the temperature to be $U=-4t$ and $T=0.1t$, respectively. All results are presented in units of the hopping parameter $t$.
%
\subsubsection{Symmetric phase}
%
In the symmetric phase, in order to run a fRG flow, we introduce the $\Omega$-regulator~\cite{Husemann2009}
%
\begin{equation}
    R^\Lambda(k) = \left(i\nu-\xi_\mathbf{k}\right) \frac{\Lambda^2}{\nu^2},
\end{equation}
%
so that the initial scale is ${\Lambda_\mathrm{ini}}=+\infty$ (fixed to a large number in the numerical calculation) and the final one is ${\Lambda_\mathrm{fin}}=0$. We choose a 1-loop truncation, and use the physical channel decomposition in Eq.~\eqref{eq_methods: channel decomp physical}, with a form factor expansion. We truncate Eq.~\eqref{eq_fRG+MF: form factor expansion} to the first term only, that is, we use only $s$-wave form factors, $f^{(0)}_\mathbf{k}\equiv 1$. Within these approximations, the vertex reads as
%
\begin{equation}
    \begin{split}
        V^\Lambda(k_1,k_2,k_3) = &- U - \mathcal{P}^{\Lambda}_{\nu_1\nu_3}(k_1+k_2) \\
        &+ \mathcal{M}^{\Lambda}_{\nu_1\nu_2}(k_2-k_3)\\
        &+\frac{1}{2} \mathcal{M}^{\Lambda}_{\nu_1\nu_2}(k_3-k_1)
        -\frac{1}{2} \mathcal{C}^{\Lambda}_{\nu_1\nu_2}(k_3-k_1),
    \end{split}
    \label{eq_fRG+MF: channel decomposition attractive model}
\end{equation}
%
where $\mathcal{P}$, $\mathcal{M}$, and $\mathcal{C}$ are referred to as the pairing, magnetic, and charge channels, respectively.
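For illustration purposes, a minimal Python sketch of the dispersion~\eqref{eq_fRG+MF: dispersion band} and of the corresponding regulated bare propagator (the grid sizes and the value of $\Lambda$ are arbitrary choices of this sketch, not the parameters of our flow):
\begin{verbatim}
import numpy as np

# Minimal sketch: band dispersion and the Omega-regulated bare propagator
# G0(k) = nu^2 / (nu^2 + Lambda^2) * 1 / (i nu - xi_k).
t, mu, T, Lam = 1.0, 0.0, 0.1, 0.5

nk = 64
k = 2 * np.pi * np.arange(nk) / nk
KX, KY = np.meshgrid(k, k, indexing="ij")
xi = -2.0 * t * (np.cos(KX) + np.cos(KY)) - mu   # Eq. (dispersion band)

def g0(nu, Lam):
    """Regulated bare propagator at fermionic Matsubara frequency nu."""
    return nu**2 / (nu**2 + Lam**2) / (1j * nu - xi)

nu1 = np.pi * T                 # first fermionic Matsubara frequency
print(g0(nu1, Lam).mean())      # momentum-averaged regulated propagator
\end{verbatim}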
%
Furthermore, we focus only on the spin-singlet component of the pairing (the triplet one is very small in the present parameter region), so that we require the pairing channel to obey~\cite{Rohringer2012}
%
\begin{equation}
    \mathcal{P}^{\Lambda}_{\nu\nu'}(q) = \mathcal{P}^{\Lambda}_{-\nu+\Omega\,\mathrm{m}\,2,\nu'}(q) = \mathcal{P}^{\Lambda}_{\nu,-\nu'+\Omega\,\mathrm{m}\,2}(q),
\end{equation}
%
where $q=(\mathbf{q},\Omega)$, $\Omega\,\mathrm{m}\,2=2(n\,\mathrm{mod}\,2)\pi T$, and $n\in\mathbb{Z}$ is the Matsubara frequency index. The initial condition for the vertex reads as
%
\begin{equation}
    V^{{\Lambda_\mathrm{ini}}}(k_1,k_2,k_3) = - U,
\end{equation}
%
so that $\mathcal{P}^{{\Lambda_\mathrm{ini}}}=\mathcal{M}^{{\Lambda_\mathrm{ini}}}=\mathcal{C}^{{\Lambda_\mathrm{ini}}}=0$. Neglecting the fermionic self-energy, $\Sigma^\Lambda(k)\equiv0$, we run a flow for these three quantities until one (or more) of them diverges. Each channel is computed by keeping 50 positive and 50 negative values for each of the three Matsubara frequencies (two fermionic, one bosonic) on which it depends. Frequency asymptotics are also taken into account by following Ref.~\cite{Wentzell2016}. The momentum dependence of the channels is treated by discretizing the region $\mathcal{B}=\{(k_x,k_y): 0\leq k_y\leq k_x\leq\pi\}$ in the Brillouin zone with 38 patches and extending to the other regions by using lattice symmetries. Due to particle-hole symmetry occurring at half filling, pairing fluctuations at $\mathbf{q}=0$ combine with charge fluctuations at $\mathbf{q}=(\pi,\pi)$ to form an order parameter with SO(3) symmetry~\cite{Micnas1990}. Indeed, with the help of a canonical particle-hole transformation, one can map the attractive half-filled Hubbard model onto the repulsive one. Within this duality, the SO(3)-symmetric magnetic order parameter is mapped onto the above mentioned combined charge-pairing order parameter and vice versa. This is the reason why we find a critical scale, $\Lambda_c$, at which \emph{both} $\mathcal{C}((\pi,\pi),0)$ and $\mathcal{P}(\mathbf{0},0)$ diverge. On a practical level, we define the critical scale ${\L_c}$ as the scale at which one (or more, in this case) channel exceeds $10^3t$. With our choice of parameters, we find that at ${\L_c} \simeq 0.378t$ both $\mathcal{C}$ and $\mathcal{P}$ cross our threshold.
%
In the SSB phase, we choose to restrict the ordering to the pairing channel, thus excluding the formation of charge density waves. This choice is always possible because we have the freedom to choose the ``direction'' in which our order parameter points. In the particle-hole dual repulsive model, our choice would be equivalent to choosing the (antiferro-)magnetic order parameter to lie in the $xy$ plane.
This choice is implemented by selecting the particle-particle channel as the only one contributing to the flow in the SSB phase, as discussed in Secs.~\ref{sec_fRG_MF: fRG+MF equations} and~\ref{sec_fRG+MF: bosonic flow and integration}. In order to access the SSB phase with our bosonic formalism, we need to perform the decomposition in Eq.~\eqref{eq_fRG+MF: vertex at Lambda crit} for our vertex at ${\L_c}$. Before proceeding, in order to be consistent with our form factor expansion in the SSB phase, we need to project $V$ in Eq.~\eqref{eq_fRG+MF: channel decomposition attractive model} onto the $s$-wave form factors, because we want the quantities in the ordered phase to be functions of Matsubara frequencies only. Therefore we define the total vertex projected onto $s$-wave form factors
%
\begin{equation}
    \overline{V}^{{\L_c}}_{\nu\nu'}(q)=\int_{\mathbf{k},\mathbf{k}'}V^{{\L_c}}\hskip -1mm\left(\rnddo{q}+k,\rndup{q}-k,k'\right).
\end{equation}
%
Furthermore, since we are interested only in spin singlet pairing, we symmetrize it with respect to one of the two fermionic frequencies, so that in the end we are dealing with
%
\begin{equation}
    V^{{\L_c}}_{\nu\nu'}(q)=\frac{\overline{V}^{{\L_c}}_{\nu\nu'}(q)+\overline{V}^{{\L_c}}_{\nu,-\nu'+\Omega\,\mathrm{m}\,2}(q)}{2}.
\end{equation}
%
In order to extract the Yukawa coupling $h^{{\L_c}}$ and the bosonic mass $m^{{\L_c}}$, we employ the strategy described in Sec.~\ref{sec_fRG+MF: vertex bosonization}. Here, however, instead of factorizing the pairing channel $\mathcal{P}^{{\L_c}}$ alone, we subtract from it the bare interaction $U$. In principle, $U$ can be assigned either to the pairing channel, to be factorized, or to the residual two fermion interaction, giving rise to the same results in the SSB phase. However, when the vertices are calculated on a finite frequency box in an actual calculation, it is more convenient to have the residual two fermion interaction $\mathcal{Q}^{{\L_c}}$ as small as possible, in order to reduce finite size effects in the matrix inversions needed to extract the reduced couplings in Eqs.~\eqref{eq_fRG+MF: reduced C tilde},~\eqref{eq_fRG+MF: reduced yukawa} and~\eqref{eq_fRG+MF: reduced mass P tilde}, and in the calculation of $h_\pi$, in Eq.~\eqref{eq_fRG+MF: h_pi}. Furthermore, since it is always possible to rescale the bosonic propagators and Yukawa couplings by a constant such that the vertex constructed with them (Eq.~\eqref{eq_fRG+MF: vertex separation}) is invariant, we impose the normalization condition $h^{{\L_c}}(\nu\rightarrow\infty;q)=1$. In formulas, we thus have
%
\begin{equation}
    m^{{\L_c}}(q)=\frac{1}{\mathcal{K}_{p,\ell=0}^{(1){\L_c}}(q)-U}=\frac{1}{\mathcal{P}^{{\L_c}}_{\infty,\infty}(q)-U},
\end{equation}
%
and
%
\begin{equation}
    \begin{split}
        h^{{\L_c}}(\nu;q)=\frac{\mathcal{K}_{p,\ell=0}^{(2){\L_c}}(\nu;q)+\mathcal{K}_{p,\ell=0}^{(1){\L_c}}(q)-U}{\mathcal{K}_{p,\ell=0}^{(1){\L_c}}(q)-U}
        =\frac{\mathcal{P}^{{\L_c}}_{\nu,\infty}(q)-U}{\mathcal{P}^{{\L_c}}_{\infty,\infty}(q)-U}.
    \end{split}
\end{equation}
%
The limits are numerically performed by evaluating the pairing channel at large values of the fermionic frequencies.
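A minimal Python sketch of this extraction step (our own illustration: the pairing channel is assumed to be stored as an array over fermionic frequencies at fixed $q$, the outermost sampled frequency stands in for the limits, and the toy channel is constructed by hand):
\begin{verbatim}
import numpy as np

# Minimal sketch: extract the bosonic mass and the normalized Yukawa
# coupling from the pairing channel P[nu, nu'] at fixed q, using
# m = 1/(P(inf,inf) - U) and h(nu) = (P(nu,inf) - U)/(P(inf,inf) - U).
def extract_mass_and_yukawa(P, U):
    P_inf = P[-1, -1]                  # stands in for P(inf, inf)
    m = 1.0 / (P_inf - U)
    h = (P[:, -1] - U) / (P_inf - U)   # normalized so that h(inf) = 1
    return m, h

# Toy channel of single-boson form, P = U + h(nu) h(nu') / m_true
nu = np.linspace(-40.0, 40.0, 81)
h_true = 1.0 + 2.0 / (1.0 + np.abs(nu))
U, m_true = -4.0, 0.25
P = U + np.outer(h_true, h_true) / m_true
m, h = extract_mass_and_yukawa(P, U)
print(m, h[len(nu) // 2])   # recovers m_true and h_true up to grid effects
\end{verbatim}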
The extraction of the factorizable part from the pairing channel minus the bare interaction defines the rest function
%
\begin{equation}
    \mathcal{R}^{{\L_c}}_{\nu\nu'}(q)=\mathcal{P}^{{\L_c}}_{\nu\nu'}(q)-U-\frac{h^{{\L_c}}(\nu;q)h^{{\L_c}}(\nu';q)}{m^{{\L_c}}(q)},
\end{equation}
%
and the residual two fermion interaction $\mathcal{Q}$
%
\begin{equation}
    \begin{split}
        \mathcal{Q}^{{\L_c}}_{\nu\nu'}(q)=&\left[V^{{\L_c}}_{\nu\nu'}(q)-\mathcal{P}^{{\L_c}}_{\nu\nu'}(q)+U\right]+\mathcal{R}^{{\L_c}}_{\nu\nu'}(q)
        =V^{{\L_c}}_{\nu\nu'}(q)-\frac{h^{{\L_c}}(\nu;q)h^{{\L_c}}(\nu';q)}{m^{{\L_c}}(q)}.
    \end{split}
\end{equation}
%
We are now in a position to extract the reduced couplings, $\widetilde{\mathcal{Q}}^{{\L_c}}$, $\widetilde{h}^{{\L_c}}$ and $\widetilde{m}^{{\L_c}}$, defined in Eqs.~\eqref{eq_fRG+MF: reduced C tilde},~\eqref{eq_fRG+MF: reduced yukawa},~\eqref{eq_fRG+MF: reduced mass P tilde}. This is achieved by numerically inverting the matrix (from now on we drop the $q$-dependence, always taking $q=0$)
%
\begin{equation}
    \delta_{\nu\nu'} + \mathcal{Q}^{{\L_c}}_{\nu\nu'}\, \chi^{{\L_c}}_{\nu'},
\end{equation}
%
with
%
\begin{equation}
    \chi^{{\L_c}}_{\nu} = T\int_{\mathbf{k}}G_0^{{\L_c}}(k)G_0^{{\L_c}}(-k),
\end{equation}
%
and
%
\begin{equation}
    G_0^{{\L_c}}(k) = \frac{1}{i\nu-\xi_\mathbf{k}+R^{{\L_c}}(k)}
    =\frac{\nu^2}{\nu^2+{\L_c}^2}\frac{1}{i\nu-\xi_\mathbf{k}}.
\end{equation}
%
In Fig.~\ref{fig_fRG+MF: vertices Lambda s} we show the results for the pairing channel minus the bare interaction, the rest function, the residual two fermion interaction $\mathcal{Q}$, and the reduced one $\widetilde{\mathcal{Q}}$ at the critical scale. One can see that in the present parameter region the pairing channel (minus $U$) is highly factorizable. Indeed, although the latter is very large because of the vicinity of the instability, the rest function $\mathcal{R}$ remains very small, a sign that the pairing channel is well described by the exchange of a single boson. Furthermore, thanks to our choice of assigning the bare interaction to the factorized part, as we see in Fig.~\ref{fig_fRG+MF: vertices Lambda s}, both $\mathcal{Q}$ and $\widetilde{\mathcal{Q}}$ possess frequency structures on top of a vanishing background. Finally, the full bosonic mass at the critical scale is close to zero, $m^{{\L_c}}\simeq10^{-3}$, due to the vicinity of the instability, while the reduced one is finite, $\widetilde{m}^{{\L_c}}\simeq 0.237$.
%
%
\subsubsection{SSB Phase}
%
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\textwidth]{fig4_paper.png}
\caption{Couplings contributing to the total vertex at the critical scale.
\\
%
\textit{Upper left}: pairing channel minus the bare interaction. At the critical scale this quantity acquires very large values due to the vicinity of the pairing instability.
\\
%
\textit{Upper right}: rest function of the pairing channel minus the bare interaction. In the present regime the pairing channel is very well factorizable, giving rise to a small rest function.\\
%
\textit{Lower left}: residual two fermion interaction. The choice of factorizing $\mathcal{P}^{{\L_c}}-U$ instead of $\mathcal{P}^{{\L_c}}$ alone makes the background of this quantity zero.\\
%
\textit{Lower right}: reduced residual two fermion interaction.
Like the full one, this coupling has a zero background value, which makes the calculation of couplings in the SSB phase more precise by reducing the effects of the finite number of Matsubara frequencies in the matrix inversions.}
\label{fig_fRG+MF: vertices Lambda s}
\end{figure}
%
In the SSB phase, instead of running the fRG flow, we employ the analytical integration of the flow equations described in Sec.~\ref{sec_fRG+MF: bosonic flow and integration}. On a practical level, we implement a solution of the loop described in Appendix~\ref{app: fRG+MF app}, which allows for the calculation of the bosonic expectation value $\alpha$, the transverse Yukawa coupling $h_\pi$, and subsequently the fermionic gap $\Delta$ through the Ward identity $\Delta=\alpha h_\pi$. In this section we drop the dependence on the scale, since we have reached the final scale ${\Lambda_\mathrm{fin}}=0$. Note that, as discussed previously, in the half-filled attractive Hubbard model the superfluid phase sets in by breaking an SO(3) rather than a U(1) symmetry. This means that one should expect the appearance of two massless Goldstone modes. Indeed, besides the Goldstone boson present in the (transverse) particle-particle channel, another one appears in the particle-hole channel and is related to the divergence of the charge channel at momentum $(\pi,\pi)$. However, within our choice of considering only superfluid order and within the MF approximation, this mode is decoupled from our equations.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{fig5_paper.png}
\caption{Frequency dependence of the superfluid gap. It interpolates between its value at the Fermi level, $\Delta_0$, and its asymptotic one. The dashed line marks the BCS value, while the dotted one marks $|U|$ times the condensate fraction.}
\label{fig_fRG+MF: gap}
\end{figure}
%
Within our previously discussed choice of bosonizing $\mathcal{P}^{{\L_c}}-U$ instead of $\mathcal{P}^{{\L_c}}$ alone, the self-consistent loop introduced in Appendix~\ref{app: fRG+MF app} converges extremely fast: for example, 15 iterations are sufficient to reach a precision of $10^{-7}$ in $\alpha$. Once convergence is reached and the gap $\Delta(\nu)$ obtained, we are in a position to evaluate the remaining couplings introduced in Sec.~\ref{sec_fRG+MF: bosonic flow and integration} through their integrated flow equations. In Fig.~\ref{fig_fRG+MF: gap} we show the computed frequency dependence of the gap. It interpolates between $\Delta_0=\Delta(\nu\rightarrow 0)$, its value at the Fermi level, and its asymptotic value, which equals the absolute value of the bare interaction times the condensate fraction $\langle\psi_{\downarrow}\psi_{\uparrow}\rangle=\int_\mathbf{k}\langle \psi_{-\mathbf{k},\downarrow}\psi_{\mathbf{k},\uparrow}\rangle$. $\Delta_0$ also represents the gap between the upper and lower Bogoliubov bands. Magnetic and charge fluctuations above the critical scale significantly renormalize the gap with respect to the Hartree-Fock calculation ($\widetilde{V}=-U$ in Eq.~\eqref{eq_fRG+MF: gap equation fermionic}), which in the present case coincides with Bardeen-Cooper-Schrieffer (BCS) theory. This effect is reminiscent of the Gor'kov-Melik-Barkhudarov correction in weakly coupled superconductors~\cite{Gorkov1961}. The computed frequency dependence of the gap compares qualitatively well with Ref.~\cite{Eberlein2013}, where a more sophisticated truncation of the flow equations has been carried out.
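Schematically, the structure of this self-consistent loop is as follows (a sketch only: the update functions are placeholders for Eqs.~\eqref{eq_fRG+MF: alpha solution} and~\eqref{eq_fRG+MF: h_pi}, whose explicit form is given in Appendix~\ref{app: fRG+MF app}; only the iteration logic, the Ward identity, and the $10^{-7}$ convergence criterion mirror the actual calculation):
\begin{verbatim}
import numpy as np

# Schematic fixed-point loop for (alpha, h_pi), with Delta = alpha * h_pi
# enforced by the Ward identity. `update_alpha` and `update_h_pi` are
# toy placeholders; the real updates involve loop integrals over the
# fermionic propagator (see the appendix).
def update_alpha(h_pi):
    return 1.0 / (1.0 + 0.1 * h_pi.mean())     # placeholder contraction

def update_h_pi(alpha, h_crit):
    return h_crit / (1.0 + 0.05 * alpha)       # placeholder contraction

h_crit = np.ones(100)          # stands in for h at the critical scale
alpha, h_pi = 1.0, h_crit.copy()
for it in range(1000):
    alpha_new = update_alpha(h_pi)
    h_pi = update_h_pi(alpha_new, h_crit)
    if abs(alpha_new - alpha) < 1e-7:          # convergence criterion
        break
    alpha = alpha_new
Delta = alpha_new * h_pi       # Ward identity: Delta(nu) = alpha * h_pi(nu)
print(it, alpha_new, Delta[0])
\end{verbatim}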
Since $\Delta$ is a spin singlet superfluid gap, and we have chosen $\alpha$ to be real, it obeys
%
\begin{equation}
    \Delta(\nu) = \Delta(-\nu) = \Delta^*(-\nu),
\end{equation}
%
where the first equality comes from the spin singlet nature and the second one from the time reversal symmetry of the effective action. Therefore, the imaginary part of the gap is always zero. By contrast, a magnetic gap would in general acquire a finite (and antisymmetric in frequency) imaginary part.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{fig6_paper.png}
\caption{Effective interactions calculated in the SSB phase as functions of Matsubara frequencies.\\
\textit{Upper left}: longitudinal residual two fermion interaction $\mathcal{A}$.\\
\textit{Upper right}: transverse residual two fermion interaction $\Phi$.\\
\textit{Lower left}: longitudinal effective two fermion interaction $V_\mathcal{A}$.\\
\textit{Lower right}: longitudinal residual two fermion interaction $\mathcal{A}$ with its reduced counterpart $\widetilde{\mathcal{Q}}$ at the critical scale subtracted (left), and transverse residual two fermion interaction $\Phi$ minus its equivalent, $\mathcal{Q}$, at ${\L_c}$ (right). Both quantities exhibit very small values, showing that $\mathcal{A}$ and $\Phi$ do not deviate significantly from $\widetilde{\mathcal{Q}}$ and $\mathcal{Q}$, respectively.}
\label{fig_fRG+MF: final vertices}
\end{figure}
%
In Fig.~\ref{fig_fRG+MF: final vertices} we show the results for the residual two fermion interactions in the longitudinal and transverse channels, together with the total effective interaction in the longitudinal channel, defined as
%
\begin{equation}
    V_{\mathcal{A},\nu\nu'}=\frac{h_\sigma(\nu)h_\sigma(\nu')}{m_\sigma}+\mathcal{A}_{\nu\nu'}.
    \label{eq_fRG+MF: VA SSB bosonic}
\end{equation}
%
The analog of Eq.~\eqref{eq_fRG+MF: VA SSB bosonic} for the transverse channel cannot be computed, because the transverse mass $m_\pi$ is zero, in agreement with the Goldstone theorem. The key result is that the residual interactions $\mathcal{A}_{\nu\nu'}$ and $\Phi_{\nu\nu'}$ inherit the frequency structures of $\widetilde{\mathcal{Q}}^{{\L_c}}_{\nu\nu'}$ and $\mathcal{Q}^{{\L_c}}_{\nu\nu'}$, respectively, and they are also close to them in value (compare with Fig.~\ref{fig_fRG+MF: vertices Lambda s}). The same occurs for the Yukawa couplings, as shown in Fig.~\ref{fig_fRG+MF: hs}. Indeed, the calculated transverse coupling $h_\pi$ does not differ at all from the Yukawa coupling at the critical scale $h^{{\L_c}}$. In other words, if, instead of solving the self-consistent equations, one runs a flow in the SSB phase, the transverse Yukawa coupling stays the same from ${\L_c}$ to ${\Lambda_\mathrm{fin}}$. Furthermore, the longitudinal coupling $h_\sigma$ develops a frequency dependence which does not differ significantly from that of $\widetilde{h}^{{\L_c}}$. This feature, at least for our choice of parameters, can lead to some simplifications in the flow equations of Sec.~\ref{sec_fRG+MF: bosonic flow and integration}. Indeed, when running a fRG flow in the SSB phase, one might let only the bosonic inverse propagators flow, keeping the Yukawa couplings and residual interactions fixed at their critical-scale values (reduced or not, depending on the channel).
These simplifications can be crucial to reduce the computational cost when including bosonic fluctuations of the order parameter, which, similarly, do not significantly renormalize the Yukawa couplings in the SSB phase~\cite{Obert2013,Bartosch2009}.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{fig7_paper.png}
\caption{Frequency dependence of Yukawa couplings both at the critical scale ${\L_c}$ and in the SSB phase. While $h_\pi$ coincides with $h^{{\L_c}}$, the longitudinal coupling $h_\sigma$ does not differ significantly from the reduced one at the critical scale, $\widetilde{h}^{{\L_c}}$. The continuous lines for $h^{{\L_c}}$ and $\widetilde{h}^{{\L_c}}$ are an interpolation through the data calculated on the Matsubara frequencies.}
\label{fig_fRG+MF: hs}
\end{figure}
%
\chapter{Single boson exchange decomposition of the two-particle vertex}
\label{chap: Bos Fluct Norm}
%
In this chapter, we introduce a reformulation of the fRG equations that exploits the \emph{single boson exchange} (SBE) representation of the vertex function, introduced in Ref.~\cite{Krien2019_I}. The latter is based on a diagrammatic decomposition classifying the contributions to the vertex function in terms of their reducibility with respect to removing a bare interaction vertex. This idea is implemented in the fRG by writing each physical channel in terms of a single boson exchange process and a residual two-particle interaction. On the one hand, the present decomposition offers numerical advantages, substantially reducing the computational complexity of the vertex function. On the other hand, it provides physical insight into the collective fluctuations of the correlated system. We apply the SBE decomposition to the strongly interacting Hubbard model by combining it with the DMF\textsuperscript{2}RG (see Chap.~\ref{chap: methods}), both at half filling and at finite doping. The results presented in this chapter can be found in Ref.~\cite{Bonetti2021}.
%
\section{Single boson exchange decomposition}
%
\begin{figure}[b]
\centering
\includegraphics[width= 0.7\textwidth]{diagrams_Ured_irr.png}
\caption{Representative diagrams illustrating $U$-reducibility. All three diagrams are two-particle-$pp$ reducible, but only diagrams (a) and (b) are also $U$-$pp$ reducible, while (c) is $U$-irreducible.}
\label{fig_SBE_fRG: SBE diagrams}
\end{figure}
%
%
\begin{figure}[t]
\centering
\includegraphics[width= 0.5\textwidth]{SBE_venn_diagram.png}
\caption{Venn diagram showing the differences between the $U$- and two-particle reducibility (here denoted as $gg$). We notice that in a given channel ($pp$, $ph$ or $\overline{ph}$) the $U$-reducible diagrams are a subset of those which are two-particle reducible. The diagram consisting of the bare interaction $U$ is the only diagram that is $U$-reducible but two-particle irreducible. Taken from Ref.~\cite{Krien2019_I}.}
\label{fig_SBE_fRG: SBE venn diagramm}
\end{figure}
%
In this section, we introduce the SBE decomposition, and we refer to Ref.~\cite{Krien2019_I} for further details. The SBE decomposition relies on the concept of $U$-reducibility~\cite{GiulianiVignale}. The diagrams contributing to the two-particle vertex $V$ can be classified as two-particle reducible or irreducible, depending on whether they can be cut into two disconnected parts by removing a pair of fermionic propagators. $U$-reducibility provides an alternative criterion for classifying these diagrams.
A diagram is called $U$-reducible (irreducible) if it can (cannot) be cut in two by the removal of a bare interaction vertex. Furthermore, similarly to what happens for the two-particle reducibility, a diagram can be classified as $U$-$pp$ (particle-particle), $U$-$ph$ (particle-hole), or $U$-$\overline{ph}$ (particle-hole-crossed) reducible, depending on how the fermionic Green's functions are connected to the removed interaction. Moreover, since the bare vertex always has two pairs of fermionic legs attached, a $U$-reducible diagram is always also two-particle reducible in the same channel, while the converse is in general not true, as shown by the exemplary diagrams in Fig.~\ref{fig_SBE_fRG: SBE diagrams}. The only exception to this rule is the diagram consisting of a single bare interaction, which by convention we count as $U$-reducible, although it is two-particle irreducible (see Fig.~\ref{fig_SBE_fRG: SBE venn diagramm}). Switching from diagrammatic to physical channels, one can rewrite the vertex decomposition in Eq.~\eqref{eq_methods: channel decomp physical} as
%
\begin{equation}
    \begin{split}
        V(k_1',k_2',k_1) = \,\,&\Lambda_{U\mathrm{irr}}(k_1',k_2',k_1) -2U \\
        &+ \frac{1}{2}\phi^{m,\mathrm{SBE}}_{k_{ph},k_{ph}'}(k_1-k_1') + \frac{1}{2}\phi^{c,\mathrm{SBE}}_{k_{ph},k_{ph}'}(k_1-k_1') \\
        &+ \phi^{m,\mathrm{SBE}}_{k_{\overline{ph}},k_{\overline{ph}}'}(k_2'-k_1) \\
        &+ \phi^{p,\mathrm{SBE}}_{k_{pp},k_{pp}'}(k_1'+k_2'),
    \end{split}
    \label{eq_SBE_fRG: channel decomp SBE}
\end{equation}
%
with $k_{ph}$, $k'_{ph}$, $k_{\overline{ph}}$, $k'_{\overline{ph}}$, $k_{pp}$, and $k'_{pp}$ defined as in Eq.~\eqref{eq_methods: k k' pp ph phx}. Here, $\Lambda_{U\mathrm{irr}}$ is given by the $U$-irreducible diagrams, and $[\phi^{m,\mathrm{SBE}}+\phi^{c,\mathrm{SBE}}]/2$, $\phi^{m,\mathrm{SBE}}$, and $\phi^{p,\mathrm{SBE}}$ by the $ph$, $\overline{ph}$, and $pp$ $U$-reducible diagrams, respectively. Notice that a term $2U$ has been subtracted to avoid double counting of the bare interaction, which is present in each of the $\phi^{X,\mathrm{SBE}}$. Every $U$-reducible channel can then be further decomposed into more fundamental building blocks. Because of the locality of the Hubbard interaction $U$, the dependence of each channel on the fermionic arguments $k$ and $k'$ factorizes completely, and the channel can be written as
%
\begin{equation}
    \phi^{X,\mathrm{SBE}}_{k,k'}(q) = h^X_k(q)\,D^X(q)\,h^X_{k'}(q),
    \label{eq_SBE_fRG: phi SBE}
\end{equation}
%
where $X=m$, $c$ or $p$, and the $h^X$ are referred to as Yukawa (or sometimes Hedin) couplings and the $D^X$ as bosonic propagators or screened interactions. The former are related to the three-point Green's functions $G^{(3)X}$ via
%
\begin{subequations}
    \begin{align}
        &h^m_k(q)=\frac{G^{(3)m}_k(q)}{\chi^{0,ph}_k(q)\left[1+U\chi^m(q)\right]},\\
        &h^c_k(q)=\frac{G^{(3)c}_k(q)+\beta n G(k)\delta_{q,0}}{\chi^{0,ph}_k(q)\left[1-U\chi^c(q)\right]},
        \label{eq_SBE_fRG: G3c}\\
        &h^p_k(q)=\frac{G^{(3)p}_k(q)}{\chi^{0,pp}_k(q)\left[1-U\chi^p(q)\right]},
    \end{align}
    \label{eq_SBE_fRG: yukawas from G3}
\end{subequations}
%
where $G(k)$ is the fermionic Green's function, $\chi^X(q)$ the magnetic, charge or pairing susceptibility, $n$ the particle density, and the generalized bare bubbles are defined as
%
\begin{subequations}
    \begin{align}
        &\chi^{0,ph}_k(q)=G\left(k+\rnddo{q}\right)G\left(k-\rndup{q}\right), \\
        &\chi^{0,pp}_k(q)=G\left(\rnddo{q}+k\right)G\left(\rndup{q}-k\right).
    \end{align}
\end{subequations}
%
The three-point Green's functions are then related to the four-point one, $G^{(4)}$, via
%
\begin{subequations}
    \begin{align}
        &G^{(3)m}_k(q)=\sum_{\sigma=\uparrow,\downarrow}\int_{k'}\mathrm{sgn}(\sigma)\,
        G^{(4)}_{\uparrow\sigma\uparrow\sigma}\left(k-\rnddo{q},k'+\rndup{q},k+\rndup{q}\right),\\
        &G^{(3)c}_k(q)=\sum_{\sigma=\uparrow,\downarrow}\int_{k'}
        G^{(4)}_{\uparrow\sigma\uparrow\sigma}\left(k-\rnddo{q},k'+\rndup{q},k+\rndup{q}\right),\\
        &G^{(3)p}_k(q)=\int_{k'}
        G^{(4)}_{\uparrow\downarrow\uparrow\downarrow}\left(\rnddo{q}+k,\rndup{q}-k,\rnddo{q}+k'\right),
    \end{align}
\end{subequations}
%
where $\mathrm{sgn}(\uparrow)=+1$, $\mathrm{sgn}(\downarrow)=-1$, and the definition of $G^{(4)}$ is a straightforward lattice generalization of Eq.~\eqref{eq_methods: G4 DMFT}. Notice that in Eq.~\eqref{eq_SBE_fRG: G3c} a disconnected term has been removed from the definition of the charge Yukawa coupling. The screened interactions are related to the susceptibilities through
%
\begin{subequations}
    \begin{align}
        &D^m(q) = U + U^2\chi^m(q),\\
        &D^c(q) = U - U^2\chi^c(q),\\
        &D^p(q) = U - U^2\chi^p(q).
    \end{align}
    \label{eq_SBE_fRG: D from chi}
\end{subequations}
%
We therefore see that the division by a term $1\pm U\chi^X(q)$ in Eq.~\eqref{eq_SBE_fRG: yukawas from G3} is necessary to avoid double counting of the diagrams in $\phi^{X,\mathrm{SBE}}$. It is then interesting to analyze the limits when the frequencies contained in the variables $k$ and $q$ are sent to infinity. All the susceptibilities decay to zero for large frequency, implying
%
\begin{equation}
    \lim_{\Omega\to\infty} D^X(\mathbf{q},\Omega)=U.
    \label{eq_SBE_fRG: limit D}
\end{equation}
%
Concerning the Yukawa couplings, with some algebra one can express them in the form~\cite{Krien2019_III}
%
\begin{subequations}
    \begin{align}
        &h^m_k(q) = 1 + \int_{k'} \varphi^m_{k,k'}(q) \chi^{0,ph}_{k'}(q),\\
        &h^c_k(q) = 1 - \int_{k'} \varphi^c_{k,k'}(q) \chi^{0,ph}_{k'}(q),\\
        &h^p_k(q) = 1 - \int_{k'} \varphi^p_{k,k'}(q) \chi^{0,pp}_{k'}(q),
    \end{align}
\end{subequations}
%
with
%
\begin{equation}
    \varphi^X_{k,k'}(q)=V^X_{k,k'}(q)-h^X_k(q)D^X(q)h^X_{k'}(q),
    \label{eq_SBE_fRG: varphi def}
\end{equation}
%
and
%
\begin{subequations}
    \begin{align}
        &V^m_{k,k'}(q) = V\left(k-\rndup{q},k'+\rndup{q},k'-\rndup{q}\right),\\
        &V^c_{k,k'}(q) = 2V\left(k-\rndup{q},k'+\rndup{q},k+\rnddo{q}\right)-V\left(k-\rndup{q},k'+\rndup{q},k'-\rndup{q}\right),\\
        &V^p_{k,k'}(q)=V\left(\rnddo{q}+k,\rndup{q}-k,\rnddo{q}+k'\right),
    \end{align}
    \label{eq_SBE_fRG: Vertices X}
\end{subequations}
%
\hspace{-1.5mm}where $V=V_{\uparrow\downarrow\uparrow\downarrow}$ is the vertex function defined in Sec.~\ref{subs_methods: vertex flow eq}. Combining the decomposition~\eqref{eq_SBE_fRG: channel decomp SBE}, Eq.~\eqref{eq_SBE_fRG: limit D}, and the fact that $\Lambda_{U\mathrm{irr}}$ decays to zero when one of its frequency arguments is sent to infinity (this can be proven diagrammatically), one then sees that
%
\begin{equation}
    \lim_{\Omega\to\infty}\varphi^X_{k,k'}(\mathbf{q},\Omega)=\lim_{\nu\to\infty}\varphi^X_{(\mathbf{k},\nu),k'}(q)=0,
    \label{eq_SBE_fRG: varphi limit}
\end{equation}
%
because the frequencies that are sent to infinity enter as arguments in all the screened interactions present in the definition of $\varphi^X$. This lets us conclude that
%
\begin{equation}
    \lim_{\Omega\to\infty}h^X_{k}(\mathbf{q},\Omega)=\lim_{\nu\to\infty}h^X_{(\mathbf{k},\nu)}(q)=1.
    \label{eq_SBE_fRG: h limit}
\end{equation}
%
The limits derived here can also be proven by means of diagrammatic arguments, as shown in Ref.~\cite{Wentzell2016}, where a different notation has been used. The SBE decomposition offers several advantages. First, it allows for a substantial reduction of the computational complexity. Indeed, the calculation of the SBE terms, accounting for the asymptotic frequency dependencies of the vertex, reduces to two functions, namely $D^X$ and $h^X$, which depend on at least one fermionic argument fewer than the full channel functions $\phi^X$. Furthermore, since the rest functions decay fast in all frequency directions, the number of Matsubara frequencies required for their calculation can be kept small. Approximations where the $\mathcal{R}^X$ are fully neglected are also possible, based on the choice of selecting only the class of $U$-reducible diagrams. Second, the SBE offers a clear physical interpretation of the processes that generate correlations between the electrons, allowing one, for example, to diagnose which kind of collective fluctuations give the largest contribution to a given physical observable. Finally, the clear identification of bosonic fluctuations allows for a better treatment of the Goldstone modes when spontaneous symmetry breaking occurs.
%
\section{SBE representation of the fRG}
%
In this section, we implement the SBE decomposition in the 1-loop fRG equations. Generalizations to other truncations (Katanin, 2-loop, multiloop~\cite{Gievers2022}) are also possible. To keep the notation light, we omit the $\L$-dependence of the quantities at play. We start by recasting the channel flow equations, derived in Sec.~\ref{sec_methods: instability analysis}, in the following form
%
\begin{equation}
    \partial_\L \phi^X_{k,k'}(q) = \int_{p}V^X_{k,p}(q)\left[\widetilde{\partial}_\L \chi^{0,X}_p(q)\right]V^X_{p,k'}(q),
\end{equation}
%
where, according to the definitions in Sec.~\ref{sec_methods: instability analysis}, we have defined $\phi^m=U+\mathcal{M}$, $\phi^c=U-\mathcal{C}$, and $\phi^p=U-\mathcal{P}$. The bare bubbles are given by
%
\begin{subequations}
    \begin{align}
        &\chi^{0,m}_k(q)=-\chi^{0,ph}_k(q),\\
        &\chi^{0,c}_k(q)=\chi^{0,ph}_k(q),\\
        &\chi^{0,p}_k(q)=-\chi^{0,pp}_k(q).
    \end{align}
\end{subequations}
%
In essence, the $\phi^X$ represent the collection of all two-particle reducible diagrams in a given (physical) channel plus the bare interaction. We can express them in the form
%
\begin{equation}
    \phi^X_{k,k'}(q) = \phi^{X,\mathrm{SBE}}_{k,k'}(q) + \mathcal{R}^X_{k,k'}(q) - U,
\end{equation}
%
where $\phi^{X,\mathrm{SBE}}$ is $U$-reducible and can be written as in Eq.~\eqref{eq_SBE_fRG: phi SBE}, and $\mathcal{R}^X$ is $U$-\emph{irreducible} but two-particle \emph{reducible} in the given channel. The rest function $\mathcal{R}^X$ decays to zero when \emph{any} of the three frequencies on which it depends is sent to infinity~\cite{Wentzell2016}. With the help of Eqs.~\eqref{eq_SBE_fRG: varphi limit} and \eqref{eq_SBE_fRG: h limit}, one can therefore prove that
%
\begin{subequations}
    \begin{align}
        &\lim_{\nu'\to\infty}\phi^X_{k,(\mathbf{k}',\nu')}(q)
        =\lim_{\nu'\to\infty}V^X_{k,(\mathbf{k}',\nu')}(q) = h^X_k(q)D^X(q),\\
        &\lim_{\substack{\nu\to\infty\\\nu'\to\infty}}\phi^X_{(\mathbf{k},\nu),(\mathbf{k}',\nu')}(q) =
        \lim_{\substack{\nu\to\infty\\\nu'\to\infty}}V^X_{(\mathbf{k},\nu),(\mathbf{k}',\nu')}(q) = D^X(q).
    \end{align}
\end{subequations}
%
The flow equations for the screened interactions, Yukawa couplings, and rest functions immediately follow
%
\begin{subequations}
    \begin{align}
        &\partial_\L D^X(q) = \left[D^X(q)\right]^2\int_p h^X_p(q)\left[\widetilde{\partial}_\L \chi^{0,X}_p(q)\right]h^X_p(q),\\
        &\partial_\L h^X_k(q) = \int_p \varphi^X_{k,p}(q)\left[\widetilde{\partial}_\L \chi^{0,X}_p(q)\right]h^X_p(q),\\
        &\partial_\L \mathcal{R}^X_{k,k'}(q) = \int_p \varphi^X_{k,p}(q)\left[\widetilde{\partial}_\L \chi^{0,X}_p(q)\right]\varphi^X_{p,k'}(q),
    \end{align}
\end{subequations}
%
where $\varphi^X$ has been defined in Eq.~\eqref{eq_SBE_fRG: varphi def}. In Appendix~\ref{app: symm V} one can find the symmetry properties of the screened interactions, Yukawa couplings, and rest functions. The above flow equations can be alternatively derived by introducing three bosonic fields in the Hubbard action via as many Hubbard-Stratonovich transformations, and running an fRG flow for a mixed boson-fermion system (for more details see Appendix~\ref{app: SBE_fRG_app}).
%
\subsection{Plain fRG}
%
For the plain fRG, the initial condition $V^{\Lambda_\mathrm{ini}}=U$ translates into $\phi^{X}_{k,k'}(q)=U$, which implies
%
\begin{subequations}
    \begin{align}
        &D^{X,{\Lambda_\mathrm{ini}}}(q)=U,\\
        &h^{X,{\Lambda_\mathrm{ini}}}_k(q)=1,\\
        &\mathcal{R}^{X,{\Lambda_\mathrm{ini}}}_{k,k'}(q)=0.
    \end{align}
\end{subequations}
%
Furthermore, in the 1-loop, Katanin, 2-loop, and multiloop approximations, the fully $U$-irreducible term $\Lambda_{U\mathrm{irr}}$ is set to the sum of the three rest functions, lacking any fully two-particle irreducible contribution.
%
\subsection{\texorpdfstring{DMF\textsuperscript2RG}{DMF2RG}}
\label{eq_SBE_fRG: DMF2RG initial conditions}
%
Within the DMF\textsuperscript{2}RG, one has to apply the parametrization in Eq.~\eqref{eq_SBE_fRG: channel decomp SBE} also to the impurity vertex, that is
%
\begin{equation}
    \begin{split}
        V^\mathrm{imp}(\nu_1',\nu_2',\nu_1) = &\Lambda_{U\mathrm{irr}}^\mathrm{imp}(\nu_1',\nu_2',\nu_1) -2U \\
        &+ \frac{1}{2}\phi^{m,\mathrm{SBE},\mathrm{imp}}_{\nu_{ph},\nu_{ph}'}(\nu_1-\nu_1') + \frac{1}{2}\phi^{c,\mathrm{SBE},\mathrm{imp}}_{\nu_{ph},\nu_{ph}'}(\nu_1-\nu_1') \\
        &+ \phi^{m,\mathrm{SBE},\mathrm{imp}}_{\nu_{\overline{ph}},\nu_{\overline{ph}}'}(\nu_2'-\nu_1) \\
        &+ \phi^{p,\mathrm{SBE},\mathrm{imp}}_{\nu_{pp},\nu_{pp}'}(\nu_1'+\nu_2'),
    \end{split}
\end{equation}
%
where the definitions of the frequencies $\nu_{ph}$, $\nu_{ph}'$, $\nu_{\overline{ph}}$, $\nu_{\overline{ph}}'$, $\nu_{pp}$, and $\nu_{pp}'$ can be read off from the frequency components of Eq.~\eqref{eq_methods: k k' pp ph phx}. The impurity $U$-reducible terms can be written as
%
\begin{equation}
    \phi^{X,\mathrm{SBE},\mathrm{imp}}_{\nu\nu'}(\Omega)=h^{X,\mathrm{imp}}_\nu(\Omega) \, D^{X,\mathrm{imp}}(\Omega) \, h^{X,\mathrm{imp}}_{\nu'}(\Omega),
\end{equation}
%
where the impurity Yukawa couplings and screened interactions can be computed from the momentum independent version of Eqs.~\eqref{eq_SBE_fRG: yukawas from G3} and \eqref{eq_SBE_fRG: D from chi}, after the DMFT self-consistent loop has converged. The $U$-irreducible contribution is then obtained by subtracting the $\phi^{X,\mathrm{SBE},\mathrm{imp}}$ from the impurity vertex. In principle, one can invert three Bethe-Salpeter equations to extract the local rest functions from $\Lambda^\mathrm{imp}_{U\mathrm{irr}}$. However, this can be avoided by assigning to the flowing $k$-dependent rest functions only those contributions arising on top of the local ones.
The DMF\textsuperscript{2}RG initial conditions thus read as
%
\begin{subequations}
    \begin{align}
        &D^{X,{\Lambda_\mathrm{ini}}}(q)=D^{X,\mathrm{imp}}(\Omega),\\
        &h^{X,{\Lambda_\mathrm{ini}}}_k(q)=h^{X,\mathrm{imp}}_\nu(\Omega),\\
        &\mathcal{R}_{k,k'}^{X,{\Lambda_\mathrm{ini}}}(q)=0.
    \end{align}
\end{subequations}
%
Within the 1-loop, Katanin, 2-loop, and multiloop approximations, the DMF\textsuperscript{2}RG $U$-irreducible vertex consists of two terms: a non-flowing one, accounting \emph{also} for the local fully two-particle irreducible contributions, and a flowing one, given by the sum of the three rest functions, consisting of nonlocal two-particle reducible but $U$-irreducible corrections.
%
\subsection{Results at half filling}
%
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{3d_chis_U16_bis.png}
\caption{Magnetic (left) and charge (right) susceptibilities at zero frequency as functions of the spatial momentum $\mathbf{q}$ for $U=16t$ and $T=0.286t$. In the left panel the black dashed line indicates the values taken by $\chi^m(\mathbf{q},0)$ along the path in the Brillouin zone considered in Fig.~\ref{fig_SBE_fRG: chim rest no rest}.}
\label{fig_SBE_fRG: 3D chis HF}
\end{figure}
%
In this section we test the validity of the SBE decomposition on the Hubbard model from moderate to strong coupling by means of the DMF\textsuperscript{2}RG. In order to further simplify the numerics, we project the dependencies of the Yukawa couplings and rest functions on the secondary momenta $\mathbf{k}$ and $\mathbf{k}'$ onto $s$-wave form factors, so that
%
\begin{subequations}
    \begin{align}
        &h^X_{(\mathbf{k},\nu)}(q)\simeq h^X_\nu(q),\\
        &\mathcal{R}^X_{(\mathbf{k},\nu),(\mathbf{k}',\nu')}(q)\simeq \mathcal{R}^X_{\nu\nu'}(q).
    \end{align}
\end{subequations}
%
The flow equations therefore simplify to
%
\begin{subequations}
    \begin{align}
        &\partial_\L D^X(q) = \left[D^X(q)\right]^2\,T\sum_\omega h^X_\omega(q)\left[\widetilde{\partial}_\L \chi^{0,X}_\omega(q)\right]h^X_\omega(q),\\
        &\partial_\L h^X_\nu(q) = T\sum_\omega \varphi^X_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,X}_\omega(q)\right]h^X_\omega(q),\\
        &\partial_\L \mathcal{R}^X_{\nu\nu'}(q) = T\sum_\omega \varphi^X_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,X}_\omega(q)\right]\varphi^X_{\omega\nu'}(q),
    \end{align}
\end{subequations}
%
where we have projected $\varphi^X$ and the bubbles onto $s$-wave form factors, that is
%
\begin{subequations}
    \begin{align}
        &\chi_{\nu}^{0,X}(q)=\int_\mathbf{k} \chi^{0,X}_{(\mathbf{k},\nu)}(q),\label{eq_SBE_fRG: chi0 nu}\\
        &\varphi_{\nu\nu'}^X(q)=\int_{\mathbf{k},\mathbf{k}'} \varphi^X_{(\mathbf{k},\nu),(\mathbf{k}',\nu')}(q).
    \end{align}
\end{subequations}
%
We notice that in some parameter ranges the Yukawa couplings and, more importantly, the rest functions may acquire a strong dependence on $\mathbf{k}$ and $\mathbf{k}'$. In this case, the $s$-wave approximation is no longer justified. However, in this section we will focus on the half-filled Hubbard model at fairly high temperature, where the dependencies of the vertices on the secondary momenta are expected to be weak. In the rest of the chapter we will neglect the flow of the self-energy, which we keep fixed at the DMFT value. As far as the computation of the DMFT initial conditions is concerned, we use ED with 4 bath sites as the impurity solver.
After the self-consistent loop has converged, we calculate the impurity three- and four-point Green's functions as well as the susceptibilities from their Lehmann representation~\cite{Tagliavini2018}, and extract the respective Yukawa couplings, screened interactions, and the $U$-irreducible DMFT vertex. In this section we focus on the half-filled Hubbard model with only nearest neighbor hoppings ($t'=t''=0$) for different couplings and temperatures. For the present choice of parameters particle-hole symmetry is realized. In the results below, the flow of the rest functions has been neglected, unless explicitly stated otherwise. We take the hopping $t$ as the energy unit.
%
\subsubsection{Susceptibilities}
%
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.495\textwidth}
\centering
\includegraphics[width=\textwidth]{gener_chiU4.png}
\caption{}
\label{fig_SBE_fRG: gen chis HF U4}
\end{subfigure}
\begin{subfigure}[t]{0.495\textwidth}
\centering
\includegraphics[width=\textwidth]{gener_chiU16.png}
\caption{}
\label{fig_SBE_fRG: gen chis HF U16}
\end{subfigure}
\caption{\emph{Panel (a)}: Generalized magnetic (left column) and charge (right column) susceptibilities at $U=4t$ and $T=0.250t$, obtained from the impurity model (upper row) and from the DMF\textsuperscript{2}RG (lower row). \emph{Panel (b)}: same as (a), with $U=16t$ and $T=0.286t$.}
\label{fig_SBE_fRG: gen chis HF}
\end{figure}
%
We start by testing the validity of the SBE decomposition at strong coupling, focusing on the physical response functions. In Fig.~\ref{fig_SBE_fRG: 3D chis HF}, we show the zero frequency magnetic and charge susceptibilities, extracted from the computed screened interactions $D^m$ and $D^c$, as functions of the lattice momentum for $U=16t$ and $T=0.286t$, that is, slightly above the N\'eel temperature predicted by DMFT at this coupling (see also the leftmost panel of Fig.~\ref{fig_SBE_fRG: chim rest no rest}). We notice that particle-hole symmetry implies $D^p(\mathbf{q},\Omega)=D^c(\mathbf{q}+\mathbf{Q},\Omega)$, with $\mathbf{Q}=(\pi,\pi)$. The 1-loop truncation of the DMF\textsuperscript{2}RG does not substantially suppress the N\'eel temperature $T_N$ predicted by the DMFT, resulting in large peaks of $\chi^m(\mathbf{q},0)$ at $\mathbf{q}=\mathbf{Q}$. It is remarkable, however, that within the DMF\textsuperscript{2}RG $T_N$ is much smaller than the one plain fRG would give for the present coupling, namely $T_N\propto U$ for large $U$. The charge susceptibility $\chi^c$ is strongly suppressed at strong coupling, vanishing at $\mathbf{q}=\mathbf{0}$, due to the fully insulating nature of the system at the coupling considered here. Indeed, $U=16t$ lies far above the critical coupling at which the Mott metal-insulator transition occurs in DMFT ($U_\mathrm{MIT}(T=0)\simeq12t$). Following the analysis in Ref.~\cite{Chalupa2021}, it is instructive to analyze the evolution of the generalized susceptibilities, introduced in Sec.~\ref{subs_methods: DMFT susceptibilities}, as the coupling is tuned across the Mott transition.
They are in general defined as
%
\begin{subequations}
    \begin{align}
        &\chi^m_{k,k'}(q) = -\chi^{0,ph}_k(q)\delta_{k,k'} + \chi^{0,ph}_k(q)V^m_{k,k'}(q)\chi^{0,ph}_{k'}(q),\\
        &\chi^c_{k,k'}(q) = -\chi^{0,ph}_k(q)\delta_{k,k'} - \chi^{0,ph}_k(q)V^c_{k,k'}(q)\chi^{0,ph}_{k'}(q),\\
        &\chi^p_{k,k'}(q) = \chi^{0,pp}_k(q)\delta_{k,k'} + \chi^{0,pp}_k(q)V^p_{k,k'}(q)\chi^{0,pp}_{k'}(q),
    \end{align}
    \label{eq_SBE_fRG: gen chi k}
\end{subequations}
%
\hspace{-1.5mm}where $\delta_{k,k'}=\beta\delta_{\nu\nu'}\delta(\mathbf{k}-\mathbf{k}')$, and $V^X$ is defined as in Eq.~\eqref{eq_SBE_fRG: Vertices X}. The physical susceptibilities are then obtained from $\chi^X(q)=\int_{k,k'}\chi^X_{k,k'}(q)$. We notice that in a conserving approximation (such as the multiloop fRG) the $\chi^X(q)$ calculated with the above ``post-processing'' formula coincide with the ones extracted from the screened interactions $D^X(q)$. However, for the 1-loop truncation employed here, the two calculations might yield different results. In the following, we project the $\mathbf{k}$ and $\mathbf{k}'$ dependencies of the generalized susceptibilities onto $s$-wave form factors, that is, we consider
%
\begin{subequations}
    \begin{align}
        &\chi^m_{\nu\nu'}(q) = -\beta\chi^{0,ph}_\nu(q)\delta_{\nu\nu'} + \chi^{0,ph}_\nu(q)V^m_{\nu\nu'}(q)\chi^{0,ph}_{\nu'}(q),\\
        &\chi^c_{\nu\nu'}(q) = -\beta\chi^{0,ph}_\nu(q)\delta_{\nu\nu'} - \chi^{0,ph}_\nu(q)V^c_{\nu\nu'}(q)\chi^{0,ph}_{\nu'}(q),\\
        &\chi^p_{\nu\nu'}(q) = +\beta\chi^{0,pp}_\nu(q)\delta_{\nu\nu'} + \chi^{0,pp}_\nu(q)V^p_{\nu\nu'}(q)\chi^{0,pp}_{\nu'}(q),
    \end{align}
    \label{eq_SBE_fRG: gen chi nu}
\end{subequations}
%
with $\chi^{0,X}_\nu(q)$ as defined in Eq.~\eqref{eq_SBE_fRG: chi0 nu}, and $V^X_{\nu\nu'}(q)=\int_{\mathbf{k},\mathbf{k}'}V_{k,k'}^X(q)$. In the following, we will focus on the generalized susceptibilities at zero bosonic frequency and for two coupling values, $U=4t$ and $U=16t$, below and above the Mott transition. The corresponding temperatures are chosen to be close to the DMFT N\'eel temperature for the given coupling, that is, $T=0.250t$ and $T=0.286t$. The corresponding results are shown in Fig.~\ref{fig_SBE_fRG: gen chis HF}, where we also plot the corresponding generalized susceptibilities for the self-consistent impurity problem, denoted as $\chi^X_{\mathrm{aim},\nu\nu'}(\Omega)$. At moderate coupling (Fig.~\ref{fig_SBE_fRG: gen chis HF U4}), the leading structure of the charge susceptibility, both for the AIM and DMF\textsuperscript{2}RG results, is given by a \emph{positive} diagonal decaying to zero for large $\nu=\nu'$, arising from the bubble term $-\chi^{0,ph}_\nu(q)$, built upon a metallic Green's function. At the AIM level, the role of vertex corrections appears to be marginal in both channels, with small negative (positive) off-diagonal elements, leading to an overall mild suppression (enhancement) of the physical charge (magnetic) susceptibility. While for the charge channel the nonlocal DMF\textsuperscript{2}RG corrections are essentially irrelevant, in the magnetic one they lead to a strong enhancement of $\chi^m_{\nu\nu'}(\mathbf{Q},0)$, signaling strong antiferromagnetic correlations. In the Mott phase, the picture changes drastically, due to large vertex corrections. In the magnetic channel, they strongly enhance the physical susceptibility even at the AIM level, overtaking the diagonal term. This is a clear hallmark of the formation of local magnetic moments, resulting in a large magnetic response at zero frequency, following the Curie-Weiss law.
By contrast, in the charge channel the vertex strongly suppresses the physical response, flipping the sign of the diagonal entries up to frequencies $|\nu|=|\nu'|\sim U$. In more detail, these negative values are responsible for the freezing of charge fluctuations in the deep insulating regime~\cite{Gunnarsson2016,Chalupa2021}. This observation can be interpreted as the charge counterpart of the local moment formation in the magnetic sector. The negative diagonal entries are in general related to negative eigenvalues of the generalized susceptibility. Increasing the coupling $U$, whenever one of the eigenvalues crosses zero, the matrix $\chi^c_{\nu\nu'}(0)$ becomes non-invertible, leading to divergences of the irreducible vertex function~\cite{Schaefer2013,Gunnarsson2016,Chalupa2020,Springer2020}, which are in turn related to the multivaluedness of the Luttinger-Ward functional~\cite{Kozik2014,Vucicevic2018}.
%
\subsubsection{Role of the rest functions}
%
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{rest_vs_norest.png}
\caption{Left panel: DMFT N\'eel temperature and location of the parameters considered in the remaining three panels. Other panels: magnetic susceptibility at zero frequency along a path in the Brillouin zone (see Fig.~\ref{fig_SBE_fRG: 3D chis HF} for its definition) calculated with and without considering the flow of the rest functions. The coupling values considered are, from left to right, $U=4t$, $8t$, and $16t$.}
\label{fig_SBE_fRG: chim rest no rest}
\end{figure}
%
\begin{figure}[b!]
\centering
\includegraphics[width=0.6\textwidth]{xi_vs_T.png}
\caption{Inverse magnetic susceptibility at zero frequency and $\mathbf{q}=\mathbf{Q}$ as a function of temperature for a coupling value of $U=8t$. The dashed lines correspond to linear fits of the data, whose extrapolation yields the N\'eel temperature.}
\label{fig_SBE_fRG: xi vs T}
\end{figure}
%
All the results presented so far have been obtained neglecting the flow of the rest functions. While this approximation significantly reduces the computational cost, the extent of its validity has to be verified in different coupling regimes. We recall that by considering the flow of $\mathcal{R}^X$ we recover the results obtained by conventional implementations of the fRG (see, for example, Refs.~\cite{Vilardi2017,Vilardi2019}). In Fig.~\ref{fig_SBE_fRG: chim rest no rest}, we analyze the impact of including or neglecting the rest functions on the magnetic susceptibilities at coupling values of $U=4t$, $8t$, and $16t$ and temperatures close to the corresponding $T_N$ as obtained in the DMFT, namely $T=0.250t$, $0.444t$, and $0.286t$, respectively (see the leftmost panel for the location of these points in the $(U,T)$ phase diagram). The corrections due to the inclusion of the $\mathcal{R}^X$ are rather marginal for all couplings considered, resulting in only a slight enhancement of magnetic correlations. As a consequence, the N\'eel temperature, which is finite because the 1-loop truncation considered here violates the Mermin-Wagner theorem, is only mildly affected by the rest functions. This can be observed in Fig.~\ref{fig_SBE_fRG: xi vs T}, where we plot the inverse magnetic susceptibility at $\mathbf{q}=\mathbf{Q}$ and zero frequency as a function of the temperature for $U=8t$. We notice that the inclusion of the $\mathcal{R}^X$ yields a N\'eel temperature of $T_N=0.4042t$, and the one obtained without rest functions lies very close to it, $T_N=0.3986t$.
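The extrapolation underlying these values amounts to a linear fit of the inverse susceptibility as a function of temperature; a minimal Python sketch (with hypothetical data points, not our computed susceptibilities, which are shown in Fig.~\ref{fig_SBE_fRG: xi vs T}) reads:
\begin{verbatim}
import numpy as np

# Minimal sketch of the T_N extraction: fit 1/chi_m(Q, 0) linearly in T
# and extrapolate to its zero crossing. The data points below are
# hypothetical placeholders; the quoted T_N values come from the flow.
T = np.array([0.42, 0.44, 0.46, 0.48, 0.50])
inv_chi = np.array([0.010, 0.022, 0.035, 0.047, 0.060])

slope, intercept = np.polyfit(T, inv_chi, 1)
print(round(-intercept / slope, 4))    # estimated Neel temperature
\end{verbatim}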
The effects of the rest functions on the charge and pairing susceptibilities (not shown) are negligible. In Fig.~\ref{fig_SBE_fRG: rests}, we plot the frequency structure of the rest functions for the three channels at zero bosonic frequency, for $U=4t$ and $U=16t$ and for the same temperatures considered in Fig.~\ref{fig_SBE_fRG: chim rest no rest}. The decay to zero for large frequencies $\nu$, $\nu'$ is clear, particularly at strong coupling (lower row), where the $\mathcal{R}^X$ take extremely large values at the lowest Matsubara frequencies. However, in the insulating regime the Green's function is suppressed at the smallest Matsubara frequencies, strongly reducing the effect of the large values of the rest functions on the physical observables.
%
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{rests_rotated.png}
\caption{Fermionic frequency dependence of the magnetic (left column), charge (central column), and pairing (right column) rest functions for zero bosonic frequency, and for couplings $U=4t$ (upper row) and $U=16t$ (lower row). The temperatures are the same as in Fig.~\ref{fig_SBE_fRG: chim rest no rest}.}
\label{fig_SBE_fRG: rests}
\end{figure}
%
%
\subsection{Finite doping: fluctuation diagnostics of \texorpdfstring{$d$-wave}{d-wave} correlations}
%
In this section, we show results for the doped Hubbard model at fairly low temperature. The parameter set we consider is a hole doping of $p=1-n=0.18$, $T=0.044t$, a next-nearest neighbor hopping of $t'=-0.2t$, and $U=8t$. Since at finite doping and low temperatures the Hubbard model is expected to display sizable $d$-wave correlations, we extend our form factor expansion to include them. Considering the pairing channel, we notice that the $U$-reducible term can have a finite $d$-wave coefficient thanks to the Yukawa coupling
%
\begin{equation}
    h^{p}_{(\mathbf{k},\nu)}(q)\sim h^{p,s}_{\nu}(q) + h^{p,d}_{\nu}(q) d_\mathbf{k},
\end{equation}
%
with $d_\mathbf{k}=\cos k_x-\cos k_y$. However, due to the locality of the bare interaction, the function $h_\nu^{p,d}(\mathbf{q},\Omega)$ vanishes identically for $\mathbf{q}=\mathbf{0}$, and therefore does not contribute to a possible $d$-wave pairing state. For this reason we retain only the $s$-wave contribution to the pairing Yukawa coupling. What would actually drive the formation of a $d$-wave superconducting gap is the rest function $\mathcal{R}^p$. We expand the latter as
%
\begin{equation}
    \mathcal{R}^p_{k,k'}(q)\simeq\mathcal{R}^{p}_{\nu\nu'}(q)-\mathcal{D}_{\nu\nu'}(q)d_{\mathbf{k}} d_{\mathbf{k}'},
    \label{eq_SBE_fRG: expansion of Rp}
\end{equation}
%
where we have neglected possible $s$-$d$-wave mixing terms, and the minus sign has been chosen for convenience. In essence, the function $\mathcal{D}_{\nu\nu'}(q)$, which we refer to as the $d$-wave pairing channel, is given by diagrams that are two-particle-$pp$ reducible but $U$-irreducible and, at the same time, exhibit a $d$-wave symmetry in the dependence on $\mathbf{k}$ and $\mathbf{k}'$.
The flow equation for $\mathcal{D}_{\nu\nu'}(q)$ reads
%
\begin{equation}
\partial_\L \mathcal{D}_{\nu\nu'}(q)= T\sum_\omega V^{d}_{\nu\omega}(q)\left[\widetilde{\partial}_\L\chi^{0,d}_\omega(q)\right]V^{d}_{\omega\nu'}(q),
\label{eq_SBE_fRG: flow eq D}
\end{equation}
%
with
%
\begin{equation}
\chi^{0,d}_\nu(q)=\int_{\mathbf{k}}d_\mathbf{k}^2\,\chi^{0,pp}_{(\mathbf{k},\nu)}(q),
\end{equation}
%
and
%
\begin{equation}
V^{d}_{\nu\nu'}(q)=\int_{\mathbf{k},\mathbf{k}'}d_\mathbf{k} d_{\mathbf{k}'}\,V^p_{(\mathbf{k},\nu),(\mathbf{k}',\nu')}(q).
\end{equation}
%
The flow equations of the other quantities remain unchanged, except that the contribution in Eq.~\eqref{eq_SBE_fRG: expansion of Rp} has to be considered in the calculation of the functions $V^X_{\nu\nu'}(q)$, with $X=m$, $c$, or $p$. In this section we neglect the flow of all the rest functions except $\mathcal{D}$.
%
\begin{figure}[t]
\centering
\includegraphics[width= 0.95\textwidth]{dwave_susceptibilities_3D_tris.png}
\caption{From left to right: magnetic, charge, $s$- and $d$-wave pairing susceptibilities at zero bosonic frequency as functions of the lattice momentum $\mathbf{q}$, for $p=0.18$, $t'=-0.2t$, $T=0.044t$, and $U=8t$, determined at the stopping scale ${\L_c}=0.067t$.}
\label{fig_SBE_fRG: dwave 3d chis}
\end{figure}
%
In the parameter regime considered, and within the 1-loop truncation employed, the system is unstable towards the formation of incommensurate magnetic order. Thus, the flow needs to be stopped due to the divergence of $D^m(\mathbf{q},0)$ at $\mathbf{q}=(\pi-2\pi\eta,\pi)$ (and symmetry-related wave vectors). We therefore arbitrarily define the stopping scale ${\L_c}$ as the one at which $D^m$ exceeds the value of $8\times 10^3t$, corresponding to a magnetic susceptibility of $\sim 120t^{-1}$. We obtain ${\L_c}=0.067t$. While the choice of a parameter regime close to a magnetic instability is crucial to detect \emph{sizable} $d$-wave pairing correlations, we expect that an improved truncation (such as the multiloop extension) would remove the divergence in the magnetic channel, allowing the flow to be continued down to $\L=0$ and thereby presumably enhancing the $d$-wave susceptibility. In Fig.~\ref{fig_SBE_fRG: dwave 3d chis}, we show the magnetic, charge, $s$- and $d$-wave pairing susceptibilities, computed at the stopping scale. While the first three have been extracted from the bosonic propagators $D^X$ (see Eq.~\eqref{eq_SBE_fRG: D from chi}), the $d$-wave pairing susceptibility has been calculated with the ``post-processing'' formula
%
\begin{equation}
\chi^d(q) = T\sum_\nu\chi^{0,d}_\nu(q) + T^2\sum_{\nu,\nu'}\chi^{0,d}_\nu(q)\,V^d_{\nu\nu'}(q) \,\chi^{0,d}_{\nu'}(q).
\end{equation}
%
The magnetic susceptibility displays very large values in the form of peaks at the wave vectors $(\pi-2\pi\eta,\pi)$ and symmetry-related ones ($\eta\simeq0.08$) due to the incommensurate antiferromagnetic instability. In contrast, the charge and $s$-wave pairing response functions are rather suppressed, with $\chi^c(q)$ exhibiting peaks at $\mathbf{q}=(\pi,0)$ (and symmetry-related wave vectors), signaling \emph{very mild} charge stripe correlations. Finally, $\chi^d(q)$, although not particularly large, presents a well-defined peak at $\mathbf{q}=\mathbf{0}$ and is by far the second largest response function.
%
\begin{figure}[t]
\centering
\includegraphics[width= 0.65\textwidth]{diagrams_dwave_ch.png}
\caption{Diagrammatic representation of the two boson contributions to the flow equation of $\mathcal{D}_{\nu\nu'}(q)$.
Wavy (dashed) lines represent magnetic (charge) screened interactions, and solid lines fermionic Green's functions. The ticked lines indicate single-scale propagators.}
\label{fig_SBE_fRG: dwave diagrams}
\end{figure}
%
Similarly to what has been done in the \emph{fluctuation diagnostics} for the self-energy~\cite{Gunnarsson2015,Krien2020}, it is instructive to analyze the different bosonic fluctuations contributing to the formation of a \emph{sizable} $d$-wave pairing channel $\mathcal{D}$. The function $V^d_{\nu\nu'}(q)$ entering the flow equation of $\mathcal{D}$ can be written as
%
\begin{equation}
\begin{split}
V^d_{\nu\nu'}(\mathbf{q},\Omega)= -L^{d,m}_{\nu\nu'}(\Omega) -L^{d,c}_{\nu\nu'}(\Omega) -\mathcal{D}_{\nu\nu'}(\mathbf{q},\Omega),
\end{split}
\end{equation}
%
where we have defined
%
\begin{subequations}
\begin{align}
L^{d,m}_{\nu\nu'}(\Omega)=&\phantom{+}\frac{1}{2} \phi^{m,\mathrm{SBE},d}\left(\rnddo{\Omega}+\rndup{\nu+\nu'},\rndup{\Omega}-\rndup{\nu+\nu'};\nu'-\nu\right)\nonumber\\
&+\phi^{m,\mathrm{SBE},d}\left(\rndup{\nu-\nu'+\Omega},\rndup{\nu'-\nu+\Omega};-\nu-\nu' + \Omega\,\mathrm{mod}\,2\right), \\
L^{d,c}_{\nu\nu'}(\Omega)=&\phantom{+}\frac{1}{2} \phi^{c,\mathrm{SBE},d}\left(\rnddo{\Omega}+\rndup{\nu+\nu'},\rndup{\Omega}-\rndup{\nu+\nu'};\nu'-\nu\right),
\end{align}
\end{subequations}
%
with
%
\begin{equation}
\phi^{X,\mathrm{SBE},d}(\nu,\nu';\Omega) = -\int_\mathbf{q}\frac{\cos q_x+\cos q_y}{2}\, \phi^{X,\mathrm{SBE}}_{\nu\nu'}(\mathbf{q},\Omega).
\end{equation}
%
We can then split the different terms contributing to~\eqref{eq_SBE_fRG: flow eq D} as
%
\begin{subequations}
\begin{align}
&\partial_\L\mathcal{D}^{mm}_{\nu\nu'}(q)=T\sum_\omega L^{d,m}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] L^{d,m}_{\omega\nu'}(q), \label{eq_SBE_fRG: D mm}\\
%
&\partial_\L\mathcal{D}^{cc}_{\nu\nu'}(q)=T\sum_\omega L^{d,c}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] L^{d,c}_{\omega\nu'}(q), \label{eq_SBE_fRG: D cc}\\
%
&\partial_\L\mathcal{D}^{mc}_{\nu\nu'}(q)=T\sum_\omega L^{d,m}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] L^{d,c}_{\omega\nu'}(q) +T\sum_\omega L^{d,c}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] L^{d,m}_{\omega\nu'}(q), \label{eq_SBE_fRG: D mc}\\
%
&\partial_\L\mathcal{D}^{N_b\geq3}_{\nu\nu'}(q)=\partial_\L\mathcal{D}_{\nu\nu'}(q)-\partial_\L\mathcal{D}^{mm}_{\nu\nu'}(q)-\partial_\L\mathcal{D}^{cc}_{\nu\nu'}(q)-\partial_\L\mathcal{D}^{mc}_{\nu\nu'}(q)\nonumber\\
&\phantom{\partial_\L\mathcal{D}^{N_b\geq3}_{\nu\nu'}(q)}=T\sum_\omega \mathcal{D}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] \mathcal{D}_{\omega\nu'}(q) +\sum_{X=m,c} T\sum_\omega \mathcal{D}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] L^{d,X}_{\omega\nu'}(q) \nonumber\\
&\hskip 2.5cm +\sum_{X=m,c} T\sum_\omega L^{d,X}_{\nu\omega}(q)\left[\widetilde{\partial}_\L \chi^{0,d}_\omega(q)\right] \mathcal{D}_{\omega\nu'}(q). \label{eq_SBE_fRG: D Nb>=3}
\end{align}
\label{eq_SBE_fRG: D contributions}
\end{subequations}
%
A diagrammatic representation of the flow equations of the first three terms, $\mathcal{D}^{mm}$, $\mathcal{D}^{cc}$, and $\mathcal{D}^{mc}$, is given in Fig.~\ref{fig_SBE_fRG: dwave diagrams}. They represent two boson processes, also known as Aslamazov-Larkin diagrams.
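%
The bookkeeping behind Eq.~\eqref{eq_SBE_fRG: D contributions} can be summarized in a few lines of Python. The sketch below performs a single Euler step of the flow, accumulating the $mm$, $cc$, $mc$, and $N_b\geq3$ pieces separately; all input arrays are random placeholders for the quantities at the current scale, and for $\mathcal{D}=0$ the $N_b\geq3$ remainder vanishes identically, consistent with Eq.~\eqref{eq_SBE_fRG: D Nb>=3}.
\begin{verbatim}
import numpy as np

def diagnostics_step(Lm, Lc, D, dchi, T, dL):
    """One Euler step of Eq. (D contributions) at fixed q.
    Lm, Lc, D: (n, n) frequency matrices; dchi: (n,) single-scale bubble."""
    bub = np.diag(dchi)                 # tilde-partial_Lambda chi^{0,d}_omega
    V = -(Lm + Lc + D)                  # V^d = -L^{d,m} - L^{d,c} - D
    dD = T * V @ bub @ V * dL           # total flow, Eq. (flow eq D)
    dD_mm = T * Lm @ bub @ Lm * dL      # two magnetic bosons
    dD_cc = T * Lc @ bub @ Lc * dL      # two charge bosons
    dD_mc = T * (Lm @ bub @ Lc + Lc @ bub @ Lm) * dL
    return dD, dD_mm, dD_cc, dD_mc, dD - dD_mm - dD_cc - dD_mc  # last: N_b >= 3

# placeholder inputs; with D = 0 the N_b >= 3 remainder is exactly zero
n = 8
rng = np.random.default_rng(0)
Lm, Lc = rng.normal(size=(n, n)), 0.2 * rng.normal(size=(n, n))
D, dchi = np.zeros((n, n)), rng.random(n)
parts = diagnostics_step(Lm, Lc, D, dchi, T=0.044, dL=1e-2)
print("max |dD_mm|, |dD_cc|, |dD_mc|, |dD_Nb>=3|:",
      [float(np.abs(p).max()) for p in parts[1:]])
\end{verbatim}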
Inspecting Eqs.~\eqref{eq_SBE_fRG: D mm},~\eqref{eq_SBE_fRG: D cc}, and~\eqref{eq_SBE_fRG: D mc}, one can notice that they are not fully reconstructed by the flow, as the functions $L^{d,m}$ and $L^{d,c}$ (and the self-energy) also depend on the fRG scale. This is a feature of the 1-loop truncation and is not present in the framework of the multiloop extension. It is nonetheless reasonable to interpret these contributions as two boson processes, and the remainder $\mathcal{D}^{N_b\geq3}$ as a higher order contribution in the number of exchanged bosons. In Fig.~\ref{fig_SBE_fRG: dwave diagnostics flow}, we plot the different contributions to $\mathcal{D}_{\nu\nu'}$ at $q=0$ and $\nu=\nu'=\nu_0\equiv\pi T$ as functions of the scale $\L$. We notice that in the early stages of the flow the largest contribution to $\mathcal{D}$ comes from magnetic two boson processes, confirming that magnetic fluctuations provide the seed for the formation of $d$-wave pairing in the 2D Hubbard model, as found in other fRG studies~\cite{Halboth2000_PRL,Husemann2009,Katanin2009,Vilardi2017,Vilardi2019}. Moreover, the multiboson term ($\mathcal{D}^{N_b\geq3}$) develops at smaller scales compared to the two boson ones. At the same time, it increases considerably when approaching the stopping scale ${\L_c}$, overtaking the other contributions. In general, arbitrarily close to a thermodynamic instability towards a $d$-wave pairing state, the $N_b\geq\bar{N}$ boson contribution is always larger than all the $N_b<\bar{N}$ ones, for every finite $\bar{N}$~\cite{Bonetti2021}. In the present parameter region we indeed observe $\mathcal{D}^{N_b\geq3}>\mathcal{D}^{mm},\mathcal{D}^{cc},\mathcal{D}^{mc}$, which means that an important precondition for the onset of a thermodynamic superconducting instability has already been realized.
%
\begin{figure}[t]
\centering
\includegraphics[width= 0.65\textwidth]{dwave_flow.png}
\caption{Different terms contributing to the $d$-wave pairing channel $\mathcal{D}_{\nu\nu'}(q)$, as defined in Eq.~\eqref{eq_SBE_fRG: D contributions}, at $q=(\mathbf{0},0)$ and $\nu=\nu'=\nu_0\equiv\pi T$ as functions of the scale $\L$. The total flow of $\mathcal{D}$ is also reported for comparison (red stars).}
\label{fig_SBE_fRG: dwave diagnostics flow}
\end{figure}
%
\chapter{Collective modes of metallic spiral magnets}
\label{chap: low energy spiral}
%
In this chapter, we present a detailed analysis of the low-energy magnons of a spiral magnet. In particular, we show that, in contrast to the case of a N\'eel antiferromagnet, the SU(2) spin symmetry is broken down to $\mathbb{Z}_2$, giving rise to three Goldstone modes, corresponding to three gapless magnon branches. We focus on the case of \emph{coplanar} spiral order, which implies that two of the three magnon modes have the same dispersion. In particular, one finds one \emph{in-plane} and two \emph{out-of-plane} modes. We perform a low-energy expansion of the magnetic susceptibilities in the spiral magnetic state, and derive general expressions for the spin stiffnesses and spectral weights of the magnon excitations. We also show that they can alternatively be computed from the response to a gauge field. We prove that the equivalence of this approach with a low-energy expansion of the susceptibilities is enforced by a set of \emph{Ward identities}. Moreover, we analyze the size and the low-momentum, low-frequency dependence of the Landau damping of the Goldstone modes.
The understanding of the low-energy physics of a spiral magnet will be of fundamental importance for the next chapter, where a model for the pseudogap phase is presented in terms of short-range spiral order. This chapter is organized as follows. In Sec.~\ref{sec_low_spiral: Ward Identities}, we derive the local Ward identities that enforce the equality of the spin stiffnesses and spectral weights computed by expanding the susceptibilities near their Goldstone poles and from the response to a gauge field. In this section, besides the spiral magnet, we also analyze the cases of a superconductor and of a N\'eel antiferromagnet. In Secs.~\ref{sec_low_spiral: MF spiral} and \ref{sec_low_spiral: RPA spiral}, we present the mean-field and random phase approximation (RPA) approaches to the spiral magnetic state and its collective excitations. In Sec.~\ref{sec_low_spiral: properties of Goldstones} we expand the RPA magnetic susceptibilities around their Goldstone poles and derive expressions for the spin stiffnesses, spectral weights, and Landau dampings of the three magnon modes. In Sec.~\ref{sec_low_spiral: explicit WIs} we compute the spin stiffnesses and spectral weights as response coefficients of the spiral magnet to an SU(2) gauge field and show the equivalence with the formulas of Sec.~\ref{sec_low_spiral: properties of Goldstones}. In Sec.~\ref{sec_low_spiral: Neel limit} we analyze the N\'eel limit, and in Sec.~\ref{sec_low_spiral: numerical results} we show a numerical evaluation of the results of the previous sections. The content of this chapter has been published in Refs.~\cite{Bonetti2022} and \cite{Bonetti2022_II}.
%
\section{Local Ward identities for spontaneously broken symmetries}
\label{sec_low_spiral: Ward Identities}
%
In this section we derive and discuss the Ward identities connected with a specific gauge symmetry which gets globally broken due to the onset of long-range order in the fermionic system. We focus on two specific symmetry groups: the (abelian) U(1) charge symmetry and the (nonabelian) SU(2) spin symmetry. Throughout the chapter we employ Einstein's convention, that is, a sum over repeated indices is implicit.
%
\subsection{U(1) symmetry}
%
We consider the generating functional of susceptibilities of the superconducting order parameter and gauge kernels, defined as:
%
\begin{equation}
\begin{split}
\mathcal{G}\left[A_\mu,J,J^*\right]= -\ln \int \!\mathcal{D}\psi\mathcal{D}\overline{\psi} e^{-\mathcal{S}\left[\psi,\overline{\psi},A_\mu\right]+(J^*,\psi_\downarrow\psi_\uparrow)+(J,\overline{\psi}_\uparrow\overline{\psi}_\downarrow)},
\end{split}
\label{eq_low_spiral: G-functional U(1)}
\end{equation}
%
where $\psi=(\psi_\uparrow,\psi_\downarrow)$ ($\overline{\psi}=(\overline{\psi}_\uparrow,\overline{\psi}_\downarrow)$) are Grassmann spinor fields corresponding to the annihilation (creation) of a fermion, $A_\mu$ is the electromagnetic field, $J$ ($J^*$) is a source field that couples to the superconducting order parameter $\overline{\psi}_\uparrow\overline{\psi}_\downarrow$ ($\psi_\downarrow\psi_\uparrow$), and $\mathcal{S}[\psi,\overline{\psi},A_\mu]$ is the action of the system. The index $\mu=0,1,\dots,d$, with $d$ the system dimensionality, runs over temporal ($\mu=0$) and spatial ($\mu=1,\dots,d$) components.
In the above equation and from now on, the expression $(A,B)$ is to be understood as $\int_x A(x)B(x)$, where $x$ is a collective variable consisting of a spatial coordinate $\bs{x}$ (possibly discrete, for lattice systems), and an imaginary time coordinate $\tau$, and $\int_x$ is a shorthand for $\int d^d\bs{x}\int_0^\beta d\tau$, with $\beta$ the inverse temperature. Even in the case of a lattice system, we define the gauge field over a \emph{continuous} space-time, so that expressions involving its gradients are well defined. We let the global U(1) charge symmetry be broken by an order parameter that, to make the treatment simpler, we assume to be local ($s$-wave)
%
\begin{equation}
\langle \psi_\downarrow(x)\psi_\uparrow(x)\rangle = \langle \overline{\psi}_\uparrow(x)\overline{\psi}_\downarrow(x)\rangle= \varphi_0,
\end{equation}
%
where the average is computed at zero source and gauge fields, and, without loss of generality, we choose $\varphi_0\in \mathbb{R}$. A generalization to systems with nonlocal order parameters, such as $d$-wave superconductors, is straightforward. The functional~\eqref{eq_low_spiral: G-functional U(1)} has been defined such that its second derivative with respect to $J$ and $J^*$ at zero $J$, $J^*$ and $A_\mu$ gives (minus) the susceptibility of the order parameter $\chi(x,x')$, while (minus) the gauge kernel $K_{\mu\nu}(x,x')$ can be extracted by differentiating twice with respect to the gauge field. In formulas,
%
\begin{subequations}
\begin{align}
&\chi(x,x')=-\frac{\delta^2\mathcal{G}}{\delta J(x)\delta J^*(x')}\bigg\rvert_{J=J^*=A_\mu=0},\\
&K_{\mu\nu}(x,x')=-\frac{\delta^2\mathcal{G}}{\delta A_\mu(x)\delta A_\nu(x')}\bigg\rvert_{J=J^*=A_\mu=0}.
\end{align}
\end{subequations}
%
Let us now consider the constraints that the U(1) gauge invariance imposes on the functional $\mathcal{G}$. Its action on the fermionic fields is
%
\begin{subequations}\label{eq_low_spiral: U(1) gauge transf on psis}
\begin{align}
&\psi(x) \to e^{i\theta(x)}\psi(x),\\
&\overline{\psi}(x) \to e^{-i\theta(x)}\overline{\psi}(x),
\end{align}
\end{subequations}
%
\noindent with $\theta(x)$ a generic function. Similarly, the external fields transform as
%
\begin{subequations}\label{eq_low_spiral: ext fields U(1) transformation}
\begin{align}
&J(x) \to J^\prime(x)= e^{2i\theta(x)}J(x),\\
&J^*(x) \to [J^{\prime}(x)]^*= e^{-2i\theta(x)}J^*(x),\\
& A_\mu(x) \to A_\mu^\prime(x)=A_\mu(x)-\partial_\mu \theta(x),
\end{align}
\end{subequations}
%
where $\partial_\mu =(i\partial_\tau,\nabla)$. In Eqs.~\eqref{eq_low_spiral: U(1) gauge transf on psis} and \eqref{eq_low_spiral: ext fields U(1) transformation} the spatial coordinate $\bs{x}$ of the spinors $\psi$ and $\overline{\psi}$, as well as that of the sources $J$ and $J^*$, may be a lattice coordinate, while the gauge field $A_\mu$ and the parameter $\theta$ are always defined over a continuous space. To keep the notation lighter, we always indicate the space-time coordinate as $x$, keeping in mind that its spatial component could have a different meaning depending on the field it refers to. For $\mathcal{G}$ to be invariant under a U(1) gauge transformation, it must not depend on $\theta(x)$:
%
\begin{equation}
\frac{\delta}{\delta \theta(x)}\mathcal{G}[A_\mu',J',(J^{\prime})^*]=0.
\label{eq_low_spiral: dG/dtheta(x) U(1)}
\end{equation}
%
Considering an infinitesimal transformation, that is, $|\theta(x)|\ll 1$, from Eqs.~\eqref{eq_low_spiral: ext fields U(1) transformation} and~\eqref{eq_low_spiral: dG/dtheta(x) U(1)}, we obtain
%
\begin{equation}
\partial_\mu \left(\frac{\delta\mathcal{G}}{\delta A_\mu(x)}\right)+2i\left[\frac{\delta\mathcal{G}}{\delta J(x)}J(x)-\frac{\delta\mathcal{G}}{\delta J^*(x)}J^*(x)\right]=0.
\label{eq_low_spiral: Ward identity G}
\end{equation}
%
We now consider the change of variables
%
\begin{subequations}
\begin{align}
& J(x)=J_{1}(x)+iJ_{2}(x),\\
& J^*(x)=J_{1}(x)-iJ_{2}(x),
\end{align}
\end{subequations}
%
such that $J_{1}(x)$ ($J_{2}(x)$) is a source field coupling to longitudinal (transverse) fluctuations of the order parameter, and we introduce the functional $\Gamma$, defined as the Legendre transform of $\mathcal{G}$,
%
\begin{equation}
\begin{split}
\Gamma[A_\mu,\phi_{1},\phi_{2}]=\sum_{a=1,2}&\int_x\phi_{a}(x) J_{a}(x) +\mathcal{G}[A_\mu,J_{1},J_{2}],
\end{split}
\end{equation}
%
where $\phi_{a}(x)=\frac{\delta\mathcal{G}[A_\mu,J_{1},J_{2}]}{\delta J_{a}(x)}$. The gauge kernel can be computed from $\Gamma$ as well:
%
\begin{equation}
K_{\mu\nu}(x,x')=-\frac{\delta^2 \Gamma}{\delta A_\mu(x) \delta A_\nu(x')}\Big\rvert_{\vec{\phi}=A_\mu=0},
\end{equation}
%
because, thanks to the Legendre transform properties, $\delta\Gamma/\delta A_\mu(x) = \delta\mathcal{G}/\delta A_\mu(x)$. In contrast, differentiating $\Gamma$ twice with respect to the fields $\phi_{a}$ returns the inverse correlator
%
\begin{equation}
C^{ab}(x,x')=-\frac{\delta^2\Gamma}{\delta\phi_{a}(x)\delta\phi_{b}(x')}\bigg\rvert_{\vec{\phi}=A_\mu=0},
\end{equation}
%
which obeys a reciprocity relation~\cite{NegeleOrland}
%
\begin{equation}
\int_{x^{\prime\prime}}C^{ac}(x,x^{\prime\prime})\chi^{cb}(x^{\prime\prime},x')=\delta_{ab}\delta(x-x'),
\label{eq_low_spiral: reciprocity relation}
\end{equation}
%
with the generalized susceptibility $\chi^{ab}(x,x')$, defined as
%
\begin{equation}
\chi^{ab}(x,x')=-\frac{\delta^2\mathcal{G}}{\delta J_{a}(x)\delta J_{b}(x')}\bigg\rvert_{J_{a}=A_\mu=0}.
\end{equation}
%
Eq.~\eqref{eq_low_spiral: Ward identity G} can be expressed in terms of $\Gamma$ as
%
\begin{equation}
\begin{split}
&\partial_\mu \left(\frac{\delta\Gamma}{\delta A_\mu(x)}\right) -2\left[\frac{\delta\Gamma}{\delta\phi_{1}(x)}\phi_{2}(x) -\frac{\delta\Gamma}{\delta\phi_{2}(x)}\phi_{1}(x)\right] =0.
\end{split}
\label{eq_low_spiral: Ward identity Gamma}
\end{equation}
%
Eq.~\eqref{eq_low_spiral: Ward identity Gamma} is an identity for the generating functional $\Gamma$ stemming from U(1) gauge invariance of the theory. Taking derivatives with respect to the fields, one can derive an infinite set of Ward identities. We are interested in the relation between the gauge kernel and the transverse inverse susceptibility $C^{22}(x,x')$. For this purpose, we differentiate Eq.~\eqref{eq_low_spiral: Ward identity Gamma} once with respect to $\phi_{2}(x')$ and once with respect to $A_\nu(x')$, and then set the fields to zero.
We obtain the set of equations
%
\begin{subequations}
\begin{align}
&-\partial_\mu \mathcal{C}_\mu^2(x,x')=2\varphi_0\, C^{22}(x,x'),\label{eq_low_spiral: WI 1}\\
&-\partial_\mu K_{\mu\nu}(x,x')=2\varphi_0\, \mathcal{C}_\nu^{2}(x,x'), \label{eq_low_spiral: WI 2}
\end{align}
\end{subequations}
%
where $\varphi_0=\langle\phi(x)\rangle=\langle\phi_{1}(x)\rangle=\langle\psi_\downarrow(x)\psi_\uparrow(x)\rangle$, and we have defined the quantity
%
\begin{equation}
\mathcal{C}_\mu^{a}(x,x')=-\frac{\delta^2\Gamma}{\delta A_\mu(x)\delta\phi_{a}(x')}\bigg\rvert_{\vec{\phi}=A_\mu=0}.
\end{equation}
%
Combining \eqref{eq_low_spiral: WI 1} and \eqref{eq_low_spiral: WI 2}, we obtain
%
\begin{equation}
\partial_\mu\partial_\nu K_{\mu\nu}(x,x')=4\varphi_0^2\,C^{22}(x,x').
\label{eq_low_spiral: WI for SC}
\end{equation}
%
Fourier transforming Eq.~\eqref{eq_low_spiral: WI for SC} and rotating to real frequencies, we have
%
\begin{equation}
-q_\mu q_\nu K_{\mu\nu}(q)=4\varphi_0^2 C^{22}(q),
\label{eq_low_spiral: WI for SC in q space}
\end{equation}
%
with $q=(\mathbf{q},\omega)$ a collective variable combining momentum and real frequency. We now define the superfluid stiffness $J_{\alpha\beta}$ and the uniform density-density susceptibility $\chi_n$\footnote{\label{note_low_spiral: Note_def_stiffnesses}The spin stiffnesses and dynamical susceptibilities (or density-density uniform susceptibility, for the superconductor) can be equivalently defined as the coefficients of a low-energy expansion of the transverse susceptibilities. Here, we choose to define them from the gauge kernels and show that, within a conserving approximation, the two definitions are equivalent.} as
%
\begin{subequations}
\begin{align}
&J_{\alpha\beta}\equiv-\lim_{\mathbf{q}\to\mathbf{0}}K_{\alpha\beta}(\mathbf{q},\omega=0) \label{eq_low_spiral: Jsc definition},\\
&\chi_n\equiv\lim_{\omega\to 0}K_{00}(\mathbf{q}=\mathbf{0},\omega),\label{eq_low_spiral: den-den chi SC}
\end{align}
\end{subequations}
%
where the minus sign in \eqref{eq_low_spiral: Jsc definition} has been introduced so that $J_{\alpha\beta}$ is positive definite. Notice that, even though the limits $\mathbf{q}\to\mathbf{0}$ and $\omega\to 0$ in Eq.~\eqref{eq_low_spiral: den-den chi SC} have been taken in the opposite order compared to what is conventionally done, they commute in an $s$-wave superconductor because of the absence of gapless fermionic excitations. In the above equation and from now on, we employ the convention that the indices labeled as $\mu$, $\nu$ include temporal and spatial components, whereas $\alpha$ and $\beta$ run only over the latter. Taking the second derivative with respect to $q$ on both sides of \eqref{eq_low_spiral: WI for SC in q space}, we obtain
%
\begin{subequations}\label{eq_low_spiral: J and chi from K22 SC}
\begin{align}
&J_{\alpha\beta}=2\varphi_0^2 \partial^2_{q_\alpha q_\beta}C^{22}(\mathbf{q},\omega=0)\big\rvert_{\mathbf{q}\to\mathbf{0}},\\
&\chi_n=-2\varphi_0^2 \partial^2_{\omega}C^{22}(\mathbf{q}=\mathbf{0},\omega)\big\rvert_{\omega\to0},
\end{align}
\end{subequations}
%
where $\partial^2_{q_\alpha q_\beta}$ and $\partial^2_\omega$ are shorthands for $\frac{\partial^2}{\partial q_\alpha \partial q_\beta}$ and $\frac{\partial^2}{\partial\omega^2}$, respectively. Moreover, we have made use of the Goldstone theorem, reading $C^{22}(\mathbf{0},0)=0$.
To derive Eq.~\eqref{eq_low_spiral: J and chi from K22 SC} from \eqref{eq_low_spiral: WI for SC in q space} we have exploited the finiteness of the gauge kernel $K_{\mu\nu}(q)$ in the $\mathbf{q}\to\mathbf{0}$ and $\omega\to 0$ limits. Eq.~\eqref{eq_low_spiral: J and chi from K22 SC} states that the superfluid stiffness and the uniform density-density correlation function are not only the zero momentum and frequency limit of the gauge kernel, but also the coefficients of the inverse transverse susceptibility when expanded for small $\mathbf{q}$ and $\omega$, respectively. Inverting Eq.~\eqref{eq_low_spiral: reciprocity relation}, $C^{22}(q)$ can be expressed in terms of $\chi^{ab}(q)$ as
%
\begin{equation}
C^{22}(q)= \frac{1}{\chi^{22}(q)-\chi^{21}(q)\frac{1}{\chi^{11}(q)}\chi^{12}(q)}.
\end{equation}
%
In the limit $q\to 0=(\mathbf{0},0)$, $\chi^{22}(q)$ diverges as a consequence of the Goldstone theorem, while the second term in the denominator vanishes like some power of $q$. This implies that, for small $q$,
%
\begin{equation}
C^{22}(q)\simeq \frac{1}{\chi^{22}(q)}.
\end{equation}
%
From this consideration, together with \eqref{eq_low_spiral: J and chi from K22 SC}, we can deduce that the transverse susceptibility can be written as
%
\begin{equation}
\chi^{22}(\mathbf{q},\omega)\simeq \frac{4\varphi_0^2}{-\chi_n \omega^2+J_{\alpha\beta}q_\alpha q_\beta},
\label{eq_low_spiral: chi22 SC small q}
\end{equation}
%
for small $\mathbf{q}$ and $\omega$. The above form of $\chi^{22}(q)$ can also be deduced from a low-energy theory for the phase fluctuations of the superconducting order parameter. Setting $J$ and $J^*$ to zero in \eqref{eq_low_spiral: G-functional U(1)}, and integrating out the Grassmann fields, one obtains an effective action for the gauge fields. The quadratic contribution in $A_\mu$ is
%
\begin{equation}
\mathcal{S}_\mathrm{eff}^{(2)}[A_\mu]=-\frac{1}{2}\int_q K_{\mu\nu}(q) A_\mu(-q) A_\nu(q),
\end{equation}
%
where $\int_q$ is a shorthand for $\int \frac{d\omega}{2\pi}\int \frac{d^d \mathbf{q}}{(2\pi)^d}$. Since we are focusing only on slow and long-wavelength fluctuations of $A_\mu$, we replace $K_{\mu\nu}(q)$ with $K_{\mu\nu}(0)$. Considering a pure gauge field, $A_\mu(x)=-\partial_\mu\theta(x)$, where $\theta(x)$ is (half) the phase of the superconducting order parameter ($\phi(x)=\varphi_0 e^{-2i\theta(x)}$), we obtain
%
\begin{equation}
\mathcal{S}_\mathrm{eff}[\theta]=\frac{1}{2}\int_x \left\{-\chi_n\left[\partial_t\theta(x)\right]^2+J_{\alpha\beta}\partial_\alpha\theta(x)\partial_\beta\theta(x)\right\},
\label{eq_low_spiral: phase field action}
\end{equation}
%
with $\theta(x)\in[0,2\pi]$ a periodic field. The above action is well known to display a Berezinskii-Kosterlitz-Thouless (BKT) transition~\cite{Berezinskii1971,Kosterlitz1973} for $d=1$ (at $T=0$) and $d=2$ (at $T>0$), while for $d=3$ ($T\geq 0$) or $d=2$ ($T=0$), it describes a gapless phase mode known as the Anderson-Bogoliubov phonon~\cite{Anderson1958}. From~\eqref{eq_low_spiral: phase field action}, we can extract the propagator of the field $\theta(x)$
%
\begin{equation}
\langle \theta(-q)\theta(q)\rangle = \frac{1}{-\chi_n \omega^2+J_{\alpha\beta}q_\alpha q_\beta},
\end{equation}
%
where we have neglected the fact that $\theta(x)$ is defined modulo $2\pi$.
Writing $\phi_{2}(x)=(\phi(x)-\phi^*(x))/(2i)=-\varphi_0 \sin(2\theta(x))\simeq -2\varphi_0 \theta(x)$, $\chi^{22}(q)$ can be expressed as
%
\begin{equation}
\begin{split}
\chi^{22}(q)=\langle \phi_{2}(-q)\phi_{2}(q)\rangle \simeq 4\varphi_0^2 \langle \theta(-q)\theta(q)\rangle = \frac{4\varphi_0^2}{-\chi_n \omega^2+J_{\alpha\beta}q_\alpha q_\beta},
\end{split}
\end{equation}
%
which is in agreement with Eq.~\eqref{eq_low_spiral: chi22 SC small q}.
%
\subsection{SU(2) symmetry}
%
In this section, we repeat the procedure applied in the previous one to derive the Ward identities connected with an SU(2) gauge-invariant system. We consider the functional
%
\begin{equation}
\mathcal{G}[A_\mu,\vec{J}]=-\ln \int \!\mathcal{D}\psi\mathcal{D}\overline{\psi} e^{-\mathcal{S}\left[\psi,\overline{\psi},A_\mu\right]+(\vec{J},\frac{1}{2}\overline{\psi}\vec{\sigma}\psi)},
\end{equation}
%
where $A_\mu(x)=A_\mu^a(x)\frac{\sigma^a}{2}$ is an SU(2) gauge field, $\vec{\sigma}$ are the Pauli matrices, and $\vec{J}(x)$ is a source field coupled to the fermion spin operator $\frac{1}{2}\overline{\psi}(x)\vec{\sigma}\psi(x)$. Similarly to the previous section, derivatives of $\mathcal{G}$ with respect to $A_\mu$ and $\vec{J}$ at zero external fields give (minus) the gauge kernels and spin susceptibilities, respectively. In formulas,
%
\begin{subequations}
\begin{align}
&\chi^{ab}(x,x')=-\frac{\delta^2\mathcal{G}}{\delta J_{a}(x)\delta J_{b}(x')}\bigg\rvert_{\vec{J}=A_\mu=0},\\
&K_{\mu\nu}^{ab}(x,x')=-\frac{\delta^2\mathcal{G}}{\delta A^a_\mu(x)\delta A^b_\nu(x')}\bigg\rvert_{\vec{J}=A_\mu=0}.\label{eq_low_spiral: gauge kernel definition}
\end{align}
\end{subequations}
%
We let the SU(2) symmetry be broken by a (local) order parameter of the form
%
\begin{equation}\label{eq_low_spiral: magnetic order parameter}
\left\langle \frac{1}{2}\overline{\psi}(x)\vec{\sigma}\psi(x) \right\rangle = m \hat{v}(\bs{x}),
\end{equation}
%
with $\hat{v}(\bs{x})$ a position-dependent unit vector pointing along the local direction of the magnetization. An SU(2) gauge transformation on the fermionic fields reads
%
\begin{subequations}
\begin{align}
&\psi(x) \to R(x)\psi(x),\\
&\overline{\psi}(x) \to \overline{\psi}(x)R^\dagger(x),
\end{align}
\label{eq_low_spiral: SU(2) gauge symm}
\end{subequations}
%
where $R(x)\in\mathrm{SU(2)}$ is a matrix acting on the spin indices of $\psi$ and $\overline{\psi}$. The external fields transform as
%
\begin{subequations}
\begin{align}
J_{a}(x) \to J^{\prime}_a(x)= &\mathcal{R}^{ab}(x)J_{b}(x),\\
A_\mu(x) \to A_\mu^\prime(x)=&R^\dagger(x)A_\mu(x)R(x) +iR^\dagger(x)\partial_\mu R(x),
\label{eq_low_spiral: SU(2) Amu transformation}
\end{align}
\end{subequations}
%
where $\mathcal{R}(x)$ is the adjoint representation of $R(x)$
%
\begin{equation}
\mathcal{R}^{ab}(x)\sigma^b = R(x)\sigma^a R^\dagger(x).
\label{eq_low_spiral: mathcalR definition}
\end{equation}
%
The SU(2) gauge invariance of $\mathcal{G}$ can be expressed as
%
\begin{equation}
\frac{\delta}{\delta R(x)}\mathcal{G}[A_\mu',\vec{J}^\prime]=0.
\end{equation}
%
Writing $R(x)=e^{i \theta_a(x)\frac{\sigma^a}{2}}$, $R^\dagger(x)=e^{-i \theta_a(x)\frac{\sigma^a}{2}}$, and considering an infinitesimal transformation $|\theta_a(x)|\ll1$, we obtain the functional identity
%
\begin{equation}
\begin{split}
\partial_\mu \left(\frac{\delta\Gamma}{\delta A_\mu^a(x)}\right) -\varepsilon^{a\ell m}\bigg[ \frac{\delta\Gamma}{\delta\phi^\ell(x)}\phi^m(x) -\frac{\delta\Gamma}{\delta A_\mu^\ell(x)}A_\mu^m(x) \bigg] =0,
\end{split}
\label{eq_low_spiral: Ward identity SU(2)}
\end{equation}
%
where $\varepsilon^{abc}$ is the Levi-Civita tensor. $\Gamma[A_\mu,\vec{\phi}]$ is the Legendre transform of $\mathcal{G}$, defined as
%
\begin{equation}
\Gamma[A_\mu,\vec{\phi}]=\int_x\vec{\phi}(x)\cdot\vec{J}(x) + \mathcal{G}[A_\mu,\vec{J}],
\end{equation}
%
with $\phi_{a}(x)=\frac{\delta\mathcal{G}[A_\mu,\vec{J}]}{\delta J_{a}(x)}$. The inverse susceptibilities $C^{ab}(x,x')$, defined as
%
\begin{equation}
C^{ab}(x,x')=-\frac{\delta^2\Gamma}{\delta\phi_{a}(x)\delta\phi_{b}(x')}\bigg\rvert_{\vec{\phi}=A_\mu=0},
\end{equation}
%
obey a reciprocity relation with the spin susceptibilities $\chi^{ab}(x,x')$ similar to \eqref{eq_low_spiral: reciprocity relation}. Defining the quantities
%
\begin{subequations}
\begin{align}
&\mathcal{C}_\mu^{ab}(x,x')=-\frac{\delta^2\Gamma}{\delta A^a_\mu(x)\delta\phi_{b}(x')}\bigg\rvert_{\vec{\phi}=A_\mu=0},\\
&\mathcal{B}_{\mu}^{a}(x)=-\frac{\delta\Gamma}{\delta A^a_\mu(x)}\bigg\rvert_{\vec{\phi}=A_\mu=0},
\end{align}
\end{subequations}
%
we obtain from~\eqref{eq_low_spiral: Ward identity SU(2)} the set of equations
%
\begin{subequations}
\begin{align}
-\partial_\mu \mathcal{C}_\mu^{ab}(x,x')&=m \varepsilon^{a\ell m}C^{\ell b}(x,x')v_m(\bs{x}),\label{eq_low_spiral: WI SU(2) I}\\
-\partial_\mu K_{\mu\nu}^{ab}(x,x')&=m \varepsilon^{a\ell m}\mathcal{C}^{b\ell}_\nu(x,x')v_m(\bs{x}) -\varepsilon^{a\ell b}\mathcal{B}_\nu^\ell(x)\delta(x-x')\label{eq_low_spiral: WI SU(2) II},\\
\partial_\mu \mathcal{B}_{\mu}^{a}(x)&=0\label{eq_low_spiral: WI SU(2) III},
\end{align}
\end{subequations}
%
where Eqs.~\eqref{eq_low_spiral: WI SU(2) I} and \eqref{eq_low_spiral: WI SU(2) II} have been obtained by differentiating \eqref{eq_low_spiral: Ward identity SU(2)} with respect to $\phi_b(x')$ and $A_\nu(x')$, respectively, and then setting the fields to zero. Eq.~\eqref{eq_low_spiral: WI SU(2) III} simply comes from \eqref{eq_low_spiral: Ward identity SU(2)} evaluated at zero $A_\mu$ and $\phi_{a}$. According to Eq.~\eqref{eq_low_spiral: magnetic order parameter}, the expectation value of $\vec{\phi}(x)$ takes the form $\langle \vec{\phi}(x)\rangle=m \hat{v}(\bs{x})$. Combining \eqref{eq_low_spiral: WI SU(2) I}, \eqref{eq_low_spiral: WI SU(2) II}, and \eqref{eq_low_spiral: WI SU(2) III}, we obtain the Ward identity
%
\begin{equation}
\partial_\mu\partial_\nu K_{\mu\nu}^{ab}(x,x')=m^2 \varepsilon^{a\ell m}\varepsilon^{bnp} v^\ell( \bs{x}) v^n(\bs{x}') C^{mp}(x,x'),
\label{eq_low_spiral: WI gauge Kab SU(2)}
\end{equation}
%
which connects the gauge kernels with the inverse susceptibilities. In the following, we analyze two concrete examples where the above identity applies, namely the N\'eel antiferromagnet and the spiral magnet. We do not consider ferromagnets or, in general, systems with a net average magnetization, as in this case the divergence of the transverse components of the kernel $K_{00}^{ab}(q)$ for $q\to 0$ leads to changes in the form of the Ward identities.
In this case, one speaks of type-II Goldstone bosons~\cite{Wilson2020}, characterized by a non-linear dispersion.
%
\subsubsection{N\'eel order}
%
We now consider the particular case of antiferromagnetic (or N\'eel) ordering for a system on a $d$-dimensional bipartite lattice. In this case $\hat{v}(\bs{x})$ takes the form $(-1)^\bs{x}\hat{v}$, with $(-1)^\bs{x}$ being 1 ($-1$) on the sites of sublattice A (B), and $\hat{v}$ a constant unit vector. In the following, without loss of generality, we consider $\hat{v}=(1,0,0)$. Considering only the diagonal ($a=b$) components of \eqref{eq_low_spiral: WI gauge Kab SU(2)}, we have
%
\begin{subequations}
\begin{align}
\partial_\mu\partial_\nu K_{\mu\nu}^{11}(x,x')&=0,\label{eq_low_spiral: WI K11}\\
\partial_\mu\partial_\nu K_{\mu\nu}^{22}(x,x')&=m^2 (-1)^{\bs{x}-\bs{x}'} C^{33}(x,x')\label{eq_low_spiral: WI K22},\\
\partial_\mu\partial_\nu K_{\mu\nu}^{33}(x,x')&=m^2 (-1)^{\bs{x}-\bs{x}'} C^{22}(x,x')\label{eq_low_spiral: WI K33}.
\end{align}
\end{subequations}
%
Although N\'eel antiferromagnetism breaks the lattice translational symmetry, the components of the gauge kernel considered above depend only on the difference of their arguments $x-x'$, and thus have a well-defined Fourier transform. Eq.~\eqref{eq_low_spiral: WI K11} implies $q_\mu q_\nu K^{11}_{\mu\nu}(\mathbf{q},\omega)=0$, as expected due to the residual U(1) gauge invariance in the N\'eel state. In particular, one obtains $\lim_{\mathbf{q}\to\mathbf{0}}K^{11}_{\alpha\beta}(\mathbf{q},0)=0$, and $\lim_{\omega\to 0}K^{11}_{00}(\mathbf{0},\omega)=0$. Eqs.~\eqref{eq_low_spiral: WI K22} and \eqref{eq_low_spiral: WI K33} encode the same relation, since $K^{22}_{\mu\nu}(x,x')=K^{33}_{\mu\nu}(x,x')$, again because of the residual symmetry. If we rotate them onto the real frequency axis and perform the Fourier transform, we get
%
\begin{subequations}\label{eq_low_spiral: chi and J Neel}
\begin{align}
&J_{\alpha\beta}\equiv-\lim_{\mathbf{q}\to\mathbf{0}}K^{22}_{\alpha\beta}(\mathbf{q},0)=\frac{1}{2}m^2 \partial^2_{q_\alpha q_\beta} C^{33}(\mathbf{q},0)\Big\rvert_{\mathbf{q}\to\mathbf{Q}},\label{eq_low_spiral: spin stiffness Neel}\\
&\chi_\mathrm{dyn}^\perp\equiv\lim_{\omega\to 0}K^{22}_{00}(\mathbf{0},\omega)=-\frac{1}{2}m^2 \partial^2_{\omega} C^{33}(\mathbf{Q},\omega)\Big\rvert_{\omega\to 0},\label{eq_low_spiral: chi perp Neel}
\end{align}
\end{subequations}
%
where $J_{\alpha\beta}$ is the spin stiffness, $\mathbf{Q}=(\pi/a_0,\dots,\pi/a_0)$, with $a_0$ the lattice spacing, and we refer to $\chi_\mathrm{dyn}^\perp$ as the transverse dynamical susceptibility\footnote{See footnote~\ref{note_low_spiral: Note_def_stiffnesses}.}. In the above equations we have made use of the Goldstone theorem, which in the present case reads
%
\begin{equation}\label{eq_low_spiral: Goldstone Neel}
C^{22}(\mathbf{Q},0)=C^{33}(\mathbf{Q},0)=0.
\end{equation}
%
Furthermore, to derive Eq.~\eqref{eq_low_spiral: chi and J Neel} from \eqref{eq_low_spiral: WI K22}, we have used the finiteness of the $\mathbf{q}\to\mathbf{0}$ and $\omega\to 0$ limits of the gauge kernels. Following the argument given in the previous section, for $q=(\mathbf{q},\omega)$ close to $Q=(\mathbf{Q},0)$, we can replace $C^{33}(q)$ by $1/\chi^{33}(q)$ in \eqref{eq_low_spiral: chi and J Neel}, implying
%
\begin{equation}
\begin{split}
\chi^{22}(q\simeq Q)=\chi^{33}(q\simeq Q) \simeq\frac{m^2}{-\chi_\mathrm{dyn}^\perp \omega^2 + J_{\alpha\beta}(q-Q)_\alpha(q-Q)_\beta}.
\end{split}
\label{eq_low_spiral: chi22 and chi33 Neel small q}
\end{equation}
%
Notice that in Eq.~\eqref{eq_low_spiral: chi22 and chi33 Neel small q} we have neglected the imaginary parts of the susceptibilities, which, for doped antiferromagnets, can lead to \emph{Landau damping} of the Goldstone modes~\cite{Bonetti2022}. Also for N\'eel ordering, the form \eqref{eq_low_spiral: chi22 and chi33 Neel small q} of the transverse susceptibilities can be deduced from a low-energy theory for the gauge field $A_\mu(x)$, that is,
%
\begin{equation}
\begin{split}
\mathcal{S}_\mathrm{eff}[A_\mu]=-\frac{1}{2}\int_q \Big[ K_{00}^{ab}(\mathbf{0},\omega\to0) A^a_0(-q) A^b_0(q) +K_{\alpha\beta}^{ab}(\mathbf{q}\to\mathbf{0},0) A^a_\alpha(-q) A^b_\beta(q) \Big].
\end{split}
\end{equation}
%
Considering a pure gauge field
%
\begin{equation}
A_\mu(x)=iR^\dagger(x)\partial_\mu R(x),
\label{eq_low_spiral: pure gauge Amu SU(2)}
\end{equation}
%
with $R(x)$ an SU(2) matrix, we obtain the action
%
\begin{equation}
\mathcal{S}_\mathrm{eff}[\hat{n}]=\frac{1}{2}\int_x \left\{-\chi_\mathrm{dyn}^\perp \left|\partial_t\hat{n}(x)\right|^2+J_{\alpha\beta} \partial_\alpha\hat{n}(x)\cdot\partial_\beta\hat{n}(x)\right\},
\label{eq_low_spiral: SU(2)/U(1) NLsM}
\end{equation}
%
where $\hat{n}(x)=(-1)^\bs{x}\mathcal{R}(x)\hat{v}(\bs{x})$, with $\mathcal{R}(x)$ defined as in Eq.~\eqref{eq_low_spiral: mathcalR definition}, and $|\hat{n}(x)|^2=1$. Eq.~\eqref{eq_low_spiral: SU(2)/U(1) NLsM} is the well-known $\mathrm{O(3)/O(2)}$ non-linear sigma model (NL$\sigma$M) action, describing low-energy properties of quantum antiferromagnets~\cite{Haldane1983,AuerbachBook1994}. Writing $R(x)=e^{i\theta_a(x)\frac{\sigma^a}{2}}$, and expanding to first order in $\theta_a(x)$, $\hat{n}(x)$ becomes $\hat{n}(x)\simeq(1,\theta_2(x),-\theta_3(x))$. Considering the expression $\vec{\phi}(x)=(-1)^\bs{x} m \hat{n}(x)$ for the order parameter field, we see that small fluctuations in $\hat{n}(x)$ only affect the 2- and 3-components of $\vec{\phi}(x)$. The transverse susceptibilities can therefore be written as
%
\begin{equation}
\begin{split}
\chi^{22}(q)&=\chi^{33}(q)=\langle \phi_{2}(q)\phi_{2}(-q)\rangle \simeq m^2 \langle n_2(q+Q) n_2(-q-Q)\rangle \\
&= \frac{m^2}{-\chi_\mathrm{dyn}^\perp \omega^2+J_{\alpha\beta}(q-Q)_\alpha(q-Q)_\beta},
\end{split}
\label{eq_low_spiral: goldstone chis neel}
\end{equation}
%
which is the result of Eq.~\eqref{eq_low_spiral: chi22 and chi33 Neel small q}. In Eq.~\eqref{eq_low_spiral: goldstone chis neel} we have made use of the propagator of the $\hat{n}$-field dictated by the action of Eq.~\eqref{eq_low_spiral: SU(2)/U(1) NLsM}, that is,
%
\begin{equation}
\langle n_a(q) n_a(-q)\rangle = \frac{1}{-\chi_\mathrm{dyn}^\perp \omega^2+J_{\alpha\beta}q_\alpha q_\beta}.
\end{equation}
%
Eq.~\eqref{eq_low_spiral: goldstone chis neel} predicts two degenerate magnon branches with linear dispersion for small $\mathbf{q}-\mathbf{Q}$. In the case of an isotropic antiferromagnet ($J_{\alpha\beta}=J\delta_{\alpha\beta}$), we have $\omega_\mathbf{q} = c_s |\mathbf{q}-\mathbf{Q}|$, with the spin-wave velocity given by $c_s=\sqrt{J/\chi_\mathrm{dyn}^\perp}$.
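%
The second-derivative relations \eqref{eq_low_spiral: chi and J Neel} can be verified numerically on the model form of the inverse transverse propagator. Assuming, for illustration, an isotropic stiffness and the Goldstone form $C^{33}(\mathbf{q},\omega)=[J(\mathbf{q}-\mathbf{Q})^2-\chi_\mathrm{dyn}^\perp\omega^2]/m^2$, the following Python sketch recovers $J$ and $\chi_\mathrm{dyn}^\perp$ by finite differences; all numerical inputs are arbitrary. The analogous check for the superconductor, Eq.~\eqref{eq_low_spiral: J and chi from K22 SC}, works the same way with the Goldstone point at $q=\omega=0$.
\begin{verbatim}
import numpy as np

# Arbitrary inputs for the toy inverse propagator near the Goldstone point.
m, J_in, chi_in, Qx = 0.4, 1.3, 0.6, np.pi

def C33(q, w):
    return (J_in * (q - Qx)**2 - chi_in * w**2) / m**2

h = 1e-3   # central second-order finite differences at (Q, 0)
d2q = (C33(Qx + h, 0) - 2 * C33(Qx, 0) + C33(Qx - h, 0)) / h**2
d2w = (C33(Qx, h) - 2 * C33(Qx, 0) + C33(Qx, -h)) / h**2

print(f"J       = {0.5 * m**2 * d2q:.6f} (input {J_in})")
print(f"chi_dyn = {-0.5 * m**2 * d2w:.6f} (input {chi_in})")
print(f"c_s     = {np.sqrt(J_in / chi_in):.4f}")  # spin-wave velocity
\end{verbatim}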
% \subsubsection{Spiral magnetic order} \label{subsec_low_spiral: WI spiral} % We now turn our attention to the case of spin spiral ordering, described by the magnetization direction % \begin{equation} \hat{v}(\bs{x})=\cos(\mathbf{Q}\cdot\bs{x})\hat{v}_1+\sin(\mathbf{Q}\cdot\bs{x})\hat{v}_2, \label{eq_low_spiral: spiral magnetization} \end{equation} % where $\hat{v}_1$ and $\hat{v}_2$ are two generic constant unit vectors satisfying $\hat{v}_1\cdot\hat{v}_2=0$, and at least one component of $\mathbf{Q}$ is neither 0 nor $\pi/a_0$. Without loss of generality, we choose $\hat{v}_1=\hat{e}_1=(1,0,0)$ and $\hat{v}_2=\hat{e}_2=(0,1,0)$. It is convenient to rotate the field $\vec{\phi}(x)$ to a basis in which $\hat{v}(\bs{x})$ is uniform. This is achieved by the transformation~\cite{Kampf1996} % \begin{equation} \vec{\phi}^\prime(x) = \mathcal{M}(x) \vec{\phi}(x), \label{eq_low_spiral: spiral rotation field} \end{equation} % with % \begin{equation} \mathcal{M}(x)=\left( \begin{array}{ccc} \cos(\mathbf{Q}\cdot\bs{x}) & \sin(\mathbf{Q}\cdot\bs{x}) & 0 \\ -\sin(\mathbf{Q}\cdot\bs{x}) & \cos(\mathbf{Q}\cdot\bs{x}) & 0 \\ 0 & 0 & 1 \end{array} \right). \label{eq_low_spiral: spiral rotation matrix} \end{equation} % In this way, the inverse susceptibilities are transformed into % \begin{equation} \begin{split} \widetilde{C}^{ab}(x,x')=&- \frac{\delta^2\Gamma}{\delta \phi^{\prime}_a(x)\delta \phi^{\prime}_b(x')}\bigg\rvert_{\vec{\phi}'=A_\mu=0} =[\mathcal{M}^{-1}(x)]^{ac}[\mathcal{M}^{-1}(x')]^{bd} C^{cd}(x,x'). \end{split} \label{eq_low_spiral: rotated Ks spiral} \end{equation} % If we now apply the Ward identity \eqref{eq_low_spiral: WI gauge Kab SU(2)}, we obtain % \begin{subequations}\label{eq_low_spiral: WI real space spiral} \begin{align} \partial_\mu\partial_\nu K_{\mu\nu}^{11}(x,x')&=m^2 \sin(\mathbf{Q}\cdot\bs{x})\sin(\mathbf{Q}\cdot\bs{x}') \widetilde{C}^{33}(x,x'),\\ \partial_\mu\partial_\nu K_{\mu\nu}^{22}(x,x')&=m^2 \cos(\mathbf{Q}\cdot\bs{x})\cos(\mathbf{Q}\cdot\bs{x}') \widetilde{C}^{33}(x,x'),\\ \partial_\mu\partial_\nu K_{\mu\nu}^{33}(x,x')&=m^2\widetilde{C}^{22}(x,x'), \end{align} \end{subequations} % with % \begin{subequations} \begin{align} \widetilde{C}^{33}(x,x')&=C^{33}(x,x'),\\ \widetilde{C}^{22}(x,x')&=\sin(\mathbf{Q}\cdot\bs{x})\sin(\mathbf{Q}\cdot\bs{x}') C^{11}(x,x') +\cos(\mathbf{Q}\cdot\bs{x})\cos(\mathbf{Q}\cdot\bs{x}') C^{22}(x,x')\nonumber\\ &\phantom{=}-\sin(\mathbf{Q}\cdot\bs{x})\cos(\mathbf{Q}\cdot\bs{x}')C^{12}(x,x') -\cos(\mathbf{Q}\cdot\bs{x})\sin(\mathbf{Q}\cdot\bs{x}')C^{21}(x,x'). \end{align} \end{subequations} % We remark that an order parameter of the type~\eqref{eq_low_spiral: spiral magnetization} completely breaks the SU(2) spin symmetry, which is why none of the right hand sides of the equations above vanishes. We have considered a \emph{coplanar} spiral magnetic order, that is, we have assumed all the spins to lie in the same plane, so that out of the three Goldstone modes, two are degenerate and correspond to out-of-plane fluctuations, and one to in-plane fluctuations of the spins. Furthermore, translational invariance is broken, so the Fourier transforms of the gauge kernels $K_{\mu\nu}^{ab}(\mathbf{q},\mathbf{q}',\omega)$ and inverse susceptibilities $C^{ab}(\mathbf{q},\mathbf{q}',\omega)$ are nonzero not only for $\mathbf{q}-\mathbf{q}'=\mathbf{0}$ but also for $\mathbf{q}-\mathbf{q}'=\pm\mathbf{Q}$ or $\pm2\mathbf{Q}$. Time translation invariance is preserved, and the gauge kernels and the inverse susceptibilities depend on one single frequency. 
However, in the basis obtained with the transformation~\eqref{eq_low_spiral: spiral rotation matrix}, translational invariance is restored, so that the Fourier transform of $\widetilde{C}^{ab}(x,x')$ only depends on one spatial momentum. With this in mind, we can extract expressions for the spin stiffnesses and dynamical susceptibilities from~\eqref{eq_low_spiral: WI real space spiral}. After rotating to real frequencies, and using the property that for a spiral magnet the gauge kernels are finite in the limits $\mathbf{q}=\mathbf{q}'\to\mathbf{0}$ and $\omega\to 0$, we obtain\footnote{See footnote~\ref{note_low_spiral: Note_def_stiffnesses}.}
%
\begin{subequations}\label{eq_low_spiral: spin stiff spiral def}
\begin{align}
J^{\perp,1}_{\alpha\beta}\equiv&-\lim_{\mathbf{q}\to\mathbf{0}}K^{11}_{\alpha\beta}(\mathbf{q},0) =\frac{1}{8}m^2\partial^2_{q_\alpha q_\beta}\sum_{\eta=\pm}\widetilde{C}^{33}(\mathbf{q}+\eta\mathbf{Q},0)\bigg\rvert_{\mathbf{q}\to\mathbf{0}},\label{eq_low_spiral: expression J1 spiral}\\
J^{\perp,2}_{\alpha\beta}\equiv&-\lim_{\mathbf{q}\to\mathbf{0}}K^{22}_{\alpha\beta}(\mathbf{q},0) =\frac{1}{8}m^2\partial^2_{q_\alpha q_\beta}\sum_{\eta=\pm}\widetilde{C}^{33}(\mathbf{q}+\eta\mathbf{Q},0)\bigg\rvert_{\mathbf{q}\to\mathbf{0}},\label{eq_low_spiral: expression J2 spiral}\\
J^\msml{\square}_{\alpha\beta}\equiv&-\lim_{\mathbf{q}\to\mathbf{0}}K^{33}_{\alpha\beta}(\mathbf{q},0) =\frac{1}{2}m^2\partial^2_{q_\alpha q_\beta}\widetilde{C}^{22}(\mathbf{q},0)\bigg\rvert_{\mathbf{q}\to\mathbf{0}},\label{eq_low_spiral: expression J3 spiral}
\end{align}
\end{subequations}
%
and
%
\begin{subequations}\label{eq_low_spiral: Z factors spiral def}
\begin{align}
\chi_\mathrm{dyn}^{\perp,1}\equiv&\lim_{\omega\to 0}K^{11}_{00}(\mathbf{0},\omega) =-\frac{1}{8}m^2\partial^2_\omega \sum_{\eta=\pm}\widetilde{C}^{33}(\eta\mathbf{Q},\omega)\bigg\rvert_{\omega\to0},\label{eq_low_spiral: expression chi1 spiral}\\
\chi_\mathrm{dyn}^{\perp,2}\equiv&\lim_{\omega\to 0}K^{22}_{00}(\mathbf{0},\omega) =-\frac{1}{8}m^2\partial^2_\omega \sum_{\eta=\pm}\widetilde{C}^{33}(\eta\mathbf{Q},\omega)\bigg\rvert_{\omega\to0},\label{eq_low_spiral: expression chi2 spiral}\\
\chi_\mathrm{dyn}^\msml{\square}\equiv&\lim_{\omega\to 0}K^{33}_{00}(\mathbf{0},\omega) =-\frac{1}{2}m^2\partial^2_\omega \widetilde{C}^{22}(\mathbf{0},\omega)\bigg\rvert_{\omega\to0}\label{eq_low_spiral: expression chi3 spiral},
\end{align}
\end{subequations}
%
where the labels $\perp$ and $\msml{\square}$ denote out-of-plane and in-plane quantities, respectively. In the equations above, we have defined $K^{ab}_{\mu\nu}(\mathbf{q},\omega)$ as the prefactors of the components of the gauge kernels $K^{ab}_{\mu\nu}(\mathbf{q},\mathbf{q}',\omega)$ which are proportional to $(2\pi)^d\delta^d(\mathbf{q}-\mathbf{q}')$. From Eqs.~\eqref{eq_low_spiral: spin stiff spiral def} and \eqref{eq_low_spiral: Z factors spiral def} it immediately follows that $J^{\perp,1}_{\alpha\beta}=J^{\perp,2}_{\alpha\beta}\equiv J^{\perp}_{\alpha\beta}$, and $\chi_\mathrm{dyn}^{\perp,1}=\chi_\mathrm{dyn}^{\perp,2}\equiv \chi_\mathrm{dyn}^{\perp}$, as expected in the case of coplanar order~\cite{Azaria1990}. To derive the equations above, we have made use of the Goldstone theorem, which for spiral ordering reads (see, for example, Refs.~\cite{Chubukov1995,Kampf1996})
%
\begin{subequations}
\begin{align}
&\widetilde{C}^{33}(\pm\mathbf{Q},0)=0,\\
&\widetilde{C}^{22}(\mathbf{0},0)=0.
\end{align}
\end{subequations}
%
Notice that the above relations can also be derived from a functional identity similar to~\eqref{eq_low_spiral: Ward identity SU(2)}, but following from the \emph{global} SU(2) symmetry. Moreover, close to their respective Goldstone points ($(\mathbf{0},0)$ for $\widetilde{C}^{22}$, and $(\pm\mathbf{Q},0)$ for $\widetilde{C}^{33}$), $\widetilde{C}^{22}(q)$ can be replaced with $1/\widetilde{\chi}^{22}(q)$, and $\widetilde{C}^{33}(q)$ with $1/\widetilde{\chi}^{33}(q)$, with the rotated susceptibilities defined analogously to~\eqref{eq_low_spiral: rotated Ks spiral}. If the spin spiral state occurs on a lattice that preserves parity, we have $\widetilde{C}^{aa}(\mathbf{q},\omega)=\widetilde{C}^{aa}(-\mathbf{q},\omega)$, from which we obtain
%
\begin{subequations}\label{eq_low_spiral: stiffenss and chi spiral final}
\begin{align}
&J_{\alpha\beta}^\perp=\frac{1}{4}m^2 \partial^2_{q_\alpha q_\beta}\left( \frac{1}{\widetilde{\chi}^{33}(\mathbf{q},0)}\right)\bigg\rvert_{\mathbf{q}\to \pm \mathbf{Q}}, \label{eq_low_spiral: stiffenss and chi spiral final J2}\\
&J^\msml{\square}_{\alpha\beta}=\frac{1}{2}m^2 \partial^2_{q_\alpha q_\beta}\left( \frac{1}{\widetilde{\chi}^{22}(\mathbf{q},0)}\right)\bigg\rvert_{\mathbf{q}\to \mathbf{0}}, \label{eq_low_spiral: stiffenss and chi spiral final J3}\\
&\chi_\mathrm{dyn}^\perp=-\frac{1}{4}m^2 \partial^2_{\omega}\left( \frac{1}{\widetilde{\chi}^{33}(\pm\mathbf{Q},\omega)}\right)\bigg\rvert_{\omega\to 0}, \label{eq_low_spiral: stiffenss and chi spiral final chi2}\\
&\chi_\mathrm{dyn}^\msml{\square}=-\frac{1}{2}m^2 \partial^2_{\omega}\left( \frac{1}{\widetilde{\chi}^{22}(\mathbf{0},\omega)}\right)\bigg\rvert_{\omega\to 0}. \label{eq_low_spiral: stiffenss and chi spiral final chi3}
\end{align}
\end{subequations}
%
Neglecting the imaginary parts of the susceptibilities, which give rise to damping of the Goldstone modes~\cite{Bonetti2022}, we can obtain from Eq.~\eqref{eq_low_spiral: stiffenss and chi spiral final} expressions for the susceptibilities near their Goldstone points
%
\begin{subequations}\label{eq_low_spiral: low energy chis spiral}
\begin{align}
&\widetilde{\chi}^{22}(q\simeq(\mathbf{0},0))\simeq \frac{m^2}{-\chi^\msml{\square}_\mathrm{dyn}\omega^2+J^\msml{\square}_{\alpha\beta}q_\alpha q_\beta},\\
&\widetilde{\chi}^{33}(q\simeq(\pm\mathbf{Q},0))\simeq \frac{m^2/2}{-\chi^\perp_\mathrm{dyn}\omega^2+J_{\alpha\beta}^\perp(q\mp Q)_\alpha (q\mp Q)_\beta}.
\end{align}
\end{subequations}
%
The expressions in \eqref{eq_low_spiral: low energy chis spiral} can also be deduced from a low-energy model in the case of spin spiral ordering. Similarly to what we have done for the N\'eel case, we consider a pure gauge field, giving the non-linear sigma model action
%
\begin{equation}
\mathcal{S}_\mathrm{eff}[\mathcal{R}]=\frac{1}{2}\int_x \tr\left[\mathcal{P}_{\mu\nu}\partial_\mu \mathcal{R}(x)\partial_\nu \mathcal{R}^T(x)\right],
\label{eq_low_spiral: O(3)xO(2)/O(2) NLsM}
\end{equation}
%
where $\mathcal{R}(x)\in\mathrm{SO(3)}$ is defined as in Eq.~\eqref{eq_low_spiral: mathcalR definition}, and now $\partial_\mu$ denotes $(-\partial_t,\vec{\nabla})$.
The matrix $\mathcal{P}_{\mu\nu}$ is given by
%
\begin{equation}
\mathcal{P}_{\mu\nu}= \left( \begin{array}{ccc} \frac{1}{2}J^\msml{\square}_{\mu\nu} & 0 & 0 \\ 0 & \frac{1}{2}J^\msml{\square}_{\mu\nu} & 0 \\ 0 & 0 & J^\perp_{\mu\nu}-\frac{1}{2}J^\msml{\square}_{\mu\nu} \end{array} \right),
\end{equation}
%
with
%
\begin{equation}
J^a_{\mu\nu}= \left( \begin{array}{c|c} -\chi_\mathrm{dyn}^a & 0 \\ \hline 0 & J^a_{\alpha\beta} \end{array} \right),
\end{equation}
%
for $a\in\{\msml{\square},\perp\}$. Action~\eqref{eq_low_spiral: O(3)xO(2)/O(2) NLsM} is a NL$\sigma$M describing low-energy fluctuations around a state with spiral magnetic order. It has been introduced and studied in the context of frustrated antiferromagnets~\cite{Azaria1990,Azaria1992,Azaria1993,Azaria1993_PRL}. We now write the field $\vec{\phi}^\prime(x)$ as $\vec{\phi}^\prime(x)=m \mathcal{M}(x)\mathcal{R}(x)\hat{v}(\bs{x})$, and consider an $\mathcal{R}(x)$ stemming from an SU(2) matrix $R(x)=e^{i\theta_a(x)\frac{\sigma^a}{2}}$ with $\theta_a(x)$ infinitesimal, that is,
%
\begin{equation}
\mathcal{R}_{ab}(x)\simeq\delta_{ab}-\varepsilon^{abc}\theta_c(x).
\label{eq_low_spiral: mathcal R small theta}
\end{equation}
%
We then obtain
%
\begin{equation}
\begin{split}
\vec{\phi}^\prime(x)&\simeq m \mathcal{M}(x)[\hat{v}(\bs{x})-\hat{v}(\bs{x})\times\vec{\theta}(x)]\\
&=m[\hat{e}_1-\hat{e}_1\times\vec{\theta}^\prime(x)],
\end{split}
\end{equation}
%
with $\hat{e}_1=(1,0,0)$, and $\vec{\theta}^\prime(x)=\mathcal{M}(x)\vec{\theta}(x)$. Inserting \eqref{eq_low_spiral: mathcal R small theta} into \eqref{eq_low_spiral: O(3)xO(2)/O(2) NLsM}, we obtain
%
\begin{equation}
\begin{split}
\mathcal{S}_\mathrm{eff}[\vec{\theta}]=\frac{1}{2}\int_x \bigg\{J^\perp_{\mu\nu}\sum_{a=1,2}\left[\partial_\mu \theta_a(x)\partial_\nu \theta_a(x)\right] +J^\msml{\square}_{\mu\nu}\partial_\mu \theta_3(x)\partial_\nu \theta_3(x)\bigg\}.
\end{split}
\label{eq_low_spiral: NLsM spiral linearized}
\end{equation}
%
We are finally in a position to extract the form of the susceptibilities for small fluctuations
%
\begin{subequations}\label{eq_low_spiral: low energy susceptibilities}
\begin{align}
\widetilde{\chi}^{22}(q)=&\langle\phi^{\prime}_2(q)\phi^{\prime}_2(-q)\rangle\simeq m^2\langle\theta^{\prime}_3(q)\theta^{\prime}_3(-q)\rangle =\frac{m^2}{-\chi_\mathrm{dyn}^\msml{\square}\omega^2+J^\msml{\square}_{\alpha\beta}q_\alpha q_\beta},\\
\widetilde{\chi}^{33}(q)=&\langle\phi^{\prime}_3(q)\phi^{\prime}_3(-q)\rangle\simeq m^2\langle\theta^{\prime}_2(q)\theta^{\prime}_2(-q)\rangle =\sum_{\eta=\pm}\frac{m^2/2}{-\chi_\mathrm{dyn}^\perp\omega^2+J_{\alpha\beta}^\perp(q-\eta Q)_\alpha (q-\eta Q)_\beta},
\end{align}
\end{subequations}
%
which is the result of Eq.~\eqref{eq_low_spiral: low energy chis spiral}. In the above equations we have used the correlators of the $\theta$ field derived from the action \eqref{eq_low_spiral: NLsM spiral linearized}. The form~\eqref{eq_low_spiral: low energy chis spiral} of the susceptibilities predicts three linearly dispersing Goldstone modes, two of which (the out-of-plane ones) are degenerate and propagate with velocities $c_\perp^{(n)} = \sqrt{\lambda_\perp^{(n)}/\chi^\perp_\mathrm{dyn}}$, where $\lambda^{(n)}_\perp$ are the eigenvalues of $J_{\alpha\beta}^\perp$ and $n=1,\dots,d$. Similarly, the in-plane mode velocity is given by $c_\msml{\square}^{(n)}=\sqrt{\lambda_\msml{\square}^{(n)}/\chi_\mathrm{dyn}^\msml{\square}}$, with $\lambda_\msml{\square}^{(n)}$ the eigenvalues of $J_{\alpha\beta}^\msml{\square}$.
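%
Given the stiffness tensors and dynamical susceptibilities, the magnon velocities follow from a simple eigenvalue problem. The Python sketch below evaluates $c^{(n)}=\sqrt{\lambda^{(n)}/\chi_\mathrm{dyn}}$ for placeholder tensors in $d=2$; the numbers are illustrative inputs, not values computed from the microscopic model (for a spiral state with inequivalent $x$ and $y$ directions the tensors are diagonal but anisotropic).
\begin{verbatim}
import numpy as np

J_perp = np.diag([1.2, 0.8])     # placeholder J^perp_{alpha beta}
J_sq = np.diag([0.9, 0.9])       # placeholder J^square_{alpha beta}
chi_perp, chi_sq = 0.5, 0.45     # placeholder dynamical susceptibilities

c_perp = np.sqrt(np.linalg.eigvalsh(J_perp) / chi_perp)
c_sq = np.sqrt(np.linalg.eigvalsh(J_sq) / chi_sq)
print("out-of-plane velocities:", c_perp)  # two degenerate magnon branches
print("in-plane velocities:   ", c_sq)
\end{verbatim}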
%
\section{Mean-field treatment of the spiral magnetic state}
\label{sec_low_spiral: MF spiral}
%
For this chapter to be self-contained, we repeat here some of the basic concepts of spiral magnetism already presented in Chapter~\ref{chap: spiral DMFT}. A spiral magnetic state is characterized by an average magnetization lying in a plane, which, by rotational invariance, can have \emph{any} orientation. Without loss of generality, we choose it to be the $xy$-plane. We therefore express the expectation value of the spin operator as
%
\begin{equation}
\langle \vec{S}_j \rangle = m \left[\cos(\mathbf{Q}\cdot\bs{R}_j)\hat{e}_1+\sin(\mathbf{Q}\cdot\bs{R}_j)\hat{e}_2\right],
\label{eq_low_spiral: spiral magn}
\end{equation}
%
where $m$ is the magnetization amplitude, $\bs{R}_j$ is the spatial coordinate of lattice site $j$, $\hat{e}_a$ is a unit vector pointing along the $a$-direction, and $\mathbf{Q}$ is a fixed wave vector. In an itinerant electron system, the spin operator is given by
%
\begin{equation}
S^a_j = \frac{1}{2}\sum_{s,s'=\uparrow,\downarrow}c^\dagger_{j,s}\sigma^a_{ss'}c_{j,s'},
\label{eq_low_spiral: spin operator}
\end{equation}
%
where $\sigma^a$ ($a=1,2,3$) are the Pauli matrices, and $c^\dagger_{j,s}$ ($c_{j,s}$) creates (annihilates) an electron at site $j$ with spin projection $s$. Fourier transforming Eq.~\eqref{eq_low_spiral: spiral magn}, we find that the magnetization amplitude is given by the momentum integral
%
\begin{equation}
\int_\mathbf{k} \langle c^\dagger_{\mathbf{k},\uparrow} c_{\mathbf{k}+\mathbf{Q},\downarrow}\rangle,
\label{eq_low_spiral: spiral magn k-space}
\end{equation}
%
where $c^\dagger_{\mathbf{k},\sigma}$ ($c_{\mathbf{k},\sigma}$) is the Fourier transform of $c^\dagger_{j,s}$ ($c_{j,s}$), and $\int_\mathbf{k}=\int\frac{d^d\mathbf{k}}{(2\pi)^d}$ denotes a $d$-dimensional momentum integral, with $d$ the system dimensionality. From Eq.~\eqref{eq_low_spiral: spiral magn k-space}, we deduce that spiral magnetism only couples the electron states $(\mathbf{k},\uparrow)$ and $(\mathbf{k}+\mathbf{Q},\downarrow)$. It is therefore convenient to use a rotated spin reference frame~\cite{Kampf1996}, corresponding to the transformation
%
\begin{equation}
\tilde c_j = e^{-\frac{i}{2}\mathbf{Q}\cdot\bs{R}_j} e^{\frac{i}{2}\mathbf{Q}\cdot\bs{R}_j \sigma^3} c_j \, , \quad \tilde c_j^\dagger = c_j^\dagger \, e^{-\frac{i}{2}\mathbf{Q}\cdot\bs{R}_j \sigma^3} e^{\frac{i}{2}\mathbf{Q}\cdot\bs{R}_j}.
\label{eq_low_spiral: ctildes}
\end{equation}
%
In this basis, the Fourier transform of the spinor $\tilde{c}_j$ is given by $\tilde{c}_\mathbf{k}=(c_{\mathbf{k},\uparrow},c_{\mathbf{k}+\mathbf{Q},\downarrow})$, and the magnetization~\eqref{eq_low_spiral: spiral magn} points along the $\hat{e}_1$ axis:
%
\begin{equation}
\langle\widetilde{S}^a_j\rangle = \frac{1}{2} \big\langle \tilde c^\dagger_j \sigma^a \tilde c_j \big\rangle = m \delta_{a,1}.
\end{equation}
%
With the help of the transformation~\eqref{eq_low_spiral: ctildes}, we can express the mean-field Green's function in Matsubara frequencies as
%
\begin{equation}
\widetilde{\mathbf{G}}_\mathbf{k}(i\nu_n) = \left( \begin{array}{cc} i\nu_n-\xi_{\mathbf{k}} & - \Delta \\ - \Delta & i\nu_n-\xi_{\mathbf{k}+\mathbf{Q}} \end{array} \right)^{-1} ,
\label{eq_low_spiral: matrix Gtilde}
\end{equation}
%
where $\nu_n=(2n+1)\pi T$, $\xi_\mathbf{k} = \varepsilon_\mathbf{k} - \mu$, with the single-particle dispersion $\varepsilon_\mathbf{k}$ and the chemical potential $\mu$, while $\Delta$ is the magnetic gap associated with the spiral order. Diagonalizing~\eqref{eq_low_spiral: matrix Gtilde}, one obtains the quasiparticle bands
%
\begin{equation}
E_{\mathbf{k}}^\pm = g_{\mathbf{k}} \pm \sqrt{h_{\mathbf{k}}^2 + \Delta^2},
\label{eq_low_spiral: QP dispersions}
\end{equation}
%
where $g_\mathbf{k} = \frac{1}{2}(\xi_\mathbf{k} + \xi_{\mathbf{k}+\mathbf{Q}})$ and $h_\mathbf{k} = \frac{1}{2}(\xi_\mathbf{k} - \xi_{\mathbf{k}+\mathbf{Q}})$. It is convenient to express the Green's function~\eqref{eq_low_spiral: matrix Gtilde} as
%
\begin{equation}
\widetilde G_\mathbf{k}(i\nu_n) = \frac{1}{2} \sum_{\ell=\pm} \frac{u^\ell_{\mathbf{k}}}{i\nu_n-E^\ell_{\mathbf{k}}},
\label{eq_low_spiral: spiral Gf comf}
\end{equation}
%
with the coefficients
%
\begin{equation}
u_\mathbf{k}^\ell = \sigma^0 + \ell \, \frac{h_\mathbf{k}}{e_\mathbf{k}} \sigma^3 + \ell \, \frac{\Delta}{e_\mathbf{k}} \sigma^1,
\label{eq_low_spiral: ukl def}
\end{equation}
%
where $\sigma^0$ is the $2\times2$ unit matrix and $e_\mathbf{k} = \sqrt{h_\mathbf{k}^2 + \Delta^2}$. We assume the spiral states to emerge from a lattice model with onsite repulsive interactions (Hubbard model), with the imaginary-time action
%
\begin{equation}\label{eq_low_spiral: Hubbard action}
\begin{split}
\mathcal{S}[\psi,\overline{\psi}]=\int_0^\beta\!d\tau\left\{\sum_{j,j',\sigma}\overline{\psi}_{j,\sigma}\left[(\partial_\tau - \mu)\delta_{jj'} + t_{jj'}\right]\psi_{j',\sigma} + U\sum_{j}\overline{\psi}_{j,\uparrow}\overline{\psi}_{j,\downarrow}\psi_{j,\downarrow}\psi_{j,\uparrow}\right\},
\end{split}
\end{equation}
%
where $t_{jj'}$ describes the hopping amplitude between the lattice sites labeled by $j$ and $j'$, and $U$ is the Hubbard interaction. The Hartree-Fock or mean-field (MF) gap equation at temperature $T$ reads
%
\begin{equation}
\Delta = - U \int_\mathbf{k} T\sum_\nu \widetilde G^{\uparrow\downarrow}_\mathbf{k}(\nu) = U \int_\mathbf{k} \frac{\Delta}{2e_\mathbf{k}} \left[f(E^-_\mathbf{k})-f(E^+_\mathbf{k})\right] \, ,
\label{eq_low_spiral: gap equation}
\end{equation}
%
where $f(x)=(e^{x/T}+1)^{-1}$ is the Fermi function, and the magnetization amplitude is related to $\Delta$ via $\Delta = Um$. Finally, the optimal $\mathbf{Q}$-vector is obtained by minimizing the mean-field free energy
%
\begin{equation}\label{eq_low_spiral: MF free energy}
\begin{split}
\mathcal{F}_\mathrm{MF}(\mathbf{Q})=&-T\sum_{\nu_n}\int_\mathbf{k} \Tr\ln \widetilde{\mathbf{G}}_\mathbf{k}(i\nu_n)+\frac{\Delta^2}{U}+\mu n \\
=&-T\int_\mathbf{k} \sum_{\ell=\pm}\ln\left(1+e^{-E^\ell_\mathbf{k}/T}\right)+\frac{\Delta^2}{U}+\mu n,
\end{split}
\end{equation}
%
where $n$ is the fermion density.
%
\section{Susceptibilities and Goldstone modes}
\label{sec_low_spiral: RPA spiral}
%
In the spin spiral state, the spin and charge susceptibilities are coupled.
It is therefore convenient to treat them on equal footing by extending the definition of the spin operator in Eq.~\eqref{eq_low_spiral: spin operator} to % \begin{equation} S^a_j = \frac{1}{2}\sum_{s,s'=\uparrow,\downarrow}c^\dagger_{j,s}\sigma^a_{ss'}c_{j,s'}, \end{equation} % where now $a$ runs from 0 to 3, with $\sigma^0$ the unit 2$\times$2 matrix. It is evident that for $a=1,2,3$ we recover the usual spin operator, while $a=0$ gives half the density. We then consider the imaginary-time susceptibility % \begin{equation} \chi^{ab}_{jj'}(\tau) = \langle {\cal T} S_j^a(\tau) S_{j'}^b(0) \rangle, \end{equation} % where $\mathcal{T}$ denotes time ordering. Fourier transforming to Matsubara frequency representation and analytically continuing to the real frequency axis, we obtain the retarded susceptibility $\chi^{ab}_{jj'}(\omega)$. As previously mentioned, $\chi^{ab}_{jj'}(\omega)$ is not invariant under spatial translations. It is therefore convenient to compute the susceptibilities in the rotated reference frame~\cite{Kampf1996} of Eq.~\eqref{eq_low_spiral: spiral rotation matrix} % \begin{equation} \widetilde{\chi}_{jj'}(\omega)=\mathcal{M}_j \chi_{jj'}(\omega) \mathcal{M}^T_{j'}, \label{eq_low_spiral: rotated chis spiral RPA} \end{equation} % where in this case we have % \begin{equation} \mathcal{M}_j=\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & \cos(\mathbf{Q}\cdot\bs{R}_j) & \sin(\mathbf{Q}\cdot\bs{R}_j) & 0 \\ 0 & -\sin(\mathbf{Q}\cdot\bs{R}_j) & \cos(\mathbf{Q}\cdot\bs{R}_j) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right). \label{eq_low_spiral: spiral rotation matrix with charge} \end{equation} % The physical susceptibilities $\chi_{jj'}(\omega)$ can be obtained inverting Eq.~\eqref{eq_low_spiral: rotated chis spiral RPA}. Their momentum representation typically involves two distinct spatial momenta $\mathbf{q}$ and $\mathbf{q}'$, where $\mathbf{q}'$ can equal $\mathbf{q}$, $\mathbf{q}\pm\mathbf{Q}$ (only for $a\neq b$), or $\mathbf{q}\pm 2\mathbf{Q}$. Inverting Eq.~\eqref{eq_low_spiral: rotated chis spiral RPA} and Fourier transforming, we obtain the following relations between the momentum and spin diagonal components of the physical susceptibilities and those within the rotated reference frame % \begin{subequations} \begin{align} \chi^{00}(\mathbf{q},\mathbf{q},\omega) &= \widetilde{\chi}^{00}(\mathbf{q},\omega) \\ \chi^{11}(\mathbf{q},\mathbf{q},\omega) &= \chi^{22}(\mathbf{q},\mathbf{q},\omega) \nonumber \\ &= \frac{1}{4} \big[ \widetilde{\chi}^{11}(\mathbf{q}+\mathbf{Q},\omega) + \widetilde{\chi}^{11}(\mathbf{q}-\mathbf{Q},\omega) + \widetilde{\chi}^{22}(\mathbf{q}+\mathbf{Q},\omega) + \widetilde{\chi}^{22}(\mathbf{q}-\mathbf{Q},\omega) \nonumber \\ & \quad + 2i \, \widetilde{\chi}^{12}(\mathbf{q}+\mathbf{Q},\omega) + 2i \, \widetilde{\chi}^{21}(\mathbf{q}-\mathbf{Q},\omega) \big] \nonumber \\ &= \widetilde{\chi}^{-+}(\mathbf{q}+\mathbf{Q},\omega) + \widetilde{\chi}^{+-}(\mathbf{q}-\mathbf{Q},\omega) \, , \label{eq_low_spiral: chi11phys} \\ \chi^{33}(\mathbf{q},\mathbf{q},\omega) &= \widetilde{\chi}^{33}(\mathbf{q},\omega) \, , \end{align} \label{eq_low_spiral: chi physical mom diagonal} \end{subequations} % where we have used $\widetilde{\chi}^{21}(q)=-\widetilde{\chi}^{12}(q)$ (see Table \ref{tab_low_spiral: symmetries}), and we have defined % \begin{equation} \widetilde{\chi}^{+-}(\mathbf{q},\omega)=\langle \widetilde{S}^+_{-\mathbf{q},-\omega}\widetilde{S}^-_{\mathbf{q},\omega}\rangle, \end{equation} % with $\widetilde{S}^\pm=(\widetilde{S}^1\pm i\widetilde{S}^2)/2$. 
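% As a concrete illustration of the bookkeeping in the relations above, the following minimal Python sketch assembles the momentum-diagonal physical susceptibility $\chi^{11}(\mathbf{q},\mathbf{q},\omega)$ from rotated-frame components according to Eq.~\eqref{eq_low_spiral: chi11phys}. The callables \texttt{chi\_t11}, \texttt{chi\_t22}, and \texttt{chi\_t12} are hypothetical placeholders for numerically computed rotated bubbles; they are not part of the formalism itself.
\begin{verbatim}
import numpy as np

def chi11_physical(q, w, Q, chi_t11, chi_t22, chi_t12):
    """Momentum-diagonal chi^{11}(q, q, w) from rotated-frame
    components, using chit^{21} = -chit^{12} (symmetry table).
    q and Q are NumPy arrays of length d."""
    qp, qm = q + Q, q - Q   # shifted momenta q +/- Q
    return 0.25*(chi_t11(qp, w) + chi_t11(qm, w)
                 + chi_t22(qp, w) + chi_t22(qm, w)
                 + 2j*chi_t12(qp, w) - 2j*chi_t12(qm, w))
\end{verbatim}
%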
For $a=b$ the only momentum off-diagonal components are given by % \begin{subequations} \begin{align} \chi^{11}(\mathbf{q},\mathbf{q} \pm 2\mathbf{Q},\omega) &=\frac{1}{4} \left[ \widetilde{\chi}^{11}(\mathbf{q} \mp \mathbf{Q},\omega) - \widetilde{\chi}^{22}(\mathbf{q} \mp \mathbf{Q},\omega) \right] , \\ \chi^{22}(\mathbf{q},\mathbf{q} \pm 2\mathbf{Q},\omega) &=\frac{1}{4} \left[ \widetilde{\chi}^{22}(\mathbf{q} \mp \mathbf{Q},\omega) - \widetilde{\chi}^{11}(\mathbf{q} \mp \mathbf{Q},\omega) \right] . \end{align} \label{eq_low_spiral: chi physical mom off-diagonal} \end{subequations} % Here, with a slight abuse of notation, we denote by $\chi^{aa}(\mathbf{q},\mathbf{q},\omega)$ and $\chi^{aa}(\mathbf{q},\mathbf{q}\pm2\mathbf{Q},\omega)$ the prefactors of $(2\pi)^d\delta^d(\mathbf{q}-\mathbf{q}')$ and $(2\pi)^d\delta^d(\mathbf{q}-\mathbf{q}'\mp2\mathbf{Q})$, respectively, in the contributions to the full susceptibilities $\chi^{aa}(\mathbf{q},\mathbf{q}',\omega)$. In the special case of N\'eel ordering, where $2\mathbf{Q}$ equals a reciprocal lattice vector and thus $2\mathbf{Q}\simeq\mathbf{0}$, Eqs.~\eqref{eq_low_spiral: chi physical mom diagonal} and \eqref{eq_low_spiral: chi physical mom off-diagonal} imply $\chi^{11}(\mathbf{q},\mathbf{q},\omega)\neq \chi^{22}(\mathbf{q},\mathbf{q},\omega)$, in contrast to the spiral state, where $2\mathbf{Q}\neq\mathbf{0}$. Within the random phase approximation, the susceptibilities of the Hubbard model are given by % \begin{equation} \label{eq_low_spiral: chi RPA} \widetilde{\chi}(q)=\widetilde{\chi}_0(q)[\mathbb{1}_4-\Gamma_0\widetilde{\chi}_0(q)]^{-1}, \end{equation} % where $\mathbb{1}_4$ is the $4\times 4$ unit matrix, and $\Gamma_0=2U\mathrm{diag}(-1,1,1,1)$ is the bare interaction. The bare bubbles on the real frequency axis are given by % \begin{equation} \begin{split} &\widetilde{\chi}^{ab}_0(\mathbf{q},\omega) = - \frac{1}{4} \int_\mathbf{k} T \sum_{\nu_n} \tr \big[ \sigma^a\,\widetilde{\mathbf{G}}_\mathbf{k}(i\nu_n) \, \sigma^b\,\widetilde{\mathbf{G}}_{\mathbf{k}+\mathbf{q}}(i\nu_n+i\Omega_m) \big] \Big\rvert_{i\Omega_m\to\omega+i0^+}, \end{split} \label{eq_low_spiral: chi0 expression} \end{equation} % where $\Omega_m=2m\pi T$ denotes a bosonic Matsubara frequency. Using \eqref{eq_low_spiral: spiral Gf comf}, one can perform the frequency sum, obtaining % \begin{equation} \widetilde{\chi}^{ab}_0(\mathbf{q},\omega) = - \frac{1}{8}\sum_{\ell,\ell'=\pm}\int_\mathbf{k} \mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q})F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega), \label{eq_low_spiral: chi0 def} \end{equation} % where we have defined % \begin{equation} F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega)=\frac{f(E^\ell_\mathbf{k})-f(E^{\ell'}_{\mathbf{k}+\mathbf{q}})}{\omega+i0^++E^\ell_\mathbf{k}-E^{\ell'}_{\mathbf{k}+\mathbf{q}}}, \label{eq_low_spiral: Fll def} \end{equation} % and the coherence factors % \begin{equation} \mathcal{A}^{ab}_{\ell\ell'}(\mathbf{k},\mathbf{q})=\frac{1}{2}\Tr\left[\sigma^a u^\ell_\mathbf{k}\sigma^b u^{\ell'}_{\mathbf{k}+\mathbf{q}}\right]. \label{eq_low_spiral: coh fact def} \end{equation} % The coherence factors are either purely real or purely imaginary, depending on $a$ and $b$. The functions $F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega)$ have a real part and an imaginary part proportional to a $\delta$-function.
To distinguish the corresponding contributions to $\widetilde{\chi}^{ab}_0(\mathbf{q},\omega)$, we refer to the contribution coming from the real part of $F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega)$ as $\widetilde{\chi}^{ab}_{0r}(\mathbf{q},\omega)$, and to the contribution from the imaginary part of $F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega)$ as $\widetilde{\chi}^{ab}_{0i}(\mathbf{q},\omega)$. Note that $\widetilde{\chi}^{ab}_{0r}(\mathbf{q},\omega)$ is imaginary and $\widetilde{\chi}^{ab}_{0i}(\mathbf{q},\omega)$ is real if the corresponding coherence factor is imaginary. % \subsection{Symmetries of the bare susceptibilities} % Both contributions $\widetilde{\chi}_{0r}^{ab}$ and $\widetilde{\chi}_{0i}^{ab}$ to $\widetilde{\chi}_0^{ab}$ have a well-defined parity under $\mathbf{q} \to -\mathbf{q}$. In Appendix~\ref{app: low en spiral} we show that the diagonal components of $\widetilde{\chi}_{0r}^{ab}$, as well as the off-diagonal ones involving either none or both of the spin components 2 and 3, are symmetric, while the remaining off-diagonal elements are antisymmetric. The sign change of $\widetilde{\chi}_{0i}^{ab}(q)$ under $\mathbf{q} \to -\mathbf{q}$ is the opposite, that is, $\widetilde{\chi}_{0i}^{ab}(q)$ is antisymmetric if $\widetilde{\chi}_{0r}^{ab}(q)$ is symmetric and vice versa. In two spatial dimensions, for a spiral wave vector $\mathbf{Q}$ of the form $(\pi-2\pi\eta,\pi)$ all the susceptibilities are symmetric under $q_y \to -q_y$. This implies that those susceptibilities which are antisymmetric for $\mathbf{q} \to -\mathbf{q}$ are identically zero for $q_x=0$, and vanish in the limit of N\'eel order ($\eta \to 0$). Similarly, for a diagonal spiral $\mathbf{Q} = (\pi-2\pi\eta,\pi-2\pi\eta)$ all the susceptibilities are symmetric for $q_x \leftrightarrow q_y$ and those which are antisymmetric in $\mathbf{q}$ vanish for $q_y = -q_x$. The contributions $\widetilde{\chi}_{0r}^{ab}$ and $\widetilde{\chi}_{0i}^{ab}$ to $\widetilde{\chi}_0^{ab}$ are also either symmetric or antisymmetric under the transformation $\omega \to -\omega$. In Appendix~\ref{app: low en spiral} we show that among the functions $\widetilde{\chi}_{0r}^{ab}$ all the diagonal parts and the off-diagonal ones which do not involve the 3-component of the spin are symmetric in $\omega$. The off-diagonal terms involving the 3-component of the spin are antisymmetric. $\widetilde{\chi}_{0i}^{ab}(q)$ is antisymmetric under $\omega \to -\omega$ if $\widetilde{\chi}_{0r}^{ab}(q)$ is symmetric and vice versa. In Table \ref{tab_low_spiral: symmetries} we show a summary of the generic (for arbitrary $\mathbf{Q}$) symmetries of the bare susceptibilities. Susceptibilities with real (imaginary) coherence factors are symmetric (antisymmetric) under the exchange $a \leftrightarrow b$. % \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|c|c|} \hline $a,b$ & 0 & 1 & 2 & 3 \\ \hline 0 & $+,+,+$ & $+,+,+$ & $-,+,-$ & $-,-,+$ \\ \hline 1 & $+,+,+$ & $+,+,+$ & $-,+,-$ & $-,-,+$ \\ \hline 2 & $-,+,-$ & $-,+,-$ & $+,+,+$ & $+,-,-$ \\ \hline 3 & $-,-,+$ & $-,-,+$ & $+,-,-$ & $+,+,+$ \\ \hline \end{tabular} \caption{Symmetries of the bare susceptibilities. The first sign in each field represents the sign change of $\widetilde{\chi}_{0r}^{ab}(q)$ under $\mathbf{q} \to -\mathbf{q}$. The second one represents the sign change of ${\widetilde{\chi}_{0r}^{ab}}(q)$ under $\omega \to -\omega$. The sign changes of ${\widetilde{\chi}_{0i}^{ab}}(q)$ under $\mathbf{q} \to -\mathbf{q}$ or $\omega \to -\omega$ are just the opposite.
The third sign in each field is the sign change of $\widetilde{\chi}_0^{ab}(q)$ under the exchange $a \leftrightarrow b$.} \label{tab_low_spiral: symmetries} \end{table} \subsection{Location of Goldstone modes} We now identify the location of Goldstone modes in the spiral magnet by analyzing divergences of the rotated susceptibilities $\widetilde{\chi}(q)$. % \subsubsection{In-plane mode} % For $q=0=(\mathbf{0},0)$, all the off-diagonal components of the bare bubbles $\widetilde{\chi}_0(q)$ involving the 2-component of the spin vanish: $\widetilde{\chi}_0^{20}(q)$ and $\widetilde{\chi}_0^{21}(q)$ because they are odd in momentum, and $\widetilde{\chi}_0^{23}(q)$ because it is antisymmetric for $\omega\to-\omega$. We also remark that all the $\widetilde{\chi}_{0i}^{ab}$ vanish at zero frequency. The RPA expression for the 22-component of the rotated susceptibility therefore takes the simple form % \begin{equation} \widetilde{\chi}^{22}(0) = \frac{\widetilde{\chi}^{22}_0(0)}{1 - 2 U\widetilde{\chi}^{22}_0(0)}. \label{eq_low_spiral: chi22(0) RPA} \end{equation} % Notice that the limits $\mathbf{q}\to\mathbf{0}$ and $\omega\to0$ commute for $\widetilde{\chi}_0^{22}$ as the intraband coherence factor $A^{22}_{\ell\ell}(\mathbf{k},\mathbf{q})$ vanishes for $\mathbf{q}=\mathbf{0}$ (see Appendix~\ref{app: low en spiral}). Eq.~\eqref{eq_low_spiral: chi0 def} yields % \begin{equation} \widetilde{\chi}_0^{22}(0)=\int_\mathbf{k} \frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{4e_\mathbf{k}}. \end{equation} % The denominator of Eq.~\eqref{eq_low_spiral: chi22(0) RPA} vanishes if the gap equation~\eqref{eq_low_spiral: gap equation} is fulfilled. Thus, $\widetilde{\chi}^{22}(0)$ is divergent. From Eq.~\eqref{eq_low_spiral: chi11phys}, we see that this makes the momentum diagonal part of the physical susceptibilities $\chi^{11}(\mathbf{q},\mathbf{q},0)$ and $\chi^{22}(\mathbf{q},\mathbf{q},0)$ divergent at $\mathbf{q}=\pm \mathbf{Q}$. These divergences are associated with a massless Goldstone mode corresponding to fluctuations of the spins within the $xy$ plane~\cite{Chubukov1995}, in which the magnetization is aligned. By contrast, $\widetilde{\chi}^{11}(\mathbf{q},0)$ is always finite and corresponds to a massive amplitude mode. % \subsubsection{Out-of-plane modes} % By letting $\omega\to 0$, all the off-diagonal components of the bare susceptibilities involving the 3-component of the spin vanish as they are odd in $\omega$. Hence, we can express $\widetilde{\chi}^{33}(\mathbf{q},0)$ as % \begin{equation} \widetilde{\chi}^{33}(\mathbf{q},0) = \frac{\widetilde{\chi}^{33}_0(\mathbf{q},0)}{1 - 2 U \widetilde{\chi}^{33}_0(\mathbf{q},0)} . \label{eq_low_spiral: chi33(q,0)} \end{equation} % In Appendix~\ref{app: low en spiral}, we show that % \begin{equation} \widetilde{\chi}_0^{33}(\pm\mathbf{Q},0) = \int_\mathbf{k} \frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{4e_\mathbf{k}} = \widetilde{\chi}_0^{22}(0) , \label{eq_low_spiral: chi33_0(q,0)} \end{equation} % so that the denominator of \eqref{eq_low_spiral: chi33(q,0)} vanishes if the gap equation is fulfilled. Therefore, the 33-component of the susceptibility is divergent at $q=(\pm \mathbf{Q},0)$ due to two degenerate Goldstone modes corresponding to fluctuations of the spins out of the $xy$ plane~\cite{Chubukov1995}. 
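% The location of the in-plane Goldstone mode can also be checked numerically. The following self-contained Python sketch -- an illustration of ours, with an arbitrarily chosen nearest-neighbor square-lattice dispersion and arbitrary parameter values, not part of the derivation -- iterates the gap equation~\eqref{eq_low_spiral: gap equation} and verifies that the RPA denominator $1-2U\widetilde{\chi}_0^{22}(0)$ vanishes at the self-consistent solution (if the chosen parameters do not support magnetic order, $\Delta$ simply iterates to zero):
\begin{verbatim}
import numpy as np

t, U, T, mu = 1.0, 3.0, 0.05, -0.5      # illustrative parameters
eta = 0.1
Q = np.array([np.pi - 2*np.pi*eta, np.pi])
N = 256                                  # Brillouin-zone grid

grid = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.stack(np.meshgrid(grid, grid, indexing="ij"), axis=-1)

def xi(kv):   # xi_k = eps_k - mu with eps_k = -2t(cos kx + cos ky)
    return -2*t*(np.cos(kv[..., 0]) + np.cos(kv[..., 1])) - mu

def f(x):     # Fermi function, argument clipped to avoid overflow
    return 1.0/(np.exp(np.clip(x/T, -500, 500)) + 1.0)

g = 0.5*(xi(k) + xi(k + Q))              # g_k
h = 0.5*(xi(k) - xi(k + Q))              # h_k

Delta = 1.0
for _ in range(500):                     # iterate the gap equation
    e = np.sqrt(h**2 + Delta**2)
    Delta = U*np.mean(Delta/(2*e)*(f(g - e) - f(g + e)))

e = np.sqrt(h**2 + Delta**2)
chi0_22 = np.mean((f(g - e) - f(g + e))/(4*e))  # chi0^22(q=0, w=0)
print("Delta =", Delta, " 1 - 2U chi0_22 =", 1 - 2*U*chi0_22)
\end{verbatim}
% Here the momentum integral $\int_\mathbf{k}$ reduces to a simple average over the grid; the printed denominator should vanish, up to discretization error, whenever $\Delta$ converges to a nonzero value.
%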
% \section{Properties of the Goldstone modes} \label{sec_low_spiral: properties of Goldstones} % As we have already anticipated in Sec.~\ref{subsec_low_spiral: WI spiral}, the susceptibilities containing a Goldstone mode can be expanded around their zero-energy pole as (cf.~Eq.~\eqref{eq_low_spiral: low energy susceptibilities}) % \begin{subequations} \begin{align} &\widetilde{\chi}^{22}(\mathbf{q},\omega) \sim \frac{m^2} {J^{\msml{\square}}_{\alpha\beta} \, q_\alpha q_\beta - \chi_\mathrm{dyn}^{\msml{\square}} \,\omega^2 + iD^\msml{\square}(\mathbf{q},\omega)} \, ,\\ &\widetilde{\chi}^{33}(\mathbf{q},\omega) \sim \frac{m^2/2} {J^{\perp}_{\alpha\beta} \, \big( q_\alpha \mp Q_{\alpha} \big) \big( q_\beta\mp Q_{\beta} \big) - \chi_\mathrm{dyn}^{\perp} \,\omega^2 + iD^\perp(\mathbf{q},\omega)} \, , \end{align} \end{subequations} % where $m$ is the magnetization amplitude as defined before, $J^{a}_{\alpha\beta}$ ($a\in\{\msml{\square},\perp\}$) are the spin stiffnesses, and $\chi_\mathrm{dyn}^{a}$ the dynamical susceptibilities. The ratios $m^2/\chi_\mathrm{dyn}^{a}$ define the spectral weights of the Goldstone modes and $[J^{a}_{\alpha\beta}/\chi_\mathrm{dyn}^{a}]^{1/2}$ their velocity tensors. Compared with Eq.~\eqref{eq_low_spiral: low energy susceptibilities}, we have also included an imaginary part $iD^{a}(\mathbf{q},\omega)$ in the denominator of the susceptibilities, which describes the \emph{Landau damping} of the collective excitations caused by their decay into particle-hole pairs, and which was neglected in Sec.~\ref{subsec_low_spiral: WI spiral}. The structure of this term will be discussed below. In the following we discuss how to extract the spin stiffnesses, dynamical susceptibilities, and Landau dampings from the RPA expressions for $\widetilde{\chi}^{22}(\mathbf{q},\omega)$ and $\widetilde{\chi}^{33}(\mathbf{q},\omega)$. % \subsection{In-plane mode} % Using \eqref{eq_low_spiral: chi RPA}, the in-plane susceptibility can be conveniently written as % \begin{equation} \widetilde{\chi}^{22}(q)=\frac{\overline{\chi}_0^{22}(q)}{1-2U\overline{\chi}_0^{22}(q)}, \label{eq_low_spiral: chi22 RPA} \end{equation} % with % \begin{equation} \overline{\chi}_0^{22}(q)=\widetilde{\chi}_0^{22}(q) + \sum_{a,b\in\{0,1,3\}}\widetilde{\chi}_0^{2a}(q)\widetilde{\Gamma}_{2}^{ab}(q)\widetilde{\chi}_0^{b2}(q). \end{equation} % $\widetilde{\Gamma}_{2}(q)$ is given by % \begin{equation} \widetilde{\Gamma}_{2}(q)=\left[\mathbb{1}_3-\Gamma_{0,2}\widetilde{\chi}_{0,2}(q)\right]^{-1}\Gamma_{0,2}, \end{equation} % where $\Gamma_{0,2}^{ab}$ and $\widetilde{\chi}_{0,2}^{ab}(q)$ are matrices obtained from $\Gamma_0^{ab}$ and $\widetilde{\chi}_0^{ab}(q)$ by removing the components where $a=2$ and/or $b=2$, and $\mathbb{1}_3$ denotes the $3\times3$ identity matrix. For later convenience, we notice that for $q=0$, all the off-diagonal elements $\widetilde{\chi}_0^{2a}(q)$ and $\widetilde{\chi}_0^{a2}(q)$ vanish, so that $\widetilde{\Gamma}_{2}(0)$ can be obtained from the full expression % \begin{equation} \widetilde{\Gamma}(q)=\left[\mathbb{1}-\Gamma_{0}\widetilde{\chi}_{0}(q)\right]^{-1}\Gamma_{0}, \label{eq_low_spiral: Gamma(q) definition} \end{equation} % selecting only the components in which the indices take the values 0, 1, or 3. \subsubsection{Spin stiffness} % Setting $\omega=0$, the bare susceptibilities $\widetilde{\chi}_0^{23}(q)$ and $\widetilde{\chi}_0^{32}(q)$ vanish as they are odd in $\omega$.
Moreover, in the limit $\mathbf{q}\to\mathbf{0}$, $\widetilde{\chi}_0^{2a}(\mathbf{q},0)$ and $\widetilde{\chi}_0^{a2}(\mathbf{q},0)$, with $a=0,1$, are linear in $\mathbf{q}$ as they are odd under $\mathbf{q}\to-\mathbf{q}$. The in-plane spin stiffness can therefore be written as (cf.~Eq.~\eqref{eq_low_spiral: stiffenss and chi spiral final J3}) % \begin{equation} \begin{split} J^\msml{\square}_{\alpha\beta}=& -2\Delta^2\partial^2_{q_\alpha q_\beta}\overline{\chi}_0^{22}(0)\\ =&-2\Delta^2\bigg[ \partial^2_{q_\alpha q_\beta}\widetilde{\chi}_0^{22}(0)+ 2\sum_{a,b\in\{0,1\}}\partial_{q_\alpha}\widetilde{\chi}_0^{2a}(0)\,\widetilde{\Gamma}^{ab}(\mathbf{q}\to\mathbf{0},0)\,\partial_{q_\beta}\widetilde{\chi}_0^{b2}(0) \bigg], \end{split} \label{eq_low_spiral: J3 RPA} \end{equation} % where we have used $\overline{\chi}_0^{22}(0)=\widetilde{\chi}_0^{22}(0)=1/(2U)$, which follows from the gap equation, and $\partial_{q_\alpha}f(0)$ is a shorthand for $\partial f(\mathbf{q},0)/\partial q_\alpha |_{\mathbf{q}\to\mathbf{0}}$, and similarly for $\partial^2_{q_\alpha q_\beta}f(0)$. \subsubsection{Dynamical susceptibility} % In a similar way, if we set $\mathbf{q}$ to $\mathbf{0}$ and consider the limit of small $\omega$, the terms where $a$ and/or $b$ are 0 or 1 vanish, as $\widetilde{\chi}_0^{2a}(q)$ and $\widetilde{\chi}_0^{a2}(q)$ for $a=0,1$ are odd in $\mathbf{q}$. On the other hand, $\widetilde{\chi}_0^{23}(q)$ and $\widetilde{\chi}_0^{32}(q)$ are linear in $\omega$ for small $\omega$. With these considerations, the in-plane dynamical susceptibility is given by (see Eq.~\eqref{eq_low_spiral: stiffenss and chi spiral final chi3}) % \begin{equation} \begin{split} \chi_\mathrm{dyn}^\msml{\square}= &2\Delta^2\partial^2_{\omega}\overline{\chi}_0^{22}(0)\\ =&2\Delta^2\Big[ \partial^2_{\omega}\widetilde{\chi}_0^{22}(0) +2\partial_{\omega}\widetilde{\chi}_0^{23}(0)\,\widetilde{\Gamma}^{33}(\mathbf{0},\omega\to 0)\,\partial_{\omega}\widetilde{\chi}_0^{32}(0) \Big], \end{split} \label{eq_low_spiral: chi3 RPA} \end{equation} % where $\partial^n_{\omega}f(0)$ is a shorthand for $\partial^n f(\mathbf{0},\omega)/\partial \omega^n |_{\omega\to 0}$, and $\widetilde{\Gamma}^{33}(\mathbf{0},\omega\to 0)$ can be cast in the simple form % \begin{equation} \widetilde{\Gamma}^{33}(\mathbf{0},\omega\to 0)=\frac{2U}{1-2U \widetilde{\chi}_0^{33}(\mathbf{0},\omega\to 0)}. \end{equation} % \subsubsection{Landau damping} % We now analyze the leading imaginary term, describing the damping of the in-plane Goldstone mode for small $\mathbf{q}$ and $\omega$. Imaginary contributions to the bare susceptibilities arise from the $\delta$-function part of $F_{\ell\ell'}(\mathbf{k},\mathbf{q},\omega)$, that is, from $\widetilde{\chi}_{0i}^{ab}$. For small $\mathbf{q}$ and $\omega$ only intraband ($\ell=\ell'$) terms contribute, since $E^+_\mathbf{k}-E^-_\mathbf{k} = 2e_\mathbf{k} \geq 2\Delta$. We expand the imaginary part of $1/\widetilde{\chi}^{22}(\mathbf{q},\omega)$ for small $\mathbf{q}$ and $\omega$ by keeping the ratio $\hat{\omega}=\omega/|\mathbf{q}|$ fixed. The coupling to the 3-component can be neglected, since the intraband coherence factor $A^{23}_{\ell\ell}(\mathbf{k}-\mathbf{q}/2,\mathbf{q})$ is already of order $|\mathbf{q}|^2$ for small $\mathbf{q}$. Hence at order $|\mathbf{q}|^2$, we obtain % \begin{equation} \label{eq_low_spiral: chit22ominv} \begin{split} {\rm Im} \, \frac{1}{\widetilde{\chi}^{22}(\mathbf{q},\hat{\omega}|\mathbf{q}|)} = - 4U^2 \bigg[ \widetilde{\chi}_{0i}^{22}(\mathbf{q},\hat{\omega}|\mathbf{q}|) + {\rm Im} \!
\sum_{a,b=0,1} \widetilde{\chi}_0^{2a}(\mathbf{q},\hat{\omega}|\mathbf{q}|) \widetilde{\Gamma}^{ab}(\mathbf{0},0) \, \widetilde{\chi}_0^{b2}(\mathbf{q},\hat{\omega}|\mathbf{q}|) \bigg]. \end{split} \end{equation} % Note that $\widetilde{\Gamma}^{ab}(\mathbf{0},0)=\lim_{|\mathbf{q}|\to0}\widetilde{\Gamma}^{ab}(\mathbf{q},\hat{\omega}|\mathbf{q}|)$ depends on $\hat{\omega}$ and the direction of $\hat{\mathbf{q}}=\mathbf{q}/|\mathbf{q}|$. We now show that both terms in Eq.~\eqref{eq_low_spiral: chit22ominv} are of order $|\mathbf{q}|^2$ at fixed $\hat{\omega}$. Shifting the integration variable $\mathbf{k}$ in Eq.~\eqref{eq_low_spiral: chi0 def} by $-\mathbf{q}/2$, $\widetilde{\chi}_{0i}^{22}$ becomes % \begin{equation} \widetilde{\chi}_{0i}^{22}(\mathbf{q},\omega)=\frac{i\pi}{8}\int_\mathbf{k} \sum_{\ell,\ell'}A^{22}_{\ell\ell'}(\mathbf{k}-\mathbf{q}/2,\mathbf{q}) \big[ f(E_{\mathbf{k}-\mathbf{q}/2}^\ell) - f(E_{\mathbf{k}+\mathbf{q}/2}^{\ell'}) \big] \delta(\omega + E_{\mathbf{k}-\mathbf{q}/2}^\ell - E_{\mathbf{k}+\mathbf{q}/2}^{\ell'}). \end{equation} % For small $\omega$, only the intraband terms contribute. The intraband coherence factor % \begin{equation} A^{22}_{\ell\ell}(\mathbf{k}-\mathbf{q}/2,\mathbf{q})=1-\frac{h_{\mathbf{k}-\mathbf{q}/2}h_{\mathbf{k}+\mathbf{q}/2}+\Delta^2}{e_{\mathbf{k}-\mathbf{q}/2} e_{\mathbf{k}+\mathbf{q}/2}}, \end{equation} % is of order $|\mathbf{q}|^2$ for small $\mathbf{q}$. Expanding $E_{\mathbf{k}+\mathbf{q}/2}^\ell - E_{\mathbf{k}-\mathbf{q}/2}^{\ell}=\mathbf{q}\cdot\nabla_\mathbf{k} E^\ell_\mathbf{k}+O(|\mathbf{q}|^3)$ and $f(E_{\mathbf{k}-\mathbf{q}/2}^\ell) - f(E_{\mathbf{k}+\mathbf{q}/2}^{\ell})=-f^\prime(E^\ell_\mathbf{k}) (\mathbf{q}\cdot\nabla_\mathbf{k} E^\ell_\mathbf{k})+O(|\mathbf{q}|^3)$, with $f^\prime(x)=df(x)/dx$, and using $\delta(|\mathbf{q}|x)=|\mathbf{q}|^{-1}\delta(x)$, we obtain % \begin{equation}\label{eq_low_spiral: chi0i22} \widetilde{\chi}_{0i}^{22}(\mathbf{q},\omega)=-\frac{i\pi}{16}\hat{\omega}q_\alpha q_\beta \int_\mathbf{k}\sum_{\ell} \left[\partial^2_{q_\alpha q_\beta}A^{22}_{\ell\ell}(\mathbf{k}-\mathbf{q}/2,\mathbf{q})\big\rvert_{\mathbf{q}=\mathbf{0}}\right]f^\prime(E_\mathbf{k}^\ell)\, \delta(\hat{\omega}-\hat{\mathbf{q}}\cdot\nabla_\mathbf{k} E^\ell_\mathbf{k})+O(|\mathbf{q}|^3). \end{equation} % We thus conclude that $\widetilde{\chi}_{0i}^{22}(\mathbf{q},\omega)$ is of order $\hat{\omega}|\mathbf{q}|^2$ for small $\mathbf{q}$ and $\omega=\hat{\omega} |\mathbf{q}|$. Since at low temperatures $T\ll\Delta$ the term $f^\prime(E^\ell_\mathbf{k})$ in Eq.~\eqref{eq_low_spiral: chi0i22} behaves as $-\delta(E^\ell_\mathbf{k})$, we deduce that the mere presence of Fermi surfaces (that is, of $\mathbf{k}$-points where $E^\ell_\mathbf{k}=0$) is sufficient to induce a finite $\widetilde{\chi}_{0i}^{22}(\mathbf{q},\omega)$ at small $\mathbf{q}$. Since the effective interaction $\widetilde{\Gamma}^{ab}(\mathbf{0},0)$ is real, the second term in Eq.~\eqref{eq_low_spiral: chit22ominv} receives contributions only from the cross terms $\widetilde{\chi}_{0r}^{2a}(\mathbf{q},\omega) \widetilde{\Gamma}^{ab}(\mathbf{0},0) \, \widetilde{\chi}_{0i}^{b2}(\mathbf{q},\omega)$ and $\widetilde{\chi}_{0i}^{2a}(\mathbf{q},\omega) \widetilde{\Gamma}^{ab}(\mathbf{0},0) \, \widetilde{\chi}_{0r}^{b2}(\mathbf{q},\omega)$. For small $\omega$ only intraband terms contribute to $\widetilde{\chi}_{0i}^{2a}(\mathbf{q},\omega)$ and $\widetilde{\chi}_{0i}^{b2}(\mathbf{q},\omega)$.
Both are of order $\hat{\omega}|\mathbf{q}|$ for small $\mathbf{q}$ at fixed $\hat{\omega}$, because the intraband coherence factors $A_{\ell\ell}^{02}(\mathbf{k},\mathbf{q}) = - A_{\ell\ell}^{20}(\mathbf{k},\mathbf{q})$ and $A_{\ell\ell}^{12}(\mathbf{k},\mathbf{q}) = - A_{\ell\ell}^{21}(\mathbf{k},\mathbf{q})$ are of order $|\mathbf{q}|$. Moreover, $\widetilde{\chi}_{0r}^{2a}(\mathbf{q},\omega)$ and $\widetilde{\chi}_{0r}^{b2}(\mathbf{q},\omega)$ are antisymmetric in $\mathbf{q}$ and thus of order $|\mathbf{q}|$, too. Hence, the second term in Eq.~\eqref{eq_low_spiral: chit22ominv} is of order $\hat{\omega}|\mathbf{q}|^2$. We thus have shown that the damping term of the in-plane mode has the form % \begin{equation} \label{eq_low_spiral: damping2} {\rm Im} \, \frac{m^2}{\widetilde{\chi}^{22}(\mathbf{q},\omega)} = - \hat{\omega}|\mathbf{q}|^2 \gamma(\hat{\mathbf{q}},\hat{\omega}) + O(|\mathbf{q}|^3) \,, \end{equation} % where the scaling function $\gamma(\hat{\mathbf{q}},\hat{\omega})$ is symmetric in $\hat{\omega}$ and finite for $\hat{\omega}=0$. The Landau damping of the in-plane mode has the same form as that of the two Goldstone modes in a N\'eel antiferromagnet~\cite{Sachdev1995}. It is of the same order as the leading real terms near the pole, implying that the damping of the Goldstone mode is of the same order as its excitation energy, which scales as $|\mathbf{q}|$. Hence, the in-plane mode of a spin spiral state and the Goldstone modes of a N\'eel antiferromagnet are not \emph{asymptotically stable} quasiparticles, as asymptotic stability would require a damping rate vanishing faster than the excitation energy upon approaching the pole of the susceptibility. % \subsection{Out-of-plane modes} % Similarly to the in-plane mode, one can write the out-of-plane susceptibility in the form % \begin{equation} \widetilde{\chi}^{33}(q)=\frac{\overline{\chi}_0^{33}(q)}{1-2U\overline{\chi}_0^{33}(q)}, \label{eq_low_spiral: chi33 RPA} \end{equation} % with % \begin{equation}{\label{eq_low_spiral: chit33(q)}} \overline{\chi}_0^{33}(q)=\widetilde{\chi}_0^{33}(q) + \sum_{a,b\in\{0,1,2\}}\widetilde{\chi}_0^{3a}(q)\widetilde{\Gamma}_{3}^{ab}(q)\widetilde{\chi}_0^{b3}(q), \end{equation} % where $\widetilde{\Gamma}_{3}(q)$ is defined similarly to $\widetilde{\Gamma}_{2}(q)$, that is, by removing the components that involve the index 3 instead of the index 2. We also notice that % \begin{equation} \widetilde{\Gamma}_{3}^{ab}(\mathbf{q},0)=\widetilde{\Gamma}^{ab}(\mathbf{q},0), \end{equation} % for $a$, $b=0,1,2$, because all the off-diagonal components $\widetilde{\chi}_0^{3a}(q)$ and $\widetilde{\chi}_0^{a3}(q)$ vanish for zero frequency. % \subsubsection{Spin stiffness} % Using Eq.~\eqref{eq_low_spiral: stiffenss and chi spiral final J2} and $\overline{\chi}_0^{33}(\pm Q)=\widetilde{\chi}_0^{33}(\pm Q)=1/(2U)$, we obtain the out-of-plane spin stiffness % \begin{equation} J_{\alpha\beta}^\perp= -\Delta^2\partial^2_{q_\alpha q_\beta}\overline{\chi}_0^{33}(\pm Q)= -\Delta^2\partial^2_{q_\alpha q_\beta}\widetilde{\chi}_0^{33}(\pm Q), \end{equation} % where $\partial^2_{q_\alpha q_\beta} f(\pm Q)$ stands for $\partial^2f(\mathbf{q},0)/\partial q_\alpha\partial q_\beta |_{\mathbf{q}\to\pm\mathbf{Q}}$.
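% The second momentum derivative above lends itself to a direct numerical check. Continuing the illustrative Python setup introduced earlier (same lattice, parameters, and caveats), the sketch below builds $\widetilde{\chi}_0^{33}(\mathbf{q},0)$ from Eq.~\eqref{eq_low_spiral: chi0 def}, using the explicit coherence factor $A^{33}_{\ell\ell'}(\mathbf{k},\mathbf{q})=1+\ell\ell'\,(h_\mathbf{k} h_{\mathbf{k}+\mathbf{q}}-\Delta^2)/(e_\mathbf{k} e_{\mathbf{k}+\mathbf{q}})$, which we evaluated ourselves from Eq.~\eqref{eq_low_spiral: coh fact def}, and regularizing the degenerate limit of $F_{\ell\ell}$ by $f'$ (l'H\^opital's rule):
\begin{verbatim}
def fprime(x):                   # derivative of the Fermi function
    ex = np.exp(np.clip(x/T, -500, 500))
    return -ex/(T*(ex + 1.0)**2)

def bands(q):                    # g, h, e at momentum k + q
    gq = 0.5*(xi(k + q) + xi(k + q + Q))
    hq = 0.5*(xi(k + q) - xi(k + q + Q))
    return gq, hq, np.sqrt(hq**2 + Delta**2)

def chi0_33(q):                  # chi0^33(q, w=0), Eq. (chi0 def)
    g2, h2, e2 = bands(q)
    res = 0.0
    for l in (+1, -1):
        for lp in (+1, -1):
            E1, E2 = g + l*e, g2 + lp*e2
            A33 = 1 + l*lp*(h*h2 - Delta**2)/(e*e2)
            dE = E1 - E2
            ok = np.abs(dE) > 1e-9
            F = np.where(ok, (f(E1) - f(E2))/np.where(ok, dE, 1.0),
                         fprime(E1))       # l'Hopital at degeneracy
            res += -0.125*np.mean(A33*F)
    return res

dq = 1e-3                        # central finite difference at q = Q
step = np.array([dq, 0.0])
J_perp_xx = -Delta**2*(chi0_33(Q + step) - 2*chi0_33(Q)
                       + chi0_33(Q - step))/dq**2
print("J_perp_xx =", J_perp_xx)
\end{verbatim}
%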
% \subsubsection{Dynamical susceptibility} % In the limit $\omega\to 0$, all the $\widetilde{\chi}_0^{3a}(q)$ and $\widetilde{\chi}_0^{a3}(q)$, with $a=0,1,2$, are linear in $\omega$, and the dynamical susceptibility is given by (see Eq.~\eqref{eq_low_spiral: stiffenss and chi spiral final chi2}) % \begin{equation} \begin{split} \chi_\mathrm{dyn}^\perp&=\Delta^2 \partial^2_\omega \overline{\chi}_0^{33}(\pm Q)\\ &=\Delta^2 \bigg[ \partial^2_\omega \widetilde{\chi}^{33}_0(\pm Q) + 2\sum_{a,b\in\{0,1,2\}}\partial_\omega \widetilde{\chi}_0^{3a}(\pm Q) \widetilde{\Gamma}^{ab}(\pm Q) \partial_\omega \widetilde{\chi}_0^{b3}(\pm Q) \bigg], \end{split} \end{equation} % with $\partial^n_{\omega} f(\pm Q)$ a shorthand for $\partial^n f(\pm\mathbf{Q},\omega)/\partial \omega^n |_{\omega\to 0}$. We remark that for $\widetilde{\Gamma}^{ab}(q)$ the limits $\mathbf{q}\to\mathbf{Q}$ and $\omega\to 0$ commute if $\mathbf{Q}$ is not a high-symmetry wave vector, that is, if $E^\ell_{\mathbf{k}+\mathbf{Q}}\neq E^\ell_\mathbf{k}$. % \subsubsection{Landau damping} % We now analyze the $\mathbf{q}$- and $\omega$-dependence of the imaginary part of $1/\widetilde{\chi}^{33}$ for small $\omega$ and for $\mathbf{q}$ near $\pm\mathbf{Q}$. We discuss the case $\mathbf{q}\sim\mathbf{Q}$; the behavior for $\mathbf{q}\sim-\mathbf{Q}$ is equivalent. We first fix $\mathbf{q}=\mathbf{Q}$ and study the $\omega$-dependence of the damping term. Since all the off-diagonal bare susceptibilities $\widetilde{\chi}_0^{3a}(q)$ and $\widetilde{\chi}_0^{b3}(q)$ in Eq.~\eqref{eq_low_spiral: chit33(q)} vanish for $\omega=0$, we obtain the following expansion of $1/\widetilde{\chi}^{33}(\mathbf{Q},\omega)$ for small $\omega$ % \begin{equation} \label{eq_low_spiral: chit33inv} \frac{1}{\widetilde{\chi}^{33}(\mathbf{Q},\omega)} = 2U \bigg[ 1 - 2U \widetilde{\chi}_0^{33}(\mathbf{Q},\omega) - 2U \!\!\! \sum_{a,b\in\{0,1,2\}} \widetilde{\chi}_0^{3a}(\mathbf{Q},\omega) \widetilde{\Gamma}^{ab}(\mathbf{Q},0) \, \widetilde{\chi}_0^{b3}(\mathbf{Q},\omega) \bigg] + O(\omega^3) \, . \end{equation} % The first contribution to the imaginary part of $1/\widetilde{\chi}^{33}$ comes from the imaginary part of the bare susceptibility $\widetilde{\chi}^{33}_{0i}(\mathbf{Q},\omega)$: % \begin{equation} \label{eq_low_spiral: chit0i33} \widetilde{\chi}_{0i}^{33}(\mathbf{Q},\omega) = \frac{i\pi}{8} \int_\mathbf{k} \sum_{\ell,\ell'} A_{\ell\ell'}^{33}(\mathbf{k},\mathbf{Q}) \big[ f(E_\mathbf{k}^\ell) - f(E_{\mathbf{k}+\mathbf{Q}}^{\ell'}) \big] \delta(\omega + E_\mathbf{k}^\ell - E_{\mathbf{k}+\mathbf{Q}}^{\ell'}) \, . \end{equation} % For small $\omega$, only momenta for which \emph{both} $E^\ell_\mathbf{k}$ and $E^{\ell'}_{\mathbf{k}+\mathbf{Q}}$ are $O(\omega)$ contribute to the integral. These momenta are restricted to a small neighborhood of the \emph{hot spots} $\mathbf{k}_H$, defined by the relations % \begin{equation}\label{eq_low_spiral: hot spots} E^\ell_{\mathbf{k}_H}=E^{\ell'}_{\mathbf{k}_H+\mathbf{Q}}=0. \end{equation} % In most cases, only intraband ($\ell=\ell'$) hot spots appear. While the existence of interband hot spots cannot be excluded in general, we restrict our analysis to intraband contributions. For $\ell=\ell'$, Eq.~\eqref{eq_low_spiral: hot spots} is equivalent to % \begin{equation} E_{\mathbf{k}_H}^\ell = 0 \quad \mbox{and} \quad \xi_{\mathbf{k}_H} = \xi_{\mathbf{k}_H+2\mathbf{Q}} \, .
\end{equation} % In the N\'eel state $2\mathbf{Q}$ is a reciprocal lattice vector and the second equation is always satisfied, so that all momenta on the Fermi surfaces are hot spots. The condition $\xi_{\mathbf{k}_H} = \xi_{\mathbf{k}_H+2\mathbf{Q}}$ implies $h_{\mathbf{k}_H+\mathbf{Q}}=-h_{\mathbf{k}_H}$, which in turn translates into $A^{33}_{\ell\ell}(\mathbf{k}_H,\mathbf{Q})=0$ and also $\nabla_\mathbf{k} A^{33}_{\ell\ell}(\mathbf{k},\mathbf{Q})|_{\mathbf{k}\to\mathbf{k}_H}=0$. For small frequencies and temperatures, the momenta contributing to the integral~\eqref{eq_low_spiral: chit0i33} are situated at a distance of order $\omega$ from the hot spots. For these momenta, the coherence factor is of order $\omega^2$, as both $A^{33}_{\ell\ell}(\mathbf{k},\mathbf{Q})$ and its gradient vanish at the hot spots. Multiplying this result by the usual factor $\propto\omega$ coming from the difference of the Fermi functions, we obtain % \begin{equation} \widetilde{\chi}^{33}_{0i}(\mathbf{Q},\omega) \propto \omega^3, \end{equation} % for small $\omega$. We now consider the contribution to the imaginary part of $1/\widetilde{\chi}^{33}$ coming from the second term in Eq.~\eqref{eq_low_spiral: chit33inv}. Since $\widetilde{\Gamma}^{ab}(\mathbf{Q},0)$ is real, only the cross terms $\sum_{a,b}\widetilde{\chi}_{0i}^{3a}(\mathbf{Q},\omega)\, \widetilde{\Gamma}^{ab}(\mathbf{Q},0)\, \widetilde{\chi}_{0r}^{b3}(\mathbf{Q},\omega)$ and $\sum_{a,b}\widetilde{\chi}_{0r}^{3a}(\mathbf{Q},\omega)\, \widetilde{\Gamma}^{ab}(\mathbf{Q},0)\, \widetilde{\chi}_{0i}^{b3}(\mathbf{Q},\omega)$ contribute to the damping of the out-of-plane modes. The real parts of the bare susceptibilities $\widetilde{\chi}^{3a}_{0r}(\mathbf{Q},\omega)$ and $\widetilde{\chi}^{b3}_{0r}(\mathbf{Q},\omega)$ are antisymmetric in $\omega$ and of order $\omega$ for small frequencies. Their coherence factors vanish at $\mathbf{k}=\mathbf{k}_H$ but have a finite gradient there. Hence, following the arguments given before for $\widetilde{\chi}^{33}_{0i}(\mathbf{Q},\omega)$, we deduce % \begin{equation} \widetilde{\chi}^{3a}_{0i}(\mathbf{Q},\omega)\propto\omega^2 \end{equation} % for $a\in\{0,1,2\}$ and small $\omega$. The imaginary part of the second term in Eq.~\eqref{eq_low_spiral: chit33inv} is therefore of order $\omega^3$. We thus have shown that the Landau damping of the out-of-plane modes at $\mathbf{q}=\mathbf{Q}$ obeys % \begin{equation}\label{eq_low_spiral: damping33 q=Q} {\rm Im}\frac{m^2}{\widetilde{\chi}^{33}(\mathbf{Q},\omega)}\propto\omega^3. \end{equation} % For $\mathbf{q}\neq\mathbf{Q}$, the hot spots are determined by $E^\ell_{\mathbf{k}_H}=E^{\ell'}_{\mathbf{k}_H+\mathbf{q}}=0$ and the coherence factors remain finite there, so that % \begin{equation} \widetilde{\chi}^{3a}_{0i}(\mathbf{q},\omega)\simeq -p^{3a}(\mathbf{q})\omega \end{equation} % for $a=0,1,2,3$ and small $\omega$. For $a=3$ both the coherence factor $A_{\ell\ell}^{33}(\mathbf{k},\mathbf{q})$ and its gradient vanish at the hot spots, implying $p^{33}(\mathbf{q})\propto(\mathbf{q}-\mathbf{Q})^2$ for $\mathbf{q}\sim\mathbf{Q}$. By contrast, for $a\neq3$ the gradient of the coherence factor remains finite, so that $p^{3a}(\mathbf{q})\propto|\mathbf{q}-\mathbf{Q}|$ for $\mathbf{q}\sim\mathbf{Q}$.
For $\omega\to 0$ the contribution coming from $\widetilde{\chi}^{33}_{0i}$ is leading, and we can generalize Eq.~\eqref{eq_low_spiral: damping33 q=Q} as % \begin{equation}\label{eq_low_spiral: damping outofplane2} {\rm Im}\frac{m^2}{\widetilde{\chi}^{33}(\mathbf{q},\omega)}=-\gamma(\mathbf{q})\omega+O(\omega^2), \end{equation} % with $\gamma(\mathbf{q})\propto(\mathbf{q}-\mathbf{Q})^2$ for $\mathbf{q}\to\mathbf{Q}$. The contributions to the damping coming from the off-diagonal susceptibilities are of order $\omega^2$ with a prefactor linear in $|\mathbf{q}-\mathbf{Q}|$. Considering the limit $\omega\to 0$, $\mathbf{q}\to\mathbf{Q}$ at fixed $\hat{\omega}=\omega/|\mathbf{q}-\mathbf{Q}|$, both diagonal and off-diagonal terms are of order $|\mathbf{q}-\mathbf{Q}|^3$. The above results depend crucially on the existence of hot spots. If Eq.~\eqref{eq_low_spiral: hot spots} has no solutions, the out-of-plane modes are not damped at all, at least within the RPA. % \section{Explicit evaluation of the Ward identities} \label{sec_low_spiral: explicit WIs} % In this section, we explicitly evaluate the Ward identities derived in Sec.~\ref{sec_low_spiral: Ward Identities} for a spiral magnet and show that the expressions for the spin stiffnesses and dynamical susceptibilities obtained from the response to an SU(2) gauge field coincide (within the RPA, which is a conserving approximation in the sense of Baym and Kadanoff~\cite{Baym1961,KadanoffBaym}) with those derived within the low-energy expansion of the susceptibilities, carried out in Sec.~\ref{sec_low_spiral: properties of Goldstones}. % \subsection{Gauge kernels} % We begin by setting up the formalism to compute the response to an SU(2) gauge field $A_\mu$ within the Hubbard model. We couple our system to $A_\mu$ via a Peierls substitution in the quadratic part of the Hubbard action~\eqref{eq_low_spiral: Hubbard action}: \begin{equation}\label{eq_low_spiral: S0 coupled to SU(2) gauge field} \begin{split} \mathcal{S}_0[\psi,\overline{\psi},A_\mu]=\int_0^\beta \!d\tau \sum_{jj'}&\overline{\psi}_j \Big[ (\partial_\tau - \mu - A_{0,j})\delta_{jj'} +t_{jj'}e^{-\bs{r}_{jj'}\cdot(\nabla-i \bs{A}_j)} \Big]\psi_j, \end{split} \end{equation} where $e^{-\bs{r}_{jj'}\cdot\nabla}$ is the translation operator from site $j$ to site $j'$, with $\bs{r}_{jj'}=\bs{r}_j-\bs{r}_{j'}$. Notice that under the transformation $\psi_j\to R_j\psi_j$, with $R_j\in\mathrm{SU(2)}$, the interacting part of the action~\eqref{eq_low_spiral: Hubbard action} is left unchanged, while the gauge field transforms according to \eqref{eq_low_spiral: SU(2) Amu transformation}. Since the gauge kernels correspond to correlators of two gauge fields, we expand \eqref{eq_low_spiral: S0 coupled to SU(2) gauge field} to second order in $A_\mu$. After a Fourier transformation one obtains \begin{equation} \begin{split} \mathcal{S}_0[\psi,\overline{\psi},A_\mu]= &-\int_k \overline{\psi}_k \left[i\nu_n+\mu-\varepsilon_\mathbf{k}\right]\psi_k\\ &+\frac{1}{2}\int_{k,q}A_\mu^a(q) \gamma^\mu_{\mathbf{k}}\, \overline{\psi}_{k+q}\sigma^a\psi_k -\frac{1}{8}\int_{k,q,q'}A^a_\alpha(q-q')A^a_\beta(q') \gamma^{\alpha\beta}_\mathbf{k} \overline{\psi}_{k+q}\psi_k, \end{split} \label{eq_low_spiral: S0 coupled with Amu SU(2) momentum space} \end{equation} where the first-order coupling is given by $\gamma^\mu_\mathbf{k}=(1,\nabla_\mathbf{k}\varepsilon_\mathbf{k})$, and the second-order one is $\gamma^{\alpha\beta}_\mathbf{k}=\partial^2_{k_\alpha k_\beta}\varepsilon_{\mathbf{k}}$.
In the equation above, the symbol $\int_k=\int_\mathbf{k} T\sum_{\nu_n}$ ($\int_q=\int_\mathbf{q} T\sum_{\Omega_m}$) denotes an integral over spatial momenta and a sum over fermionic (bosonic) Matsubara frequencies. Analyzing the coupling of the temporal component of the gauge field to the fermions in \eqref{eq_low_spiral: S0 coupled to SU(2) gauge field} and \eqref{eq_low_spiral: S0 coupled with Amu SU(2) momentum space}, we notice that the temporal components of the gauge kernel (see definition~\eqref{eq_low_spiral: gauge kernel definition}) are nothing but the susceptibilities in the original (unrotated) spin basis: \begin{equation} K_{00}^{ab}(\mathbf{q},\mathbf{q}',\omega)=\chi^{ab}(\mathbf{q},\mathbf{q}',\omega), \end{equation} where $\omega$ is a real frequency. \begin{figure*} \centering \includegraphics[width=1.\textwidth]{diagrams_J2.png} \caption{Diagrams contributing to the spin stiffnesses. The wavy line represents the external SU(2) gauge field, the solid lines the electronic Green's functions, the black triangles the paramagnetic vertex $\gamma^\mu_\mathbf{k} \sigma^a$, the black circle the diamagnetic one $\gamma^{\alpha\beta}_\mathbf{k} \sigma^0$, and the dashed line the effective interaction $\Gamma(\mathbf{q},\mathbf{q}',\omega)$.} \label{fig: fig1} \end{figure*} The spatial components of the gauge kernel can be expressed in the general form (see Fig.~\ref{fig: fig1}) \begin{equation} \begin{split} K_{\alpha\beta}^{ab}(\mathbf{q},\mathbf{q}',\omega)= &K_{\mathrm{para},\alpha\beta}^{ab}(\mathbf{q},\mathbf{q}',\omega) +\delta_{ab}\,K_{\alpha\beta}^{\mathrm{dia}}\\ &+\int_{\mathbf{q}^{\prime\prime},\mathbf{q}^{\prime\prime\prime}}\sum_{c,d}K_{\mathrm{para},\alpha 0}^{ac}(\mathbf{q},\mathbf{q}^{\prime\prime},\omega) \Gamma^{cd}(\mathbf{q}^{\prime\prime},\mathbf{q}^{\prime\prime\prime},\omega)K_{\mathrm{para},0\beta}^{db}(\mathbf{q}^{\prime\prime\prime},\mathbf{q}^{\prime},\omega), \end{split} \label{eq_low_spiral: gauge kernel formula} \end{equation} where $\Gamma(\mathbf{q}',\mathbf{q}^{\prime\prime},\omega)$ is the effective interaction~\eqref{eq_low_spiral: Gamma(q) definition} expressed in the unrotated basis. Within the RPA, the paramagnetic terms are given by \begin{equation} \begin{split} &K_{\mathrm{para},\mu\nu}^{ab}(\mathbf{q},\mathbf{q}',\omega)= -\frac{1}{4}\int_{\mathbf{k},\mathbf{k}'}T\sum_{\nu_n} \gamma^\mu_\mathbf{k}\gamma^\nu_{\mathbf{k}'+\mathbf{q}'}\tr\Big[\sigma^a\bs{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)\sigma^b \bs{G}_{\mathbf{k}'+\mathbf{q}',\mathbf{k}+\mathbf{q}}(i\nu_n+i\Omega_m)\Big], \end{split} \label{eq_low_spiral: paramagnetic contr Kernel} \end{equation} with the replacement $i\Omega_m\to\omega+i0^+$.
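% Within our running Python illustration (nearest-neighbor square lattice; a sketch of ours, not the general formalism), the vertices entering these kernels take a simple explicit form, and the diamagnetic term -- given explicitly below -- reduces to a Brillouin-zone average once the Matsubara sums over the quasiparticle poles are performed by partial fractions, a standard step that we carry out by hand here; the weights $(1\pm h_\mathbf{k}/e_\mathbf{k})/2$ follow from the coefficients in Eq.~\eqref{eq_low_spiral: ukl def}:
\begin{verbatim}
def gamma1(kv):          # paramagnetic vertex: grad_k eps_k
    return 2*t*np.sin(kv)            # (2t sin kx, 2t sin ky)

def gamma2(kv, a, b):    # diamagnetic vertex: d^2 eps / dk_a dk_b
    if a != b:                       # mixed derivatives vanish here
        return np.zeros(kv.shape[:-1])
    return 2*t*np.cos(kv[..., a])

# Diamagnetic kernel K^dia_xx: the T-sums of G_k and Gbar_k reduce to
# quasiparticle occupations weighted by (1 +/- h/e)/2.
wp, wm = 0.5*(1 + h/e), 0.5*(1 - h/e)
K_dia_xx = -0.25*np.mean(
    gamma2(k, 0, 0)*(wp*f(g + e) + wm*f(g - e))
    + gamma2(k + Q, 0, 0)*(wm*f(g + e) + wp*f(g - e)))
\end{verbatim}
%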
The Green's function in the unrotated basis takes the form \begin{equation}\label{eq_low_spiral: G unrotated} \bs{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)=\left( \begin{array}{cc} G_\mathbf{k}(i\nu_n)\delta_{\mathbf{k},\mathbf{k}'} & F_{\mathbf{k}}(i\nu_n)\delta_{\mathbf{k},\mathbf{k}'-\mathbf{Q}} \\ F_{\mathbf{k}-\mathbf{Q}}(i\nu_n)\delta_{\mathbf{k},\mathbf{k}'+\mathbf{Q}} & \overline{G}_{\mathbf{k}-\mathbf{Q}}(i\nu_n)\delta_{\mathbf{k},\mathbf{k}'} \end{array} \right), \end{equation} where $\delta_{\mathbf{k},\mathbf{k}'}$ is a shorthand for $(2\pi)^d\delta^d(\mathbf{k}-\mathbf{k}')$, and \begin{subequations} \begin{align} &G_\mathbf{k}(i\nu_n)=\frac{i\nu_n-\xi_{\mathbf{k}+\mathbf{Q}}}{(i\nu_n-\xi_{\mathbf{k}})(i\nu_n-\xi_{\mathbf{k}+\mathbf{Q}})-\Delta^2},\label{eq_low_spiral: Gk up unrotated}\\ &\overline{G}_\mathbf{k}(i\nu_n)=\frac{i\nu_n-\xi_{\mathbf{k}}}{(i\nu_n-\xi_{\mathbf{k}})(i\nu_n-\xi_{\mathbf{k}+\mathbf{Q}})-\Delta^2},\label{eq_low_spiral: Gk down unrotated}\\ &F_\mathbf{k}(i\nu_n)=\frac{\Delta}{(i\nu_n-\xi_{\mathbf{k}})(i\nu_n-\xi_{\mathbf{k}+\mathbf{Q}})-\Delta^2}. \end{align} \end{subequations} The diamagnetic term does not depend on $\mathbf{q}$, $\mathbf{q}'$ and $\omega$, and is proportional to the unit matrix in the gauge indices. It evaluates to \begin{equation} K_{\alpha\beta}^{\mathrm{dia}}=-\frac{1}{4}\int_{\mathbf{k},\mathbf{k}'} T\sum_{\nu_n} (\partial^2_{k_\alpha k_\beta}\varepsilon_\mathbf{k})\tr\left[\bs{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)\right]. \end{equation} We can now compute the spin stiffnesses and dynamical susceptibilities from the gauge kernels. \subsubsection{In-plane mode} The in-plane spin stiffness is defined as \begin{equation} J^\msml{\square}_{\alpha\beta}=-\lim_{\mathbf{q}\to\mathbf{0}}K_{\alpha\beta}^{33}(\mathbf{q},0), \end{equation} where we have defined as $K_{\mu\nu}(\mathbf{q},\omega)$ the prefactors of those components of the gauge kernels $K_{\mu\nu}(\mathbf{q},\mathbf{q}',\omega)$ which are proportional to $\delta_{\mathbf{q},\mathbf{q}'}$. In addition to the bare term \begin{equation} J_{\alpha\beta}^{0,\msml{\square}}=-\lim_{\mathbf{q}\to\mathbf{0}}K_{\alpha\beta}^{\mathrm{para},33}(\mathbf{q},0)-K_{\alpha\beta}^{\mathrm{dia}}, \label{eq_low_spiral: J inplane bare} \end{equation} we find nonvanishing paramagnetic contributions that mix spatial and temporal components. They involve \begin{subequations}\label{eq_low_spiral: off_diagonal kalphat_3a} \begin{align} &\lim_{\mathbf{q}\to\mathbf{0}}K_{0\alpha}^{30}(\mathbf{q},\mathbf{q}',0)=\kappa_\alpha^{30}(\mathbf{0})\delta_{\mathbf{q}',\mathbf{0}},\\ &\lim_{\mathbf{q}\to\mathbf{0}}K_{0\alpha}^{31}(\mathbf{q},\mathbf{q}',0)=\kappa_\alpha^{31}(\mathbf{0})\frac{\delta_{\mathbf{q}',\mathbf{Q}}+\delta_{\mathbf{q}',-\mathbf{Q}}}{2}, \label{eq_low_spiral: k31 def}\\ &\lim_{\mathbf{q}\to\mathbf{0}}K_{0\alpha}^{32}(\mathbf{q},\mathbf{q}',0)=\kappa_\alpha^{32}(\mathbf{0})\frac{\delta_{\mathbf{q}',\mathbf{Q}}-\delta_{\mathbf{q}',-\mathbf{Q}}}{2i}\label{eq_low_spiral: k32 def}, \end{align} \end{subequations} % where $\kappa_\alpha^{32}(\mathbf{0})=\kappa_\alpha^{31}(\mathbf{0})$. 
Noticing that for $a=0,1,2$, we have $\lim_{\mathbf{q}\to\mathbf{0}}K^{3a}_{\alpha 0}(\mathbf{q},0)=\lim_{\mathbf{q}\to\mathbf{0}}K^{a3}_{0\alpha}(\mathbf{q},0)$, and inserting this result into \eqref{eq_low_spiral: gauge kernel formula}, we obtain % \begin{equation} J_{\alpha\beta}^{\msml{\square}}=J_{\alpha\beta}^{0,\msml{\square}}-\sum_{a,b\in\{0,1\}}\kappa_\alpha^{3a}(\mathbf{0})\widetilde{\Gamma}^{ab}(\mathbf{q}\to\mathbf{0},0)\kappa_\beta^{3b}(\mathbf{0}),\label{eq_low_spiral: J3 gauge} \end{equation} % where $\widetilde{\Gamma}(\mathbf{q}\to\mathbf{0},0)$ is the effective interaction in the rotated spin basis, defined in Eq.~\eqref{eq_low_spiral: Gamma(q) definition}. Notice that the delta functions in Eq.~\eqref{eq_low_spiral: off_diagonal kalphat_3a} convert the unrotated $\Gamma$ to $\widetilde{\Gamma}$ and, together with the equality $\kappa_\alpha^{32}(\mathbf{0})=\kappa_\alpha^{31}(\mathbf{0})$, they remove the terms where $a$ or $b$ equals 2 from the sum. The dynamical susceptibility is defined as % \begin{equation} \chi_\mathrm{dyn}^\msml{\square}=\lim_{\omega\to0}K_{00}^{33}(\mathbf{0},\omega)=\lim_{\omega\to0}\chi^{33}(\mathbf{0},\omega). \end{equation} % From Eq.~\eqref{eq_low_spiral: spiral rotation matrix with charge} we deduce that % \begin{equation} \chi^{33}(\mathbf{q},\omega)=\widetilde{\chi}^{33}(\mathbf{q},\omega). \end{equation} % Remarking that for $\omega=0$ all the off-diagonal elements of the bare susceptibilities with one (and only one) of the two indices equal to 3 vanish, we obtain the RPA expression for $\chi_\mathrm{dyn}^\msml{\square}$ % \begin{equation} \begin{split} \chi_\mathrm{dyn}^\msml{\square} = \lim_{\omega\to 0} \frac{\widetilde{\chi}_0^{33}(\mathbf{0},\omega)}{1-2U\widetilde{\chi}_0^{33}(\mathbf{0},\omega)}. \end{split} \label{eq_low_spiral: chi3 gauge} \end{equation} % % \subsubsection{Out-of-plane modes} % To compute the out-of-plane stiffness, that is, % \begin{equation} J_{\alpha\beta}^\perp=-\lim_{\mathbf{q}\to\mathbf{0}}K_{\alpha\beta}^{22}(\mathbf{q},0), \end{equation} % we find that all the paramagnetic contributions to the gauge kernel that mix temporal and spatial components vanish in the $\omega\to0$ and $\mathbf{q}=\mathbf{q}'\to\mathbf{0}$ limits.\footnote{The terms $K^{13}_{\alpha 0}(\mathbf{0},\pm\mathbf{Q},0)$ and $K^{23}_{\alpha 0}(\mathbf{0},\pm\mathbf{Q},0)$ are zero only if $\mathbf{Q}$ is chosen such that it minimizes the free energy (see Eq.~\eqref{eq_low_spiral: MF free energy}). In fact, if this is not the case, they can be shown to be proportional to $\partial_{q_\alpha}\widetilde{\chi}_0^{33}(\pm Q)$, which is finite for a generic $\mathbf{Q}$.} Moreover, the $\mathbf{q}\to\mathbf{0}$ limit of the momentum-diagonal paramagnetic contribution can be written as % \begin{equation} \begin{split} \lim_{\mathbf{q}\to\mathbf{0}}K_{\mathrm{para},\alpha\beta}^{22}(\mathbf{q},0)= &-\frac{1}{4}\int_{\mathbf{k},\mathbf{k}'} T\sum_{\substack{\nu_n\\\zeta=\pm}} \gamma^\alpha_\mathbf{k}\gamma^\beta_{\mathbf{k}'}\tr\left[\sigma^\zeta\bs{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)\sigma^{-\zeta}\bs{G}_{\mathbf{k}',\mathbf{k}}(i\nu_n)\right]\\ =&-\frac{1}{2}\int_{\mathbf{k}}T\sum_{\nu_n}\gamma^\alpha_\mathbf{k}\gamma^\beta_{\mathbf{k}}\,G_\mathbf{k}(i\nu_n)\overline{G}_{\mathbf{k}-\mathbf{Q}}(i\nu_n), \end{split} \end{equation} % where we have defined $\sigma^\pm=(\sigma^1\pm i\sigma^2)/2$.
The out-of-plane spin stiffness is thus given by % \begin{equation} \begin{split} J^\perp_{\alpha\beta} = &-\frac{1}{2}\int_{\mathbf{k}}T\sum_{\nu_n}\gamma^\alpha_\mathbf{k}\gamma^\beta_{\mathbf{k}}\,G_\mathbf{k}(i\nu_n)\overline{G}_{\mathbf{k}-\mathbf{Q}}(i\nu_n) -\frac{1}{4}\int_{\mathbf{k},\mathbf{k}'} T\sum_{\nu_n} (\partial^2_{k_\alpha k_\beta}\varepsilon_\mathbf{k})\tr\left[\bs{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)\right] \, . \end{split} \end{equation} % Finally, we evaluate the dynamical susceptibility of the out-of-plane modes. This is defined as % \begin{equation} \chi_\mathrm{dyn}^\perp=\lim_{\omega\to 0}K_{00}^{22}(\mathbf{0},\omega)=\lim_{\omega\to 0}\chi^{22}(\mathbf{0},\omega). \end{equation} % Applying the transformation~\eqref{eq_low_spiral: rotated chis spiral RPA}, we can express the momentum-diagonal component of $\chi^{22}(\mathbf{q},\mathbf{q}',\omega)$ in terms of the susceptibilities in the rotated basis as % \begin{equation} \begin{split} \chi^{22}(\mathbf{q},\omega)=\frac{1}{4}\sum_{\zeta=\pm}\left[ \widetilde{\chi}^{11}(\mathbf{q}+\zeta\mathbf{Q},\omega) +\widetilde{\chi}^{22}(\mathbf{q}+\zeta\mathbf{Q},\omega) +2i\zeta\widetilde{\chi}^{12}(\mathbf{q}+\zeta\mathbf{Q},\omega) \right], \label{eq_low_spiral: uniform chi22 spiral} \end{split} \end{equation} % where we have used (see Table~\ref{tab_low_spiral: symmetries}) $\widetilde{\chi}^{12}(q)=-\widetilde{\chi}^{21}(q)$. Sending $\mathbf{q}$ to $\mathbf{0}$ in \eqref{eq_low_spiral: uniform chi22 spiral}, and using the symmetry properties of the susceptibilities for $\mathbf{q}\to-\mathbf{q}$ (see Table~\ref{tab_low_spiral: symmetries}), we obtain % \begin{equation} \begin{split} \chi^{22}(\mathbf{0},\omega)&=\frac{1}{2}\left[\widetilde{\chi}^{11}(\mathbf{Q},\omega)+\widetilde{\chi}^{22}(\mathbf{Q},\omega)+2i\widetilde{\chi}^{12}(\mathbf{Q},\omega)\right] =2\widetilde{\chi}^{-+}(\mathbf{Q},\omega), \end{split} \end{equation} % with $\widetilde{\chi}^{-+}(q)=\langle \widetilde{S}^-(-q)\widetilde{S}^+(q)\rangle$, and $\widetilde{S}^\pm(q)=(\widetilde{S}^1(q)\pm i\widetilde{S}^2(q))/2$. It is convenient to express $\widetilde{\chi}^{-+}(\mathbf{Q},\omega)$ as % \begin{equation} \begin{split} \widetilde{\chi}^{-+}(\mathbf{Q},\omega)&=\widetilde{\chi}^{-+}_0(\mathbf{Q},\omega) +\sum_{a,b\in\{0,1,2,3\}}\widetilde{\chi}^{-a}_0(\mathbf{Q},\omega)\widetilde{\Gamma}^{ab}(\mathbf{Q},\omega)\widetilde{\chi}^{b+}_0(\mathbf{Q},\omega), \end{split} \label{eq_low_spiral: chit pm RPA} \end{equation} % where we have defined % \begin{subequations} \begin{align} &\widetilde{\chi}_0^{-a}(q)=\frac{1}{2}\left[\widetilde{\chi}_0^{1a}(q)-i\widetilde{\chi}_0^{2a}(q)\right],\\ &\widetilde{\chi}_0^{a+}(q)=\frac{1}{2}\left[\widetilde{\chi}_0^{a1}(q)+i\widetilde{\chi}_0^{a2}(q)\right]. \end{align} \end{subequations} % In the limit $\omega\to 0$, $\widetilde{\chi}_0^{-3}(\mathbf{Q},\omega)$ and $\widetilde{\chi}_0^{3+}(\mathbf{Q},\omega)$ vanish as they are odd in frequency (see Table~\ref{tab_low_spiral: symmetries}). We can now cast the dynamical susceptibility in the form % \begin{equation} \begin{split} \chi_\mathrm{dyn}^\perp&=2\widetilde{\chi}^{-+}_0(Q) +2\sum_{a,b\in\{0,1,2\}}\widetilde{\chi}^{-a}_0(Q)\widetilde{\Gamma}^{ab}(Q)\widetilde{\chi}^{b+}_0(Q), \end{split} \label{eq_low_spiral: chi22 gauge} \end{equation} % or, equivalently, % \begin{equation} \begin{split} \chi_\mathrm{dyn}^\perp&=2\widetilde{\chi}^{+-}_0(-Q) +2\sum_{a,b\in\{0,1,2\}}\widetilde{\chi}^{+a}_0(-Q)\widetilde{\Gamma}^{ab}(-Q)\widetilde{\chi}^{b-}_0(-Q).
\end{split} \end{equation} % We remark that in the formulas above we have not specified in which order the limits $\mathbf{q}\to\pm\mathbf{Q}$ and $\omega\to0$ have to be taken, as they commute. % \subsection{Equivalence of RPA and gauge theory approaches} % In this Section, we finally prove, by a direct evaluation, that the expressions for the spin stiffnesses and dynamical susceptibilities obtained from a low-energy expansion of the susceptibilities (Sec.~\ref{sec_low_spiral: properties of Goldstones}) coincide with those computed from the SU(2) gauge response in the previous section. % \subsubsection{In-plane mode} % We start by computing the first term in Eq.~\eqref{eq_low_spiral: J3 RPA}. The second derivative of the 22-component of the bare susceptibility can be expressed as % \begin{equation} \begin{split} -2\Delta^2\partial^2_{q_\alpha q_\beta}\widetilde{\chi}_0^{22}(0)= -\Delta^2\int_\mathbf{k} \gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta\left[\frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{4e_\mathbf{k}^3} +\frac{f'(E^+_\mathbf{k})+f'(E^-_\mathbf{k})}{4e_\mathbf{k}^2}\right], \end{split} \label{eq_low_spiral: J03 RPA} \end{equation} % where $f'(x)=df/dx$ is the derivative of the Fermi function. On the other hand, the bare contribution to $J^\msml{\square}_{\alpha\beta}$ (Eq.~\eqref{eq_low_spiral: J inplane bare}) reads % \begin{equation} \begin{split} J^{0,\msml{\square}}_{\alpha\beta}=&\frac{1}{4}\int_\mathbf{k} T\sum_{\nu_n}\left[ G_\mathbf{k}(i\nu_n)^2\gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}}^\beta+\overline{G}_\mathbf{k}(i\nu_n)^2\gamma_{\mathbf{k}+\mathbf{Q}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta -2F_\mathbf{k}(i\nu_n)^2\gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta \right] \\ &+\frac{1}{4}\int_\mathbf{k} T\sum_{\nu_n} \Big[ G_\mathbf{k}(i\nu_n)\gamma^{\alpha\beta}_\mathbf{k} +\overline{G}_\mathbf{k}(i\nu_n)\gamma^{\alpha\beta}_{\mathbf{k}+\mathbf{Q}} \Big]. \end{split} \end{equation} % The second (diamagnetic) term can be integrated by parts, giving % \begin{equation} \begin{split} -\frac{1}{4}\int_\mathbf{k} T\sum_{\nu_n} \left[ G^2_\mathbf{k}(i\nu_n)\gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}}^\beta+\overline{G}^2_\mathbf{k}(i\nu_n)\gamma_{\mathbf{k}+\mathbf{Q}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta +2F^2_\mathbf{k}(i\nu_n)\gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta \right], \end{split} \end{equation} % where we have used the properties % \begin{subequations} \begin{align} &\partial_{k_\alpha}G_\mathbf{k}(i\nu_n)=G^2_\mathbf{k}(i\nu_n)\gamma^\alpha_\mathbf{k}+F^2_\mathbf{k}(i\nu_n)\gamma^\alpha_{\mathbf{k}+\mathbf{Q}},\\ &\partial_{k_\alpha}\overline{G}_\mathbf{k}(i\nu_n)=\overline{G}^2_\mathbf{k}(i\nu_n)\gamma^\alpha_{\mathbf{k}+\mathbf{Q}}+F^2_\mathbf{k}(i\nu_n)\gamma^\alpha_{\mathbf{k}}. \end{align} \label{eq_low_spiral: derivatives of G} \end{subequations} % Summing up both terms, we obtain % \begin{equation} \begin{split} J^{0,\msml{\square}}_{\alpha\beta}=& -\int_\mathbf{k} T\sum_{\nu_n}\gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta F^2_\mathbf{k}(i\nu_n).
\end{split} \label{eq_low_spiral: J03 explicit} \end{equation} % Performing the Matsubara sum, we arrive at % \begin{equation}\label{eq_low_spiral: J0inplane} \begin{split} J^{0,\msml{\square}}_{\alpha\beta}= -\Delta^2\int_\mathbf{k} \gamma_{\mathbf{k}}^\alpha\gamma_{\mathbf{k}+\mathbf{Q}}^\beta\left[\frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{4e_\mathbf{k}^3} +\frac{f'(E^+_\mathbf{k})+f'(E^-_\mathbf{k})}{4e_\mathbf{k}^2}\right], \end{split} \end{equation} % which is the same result as in \eqref{eq_low_spiral: J03 RPA}. Furthermore, one can show that % \begin{subequations} \begin{align} &2i\Delta\partial_{q_\alpha}\widetilde{\chi}_0^{20}(0)=-2i\Delta\partial_{q_\alpha}\widetilde{\chi}_0^{02}(0)=\kappa^{30}_{\alpha}(\mathbf{0}),\\ &2i\Delta\partial_{q_\alpha}\widetilde{\chi}_0^{21}(0)=-2i\Delta\partial_{q_\alpha}\widetilde{\chi}_0^{12}(0)=\kappa^{31}_{\alpha}(\mathbf{0}). \end{align} \label{eq_low_spiral: RPA vs gauge inplane J, offdiagonal} \end{subequations} % Inserting results~\eqref{eq_low_spiral: J03 RPA}, \eqref{eq_low_spiral: J03 explicit}, and \eqref{eq_low_spiral: RPA vs gauge inplane J, offdiagonal} into \eqref{eq_low_spiral: J3 RPA} and \eqref{eq_low_spiral: J3 gauge}, we prove that these two expressions give the same result for the in-plane stiffness. Explicit expressions for $\kappa^{30}_{\alpha}(\mathbf{0})$ and $\kappa^{31}_{\alpha}(\mathbf{0})$ are given in Appendix~\ref{app: low en spiral}. If we now consider the dynamical susceptibility, it is straightforward to see that % \begin{equation} \begin{split} 2\Delta^2\partial^2_\omega\widetilde{\chi}_0^{22}(0)=&2i\Delta\partial_\omega\widetilde{\chi}^{23}_0(0)=\lim_{\omega\to 0}\widetilde{\chi}_0^{33}(\mathbf{0},\omega) =\Delta^2\int_\mathbf{k} \frac{f(E^-_\mathbf{k})-f(E^+_\mathbf{k})}{4e^3_\mathbf{k}}, \end{split} \label{eq_low_spiral: RPA=gauge for chi3} \end{equation} % which, if inserted into Eqs.~\eqref{eq_low_spiral: chi3 RPA} and \eqref{eq_low_spiral: chi3 gauge}, proves that the calculations of $\chi_\mathrm{dyn}^\msml{\square}$ via gauge kernels and via the low-energy expansion of the susceptibilities provide the same result. % \subsubsection{Out-of-plane modes} % With the help of some lengthy algebra, one can compute the second momentum derivative of the bare susceptibility $\widetilde{\chi}_0^{33}(q)$, obtaining % \begin{equation} \begin{split} -\Delta^2\partial^2_{q_\alpha q_\beta}\widetilde{\chi}_0^{33}(Q) =&\frac{1}{8}\int_\mathbf{k}\sum_{\ell,\ell'=\pm}\left(1-\ell\frac{h_\mathbf{k}}{e_\mathbf{k}}\right)\left(1+\ell\frac{h_{\mathbf{k}+\mathbf{Q}}}{e_{\mathbf{k}+\mathbf{Q}}}\right)\gamma^\alpha_{\mathbf{k}+\mathbf{Q}}\gamma^\beta_{\mathbf{k}+\mathbf{Q}} \frac{f(E^\ell_\mathbf{k})-f(E^{\ell'}_{\mathbf{k}+\mathbf{Q}})}{E^\ell_\mathbf{k}-E^{\ell'}_{\mathbf{k}+\mathbf{Q}}}\\ &-\frac{1}{8}\int_\mathbf{k} \sum_{\ell=\pm}\left[\left(1-\ell\frac{h_\mathbf{k}}{e_\mathbf{k}}\right)^2\gamma^{\alpha}_{\mathbf{k}+\mathbf{Q}}+\frac{\Delta^2}{e_\mathbf{k}^2}\gamma^{\alpha}_{\mathbf{k}}\right]\gamma^\beta_{\mathbf{k}+\mathbf{Q}} f'(E^\ell_\mathbf{k})\\ &-\frac{1}{8}\int_\mathbf{k} \sum_{\ell=\pm}\left[\frac{\Delta^2}{e_\mathbf{k}^2}(\gamma^{\alpha}_{\mathbf{k}+\mathbf{Q}}-\gamma^{\alpha}_{\mathbf{k}})\right]\gamma^\beta_{\mathbf{k}+\mathbf{Q}} \frac{f(E^\ell_\mathbf{k})-f(E^{-\ell}_{\mathbf{k}})}{E^\ell_\mathbf{k}-E^{-\ell}_{\mathbf{k}}}. \end{split} \end{equation} % Similarly to what we have done for the in-plane mode, we integrate by parts the diamagnetic contribution to the gauge kernel. 
Its sum with the paramagnetic one gives % \begin{equation} \begin{split} &-\lim_{\mathbf{q}\to\mathbf{0}}K^{22}_{\alpha\beta}(\mathbf{q},\mathbf{q},0)\\ =&\frac{1}{2}\int_\mathbf{k} T\sum_{\nu_n}\Big\{ \gamma^\alpha_{\mathbf{k}+\mathbf{Q}}\gamma^\beta_{\mathbf{k}+\mathbf{Q}}\overline{G}_\mathbf{k}(i\nu_n)\left[G_{\mathbf{k}+\mathbf{Q}}(i\nu_n)-\overline{G}_\mathbf{k}(i\nu_n)\right] -\gamma^\alpha_{\mathbf{k}}\gamma^\beta_{\mathbf{k}+\mathbf{Q}}F^2_\mathbf{k}(i\nu_n) \Big\}. \end{split} \end{equation} % Performing the Matsubara sums, one can prove the equivalence of the RPA and gauge theory approaches for the calculation of $J_{\alpha\beta}^{\perp}$. Similarly, we obtain for the second frequency derivative of the bubble $\widetilde{\chi}_0^{33}(q)$ % \begin{equation} \begin{split} \Delta^2\partial^2_\omega\widetilde{\chi}_0^{33}(Q)=-\frac{1}{8}\int_\mathbf{k}\sum_{\ell,\ell'=\pm} \left(1-\ell\frac{h_\mathbf{k}}{e_\mathbf{k}}\right)\left(1+\ell\frac{h_{\mathbf{k}+\mathbf{Q}}}{e_{\mathbf{k}+\mathbf{Q}}}\right) \frac{f(E^\ell_\mathbf{k})-f(E^{\ell'}_{\mathbf{k}+\mathbf{Q}})}{E^\ell_\mathbf{k}-E^{\ell'}_{\mathbf{k}+\mathbf{Q}}}=2\widetilde{\chi}^{-+}_0(Q). \end{split} \label{eq_low_spiral: proof Z2, diagonal component} \end{equation} % Furthermore, one can prove that % \begin{equation} \begin{split} \Delta\partial_\omega \widetilde{\chi}_0^{3a}(Q)=\Delta[\partial_\omega \widetilde{\chi}_0^{a3}(Q)]^* =\widetilde{\chi}_0^{-a}(Q)=[\widetilde{\chi}_0^{a+}(Q)]^*, \end{split} \label{eq_low_spiral: proof Z2, off diagonal components} \end{equation} % for $a=0,1,2$. Inserting results \eqref{eq_low_spiral: proof Z2, diagonal component} and \eqref{eq_low_spiral: proof Z2, off diagonal components} into the expression for $\chi_\mathrm{dyn}^\perp$ derived in Sec.~\ref{sec_low_spiral: properties of Goldstones} and into Eq.~\eqref{eq_low_spiral: chi22 gauge}, one sees that the RPA and gauge theory approaches are equivalent for the calculation of $\chi_\mathrm{dyn}^\perp$. In Appendix~\ref{app: low en spiral} we provide explicit expressions for the off-diagonal bare susceptibilities $\widetilde{\chi}^{-a}_0(Q)$. % \subsubsection{Remarks on more general models} % We remark that in the more general case of an interaction of the type % \begin{equation} \mathcal{S}_\mathrm{int}=\int_{k,k',q}\!U_{k,k'}(q)[\overline{\psi}_{k+q}\vec{\sigma}\psi_k]\cdot[\overline{\psi}_{k'-q}\vec{\sigma}\psi_{k'}], \end{equation} % producing, in general, a $k$-dependent gap, the identities we have proven above do not hold anymore within the RPA, as additional terms in the derivatives of the inverse susceptibilities emerge, containing expressions involving first and second derivatives of the gap with respect to the spatial momentum and/or frequency. In fact, in the case of nonlocal interactions, gauge invariance requires additional couplings to the gauge field in $\mathcal{S}_\mathrm{int}$, complicating our expressions for the gauge kernels. Similarly, even for the action~\eqref{eq_low_spiral: Hubbard action}, approximations beyond the RPA produce in general a $k$-dependent $\Delta$, and vertex corrections in the kernels are required to obtain the same result as the one obtained by expanding the susceptibilities. % \section{N\'eel limit} \label{sec_low_spiral: Neel limit} % In this Section, we analyze the N\'eel limit, that is, $\mathbf{Q}=(\pi/a_0,\dots,\pi/a_0)$.
In this case, it is easy to see that, within the RPA, the bare susceptibilities in the rotated basis obey the identities
%
\begin{subequations}
\begin{align}
&\widetilde{\chi}_0^{22}(\mathbf{q},\omega)=\widetilde{\chi}_0^{33}(\mathbf{q}+\mathbf{Q},\omega),\\
&\widetilde{\chi}_0^{20}(\mathbf{q},\omega)=\widetilde{\chi}_0^{21}(\mathbf{q},\omega)=0,\\
&\widetilde{\chi}_0^{30}(\mathbf{q},\omega)=\widetilde{\chi}_0^{31}(\mathbf{q},\omega)=0.
\end{align}
\end{subequations}
%
Furthermore, we obtain for the mixed gauge kernels (see Appendix~\ref{app: low en spiral})
%
\begin{equation}
K^{ab}_{\mathrm{para},\alpha 0}(\mathbf{q},\mathbf{q}',\omega)=K^{ab}_{\mathrm{para},0\alpha}(\mathbf{q},\mathbf{q}',\omega)=0.
\end{equation}
%
We also notice that $K^{11}_{\alpha\beta}(\mathbf{q},\mathbf{q}',0)$ and $K^{22}_{\alpha\beta}(\mathbf{q},\mathbf{q}',0)$ have (different) momentum off-diagonal contributions for which $\mathbf{q}'=\mathbf{q}\pm 2\mathbf{Q}$. If $\mathbf{Q}=(\pi/a_0,\dots,\pi/a_0)$, these terms become diagonal in momentum, as $2\mathbf{Q}\sim\mathbf{0}$, such that
%
\begin{subequations}
\begin{align}
&\lim_{\mathbf{q}\to\mathbf{0}}K^{11}_{\alpha\beta}(\mathbf{q},0)=0,\\
&K^{22}_{\alpha\beta}(\mathbf{q},0)=K^{33}_{\alpha\beta}(\mathbf{q},0).
\end{align}
\end{subequations}
%
From the above relations, we can see that $J_{\alpha\beta}^\perp=J^\msml{\square}_{\alpha\beta}\equiv J_{\alpha\beta}$ and $\chi_\mathrm{dyn}^\perp=\chi_\mathrm{dyn}^\msml{\square}\equiv \chi_\mathrm{dyn}$, as expected for the N\'eel state. From these considerations, we obtain for the spin stiffness
%
\begin{equation}
\begin{split}
J_{\alpha\beta}=&-\lim_{\mathbf{q}\to\mathbf{0}}K^{22}_{\alpha\beta}(\mathbf{q},0)=-\lim_{\mathbf{q}\to\mathbf{0}}K^{33}_{\alpha\beta}(\mathbf{q},0) =-2\Delta^2\partial^2_{q_\alpha q_\beta}\widetilde{\chi}_0^{22}(0) =-2\Delta^2\partial^2_{q_\alpha q_\beta}\widetilde{\chi}_0^{33}(Q),
\end{split}
\end{equation}
%
which implies that $J_{\alpha\beta}$ is given by Eq.~\eqref{eq_low_spiral: J0inplane}. If the underlying lattice is $C_4$-symmetric, the spin stiffness is isotropic in the N\'eel state, that is, $J_{\alpha\beta}=J\delta_{\alpha\beta}$. Similarly, for the dynamical susceptibility, we have
%
\begin{equation}
\begin{split}
\chi_\mathrm{dyn}^\perp=&\lim_{\omega\to 0}\chi^{22}(\mathbf{0},\omega)=\lim_{\omega\to 0}\chi^{33}(\mathbf{0},\omega) =2\Delta^2\partial^2_\omega \widetilde{\chi}_0^{22}(0)=2\Delta^2\partial^2_\omega \widetilde{\chi}_0^{33}(Q),
\end{split}
\end{equation}
%
which, combined with \eqref{eq_low_spiral: RPA=gauge for chi3}, implies
%
\begin{equation}
\chi_\mathrm{dyn}^\perp=\lim_{\omega\to 0}\frac{\widetilde{\chi}_0^{33}(\mathbf{0},\omega)}{1-2U\widetilde{\chi}_0^{33}(\mathbf{0},\omega)},
\end{equation}
%
with $\widetilde{\chi}_0^{33}(\mathbf{0},\omega\to 0)$ given by Eq.~\eqref{eq_low_spiral: RPA=gauge for chi3}. We notice that the dynamical susceptibility is obtained from the full susceptibility by letting $\mathbf{q}\to\mathbf{0}$ \emph{before} $\omega\to 0$. This order of the limits removes the intraband terms (that is, the $\ell=\ell'$ terms in Eq.~\eqref{eq_low_spiral: chi0 def}), which would instead yield a finite contribution to the \emph{uniform transverse susceptibility} $\chi^\perp\equiv\lim_{\mathbf{q}\to\mathbf{0}}\chi^{22}(\mathbf{q},0)$.
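%
As a concrete illustration of these formulas, the following minimal Python sketch (ours, purely illustrative) evaluates the N\'eel-state stiffness of Eq.~\eqref{eq_low_spiral: J0inplane} and the dynamical susceptibility of Eq.~\eqref{eq_low_spiral: RPA=gauge for chi3} on a momentum grid, assuming a nearest-neighbor dispersion $\epsilon_\mathbf{k}=-2t(\cos k_x+\cos k_y)$ at half filling, taking $\gamma^\alpha_\mathbf{k}=\partial_{k_\alpha}\epsilon_\mathbf{k}$, and using a placeholder gap value.
%
\begin{verbatim}
import numpy as np

# Illustrative parameters (not taken from the text)
t, Delta, mu, T = 1.0, 0.8, 0.0, 0.02
N = 512
k = 2*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")
Qx, Qy = np.pi, np.pi                      # Neel ordering wave vector

eps  = -2*t*(np.cos(kx) + np.cos(ky))
epsQ = -2*t*(np.cos(kx+Qx) + np.cos(ky+Qy))
h = 0.5*(eps - epsQ)                       # h_k
e = np.sqrt(h**2 + Delta**2)               # e_k
Ep, Em = 0.5*(eps+epsQ) + e - mu, 0.5*(eps+epsQ) - e - mu

f  = lambda x: 0.5*(1.0 - np.tanh(x/(2*T)))     # Fermi function
fp = lambda x: -1.0/(4*T*np.cosh(x/(2*T))**2)   # its derivative f'(x)

gx  = 2*t*np.sin(kx)                       # gamma^x_k = d eps_k / d k_x
gxQ = 2*t*np.sin(kx+Qx)                    # gamma^x_{k+Q}

w = 1.0/N**2                               # measure of int d^2k/(2pi)^2
J = -Delta**2*np.sum(gx*gxQ*((f(Em)-f(Ep))/(4*e**3)
                             + (fp(Ep)+fp(Em))/(4*e**2)))*w
chi_dyn = Delta**2*np.sum((f(Em)-f(Ep))/(4*e**3))*w
print(f"J = {J:.4f}, chi_dyn = {chi_dyn:.4f}, c_s = {np.sqrt(J/chi_dyn):.3f}")
\end{verbatim}
%
At half filling and low $T$ the interband factor $f(E^-_\mathbf{k})-f(E^+_\mathbf{k})$ is close to one while the intraband ($f'$) term is exponentially small, so both $J$ and $\chi_\mathrm{dyn}^\perp$ come out positive, as expected for a stable N\'eel state.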
In the special case of an insulator at low temperature $T\ll\Delta$, the intraband contributions vanish and one has the identity $\chi_\mathrm{dyn}^\perp=\chi^\perp$, leading to the hydrodynamic relation for the spin wave velocity~\cite{Halperin1969} $c_s=\sqrt{J/\chi^\perp}$ (in an isotropic antiferromagnet). As noticed in Ref.~\cite{Sachdev1995}, in a doped antiferromagnet this hydrodynamic expression does not hold anymore, and one has to replace the uniform transverse susceptibility with the dynamical susceptibility. Since $J=0$ and $\chi_\mathrm{dyn}^\perp=0$ in the symmetric phase due to SU(2) gauge invariance, the expression $c_s=\sqrt{J/\chi_\mathrm{dyn}^\perp}$ yields a finite value for $c_s$ at the critical point $\Delta\to 0$, provided that $J$ and $\chi_\mathrm{dyn}^\perp$ scale to zero with the same power of $\Delta$, as happens within mean-field theory. Note that in the symmetric phase SU(2) gauge invariance does not pose any constraint on $\chi^\perp$, which is generally finite.
In the simpler case of perfect nesting, that is, when $\xi_\mathbf{k}=-\xi_{\mathbf{k}+\mathbf{Q}}$, corresponding to the half-filled particle-hole symmetric Hubbard model, and at zero temperature, expressions for $J$ and $\chi^\perp$ have been derived in Refs.~\cite{Schulz1995,Borejsza2004} for two spatial dimensions, and it is straightforward to check that our results reduce to these in this limit. Moreover, Eqs.~31--34 in Ref.~\cite{Schulz1995} are similar to our Ward identities, but no derivation is provided there.
We finally analyze the Landau damping of the Goldstone modes for a N\'eel antiferromagnet. Using the decoupling of sectors 0 and 1 from sectors 2 and 3, one obtains
%
\begin{equation}
{\rm Im}\frac{1}{\widetilde{\chi}^{22}(\mathbf{q},\omega)}=-4U^2\widetilde{\chi}^{22}_{0i}(\mathbf{q},\omega)+O(|\mathbf{q}|^3)
\end{equation}
%
for small $\mathbf{q}$, and
%
\begin{equation}
{\rm Im}\frac{1}{\widetilde{\chi}^{33}(\mathbf{q},\omega)}=-4U^2\widetilde{\chi}^{22}_{0i}(\mathbf{q},\omega)+O(|\mathbf{q}-\mathbf{Q}|^3)
\end{equation}
%
for $\mathbf{q}\sim\mathbf{Q}$. Because of $\widetilde{\chi}^{22}(\mathbf{q},\omega)=\widetilde{\chi}^{33}(\mathbf{q}+\mathbf{Q},\omega)$, the damping of the two Goldstone modes is identical. Returning to the susceptibilities in the unrotated basis, where $\chi^{22}(\mathbf{q},\omega)=\widetilde{\chi}^{22}(\mathbf{q}+\mathbf{Q},\omega)$ and $\chi^{33}(\mathbf{q},\omega)=\widetilde{\chi}^{33}(\mathbf{q},\omega)$, one has
%
\begin{equation}
{\rm Im}\frac{1}{\chi^{22}(\mathbf{q}',\omega)}={\rm Im}\frac{1}{\chi^{33}(\mathbf{q}',\omega)}=-|\mathbf{q}'|^2\hat{\omega}\gamma(\hat{\mathbf{q}}',\hat{\omega})+O(|\mathbf{q}'|^3),
\end{equation}
%
for small $\mathbf{q}'=\mathbf{q}-\mathbf{Q}$ and at fixed $\hat{\omega}=\omega/|\mathbf{q}'|$. This form of the Landau damping in the N\'eel state has already been derived by Sachdev \emph{et al.} in Ref.~\cite{Sachdev1995}.
%
\section{Numerical results in two dimensions}
\label{sec_low_spiral: numerical results}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig1.png}
\caption{Magnetization $m$ (left axis, solid line) and incommensurability $\eta$ (right axis, dashed line) as functions of the electron density in the mean-field ground state of the two-dimensional Hubbard model for $t'=-0.16t$, $U=2.5t$.}
\label{fig_low_en_sp: fig1}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=1.\textwidth]{fig2.png}
\caption{Quasiparticle Fermi surfaces in the magnetic ground state at various densities.
The blue (red) lines correspond to solutions of the equation $E^+_\mathbf{k}=0$ ($E^-_\mathbf{k}=0$). The gray dashed lines are solutions of $E^\ell_\mathbf{k}=E^\ell_{\mathbf{k}+\mathbf{Q}}$ (or, equivalently, $\xi_\mathbf{k}=\xi_{\mathbf{k}+2\mathbf{Q}}$) for $\mathbf{Q}\neq(\pi,\pi)$. For the densities $n=0.84$ and $n=0.63$ these lines intersect the Fermi surfaces at the hotspots (black dots), that is, the points that are connected to other Fermi surface points (gray dots) by a momentum shift $\mathbf{Q}$. The numbers indicate a pairwise connection. In the N\'eel state obtained for $n\geq 1$ all $\mathbf{k}$ points satisfy $E^\ell_\mathbf{k}=E^\ell_{\mathbf{k}+\mathbf{Q}}$.
}
\label{fig_low_en_sp: fig2}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig3.png}
\caption{In-plane and out-of-plane spin stiffnesses as functions of the electron density. In the N\'eel state for $n\geq1$ all the stiffnesses take the same value. Notice that in the spiral magnetic state for $n<1$ we have multiplied the out-of-plane spin stiffnesses $J^\perp_{xx}$ and $J^\perp_{yy}$ by a factor of 2.}
\label{fig_low_en_sp: fig3}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig4.png}
\caption{Spectral weights of the in-plane and out-of-plane Goldstone modes as functions of the electron density $n$. In the N\'eel state for $n\geq1$ both weights take the same value. Notice that in the spiral magnetic state for $n<1$ we have multiplied the out-of-plane spectral weight by a factor $1/2$.}
\label{fig_low_en_sp: fig4}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig5.png}
\caption{In-plane and out-of-plane magnon velocities $c^a_{\alpha\alpha}=\sqrt{J^a_{\alpha\alpha}/\chi_\mathrm{dyn}^a}$ as functions of the electron density.}
\label{fig_low_en_sp: fig5}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig6.png}
\caption{Damping term of the in-plane Goldstone mode as a function of $|\mathbf{q}|$ at $n=0.84$ for two fixed values of $\hat{\omega}=\omega/|\mathbf{q}|$ and three fixed directions. The angle between $\mathbf{q}$ and the $q_x$ axis is denoted by $\theta$. The values of the prefactor $\gamma$ of the leading dependence on $\hat{\omega}|\mathbf{q}|^2$ are shown in the inset.}
\label{fig_low_en_sp: fig6}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig7.png}
\caption{Damping term of the out-of-plane Goldstone mode as a function of $\omega$ for various fixed wave vectors $\mathbf{q}$ near $\mathbf{Q} = (0.82\pi, \pi )$ and fixed density $n=0.84$. The prefactors $\gamma_1$ of the linear frequency dependence for $\mathbf{q}\neq\mathbf{Q}$ and the prefactor of the cubic frequency dependence for $\mathbf{q}=\mathbf{Q}$ are shown in the inset.}
\label{fig_low_en_sp: fig7}
\end{figure}
%
%
In this section, we present numerical results for the spin stiffnesses and Landau dampings of the Goldstone modes in a spiral magnetic state obtained from the two-dimensional Hubbard model on a square lattice with nearest and next-nearest neighbor hopping amplitudes $t$ and $t'$. Throughout this section we employ $t$ as the energy unit, that is, we set $t=1$. All calculations have been performed in the ground state, that is, at $T=0$, and with fixed values of the Hubbard interaction $U=2.5t$ and next-nearest neighbor hopping $t'=-0.16t$.
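%
For reference, here is a short sketch of ours of the band setup entering all results of this section; the gap, chemical potential, and incommensurability below are placeholders, not computed values. It classifies electron and hole pockets from the sign of the quasiparticle bands.
%
\begin{verbatim}
import numpy as np

t, tp = 1.0, -0.16                  # hoppings as quoted above
Delta, mu = 0.3, -0.7               # illustrative values only
eta = 0.07                          # illustrative incommensurability
Qx, Qy = np.pi - 2*np.pi*eta, np.pi

N = 600
k = 2*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")

def eps(kx, ky):
    # nearest + next-nearest neighbor dispersion on the square lattice
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)

e0, eQ = eps(kx, ky), eps(kx+Qx, ky+Qy)
ek = np.sqrt(0.25*(e0-eQ)**2 + Delta**2)
Ep, Em = 0.5*(e0+eQ) + ek - mu, 0.5*(e0+eQ) - ek - mu

# E+ < 0 encloses doubly occupied states (electron pockets),
# E- > 0 encloses empty states (hole pockets); cf. the discussion below
print("electron-pocket BZ fraction:", np.mean(Ep < 0))
print("hole-pocket BZ fraction:   ", np.mean(Em > 0))
print("density n =", np.mean(Ep < 0) + np.mean(Em < 0))
\end{verbatim}
%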
For this choice of parameters mean-field theory yields a spiral magnetic state for densities ranging from $n\simeq 0.61$ to half filling ($n=1$), with ordering wave vectors of the form $\mathbf{Q}=(\pi-2\pi\eta,\pi)$, or symmetry related. The incommensurability $\eta$ increases monotonically upon reducing the density, and vanishes continuously for $n\to1$. For filling factors between $n=1$ and $n\simeq 1.15$, we obtain a N\'eel state. The transition between the paramagnetic and the spiral state at $n\simeq0.61$ is continuous, while the one occurring at $n\simeq1.15$ is of first order, with a relatively small jump of the order parameter. The magnetization and incommensurability curves as functions of the electron density are shown in Fig.~\ref{fig_low_en_sp: fig1}.
In Fig.~\ref{fig_low_en_sp: fig2}, we show the quasiparticle Fermi surfaces in the ground state for different densities. In the electron-doped region ($n>1$), these are given by solutions of $E^+_\mathbf{k}=0$ (cf.~Eq.~\eqref{eq_low_spiral: QP dispersions}), while in the hole-doped regime ($n<1$) by momenta satisfying $E^-_\mathbf{k}=0$. In principle, for sufficiently small gaps $\Delta$, both equations can have solutions, but we do not find such a case for our choice of parameters. In the ground state, the lines defined by $E^+_\mathbf{k}=0$ ($E^-_\mathbf{k}=0$) enclose doubly occupied (empty) states, and we therefore refer to them as electron (hole) pockets. In Fig.~\ref{fig_low_en_sp: fig2}, we also show the lines along which the equality $E^\ell_\mathbf{k}=E^\ell_{\mathbf{k}+\mathbf{Q}}$ (with $\mathbf{Q}\neq(\pi,\pi)$) is satisfied. We notice that for small doping these lines never intersect the Fermi surfaces, implying that the out-of-plane modes are not Landau damped at all in this parameter region.
In Fig.~\ref{fig_low_en_sp: fig3}, we show the in-plane and out-of-plane spin stiffnesses $J^a_{\alpha\beta}$ as functions of the electron density. Both in the spiral state for $n<1$ and in the N\'eel state for $n\geq1$ only the diagonal components $J^a_{xx}$ and $J^a_{yy}$ are nonzero. In the N\'eel state, the stiffnesses are isotropic ($J^a_{xx}=J^a_{yy}$) and degenerate, as dictated by symmetry. In the spiral state the in-plane and the out-of-plane spin stiffnesses differ significantly from each other. Both exhibit a slight nematicity ($J_{xx}^a\neq J^a_{yy}$), coming from the difference between $Q_x$ and $Q_y$\footnote{Note that for a spiral state with $\mathbf{Q}=(\pi-2\pi\eta,\pi-2\pi\eta)$, or symmetry related, one would have $J^a_{xx}=J^a_{yy}$ but $J^a_{xy}=J^a_{yx}\neq0$.}. Both the in-plane and the out-of-plane stiffnesses exhibit a sudden and sharp jump upon approaching half filling from the hole-doped side. This discontinuity is due to the sudden appearance of hole pockets, which allow for intraband excitation processes with small energies. Conversely, on the electron-doped side no discontinuity is found, as the contributions from the electron pockets are suppressed by vanishing prefactors (see Eq.~\eqref{eq_low_spiral: J0inplane}) at the momenta $(\pi,0)$ and $(0,\pi)$, where they first appear. We remark that we find positive stiffnesses over the entire density range in which a magnetic state appears. This proves the stability of the spiral magnetic state against small, smooth deformations of the order parameter, including variations of the wave vector $\mathbf{Q}$.
In Fig.~\ref{fig_low_en_sp: fig4}, we plot the spectral weights of the magnon modes as functions of the electron density $n$.
The discontinuity in $m^2/\chi_\mathrm{dyn}^\perp$ is due to the intraband terms coming from the emergence of the hole pockets. By contrast, $m^2/\chi_\mathrm{dyn}^\msml{\square}$ is finite, as it only gets contributions from interband processes. The spectral weights vanish near the critical fillings beyond which the magnetic state disappears, indicating that the dynamical susceptibilities vanish more slowly than $m^2$. The dip in $m^2/\chi^\perp_\mathrm{dyn}$ at $n\approx0.84$ is due to the merging of two electron pockets.
In Fig.~\ref{fig_low_en_sp: fig5}, we plot the magnon velocities $c^a_{\alpha\alpha}=\sqrt{J^a_{\alpha\alpha}/\chi_\mathrm{dyn}^a}$. They exhibit only a mild density dependence and are always of order $t$ in the entire magnetized regime. It is worthwhile to remark that they remain finite at the critical fillings beyond which the paramagnetic state appears.
The in-plane damping term ${\rm Im}[m^2/\widetilde{\chi}^{22}(\mathbf{q},\omega)]$ is plotted in Fig.~\ref{fig_low_en_sp: fig6} as a function of $|\mathbf{q}|$ for two fixed values of $\hat{\omega}=\omega/|\mathbf{q}|$ and three fixed directions $\hat{\mathbf{q}}=\mathbf{q}/|\mathbf{q}|$. The density is set to $n=0.84$. The characteristic quadratic behavior of Eq.~\eqref{eq_low_spiral: damping2} is clearly visible, with the prefactors $\gamma(\hat{\mathbf{q}},\hat{\omega})$ shown in the inset.
In Fig.~\ref{fig_low_en_sp: fig7}, we plot the frequency dependence of the damping term of the out-of-plane mode ${\rm Im}[m^2/\widetilde{\chi}^{33}(\mathbf{q},\omega)]$ for various fixed momenta $\mathbf{q}$ at and near $\mathbf{Q}$. For $\mathbf{q}=\mathbf{Q}$ the damping is proportional to $\omega^3$, in agreement with Eq.~\eqref{eq_low_spiral: damping33 q=Q}. For $\mathbf{q}\neq\mathbf{Q}$, one can see the linear frequency dependence predicted by Eq.~\eqref{eq_low_spiral: damping outofplane2}. The prefactors of the leading cubic and linear terms are listed in the inset.
\chapter{SU(2) gauge theory of the pseudogap phase}
\label{chap: pseudogap}
%
In this Chapter, we derive an effective theory for the pseudogap phase by fractionalizing the electron field into a fermionic chargon, carrying the original electron charge, and a charge neutral spinon. The latter is a SU(2) matrix describing position- and time-dependent (bosonic) fluctuations of the local spin orientation. The fractionalization brings in a SU(2) gauge redundancy, which is why we dub this theory the SU(2) gauge theory. We then consider a magnetically ordered state for the chargons, which leads to a reconstruction of the Fermi surface. We remark that symmetry breaking is nonetheless prevented at finite temperature by the spinon fluctuations, in agreement with the Mermin-Wagner theorem. We compute the magnetic state properties of the chargons, starting from the 2D Hubbard model and employing the fRG+MF method described in Chapter~\ref{chap: fRG+MF}. We subsequently integrate out the fermionic degrees of freedom, obtaining an effective non-linear sigma model (NL$\sigma$M) for the spinons. The NL$\sigma$M is described by a few parameters, namely the spin stiffnesses $J$ and the dynamical susceptibilities $\chi_\mathrm{dyn}$, which we compute following the formalism derived in Chapter~\ref{chap: low energy spiral}. A large-$N$ expansion yields a finite-temperature pseudogap regime in the hole-doped and electron-doped regions of the phase diagram.
On the hole-doped side, we also find a nematic phase at low temperatures, in agreement with the experimentally observed nematicity in cuprate materials~\cite{Ando2002,Cyr-Choiniere2015}. Within our moderate coupling calculation, the spinon fluctuations are found not to be sufficiently strong to destroy long range order in the ground state. The spectral function in the hole doped pseudogap regime has the form of hole pockets with suppressed weight on their backsides, leading to Fermi arcs. The content of this chapter appears in Ref.~\cite{Bonetti2022_III}. % \section{SU(2) gauge theory} \label{sec_pg: SU(2) gauge theory} % \subsection{Fractionalizing the electron field} % We consider the Hubbard model on a square lattice with lattice spacing $a=1$. The action in imaginary time reads % \begin{eqnarray} \mathcal{S}[c,c^*] &=& \int_0^\beta\!d\tau \bigg\{ \sum_{j,j',\sigma} c^*_{j,\sigma} \left[ \left( \partial_\tau - \mu\right)\delta_{jj'} + t_{jj'} \right] c_{j',\sigma} + \; U \sum_j n_{j,\uparrow}n_{j,\downarrow} \bigg\} , \label{eq_pg: Hubbard action} \end{eqnarray} % where $c_{j,\sigma} = c_{j,\sigma}(\tau)$ and $c^*_{j,\sigma} = c^*_{j,\sigma}(\tau)$ are Grassmann fields corresponding to the annihilation and creation, respectively, of an electron with spin orientation $\sigma$ at site $j$, and $n_{j,\sigma} = c^*_{j,\sigma}c_{j,\sigma}$. The chemical potential is denoted by $\mu$, and $U > 0$ is the strength of the (repulsive) Hubbard interaction. To simplify the notation, we write the dependence of the fields on the imaginary time $\tau$ only if needed for clarity. The action in \eqref{eq_pg: Hubbard action} is invariant under \emph{global} SU(2) rotations acting on the Grassmann fields as % \begin{equation} c_j \to \mathcal{U} c_j, \quad\quad c^*_j \to c^*_j \, \mathcal{U}^\dagger, \label{eq_pg: SU(2) transf. electrons} \end{equation} % where $c_j$ and $c^*_j$ are two-component spinors composed from $c_{j,\sigma}$ and $c^*_{j,\sigma}$, respectively, while $\mathcal{U}$ is a SU(2) matrix acting in spin space. To separate collective spin fluctuations from the charge degrees of freedom, we fractionalize the electronic fields as~\cite{Schulz1995, Borejsza2004, Scheurer2018, Wu2018} % \begin{equation} c_j = R_j \, \psi_j , \quad\quad c^*_j = \psi^*_j \, R^\dagger_j , \label{eq_pg: electron fractionaliz.} \end{equation} % where $R_j \in \mbox{SU(2)}$, to which we refer as ``spinon'', is composed of bosonic fields, and the components of the ``chargon'' $\psi_j$ are fermionic. According to \eqref{eq_pg: SU(2) transf. electrons} and \eqref{eq_pg: electron fractionaliz.} the spinons transform under the global SU(2) spin rotation by a \emph{left} matrix multiplication, while the chargons are left invariant. Conversely, a U(1) charge transformation acts only on $\psi_j$, leaving $R_j$ unaffected. The transformation in Eq.~\eqref{eq_pg: electron fractionaliz.} introduces a redundant SU(2) gauge symmetry, acting as % \begin{subequations} \begin{align} & \psi_j \to \mathcal{V}_j\,\psi_j , \qquad\quad \psi^*_j \to \psi^*_j \, \mathcal{V}^\dagger_j , \\ & R_j \to R_j \, \mathcal{V}_j^\dagger, \qquad\quad R^\dagger_j \to \mathcal{V}_j \, R^\dagger_{j} , \end{align} \end{subequations} % with $\mathcal{V}_j\in \mbox{SU(2)}$. Hence, the components $\psi_{j,s}$ of $\psi_j$ carry an SU(2) gauge index $s$, while the components $R_{j,\sigma s}$ of $R_j$ have two indices, the first one ($\sigma$) corresponding to the global SU(2) symmetry, and the second one ($s$) to SU(2) gauge transformations. 
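%
The gauge redundancy can be made explicit in a few lines. The following sketch (purely illustrative; the chargon amplitudes are treated as ordinary complex numbers rather than Grassmann variables) verifies that the physical electron $c_j = R_j\psi_j$ is unchanged under $\psi_j\to\mathcal{V}_j\psi_j$, $R_j\to R_j\mathcal{V}^\dagger_j$:
%
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def random_su2(rng):
    # SU(2) matrix [[a, -conj(b)], [b, conj(a)]] with |a|^2 + |b|^2 = 1
    a, b = rng.normal(size=2) + 1j*rng.normal(size=2)
    norm = np.sqrt(abs(a)**2 + abs(b)**2)
    a, b = a/norm, b/norm
    return np.array([[a, -b.conjugate()], [b, a.conjugate()]])

R   = random_su2(rng)                              # spinon
psi = rng.normal(size=2) + 1j*rng.normal(size=2)   # chargon (as c-numbers)
V   = random_su2(rng)                              # local gauge transformation

c  = R @ psi                                       # physical electron
c2 = (R @ V.conj().T) @ (V @ psi)                  # gauge-transformed fields
print(np.allclose(c, c2))                          # True: c is gauge invariant
\end{verbatim}
%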
We now rewrite the Hubbard action in terms of the spinon and chargon fields. The quadratic part of \eqref{eq_pg: Hubbard action} can be expressed as \cite{Borejsza2004}
%
\begin{eqnarray}
\mathcal{S}_0[\psi,\psi^*,R] &=& \int_0^\beta\!d\tau \bigg\{ \sum_j \psi^*_j \left[ \partial_\tau - \mu - A_{0,j} \right] \psi_{j} + \, \sum_{j,j'}t_{jj'}\,\psi^*_{j}\, e^{-\mathbf{r}_{jj'} \cdot \left(\boldsymbol{\nabla} - i\mathbf{A}_j \right)} \, \psi_j \bigg\},
\label{eq_pg: S0 chargons spinons}
\end{eqnarray}
%
where we have introduced a SU(2) gauge field, defined as
%
\begin{equation}
A_{\mu,j} = (A_{0,j},\mathbf{A}_j) = i R^\dagger_j \partial_\mu R_j,
\label{eq_pg: gauge field definition}
\end{equation}
%
with $\partial_\mu = (i\partial_\tau,\boldsymbol{\nabla})$. Here, the nabla operator $\boldsymbol{\nabla}$ is defined as the generator of translations on the lattice, that is, $e^{-\mathbf{r}_{jj'}\cdot\boldsymbol{\nabla}}$ with $\mathbf{r}_{jj'} = \mathbf{r}_j - \mathbf{r}_{j'}$ is the translation operator from site $j$ to site $j'$. To rewrite the interacting part in \eqref{eq_pg: Hubbard action}, we use the decomposition~\cite{Weng1991,Schulz1995,Borejsza2004}
%
\begin{equation}
n_{j,\uparrow}n_{j,\downarrow} = \frac{1}{4}(n_j)^2 - \frac{1}{4}(c^*_j \, \vec{\sigma}\cdot\hat{\Omega}_j \, c_j)^2,
\label{eq_pg: interaction decomposition}
\end{equation}
%
where $n_j = n_{j,\uparrow} + n_{j,\downarrow}$ is the charge density operator, $\vec{\sigma} = (\sigma^1,\sigma^2,\sigma^3)$ are the Pauli matrices, and $\hat{\Omega}_j$ is an arbitrary time- and site-dependent unit vector. The interaction term of the Hubbard action can therefore be written in terms of spinon and chargon fields as
%
\begin{equation}
\mathcal{S}_\mathrm{int}[\psi,\psi^*,R] = \int_0^\beta\!d\tau \, U \sum_j \left[\frac{1}{4} (n_j^\psi)^2 - \frac{1}{4}(\vec{S}^\psi_j\cdot\hat{\Omega}^R_j)^2 \right] ,
\end{equation}
%
where $n^\psi_j = \psi^*_j\psi_j$ is the chargon density operator, $\vec{S}^\psi_j = \frac{1}{2} \psi^*_j\vec{\sigma}\psi_j$ is the chargon spin operator, and
%
\begin{equation}
\vec{\sigma}\cdot\hat{\Omega}^R_j = R^\dagger_j \, \vec{\sigma} \cdot \hat{\Omega}_j \, R_j .
\label{eq_pg: Omega and Omega^R}
\end{equation}
%
Using \eqref{eq_pg: interaction decomposition} again, we obtain
%
\begin{equation}
\mathcal{S}_\mathrm{int}[\psi,\psi^*,R] = \int_0^\beta\!d\tau \, U \sum_j n^\psi_{j,\uparrow}n^\psi_{j,\downarrow},
\end{equation}
%
with $n^\psi_{j,s} = \psi^*_{j,s} \psi_{j,s}$. Therefore, the final form of the action $\mathcal{S} = \mathcal{S}_0 + \mathcal{S}_\mathrm{int}$ is nothing but the Hubbard model action where the physical electrons have been replaced by chargons coupled to a SU(2) gauge field. Since the chargons do not carry any spin degree of freedom, a \emph{global} breaking of their SU(2) gauge symmetry ($\langle \vec{S}^\psi_j \rangle \neq 0$) does not necessarily imply long range order for the physical electrons. The matrices $R_j$ describe directional fluctuations of the order parameter $\langle \vec{S}_j \rangle$, of which the most important ones vary slowly in time and space.
%
\subsection{Non-linear sigma model}
\label{sec_pg: NLsM}
%
We now derive a low-energy effective action for the spinon fields $R_j$ by integrating out the chargons,
%
\begin{equation}
e^{-\mathcal{S}_\mathrm{eff}[R]} = \int\! \mathcal{D} \psi \mathcal{D} \psi^* \, e^{-\mathcal{S}[\psi,\psi^*,R]} .
\label{eq_pg: integral over psi}
\end{equation}
%
Since the action $\mathcal{S}$ is quartic in the fermionic fields, the functional integral must be carried out by means of an approximate method. In previous works~\cite{Schulz1995,Dupuis2002,Borejsza2004} a Hubbard-Stratonovich transformation has been applied to decouple the chargon interaction, together with a saddle point approximation on the auxiliary bosonic (Higgs) field. We will employ an improved approximation, which we describe in Sec.~\ref{sec_pg: fRG+MF}.
The effective action for the spinons can be obtained by computing the response functions of the chargons to a fictitious SU(2) gauge field. Since we assign only low-energy, long-wavelength fluctuations to the spinons in the decomposition \eqref{eq_pg: electron fractionaliz.}, the spinon field $R_j$ is slowly varying in space and time. Hence, we can perform a gradient expansion. To second order in the gradient $\partial_\mu R_j$, the effective action $\mathcal{S}_\mathrm{eff}[R]$ has the general form
%
\begin{equation}
\mathcal{S}_\mathrm{eff}[R] = \int_\mathcal {T} \! dx \Big[ \mathcal{B}^a_\mu A_{\mu}^a(x) + \textstyle{\frac{1}{2}} \mathcal{J}^{ab}_{\mu\nu} A_{\mu}^a(x) A_{\nu}^b(x) \Big] ,
\label{eq_pg: effective action Amu}
\end{equation}
%
where $\mathcal{T} = [0,\beta] \times \mathbb{R}^2$, repeated indices are summed, and we have expanded the gauge field $A_\mu$ in terms of the SU(2) generators,
%
\begin{equation}
A_\mu(x) = A_\mu^a(x) \, \frac{\sigma^a}{2} ,
\label{eq_pg: Amu SU(2) generators}
\end{equation}
%
with $a$ running from 1 to 3. In line with the gradient expansion, the gauge field is now defined over a \emph{continuous} space-time. The coefficients in~\eqref{eq_pg: effective action Amu} do not depend on the spatio-temporal coordinates $x = (\tau,\mathbf{r})$ and are given by
%
\begin{eqnarray}
\label{eq_pg: def Ba}
\mathcal{B}^a_\mu &=& \frac{1}{2} \sum_{j,j'} \gamma^{(1)}_{\mu}(j,j') \langle \psi^*_j(0) \sigma^a \psi_{j'}(0) \rangle , \\
\label{eq_pg: def Jab}
\mathcal{J}_{\mu\nu}^{ab} &=& \frac{1}{4} \sum_{j,j'} \sum_{l,l'} \gamma^{(1)}_{\mu}(j,j') \gamma^{(1)}_{\nu}(l,l') \int_0^\beta d\tau \, \big\langle \left( \psi^*_j(\tau) \sigma^a \psi_{j'}(\tau) \right) \left( \psi^*_l(0) \sigma^b \psi_{l'}(0) \right) \big\rangle_c \nonumber \\
&&- \frac{1}{4} \sum_{j,j'} \gamma^{(2)}_{\mu\nu}(j,j') \langle \psi^*_j(0) \psi_{j'}(0) \rangle \, \delta_{ab} ,
\label{eq_pg: spin stiff definitions}
\end{eqnarray}
%
where $\langle\bullet\rangle$ ($\langle\bullet\rangle_c$) denotes the (connected) average with respect to the chargon Hubbard action. The first and second order current vertices have been defined as
%
\begin{subequations}
\begin{align}
\label{eq_pg: gamma1}
\gamma^{(1)}(j,j') =& \phantom{-} \left( \delta_{jj'}, i\,x_{jj'} \, t_{jj'}, i\,y_{jj'} \, t_{jj'} \right) , \hskip 1cm \\
\label{eq_pg: gamma2}
\gamma^{(2)}(j,j') =& - \left( \begin{array}{ccc}
0 & 0 & 0 \\
0 & x_{jj'} x_{jj'} \, t_{jj'} & x_{jj'} y_{jj'} \, t_{jj'}\\
0 & y_{jj'} x_{jj'} \, t_{jj'} & y_{jj'} y_{jj'} \, t_{jj'}\\
\end{array} \right) , \hskip -5mm
\end{align}
\end{subequations}
%
where $x_{jj'}$ and $y_{jj'}$ are the $x$ and $y$ components, respectively, of $\mathbf{r}_{jj'} = \mathbf{r}_j - \mathbf{r}_{j'}$. In Sec.~\ref{sec_pg: linear term} we will see that the linear term in~\eqref{eq_pg: effective action Amu} vanishes. We therefore consider only the quadratic contribution to the effective action.
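%
A quick numerical check of the expansion~\eqref{eq_pg: Amu SU(2) generators} (an illustration of ours, using a one-parameter family $R(\theta)=e^{-i\theta\,\hat{n}\cdot\vec{\sigma}/2}$ and a finite-difference derivative standing in for $\partial_\mu$): the gauge field $A = iR^\dagger\partial R$ comes out Hermitian and traceless, with real components $A^a = \Tr[\sigma^a A]$.
%
\begin{verbatim}
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def R_of(theta):
    # R(theta) = exp(-i theta n.sigma/2) for a fixed unit vector n
    n = np.array([0.36, 0.48, 0.80])
    ns = n[0]*sig[0] + n[1]*sig[1] + n[2]*sig[2]
    return np.cos(theta/2)*np.eye(2) - 1j*np.sin(theta/2)*ns

th, dth = 0.7, 1e-6
R  = R_of(th)
dR = (R_of(th+dth) - R_of(th-dth))/(2*dth)   # finite-difference derivative
A  = 1j*R.conj().T @ dR                      # A = i R^dag dR

print(np.allclose(A, A.conj().T))            # Hermitian
print(abs(np.trace(A)) < 1e-8)               # traceless
print([np.trace(s @ A).real for s in sig])   # real components A^a
\end{verbatim}
%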
We now derive an effective theory for the spinon fluctuations, which can be more conveniently expressed in terms of their adjoint representation
%
\begin{equation}
R^\dagger \sigma^a R = \mathcal{R}^{ab} \sigma^b .
\label{eq_pg: R to mathcal R}
\end{equation}
%
We start by proving the identity
%
\begin{equation} \label{eq_pg: dmu R identity}
\partial_\mu\mathcal{R} = -i \mathcal{R}\, \Sigma^a A_\mu^a,
\end{equation}
%
where $\Sigma^a$ are the generators of SU(2) in the adjoint representation,
%
\begin{equation}
\Sigma^a_{bc} = -i \varepsilon^{abc},
\end{equation}
%
with $\varepsilon^{abc}$ the Levi-Civita tensor. Rewriting Eq.~\eqref{eq_pg: R to mathcal R} as
%
\begin{equation}
\mathcal{R}^{ab} = \frac{1}{2} \Tr\left[ R^\dagger \sigma^a R^{\phantom{\dagger}} \sigma^b \right] \, ,
\end{equation}
%
we obtain the derivative of $\mathcal{R}$ in the form
%
\begin{equation}
\partial_\mu\mathcal{R}^{ab} = \frac{1}{2}\Tr\left[ (\partial_\mu R^\dagger)\, \sigma^a R \,\sigma^b \right] + \frac{1}{2}\Tr\left[ R^\dagger \sigma^a (\partial_\mu R)\, \sigma^b \right] = \frac{i}{2}\Tr\left[ R^\dagger \sigma^a R \left[\sigma^b, A_\mu\right] \right] = -i \mathcal{R}^{ac} \Sigma^d_{cb} A_\mu^d,
\end{equation}
%
where we have used $R^\dagger \partial_\mu R = -iA_\mu$ and $(\partial_\mu R^\dagger) R = iA_\mu$. This is the identity in \eqref{eq_pg: dmu R identity}. We now aim to express the object $\frac{1}{2} \mathcal{J}^{ab}_{\mu\nu} A_\mu^a A^b_\nu$ in terms of the matrix field $\mathcal{R}$. We write the stiffness matrix in terms of a new matrix $\mathcal{P}_{\mu\nu}$ via
%
\begin{equation} \label{eq_pg: J to P}
\mathcal{J}^{ab}_{\mu\nu} =\Tr[ \mathcal{P}_{\mu\nu} ]\delta_{ab} - \mathcal{P}^{ab}_{\mu\nu} = \Tr \left[ \mathcal{P}_{\mu\nu} \Sigma^a \Sigma^b \right] .
\end{equation}
%
Using $\mathcal{R}^T \mathcal{R} = \mathbb{1}$, we obtain
%
\begin{equation}
\frac{1}{2} \mathcal{J}^{ab}_{\mu\nu} A_\mu^a A^b_\nu = \frac{1}{2} \Tr \left[ \mathcal{P}_{\mu\nu} \, \Sigma^a \, \mathcal{R}^T \mathcal{R} \, \Sigma^b \right] A_\mu^a A^b_\nu = \frac{1}{2} \Tr \left[ \mathcal{P}_{\mu\nu} (\partial_\mu\mathcal{R}^T)(\partial_\nu\mathcal{R}) \right] ,
\end{equation}
%
where we have used Eq.~\eqref{eq_pg: dmu R identity} in the last step. Relation~\eqref{eq_pg: J to P} can be easily inverted using $\Tr[\mathcal{J}_{\mu\nu} ] = 2\Tr[\mathcal{P}_{\mu\nu}]$. We have therefore obtained the non-linear sigma model (NL$\sigma$M) action for the directional fluctuations
%
\begin{equation}
\mathcal{S}_\mathrm{NL\sigma M} = \int_\mathcal{T}\!dx \, \frac{1}{2}\Tr\left[\mathcal{P}_{\mu\nu} (\partial_\mu\mathcal{R}^T)(\partial_\nu\mathcal{R})\right],
\label{eq_pg: general NLsM}
\end{equation}
%
where $\mathcal{P}_{\mu\nu} = \frac{1}{2} \Tr[\mathcal{J}_{\mu\nu}] \mathbb{1} - \mathcal{J}_{\mu\nu} $. The structure of the matrices $\mathcal{J}_{\mu\nu} $ and $\mathcal{P}_{\mu\nu} $ depends on the magnetically ordered chargon state. In the trivial case $\langle \vec{S}^\psi_j \rangle = 0$ all the stiffnesses vanish and no meaningful low energy theory for $R$ can be derived. A well-defined low-energy theory emerges, for example, when N\'eel antiferromagnetic order is realized in the chargon sector, that is,
%
\begin{equation}
\langle \vec{S}^\psi_j \rangle \propto (-1)^{\boldsymbol{r}_j} \hat{u},
\end{equation}
%
where $\hat{u}$ is an arbitrary fixed unit vector. Choosing $\hat{u} = \hat{e}_1 = (1,0,0)$, the spin stiffness matrix in the N\'eel state has the form
%
\begin{equation}
\mathcal{J}_{\mu\nu} = \left( \begin{array}{ccc}
0 & 0 & 0 \\
0 & J_{\mu\nu} & 0 \\
0 & 0 & J_{\mu\nu}
\end{array} \right) ,
\end{equation}
%
with $(J_{\mu\nu}) = {\rm diag}(-Z,J,J)$.
In this case the effective theory reduces to the well-known ${\rm O(3)/O(2)} \simeq S^2$ non-linear sigma model \cite{Haldane1983_I,Haldane1983_II}
%
\begin{equation}
\mathcal{S}_\mathrm{NL\sigma M} = \frac{1}{2} \int_\mathcal {T} dx \, \left( Z |\partial_\tau\hat{\Omega}|^2 + J |\vec{\nabla}\hat{\Omega}|^2 \right) ,
\end{equation}
%
where $\hat{\Omega}^a=\mathcal{R}^{a1}$, and $|\hat{\Omega}|^2=1$. Another possibility is spiral magnetic ordering of the chargons,
%
\begin{equation}
\langle\vec{S}^\psi_j\rangle \propto \cos(\mathbf{Q} \cdot \mathbf{r}_j)\hat{u}_1 + \sin(\mathbf{Q} \cdot \mathbf{r}_j)\hat{u}_2,
\end{equation}
%
where $\mathbf{Q}$ is a fixed wave vector obtained by minimizing the chargon free energy, while $\hat{u}_1$ and $\hat{u}_2$ are two arbitrary mutually orthogonal unit vectors. The special case $\mathbf{Q} = (\pi,\pi)$ corresponds to the N\'eel state. Fixing $\hat{u}_1$ to $\hat{e}_1$ and $\hat{u}_2$ to $\hat{e}_2\equiv(0,1,0)$, the spin stiffness matrix takes the form
%
\begin{equation}
\mathcal{J}_{\mu\nu} = \left( \begin{array}{ccc}
J_{\mu\nu}^\perp & 0 & 0 \\
0 & J_{\mu\nu}^\perp & 0 \\
0 & 0 & J_{\mu\nu}^\Box
\end{array} \right),
\label{eq_pg:spiral stiffness matrix}
\end{equation}
%
where
%
\begin{equation}
(J_{\mu\nu}^a) = \left( \begin{array}{ccc}
-Z^a & 0 & 0 \\
0 & J_{xx}^a & J_{xy}^a \\
0 & J_{yx}^a & J_{yy}^a
\end{array} \right) ,
\end{equation}
%
for $a \in \{ \perp,\Box \}$. In this case, the effective action maintains its general form~\eqref{eq_pg: general NLsM} and it describes the O(3)$\times$O(2)/O(2) symmetric NL$\sigma$M, which has been previously studied in the context of geometrically frustrated antiferromagnets~\cite{Azaria1990,Azaria1992,Azaria1993_PRL,Azaria1993,Klee1996}. This theory has three independent degrees of freedom, corresponding to one \emph{in-plane} and two \emph{out-of-plane} Goldstone modes.
Antiferromagnetic N\'eel or spiral orders have been found in the two-dimensional Hubbard model over broad regions of the parameter space by several approximate methods, such as Hartree-Fock~\cite{Igoshev2010}, slave boson mean-field theory~\cite{Fresard1991}, expansion in the hole density~\cite{Chubukov1995}, moderate coupling fRG~\cite{Yamase2016}, and dynamical mean-field theory~\cite{Vilardi2018,Bonetti2020_I}. In our theory the mean-field order applies only to the chargons, while the physical electrons are subject to order parameter fluctuations.
%
\section{Computation of parameters}
\label{sec_pg: fRG+MF}
%
In this section, we describe how we evaluate the chargon integral in Eq.~\eqref{eq_pg: integral over psi} to compute the magnetic order parameter and the stiffness matrix $\mathcal{J}_{\mu\nu}$. The advantage of the way we formulated our theory in Sec.~\ref{sec_pg: SU(2) gauge theory} is that it allows for arbitrary approximations of the chargon action. One can employ various techniques to obtain the order parameter and the spin stiffnesses in the magnetically ordered phase. We use a renormalized mean-field (MF) approach with effective interactions obtained from a functional renormalization group (fRG) flow. In the following we briefly describe our approximation of the (exact) fRG flow, and we refer to Refs.~\cite{Berges2002, Metzner2012, Dupuis2021} and to Chapter~\ref{chap: methods} for the fRG, and to Refs.~\cite{Wang2014, Yamase2016, Bonetti2020_II, Vilardi2020} and to Chapter~\ref{chap: fRG+MF} for the fRG+MF method.
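%
Before proceeding, a brief numerical aside on the NL$\sigma$M parameters introduced in the previous section. The mapping between $\mathcal{J}_{\mu\nu}$ and $\mathcal{P}_{\mu\nu}$ is quickly checked with placeholder numbers (we look at a single spatial component only): in the N\'eel case $\mathcal{P}\propto{\rm diag}(J,0,0)$, so the kernel in Eq.~\eqref{eq_pg: general NLsM} reduces to $J\,|\vec\nabla\hat\Omega|^2$ with $\hat{\Omega}^a=\mathcal{R}^{a1}$, as in the $S^2$ model above.
%
\begin{verbatim}
import numpy as np

def P_of(J):
    # P = (1/2) Tr[J] 1 - J, the inversion of Eq. (J to P)
    return 0.5*np.trace(J)*np.eye(3) - J

Jp, Jb = 0.25, 0.10                       # placeholder stiffness values
J_spiral = np.diag([Jp, Jp, Jb])          # spiral form, one component
J_neel   = np.diag([0.0, 0.30, 0.30])     # Neel form diag(0, J, J)

print(np.diag(P_of(J_spiral)))            # [Jb/2, Jb/2, Jp - Jb/2]
print(np.diag(P_of(J_neel)))              # [J, 0, 0]: only |grad Omega_1|^2 survives
\end{verbatim}
%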
\subsection{Symmetric regime}
%
We evaluate the chargon functional integral by using an fRG flow equation \cite{Berges2002, Metzner2012, Dupuis2021}, choosing the temperature $T$ as flow parameter \cite{Honerkamp2001}. Temperature can be used as a flow parameter after rescaling the chargon fields as $\psi_j \to T^\frac{3}{4}\psi_j$, and defining a rescaled bare Green's function,
%
\begin{equation}
G_0^T(\mathbf{k},i\nu_n) = \frac{T^{\frac{1}{2}}}{i\nu_n - \epsilon_\mathbf{k} + \mu} ,
\end{equation}
%
where $\nu_n = (2n+1)\pi T$ is the fermionic Matsubara frequency, and $\epsilon_\mathbf{k}$ is the Fourier transform of the hopping matrix in~\eqref{eq_pg: Hubbard action}. We approximate the exact fRG flow by a second order (one-loop) flow of the two-particle vertex $V^T$, discarding self-energy feedback and contributions from the three-particle vertex \cite{Metzner2012}. In an SU(2) invariant system the two-particle vertex has the spin structure
%
\begin{equation*}
\begin{split}
V^T_{\sigma_1\sigma_2\sigma_3\sigma_4}(k_1,k_2,k_3,k_4) &= V^T(k_1,k_2,k_3,k_4) \, \delta_{\sigma_1\sigma_3}\,\delta_{\sigma_2\sigma_4} \\
& - V^T(k_2,k_1,k_3,k_4) \, \delta_{\sigma_1\sigma_4}\,\delta_{\sigma_2\sigma_3} ,
\end{split}
\end{equation*}
%
where $k_\alpha = (\mathbf{k}_\alpha,i\nu_{\alpha n})$ are combined momentum and frequency variables. Translation invariance imposes momentum conservation so that $k_1 + k_2 = k_3 + k_4$. We perform a static approximation, that is, we neglect the frequency dependence of the vertex. To parametrize the momentum dependence, we use the channel decomposition~\cite{Husemann2009, Husemann2012, Vilardi2017, Vilardi2019}
%
\begin{eqnarray}
V^T(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4) &=& U - \phi^{p,T}_{\frac{\mathbf{k}_1-\mathbf{k}_2}{2},\frac{\mathbf{k}_3-\mathbf{k}_4}{2}}(\mathbf{k}_1+\mathbf{k}_2) \nonumber \\
&& + \, \phi^{m,T}_{\frac{\mathbf{k}_1+\mathbf{k}_4}{2},\frac{\mathbf{k}_2+\mathbf{k}_3}{2}}(\mathbf{k}_2-\mathbf{k}_3) + \frac{1}{2}\phi^{m,T}_{\frac{\mathbf{k}_1+\mathbf{k}_3}{2},\frac{\mathbf{k}_2+\mathbf{k}_4}{2}}(\mathbf{k}_3-\mathbf{k}_1) \nonumber \\
&& - \frac{1}{2}\phi^{c,T}_{\frac{\mathbf{k}_1+\mathbf{k}_3}{2},\frac{\mathbf{k}_2+\mathbf{k}_4}{2}}(\mathbf{k}_3-\mathbf{k}_1) ,
\label{eq_pg: vertex parametrization}
\end{eqnarray}
%
where the functions $\phi^{p,T}$, $\phi^{m,T}$, and $\phi^{c,T}$ capture fluctuations in the pairing, magnetic, and charge channel, respectively. The dependences of these functions on the linear combinations of momenta in the brackets are typically much stronger than the dependences on the momenta in the subscripts. Hence, we expand the latter in form factors \cite{Husemann2009,Lichtenstein2017}, keeping only the lowest order s-wave, extended s-wave, p-wave, and d-wave contributions. We run the fRG flow from the initial temperature $T_\mathrm{ini} = \infty$, at which $V^{T_\mathrm{ini}} = U$, down to a critical temperature $T^*$ at which $V^T$ diverges, signaling the onset of spontaneous symmetry breaking (SSB). If the divergence of the vertex is due to $\phi^{m,T}$, the chargons develop some kind of magnetic order.
%
\subsection{Order parameter}
\label{sec_pg: order parameter and Q}
%
In the magnetic phase, that is, for $T < T^*$, we assume an order parameter of the form $\langle \psi^*_{\mathbf{k},\uparrow} \psi_{\mathbf{k}+\mathbf{Q},\downarrow} \rangle$, which corresponds to N\'eel antiferromagnetism if $\mathbf{Q} = (\pi,\pi)$, and to spiral order otherwise.
For $T < T^*$ we simplify the flow equations by decoupling the three channels $\phi^{p,T}$, $\phi^{m,T}$, and $\phi^{c,T}$. The flow equations can then be formally integrated, and the formation of an order parameter can be easily taken into account \cite{Wang2014}. In the magnetic channel one thus obtains the magnetic gap equation~\cite{Yamase2016}
%
\begin{equation}\label{eq_pg: gap equation fRG+MF}
\Delta_{\mathbf{k}} = \int_{\mathbf{k}'} \overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{Q})\, \frac{f(E^-_{\mathbf{k}'}) - f(E^+_{\mathbf{k}'})}{E^+_{\mathbf{k}'} - E^-_{\mathbf{k}'}} \, \Delta_{\mathbf{k}'} ,
\end{equation}
%
where $f(x)=(e^{x/T}+1)^{-1}$ is the Fermi function, $\int_\mathbf{k}$ is a shorthand notation for $\int\!\frac{d^2\mathbf{k}}{(2\pi)^2}$, and $E^\pm_\mathbf{k}$ are the quasiparticle dispersions
%
\begin{equation}
E^\pm_\mathbf{k} = \frac{\epsilon_\mathbf{k}+\epsilon_{\mathbf{k}+\mathbf{Q}}}{2} \pm\sqrt{\frac{1}{4} \left( \epsilon_\mathbf{k}-\epsilon_{\mathbf{k}+\mathbf{Q}} \right)^2 + \Delta_\mathbf{k}^2} \, -\mu .
\end{equation}
%
The effective coupling $\overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{Q})$ is the particle-hole irreducible part of $V^{T^*}$ in the magnetic channel, which can be obtained by inverting a Bethe-Salpeter equation at the critical scale,
%
\begin{equation}
\begin{split}
V^{m,T^*}_{\mathbf{k},\mathbf{k}'}(\mathbf{q}) &= \overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{q}) - \int_{\mathbf{k}''} \overline{V}^m_{\mathbf{k},\mathbf{k}''}(\mathbf{q}) \, \Pi^{T^*}_{\mathbf{k}''}(\mathbf{q}) \, V^{m,T^*}_{\mathbf{k}'',\mathbf{k}'}(\mathbf{q}) ,
\end{split}
\label{eq_pg: V phx Bethe-Salpeter}
\end{equation}
%
where $V^{m,T}_{\mathbf{k},\mathbf{k}'}(\mathbf{q}) = V^{T}(\mathbf{k}-\mathbf{q}/2,\mathbf{k}'+\mathbf{q}/2,\mathbf{k}'-\mathbf{q}/2,\mathbf{k}+\mathbf{q}/2)$, and the particle-hole bubble is given by
%
\begin{equation}
\Pi^{T}_{\mathbf{k}}(\mathbf{q}) = \sum_{\nu_n} G_0^{T}\left(\mathbf{k}-\mathbf{q}/2,i\nu_n\right) G_0^{T}\left(\mathbf{k}+\mathbf{q}/2,i\nu_n\right) .
\end{equation}
%
Although $V^{m,T^*}_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ diverges at certain wave vectors $\mathbf{q} = \mathbf{Q}_c$, the irreducible coupling $\overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ is finite for all $\mathbf{q}$. The dependence of $\overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ on $\mathbf{k}$ and $\mathbf{k}'$ is rather weak and of no qualitative importance. Hence, to simplify the calculations, we discard the $\mathbf{k}$ and $\mathbf{k}'$ dependences of the effective coupling by taking the momentum average $\overline{V}^m(\mathbf{q}) = \int_{\mathbf{k},\mathbf{k}'} \overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$. The magnetic gap then becomes momentum independent, that is, $\Delta_\mathbf{k} = \Delta$.
While the full vertex $V^{m,T}_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ depends very strongly on $\mathbf{q}$, the dependence of its irreducible part $\overline{V}^m_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ on $\mathbf{q}$ is rather weak. The calculation of the stiffnesses in the subsequent section is considerably simplified by approximating $\overline{V}^m(\mathbf{q})$ with a momentum independent effective interaction $U_{\rm eff}^m = \overline{V}^m(\mathbf{Q}_c)$. The gap equation~\eqref{eq_pg: gap equation fRG+MF} therefore simplifies to
%
\begin{equation}\label{eq_pg: simplified gap equation}
1 = U_{\rm eff}^m \int_\mathbf{k} \frac{f(E^-_{\mathbf{k}}) - f(E^+_{\mathbf{k}})}{E^+_{\mathbf{k}} - E^-_{\mathbf{k}}}.
\end{equation}
%
The optimal ordering wave vector $\mathbf{Q}$ is found by minimizing the mean-field free energy of the system
%
\begin{equation} \label{eq_pg: MF theromdynamic potential}
F(\mathbf{Q}) = - T \int_\mathbf{k}\sum_{\ell=\pm} \ln\left(1+e^{-E^\ell_\mathbf{k}(\mathbf{Q})/T}\right) + \frac{\Delta^2}{2U_{\rm eff}^m} + \mu n ,
\end{equation}
%
where the chemical potential $\mu$ is determined by keeping the density $n = \int_\mathbf{k} \sum_{\ell=\pm} f(E^\ell_\mathbf{k})$ fixed and the gap equation~\eqref{eq_pg: gap equation fRG+MF} fulfilled for each value of $\mathbf{Q}$. The optimal wave vectors $\mathbf{Q}$ at temperatures $T < T^*$ generally differ from the wave vectors $\mathbf{Q}_c$ at which $V^{m,T^*}_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ diverges.
Eq.~\eqref{eq_pg: gap equation fRG+MF} has the form of a mean-field gap equation with a renormalized interaction that is reduced compared to the bare Hubbard interaction $U$ by fluctuations in the pairing and charge channels. This reduces the critical doping beyond which magnetic order disappears, compared to the unrealistically large values obtained already for weak bare interactions in pure Hartree-Fock theory (see e.g.\ Ref.~\cite{Igoshev2010}).

\subsection{Spin stiffnesses}
\label{sec_pg: spin stiff formalism}

The NL$\sigma$M parameters, that is, the spin stiffnesses $\mathcal{J}_{\mu\nu}^{ab}$, are obtained by evaluating Eq.~\eqref{eq_pg: spin stiff definitions}. These expressions can be viewed as the response of the chargon system to an external SU(2) gauge field in the low-energy, long-wavelength limit, and they are equivalent to the stiffnesses defined by an expansion of the inverse susceptibilities to quadratic order in momentum and frequency around the Goldstone poles (see Chapter~\ref{chap: low energy spiral}). The following evaluation is obtained as a simple generalization of the RPA formula derived in Chapter~\ref{chap: low energy spiral} to a renormalized RPA with effective interaction
%
\begin{equation}
\widetilde{\Gamma}^{ab}_{0}(\mathbf{q}) = \Gamma^{ab}_{0}(\mathbf{q}) = 2 \, {\rm diag} \left[ -U_{\rm eff}^c(\mathbf{q}),U_{\rm eff}^m,U_{\rm eff}^m,U_{\rm eff}^m \right] ,
\end{equation}
%
where $U_{\rm eff}^m$ has been defined before, and the effective charge interaction is given by $U_{\rm eff}^c(\mathbf{q}) = \int_{\mathbf{k},\mathbf{k}'} \overline V^c_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$, where the irreducible coupling $\overline V^c_{\mathbf{k},\mathbf{k}'}(\mathbf{q})$ is obtained by inverting a Bethe-Salpeter equation similar to Eq.~\eqref{eq_pg: V phx Bethe-Salpeter},
%
\begin{equation}\label{eq_pg: V ph Bethe-Salpeter}
\begin{split}
V^{c,T^*}_{\mathbf{k},\mathbf{k}'}(\mathbf{q}) &= \overline{V}^c_{\mathbf{k},\mathbf{k}'}(\mathbf{q}) + \int_{\mathbf{k}''} \overline{V}^c_{\mathbf{k},\mathbf{k}''}(\mathbf{q}) \, \Pi^{T^*}_{\mathbf{k}''}(\mathbf{q}) \, V^{c,T^*}_{\mathbf{k}'',\mathbf{k}'}(\mathbf{q}) ,
\end{split}
\end{equation}
%
with
%
\begin{eqnarray}
V^{c,T}_{\mathbf{k},\mathbf{k}'}(\mathbf{q}) &=& 2V^{T}(\mathbf{k}\!-\!\mathbf{q}/2,\mathbf{k}'\!+\!\mathbf{q}/2,\mathbf{k}\!+\!\mathbf{q}/2,\mathbf{k}'\!-\!\mathbf{q}/2) \nonumber \\
&-& V^{T}(\mathbf{k}\!-\!\mathbf{q}/2,\mathbf{k}'\!+\!\mathbf{q}/2,\mathbf{k}'\!-\!\mathbf{q}/2,\mathbf{k}\!+\!\mathbf{q}/2) . \nonumber
\end{eqnarray}
%
Here we keep the dependence on $\mathbf{q}$ since it does not complicate the calculations.
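%
The simplified gap equation~\eqref{eq_pg: simplified gap equation} together with the minimization of $F(\mathbf{Q})$ in Eq.~\eqref{eq_pg: MF theromdynamic potential} is straightforward to implement. Below is a minimal sketch of ours; the effective interaction and chemical potential are assumed placeholder values, and we fix $\mu$ instead of the density $n$ for brevity, so that we effectively minimize the grand potential $F - \mu n$ at fixed $\mu$.
%
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

t, tp, T = 1.0, -0.16, 0.05
Ueff, mu = 2.0, -0.6                 # placeholder values, not from the text

N = 256
k = 2*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = lambda kx, ky: -2*t*(np.cos(kx)+np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)
f = lambda x: 0.5*(1.0 - np.tanh(x/(2*T)))

def bands(Q, Delta):
    e0, eQ = eps(kx, ky), eps(kx+Q[0], ky+Q[1])
    ek = np.sqrt(0.25*(e0-eQ)**2 + Delta**2)
    return 0.5*(e0+eQ) + ek - mu, 0.5*(e0+eQ) - ek - mu

def gap_rhs(Q, Delta):
    # U_eff * int_k (f(E-) - f(E+))/(E+ - E-); np.mean = int d^2k/(2pi)^2
    Ep, Em = bands(Q, Delta)
    return Ueff*np.mean((f(Em)-f(Ep))/(Ep-Em))

def solve_gap(Q):
    # the integral decreases with Delta, so bisect if an ordered solution exists
    g = lambda D: gap_rhs(Q, D) - 1.0
    return brentq(g, 1e-6, 8.0) if g(1e-6) > 0 else 0.0

def grand_potential(Q):
    D = solve_gap(Q)
    Ep, Em = bands(Q, D)
    F = -T*np.mean(np.logaddexp(0.0, -Ep/T) + np.logaddexp(0.0, -Em/T))
    return F + D**2/(2*Ueff), D

etas = np.linspace(0.0, 0.15, 16)
res = [grand_potential((np.pi-2*np.pi*e, np.pi)) for e in etas]
F, Ds = zip(*res)
i = int(np.argmin(F))
print("optimal eta =", etas[i], " Delta =", Ds[i])
\end{verbatim}
%
For a fixed density one would instead adjust $\mu(\mathbf{Q})$ inside the minimization, as described above.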
We remark that the temporal stiffnesses $Z^a$ of this chapter coincide with the dynamical susceptibilities $\chi_\mathrm{dyn}^a$ of Chapter~\ref{chap: low energy spiral}.
%
\subsection{Linear term in the gauge field}
\label{sec_pg: linear term}
%
We now show that the linear term in Eq.~\eqref{eq_pg: effective action Amu} vanishes. Fourier transforming the vertex and the expectation value, the coefficient $\mathcal{B}_\mu^a$ can be written as
%
\begin{equation}
\mathcal{B}_\mu^a = \frac{1}{2} \int_\mathbf{k} T \sum_{\nu_n} \gamma_\mu^{(1)}(\mathbf{k}) \Tr \left[ \sigma^a \mathbf{G}_{\mathbf{k},\mathbf{k}}(i\nu_n) \right] \, .
\end{equation}
%
Inserting $\mathbf{G}_{\mathbf{k},\mathbf{k}}(i\nu_n)$ from Eq.~\eqref{eq_low_spiral: G unrotated} (see Chapter~\ref{chap: low energy spiral}), one immediately sees that $\mathcal{B}_\mu^1 = \mathcal{B}_\mu^2 = 0$ for $\mu = 0,1,2$, and $\mathcal{B}_0^3 = 0$, too. Performing the Matsubara sum for $\mathcal{B}^3_\alpha$ with $\alpha = 1,2$, we obtain
%
\begin{equation}
\mathcal{B}^3_\alpha = \frac{1}{2} \int_\mathbf{k} \sum_{\ell=\pm} \left[ (\partial_{k_\alpha}\epsilon_\mathbf{k}) u^\ell_\mathbf{k} f(E^\ell_\mathbf{k}) + (\partial_{k_\alpha}\epsilon_{\mathbf{k}+\mathbf{Q}}) u^{-\ell}_\mathbf{k} f(E^\ell_\mathbf{k}) \right] \, .
\label{eq_pg: linear term in Amu expression}
\end{equation}
%
One can see by direct calculation that this term vanishes if $\partial F(\mathbf{Q})/\partial\mathbf{Q}$, with $F(\mathbf{Q})$ given by Eq.~\eqref{eq_pg: MF theromdynamic potential}, vanishes. Hence, $\mathcal{B}^3_\alpha$ vanishes if $\mathbf{Q}$ minimizes the free energy. A similar result has been obtained in Ref.~\cite{Klee1996}.
%
\section{Evaluation of the sigma model}

To solve the NL$\sigma$M~\eqref{eq_pg: general NLsM}, we resort to a saddle point approximation in the $\text{CP}^{N-1}$ representation, which becomes exact in the large $N$ limit \cite{AuerbachBook1994,Chubukov1994}.

\subsection{\texorpdfstring{CP$^{\bf 1}$}{CP} representation}

The matrix $\mathcal{R}$ can be expressed as a triad of orthonormal unit vectors:
%
\begin{equation} \label{eq_pg:mathcal R to Omegas}
\mathcal{R}=\big( \hat\Omega_1,\hat\Omega_2,\hat\Omega_3 \big),
\end{equation}
%
where $\hat\Omega_i\cdot\hat\Omega_j = \delta_{ij}$. We represent these vectors in terms of two complex Schwinger bosons $z_\uparrow$ and $z_\downarrow$ \cite{Sachdev1995}
%
\begin{subequations} \label{eq_pg:mathcal R to z}
\begin{align}
& \hat\Omega_- = z(i\sigma^2\vec{\sigma})z, \\
& \hat\Omega_+ = z^*(i\sigma^2\vec{\sigma})^\dagger z^*, \\
& \hat\Omega_3 = z^*\vec{\sigma}z,
\end{align}
\end{subequations}
%
with $z = (z_\uparrow,z_\downarrow)$ and $\hat{\Omega}_\pm=\hat{\Omega}_1\mp i \hat{\Omega}_2$. The Schwinger bosons obey the non-linear constraint
%
\begin{equation} \label{eq_pg:z boson constraint}
z^*_\uparrow z_\uparrow + z^*_\downarrow z_\downarrow = 1 \, .
\end{equation}
%
The parametrization~\eqref{eq_pg:mathcal R to z} is equivalent to setting
%
\begin{equation} \label{eq_pg: R to z}
R = \left( \begin{array}{cc}
z_\uparrow & -z_\downarrow^* \\
z_\downarrow & \phantom{-} z_\uparrow^*
\end{array} \right),
\end{equation}
%
in Eq.~\eqref{eq_pg: R to mathcal R}.
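%
The parametrizations \eqref{eq_pg:mathcal R to z} and \eqref{eq_pg: R to z} can again be checked in a few lines (an illustrative sketch of ours): a normalized doublet $z$ yields $R\in{\rm SU(2)}$, and $\hat\Omega_3 = z^*\vec{\sigma}z$ coincides with the third column of the adjoint matrix $\mathcal{R}$ built from $R$.
%
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(7)

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

z = rng.normal(size=2) + 1j*rng.normal(size=2)
z /= np.linalg.norm(z)                        # constraint z^* z = 1

R = np.array([[z[0], -z[1].conjugate()],
              [z[1],  z[0].conjugate()]])     # Eq. (R to z)
print(np.allclose(R.conj().T @ R, np.eye(2))) # R is in SU(2)

Omega3 = np.array([(z.conj() @ s @ z).real for s in sig])
col3 = np.array([0.5*np.trace(R.conj().T @ s @ R @ sig[2]).real
                 for s in sig])               # third column of mathcal{R}
print(np.allclose(Omega3, col3))              # True
\end{verbatim}
%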
Inserting the expressions \eqref{eq_pg:mathcal R to Omegas} and \eqref{eq_pg:mathcal R to z} into Eq.~\eqref{eq_pg: general NLsM} and assuming a stiffness matrix $\mathcal{J}_{\mu\nu} $ of the form~\eqref{eq_pg:spiral stiffness matrix}, we obtain the $\rm CP^1$ action
%
\begin{equation} \label{eq_pg:CP1 action}
\mathcal{S}_{\text{CP}^1}[z,z^*] = \int_\mathcal {T} dx \, \Big[ 2J^\perp_{\mu\nu} (\partial_\mu z^*)(\partial_\nu z) - \, 2(J^\perp_{\mu\nu} - J^\Box_{\mu\nu}) j_\mu j_\nu \Big] \, ,
\end{equation}
%
with sum convention for the spin indices of $z$ and $z^*$ and the current operator
%
\begin{equation}
j_\mu = \frac{i}{2}\left[z^*(\partial_\mu z)-(\partial_\mu z^*)z\right] \, .
\end{equation}
%
We recall that $x = (\tau,\mathbf{r})$ comprises the imaginary time and space variables, and $\mathcal{T} = [0,\beta] \times \mathbb{R}^2$.

\subsection{Large {\em N} expansion}

The current-current interaction in Eq.~\eqref{eq_pg:CP1 action} can be decoupled by a Hubbard-Stratonovich transformation, introducing a U(1) gauge field $\mathcal{A}_\mu$, and implementing the constraint~\eqref{eq_pg:z boson constraint} by means of a Lagrange multiplier $\lambda$. The resulting form of the action describes the so-called massive $\rm CP^1$ model~\cite{Azaria1995}
%
\begin{equation} \label{eq_pg:massive CP1 model}
\mathcal{S}_{\text{CP}^1}[z,z^*,\mathcal{A}_\mu,\lambda] = \int_\mathcal {T} dx \Big[ 2J^\perp_{\mu\nu} (D_\mu z)^* (D_\nu z) + \frac{1}{2} M_{\mu\nu} \mathcal{A}_\mu \mathcal{A}_\nu + i\lambda(z^*z-1) \Big] \, ,
\end{equation}
%
where $D_\mu = \partial_\mu - i\mathcal{A}_\mu$ is the covariant derivative. The numbers $M_{\mu\nu}$ are the matrix elements of the mass tensor of the U(1) gauge field,
%
\begin{equation}
{\rm M} = 4 \big[ 1 - {\rm J}^\Box ({\rm J}^\perp)^{-1} \big]^{-1} {\rm J}^\Box \, ,
\end{equation}
%
where ${\rm J}^\Box$ and ${\rm J}^\perp$ are the stiffness tensors built from the matrix elements $J_{\mu\nu}^\Box$ and $J_{\mu\nu}^\perp$, respectively. To perform a large $N$ expansion, we extend the two-component field $z = (z_\uparrow,z_\downarrow)$ to an $N$-component field $z = (z_1,\dots,z_N)$, and rescale it by a factor $\sqrt{N/2}$ so that it now satisfies the constraint
%
\begin{equation}
z^*z = \sum_{\alpha=1}^N z^*_\alpha z_\alpha = \frac{N}{2} \, .
\end{equation}
%
To obtain a nontrivial limit $N \to \infty$, we rescale the stiffnesses $J^\perp_{\mu\nu}$ and $J^\Box_{\mu\nu}$ by a factor $2/N$, yielding the action
%
\begin{equation} \label{eq_pg:massive CPN1 model}
\mathcal{S}_{\text{CP}^{N-1}}[z,z^*,\mathcal{A}_\mu,\lambda] = \int_\mathcal {T} dx \Big[ 2J^\perp_{\mu\nu} (D_\mu z)^* (D_\nu z) + \! \frac{N}{4} M_{\mu\nu} \mathcal{A}_\mu \mathcal{A}_\nu + i\lambda \Big( z^*z - \frac{N}{2} \Big) \Big] .
\end{equation}
%
This action describes the massive ${\rm CP}^{N-1}$ model~\cite{Campostrini1993}, which in $d>2$ dimensions displays two distinct critical points~\cite{Azaria1995,Chubukov1994,Chubukov1994_II}. The first one belongs to the pure ${\rm CP}^{N-1}$ class, where $M_{\mu\nu} \to 0$ ($J^\Box_{\mu\nu} = 0$) and the U(1) gauge invariance is preserved; it applies, for example, in the case of N\'eel ordering of the chargons. The second is in the O(2N) class, where $M_{\mu\nu} \to \infty$ ($J^\perp_{\mu\nu} = J^\Box_{\mu\nu}$) and the gauge field does not propagate. At leading order in $N^{-1}$, the saddle point equations are the same for both fixed points, so that we can ignore this distinction in the following.
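%
The two limits of the mass tensor are transparent in a small numerical example (the stiffness values below are placeholders, chosen only to illustrate the behavior of ${\rm M} = 4[1-{\rm J}^\Box({\rm J}^\perp)^{-1}]^{-1}{\rm J}^\Box$):
%
\begin{verbatim}
import numpy as np

Jperp = np.array([[0.25, 0.02],
                  [0.02, 0.21]])     # placeholder spatial stiffness tensor
for s in (0.0, 0.5, 0.999):
    Jbox = s*Jperp                   # interpolate between the two fixed points
    M = 4*np.linalg.solve(np.eye(2) - Jbox @ np.linalg.inv(Jperp), Jbox)
    print(s, np.round(M, 2))
# s -> 0: M -> 0, pure CP^{N-1} behavior (propagating U(1) gauge field)
# s -> 1: M diverges, O(2N) behavior (gauge field frozen out)
\end{verbatim}
%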
At finite temperatures $T > 0$ the non-linear sigma model does not allow for any long-range magnetic order, in agreement with the Mermin-Wagner theorem. The spin correlations decay exponentially and the spin excitations are bounded from below by a spin gap $m_s = \sqrt{i\langle\lambda\rangle/Z^\perp}$. Integrating out the $z$-bosons from Eq.~\eqref{eq_pg:massive CPN1 model}, we obtain the effective action \cite{AuerbachBook1994}
%
\begin{equation}
\mathcal{S}[\mathcal{A}_\mu,\lambda] = N \int_\mathcal{T} dx \Big[ \ln \left( -2J^\perp_{\mu\nu} D_\mu D_\nu + i \lambda \right) - \frac{i}{2} \lambda + \,\frac{1}{4} M_{\mu\nu} \mathcal{A}_\mu \mathcal{A}_\nu \Big] \, .
\end{equation}
%
In the large $N$ limit the functional integral for the partition function is dominated by its saddle point, which is determined by the stationarity equations
%
\begin{equation}
\frac{\delta\mathcal{S}}{\delta\mathcal{A}_\mu}= \frac{\delta\mathcal{S}}{\delta\lambda} = 0 \, .
\end{equation}
%
The first condition implies $\mathcal{A}_\mu=0$, that is, in the large $N$ limit the U(1) gauge field fluctuations are totally suppressed. The variation with respect to $\lambda$ gives, assuming a spatially uniform average value for $\lambda$,
%
\begin{equation}
T \sum_{\omega_n} \int_\mathbf{q} \frac{1}{Z^\perp \omega_n^2 + J^\perp_{\alpha\beta} q_\alpha q_\beta + i\langle\lambda\rangle} = 1 \, .
\end{equation}
%
Performing the sum over the bosonic Matsubara frequencies $\omega_n = 2n\pi T$, inserting the identity
%
\begin{equation}
1 = \int_0^\infty \! d\epsilon \, \delta\Big( \epsilon - \sqrt{ J^\perp_{\alpha\beta} q_\alpha q_\beta/Z^\perp } \, \Big) ,
\end{equation}
%
and performing the $\mathbf{q}$-integral, we obtain a self-consistent equation for the spin gap
%
\begin{equation} \label{eq_pg:large N equation}
\frac{1}{4\pi J} \int_0^{c_s{\Lambda_\mathrm{uv}}} \!\frac{\epsilon\,d\epsilon}{\sqrt{\epsilon^2+m_s^2}} \,\mathrm{coth}\left(\frac{\sqrt{\epsilon^2+m_s^2}}{2T}\right) = 1 \, ,
\end{equation}
%
where ${\Lambda_\mathrm{uv}}$ is an ultraviolet momentum cutoff. The constant $J$ is an ``average'' spin stiffness given by
%
\begin{equation}
J = \sqrt{ \mathrm{det} \left( \begin{array}{cc}
J^\perp_{xx} & J^\perp_{xy} \\
J^\perp_{yx} & J^\perp_{yy}
\end{array} \right) } \, ,
\end{equation}
%
and $c_s = \sqrt{J/Z^\perp}$ is the corresponding average spin wave velocity. In Sec.~\ref{sec_pg: cutoff}, we shall discuss how to choose the value of ${\Lambda_\mathrm{uv}}$. For $m_s \ll c_s{\Lambda_\mathrm{uv}}$ and $T \ll c_s{\Lambda_\mathrm{uv}}$, the magnetic correlation length $\xi_s = \frac{1}{2} c_s/m_s$ behaves as
%
\begin{equation}
\xi_s = \frac{c_s}{4T \, \sinh^{-1} \! \left[ \frac{1}{2} e^{-\frac{2\pi}{T}(J - J_c)} \right] } \, ,
\end{equation}
%
with the critical stiffness
%
\begin{equation} \label{eq_pg:Jc}
J_c = \frac{c_s{\Lambda_\mathrm{uv}}}{4\pi} \, .
\end{equation}
%
The correlation length is finite at any $T > 0$. For $J > J_c$, $\xi_s$ diverges exponentially for $T \to 0$, while for $J < J_c$ it remains finite in the zero temperature limit. At $T=0$, Eq.~\eqref{eq_pg:large N equation} may have no solution for any value of $m_s$. This is due to the Bose-Einstein condensation of the Schwinger bosons $z$. One therefore has to account for this effect by adding a \emph{condensate fraction $n_0$} to the left hand side of Eq.~\eqref{eq_pg:large N equation}. For later convenience, we assume that only $z$ bosons with spin index $\uparrow$ condense.
We obtain
%
\begin{equation} \label{eq_pg:large N eq at T=0}
n_0 + \frac{1}{4\pi J} \int_0^{c_s{\Lambda_\mathrm{uv}}} \! \frac{\epsilon \, d\epsilon}{\sqrt{\epsilon^2 + m_s^2}} = 1 \, ,
\end{equation}
%
where $n_0 = |\langle z_\uparrow \rangle|^2$. Eq.~\eqref{eq_pg:large N eq at T=0} can be easily solved, yielding (if $m_s \ll c_s{\Lambda_\mathrm{uv}}$)
%
\begin{subequations}
\begin{align}
&\begin{cases}
& m_s=0 \\
& n_0 = 1 - \frac{J_c}{J}
\end{cases} \hskip 5mm \text{for } J>J_c \, , \\
&\begin{cases}
& n_0 = 0 \\
& m_s = 2\pi J\left[\left( J_c/J \right)^2 - 1 \right]
\end{cases} \hskip 5mm \text{for } J<J_c.
\end{align}
\end{subequations}
%
The Mermin-Wagner theorem is thus respected already in the saddle-point approximation to the ${\rm CP}^{N-1}$ representation of the non-linear sigma model, that is, there is no long-range order at $T > 0$. In the ground state, long-range order (corresponding to a $z$ boson condensation) is obtained for a sufficiently large spin stiffness, while for $J < J_c$ magnetic order is destroyed by quantum fluctuations even at $T = 0$, giving rise to a paramagnetic state with a spin gap.

\subsection{Choice of ultraviolet cutoff}
\label{sec_pg: cutoff}
%
The impact of spin fluctuations described by the non-linear sigma model depends strongly on the ultraviolet cutoff ${\Lambda_\mathrm{uv}}$. In particular, the critical stiffness $J_c$ separating a ground state with magnetic long-range order from a disordered ground state is directly proportional to ${\Lambda_\mathrm{uv}}$. The need for a regularization of the theory by an ultraviolet cutoff is a consequence of the gradient expansion. While the expansion coefficients (the stiffnesses) are determined by the microscopic model, there is no systematic way of computing ${\Lambda_\mathrm{uv}}$. A pragmatic choice for the cutoff is given by the ansatz
%
\begin{equation} \label{eq_pg: Luv}
{\Lambda_\mathrm{uv}} = C/\xi_A \, ,
\end{equation}
%
where $C$ is a dimensionless number, and $\xi_A$ is the magnetic coherence length, which is the characteristic length scale of spin amplitude correlations. This choice may be motivated by the observation that local moments with a well-defined spin amplitude do not exist at length scales below $\xi_A$ \cite{Borejsza2004}. The constant $C$ can be fixed by matching results from the non-linear sigma model to results from a microscopic calculation in a suitable special case (see below).
The coherence length $\xi_A$ can be obtained from the connected spin amplitude correlation function $\chi_A(\mathbf{r}_{j},\mathbf{r}_{j'}) = \big\langle (\hat{n}_j \cdot \vec{S}^\psi_j)(\hat{n}_{j'} \cdot \vec{S}^\psi_{j'}) \big\rangle_c$, where $\hat{n}_j = \langle \vec{S}^\psi_j \rangle / |\langle \vec{S}^\psi_j \rangle|$. At long distances this function decays exponentially, as $e^{-r/\xi_A}$ with the distance $r = |\mathbf{r}_j - \mathbf{r}_{j'}|$. Fourier transforming and using the rotated spin frame introduced in Chapter~\ref{chap: low energy spiral}, the long distance behavior of $\chi_A(\mathbf{r}_{j},\mathbf{r}_{j'})$ can be related to the momentum dependence of the static correlation function $\widetilde{\chi}^{ab}(\mathbf{q},0)$ in the amplitude channel $a=b=1$ for small $\mathbf{q}$, which has the general form
%
\begin{equation}
\widetilde{\chi}^{11}(\mathbf{q},0) \propto \frac{1}{J^A_{\alpha\beta} q_\alpha q_\beta + m_A^2} \, .
\end{equation}
%
The magnetic coherence length is then given by
%
\begin{equation}
\xi_A = \sqrt{J_A}/(2 m_A) \, ,
\end{equation}
%
where $J_A = \left( J_{xx}^A J_{yy}^A - J_{xy}^A J_{yx}^A \right)^\frac{1}{2}$. The constant $C$ in Eq.~\eqref{eq_pg: Luv} can be estimated by considering the Hubbard model with pure nearest neighbor hopping at half filling. At strong coupling (large $U$) the spin degrees of freedom are then described by the antiferromagnetic Heisenberg model, which exhibits a N\'eel ordered ground state with a magnetization reduced by a factor $n_0 \approx 0.6$ compared to the mean-field value \cite{Manousakis1991}. On the other hand, evaluating the RPA expressions for the Hubbard model in the strong coupling limit, one recovers the mean-field results for the spin stiffness and spin wave velocity of the Heisenberg model with an exchange coupling $J_H = 4t^2/U$, namely $J = J_H/4$ and $c_s = \sqrt{2} J_H$. Evaluating the RPA spin amplitude correlation function yields $\xi_A = 1/\sqrt{8}$ in this limit. With the ansatz \eqref{eq_pg: Luv}, one then obtains $n_0 = 1 - J_c/J = 1 - \frac{c_s C}{4\pi J \xi_A} = 1 - 4C/\pi$. Matching this with the numerical result $n_0 \approx 0.6$ yields $C \approx 0.3$ and ${\Lambda_\mathrm{uv}} \approx 0.9$.
%
\section{Results}
%
In this section we present and discuss results obtained from our theory for the two-dimensional Hubbard model, both in the hole- ($n<1$) and electron-doped ($n>1$) regime. We fix the ratio of hopping amplitudes as $t'/t = -0.2$, and we choose a moderate interaction strength $U=4t$. The energy unit is $t$ in all plots.
%
\subsection{Chargon mean-field phase diagram}
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig2.png}
\caption{Pseudocritical temperature $T^*$ and nematic temperature $T_\mathrm{nem}$ as functions of the density $n$. The labels $T^*_m$ and $T^*_p$ indicate whether the effective interaction diverges in the magnetic or in the pairing channel, respectively. The black solid line indicates the magnetic transition temperature if the pairing instability is ignored (see main text). The labels ``N\'eel'' and ``Spiral'' refer to the type of chargon order. The dashed black line indicates a topological transition of the quasiparticle Fermi surface within the spiral regime. Inset: irreducible magnetic effective interaction $U_{\rm eff}^m$ as a function of density $n$.}
\label{fig_pg: fig1}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig3.png}
\caption{Magnetic gap $\Delta$ (left axis) and incommensurability $\eta$ (right axis) at $T=0$ as functions of the density.}
\label{fig_pg: fig2}
\end{figure}
%
In Fig.~\ref{fig_pg: fig1}, we plot the critical temperature $T^*$ at which the vertex $V^T(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4)$ diverges. In a wide filling window, from $n=0.84$ to $n=1.08$, the divergence of the vertex is due to a magnetic instability. At the edges of the magnetic dome in the $n$-$T$ phase diagram, the leading instability occurs in the $d$-wave pairing channel. Pairing is expected to extend into the magnetic regime as a secondary instability \cite{Wang2014,Yamase2016}. Conversely, magnetic order is possible in the regime where pairing fluctuations dominate. In Fig.~\ref{fig_pg: fig1}, we also show the magnetic pseudocritical temperature obtained by neglecting the onset of pairing (black solid line).
This can be determined by setting the magnetic order parameter $\Delta$ to zero in the gap equation~\eqref{eq_pg: gap equation fRG+MF} and solving for the temperature. In the hole-doped part of the $n$-$T$ phase diagram where the $d$-wave superconducting instability is the dominant one, the magnetic critical temperature is only slightly smaller than $T_p^*$. On the electron-doped side, by contrast, it vanishes in the corresponding region. In Fig.~\ref{fig_pg: fig1}, we also plot the \emph{nematic} temperature $T_\mathrm{nem}$, below which the magnetic chargon state transitions from the N\'eel one to the spiral one, breaking the $C_4$ lattice rotational symmetry. At small hole dopings we first find a N\'eel antiferromagnetic phase at higher temperatures, and the system undergoes a second transition to the spiral phase at lower $T$. Conversely, for fillings smaller than $n=0.88$, we find a nematic state right below $T_m^*$. Within the spiral regime there is a topological transition of the quasiparticle Fermi surface (indicated by a black dashed line in Fig.~\ref{fig_pg: fig1}), where hole pockets merge. The single-particle spectral function develops Fermi arcs on the right hand side of this transition, while it resembles the large bare Fermi surface on the left (see Sec.~\ref{sec_pg: spectral function}). In the inset of Fig.~\ref{fig_pg: fig1}, we also show the irreducible effective magnetic interaction $U_\mathrm{eff}^m$ defined in Sec.~\ref{sec_pg: order parameter and Q}. The effective interaction $U_\mathrm{eff}^m$ is strongly reduced from its bare value ($U=4t$) by the non-magnetic channels in the fRG flow. From now on we ignore the pairing instability and focus on magnetic properties. We compute the magnetic order parameter $\Delta$ together with the optimal wave vector $\mathbf{Q}$ in the ground state (at $T=0$) as described in Sec.~\ref{sec_pg: order parameter and Q}. In Fig.~\ref{fig_pg: fig2}, we show results for $\Delta$ as a function of the filling. We find a stable magnetic solution extending deep into the hole-doped regime, down to $n \approx 0.73$. On the electron-doped side magnetic order terminates abruptly already at $n \approx 1.08$. This pronounced electron-hole asymmetry and the discontinuous transition on the electron-doped side have already been observed in previous fRG+MF calculations for a slightly weaker interaction $U = 3t$ \cite{Yamase2016}. The magnetic gap reaches its peak at $n=1$, as expected, although the pseudocritical temperature $T^*$ and the irreducible effective interaction $U_{\rm eff}^m$ exhibit their maxima in the hole-doped regime, slightly away from half-filling. The magnetic states are either of N\'eel type or spiral with a wave vector of the form $\mathbf{Q} = (\pi-2\pi\eta,\pi)$ (or symmetry related), with an ``incommensurability'' $\eta > 0$. In Fig.~\ref{fig_pg: fig2}, results for $\eta$ are shown as a function of the density. At half-filling and in the electron-doped region only N\'eel order is found, as expected and in agreement with previous fRG+MF studies \cite{Yamase2016}. Hole doping instead immediately leads to a spiral phase with $\eta > 0$. Whether the N\'eel state persists at small hole doping depends on the hopping parameters and the interaction strength. Its instability toward a spiral state is favored by a larger interaction strength \cite{Chubukov1995}. Indeed, in a previous fRG+MF calculation at weaker coupling the N\'eel state was found to survive up to about 10 percent hole doping \cite{Yamase2016}.
\subsection{Spinon fluctuations}
%
\begin{figure}[h!]
\centering
\includegraphics[width=1.\textwidth]{fig4.png}
\caption{Out-of-plane (left panel) and in-plane (right panel) spatial ($J$) and temporal ($Z$) spin stiffnesses in the ground state ($T=0$) as functions of the filling $n$. In the N\'eel state (for $n \geq 1$) out-of-plane and in-plane stiffnesses coincide.}
\label{fig_pg: fig3}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig5.png}
\caption{Magnetic coherence length $\xi_A$ (left axis) and average spin wave velocity $c_s$ in the ground state as functions of the filling $n$.}
\label{fig_pg: fig4}
\end{figure}
%
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig6.png}
\caption{Fraction of condensed $z$-bosons $n_0$ at $T=0$ for two distinct choices of the ultraviolet cutoff ${\Lambda_\mathrm{uv}}$ as a function of the filling.}
\label{fig_pg: fig5}
\end{figure}
%
Once the magnetic order parameter $\Delta$ of the chargons and the wave vector $\mathbf{Q}$ have been computed, we are in a position to calculate the NL$\sigma$M parameters from the expressions presented in Sec.~\ref{sec_pg: spin stiff formalism}. In Fig.~\ref{fig_pg: fig3}, we plot results for the spatial and temporal spin stiffnesses $J^a_{\alpha\alpha}$ and $Z^a$ in the ground state. In the spiral state (for $n < 1$) out-of-plane and in-plane stiffnesses are distinct, while in the N\'eel state (for $n \geq 1$) they coincide. Actually, the order parameter defines an axis, not a plane, in the latter case. All the quantities except $Z^\parallel$ exhibit pronounced jumps between half-filling and infinitesimal hole-doping. These discontinuities are due to the sudden appearance of hole pockets around the points $(\frac{\pi}{2},\frac{\pi}{2})$ in the Brillouin zone \cite{Bonetti2022}. The spatial stiffnesses are almost constant over a broad range of hole-doping, with a small spatial anisotropy $J^a_{xx} \neq J^a_{yy}$. The temporal stiffnesses $Z^a$ exhibit a stronger doping dependence. The peak of $Z^\perp$ at $n \approx 0.79$ is associated with a van Hove singularity of the quasiparticle dispersion \cite{Bonetti2022}. On the electron-doped side all stiffnesses decrease almost linearly with the electron filling. The off-diagonal spin stiffnesses $J^a_{xy}$ and $J^a_{yx}$ vanish both in the N\'eel state and in the spiral state with $\mathbf{Q} = (\pi-2\pi\eta,\pi)$ or symmetry-related wave vectors. In Fig.~\ref{fig_pg: fig4}, we show the magnetic coherence length $\xi_A$ and the average spin wave velocity $c_s$ in the ground state. The coherence length is rather short and only weakly doping dependent from half-filling up to 15 percent hole-doping, while it increases strongly toward the spiral-to-paramagnet transition on the hole-doped side. On the electron-doped side it almost doubles from half-filling to infinitesimal electron doping. This jump is due to the sudden appearance of electron pockets upon electron doping. Note that $\xi_A$ does not diverge at the transition to the paramagnetic state on the electron-doped side, as this transition is first order. The average spin wave velocity exhibits a pronounced jump at half-filling, which is inherited from the jumps of $J_{\alpha\alpha}^\perp$ and $Z^\perp$. Besides this discontinuity it does not vary much as a function of density, remaining finite at the transition points to a paramagnetic state, both on the hole- and electron-doped sides.
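Collecting the formulas of this and the previous sections, the chain from the computed stiffnesses to the $T=0$ condensate fraction is short enough to summarize in a few lines. The following Python sketch is ours and purely illustrative: all input numbers are placeholders rather than the values shown in the figures.
%
\begin{verbatim}
import numpy as np

# Placeholder inputs; in the text these come from the renormalized RPA
# evaluated on top of the magnetically ordered chargon state.
J_perp = np.array([[0.25, 0.0],
                   [0.0, 0.22]])   # spatial out-of-plane stiffness tensor
Z_perp = 1.0                       # temporal out-of-plane stiffness
J_A, m_A = 0.3, 1.1                # amplitude-channel stiffness and mass
C = 0.3                            # matching constant from the cutoff section

J = np.sqrt(np.linalg.det(J_perp))     # "average" spin stiffness
c_s = np.sqrt(J / Z_perp)              # average spin-wave velocity
xi_A = np.sqrt(J_A) / (2*m_A)          # magnetic coherence length
Lam_uv = C / xi_A                      # ultraviolet cutoff ansatz
J_c = c_s * Lam_uv / (4*np.pi)         # critical stiffness

# T = 0 solution of the large-N equation with condensate fraction n_0:
if J > J_c:
    n_0, m_s = 1.0 - J_c/J, 0.0        # magnetically ordered ground state
else:
    n_0, m_s = 0.0, 2*np.pi*J*((J_c/J)**2 - 1.0)  # gapped paramagnet
print(J, c_s, xi_A, Lam_uv, n_0, m_s)
\end{verbatim}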
We now investigate whether the magnetic order in the ground state is destroyed by quantum fluctuations or not. To this end we compute the boson condensate fraction $n_0$ as obtained from the large-$N$ expansion of the NL$\sigma$M. This quantity depends on the ultraviolet cutoff ${\Lambda_\mathrm{uv}}$. As a reference point, we may use the half-filled Hubbard model at strong coupling, as discussed in Sec.~\ref{sec_pg: cutoff}, which yields ${\Lambda_\mathrm{uv}} \approx 0.9$ and thereby fixes the constant in the ansatz Eq.~\eqref{eq_pg: Luv} to $C \approx 0.3$. In Fig.~\ref{fig_pg: fig5} we show the condensate fraction $n_0$ computed with two distinct choices of the ultraviolet cutoff: ${\Lambda_\mathrm{uv}} = {\Lambda_\mathrm{uv}}(n) = C/\xi_A(n)$ and ${\Lambda_\mathrm{uv}} = C/\xi_A(n=1)$. For the former choice the cutoff vanishes at the edge of the magnetic region on the hole-doped side, where $\xi_A$ diverges. One can see that $n_0$ remains finite for both choices of the cutoff in nearly the entire density range where the chargons order. Only near the hole-doped edge of the magnetic regime does $n_0$ vanish slightly above the mean-field transition point, and only if the ultraviolet cutoff is chosen to be density independent. The discontinuous drop of $n_0$ upon infinitesimal hole doping is due to the corresponding drop of the out-of-plane stiffness. In the weakly hole-doped region there is a substantial reduction of $n_0$ below one, for both choices of the cutoff. Except for the edge of the magnetic region on the hole-doped side, the choice of the cutoff has only a mild influence on the results, and the condensate fraction remains well above zero. Hence, we can conclude that the ground state of the Hubbard model with a moderate coupling $U = 4t$ is magnetically ordered over a wide density range. The spin stiffness is sufficiently large to protect the magnetic order against quantum fluctuations of the order parameter.
\subsection{Electron spectral function} \label{sec_pg: spectral function}
%
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{fig7.png}
\caption{Quasiparticle Fermi surfaces defined as zeros of the chargon quasiparticle energies $E_\mathbf{k}^\pm$ (left column) and momentum dependence of the electron spectral function at zero frequency (right column) for various electron densities. The temperature is $T=0.05t$.}
\label{fig_pg: fig6}
\end{figure}
%
%
Fractionalizing the electron operators as in Eq.~\eqref{eq_pg: electron fractionaliz.}, the electron Green's function assumes the form
%
\begin{eqnarray}
[\mathcal{G}^e_{jj'}(\tau)]_{\sigma\sigma'} &=& - \langle c_{j' \sigma'}(\tau) c^*_{j \sigma}(0) \rangle \nonumber \\
&=& - \langle [R_{j'}(\tau)]_{\sigma' s'}[R^*_j(0)]_{\sigma s} \, \psi_{j' s'}(\tau) \psi^*_{j s}(0) \rangle \, . \nonumber
\end{eqnarray}
%
To simplify this expression, one can decouple the average $\langle R R^* \psi\psi^*\rangle$ as $\langle RR^*\rangle\langle\psi\psi^*\rangle$, yielding~\cite{Borejsza2004,Scheurer2018,Wu2018}
%
\begin{equation}\label{eq_pg: Gelectron real space}
[\mathcal{G}^e_{jj'}(\tau)]_{\sigma\sigma'} \simeq -\langle[R_{j'}(\tau)]_{\sigma's'}[R_j^*(0)]_{\sigma s}\rangle\, \langle \psi_{j's'}(\tau) \psi^*_{js}(0)\rangle.
\end{equation}
%
The spinon Green's function can be computed from the NL$\sigma$M in the continuum limit.
Using the Schwinger boson parametrization~\eqref{eq_pg: R to z}, we obtain in the large-$N$ limit
%
\begin{equation}
\langle [R_{j'}(\tau)]_{\sigma's'}[R^*_j(0)]_{\sigma s}\rangle = -D(\mathbf{r}_j-\mathbf{r}_{j'},\tau)\delta_{\sigma\sigma'}\delta_{ss'}+n_0 \delta_{\sigma s}\delta_{\sigma's'}.
\end{equation}
%
The boson propagator $D(\mathbf{r},\tau)$ is the Fourier transform of
%
\begin{equation}
D(\mathbf{q},i\omega_m)=\frac{1/Z^\perp }{\omega_m^2+(J^\perp_{\alpha\beta}q_\alpha q_\beta)/Z^\perp +m_s^2},
\end{equation}
%
with $\omega_m=2\pi m T$ a bosonic Matsubara frequency. Fourier transforming Eq.~\eqref{eq_pg: Gelectron real space}, we obtain the electron Green's function in momentum representation
%
\begin{equation}\label{eq_pg: Ge before freq sum}
\mathcal{G}_{\sigma\sigma'}^e(\mathbf{k},\mathbf{k}',i\nu_n)= -T\sum_{\omega_m}\int_\mathbf{q} \tr\left[\mathbf{G}_{\mathbf{k}-\mathbf{q},\mathbf{k}'-\mathbf{q}}(i\nu_n-i\omega_m)\right] D(\mathbf{q},i\omega_m)\delta_{\sigma\sigma'} +n_0 \,[\mathbf{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)]_{\sigma\sigma'} ,
\end{equation}
%
where $\mathbf{G}_{\mathbf{k},\mathbf{k}'}(i\nu_n)$ is the mean-field chargon Green's function, given by Eq.~\eqref{eq_low_spiral: G unrotated}. We see that when $n_0=0$, the electron Green's function is diagonal in momentum, that is, it is translationally invariant, as the diagonal components of the chargon Green's function entering the trace are nonzero only for $\mathbf{k}=\mathbf{k}'$. Furthermore, in this case there is no spontaneous symmetry breaking, because $\mathcal{G}^e$ is proportional to the identity matrix in spin space. The first term in Eq.~\eqref{eq_pg: Ge before freq sum} describes incoherent excitations and it is the only contribution to the electron Green's function at finite temperature and in the quantum disordered regime, where $n_0=0$. Performing the bosonic Matsubara sum and analytically continuing to real frequencies ($i\nu_n\to\omega+i0^+$), the first term in~\eqref{eq_pg: Ge before freq sum} becomes
%
\begin{equation}
\begin{split}
G^e(\mathbf{k},\omega) = \sum_{\ell,p=\pm}\int_{|\mathbf{q}|\leq{\Lambda_\mathrm{uv}}} \frac{1}{4Z^\perp \omega^{\mathrm{sp}}_\mathbf{q}}\left(1+\ell\frac{h_{\mathbf{k}-\mathbf{q}}}{e_{\mathbf{k}-\mathbf{q}}}\right)\frac{f(pE^\ell_{\mathbf{k}-\mathbf{q}})+n_B(\omega^{\mathrm{sp}}_\mathbf{q})}{\omega+i0^+-E^\ell_{\mathbf{k}-\mathbf{q}}+p\,\omega^{\mathrm{sp}}_\mathbf{q}} + \{\mathbf{k}\to-\mathbf{k}\},
\end{split}
\end{equation}
%
where we have defined the spinon dispersion $\omega^\mathrm{sp}_\mathbf{q}=\sqrt{(J^\perp_{\alpha\beta} q_\alpha q_\beta)/Z^\perp + m_s^2}$, and $f(x)=(e^{x/T}+1)^{-1}$ and $n_B(x)=(e^{x/T}-1)^{-1}$ are the Fermi and Bose distribution functions, respectively. The electron spectral function is computed from
%
\begin{equation}\label{eq_pg: sp function electron}
A^e(\mathbf{k},\omega)=-\frac{1}{\pi} \mathrm{Im}\left\{G^e(\mathbf{k},\omega)+n_0 \left[G_\mathbf{k}(\omega)+\overline{G}_{\mathbf{k}-\mathbf{Q}}(\omega)\right]\right\},
\end{equation}
%
where $G_\mathbf{k}(\omega)$ and $\overline{G}_{\mathbf{k}-\mathbf{Q}}(\omega)$ are obtained by analytically continuing (that is, by replacing $i\nu_n\to\omega+i0^+$) Eqs.~\eqref{eq_low_spiral: Gk up unrotated} and~\eqref{eq_low_spiral: Gk down unrotated}, respectively. In the right column of Fig.~\ref{fig_pg: fig6}, we show the spectral function $A^e(\mathbf{k},\omega)$ at zero frequency as a function of momentum for various electron densities in the hole-doped regime. The temperature $T=0.05t$ is below the chargon ordering temperature $T^*$ in all cases.
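As an illustration of how the expression for $G^e(\mathbf{k},\omega)$ can be evaluated in practice, the following Python sketch integrates the incoherent term over a momentum grid, with the $i0^+$ replaced by a finite broadening $\delta$. It is ours and purely schematic: the chargon dispersions used for $E^\pm$, $h$, and $e$ are simplified spiral mean-field forms, and all parameter values are placeholders rather than those used for Fig.~\ref{fig_pg: fig6}.
%
\begin{verbatim}
import numpy as np

# All parameters are illustrative placeholders (units of t).
t, tp = 1.0, -0.2                  # hoppings
Delta, mu = 0.5, -0.8              # magnetic gap, chemical potential
eta = 0.05
Q = np.array([np.pi - 2*np.pi*eta, np.pi])   # spiral wave vector
T, delta = 0.05, 0.05              # temperature, broadening for i0^+
Zp, Jp, ms = 1.0, 0.25, 0.1        # spinon parameters
Lam = 0.9                          # ultraviolet cutoff

def eps(k):                        # bare dispersion
    return (-2*t*(np.cos(k[..., 0]) + np.cos(k[..., 1]))
            - 4*tp*np.cos(k[..., 0])*np.cos(k[..., 1]))

def fermi(x): return 0.5*(1.0 - np.tanh(x/(2*T)))   # stable Fermi function
def bose(x):  return 1.0/(np.exp(x/T) - 1.0)

def A_incoherent(k, omega=0.0, nq=64):
    """Incoherent spectral weight at momentum k and frequency omega."""
    qx = np.linspace(-Lam, Lam, nq)
    q = np.stack(np.meshgrid(qx, qx, indexing='ij'), axis=-1)
    inside = q[..., 0]**2 + q[..., 1]**2 <= Lam**2   # |q| <= Lambda_uv
    w_sp = np.sqrt(Jp*(q[..., 0]**2 + q[..., 1]**2)/Zp + ms**2)
    total = 0.0
    for kk in (k, -k):             # the {k -> -k} term
        g = 0.5*(eps(kk - q) + eps(kk - q + Q)) - mu
        h = 0.5*(eps(kk - q) - eps(kk - q + Q))
        e = np.sqrt(h**2 + Delta**2)
        for l in (-1.0, 1.0):      # chargon band index
            E = g + l*e
            for p in (-1.0, 1.0):  # pole index
                num = (1.0 + l*h/e)*(fermi(p*E) + bose(w_sp))
                total += np.sum(((num/(4*Zp*w_sp))
                                 / (omega + 1j*delta - E + p*w_sp))[inside])
    dq2 = (qx[1] - qx[0])**2 / (2*np.pi)**2          # integration measure
    return -np.imag(total)*dq2/np.pi

print(A_incoherent(np.array([0.4*np.pi, 0.4*np.pi])))
\end{verbatim}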
Since $n_0=0$ at any finite temperature, only the first term in Eq.~\eqref{eq_pg: sp function electron} contributes to the spectral function. The Fermi surface topology is the same as the one obtained from a mean-field approximation of spiral magnetic order~\cite{Eberlein2016}. At low hole doping, it originates from a superposition of hole pockets (see left column of Fig.~\ref{fig_pg: fig6}), where the spectral weight on the back sides is drastically suppressed by coherence factors, so that only the front sides are visible. The spinon fluctuations lead to a broadening of the spectral function, smearing out the Fermi surface. Since the spinon propagator does not depend on the fermionic momentum, the broadening occurs uniformly in the entire Brillouin zone. Hence, the backbending at the edges of the ``arcs'' obtained in our theory for $n=0.9$ is more pronounced than experimentally observed in cuprates. This backbending can be further suppressed by including a momentum-dependent self-energy or scattering rate with a larger imaginary part in the antinodal region~\cite{Mitscherling2021}. For fillings smaller than $n\approx 0.82$ the chargon hole pockets merge and the spectral function resembles the large bare Fermi surface (see last row of Fig.~\ref{fig_pg: fig6}).
%
\end{document}
\chapter*{Conclusion}
%
In this Thesis, we have dealt with two main problems. The first one was the identification of collective bosonic fluctuations in interacting systems, independent of the coupling strength, where the vertex function may exhibit an intricate dependence on momenta and frequencies. For the symmetric phase, we have combined the single-boson exchange (SBE) parametrization of the vertex function~\cite{Krien2019_I} with the functional renormalization group (fRG) and its fusion with dynamical mean-field theory (DMF\textsuperscript{2}RG). This allows not only for a clear and physically intuitive identification of the bosonic modes at play in the many-particle system, but also for a \emph{substantial} simplification of the complexity of the vertex function. In the symmetry-broken phases, this identification permits the explicit introduction of a bosonic field, describing order parameter fluctuations, and it therefore facilitates the study of fluctuation effects on top of mean-field solutions. The second problem we dealt with was the development of a theory for the pseudogap phase able to reconcile features typical of a magnetically ordered state, such as Fermi arcs in the spectral function~\cite{Eberlein2016} and a charge carrier drop~\cite{Mitscherling2018,Bonetti2020_I}, with the experimentally observed absence of long-range order. This is realized by fractionalizing the electron into a fermionic chargon, carrying the original electron charge, and a bosonic spinon, carrying the electron spin~\cite{Schulz1995}. The resulting theory acquires an SU(2) gauge redundancy~\cite{Scheurer2018}. While the chargon degrees of freedom can be treated within a mean-field-like (MF) approximation, giving some kind of magnetic order (often N\'eel or spiral antiferromagnetism), the computation of the spinon dynamics requires studying the fluctuations on top of the MF solution. We have therefore analyzed the long-wavelength and low-frequency properties of the directional fluctuations (Goldstone modes) of the spins in an itinerant spiral magnet and studied their damping rates due to their decay into particle-hole pairs.
We have also proven that a computation of the low-energy coefficients of the propagators of the Goldstone modes performed by expanding the corresponding susceptibilities is equivalent to computing the system's response to a fictitious SU(2) gauge field. Finally, we have applied the SU(2) gauge theory to the two-dimensional Hubbard model at moderate coupling and derived an effective non-linear sigma model (NL$\sigma$M) describing the slow and long wavelength dynamics of the spinons, which enabled us to study the pseudogap regime. In the following, we summarize the key results of each chapter.
%
\subsection*{Charge carrier drop driven by spiral antiferromagnetism}
%
In this chapter, we have performed a dynamical mean-field theory (DMFT) calculation in the magnetically ordered phase of the two-dimensional Hubbard model on a square lattice at strong coupling and finite temperature. We have found that over a broad doping regime spiral magnetic states have a lower energy than the N\'eel solution, and have a wave vector of the form $\mathbf{Q}=(\pi-2\pi\eta,\pi)$ (or symmetry related) with the incommensurability $\eta$ increasing monotonically as the hole doping $p$ is increased. The magnetic order parameter $\Delta$ decreases with $p$ and vanishes at a critical doping $p^*$. A zero temperature extrapolation gives an approximate linear dependence $\Delta(p)\propto p^*-p$ in a broad doping region below $p^*$. Spiral magnetic ordering leads to a Fermi surface reconstruction for $p<p^*$ that is responsible for the abrupt change in the charge carrier density. We have computed the longitudinal and Hall conductivities by inserting the magnetic gap $\Delta$, the wave vector $\mathbf{Q}$, and the quasiparticle renormalization $Z$ (extracted from the diagonal component of the DMFT self-energy) into transport equations for mean-field spin-density wave states with a phenomenological scattering rate~\cite{Mitscherling2018}. Calculations have been performed with band parameters mimicking the real compounds YBa\textsubscript 2Cu\textsubscript 3O\textsubscript y (YBCO) and La\textsubscript{2-x}Sr\textsubscript xCuO\textsubscript4 (LSCO). We found a pronounced drop in both the longitudinal conductivity and the Hall number in a narrow doping range below $p^*$, in agreement with experiments performed at high magnetic fields~\cite{Badoux2016}. For $p>p^*$ the calculated Hall number $n_H(p)$ is close to the na\"ively expected value $1+p$ for YBCO, while for LSCO parameters it deviates significantly. This is due to the fact that in this regime the band structure in the vicinity of the Fermi surface cannot be approximated by a simple parabolic dispersion (from which the $1+p$ behavior is derived). For $p<p^*$ and sufficiently far away from $p^*$, we find that $n_H(p)\sim p$, in agreement with the fact that the density of charge carriers is given by the volume of the Fermi pockets. The zero temperature extrapolation of our results as functions of the doping yields $p^*=0.21$ for LSCO parameters and $p^*=0.15$ for YBCO. Both values are in the correct range. To better reproduce the experimentally observed critical dopings one would probably need a modeling that goes beyond the single-band Hubbard model.
%
\subsection*{(Bosonized) fRG+MF approach to symmetry broken states}
%
In this chapter, we have performed a dynamical fRG analysis of magnetic and superconducting ordering tendencies in the 2D Hubbard model at moderate coupling $U=3t$.
We have combined a one-loop flow with coupled charge, magnetic and pairing channels in the symmetric phase above the critical fRG scale $\Lambda_c$ with a mean-field approximation with decoupled channels in the symmetry-broken regime below $\Lambda_c$. All along the calculation, the full frequency dependence of the two-particle vertex has been retained, thereby methodologically improving on the results of Ref.~\cite{Yamase2016}. For the parameters chosen, magnetism is the leading instability at $\Lambda_c$ in the hole doping range from half filling to about 20\%. Between 10\% and 20\% hole doping, a robust $d$-wave pairing gap has also been found, allowing for a computation of the superfluid phase stiffness and the Berezinskii-Kosterlitz-Thouless transition temperature $T_\mathrm{KT}$. In order to go beyond the mean-field approximation, one needs to account for order parameter fluctuations. This can be conveniently achieved by introducing a bosonic field by means of a Hubbard-Stratonovich transformation. However, this task may become difficult when the two-particle vertex at the critical scale exhibits an intricate dependence on momenta and frequencies. We have therefore devised a technique to factorize the singular part of the vertex at $\Lambda_c$ in order to introduce a bosonic field. We have subsequently reformulated the fRG+MF equations for a mixed boson-fermion system and proven that they reproduce the results of the ``fermionic'' framework and that they fulfill fundamental constraints such as the Goldstone theorem and the Ward identities associated with global symmetries. As a practical example of the feasibility of the method, we have studied the attractive 2D Hubbard model at half filling, and computed the superconducting order parameter. We have then computed and analyzed the frequency dependencies of the longitudinal and transverse Yukawa couplings, describing the interaction between the electrons and collective amplitude and phase fluctuations of the order parameter, respectively, as well as those of the so-called residual two-fermion interactions, representing all the non-factorizable (but not singular) contributions to the two-particle vertex.
%
\subsection*{SBE decomposition of the fRG}
%
In this chapter, we have applied the single-boson exchange (SBE) representation of the vertex function~\cite{Krien2019_I} to the fRG and DMF\textsuperscript{2}RG. This representation relies on a diagrammatic decomposition in contributions mediated by the exchange of a single boson in the different channels. We have recast the fRG flow equations for the two-particle vertex into SBE contributions and residual four-point vertices, which we call rest functions. This formulation leads to a substantial reduction of the numerical effort required to compute the vertex function. In fact, the SBE contributions consist of one screened interaction, representing the propagator of an effective boson, and two Yukawa couplings, describing the interaction between the electrons and the boson. While the vertex function is a challenging object to compute, as it depends on three variables $k_1$, $k_2$ and $k_3$, each of them combining momentum and frequency, the Yukawa coupling and the screened interaction come with a smaller memory cost, as they depend on only two variables and one variable, respectively.
Furthermore, we have shown that the rest functions are localized objects in frequency space, particularly at strong coupling, so that one can significantly restrict the total number of frequencies taken into account or even neglect all the non-SBE terms. The reduced numerical effort facilitates the applicability of the fRG and DMF\textsuperscript{2}RG to the physically most interesting regime of low temperatures. We have demonstrated the advantage of the implementation of the SBE decomposition by means of DMF\textsuperscript{2}RG calculations for the 2D Hubbard model performed up to very large interactions $U=16t$, at and away from half filling. We have specifically analyzed the impact of neglecting the rest function and observed only a marginal effect from weak to strong coupling. Moreover, the SBE decomposition allows for a physical identification of the collective modes at play in the system, and we have therefore employed it to diagnose the mechanism for $d$-wave pairing formation in the doped regime in terms of processes involving the exchange of magnetic and charge bosons.
%
\subsection*{Collective modes of metallic spiral magnets}
%
In this chapter, we have derived Ward identities for fermionic systems in which a global gauge symmetry is spontaneously broken. In particular, we have shown that the zero-energy and long-wavelength components of the gauge kernels are connected to the transverse susceptibilities of the order parameter by exact relations. We have analyzed several examples, namely a superconductor, a N\'eel antiferromagnet, and a spiral magnet. In the latter case, we have performed a random phase approximation (RPA) analysis and identified three Goldstone poles in the susceptibilities, one associated with in-plane, and two associated with out-of-plane fluctuations of the order parameter. Expanding the susceptibilities near their poles, we have derived expressions for the spin stiffness and spectral weights of the magnons (corresponding to the Goldstone modes) and checked that they coincide with those derived by computing the response of the system to a fictitious SU(2) gauge field, as predicted by the Ward identities. Moreover, we have determined the form and the size of the decay rates of the magnons due to Landau damping. The Landau damping of the in-plane mode has the same form as that of a N\'eel antiferromagnet~\cite{Sachdev1995} and is of the same order as the energy $\omega$ of the mode. By contrast, the out-of-plane modes possess a parametrically smaller Landau damping, of the order $\omega^{3/2}$, implying that they are asymptotically stable excitations in the low-energy limit. In the N\'eel antiferromagnet, we have also shown that the hydrodynamic relation for the magnon velocities $c_s=\sqrt{J/\chi^\perp}$, with $J$ the spin stiffness and $\chi^\perp$ the static transverse susceptibility, does not hold in the presence of gapless fermionic excitations. In fact, it must be replaced by $c_s=\sqrt{J/\chi^\perp_\mathrm{dyn}}$, where $\chi^\perp_\mathrm{dyn}$ is obtained from the transverse susceptibility $\chi^\perp(\mathbf{q},\omega)$ by taking the $\omega\to 0$ limit \emph{after} letting $\mathbf{q}\to\mathbf{0}$, that is, the limits are taken in the reverse order compared to $\chi^\perp$. The equality $\chi^\perp=\chi^\perp_\mathrm{dyn}$ only holds for insulating magnets at low temperatures. Similar relations hold for a spiral magnet, too.
We have complemented our analysis with a numerical evaluation of the spin stiffnesses, spectral weights, and decay rates for a specific two-dimensional model system. Some of the quantities exhibit peaks and discontinuities as a function of the electron density, which are related to changes of the Fermi surface topology and to special contributions arising in the N\'eel state.
%
\subsection*{SU(2) gauge theory of the pseudogap phase}
%
In this chapter, we have presented an SU(2) gauge theory of fluctuating magnetic order in the two-dimensional Hubbard model. The theory is based on a fractionalization of the electron field in fermionic chargons and bosonic spinons~\cite{Schulz1995,Borejsza2004,Scheurer2018}. We have treated the chargons within a renormalized mean-field theory with effective interactions obtained by a functional renormalization group flow, as described in Chapter~\ref{chap: fRG+MF}. We have found a broad density range in which they undergo N\'eel or spiral magnetic order below a density-dependent temperature $T^*$. We have treated the spinons, describing fluctuations of the spin orientation, within a gradient expansion, and found that their dynamics is governed by a non-linear sigma model (NL$\sigma$M). The parameters of the NL$\sigma$M, namely the spin stiffnesses, have been computed on top of the magnetically ordered chargon state using a renormalized RPA, closely following the formulas of Chapter~\ref{chap: low energy spiral}. At any finite temperature the spinon fluctuations prevent long-range order, in agreement with the Mermin-Wagner theorem, while at zero temperature they are not strong enough to destroy the magnetic order. Our approximations are valid for a weak or moderate Hubbard interaction $U$. It is possible that at strong coupling spinon fluctuations get enhanced, thus destroying long-range order \emph{even} in the ground state. Despite the moderate interaction strength chosen in our calculations, the phase below $T^*$, where the chargons order magnetically, displays all the important features typical of the pseudogap regime in high-$T_c$ cuprates. Even though spinon fluctuations destroy long-range order at any finite $T$, they do not strongly affect the electron spectral function, which remains similar to that of a magnetically ordered state, thus displaying Fermi arcs. They also do not affect charge transport significantly, that is, quantities like longitudinal or Hall conductivities can be computed within the ordered chargon subsystem, yielding~\cite{Eberlein2016,Mitscherling2018,Bonetti2020_I,Storey2016,Storey2017,Chatterjee2017,Verret2017} the drastic charge carrier drop observed at the onset of the pseudogap regime in hole-doped cuprates~\cite{Badoux2016,Collignon2017,Proust2019}. Spiral order of the chargons entails nematic order of the electrons. At low hole doping, the chargons form a N\'eel state at $T^*$, and a spiral state below $T_\mathrm{nem}$. Thus, the electrons undergo a nematic phase transition at a critical temperature \emph{below} the pseudogap temperature $T^*$. Evidence for a nematic transition at a temperature $T_\mathrm{nem}<T^*$ has recently been found in slightly underdoped YBCO~\cite{Grissonnanche2022}. For large hole doping, instead, the nematic transition occurs exactly at $T^*$. For electron doping nematic order is completely absent.
%
\section*{Outlook}
%
The results presented in this thesis constitute methodological advances and raise further questions beyond its scope. In the following, we briefly present several paths for extensions.
First of all, the parametrization of the vertex function in terms of single boson exchange processes and rest functions allows for a substantial reduction of the computational cost. In fact, the Yukawa coupling and the bosonic propagator depend on fewer arguments than the full two-particle vertex, while the rest function displays a fast decay with respect to all three of its frequency variables, especially in the strong coupling regime. The reduced numerical effort facilitates the applicability of the fRG and DMF\textsuperscript{2}RG to the most interesting regime of strong correlations and low temperatures. The SBE decomposition also offers the possibility to \emph{explicitly} introduce bosonic fields and therefore study the flow of mixed boson-fermion systems. This extension is particularly interesting for analyzing the impact of bosonic fluctuations on top of mean-field solutions below the (pseudo-)critical scale, where symmetry breaking occurs. The reformulation of the fRG+MF approach with the explicit introduction of a bosonic field offers, in this respect, a convenient starting point. The generalization of the SBE decomposition to other models with different lattices or non-local interactions, where the higher degree of frustration reduces the pseudo-critical temperatures, is also an interesting extension. A second path for extensions is given by refinements of the SU(2) gauge theory for the pseudogap regime. In this thesis, we have considered only N\'eel or spiral ordering of the chargons. In the ground state of the two-dimensional Hubbard model, however, there is a whole zoo of possible magnetic ordering patterns, and away from half filling N\'eel or spiral order do not always minimize the energy. One possible competitor is stripe order, where the spins are antiferromagnetically ordered with a unidirectional periodic modulation of the amplitude of the order parameter and of the electron density. If the chargon stripe phase were treated with the same formalism developed in this thesis, magnetic long-range order would be destroyed by directional fluctuations of the spins, while the charge density wave (CDW) may survive. In the rather generic case of an \emph{incommensurate} stripe ordering wave vector, charge order can also become fluctuating, owing to a soft \emph{sliding} mode that acts as a Goldstone mode and destroys the CDW at finite temperatures, thus explaining the experimental observation of fluctuating charge order within the pseudogap phase~\cite{Frano2020}. Another refinement of our SU(2) gauge theory, aimed at making it more quantitative, would be to circumvent the need for an \emph{ultraviolet cutoff} by formulating it on the lattice, that is, by avoiding the long-wavelength expansion. The weak coupling calculation presented in this thesis has revealed that quantum fluctuations of the spinons are not strong enough to destroy long-range order in the ground state, giving rise to exponentially small spin gaps at low temperatures. This is due to the large value of the magnetic coherence length $\xi_A$, which makes the ultraviolet cutoff ${\Lambda_\mathrm{uv}}$ small, thereby weakening quantum fluctuations. At \emph{strong coupling}, the situation might change, as $\xi_A$ gets drastically reduced and ${\Lambda_\mathrm{uv}}$ enhanced, thus possibly disordering the ground state. Furthermore, our theory does not take into account topological defects in the spin pattern, as we expect them to be suppressed at low temperatures and deep in the pseudogap phase.
However, at higher values of $T$ or near the critical doping at which the chargon order parameter vanishes, they may proliferate, potentially turning the sharp metal-to-pseudogap-metal transition into a crossover, similar to what is observed in experiments.
%
%
\end{document}
\chapter*{Abstract}
\addcontentsline{toc}{chapter}{Abstract}
\subfile{extra/abstract.tex}
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter*{Deutsche Zusammenfassung}
\addcontentsline{toc}{chapter}{Deutsche Zusammenfassung}
\subfile{extra/De_Zus.tex}
\clearpage{\pagestyle{empty}\cleardoublepage}
{\hypersetup{linkcolor=black}
\rhead[\fancyplain{}{\bfseries\leftmark}]{\fancyplain{}{\bfseries\thepage}}
\lhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries CONTENTS}}
\tableofcontents
\cleardoublepage}
\mainmatter
\addtocontents{toc}{\setcounter{tocdepth}{1}}
\phantomsection
\addcontentsline{toc}{chapter}{Introduction}
\setcounter{page}{1}
\subfile{chapters/00_introduction.tex}
\cleardoublepage
\subfile{chapters/1_methods.tex}
\cleardoublepage
\subfile{chapters/2_spiral_dmft.tex}
\cleardoublepage
\subfile{chapters/3_frg_mf.tex}
\cleardoublepage
\subfile{chapters/4_SBE_fRG.tex}
\cleardoublepage
\subfile{chapters/5_low_en_spiral.tex}
\cleardoublepage
\subfile{chapters/6_pseudogap.tex}
\cleardoublepage
\phantomsection
\chapter*{Conclusion}
\label{chap: conclusion}
\rhead[\fancyplain{}{\bfseries Conclusion}]{\fancyplain{}{\bfseries\thepage}}
\lhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries Conclusion}}
\addcontentsline{toc}{chapter}{Conclusion}
\subfile{chapters/99_conclusions.tex}
\cleardoublepage
\addtocontents{toc}{\setcounter{tocdepth}{0}}
\begin{appendices}
\rhead[\fancyplain{}{\bfseries Appendix}]{\fancyplain{}{\bfseries\thepage}}
\lhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries Appendix}}
\addcontentsline{toc}{part}{Appendix}
\part*{Appendix}
\subfile{appendices/symmV.tex}
\subfile{appendices/app_fRG+MF.tex}
\subfile{appendices/app_SBE_fRG.tex}
\subfile{appendices/app_low_en_spiral.tex}
\end{appendices}
\backmatter
\rhead[\fancyplain{}{\bfseries Bibliography}]{\fancyplain{}{\bfseries\thepage}}
\lhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries Bibliography}}
\nocite{apsrev42Control}
\bibliographystyle{mybst.bst}
\section{Introduction}
Mean field games with major and minor players were introduced to accommodate the presence of subgroups of players whose influence on the behavior of the remaining population does not vanish in the asymptotic regime of large games. In this paper we develop the theory of these dynamic games when the optimization problems faced by the players are over the dynamics of continuous time controlled processes with values in finite state spaces. The theory of finite state mean field games for a single homogeneous population of players was introduced in \cite{Gueant_tree,Gueant_congestion} and \cite{GomesMohrSouza_continuous}. The interested reader may also consult Chapter 7 of the book \cite{CarmonaDelarue_book_I} for a complete presentation of the theory. The present paper is concerned with the extension to models with major and minor players. We search for closed loop Nash equilibria and, for this reason, we use the approach which was advocated in \cite{CarmonaWang_LQ} and called \emph{an alternative approach} in Chapter 13 of \cite{CarmonaDelarue_book_II}.
\vskip 2pt
Our interest in mean field games with major and minor players when the state space is finite was sparked by the four state model \cite{KolokoltsovBensoussan} for the behavior of computer owners facing cyber attacks. Even though the model was not introduced and treated as a game with major and minor players, it clearly is of this type if the behaviors of the attacker and the targets are strategic. Practical applications amenable to these models abound, and a better theoretical understanding of their structures should lead to sorely needed numerical procedures to compute Nash equilibria. Early forms of mean field games with major and minor players appeared in \cite{Huang} in an infinite-horizon setting, in \cite{NguyenHuang1} for finite time horizons, and \cite{NourianCaines} offered a first generalization to non linear-quadratic cases. In these models, the state of the major player does not enter the dynamics of the states of the minor players: it only appears in their cost functionals. This was remedied in \cite{NguyenHuang2} for linear quadratic models. The asymmetry between major and minor players was emphasized in \cite{BensoussanChauYam}, where the authors insist on the fact that the statistical distribution of the state of a generic minor player should be derived endogenously. Like \cite{NourianCainesMalhame}, \cite{BensoussanChauYam} characterizes the limiting problem by a set of stochastic partial differential equations. However, \cite{BensoussanChauYam} seems to be solving a Stackelberg game, and only the population of minor players ends up in a Nash equilibrium. This is in contrast with \cite{CarmonaZhu}, which also insists on the endogenous nature of the statistical distribution of the state of a generic minor player, but which formulates the search for a mean field equilibrium as the search for a Nash equilibrium in a two player game over the time evolutions of states, some of which are of McKean-Vlasov type. The recent technical report \cite{JaimungalNourian} adds a major player to the particular case (without idiosyncratic random shocks) of the extended mean field game model of optimal execution introduced in Chapter 1 and solved in Chapter 4 of \cite{CarmonaDelarue_book_I}. In this paper, we cast the search for Nash equilibria as a search for fixed points of the best response function constructed from the optimization problems of both types of players.
Typically, in a mean field game with major and minor players, the dynamics of the state $X^0_t$ of the major player (as well as its costs) depend upon the statistical distribution $\mu_t$ of the state $X_t$ of a generic minor player. Throughout the paper we consider that the players are gender neutral and we use ``its'' instead of ``his'' or ``her''. In turn, the dynamics of the state $X_t$ of a generic minor player (as well as its costs) depend upon the values of the state $X^0_t$ and the control $\alpha^0_t$ of the major player, as well as the statistical distribution $\mu_t$ which captures the mean field interactions between the minor players. In this paper, we prove that the processes $(X_t^0, \mu_t)$ and $(X_t^0, X_t, \mu_t)$ are Markovian and we characterize their laws by their infinitesimal generators. We start from the finite player version of the model and show convergence when the number of minor players goes to infinity. We rely on standard results on the convergence of Markov semigroups. Note that the control of the major player implicitly influences $\mu_t$ through the major player's state, so the major player's optimization problem should be treated as an optimal control problem for McKean-Vlasov dynamics. On the other hand, for the representative minor player's problem, we are just dealing with a classical Markov decision problem in continuous time. This allows us to adapt to the finite state space setting the approach introduced in \cite{CarmonaWang_LQ} and reviewed in Chapter 13 of \cite{CarmonaDelarue_book_II} to define and construct Nash equilibria. We emphasize that these are Nash equilibria for the whole system \emph{Major + Minor Players} and not only for the minor players. This is fully justified by our results on the propagation of chaos and their applications to the proof that our mean field game equilibria provide approximate Nash equilibria for finite games, including both major and minor players.
\vskip 4pt
The paper is structured as follows. Games with finitely many minor players and a major player are introduced in Section \ref{se:finite}, where we explain the conventions and notations we use to describe continuous time controlled Markov processes in finite state spaces. We also identify the major and minor players by specifying the information structures available to them, the types of actions they can take, and the costs they incur. The short and non-technical Section \ref{se:mfg} describes the mean field game strategy and emphasizes the steps needed in the search for Nash equilibria for the system. This is in contrast with some earlier works in which the formulation of the problem led to Stackelberg equilibria, with only the minor players being in an approximate Nash equilibrium. To keep with the intuition that the mean field game strategy is to implement a form of limit when the number of minor players grows to infinity, Section \ref{se:convergence} considers the convergence of the state Markov processes in this limit, and identifies the optimization problems which the major and minor players need to solve in order to construct their best responses. This leads to the formalization of the search for a mean field equilibrium as the search for fixed points of the best response map so constructed. The optimization problems underpinning the definition of the best response map are studied in Section \ref{se:optimizations}.
There, we use dynamic programming to prove that the value functions of these optimization problems are viscosity solutions of HJB-type partial integro-differential equations (PIDEs). Section \ref{se:Nash} proves the existence of the best response map and of Nash equilibria under reasonable conditions. Next, Section \ref{se:master} gives a verification theorem based on the existence of a classical solution to the master equation. The longer Section \ref{se:chaos} proves that the solution of the mean field game problem provides approximate Nash equilibria for the finite player games. This vindicates our formulation as the right formulation of the problem if the goal is to find Nash equilibria for the system including both major and minor players. The strategy of the proof is by now standard in the literature on mean field games. It relies on propagation of chaos results. However, the latter are usually derived for stochastic differential systems with mean field interactions, and because we could not find the results we needed in the existing literature, we provide proofs of the main steps of the derivations of these results in the context of controlled Markov evolutions in finite state spaces. Finally, an appendix provides the proofs of some of the technical results used in the text.

\section{Game Model with Finitely Many Players}
\label{se:finite}
We consider a stochastic game in continuous time, involving a major player indexed by $0$ and $N$ minor players indexed from $1$ to $N$. The states of all the players $X_t^0, X_t^1, \dots, X_t^N$ are described by a continuous-time finite-state Markov process. Let us denote by $\{1,2,\dots,M^0\}$ the set of possible states of the major player, and by $\{1,2,\dots,M\}$ the set of possible states of the minor players. We introduce the empirical distribution of the states of the minor players at time $t$:
\[
\mu_t^N = \Big[\frac{1}{N}\sum_{n=1}^N \mathbbm{1}(X_t^n = 1), \frac{1}{N}\sum_{n=1}^N \mathbbm{1}(X_t^n = 2), \dots, \frac{1}{N}\sum_{n=1}^N \mathbbm{1}(X_t^n = M-1)\Big].
\]
We denote by $\mathcal{P}$ the $(M-1)$-dimensional simplex:
\[
\mathcal{P} := \{x\in\mathbb{R}^{M-1} \,|\, x_i \ge 0, \ \textstyle\sum_i x_i \le 1\}.
\]
Obviously, $\mu_t^N\in\mathcal P$. We consider continuous-time Markov dynamics according to which the rates of jump, say $q$, of the state of a generic minor player depend upon the value of its control, the empirical distribution of the states of all the minor players, as well as the major player's control and state. We denote by $A_0$ (resp. $A$) a convex set in which the major player (resp. all the minor players) can choose their controls. So we introduce a function $q$:
\[
[0,T]\times \{1,\dots, M\} ^2 \times A\times \{1, \dots, M^0\}\times A_0 \times \mathcal{P} \ni (t,i,j,\alpha,i^0,\alpha^0,x)\rightarrow q(t,i,j,\alpha, i^0, \alpha^0, x)
\]
and we make the following assumption on $q$:
\begin{hypothesis} \label{transrateminor}
For all $(t, \alpha, i^0,\alpha^0, x) \in [0,T]\;\times\;A\;\times\;\{1,\dots, M^0\}\;\times\;A_0\;\times\;\mathcal{P}$, the matrix $[q(t,i,j,\alpha, i^0, \alpha^0,x)]_{1\le i,j\le M}$ is a Q-matrix.
\end{hypothesis}
\noindent Recall that a matrix $Q=[Q(i,j)]_{i,j}$ is said to be a Q-matrix if $Q(i,j)\ge 0$ for $i\ne j$ and
$$
\sum_{j \neq i} Q(i,j) = -Q(i,i),\qquad \text{for all } i.
$$
\noindent Then we assume that at time $t$, if the state of minor player $n$ is $i$, this state will jump from $i$ to $j$ at a rate given by:
\[
q(t, i, j, \alpha^n_t, X_t^0, \alpha^0_t, \mu_t^N)
\]
if $X_t^0$ is the major player's state, $\alpha^0_t\in A_0$ is the major player's control, $\alpha_t^n \in A$ is the $n$-th minor player's control, and $\mu_t^N \in\mathcal{P}$ is the empirical distribution of the minor players' states. Our goal is to use these rates to completely specify the law of a continuous time process in the following way: if at time $t$ the $n$-th minor player is in state $i$ and uses control $\alpha^n_t$, if the major player is in state $X_t^0$ and uses the control $\alpha^0_t$, and if the empirical distribution of the states of the population of minor players is $\mu_t^N$, then the probability of player $n$ remaining in the same state during the infinitesimal time interval $[t,t+\Delta t)$ is $[1 + q(t,i,i,\alpha_t^n, X_t^0,\alpha^0_t, \mu_t^N) \Delta t + o(\Delta t)]$, whereas the probability of this state changing to another state $j$ during the same time interval is given by $[q(t,i,j,\alpha_t^n, X_t^0,\alpha^0_t, \mu_t^N) \Delta t + o(\Delta t)]$.
\vskip 4pt
Similarly, to describe the evolution of the state of the major player we introduce a function $q^0$:
\[
[0,T]\times \{1,\dots, M^0\} ^2 \times A_{0}\times \mathcal{P} \ni (t,i^0,j^0,\alpha^0,x)\rightarrow q^0(t,i^0,j^0,\alpha^0, x)
\]
which satisfies the following assumption:
\begin{hypothesis}\label{transratemajor}
For each $(t,\alpha^0, x) \in [0,T]\times A_{0}\times\mathcal{P}$, $[q^0(t,i^0,j^0,\alpha^0, x)]_{1\le i^0,j^0\le M^0}$ is a Q-matrix.
\end{hypothesis}
\noindent So if at time $t$ the state of the major player is $i^0$, its control is $\alpha^0_t\in A_0$, and the empirical distribution of the states of the minor players is $\mu_t^N$, we assume the state of the major player will jump to state $j^0$ at rate $q^0(t, i^0, j^0, \alpha^0_t, \mu_t^N)$.
\vskip 4pt
We now define the control strategies which are admissible to the major and minor players. In our model, we assume that the major player can only observe its own state and the empirical distribution of the states of the minor players, whereas each minor player can observe its own state, the state of the major player, as well as the empirical distribution of the states of all the minor players. Furthermore, we only allow for Markov strategies given by feedback functions. Therefore the major player's control should be of the form $\alpha^0_t = \phi^0(t, X_t^0, \mu_t^N)$ for some feedback function $\phi^0:[0,T]\times\{1,\cdots,M^0\}\times\mathcal P\mapsto A_0$, and the control of minor player $n$ should be of the form $\alpha_t^n = \phi^n(t, X_t^n, X_t^0, \mu_t^N)$ for some feedback function $\phi^n:[0,T]\times\{1,\cdots,M\}\times\{1,\cdots,M^0\}\times\mathcal P\mapsto A$. We denote the sets of admissible control strategies by $\AA^0$ and $\AA^n$ respectively. Depending upon the application, we may add more restrictive conditions to the definitions of these sets of admissible control strategies.
\vspace{3mm}
We now define the joint dynamics of the states of all the players. We assume that conditioned on the current state of the system, the changes of states are independent for different players.
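Operationally, these dynamics are easy to simulate by retaining, in each small time step, exactly the first order transition probabilities written out below. The following Python sketch is ours and purely illustrative; the rate functions and feedback functions are hypothetical placeholders (chosen so that the Q-matrix property of Hypotheses \ref{transrateminor} and \ref{transratemajor} holds), not objects defined in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, M0, T_end, dt = 50, 3, 2, 1.0, 1e-3

# Hypothetical placeholder rate and feedback functions; any functions
# with these signatures and the Q-matrix property would do.
def q(t, i, j, a, i0, a0, mu):  return 1.0 if j != i else -(M - 1)
def q0(t, i0, j0, a0, mu):      return 0.5 if j0 != i0 else -0.5*(M0 - 1)
def phi0(t, i0, mu):            return 0.0   # major player feedback
def phi(t, i, i0, mu):          return 0.0   # minor player feedback

X0, X = 1, np.ones(N, dtype=int)   # initial states (all players in state 1)
for step in range(int(T_end/dt)):
    t = step*dt
    # empirical distribution of the first M-1 states
    mu = np.array([(X == m).mean() for m in range(1, M)])
    a0 = phi0(t, X0, mu)
    # major player: jump to j0 with probability q0(...)*dt + o(dt)
    for j0 in range(1, M0 + 1):
        if j0 != X0 and rng.random() < q0(t, X0, j0, a0, mu)*dt:
            X0 = j0
            break
    # minor players: independent jumps given the current configuration
    for n in range(N):
        a = phi(t, X[n], X0, mu)
        for j in range(1, M + 1):
            if j != X[n] and rng.random() < q(t, X[n], j, a, X0, a0, mu)*dt:
                X[n] = j
                break
print(X0, mu)
\end{verbatim}
Each player checks its possible jumps independently within each time step, which is precisely the first order (in $\Delta t$) content of the product formula that follows.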
Written out, this independence means that for all $i^0, i^1, \dots, i^N$ and $j^0, j^1, \dots, j^N$, where $i^0, j^0 \in \{1,2,\dots, M^0\}$ and $i^n, j^n \in \{1,2,\dots, M\}$ for $n=1,2,\dots, N$, we have:
\begin{align*}
&\mathbb{P}[X_{t+\Delta t}^0 = j^0, X_{t+\Delta t}^1 = j^1, \dots, X_{t+\Delta t}^N = j^N | X_t^0 = i^0, X_t^1 = i^1, \dots, X_t^N = i^N]\\
&\hskip 35pt := [\mathbbm{1}_{i^0 = j^0} + q^0(t, i^0, j^0, \phi^0(t, i^0, \mu_t^N), \mu_t^N)\Delta t + o(\Delta t)]\\
&\hskip 55pt \times \prod_{n=1}^N [\mathbbm{1}_{i^n = j^n} + q^n(t, i^n, j^n, \phi^n(t, i^n, i^0, \mu_t^N), i^0, \phi^0(t,i^0, \mu_t^N), \mu_t^N)\Delta t + o(\Delta t)]
\end{align*}
Formally, this statement is equivalent to the definition of the Q-matrix, say $Q^{(N)}$, of the continuous-time Markov chain $(X_t^0, X_t^1, X_t^2, \dots, X_t^N)$. The state space of this Markov chain is the Cartesian product of the individual players' state spaces. Therefore $Q^{(N)}$ is a square matrix of size $M^0 \cdot M^N$. The non-diagonal entries of $Q^{(N)}$ can be found by simply retaining the first order terms in $\Delta t$ when expanding the above product of probabilities. Because we assume that the transitions of states are independent among the individual players, $Q^{(N)}$ is a sparse matrix. Each individual player aims to minimize its expected cost in the game. We assume that these costs are given by:
\begin{align*}
J^{0,N}(\alpha^0, \alpha^1,\dots, \alpha^N ) :=\;\;&\; \mathbb{E}\left[\int_{0}^{T}f^0(t,X_t^0, \phi^0(t, X_t^0, \mu_t^N), \mu_t^N) dt + g^0(X_T^0, \mu_T^N)\right]\\
J^{n,N}(\alpha^0, \alpha^1,\dots, \alpha^N ) :=\;\;&\; \mathbb{E}\left[\int_{0}^{T}f^n(t, X_t^n, \phi^n(t, X_t^n, X_t^0, \mu_t^N),X_t^0, \phi^0(t, X_t^0, \mu_t^N), \mu_t^N) dt + g^n(X_T^n, X_T^0, \mu_T^N)\right].
\end{align*}
In this paper, we focus on the special case of symmetric games, for which all the minor players share the same transition rate function and cost function, i.e.\ $q^n:=q$, $f^n:=f$, $g^n:=g$, $J^{n,N}:=J^N$, and we search for symmetric Nash equilibria. We say that a couple of feedback functions $(\phi^0,\phi)$ forms a symmetric Nash equilibrium if the controls $(\boldsymbol{\alpha}^0,\boldsymbol{\alpha}^1,\cdots,\boldsymbol{\alpha}^N)$ given by $\alpha^0_t=\phi^0(t,X^0_t,\mu^N_t)$ and $\alpha^n_t=\phi(t,X^n_t,X^0_t,\mu^N_t)$ for $n=1,\cdots, N$, form a Nash equilibrium in the sense that:
\begin{align*}
J^{0,N}(\boldsymbol{\alpha}^0, \boldsymbol{\alpha}^1,\dots, \boldsymbol{\alpha}^N) \le\;\;& J^{0,N}(\boldsymbol{\alpha}^{'0}, \boldsymbol{\alpha}^1,\dots, \boldsymbol{\alpha}^N)\\
J^{N}(\boldsymbol{\alpha}^0, \boldsymbol{\alpha}^1,\dots, \boldsymbol{\alpha}^n, \dots, \boldsymbol{\alpha}^N) \le\;\;& J^{N}(\boldsymbol{\alpha}^0, \boldsymbol{\alpha}^1,\dots, \boldsymbol{\alpha}^{'n}, \dots, \boldsymbol{\alpha}^N)
\end{align*}
for any choices of alternative admissible controls $\boldsymbol{\alpha}^{'0}$ and $\boldsymbol{\alpha}^{'n}$ of the forms $\alpha^{'0}_t=\phi^{'0}(t,X^0_t,\mu^N_t)$ and $\alpha^{'n}_t=\phi'(t,X^n_t,X^0_t,\mu^N_t)$. In order to simplify the notation, we will systematically use the following notations when there is no risk of confusion.
When $\boldsymbol{\alpha}^0\in\mathbb{A}^0$ is given by a feedback function $\phi^0$ and $\boldsymbol{\alpha}\in\mathbb{A}$ is given by a feedback function $\phi$, we denote by $q^0_{\phi^0}$, $q_{\phi^0,\phi}$, $f^0_{\phi^0}$ and $f_{\phi^0,\phi}$ the functions: \begin{align*} q^{0}_{\phi^0}(t,i^0,j^0,x) :=&\;\; q^0(t,i^0,j^0, \phi^0(t,i^0,x),x)\\ q_{\phi^0,\phi}(t,i,j,i^0,x) :=&\;\; q(t,i,j,\phi(t,i,i^0,x),i^0,\phi^0(t,i^0,x), x)\\ f^0_{\phi^0}(t,i^0,x) := &\;\;f^0(t,i^0, \phi^0(t,i^0,x),x)\\ f_{\phi^0,\phi}(t,i,i^0,x) := &\;\;f(t,i,\phi(t,i,i^0,x),i^0,\phi^0(t,i^0,x),x) \end{align*} \section{Mean Field Game Formulation} \label{se:mfg} Solving for Nash equilibria when the number of players is finite is challenging. There are many reasons why the problem quickly becomes intractable. Among them is the fact that as the number of minor players increases, the dimension of the Q-matrix of the system grows exponentially. The paradigm of Mean Field Games consists in the analysis of the limiting case where the number $N$ of minor players tends to infinity. In this asymptotic regime, one expects that simplifications due to averaging effects will make it easier to find asymptotic solutions which could provide approximate equilibria for finite player games when the number $N$ of minor players is large enough. The rationale for such a belief is based on the intuition provided by classical results on the propagation of chaos for large particle systems with mean field interactions. We develop these results later in the paper. \vskip 4pt The advantage of considering the limit case is two-fold. First, when $N$ goes to infinity, the empirical distribution of the minor players' states converges to a random measure $\mu_t$ which we expect to be the conditional distribution of any minor player's state, i.e.: \[ \mu_t^N \rightarrow \mu_t := \bigl(\mathbb{P}[X_t^n = 1 | X_t^0], \mathbb{P}[X_t^n = 2 | X_t^0], \dots, \mathbb{P}[X_t^n = M-1 | X_t^0]\bigr). \] As we shall see in the next section, when considered together with the major player's state and the state of one minor player, the resulting process is Markovian and its infinitesimal generator has a tractable form. Also, when the number of minor players goes to infinity, small perturbations of a single minor player's strategy have no significant influence on the distribution of the minor players' states. This gives rise to a simple formulation of the typical minor player's search for the best response to the control choices of the major player. In the limit $N\to\infty$, we understand a Nash equilibrium as a situation in which neither the major player, nor a typical minor player, could be better off by changing control strategy. In order to formulate this limiting problem, we need to define the joint dynamics of the states of the major player and a representative minor player, making sure that the dynamics of the state of the major player depend upon the statistical distribution of the states of the minor players, and that the dynamics of the state of the representative minor player depend upon the values of the state and the control of the major player, its own state, and the statistical distribution of the states of all the minor players.
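\vskip 4pt Before turning to the search for best responses, it may help to see the finite-player dynamics of the previous section in action. The following Python sketch is purely illustrative and is not part of our argument: it runs a Gillespie-type simulation of the $(N+1)$-player chain for hypothetical choices of the rate functions $q$, $q^0$ and of the feedback functions $\phi$, $\phi^0$; all names, numerical values and functional forms below (\texttt{phi0}, \texttt{phi}, \texttt{q0}, \texttt{q}, \texttt{simulate}) are placeholders, not objects defined in the text.

\begin{verbatim}
import numpy as np

# Illustrative only: simulate the (N+1)-player chain and track the
# empirical measure mu^N.  All rates, feedbacks and constants below
# are hypothetical placeholders.
M0, M, N, T = 2, 3, 200, 1.0

def phi0(t, i0, mu):            # major player's feedback (placeholder)
    return 0.0

def phi(t, i, i0, mu):          # minor players' common feedback (placeholder)
    return 0.0

def q0(t, i0, j0, a0, mu):      # major player's jump rate i0 -> j0 (placeholder)
    return 1.0 + mu[0]

def q(t, i, j, a, i0, a0, mu):  # minor player's jump rate i -> j (placeholder)
    return 0.5 + 0.5 * (j == i0 % M)

def simulate(rng):
    t, X0, X = 0.0, 0, rng.integers(0, M, size=N)
    path = []
    while t < T:
        # identify mu^N with its first M-1 coordinates, as in the text
        mu = np.bincount(X, minlength=M)[: M - 1] / N
        a0 = phi0(t, X0, mu)
        events, rates = [], []
        for j0 in range(M0):                      # jumps of the major player
            if j0 != X0:
                events.append((-1, j0))
                rates.append(q0(t, X0, j0, a0, mu))
        for n in range(N):                        # jumps of each minor player
            a = phi(t, X[n], X0, mu)
            for j in range(M):
                if j != X[n]:
                    events.append((n, j))
                    rates.append(q(t, X[n], j, a, X0, a0, mu))
        total = sum(rates)
        t += rng.exponential(1.0 / total)         # exponential holding time
        n, j = events[rng.choice(len(events), p=np.array(rates) / total)]
        if n < 0:
            X0 = j                                # the major player jumps
        else:
            X[n] = j                              # one minor player jumps
        path.append((t, X0, mu.copy()))
    return path

print(simulate(np.random.default_rng(0))[-1])
\end{verbatim}

Running the sketch for increasing values of $N$, one observes that the path $t\mapsto\mu^N_t$ fluctuates less and less, in line with the convergence results of the next section.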
As argued in \cite{CarmonaWang_LQ}, and echoed in Chapter 13 of \cite{CarmonaDelarue_book_II}, the best way to search for Nash equilibria in the mean field limit of games with major and minor players is first to identify the best response map of the major player and of a representative of the minor players by solving the optimization problems for the strategies of 1) the major player in response to the field of minor players, as represented by a special minor player with special state dynamics which we call a representative minor player, and 2) the representative minor player in response to the behavior of the major player and the other minor players. Solving these optimization problems separately provides a definition of the best response map for the system. One can then search for a fixed point of this best response map. So the search for Nash equilibria for the mean field game with major and minor players can be summarized in the following two steps. \vskip 6pt\noindent \textbf{Step 1} (Identifying the Best Response Map) \vskip 6pt\noindent \textbf{1.1} (Major Player's Problem) \vskip 1pt\noindent Fix an admissible strategy $\boldsymbol{\alpha}\in\mathbb{A}$ of the form $\alpha_t = \phi(t, X_t, X^0_t, \mu_t)$ for the representative minor player, and solve the optimal control problem of the major player given that all the minor players use the feedback function $\phi$. We denote by $\boldsymbol{\phi}^{0,*}(\phi)$ the feedback function giving the optimal strategy of this optimization problem. Notice that, in order to formulate this optimization problem properly, we need to define Markov dynamics for the couple $(X^0_t,X_t)$ where $X_t$ is interpreted as the state of a representative minor player, and the (random) measure $\mu_t$ has to be defined clearly. This is done in the next section as the solution of the major player optimization problem, the Markovian dynamics being obtained as the limit of games with $N$ minor players. \vskip 6pt\noindent \textbf{1.2} (Representative Minor Player's Problem) \vskip 1pt\noindent We first single out a minor player and we search for its best response to the rest of the players. So we fix an admissible strategy $\boldsymbol{\alpha}^0\in \mathbb{A}^0$ of the form $\alpha^0_t= \phi^0(t, X_t^0, \mu_t)$ for the major player, and an admissible strategy $\boldsymbol{\alpha}\in \AA$ of the form $\alpha_t= \phi(t, X_t, X_t^0, \mu_t)$ for the representative of the remaining minor players. We then assume that the minor player which we singled out responds to the other players by choosing an admissible strategy $\bar\boldsymbol{\alpha}\in \AA$ of the form $\bar\alpha_t= \bar\phi(t,\o X_t, X_t^0, \mu_t)$. Clearly, if we want to find the best response of the singled out minor player to the behavior of the major player and the field of the other minor players as captured by the behavior of the representative minor player, we need to define Markov dynamics for the triple $(X^0_t,\o X_t,X_t)$, and define clearly what we mean by the (random) measure $\mu_t$. This is done in the next section as the solution of the representative minor player optimization problem, the Markovian dynamics being obtained as the limit of games with $N$ minor players. We denote by $\boldsymbol{\phi}^{*}(\phi^0,\phi)$ the feedback function giving the optimal strategy of this optimization problem.
\vskip 6pt\noindent \textbf{Step 2} (Search for a Fixed Point of the Best Response Map) \vskip 1pt\noindent A Nash equilibrium for the mean field game with major and minor players is a fixed point $[\hat\phi^{0},\hat\phi] = [\boldsymbol{\phi}^{0,*}(\hat\phi), \boldsymbol{\phi}^*(\hat\phi^0, \hat\phi)]$. \vskip 4pt Clearly, in order to take Step 1, we need to properly formulate the search for these two best responses, and study the limit $N\to\infty$ in both cases of interest. \section{Convergence of Large Finite Player Games} \label{se:convergence} Throughout the rest of the paper, we make the following assumptions on the regularity of the transition rate and cost functions: \begin{hypothesis}\label{lipassump} There exists a constant $L>0$ such that for all $i,j\in\{1,\dots, M\}$, $i^0, j^0 \in \{1, \dots, M^0\}$ and all $t,t'\in[0,T]$, $x,x'\in \mathcal{P}$, $\alpha^0, \alpha^{0 '} \in A_0$ and $\alpha, \alpha'\in A$, we have: \begin{align*} &|(f,f^0,g,g^0, q, q^0)(i,j,i^0, j^0, t,x,\alpha^0, \alpha) - (f,f^0,g,g^0, q, q^0)(i,j,i^0, j^0, t',x',\alpha^{0'}, \alpha')|\\ &\hskip 35pt \le L (|t-t'| + \|x-x'\| + \| \alpha^0 - \alpha^{0'}\| + \|\alpha -\alpha'\|) \end{align*} \end{hypothesis} \begin{hypothesis}\label{boundassump} There exists a constant $C>0$ such that for all $i,j\in\{1,\dots, M\}$, $i^0\in \{1, \dots, M^0\}$ and all $t\in[0,T]$, $x\in \mathcal{P}$, $\alpha^0\in A_0$ and $\alpha\in A$, we have: \[ |q(t,i,j,\alpha,i^0, \alpha^0, x)|\le C \] \end{hypothesis} Finally, we add a boundary condition on the Markov evolution of the minor players. Intuitively speaking, this assumption rules out extinction: it says that a minor player can no longer change its state when the proportion of minor players in the same state falls below a certain threshold. \begin{hypothesis} \label{boundaryassump} There exists a constant $\epsilon>0$ such that for all $t\in[0,T]$, $i,j \in \{1,\dots, M-1\}$ with $i\neq j$, $i^0\in\{1,\dots,M^0\}$, $\alpha^0\in A_0$ and $\alpha\in A$, we have: \begin{align*} x_i < \epsilon \implies&\;\; q(t,i,j,\alpha, i^0, \alpha^0, x) = 0\\ 1 - \sum_{k=1}^{M-1} x_k < \epsilon \implies&\;\; q(t,M,i,\alpha, i^0, \alpha^0, x) = 0. \end{align*} \end{hypothesis} \noindent The purpose of this section is to identify the state dynamics which should be posited in the formulation of the mean field game problem with major and minor players. In order to do so, we formulate the search for the best response of each player by first setting up the game with finitely many minor players, and then letting the number of minor players go to $\infty$ to identify the dynamics over which the best response should be computed in the limit. \subsection{Major Player's Problem with Finitely Many Minor Players} For any integer $N$ (fixed for the moment), we consider a game with $N$ minor players, and we compute the best response of the major player when the minor players choose control strategies $\boldsymbol{\alpha}^n=(\alpha^n_t)_{0\le t\le T}$ given by the same feedback function $\phi$ so that $\alpha^n_t = \phi(t, X_t^{n,N}, X^{0,N}_t, \mu^N_t)$ for $n=1,\cdots,N$. Here $X_t^{n,N}$ denotes the state of the $n$-th minor player at time $t$, $X^{0,N}_t$ the state of the major player, and $\mu^N_t$ the empirical distribution of the states of the $N$ minor players at time $t$.
The latter is a probability measure on the state space $E=\{1,\cdots,M\}$, and for the sake of convenience, we shall identify it with the element: \[ \mu_t^N = \frac{1}{N}\bigg(\sum_{n=1}^N \mathbbm{1}(X_t^{n,N} = 1), \sum_{n=1}^N \mathbbm{1}(X_t^{n,N} = 2), \dots, \sum_{n=1}^N \mathbbm{1}(X_t^{n,N} = M-1)\bigg) \] of the simplex. So for each $i\in E$, $\mu_t^N(i)$ is the proportion of minor players whose state at time $t$ is equal to $i$. Consequently, for $N$ fixed, $\mu^N_t$ can be viewed as an element of the finite space $\{0,1/N,\cdots,(N-1)/N,1\}^{M-1}$. For the sake of definiteness, we denote by $\mathcal P^N$ the set of possible values of $\mu^N_t$; in other words, we set: \[ \mathcal{P}^N:=\bigl\{\frac{1}{N}(n_1, n_2, \dots, n_{M-1}) ;\; n_i \in \mathbb{N}, \sum_i n_i \le N\bigr\}. \] Given the choice of control strategies made by the minor players, we denote by $\boldsymbol{\alpha}^{0}=(\alpha^0_t)_{0\le t\le T}$ the control strategy of the major player, and we study the time evolution of the state of the system given these choices of control strategies. Later on, we shall find the optimal choice of the major player's control $\boldsymbol{\alpha}^{0}$ given by a feedback function $\phi^0$ in response to the choice of the feedback function $\phi$ of the minor players. While this optimization should be done over the dynamics of the whole state $(X^{0,N}_t,X^{1,N}_t,\cdots,X^{N,N}_t)$, we notice that the process $(X^{0,N}_t,\mu^N_t)_{0\le t\le T}$ is sufficient to define the optimization problem of the major player, and that it is also a continuous time Markov process in the finite state space $\{1,\dots,M^0\}\times\mathcal{P}^N$. \vskip 4pt Our goal is to show that as $N\to\infty$, the Markov process $(X_t^{0,N}, \mu_t^N)_{0\le t\le T}$ converges in some sense to a Markov process $(X_t^0, \mu_t)_{0\le t\le T}$. This will allow us to formulate the optimization problem of the major player in the mean field limit in terms of this limiting Markov process. \vskip 4pt For each integer $N$, we denote by $\mathcal G^{0,N}_{\phi^0, \phi}$ the infinitesimal generator of the Markov process $(X_t^{0,N}, \mu_t^N)_{0\le t\le T}$. Since the process is not time homogeneous, when we say infinitesimal generator, we mean the infinitesimal generator of the space-time process $(t,X_t^{0,N}, \mu_t^N)_{0\le t\le T}$. Except for the partial derivative with respect to time, this infinitesimal generator is given by the Q-matrix of the process, namely the instantaneous rates of jump in the state space $\{1,\dots,M^0\}\times\mathcal{P}^N$. So if $F: [0,T] \times \{1, 2, \dots, M^0\} \times \mathcal{P}^N\rightarrow \mathbb{R}$ is $C^1$ in time, \begin{equation} \label{generatorNmajor} \begin{aligned} [\mathcal G^{0,N}_{\phi^0, \phi} F] (t, i^0, x)=&\;\partial_t F(t, i^0, x)+ \sum_{j^0 \neq i^0} \bigl(F(t, j^0, x) - F(t, i^0, x)\bigr) q^0_{\phi^0}(t, i^0, j^0, x)\\ &\; + \sum_{j\neq i}\bigl(F(t, i^0, x + \frac{1}{N}e_{ij} ) -F(t, i^0, x )\bigr) N x_i q_{\phi^0,\phi}(t, i, j, i^0, x), \end{aligned} \end{equation} where the first summation on the right-hand side corresponds to jumps of the state of the major player and the terms in the second summation account for the jumps of the state of one minor player from $i$ to $j$.
Here we code the change in the empirical distribution $x$ of the states of the minor players caused by the jump from $i\in\{1,\cdots,M\}$ to $j\in\{1,\cdots,M\}$ with $j\ne i$, of the state of a single minor player as $(1/N)e_{ij}$ with the notation $e_{ij} := e_j \mathbbm{1}_{j\neq M} - e_i \mathbbm{1}_{i\neq M}$ where $e_i$ stands for the $i$-th vector in the canonical basis of the space $\mathbb{R}^{M-1}$. We have also used the notation $x_M = 1 - \sum_{i=1}^{M-1} x_i$ for the sake of simplicity. \vskip 4pt Notice that the two summations appearing in \eqref{generatorNmajor} correspond to finite difference operators which are bounded. So the domain of the operator $\mathcal G^{0,N}_{\phi^0, \phi}$ is nothing other than the domain of the partial derivative with respect to time. Notice also that the sequence of generators ${\mathcal G}^{0,N}_{\phi^0, \phi}$ converges, at least formally, toward a limit which can easily be identified. Indeed, it is clear from the definition \eqref{generatorNmajor} that $[\mathcal G^{0,N}_{\phi^0, \phi} F] (t, i^0, x)$ still makes sense if $x\in\mathcal P$, where $\mathcal P$ is the $(M-1)$-dimensional simplex. Moreover, if $F: [0,T] \times \{1, 2, \dots, M^0\} \times \mathcal{P}\rightarrow \mathbb{R}$ is $C^1$ in both variables $t$ and $x$, we have $[\mathcal G^{0,N}_{\phi^0, \phi} F] (t,i^0,x) \rightarrow [\mathcal G^{0}_{\phi^0, \phi} F](t,i^0,x)$ defined by: \begin{align*} [\mathcal G^{0}_{\phi^0, \phi} F](t,i^0,x)&:= \partial_t F(t, i^0, x) + \sum_{j^0\neq i^0}[F(t, j^0, x) - F(t, i^0, x)] q^0_{\phi^0}(t,i^0, j^0, x)\\ &+ \sum_{i,j=1}^{M-1} \partial_{x_j} F(t, i^0, x) x_i q_{\phi^0,\phi}(t,i,j,i^0, x) +(1- \sum_{k=1}^{M-1} x_k) \sum_{j=1}^{M-1} \partial_{x_j} F(t, i^0, x) q_{\phi^0,\phi}(t,M, j, i^0, x). \end{align*} So far, we have a sequence of time-inhomogeneous Markov processes $(X_t^{0, N}, \mu_t^N)$ characterized by their infinitesimal generators $\mathcal{G}^{0,N}_{\phi^0, \phi}$, which converge to $\mathcal{G}^{0}_{\phi^0, \phi}$. We now aim to show the existence of a limiting Markov process with infinitesimal generator $\mathcal{G}^{0}_{\phi^0, \phi}$. The proof consists of first showing the existence of a Feller semigroup generated by the limiting generator $\mathcal{G}^{0}_{\phi^0, \phi}$, and then applying an argument of convergence of semigroups. \begin{remark} The standard results in the theory of semigroups are tailor-made for time-homogeneous Markov processes. However, they can easily be adapted to the case of time-inhomogeneous Markov processes by considering the space-time extension, namely by augmenting the process $(X_t^{0, N},\mu_t^N)$ into $(t, X_t^{0, N}, \mu_t^N)$ and considering uniform convergence on all bounded time intervals. \end{remark} \vskip 4pt Let us introduce some notations which are useful for the functional analysis of the infinitesimal generators and their corresponding semigroups. We set $E^N=[0,T]\times \{1,\dots,M^0\}\times \mathcal{P}^N$ and $E^\infty=[0,T]\times \{1,\dots,M^0\}\times \mathcal{P}$ for the state spaces, and we denote by $C(E^\infty)$ the Banach space of real-valued continuous functions on $E^\infty$ equipped with the norm $\|F\|_\infty=\sup_{t,i^0,x} |F(t,i^0,x)|$. We also denote by $C^1(E^\infty)$ the collection of functions in $C(E^\infty)$ that are $C^1$ in $t$ and $x$ for all $i^0 \in \{1,\dots,M^0\}$. Note that the Markov process $(t, X_t^{0,N},\mu_t^N)$ lives in $E^N$ while the candidate limiting process $(t, X_t^{0},\mu_t)$ lives in $E^\infty$.
The difference is that $\mu_t^N$ only takes values in $\mathcal{P}^N$, which is a finite subset of $\mathcal{P}$. Thus if we want to show the convergence, we need to recast all the processes on the same state space, and our first step should be to extend the definition of $(t,X_t^{0,N},\mu_t^N)$ to a Markov process taking values in $E^\infty$. To do so, we extend the definition of the generator $\mathcal G^{0,N}_{\phi^0,\phi}$ to accommodate functions $F$ defined on the whole of $E^\infty$: \begin{align*} [\mathcal{G}^{0,N}_{\phi^0,\phi} F] (t, i^0, x)=\;&\partial_t F(t, i^0, x)+ \sum_{j^0 \neq i^0} \bigl(F(t, j^0, x) - F(t, i^0, x)\bigr) q^0_{\phi^0}(t, i^0, j^0, x)\\ &\; + \sum_{j\neq i}\bigl(F(t, i^0, x + \frac{1}{N}e_{ij} ) -F(t, i^0, x )\bigr) N x_i \mathbbm{1}_{x_i \ge \frac{1}{N}} q_{\phi^0,\phi}(t, i, j, i^0, x). \end{align*} We claim that for $N$ large enough, $\mathcal{G}^{0,N}_{\phi^0,\phi}$ generates a Markov process with a Feller semigroup taking values in $E^\infty$, and that when the initial distribution is a probability measure on $\{1,\dots, M^0\}\times\mathcal{P}^N$, this process has exactly the same law as $(X_t^{0,N}, \mu_t^N)$. To see why this is true, let us denote for all $x\in\mathcal{P}$ the set $\mathcal{P}_{x}^N:=(x + \frac{1}{N}\mathbb{Z}^{M-1})\cap \mathcal{P}$. Then we can construct a Markov process starting from $(i^0,x)$ and living in the finite state space $\{1,\dots,M^0\}\times \mathcal{P}_{x}^N$, which has the same transition rates as those appearing in the definition of $\mathcal{G}^{0,N}_{\phi^0,\phi}$. In particular, the indicator function $\mathbbm{1}_{x_i \ge \frac{1}{N}}$ prevents the component $x$ from exiting the domain $\mathcal{P}$. Hypothesis \ref{boundaryassump} implies that the transition function is continuous on $E^\infty$ when $N \ge 1/\epsilon$, where $\epsilon$ is the extinction threshold in the assumption. So this process is a continuous time Markov process with a continuous probability kernel in a compact space. By Proposition 4.4 in \cite{swart}, it is a Feller process. In the following, we will still denote this extended version of the process by $(X_t^{0,N}, \mu_t^N)$. \begin{proposition}\label{existfeller} There exists a Feller semigroup $\mathcal T=(\mathcal T_t)_{t\ge 0}$ on the space $C(E^\infty)$ such that the closure of $\mathcal{G}^{0}_{\phi^0, \phi}$ is the infinitesimal generator of $\mathcal{T}$. \end{proposition} \begin{proof} We use a simple perturbation argument. Observe that $\mathcal{G}^{0}_{\phi^0, \phi}$ is the sum of two linear operators $\mathcal{H}$ and $\mathcal{K}$ on $C(E^{\infty})$: \begin{align*} [\mathcal{H}F](t,i^0,x) :=& \partial_t F(t,i^0,x) + \mathbf{v}(t,i^0,x) \cdot \nabla F(t,i^0,x)\\ [\mathcal{K}F](t,i^0,x) :=& \sum_{j^0\neq i^0}[F(t, j^0, x) - F(t, i^0, x)] q^0_{\phi^0}(t,i^0, j^0, x) \end{align*} where we denote by $\nabla F(t,i^0,x)$ the gradient of $F$ with respect to $x$, and by $\mathbf{v}$ the vector field: \[ \mathbf{v}_j(t,i^0,x) := \sum_{i=1}^{M-1} x_i q_{\phi^0,\phi}(t,i,j,i^0,x) + \Bigl(1 - \sum_{i=1}^{M-1} x_i \Bigr) \;q_{\phi^0,\phi}(t,M,j,i^0,x). \] Being a finite difference operator, $\mathcal{K}$ is a bounded operator on $C(E^{\infty})$, so the proof reduces to showing that $\mathcal H$ generates a Feller semigroup; see for example Theorem 7.1, Chapter 1 in \cite{EthierKurtz}. To show that the closure of $\mathcal{H}$ generates a strongly continuous semigroup on $C(E^\infty)$, we use the characteristics of the vector field $\mathbf{v}$.
For any $(t,i^0,x) \in E^{\infty}$, let $(Y^{t,i^0,x}_u)_{u\ge 0}$ be the solution of the Ordinary Differential Equation (ODE): \[ dY^{t,i^0,x}_u = \mathbf{v}(t+u,i^0,Y^{t,i^0,x}_u) du,\;\;\;\;Y^{t,i^0,x}_0 = x. \] Existence and uniqueness of solutions are guaranteed by the Lipschitz property of the vector field $\mathbf{v}$, which in turn is a consequence of the Lipschitz property of $q_{\phi^0, \phi}$. Notice that by Hypothesis \ref{boundaryassump}, the process $Y^{t,i^0,x}_u$ is confined to $\mathcal{P}$. So we can define the linear operator $\mathcal{T}_s$ on $C(E^{\infty})$ by: \[ [\mathcal{T}_s F](t,i^0,x) := F(s+t, i^0, Y^{t,i^0,x}_s). \] Uniqueness of solutions implies that $(\mathcal{T}_s)_{s\ge 0}$ is a semigroup. The latter is strongly continuous. Indeed, by the boundedness of $\mathbf{v}$, for a fixed $h>0$, there exists a constant $C_0$ such that $|Y^{t,i^0,x}_s - x| \le C_0 s$ for all $s\le h$ and $(t,i^0,x) \in E^{\infty}$. Combining this estimate with the fact that each $F\in C(E^{\infty})$ is uniformly continuous in $(t,x)$, we obtain that $\|\mathcal{T}_s F - F\| \rightarrow 0$ as $s\rightarrow 0$. Finally the semigroup $\mathcal{T}$ is Feller, since the solution $Y^{t,i^0,x}_s$ of the ODE depends continuously on the initial condition as a consequence of the Lipschitz property of the vector field $\mathbf{v}$. It is plain to check that $\mathcal{H}$ is the infinitesimal generator of the semigroup $\mathcal{T}$, and that the domain of $\mathcal{H}$ is $C^1(E^{\infty})$. \end{proof} The following lemma is a simple adaptation of Theorem 6.1, Chapter 1 in \cite{EthierKurtz} and is an important ingredient in the proof of the convergence. It says that the convergence of the infinitesimal generators implies the convergence of the corresponding semigroups. \begin{lemma}\label{convgeneratorsemigroup} For $N=1,2,\dots$, let $\{T_N(t)\}$ and $\{T(t)\}$ be strongly continuous contraction semigroups on $L$ with generators $\mathcal{G}_N$ and $\mathcal{G}$ respectively. Let $D$ be a core for $\mathcal{G}$ and assume that $D \subset \mathcal{D}(\mathcal{G}_N)$ for all $N\ge 1$. If $\lim_{N\to\infty} \mathcal{G}_N F = \mathcal{G} F$ for all $F \in D$, then for each $F\in L$, $\lim_{N\to\infty} T_N(t) F = T(t) F$ for all $t\ge 0$. \end{lemma} We are now ready to state and prove the main result of this section: the Markov process $(X_t^{0,N}, \mu_t^N)$ describing the dynamics of the state of the major player and the empirical distribution of the states of the $N$ minor players converges weakly to a Markov process with infinitesimal generator $\mathcal{G}^{0}_{\phi^0, \phi}$, when the players choose Lipschitz strategies. \begin{theorem}\label{convtheomajor} Assume that the major player chooses a control strategy $\boldsymbol{\alpha}^0$ given by a Lipschitz feedback function $\phi^0$ and that all the minor players choose control strategies given by the same Lipschitz feedback function $\phi$. Let $i^0\in \{1,\dots, M^0\}$ and, for each integer $N\ge 1$, let $x^N \in \mathcal{P}^N$ with limit $x \in \mathcal{P}$. Then the sequence of processes $(X_t^{0,N}, \mu_t^N)$ with initial conditions $X_0^{0,N} = i^0$, $\mu_0^N = x^N$ converges weakly to a Markov process $(X_t^0, \mu_t)$ with initial condition $X_0^0 = i^0$, $\mu_0 = x$.
The infinitesimal generator for $(X_t^0, \mu_t)$ is given by: \begin{align*} [\mathcal{G}^0_{\phi^0, \phi} F] (t, i^0, x):=&\;\partial_t F(t, i^0, x) + \sum_{j^0\neq i^0}[F(t, j^0, x) - F(t, i^0, x)] q^0_{\phi^0}(t,i^0, j^0, x)\\ & + \sum_{i,j=1}^{M-1} \partial_{x_j} F(t, i^0, x) x_i q_{\phi^0,\phi}(t,i,j, i^0, x)+(1- \sum_{k=1}^{M-1} x_k) \sum_{j=1}^{M-1} \partial_{x_j} F(t, i^0, x) q_{\phi^0,\phi}(t,M, j, i^0, x). \end{align*} \end{theorem} \begin{proof} Let us denote by $\mathcal{T}^N$ the semigroup associated with the time-inhomogeneous Markov process $(t, X_t^{0,N},\mu_t^N)$ and the infinitesimal generator $\mathcal{G}^{0,N}_{\phi^0,\phi}$. Recall that by the extension procedure we described above, the process $(t, X_t^{0,N},\mu_t^N)$ now lives in $E^{\infty}$, and the domain of $\mathcal{G}^{0,N}_{\phi^0,\phi}$ contains $C^1(E^{\infty})$. In light of Theorem 2.5, Chapter 4 in \cite{EthierKurtz} and Proposition \ref{existfeller} which we just proved, the proof boils down to showing that for any $F \in C(E^{\infty})$ and $t\ge 0$, $\mathcal{T}_t^N F$ converges to $\mathcal{T}_t F$, where $(\mathcal{T}_t)_{t\ge 0}$ is the strongly continuous semigroup generated by the closure of $\mathcal{G}_{\phi^0,\phi}^0$. To show the convergence, we apply Lemma \ref{convgeneratorsemigroup}. It is easy to see that $C^1(E^\infty)$ is a core for $\bar{\mathcal{G}}_{\phi^0,\phi}^0$ and that $C^1(E^\infty)$ is included in the domain of $\mathcal{G}^{0,N}_{\phi^0,\phi}$. Therefore it only remains to show that for all $F\in C^1(E^\infty)$, $\mathcal{G}^{0,N}_{\phi^0,\phi} F$ converges to $\mathcal{G}^{0}_{\phi^0,\phi} F$ in the space $(C(E^\infty), \|\cdot\|)$. Using the notations $x_M := 1 - \sum_{i=1}^{M-1} x_i$ and $q^N := \mathbbm{1}_{x_i \ge \frac{1}{N}}\, q_{\phi^0,\phi}$ (the rates with the extinction cut-off), we have: \begin{equation}\label{generatorapproxest} \begin{aligned} &|[\mathcal{G}_{\phi^0,\phi}^{0,N} F](t,i^0,x) - [\mathcal{G}^{0}_{\phi^0,\phi} F](t,i^0,x)|\\ = & \sum_{j\neq i}|N(F(t, i^0, x + \frac{1}{N}e_{ij} ) -F(t, i^0, x )) - (\mathbbm{1}_{j\neq M} \partial_{x_j} F(t, i^0, x ) - \mathbbm{1}_{i\neq M} \partial_{x_i}F(t, i^0, x ))| x_i q^N(t, i, j, i^0, x)\\ \le & \sum_{j\neq i}( |\partial_{x_j} F(t, i^0, x + \frac{\lambda_{i,j}}{N}e_{i,j} ) -\partial_{x_j} F(t, i^0, x )|+ |\partial_{x_i}F(t, i^0, x + \frac{\lambda_{i,j}}{N}e_{i,j} ) - \partial_{x_i}F(t, i^0, x )|) x_i q^N(t, i, j, i^0, x) \end{aligned} \end{equation} where we applied the mean value theorem to obtain the last inequality, with $\lambda_{i,j}\in[0,1]$. Note that $\lambda_{i,j}$ also depends on $t,x,i^0$, but we omit this dependence for the sake of simplicity. Since $F\in C^1(E^\infty)$ and $E^\infty$ is compact, $\partial_{x_i}F$ is uniformly continuous on $E^\infty$ for all $i$, which immediately implies that $\|\mathcal{G}_{\phi^0,\phi}^{0,N} F - \mathcal{G}_{\phi^0,\phi}^{0} F\| \rightarrow 0$ as $N\rightarrow +\infty$. This completes the proof. \end{proof} \noindent \textbf{Mean Field Major Player's Optimization Problem} \noindent Given that all the minor players are assumed to use a control strategy based on the same feedback function $\phi$, the best response of the major player is to use the strategy $\hat\boldsymbol{\alpha}^0$ given by the feedback function $\hat\phi^0$ solving the optimal control problem: \[ \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0 \in\mathbb{A}^0}\mathbb{E}\left[\int_{0}^{T}f^0(t, X_t^0, \phi^0(t, X_t^0, \mu_t), \mu_t) dt + g^0(X_T^0, \mu_T)\right] \] where $(X_t^0, \mu_t)_{0\le t\le T}$ is the continuous time Markov process with infinitesimal generator $\mathcal{G}^0_{\phi^0,\phi}$.
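\vskip 4pt For intuition, the limiting dynamics over which this optimization takes place are easy to discretize. The Python sketch below is again purely illustrative (the rate functions \texttt{q0\_rate} and \texttt{q\_rate} are hypothetical placeholders into which the feedback controls are absorbed): it alternates an Euler step for the flow of $\mu_t$ along the vector field $\mathbf{v}$ introduced in the proof of Proposition \ref{existfeller} with a thinning step for the jumps of $X^0_t$. It highlights the computational simplification brought by the mean field limit: between two jumps of the major player, the measure argument evolves deterministically.

\begin{verbatim}
import numpy as np

# Illustrative only: Euler discretization of the limiting process
# (X^0_t, mu_t) with generator G^0_{phi0,phi}.  The rates below are
# hypothetical placeholders; states are 0-indexed, so state M-1 here
# plays the role of state M in the text.
M0, M, T, dt = 2, 3, 1.0, 1e-3

def q0_rate(t, i0, j0, mu):    # rate i0 -> j0 for the major player (placeholder)
    return 1.0 + mu[0]

def q_rate(t, i, j, i0, mu):   # off-diagonal rate i -> j for a minor player
    return 0.5 + 0.5 * (j == i0 % M)

def v_field(t, i0, mu):
    """Drift of mu: v_j = sum_i mu_i q(i,j) + (1 - sum_i mu_i) q(M,j),
    with the Q-matrix convention q(i,i) = -sum_{k != i} q(i,k)."""
    muM = 1.0 - mu.sum()                  # mass of the last state
    v = np.zeros(M - 1)
    for j in range(M - 1):
        for i in range(M - 1):
            if i != j:
                v[j] += mu[i] * q_rate(t, i, j, i0, mu)
            else:                         # diagonal entry of the Q-matrix
                v[j] -= mu[i] * sum(q_rate(t, i, k, i0, mu)
                                    for k in range(M) if k != i)
        v[j] += muM * q_rate(t, M - 1, j, i0, mu)
    return v

rng = np.random.default_rng(0)
t, X0, mu = 0.0, 0, np.array([0.5, 0.3])  # first M-1 coordinates of mu_0
while t < T:
    mu = mu + dt * v_field(t, X0, mu)     # deterministic flow of the measure
    for j0 in range(M0):                  # thinning step for the X^0 jumps
        if j0 != X0 and rng.random() < q0_rate(t, X0, j0, mu) * dt:
            X0 = j0
            break
    t += dt
print(X0, mu)
\end{verbatim}

Note that no projection onto the simplex is performed: for the exact flow, confinement to $\mathcal P$ is guaranteed by Hypothesis \ref{boundaryassump}, while the Euler scheme only respects it up to the discretization error.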
\subsection{Representative Minor Player's Problem} We turn to the computation of the best response of a generic minor player. We assume that the major player chooses a strategy $\boldsymbol{\alpha}^0\in \mathbb{A}^0$ of the form $\alpha^0_t= \phi^0(t, X_t^{0,N}, \mu^N_t)$, that the minor players in $\{2,\cdots,N\}$ all use strategies $\boldsymbol{\alpha}^i\in \mathbb{A}$ of the form $\alpha^i_t= \phi(t, X^{i,N}_t,X_t^{0,N}, \mu^N_t)$ for $i=2,\cdots,N$, and that the first minor player uses a strategy $\o\boldsymbol{\alpha}\in \mathbb{A}$ of the form $\o\alpha_t= \o\phi(t, X^{1,N}_t,X_t^{0,N}, \mu^N_t)$. Clearly, by symmetry, everything we say after singling out the first minor player applies equally if we single out any other minor player. As before, for each fixed $N$, the process $(X_t^{0,N}, X_t^{1,N}, \mu_t^N)$ is a finite-state continuous time Markov process with state space $\{1,\dots, M^0\}\times \{1,\dots, M\}\times\mathcal{P}^N$ whose infinitesimal generator $\mathcal{G}^{N}_{\phi^0, \phi, \o\phi}$ is given, up to the time derivative, by the corresponding Q-matrix of infinitesimal jump rates. In the present situation, its value on any real valued function $F$ defined on $[0,T]\times \{1,\dots, M^0\}\times \{1,\dots, M\}\times\mathcal{P}^N$ such that $t\rightarrow F(t,i^0,i,x)$ is $\mathcal{C}^1$ for any $i^0,i$ and $x$ is given by the formula: \begin{equation}\label{generatorNminor} \begin{aligned} [\mathcal{G}^{N}_{\phi^0, \phi,\o\phi} F] (t, i^0, i, x) =& \partial_t F(t, i^0, i, x)+ \sum_{j^0 \neq i^0} [F(t, j^0,i, x) - F(t, i^0,i, x)] q^0_{\phi^0}(t, i^0, j^0, x)\\ &+ \sum_{j\neq i}[F(t, i^0, j, x + \frac{1}{N}e_{ij} ) -F(t, i^0,i, x )] q_{\phi^0, \o\phi}(t, i, j, i^0, x)\\ &+ \sum_{j\neq k}[F(t, i^0, i, x +\frac{1}{N} e_{kj} ) - F(t, i^0, i, x)] (N x_k - \mathbbm{1}_{k=i}) q_{\phi^0,\phi}(t, k, j, i^0, x). \end{aligned} \end{equation} As before, the summations appearing above correspond to single jumps when 1) only the state of the major player changes from state $i^0$ to $j^0$, 2) only the state of the singled out first minor player changes from state $i$ to $j$, and finally 3) the state of one of the last $N-1$ minor players jumps from state $k$ to $j$. \vskip 4pt Following the same treatment as in the major player's problem, we obtain the following convergence result for the process $(X_t^{N}, X_t^{0,N},\mu_t^N)$: \begin{theorem}\label{convtheominor} Assume that for each integer $N$, the major player chooses a control $\boldsymbol{\alpha}^0$ given by a Lipschitz feedback function $\phi^0$, the first minor player chooses a control $\o\boldsymbol{\alpha}$ given by a Lipschitz feedback function $\o\phi$, all the other minor players choose strategies given by the same Lipschitz feedback function $\phi$, and that these three feedback functions do not depend upon $N$. Let $i\in\{1,\dots,M\}$, $i^0\in \{1,\dots, M^0\}$ and, for each integer $N\ge 2$, let $x^N \in \mathcal{P}^N$ with limit $x \in \mathcal{P}$. Then the sequence of processes $(X_t^{N}, X_t^{0,N}, \mu_t^N)_{0\le t\le T}$ with initial conditions $X_0^N = i$, $X_0^{0,N}= i^0$ and $\mu_0^N = x^N$ converges weakly to a Markov process $(X_t, X_t^0, \mu_t)$ with initial condition $X_0 = i$, $X_0^0 = i^0$ and $\mu_0 = x$.
Its infinitesimal generator is given by: \begin{align*} [\mathcal{G}_{\phi^0, \phi, \o\phi} F] (t, i, i^0, x) :=& \;\partial_t F(t,i, i^0, x) + \sum_{j^0\neq i^0}[F(t,i, j^0, x) - F(t,i, i^0, x)] q^0_{\phi^0}(t, i^0, j^0, x) \\ &\hskip -35pt + \sum_{j\neq i}[F(t,j, i^0, x) - F(t,i, i^0, x)] q_{\phi^0,\o\phi}(t,i, j, i^0, x)+ \sum_{k,j=1}^{M-1} \partial_{x_j} F(t, i, i^0, x) x_k q_{\phi^0,\phi}(t,k,j,i^0,x)\\ &\;+(1- \sum_{k=1}^{M-1} x_k) \sum_{j=1}^{M-1} \partial_{x_j} F(t,i, i^0, x) q_{\phi^0,\phi}(t,M, j, i^0, x). \end{align*} \end{theorem} \noindent \textbf{Representative Minor Player's Optimization Problem} \noindent Accordingly, in the mean field game limit, we define the search for the best response of the representative minor player (i.e. the minor player we singled out) to the strategies adopted by the major player and the field of minor players as the following optimal control problem. Assuming that the major player uses a feedback function $\phi^0$ and all the other minor players use the feedback function $\phi$, the best response of the representative minor player is given by the solution of: \[ \inf_{\o\boldsymbol{\alpha}\leftrightarrow \o\phi\in\mathbb{A}}\mathbb{E}\left[\int_{0}^{T} f(t,X_t, \bar\phi(t, X_t, X_t^0, \mu_t), X_t^0,\phi^0(t, X_t^0, \mu_t), \mu_t) dt + g(X_T, X_T^0, \mu_T)\right] \] where $(X_t, X_t^0, \mu_t)_{0\le t\le T}$ is a Markov process with infinitesimal generator $\mathcal{G}_{\phi^0, \phi, \o\phi}$. We shall denote by $\o\phi=\boldsymbol{\phi}(\phi^0,\phi)$ the optimal feedback function providing the solution of this optimal control problem. \section{Optimization Problem for Individual Players} \label{se:optimizations} In this section, we use the dynamic programming principle to characterize the value functions of the major and minor players' optimization problems as viscosity solutions of the corresponding Hamilton-Jacobi-Bellman (HJB for short) equations. We follow the detailed arguments given in Chapter II of \cite{FlemingSoner}. For both the major and the representative minor player, we show that the value function solves a weakly coupled system of Partial Differential Equations (PDEs for short) in the viscosity sense. We also prove an important uniqueness result for these solutions. This uniqueness result is important because, as the reader may have noticed, in defining the best response map we implicitly assumed that these optimization problems could be solved and that their solutions were unique. \vskip 4pt We first consider the value function of the major player's optimization problem assuming that the minor players use the feedback function $\phi$: \[ V^0_{\phi}(t,i^0, x) := \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0}\mathbb{E}\left[\int_{t}^{T}f^0_{\phi^0}(s, X_s^0, \mu_s) ds + g^0(X_T^0, \mu_T) | X_t^0 = i^0, \mu_t = x\right] \] \begin{theorem}\label{hjbtheomajor} Assume that for all $i^0 \in \{1,\dots, M^0\}$, the mapping $(t,x) \rightarrow V^{0}_{\phi}(t,i^0, x)$ is continuous on $[0,T]\times \mathcal{P}$.
Then $V^0_{\phi}$ is a viscosity solution to the system of $M^0$ PDEs on $[0,T]\times \mathcal P$: \begin{equation}\label{hjbmajor} \begin{aligned} &0=\partial_t v^0(t, i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + \sum_{j^0\neq i^0} [v^0(t,j^0, x) - v^0(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\\ &\hskip 25pt +(1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} v^0(t,i^0,x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &\hskip 25pt +\sum_{i,j=1}^{M-1} \partial_{x_j} v^0(t, i^0, x) x_i q(t,i,j,\phi(t,i,i^0,x), i^0,\alpha^0, x)\bigg\},\qquad (i^0,t,x)\in \{1,\dots, M^0\} \times [0,T[ \times \mathcal{P},\\ &v^0(T,i^0, x) = g^0(i^0, x),\qquad (i^0, x)\in \{1,\dots, M^0\}\times\mathcal{P}. \end{aligned} \end{equation} \end{theorem} The notion of viscosity solution in the above result is specified by the following definition: \begin{definition}\label{viscositydefi} A real valued function $v^0$ defined on $[0,T]\times\{1,\dots,M^0\}\times\mathcal{P}$ such that $v^0(\cdot, i^0, \cdot)$ is continuous on $[0,T]\times\mathcal{P}$ for all $i^0 \in \{1,\dots,M^0\}$ is said to be a viscosity subsolution (resp. supersolution) if for any $(t,i^0,x) \in [0,T]\times\{1,\dots,M^0\}\times\mathcal{P}$ and any $\mathcal{C}^\infty$ function $\theta$ defined on $[0,T]\times\mathcal{P}$ such that the function $(v^0(\cdot, i^0, \cdot) - \theta)$ attains a maximum (resp. minimum) at $(t,x)$ and $v^0(t,i^0,x) = \theta(t,x)$, the following inequalities hold: \begin{align*} 0\le \text{(resp.}\ge)\;\;&\; \partial_t \theta(t, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + \sum_{j^0\neq i^0} [v^0(t,j^0, x) - v^0(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\\ &\;+(1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k}\theta(t,x) q(t,M, k, \phi(t,M, i^0, x), i^0,\alpha^0, x)\\ &\;+\sum_{i,j=1}^{M-1} \partial_{x_j} \theta(t, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0, \alpha^0,x)\bigg\},\;\text{if}\;\;\;t<T\\ 0\le \text{(resp.}\ge)\;\;&g^0(i^0, x) - v^0(t,i^0,x),\;\;\;\text{if}\;\;\; t=T \end{align*} If $v^0$ is both a viscosity subsolution and a viscosity supersolution, we call it a viscosity solution. \end{definition} \begin{proof} Define $\mathcal{C}(\mathcal{P})^{M^0}$ as the collection of functions $\theta(i^0, x)$ defined on $\{1,\dots,M^0\}\times\mathcal{P}$ such that $\theta(i^0,\cdot)$ is continuous on $\mathcal{P}$ for all $i^0$. Define the dynamic programming operator $\mathcal{T}_{t,s}$ on $\mathcal{C}(\mathcal{P})^{M^0}$ by: \begin{equation}\label{dynamicprogoperatordef} [\mathcal{T}_{t,s}\theta](i^0, x) := \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0}\mathbb{E}\left[\int_{t}^{s} f^0_{\phi^0}(u, X_u^0, \mu_u) du + \theta(X_s^0, \mu_s) | X_t^0 = i^0, \mu_t = x\right] \end{equation} where the Markov process $(X_u^0, \mu_u)_{t\le u\le s}$ has infinitesimal generator $\mathcal{G}^0_{\phi^0,\phi}$. Then the value function can be expressed as: \[ V^0_{\phi}(t,i^0,x) = [\mathcal{T}_{t,T} g^0](i^0, x) \] and the dynamic programming principle says that: \[ V^0_{\phi}(t, i^0, x) = [\mathcal{T}_{t,s} V^0_{\phi}(s, \cdot, \cdot)](i^0, x), \qquad (t,s, i^0, x) \in[0,T]^2\times \{1,\dots,M^0\}\times \mathcal{P},\;\; t\le s. \] We will use the following lemma whose proof we give in the appendix.
\begin{lemma}\label{dynamicprogoperator} Let $\Phi$ be a function on $[0,T]\times\{1,\dots, M^0\} \times \mathcal{P}$ and $i^0\in\{1,\dots, M^0\}$ such that $\Phi(\cdot, i^0, \cdot)$ is $\mathcal{C}^1$ in $[0,T]\times\mathcal{P}$ and $\Phi(\cdot, j^0, \cdot)$ is continuous in $[0,T]\times\mathcal{P}$ for all $j^0\neq i^0$. Then we have: \begin{align*} &\lim_{h \rightarrow 0}\frac{1}{h}\left[(\mathcal{T}_{t,t+h} \Phi(t+h, \cdot, \cdot))(i^0, x) - \Phi(t, i^0, x)\right] \\ &=\partial_t \Phi(t,i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) +(1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} \Phi(t,i^0, x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &\;+\sum_{i,j=1}^{M-1} \partial_{x_j} \Phi(t,i^0, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0,\alpha^0, x)+ \sum_{j^0\neq i^0} [\Phi(t,j^0, x) - \Phi(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\bigg\}. \end{align*} \end{lemma} We now prove the subsolution property. Let $\theta$ be a $\mathcal{C}^\infty$ function defined on $[0,T]\times\mathcal{P}$ such that $(V^0(\cdot, i^0, \cdot) - \theta)$ attains a maximum at $(t,x)$ and $V^0(t,i^0,x) = \theta(t,x)$. Define the function $\Phi$ on $[0,T]\times\{1,\dots, M^0\}\times\mathcal{P}$ by $\Phi(\cdot, i^0, \cdot) := \theta$ and $\Phi(\cdot, j^0, \cdot) := V^0(\cdot, j^0, \cdot)$ for $j^0\neq i^0$. Then clearly $\Phi \ge V^0$, which implies: \[ (\mathcal{T}_{t,s} \Phi(s, \cdot, \cdot))(i^0, x) \ge (\mathcal{T}_{t,s} V^0(s, \cdot, \cdot))(i^0, x). \] By the dynamic programming principle and the fact that $\Phi(t,i^0, x) = V^0(t,i^0, x)$ we have: \[ \lim_{h \rightarrow 0}\frac{1}{h}\left[(\mathcal{T}_{t,t+h} \Phi(t+h, \cdot, \cdot))(i^0, x) - \Phi(t, i^0, x)\right] \ge 0. \] Applying Lemma \ref{dynamicprogoperator}, we obtain the desired inequality. The supersolution property can be checked in exactly the same way. \end{proof} For later reference, we state the comparison principle for the HJB equation we just derived. Again, its proof is postponed to the appendix. \begin{theorem}\label{compmajor}(Comparison Principle) Let us assume that the feedback function $\phi$ is Lipschitz, and let $w$ (resp. $v$) be a viscosity subsolution (resp. supersolution) of the equation (\ref{hjbmajor}). Then we have $w\le v$. \end{theorem} We now turn to the representative minor player's optimization problem assuming that the major player uses the feedback function $\phi^0$ and all the other minor players use the feedback function $\phi$. We define the value function: \[ V_{\phi^0,\phi}(t,i, i^0, x) := \inf_{\o\boldsymbol{\alpha}\leftrightarrow \o\phi}\mathbb{E}\left[\int_{t}^{T} f_{\phi^0,\o\phi}(s, X_s,X^0_s,\mu_s) ds + g(X_T, X_T^0, \mu_T) |X_t = i, X_t^0 = i^0, \mu_t = x\right] \] where the Markov process $(X_t,X^0_t,\mu_t)_{0\le t\le T}$ has infinitesimal generator $\mathcal G_{\phi^0,\phi,\o\phi}$. In line with the analysis of the major player's problem, we can show that $V_{\phi^0,\phi}$ is the unique viscosity solution to a coupled system of PDEs. \begin{theorem}\label{hjbtheominor} Assume that for all $i \in \{1,\dots, M\}$ and $i^0 \in \{1,\dots, M^0\}$, the mapping $(t,x) \rightarrow V_{\phi^0,\phi}(t,i,i^0, x)$ is continuous on $[0,T]\times \mathcal{P}$.
Then $V_{\phi^0,\phi}$ is a viscosity solution to the system of PDEs: \begin{equation}\label{hjbminor} \begin{aligned} &0=\partial_t v(t,i, i^0, x) \\ &\;\;+\inf_{\o\alpha\in A}\bigg\{f(t,i, \o\alpha, i^0, \phi^0(t, i^0, x), x) + \sum_{j\neq i} [v(t,j, i^0, x) - v(t,i, i^0, x)] q(t, i, j, \o\alpha, i^0, \phi^0(t, i^0, x), x)\bigg\}\\ &\;\;+ \sum_{j^0\neq i^0}[v(t,i, j^0, x) - v(t,i, i^0, x)] q^0_{\phi^0}(t, i^0, j^0, x) +(1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} v(t,i,i^0,x) q_{\phi^0,\phi}(t,M, k, i^0, x)\\ &\;\;+\sum_{k,j=1}^{M-1} \partial_{x_j} v(t,i, i^0, x) x_k q_{\phi^0,\phi}(t,k,j, i^0, x), \hskip 25pt (i,i^0,t,x)\in \{1,\dots, M\}\times\{1,\dots, M^0\} \times [0,T[ \times \mathcal{P}\\ &v(T,i,i^0, x) = g(i,i^0, x),\qquad (i,i^0,x)\in \{1,\dots, M\}\times\{1,\dots, M^0\}\times\mathcal{P}. \end{aligned} \end{equation} Moreover, if the feedback functions $\phi^0$ and $\phi$ are Lipschitz, then the above system of PDEs satisfies the comparison principle. \end{theorem} It turns out that the value functions $V^0_{\phi}$ and $V_{\phi^0,\phi}$ are Lipschitz in $(t,x)$. To establish this regularity property and estimate the Lipschitz constants, we need to first study the regularity of the value functions for the finite player games, and control the convergence in the regime of large games. We will state these results in Section \ref{se:chaos}, where we deal with the propagation of chaos and highlight more connections between finite player games and mean field games. We conclude this section with a result which we will use frequently in the sequel. To state it, we denote by $J^0_{\phi^0,\phi}$ the expected cost of the major player when it uses the feedback function $\phi^0$ and the minor players all use the feedback function $\phi$. Put differently: \[ J^0_{\phi^0,\phi}(t,i^0,x) := \mathbb{E}\left[\int_{t}^{T}f^0_{\phi^0}(s, X_s^0, \mu_s) ds + g^0(X_T^0, \mu_T) | X_t^0 = i^0, \mu_t = x\right] \] where the Markov process $(X_s^0, \mu_s)_{t\le s\le T}$ has infinitesimal generator $\mathcal{G}^0_{\phi^0,\phi}$. Then by definition, we have $V_{\phi}^0(t,i^0, x) = \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0} J^0_{\phi^0,\phi}(t, i^0, x)$. Similarly, we denote by $J_{\phi^0,\phi,\o\phi}$ the expected cost of the representative minor player when it uses the feedback function $\o\phi$, while the major player uses the feedback function $\phi^0$ and all the other minor players use the same feedback function $\phi$: \[ J_{\phi^0,\phi,\o\phi}(t,i,i^0,x) := \mathbb{E}\left[\int_{t}^{T}f_{\phi^0,\o\phi}(s, X_s, X_s^0, \mu_s) ds + g(X_T, X_T^0, \mu_T) | X_t =i, X_t^0 = i^0, \mu_t = x\right] \] where the Markov process $(X_s, X_s^0, \mu_s)_{t\le s\le T}$ has infinitesimal generator $\mathcal{G}_{\phi^0,\phi,\o\phi}$. \begin{proposition}\label{pdecharacterizepayoff} If the feedback functions $\phi^0$, $\phi$ and $\o\phi$ are Lipschitz, then $J^0_{\phi^0,\phi}$ and $J_{\phi^0,\phi,\o\phi}$ are respectively continuous viscosity solutions of the PDEs (\ref{majorJpde}) and (\ref{minorJpde}): \begin{equation} \label{majorJpde} \begin{cases} &0 =\;[\mathcal{G}^0_{\phi^0,\phi} v^0](t, i^0, x) + f^0_{\phi^0}(t, i^0, x)\\ &0 =\;g^0(i^0, x) - v^0(T,i^0, x),\qquad (i^0,x)\in\{1,\dots, M^0\}\times \mathcal{P}. \end{cases} \end{equation} \begin{equation} \label{minorJpde} \begin{cases} &0 =\; [\mathcal{G}_{\phi^0,\phi,\o\phi} v](t, i,i^0, x) + f_{\phi^0,\o\phi}(t, i, i^0, x)\\ &0 =\;g(i,i^0, x) - v(T,i,i^0, x),\qquad (i,i^0,x)\in \{1,\dots, M\}\times\{1,\dots, M^0\}\times\mathcal{P}.
\end{cases} \end{equation} Moreover, the PDEs (\ref{majorJpde}) and (\ref{minorJpde}) satisfy the comparison principle. \end{proposition} \begin{proof} The continuity of $J^0_{\phi^0,\phi}$ and $J_{\phi^0,\phi,\o\phi}$ follows from the fact that $(X_t^0, \mu_t)$ and $(X_t, X_t^0,\mu_t)$ are Feller processes, which we have shown in Theorem \ref{convtheomajor} and Theorem \ref{convtheominor}. The viscosity property can be shown using the exact same technique as in the proof of Theorem \ref{hjbtheomajor}. Finally the comparison principle is a consequence of the Lipschitz property of $f^0,f,q^0,q,\phi^0,\phi,\o\phi$ and can be shown by slightly modifying the proof of Theorem \ref{compmajor}. We leave the details to the reader. \end{proof} \section{Existence of Nash Equilibria} \label{se:Nash} In this section, we prove the existence of Nash equilibria when the minor players' jump rates and cost functions do not depend upon the major player's control. We work under the following assumption: \begin{hypothesis}\label{assumpopt1} Hypothesis \ref{lipassump} and Hypothesis \ref{boundassump} are in force. In addition, the transition rate function $q$ and the cost function $f$ of the minor players do not depend upon the major player's control $\alpha^0\in A_0$. \end{hypothesis} The following assumptions will guarantee the existence of optimal strategies for both the major player and the representative minor player. \begin{hypothesis}\label{assumpopt2} For all $i^0 = 1,\dots, M^0$, $(t,x) \in [0,T]\times\mathcal{P}$ and $v^0\in\mathbb{R}^{M^0}$, the function $\alpha^0 \rightarrow f^0(t,i^0,\alpha^0,x) + \sum_{j^0\neq i^0}(v^0_{j^0} - v^0_{i^0}) q^0(t,i^0, j^0, \alpha^0, x)$ has a unique minimizer in $A^0$ denoted $\hat\alpha^0(t,i^0,x,v^0)$. Additionally, $\hat\alpha^0$ is Lipschitz in $(t,x,v^0)$ for all $i^0=1,\dots,M^0$ with common Lipschitz constant $L_{\alpha^0}$. \end{hypothesis} \begin{hypothesis}\label{assumpopt3} For all $i = 1, \dots, M$, $i^0 = 1,\dots, M^0$, $(t,x) \in [0,T]\times\mathcal{P}$ and $v\in\mathbb{R}^{M \times M^0}$, the function $\alpha \rightarrow f(t,i,\alpha, i^0,x) + \sum_{j\neq i}(v_{j,i^0} - v_{i,i^0}) q(t,i, j, \alpha, i^0, x)$ has a unique minimizer in $A$ denoted $\hat\alpha(t,i,i^0,x,v)$. Additionally, $\hat\alpha$ is Lipschitz in $(t,x,v)$ for all $i^0=1,\dots,M^0$ and $i=1,\dots,M$ with common Lipschitz constant $L_\alpha$. \end{hypothesis} \begin{proposition}\label{optimalresponseprop} Under Hypotheses \ref{assumpopt1} - \ref{assumpopt3}, we have: (i) For any Lipschitz feedback function $\phi$ for the representative minor player, the best response $\boldsymbol{\phi}^{0*}(\phi)$ of the major player exists and is given by: \begin{equation}\label{optimalresponse1} \boldsymbol{\phi}^{0*}(\phi)(t,i^0,x) = \hat\alpha^0(t, i^0, x, V^0_{\phi}(t,\cdot,x)) \end{equation} where $\hat\alpha^0$ is the minimizer defined in Hypothesis \ref{assumpopt2} and $V^0_{\phi}$ is the value function of the major player's optimization problem.
(ii) For any Lipschitz feedback function $\phi^0$ for the major player and $\phi$ for the other minor players, the best response $\boldsymbol{\phi}^*(\phi^0, \phi)$ of the representative minor player exists and is given by: \begin{equation}\label{optimalresponse2} \boldsymbol{\phi}^*(\phi^0,\phi)(t,i,i^0,x) = \hat\alpha(t,i, i^0, x, V_{\phi^0,\phi}(t,\cdot,\cdot,x)) \end{equation} where $\hat\alpha$ is the minimizer defined in Hypothesis \ref{assumpopt3} and $V_{\phi^0,\phi}$ is the value function of the representative minor player's optimization problem. \end{proposition} \begin{proof} Consider the expected total cost $J^0_{\phi^{0*}(\phi),\phi}$ of the major player when all the minor players use the feedback function $\phi$ and the major player uses the strategy given by the feedback function $\phi^{0*}(\phi)$ defined by (\ref{optimalresponse1}). Also consider the value function $V^0_{\phi}$ of the major player's optimization problem. By definition of $\phi^{0*}(\phi)$ and the PDE (\ref{hjbmajor}), we see that $V^0_{\phi}$ is a viscosity solution of the PDE (\ref{majorJpde}) of Proposition \ref{pdecharacterizepayoff} with $\phi^0 = \phi^{0*}(\phi)$. To be able to use the comparison principle, we need to show that $\phi^{0*}(\phi)$ and $\phi$ are Lipschitz. Indeed, the Lipschitz property follows from Hypothesis \ref{assumpopt2} and Corollary \ref{valuefunctionlipschitzconstant} (see Section \ref{se:chaos}). Now since $J^0_{\phi^{0*}(\phi),\phi}$ is another viscosity solution of the same PDE, we conclude that $J^0_{\phi^{0*}(\phi),\phi} = V^0_{\phi} = \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0} J^0_{\phi^0, \phi}$, and hence the optimality of $\phi^{0*}(\phi)$. Likewise we can show that $\boldsymbol{\phi}^*(\phi^0,\phi)$ is the best response of the representative minor player. \end{proof} In order to show that the Nash equilibrium is actually given by a couple of Lipschitz feedback functions, we need an additional assumption on the regularity of the value functions. \begin{hypothesis}\label{assumpopt4} There exist two constants $L_{\phi^0}, L_{\phi}$, such that for every $L_{\phi^0}$-Lipschitz feedback function $\phi^0$ and every $L_{\phi}$-Lipschitz feedback function $\phi$, $V^0_{\phi}$ is $(L_{\phi^0}/L_{\alpha^0} - 1)$-Lipschitz and $V_{\phi^0,\phi}$ is $(L_{\phi}/L_\alpha - 1)$-Lipschitz. \end{hypothesis} The above assumption holds, for example, when the horizon of the game is sufficiently small. We shall provide more details (see Remark \ref{remarkassumptionlipschitz} below) after we reveal important connections between finite player games and mean field games in Section \ref{se:chaos}. We now state and prove the existence of a Nash equilibrium. \begin{theorem} Under Hypotheses \ref{assumpopt1} - \ref{assumpopt4}, there exists a Nash equilibrium in the sense that there exist Lipschitz feedback functions $\hat\phi^0$ and $\hat\phi$ such that: \[ [\hat\phi^0,\hat\phi] = [\boldsymbol{\phi}^{0*}(\hat\phi), \boldsymbol{\phi}^*(\hat\phi^0, \hat\phi)]. \] \end{theorem} \begin{proof} We apply Schauder's fixed point theorem. To this end, we need to: (i) specify a Banach space $\mathbb{V}$ containing the admissible feedback functions $(\phi^0, \phi)$ as elements, and a relatively compact convex subset $\mathbb{K}$ of $\mathbb{V}$; (ii) show that the mapping $\mathcal{R}: [\phi^0,\phi] \rightarrow [\boldsymbol{\phi}^{0*}(\phi), \boldsymbol{\phi}^*(\phi^0, \phi)]$ is continuous and leaves $\mathbb{K}$ invariant (i.e. $\mathcal{R}(\mathbb{K}) \subset \mathbb{K}$).
\vspace{3mm} \noindent (i) Define $\mathbb{C}^0$ as the collection of $A_0$-valued functions $\phi^0$ on $[0,T]\times\{1,\dots,M^0\}\times\mathcal{P}$ such that $(t,x)\mapsto \phi^0(t,i^0,x)$ is continuous for all $i^0$, $\mathbb{C}$ as the collection of $A$-valued functions $\phi$ on $[0,T]\times\{1,\dots,M\}\times\{1,\dots,M^0\}\times\mathcal{P}$ such that $(t,x)\mapsto \phi(t,i,i^0,x)$ is continuous for all $i,i^0$, and set $\mathbb{V} := \mathbb{C}^0 \times \mathbb{C}$. For all $(\phi^0,\phi)\in\mathbb{V}$, we define the norm: \[ \|(\phi^0,\phi)\| := \max\left\{\sup_{i^0,t\in[0,T],x\in\mathcal{P}} |\phi^0(t,i^0,x)|, \sup_{i, i^0,t\in[0,T],x\in\mathcal{P}} |\phi(t,i,i^0,x)|\right\}. \] It is easy to check that $(\mathbb{V}, \|\cdot\|)$ is a Banach space. Next, we define $\mathbb{K}$ as the collection of elements of $\mathbb{V}$ such that the mappings $(t,x)\rightarrow \phi^0(t,i^0,x)$ are $L_{\phi^0}$-Lipschitz and the mappings $(t,x)\rightarrow \phi(t,i,i^0,x)$ are $L_{\phi}$-Lipschitz for all $i^0=1,\dots,M^0$ and $i=1,\dots,M$, where $L_{\phi^0}$ and $L_{\phi}$ are the constants specified in Hypothesis \ref{assumpopt4}. Clearly $\mathbb{K}$ is convex. Now consider the family $(\phi^0(\cdot, i^0, \cdot))_{(\phi^0,\phi)\in \mathbb{K}}$ of functions defined on $[0,T]\times\mathcal{P}$. Thanks to the Lipschitz property, we see immediately that the family is equicontinuous and pointwise bounded. Therefore, by the Arzel\`a-Ascoli theorem, the family is relatively compact with respect to the uniform norm. Repeating this argument for all $i,i^0$, and noting that the Lipschitz bounds are preserved under uniform limits, we see that $\mathbb{K}$ is compact under the norm $\|\cdot\|$. Moreover, thanks to Hypotheses \ref{assumpopt2} - \ref{assumpopt4}, we obtain easily that $\mathbb{K}$ is stable under $\mathcal{R}$. \vspace{3mm} \noindent(ii) It remains to show that $\mathcal{R}$ is a continuous mapping. We use the following lemma: \begin{lemma}\label{lemexistencenash} Let $(\phi^0_n, \phi_n)$ be a sequence in $\mathbb{K}$ converging to $(\phi^0, \phi)$ in $\|\cdot\|$, and denote by $V^{0}_n$ and $V_n$ the value functions of the major and representative minor players associated with $(\phi^0_n, \phi_n)$. Then $V^{0}_n$ and $V_n$ converge uniformly to $V^0$ and $V$ respectively, where $V^0$ and $V$ are the value functions of the major player and the representative minor player associated with $(\phi^0, \phi)$. \end{lemma} The proof of the lemma uses standard arguments from the theory of viscosity solutions. We give it in the Appendix. Now the continuity of the mapping $\mathcal{R}$ follows readily from Lemma \ref{lemexistencenash}, Proposition \ref{optimalresponseprop} and Hypotheses \ref{assumpopt2} \& \ref{assumpopt3}. This completes the proof. \end{proof} \section{The Master Equation and the Verification Argument for Nash Equilibria} \label{se:master} If a Nash equilibrium exists and is given by feedback functions $\hat\phi^0$ for the major player and $\hat\phi$ for the minor players, these functions should also be equal to the respective minimizers of the Hamiltonians in the HJB equations of the optimization problems. This informal remark leads to a system of coupled PDEs with terminal conditions specified at $t=T$, which we expect to hold if the equilibrium exists. Now the natural question to ask is: if this system of PDEs has a solution, does this solution provide a Nash equilibrium?
The following result provides a verification argument: \begin{theorem}\label{verificationtheo} (Verification Argument) Assume that there exist two functions $\hat\phi^0: [0,T] \times \{1,\dots, M^0\} \times \mathcal{P}\ni(t, i^0, x) \rightarrow \hat\phi^0(t, i^0, x) \in A_0$ and $\hat\phi: [0,T] \times \{1,\dots, M\} \times \{1,\dots, M^0\} \times \mathcal{P}\ni(t, i, i^0, x) \rightarrow \hat\phi(t, i, i^0, x)\in A$ such that the system of PDEs in $(v^0, v)$: \begin{equation} \label{hjbmaster} \begin{aligned} &0=[\mathcal{G}^0_{\hat\phi^0,\hat\phi} v^0](t, i^0, x) + f^0(t, i^0, \hat\phi^0(t,i^0, x), x)\\ &\hskip 75pt v^0(T,i^0, x) = g^0(i^0, x),\qquad (i^0,x)\in\{1,\dots, M^0\}\times \mathcal{P}\\ &0 = [\mathcal{G}_{\hat\phi^0,\hat\phi,\hat\phi} v](t, i,i^0, x) + f(t,i, \hat\phi(t, i,i^0, x), i^0, \hat\phi^0(t,i^0, x), x)\\ &\hskip 75pt v(T,i,i^0, x) = g(i,i^0, x),\qquad (i,i^0,x)\in \{1,\dots, M\}\times\{1,\dots, M^0\}\times\mathcal{P} \end{aligned} \end{equation} admits a classical solution $(\hat V^{0}, \hat V)$ (i.e. the solutions are $\mathcal{C}^1$ in $t$ and $x$). Assume in addition that: \begin{equation} \label{minimizers} \begin{aligned} &\hat\phi^0(t,i^0,x) = \hat\alpha^0(t, i^0, x, \hat V^0(t,\cdot,x))\\ &\hat\phi(t,i,i^0,x) = \hat\alpha(t,i, i^0, x, \hat V(t,\cdot,\cdot,x)) \end{aligned} \end{equation} Then $\hat\phi^0$ and $\hat\phi$ form a Nash equilibrium, and $\hat V^{0}(0,X_0^0, \mu_0)$ and $\hat V(0, X_0, X_0^0, \mu_0)$ are the equilibrium expected costs of the major and minor players. \end{theorem} \begin{proof} We show that $\hat\phi^0 = \boldsymbol{\phi}^{0*}(\hat \phi)$ and $\hat\phi = \boldsymbol{\phi}^*(\hat \phi^0, \hat \phi)$. Notice first that $\hat\phi^0$ and $\hat\phi$ are Lipschitz strategies due to the regularity of $\hat V^0, \hat V$ and Hypotheses \ref{assumpopt2}-\ref{assumpopt3}. Consider the major player optimization problem where we let $\phi = \hat \phi$, and denote by $V^0_{\hat\phi}$ the corresponding value function. Then since $\hat V^0$ is a classical solution to (\ref{hjbmaster}) and because of (\ref{minimizers}), we deduce that $\hat V^0$ is a viscosity solution to the HJB equation (\ref{hjbmajor}) associated with the value function $V^0_{\hat\phi}$. By uniqueness of the viscosity solution, we conclude that $\hat V^0 = V^0_{\hat\phi}$. On the other hand, if we denote by $J^0_{\hat\phi^0,\hat\phi}$ the expected cost function of the major player when it uses the feedback function $\hat\phi^0$ and all the minor players use the strategy $\hat\phi$, then the fact that $\hat V^0$ is a classical solution to (\ref{hjbmaster}) implies that $\hat V^0$ is also a viscosity solution of (\ref{majorJpde}). Then by Proposition \ref{pdecharacterizepayoff} we have $J^0_{\hat\phi^0,\hat\phi} = \hat V^0$, and therefore $J^0_{\hat\phi^0,\hat\phi} = V^0_{\hat\phi} = \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0} J^0_{\phi^0,\hat\phi}$. This means that $\hat \phi^0$ is the best response of the major player to the minor players using the feedback function $\hat\phi$. \vskip 2pt For the optimization problem of the representative minor player, we use the same argument based on the uniqueness of viscosity solutions to obtain $J_{\hat\phi^0,\hat\phi,\hat\phi}= \hat V = V_{\hat\phi^0,\hat\phi} = \inf_{\bar\boldsymbol{\alpha}\leftrightarrow\bar\phi} J_{\hat\phi^0,\hat\phi,\bar\phi}.$ This implies that $\hat\phi$ is the representative minor player's best response to the major player using the feedback function $\hat\phi^0$ and the rest of the minor players using $\hat\phi$.
We conclude that $\hat\phi^0$ and $\hat\phi$ form the desired fixed point of the best response map. \end{proof} It is important to keep in mind that the above verification argument based on the Master equation does not speak to the problem of existence of Nash equilibria. However, it provides a convenient way to compute the equilibrium numerically via the solution of a coupled system of first-order PDEs. \section{Propagation of Chaos and Approximate Nash Equilibria} \label{se:chaos} In this section we show that in the $(N+1)$-player game (see the description in Section \ref{se:finite}), when the major player and each minor player apply the respective equilibrium strategies of the mean field game, the system is in an approximate Nash equilibrium. To uncover this link, we first revisit the $(N+1)$-player game. We show that for a certain strategy profile, the expected cost of an individual player in the finite player game converges to that of the mean field game. Our argument is largely similar to the one used in proving the convergence of numerical schemes for viscosity solutions. One crucial intermediate result we use here is the gradient estimate for the value functions of the $(N+1)$-player game. Similar results were proved in \cite{GomesMohrSouza_continuous} for discrete-state mean field games without a major player. As a byproduct of the proof, we can also conclude that the value function of the mean field game is Lipschitz in the measure argument. In the rest of the section, we assume that Hypothesis \ref{assumpopt1} is in force. \subsection{Back to the $(N+1)$-Player Game} In this subsection, we focus on the game with a major player and $N$ minor players. We show that both the expected costs of the individual players and the value functions of the players' optimization problems can be characterized by coupled systems of ODEs, and that their gradients are bounded by a constant independent of $N$. Such a gradient estimate will be crucial in establishing the results on propagation of chaos, as well as the regularity of the value functions of the limiting mean field game. We start from the major player's optimization problem. Consider a strategy profile where the major player chooses a Lipschitz feedback function $\phi^0$ and all the $N$ minor players choose the same Lipschitz feedback function $\phi$. Recall that the process comprising the major player's state and the empirical distribution of the states of the minor players, say $(X_t^{0,N}, \mu_t^N)$, is a finite-state Markov process in the space $\{1,\dots, M^0\} \times \mathcal{P}^N$, where $\mathcal{P}^N := \{\frac{1}{N}(k_1, \dots, k_{M-1}) | \sum_{i} k_i \le N, k_i \in \mathbb{N}\}$. Its infinitesimal generator $\mathcal{G}_{\phi^0,\phi}^{0, N}$ was given by (\ref{generatorNmajor}). The expected cost to the major player is given by: \[ J^{0,N}_{\phi^0,\phi}(t,i^0,x) := \mathbb{E}\left[\int_{t}^{T} f^0_{\phi^0}(s, X_s^{0,N}, \mu_s^N) ds + g^0(X_T^{0,N}, \mu_T^N) | \; X_t^{0,N} = i^0, \mu_t^N = x\right] \] and the value function of the major player's optimization problem by: \[ V^{0,N}_{\phi}(t,i^0,x) := \inf_{\boldsymbol{\alpha}^0\leftrightarrow\phi^0 \in\mathbb{A}^0}J^{0,N}_{\phi^0,\phi}(t,i^0,x). \] Despite the notation, $J^{0,N}_{\phi^0,\phi}$ can be viewed as a function defined on $[0,T]$ with values given by vectors indexed by $(i^0, x)$. The following result shows that $J^{0,N}_{\phi^0,\phi}$ is characterized by a coupled system of ODEs.
\begin{proposition}\label{odemajorpayoffprop} Let $\phi^0\in\mathbb{L}^0$ and $\phi\in\mathbb{L}$. Then $J^{0,N}_{\phi^0,\phi}$ is the unique classical solution of the system of ODEs: \begin{equation}\label{odemajorpayoff} \begin{aligned} 0 =&\; \dot\theta(t,i^0,x) + f^0_{\phi^0}(t, i^0, x) + \sum_{j^0, j^0 \neq i^0} (\theta(t, j^0, x) - \theta(t, i^0, x)) q^0_{\phi^0}(t, i^0, j^0, x)\\ &\;+ \sum_{(i,j), j\neq i}(\theta(t, i^0, x + \frac{1}{N}e_{ij} ) -\theta(t, i^0, x )) N x_i q_{\phi^0, \phi}(t, i, j, i^0, x)\\ 0 =&\; \theta(T,i^0,x) - g^0(i^0, x) \end{aligned} \end{equation} \end{proposition} \begin{proof} The existence and uniqueness of the solution to (\ref{odemajorpayoff}) is an easy consequence of the Lipschitz property of the functions $f^0, q, q^0, \phi^0,\phi$ and the Cauchy-Lipschitz theorem. The fact that $J^{0,N}_{\phi^0,\phi}$ is a solution to (\ref{odemajorpayoff}) follows from Dynkin's formula. \end{proof} We state without proof the similar result for $V^{0,N}_{\phi}$. \begin{proposition}\label{odemajorvalueprop} If Hypotheses \ref{assumpopt1} - \ref{assumpopt3} hold and $\phi$ is a Lipschitz strategy, then $V^{0,N}_{\phi}$ is the unique classical solution of the system of ODEs: \begin{equation}\label{odemajorvalue} \begin{aligned} 0 =&\; \dot\theta(t,i^0,x) + \inf_{\alpha^0 \in A^0} \{ f^0(t, i^0,\alpha^0, x) + \sum_{j^0, j^0 \neq i^0} (\theta(t, j^0, x) - \theta(t, i^0, x)) q^0(t, i^0, j^0, \alpha^0, x)\}\\ &+ \sum_{(i,j), j\neq i}(\theta(t, i^0, x + \frac{1}{N}e_{ij} ) -\theta(t, i^0, x )) N x_i q(t, i, j, \phi(t, i, i^0, x), i^0, x)\\ 0 =&\; \theta(T,i^0,x) - g^0(i^0, x) \end{aligned} \end{equation} \end{proposition} The following estimates for $J^{0,N}_{\phi^0,\phi}$ and $V^{0,N}_{\phi}$ will play a crucial role in proving convergence to the solution of the mean field game. Their proofs are postponed to the appendix. \begin{proposition}\label{majorNproperty} For all Lipschitz strategies $\phi^0,\phi$, there exists a constant $L$ only depending on $T$ and the Lipschitz constants and bounds of $\phi^0,\phi, q^0,q,f^0, g^0$ such that for all $N>0$, $(t,i^0, x)\in [0,T]\times \{1,\dots, M^0\}\times\mathcal{P}^N$ and $j,k\in\{1,\cdots,M\}, j\neq k$, we have: \[ |J^{0,N}_{\phi^0,\phi}(t,i^0,x)|\le \|g^0\|_{\infty} + T \|f^0\|_{\infty}, \quad\text{and}\quad |J^{0,N}_{\phi^0,\phi}(t,i^0,x+\frac{1}{N}e_{jk}) - J^{0,N}_{\phi^0,\phi}(t,i^0,x)|\le \frac{L}{N}. \] \end{proposition} \begin{proposition}\label{majorvalueodeproperty} For each Lipschitz feedback function $\phi$ with Lipschitz constant $L_\phi$, there exist constants $C_0, C_1, C_2, C_3, C_4 >0$ only depending on $M^0$, $M$, the Lipschitz constants and bounds of $q^0,q,f^0, g^0$, such that for all $N>0$, $(t,i^0, x)\in [0,T]\times \{1,\dots, M^0\}\times\mathcal{P}^N$ and $j,k\in\{1,\cdots,M\}, j\neq k$, we have: \[ |V^{0,N}_{\phi}(t,i^0,x+\frac{1}{N}e_{jk}) - V^{0,N}_{\phi}(t,i^0,x)|\le \frac{C_0 + C_1 T + C_2 T^2}{N}\exp[(C_3 + C_4 L_\phi)T]. \] \end{proposition}
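Although we will not use it in the proofs, it is worth noting that (\ref{odemajorpayoff}) and (\ref{odemajorvalue}) are finite systems of ODEs indexed by $\{1,\dots,M^0\}\times\mathcal{P}^N$, and can therefore be integrated backward in time directly. The following minimal sketch (in Python) illustrates this for (\ref{odemajorpayoff}) in a hypothetical example with $M^0 = M = 2$, where the empirical measure reduces to the fraction $x$ of minor players in state $1$; all model data ($q^0$, $q$, the frozen cost $f^0_{\phi^0}$ and the terminal cost $g^0$) are placeholder choices, not taken from the text.
\begin{verbatim}
import numpy as np

# Hypothetical toy instance with M0 = M = 2: the empirical measure is
# summarized by x in {0, 1/N, ..., 1}, the fraction of minor players in
# state 1.  The feedback functions are frozen, so they are absorbed into
# the placeholder rate and cost functions below.
N = 50
xs = np.arange(N + 1) / N            # the grid P^N
T, n_steps = 1.0, 2000
dt = T / n_steps

def q0(t, i0, j0, x):                # major player's jump rate i0 -> j0
    return 1.0 + 0.5 * x
def q(t, i, j, i0, x):               # a minor player's jump rate i -> j
    return 1.0 + 0.2 * i0
def f0(t, i0, x):                    # running cost f^0_{phi^0}
    return (x - 0.5 * i0) ** 2
def g0(i0, x):                       # terminal cost g^0
    return x * i0

# theta[i0, m] approximates the solution at the grid point (i0, m/N).
theta = np.array([[g0(i0, x) for x in xs] for i0 in (0, 1)])

# Explicit Euler, backward in time: theta(t - dt) = theta(t) + dt * RHS,
# where RHS is the right-hand side of the system (odemajorpayoff).
for k in range(n_steps):
    t = T - k * dt
    rhs = np.empty_like(theta)
    for i0 in (0, 1):
        for m, x in enumerate(xs):
            val = f0(t, i0, x) \
                + (theta[1 - i0, m] - theta[i0, m]) * q0(t, i0, 1 - i0, x)
            if m < N:   # a minor player jumps 2 -> 1, so x -> x + 1/N
                val += (theta[i0, m + 1] - theta[i0, m]) * N * (1 - x) * q(t, 2, 1, i0, x)
            if m > 0:   # a minor player jumps 1 -> 2, so x -> x - 1/N
                val += (theta[i0, m - 1] - theta[i0, m]) * N * x * q(t, 1, 2, i0, x)
            rhs[i0, m] = val
    theta = theta + dt * rhs

print(theta[:, ::10])   # J^{0,N}(0, i0, x) on a coarse slice of the grid
\end{verbatim}
The value function system (\ref{odemajorvalue}) can be integrated with the same loop, replacing the running cost term by a pointwise minimization over $\alpha^0$.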
We now turn to the problem of the representative minor player. We consider a strategy profile where the major player uses a feedback function $\phi^0$, the first $(N-1)$ minor players use a feedback function $\phi$, and the remaining (de facto the representative) minor player uses the feedback function $\o\phi$. We recall that $(X_t^N, X_t^{0,N}, \mu_t^N)$ is a Markov process with infinitesimal generator $\mathcal{G}^{N}_{\phi^0,\phi,\o\phi}$ defined as in (\ref{generatorNminor}). We are interested in the representative minor player's expected cost: \begin{align*} J^{N}_{\phi^0,\phi,\o\phi}(t,i,i^0,x) :=&\; \mathbb{E}\bigg[\int_{t}^{T} f_{\phi^0, \o\phi}(s,X_s^N, X_s^{0,N},\mu_s^N) ds + g(X_T^N, X_T^{0,N}, \mu_T^N) | \; X_t^N = i, X_t^{0,N} = i^0, \mu_t^N = x\bigg] \end{align*} as well as the value function of the representative minor player's optimization problem: \[ V^{N}_{\phi^0,\phi}(t,i,i^0,x) := \inf_{\boldsymbol{\alpha}\leftrightarrow\o\phi\in\mathbb{A}}J^{N}_{\phi^0,\phi,\o\phi}(t,i,i^0,x). \] In full analogy with Propositions \ref{majorNproperty} and \ref{majorvalueodeproperty}, we state the following results without proof. \begin{proposition}\label{minorNproperty} For all Lipschitz feedback functions $\phi^0$, $\phi$ and $\o\phi$, there exists a constant $L$ only depending on $T$ and the Lipschitz constants and bounds of $\phi^0,\phi,\o\phi, q^0,q,f, g$ such that for all $N>0$, $(t,i,i^0, x)\in [0,T]\times \{1,\dots, M\}\times\{1,\dots, M^0\}\times\mathcal{P}^N$ and $j,k\in\{1,\cdots,M\}, j\neq k$, we have: \[ |J^{N}_{\phi^0,\phi,\o\phi}(t,i,i^0,x)|\le \|g\|_{\infty} + T \|f\|_{\infty}, \quad\text{and}\quad |J^{N}_{\phi^0,\phi,\o\phi}(t,i,i^0,x+\frac{1}{N}e_{jk}) - J^{N}_{\phi^0,\phi,\o\phi}(t,i,i^0,x)|\le \frac{L}{N}. \] \end{proposition} \begin{proposition}\label{minorvalueodeproperty} There exist constants $D_0, D_1, D_2, D_3, D_4, D_5 >0$ depending only on $M^0$, $M$, the Lipschitz constants and bounds of $q^0,q,f, g$ such that for all Lipschitz feedback functions $\phi^0$ and $\phi$ with Lipschitz constants $L_{\phi^0}$ and $L_\phi$ respectively, and for all $N>0$, $(t,i,i^0, x)\in [0,T]\times \{1,\dots, M\}\times\{1,\dots, M^0\}\times\mathcal{P}^N$ and $j,k\in\{1,\cdots,M\}, j\neq k$, we have: \[ |V^{N}_{\phi^0,\phi}(t,i,i^0,x+\frac{1}{N}e_{jk}) - V^{N}_{\phi^0,\phi}(t,i,i^0,x)|\le \frac{D_0 + D_1 T + D_2 T^2 + D_3 L_{\phi^0} T}{N}\exp[(D_4 + D_5 L_\phi)T]. \] \end{proposition} \subsection{Propagation of Chaos} We now prove two important limiting results. They are related to the propagation of chaos in the sense that they identify the limiting behavior of an individual player interacting with the mean field. First, we prove uniform convergence of the value functions of the individual players' optimization problems. Combined with the gradient estimates proven in the previous subsection, this establishes the Lipschitz property of the value functions in the mean field limit. Second, we prove that the expected costs of the individual players in the $(N+1)$-player game converge to their mean field limits at the rate $N^{-1/2}$. This will help us show that the Nash equilibrium of the mean field game provides approximate Nash equilibria for the finite player games. \begin{theorem}\label{conv1} For all Lipschitz strategies $\phi^0,\phi$, we have: \begin{align} \sup_{t,i^0,x} |V^0_{\phi}(t,i^0,x) - V^{0,N}_{\phi}(t,i^0,x)| \rightarrow 0,& \;\;\;N\rightarrow +\infty \label{convmajorvalue}\\ \sup_{t,i,i^0,x} |V_{\phi^0,\phi}(t,i,i^0,x) - V^{N}_{\phi^0,\phi}(t,i,i^0,x)| \rightarrow 0,& \;\;\;N\rightarrow +\infty \label{convminorvalue} \end{align} \end{theorem} \begin{proof} We only provide a proof for (\ref{convmajorvalue}), as (\ref{convminorvalue}) can be shown in exactly the same way. \noindent (i) Fix a Lipschitz strategy $\phi$ for the minor players. To simplify the notation, we set $v_N := V_{\phi}^{0, N}$.
Notice that $(t,i^0,x)\rightarrow v_N(t,i^0,x)$ is only defined on $[0,T]\times\{1,\dots,M^0\}\times \mathcal{P}^N$, so our first step is to extend the domain of $v_N$ to $[0,T]\times\{1,\dots,M^0\}\times\mathcal{P}$. This can be done by considering the linear interpolation of $v_N$. More specifically, for any $x\in\mathcal{P}$, we denote by $x_k, k=1,\dots, 2^{M-1}$ the $2^{M-1}$ closest neighbors of $x$ in the set $\mathcal{P}^N$. There exist nonnegative constants $\alpha_k, k=1,\dots,2^{M-1}$, summing to one, such that $x = \sum_{k=1}^{2^{M-1}} \alpha_k x_k$. We then define the extension, still denoted by $v_N$, to be $v_N(t,i^0,x):=\sum_{k=1}^{2^{M-1}} \alpha_k v_N(t,i^0,x_k)$. It is straightforward to verify that $v_N$ is continuous in $(t,x)$, Lipschitz in $x$ uniformly in $(t,i^0)$, and $C^1$ in $t$. Using the boundedness and Lipschitz property of $v_N, f^0, q^0, q,\phi$, we obtain the straightforward estimates: \begin{equation}\label{convest1} \begin{aligned}\allowdisplaybreaks \frac{L}{N} \ge&\; \bigl |\dot v_N(t,i^0,x) + \inf_{\alpha^0\in A^0}\{f^0(t, i^0, \alpha^0, x) + \sum_{j^0, j^0 \neq i^0} (v_N(t, j^0, x) - v_N(t, i^0, x)) q^0(t, i^0, j^0, \alpha^0, x)\}\\ &+ \sum_{(i,j), j\neq i}(v_N(t, i^0, x + \frac{1}{N}e_{ij} ) -v_N(t, i^0, x )) N x_i q(t, i, j, \phi(t, i, i^0, x), i^0, x)\bigr | \end{aligned} \end{equation} \begin{equation}\label{convest2} \frac{L}{N} \ge \; |v_N(T,i^0,x) - g^0(i^0, x)| \end{equation} where the constant $L$ only depends on the bounds and Lipschitz constants of $f^0, q^0, q$ and $\phi$. \vspace{3mm} \noindent (ii) Now let us denote $\bar v(t,i^0,x):={\lim\sup}^*v_N(t,i^0,x)$ and $\munderbar v(t,i^0,x):={\lim\inf}_*v_N(t,i^0,x)$; see the proof of Lemma \ref{lemexistencenash} in the Appendix for the definitions of the operators ${\lim\sup}^*$ and ${\lim\inf}_*$. We show that $\bar v$ and $\munderbar v$ are a viscosity subsolution and a viscosity supersolution, respectively, of the HJB equation (\ref{hjbmajor}) of the major player. Recall that we assume here that $q$ does not depend on $\alpha$. Then, since $V_{\phi}^{0}$ is also a viscosity solution to (\ref{hjbmajor}), the comparison principle allows us to conclude that $\bar v(t,i^0,x) = \munderbar v(t,i^0,x) = V_{\phi}^{0}(t,i^0,x)$, and the uniform convergence follows by standard arguments. \vspace{3mm} \noindent (iii) It remains to show that $\bar v$ is a viscosity subsolution to the PDE (\ref{hjbmajor}). The proof that $\munderbar v$ is a viscosity supersolution can be done in exactly the same way. Let $\theta$ be a smooth function and $(\bar t,i^0,\bar x)\in [0,T]\times \{1,\dots, M^0\} \times \mathcal{P}$ be such that $(t,x)\rightarrow\bar v(t,i^0,x) - \theta(t,x)$ has a maximum at $(\bar t,\bar x)$ and $\bar v(\bar t,i^0,\bar x) =\theta(\bar t,\bar x)$. Then by Lemma 6.1 in \cite{lions}, there exist sequences $N_n\rightarrow +\infty$, $t_n\rightarrow \bar t$, $x_n\rightarrow \bar x$ such that for each $n$, the mapping $(t,x) \rightarrow v_{N_n}(t,i^0,x) - \theta(t,x)$ attains a maximum at $(t_n, x_n)$ and $\delta_n := v_{N_n}(t_n,i^0,x_n) - \theta(t_n,x_n)\rightarrow 0$. Extracting a subsequence if necessary, we may assume that $v_{N_n}(t_n,\cdot,x_n) \rightarrow (r_1, \dots, r_{M^0})$, where $r_{j^0} \le \bar v(\bar t,j^0,\bar x)$ and $r_{i^0} = \bar v(\bar t,i^0,\bar x)$. If $\bar t=T$, then $\bar v (\bar t, i^0, \bar x) \le g^0(i^0, \bar x)$ follows easily from (\ref{convest2}). Now assume that $\bar t<T$. Extracting a subsequence if necessary, we may assume that $t_n<T$ for all $n$. Then by maximality we have $\partial_t \theta(t_n,x_n) = \partial_t v_{N_n}(t_n,i^0,x_n)$.
Again by maximality, we have for all $i,j = 1,\dots, M, i\neq j$: \[ v_{N_n}(t_n, i^0, x_n + \frac{1}{N_n}e_{i,j}) - v_{N_n}(t_n, i^0, x_n) \le \theta(t_n, x_n + \frac{1}{N_n}e_{i,j}) - \theta(t_n, x_n). \] Injecting the above inequalities into the estimate (\ref{convest1}) and using the positivity of $q$, we obtain: \begin{align*} -\frac{L}{N_n} \le&\; \partial_t\theta(t_n,x_n) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t_n, i^0, \alpha^0, x_n) + \sum_{j^0, j^0 \neq i^0} (v_{N_n}(t_n, j^0, x_n) - v_{N_n}(t_n, i^0, x_n)) q^0(t_n, i^0, j^0, \alpha^0, x_n)\bigg\}\\ &+ \sum_{(i,j), j\neq i}(\theta(t_n, x_n + \frac{1}{N_n}e_{ij} ) -\theta(t_n, x_n )) N_n (x_n)_i q(t_n, i, j, \phi(t_n, i, i^0, x_n), i^0, x_n). \end{align*} Taking the limit in $n$, we obtain: \begin{align*} 0 \le&\; \partial_t\theta(\bar t,\bar x) + \inf_{\alpha^0\in A^0}\{f^0(\bar t, i^0, \alpha^0, \bar x) + \sum_{j^0, j^0 \neq i^0} (r_{j^0} - r_{i^0}) q^0(\bar t, i^0, j^0, \alpha^0, \bar x)\}\\ &+ \sum_{(i,j), j\neq i}(\mathbbm{1}_{j\neq M} \partial_{x_j}\theta(\bar t, \bar x)- \mathbbm{1}_{i\neq M} \partial_{x_i}\theta(\bar t, \bar x)) \bar x_i q(\bar t, i, j, \phi(\bar t, i, i^0, \bar x), i^0, \bar x). \end{align*} Now, since $q^0$ is positive, $r_{j^0} \le \bar v(\bar t,j^0,\bar x)$ for all $j^0 \neq i^0$ and $r_{i^0} = \bar v(\bar t,i^0,\bar x)$, we have: \begin{align*} &\;\;\inf_{\alpha^0\in A^0}\{f^0(\bar t, i^0, \alpha^0, \bar x) + \sum_{j^0, j^0 \neq i^0} (r_{j^0} - r_{i^0}) q^0(\bar t, i^0, j^0, \alpha^0, \bar x)\} \\ \le&\;\;\inf_{\alpha^0\in A^0}\{f^0(\bar t, i^0, \alpha^0, \bar x) + \sum_{j^0, j^0 \neq i^0} (\bar v(\bar t,j^0,\bar x) - \bar v(\bar t,i^0,\bar x)) q^0(\bar t, i^0, j^0, \alpha^0, \bar x)\}. \end{align*} The desired inequality for the viscosity subsolution follows immediately. This completes the proof. \end{proof} As an immediate consequence of the uniform convergence and the gradient estimates for the value functions $V_{\phi}^{0,N}$ and $V_{\phi^0,\phi}^{N}$, we have: \begin{corollary}\label{valuefunctionlipschitzconstant} Under Hypotheses \ref{assumpopt1}-\ref{assumpopt3}, for all Lipschitz feedback functions $\phi^0$ and $\phi$ with Lipschitz constants $L_{\phi^0}$ and $L_\phi$ respectively, the value functions $V_{\phi}^0$ and $V_{\phi^0,\phi}$ are Lipschitz in $(t,x)$. More specifically, there exist strictly positive constants $B$, $C_i, i=0,\dots,4$, $D_i, i= 0,\dots, 5$ that only depend on the bounds and Lipschitz constants of $f,f^0,g^0,g,q,q^0$ such that \begin{align} |V_{\phi}^0(t,i^0,x) - V_{\phi}^0(s,i^0,y)| \le& B |t-s| + (C_0 + C_1 T + C_2 T^2)\exp((C_3 + C_4 L_\phi)T)\|x - y\|\\ |V_{\phi^0,\phi}(t,i,i^0,x) - V_{\phi^0,\phi}(s,i,i^0,y)| \le& B |t-s| + (D_0 + D_1 T + D_2 T^2 + D_3 L_{\phi^0} T)\exp((D_4 + D_5 L_\phi)T)\|x - y\|. \end{align} \end{corollary} \begin{proof} The Lipschitz property in $x$ is an immediate consequence of Theorem \ref{conv1}, Proposition \ref{majorvalueodeproperty} and Proposition \ref{minorvalueodeproperty}. To prove the Lipschitz property in $t$, we remark that for each $N$, $V^{0,N}_{\phi}$ and $V^{N}_{\phi^0,\phi}$ are Lipschitz in $t$, uniformly in $N$. Indeed, $V^{0,N}_{\phi}$ is a classical solution of the system \eqref{odemajorvalue} of ODEs, so it is clear that $\frac{d}{dt}V^{0,N}_{\phi}$ is bounded in terms of the bounds of $f^0,g^0,q^0,q$. We deduce that $V^{0,N}_{\phi}$ is Lipschitz in $t$ with a Lipschitz constant that only depends on $M, M^0$ and the bounds of $f^0,g^0,q^0,q$.
By the convergence of $V^{0,N}_{\phi}$, we conclude that $V^{0}_{\phi}$ is also Lipschitz in $t$ and shares the Lipschitz constant of $V^{0,N}_{\phi}$. The same argument applies to $V_{\phi^0,\phi}$. \end{proof} \begin{remark}\label{remarkassumptionlipschitz} From Corollary \ref{valuefunctionlipschitzconstant}, we see that Hypothesis \ref{assumpopt4} holds when $T$ is sufficiently small. Indeed, we can first choose $L_{\phi^0} > L_a (1 + \max\{B, C_0\})$ and $L_\phi > L_b (1 + \max\{B, D_0\})$, and then choose $T$ sufficiently small, so that the Lipschitz constant of $V_{\phi}^0$ is smaller than $(L_{\phi^0}/L_a-1)$ and the Lipschitz constant of $V_{\phi^0, \phi}$ is smaller than $(L_\phi/L_b-1)$. \end{remark} We now state our second result on the propagation of chaos: the expected cost of an individual player in the $(N+1)$-player game converges to the expected cost in the mean field game at the rate $N^{-1/2}$. \begin{theorem}\label{propachaos} There exists a constant $L$ depending only on $T$ and the Lipschitz constants of $\phi^0$, $\phi$, $\o\phi$, $f$, $f^0$, $g$, $g^0$, $q$ and $q^0$ such that for all $N>0$, $t\le T$, $x\in\mathcal{P}^N$, $i=1,\dots, M$ and $i^0=1,\dots,M^0$, we have \begin{align*} |J_{\phi^0,\phi}^{0,N}(t,i^0,x) - J_{\phi^0,\phi}^0(t,i^0,x)| \le& L/\sqrt{N}\\ |J_{\phi^0,\phi,\o\phi}^{N}(t,i,i^0,x) - J_{\phi^0,\phi,\o\phi}(t,i,i^0,x)| \le& L/\sqrt{N}. \end{align*} \end{theorem} Our proof is based on standard techniques from the convergence rate analysis of numerical schemes for viscosity solutions of PDEs; cf.\ \cite{BrianiCamilliZidani} and \cite{BarlesSouganidis} for example. The key step of the proof is the construction of a smooth subsolution and a smooth supersolution of the PDEs (\ref{majorJpde}) and (\ref{minorJpde}) that characterize $J_{\phi^0,\phi}^0$ and $J_{\phi^0,\phi,\o\phi}$, respectively; see Proposition \ref{pdecharacterizepayoff}. We construct these solutions by mollifying an extended version of $J_{\phi^0,\phi}^{0,N}$, and we then derive the bound by using the comparison principle. In the following we detail the proof for the convergence rate of the major player's expected cost; the case of the generic minor player can be dealt with in exactly the same way. Since $J_{\phi^0,\phi}^{0,N}(t,i^0,x)$ is only defined for $x\in\mathcal{P}^N$, in order to mollify $J_{\phi^0,\phi}^{0,N}$ we first need to construct an extension of $J_{\phi^0,\phi}^{0,N}$ defined for all $x\in O$, for an open set $O$ containing $\mathcal{P}$. To this end, we consider the following system of ODEs: \begin{align*} 0 =&\; \dot\theta(t,i^0,x) + \tilde f^0(t, i^0, \tilde\phi^0(t, i^0, x), x) + \sum_{j^0, j^0 \neq i^0} (\theta(t, j^0, x) - \theta(t, i^0, x)) \tilde q^0(t, i^0, j^0, \tilde\phi^0(t, i^0, x), x)\\ &+ \sum_{(i,j), j\neq i}(\theta(t, i^0, x + \frac{1}{N}e_{ij} ) -\theta(t, i^0, x )) N \max\{x_i, 0\} \tilde q(t, i, j, \tilde\phi(t, i, i^0, x), i^0, \tilde\phi^0(t,i^0,x), x)\\ 0 =&\; \theta(T,i^0,x) - \tilde g^0(i^0, x) \end{align*} Here $\tilde\phi^0$, $\tilde\phi$, $\tilde f^0$, $\tilde g^0$, $\tilde q^0$ and $\tilde q$ are respectively extensions of $\phi^0$, $\phi$, $f^0$, $g^0$, $q^0$ and $q$ from $x\in\mathcal{P}$ to $x\in\mathbb{R}^{M-1}$, which are Lipschitz in $x$. The following is proved using the same arguments as for Proposition \ref{odemajorpayoffprop}. \begin{lemma}\label{propertyextension} The above system of ODEs admits a unique solution $v^N$ defined on $[0,T]\times \{1,\dots,M^0\}\times \mathbb{R}^{M-1}$.
Moreover, we have: \noindent (i) $v^N(t,i^0,x)$ is Lipschitz in $x$ uniformly in $t$ and $i^0$, and the Lipschitz constant only depends on $T$ and the Lipschitz constants of $\phi^0$, $\phi$, $f^0$, $g^0$, $q^0$. \noindent (ii) $v^N(t,i^0,x) = J_{\phi^0,\phi}^{0,N}(t,i^0,x)$ for all $x \in \mathcal{P}^N$. \end{lemma} To construct smooth super and sub solutions, we use a family of mollifiers $\rho_\epsilon$ defined by $\rho_\epsilon(x) := \rho(x/\epsilon)/\epsilon^{M-1}$, where $\rho$ is a smooth and positive function with compact support in the unit ball of $\mathbb{R}^{M-1}$ satisfying $\int_{\mathbb{R}^{M-1}} \rho(x) dx = 1$. For $\epsilon>0$, we define $v^{N}_\epsilon$ as the mollification of $v^N$ on $[0,T]\times\{1,\dots, M^0\}\times \mathcal{P}$: \[ v^{N}_\epsilon(t,i^0,x) := \int_{y\in\mathbb{R}^{M-1}} v^N(t,i^0,x-y)\rho_\epsilon(y) dy. \] Using the Lipschitz property of $\phi^0$, $\phi$, $f^0$, $g^0$, $q^0$ and straightforward estimates on the mollifier $\rho_\epsilon$, we obtain the following properties of $v^{N}_\epsilon$. \begin{lemma}\label{molifyest} $v^{N}_\epsilon$ is $C^{\infty}$ in $x$ and $C^1$ in $t$. Moreover, there exists a constant $L$ that depends only on $T$ and the Lipschitz constants of $\phi^0$, $\phi$, $f^0$, $g^0$ and $q^0$ such that for all $i^0=1,\dots, M^0$, $i=1,\dots, M$, $t\le T$ and $x,y\in\mathcal{P}$, the following estimates hold: \begin{align} L\epsilon \ge&\;| [\mathcal{G}_{\phi^0,\phi}^{0, N}v^{N}_\epsilon](t,i^0,x)+ f^0(t, i^0,\phi^0(t,i^0,x),x)|\label{molifyest1}\\ L\epsilon \ge&\; |v^{N}_\epsilon(T,i^0,x) - g^0(i^0, x)|\label{molifyest2}\\ \frac{L}{\epsilon}\|x-y\| \ge&\; |\partial_{x_i}v^{N}_\epsilon(t,i^0,x) - \partial_{x_i}v^{N}_\epsilon(t,i^0, y)|. \label{molifyest3} \end{align} \end{lemma} We are now ready to prove Theorem \ref{propachaos}. We construct viscosity super and sub solutions by adjusting $v^{N}_\epsilon$ with a linear function of $t$; the comparison principle then allows us to conclude. \begin{proof}(of Theorem \ref{propachaos}) We denote by $L$ a generic constant that only depends on $T$ and the Lipschitz constants of $\phi^0$, $\phi$, $f^0$, $g^0$ and $q^0$. Using (\ref{molifyest1}), (\ref{molifyest2}) and (\ref{molifyest3}) in Lemma \ref{molifyest}, we obtain: \begin{align*} L(\epsilon + \frac{1}{N\epsilon})\ge\;&|[\mathcal{G}_{\phi^0,\phi}^{0}v^{N}_\epsilon](t,i^0,x)+ f^0(t, i^0,\phi^0(t,i^0,x),x)|\\ L\epsilon\ge\;&|v^{N}_\epsilon(T,i^0, x) - g^0(i^0, x)|. \end{align*} Next we define: \[ v_{\pm}^{N}(t,i^0,x) :=v^{N}_\epsilon(t,i^0,x) \pm [ L(\epsilon + \frac{1}{N\epsilon})(T-t)+L\epsilon]. \] Since $v_{-}^{N}$ and $v_{+}^{N}$ are smooth, the above estimates immediately imply that $v_{-}^{N}$ and $v_{+}^{N}$ are viscosity sub and super solutions of the PDE (\ref{majorJpde}), respectively. By Proposition \ref{pdecharacterizepayoff}, $J^0_{\phi^0,\phi}$ is a continuous viscosity solution to the PDE (\ref{majorJpde}). Then by the comparison principle we have $v_{-}^{N}\le J^0_{\phi^0,\phi} \le v_{+}^{N}$, which implies: \[ |v^{N}_\epsilon(t,i^0,x) - J^0_{\phi^0,\phi}(t,i^0,x)| \le L(\epsilon + \frac{1}{N\epsilon})(T-t)+L\epsilon. \] Now, using the properties of the mollifier and Lemma \ref{propertyextension}, we have for all $t\le T$, $i^0=1,\dots,M^0$ and $x\in\mathcal{P}^N$: \[ |v^{N}_\epsilon(t,i^0,x) - J^{0,N}_{\phi^0,\phi}(t,i^0,x)| = |v^{N}_\epsilon(t,i^0,x) - v^{N}(t,i^0,x)|\le L\epsilon. \] The desired results follow by combining the above inequalities and choosing $\epsilon = 1/\sqrt{N}$.
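Explicitly, combining the two bounds via the triangle inequality gives $|J^{0,N}_{\phi^0,\phi}(t,i^0,x) - J^{0}_{\phi^0,\phi}(t,i^0,x)| \le L(\epsilon + \frac{1}{N\epsilon})T + 2L\epsilon$, and the map $\epsilon \mapsto \epsilon + \frac{1}{N\epsilon}$ attains its minimum $2/\sqrt{N}$ at $\epsilon = 1/\sqrt{N}$, so that
\[
|J^{0,N}_{\phi^0,\phi}(t,i^0,x) - J^{0}_{\phi^0,\phi}(t,i^0,x)| \;\le\; \frac{2LT}{\sqrt{N}} + \frac{2L}{\sqrt{N}} \;=\; \frac{2L(T+1)}{\sqrt{N}},
\]
which is the announced $N^{-1/2}$ rate.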
\end{proof} \subsection{Approximate Nash Equilibria} The following is an immediate consequence of the above propagation of chaos results. \begin{theorem} Assume that the mean field game attains a Nash equilibrium when the major player chooses a Lipschitz strategy $\hat\phi^0$ and all the minor players choose a Lipschitz strategy $\hat\phi$, and denote by $L_0$ a common Lipschitz constant for $\hat\phi^0$ and $\hat\phi$. Then for any $L\ge L_0$, $(\hat\phi^0,\hat\phi)$ is an approximate Nash equilibrium within the class of $L$-Lipschitz strategies. More specifically, there exist a constant $C>0$ and $N_0\in\mathbb{N}$ depending on $L$ such that for all $N\ge N_0$ and all $L$-Lipschitz feedback functions $\phi^0$ for the major player and $\phi$ for a minor player, we have \begin{align*} J^{0,N}(\hat\phi^0, \hat\phi,\dots, \hat\phi) \le\;\;& J^{0,N}(\phi^0, \hat\phi,\dots, \hat\phi) + C/\sqrt{N}\\ J^{N}(\hat\phi^0, \hat\phi,\dots, \hat\phi) \le\;\;& J^{N}(\hat\phi^0, \hat\phi,\dots, \phi, \dots \hat\phi) + C/\sqrt{N} \end{align*} \end{theorem} \section{Appendix} \label{se:appendix} \subsection{Proof of Lemma \ref{dynamicprogoperator}} Recall the dynamic programming operator defined in (\ref{dynamicprogoperatordef}). Let $\Phi$ be a mapping on $[0,T]\times\{1,\dots,M^0\}\times\mathcal{P}$ such that $\Phi(\cdot, i^0, \cdot)$ is $\mathcal{C}^1$ on $[0,T]\times\mathcal{P}$ and $\Phi(\cdot, j^0, \cdot)$ is continuous on $[0,T]\times\mathcal{P}$ for $j^0\neq i^0$. We are going to evaluate the following limit: \[ \lim_{h\rightarrow 0}\frac{1}{h}\left\{[\mathcal{T}_{t,t+h} \Phi(t+h, \cdot,\cdot)](i^0, x) - \Phi(t,i^0,x)\right\}:=\lim_{h\rightarrow 0}I_h \] (i) Let us first assume that $\Phi(\cdot, j^0, \cdot)$ is $\mathcal{C}^1$ on $[0,T]\times\mathcal{P}$ for all $j^0$. Consider a constant control $\alpha^0$; then, by definition of the operator, we have: \[ I_h\le \frac{1}{h}\left\{\mathbb{E}\left[\int_{t}^{t+h}f^0(u, X_u^0, \alpha^0, \mu_u)du + \Phi(t+h, X_{t+h}^0, \mu_{t+h})|X_t^0=i^0, \mu_t = x \right] - \Phi(t, i^0, x)\right\} \] Using the infinitesimal generator of the process $(X_u^0, \mu_u)$, the right-hand side of the above inequality has a limit as $h\rightarrow 0$.
Taking the limit and then the infimum over $\alpha^0$, we obtain: \begin{align*} \limsup_{h\rightarrow 0} I_h\le&\;\partial_t \Phi(t,i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + (1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} \Phi(t,i^0, x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &+\;\sum_{i,j=1}^{M-1} \partial_{x_j} \Phi(t,i^0, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0, \alpha^0, x)+ \sum_{j^0\neq i^0} [\Phi(t,j^0, x) - \Phi(t,i^0, x)] q^0(t, i^0, j^0, \alpha^0, x)\bigg\} \end{align*} On the other hand, for all $h>0$, there exists a control $\phi^{0}_h$ such that \[ \mathbb{E}\left[\int_{t}^{t+h}f^0(u, X_u^0, \phi_h^0(u, X_u^0, \mu_u), \mu_u)du + \Phi(t+h, X_{t+h}^0, \mu_{t+h})|X_t^0=i^0, \mu_t = x \right] \le [\mathcal{T}_{t,t+h} \Phi(t+h, \cdot,\cdot)](i^0, x) + h^2 \] This implies: \[ I_h \ge \frac{1}{h}\left\{\mathbb{E}\left[\int_{t}^{t+h}f^0(u, X_u^0, \phi^0_h(u, X_u^0, \mu_u), \mu_u)du + \Phi(t+h, X_{t+h}^0, \mu_{t+h})|X_t^0=i^0, \mu_t = x \right] - \Phi(t, i^0, x)\right\} - h \] Since $\Phi(\cdot, j^0, \cdot)$ is $\mathcal{C}^1$ for all $j^0$, this can be further written using the infinitesimal generator: \begin{align*} I_h \ge\;\;&\; \frac{1}{h}\mathbb{E}\left[\int_{t}^{t+h}f^0(u, X_u^0, \phi^0_h(u, X_u^0, \mu_u), \mu_u) + [\mathcal{G}^0_{\phi^0_h, \phi}\Phi](u, X_u^0, \mu_u) du|X_t^0=i^0, \mu_t = x \right] -h \end{align*} Bounding the integrand from below by the infimum over $\alpha^0\in A^0$ and applying the Dominated Convergence Theorem, we obtain: \begin{align*} \;\liminf_{h\rightarrow 0} I_h\ge&\;\partial_t \Phi(t,i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + (1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} \Phi(t,i^0, x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &+\;\sum_{i,j=1}^{M-1} \partial_{x_j} \Phi(t,i^0, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0, \alpha^0, x)+ \sum_{j^0\neq i^0} [\Phi(t,j^0, x) - \Phi(t,i^0, x)] q^0(t, i^0, j^0, \alpha^0, x)\bigg\} \end{align*} This proves the lemma for $\Phi$ such that $\Phi(\cdot, j^0, \cdot)$ is $\mathcal{C}^1$ for all $j^0$. \vspace{3mm} \noindent(ii) Now take a continuous mapping $\Phi$ and only assume that $\Phi(\cdot, i^0, \cdot)$ is $\mathcal{C}^1$. Applying the Weierstrass approximation theorem, for any $\epsilon>0$ there exist $\mathcal{C}^1$ functions $\phi^{\epsilon}_{j^0}$ on $[0,T]\times\mathcal{P}$, for all $j^0\neq i^0$, such that \[ \sup_{(t,x)\in[0,T]\times\mathcal{P}} | \Phi(t,j^0,x) - \phi^{\epsilon}_{j^0}(t,x)| \le \epsilon \] Define $\Phi^{\epsilon}(t,j^0,x):=\phi^{\epsilon}_{j^0}(t,x) + \epsilon$ for $j^0\neq i^0$ and $\Phi^{\epsilon}(t,i^0,x):= \Phi(t,i^0,x)$. Then we have $\Phi^{\epsilon} \ge \Phi$.
By the monotonicity of the operator $\mathcal{T}_{t,t+h}$ we have: \[ \frac{1}{h}\left\{[\mathcal{T}_{t,t+h} \Phi^\epsilon(t+h, \cdot,\cdot)](i^0, x) - \Phi^\epsilon(t,i^0,x)\right\}\ge\frac{1}{h}\left\{[\mathcal{T}_{t,t+h} \Phi(t+h, \cdot,\cdot)](i^0, x) - \Phi(t,i^0,x)\right\} :=I_h \] Now, applying the result from step (i), we obtain {\allowdisplaybreaks \begin{align*} &\;\limsup_{h\rightarrow 0}I_h \\ \le&\; \partial_t \Phi^\epsilon(t,i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + (1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} \Phi^\epsilon(t,i^0, x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &+\;\sum_{i,j=1}^{M-1} \partial_{x_j} \Phi^\epsilon(t,i^0, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0, \alpha^0, x)+ \sum_{j^0\neq i^0} [\Phi^\epsilon(t,j^0, x) - \Phi^\epsilon(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\bigg\}\\ =&\; \partial_t \Phi(t,i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + (1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} \Phi(t,i^0, x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &+\;\sum_{i,j=1}^{M-1} \partial_{x_j} \Phi(t,i^0, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0, \alpha^0, x)\\ &+\;\sum_{j^0\neq i^0} [\Phi^\epsilon(t,j^0, x) - \Phi(t,j^0, x) + \Phi(t,j^0, x) - \Phi(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\bigg\}\\ \le&\; \partial_t \Phi(t,i^0, x) + \inf_{\alpha^0\in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + (1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} \Phi(t,i^0, x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &+\;\sum_{i,j=1}^{M-1} \partial_{x_j} \Phi(t,i^0, x) x_i q(t,i,j,\phi(t,i, i^0,x), i^0, \alpha^0, x)+ \sum_{j^0\neq i^0} [\Phi(t,j^0, x) - \Phi(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\bigg\} + \epsilon L \end{align*} } The last inequality is due to the fact that $q^0$ is bounded and $|\Phi^\epsilon(t,j^0, x) - \Phi(t,j^0, x)|\le \epsilon$. We can write a similar inequality for $\liminf_{h\rightarrow 0}I_h$. Letting $\epsilon$ tend to $0$ then yields the desired result. \subsection{Proof of Theorem \ref{compmajor}} In this section we present the proof of the comparison principle for the HJB equation associated with the major player's optimization problem. The arguments used in this proof can be readily applied to prove uniqueness of the solution to the minor player's HJB equation (\ref{hjbminor}) (cf. Theorem \ref{hjbtheominor}). The same argument can also be used to prove the uniqueness result for equations (\ref{majorJpde}) and (\ref{minorJpde}) (cf. Proposition \ref{pdecharacterizepayoff}). Let $v$ and $w$ be respectively a viscosity subsolution and a viscosity supersolution to equation (\ref{hjbmajor}). Our objective is to show that $v(t, i^0, x) \le w(t,i^0,x)$ for all $1\le i^0\le M^0$, $x\in\mathcal{P}$ and $t\in[0,T]$. \vspace{3mm} \noindent (i) Without loss of generality, we may assume that $v$ is a viscosity subsolution of: \begin{equation}\label{hjbmajoreta} \begin{aligned} 0=\;\;&\; -\eta + \partial_t v^0(t, i^0, x) + \inf_{\alpha^0 \in A^0}\bigg\{f^0(t, i^0, \alpha^0, x) + \sum_{j^0\neq i^0} [v^0(t,j^0, x) - v^0(t,i^0, x)] q^0(t,i^0, j^0, \alpha^0, x)\\ &\;+(1-\sum_{k=1}^{M-1} x_k) \sum_{k=1}^{M-1} \partial_{x_k} v^0(t,i^0,x) q(t,M, k, \phi(t,M, i^0, x), i^0, \alpha^0, x)\\ &\;+\sum_{i,j=1}^{M-1} \partial_{x_j} v^0(t, i^0, x) x_i q(t,i,j,\phi(t,i,i^0,x), i^0,\alpha^0, x)\bigg\},\;\;\;\forall (i^0,t,x)\in \{1,\dots, M^0\} \times [0,T[ \times \mathcal{P}\\ 0=\;\;&v^0(T,i^0, x) - g^0(i^0, x),\;\;\;\forall x\in \mathcal{P}\\ \end{aligned} \end{equation} where $\eta>0$ is a small parameter.
Indeed, we may consider the function $v_{\eta}(t,i^0,x):= v(t,i^0,x) - \eta(T-t)$. Then it is easy to see that $v_\eta$ is a viscosity subsolution to the above equation. If we can prove $v_\eta \le w$, then letting $\eta$ tend to $0$ yields $v \le w$. In the following, we will only consider the subsolution $v$ to equation (\ref{hjbmajoreta}) and the supersolution $w$ to equation (\ref{hjbmajor}), and try to prove $v\le w$. \vspace{3mm} \noindent(ii) For $\epsilon>0$ and $1\le i^0 \le M^0$, consider the function $\Gamma_{i^0,\epsilon}$ defined on $[0,T]^2\times \mathcal{P}^2$: \[ \Gamma_{i^0,\epsilon}(t,s,x,y) := v(t,i^0,x) - w(s,i^0,y) - \frac{1}{\epsilon}|t-s|^2 - \frac{1}{\epsilon}\|x-y\|^2 \] where $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^{M-1}$. Since $\Gamma_{i^0,\epsilon}$ is a continuous function on a compact set, it attains its maximum, which we denote by $N_{i^0,\epsilon}$. Denote by $(\bar t, \bar s, \bar x, \bar y)$ the maximizer (which depends on $\epsilon$ and $i^0$, but we suppress this dependence for notational simplicity). We show that for all $1\le i^0 \le M^0$, there exist a sequence $\epsilon_n \rightarrow 0$ and corresponding maximizers $(\bar t_n,\bar s_n,\bar x_n, \bar y_n)$ such that \begin{subequations} \begin{equation}\label{seq1} (\bar t_n,\bar s_n,\bar x_n, \bar y_n) \rightarrow (\hat t, \hat t, \hat x, \hat x), \text{where}\;\; (\hat t,\hat x) := \arg\sup_{(t,x)\in[0,T]\times\mathcal{P}}\{v(t,i^0,x)- w(t,i^0,x)\} \end{equation} \begin{equation}\label{seq2} \frac{1}{\epsilon_n}|\bar t_n - \bar s_n|^2 + \frac{1}{\epsilon_n}\|\bar x_n - \bar y_n\|^2 \rightarrow 0 \end{equation} \begin{equation}\label{seq3} N_{i^0,\epsilon}\rightarrow N_{i^0} := \sup_{(t,x)\in[0,T]\times\mathcal{P}}\{v(t,i^0,x)- w(t,i^0,x)\} \end{equation} \end{subequations} Indeed, for any $(t,x)\in[0,T]\times\mathcal{P}$, we have $v(t,i^0,x)-w(t,i^0,x) = \Gamma_{i^0,\epsilon}(t,t,x,x) \le N_{i^0,\epsilon}$. Taking the supremum we obtain $N_{i^0}\le N_{i^0,\epsilon}$, and therefore \[ \frac{1}{\epsilon}|\bar t - \bar s|^2 + \frac{1}{\epsilon}\|\bar x - \bar y\|^2 \le v(\bar t, i^0, \bar x) - w(\bar s, i^0, \bar y) - N_{i^0} \le 2L - N_{i^0} \] The last inequality comes from the fact that $v$ and $w$ are bounded on the compact set $[0,T]\times\mathcal{P}$. It follows that $|\bar t - \bar s|^2 + \|\bar x - \bar y\|^2 \rightarrow 0$. Now, since the sequence $(\bar t, \bar s, \bar x, \bar y)$ (indexed by $\epsilon$) lies in a compact set, we can extract a subsequence $\epsilon_n\rightarrow 0$ such that $(\bar t_n,\bar s_n,\bar x_n, \bar y_n) \rightarrow (\hat t, \hat t, \hat x, \hat x)$. We have the following inequality: \[ N_{i^0} \le v(\bar t_n, i^0, \bar x_n) - w(\bar s_n, i^0, \bar y_n) - \frac{1}{\epsilon_n}|\bar t_n - \bar s_n|^2 - \frac{1}{\epsilon_n}\|\bar x_n - \bar y_n\|^2 = N_{i^0,\epsilon_n} \le v(\bar t_n, i^0, \bar x_n) - w(\bar s_n, i^0, \bar y_n) \] Noticing that $v(\bar t_n, i^0, \bar x_n) - w(\bar s_n, i^0, \bar y_n)\rightarrow v(\hat t, i^0, \hat x) - w(\hat t, i^0, \hat x) \le N_{i^0}$, we deduce that $N_{i^0} = v(\hat t, i^0, \hat x) - w(\hat t, i^0, \hat x)$, which implies (\ref{seq1}). (\ref{seq2}) and (\ref{seq3}) follow easily by taking the limit in the above inequality. \vspace{3mm} \noindent (iii) Now we prove the comparison principle. Using the notation introduced in step (ii), we need to prove $N_{i^0}\le 0$ for all $1\le i^0\le M^0$. Assume that there exists $1 \le i^0 \le M^0$ such that \[ N_{i^0} = \sup_{1\le j^0 \le M^0} N_{j^0} >0 \] We work towards a contradiction.
Without loss of generality we assume that $N_{i^0} > N_{j^0}$ for all $j^0\neq i^0$. We then consider the subsequence $(\epsilon_n, \bar t_n, \bar s_n, \bar x_n, \bar y_n) \rightarrow (0, \hat t, \hat t, \hat x,\hat x)$ associated with $i^0$ constructed in step (ii), for which (\ref{seq1}), (\ref{seq2}) and (\ref{seq3}) are satisfied. Since $v(\hat t, i^0, \hat x) - w(\hat t, i^0, \hat x) = N_{i^0} > 0$, we have $\hat t\neq T$. Extracting a subsequence if necessary, we may assume that $\bar t_n\neq T$ and $\bar s_n \neq T$ for all $n\ge 0$. Moreover, for any $j^0\neq i^0$, we have \[ v(\hat t, j^0, \hat x) - w(\hat t, j^0, \hat x) \le N_{j^0} < N_{i^0} = v(\hat t, i^0, \hat x) - w(\hat t, i^0, \hat x) \] Since $v(\bar t_n, j^0, \bar x_n) - w(\bar s_n, j^0, \bar y_n) \rightarrow v(\hat t, j^0, \hat x) - w(\hat t, j^0, \hat x)$, extracting a subsequence if necessary, we can assume that for all $j^0\neq i^0$ and $n\ge 0$, \begin{equation}\label{seq4} v(\bar t_n, j^0, \bar x_n) - w(\bar s_n, j^0, \bar y_n) \le v(\bar t_n, i^0, \bar x_n) - w(\bar s_n, i^0, \bar y_n) \end{equation} In the following we suppress the index $n$ for the sequence $(\epsilon_n, \bar t_n, \bar s_n, \bar x_n, \bar y_n)$, for the sake of notational simplicity. By definition of the maximizer, for any $(t,x)\in[0,T]\times\mathcal{P}$, we have: \[ v(t,i^0,x) - w(\bar s,i^0, \bar y) - \frac{1}{\epsilon}\|x - \bar y\|^2- \frac{1}{\epsilon}|t - \bar s|^2 \le v(\bar t,i^0,\bar x) - w(\bar s,i^0, \bar y) - \frac{1}{\epsilon}\|\bar x - \bar y\|^2- \frac{1}{\epsilon}|\bar t - \bar s|^2 \] Therefore $v(\cdot,i^0,\cdot) - \chi$ attains a maximum at $(\bar t,\bar x)$, where \begin{equation}\label{constsub1} \chi(t,x) := \frac{1}{\epsilon}\|x - \bar y\|^2 + \frac{1}{\epsilon}|t - \bar s|^2\;\;\;\;\partial_t\chi(\bar t, \bar x) = \frac{2}{\epsilon}(\bar t - \bar s)\;\;\;\; \nabla\chi(\bar t, \bar x) = \frac{2}{\epsilon}(\bar x - \bar y) \end{equation} Similarly, $w(\cdot,i^0,\cdot) - \psi$ attains a minimum at $(\bar s,\bar y)$, where \begin{equation}\label{constsub2} \psi(t,x) := -\frac{1}{\epsilon}\|x - \bar x\|^2 - \frac{1}{\epsilon}|t - \bar t|^2\;\;\;\; \partial_t\psi(\bar s, \bar y) = \frac{2}{\epsilon}(\bar t - \bar s)\;\;\;\; \nabla\psi(\bar s,\bar y) = \frac{2}{\epsilon}(\bar x - \bar y) \end{equation} Since $\bar s\neq T$ and $\bar t \neq T$, the definition of viscosity solution together with (\ref{constsub1}) and (\ref{constsub2}) gives the following inequalities: \begin{align*} \eta\le\;\;&\;\frac{2}{\epsilon}(\bar t - \bar s) + \inf_{\alpha^0\in A^0}\bigg\{f^0(\bar t, i^0, \alpha^0, \bar x) +(1-\sum_{k=1}^{M-1} \bar x_k) \sum_{k=1}^{M-1}\frac{2}{\epsilon}(\bar x_k - \bar y_k) q(\bar t, M, k, \phi(\bar t,M, i^0, \bar x), i^0, \alpha^0, \bar x)\\ &+\;\sum_{i,j=1}^{M-1} \frac{2}{\epsilon}(\bar x_j - \bar y_j) \bar x_i q(\bar t, i,j,\phi(\bar t,i,i^0,\bar x), i^0, \alpha^0, \bar x)+ \sum_{j^0\neq i^0} [v(\bar t,j^0, \bar x) - v(\bar t,i^0, \bar x)] q^0(\bar t, i^0, j^0, \alpha^0, \bar x)\bigg\}\\ 0\ge\;\;&\;\frac{2}{\epsilon}(\bar t - \bar s) + \inf_{\alpha^0\in A^0}\bigg\{f^0(\bar s, i^0, \alpha^0, \bar y) + (1-\sum_{k=1}^{M-1} \bar y_k) \sum_{k=1}^{M-1} \frac{2}{\epsilon}(\bar x_k - \bar y_k) q(\bar s, M, k, \phi(\bar s,M, i^0, \bar y), i^0, \alpha^0, \bar y)\\ &+\; \sum_{i,j=1}^{M-1} \frac{2}{\epsilon}(\bar x_j - \bar y_j) \bar y_i q(\bar s, i,j,\phi(\bar s,i,i^0, \bar y), i^0, \alpha^0, \bar y)+ \sum_{j^0\neq i^0} [w(\bar s, j^0, \bar y) - w(\bar s,i^0, \bar y)] q^0(\bar s, i^0, j^0, \alpha^0, \bar y)\bigg\} \end{align*} Subtracting the above two inequalities, we
obtain: \begin{equation}\label{contra} 0 < \eta \le I_1 + I_2 + I_3 \end{equation} where the three terms $I_1$, $I_2$ and $I_3$ will be dealt with in the following. For $I_1$ we have: \begin{align*} I_1:=\;\;&\;\sup_{\alpha^0\in A^0}\{f^0(\bar t, i^0, \alpha^0, \bar x)-f^0(\bar s, i^0, \alpha^0, \bar y)\} \\ &\;+ \sup_{\alpha^0\in A^0}\{\sum_{j^0\neq i^0} [v(\bar t,j^0, \bar x) - v(\bar t,i^0, \bar x)] q^0(\bar t, i^0, j^0, \alpha^0, \bar x) - \sum_{j^0\neq i^0} [w(\bar s, j^0, \bar y) - w(\bar s,i^0, \bar y)] q^0(\bar s, i^0, j^0, \alpha^0, \bar y)\}\\ \le\;\;&\; \sup_{\alpha^0\in A^0}\{f^0(\bar t, i^0, \alpha^0, \bar x) - f^0(\bar s, i^0, \alpha^0, \bar y)\} \\ &\;+ \sup_{\alpha^0\in A^0}\{\sum_{j^0\neq i^0}[v(\bar t,j^0, \bar x) - v(\bar t,i^0, \bar x)] (q^0(\bar t, i^0, j^0, \alpha^0, \bar x) - q^0(\bar s, i^0, j^0, \alpha^0, \bar y))\}\\ &\; + \sup_{\alpha^0\in A^0}\{\sum_{j^0\neq i^0}[v(\bar t,j^0, \bar x) - w(\bar s, j^0, \bar y) - v(\bar t,i^0, \bar x) + w(\bar s,i^0, \bar y)] q^0(\bar s, i^0, j^0, \alpha^0, \bar y)\}\\ \le\;\;& L(|\bar t - \bar s| + \|\bar x - \bar y\|) + 2C L(|\bar t - \bar s| + \|\bar x - \bar y\|) + 0 \end{align*} In the last inequality we use (\ref{seq4}) and the fact that $q^0(\bar s, i^0, j^0, \alpha^0, \bar y) \ge 0$ for $j^0\neq i^0$. We also use the Lipschitz property of $f^0$ and $q^0$. Now, in light of $(\ref{seq1})$, we obtain $I_1\rightarrow 0$ as $\epsilon\rightarrow 0$. Now turning to $I_2$: \begin{align*} I_2:=\;\;&\; \sup_{\alpha^0\in A^0}\bigg\{\sum_{i,j=1}^{M-1}\frac{2}{\epsilon}(\bar x_j - \bar y_j) \left [ \bar x_i q(\bar t, i,j,\phi(\bar t,i,i^0,\bar x), i^0,\alpha^0, \bar x) - \bar y_i q(\bar s, i,j,\phi(\bar s,i,i^0, \bar y), i^0,\alpha^0, \bar y)\right]\bigg\}\\ \le\;\;&\;\sup_{\alpha^0\in A^0}\bigg\{ \sum_{i,j=1}^{M-1}\frac{2}{\epsilon}|(\bar x_j - \bar y_j)(\bar x_i - \bar y_i)q(\bar t, i,j,\phi(\bar t,i,i^0,\bar x), i^0, \alpha^0,\bar x) |\bigg\} \\ &\;+ \sup_{\alpha^0\in A^0}\bigg\{\sum_{i,j=1}^{M-1}\frac{2}{\epsilon}|\bar y_i (\bar x_j - \bar y_j)(q(\bar t, i,j,\phi(\bar t,i,i^0,\bar x), i^0, \alpha^0,\bar x) - q(\bar s, i,j,\phi(\bar s,i,i^0, \bar y), i^0, \alpha^0, \bar y))|\bigg\}\\ \le\;\;& \; \sum_{i,j=1}^{M-1}\frac{2}{\epsilon}C |\bar x_j - \bar y_j| | \bar x_i - \bar y_i| + \sum_{j=1}^{M-1}\frac{2}{\epsilon}|\bar x_j - \bar y_j| L(M-1)(|\bar t - \bar s| + \|\bar x - \bar y\|) \end{align*} where in the last inequality we used the Lipschitz property of $q$ and $\phi$, uniformly in $\alpha^0$, as well as the boundedness of the function $q$. It follows that $I_2 \le C \frac{1}{\epsilon}(|\bar t - \bar s|^2 + \|\bar x - \bar y\|^2)$, and by (\ref{seq2}) we see that $I_2\rightarrow 0$ as $\epsilon\rightarrow 0$. Finally we deal with $I_3$, which is defined by: \begin{align*} I_3:=\;\;&\; \sup_{\alpha^0\in A^0}\bigg\{(1-\sum_{k=1}^{M-1} \bar x_k) \sum_{k=1}^{M-1}\frac{2}{\epsilon}(\bar x_k - \bar y_k) q(\bar t, M, k, \phi(\bar t,M, i^0, \bar x), i^0, \alpha^0, \bar x) \\ &\;- (1-\sum_{k=1}^{M-1} \bar y_k) \sum_{k=1}^{M-1} \frac{2}{\epsilon}(\bar x_k - \bar y_k) q(\bar s, M, k, \phi(\bar s,M, i^0, \bar y), i^0,\alpha^0, \bar y)\bigg\} \end{align*} Using an estimate similar to that for $I_2$, we obtain $I_3 \le C \frac{1}{\epsilon}(|\bar t - \bar s|^2 + \|\bar x - \bar y\|^2)$. Therefore, letting $\epsilon$ tend to $0$ in (\ref{contra}), we obtain a contradiction. This completes the proof.
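We note for completeness that the absorption of the cross terms in the bounds for $I_2$ and $I_3$ rests on the elementary inequality $2ab \le a^2 + b^2$:
\[
\frac{2}{\epsilon}\,\|\bar x-\bar y\|\,\big(|\bar t-\bar s| + \|\bar x-\bar y\|\big) \;\le\; \frac{1}{\epsilon}\,|\bar t-\bar s|^2 + \frac{3}{\epsilon}\,\|\bar x-\bar y\|^2,
\]
which tends to $0$ as $\epsilon\rightarrow 0$ along the subsequence, by (\ref{seq2}).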
\subsection{Proof of Lemma \ref{lemexistencenash}} The main tool we use for the proof is the theory of limit operations on viscosity solutions. We refer the reader to \cite{lions} for an introductory presentation of limit operations on viscosity solutions of non-linear second-order PDEs. Here we adapt the results established therein to the case of coupled systems of non-linear first-order PDEs, which is the type of HJB equation we are interested in throughout the paper. Let $\mathcal{O}$ be some locally compact subset of $\mathbb{R}^d$ and $(F_n)_{n\ge 0}$ be a sequence of functions defined on $\{1,\dots,M\}\times\mathcal{O}$. We define the limit operators $\limsup^*$ and $\liminf_*$ as follows: \begin{align*} {\limsup}^* F_n(i, x) :=& \lim_{n\rightarrow +\infty, \epsilon\rightarrow 0} \sup\{F_k(i, y) | k\ge n, \|y - x\|\le \epsilon\}\\ {\liminf}_* F_n(i, x) :=& \lim_{n\rightarrow +\infty, \epsilon\rightarrow 0} \inf\{F_k(i, y) | k\ge n, \|y - x\|\le \epsilon\} \end{align*} Intuitively, we expect that the limit of a sequence of viscosity solutions solves the limiting PDE. It turns out that these are the proper definitions of the limit operations, as stated in the following lemma: \begin{lemma}\label{lemprooflem1} Let $\mathcal{O}$ be a locally compact subset of $\mathbb{R}^d$, $(u_n)_{n\ge 0}$ be a sequence of continuous functions defined on $[0,T]\times\{1,\dots, M\}\times\mathcal{O}$ and $H_n$ be a sequence of functions defined on $\{1, \dots, M\} \times [0,T] \times \mathcal{O} \times \mathbb{R}^M \times \mathbb{R} \times \mathbb{R}^d$, such that for each $n$, $u_n$ is a viscosity solution to the system of PDEs $H_n(i, t, x, u, \partial_t u(\cdot,i,\cdot), \nabla u(\cdot,i,\cdot)) = 0, \forall 1\le i\le M$, in the sense of Definition \ref{viscositydefi}. Assume that for each $i,t,x,d,p$, the mapping $u \rightarrow H_n(i,t,x,u,d,p)$ is non-decreasing in the $j$-th component of $u$, for all $j\neq i$. Then $u^* := {\limsup}^*u_n$ (resp. $u_* := {\liminf}_* u_n$) is a viscosity subsolution (resp. supersolution) to $H^*(i, t, x, u, \partial_t u(\cdot,i,\cdot), \nabla u(\cdot,i,\cdot)) = 0, \forall 1\le i\le M$ (resp. $H_*(i, t, x, u, \partial_t u(\cdot,i,\cdot), \nabla u(\cdot,i,\cdot)) = 0, \forall 1\le i\le M$), where $H^* := {\limsup}^* H_n$ and $H_* := {\liminf}_* H_n$. \end{lemma} The proof requires the definition of viscosity subsolution and supersolution based on the notion of first-order subjet and superjet (see Definition 2.2 in \cite{lions}). Let $u$ be a continuous function defined on $[0,T]\times\{1,\dots, M\}\times\mathcal{O}$; we define the first-order superjet of $u$ at $(t,i,x)$ to be: \[ \mathcal{J}^+ u (t,i,x) := \{(d,p)\in\mathbb{R}\times\mathbb{R}^d : u(s,i,y) \le u(t,i,x) + d(s-t) + (y-x)\cdot p + o(|t-s| + \|y-x\|), (s,y)\rightarrow(t,x) \} \] Then $u$ is a viscosity subsolution (resp. supersolution) to the system $H(i, t, x, u, \partial_t u(\cdot,i,\cdot), \nabla u(\cdot,i,\cdot))=0$ if and only if for all $(i,t,x)$ and $(d,p)\in \mathcal{J}^+ u (t,i,x)$ (resp. $(d,p)\in \mathcal{J}^- u (t,i,x)$, where the first-order subjet $\mathcal{J}^- u$ is defined analogously by reversing the inequality), we have $H(i, t, x, u, d, p) \ge 0$ (resp. $H(i, t, x, u, d, p) \le 0$). \begin{proof} Fix $(t,i,x)\in [0,T]\times \{1,\dots,M\}\times\mathcal{O}$ and let $(d,p)\in\mathcal{J}^+ u^*(t,i,x)$. We want to show that: \[ H^*(i,t,x,u^*(t,\cdot,x), d, p) \ge 0 \] where $u^*(t, \cdot , x)$ represents the $M$-dimensional vector $[u^*(t, k , x)]_{1\le k\le M}$.
By Lemma 6.1 in \cite{lions}, there exist sequences $n_j\rightarrow+\infty$, $(t_j, x_j)\in[0,T]\times\mathcal{O}$ and $(d_j, p_j)\in \mathcal{J}^+ u_{n_j}(t_j, i, x_j)$ such that: \[ (t_j, x_j, u_{n_j}(t_j, i, x_j), d_j, p_j) \rightarrow(t,x,u^*(t, i, x), d, p), \;\;\;j\rightarrow +\infty \] Since $u_{n_j}$ is a viscosity subsolution to $H_{n_j} = 0$, we have \[ H_{n_j}(i, t_j, x_j, u_{n_j}(t_j, \cdot , x_j), d_j, p_j) \ge 0 \] Now let us denote \[ S^u_{n, \epsilon}(t,k,x):= \sup\{u_j(s,k,y) : \max(|s-t|, \|y-x\|)\le\epsilon, j\ge n\} \] Then we have $u^*(t,i,x) = \lim_{n\rightarrow +\infty, \epsilon\rightarrow 0} S^u_{n,\epsilon}(t,i,x)$. Fix $\delta >0$; then there exist $\epsilon^0>0$ and $N^0>0$ such that for all $\epsilon\le \epsilon^0$ and $j\ge N^0$, we have \[ |S^u_{n_j, \epsilon}(t,k,x) - u^*(t,k,x)| \le \delta,\;\;\;\forall k\neq i \] Moreover, there exists $N>0$ such that for all $j\ge N$, \[ \|(t_j, x_j, u_{n_j}(t_j, i, x_j), d_j, p_j) - (t,x,u^*(t,i,x),d,p)\|\le \delta\wedge \epsilon^0 \] Then for any $j\ge N^0\vee N$, we have $\|(t_j, x_j) - (t,x)\|\le \epsilon^0$, and by definition of $S^u_{n,\epsilon}$ we deduce that $u_{n_j}(t_j, k, x_j) \le S^u_{n_j,\epsilon^0}(t,k,x)$ for all $k\neq i$. By the monotonicity property of $H_{n_j}$, we have: \[ H_{n_j}(i, t_j, x_j, S^u_{n_j,\epsilon^0}(t,1,x), \dots, S^u_{n_j,\epsilon^0}(t,i-1,x), u_{n_j}(t_j, i, x_j), S^u_{n_j,\epsilon^0}(t,i +1,x),\dots, S^u_{n_j,\epsilon^0}(t,M,x), d_j, p_j) \ge 0 \] Now, in the above inequality, all the arguments of $H_{n_j}$ except $i$ are located in a ball of radius $\delta$ centered at the point $(t,x,u^*(t,\cdot,x), d, p)$. We thus have \[ S^H_{n_j,\delta}(i,t,x,u^*(t,\cdot,x), d, p) \ge 0 \] where we have defined: \[ S^H_{n,\delta}(i,t,x,u, d, p):=\sup\{H_{j}(i,s,y, v, e, q) : \|(s,y,v,e,q) - (t,x,u,d,p)\|\le\delta, j\ge n\} \] We have just proved that for any $\delta >0$, there exists $J>0$ such that for any $j\ge J$, we have $S^H_{n_j,\delta}(i,t,x,u^*(t,\cdot,x), d, p) \ge 0$. Since \[ H^*(i,t,x,u^*(t,\cdot,x), d, p) = \lim_{n\rightarrow +\infty, \delta \rightarrow 0} S^H_{n,\delta}(i,t,x,u^*(t,\cdot,x), d, p) \] we deduce that $H^*(i,t,x,u^*(t,\cdot,x), d, p)\ge 0$. \end{proof} Now, going back to the proof of Lemma \ref{lemexistencenash}, we consider a converging sequence of elements $(\phi^0_n, \phi_n) \rightarrow (\phi^0, \phi)$ in $\mathbb{K}$, where we have defined $\mathbb{K}$ to be the collection of major and minor players' controls $(\phi^0, \phi)$ that are $L$-Lipschitz in $(t,x)$. We denote by $V^0_n$ (resp. $V^0$) the value function of the major player's control problem associated with the controls $\phi_n$ (resp. $\phi$). We also use the notation $V^{0*} := {\limsup}^* V_n^0$ and $V^{0}_* := {\liminf}_* V_n^0$. For all $n\ge 0$, we define the operator $H^0_n$: \begin{align*} &H^0_n(i^0,t,x,u,d,p) :=\;\;d + \inf_{\alpha^0\in A^0} \bigg\{f^0(t,i^0,\alpha^0, x) + \sum_{j^0\neq i^0} (u_{j^0} - u_{i^0}) q^0(t,i^0,j^0, \alpha^0,x) \\ &\;\;+ (1-\sum_{k=1}^{M-1}x_k) \sum_{k=1}^{M-1}p_k q(t,M,k,\phi_n(t, M, i^0, x), i^0,\alpha^0, x)+ \sum_{k,l=1}^{M-1}p_l x_k q(t,k,l,\phi_n(t,k,i^0,x),i^0,\alpha^0,x)\bigg\}, \;\;\;\text{if}\;\;\; t<T \\ &H^0_n(i^0,T,x,u,d,p) :=\;\; g^0(i^0,x) - u_{i^0} \end{align*} Then $V^0_n$ is a viscosity solution to the equation $H^0_n = 0$. It is easy to see that the operator $H_n^0$ satisfies the monotonicity condition in Lemma \ref{lemprooflem1}.
To evaluate $H^{0*}:={\limsup}^* H_n^0$ and $H^0_*:={\liminf}_* H_n^0$, we remark that for each $1\le i^0\le M^0$, the sequence of functions $(t,x,u,d,p)\rightarrow H^0_n(i^0,t,x,u,d,p)$ is equicontinuous. Indeed, this is due to the fact that the sequence $\phi_n$ is equicontinuous and the function $q$ is Lipschitz. Therefore $H^{0*}$ and $H^0_*$ are simply the pointwise limits when $t<T$. When $t=T$, the boundary condition needs to be taken into account. The following computation is straightforward to verify: \begin{lemma}\label{lemprooflem2} Define the operator $H^0$ as: \begin{align*} &H^0(i^0,t,x,u,d,p) :=\;\;d + \inf_{\alpha^0\in A^0} \bigg\{f^0(t,i^0,\alpha^0, x) + \sum_{j^0\neq i^0} (u_{j^0} - u_{i^0}) q^0(t,i^0,j^0, \alpha^0,x) \\ &\;\;+ (1-\sum_{k=1}^{M-1}x_k) \sum_{k=1}^{M-1}p_k q(t,M,k,\phi(t, M, i^0, x), i^0, \alpha^0, x)+ \sum_{k,l=1}^{M-1}p_l x_k q(t,k,l,\phi(t,k,i^0,x),i^0,\alpha^0, x)\bigg\},\;\;\;\forall t\le T \end{align*} Then we have: \begin{align*} H^{0*}(i^0,t,x,u,d,p) &= H^{0}_{*}(i^0,t,x,u,d,p) = H^0(i^0,t,x,u,d,p), \;\text{if}\;\;\;t<T\\ H^{0*}(i^0,T,x,u,d,p) &= \max \{ g^0(i^0,x) - u_{i^0}, \;H^{0}(i^0,T,x,u,d,p) \}\\ H^{0}_*(i^0,T,x,u,d,p) &= \min \{ g^0(i^0,x) - u_{i^0}, \;H^{0}(i^0,T,x,u,d,p) \} \end{align*} \end{lemma} From the proof of Theorem \ref{hjbtheomajor}, we see immediately that $V^0$ is a viscosity subsolution (resp. supersolution) of $H^{0*} = 0$ (resp. $H^{0}_* = 0$). By Lemmas \ref{lemprooflem1} and \ref{lemprooflem2}, $V^{0*}$ (resp. $V^{0}_*$) is a viscosity subsolution of $H^{0*} = 0$ (resp. supersolution of $H^{0}_* = 0$). Following exactly the proof of Theorem \ref{compmajor}, we can show that a viscosity supersolution of $H^{0}_* = 0$ is greater than a viscosity subsolution of $H^{0*} = 0$. By definition of ${\limsup}^*$ and ${\liminf}_*$, we have $V^{0*} \ge V^{0}_*$. It follows that $V^0 \le V^{0}_* \le V^{0*} \le V^0$, and therefore ${\limsup}^* V_n^0 = {\liminf}_* V_n^0 = V^0$. We then obtain the uniform convergence of $V_n^0$ to $V^0$ following Remark 6.4 in \cite{lions}. \subsection{Proof of Propositions \ref{majorNproperty} \& \ref{majorvalueodeproperty}} We use techniques similar to those in \cite{GomesMohrSouza_continuous}, where the authors provide gradient estimates for the $N$-player game without a major player. Let us first remark that the system of ODEs (\ref{odemajorpayoff}) can be written in the following form: \[ -\dot\theta_{m}(t) = f_m(t) + \sum_{m'\neq m} a_{m'm}(t)(\theta_{m'} - \theta_{m}),\;\;\;\theta_m(T) = b_m \] where we denote the index $m:=(i^0,x)\in\{1,\dots,M^0\}\times\mathcal{P}^N$ and we notice that $a_{m'm}\ge 0$ for all $m'\neq m$. This can be further written in the compact form: \begin{equation}\label{generalode} -\dot\theta(t) = f(t) + A(t) \theta,\;\;\;\theta(T) = b \end{equation} where $A(t)$ is a matrix indexed by $m$, with all off-diagonal entries nonnegative and the sum of every row equal to $0$. Define $\|\cdot\|$ to be the uniform norm of a vector: $\|b\| := \max_{m} |b_m|$. To prove the first bound in Proposition \ref{majorNproperty}, we establish a more general result, which is a consequence of Lemmas 4 and 5 in \cite{GomesMohrSouza_continuous}. \begin{lemma}\label{odebound} Let $\theta$ be a solution to the ODE (\ref{generalode}).
Assume that $f$ is bounded. Then we have \[ \|\theta(t)\| \le \int_t^T \|f(s)\|ds + \|b\| \] \end{lemma} \begin{proof} For any $t\le s \le T$, we define the matrix $K(t,s)$ as the solution of the following system: \[ \frac{dK(t,s)}{dt} = -A(t) K(t,s), \;\;\; K(s,s) = I \] where $I$ stands for the identity matrix. Now for $\theta$, we clearly have \[ -\dot\theta(s) = f(s) + A(s) \theta(s) \le \|f(s)\| e + A(s) \theta (s) \] where we denote by $e$ the vector with all components equal to $1$. Then, using Lemma 5 in \cite{GomesMohrSouza_continuous}, we have \[ -K(t,s)\dot\theta(s) \le K(t,s) A(s) \theta (s) + \|f(s)\| K(t,s) e \] Note that $K(t,s)\dot\theta(s) + K(t,s) A(s) \theta (s) = \frac{d}{ds}[K(t,s)\theta(s)]$. We integrate the above inequality between $t$ and $T$ to obtain: \[ \theta(t) \le K(t,T) b + \int_{t}^{T} \|f(s)\| K(t,s) e \;ds \] Now, using Lemma 4 in \cite{GomesMohrSouza_continuous}, we have $\|K(t,T) b\| \le \|b\|$ and $\|K(t,s) e\| \le \|e\|=1$. This implies that: \[ \max_{m} \theta_m(t) \le \|b\| + \int_t^T \|f(s)\|ds \] Similarly, starting from the inequality $-\dot\theta(s) \ge -\|f(s)\| e + A(s) \theta (s)$ and going through the same steps, we obtain: \[ \min_{m} \theta_m(t) \ge -\|b\| - \int_t^T \|f(s)\|ds \] The desired inequality follows. \end{proof} Now we turn to the proof of Proposition \ref{majorNproperty}. Let $\theta$ be the unique solution to the system of ODEs (\ref{odemajorpayoff}). Recall the notation $e_{ij}:=\mathbbm{1}_{j\neq M}e_j-\mathbbm{1}_{i\neq M}e_i$. For any $k\neq l$, define $z(t,i^0,x,k,l):=\theta(t,i^0,x+\frac{1}{N} e_{kl}) - \theta(t,i^0,x)$. Then $z(t,\cdot,\cdot,\cdot,\cdot)$ can be viewed as a vector indexed by $i^0, x, k, l$. Subtracting the ODEs satisfied by $\theta(t,i^0,x+\frac{1}{N} e_{kl})$ and $\theta(t,i^0,x)$, we obtain that $z$ solves the following system of ODEs: \begin{equation}\label{odegradient} \begin{aligned} -\dot z(t,i^0,x,k,l) = & F(t,i^0,x,k,l) + \sum_{j^0, j^0\neq i^0}(z(t,j^0,x,k,l) - z(t,i^0,x,k,l)) q^0_{\phi^0}(t, i^0, j^0,x)\\ & + \sum_{(i,j),j\neq i} (z(t,i^0,x+\frac{1}{N} e_{ij}, k, l) - z(t,i^0,x,k,l)) N x_i q_{\phi}(t,i,j,i^0,x)\\ z(T, i^0, x, k, l) =&g^0(i^0, x+\frac{1}{N} e_{kl}) - g^0(i^0, x) \end{aligned} \end{equation} where we have defined $F$ as: \begin{align*} F(t,i^0,x,k,l):=&\; f^0_{\phi^0}(t,i^0,x+\frac{1}{N} e_{kl}) - f^0_{\phi^0}(t, i^0, x)\\ &+\sum_{j^0,j^0\neq i^0} [\theta(t,j^0,x+\frac{1}{N}e_{kl})- \theta(t,i^0,x+\frac{1}{N}e_{kl})][q^{0}_{\phi^0}(t,i^0,j^0,x+\frac{1}{N}e_{kl}) - q^{0}_{\phi^0}(t, i^0, j^0,x)]\\ & + \sum_{(i,j),j\neq i} [\theta (t,i^0, x+\frac{1}{N}e_{ij}+\frac{1}{N}e_{kl}) - \theta (t,i^0, x+\frac{1}{N}e_{kl})]\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times N[(x+\frac{1}{N}e_{kl})_{i} q_{\phi}(t,i,j,i^0, x+\frac{1}{N}e_{kl}) - x_i q_{\phi}(t,i,j,i^0, x)] \end{align*} Then, by the uniform bound provided in Lemma \ref{odebound}, together with the Lipschitz property of $f^0, g^0, q^0, q, \phi^0,\phi$, we have: \[ \|F(t)\| \le \frac{L}{N} + 2(T\|f^0\|_{\infty} + \|g^0\|_{\infty}) \frac{L}{N} + \|z(t)\|\cdot L,\;\;\;\;\|z(T)\|\le \frac{L}{N} \] where $L$ is a generic Lipschitz constant. Now we are ready to apply Lemma \ref{odebound} to (\ref{odegradient}). We have for all $t\le T$ \[ \|z(t)\|\le \frac{L}{N} + \int_{t}^{T} (\frac{C}{N} + L \|z(s)\|) ds \] where $C$ is a constant only depending on $T, L, \|f^0\|_{\infty}, \|g^0\|_{\infty}$. Finally, Gronwall's inequality allows us to conclude. The proof of Proposition \ref{majorvalueodeproperty} is similar.
We consider the solution $\theta$ to the ODE (\ref{odemajorvalue}) and we keep the notation $z(t,i^0,x,k,l)$ as before. For $v\in\mathbb{R}^{M^0}$, $x\in\mathcal{P}$, $t\le T$ and $i^0 = 1,\dots,M^0$, denote: \[ h^0(t,i^0,x,v) := \inf_{\alpha^0 \in A^0} \{f^0(t,i^0,\alpha^0,x) + \sum_{j^0\neq i^0} (v_{j^0} - v_{i^0}) q^0(t,i^0,j^0,\alpha^0,x)\} \] Then for all $x,y \in \mathcal{P}$ and $u,v \in \mathbb{R}^{M^0}$, using the Lipschitz property of $f^0$ and $q^0$ and the boundedness of $q^0$ (with bound $C_q$), we have \begin{equation}\label{esth} |h^0(t,i^0,x,v) - h^0(t,i^0,y,u)| \le L\|x-y\| + 2(M^0-1) \max\{\|u\|, \|v\|\} \|x - y\| + 2 C_q (M^0-1) \|v-u\| \end{equation} Subtracting the ODEs satisfied by $\theta(t,i^0,x+\frac{1}{N} e_{kl})$ and $\theta(t,i^0,x)$, we obtain that $z$ solves the following system of ODEs: \begin{align*} -\dot z(t,i^0,x,k,l) = & F(t,i^0,x,k,l) + \sum_{(i,j),j\neq i} (z(t,i^0,x+\frac{1}{N} e_{ij}, k, l) - z(t,i^0,x,k,l)) N x_i q_{\phi}(t,i,j,i^0,x)\\ z(T, i^0, x, k, l) =&g^0(i^0, x+\frac{1}{N} e_{kl}) - g^0(i^0, x) \end{align*} where $F$ is given by: \begin{align*} F(t,i^0,x,k,l):=&h^0(t,i^0,x+\frac{1}{N}e_{kl},\theta (t,\cdot, x+\frac{1}{N}e_{kl})) - h^0(t,i^0,x,\theta (t,\cdot, x))\\ & + \sum_{(i,j),j\neq i} [\theta (t,i^0, x+\frac{1}{N}e_{ij}+\frac{1}{N}e_{kl}) - \theta (t,i^0, x+\frac{1}{N}e_{kl})]\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times N[(x+\frac{1}{N}e_{kl})_{i} q_{\phi}(t,i,j,i^0, x+\frac{1}{N}e_{kl}) - x_i q_{\phi}(t,i,j,i^0, x)] \end{align*} By the estimate (\ref{esth}) and the uniform bound provided in Lemma \ref{odebound}, together with the Lipschitz property of $f^0, g^0, q^0, q, \phi^0,\phi$, we have: \[ \|F(t)\| \le \frac{1}{N}(L + 2(M^0 - 1)(T\|f^0\|_{\infty} + \|g^0\|_{\infty})) + 2C_q(M^0-1)\|z(t)\| + M( M - 1)( L_\phi L + C_q )\|z(t)\| \] We apply Lemma \ref{odebound} to obtain: \begin{align*} \|z(t)\|\le&\;\; \frac{L}{N} + \int_{t}^{T} (M( M - 1)( L_\phi L + C_q )+ 2C_q(M^0-1))\|z(s)\| + \frac{1}{N}( L + 2(M^0 - 1)(T\|f^0\|_{\infty} + \|g^0\|_{\infty})) ds\\ =:&\;\;\frac{C_0 + C_1 T + C_2 T^2}{N} + \int_t^T (C_3 + C_4 L_\phi) \|z(s)\| ds \end{align*} Gronwall's inequality allows us to conclude.
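For the reader's convenience, we record the backward form of Gronwall's inequality used in the last step of both proofs: if $\|z(t)\| \le a + \int_t^T b\,\|z(s)\|\,ds$ for all $t\le T$, with constants $a, b\ge 0$, then
\[
\|z(t)\| \;\le\; a\,e^{b(T-t)} \;\le\; a\,e^{bT}.
\]
Applied with $a = (C_0 + C_1 T + C_2 T^2)/N$ and $b = C_3 + C_4 L_\phi$, this yields exactly the bound stated in Proposition \ref{majorvalueodeproperty}.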
\section{Introduction}\label{intro} Helium is a very peculiar element in the sense that, despite its high abundance in the universe, its photospheric solar abundance is unknown. Unfortunately, it is undetectable in the visible photospheric spectrum and, until recently, the so-called solar photospheric abundance has been derived using theoretical stellar evolution models. A commonly accepted value for the ratio [He] = N$_{He}$/N$_{H}$ is 0.1 and represents the helium abundance of the nebula from which the solar system formed. However, there are now strong indications that this value is too large: the inversion of helioseismic data by different authors leads to values of [He], in the convection zone, in the range 0.078 -- 0.088 \citep[for a review, see][]{Boothroyd-Sackmann:03}. A reduction of the He abundance of roughly 10\% below its initial value could be explained by including diffusion in the theoretical evolution models \citep{Proffitt-Michaud:91,Boothroyd-Sackmann:03}. The values quoted above refer to the outer convection zone and to the photosphere. Measurements of [He] in the solar wind and interplanetary medium exhibit a high variability, from almost negligible values ($\approx 0.01$) in equatorial regions, to high values ($\approx 0.3$) in transient events. An average value in the fast component of the solar wind is about 0.05, thus about a factor of two lower than the photospheric [He] \citep{Barraclough-etal:95,Bochsler:98}. This value is also in contrast with the general pattern of abundance variations in the fast wind, where elements with First Ionization Potential (FIP) $\geq 10$~eV typically retain their photospheric abundance, as opposed to low-FIP elements ($\leq 10$~eV), which appear enhanced \citep[for a review, see][]{Meyer:96}. Direct measurements of [He] in the solar atmosphere closer to the surface are available only for the corona. In the quiet corona, at $R \approx 1.1\: R_{\sun}$, \cite{Gabriel-etal:95} and \cite{Feldman:98} found [He]~$\approx 0.08$, suggesting no helium depletion in the low corona, while in an equatorial streamer, at $R \approx 1.5\: R_{\sun}$, \cite{Raymond-etal:97} found an upper limit of 0.05. More recently, \cite{Laming-Feldman:01,Laming-Feldman:03} have estimated abundances of 0.05 and 0.04, respectively, at distances $R<1.1\: R_{\sun}$. Apart from the observed variability and/or inconsistencies of the measurements in the corona and the solar wind, the general pattern of the FIP effect suggests that elemental fractionation should occur in the chromosphere, at $T<10^4$~K \citep{Geiss:82}. Detailed theoretical calculations including diffusion effects \citep{Hansteen-etal:97,Lie-Svendsen-etal:03} seem to confirm that, indeed, [He] undergoes strong changes from the chromosphere to the lower corona, before stabilizing at the values observed in the solar wind. Therefore it would be very important to determine the He abundance at chromospheric and transition region (TR) levels directly from line profiles. The strongest He lines in the solar atmosphere are in the extreme ultraviolet (EUV) below 584 \AA. From the ground, the only observable He line in the quiet solar atmosphere (on the disc) is in the near infrared at 10830 \AA, but when active regions or flares are observed the He lines are enhanced, and in particular the $D_3$ line at 5876 \AA~can also be detected. The excitation of these subordinate lines is at least partially due to a photoionization-recombination mechanism or ``PR'' process \citep{Zirin:75,Andretta-Jones:97}, i.e.
to direct photoionization of chromospheric helium atoms by coronal EUV radiation shortward of 504 \AA\ and subsequent recombination to the excited levels of \ion{He}{1}. Observations actually suggest that this process might not be very important in quiescent regions, at least for the strong resonance, hydrogen-like \ion{He}{2} lines such as the Lyman-$\alpha$ at 304~\AA~\citep[e.g.,][]{FAL:93,Andretta-etal:00,Andretta-etal:03}. However, no conclusive determination of the role of this process is available yet for active regions, where the coronal EUV radiation field is much more intense than in quiet areas. Hence, the study of the solar He spectrum requires not only spectral observations of an active region (or flare) over a very large spectral range (from the EUV to the near infrared), but also an estimate of the coronal EUV radiation impinging on the observed active region. To this aim we planned an observing campaign (SOHO JOP 139), coordinated between ground-based and SOHO instruments, to obtain simultaneous spectroheliograms of the same area in several spectral lines, including four He lines, that sample the solar atmosphere from the chromosphere to the transition region. During this campaign we observed the region NOAA 9468 (cos$\theta$=0.99) on May 26, 2001 from 13:00 to 18:00 UT. A small two-ribbon flare (GOES class C1) developed in this region around 16 UT; its dynamics during the impulsive phase has already been studied in \cite{Teriaca-etal:03}. In this paper we concentrate on the pre-flare phase. We present semi-empirical models of the atmosphere of the active region (at 15:20 UT) constructed to match the observed line profiles from the chromosphere to the TR, taking into account the EUV radiation in the range 1 -- 500 \AA. The comparison between the observed and computed He line profiles allowed us to test the relevance of the PR process and to constrain the He abundance in an active region. In a forthcoming paper we will study the formation of He lines in the flare. \section{Observations}\label{obs} A detailed description of the observing program is given in \cite{Teriaca-etal:03}. We recall here that spectroheliograms were acquired with the Horizontal Spectrograph at the Dunn Solar Telescope (DST) of the National Solar Observatory / Sa\-cra\-men\-to Peak in the chromospheric lines Ca~{\sc ii} K, H$\alpha$ and Na~{\sc i} D as well as in the \ion{He}{1} lines at 5876 (D$_3$) and 10830 \AA. The full field of view of the DST, 170$\arcsec\times 170\arcsec$, was covered in about 5 minutes, with a sampling step of $2\arcsec$. Correcting for offsets among the different detectors resulted in a final useful field of view (FOV) of $160\arcsec\times 140\arcsec$ with an effective resolution of $2\arcsec$. During the same period, spectroheliograms of the active region were obtained in the spectral windows around the \ion{He}{1} 584 \AA\ and \ion{He}{2} 304 \AA\ lines with the Normal Incidence Spectrometer (NIS) of the Coronal Diagnostic Spectrometer (CDS) aboard SOHO. The slit was stepped by $6\arcsec$, covering a $148\arcsec$-wide area in $\sim 5.5$ minutes. The final useful FOV was $148\arcsec\times 138\arcsec$ with an effective resolution of $6\arcsec\times 3.4\arcsec$. Ground-based and CDS data were aligned using SOHO/MDI images as a reference; we estimate the alignment error to be a few arcseconds. CDS and ground-based spectra are simultaneous to within 2 minutes. The EUV flux was monitored by the Solar EUV Monitor (\SEM) instrument aboard SOHO \citep{Hovestadt-etal:95}.
The CELIAS/\SEM\ instrument provides calibrated total photon counts in the range $\lambda<500$~\AA\ (zero-th order, \SEM[0]), and in the range $260<\lambda<340$~\AA\ (first order, \SEM[1]), at 1~AU. Finally, the Extreme ultraviolet Imaging Telescope \citep[EIT;][]{Delaboudiniere-etal:95} provided synoptic series of full-disk images centered on 171, 195, 284 and 304 \AA\ around 13:00 and 19:00 UT. \section{Derivation of the photoionizing radiation}\label{calc:photorad} In Sec.~\ref{intro} we mentioned the role of EUV radiation, emitted mainly by coronal plasma, in the excitation of helium lines. The relevant radiation we need to estimate falls in the wavelength range below the photoionization threshold of \ion{He}{1} at 504~\AA\ (the photoionization threshold for \ion{He}{2} is at 228~\AA). A misconception, commonly found in the literature, is that it would be sufficient to have an estimate of the intensities just near the photoionization thresholds, because of the sharp decrease with wavelength of the photoionization cross-sections -- which are roughly proportional to $\lambda^2$ and to $\lambda^3$ for \ion{He}{1} and \ion{He}{2}, respectively. However, as discussed in more detail by \cite{Andretta-etal:03}, even photons of considerably shorter wavelength can efficiently photoionize helium atoms and ions: what matters most is the \emph{ratio} between the helium and hydrogen photoionization cross sections, at least for $\lambda>20$--$50$~\AA\ (at shorter wavelengths, inner-shell photoionization of metals absorbs most of the photons). The wavelength of the photoionizing photons mainly determines how deeply in the atmosphere they can penetrate, since the photoionization absorption cross-sections decrease with wavelength. This point will be further illustrated in Sec.~\ref{calc:atmos:EUVspectrum}. Therefore, for the calculation of the He line profiles we needed to estimate the total number of photoionizing photons impinging on the target active region, as well as their spectral distribution. We obtained reliable estimates of these quantities by combining observational constraints from CELIAS/\SEM\ and EIT with theoretical tools. The adopted procedures are illustrated in the following sections. \subsection{Coronal irradiance}\label{calc:photorad:irradiance} The solar irradiances measured at about 15:20~UT in the \SEM[0] and \SEM[1] wavebands are $\Irr{\SEM[0]}=5.05\times 10^{10}$ photons cm$^{-2}$ s$^{-1}$ and $\Irr{\SEM[1]}=2.57\times 10^{10}$ photons cm$^{-2}$ s$^{-1}$, respectively, with noise-like variations of the order of 0.1\%. Even the subsequent C1.1-class flare, starting at 16:01 UT \citep{Teriaca-etal:03}, produced variations in the \SEM\ irradiances of at most 1\%. Both quantities include the contribution of the \ion{He}{2}~304~\AA\ line, while $\Irr{\SEM[0]}$ also contains the other \ion{He}{2} resonance features. Since the He lines and continua will be self-consistently computed in our radiative transfer calculations, we need to exclude those contributions to obtain the input values for our model calculations. The \ion{He}{2}~256~\AA\ line has been observed to be at least 15 times weaker than the 304~\AA\ line in quiet regions \citep{Mango-etal:78}. For an active region, observations of the first four terms of the resonance series indicate that the \ion{He}{2}~304~\AA\ line is at least 40 times more intense than the 256~\AA\ line, and about 25 times more intense than the sum of the 256, 243, and 237~\AA\ terms \citep{Thomas-Neupert:94}.
Evaluating the contribution of the continuum at $\lambda<228$~\AA\ is considerably more difficult \citep[see, for instance,][]{Andretta-etal:03}. In an optically thin recombination spectrum, the contribution of that continuum could be as high as 30\% of the total number of photons. However, the existing observations of the resonance series (such as those cited above) strongly suggest that the \ion{He}{2} spectrum forms at high optical depths. Therefore, we may expect a considerably smaller contribution from the continuum and from other lines, as already suggested by \cite{Zirin:75} and \cite{Athay:88}. Overall, we thus estimate that the contribution of the \ion{He}{2} features below 256~\AA\ is less than 10\% of that of the \ion{He}{2}~304~\AA\ line, so that only the latter line needs to be taken into account in the measured \SEM\ irradiances. The contribution of the \ion{He}{2}~304~\AA\ line (plus the nearby \ion{Si}{11}~303~\AA\ line) to the \SEM[1] irradiance has been estimated by \cite{Thompson-etal:02} at approximately 50\%--60\% for observations acquired during 2001. Adopting an average value of 55\%, we obtain $\Irr{\mathrm{He}+\mathrm{Si}} = 1.4 \times 10^{10}$ photons cm$^{-2}$ s$^{-1}$ (the contribution of the \ion{Si}{11} 303~\AA\ line is about 10\%). Subtracting this value from the \SEM[0] irradiance, we obtain the solar irradiance due to coronal lines only: $\Irr{\mathrm{corona}} \approx 3.64 \times 10^{10}$ photons cm$^{-2}$ s$^{-1}$. The uncertainty in this value is mainly due to the uncertainty in the absolute calibration of the \ion{He}{2}~304~\AA\ line relative to the other lines in the CDS spectra used by \cite{Thompson-etal:02}. For CDS, the maximum uncertainty of the 2$^\textrm{nd}$-order calibration, relevant for the \ion{He}{2}~304~\AA\ line, can be conservatively estimated at $\approx 30\%$ \citep{Andretta-etal:03}. Thus, the above value of $\Irr{\mathrm{He}+\mathrm{Si}}$ has a maximum relative error of about 14\%, which translates into a maximum error of $0.2 \times 10^{10}$ photons cm$^{-2}$ s$^{-1}$ for $\Irr{\mathrm{corona}}$. If we treat this maximum error as a 3-$\sigma$ uncertainty, the standard deviation for $\Irr{\mathrm{corona}}$ can be estimated to be $\approx 0.06 \times 10^{10}$ photons cm$^{-2}$ s$^{-1}$. We thus finally obtain: $\Irr{\mathrm{corona}} = (3.6\pm 0.1) \times 10^{10}$ photons cm$^{-2}$ s$^{-1}$ (1-$\sigma$ error). In this estimated uncertainty we can safely include the errors introduced by neglecting the contributions of the \ion{Si}{11}~303~\AA\ line and of the other \ion{He}{2} features, contributions which should be roughly of the same order, as mentioned above, but of opposite sign. \subsection{Total number of ionizing photons at the solar surface}\label{calc:photorad:total} To obtain the contribution from the area around the DST slit to the total EUV solar irradiance we used the spatial distribution of EUV emission provided by EIT images. The details of the method are described in the Appendix; here we illustrate only the main assumptions and results. Basically, to obtain the mean value of $I^{\SEM[0]}$ and $I^{\SEM[1]}$ in any region of the EIT FOV, we scale the \SEM\ calibrated irradiances, \Irr{\SEM[0]} and \Irr{\SEM[1]}, by the ratio of the mean counts in the region to the total counts in the full EIT FOV (Eq.~\ref{eq:fac_area_R}).
This procedure assumes that the total counts over the detector in an EIT band (after basic CCD processing, including flat-fielding) are proportional to the irradiance, $\Irr{\mathrm{EIT}}$, in the same band. This is true if the contribution to $\Irr{\mathrm{EIT}}$ from the corona beyond 1.2--1.4 $R_\sun$ (the limit of an EIT image) is negligible. In fact, inspection of the EIT images used in our analysis reveals that only a small fraction of the counts comes from areas above $1.1\: R_\sun$, and most of those counts are probably due to light scattered in the telescope. The strongest assumption in this procedure, however, is that the images in the EIT wavebands are good linear proxies of the intensity integrated within the \SEM\ wavebands. In the case of the $\lambda<500$~\AA\ range (\SEM[0]), all EIT bands (around 171, 195, 284, and 304~\AA) include strong lines which contribute significantly to the total flux in that range. Thus, we may expect such an assumption to be quite reasonable. The case of the \SEM[1] range is more straightforward: the EIT~304 waveband, although narrower, is centered at the same wavelength ($\approx$~300~\AA), and we may expect EIT~304 images to be good proxies of the spatial variation on the disk of the radiance in the \SEM[1] waveband. On the other hand, the assumption of a linear correlation of EIT intensities with the integrated intensities in the \SEM\ wavebands is equivalent to assuming that the spectral distribution of EUV photons in those wavebands does not change with location on the Sun, and that only the overall intensity of the lines changes. This assumption is consistent with the calibration procedure of the \SEM\ data \citep{Judge-etal:98}, and is estimated to affect \SEM[0] measurements by about 10\% \citep{McMullin-etal:02}. Hence, we considered a region of (70\arcsec)$^2$ centered at the position of the DST slit (solar rotation being taken into account), and we computed the conversion factors from \Irr{\SEM[]} to the mean value of $I^{\SEM}$ using two sets of EIT synoptic images taken before and after the DST and CDS observations (at 13:00 and 19:00 UT). Since we had already subtracted the contribution of the \ion{He}{2}~304~\AA\ line from the \SEM[0] irradiance, we did not consider the EIT~304 images. In this way, we obtained an average conversion factor $c = (3.3 \pm 0.6)\times 10^4\: \mathrm{sr}^{-1}$, where the uncertainty was obtained from the standard deviation of the distribution of counts within the region of interest. Therefore, the intensity of coronal lines, integrated in the $\lambda<500$~\AA\ wavelength range and averaged over the area of the DST slit, is $I^\mathrm{corona} = \Irr{\mathrm{corona}}\times c = (1.2\pm 0.2)\times 10^{15}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$. The uncertainty is obtained by combining the uncertainties of $\Irr{\mathrm{corona}}$ and $c$. In order to validate this approach, we compared the intensity measured directly from CDS in the considered area, $I^{\mathrm{He}+\mathrm{Si}} = (1.1\pm 0.1)\times 10^{15}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$, with the one obtained from the \SEM[1] irradiance using the described procedure. The mean conversion factor from the two synoptic EIT~304 images is $(5.8\pm 2.5)\times 10^4\; \mathrm{sr}^{-1}$, and therefore $I^{\mathrm{He}+\mathrm{Si}} = \Irr{\mathrm{He}+\mathrm{Si}} \times c = (0.8\pm 0.4)\times 10^{15}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$. The agreement between these estimates and the values measured with CDS is well within the uncertainties.
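The chain of estimates in Secs.~\ref{calc:photorad:irradiance} and \ref{calc:photorad:total} reduces to a few lines of arithmetic. The following minimal sketch (in Python; the script and its variable names are illustrative only and not part of the original analysis) reproduces the subtraction of the \ion{He}{2} contribution, the associated error budget, and the conversion to a mean intensity over the DST slit:

\begin{verbatim}
import numpy as np

# SEM irradiances at ~15:20 UT (photons cm^-2 s^-1), from the text
SEM0, SEM1 = 5.05e10, 2.57e10

# He II 304 A (+ Si XI 303 A) contribution, ~55% of SEM1
# (Thompson et al. 2002)
I_HeSi = 0.55 * SEM1                 # ~1.4e10
I_corona = SEM0 - I_HeSi             # ~3.64e10, coronal lines only

# ~30% maximum (3-sigma) CDS 2nd-order calibration error translates
# into a ~14% maximum relative error on I_HeSi
sig_corona = 0.14 * I_HeSi / 3.0     # ~0.06e10 (1-sigma)

# EIT-based conversion from irradiance to mean intensity over the slit
c, sig_c = 3.3e4, 0.6e4              # sr^-1
I_mean = I_corona * c                # ~1.2e15 photons cm^-2 s^-1 sr^-1
sig_I = I_mean * np.hypot(sig_corona / I_corona, sig_c / c)  # ~0.2e15
\end{verbatim}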
\subsection{Angular and spectral distribution of photoionizing photons}\label{calc:photorad:spectrum} The values computed in the previous section refer to intensities incident perpendicularly to the solar surface. In the following calculations, we assume that the incident photoionizing intensity is constant with direction. It is very difficult to evaluate this assumption, since the variation with angle of the photoionizing radiation depends on the spatial distribution of the coronal plasma around the region of interest. For instance, had we assumed a vertically stratified, plane-parallel, optically thin corona, we would have obtained an intensity varying as $1/\mu$, where $\mu$ is the cosine of the angle with the normal. Under such an assumption, the variation of the mean intensity, $J$, (and thus of the photoionization rate) with optical depth, $\tau$, is $J(\tau) \sim E_1(\tau)$, where $E_1(x)$ is the first exponential integral. The constant-intensity case we adopted gives instead a dependence $J(\tau) \sim E_2(\tau)$. However, at large depths the two assumptions give the same result: $J(\tau) \sim \exp(-\tau)/\tau$. The differences are strong only in the shallower layers, with the ratio $E_1(\tau)/E_2(\tau)$ falling below 2 already for $\tau > 0.26$ (a short numerical check is given in the sketch below). For the wavelength distribution of the ionizing photons, we adopt a synthetic spectrum obtained with the CHIANTI database \citep[version~4:][]{Young-etal:03} with the Differential Emission Measure (DEM) of an ``average'' active region, from \cite{Vernazza-Reeves:78}. In the calculations, we used standard elemental abundances, with some values scaled to mimic the FIP effect (further details in the file \texttt{sun\_hybrid\_ext.abund} in the CHIANTI package), and a value $P_\mathrm{e}/k = 3\times 10^{15}\; \mathrm{K}\; \mathrm{cm}^{-3}$, where $P_\mathrm{e}$ is the electron pressure and $k$ the Boltzmann constant. We needed to scale the synthetic spectrum (excluding, of course, the \ion{He}{2} lines and continua) by a factor $1.3\pm 0.3$ to obtain the value of $I^\mathrm{corona}$ found in Sec.~\ref{calc:photorad:total}. We note that such a linear scaling is equivalent to assuming that the shape of the DEM distribution of the plasma in our region of interest is the same as that of the distribution used to compute the synthetic spectrum. The resulting spectrum is shown in Fig.~\ref{fig:EUVspec}. \begin{figure} \centering \resizebox{0.6\hsize}{!}{\includegraphics[angle=90]{compare_ar_spectra.ps}} \caption{Estimated spectral distributions of photons around the DST slit, derived as described in Sec.~\ref{calc:photorad:spectrum}. The solid histogram shows the spectral distribution derived from a CHIANTI synthetic spectrum; the grey area indicates 1-$\sigma$ uncertainties.} \label{fig:EUVspec} \end{figure} \section{Atmospheric models}\label{calc:atmos} Our goal was the determination of semi-empirical models of the chromosphere and low transition region of the active region, providing a good match to the observed line profiles. The diagnostics we used for the chromospheric part of the models were the profiles of the Ca~{\sc ii} K, H$\alpha$, and Na~{\sc i} D lines, and the He~{\sc i} lines at 5876 \AA\ (D$_3$) and 10830 \AA, averaged over a region of $6\arcsec\times 3.4\arcsec$, which corresponds to the effective resolution of CDS. For the transition region we used the CDS profiles of the He~{\sc i} line at 584 \AA\ and of the He~{\sc ii} Ly-$\alpha$ line at 304 \AA.
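The numerical check of the angular-distribution argument of Sec.~\ref{calc:photorad:spectrum} anticipated above is straightforward. A minimal sketch (Python with SciPy; purely illustrative, not part of the modeling code):

\begin{verbatim}
# Depth dependence of the mean intensity J(tau) under the two angular
# assumptions discussed in the text: J ~ E1(tau) for a 1/mu incident
# intensity, J ~ E2(tau) for an angle-independent incident intensity.
import numpy as np
from scipy.special import expn   # generalized exponential integral E_n

tau = np.array([0.10, 0.26, 1.0, 3.0])
ratio = expn(1, tau) / expn(2, tau)
print(ratio)   # ~ [2.5, 2.0, 1.5, 1.2]: the two cases differ
               # appreciably only in the shallow layers, and both
               # tend to exp(-tau)/tau at large depths
\end{verbatim}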
\subsection{General characteristics}\label{calc:atmos:charac} The modeling was done using the program PANDORA \citep{Avrett-Loeser:84}. Given a {\it T} vs. {\it z} distribution, we solved the non-LTE radiative transfer and the statistical and hydrostatic equilibrium equations, and self-consistently computed non-LTE populations for 10 levels of H, 29 of He~{\sc i}, 15 of Fe~{\sc i}, 9 of C~{\sc i}, 8 of Si~{\sc i}, Ca~{\sc i} and Na~{\sc i}, 6 of Al~{\sc i} and He~{\sc ii}, and 7 of Mg~{\sc i}. In addition, we computed 6 levels of Mg~{\sc ii} and 5 of Ca~{\sc ii}. More details on the modeling, and on the different assumptions and their validity, can be found in \cite{Falchi-Mauas:98}. Here, we just point out that the Ca~{\sc ii} line profiles are computed in Partial Redistribution and that line-blanketing is included using Kurucz's opacities. The atomic models we used for H and Ca~{\sc ii} are described in \cite{Mauas-etal:97} and \cite{Falchi-Mauas:98}. For He~{\sc i}, we used the 29-level model described in \cite{Andretta-Jones:97} which, apart from the number of levels, differs very slightly from the model of \cite{FAL:93}. The different components in the lines were treated assuming that the sublevels are populated according to LTE, as explained in \cite{Mauas-etal:88}. For He~{\sc ii} we used the 6-level model of \cite{FAL:93}. The calculations include incident radiation from the coronal lines, which affects in particular the ionization balance in the H, He~{\sc i} and He~{\sc ii} continua. As explained in Sect. \ref{calc:photorad:spectrum}, this radiation was obtained by scaling the distribution given by the CHIANTI database for an active region by a factor of 1.3. As a starting point for the helium abundance, we used the standard photospheric value [He]=0.1. The microturbulence followed the semiempirical distribution given by \cite{FAL:91} as a function of total hydrogen density. As studied by Fontenla et al. (2002), the inclusion of ambipolar diffusion of helium can have an important effect on the construction of a model, changing the energy balance and therefore the {\it T} vs. {\it z} structure. However, it must be remarked that their models of the low transition region are theoretical. We made some trial runs including ambipolar diffusion but keeping our atmospheric structure fixed, and found that the emitted profiles remain unchanged. \subsection{Model construction}\label{calc:atmos:standard} Given the different sensitivity of the various spectral lines to modifications in different parts of the model, we can describe its construction as a step-by-step process. The source function of H$\alpha$ is sensitive to the structure of the chromosphere from the temperature-minimum region to the base of the transition region, even if the line center is formed higher up in the atmosphere. The Ca~{\sc ii} K line, in particular its wings and the K$_1$ minimum, is formed at the temperature minimum and in the low and mid-chromosphere. Therefore, we fixed the deeper part of the chromospheric structure using these two profiles, and cross-checked it with the profiles of the Na~{\sc i} D lines, which are much less sensitive to these atmospheric layers. None of these lines depends on the radiation field. The profiles of the He~{\sc i} lines at 5876 \AA\ (D$_3$) and 10830 \AA\ are formed in two distinct regions. Most of the radiation in both lines originates in the photosphere, which in the quiet Sun results in a weak absorption line at 10830 \AA\ and no noticeable line at 5876 \AA.
However, in an active region there is also a chromospheric contribution which depends mainly on the coronal EUV incident radiation, but also on the thermal structure of the high chromosphere/low transition region, between $1.\times 10^4$ and $2.5\times 10^4$ K. We used these lines, therefore, to fix this region as a second step in the construction of the model. Finally, both ultraviolet lines, the He~{\sc i} line at 584 \AA\ and the He~{\sc ii} line at 304 \AA, depend on the structure of the low and mid-transition region, from $3.\times 10^4$ to $5.\times 10^4$ K for the 584 \AA\ line, and up to $1.\times 10^5$ K for the 304 \AA\ line. As mentioned in the Introduction, their dependence on the radiation field has not yet been assessed, and should therefore be investigated. Hence, we used these lines in a final step as diagnostics of the structure of the transition region. In Fig.~\ref{mod_vt} we show the distribution of temperature as a function of column mass for the model that gives the best match to the observations (dashed line). Also shown are the regions of formation of the different lines we used as diagnostics for the model construction. \begin{figure} \centering \includegraphics[width=7cm]{mod_vt.eps} \caption{Temperature vs column mass distribution of the computed atmospheric models. The models shown were obtained with the microturbulent velocity of \cite{FAL:91} (dashed line), and with a value of 5 km s$^{-1}$ in the region of formation of the lines (solid line, see text). The plage model P \citep{FAL:93} is displayed for reference (dotted line). Also shown are the heights of formation of the different lines we used to build the models. The He~{\sc i} 584 line is formed at temperatures between $2.4\times 10^4$ and $3.4\times 10^4$ K, and the He~{\sc ii} 304 line is formed between $9.5\times 10^4$ and $1.\times 10^5$ K.} \label{mod_vt} \end{figure} In Fig.~\ref{prof_hcn} we show the computed and observed line profiles for H$\alpha$ and the Ca~{\sc ii} K and Na~{\sc i} D lines, which do not depend on the radiation field. All computed profiles were convolved with the instrumental response. The bars in the Figure indicate the r.m.s. of the profiles averaged over the area observed by CDS. It can be seen that the agreement found is very good, well within the r.m.s. \begin{figure} \centering \includegraphics[width=7cm]{prof_hcn.eps} \caption{Observed (dashed line) and computed (solid line) profiles for H$\alpha$ and the Ca~{\sc ii} K and Na~{\sc i} D lines. The bars indicate the r.m.s. of the averaged profiles (see text) and point out the variability between pixels.} \label{prof_hcn} \end{figure} In Fig.~\ref{prof_hel} we compare the observed helium lines with the profiles obtained from this model, again convolved with the instrumental response. We stress that for the 584 \AA\ line the instrumental effects are rather heavy, completely washing out the strong central self-absorption of the computed profile (see the following Section and Fig.~\ref{prof_584_dist}). The \ion{He}{2}~304 \AA\ line profile (not shown) is very similar to that of the 584 \AA\ line. For all the helium lines, although the central intensities agree with the observations within the variability between pixels, the computed profiles are too broad. Furthermore, the observed profile of the 10830 \AA\ line distinctly shows the weak blue component of the triplet, which in the computed profile appears washed out. The main parameter that contributes to the broadening of these lines is the microturbulent velocity (v$_t$).
As described above, in this model we adopted the turbulence distribution given by \cite{FAL:91}. To obtain a better match with the observed He lines we then tried a different microturbulence distribution, changing v$_t$ from 10 to 5 km s$^{-1}$ in the region with temperature ranging from $1.\times 10^4$ to $2.5\times 10^4$ K. Since the microturbulent velocity is included in the hydrostatic equilibrium equations, changing it implies changing the densities, and therefore the atmospheric structure. Hence, the whole procedure was repeated until a satisfactory match with the observations was found, and a new model was built. This model is also shown in Fig.~\ref{mod_vt} as a solid line, and the corresponding profiles for the He lines are shown in Fig.~\ref{prof_hel}. The new profiles for H$\alpha$ and the Ca~{\sc ii} K and Na~{\sc i} D lines (not shown) are very similar to the ones obtained with the former model. It can be seen that the match to the He profiles is now much improved. We have therefore adopted this model as our standard model for the observed active region. \begin{figure} \centering \includegraphics[width=7cm]{prof_hel.eps} \caption{Dotted line: observed profiles for three helium lines. Dashed line: computed profiles (convolved with the instrumental response) for the model adopting the microturbulence of \cite{FAL:91}. Solid line: same as dashed, for our standard model where the microturbulence has been reduced (see text). In the upper and middle panels, bars indicate the r.m.s. of the averaged profiles and point out the variability between pixels. For the shallow D$_3$ line (maximum central depth $\approx$ 5\%) the variability is very large and the bars are clipped at the limits of the plot.} \label{prof_hel} \end{figure} Finally, we checked {\it a posteriori} that the value of $P_e/k=3\times 10^{15}$ K cm$^{-3}$, assumed to compute the spectral distribution of the coronal radiation with the CHIANTI database, agrees with that found with our models. The outermost point of the model for which we have an observational constraint has a temperature T$=1.1 \times 10^5$ K and an electron density N$_e=2.3\times 10^{10}$ cm$^{-3}$, corresponding to $P_e/k=2.5\times 10^{15}$ K cm$^{-3}$, in very good agreement. \subsection{Effect of the EUV radiation}\label{calc:atmos:EUVspectrum} \def\lam{} To assess the influence of the coronal radiation on the profiles of the helium lines, we performed new calculations for different intensities of the radiation field, and for different spectral distributions. Since the most important change in the profiles with the radiation field is in the central depth of the D$_3$ and \lam\ 10830 \AA\ lines, in Fig.~\ref{int_vs_phot} we show the intensity at the center of the 10830 line as a function of the intensity $I^\mathrm{corona}$. \begin{figure} \centering \includegraphics[width=7cm]{int_vs_phot.eps} \caption{Central intensity of the \lam 10830 \AA\ line as a function of the intensity $I^\mathrm{corona}$ for different step distributions of the field.} \label{int_vs_phot} \end{figure} Each curve in the figure represents a different spectral distribution. In each case, we considered a step function, which is non-zero only in a limited spectral range and zero outside it. In this way, we tested distributions with all the photons localized between 25 and 50 \AA, between 50 and 100 \AA, 100 and 200 \AA, 200 and 300 \AA, 300 and 400 \AA, and between 400 and 504 \AA. Several important conclusions can be drawn from this Figure.
First, photons at short wavelengths ($\lambda < 50$~\AA) have a small influence on the line profiles. Second, for the same value of $I^\mathrm{corona}$, photons between 50 and 100 \AA\ are the most effective in increasing the line central intensity. Third, the details of the spectral distribution between 100 and 504 \AA\ are of little relevance for the resulting central intensity. Such effects, anticipated by \cite{Andretta-etal:03} and reprised in Sect. \ref{calc:photorad}, can be understood in terms of the dependence on wavelength of the fraction of EUV photons effectively capable of photoionizing helium atoms and ions. The first effect is essentially due to the fact that short-wavelength photons are almost entirely absorbed by metals, via inner-shell photoionizations. At longer wavelengths, the relevant parameter is the ratio of the helium and hydrogen photoionization cross sections. The fact that this ratio between 50 and 100~\AA\ is about 3--5 times the same ratio at 500~\AA\ is consistent with the second conclusion drawn from Fig.~\ref{int_vs_phot}. Likewise, the third result can be understood by noting that the ratio between the \ion{He}{1} and hydrogen cross-sections changes relatively slowly at long wavelengths. In light of these results, we note that, in principle, any given central intensity of the 10830 line could be obtained with different values of $I^\mathrm{corona}$, but only by assuming very different spectral distributions. For example, the observed central intensity can be obtained with $8 \times 10^{14}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ between 50 and 100 \AA, or with almost twice as many photons between 300 and 504 \AA. However, for a ``stationary'' active region such as in our case, the hypothesis of a dominant contribution at shorter wavelengths is rather implausible, so that our assumption of a synthetic spectrum obtained with the CHIANTI database (Sect. \ref{calc:photorad:spectrum}) seems fully justified. In Fig.~\ref{prof_584_dist} we show the effect of the coronal radiation on the 584 \AA\ profile. We compare the profiles of the 584 \AA\ line obtained with no incident coronal radiation with those obtained with $I^\mathrm{corona} = 1 \times 10^{15}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ and two different spectral distributions. We see that the profiles are very similar and that the maximum difference (in the peak intensity) is about 10\%. The profile variation depends more on the spectral distribution than on the intensity of the coronal radiation. As expected, when all photons are between 50 and 100 \AA\ and penetrate deeper into the atmosphere, the line does not change appreciably. A small change occurs instead when all photons are between 400 and 504 \AA. We obtained a similar result for the He~{\sc ii} 304 \AA\ line. Our analysis hence shows that, as in the quiescent case, also for active regions the coronal radiation has a much stronger effect on the chromospheric He lines than on the TR He lines. \begin{figure} \centering \includegraphics[width=7cm]{prof_584_dist.eps} \caption{584 \AA\ profile computed with no incident coronal radiation (solid line), compared with the profiles computed for $I^\mathrm{corona} = 1 \times 10^{15}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ but with different spectral distributions. Dashed line: all photons between 400 and 504 \AA.
Dotted line: all photons between 50 and 100 \AA.} \label{prof_584_dist} \end{figure} \subsection{Helium Abundance}\label{calc:atmos:abun} In the previous Sections, we have shown how a semi-empirical model employing the standard value of [He]=0.1 can reproduce the observational data very well. On the other hand, the actual value of [He] in different parts of the solar atmosphere is a matter of discussion (see the Introduction), so it is of interest to investigate how different values of [He] affect our computations. \begin{figure} \centering \includegraphics[width=7cm]{mod_abund.eps} \caption{Temperature vs column mass distribution of the atmospheric models obtained for different values of the He abundance, in the region where they differ. Solid line: [He]=0.1, our standard model. Dotted line: [He]=0.07. Dashed line: [He]=0.15.} \label{mod_abund} \end{figure} Changing the abundance modifies the total density at a given height, and therefore affects the hydrostatic equilibrium. Hence, modifying [He] in the whole solar atmosphere would result in a completely different atmospheric structure. In particular, the resulting photosphere could not be constrained by our observations. Furthermore, the region between the chromosphere and the transition region has been indicated as a good candidate for processes that might be responsible for strong variations of [He] \citep{Lie-Svendsen-etal:03}. For these reasons, we changed [He] only from the point where T = $1.\times 10^4$ K outwards, where the He lines are formed, and recomputed the models until we found a satisfactory match with the observations. We built two different models, assuming abundance values of [He]=0.07 and [He]=0.15. In Fig.~\ref{mod_abund} we show these models, together with the one obtained for the standard abundance [He]=0.1, in the region above log(m)=-3, where they differ. In Fig.~\ref{prof_abund} we compare the 10830 and 584 \AA\ profiles obtained with these three models. In general, the differences in the profiles are very small, although a larger value of the abundance corresponds to narrower profiles in both lines. This is due to the fact that, to compensate for a smaller abundance, the region of formation of the lines needs to be extended in order to obtain enough helium atoms. This larger region between $1.\times 10^4$ and $2.5\times 10^4$ K can be noted in Fig.~\ref{mod_abund}, and results in a broader line. Although the ratio between the peaks and the central absorption of the 584 \AA\ line changes by a factor of two between the extreme abundance values, the convolution with the broad CDS response destroys any difference. Only observations of EUV lines at much higher spectral resolution might then help discriminate between different abundance values. \begin{figure} \centering \includegraphics[width=7cm]{prof_abund.eps} \caption{Profiles of two helium lines, for different values of the helium abundance. Solid line: [He]=0.1, our standard model. Dotted line: [He]=0.07. Dashed line: [He]=0.15. For the 584 \AA\ line, the profiles before convolution with the instrumental response are also shown.} \label{prof_abund} \end{figure} \section{Discussion and conclusions}\label{disc} We obtained a large set of simultaneous and cospatial observations of an active region, including the chromospheric lines Ca~{\sc ii} K, H$\alpha$ and Na~{\sc i} D as well as the \ion{He}{1} lines at 5876 (D$_3$), 10830 and 584 \AA\ and the \ion{He}{2} line at 304 \AA.
The EUV radiation in the range $\lambda<500$~\AA\ and in the range $260<\lambda<340$~\AA\ was also measured at the same time. Using the program PANDORA we were able to build semiempirical atmospheric models matching this set of observables. The He lines and continua are self-consistently computed in the radiative transfer calculations. In particular, we were interested in the role of the photoionization-recombination process in the formation of the He lines and in the effect of possible variations of [He] in an active region. One of the main problems for the calculation of the He line profiles was to estimate the total number of ionizing photons impinging on the target active region, and their spectral distribution. The adopted procedure combines the observations from the CELIAS/\SEM\ and EIT instruments for our region with the synthetic spectrum obtained with the CHIANTI database for an ``average'' active region and $P_\mathrm{e}/k = 3\times 10^{15}\; \mathrm{K}\; \mathrm{cm}^{-3}$. We needed to scale the synthetic spectrum by a factor $1.3\pm 0.3$ to obtain the estimated value of $I^\mathrm{corona}$ (Sec.~\ref{calc:photorad:total}). We have two independent checks to validate this approach. First, we compared the \ion{He}{2} 304 intensity, measured directly from CDS in the considered area, with the one obtained from the \SEM[1] irradiance in the range $260<\lambda<340$~\AA, and the agreement was well within the uncertainties. Second, {\it a posteriori}, we found that the electron pressure at the outermost point of the model for which we have an observational constraint agrees with the one assumed to compute the spectral distribution with the CHIANTI database. This agreement guarantees that the obtained model is self-consistent with the EUV coronal intensity used. A semiempirical model with a standard value of [He] and a modified distribution of the microturbulence v$_t$ (reduced in the region of formation of the He lines) provides a good agreement with all the observations. However, we must stress that the broad CDS response function heavily conceals the details of the EUV line profiles, thus limiting their diagnostic relevance. For this model, defined as our standard model, we studied the influence of the coronal radiation on the computed helium lines. We found that, as in the quiescent case, in an active region the incident coronal radiation has a limited effect on the UV He lines, while it is fundamental for the D$_3$ and 10830 lines. For these latter two lines, photons with wavelengths below 100 \AA\ are very effective in increasing the line depth, confirming that for the calculation of the He line profiles it is crucial to correctly estimate the total number of ionizing photons and their spectral distribution. While the assumption of an average distribution might not be critical in the case of a ``stationary'' active region, this might not be true in the case of a flare, where the coronal EUV radiation is probably harder. Finally, we tested how the helium abundance influences our computed profiles, changing the value of [He] only in the region where the temperatures are larger than $1.\times 10^4$ K. We built two more models, for [He]=0.15 and [He]=0.07, and found that with these models we could match the observations as well as with the standard model. The differences in the computed lines are most evident for the 584 and 304 \AA\ lines, whose ratio between the peaks and the central absorption changes by a factor of two between the extreme [He] values.
However, given the coarse spectral resolution of CDS, these differences are not appreciable in our observations, which hence do not provide enough constraints to choose between [He] values. Observations of the 584 and 304 \AA\ lines with a spectral element smaller than 0.025 \AA/pix around 584 \AA\ might help in this respect. This resolution is currently achieved only by the SUMER spectrograph aboard SOHO. However, SUMER has been restricted to off-limb observations since the year 2000; moreover, it does not observe the \ion{He}{2} Lyman-$\alpha$ line around 304 \AA. Thus we have to wait for the next generation of EUV spectrographs aboard one of the forthcoming missions, such as Solar Orbiter. Furthermore, since the changes of the atmospheric structure with [He] are larger in the TR, it would be useful to constrain this part of the atmosphere with other observables, independent of the He lines. In particular, we suggest that simultaneous observations of EUV lines of C~{\sc ii}, C~{\sc iii}, Si~{\sc ii} and Si~{\sc iii} might be very important to define the structure of the TR and hence help to discriminate between [He] values. \acknowledgments We thank D.\ McMullin, W.\ Thompson, and G.\ Del Zanna for their help and useful advice about obtaining, analyzing, and interpreting the SOHO data used in this paper. We would like to thank the CDS and NSO teams for their invaluable support in performing these observations. SOHO is a project of international cooperation between NASA and ESA. CHIANTI is a collaborative project involving NRL (USA), RAL (UK), and the Universities of Florence (Italy) and Cambridge (UK).
\section{Introduction} The problem of contamination of kinematic samples of galaxies in clusters by foreground and background galaxies is longstanding. It arises because only the projected positions and line-of-sight velocities of galaxies are measured in redshift surveys. Due to the lack of knowledge about the motion perpendicular to the line of sight, it is difficult to judge a priori which of the galaxies found close to a cluster in projected space are actually bound to it and good tracers of the underlying potential. Excluding genuine members or including unbound galaxies (interlopers) may lead to significantly incorrect estimates of the cluster mass. Several methods have been suggested in the literature to address this problem. All these methods aim at cleaning the galaxy sample by removing non-members before attempting a dynamical analysis of the cluster. Some algorithms utilize only the redshift information, such as (i) the 3$\sigma$-clipping method \citep{Yahil77}, which iteratively eliminates interlopers with velocities deviating by more than 3$\sigma$ from the mean cluster velocity; (ii) the fixed gapper technique \citep{Beers90,Zabludoff90}, in which any galaxy that is separated by more than a fixed value (e.g., $1\sigma$ of the sample or 500--1000 km s$^{-1}$) from the central body of the velocity distribution is rejected as a non-member; or (iii) the jackknife technique \citep{Perea90}, which removes the galaxy whose elimination causes the largest change in the virial mass estimator. These methods are primarily based on statistical rules and some selection criteria. Other algorithms utilize both position and redshift information, such as (i) the shifting gapper technique \citep{Fadda96}, which applies the fixed gapper technique to a bin shifting along the distance from the cluster center, or (ii) the \citet{denHartog96} technique, which estimates the maximum (escape) velocity as a function of distance from the cluster center, calculated either by the virial or the projected mass estimator (e.g., \citealp{Bahcall81,Heisler85}). In addition to the techniques described above, the spherical infall models (hereafter referred to as SIMs, e.g., \citealp{Gunn72,Yahil85,Regos89,Praton94}) can determine the infall velocity as a function of distance from the cluster center. The SIM in phase-space has the shape of two trumpet horns glued face to face \citep{Kaiser87} which enclose the cluster members. However, studies show that clusters are not well fit by SIMs in the projected phase-space diagram, because of the random motion of galaxies in the cluster outer region caused by the presence of substructure or ongoing mergers \citep{vanHaarlem93,Diaferio99}. A recent investigation \citep{Abdullah13} showed that SIMs can be applied to a sliced phase-space by taking into account the distortion of phase-space due to transverse motions of galaxies with respect to the observer and/or rotational motion of galaxies in the infall region in the cluster-rest frame. However, that is beyond the scope of the current paper. Another sophisticated method is the caustic technique described by \cite{Diaferio99} which, based on numerical simulations \citep{Serra13}, is estimated to be able to identify cluster membership with $\sim 95\%$ completeness within $3r_v$ ($r_v$ is the virial radius defined below).
The caustic technique depends on applying the two-dimensional adaptive kernel method (hereafter, 2DAKM, e.g., \citealp{Pisani93,Pisani96}) to galaxies in phase-space ($R_p$, $v_z$), with the optimal smoothing length $h_{opt} = (6.24/N^{1/6}) \sqrt{(\sigma_{R_p}^2+\sigma_{v_z}^2)/2}$, where $\sigma_{R_p}$ and $\sigma_{v_z}$ are the standard deviations of the projected radius and line-of-sight velocity, respectively, and $N$ is the number of galaxies. $\sigma_{R_p}$ and $\sigma_{v_z}$ should have the same units and therefore the coordinates ($R_p$, $v_z$) should be rescaled such that $q = \sigma_{v_z} / \sigma_{R_p}$, where $q$ is a constant which is usually chosen to be 25 or 35 (additional details about the application of this technique may be found in \citealp{Serra11}). One more technique that should be mentioned here is the halo-based group finder \citep{Yang05,Yang07}. \citet{Yang07} were able to recover true members with $\sim 95\%$ completeness in the case of poor groups ($\sim10^{13} \mbox{M}_\odot$). However, they found that the completeness dropped to $\sim 65\%$ for rich massive clusters ($\sim10^{14.5} \mbox{M}_\odot$). Also, theirs is an iterative method which needs to be repeated many times to obtain reliable members. Moreover, its application depends on some assumptions and empirical relations to identify the group members. This paper introduces a simple and effective new technique to constrain cluster membership which avoids some issues of other techniques, e.g., selection criteria, statistical methods, the assumption of empirical relations, or the need for multiple iterations. The paper is organized as follows. The simulations used in the paper are described in \S \ref{sec:sims}. In \S \ref{sec:Tech} the GalWeight technique is introduced and its efficiency at identifying \emph{bona fide} members is tested on MultiDark N-body simulations. In \S \ref{sec:comp}, we compare GalWeight with four well-known existing cluster membership techniques (shifting gapper, den Hartog, caustic, SIM). We apply GalWeight to twelve Abell clusters (including the Coma cluster) in \S \ref{sec:sdss}, and present our conclusions in \S \ref{sec:conc}. Throughout this paper we adopt $\Lambda$CDM with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $H_0=100$ $h$ km s$^{-1}$ Mpc$^{-1}$, $h = 1$. \section{Simulations}\label{sec:sims} In this section we describe the simulated data that we use in this work to test the efficiency of the GalWeight technique in recovering the true membership of a galaxy cluster. {\bf 1. MDPL2}: The MDPL2\footnote{https://www.cosmosim.org/cms/simulations/mdpl2/} simulation is an N-body simulation of $3840^3$ particles in a box of co-moving length 1 $h^{-1}$ Gpc, mass resolution of $1.51 \times 10^9$ $h^{-1}$ M$_{\odot}$, and gravitational softening length of 5 $h^{-1}$ kpc (physical) at low redshifts, from the suite of MultiDark simulations (see Table 1 in \citealp{Klypin16}). It was run using the L-GADGET-2 code, a version of the publicly available cosmological code GADGET-2 \citep{Springel05}. It assumes a flat $\Lambda$CDM cosmology, with cosmological parameters $\Omega_\Lambda$ = 0.692, $\Omega_m$ = 0.307, $\Omega_b$ = 0.048, $n$ = 0.96, $\sigma_8$ = 0.823, and $h$ = 0.678 \citep{Planck14}. MDPL2 provides a good compromise between numerical resolution and volume \citep{Favole16}. It also provides us with a large number of clusters of different masses, extending from $0.7\times10^{14}$ to $37.4\times10^{14}$ $h^{-1}$ M$_{\odot}$ (the range used to test the efficiency of GalWeight). {\bf 2.
Bolshoi}: The Bolshoi simulation is an N-body simulation of $2048^3$ particles in a box of co-moving length 250 $h^{-1}$ Mpc, mass resolution of $1.35 \times 10^8$ $h^{-1}$ M$_{\odot}$, and gravitational softening length of 1 $h^{-1}$ kpc (physical) at low redshifts. It was run using the Adaptive Refinement Tree (ART) code \citep{Kravtsov97}. It assumes a flat $\Lambda$CDM cosmology, with cosmological parameters $\Omega_\Lambda$ = 0.73, $\Omega_m$ = 0.27, $\Omega_b$ = 0.047, $n$ = 0.95, $\sigma_8$ = 0.82, and $h$ = 0.70. Bolshoi provides us with clusters of higher mass resolution than MDPL2. Thus, we use both simulations to test the efficiency of GalWeight in recovering the true membership. For both simulations, halos are identified using the Bound Density Maximum (BDM) algorithm \citep{Klypin97,Riebe13}, which has been extensively tested (e.g., \citealp{Knebe11}). BDM identifies local density maxima, determines a spherical cut-off for the halo with overdensity equal to 200 times the critical density of the Universe ($\rho = 200 \rho_c$) for MDPL2 and 360 times the background matter density of the Universe ($\rho = 360 \rho_{bg}$) for Bolshoi, and removes unbound particles from the halo boundary. Among other parameters, BDM provides virial masses and radii. The virial mass is defined as $M_{v} =\frac{4}{3} \pi 200 \rho_c r_{v}^3$ for MDPL2 and $M_{v} =\frac{4}{3} \pi 360 \rho_{bg} r_{v}^3$ for Bolshoi (see \citealp{Bryan98,Klypin16}). The halo catalogs are complete for halos with circular velocity $v_c \geq 150$ km s$^{-1}$ for MDPL2 \citep{Klypin16} and $v_c \geq 100$ km s$^{-1}$ for Bolshoi (e.g., \citealp{Klypin11,Busha11,Old15}). For both MDPL2 and Bolshoi, the phase-space (line-of-sight velocity $v_z$ versus projected radius $R_p$) of a distinct halo (cluster) is constructed as follows. We assume the line-of-sight to be along the z-direction and the projection to be on the x-y plane. We select a distinct halo of coordinates ($x^h,y^h,z^h$) and velocity components ($v_x^h,v_y^h,v_z^h$), and then calculate the observed line-of-sight velocity of a subhalo, taking the Hubble expansion into account, as $v_{z} = (v_z^g- v_z^h) + H_0 (z^g-z^h)$, where ($x^g,y^g,z^g$) and ($v_x^g,v_y^g,v_z^g$) are the coordinates and velocity components of the subhalo, respectively. Finally, we select all subhalos within a projected radius of $R_{p,max} = 10$ $h^{-1}$ Mpc\footnote{Throughout the paper we utilize lowercase $r$ to refer to 3D radius and capital $R$ to refer to projected radius.} from the center of the distinct halo and within a line-of-sight velocity interval of $|v_{z,max}| = 3500$ km s$^{-1}$. These values are chosen to be sufficiently large to exceed both the turnaround radius and the length of the Finger-of-God (hereafter, FOG), which are typically $\sim$7--8 $h^{-1}$ Mpc and $\sim 6000$ km s$^{-1}$, respectively, for massive clusters. The turnaround radius $r_t$ is the radius at which a galaxy's peculiar velocity ($v_{pec}$) is canceled out by the global Hubble expansion. In other words, it is the radius at which the infall velocity vanishes ($v_{inf} = v_{pec} - H~ r = 0$). \section{The Galaxy Weighting Function Technique (GalWeight)} \label{sec:Tech} In this section, we describe the GalWeight technique in detail and demonstrate its use by applying it interactively to a simulated cluster of mass $9.37 \times 10^{14}$ $h^{-1}$ M$_{\odot}$ selected from the Bolshoi simulation. Figure~\ref{fig:F01C} shows the phase-space distribution of subhalos (galaxies) near the center of the simulated cluster.
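The construction just described requires only a few operations per subhalo. The sketch below (Python/NumPy; the function and array names are illustrative, not taken from the MultiDark or Bolshoi databases) builds the projected phase-space for one host halo:

\begin{verbatim}
import numpy as np

H0 = 100.0  # km s^-1 (h^-1 Mpc)^-1, as adopted throughout the paper

def projected_phase_space(host_pos, host_vel, sub_pos, sub_vel,
                          Rp_max=10.0, vz_max=3500.0):
    # host_pos, host_vel: (3,) arrays; sub_pos, sub_vel: (N, 3) arrays;
    # positions in h^-1 Mpc, velocities in km s^-1.
    # Projected separation on the x-y plane:
    Rp = np.hypot(sub_pos[:, 0] - host_pos[0],
                  sub_pos[:, 1] - host_pos[1])
    # Line-of-sight (z) velocity, including the Hubble term:
    # v_z = (v_z^g - v_z^h) + H0 (z^g - z^h)
    vz = (sub_vel[:, 2] - host_vel[2]) \
         + H0 * (sub_pos[:, 2] - host_pos[2])
    keep = (Rp <= Rp_max) & (np.abs(vz) <= vz_max)
    return Rp[keep], vz[keep]
\end{verbatim}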
\begin{figure} \hspace*{-0.75cm} \includegraphics[width=14cm]{01F01S.pdf} \vspace{-0.5cm} \caption{Line-of-sight velocity $v_z$ as a function of projected radius $R_p$ in the extended region around a simulated cluster of mass $9.37 \times 10^{14}$ $h^{-1}$ M$_{\odot}$ selected from the Bolshoi simulation. The Finger-of-God is clearly seen in the main body of the cluster within $R_p\lesssim 1$ $h^{-1}$ Mpc. The effect of the mass concentration in and around the cluster is manifested as a concentration of galaxies around the $v_z=0$ line well outside the cluster itself. Interlopers are mostly galaxies at large projected distances and large peculiar velocities. In \S~\ref{sec:Tech} and in Figures~\ref{fig:F02C}, \ref{fig:F03C} \& \ref{fig:totweight} we show in detail how GalWeight can be applied to this cluster to distinguish between interlopers and cluster members (Figure~\ref{fig:example}).} \label{fig:F01C} \end{figure} The GalWeight technique works by assigning a weight to each galaxy $i$ according to its position ($R_{p,i}$,$v_{z,i}$) in the phase-space diagram. This weight is the product of two separate two-dimensional weights, which we refer to as the {\bf{dynamical}} and {\bf{phase-space}} weights. The dynamical weight (see \S~\ref{sec:Prob} parts A.1 and A.2, and Figure~\ref{fig:totweight}a, which is the product of Figure~\ref{fig:F02C}b and Figure~\ref{fig:F03C}b) is calculated from the surface number density $\Sigma(R_p)$, velocity dispersion $\sigma_{v_z}(R_p)$, and standard deviation $\sigma_{R_p}(v_z)$ profiles of the cluster. The phase-space weight (see \S~\ref{sec:Prob} part B and Figure~\ref{fig:totweight}b) is calculated from the two-dimensional adaptive kernel method, which estimates the probability density underlying the data and consequently identifies clumps and substructures in phase-space (\citealp{Pisani96}). The total weight is then calculated as the product of the dynamical and phase-space weights (see \S~\ref{sec:Prob} part C and Figure~\ref{fig:totweight}c). The advantage of using the total weight rather than the dynamical weight or the phase-space weight alone is discussed in \S~\ref{sec:weights}. \begin{figure*} \hspace*{1.5cm} \includegraphics[width=13.5cm]{02F02S.pdf} \vspace{-2.55cm} \caption{Weighting function along projected radius $R_{p}$ for the simulated cluster of mass $9.37 \times 10^{14}$ $h^{-1}$ M$_{\odot}$ selected from Bolshoi (see \S~\ref{sec:Prob} A.1). The left panel (a) shows the function $\mathcal{D}_{R_p}$ derived from the data (black points, Equation (\ref{eq:ProbR})), normalized by Equation (\ref{eq:ProbRN}), and fitted by $\mathcal{W}_{R_p}$ (red curve, Equation (\ref{eq:king})). The right panel (b) presents its corresponding probability density function in the phase-space diagram. As shown in (a \& b), the weighting is greatest at $R_p = 0$ and decreases outwards.} \label{fig:F02C} \end{figure*} \begin{figure*} \hspace*{1.5cm} \includegraphics[width=13.5cm]{03F02S.pdf} \vspace{-2.55cm} \caption{Weighting function along line-of-sight velocity $v_{z}$ for the simulated cluster selected from Bolshoi. The left panel (a) shows the function $\mathcal{D}_{v_z}$ calculated from the data (black points, Equation (\ref{eq:ProbV})), normalized by Equation (\ref{eq:ProbVN}), and fitted by $\mathcal{W}_{v_z}$ (blue curve, Equation (\ref{eq:Exp})). The right panel (b) presents its corresponding probability density function in phase-space.
As shown in (a \& b), the applied weight is greatest at $v_z =0$ and decreases as the absolute line-of-sight velocity increases.} \label{fig:F03C} \end{figure*} \subsection{Galaxy Weighting Functions} \label{sec:Prob} \noindent {\bf A. Dynamical Weighting, $\mathcal{W}_{dy}(R_p,v_z)$} In calculating the dynamical weighting function, we assume that the weighting we apply should be largest at the cluster center, i.e., at the origin in phase-space (Figure~\ref{fig:F01C}), and decay along both the $R_p$ and $v_z$ phase-space axes. This weighting function is, therefore, a product of two individual weighting functions: one decaying along the $R_p$-axis and the other along the $v_z$-axis, as described below. \noindent {\bf A.1. $R_p$-axis Weighting Function, $\mathcal{W}_{R_p} (R_p)$} In order to calculate the projected radius weighting function, $\mathcal{W}_{R_p} (R_p)$, we select two properties that are strongly correlated with projected radius and with the dynamical state of a cluster. The first property is the {\bf Surface Number Density Profile, $\Sigma(R_p)$}, defined as the number of galaxies per unit area as a function of distance from the cluster center. It has its maximum value at the cluster center and decreases with radial distance, and is also strongly correlated with the mass distribution of the cluster. The significance of introducing $\Sigma(R_p)$ for calculating $\mathcal{W}_{R_p}$ is that the velocities of member galaxies in the core of some clusters can be as high as $\approx 3000$ km s$^{-1}$, producing the Kaiser or FOG effect (see \citealp{Kaiser87}). This FOG distortion is the main reason that many membership techniques fail to correctly identify galaxies in the core with high line-of-sight velocities as members. Thus, $\Sigma(R_p)$ is essential to recover the members in the cluster core; in other words, ignoring $\Sigma(R_p)$ means missing some of the cluster members in the core. The second property is the {\bf Projected Velocity Dispersion Profile, $\sigma_{v_z}(R_p)$}. The significance of introducing $\sigma_{v_z}(R_p)$ for calculating $\mathcal{W}_{R_p}$ is that it characterizes the dynamical state of a cluster from its core to its infall region. Specifically, there is random motion of member galaxies in the infall region due to the presence of substructures and recent mergers (e.g., \citealp{vanHaarlem92,vanHaarlem93,Diaferio97}). This effect of random motion can be taken into account implicitly in $\sigma_{v_z}(R_p)$. This is the main reason why the SIM technique fails in the cluster outskirts in projected phase-space. Thus, $\sigma_{v_z}(R_p)$ is essential to recover the members in the cluster infall region; in other words, ignoring $\sigma_{v_z}(R_p)$ means missing some of the cluster members in the infall region. Thus, the weighting $\mathcal{W}_{R_p}(R_p)$ in the projected radius direction can be calculated by introducing the function $\mathcal{D}_{R_p}(R_p)$, given by \begin{equation} \label{eq:ProbR} \mathcal{D}_{R_p} (R_p) = \frac{\Sigma(R_p) \sigma_{v_z}(R_p)}{R_p^\nu}, \end{equation} \noindent with the normalization \begin{equation} \label{eq:ProbRN} \mathcal{N}_{R_p} = \int_{0}^{R_{p,max}} \mathcal{D}_{R_p} (R_p) dR_p, \end{equation} \noindent where $R_{p,max}$ is the maximum projected radius in phase-space. The denominator $R_p^\nu$, where the slope of the power law $\nu$ is a free parameter in the range $-1 \lesssim \nu \lesssim 1$, is introduced in Equation (\ref{eq:ProbR}) to provide flexibility and generalization for the technique.
The free parameter $\nu$ is selected to balance the effect of the FOG distortion in the core against that of the random motion in the outer region. It is defined as \begin{equation} \nu =\frac{\sigma_{FOG}(R\leq0.25)}{\sigma_{rand}(0.25<R\leq4)} -1, \end{equation} \noindent where $\sigma_{FOG}$ is the velocity dispersion of the core galaxies ($R \leq 0.25~h^{-1}$ Mpc) and $\sigma_{rand}$ is the velocity dispersion of the galaxies outside the core ($0.25 < R \leq 4~h^{-1}$ Mpc). The function $\mathcal{D}_{R_p} (R_p)$, calculated from the data, is contaminated by interlopers that cause scattering, especially at large projected distances (see black points in the left panel of Figure~\ref{fig:F02C}). Therefore, in order to apply a smooth weighting function, we fit $\mathcal{D}_{R_p} (R_p)$ with an analytical function. Any analytical function that is a good fit to $\mathcal{D}_{R_p} (R_p)$ could be utilized. In this paper we choose to use the function \begin{equation} \label{eq:king} \mathcal{W}_{R_p}(R_p)= \mathcal{A}_0\left(1+\frac{R_p^2}{a^2}\right)^{\gamma}+\mathcal{A}_{bg}, \end{equation} \noindent which has four parameters: $a$ is a scale radius ($0 < a \lesssim 1$), $\gamma$ is the slope of the power law ($-2 \lesssim \gamma < 0$), and $\mathcal{A}_{0}$ and $\mathcal{A}_{bg}$ are the central and background weights in the $R_p$-direction. These parameters are determined by chi-squared minimization using the Curve Fitting MATLAB Toolbox. Note that the analytical function we selected here has the same form as the generalized King model \citep{King72,Adami98}. Thus, the weight $\mathcal{W}_{R_p}(R_{p,i})$ of each galaxy can be calculated according to its projected radius $R_{p,i}$ from the cluster center. The weighting along $R_p$ is shown in Figure~\ref{fig:F02C}a, where the function $\mathcal{D}_{R_p} (R_p)$ is normalized using Equation (\ref{eq:ProbRN}). The data are smoothed and approximated using Equation (\ref{eq:king}) (shown as the red curve). The right panel (b) shows the projected radius weight function in phase-space.\\ \noindent{\bf A.2. $v_z$-axis Weighting Function, $\mathcal{W}_{v_z} (v_z)$} In phase-space, most members are concentrated near the line $v_z = 0$ and the number of members decreases with increasing absolute line-of-sight velocity. The weighting function along the $v_z$-axis can, therefore, be approximated by the histogram of the number of galaxies per bin, $N_{bin} (v_z)$, or equivalently by the standard deviation of projected radius, $\sigma_{R_p} (v_z)$, along the line-of-sight velocity axis, normalized by the total number of galaxies $N_{tot}$ in the cluster field. In other words, the weighting in the line-of-sight velocity direction can be calculated by introducing the function $\mathcal{D}_{v_z} (v_z)$ that is given by \begin{equation} \label{eq:ProbV} \mathcal{D}_{v_z} (v_z) = \sigma_{R_p} (v_z), \end{equation} \noindent with the normalization \begin{equation} \label{eq:ProbVN} \mathcal{N}_{v_z} = \int_{-v_{z,max}}^{v_{z,max}} \mathcal{D}_{v_z} (v_z) dv_z, \end{equation} \noindent where $v_{z,max}$ is the maximum line-of-sight velocity of phase-space. As above, to obtain a smooth weighting function in $v_{z}$, the histogram, or equivalently $\mathcal{D}_{v_z} (v_z)$, can be fitted by an analytical function.
In this paper we select an exponential model that is given by\\ \begin{figure*} \hspace*{0.5cm} \includegraphics[width=21cm]{04F04S.pdf} \vspace{-5.5cm} \caption{Weights to be applied as a function of position in line-of-sight velocity/projected radius phase-space for the simulated cluster selected from the Bolshoi simulation. Panel (a) shows the dynamical weight $\mathcal{W}_{dy}$ (the product of the weights shown in Figures \ref{fig:F02C}b and \ref{fig:F03C}b). Panel (b) presents the phase-space weight $\mathcal{W}_{ph}$ calculated from the 2DAKM. The total weight $\mathcal{W}_{tot} = \mathcal{W}_{dy} \times \mathcal{W}_{ph}$ is shown in panel (c), with three contour levels drawn explicitly. The weight $\mathcal{W}_{dy}$ is maximum at the origin (0,0) and decreases along both the $R_p$ and $v_z$ axes, while $\mathcal{W}_{ph}$ gives higher weight to galaxy clumps around the center and to substructures. Note that the scaling for each panel is independent, with magenta representing maximum values.} \label{fig:totweight} \end{figure*} \begin{equation} \label{eq:Exp} \mathcal{W}_{v_z} (v_z) = \mathcal{B}_0 \exp{(b \, |v_z|)}+\mathcal{B}_{bg}, \end{equation} \noindent where $\mathcal{B}_0$ is the central weight, $\mathcal{B}_{bg}$ is the background weight in $v_{z}$, and $b$ is a scale parameter ($-0.01 \lesssim b < 0$), so that the weight is symmetric about $v_z=0$ and decays with increasing $|v_z|$. Again, these parameters are determined by chi-squared minimization using the Curve Fitting MATLAB Toolbox. The weighting along $v_z$ is shown in Figure~\ref{fig:F03C}a, where the function $\mathcal{D}_{v_z} (v_z)$ (black points) is normalized using Equation (\ref{eq:ProbVN}). The data are smoothed and approximated by the exponential model of Equation (\ref{eq:Exp}) (blue curve). The right panel (b) shows the resulting exponential-model weight as a function of location in line-of-sight velocity/projected radius phase-space. As shown in (a \& b), the applied weight is greatest at $v_z =0$ and decreases as the absolute line-of-sight velocity increases.\\ We can now construct a two-dimensional {\bf{dynamical weight $\mathcal{W}_{dy}(R_p,v_z)$}} by multiplying $\mathcal{W}_{R_p} (R_p)$ and $\mathcal{W}_{v_z} (v_z)$ together: \begin{equation} \label{eq:ProbDy} \mathcal{W}_{dy}(R_p,v_z) = \mathcal{W}_{R_p} (R_p) \mathcal{W}_{v_z} (v_z). \end{equation} \noindent $\mathcal{W}_{dy}(R_p,v_z)$ is shown in the left panel of Figure~\ref{fig:totweight}, and is the product of the weights shown in Figure~\ref{fig:F02C}b and Figure~\ref{fig:F03C}b. The weight is maximum at the origin, and decreases along both $R_p$ and $v_z$. To sum up, the dynamical weight is calculated from three properties (the surface number density $\Sigma(R_p)$ and velocity dispersion $\sigma_{v_z}(R_p)$ along $R_p$, and the standard deviation of projected radius $\sigma_{R_p} (v_z)$ along $v_z$) which are strongly correlated with the dynamics of the cluster. This weight takes into account the effects of the FOG in the cluster core and of the random motion of galaxies in the infall region. \noindent {\bf B. Phase-Space Weighting, $\mathcal{W}_{ph}(R_p,v_z)$} This weighting is the coarse-grained phase-space density, which can be calculated simply by the 2-dimensional adaptive kernel method (2DAKM, e.g., \citealp{Silverman86,Pisani96}). A kernel density estimator approximates the probability density function underlying a random sample.
For $N$ galaxies with coordinates $(x , y) = (R_p , v_z)$ the density estimator is given by \begin{equation} \label{eq:kernal} f(x,y)=\frac{1}{N} \sum_{i=1}^{N} \frac{1}{h_{x,i} h_{y,i}} K\left(\frac{x-X_i}{h_{x,i}}\right) K\left(\frac{y-Y_i}{h_{y,i}}\right), \end{equation} \noindent where the kernel $K(t)$ is a Gaussian, \begin{equation} \label{eq:Gkernal} K(t) = \frac{1}{\sqrt{2\pi}} \exp {\left(-\frac{1}{2} t^2\right)}, \end{equation} \noindent and $h_{j,i} = \lambda_i h_j$ is the local bandwidth, $h_j = \sigma_{j} N^{-1/6}$ is the fixed bandwidth appropriate for a 2-dimensional space, and $\sigma_j$ is the standard deviation for $j\in\{x,y\}$. The term $\lambda_i = \left[\gamma/f_0(x_i,y_i)\right]^{0.5}$, with $\log{\gamma}=\sum_i\log{f_0(x_i,y_i)}/N$, where $f_0(x_i,y_i)$ is given by Equation (\ref{eq:kernal}) for $\lambda_i=1$ (see also \citealp{Shimazaki10}). \begin{figure*} \hspace*{0.5cm} \includegraphics[width=19cm]{05F05S.pdf} \vspace{-4cm} \caption{Identification of the simulated cluster membership from weighted galaxies. Panel (a) shows the weight of each galaxy in line-of-sight velocity/projected radius phase-space (magenta indicates higher weight). Panel (b) shows a histogram or PDF of the weight applied to each galaxy, $\mathcal{W}_{tot} (R_{p,i},v_{z,i})$. 1DAKM fitting returns a bimodal PDF in this example of the simulated cluster. We choose to use the number density method (NDM, \citealp{Abdullah13}) to identify the contour weight value which separates cluster members from interlopers. This is shown by the solid red vertical line in panel (c) and the solid red line in panel (a). $1\sigma$ confidence intervals are shown by the two red dashed lines. The two vertical dashed black lines represent the virial and turnaround radii; the cluster members are those enclosed by the best contour line and within the turnaround radius. We impose one additional cut, shown by the black solid lines in panel (a), truncating the red contour line at very small radii at the maximum $v_z$ of the enclosed members.} \label{fig:example} \end{figure*} Consequently, applying the 2DAKM to the distribution of galaxies in phase-space yields high weights at positions where the density of galaxies is high. The main purpose of introducing the phase-space weight is therefore to take into account the effect of any clump or substructure in the field that cannot be captured by the dynamical weight. The phase-space weight is also introduced to reduce the excessive increase of the dynamical weight near the center (see \S \ref{sec:weights}). The phase-space weight $\mathcal{W}_{ph}(R_p,v_z)$ is shown in Figure~\ref{fig:totweight}b; it gives more weight to galaxies in clumps and substructures and, given the distribution of galaxies in the cluster field, is maximum around the cluster center. \noindent {\bf C. Total Weighting, $\mathcal{W}_{tot}(R_p,v_z)$} The total weighting function is calculated as \begin{equation} \label{eq:ProbTot} \mathcal{W}_{tot}(R_p,v_z) = \mathcal{W}_{dy} (R_p,v_z) \mathcal{W}_{ph} (R_p,v_z), \end{equation} \noindent and is shown in Figure~\ref{fig:totweight}c for the simulated cluster. The weighting in Figure~\ref{fig:totweight}c is then applied to individual galaxies. Figure~\ref{fig:example}a shows Figure~\ref{fig:F01C} once again, but now after applying the ``total weighting''. We still need to separate cluster members from interlopers.
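For concreteness, the following minimal Python sketch (our own illustration; the NumPy-based implementation and all function names are assumptions, not the authors' released code) evaluates the adaptive kernel estimate of Equations (\ref{eq:kernal})--(\ref{eq:Gkernal}) and combines it with a precomputed dynamical weight according to Equation (\ref{eq:ProbTot}):
\begin{verbatim}
import numpy as np

def adaptive_kde_2d(Rp, vz):
    """2DAKM of Eqs. (kernal)-(Gkernal): returns a callable
    estimate f(x, y) of the phase-space density."""
    N = len(Rp)
    X = np.column_stack([Rp, vz])
    # Fixed bandwidths h_j = sigma_j * N^(-1/6) for 2D data.
    h = X.std(axis=0, ddof=1) * N ** (-1.0 / 6.0)

    def kde(x, y, lam):
        # Product of 1D Gaussian kernels, local bandwidths lam_i * h_j.
        u = (x - X[:, 0][:, None]) / (lam[:, None] * h[0])
        v = (y - X[:, 1][:, None]) / (lam[:, None] * h[1])
        K = np.exp(-0.5 * (u ** 2 + v ** 2)) / (2.0 * np.pi)
        w = 1.0 / (lam ** 2 * h[0] * h[1])
        return (w[:, None] * K).sum(axis=0) / N

    # Pilot estimate f0 (lambda_i = 1) evaluated at the data points.
    f0 = kde(X[:, 0], X[:, 1], np.ones(N))
    # lambda_i = [gamma / f0]^(1/2), with log(gamma) = <log f0>.
    lam = np.sqrt(np.exp(np.log(f0).mean()) / f0)
    return lambda x, y: kde(np.atleast_1d(x), np.atleast_1d(y), lam)

def total_weight(Rp, vz, W_dyn):
    """Eq. (ProbTot): total weight = dynamical x phase-space weight.
    W_dyn = W_Rp(Rp) * W_vz(vz) is assumed precomputed from the fits
    of Sec. A (King and exponential models)."""
    W_ph = adaptive_kde_2d(Rp, vz)(Rp, vz)
    return W_dyn * W_ph
\end{verbatim}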
We explain how to separate members from interlopers in \S~\ref{sec:mem}. \subsection{Membership Determination} \label{sec:mem} Figure~\ref{fig:example}a shows the weight of each galaxy in the simulated cluster phase-space. The question now is how to utilize the weighted galaxies in phase-space to best identify cluster members. One would like to identify a single, optimal weight value which separates cluster members from field galaxies, i.e., to identify the best contour weight to select in panel (a). One way is to consider the probability distribution function (PDF), or histogram, of the total weight for all galaxies, which is shown in Figure~\ref{fig:example}b. Fitting the PDF using a 1DAKM reveals two obvious peaks (a bimodal PDF). One might imagine simply drawing a vertical line to separate the members located on the right with higher weights from the interlopers located on the left. However, not all clusters show this bimodality in the PDF of $\mathcal{W}_{tot}$. Another way could be to exclude all galaxies that have weights less than, for example, $3\sigma$ from the average value of the main peak (i.e., $\mathcal{W}_{cut} = \mathcal{W}_{peak} - 3\sigma$). However, attempting to do the separation in either of these two ways is subjective. Therefore, we prefer to select the optimal contour weight by utilizing the Number Density Method (hereafter, NDM), a technique which was introduced in \citet{Abdullah13}. The goal in applying the method here is to find the optimal contour weight (or line) that returns the maximum number density of galaxies. In other words, we select a certain contour line (weight) and calculate its enclosed area and number of galaxies, $N_{\rm in}$ (which contribute positively), then account for the number of galaxies, $N_{\rm out}$ (which contribute negatively), located outside this contour line. The number density for this contour line can then be calculated as ($N_{\rm in} - N_{\rm out})/{\rm Area}$ (see figure 9 in \citealp{Abdullah13}). \begin{figure*} \hspace*{1cm} \includegraphics[width=22cm]{06F06S.pdf} \vspace{-5.25cm} \caption{Application of dynamical, phase-space, and total weights (green, blue, and black lines, respectively) to three simulated clusters taken from the Bolshoi simulation (\S~\ref{sec:sims}). The red points show true members within $3r_v$. Applying the dynamical weight alone (green) results in the inclusion of many galaxies within $R \sim$ 1 Mpc $h^{-1}$ with very high line-of-sight velocities. Applying the phase-space weight alone (blue) fails to recover some members in the core while simultaneously incorrectly including some interlopers at large distances due to the presence of nearby clusterings and clumps. The total weight (black), the product of the dynamical and phase-space weights, recovers true members effectively in both the core and infall regions (see Table \ref{tab:frac}).} \label{fig:WeightsComp} \end{figure*} In Figure~\ref{fig:example}c the number density of galaxies calculated by NDM is plotted for contour weights in the range $-12 \leq \log{\mathcal{W}_{tot}} \leq -6$. The optimal contour line corresponds to the maximum number density of galaxies; this weight value, which should be utilized as the separator of cluster members from interlopers, is shown by the solid red vertical line, with $1\sigma$ confidence intervals shown by the two red vertical dashed lines. The optimal contour line and its $1\sigma$ confidence interval are shown as solid and dashed red lines in panel (a), respectively.
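As an illustration of the NDM step, the simplified sketch below scans candidate weight thresholds, counts the galaxies inside and outside each corresponding contour, and approximates the enclosed area on a grid of the total-weight surface. The grid-based area estimate and the function names are our own assumptions, not the exact implementation of \citet{Abdullah13}:
\begin{verbatim}
import numpy as np

def ndm_optimal_weight(W_gal, W_grid, cell_area, n_levels=200):
    """Return the contour weight maximizing (N_in - N_out) / Area.

    W_gal     : total weight of each galaxy (1D array, assumed > 0)
    W_grid    : W_tot evaluated on a regular (Rp, vz) grid (2D array)
    cell_area : phase-space area of one grid cell
    """
    levels = np.logspace(np.log10(W_gal.min()),
                         np.log10(W_gal.max()), n_levels)
    best_w, best_density = levels[0], -np.inf
    for w in levels:
        n_in = np.sum(W_gal >= w)    # galaxies enclosed by the contour
        n_out = len(W_gal) - n_in    # galaxies outside, counted negatively
        area = np.sum(W_grid >= w) * cell_area
        if area == 0:
            continue
        density = (n_in - n_out) / area
        if density > best_density:
            best_w, best_density = w, density
    return best_w
\end{verbatim}
A confidence interval on the optimal weight could then be estimated, e.g., by bootstrap resampling of the galaxies.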
As shown in Figure~\ref{fig:example}a, the optimal contour line extends to large distances ($R \sim 10~ h^{-1}$ Mpc) and not all galaxies within this boundary are members. Therefore, the last step of GalWeight is to determine a cutoff radius within which the galaxies are assumed to be bound. Thus, the cluster members are defined as the galaxies enclosed by the optimal contour line and within the cutoff radius. This cutoff radius can be adopted as the virial radius $r_v$ (the boundary of the virialized region) or the turnaround radius $r_t$ (the boundary of the cluster infall region). Note that the main goal of this paper is to introduce GalWeight and to test its efficiency in recovering the true members in the virial and infall regions using simulations. Thus, knowing the virial radius of each simulated cluster, we test the efficiency of GalWeight at $r_v$, $2r_v$, and $3r_v$ projected on the phase-space diagram, as described in \S \ref{sec:simulations} and Table \ref{tab:frac} (see, e.g., \citealp{Serra13}). However, for our sample of twelve Abell clusters (observations), $r_v$ and $r_t$ are determined from the mass profile estimated by the virial mass estimator and the NFW mass profile \citep{NFW96,NFW97}, as discussed in \S \ref{sec:sdss}. We impose one additional cut, shown by the solid black lines highlighted by black circles in panel (a), truncating the red contour line at very small radii at the maximum $v_z$ of the enclosed members. This is because in some cases the optimal contour line extends to very high velocities in the innermost region ($R \lesssim 0.25 h^{-1}$ Mpc) without including any additional members, so it is not necessary to retain this tail of the contour line. The main steps in applying the GalWeight technique to determine cluster membership are summarized below:\\ {\bf{1.}} Make an appropriate cut in $R_p$ and $v_z$, and plot galaxies in line-of-sight velocity/projected radius phase-space. In this paper, we use $R_{p,max} = 10$ $h^{-1}$ Mpc and $|v_{z,max}| = 3500$ km $\mbox{s}^{-1}$.\\ {\bf{2.}} Calculate the function $\frac{\Sigma(R_p) \sigma_{v_z}(R_p)}{R_p^\nu}$ (Equation~\ref{eq:ProbR}) and fit it with an analytical model (e.g., Equation~\ref{eq:king}) to obtain $\mathcal{W}_{R_p} (R_p)$.\\ {\bf{3.}} Calculate the function $\sigma_{R_p} (v_z)$ (Equation~\ref{eq:ProbV}) and fit it with an analytical model (e.g., Equation~\ref{eq:Exp}) to obtain $\mathcal{W}_{v_z} (v_z)$.\\ {\bf{4.}} Determine the dynamical weighting, $\mathcal{W}_{dy} (R_p,v_z) = \mathcal{W}_{R_p} (R_p) \times \mathcal{W}_{v_z} (v_z)$.\\ {\bf{5.}} Apply the 2DAKM in phase-space to determine the phase-space weighting, $\mathcal{W}_{ph} (R_p,v_z)$.\\ {\bf{6.}} Calculate the total weight $\mathcal{W}_{tot} (R_p,v_z) = \mathcal{W}_{dy} (R_p,v_z) \times \mathcal{W}_{ph} (R_p,v_z)$.\\ {\bf{7.}} Plot the PDF of all galaxy weights and apply a cut, retaining all galaxies with weights larger than this cut as members (NDM is used here to determine the optimal value of the cut).\\ {\bf{8.}} Determine the cutoff radius ($r_v$ or $r_t$) using a dynamical mass estimator and identify cluster members as those enclosed by the optimal contour line and within the cutoff radius. \subsection{Why do we use the total weight rather than the dynamical or phase-space weights alone?} \label{sec:weights} One may ask why we depend on the total weight to assign cluster membership rather than using the dynamical weight or the phase-space weight alone. We present Figure~\ref{fig:WeightsComp} to help answer this question.
It shows the phase-space diagrams of three Bolshoi simulated clusters (see \S \ref{sec:sims}). Using simulated clusters brings the advantage that true members are known definitively. Figure~\ref{fig:WeightsComp} shows the optimal contour lines determined by applying, separately, the dynamical weight (green line), the phase-space weight (blue line) and the total weight (black line). The red points show true members within $3r_v$. \begin{table*} \centering \caption{Efficiency of the GalWeight technique\\ determined by calculating $f_c$ and $f_i$ at $r_v$, $2r_v$ and $3r_v$ for a sample of $\sim$ 3000 clusters from the MDPL2 \& Bolshoi simulations.} \label{tab:frac} \begin{tabular}{cccccccccc}\hline Mass Range&mean&number of&\multicolumn{3}{c}{$f_c$ }&&\multicolumn{3}{c}{$f_i$}\\ \multicolumn{2}{c}{($10^{14}~ h^{-1}~ \mbox{M}_\odot$)}&halos &$r_v$ & $2r_v$ & $3r_v$& & $r_v$ & $2r_v$ & $3r_v$\\ (1)&(2)&(3)&(4)&(5)&(6)&&(7)&(8)&(9)\\ \hline \multicolumn{10}{c} {MDPL2}\\ \hline 0.73-37.39&4.28&1500 (All)&$0.993\pm0.014$ &$0.986\pm0.015$ &$0.981\pm0.013$&&$0.112\pm0.035$&$0.096\pm0.048$&$0.113\pm0.051$\\ \hline 0.73-2.00&1.37&253& $0.998\pm0.050$ &$0.992\pm0.016$ &$0.981\pm0.018$&&$0.096\pm0.039$&$0.098\pm0.050$&$0.118\pm0.053$\\ 2.00-4.00&3.16&617& $0.993\pm0.015$ &$0.983\pm0.016$ &$0.979\pm0.012$&&$0.113\pm0.034$&$0.099\pm0.050$&$0.118\pm0.053$\\ 4.00-8.00&5.37&484& $0.989\pm0.016$ &$0.984\pm0.013$ &$0.982\pm0.011$&&$0.118\pm0.032$&$0.099\pm0.045$&$0.117\pm0.049$\\ 8.00-37.39&11.20&146&$0.988\pm0.013$ &$0.988\pm0.010$ &$0.988\pm0.013$&&$0.121\pm0.028$&$0.105\pm0.043$&$0.122\pm0.045$\\ \hline \hline \multicolumn{10}{c} {Bolshoi}\\ \hline 0.70-10.92&1.53&500 (1500) (All)&$0.995\pm0.011$ &$0.981\pm0.021$ &$0.971\pm0.020$&&$0.126\pm0.045$&$0.217\pm0.109$&$0.226\pm0.102$\\ \hline 0.70-2.00&1.31&415 (1194)& $0.996\pm0.011$ &$0.983\pm0.021$ &$0.972\pm0.019$&&$0.124\pm0.047$&$0.218\pm0.109$&$0.227\pm0.102$\\ 2.00-4.00&2.70&72 (252)& $0.992\pm0.012$ &$0.975\pm0.023$ &$0.967\pm0.025$&&$0.133\pm0.040$&$0.128\pm0.103$&$0.227\pm0.105$\\ 4.00-8.00&4.43&11 (48)& $0.990\pm0.012$ &$0.970\pm0.022$ &$0.961\pm0.022$&&$0.131\pm0.039$&$0.207\pm0.113$&$0.217\pm0.103$\\ 8.00-10.92&9.68&2 (6)&$0.997\pm0.004$ &$0.982\pm0.024$ &$0.973\pm0.025$&&$0.130\pm0.018$&$0.270\pm0.116$&$0.250\pm0.094$\\ \hline \end{tabular} \begin{tablenotes} \item $f_c$ is the completeness, i.e., the fraction of fiducial members identified by GalWeight as members relative to the actual number of members. \item $f_i$ is the contamination, i.e., the fraction of interlopers incorrectly assigned to be members. \item Columns: (1) Cluster mass range; (2) cluster mean mass per bin; (3) the actual number of clusters per bin in the simulations, where the number in brackets gives the number of clusters in different orientations for Bolshoi; (4-6) and (7-9) are $f_c$ and $f_i$ for each mass bin at $r_v$, $2r_v$, and $3r_v$, respectively. \end{tablenotes} \end{table*} In Figure~\ref{fig:WeightsComp}, the dynamical weight $\mathcal{W}_{dy}(R_p,v_z)$ (green; see also Figure~\ref{fig:totweight}a) is seen to be very smooth and idealised. In other words, $\mathcal{W}_{dy}(R_p,v_z)$ describes an isolated galaxy cluster in phase-space well. It does not take into account the effects of nearby clusters, clumps and/or substructures. Also, it shows an excessive increase near the cluster center ($\sim 1~ h^{-1}$~Mpc) and incorrectly includes interlopers near the center which have very high velocities.
This effect is due to introducing $\Sigma(R_p)$ in $\mathcal{W}_{dy}(R_p,v_z)$, since the surface number density is very high near the cluster center. However, ignoring $\mathcal{W}_{dy}(R_p,v_z)$ leads to missing some cluster members, especially those close to the center in phase-space. Thus, $\mathcal{W}_{dy}(R_p,v_z)$ cannot be used on its own to assign cluster membership, but it is very important for correctly identifying members with high line-of-sight velocities. Figure~\ref{fig:WeightsComp} demonstrates that, on its own, the phase-space weighting $\mathcal{W}_{ph}(R_p,v_z)$ also has some difficulty in recovering true cluster members (blue; see also Figure~\ref{fig:totweight}b). This is because it does not take into account the FOG effect in the cluster core: members with high velocities there are not strongly concentrated in phase-space, so they are assigned low weights and are not counted as members. Also, the presence of nearby clusterings and substructures has the effect of widening the ``optimal'' contour line. Consequently, it is very difficult to separate true members from galaxies (interlopers) located in nearby clumps. This results in the inclusion of some interlopers in the infall region. In summary, using $\mathcal{W}_{ph}(R_p,v_z)$ alone simultaneously excludes some true members near the cluster center and includes some interlopers in the infall region. We have shown that both the dynamical weight and the phase-space weight have issues in identifying true members when applied alone. However, as the black solid line in Figure~\ref{fig:WeightsComp} shows, the total weight (the product of the dynamical and phase-space weights) is very effective. It can simultaneously identify cluster members moving with high velocities in the core ($R_p \lesssim 1$ Mpc $h^{-1}$) as well as members moving with random motions in the infall regions ($R_p \sim 3 r_v$). \subsection{Testing the Efficiency of GalWeight on the MDPL2 \& Bolshoi Simulations} \label{sec:simulations} To further demonstrate and quantify the ability of the GalWeight technique to assign membership, we again utilize the MDPL2 \& Bolshoi\footnote{https://www.cosmosim.org/cms/simulations/mdpl2/} simulations from the suite of MultiDark simulations. The efficiency of GalWeight can be quantified by calculating two fractions, defined as follows. The first is the completeness $f_c$, the fraction of fiducial members identified by GalWeight as members in projected phase-space relative to the actual number of 3D members projected onto the phase-space. The second is the contamination $f_i$, the fraction of interlopers incorrectly assigned to be members in projected phase-space (see e.g., \citealp{Wojtak07,Serra13}). Ideally, of course, GalWeight would return fractions of $f_c = 1$ and $f_i = 0$. MDPL2 provides us with 1500 simulated clusters with masses ranging from $0.73\times10^{14} h^{-1} M_{\odot}$ to $37.4\times10^{14} h^{-1} M_{\odot}$, to which we can apply GalWeight. We calculate the fractions $f_c$ and $f_i$ at three radii -- $r_v$, $2r_v$ and $3r_v$. As shown in Table~\ref{tab:frac}, the mean values of $f_c$ and $f_i$ within $r_v$ are $0.993$ and $0.112$, respectively, for the 1500 clusters overall. Also, the fraction $f_c$ decreases from $0.993$ at $r_v$ to $0.981$ at $3r_v$. For Bolshoi, we have about 500 clusters with masses greater than $0.70\times10^{14} h^{-1} M_{\odot}$.
In order to increase the Bolshoi cluster sample to 1500 clusters, we randomly select different lines-of-sight (orientations) for each distinct halo, in addition to the original line-of-sight along the $z$-direction (see column 3 in Table \ref{tab:frac} for Bolshoi). Then, we apply GalWeight to each cluster. The mass range of the sample is $0.70\times10^{14} h^{-1} M_{\odot}$ to $10.92\times10^{14} h^{-1} M_{\odot}$, as shown in Table~\ref{tab:frac}. The mean values of $f_c$ and $f_i$ within $r_v$ are $0.995$ and $0.126$, respectively, for the 1500 clusters overall. Also, the fraction $f_c$ decreases from $0.995$ at $r_v$ to $0.971$ at $3r_v$. \begin{figure*} \hspace*{-0.0cm} \includegraphics[width=32.45cm] {07F07S.pdf} \vspace{-3.05cm} \caption{Application of the GalWeight technique (solid black lines) to twelve simulated clusters selected from the Bolshoi simulation (\S~\ref{sec:sims}). Red points show fiducial members within $3r_v$. The virial mass ($\log \mbox{M}_v$ $h^{-1}$ $\mbox{M}_\odot$) and the number of members within $r_v$ are shown for each cluster. Clearly, GalWeight effectively identifies members with high accuracy in both the virialized and infall regions for structures ranging in mass from rich clusters to poor groups. } \label{fig:mocksample} \end{figure*} \begin{figure*}\hspace*{-0.2cm} \includegraphics[width=26cm]{08F08S.pdf} \vspace{-3.75cm} \caption{Example of four well-known membership techniques applied to two simulated clusters with masses of $10.92\times10^{14}$ $h^{-1}$ $\mbox{M}_\odot$ (top panels) \& $4.24\times10^{14}$ $h^{-1}$ $\mbox{M}_\odot$ (bottom panels) from the Bolshoi simulation (\S~\ref{sec:sims}). In each panel, the red points represent fiducial cluster members within $3r_v$, and the solid black lines show the demarcation contour enclosing cluster members, identified by applying our new technique (GalWeight). The open blue circles in panels (a, b, e \& f) show members identified by the shifting gapper technique using $N_{bin}=10$ and $N_{bin}=15$, respectively. Panels (c \& g) show the caustic technique employing rescale parameters of q=25 (cyan lines) and q=35 (pink lines), and also the den Hartog technique (dotted black lines). The Yahil SIM (dark green lines) and Reg\H{o}s SIM (light green lines) techniques are presented in panels (d \& h). GalWeight recovers fiducial members with high accuracy, improving upon the shifting gapper and den Hartog techniques simultaneously at small and large projected radii, the caustic technique at small projected radius, and the SIM technique at large projected radius ($\sim 3r_v$).} \label{fig:Methods1} \end{figure*} The main reason that some interlopers are assigned as members (at most $f_i = 0.113$ for MDPL2 and $f_i = 0.226$ for Bolshoi) is the triple-value problem \citep{Tonry81}. That is, there are some foreground and background interlopers that appear to be part of the cluster body because of the distortion of phase-space. The effect of the triple-value problem is apparent in Figure~\ref{fig:mocksample} (discussed below), where most of the interlopers assigned as members are embedded in the cluster body. We defer a discussion of how GalWeight may be adapted to overcome this problem to a future work. In order to demonstrate the ability of GalWeight to assign membership in the case of both poor and massive clusters, we divide the 1500 clusters (for each simulation) into four mass bins, as shown in Table \ref{tab:frac}.
The fraction $f_c$ varies from 0.998 (0.996) for the poor clusters of mean mass $1.37\times10^{14}~ (1.31\times10^{14})~ h^{-1} M_{\odot}$ to 0.988 (0.997) for the massive clusters of mean mass $11.20\times10^{14}~ (9.68\times10^{14})~ h^{-1} M_{\odot}$ at $r_v$ for MDPL2 (Bolshoi) (see Table \ref{tab:frac}). We conclude that GalWeight can be applied effectively, and with high efficiency, to a wide range of cluster masses. Figure~\ref{fig:mocksample} shows examples of GalWeight applied to twelve simulated Bolshoi clusters (solid black lines), where red and gray points show fiducial members and interlopers, respectively, within $3r_v$. The twelve clusters shown in Figure~\ref{fig:mocksample} are ranked by virial mass, with the most massive cluster ($10.92\times10^{14}$ $h^{-1}$ $M_{\odot}$) shown in the upper left corner and the least massive one ($1.06\times10^{14}$ $h^{-1}$ $M_{\odot}$) shown in the lower right corner. The figure demonstrates that GalWeight can effectively recover cluster membership for rich massive galaxy clusters as well as for small or poor groups of galaxies with the same efficiency. In summary, applying GalWeight to the suite of MDPL2 and Bolshoi simulations demonstrates that GalWeight can successfully recover cluster membership with high efficiency. It also further demonstrates that GalWeight can simultaneously identify members in both the virial and infall regions, taking into account the FOG effect and the random motion of galaxies in the infall region. Furthermore, it can be applied to both rich galaxy clusters and poor groups of galaxies with the same efficiency (see Table \ref{tab:frac}). \section{A comparison of membership techniques} \label{sec:comp} In this section, we perform a general, qualitative comparison between GalWeight and four other well-known techniques ({\bf{shifting gapper, caustic, den Hartog, and SIM}}). We defer quantitative tests of the efficiency of different membership techniques in recovering the true 3D members of clusters, and of the influence of membership on the determination of their dynamical masses, to a future work (see e.g., \citealp{Wojtak07}). We begin by showing how each technique fares when it is applied in turn to two simulated clusters with masses of $10.92\times10^{14}$ $h^{-1}$ $\mbox{M}_\odot$ \& $4.24\times10^{14}$ $h^{-1}$ $\mbox{M}_\odot$ from the Bolshoi simulation, shown in Figure~\ref{fig:Methods1}. Making the assumption that the cluster is spherical, fiducial members are assumed to lie within three virial radii, $3r_v$, and are shown as 2D members in phase-space (red points) in each panel of Figure~\ref{fig:Methods1}. We select this radius ($3r_v$) in order to examine the ability of each technique to recover true members not only within the virial radius but also in the infall region, i.e., the region of a cluster that extends from the virial radius $r_v$ to the turnaround radius $r_t$, where $r_t \sim ~ 2-4 ~ r_v$. Shown in each panel by the solid black line is the optimal choice of demarcation contour separating members and field galaxies identified by our GalWeight technique. For reasons of space we do not describe each of the four techniques (shifting gapper, den Hartog, caustic and SIM) in detail here. However, we summarize them below and refer the reader to the references for more information. The {\bf{shifting gapper technique}} \citep{Fadda96} works by first placing galaxies into bins according to their projected radial distance from the cluster center.
The user has the freedom to choose the number of galaxies per bin which they believe is best-suited to each application of the technique. Commonly chosen values are $N_{bin}=10$ or 15. For each bin, the galaxies are sorted according to their velocities; any galaxy separated by more than a fixed value (e.g., $1\sigma$ of the sample, or 500--1000 km s$^{-1}$) from the previous one is considered an interloper and removed. \citet{Fadda96} used a gap of 1000 km s$^{-1}$ and a bin of 0.4 $h^{-1}$ Mpc or larger, in order to have at least 15 galaxies per bin. The open blue circles in panels {\bf (a, e) \& (b, f)} of Figure~\ref{fig:Methods1} represent the members identified by this technique, where the number of galaxies utilized per bin was $N_{bin}=10$ and $N_{bin}=15$, respectively. The gray points symbolize interlopers. Clearly, membership identification depends heavily upon the choice of $N_{bin}$, as there are many differences between the galaxies identified as members in panels {\bf (a, e) \& (b, f)}. Additionally, in both cases, some true members of the two clusters are missed, especially at small projected radius. Furthermore, the shifting gapper technique depends on the choice of the velocity gap used to remove interlopers in each bin. The choice of a high-velocity gap results in the identification of a large fraction of interlopers as cluster members, while the choice of a low-velocity gap results in missing true cluster members \citep{Aguerri07}. The application of the {\bf{caustic technique}} (e.g., \citealp{Alpaslan12,Serra13}) is shown in panels {\bf (c \& g)} of Figure~\ref{fig:Methods1} for two rescale parameters, q = 25 (cyan lines) and q = 35 (pink lines). Although this technique is quite successful when applied to the cluster outskirts, it misses some of the true members located within the core, which are the galaxies that most affect the dynamics of the cluster. They are missed because the caustic technique does not take into account the effects of the FOG distortion. Also, the caustic technique cannot be applied to small galaxy groups. Furthermore, applying the caustic technique is rather subjective, relying upon how the caustics are inferred from the data \citep{Reisenegger00, Pearson14}. Nonetheless, it is still a powerful technique for estimating cluster masses. The application of the {\bf{den Hartog technique}} \citep{denHartog96} is also shown by the dotted black lines in Figure~\ref{fig:Methods1} panels {\bf (c \& g)}. This technique estimates the escape velocity as a function of distance from the cluster center by calculating the virial mass profile (see \S \ref{sec:sdss}), $v_{esc} (R) = \sqrt{\frac{2G M_{vir}(R)}{R}}$, where $G$ is the gravitational constant. The figure demonstrates that this technique is strongly biased toward including many distant interlopers. In addition, its application relies on assumptions of hydrostatic equilibrium and spherical symmetry. Panels {\bf (d \& h)} in Figure~\ref{fig:Methods1} show the application of two {\bf{spherical infall models} (SIMs)}: the Yahil \citep{Yahil85} and Reg\H{o}s \citep{Regos89} models, shown by dark green and light green lines, respectively. Note that one needs to determine the mass density profile and the background mass density in order to apply the SIM technique and determine the infall velocity profile (e.g., \citealp{vanHaarlem93}).
We determine the mass density profile for the simulated cluster from the NFW model (\citealp{NFW96,NFW97}; Equations (\ref{eq:NFW1}) \& (\ref{eq:NFW11})), knowing its concentration $c$, virial radius $r_v$, and scale radius $r_s = r_v/c$. Also, the background mass density is given by $\rho_{bg} = \Omega_m ~ \rho_c$. As shown in Figure~\ref{fig:Methods1} (d \& h), SIMs have difficulty identifying true members in the infall region in projected phase-space. This is because the random motion of galaxies in the infall regions \citep{vanHaarlem93,Diaferio99} causes some members in the cluster outskirts to be missed. A recent investigation by our own team \citep{Abdullah13} has shown that SIMs can successfully be applied to sliced phase-space by taking into account certain kinds of distortions, such as the transverse motion of galaxies with respect to the observer and/or the rotational motion of galaxies inside the cluster. However, this is beyond the scope of the current paper. \section{Observations - Application to a Sample of 12 Abell Clusters}\label{sec:sdss} In this section we apply GalWeight to a sample of twelve Abell galaxy clusters, with galaxy coordinates and redshifts taken from SDSS-DR12\footnote{http://www.sdss.org/dr12} (hereafter SDSS-DR12; \citealp{Alam15}). In order to demonstrate the technique for both massive and poor clusters, we selected clusters with Abell richness parameters ranging from 0 to 3 \citep{Abell89}. We deliberately selected some clusters which were almost isolated and others which had clumps or groups of galaxies nearby, in order to demonstrate how the technique performs under these different scenarios. We apply the GalWeight technique only to this pilot sample of twelve clusters in this paper, deferring application to the entire SDSS-DR13 sample of $\sim 800$ clusters to a later paper. The data sample is collected as follows. The NASA/IPAC Extragalactic Database (NED)\footnote{https://ned.ipac.caltech.edu} provides us with a first approximation of the angular coordinates and redshift of the center of each cluster in our sample ($\alpha_c$, $\delta_c$, $z_c$). We then download the coordinates and redshifts (right ascension $\alpha$, declination $\delta$, and spectroscopic redshift $z$) for objects classified as galaxies near the center of each cluster from SDSS-DR12 \citep{Alam15}. The next step is to apply the binary tree algorithm (e.g., \citealp{Serra11}) to accurately determine the cluster center ($\alpha_{c}$, $\delta_{c}, z_c$) and create a line-of-sight velocity ($v_z$) versus projected radius ($R_p$) phase-space diagram. Here, $R_p$ is the projected radius from the cluster center and $v_z$ is the line-of-sight velocity of a galaxy in the cluster frame, calculated as $v_z = c( z - z_c)/(1+z_c)$, where $z$ is the observed spectroscopic redshift of the galaxy, $z_c$ is the cluster redshift, and $c$ is the speed of light. The term $(1+z_c)$ is a correction due to the global Hubble expansion \citep{Danese80}. We then apply GalWeight to the twelve Abell clusters as described in detail in \S~\ref{sec:Tech} in order to obtain the optimal contour line. The final step is to determine the virial radius, $r_v$, at which $\rho = 200 \rho_c$, and the turnaround radius, $r_t$, at which $\rho = 5.55 \rho_c$ (e.g., \citealp{Nagamine2003,Busha05,Dunner06}), from all galaxies located inside the optimal contour line of a cluster. In order to calculate these two radii we must first determine the cluster mass profile.
The cluster mass can be estimated from the virial mass estimator and the NFW mass profile \citep{NFW96,NFW97} as follows. The virial mass estimator is given by \begin{equation} \label{eq:vir16} M(<r)=\frac{3\pi N \sum_{i}v_{z,i}^2 (<r)}{2G\sum_{i\neq j}\frac{1}{R_{ij}}}, \end{equation} \noindent where $v_{z,i}$ is the galaxy line-of-sight velocity and $R_{ij}$ is the projected distance between two galaxies (e.g., \citealp{Limber60,Binney87,Rines03}). If a system extends beyond the virial radius, Equation~(\ref{eq:vir16}) will overestimate the mass due to external pressure from matter outside the virialized region \citep{The86,Carlberg97,Girardi98}. The corrected virial mass can be determined using the following expression: \begin{equation} \label{eq:vir17} M_{v}(<r)=M(<r)[1-S(r)], \end{equation} \noindent where $S(r)$ is a term introduced to correct for surface pressure. For an NFW density profile and for isotropic orbits (i.e., the radial, $\sigma_r$, and angular, $\sigma_\theta$, velocity dispersion components of a galaxy in the cluster frame are the same, or equivalently the anisotropy parameter $\beta = 1- \frac{\sigma_\theta^2}{\sigma_r^2} = 0$), $S(r)$ can be calculated by \begin{equation}\label{eq:vir_25} S(r)=\left(\frac{x}{1+x}\right)^2\left[\ln(1+x)-\frac{x}{1+x}\right]^{-1}\left[\frac{\sigma_v(r)}{\sigma(<r)}\right]^2, \end{equation} \begin{figure*} \hspace*{0.0cm} \includegraphics[width=32cm]{10SDSSsample.pdf} \vspace{-4cm} \caption{Application of the GalWeight technique to twelve Abell clusters from SDSS-DR12 (see also Table~\ref{tab:parameters}). The solid black lines show the optimal contour line, and the two dashed vertical lines show the virial and turnaround radii, respectively. The red points show galaxies identified as cluster members, i.e., those enclosed by the optimal contour line and within $r_t$. Also shown in each panel are the cluster virial mass ($\log \mbox{M}_v$ $h^{-1}$ $\mbox{M}_\odot$) and the number of galaxies within $r_v$. } \label{fig:All} \end{figure*} \noindent where $x=r/r_s$, $r_s$ is the scale radius, $\sigma(<r)$ is the integrated three-dimensional velocity dispersion within $r$, and $\sigma_v(r)$ is the projected velocity dispersion (e.g., \citealp{Koranyi2000,Abdullah11}). The mass density within a sphere of radius $r$ introduced by NFW is given by \begin{equation} \label{eq:NFW1} \rho(r)=\frac{\rho_s}{x\left(1+x\right)^2}, \end{equation} \noindent and its corresponding mass is given by \begin{equation} \label{eq:NFW11} M(<r)=\frac{M_s}{\ln(2)-(1/2)}\left[\ln(1+x)-\frac{x}{1+x}\right], \end{equation} \noindent where $M_s=4\pi\rho_s r^3_s [\ln(2)-(1/2)]$ is the mass within $r_s$, $\rho_s = \delta_s \rho_c$ is the characteristic density within $r_s$ with $\delta_s = (\Delta_v/3) c^3 \left[\ln(1+c) - \frac{c}{1+c}\right]^{-1}$, and the concentration is $c = r_v/r_s$ (e.g., \citealp{NFW97,Rines03,Mamon13}).
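As a concrete illustration of how the two characteristic radii follow from the mass profile, the short Python sketch below implements the NFW mass of Equation (\ref{eq:NFW11}) and solves $M(<r) = (4\pi/3)\,\Delta\,\rho_c\, r^3$ for $\Delta = 200$ (virial radius) and $\Delta = 5.55$ (turnaround radius). The use of SciPy's root finder and the bracketing interval are our own choices:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def m_nfw(r, rs, Ms):
    """Enclosed NFW mass, Eq. (NFW11)."""
    x = r / rs
    return Ms / (np.log(2.0) - 0.5) * (np.log(1.0 + x) - x / (1.0 + x))

def overdensity_radius(rs, Ms, delta, rho_c):
    """Radius at which the mean enclosed density equals delta * rho_c."""
    f = lambda r: (m_nfw(r, rs, Ms)
                   - (4.0 / 3.0) * np.pi * delta * rho_c * r ** 3)
    return brentq(f, 1e-3 * rs, 1e3 * rs)  # assumed bracketing interval

rho_c = 2.775e11            # critical density in h^2 Msun Mpc^-3
rs, Ms = 0.26, 1.61e14      # NFW parameters of A1656 (Table tab:parameters)
r_v = overdensity_radius(rs, Ms, 200.0, rho_c)  # ~1.6 h^-1 Mpc
r_t = overdensity_radius(rs, Ms, 5.55, rho_c)   # turnaround radius
\end{verbatim}
For the quoted A1656 parameters this recovers $r_{200} \approx 1.58~h^{-1}$ Mpc, consistent with Table~\ref{tab:parameters}.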
\begin{table*} \centering \caption{Dynamical parameters derived for the sample of twelve Abell galaxy clusters} \label{tab:parameters} \scriptsize \begin{tabular}{cc cccccc c cccccc c ccc}\hline cluster & $z_c$ &\multicolumn{6}{c}{virial mass estimator} && \multicolumn{6}{c}{NFW mass estimator} && \multicolumn{3}{c}{NFW parameters}\\ \cline{3-8} \cline{10-15} \cline{17-19}\\ &&$r_{500}$&$M_{500}$&$r_{200}$ & $M_{200}$&$r_{100}$&$M_{100}$ && $r_{500}$&$M_{500}$&$r_{200}$ & $M_{200}$&$r_{100}$&$M_{100}$&&$r_{s}$&$M_{s}$ & c \\ (1)&(2) &(3)&(4)&(5)&(6)&(7)&(8) && (9)&(10)&(11)&(12)&(13)&(14) && (15)&(16)&(17)\\ \hline A2065 & 0.073 & 1.27 &11.95 & 1.78 &12.97& 2.29&13.86 && 1.20 &10.11 & 1.78 &13.01 & 2.36 &15.30 && 0.22 & 1.90 & 8.16\\ A1656 & 0.023 & 1.07 & 7.14 & 1.58 & 9.06& 2.07&10.30 && 1.06 & 6.78 & 1.58 & 9.09 & 2.11 &10.96 && 0.26 & 1.61 & 6.01\\ A2029 & 0.078 & 0.99 & 5.68 & 1.49 & 7.65& 1.97& 8.84 && 0.94 & 4.73 & 1.49 & 7.67 & 2.07 &10.31 && 0.61 & 2.80 & 2.46\\ A2142 & 0.090 & 0.97 & 5.24 & 1.47 & 7.27& 2.03& 9.64 && 0.91 & 4.35 & 1.47 & 7.32 & 2.05 &10.02 && 0.67 & 2.99 & 2.19\\ A2063 & 0.035 & 0.80 & 2.91 & 1.17 & 3.73& 1.54& 4.23 && 0.81 & 3.10 & 1.18 & 3.76 & 1.55 & 4.27 && 0.08 & 0.40 & 14.40\\ A1185 & 0.033 & 0.75 & 2.38 & 1.08 & 2.89& 1.42& 3.29 && 0.72 & 2.20 & 1.08 & 2.91 & 1.44 & 3.48 && 0.17 & 0.49 & 6.53\\ A0117 & 0.055 & 0.65 & 1.55 & 0.93 & 1.86& 1.26& 2.31 && 0.61 & 1.29 & 0.93 & 1.88 & 1.27 & 2.37 && 0.24 & 0.46 & 3.87\\ A2018 & 0.088 & 0.55 & 0.92 & 0.90 & 1.67& 1.35& 2.84 && 0.55 & 0.95 & 0.90 & 1.67 & 1.27 & 2.36 && 0.48 & 0.82 & 1.83\\ A1436 & 0.065 & 0.49 & 0.68 & 0.89 & 1.64& 1.25& 2.27 && 0.61 & 1.29 & 0.89 & 1.64 & 1.18 & 1.92 && 0.10 & 0.23 & 8.68\\ A1983 & 0.045 & 0.58 & 1.09 & 0.85 & 1.39& 1.09& 1.51 && 0.57 & 1.03 & 0.85 & 1.41 & 1.14 & 1.71 && 0.16 & 0.27 & 5.37\\ A1459 & 0.020 & 0.50 & 0.73 & 0.72 & 0.85& 0.92& 0.89 && 0.50 & 0.73 & 0.72 & 0.87 & 0.95 & 0.97 && 0.04 & 0.08 & 18.4\\ A2026 & 0.091 & 0.46 & 0.54 & 0.71 & 0.82& 0.91& 0.88 && 0.47 & 0.59 & 0.71 & 0.83 & 0.96 & 1.03 && 0.16 & 0.19 & 4.32\\ \hline \end{tabular} \begin{tablenotes} \item \noindent Radii and their corresponding masses are calculated by the virial and NFW mass estimators at overdensities of $\Delta = 500$, $200$ and $100$ $\rho_c$. The radii and masses are in units of $h^{-1} \mbox{Mpc}$ and $10^{14}$ $h^{-1} \mbox{M}_\odot$, respectively.\\ Columns: (1) cluster name; (2) cluster redshift; (3-4), (5-6) \& (7-8) are radii and their corresponding masses calculated by the virial mass estimator at overdensities of $\Delta= 500, 200$ and 100, respectively; (9-10), (11-12) \& (13-14) are radii and their corresponding masses calculated by the NFW model at overdensities of $\Delta= 500, 200$ and 100, respectively; (15-17) are the NFW scale radius, the corresponding scale mass, and the concentration.
\end{tablenotes} \end{table*} The projected number of galaxies within a cylinder of radius $R$ is given by integrating the NFW profile (Equation~(\ref{eq:NFW1})) along the line of sight (e.g., \citealp{Bartelmann96,Zenteno16}), \begin{equation} \label{eq:NFW2} N(<R)=\frac{N_s}{\ln(2)-(1/2)} g(x), \end{equation} \noindent where $N_s$ is the number of galaxies within $r_s$, which has the same functional form as $M_s$, and $g(x)$ is given by (e.g., \citealp{Golse02,Mamon10}) \begin{equation} g(x) = \begin{cases} \ln(x/2) + \frac{\cosh^{-1} (1/x)}{\sqrt{1-x^2}} \ \ \ \mbox{if} \ x \ < \ 1\\ 1-\ln(2) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mbox{if} \ x \ = \ 1 \\ \ln(x/2) + \frac{\cos^{-1} (1/x)}{\sqrt{x^2-1}} \ \ \ \ \ \mbox{if} \ x \ > \ 1 \end{cases} \end{equation} Thus, we can fit $r_s$ for each cluster, obtain $S(r)$ from Equation~(\ref{eq:vir_25}), and calculate the corrected mass profile $M_v(r)$ from Equation~(\ref{eq:vir17}). Also, the NFW mass profile is calculated from Equation~(\ref{eq:NFW11}). Then, $r_v$, at which $\Delta = 200 \rho_c$, can be calculated from either the virial or the NFW mass profile, while $r_t$, at which $\Delta = 5.55 \rho_c$, can be determined from the NFW mass profile only. We cannot determine $r_t$ from the virial mass profile because the assumption of hydrostatic equilibrium is invalid at such large radii. Finally, after we calculate $r_v$ and $r_t$ (from the NFW mass profile), the cluster membership can be defined as all galaxies enclosed by the optimal contour line and within $r_t$, as shown by the red points in Figure~\ref{fig:All}. It is worth noting once again that GalWeight is effective at taking into account the effects of the FOG distortion in the innermost regions and the random motion of galaxies in the cluster infall region. Moreover, GalWeight is not affected by the presence of substructures or nearby clusters or groups, as demonstrated, for example, for A2063 \& A2065. Furthermore, GalWeight can be applied both to rich clusters such as A2065 \& A1656 and to poor galaxy groups such as A1459 \& A2026. In order to compare our results with the literature, we calculate the radii and their corresponding masses at three overdensities, $\Delta_{500} = 500 \rho_c$, $\Delta_{200} = 200 \rho_c$ and $\Delta_{100} = 100 \rho_c$, as shown in Table \ref{tab:parameters}. The sample is displayed in order of decreasing NFW $M_{200}$ mass. A complete list of NFW parameters is also provided in Table \ref{tab:parameters}. In Table \ref{tab:comp} we list ratios of the radii and masses for each of the twelve Abell clusters determined using our GalWeight-based membership (assuming an NFW profile) divided by previously-published values, $(r_{NFW}/r_{ref})$ and $(M_{NFW}/M_{ref})$ respectively, at overdensities of $\Delta = 500$, $200$ and $100 \rho_c$. Column 8 of Table~\ref{tab:comp} also lists the ratio of GalWeight-determined masses relative to those estimated from the caustic technique \citep{Rines16}, ($M_{NFW}/M_{caus})_{200}$, at $\Delta= 200 \rho_c$. Table~\ref{tab:comp} clearly shows that the radii and masses estimated for a cluster are strongly dependent on the technique used to assign membership and remove interlopers (see \citealp{Wojtak07}). The ratio $(r_{NFW}/r_{ref})$ ranges between 0.63 and 1.55, while the ratio $(M_{NFW}/M_{ref})$ ranges between 0.58 and 2.18.
\begin{table*} \centering \caption{GalWeight-determined ratios of radii and mass for each of the twelve Abell clusters compared to previously-published values} \label{tab:comp} \scriptsize \begin{tabular}{cccccccccccc}\hline cluster & \multicolumn{3}{c}{$(r_{NFW}/r_{ref})$} &&& \multicolumn{3}{c}{$(M_{NFW}/M_{ref})$} &&& \multicolumn{1}{c}{($M_{NFW}/M_{caus})_{200}$}\\ \cline{2-4} \cline{7-9} \\ (1)& (2) &(3)&(4)&&&(5)&(6)&(7)&&&(8)\\ &\multicolumn{1}{c}{$\Delta_{500}$}&\multicolumn{1}{c}{$\Delta_{200}$}& \multicolumn{1}{c}{$\Delta_{100}$}&&&\multicolumn{1}{c}{$\Delta_{500}$}&\multicolumn{1}{c}{$\Delta_{200}$}&\multicolumn{1}{c}{$\Delta_{100}$}&&&\multicolumn{1}{c}{\citet{Rines16}}\\ \hline A2065 & $1.70^{5}$, $1.15^{8}$ & $1.04^{8}$, $1.11^{11}$ & --- &&& $1.50^{8}$ & $1.12^{8}$, $1.30^{11}$ & --- &&& 3.84 \\ A1656 & $1.16^{5}$ & $0.89^{4}$, $1.05^{9}$ & $1.04^{6}$ &&& $1.09^{5}$ & $0.72^{4}$, $1.16^{9,+}$ & $1.12^{6}$ &&& 2.02 \\ A2029 & $1.05^{5}$, $0.93^{8}$ & $0.93^{8}$, $0.89^{11}$ & --- &&& $0.92^{5}$, $0.80^{8}$ & $0.82^{8}$, $0.88^{11}$ & --- &&& 1.42 \\ A2142 & $0.91^{8}$, $1.30^{10,+}$ & $1.30^{10,+}$, $0.94^{11}$ & $0.96^{12}$ &&& $0.66^{8}$ & $1.80^{10,+}$, $0.75^{11}$ & $0.86^{12}$ &&& 2.27 \\ A2063 & $1.30^{8}$ & $1.18^{8} $ & $1.05^{12}$ &&& $2.18^{8}$ & $1.65^{8}$ & $1.03^{12}$ &&& 1.40 \\ A1185 & --- & $1.01^{3,\bullet} $ & --- &&& --- & $2.77^{3,\bullet}$ & --- &&& 1.37 \\ A0117 & --- & $0.83^{1,\star}$, $1.05^{2} $ & --- &&& --- & $0.58^{1,\star}$ & --- &&& --- \\ A2018 & --- & $0.82^{2} $, $1.18^{13}$ & --- &&& --- & --- & --- &&& 0.94 \\ A1983 & $0.85^{7}$ & $0.90^{3,\bullet} $, $0.81^{7}$ & --- &&& $1.18^{7}$ & $0.99^{3,\bullet}$, $1.03^{7}$ & --- &&& 1.64 \\ A1436 & $1.55^{10,+}$ & $0.64^{1,\star}$, $1.24^{10,+}$ & --- &&& --- & $0.64^{1,\star}$, $1.30^{10,+}$ & --- &&& --- \\ A1459 & --- & $0.94^{1,\star} $ & --- &&& --- & $0.84^{1,\star}$ & --- &&& --- \\ A2026 & --- & $0.63^{2} $ & --- &&& --- & --- & --- &&& --- \\ \hline \end{tabular} \begin{tablenotes} \item Columns: (1) cluster name; (2-4) ratio of GalWeight radii to those in the literature at overdensities of $\Delta = 500$, $200$ and $100$ $\rho_c$, respectively, assuming an NFW model; (5-7) ratio of GalWeight masses to those in the literature at overdensities of $\Delta = 500$, $200$ and $100$ $\rho_c$, respectively, assuming an NFW model; (8) ratio of GalWeight masses assuming an NFW model to those calculated from the caustic technique in \citet{Rines16} at $\Delta= 200$ $\rho_c$.\\ 1=\citet{Abdullah11}, 2=\citet{Aguerri07}, 3=\citet{Girardi02}, 4=\citet{Kubo07}, 5=\citet{Lagana11}, 6=\citet{Lokas03}, 7=\citet{Pointecouteau05}, 8=\citet{Reiprich02}, 9=\citet{Rines03}, 10=\citet{Rines06}, 11=\citet{sifon15}, 12=\citet{Wojtak10}. $^+$ (caustic technique), $^\bullet$ (shifting gapper), $^\star$ (SIM). \end{tablenotes} \vspace{4mm} \end{table*} The cluster masses from the literature tabulated in Table~\ref{tab:comp} have been calculated in various ways. Below, we explicitly compare our values to those obtained from applying the shifting gapper, SIM and caustic methods. First, comparing to the shifting gapper technique (see ($^\bullet$) in Table~\ref{tab:comp}; \citealp{Girardi02,sifon15}), we find that the ratio $(M_{NFW}/M_{ref})$ is larger than unity in some cases (A2065, A1185) and smaller than unity in others (A2029, A2142). This is because the members assigned by this technique, and consequently the calculated mass, depend on the chosen number of galaxies per bin and on the adopted velocity gap.
As discussed before, the choice of a high-velocity gap includes more members, and consequently yields a larger mass, and vice versa. Second, comparing to the SIM method (see ($^\star$) in Table~\ref{tab:comp}; \citealp{Abdullah11}), we note that the mass ratio $(M_{NFW}/M_{ref})$ is less than unity for the three clusters A0117, A1436 and A1459. This is because SIM includes more galaxies as members inside the virial region even though they are very far from the cluster body. This is due to the assumption of conservation of mass, which affects the validity of SIM in the innermost region (see Figure 6 in \citealp{Abdullah11}). \begin{figure*}\hspace*{0.0cm} \includegraphics[width=26.5cm]{10F10C.pdf} \vspace{-7.0cm} \caption{Example of four well-known membership techniques applied to the Coma cluster. The blue open symbols and solid lines are as in Figure~\ref{fig:Methods1}. Clearly, GalWeight (solid black lines) identifies cluster members well in both the virialized and infall regions of phase-space.} \label{fig:Methods2} \end{figure*} Third, comparing to the caustic technique (see ($^+$) in Table \ref{tab:comp}; \citealp{Rines03,Rines06,Rines16}), we specifically calculate the ratio $(M_{NFW}/M_{caus})_{200}$, as listed in Table~\ref{tab:comp}, column 8. It demonstrates that this ratio is larger than unity for seven clusters, with the highest ratio for A2065, for which the mass estimated from the NFW profile is nearly four times that expected from the caustic technique. As described above, the main reason for this discrepancy is that the caustic technique does not take into consideration the effect of the FOG. Thus, it misses more members inside the virial region and consequently yields lower masses. We compare GalWeight once more with the four well-known techniques (shifting gapper, caustic, den Hartog, and SIM), this time for the Coma cluster, as shown in Figure~\ref{fig:Methods2}. The figure (see also Figure~\ref{fig:Methods1}) demonstrates that GalWeight performs very favorably against established methods, taking into account as it does the effects of the FOG distortion at small projected radius as well as the random motion of galaxies in the infall region. In order to apply SIM to the Coma cluster, the spatial number density profile is calculated from the NFW model (\citealp{NFW96,NFW97}). Also, we assume a background number density $\rho_{bg} = 0.0106$ $h^3$ $\mbox{Mpc}^{-3}$, calculated using the parameters of the Schechter luminosity function ($\phi^\ast = 0.0149$ $h^3$ $\mbox{Mpc}^{-3}$, $\mbox{M}^\ast -5\log{h} = -20.44$ and $\alpha = -1.05$ for the $r$ magnitude; \citealp{Blanton03}). Because of the presence of interlopers, estimates of cluster mass tend to be biased too high and estimates of cluster concentration tend to be biased too low. Our work suggests that applying GalWeight rather than another technique to determine cluster membership before applying a dynamical mass estimator (virial theorem, NFW model, etc.) likely results in a more accurate estimate of the true cluster mass and concentration. In a future work we will compare the efficiency of different membership techniques in assigning membership and their influence on estimating cluster mass using different mass estimators. \section{Discussion and Conclusion} \label {sec:conc} In this paper we introduced the Galaxy Weighting Function Technique (GalWeight), a powerful new technique for identifying cluster members, specifically designed to simultaneously maximize the number of {\it bona fide} cluster members while minimizing the number of contaminating interlopers.
GalWeight takes into account the causes of different distortions in the phase-space diagram and is independent of statistical or selection criteria. It can recover membership in both the virial and infall regions with high accuracy and is minimally affected by substructure and/or nearby clusters. We first demonstrated GalWeight's use by applying it step by step to a simulated cluster of mass $9.37 \times 10^{14}$ $h^{-1}$ M$_{\odot}$ selected from the Bolshoi simulation. Next, we tested the efficiency of the technique on $\sim 3000$ clusters selected from the MDPL2 and Bolshoi simulations with masses ranging from $0.70\times10^{14} h^{-1} M_{\odot}$ to $37.4\times10^{14} h^{-1} M_{\odot}$. The completeness and interloper fractions for MDPL2 are $f_c = 0.993, 0.986$ and $0.981$ and $f_i = 0.112, 0.096$ and $0.113$, while for Bolshoi $f_c = 0.995, 0.981$ and $0.971$ and $f_i = 0.126, 0.217$ and $0.226$, within $r_v$, $2r_v$ and $3r_v$, respectively (Table \ref{tab:frac}). We then compared its performance to four well-known existing cluster membership techniques (shifting gapper, den Hartog, caustic, SIM). Finally, we applied GalWeight to a sample of twelve Abell clusters of varying richnesses taken from SDSS-DR12. By assuming an NFW model and applying the virial mass estimator we determined the radius and corresponding mass at overdensities of $\Delta_{500}$, $\Delta_{200}$ and $\Delta_{100}$. The virial mass (at $\Delta_{200}$) of the sample ranged from $0.82\times10^{14}$ $h^{-1}$ $M_{\odot}$ to $12.97\times10^{14}$ $h^{-1}$ $M_{\odot}$, demonstrating that GalWeight is effective for both poor and massive clusters. In the future we plan to apply GalWeight to a larger SDSS sample of galaxy clusters at low and high redshifts. We believe that GalWeight has the potential for astrophysical applications far beyond the identification of cluster members, e.g., identifying stellar members of nearby dwarf galaxies or separating star-forming and quiescent galaxies. We plan to investigate these applications in a future work. \section*{Acknowledgement} We thank Brian Siana for useful discussions. We also thank Gary Mamon for his useful comments on the paper. Finally, we thank the reviewer for suggestions which improved this paper. G.W. acknowledges financial support for this work from NSF grant AST-1517863 and from NASA through programs GO-13306, GO-13677, GO-13747 \& GO-13845/14327 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555, and grant number 80NSSC17K0019 issued through the Astrophysics Data Analysis Program (ADAP).
\section{Introduction} Over the last two decades, modeling complex systems as networks has proven to be a successful approach to characterize the structure and dynamics of many real-world systems \cite{albert02,newman03,bocaletti06,barrat08}. Different dynamics have been investigated on top of networks, such as spreading processes \cite{arruda18}, percolation \cite{Dorogovtsev08}, and synchronization \cite{tang14,rodrigues16}. Among these dynamical processes, diffusion and random walks (RWs) have been analyzed thoroughly. Indeed, RW models have widespread use both in the analysis of diffusion and navigability in networks and in exploring their fine-grained organization \cite{noh04,klafter2011first,masuda17}. Most of the research on RWs relies on the nearest-neighbor (\textbf{NN}) paradigm \cite{noh04}, in which the walker can only hop to one of the \textbf{NN}s of its current node (or position). However, other RW definitions, in discrete and continuous time, allow for both \textbf{NN} and long-distance hops. Well-known examples are, e.g., L\'evy RWs \cite{riascos12,guo16,estrada17b,nigris17}, RWs based on the $d$-path Laplacian operator \cite{estrada17b}, and those defined by fractional transport in networks \cite{riascos14,riascos15,michelitsch19}. Such non-\textbf{NN} strategies often correspond to better options for randomly reaching a target in an unknown environment, as in, for instance, the foraging of species in a given environment \cite{lomholt2008levy,humphries2010environmental,song2010modelling,rhee2011levy,ghandi2011foraging}. More recently, multiplex networks and other multilayer structures were identified as more comprehensive frameworks to describe those complex systems in which agents may interact through several channels. Here, links representing channels with different meanings and relevance are embedded in distinct layers \cite{boccaletti14,kivela14,battiston17,bianconi18}. As before, multiplexes have widespread use to describe, among others, social \cite{szell10,cozzo13,li15,arruda17}, biochemical \cite{cozzo12,battiston17}, and transportation systems \cite{dedomenico14,aleta17}. In the latter case, layers may represent the underground, bus and railway networks of large cities, each one associated with a different spatial and temporal scale. A diffusion process in a transport system like this occurs both within and across layers \cite{arruda18,gomez13,cencetti19,dedomenico16,tejedor18}, accounting both for the actual displacement needed to reach different places and for the change of embarking platforms. Multiplexes have also been used to study related dynamical systems, such as reaction-diffusion \cite{asllani14,kouvaris15,busiello18} and synchronization processes \cite{gambuzza15,sevilla15,genio16,allen17}. The eigenvalue spectrum of the supra-Laplacian matrix associated to the multiplex plays a key role in the description of such diffusive processes. Several results indicate the presence of super-diffusive behavior in undirected and directed multiplexes \cite{gomez13,sole13,radicchi13,cozzo12,cozzo16,sanchez14,cozzob16,arruda18b,serrano17,tejedor18,cencetti19}, meaning that their relaxation time to the steady state is smaller than that observed for any isolated layer. On the other hand, it seems that fractional diffusion on multiplex networks has not yet received due attention. In this work, we address this matter, following the framework introduced in Refs.~\cite{riascos14,riascos15,michelitsch19}.
Our study considers the fractional diffusion of node-centric continuous-time RWs on undirected multiplexes with two layers. We present numerical results for the main dynamical features of several multiplexes with well-known topology, and derive exact analytical expressions for circulant layers. An important aspect is the role played by the ratio between the inter- and intra-layer coefficients. Our results indicate a nonmonotonic behavior in the rate of convergence to the steady state as the inter-layer coefficient increases. The walkers' mean square displacement (MSD$\equiv \left \langle r^2(t) \right \rangle$) illustrates the existence of an optimal diffusive regime depending both on the inter-layer coupling and on the fractional parameter. Following the nomenclature in \cite{dedomenico14}, we show that fractional dynamics turns \textit{classic random walkers} into a new type of \textit{physical random walkers}, which are allowed to (i) switch layer and (ii) perform long hops to another distant vertex in the same jump. In spite of such enhanced diffusion, our results also show that the MSD still increases linearly with time when the number of multiplex nodes is finite. The paper is organized as follows. In Sec.~\ref{Sec:Diff}, we describe the diffusion dynamics on undirected multiplex networks and the MSD. Section~\ref{Sec:FracDiff} defines fractional dynamics on such systems. Analytical results for continuous-time fractional random walks on regular multiplexes with circulant layers are presented in Sec.~\ref{Sec:results}. Finally, our conclusions are summarized in Sec.~\ref{conclusions}. \section{Diffusion dynamics on multiplex networks} \label{Sec:Diff} Let us consider a multiplex $\mathcal{M}$ with $N$ nodes and $M$ layers. Let $\mathbf{A}^\alpha =\left ( \mathbf{A}_{ij}^\alpha \right )$ denote the adjacency matrix of the $\alpha$th layer, with $1\leq \alpha \leq M$. In this work we focus on multiplexes $\mathcal{M}$ whose layers are undirected and unsigned (i.e., the edge weights are nonnegative) and contain no self-loops, i.e., $\mathbf{A}_{ij}^\alpha=\mathbf{A}_{ji}^\alpha=w_{ij}>0$ if there is a link between the nodes $i$ and $j$ in the layer $\alpha$ (and $i\neq j$), and 0 otherwise. If the layers of $\mathcal{M}$ are also unweighted, then $w_{ij}=1$. The strength $s_{i}^{\alpha}$ of a vertex $i$ with respect to its connections with other vertices $j$ (with $j=1,\cdots,N$) in the same layer $\alpha$ is given by $s_{i}^{\alpha}=\sum_{j=1}^N \mathbf{A}_{ij}^\alpha$. On a discrete space, diffusive phenomena are described in terms of Laplacian matrices, which can be formally obtained as a discretized version of the Laplacian operator $\left ( -\bigtriangledown ^2 \right )$ on regular lattices, and have been generalized to more complex topologies \cite{riascos14,riascos15}. In the case of multiplexes, we let $\vec{x}$ be an $NM\times1$ state (column) vector whose entry $i+(\alpha-1)N$ (with $i=1,\cdots,N$) describes the concentration of a generic flowing quantity at time $t$ on node $i$ at the $\alpha$th layer, $x_i^\alpha$.
Therefore, the usual diffusion equation in matrix form reads: \begin{equation} \frac{\mathrm{d}\vec{x}(t) }{\mathrm{d} t}=-\mathbf{\mathcal{L}}^\mathcal{M}\vec{x}(t) \label{sol_prob} \end{equation} \noindent where \begin{equation} \mathbf{\mathcal{L}}^\mathcal{M}=\mathbf{\mathcal{L}}^\ell+\mathbf{\mathcal{L}}^x \label{def_comb_lap} \end{equation} \noindent denotes the $NM\times NM$ (combinatorial) supra-Laplacian matrix defined in \cite{masuda17,gomez13,cencetti19,sole13,bianconi18}, and $\mathcal{L}^\ell$ and $\mathcal{L}^x$ represent the intra-layer and the inter-layer supra-Laplacian matrices, respectively, given by: \begin{equation} \mathcal{L}^\ell=\left ( \begin{matrix} D_1 \mathbf{L}^1 & & & \\ & D_2 \mathbf{L}^2 & & \\ & & \ddots & \\ & & & D_M \mathbf{L}^M \end{matrix} \right ), \end{equation} and \begin{equation} \mathcal{L}^x=\left ( \begin{matrix} \sum_\alpha D_{1 \alpha}\mathbf{I}_N & -D_{1 2}\mathbf{I}_N & \cdots & -D_{1 M}\mathbf{I}_N\\ -D_{21}\mathbf{I}_N & \sum_\alpha D_{2 \alpha}\mathbf{I}_N & \cdots & -D_{2M}\mathbf{I}_N\\ \vdots & \vdots & \ddots & \vdots \\ -D_{M 1}\mathbf{I}_N & -D_{M 2}\mathbf{I}_N & \cdots & \sum_\alpha D_{M \alpha}\mathbf{I}_N \end{matrix} \right ). \label{interlayer_connect_matrix} \end{equation} \noindent In the above equations, $\mathcal{L}^\ell$ is a block-diagonal matrix, $D_\alpha$ denotes the intra-layer diffusion constant in the $\alpha$th layer, $D_{\alpha\beta}$ (with $\alpha,\beta\in\left \{ 1,\cdots,M \right \}$ and $\beta\neq \alpha$) refers to the inter-layer diffusion constant between the $\alpha$th and $\beta$th layers, $\mathbf{I}_N$ represents the $N\times N$ identity matrix, and $\mathbf{L}^\alpha$ is the usual $N\times N$ (combinatorial) Laplacian matrix of the layer $\alpha$, with elements $\mathbf{L}^{\alpha}_{ij}=s_{i}^{\alpha}\delta_{ij} - \mathbf{A}_{ij}^{\alpha}$, where $\delta_{ij}$ is the Kronecker delta. Thus, the matrix $\mathbf{\mathcal{L}}^\mathcal{M}$ represents the generalization of the graph Laplacian to the case of linear diffusion on multiplex networks. For simplicity, we will consider only diffusion processes where $D_{\alpha\beta}=D_{\beta\alpha}$, so that $\mathcal{L}^x$ and $\mathbf{\mathcal{L}}^\mathcal{M}$ are symmetric matrices. Finally, according to Eq.~(\ref{def_comb_lap}), the elements of the main diagonal of $\mathbf{\mathcal{L}}^\mathcal{M}$ represent the total strength of a given node at a given layer, i.e., the sum of (i) the strength of such vertex with respect to its connections with other vertices in the same layer and (ii) the strength of the same vertex with respect to connections to its counterparts in different layers. To denote the total strength of node $i$ in layer $\alpha$, we introduce the following short-hand notation: \begin{equation} \sigma_i^\alpha=\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{ff}, \label{total_strenght} \end{equation} \noindent where $f=i+(\alpha-1)N$. \subsection{CTRW\lowercase{s} and MSD on multiplex networks} The usual \textit{discrete-time random walk} (DTRW) is a random sequence of vertices generated as follows: given a starting vertex $i$, denoted as ``origin of the walk'', at each discrete time step $t$ the walker jumps to one \textbf{NN} of its current node \cite{masuda17,aldous02,lovasz93,noh04}. In the case of multiplex networks, because of their peculiar interconnected structure, a DTRW can also move from one layer to another, provided that such layers ($\alpha$ and $\beta$) are connected with each other (i.e., $D_{\alpha\beta}\neq0$).
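As an illustrative aside (not part of the original framework of Refs.~\cite{gomez13,masuda17}), the supra-Laplacian construction above is straightforward to set up numerically. The following Python sketch, with hypothetical helper names of our own, assembles $\mathbf{\mathcal{L}}^\mathcal{M}$ for a two-layer multiplex:

\begin{verbatim}
# Minimal sketch: build the supra-Laplacian L^M = L^ell + L^x for M = 2.
import numpy as np

def layer_laplacian(A):
    # Combinatorial Laplacian of one layer: L_ij = s_i delta_ij - A_ij.
    return np.diag(A.sum(axis=1)) - A

def supra_laplacian(A1, A2, D1=1.0, D2=1.0, D12=1.0):
    # Two-layer supra-Laplacian with symmetric inter-layer coupling D12.
    N = A1.shape[0]
    Z, I = np.zeros((N, N)), np.eye(N)
    intra = np.block([[D1 * layer_laplacian(A1), Z],
                      [Z, D2 * layer_laplacian(A2)]])
    inter = np.block([[D12 * I, -D12 * I],
                      [-D12 * I, D12 * I]])
    return intra + inter

# Example: a 5-node cycle (layer 1) coupled to a 5-node path (layer 2).
N = 5
A1 = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
A2 = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
LM = supra_laplacian(A1, A2)
assert np.allclose(LM.sum(axis=1), 0)  # Laplacian rows sum to zero
\end{verbatim}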
In the case of \textit{continuous-time random walks} (CTRWs), it is assumed that the walker's waiting times between two moves obey a given probability density function \cite{masuda17}. For that reason, the actual timing of the moves must be taken into account. For the sake of simplicity, in this work we consider exponentially distributed waiting times with constant rate (i.e., the number of moves follows a Poisson process). Here, it becomes necessary to distinguish between two different cases of Poissonian CTRWs: node-centric and edge-centric RWs. The Poissonian node-centric CTRWs follow the same assumption as DTRWs: when a walker becomes active, it moves from its current node to one of the neighbors with a probability proportional to the weight of the connection between such nodes. On the other hand, in the Poissonian edge-centric CTRWs, each edge (rather than a node) is activated independently according to a renewal process. Thus, if a trajectory includes many nodes with large strengths, the number $n$ of moves in the time interval $[0,t]$ tends to be larger than for trajectories that traverse many nodes with small strengths. For a wider description of the specific features of each random walk the reader is referred to Ref.~\cite{masuda17}. To generalize the fractional diffusion framework introduced in Refs.~\cite{riascos14,riascos15,michelitsch19} to multiplex networks, we restrict our analysis to CTRWs. Let $\vec{p}$ be an $NM\times1$ vector whose entry $i+(\alpha-1)N$ (with $i=1,\cdots,N$) is the probability of finding the random walker at time $t$ on node $i$ at the $\alpha$th layer. The transition rules governing the diffusion dynamics of the node-centric random walks are determined by a master equation which, in terms of suitably defined matrices, can be written as \begin{equation} \frac{\mathrm{d}\vec{p}(t)^T }{\mathrm{d} t}=-\vec{p}(t)^T\mathbf{\mathcal{S}}^{-1}\mathbf{\mathcal{L}}^\mathcal{M}=-\vec{p}(t)^T \mathcalboondox{L}, \label{prob_node_centric} \end{equation} \noindent On the other hand, the dynamics of the edge-centric ones is described by \begin{equation} \frac{\mathrm{d}\vec{p}(t)^T }{\mathrm{d} t}=-\vec{p}(t)^T\mathbf{\mathcal{L}}^\mathcal{M}. \label{prob_edge_centric} \end{equation} \noindent In Eqs.~(\ref{prob_node_centric}) and (\ref{prob_edge_centric}), $\mathbf{X}^T$ stands for the transpose of a matrix $\mathbf{X}$, and $\mathbf{\mathcal{S}}$ is the $NM\times NM$ diagonal matrix with elements $\mathbf{\mathcal{S}}_{ii}=\mathbf{\mathcal{L}}^\mathcal{M}_{ii}$. The $NM\times NM$ matrix $\mathcalboondox{L}$ denotes the ``random walk normalized supra-Laplacian" \cite{masuda17} (or just ``normalized supra-Laplacian" \cite{dedomenico14}). According to the definition of $\mathcalboondox{L}$, its elements can be expressed as \begin{equation} \mathcalboondox{L}_{fg}=\frac{\mathbf{\mathcal{L}}^\mathcal{M}_{fg}}{\mathbf{\mathcal{L}}^\mathcal{M}_{ff}}=\delta_{fg}-\mathcalboondox{T}_{fg}, \label{transition_matrix} \end{equation} \noindent where $\mathcalboondox{T}_{fg}$ are the elements of the $NM\times NM$ transition matrix $\mathcalboondox{T}$ of a discrete-time random walk, describing the transition probability from one node to its \textbf{NN}s in the corresponding layer or to the node's counterparts in different layers, with probabilities proportional to the corresponding weights \cite{noh04,klafter2011first,riascos14,riascos15,michelitsch19,masuda17}.
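Continuing the sketch above (again, purely illustrative), the normalized supra-Laplacian of Eq.~(\ref{prob_node_centric}) and the transition matrix of Eq.~(\ref{transition_matrix}) follow by row normalization:

\begin{verbatim}
# Node-centric CTRW generator: L_rw = S^{-1} L^M and T = I - L_rw.
S_diag = np.diag(LM)                  # S_ff = L^M_ff (total strengths)
L_rw = LM / S_diag[:, None]           # normalized supra-Laplacian
T = np.eye(2 * N) - L_rw              # discrete-time transition matrix
assert np.allclose(T.sum(axis=1), 1)  # T is row-stochastic
assert np.allclose(np.diag(T), 0)     # no self-transitions
\end{verbatim}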
Note that $\mathcalboondox{T}$ is a stochastic matrix that satisfies $\mathcalboondox{T}_{ff}=0$ and $\sum_{g=1}^{NM} \mathcalboondox{T}_{fg}=1$. Following \cite{dedomenico14,riascos14,riascos15,michelitsch19}, hereafter we will consider only the case of node-centric CTRWs (or \textit{classical random walkers} as in \cite{dedomenico14}). The MSD, defined by $\left \langle r^2(t) \right \rangle$, is a measure of the ensemble-averaged squared distance between the position of a walker at a time $t$, $x(t)$, and a reference position, $x_0$. Assuming that $\left \langle r^2(t) \right \rangle$ has a power-law dependence on time, we have \begin{equation} \mathrm{MSD}\equiv \left \langle r^2(t) \right \rangle = \left \langle \left ( x(t)-x_0 \right )^2 \right \rangle \sim t^\varepsilon, \end{equation} \noindent where the value of the parameter $\varepsilon$ classifies the type of diffusion into normal diffusion ($\varepsilon=1$), sub-diffusion ($\varepsilon<1$), or super-diffusion ($\varepsilon>1$). Although the MSD is one of the standard measures used to analyze stochastic data \cite{almaas03,gallos04}, additional measures are also required in order to better characterize diffusion, e.g., first-passage observables \cite{masuda17}. For the type of results we discuss here, $\left \langle r^2(t) \right \rangle$ is essential to provide a clear-cut way to characterize the time dependence. According to Eq.~(\ref{prob_node_centric}), the probability of finding the random walker at node $j$ in the $\beta$th layer (at time $t$), when the random walker was initially located at node $i$ in the $\alpha$th layer, is given by: \begin{equation} p(t)_{i^\alpha\rightarrow j^\beta}=\vec{p}(t)_g= \vec{\mathcal{C}}_g\,^T \exp\left (-t\mathcalboondox{L}\right )\vec{\mathcal{C}}_f, \label{prob_walker_normal} \end{equation} \noindent where $g=j+(\beta-1)N$ and $f=i+(\alpha-1)N$ (with $i,j \in \left \{ 1, \cdots, N \right \}$ and $\alpha,\beta \in \left \{ 1, \cdots, M \right \}$), and $\vec{\mathcal{C}}_\ell$ represents the $\ell$th vector ($NM\times1$) of the canonical basis of $\mathbb{R}^{NM}$, with components $\left ( \vec{\mathcal{C}}_\ell \right )_m=\delta_{m\ell}$. Therefore, in the case of node-centric CTRWs, we can quantify $\left \langle r^2(t) \right \rangle$ at time $t$ as follows: \begin{equation} \left \langle r^2(t) \right \rangle = \frac{1}{N^2M^2}\sum_{\alpha=1}^M \sum_{\beta=1}^{M} \sum_{i=1}^N \sum_{j=1}^{N}\left ( d_{i^\alpha\rightarrow j^\beta} \right )^2 p(t)_{i^\alpha\rightarrow j^\beta}, \label{eq_MSD_multpl_new} \end{equation} \noindent where $d_{i^\alpha\rightarrow j^\beta}$ is the shortest path distance between $i$ in the $\alpha$th layer and $j$ in the $\beta$th layer, that is, the smallest number of edges connecting those nodes.
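The probabilities and the MSD defined above can be evaluated directly from the matrix exponential. A minimal sketch, continuing the earlier snippets and taking the shortest-path distances from the supra-adjacency matrix, is:

\begin{verbatim}
# MSD of the node-centric CTRW: average of d^2 weighted by p(t).
from scipy.linalg import expm
from scipy.sparse.csgraph import shortest_path

SA = np.block([[A1, np.eye(N)], [np.eye(N), A2]])  # supra-adjacency
dist = shortest_path(SA, unweighted=True)          # d_{i^a -> j^b}

def msd(t):
    P = expm(-t * L_rw)                 # P[f, g] = p(t)_{f -> g}
    return (dist**2 * P).sum() / (2 * N)**2

print(msd(1.0))
\end{verbatim}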
\section{Fractional diffusion on multiplex networks} \label{Sec:FracDiff} \subsection{General Case} \label{Sec:FracDiff_general} In this subsection we present the general expressions for the combinatorial and normalized supra-Laplacian matrices required to study fractional diffusion in any multiplex network. Thus, following Refs.~\cite{riascos14,riascos15,michelitsch19}, we generalize Eq.~(\ref{sol_prob}) as \begin{equation} \frac{\mathrm{d}\vec{x}(t) }{\mathrm{d} t}=-\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma \vec{x}(t), \label{frac_diff} \end{equation} \noindent where $\gamma$ is a real number ($0<\gamma<1$) and $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$, the combinatorial supra-Laplacian matrix raised to the power $\gamma$, denotes here the \textit{fractional (combinatorial) supra-Laplacian} matrix. Let us briefly discuss some mathematical properties of the model defined by Eq.~(\ref{frac_diff}), as well as qualitative aspects of the expected behavior, limiting cases, and relations to other scenarios characterized by anomalous diffusion. An immediate consequence is that we recover Eq.~(\ref{sol_prob}) in the limit $\gamma\rightarrow 1$. This way of defining the fractional supra-Laplacian matrix preserves the essential features of Laplacian matrices, namely: $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$ (i) is positive semidefinite, (ii) has zero row sums, and (iii) has non-positive off-diagonal elements. On the other hand, by setting $D_\alpha=1$ and $D_{\alpha\beta}=1$ (with $\alpha,\beta\in\left \{ 1,\cdots,M \right \}$ and $\beta\neq \alpha$), $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$ is equivalent to the fractional Laplacian matrices of monolayer networks described in \cite{riascos14,riascos15,michelitsch19}. For such cases, it has been shown analytically that the continuum limits of the fractional Laplacian matrix (with $0<\gamma<1$) are connected with the operators of fractional calculus. Indeed, in the case of cycle graphs and their continuum limits, the distributional representations of fractional Laplacian matrices take the form of Riesz fractional derivatives (see Chapter 6 of \cite{michelitsch19} for further details). Besides that, when the above definition of the fractional Laplacian matrix is considered, the asymptotic behavior of node-centric CTRWs on homogeneous networks and their continuum limits (with homogeneous and isotropic node distributions) shows explicitly the convergence to a L\'evy propagator, associated with the emergence of L\'evy flights with self-similar, inverse power-law distributed long-range steps and anomalous diffusion (see Chapter 8 of \cite{michelitsch19} for further details). Alternatively, by using (non-fractional) Laplacian matrices (i.e., $\gamma=1$), Brownian motion (Rayleigh flights) and Gaussian diffusion appear. Both types of asymptotic behavior are in good agreement with the findings presented in Ref.~\cite{Metzler2000} for the CTRW model with a Poisson distribution of waiting times in homogeneous, isotropic systems, when a L\'evy distribution of jump lengths and a Gaussian one are considered, respectively. We can obtain a set of eigenvalues $\mu_j$ and eigenvectors $\vec{\psi}_j$ of $\mathbf{\mathcal{L}}^\mathcal{M}$ that satisfy the eigenvalue equation $\mathbf{\mathcal{L}}^\mathcal{M} \vec{\psi}_j = \mu_j \vec{\psi}_j$ for $j\in\left \{ 1,\cdots,NM \right \}$ and the orthonormalization condition $\vec{\psi}_i^T \vec{\psi}_j=\delta _{ij}$. Since $\mathbf{\mathcal{L}}^\mathcal{M}$ is a symmetric matrix, the eigenvalues $\mu_j$ are real and nonnegative. In the case of connected multiplex networks, the smallest eigenvalue is 0 and all others are positive.
Following Refs.~\cite{riascos14,riascos15,michelitsch19}, we define the orthonormal matrix $\mathbf{Q}$ with elements $\mathbf{Q}_{ij} = \vec{\mathcal{C}}_i\,^T\vec{\psi}_j $ and the diagonal matrix $\mathbf{\Delta} = \mathrm{diag}\left ( \mu _1,\mu _2,\cdots,\mu _{NM} \right )$. These matrices satisfy $\mathbf{\mathcal{L}}^\mathcal{M}\mathbf{Q}=\mathbf{Q\Delta}$ and, therefore, $\mathbf{\mathcal{L}}^\mathcal{M}=\mathbf{Q\Delta}\mathbf{Q}^\dagger$, where the matrix $\mathbf{Q}^\dagger$ is the conjugate transpose (or Hermitian transpose) of $\mathbf{Q}$. Therefore, we have: \begin{equation} \left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma = \mathbf{Q\Delta}^\gamma\mathbf{Q}^\dagger= \sum _{m=1}^{NM} \mu_m^\gamma \vec{\psi}_m\vec{\psi}_m^\dagger , \label{frac_Lapl} \end{equation} \noindent where $\mathbf{\Delta}^\gamma= \mathrm{diag}\left ( \mu_1^\gamma,\mu_2^\gamma,\cdots,\mu_{NM}^\gamma \right )$. According to Eq.~(\ref{frac_Lapl}), $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma \vec{\psi}_j = \mu_j^\gamma \vec{\psi}_j$ for $j\in\left \{ 1,\cdots,NM \right \}$. Consequently, the eigenvalues of $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$ are those of $\mathbf{\mathcal{L}}^\mathcal{M}$ raised to the power $\gamma$, $\mu_j^\gamma$, and the eigenvectors $\vec{\psi}_j$ remain the same for both the supra-Laplacian and the fractional supra-Laplacian matrices. On the other hand, the diagonal elements of the fractional supra-Laplacian matrix defined in Eq.~(\ref{frac_Lapl}) introduce a generalization of the strength $\sigma_i^\alpha = \mathbf{\mathcal{L}}^\mathcal{M}_{ff}$ with $f=i+(\alpha-1)N$ to the fractional case. In this way, the fractional strength of node $i$ at layer $\alpha$ is given by: \begin{equation} \left ( \sigma_i^\alpha \right )^{(\gamma)}=\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{ff}^\gamma = \sum _{m=1}^{NM} \mu_m^\gamma \mathbf{Q}_{fm}\mathbf{Q}_{fm}^*, \label{frac_degree} \end{equation} \noindent where $X^*$ denotes the complex conjugate of $X$. In general, the elements of the fractional (combinatorial) supra-Laplacian matrix can be calculated as follows: \begin{equation} \left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{fg}^\gamma =\sum _{m=1}^{NM} \mu_m^\gamma \mathbf{Q}_{fm}\mathbf{Q}_{gm}^*. \label{element_frac_Lapl} \end{equation} Now, by analogy with the random walk normalized supra-Laplacian matrix $\mathcalboondox{L}=\mathbf{\mathcal{S}}^{-1}\mathbf{\mathcal{L}}^\mathcal{M}$, we introduce the normalized fractional supra-Laplacian matrix $\mathcalboondox{L}^{\left (\gamma \right )}$ with elements \begin{equation} \mathcalboondox{L}_{fg}^{\left (\gamma \right )}=\frac{\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{fg}^\gamma}{\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{ff}^\gamma}=\delta_{fg}-\mathcalboondox{T}_{fg}^{\left (\gamma \right )}, \label{frac_RWLapl} \end{equation} \noindent where $\mathcalboondox{T}_{fg}^{\left (\gamma \right )}$ denotes the elements of the $NM\times NM$ fractional transition matrix $\mathcalboondox{T}^{\left (\gamma \right )}$. Note that $\mathcalboondox{T}^{\left (\gamma \right )}$ is a stochastic matrix that satisfies $\mathcalboondox{T}_{ff}^{\left (\gamma \right )}=0$ and $\sum_{g=1}^{NM} \mathcalboondox{T}_{fg}^{\left (\gamma \right )}=1$.
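As an illustration of Eqs.~(\ref{frac_Lapl})-(\ref{frac_RWLapl}), all the fractional matrices follow from a single symmetric eigendecomposition. The following sketch (ours, with hypothetical names; it assumes the matrix \texttt{LM} from the earlier snippets) makes this explicit:

\begin{verbatim}
# Fractional supra-Laplacian via Q diag(mu^gamma) Q^T, its diagonal
# (the fractional strengths), and the fractional transition matrix.
import numpy as np

def fractional_matrices(LM, gamma):
    mu, Q = np.linalg.eigh(LM)          # LM is real symmetric
    mu = np.clip(mu, 0.0, None)         # remove tiny negative round-off
    LM_g = (Q * mu**gamma) @ Q.T        # (L^M)^gamma
    sigma = np.diag(LM_g)               # fractional strengths
    L_rw_g = LM_g / sigma[:, None]      # normalized fractional supra-Laplacian
    T_g = np.eye(len(mu)) - L_rw_g      # fractional transition matrix
    return LM_g, L_rw_g, T_g

LM_g, L_rw_g, T_g = fractional_matrices(LM, gamma=0.5)
assert np.allclose(T_g.sum(axis=1), 1)  # T^(gamma) is row-stochastic
\end{verbatim}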
Finally, when fractional diffusion takes place for a given $\gamma$, the probability $p(t)_{i^\alpha\rightarrow j^\beta}^{\left (\gamma \right )}$ of finding a node-centric CTRW at node $j$ in the $\beta$th layer (at time $t$), when the random walker was initially located at node $i$ in the $\alpha$th layer, is expressed by: \begin{equation} p(t)_{i^\alpha\rightarrow j^\beta}^{\left (\gamma \right )}=\vec{p}(t)_g^{\left (\gamma \right )}= \vec{\mathcal{C}}_g\,^T \exp\left (-t\mathcalboondox{L}^{(\gamma)} \right ) \vec{\mathcal{C}}_f, \label{prob_walker_frac} \end{equation} \noindent where $g=j+(\beta-1)N$ and $f=i+(\alpha-1)N$ with $i,j \in \left \{ 1, \cdots, N \right \}$ and $\alpha,\beta \in \left \{ 1, \cdots, M \right \}$. Thus, the MSD for fractional dynamics, denoted as $\left \langle r^2(t) \right \rangle^{\left (\gamma \right )}$, is given by: \begin{equation} \left \langle r^2(t) \right \rangle^{\left (\gamma \right )} = \frac{1}{N^2M^2}\sum_{\alpha=1}^M \sum_{\beta=1}^{M} \sum_{i=1}^N \sum_{j=1}^{N}\left ( d_{i^\alpha\rightarrow j^\beta} \right )^2 p(t)_{i^\alpha\rightarrow j^\beta}^{\left (\gamma \right )}. \label{eq_frac_MSD_multpl} \end{equation} \noindent According to Eq.~(\ref{eq_frac_MSD_multpl}), the time evolution of $\left \langle r^2(t) \right \rangle^{\left (\gamma \right )}$ and the corresponding diffusive behavior for the multiplexes $\mathcal{M}$ considered in this work depend on $\gamma$, the number of nodes $N$, the total number of layers $M$, the topology of each layer, given by $\mathbf{A}^\alpha$, and the intra- and inter-layer diffusion constants $D_\alpha$ and $D_{\alpha\beta}$ (with $\alpha,\beta\in\left \{ 1,\cdots,M \right \}$ and $\beta\neq \alpha$), respectively. As expected, when $\gamma\rightarrow 1$, Eqs.~(\ref{frac_degree}), (\ref{frac_RWLapl}), (\ref{prob_walker_frac}), and (\ref{eq_frac_MSD_multpl}) reproduce the corresponding equations of the previous section. \subsection{Circulant multiplexes} \label{Sec:Circulant_Analitycal} In this subsection we analyze fractional diffusion on an $N$-node multiplex network in which all $M$ layers consist of \textit{interacting cycle graphs}, i.e., each layer $\alpha$ (with $\alpha \in \left \{ 1, \cdots, M \right \}$) contains an $N$-ring topology in which each node is connected to its $J^\alpha$ left and $J^\alpha$ right nearest nodes. Thus, $J^\alpha$ represents the interaction parameter of the layer $\alpha$. It is easy to see that, if $N$ is odd, then $J^\alpha \in \left \{ 1, \cdots, (N-1)/2 \right \}$. Note that, when $J^\alpha=1$, the $\alpha$th layer contains a cycle graph whereas, if $J^\alpha=(N-1)/2$, it corresponds to a complete graph. For the purpose of deriving exact expressions for the eigenvalues, hereafter we only consider multiplex networks with $M=2$ layers and an odd number of nodes. Besides, to emphasize the inter-layer diffusion process and simplify the notation, we choose the diffusion coefficients $D_1=D_2=1$ and $D_{12}/D_\alpha=D_x$ for $\alpha \in \left \{ 1, 2 \right \}$ \cite{gomez13,cencetti19,tejedor18}.
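For concreteness, the interacting-cycle layers used below can be generated as circulant adjacency matrices. A hypothetical helper of our own (purely illustrative) is:

\begin{verbatim}
# N-ring in which each node links to its J left and J right nearest nodes.
import numpy as np

def interacting_cycle(N, J):
    A = np.zeros((N, N))
    for k in range(1, J + 1):
        A += np.roll(np.eye(N), k, axis=1) + np.roll(np.eye(N), -k, axis=1)
    return A

A_cycle = interacting_cycle(11, J=1)     # cycle graph
A_complete = interacting_cycle(11, J=5)  # complete graph, J = (N-1)/2
\end{verbatim}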
According to Eqs.~(\ref{def_comb_lap})-(\ref{interlayer_connect_matrix}), the combinatorial supra-Laplacian matrix of the multiplex is written as \begin{equation} \mathbf{\mathcal{L}}^\mathcal{M}=\left ( \begin{matrix}\mathbf{L}^1 +D_x\mathbf{I}_N & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathbf{L}^2 +D_x\mathbf{I}_N \end{matrix} \right ) =\left ( \begin{matrix}\mathbf{C}^1 & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathbf{C}^2 \end{matrix} \right ), \label{comb_circ_ini1} \end{equation} \noindent where both $\mathbf{C}^1$ and $\mathbf{C}^2$ are $N\times N$ circulant matrices. Since exact analytical expressions for the eigenvalues and eigenvectors of circulant matrices are well known \cite{Mieghem11}, it is also possible to obtain similar expressions for $\mu_j$ and $\vec{\psi}_j$ (for $j\in\left \{ 1,\cdots,2N \right \}$). Thus, let us write \begin{equation} \mathbf{F}^{-1} \left ( \begin{matrix}\mathbf{C}^1 & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathbf{C}^2 \end{matrix} \right ) \mathbf{F} = \left ( \begin{matrix}\mathrm{\Xi}^1 & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathrm{\Xi}^2 \end{matrix} \right ) \label{comb_circ_ini2} \end{equation} \noindent where \begin{equation} \mathbf{F} = \left ( \begin{matrix} \mathbf{U} & \mathbf{0}\\ \mathbf{0} & \mathbf{U} \end{matrix} \right ) \label{def_F} \end{equation} \noindent is a $2N\times 2N$ block-diagonal matrix, $\mathbf{U}$ is the $N\times N$ unitary matrix with elements $\mathbf{U}_{ij}=\omega ^{(i-1)(j-1)}/\sqrt{N}$, $\omega \equiv \exp(-\mathfrak{i}2\pi/N)$, $\mathfrak{i}=\sqrt{-1}$, $\mathrm{\Xi}^\alpha=\mathrm{diag}\left ( \xi_1^\alpha ,\cdots, \xi_{N}^\alpha \right )$, and $\xi_m^\alpha$ are the eigenvalues of $\mathbf{C}^\alpha$, given by \begin{align} \xi_m^\alpha&=D_x+A_m^\alpha\equiv D_x+2\left (J^\alpha+1\right )-\frac{2\sin\left ((J^\alpha +1)\frac{\pi(m-1)}{N}\right )\cos\left (J^\alpha\frac{\pi(m-1)}{N}\right )}{\sin\left (\frac{\pi(m-1)}{N}\right )} \label{eigen_xi} \end{align} \noindent for $1<m\leq N$, and $\xi_m^\alpha=D_x$ for $m=1$. Since the matrices $-D_x\mathbf{I}_N$ and $\left (\mathrm{\Xi}^2-\mu_m\mathbf{I}_N\right )$ commute, the eigenvalues of $\mathbf{\mathcal{L}}^\mathcal{M}$ can be obtained as: \begin{equation} \mu_{2m-1}= \frac{\xi_m^1+\xi_m^2+\sqrt{\left (\xi_m^1-\xi_m^2\right )^2+4D_x^2}}{2}, \label{lapl_eigenA} \end{equation} \noindent and \begin{equation} \mu_{2m}=\frac{\xi_m^1+\xi_m^2-\sqrt{\left (\xi_m^1-\xi_m^2\right )^2+4D_x^2}}{2}, \label{lapl_eigenB} \end{equation} \noindent for $m\in\left \{ 1,\cdots,N \right \}$. Note that the eigenvalues $\mu_m$ are not ordered from smallest to largest (for instance, for $m=1$ we obtain $\mu_1=2D_x$ and $\mu_2=0$).
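The closed forms of Eqs.~(\ref{eigen_xi})-(\ref{lapl_eigenB}) are easy to check numerically. A short sketch of our own, assuming the helpers \texttt{supra\_laplacian} and \texttt{interacting\_cycle} introduced earlier, is:

\begin{verbatim}
# Check the analytic supra-Laplacian spectrum for two circulant layers.
import numpy as np

def A_vals(N, J):
    m = np.arange(2, N + 1)
    x = np.pi * (m - 1) / N
    A = 2 * (J + 1) - 2 * np.sin((J + 1) * x) * np.cos(J * x) / np.sin(x)
    return np.concatenate(([0.0], A))   # A_1 = 0

N, J1, J2, Dx = 11, 1, 3, 0.7
xi1, xi2 = Dx + A_vals(N, J1), Dx + A_vals(N, J2)
disc = np.sqrt((xi1 - xi2)**2 + 4 * Dx**2)
mu_analytic = np.sort(np.r_[(xi1 + xi2 + disc) / 2,
                            (xi1 + xi2 - disc) / 2])
LM2 = supra_laplacian(interacting_cycle(N, J1), interacting_cycle(N, J2),
                      D12=Dx)
assert np.allclose(mu_analytic, np.sort(np.linalg.eigvalsh(LM2)))
\end{verbatim}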
Given such a set of eigenvalues, the corresponding unitary matrix of eigenvectors $\mathbf{Q}=\left ( \begin{matrix} \vec{\psi}_1 &\cdots & \vec{\psi}_{2N}\end{matrix} \right )$ has the following elements: \begin{widetext} \begin{equation} \left.\begin{matrix} \mathbf{Q}_{fg}=\frac{1}{\sqrt{N\left ( 1+M_{g}^2 \right )}}\exp\left ( -\mathfrak{i}\frac{2\pi}{N} (f-1)\left \lfloor \frac{g-1}{2} \right \rfloor \right)\\ \mathbf{Q}_{(f+N)g}=\frac{M_g}{\sqrt{N\left ( 1+M_{g}^2 \right )}}\exp\left ( -\mathfrak{i}\frac{2\pi}{N} (f-1)\left (\left \lfloor \frac{g-1}{2} \right \rfloor +N \right ) \right) \end{matrix}\right\}\mathrm{for}\;f,g\in\left \{ 1,\cdots,N \right \}, \label{Q_layer1} \end{equation} \noindent and \begin{equation} \left.\begin{matrix} \mathbf{Q}_{(f-N)g}=\frac{1}{\sqrt{N\left ( 1+M_{g}^2 \right )}}\exp\left ( -\mathfrak{i}\frac{2\pi}{N}(f-1-N)\left (\left \lfloor \frac{g-1}{2} \right \rfloor +N \right ) \right)\\ \mathbf{Q}_{fg}=\frac{M_g}{\sqrt{N\left ( 1+M_{g}^2 \right )}}\exp\left ( -\mathfrak{i}\frac{2\pi}{N}(f-1)\left (\left \lfloor \frac{g-1}{2} \right \rfloor +N\right ) \right) \end{matrix}\right\}\mathrm{for}\;f,g \in \left \{ N+1,\cdots,2N \right \} \label{Q_layer2} \end{equation} \end{widetext} \noindent where \begin{equation} M_g=\frac{\xi_{\left ( 1+\left \lfloor (g-1)/2 \right \rfloor\right)}^1-\mu_g}{D_x}, \label{coeff_M_g} \end{equation} \noindent and $\left \lfloor . \right \rfloor$ denotes the floor function [see Appendix A for further details on the derivation of Eqs.~(\ref{lapl_eigenA})-(\ref{coeff_M_g})]. Using the eigenvalue spectrum of $\mathbf{\mathcal{L}}^\mathcal{M}$ [Eqs.~(\ref{lapl_eigenA}) and (\ref{lapl_eigenB})] and its eigenvectors [Eqs.~(\ref{Q_layer1}) and (\ref{Q_layer2})], the fractional strength of any node at layer 1 is \begin{align} \sigma_1^{(\gamma)}&\equiv\left ( \sigma_i^1 \right )^{(\gamma)}=\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{ii}^\gamma =\sum _{m=1}^{2N} \mu_m^\gamma \mathbf{Q}_{im}\mathbf{Q}_{im}^* =\sum _{m=1}^{2N} \frac{1}{N\left ( 1+M_{m}^2 \right )}\mu_{m}^\gamma, \label{frac_degree1} \end{align} \noindent whereas the fractional strength of the nodes at layer 2 is given by \begin{align} \sigma_2^{(\gamma)}&\equiv\left ( \sigma_i^2 \right )^{(\gamma)}=\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{(i+N)(i+N)}^\gamma=\sum _{m=1}^{2N} \mu_m^\gamma \mathbf{Q}_{(i+N)m}\mathbf{Q}_{(i+N)m}^* =\sum _{m=1}^{2N} \frac{M_{m}^2}{N\left ( 1+M_{m}^2 \right )}\mu_{m}^\gamma, \label{frac_degree2} \end{align} \noindent for $i \in \left \{ 1,\cdots,N \right \}$. Note that Eqs.~(\ref{frac_degree1}) and (\ref{frac_degree2}) do not depend on $i$, as expected for circulant layers of interacting cycles. The set of Eqs.~(\ref{element_frac_Lapl})-(\ref{eq_frac_MSD_multpl}) and (\ref{eigen_xi})-(\ref{frac_degree2}) allows one to derive the fractional (combinatorial) supra-Laplacian matrix ($\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$), the normalized fractional supra-Laplacian matrix ($\mathcalboondox{L}^{\left (\gamma \right )}$), the fractional transition matrix ($\mathcalboondox{T}^{\left (\gamma \right )}$), the probability of finding a node-centric CTRW at a given position ($p(t)_{i^\alpha\rightarrow j^\beta}^{\left (\gamma \right )}$), and $\left \langle r^2(t) \right \rangle^{\left (\gamma \right )}$.
Finally, note that, according to Eq.~(\ref{frac_Lapl}), the fractional supra-Laplacian matrix $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$ is a block-matrix whose blocks are circulant and, consequently, so is the normalized fractional supra-Laplacian matrix $\mathcalboondox{L}^{\left (\gamma \right )}$. Therefore, using a strategy similar to that previously described for $\mathbf{\mathcal{L}}^\mathcal{M}$, it is possible to derive the eigenvalue spectrum of $\mathcalboondox{L}^{\left (\gamma \right )}$. After some algebraic manipulation, the resulting eigenvalues are given by: \begin{equation} \lambda_{2m-1}= \frac{\kappa+\sqrt{\kappa^2-\nu}}{2}, \label{RWlapl_eigenA} \end{equation} \noindent and \begin{equation} \lambda_{2m}=\frac{\kappa-\sqrt{\kappa^2-\nu}}{2}, \label{RWlapl_eigenB} \end{equation} \noindent where \begin{align} \kappa &=\frac{\mu_{2m-1}^\gamma }{\left ( 1+\left ( S + C \right )^2 \right )}\left ( \frac{1}{\sigma_1^{(\gamma )}}+ \frac{\left ( S + C \right )^2}{\sigma_2^{(\gamma )}}\right )+\frac{\mu_{2m}^\gamma}{\left ( 1+\left ( S - C \right )^2 \right )}\left ( \frac{1}{\sigma_1^{(\gamma )}}+ \frac{\left ( S - C \right )^2}{\sigma_2^{(\gamma )}}\right ), \end{align} \begin{align} \nu=\frac{16C^2}{\sigma_1^{(\gamma )}\sigma_2^{(\gamma )}}\frac{\mu_{2m-1}^\gamma \mu_{2m}^\gamma}{\left ( 1+\left ( S + C \right )^2 \right )\left ( 1+\left ( S - C \right )^2 \right )}, \label{nu_def} \end{align} \begin{align} S=\frac{A_m^1-A_m^2}{2D_x}, \label{def_S} \end{align} \noindent and $C^2=1+S^2$ for $m \in \left \{ 1,\cdots,N \right \}$ [see Appendix B for further details on the derivation of Eqs.~(\ref{RWlapl_eigenA})-(\ref{def_S})]. Note that the eigenvalues $\mu_m$ and $\mu_m^\gamma$ of $\mathbf{\mathcal{L}}^\mathcal{M}$ and $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$ are not ordered and, consequently, neither are the $\lambda_m$. It is also noteworthy that the $\lambda_m$ (with $m \in \left \{ 1,\cdots,2N \right \}$) depend on the value of the inter-layer diffusion constant $D_x$. By using Eqs.~(\ref{RWlapl_eigenA}) and (\ref{RWlapl_eigenB}), it is possible to obtain the algebraic connectivity of $\mathcalboondox{L}^{\left (\gamma \right )}$, i.e., its second-smallest eigenvalue, denoted here as $\Lambda_2$. When $D_x\rightarrow 0$, $\Lambda_2=\lambda_1$, that is: \begin{equation} \Lambda_2=\frac{1}{2}\left ( 2D_x \right )^\gamma\left ( \frac{1}{\sigma_1^{(\gamma )}}+\frac{1}{\sigma_2^{(\gamma )}} \right ) \sim D_x ^{\gamma}. \label{alge_conn_small} \end{equation} \noindent In the case of $\Lambda_2 \neq \lambda_1$, it is easy to see that $\Lambda_2 = \lambda_{2m} < \lambda_{2m-1}$ for $m \in \left \{ 1,\cdots,N \right \}$ [see Eqs.~(\ref{RWlapl_eigenA}) and (\ref{RWlapl_eigenB})]. Thus, for $J^1\neq J^2$ and $D_x\rightarrow \infty$, the algebraic connectivity can be approximated as: \begin{equation} \Lambda_2\approx\left ( \frac{A_{m}^1+A_{m}^2}{2D_x} \right )^\gamma \sim D_x ^{-\gamma}. \label{alge_conn_large} \end{equation} \noindent According to Eqs.~(\ref{alge_conn_small}) and (\ref{alge_conn_large}), the algebraic connectivity of the normalized supra-Laplacian matrix $\mathcalboondox{L}^{\left (\gamma \right )}$ is a nonmonotonic function of $D_x$. When $D_x\rightarrow0$, the inter-layer diffusion disappears and the dynamics reduces to that of diffusion on single (isolated) layers. On the other hand, when $D_x\rightarrow\infty$, the strengths of the vertices are approximately equal to the inter-layer coupling, i.e., $\sigma_i^{(\gamma)}\approx D_x^\gamma$.
Thus, the node-centric random walkers spend most of their time switching layers instead of jumping to other vertices. Consequently, when the inter-layer diffusion coefficient is very small or very large, the diffusion is hindered. For that reason, there is an optimal range of values for $D_x$. Note that the previous nonmonotonic trends of node-centric CTRWs emerge from the multiplex structure itself and, consequently, they persist even in the case of $\gamma=1$. \section{Results} \label{Sec:results} In this section, we present our results for the diffusion processes of Poissonian node-centric CTRWs on multiplex networks. Our discussion is mainly focused on their rate of convergence to the steady state in such systems, given by the diffusion time scale $\tau\sim1/\Lambda_2$ \cite{masuda17}. To do so, we analyze the nonmonotonic dependence of $\Lambda_2$ on the inter-layer coupling (i.e., $D_x$), as well as the influence of fractional dynamics (i.e., $\gamma$). In subsection \ref{regular_multiplex}, we present our analytical results for $\Lambda_2$ in regular multiplex networks. By analyzing the case of regular multiplexes with $J=1$, in subsection \ref{long_navigation} we show that the enhanced diffusion induced by fractional dynamics is due to the emergence of a new type of long-range navigation between layers. In subsection \ref{sec_MSD}, we provide an example of optimal convergence to the steady state of node-centric CTRWs, discussing the dependence of $\left \langle r^2(t) \right \rangle^{(\gamma)}$ on $D_x$ and $\gamma$. Finally, in subsection \ref{sec_nonreg} we present our results for $\Lambda_2$ in non-regular multiplexes. \subsection{Regular multiplexes} \label{regular_multiplex} Regular multiplex networks meet the condition $J\equiv J^1=J^2$. Therefore, in these systems $\xi_m\equiv \xi_m^1=\xi_m^2=D_x+A_m$ with $A_m\equiv A_m^1=A_m^2$ and $m\in\left \{ 1,\cdots,N \right \}$. Under these circumstances, the eigenvalues of $\mathbf{\mathcal{L}}^\mathcal{M}$ reduce to $\mu_{2m-1}= A_m+2D_x$ and $\mu_{2m}= A_m$ (for $m\in\left \{ 1,\cdots,N \right \}$). Thus, according to Eq.~(\ref{coeff_M_g}), $(M_m)^2=1$ and the fractional strength of the nodes of both layers [see Eqs.~(\ref{frac_degree1}) and (\ref{frac_degree2})] is given by: \begin{align} \sigma^{(\gamma)}&\equiv \sigma_1^{(\gamma)}= \sigma_2^{(\gamma)} =\sum _{m=1}^{N} \frac{1}{2N}\left ( \mu_{2m-1}^\gamma +\mu_{2m}^\gamma\right )=\frac{1}{2N}\sum _{m=1}^{N} \left ( \left ( A_m+2D_x\right )^\gamma + \left ( A_m\right )^\gamma\right ). \label{frac_degree_reg} \end{align} On the other hand, since the fractional strength is a constant for all nodes, the eigenvalues of the normalized supra-Laplacian matrix are given by $\lambda_m=\mu_m/\sigma^{(\gamma)}$ for $m\in\left \{ 1,\cdots,2N \right \}$, while the matrix of the corresponding eigenvectors is $\mathbf{Q}$ [see Eqs.~(\ref{Q_layer1}) and (\ref{Q_layer2})]. Thus, in regular multiplexes the algebraic connectivity can be calculated as: \begin{equation} \Lambda_2= \left\{ \begin{matrix} \left ( 2D_x \right )^\gamma /\sigma^{(\gamma )} &\mathrm{for}\; D_x\leq {A_c}/{2}, \\ A_c^\gamma /\sigma^{(\gamma)} & \mathrm{for}\; D_x \geq {A_c}/{2}, \end{matrix} \right. \label{algebraic_reg} \end{equation} \noindent where $c\in \left \{ 2,N \right \}$ represents the natural number that minimizes $A_m$ (with $m>1$) [see Eq.~(\ref{eigen_xi})]. According to Eq.~(\ref{algebraic_reg}), $\Lambda_2$ reaches a global maximum at $D_x = {A_c}/{2}$, as illustrated by the sketch below.
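Indeed, Eq.~(\ref{algebraic_reg}) is straightforward to evaluate numerically. A minimal sketch, assuming the helper \texttt{A\_vals} from the previous snippet, is:

\begin{verbatim}
# Algebraic connectivity of a regular multiplex (J^1 = J^2 = J).
import numpy as np

def lambda2_regular(N, J, Dx, gamma):
    A = A_vals(N, J)
    sigma = ((A + 2 * Dx)**gamma + A**gamma).sum() / (2 * N)
    Ac = A[1:].min()                 # smallest nonzero A_m (c = 2 or N)
    return min((2 * Dx)**gamma, Ac**gamma) / sigma

N, J = 501, 10
Dx_opt = A_vals(N, J)[1:].min() / 2  # optimal coupling, independent of gamma
for gamma in (1.0, 0.5):
    print(gamma, lambda2_regular(N, J, Dx_opt, gamma))
\end{verbatim}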
Such a value of the inter-layer diffusion constant therefore guarantees the fastest convergence to the steady state of node-centric CTRWs in these regular multiplexes (for an example see subsection \ref{sec_MSD}). On the other hand, it is worth mentioning that $D_x = {A_c}/{2}$ does not depend on $\gamma$. Therefore, for a given $J$ and $N$, the optimal value remains the same. However, the smaller the $\gamma$, the larger the optimal algebraic connectivity. Thus, for a given value of $N$ and $J$, the more intense the fractional diffusion (i.e., the smaller the parameter $\gamma$), the larger $\Lambda_2$ is and, therefore, the faster the convergence to the steady state is ($\tau\sim1/\Lambda_2$). In Fig.~\ref{lambda_2_reg}(a), we show the dependence of $\Lambda_2$ on $D_x$ for several regular multiplex networks with two layers. As can be observed, the numerical results are in excellent agreement with Eq.~(\ref{algebraic_reg}). Finally, note that, according to Eqs.~(\ref{eigen_xi}) and (\ref{algebraic_reg}), for $c=2$ and fixed values of $J$ and $\gamma$, the smaller the system size $N$, the larger $D_x=A_c/2$ and $\Lambda_2$, corroborating the expected faster convergence to the steady state. \begin{figure}[h!] \centering \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig1a} \label{} } \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig1b} \label{} } \caption{Dependence of $\Lambda_2$ on $D_x$ for regular multiplex networks with two layers. (a) Results for $N=501$ nodes: $J=1$ and $\gamma=1$ (blue circles), $J=10$ and $\gamma=1$ (red diamonds), $J=1$ and $\gamma=0.5$ (green triangles), $J=10$ and $\gamma=0.5$ (magenta hexagons). (b) Results for $J=1$: $N=501$ and $\gamma=1$ (blue circles), $N=251$ and $\gamma=1$ (red diamonds), $N=501$ and $\gamma=0.5$ (green triangles), and $N=251$ and $\gamma=0.5$ (magenta hexagons). Hollow symbols represent the results obtained from computer simulations, while continuous lines show the findings obtained from Eq.~(\ref{algebraic_reg}). Solid black symbols indicate the optimal value $D_x = {A_c}/{2}$, highlighting that, for fixed $J$ and $N$, they do not depend on $\gamma$.} \label{lambda_2_reg} \end{figure} \subsection{Emergence of interlayer long-range navigation} \label{long_navigation} In this subsection we explore the navigation strategy of node-centric CTRWs on regular multiplex networks that are formed by cycle graphs (i.e., $J=1$). We analyze the transition probability between two nodes $i$ and $j$ that are located in different layers. Let us suppose that $i$ is at layer 1 and $j$ is at layer 2. Following Refs.
\cite{riascos14,riascos15,michelitsch19}, by using Eqs.~(\ref{Q_layer1})-(\ref{coeff_M_g}), it is possible to approximate the element of the fractional supra-Laplacian that refers to $i$ and $j$ as: \begin{align} \left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{i(j+N)}^\gamma \approx \frac{1}{2N} \sum _{m=1}^{N} \left ( A_m^\gamma -\gamma A_m \left (2D_x \right )^{\gamma-1} \right )\exp\left ( \mathfrak{i}\theta_{m}d\right )-\frac{1}{2N}\sum _{m=1}^{N} \left ( 2D_x \right )^\gamma \exp\left ( \mathfrak{i}\theta_{m}d\right ), \label{element_frac_Lapl_regular} \end{align} \noindent for $\left | D_x \right |>\left | A_m \right |$ and $D_x>0$, where $d\equiv d_{i^1\rightarrow j^1}$ is the shortest path distance between $i$ and $j$ at layer 1, $\theta_m=2\pi\left ( m-1 \right )/N$, and $A_m=2-2\cos\left ( \theta_m\right )$ [see Eq.~(\ref{eigen_xi}) for $J^1=J^2=1$ and Appendix C for further details of these derivations]. Besides that, note that $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{i(j+N)}^\gamma=0$ for $D_x=0$. On the other hand, by using Eq.~(\ref{frac_degree_reg}), a similar expression can be derived for the fractional strength: \begin{align} \sigma^{(\gamma)} \approx \frac{1}{2N} \sum _{m=1}^{N} \left ( A_m^\gamma +\gamma A_m \left (2D_x \right )^{\gamma-1} \right )\exp\left ( \mathfrak{i}\theta_{m}0\right )+\frac{1}{2N}\sum _{m=1}^{N} \left ( 2D_x \right )^\gamma \exp\left ( \mathfrak{i}\theta_{m}0\right ). \label{frac_degree_regJ1} \end{align} Following Refs.~\cite{riascos14,riascos15,michelitsch19}, Eqs.~(\ref{element_frac_Lapl_regular}) and (\ref{frac_degree_regJ1}) can be expressed in terms of an integral in the thermodynamic limit (i.e., $N\rightarrow\infty$) which can be explored analytically (see Ref.~\cite{michelitsch19} for a discussion on that integral, and Appendix C). The resulting expressions are given by: \begin{align} \left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{i(j+N)}^\gamma \approx -\frac{1}{2}\gamma \left ( 2D_x \right )^{\gamma-1} K_d-\frac{1}{2}\left ( 2D_x \right )^{\gamma}\delta_{d0}-\frac{1}{2}\frac{\Gamma \left ( d-\gamma \right )\Gamma \left ( 2\gamma +1 \right )}{\pi \Gamma \left ( 1+\gamma +d \right )}\sin\left ( \pi \gamma \right ), \label{element_frac_Lapl_regular_inf} \end{align} \noindent and \begin{align} \sigma^{(\gamma)} \approx \gamma \left ( 2D_x \right )^{\gamma-1} -\frac{1}{2}\frac{\Gamma \left ( -\gamma \right )\Gamma \left ( 2\gamma +1 \right )}{\pi \Gamma \left ( 1+\gamma \right )}\sin\left ( \pi \gamma \right )+\frac{1}{2}\left ( 2D_x \right )^{\gamma}, \label{frac_degree_regJ1_inf} \end{align} \noindent where \begin{align} K_d=\left\{\begin{matrix} 2 & \mathrm{if}\;d=0, \\ -1 & \mathrm{if}\;d=1,\\ 0 & \mathrm{otherwise}, \end{matrix}\right. \label{Eq_Kd} \end{align} \noindent and $\Gamma (x)$ is the Gamma function [see Appendix C for further details on the derivation of Eqs.~(\ref{element_frac_Lapl_regular_inf})-(\ref{Eq_Kd})]. According to Eq.~(\ref{frac_RWLapl}), in the thermodynamic limit, the elements of the transition matrix between two nodes $i$ and $j$ that belong to different layers can be approximated by: \begin{align} \mathcalboondox{T}_{i(j+N)}^{(\gamma)}=\delta_{i(j+N)}-\frac{\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{i(j+N)}^\gamma}{\sigma^{(\gamma)}}\approx \frac{1}{2\sigma^{(\gamma)}}\frac{\Gamma \left ( d-\gamma \right )\Gamma \left ( 2\gamma +1 \right )}{\pi \Gamma \left ( 1+\gamma +d \right )}\sin\left ( \pi \gamma \right ), \end{align} \noindent for $d\gg1$ (i.e., $K_d=0$).
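As a parenthetical numerical check (our own, not part of the derivation), the thermodynamic-limit expressions above can be evaluated directly with scipy's Gamma function:

\begin{verbatim}
# Large-d interlayer transition element in the thermodynamic limit.
import numpy as np
from scipy.special import gamma as G

def sigma_inf(gam, Dx):
    return (gam * (2 * Dx)**(gam - 1)
            - G(-gam) * G(2 * gam + 1) * np.sin(np.pi * gam)
              / (2 * np.pi * G(1 + gam))
            + 0.5 * (2 * Dx)**gam)

def T_inter(d, gam, Dx):
    return (G(d - gam) * G(2 * gam + 1) * np.sin(np.pi * gam)
            / (2 * np.pi * G(1 + gam + d) * sigma_inf(gam, Dx)))

print(T_inter(d=50, gam=0.5, Dx=2.0))
\end{verbatim}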
By using the asymptotic property $\Gamma \left ( x+\gamma \right )\approx\Gamma \left ( x \right )x^\gamma$, it is possible to express $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ as follows: \begin{equation} \mathcalboondox{T}_{i(j+N)}^{(\gamma)} \sim \frac{\Gamma \left ( 2\gamma +1 \right )}{2\sigma^{(\gamma)}}\frac{\sin\left ( \pi \gamma \right )}{\pi}d^{-\varrho}, \label{power_decay} \end{equation} \noindent where $\varrho=1+2\gamma$. Consequently, in the simple case of regular multiplexes with $J=1$, a power-law relation emerges for the transitions between the two layers when $d\gg1$ and $N\rightarrow\infty$. Since $0<\gamma<1$, the long-range transitions between different layers decay with an exponent $1<\varrho<3$, in a similar way as in the case of fractional diffusion in monolayer regular networks \cite{riascos14,riascos15,michelitsch19}. In Fig.~\ref{ltransicion_gamma_0p5}(a) we show the dependence of $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ on $d$ for regular multiplex networks with two layers and $J=1$. As can be observed, the predicted exponent $\varrho$ is in excellent agreement with the results obtained from Eq.~(\ref{frac_RWLapl}). Besides, Fig.~\ref{ltransicion_gamma_0p5}(a) shows that, for a given $D_x$, the larger the value of $\gamma$, the larger $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ is when $d\sim1$ (see inset). On the other hand, in Fig.~\ref{ltransicion_gamma_0p5}(b) we illustrate the dependence of $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ on $D_x$. As expected, the larger $D_x$, the smaller (larger) the element of the transition matrix when $d\gg1$ ($d\sim1$). In the case of $d\gg1$, an increase in the inter-layer diffusion coefficient also increases $\sigma^{(\gamma)}$, and the latter is inversely proportional to $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ [see Eq.~(\ref{frac_RWLapl})]. The results in Fig.~\ref{ltransicion_gamma_0p5}(b) also confirm that $\varrho$ does not depend on $D_x$ (when $d\gg1$). \begin{figure}[h!] \centering \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig2a} \label{} } \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig2b} \label{} } \caption{(a) Dependence of $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ on $d$ for regular multiplex networks with two layers, $N=20001$ nodes, $J=1$, and $D_x=2$: $\gamma=0.25$ (red circles), $\gamma=0.5$ (blue diamonds), and $\gamma=0.9$ (black hexagons) [see Eq.~(\ref{frac_RWLapl})]. Inset: Detail of the previous series when $d\sim1$. (b) Dependence of $\mathcalboondox{T}_{i(j+N)}^{(\gamma)}$ on $d$ for several values of $D_x$, when $\gamma=0.5$, $N=20001$, and $J=1$: $D_x= 0.01$ (blue circles), $D_x=1$ (red squares), and $D_x= 100$ (black triangles). In both panels, the color dashed lines show power-law decay with exponent $\varrho= 1+2\gamma$ [see Eq.~(\ref{power_decay})].} \label{ltransicion_gamma_0p5} \end{figure} Finally, it is worth mentioning that fractional diffusion induces a novel mechanism of inter-layer diffusion: fractional node-centric CTRWs are allowed to switch layer and jump to another vertex that may be very far away. For instance, the L\'evy RWs of \cite{guo16} are not allowed to switch layer and hop in the same jump. Likewise, the physical RWs presented in \cite{dedomenico14} reduce to classic RWs on monolayer networks, which are subject to the \textbf{NN} paradigm. By contrast, fractional node-centric CTRWs exhibit long-range hops even on top of monolayer networks (see \cite{riascos14,riascos15,michelitsch19}).
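The power-law prediction can also be checked at finite $N$ by combining the earlier sketches (illustrative only; the fit window is arbitrary):

\begin{verbatim}
# Fit the decay exponent of interlayer transitions; compare to 1 + 2*gamma.
import numpy as np

N, gam, Dx = 1001, 0.5, 2.0
A = interacting_cycle(N, 1)
_, _, Tg = fractional_matrices(supra_laplacian(A, A, D12=Dx), gam)
d = np.arange(1, N // 2)         # ring distances from node 0
T_vals = Tg[0, N + d]            # hops from node 0, layer 1, to layer 2
slope = np.polyfit(np.log(d[20:200]), np.log(T_vals[20:200]), 1)[0]
print(slope, -(1 + 2 * gam))     # the two values should be close
\end{verbatim}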
\subsection{Gaussian enhanced diffusion: MSD on regular multiplex networks} \label{sec_MSD} \begin{figure}[h!] \centering \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig3a} \label{MSD_comp_a} } \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig3b} \label{MSD_comp_b} } \caption{Time evolution of $\left \langle r^2(t) \right \rangle^{\left (\gamma \right )}$ on a regular multiplex network with $N=501$ and $J=10$, for $\gamma=1$ (a) and $\gamma=0.5$ (b): $D_x=10^{-4}$ (blue circles), $D_x=A_c/2\approx 0.03$ (red squares), and $D_x=10^{2}$ (black triangles). Red dashed lines indicate the results for the optimal $D_x=A_c/2$. The black dashed line is a guide for the eye to identify a Gaussian behavior. The insets show details of the very small differences between the results for $D_x=10^{-4}$ and $D_x=A_c/2$.} \label{MSD_comp} \end{figure} In this subsection, we show an example of the nonmonotonic increase in the rate of convergence to the steady state of node-centric CTRWs. To do so, we study the dependence of the mean square displacement of the walkers, $\left \langle r^2(t) \right \rangle^{(\gamma)}$, on $D_x$ and $\gamma$ [see Eqs.~(\ref{prob_walker_frac}) and (\ref{eq_frac_MSD_multpl})]. In Fig.~\ref{MSD_comp} we show the results for a regular multiplex network without fractional diffusion ($\gamma=1.0$, see left panel) and with it ($\gamma=0.5$, see right panel), for several values of $D_x$: the optimal inter-layer coefficient $D_x=A_c/2$, and two additional values, which are very large and very small in comparison to it. As expected, the results show that, for a given value of $\gamma$, the fastest convergence to the steady state corresponds to the optimal $D_x$. We can observe that the differences between the results for $D_x=A_c/2$ and $D_x\ll A_c/2$ are very small. In both cases the layers are barely coupled due to the very small value of $A_c$ [see Eq.~(\ref{eigen_xi}) when $c=2$]. Nonetheless, when $D_x=A_c/2$, the inter-layer connection is stronger and $\Lambda_2$ reaches a maximum, i.e., the diffusion is enhanced. In the case of very large values of $D_x$, the diffusion of the node-centric CTRWs is hindered since, as $\sigma^{(\gamma)}\approx D_x^\gamma$, the walkers spend most of their time switching layers instead of hopping to other nodes inside the layers. On the other hand, for a given value of $D_x$, it can be seen that the smaller the $\gamma$, the larger $\Lambda_2$, and the faster the diffusion (the diffusion time scale is $\tau\sim1/\Lambda_2$). Thus, the previous findings are in good agreement with Eq.~(\ref{algebraic_reg}) and the data in Fig.~\ref{lambda_2_reg}(a). Finally, it is worth mentioning that the increased algebraic connectivity induced by fractional dynamics is reflected in the long-range navigation of node-centric walkers. As can be observed in Fig.~\ref{MSD_comp}(b), in the case of $\gamma=0.5$, $\left \langle r^2(t) \right \rangle^{(\gamma)}\approx t$ (i.e., $\varepsilon =1$). Thus, a Gaussian behavior emerges from the fractional dynamics in finite circulant multiplex networks with two layers. Other examples of circulant multiplex networks with different values of $N$, $J^1$, $J^2$, and $\gamma$ are presented in the Supplemental Material accompanying this paper, and all of them show perfect agreement with the developed analysis.
\subsection{Optimal diffusion on non-regular multiplexes} \label{sec_nonreg} In Fig.~\ref{Lambda_2_gamma_1_varias_top} we present examples of multiplexes with circulant and noncirculant layers, for $\gamma=1$ (left panel) and $\gamma=0.5$ (right panel). As can be seen, in all cases the nonmonotonic trend of $\Lambda_2$ is present. Indeed, when $D_x\rightarrow0$ ($D_x\rightarrow\infty$), its dependence on $D_x$ behaves as $D_x ^{\gamma}$ ($D_x ^{-\gamma}$). For that reason, the rate of convergence to the steady state of node-centric CTRWs depends nonmonotonically on $D_x$. Besides, in all the topologies tested, the smaller the value of $\gamma$, the larger the global maximum of $\Lambda_2$. Therefore, there exist optimal combinations of $D_x$ and $\gamma$ that enhance the diffusion of node-centric RWs and make it faster than that obtained when the layers are fully coupled and $\gamma=1$. In the case of circulant multiplexes, the findings presented here are in excellent agreement with Eqs.~(\ref{alge_conn_small}) and (\ref{alge_conn_large}). On the other hand, these results suggest that the apparent plateau observed in circulant multiplexes is not present in other topological configurations. It seems that the more random the layers are, the larger $\Lambda_2$ is. For that reason, finding an optimal value of $D_x$ is more crucial in noncirculant multiplex networks than in circulant ones. \begin{figure}[h!] \centering \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig4a} \label{} } \subfloat[]{ \centering \includegraphics[width=0.5\linewidth]{Fig4b} \label{} } \caption{Dependence of $\Lambda_2$ on $D_x$ for multiplex networks with two layers and $N=1001$ nodes, when $\gamma=1$ (a) and $\gamma=0.5$ (b): two circulant layers ($J^1=1$, $J^2=5$, red line), two Barab\'asi-Albert (BA) networks ($m_1=1$, $m_2=2$, blue line), two Erd\H{o}s-R\'enyi (ER) random graphs ($p_1=0.1$, $p_2=0.05$, black continuous line), and BA-ER layers ($m=1$, $p=0.05$, magenta line) \cite{note1}. Symbols represent the global maximum of $\Lambda_2$ for each series. The black dash-dotted line is a guide for the eye proportional to $D_x^\gamma$, whereas the black dotted line is proportional to $D_x^{-\gamma}$.} \label{Lambda_2_gamma_1_varias_top} \end{figure} \section{Conclusions} \label{conclusions} In this work, we have extended the continuous-time fractional diffusion framework (for simple networks) introduced in Refs.~\cite{riascos14,riascos15,michelitsch19} to multiplex networks with undirected and unsigned layers. Hence, fractional diffusion is defined here in terms of the fractional supra-Laplacian matrix of the system, i.e., the combinatorial supra-Laplacian matrix of the multiplex, $\mathbf{\mathcal{L}}^\mathcal{M}$, raised to a power $\gamma$, where $0<\gamma<1$. For the purpose of deriving exact analytical expressions, we have considered only diffusion processes in which $\mathbf{\mathcal{L}}^\mathcal{M}$ is a symmetric matrix. We have focused our discussion on the characterization of Poissonian node-centric continuous-time random walks on circulant multiplexes with two layers, and explored the combined effect of inter- and intra-layer diffusion with fractional dynamics.
We have directed our attention to (i) the effect of the fractional dynamics on the nonmonotonic increase in the rate of convergence to the steady state of such processes, (ii) the existence of an optimal regime that depends both on the inter-layer coupling $D_x$ and on the fractional parameter $\gamma$, and (iii) the emergence of a new type of long-range navigation on multiplex networks. For circulant multiplexes, analytical expressions were obtained for the main quantities involved in these dynamics, namely: the eigenvalues and eigenvectors of the combinatorial supra-Laplacian matrix and of the normalized supra-Laplacian matrix, the fractional strength of the nodes, the fractional transition matrix, the probability of finding the walkers at time $t$ on any node of a given layer, and the mean-square displacement for fractional dynamics. For other multiplex topologies some of these quantities were obtained by numerical evaluation. We have shown that, for a given circulant multiplex network, the more intense the fractional diffusion (i.e., the smaller the parameter $\gamma$), the larger the algebraic connectivity of the normalized supra-Laplacian matrix, denoted as $\Lambda_2$. Since the diffusion time scale $\tau$ of the Poissonian node-centric CTRWs on the multiplex is inversely proportional to $\Lambda_2$, the smaller the value of $\gamma$, the faster the convergence to the steady state (i.e., the smaller $\tau$ is). Additionally, in multiplexes with two layers, both $\Lambda_2$ and $\tau$ exhibit a nonmonotonic dependence on $D_x$, whether or not fractional diffusion is present. Consequently, the rate of convergence to the steady state can be optimized when using fractional dynamics. On the other hand, in the simple case of circulant (regular) multiplexes with $J=1$, we have illustrated that, once fractional diffusion is present (i.e., $0<\gamma<1$), long-range transitions between different layers appear. Indeed, a new continuous-time random walk process appears, since here walkers are allowed to (i) switch layer and (ii) perform long hops to another distant vertex in the same jump. Additionally, in the thermodynamic limit, we have shown that the probability of long-range transitions decays according to a power law with exponent $1<\varrho<3$, in a similar way as in the case of fractional diffusion in monolayer regular networks \cite{riascos14,riascos15,michelitsch19}. We have also shown that the larger $D_x$, the smaller (larger) the transition probability between nodes that are very far away (very close). Finally, the evaluation of $\left \langle r^2(t) \right \rangle$ indicates the existence of an optimal regime that depends both on $D_x$ and on the fractional parameter $\gamma$. On the other hand, we have shown that the enhanced diffusion induced by fractional dynamics on finite circulant multiplexes exhibits a Gaussian behavior ($\left \langle r^2(t) \right \rangle\sim t$) before saturation appears. The introduction of fractional dynamics on multiplex networks opens new possibilities for analyzing and optimizing (anomalous) diffusion on such arrangements. For instance, given the attention devoted recently to optimal diffusion dynamics on directed multiplex networks, the generalization of this framework to such systems can be of great interest. On the other hand, the emerging long-range transitions can enhance the efficiency of navigation on non-circulant multiplex topologies.
While studying the dependence of $\Lambda_2$ on $D_x$ in circulant multiplexes, we have found an apparent plateau in $\Lambda_2$ that is not present in other topological configurations. Indeed, it seems that the more random the layers are, the more pronounced the optimal $\Lambda_2$ is. Thus, finding optimal combinations of $D_x$ and $\gamma$ seems more crucial in noncirculant multiplex networks than in circulant ones. For that reason, an exhaustive investigation of the dependence of $\Lambda_2$ on noncirculant topologies with fractional diffusion is being conducted and will be published elsewhere. \begin{acknowledgments} This work was supported by the Brazilian agencies CAPES and CNPq through the Grants 151466/2018-1 (AA-P) and 305060/2015-5 (RFSA). RFSA also acknowledges the support of the National Institute of Science and Technology for Complex Systems (INCT-SC Brazil). \end{acknowledgments} \section*{APPENDIX A: Eigenvalues and Eigenvectors of the combinatorial supra-Laplacian matrix for circulant multiplexes with two layers.} \label{Eigen_Comb_Supr_L_App} Let $\mathcal{M}$ be an undirected multiplex network with $N$ nodes and $M=2$ layers, all of which consist of interacting cycle graphs. According to Eqs.~(\ref{comb_circ_ini1}) and (\ref{comb_circ_ini2}), the eigenvalues of $\mathbf{\mathcal{L}}^\mathcal{M}$, denoted as $\mu_g$ for $g\in\left \{ 1,\cdots,2N \right \}$, meet the following condition: \begin{equation} \mathrm{det}\left ( \Upsilon \right ) =\mathrm{det}\left ( \begin{matrix}\mathrm{\Xi}^1 -\mu_g\mathbf{I}_N & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathrm{\Xi}^2 -\mu_g\mathbf{I}_N \end{matrix} \right )=0, \label{comb_circ_ini3} \end{equation} \noindent where $\mathrm{det}\left ( \mathbf{X} \right )$ refers to the determinant of a matrix $\mathbf{X}$. Taking into account that (i) the blocks of $\Upsilon$ are square matrices of the same order, and (ii) the matrices $-D_x\mathbf{I}_N$ and $\mathrm{\Xi}^2 -\mu_g\mathbf{I}_N$ commute, it is possible to show \cite{Silvester00} that Eq.~(\ref{comb_circ_ini3}) reduces to \begin{equation} \mathrm{det}\left ( (\mathrm{\Xi}^1 -\mu_g\mathbf{I}_N)(\mathrm{\Xi}^2 -\mu_g\mathbf{I}_N)-(-D_x\mathbf{I}_N)(-D_x\mathbf{I}_N)\right )=0. \label{comb_circ_ini4} \end{equation} \noindent By definition, $\mathrm{\Xi}^\alpha=\mathrm{diag}\left ( \xi_1^\alpha ,\cdots, \xi_{N}^\alpha \right )$, where $\xi_m^\alpha$ are the eigenvalues of $\mathbf{C}^\alpha$ [see Eqs.~(\ref{comb_circ_ini1}) and (\ref{eigen_xi})], and $m\in\left \{ 1,\cdots,N \right \}$. Consequently, Eq.~(\ref{comb_circ_ini4}) is equivalent to the following $N$ equations: \begin{equation} (\mu_g)^2-\mu_g(\xi_m^1+\xi_m^2)+(\xi_m^1\xi_m^2-D_x^2)=0. \label{comb_circ_ini5} \end{equation} \noindent For a given value of $m$, we denote the two roots of Eq.~(\ref{comb_circ_ini5}) as $\mu_{2m-1}$ and $\mu_{2m}$, respectively. Thus, we obtain Eqs.~(\ref{lapl_eigenA}) and (\ref{lapl_eigenB}).
On the other hand, the eigenvector corresponding to $\mu_g$, denoted by $\vec{\iota}_g$, can be calculated from \begin{equation} \left ( \begin{matrix}\mathrm{\Xi}^1 -\mu_g\mathbf{I}_N & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathrm{\Xi}^2 -\mu_g\mathbf{I}_N \end{matrix} \right )\vec{\iota}_g= \left ( \begin{matrix}\mathrm{\Xi}^1 -\mu_g\mathbf{I}_N & -D_x\mathbf{I}_N\\ -D_x\mathbf{I}_N & \mathrm{\Xi}^2 -\mu_g\mathbf{I}_N \end{matrix} \right )\left ( \begin{matrix} \vec{v}_g^1\\ \vec{v}_g^2 \end{matrix} \right )=0, \label{comb_circ_ini6} \end{equation} \noindent where $\vec{v}_g^1$ and $\vec{v}_g^2$ are $N\times 1$ vectors. According to Eq.~(\ref{comb_circ_ini6}), the elements of $\vec{v}_g^1$ and of $\vec{v}_g^2$ should simultaneously meet the following conditions: \begin{equation} \left\{\begin{matrix} \frac{\xi_m^1-\mu_g}{D_x}\left ( \vec{v}_g^1 \right )_m=\left ( \vec{v}_g^2 \right )_m\\ \frac{\xi_m^2-\mu_g}{D_x}\left ( \vec{v}_g^2 \right )_m=\left ( \vec{v}_g^1 \right )_m \end{matrix}\right.. \label{comb_circ_ini7} \end{equation} \noindent It is possible to see that the previous restrictions are equivalent to Eq.~(\ref{comb_circ_ini5}). Therefore, in the case of $g\in\left \{ 2m-1,\,2m \right \}$, Eq.~(\ref{comb_circ_ini7}) requires that only the elements $\left ( \vec{v}_g^1 \right )_m$ and $\left ( \vec{v}_g^2 \right )_m$ are non-zero. Consequently, to normalize $\vec{\iota}_g$, we set $\left ( \vec{v}_g^1 \right )_m=1/T_g$ and $\left ( \vec{v}_g^2 \right )_m=(\xi_m^1-\mu_g)/(D_xT_g)$, where \begin{equation} T_g = \sqrt{1+\left ( \frac{\xi_m^1-\mu_g}{D_x} \right )^2}=\sqrt{1+\left ( M_g \right )^2}, \label{comb_circ_ini9} \end{equation} \noindent for $g\in\left \{ 2m-1,\,2m \right \}$ [see Eq.~(\ref{coeff_M_g})]. According to the previous results, given the matrix defined by the right-hand side of Eq.~(\ref{comb_circ_ini2}) and its corresponding eigenvectors $\vec{\iota}_g$, the matrix $\mathbf{\Omega}=\left ( \begin{matrix} \vec{\iota}_1 &\cdots & \vec{\iota}_{2N}\end{matrix} \right )$ has elements \begin{equation} \left.\begin{matrix} \mathbf{\Omega}_{fg}=1/T_g\\ \mathbf{\Omega}_{(f+N)g}=M_g/T_g\ \end{matrix}\right\}, \label{comb_circ_ini9_2} \end{equation} \noindent for $g\in\left \{ 1,\cdots,2N \right \}$ and $f=1+\left \lfloor (g-1)/2 \right \rfloor$ (i.e., $f\in\left \{ 1,\cdots,N \right \}$), and zero otherwise. Finally, the eigenvectors of the combinatorial supra-Laplacian matrix $\mathbf{\mathcal{L}}^\mathcal{M}$, i.e., $\vec{\psi}_g$, and the matrix $\mathbf{Q}=\left ( \begin{matrix} \vec{\psi}_1 &\cdots & \vec{\psi}_{2N}\end{matrix} \right )$ [in Eqs.~(\ref{Q_layer1}) and (\ref{Q_layer2})] can be obtained from \begin{equation} \mathbf{Q}=\mathbf{F}\mathbf{\Omega}, \label{comb_circ_ini10} \end{equation} \noindent as can be seen from Eq.~(\ref{def_F}). \section*{APPENDIX B: Eigenvalues and Eigenvectors of the normalized supra-Laplacian matrix for circulant multiplexes with two layers.} \label{Eigen_Norm_Supr_L_App} Let $\mathcal{M}$ be an undirected $N-$node multiplex network in which all its $M=2$ layers consist of interacting cycle graphs.
According to Eqs.~(\ref{frac_degree})-(\ref{frac_RWLapl}), the normalized fractional supra-Laplacian matrix of the multiplex $\mathcalboondox{L}^{\left (\gamma \right )}$ is given by $\mathcalboondox{L}^{\left (\gamma \right )}=\mathcal{K}^{-1}\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$, where \begin{equation} \mathcal{K}^{-1}=\left ( \begin{matrix}\frac{1}{\sigma_1^{\left (\gamma \right )}}\mathbf{I}_N & \mathbf{0}\\ \mathbf{0} & \frac{1}{\sigma_2^{\left (\gamma \right )}}\mathbf{I}_N \end{matrix} \right ). \label{comb_circ2_ini1} \end{equation} \noindent Taking into account Eqs.~(\ref{frac_Lapl}), (\ref{def_F}), (\ref{comb_circ_ini9_2}) and (\ref{comb_circ_ini10}), it is possible to write \begin{align} \mathbf{F}^{-1} \mathcalboondox{L}^{\left (\gamma \right )}\mathbf{F}&=\mathbf{F}^{-1} \mathcal{K}^{-1} \mathbf{F}\mathbf{\Omega} \mathbf{\Delta}^\gamma \mathbf{\Omega}^{-1} \mathbf{F}^{-1} \mathbf{F}\nonumber \\ &=\mathcal{K}^{-1}\mathbf{\Omega} \mathbf{\Delta}^\gamma \mathbf{\Omega}^{T}, \label{comb_circ2_ini2} \end{align} \noindent where $\mathbf{\Omega}^{-1} = \mathbf{\Omega}^{T}$, since $\mathbf{Q}=\mathbf{F}\mathbf{\Omega}$ and $\mathbf{Q}\mathbf{Q}^\dagger=\mathbf{I}_{2N}$, with $\mathbf{I}_{2N}$ the $2N \times 2N$ identity matrix. Thus, the eigenspectrum of $\mathcalboondox{L}^{\left (\gamma \right )}$ is equal to that of $\mathcal{K}^{-1}\mathbf{\Omega} \mathbf{\Delta}^\gamma \mathbf{\Omega}^{T}$. Considering that $\mathbf{\Delta}^\gamma= \mathrm{diag}\left ( \mu_1^\gamma,\mu_2^\gamma,\cdots,\mu_{2N}^\gamma \right )$, as well as the definition of $\Omega$ [Eq.~(\ref{comb_circ_ini9_2})], the right-hand side of Eq.~(\ref{comb_circ2_ini2}) can be rewritten as \begin{equation} \mathcal{K}^{-1}\mathbf{\Omega} \mathbf{\Delta}^\gamma \mathbf{\Omega}^{T}=\left ( \begin{matrix} \frac{1}{\sigma_1^{(\gamma )}}\mathbf{D}^1 & \frac{1}{\sigma_1^{(\gamma )}}\mathbf{D}^3\\ \frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^3 & \frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^2 \end{matrix} \right ), \label{comb_circ2_ini3} \end{equation} \noindent where $\mathbf{D}^{1}$, $\mathbf{D}^{2}$ and $\mathbf{D}^{3}$ are $N\times N$ diagonal matrices, whose respective $m$-th diagonal elements are given by \begin{equation} \left ( \mathbf{D}^1 \right )_m=\frac{\mu_{2m-1}^\gamma }{1+(S+C)^2}+\frac{\mu_{2m}^\gamma }{1+(S-C)^2}, \label{comb_circ2_ini4} \end{equation} \begin{equation} \left ( \mathbf{D}^2 \right )_m=\frac{\mu_{2m-1}^\gamma (S+C)^2}{1+(S+C)^2}+\frac{\mu_{2m}^\gamma (S-C)^2}{1+(S-C)^2}, \label{comb_circ2_ini5} \end{equation} \noindent and \begin{equation} \left ( \mathbf{D}^3 \right )_m=\frac{\mu_{2m-1}^\gamma (S+C)}{1+(S+C)^2}+\frac{\mu_{2m}^\gamma (S-C)}{1+(S-C)^2}, \label{comb_circ2_ini6} \end{equation} \noindent for $m\in\left \{ 1,\cdots,N \right \}$, where \begin{align} S=\frac{\xi_m^1-\xi_m^2}{2D_x}=\frac{A_m^1-A_m^2}{2D_x}, \end{align} \noindent and $C^2=1+S^2$. Notice that, by making use of Eqs.~(\ref{coeff_M_g}) and (\ref{comb_circ_ini9}), it is possible to express some terms in Eqs.~(\ref{comb_circ2_ini4})-(\ref{comb_circ2_ini6}) in terms of $M_g$ and $T_g$. More specifically, we can write $\left ( M_{2m-1} \right )^2=(S+C)^2$ and $\left ( M_{2m} \right )^2=(S-C)^2$, as well as $\left ( T_{2m-1} \right )^2=1+(S+C)^2$ and $\left ( T_{2m} \right )^2=1+(S-C)^2$ [see also the definition of $\Omega$ given by Eq.~(\ref{comb_circ_ini9_2})].
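Before extracting the eigenvalues, the structure of Eqs.~(\ref{comb_circ2_ini4})-(\ref{comb_circ2_ini6}) can be checked numerically in a simple limit. The following Python sketch assumes two identical cycle layers, so that $S=0$ and $C=1$, $(\mathbf{D}^1)_m=(\mathbf{D}^2)_m$ reduces to a half-sum and $(\mathbf{D}^3)_m$ to a half-difference of the branches $\mu_{2m-1}^\gamma$ and $\mu_{2m}^\gamma$; it verifies the corresponding half-sum/half-difference block structure of $\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )^\gamma$ directly in node space, before the normalization by $\mathcal{K}^{-1}$ and the conjugation by $\mathbf{F}$. All parameter values are illustrative.
\begin{verbatim}
import numpy as np

# Illustrative parameters; two identical cycle layers, so S = 0 and C = 1.
N, Dx, gamma = 8, 0.7, 0.5

L1 = 2.0 * np.eye(N)
for i in range(N):
    L1[i, (i + 1) % N] = L1[i, (i - 1) % N] = -1.0
I = np.eye(N)
supra = np.block([[L1 + Dx * I, -Dx * I], [-Dx * I, L1 + Dx * I]])

# Fractional power via the spectral decomposition.
lam, V = np.linalg.eigh(supra)
frac = (V * lam ** gamma) @ V.T

# For identical layers the two eigenvalue branches are mu = lam1 and
# mu = lam1 + 2*Dx, and D^1 = D^2 (half-sums) while D^3 is a half-difference:
w, U = np.linalg.eigh(L1)
A = (U * w ** gamma) @ U.T                 # branch mu_{2m-1}^gamma
B = (U * (w + 2.0 * Dx) ** gamma) @ U.T    # branch mu_{2m}^gamma

block = np.block([[0.5 * (A + B), 0.5 * (A - B)],
                  [0.5 * (A - B), 0.5 * (A + B)]])
assert np.allclose(frac, block)
print("half-sum / half-difference block structure confirmed")
\end{verbatim}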
According to Eq.~(\ref{comb_circ2_ini3}), to calculate the eigenvalues of $\mathcalboondox{L}^{\left (\gamma \right )}$, denoted as $\lambda_g$ for $g\in\left \{ 1,\cdots,2N \right \}$, we solve the following equation: \begin{equation} \mathrm{det}\left ( \Phi \right )=\mathrm{det}\left ( \begin{matrix} \frac{1}{\sigma_1^{(\gamma )}}\mathbf{D}^1 -\lambda_g \mathbf{I}_N& \frac{1}{\sigma_1^{(\gamma )}}\mathbf{D}^3\\ \frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^3 & \frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^2 -\lambda_g \mathbf{I}_N \end{matrix} \right )=0. \label{comb_circ2_ini7} \end{equation} \noindent Since (i) the blocks of $\Phi$ are square matrices of the same order, and (ii) the matrices $\frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^3$ and $\frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^2 -\lambda_g \mathbf{I}_N$ commute, it is possible to show \cite{Silvester00} that Eq.~(\ref{comb_circ2_ini7}) reduces to \begin{equation} \mathrm{det}\left ( \left ( \frac{1}{\sigma_1^{(\gamma )}}\mathbf{D}^1 -\lambda_g \mathbf{I}_N \right )\left ( \frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^2 -\lambda_g \mathbf{I}_N \right )-\left ( \frac{1}{\sigma_1^{(\gamma )}}\mathbf{D}^3\right )\left ( \frac{1}{\sigma_2^{(\gamma )}}\mathbf{D}^3 \right )\right )=0. \label{comb_circ2_ini8} \end{equation} \noindent Finally, the eigenspectrum of $\mathcalboondox{L}^{\left (\gamma \right )}$ is obtained by calculating the roots of the following $N$ equations: \begin{equation} (\lambda_g)^2-\lambda_g\left ( \frac{1}{\sigma_1^{(\gamma )}}\left ( \mathbf{D}^1 \right )_m+\frac{1}{\sigma_2^{(\gamma )}}\left ( \mathbf{D}^2 \right )_m \right )+\left ( \frac{1}{\sigma_1^{(\gamma )}\sigma_2^{(\gamma )}}\left ( \mathbf{D}^1 \right )_m\left ( \mathbf{D}^2 \right )_m - \frac{1}{\sigma_1^{(\gamma )}\sigma_2^{(\gamma )}} \left ( \left ( \mathbf{D}^3 \right )_m \right )^2 \right )=0, \label{comb_circ2_ini9} \end{equation} \noindent which are equivalent to Eq.~(\ref{comb_circ2_ini8}). For a given value of $m\in\left \{ 1,\cdots,N \right \}$, we denote the two roots of Eq.~(\ref{comb_circ2_ini9}) as $\lambda_{2m-1}$ and $\lambda_{2m}$, respectively. Thus, we obtain Eqs.~(\ref{RWlapl_eigenA})-(\ref{nu_def}). \section*{APPENDIX C: Transitions between nodes that are located in different layers.} \label{Transitions_finite} Let $\mathcal{M}$ be an undirected multiplex network with $M=2$ layers, both of which are cycle graphs (i.e., $J=1$). Let us consider two nodes in $\mathcal{M}$, which are located in different layers: $i$ is at layer 1 and $j$ is at layer 2. Following Refs.
\cite{riascos14,riascos15,michelitsch19}, by using Eqs.~(\ref{Q_layer1})-(\ref{coeff_M_g}) and conducting the necessary manipulations in Eq.~(\ref{element_frac_Lapl}), the element of the fractional supra-Laplacian that refers to $i$ and $j$ can be approximated as follows: \begin{align} \left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{i(j+N)}^\gamma &=\sum _{m=1}^{N} -\frac{1}{2N}\left ( A_m+2D_x \right )^\gamma \exp\left ( \mathfrak{i}\frac{2\pi }{N}\left ( j-i \right ) \left ( m-1 \right )\right )\nonumber\\ &+\sum _{m=1}^{N} \frac{1}{2N}A_m^\gamma \exp\left ( \mathfrak{i}\frac{2\pi }{N}\left ( j-i \right ) \left ( m-1 \right )\right )\nonumber\\ &= \frac{1}{2N}\sum _{m=1}^{N}\left (A_m^\gamma -\left ( A_m+2D_x \right )^\gamma \right ) \exp\left ( \mathfrak{i}\theta_{m}d\right )\nonumber\\ &\approx \frac{1}{2N} \sum _{m=1}^{N} \left ( A_m^\gamma -\gamma A_m \left (2D_x \right )^{\gamma-1} \right )\exp\left ( \mathfrak{i}\theta_{m}d\right )-\frac{1}{2N}\sum _{m=1}^{N} \left ( 2D_x \right )^\gamma \exp\left ( \mathfrak{i}\theta_{m}d\right ) \label{element_frac_Lapl_regular_app} \end{align} \noindent for $\left | D_x \right |>\left | A_m \right |$ and $D_x>0$, where $d\equiv d_{i^1\rightarrow j^1}$, $\theta_m=2\pi\left ( m-1 \right )/N$, and $A_m=2-2\cos\left ( \theta_m\right )$ [see Eq.~(\ref{eigen_xi}) for $J^\alpha=1$ and $\alpha \in \left \{ 1,2 \right \}$]. If $D_x=0$, then $A_m^\gamma -\left ( A_m+2D_x \right )^\gamma = 0$, and this element of the fractional supra-Laplacian is equal to zero. In the limit $N\rightarrow\infty$, the sums in Eq.~(\ref{element_frac_Lapl_regular_app}) can be replaced by integrals, so that \begin{widetext} \begin{align} \left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{i(j+N)}^\gamma &=\frac{1}{2}\left ( \frac{1}{2\pi }\int _0^{2\pi }A_m^\gamma \exp(\mathfrak{i}d\theta_m)\,\mathrm{d}\theta_m\right ) -\frac{\gamma \left ( 2D_x \right )^{\gamma-1} }{2}\lim_{\zeta \rightarrow 1}\left ( \frac{1}{2\pi }\int _0^{2\pi }A_m^\zeta \exp(\mathfrak{i}d\theta_m)\,\mathrm{d}\theta_m\right )\nonumber\\ &-\frac{\left ( 2D_x \right )^{\gamma}}{2}\left ( \frac{1}{2\pi }\int _0^{2\pi } \exp(\mathfrak{i}d\theta_m)\,\mathrm{d}\theta_m\right ). \label{sum_integrals} \end{align} \end{widetext} \noindent Taking into account that (i) the last term on the right-hand side of Eq.~(\ref{sum_integrals}) is equal to $-\left ( 2D_x \right )^{\gamma}/2$ for $d=0$ and $0$ otherwise, and (ii) the analytical results \begin{align} \frac{1}{2\pi }\int _0^{2\pi }A_m^\gamma \exp(\mathfrak{i}d\theta_m)\,\mathrm{d}\theta_m=-\frac{\Gamma \left ( d-\gamma \right )\Gamma \left ( 2\gamma +1 \right )}{\pi \Gamma \left ( 1+\gamma +d \right )}\sin\left ( \pi \gamma \right ), \label{integral} \end{align} \noindent and \begin{align} \lim_{\gamma \rightarrow 1}\frac{\Gamma \left ( d-\gamma \right )\Gamma \left ( 2\gamma +1 \right )}{\pi \Gamma \left ( 1+\gamma +d \right )}\sin\left ( \pi \gamma \right )=K_d, \label{lim_integral} \end{align} \noindent we obtain Eq.~(\ref{element_frac_Lapl_regular_inf}) [see \cite{riascos14,riascos15,michelitsch19,Zoia07} for further details about the derivation of Eqs.~(\ref{integral}) and (\ref{lim_integral})]. Finally, note that Eq.~(\ref{frac_degree_regJ1_inf}) can be derived from Eq.~(\ref{frac_degree_reg}) by using the same process presented in this Appendix and considering $d=0$ [$\left ( \mathbf{\mathcal{L}}^\mathcal{M} \right )_{ii}^\gamma = \sigma^{(\gamma)}$].
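The power-law decay encoded in Eq.~(\ref{integral}) can already be observed at finite but large $N$. The following Python sketch (illustrative values of $N$, $D_x$ and $\gamma$) evaluates the inter-layer elements of Eq.~(\ref{element_frac_Lapl_regular_app}) through an inverse FFT and fits the decay exponent, which should approach $1+2\gamma\in (1,3)$ for $0<\gamma<1$.
\begin{verbatim}
import numpy as np

# Illustrative parameters: a large two-layer cycle multiplex.
N, Dx, gamma = 2000, 1.0, 0.5

theta = 2.0 * np.pi * np.arange(N) / N
A = 2.0 - 2.0 * np.cos(theta)            # cycle-graph Laplacian spectrum

# Circulant structure: the (i, j+N) elements follow from an inverse DFT
# of the two eigenvalue branches A^gamma and (A + 2*Dx)^gamma.
coef = 0.5 * (A ** gamma - (A + 2.0 * Dx) ** gamma)
row = np.fft.ifft(coef).real             # element as a function of d = j - i

d = np.arange(2, 200)
slope = np.polyfit(np.log(d), np.log(np.abs(row[d])), 1)[0]
print(f"fitted decay exponent: {-slope:.2f}"
      f" (theory: {1 + 2 * gamma:.2f})")
\end{verbatim}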
\section{Introduction} \label{sec:introduction} Probabilistic models have proven to be very useful in many signal processing applications where signal estimation is needed \cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation. On the other hand, adaptive filtering is a standard approach in estimation problems where the input is received as a stream of potentially non-stationary data. This approach is widely understood and applied to several problems such as echo cancellation \cite{gilloire1992adaptive}, noise cancellation \cite{nelson1991active}, and channel equalization \cite{falconer2002frequency}. Although these two approaches share some underlying relations, there are very few connections between them in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e., the Kalman filter) and the RLS filter, by Sayed and Kailath \cite{sayed1994state} and then by Haykin \emph{et al.} \cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes. A first attempt to approximate the LMS filter from a probabilistic perspective was presented in \cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty at each step, thereby degrading the performance of the algorithm. In this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximate posterior leads to a linear-complexity algorithm, comparable to standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \cite{cid1994recurrent} and Bayesian forecasting \cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering. The probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally from this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has fewer free parameters than previous LMS algorithms with variable step size \cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to tune than those of these algorithms and of standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications. Experiments with simulated and real data show the advantages of the presented approach with respect to previous works.
However, we remark that the main contribution of this paper is that it opens the door to introducing further Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \cite{barber2012bayesian}, into adaptive filtering.\\ \section{Probabilistic Model} Throughout this work, we assume the observation model to be linear-Gaussian with the following distribution, \begin{equation} p(y_k|{\bf w}_k) = \mathcal{N}(y_k;{\bf x}_k^T {\bf w}_k , \sigma_n^2), \label{eq:mess_eq} \end{equation} where $\sigma_n^2$ is the variance of the observation noise, ${\bf x}_k$ is the regression vector and ${\bf w}_k$ is the parameter vector to be sequentially estimated, both $M$-dimensional column vectors. In a non-stationary scenario, ${\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\sigma_d^2$ for this parameter vector: \begin{equation} p({\bf w}_k|{\bf w}_{k-1})= \mathcal{N}({\bf w}_k;{\bf w}_{k-1}, \sigma_d^2 {\bf I}), \label{eq:trans_eq} \end{equation} where $\bf I$ denotes the identity matrix. In order to initiate the recursion, we assume the following prior distribution on ${\bf w}_k$ \begin{equation} p({\bf w}_0)= \mathcal{N}({\bf w}_0;0, \sigma_d^2{\bf I}).\nonumber \end{equation} \section{Exact inference in this model: Revisiting the RLS filter} Given the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\bf w}_k|y_{1:k})$. Since all involved distributions are Gaussian, one can perform exact inference by applying the rules of probability in a straightforward manner. The resulting probability distribution is \begin{equation} p({\bf w}_k|y_{1:k}) = \mathcal{N}({\bf w}_k;{\bf\boldsymbol\mu}_{k}, \boldsymbol\Sigma_{k}), \nonumber \end{equation} in which the mean vector ${\bf\boldsymbol\mu}_{k}$ is given by \begin{equation} {\bf\boldsymbol\mu}_k = {\bf\boldsymbol\mu}_{k-1} + {\bf K}_k (y_k - {\bf x}_k^T {\bf\boldsymbol\mu}_{k-1}){\bf x}_k, \nonumber \end{equation} where we have introduced the auxiliary variable \begin{equation} {\bf K}_k = \frac{ \left(\boldsymbol\Sigma_{k-1} + \sigma_d^2 {\bf I}\right)}{{\bf x}_k^T \left(\boldsymbol\Sigma_{k-1} + \sigma_d^2 {\bf I}\right) {\bf x}_k + \sigma_n^2}, \nonumber \end{equation} and the covariance matrix $\boldsymbol\Sigma_k$ is obtained as \begin{equation} \boldsymbol\Sigma_k = \left( {\bf I} - {\bf K}_k{\bf x}_k {\bf x}_k^T \right) ( \boldsymbol\Sigma_{k-1} +\sigma_d^2 {\bf I}). \nonumber \end{equation} Note that the mode of $p({\bf w}_k|y_{1:k})$, i.e., the maximum-a-posteriori estimate (MAP), coincides with the RLS adaptive rule \begin{equation} {{\bf w}}_k^{(RLS)} = {{\bf w}}_{k-1}^{(RLS)} + {\bf K}_k (y_k - {\bf x}_k^T {{\bf w}}_{k-1}^{(RLS)}){\bf x}_k . \label{eq:prob_rls} \end{equation} This rule is similar to the one introduced in \cite{haykin1997adaptive}. Finally, note that the covariance matrix $\boldsymbol\Sigma_k$ is a measure of the uncertainty of the estimate ${\bf w}_k$ conditioned on the observed data $y_{1:k}$. Nevertheless, for many applications a single scalar summarizing the variance of the estimate may be sufficient. In the next section, we show how such a scalar arises naturally when $p({\bf w}_k|y_{1:k})$ is approximated with an isotropic Gaussian distribution. We also show that this approximation leads to an LMS-like estimation rule.
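For illustration, the following Python sketch implements the exact-inference recursion above; the dimensions, noise levels and time horizon are hypothetical choices made only for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, T, sd2, sn2 = 5, 500, 1e-4, 1e-2      # illustrative values

w = rng.standard_normal(M)               # slowly drifting true parameter
mu, Sigma = np.zeros(M), sd2 * np.eye(M) # prior p(w_0)

for _ in range(T):
    w += np.sqrt(sd2) * rng.standard_normal(M)    # random-walk drift
    x = rng.standard_normal(M)
    y = x @ w + np.sqrt(sn2) * rng.standard_normal()

    P = Sigma + sd2 * np.eye(M)                   # predictive covariance
    K = P / (x @ P @ x + sn2)                     # gain matrix K_k
    mu = mu + (K @ x) * (y - x @ mu)              # posterior mean (RLS rule)
    Sigma = (np.eye(M) - K @ np.outer(x, x)) @ P  # posterior covariance

print(f"final squared deviation: {np.sum((w - mu) ** 2):.2e}")
\end{verbatim}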
\section{Approximating the posterior distribution: LMS filter } The proposed approach consists in approximating the posterior distribution $p({\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic (spherical) Gaussian distribution \begin{equation} \label{eq:aprox_post} \hat{p}({\bf w}_{k}|y_{1:k})=\mathcal{N}({\bf w}_{k};{\bf \hat{\boldsymbol\mu}}_{k}, \hat{\sigma}_{k}^2 {\bf I} ). \end{equation} In order to estimate the mean and covariance of the approximate distribution $\hat{p}({\bf w}_{k}|y_{1:k})$, we propose to select those that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e., \begin{equation} \{\hat{\boldsymbol\mu}_k,\hat{\sigma}_k\}=\arg \displaystyle{ \min_{\hat{\boldsymbol\mu}_k,\hat{\sigma}_k}} \{ D_{KL}\left(p({\bf w}_{k}|y_{1:k})\| \hat{p}({\bf w}_{k}|y_{1:k})\right) \}. \nonumber \end{equation} The derivation of the corresponding minimization problem can be found in Appendix A. In particular, the optimal mean and covariance are found to be \begin{equation} {\hat{\boldsymbol\mu}}_{k} = {\boldsymbol\mu}_{k};~~~~~~ \hat{\sigma}_{k}^2 = \frac{{\sf Tr}\{ \boldsymbol\Sigma_k\} }{M}. \label{eq:sigma_hat} \end{equation} We now show that by using \eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at $k-1$, $\hat{p}({\bf w}_{k-1}|y_{1:k-1}) = \mathcal{N}({\bf w}_{k-1};\hat{\bf\boldsymbol\mu}_{k-1}, \hat{\sigma}_{k-1}^2 {\bf I} )$. Since all involved distributions are Gaussian, the predictive distribution is obtained as \begin{eqnarray} \hat{p}({\bf w}_k|y_{1:k-1}) &=& \int p({\bf w}_k|{\bf w}_{k-1}) \hat{p}({\bf w}_{k-1}|y_{1:k-1}) d{\bf w}_{k-1} \nonumber\\ &=& \mathcal{N}({\bf w}_k;\hat{\bf\boldsymbol\mu}_{k|k-1}, \hat{\boldsymbol\Sigma}_{k|k-1}), \label{eq:approx_pred} \end{eqnarray} where the mean vector and covariance matrix are given by \begin{eqnarray} \hat{\bf\boldsymbol\mu}_{k|k-1} &=& \hat{\bf\boldsymbol\mu}_{k-1} \nonumber \\ \hat{\boldsymbol\Sigma}_{k|k-1} &=& (\hat{\sigma}_{k-1}^2 + \sigma_d^2 ){\bf I}\nonumber. \end{eqnarray} From \eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' theorem and standard Gaussian manipulations (see for instance \cite[Ch. 4]{murphy2012machine}). Then, we approximate the posterior $p({\bf w}_k|y_{1:k})$ with an isotropic Gaussian, \begin{equation} \hat{p}({\bf w}_k|y_{1:k}) = \mathcal{N}({\bf w}_k ; {\hat{\boldsymbol\mu}}_{k}, \hat{\sigma}_k^2 {\bf I} ),\nonumber \end{equation} where \begin{eqnarray} {\hat{\boldsymbol\mu}}_{k} &= & {\hat{\boldsymbol\mu}}_{k-1}+ \frac{ (\hat{\sigma}_{k-1}^2+ \sigma_d^2) }{(\hat{\sigma}_{k-1}^2+ \sigma_d^2) \|{\bf x}_k\|^2 + \sigma_n^2} (y_k - {\bf x}_k^T {\hat{\boldsymbol\mu}}_{k-1}){\bf x}_k \nonumber \\ &=& {\hat{\boldsymbol\mu}}_{k-1}+ \eta_k (y_k - {\bf x}_k^T {\hat{\boldsymbol\mu}}_{k-1}){\bf x}_k . \label{eq:prob_lms} \end{eqnarray} Note that, instead of a gain matrix ${\bf K}_k$ as in Eq.~\eqref{eq:prob_rls}, we now have a scalar gain $\eta_k$ that operates as a variable step size.
Finally, to obtain the posterior variance, which is our measure of uncertainty, we apply \eqref{eq:sigma_hat} and the identity ${\sf Tr}\{{\bf x}_k{\bf x}_k^T\}= {\bf x}_k^T{\bf x}_k= \|{\bf x}_k \|^2$, \begin{eqnarray} \hat{\sigma}_k^2 &=& \frac{{\sf Tr}(\boldsymbol\Sigma_k)}{M} \\ &=& \frac{1}{M}{\sf Tr}\left\{ \left( {\bf I} - \eta_k {\bf x}_k {\bf x}_k^T \right) (\hat{\sigma}_{k-1}^2 +\sigma_d^2)\right\} \\ &=& \left(1 - \frac{\eta_k \|{\bf x}_k\|^2}{M}\right)(\hat{\sigma}_{k-1}^2 +\sigma_d^2). \label{eq:sig_k} \end{eqnarray} If MAP estimation is performed, we obtain an LMS estimator with adaptable step size \begin{equation} {\bf w}_{k}^{(LMS)} = {\bf w}_{k-1}^{(LMS)} + \eta_k (y_k - {\bf x}_k^T {\bf w}_{k-1}^{(LMS)}){\bf x}_k, \label{eq:lms} \end{equation} with \begin{equation} \eta_k = \frac{ (\hat{\sigma}_{k-1}^2+ \sigma_d^2) }{(\hat{\sigma}_{k-1}^2+ \sigma_d^2) \|{\bf x}_k\|^2 + \sigma_n^2}.\nonumber \end{equation} At this point, several interesting remarks can be made: \begin{itemize} \item The adaptive rule \eqref{eq:lms} has linear complexity since it does not require us to compute the full matrix $\boldsymbol\Sigma_k$. \item For a stationary model, we have $\sigma_d^2=0$ in \eqref{eq:prob_lms} and \eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size and the error variance $\hat{\sigma}_{k}^2$ vanish over time $k$. \item Finally, the proposed adaptable step-size LMS has only two parameters, $\sigma_d^2$ and $\sigma_n^2$ (and only one, $\sigma_n^2$, in stationary scenarios), in contrast to other variable step-size algorithms \cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\sigma_d^2$ and $\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We will comment more on this in the next section. \end{itemize} \section{Experiments} \label{sec:experiments} We evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly chosen in the range $[-1,1]$. Then, the vector is normalized so that $\|{\bf w}^o\|=1$. Regressors $\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \cite{sayed2008adaptive}, and VSS-LMS \cite{shin2004variable}.\footnote{The parameters used for each algorithm are: for RLS, $\lambda=1$, $\epsilon^{-1}=0.01$; for LMS, $\mu=0.01$; for NLMS, $\mu=0.5$; and for VSS-LMS, $\mu_{max}=1$, $\alpha=0.95$, $C=10^{-4}$.} The probabilistic LMS algorithm in \cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments. In stationary environments, the proposed algorithm has only one parameter, $\sigma^2_n$. We simulate both the scenario where we have perfect knowledge of the amount of noise (probLMS1) and the case where the assumed value of $\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). The Mean-Square Deviation (${\sf MSD} = {\mathbb E} \| {\bf w}^o - {\bf w}_k \|^2$), averaged over $50$ independent simulations, is presented in Fig.~\ref{fig:msd_statationary}.
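A minimal Python sketch of this stationary setup is given below; it runs the recursions \eqref{eq:prob_lms} and \eqref{eq:sig_k} with $\sigma_d^2=0$. The random seed, the time horizon and the initial variance $\hat{\sigma}_0^2$ are arbitrary choices, so the resulting figures are only indicative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, T = 50, 2000
w_o = rng.uniform(-1.0, 1.0, M)
w_o /= np.linalg.norm(w_o)        # ||w^o|| = 1, so E[(x^T w^o)^2] = 1

sn2 = 10.0 ** (-20.0 / 10.0)      # SNR = 20 dB
sd2 = 0.0                         # stationary scenario
mu, s2 = np.zeros(M), 1.0         # posterior mean; initial variance (assumed)

msd = []
for _ in range(T):
    x = rng.standard_normal(M)
    y = x @ w_o + np.sqrt(sn2) * rng.standard_normal()

    p = s2 + sd2                             # predictive variance
    eta = p / (p * (x @ x) + sn2)            # adaptive step size eta_k
    mu = mu + eta * (y - x @ mu) * x         # mean update, Eq. (prob_lms)
    s2 = (1.0 - eta * (x @ x) / M) * p       # variance update, Eq. (sig_k)
    msd.append(np.sum((w_o - mu) ** 2))

print(f"MSD after {T} iterations: {10 * np.log10(msd[-1]):.1f} dB")
\end{verbatim}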
\begin{figure}[htb] \centering \begin{minipage}[b]{\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{results_stationary_MSD}} \end{minipage} \caption{Performance in terms of MSD of probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) parameter settings, compared to LMS, NLMS, VSS-LMS, and RLS.} \label{fig:msd_statationary} \end{figure} The performance of probabilistic LMS is close to that of RLS (obviously at a much lower computational cost) and largely outperforms the previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e., $\sigma^2_d=0$ in \eqref{eq:trans_eq}, both the uncertainty $\hat{\sigma}^2_k$ and the adaptive step size $\eta_k$ vanish over time. This implies that the error tends to zero as $k$ goes to infinity. Fig.~\ref{fig:msd_statationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\sigma^2_n$ that is $100$ times smaller than the optimal value. \begin{figure}[htb] \centering \begin{minipage}[b]{\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{fig2_final}} \end{minipage} \caption{Real part of one coefficient of the measured and estimated channel in experiment two. The shaded area represents two standard deviations from the prediction {(the mean of the posterior distribution)}.} \label{fig_2} \end{figure} \begin{table}[ht] \begin{footnotesize} \setlength{\tabcolsep}{2pt} \begin{center} \begin{tabular}{|l@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|} \hline Method & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\ \hline \hline MSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\ \hline \end{tabular} \end{center} \caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.} \label{tab:table_MSD} \end{footnotesize} \end{table} \newpage In a second experiment, we test the tracking capabilities of the proposed algorithm with {real} data from a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \cite{gutierrez2011frequency}. Fig.~\ref{fig_2} shows the real part of one of the channels, together with the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e., $\hat{\mu}_k\pm2\hat{\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values of the parameters, we fix them to the values that optimize the steady-state mean-square deviation (MSD). \hbox{Table \ref{tab:table_MSD}} shows the steady-state MSD of the MISO channel estimate for the different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method. \section{Conclusions and Open Extensions} \label{sec:conclusions} {We have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well both in stationary and tracking scenarios. Moreover, it has fewer free parameters than previous approaches, and these parameters have a clear physical meaning.
Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:} \begin{itemize} \item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with diagonal covariance matrix, we would obtain a similar algorithm with a different step size and measure of uncertainty for each component of ${\bf w}_k$. Although this model can be more descriptive, it needs more parameters to be tuned, and the parallelism with LMS vanishes. \item Similarly, if we substitute the transition model of \eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process, \begin{equation} p({\bf w}_k|{\bf w}_{k-1})= \mathcal{N}({\bf w}_k;\lambda {\bf w}_{k-1}, \sigma_d^2 {\bf I}), \nonumber \label{eq:trans_eq_lambda} \end{equation} a similar algorithm is obtained but with a forgetting factor $\lambda$ multiplying ${\bf w}_{k-1}^{(LMS)}$ in \eqref{eq:lms}. This algorithm may have improved performance under this kind of autoregressive dynamics of ${\bf w}_{k}$, though, again, the connection with standard LMS becomes weaker. \item As in \cite{park2014probabilistic}, the measurement model \eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data. \item A similar approximation technique could be applied to more complex dynamical models, e.g., switching dynamical models \cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful. \item Finally, like standard LMS, this algorithm can be kernelized for application to estimation in non-linear scenarios. \end{itemize} \begin{appendices} \section{KL divergence between a general Gaussian distribution and an isotropic Gaussian} \label{sec:kl} We want to approximate $p_{{\bf x}_1}({\bf x}) = \mathcal{N}({\bf x}; \boldsymbol\mu_1,\boldsymbol\Sigma_1)$ by $p_{{\bf x}_2}({\bf x}) = \mathcal{N}({\bf x}; \boldsymbol\mu_2,\sigma_2^2 {\bf I})$. In order to do so, we have to compute the parameters of $p_{{\bf x}_2}({\bf x})$, $\boldsymbol\mu_2$ and $\sigma_2^2$, that minimize the following Kullback-Leibler divergence, \begin{eqnarray} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) &=&\int_{-\infty}^{\infty} p_{{\bf x}_1}({\bf x}) \ln{\frac{p_{{\bf x}_1}({\bf x})}{p_{{\bf x}_2}({\bf x})}}d{\bf x} \nonumber \\ &= & \frac{1}{2} \{ -M + \sigma_2^{-2}\,{\sf Tr}(\boldsymbol\Sigma_1) \nonumber \\ & & + (\boldsymbol\mu_2 - \boldsymbol\mu_1 )^T \sigma^{-2}_2{\bf I} (\boldsymbol\mu_2 - \boldsymbol\mu_1 ) \nonumber \\ & & + \ln \frac{{\sigma_2^2}^M}{\det\boldsymbol\Sigma_1} \}. \label{eq:divergence} \end{eqnarray} Using symmetry arguments, we obtain \begin{equation} \boldsymbol\mu_2^{*} =\arg \displaystyle{ \min_{\boldsymbol\mu_2}} \{ D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) \} = \boldsymbol\mu_1. \end{equation} Then, \eqref{eq:divergence} simplifies to \begin{eqnarray} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) = \frac{1}{2}\lbrace { -M + {\sf Tr}(\frac{\boldsymbol\Sigma_1}{\sigma_2^{2}}) + \ln \frac{\sigma_2^{2M}}{\det\boldsymbol\Sigma_1}}\rbrace.
\end{eqnarray} The variance $\sigma_2^2$ is computed so as to minimize this Kullback-Leibler divergence: \begin{eqnarray} \sigma_2^{2*} &=& \arg\min_{\sigma_2^2} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) \nonumber \\ &=& \arg\min_{\sigma_2^2}\{ \sigma_2^{-2}{\sf Tr}\{\boldsymbol\Sigma_1\} + M\ln \sigma_2^{2} \} . \end{eqnarray} Differentiating and setting the derivative equal to zero leads to \begin{equation} \frac{\partial}{\partial \sigma_2^2} \left[ \frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{\sigma_2^{2}} + M \ln \sigma_2^{2} \right] = \left. {\frac{M}{\sigma_2^{2}}-\frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{(\sigma_2^{2})^2}}\right|_{\sigma_2^{2}=\sigma_2^{2*}}\left. =0 \right. . \nonumber \end{equation} Finally, since the divergence has a single extremum in $\mathbb{R}_+$, \begin{equation} \sigma_2^{2*} = \frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{M}. \end{equation} \end{appendices} \vfill \clearpage \bibliographystyle{IEEEbib}
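As a quick numerical sanity check of this closed form, the following Python sketch (with a randomly generated positive-definite $\boldsymbol\Sigma_1$ and an arbitrary dimension $M$) minimizes the reduced objective ${\sf Tr}\{\boldsymbol\Sigma_1\}/\sigma_2^{2} + M\ln \sigma_2^{2}$ and compares the minimizer with ${\sf Tr}\{\boldsymbol\Sigma_1\}/M$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
M = 4
A = rng.standard_normal((M, M))
Sigma1 = A @ A.T + M * np.eye(M)      # a generic SPD covariance matrix

# With mu_2 = mu_1, the divergence reduces (up to constants) to
# Tr(Sigma_1)/s2 + M*log(s2); minimize it over s2 > 0.
obj = lambda s2: np.trace(Sigma1) / s2 + M * np.log(s2)
res = minimize_scalar(obj, bounds=(1e-6, 1e6), method="bounded")

assert np.isclose(res.x, np.trace(Sigma1) / M, rtol=1e-4)
print("argmin matches Tr(Sigma_1)/M =", np.trace(Sigma1) / M)
\end{verbatim}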
\section{Introduction} This paper investigates the existence of traveling wave solutions for the heterogeneous reaction-diffusion equation \begin{equation}\label{eqprinc} \partial_t u -\Delta u = f(t,u),\quad x\in\R^N,\ t\in\R, \end{equation} where $f=f(t,u)$ vanishes when $u=0$ or $u=1$, is strictly positive for $u\in(0,1)$ and is of KPP type, that is, $f(t,u)\leq f_u'(t,0)u$ for $(t,u)\in \R\times [0,1]$. When $f$ is constant with respect to $t$, we recover the classical Fisher-KPP equation \begin{equation}\label{eq-hom} \partial_t u -\Delta u = f(u), \quad x\in\R^N,\ t\in\R. \end{equation} It is well-known (see \cite{AronsonWeinberger, KPP}) that for all $c\geq c^*=2\sqrt{f'(0)}$, there exists a {\em planar traveling wave} of speed $c$ in any direction $e\in\mathbb{S}^{N-1}$, that is, a solution $u$ of (\ref{eq-hom}) which can be written as $u(x,t) = \phi(x\cdot e -ct)$, with $\phi\geq 0$, $\phi(-\infty)=1$ and $\phi(+\infty)=0$. In this case, the profile $\phi$ of the planar traveling wave satisfies the ordinary differential equation $$-\phi''-c\phi' = f(\phi) \quad\hbox{in } \R.$$ When $f$ is periodic with respect to $t$, planar traveling waves no longer exist and the relevant notion is that of {\em pulsating traveling wave}. Assume for instance that there exists $T>0$ such that $f(t+T,u)= f(t,u)$ for all $(t,u) \in \R\times [0,1]$. Then a pulsating traveling wave of speed $c$ in direction $e\in\mathbb{S}^{N-1}$ is a solution $u$ of (\ref{eqprinc}) which can be written as $u(x,t)=\phi (x\cdot e -ct,t)$, where $\phi\geq 0$, $\phi =\phi (z,t) $ is $T-$periodic with respect to $t$, $\phi (-\infty,t)=1$ and $\phi (+\infty,t)=0$ for all $t$. The profile $\phi$ then satisfies the time-periodic parabolic equation $$\partial_t \phi-\partial_{zz}\phi-c\partial_z\phi = f(t,\phi) \quad\text{in } \R\times\R.$$ The existence of pulsating traveling waves has been proved by Nolen, Rudd and Xin \cite{NolenRuddXin} and by the first author \cite{Nadinptf} under various hypotheses in the more general framework of space-time periodic reaction-diffusion equations (see also \cite{Alikakos} for time periodic bistable equations). These results yield that, in the particular case of temporally periodic equations, a pulsating traveling wave of speed $c$ exists if and only if $c\geq c^*=2\sqrt{\langle \mu\rangle}$, where $\langle \mu\rangle=\frac{1}{T}\int_0^T f_u'(t,0)dt$. Note that the notion of pulsating traveling wave was first introduced in the framework of space periodic reaction-diffusion equations by Shigesada, Kawasaki and Teramoto \cite{Shigesada} and by Xin \cite{Xinperiodic} in parallel ways. Xin \cite{Xinperiodic}, Berestycki and Hamel \cite{BerestyckiHamel} and Berestycki, Hamel and Roques \cite{BerestyckiHamelRoques2} proved the existence of such waves in space periodic media under various hypotheses. In this case the minimal speed $c^*$ for the existence of pulsating traveling waves is not determined by the average of $x\mapsto f_u'(x,0)$, but rather by means of a family of periodic principal eigenvalues. The case of a time almost periodic and bistable reaction term $f$ has been investigated by Shen \cite{Shenap1, Shenap2}. A time heterogeneous nonlinearity is said to be {\em bistable} if there exists a smooth function $\theta : \R\to (0,1)$ such that \begin{equation}\label{f-bistable} \left\{\begin{array}{l} f(t,0)=f(t,1)=f(t,\theta (t))=0 \ \text{ for }t\in\R,\\ f(t,s)<0 \ \text{ for }t\in\R,\ s\in (0,\theta (t)),\\ f(t,s)>0 \ \text{ for }t\in\R,\ s\in (\theta (t),1).
\end{array} \right. \end{equation} Shen constructed examples in which there exists no solution $u$ of the form $u(x,t)= \phi (x\cdot e -ct,t)$ such that $\phi (-\infty,t)=1$, $\phi (+\infty,t)=0$ uniformly with respect to $t\in\R$ and $\phi=\phi (z,t)$ is almost periodic with respect to $t$. She proved that the appropriate notion of wave in almost periodic media incorporates a time-dependent speed $c=c(t)$. Namely, she defined an {\em almost periodic traveling wave} as a solution $u$ of (\ref{eqprinc}) which can be written as $u(x,t)= \phi (x\cdot e -\int_0^t c(s)ds,t)$, where $c=c(t)$ and $\phi=\phi (z,t)$ are smooth functions which are almost periodic with respect to $t$, and $\phi (-\infty,t)=1$ and $\phi (+\infty,t)=0$ hold uniformly with respect to $t\in\R$. She proved that for nonlinearities $f$ which satisfy (\ref{f-bistable}), there exists an almost periodic traveling wave, and that its profile and speed are uniquely determined up to translation of the profile and to addition of the derivative of a time almost periodic function to the speed. \smallskip In order to handle general heterogeneous equations such as (\ref{eqprinc}), we will use the notion of (almost planar) generalized transition wave introduced by Berestycki and Hamel \cite{BerestyckiHamelgtw} and Shen \cite{Shenbistable}. This definition is a natural extension of the earlier notions. In particular, it allows the speed to depend on time, as in almost periodic media. \begin{defi}\label{def-gtf} An (almost planar) {\em generalized transition wave} in the direction $e\in\mathbb{S}^{N-1}$ of equation (\ref{eqprinc}) is a time-global solution $u$ which can be written as $u(x,t)=\phi (x\cdot e -\int_0^t c(s)ds,t)$, where $c\in L^\infty(\R)$ and $\phi : \R\times \R \to [0,1]$ satisfies \begin{equation} \lim_{z\to-\infty}\phi(z,t)=1 \ \hbox{ and }\ \lim_{z\to+\infty}\phi(z,t)=0\ \hbox{ uniformly in } t\in\R. \end{equation} The functions $\phi$ and $c$ are respectively called the {\em profile} and the {\em speed} of the generalized transition wave $u$. \end{defi} We will only consider almost planar waves in the present paper, and thus we will omit mentioning almost planarity in the sequel. Notice that the speed $c$ and the profile $\phi$ in Definition \ref{def-gtf} are not uniquely determined: they can be replaced by $c+\xi$ and $\phi (z+\int_0^t \xi(s)ds,t)$ respectively, for any $\xi\in L^\infty(\R)$ such that $t\in \R\mapsto \int_0^t \xi (s)ds$ is bounded. If $u$ is a generalized transition wave, then its profile $\phi$ satisfies \begin{equation}\label{def-tf} \left\{\begin{array}{l} \partial_t \phi-\partial_{zz}\phi -c(t)\partial_z \phi = f(t,\phi),\quad z\in\R,\ t\in\R,\\ \displaystyle\lim_{z\to-\infty}\phi(z,t)=1 \ \hbox{ and }\ \lim_{z\to+\infty}\phi(z,t)=0\ \hbox{ uniformly in } t\in\R.\\ \end{array} \right. \end{equation} The existence of generalized transition waves has been proved by Shen in the framework of time heterogeneous bistable reaction-diffusion equations \cite{Shenbistable}. Berestycki and Hamel \cite{BerestyckiHamelgtw2} also proved the existence of generalized transition waves for the monostable equation (\ref{eqprinc}) when $t\mapsto f(t,u)$ converges as $t\to\pm\infty$ uniformly in $u$ (see Section \ref{sec:examples} below). Lastly, while completing this paper, we learned that Shen proved, in a very recent paper \cite{Shenue}, the existence of generalized transition waves when the coefficients are uniquely ergodic.
We will describe the differences between Shen's approach and ours more precisely in Section \ref{sec:thm} below. These are the only types of temporal heterogeneities for which the existence of generalized transition waves has been proved. In space heterogeneous media, generalized transition waves have been shown to exist for ignition-type nonlinearities in dimension $1$. Namely, assuming that $f=g(x)f_0(u)$, with $g$ smooth, uniformly positive and bounded, and with $f_0$ satisfying $$ \exists\theta\in (0,1),\qquad \forall s\in [0,\theta]\cup \{1\},\quad f_0(s)=0,\qquad \forall s\in (\theta,1),\quad f_0(s)>0, $$ the existence of generalized transition waves has been obtained in parallel ways by Nolen and Ryzhik \cite{NolenRyzhik} and by Mellet, Roquejoffre and Sire \cite{MelletRoquejoffreSire}. These generalized transition waves attract the solutions of the Cauchy problem associated with front-like initial data \cite{MelletNolenRoquejoffreRyzhik}. Zlatos gave some extensions of these results to multi-dimensional media and more general nonlinearities \cite{Zlatosdisordered}. Lastly, Nolen, Roquejoffre, Ryzhik and Zlatos constructed space heterogeneous KPP nonlinearities such that equation (\ref{eqprinc}) does not admit any generalized transition wave \cite{NRRZ}. A different generalization of the notion of traveling wave has been introduced, in the framework of bistable reaction-diffusion equations with a small spatial perturbation of the homogeneous equation, by Vakulenko and Volpert \cite{VakulenkoVolpert}. When the nonlinearity is homogeneous and of ignition or bistable type, we know that there exists a unique speed associated with planar traveling waves. The analogue of this property in heterogeneous media is that, for bistable or ignition-type nonlinearities, the generalized transition wave is unique up to translation in time (see \cite{BerestyckiHamelgtw2, MelletNolenRoquejoffreRyzhik, Shenbistable}). This means that the speed $t\mapsto c(t)$ of the generalized transition wave is unique in some sense. For monostable homogeneous nonlinearities, we know that there exists a unique speed $c^*$ such that planar traveling waves of speed $c$ exist if and only if $c\geq c^*$. Moreover, it is possible to construct solutions which behave as planar traveling waves with different speeds as $t\to \pm\infty$ (see \cite{HamelNadirashvili}), and one can remark that these solutions are generalized transition waves. Hence, we expect a wide range of speeds $t\mapsto c(t)$ to be associated with generalized transition waves in heterogeneous media. The aims of the present paper are the following. \begin{itemize} \item Prove the existence of generalized transition waves for time heterogeneous monostable equations. \item Identify a set of speeds $t\mapsto c(t)$ associated with generalized transition waves. \item Apply our results to particular nonlinearities, such as random stationary ergodic ones. \end{itemize} \section{Statement of the results} \subsection{Hypotheses}\label{sec:hyp} In this paper we only assume the nonlinear term $f(t,u)$ to be bounded and measurable with respect to $t$.
The notion of solution considered is that of strong solutions: a subsolution (resp.~supersolution) $u$ of (\ref{eqprinc}) is a function in $W^{2,1}_{N+1,loc}(\R^N\times\R)$ \footnote{ $W^{2,1}_{p}(\mc{Q})$, $\mc{Q}\subset\R^N\times\R$, stands for the space of functions $u$ such that $u,\,\partial_{x_{i}}u,\,\partial_{x_i x_j}u,\,\partial_t u\in L^p(\mc{Q})$.} satisfying $$\partial_t u -\Delta u\leq f(t,u) \quad\text{(resp.~$\geq f(t,u)$)},\quad \text{for a.e.~}x\in\R^N,\ t\in\R,$$ and a solution is a function $u$ which is both a sub- and a supersolution. It then follows from standard parabolic theory that solutions belong to $W^{2,1}_{p,loc}(\R^N\times\R)$ for all $p<\infty$, and are therefore uniformly continuous by the embedding theorem. The fundamental hypothesis we make on $f$ is that it is of KPP type. Namely, $f(t,u)\leq\mu(t)u$, with $\mu(t):=f_u'(t,0)$, which means that $f(t,\.)$ lies below its tangent at $0$ for all $t$. Hence, we expect the linearization at $u=0$ to play a crucial role in the dynamics of the equation. A typical $f$ we want to handle is \begin{equation} \label{examplef} f(t,u)=\mu (t) u(1-u), \end{equation} with $\mu\in L^\infty (\R)$ and $\inf_{t\in\R}\mu(t)>0$. This function fulfills the following set of hypotheses, which are the ones we will require in the statements of our results. \begin{hyp}\label{hyp:KPP} The function $f=f(t,u)$ satisfies $f(\.,u)\in L^\infty(\R)$, for all $u\in[0,1]$, and, as a function of $u$, is Lipschitz continuous in $[0,1]$ and of class $C^1$ in a neighborhood of $0$, uniformly with respect to $t\in\R$. Moreover, setting $\mu(t):= f_u'(t,0)$, the following properties hold: \begin{equation}\label{hyp-pos} \text{for a.e.~} t\in\R,\quad f(t,0)=f(t,1)=0,\qquad \forall u\in(0,1),\quad \essinf_{t\in\R}f(t,u)>0, \end{equation} \begin{equation} \label{hyp-KPP} \text{for a.e.~}(t,u)\in\R\times [0,1],\quad f(t,u)\leq \mu (t) u, \end{equation} \begin{equation} \label{hyp-lower} \exists C>0,\ \gamma,\delta\in(0,1],\quad \text{for a.e.~}(t,u)\in\R\times [0,\delta],\quad f(t,u)\geq \mu (t) u-Cu^{1+\gamma}. \end{equation} \end{hyp} Notice that (\ref{hyp-KPP}) and (\ref{hyp-lower}) are fulfilled if $f(t,0)=0$ and $f(t,\.)$ is respectively concave and in $C^{1+\gamma}([0,\delta])$, uniformly with respect to $t$. \subsection{Existence of generalized transition waves for general nonlinearities} \label{sec:thm} Since the range of speeds associated with planar traveling waves in homogeneous media is $[2\sqrt{f'(0)},+\infty)$, we expect to find similar constraints on the speeds of generalized transition waves in heterogeneous media. The constraint we will exhibit depends on the {\em least mean} of the speed. \begin{defi}\label{def-mean} For any given function $g\in L^\infty(\R)$, we define $$\underline{g}:=\sup_{T>0}\,\inf_{t\in\R} \frac{1}{T}\int_t^{t+T}g(s)\,ds$$ and we call this quantity the {\em least mean} of the function $g$ (over $\R$). \end{defi} The definition of the least mean does not change if one replaces the $\sup_{T>0}$ with $\lim_{T\to+\infty}$ (see Proposition \ref{reformulation} below). Hence, if $g$ admits a {\em mean value} $\langle g\rangle$, i.e., if there exists \Fi{mv} \langle g\rangle:=\lim_{T\to+\infty} \frac{1}{T}\int_t^{t+T}g(s)\,ds,\quad\text{uniformly with respect to }t\in\R, \Ff then $\ul g=\langle g\rangle$. Notice that (\ref{hyp-pos}) and (\ref{hyp-KPP}) yield $\ul \mu>0$. We are now in a position to state our main result.
\begin{thm}\label{thm-existence} Assume that $f$ satisfies Hypothesis \ref{hyp:KPP} and let $e\in\mathbb{S}^{N-1}$. 1) For all $\gamma>2\sqrt{\ul \mu}$, there exists a generalized transition wave $u$ in direction $e$ with a speed $c$ such that $\ul c= \gamma$ and a profile $\phi$ which is decreasing with respect to $z$. 2) There exists no generalized transition wave $u$ in direction $e$ with a speed $c$ such that $\ul c< 2\sqrt{\ul\mu}$. \end{thm} The speeds we construct have the particular form \begin{equation} \label{formc} c(t)=\frac{\mu(t)}{\kappa}+\kappa, \qquad \kappa\in (0, \sqrt{\ul \mu}). \end{equation} Hence, they inherit some properties of the function $\mu$. In particular, if $\mu$ admits a mean value $\langle \mu \rangle$, then our result implies that a generalized transition wave with a speed $c$ such that $\langle c \rangle=\gamma$ exists if $\gamma>2\sqrt{\langle \mu \rangle}$ and does not exist if $\gamma<2\sqrt{\langle \mu \rangle}$. Of course, this construction is not exhaustive: there might exist generalized transition waves with a speed $c$ which cannot be written in the form (\ref{formc}), as exhibited in Example \ref{E:speeds} below. More generally, trying to characterize the set of speeds associated with generalized transition waves is a very hard task, since this notion covers many types of speeds (see \cite{BerestyckiHamelgtw, HamelNadirashvili}). Indeed, this is still an open problem even in the case of homogeneous equations. We did not manage to prove the existence of generalized transition waves with a speed of least mean $2\sqrt{\ul \mu}$. We leave this extension as an open problem, which we will discuss in Examples \ref{E:speeds} and \ref{E:minimal} of Section \ref{sec:examples} below. Theorem \ref{thm-existence} shows that the range of least means of the speeds associated with generalized transition waves is a half-line, with infimum $2\sqrt{\ul\mu}$. If instead one considers other notions of mean, then the picture is far from complete: our existence result implies that, for every notion of mean $\mc{M}$ satisfying $$\fa g\in L^\infty(\R),\quad \mc{M}(g)\geq\ul g,\qquad \fa\alpha,\beta>0,\quad\mc{M}(\alpha g+\beta)=\alpha\mc{M}(g)+\beta,$$ and every $\kappa\in(0,\sqrt{\ul \mu})$, there exists a wave with speed $c$ satisfying $\mc{M}(c)=\frac{\mc{M}(\mu)}\kappa+\kappa$, whereas there are no waves with $\mc{M}(c)<2\sqrt{\ul\mu}$. But if $\mc{M}(\mu)>\ul\mu$ then $2\sqrt{\ul\mu}<\frac{\mc{M}(\mu)}{\sqrt{\ul\mu}}+\sqrt{\ul\mu}$, whence there is a gap between the thresholds of the existence and non-existence results. \bigskip In order to conclude this section, we briefly comment on the differences between Shen's approach in \cite{Shenue} and the present one. First, Shen only considered uniquely ergodic coefficients. We refer to \cite{Shenue} for a precise definition, but we would like to point out that if $\mu$ is uniquely ergodic, then $\langle \mu \rangle=\lim_{T\to +\infty} \frac{1}{T}\int_{t}^{t+T}\mu (s)ds$ exists uniformly with respect to $t\in\R$. This hypothesis is quite restrictive since it excludes, for example, general random stationary ergodic coefficients. Under this hypothesis, Shen proves that a generalized transition wave with a uniquely ergodic speed $c=c(t)$ satisfying $\langle c \rangle = \gamma$ exists if and only if $\gamma\geq 2\sqrt{\langle \mu\rangle}$. This is a slightly stronger result than ours, since it provides existence in the critical case $\gamma= 2\sqrt{\langle \mu \rangle}$.
Lastly, Shen's approach is different since she uses a dynamical systems setting, while we use a PDE approach inspired by \cite{BerestyckiHamel}. \subsection{Application to random stationary ergodic equations} We consider the reaction-diffusion equation with random nonlinear term \begin{equation}\label{eq-random} \partial_t u -\Delta u = f(t,\omega,u),\quad x\in\R^N,\ t\in\R,\ \omega\in\Omega. \end{equation} The function $f:\R\times \O\times [0,1]\to \R$ is a random function defined on a probability space $(\O,\P,\mathcal{F})$. We assume that $(t,u)\mapsto f(t,\omega,u)$ satisfies Hypothesis \ref{hyp:KPP} for almost every $\omega\in\O$, and in addition that \begin{equation} \label{hyp-vois1} \fa t\in\R,\quad u\mapsto f(t,\omega,u)/u \hbox{ is nonincreasing in } [1-\delta,1], \end{equation} where the constants $\gamma$ and $\delta$ depend on $\omega$. Notice that (\ref{hyp-vois1}) is satisfied in particular when $u\mapsto f(t,\omega,u)$ is nonincreasing in $(1-\delta,1)$, since $f$ is positive there. We further suppose that $f(t,\omega,u)$ is a stationary ergodic random function with respect to $t$. Namely, there exists a group $(\pi_t)_{t\in\R}$ of measure-preserving transformations of $\O$ such that $$\forall (t,s,\omega,u)\in\R\times \R\times \O\times [0,1],\quad f(t+s,\omega,u)=f(t,\pi_s \omega,u),$$ and for all $A\in\mathcal{F}$, if $\pi_t A=A$ for all $t\in\mathbb{R}$, then $\P(A)=0$ or $1$. A generalization of the notion of traveling waves for equation (\ref{eq-random}) has been given by Shen in \cite{Shenrandom}. \begin{defi}\label{def-randomtw} {\bf (see \cite{Shenrandom}, Def. 2.3)} A {\em random transition wave} in the direction $e\in\mathbb{S}^{N-1}$ of equation (\ref{eq-random}) is a function $u:\R^N\times \R \times \Omega \to [0,1]$ which satisfies: \begin{itemize} \item There exist two bounded measurable functions $\tilde{c}:\Omega \to \R$ and $\tilde{\phi}:\R \times\Omega\to[0,1]$ such that $u$ can be written as $$u(x,t,\omega) = \tilde{\phi}(x\cdot e -\int_0^t \tilde{c}(\pi_s \omega)ds,\pi_t \omega) \hbox{ for all } (x,t,\omega)\in \R^N\times \R\times\Omega.$$ \item For almost every $\omega\in\Omega$, $(x,t)\mapsto u(x,t,\omega)$ is a solution of (\ref{eq-random}). \item For almost every $\omega\in\Omega$, $\displaystyle\lim_{z\to-\infty}\tilde{\phi}(z,\omega)=1$ and $\displaystyle\lim_{z\to+\infty}\tilde{\phi}(z,\omega)=0$. \end{itemize} The functions $\tilde{\phi}$ and $\tilde{c}$ are respectively called the {\em random profile} and the {\em random speed} of the random transition wave $u$. \end{defi} Notice that if (\ref{eq-random}) admits a generalized transition wave for a.e.~$\omega\in\O$, and the associated profiles $\phi(z,t,\omega)$ and speeds $c(t,\omega)$ are stationary ergodic with respect to $t$, then the functions $\t\phi(z,\omega):=\phi(z,0,\omega)$ and $\t c(\omega):=c(0,\omega)$ are the profile and the speed of a random transition wave. The existence of random transition waves has been proved in the framework of space-time random stationary ergodic bistable nonlinearities by Shen \cite{Shenrandom} and in the framework of space random stationary ergodic ignition type nonlinearities by Nolen and Ryzhik \cite{NolenRyzhik} (see also \cite{Zlatosdisordered} for some extensions). Starting from Theorem \ref{thm-existence}, we are able to characterize the existence of random transition waves in terms of the least mean of their speed. 
For a stationary ergodic function $g:\R\times\O\to\R$, the least mean of $t\mapsto g(t,\omega)$ is independent of $\omega$, for every $\omega$ in a set of probability $1$ (see Proposition \ref{pro:lmergodic} below). We call this quantity the least mean of the random function $g$, and we denote it by $\ul g$. \begin{thm}\label{thm-existencerandom} Let $e\in\mathbb{S}^{N-1}$. Under the previous hypotheses, for all $\gamma>2\sqrt{\ul\mu}$, there exists a random transition wave $u$ in direction $e$ with random speed $\t c$ such that $c(t,\omega):=\tilde{c}(\pi_t\omega)$ has least mean $\gamma$, and a random profile $\t\phi$ which is decreasing with respect to $z$. \end{thm} This result is not an immediate corollary of Theorem \ref{thm-existence}. In fact, for given $\kappa\in (0,\sqrt{\ul \mu})$ and almost every $\omega\in\O$, Theorem \ref{thm-existence} provides a generalized transition wave with speed $c(\cdot,\omega)$ satisfying $c=\frac{\mu}{\kappa}+\kappa$. Hence, $c$ is stationary ergodic in $t$, but it is far from being obvious that the same is true for the profile $\phi$. Actually, to prove this we require the additional hypothesis (\ref{hyp-vois1}). Instead, the non-existence of random transition waves with speeds $c$ satisfying $\ul c<2\sqrt{\ul\mu}$ follows directly from Theorem \ref{thm-existence}. \subsection{Spreading properties}\label{sec:spreading} When the nonlinearity $f$ does not depend on $t$, Aronson and Weinberger \cite{AronsonWeinberger} proved that if $u$ is the solution of the associated Cauchy problem, with an initial datum which is ``front-like'' in direction $e$, then for all $\sigma>0$, $$\lim_{t\to +\infty}\,\inf_{x\leq (2\sqrt{f'(0)}-\sigma)t}u(x,t)=1,\qquad \lim_{t\to +\infty}\, \inf_{x\geq (2\sqrt{f'(0)}+\sigma)t}u(x,t)=0.$$ This result is called a {\em spreading property} and means that the level-lines of $u(t,\cdot)$ behave like $2\sqrt{f'(0)}t$ as $t\to +\infty$. The aim of this section is to extend this property to the Cauchy problem associated with (\ref{eqprinc}), namely, \Fi{Cauchy} \left\{\begin{array}{ll} \partial_t u -\Delta u = f(t,u),& x\in\R^N,\ t>0,\\ u(x,0)=u_0(x), & x\in\R^N. \end{array}\right. \Ff Our result will involve once again the least mean of $\mu$, but this time over $\R_+$, because the equation is defined only for $t>0$. For a given function $g\in L^\infty ((0,+\infty))$, we set $$ \ul g_+:=\sup_{T>0}\,\inf_{t>0} \frac{1}{T}\int_t^{t+T}g(s)\,ds. $$ We similarly define the upper mean $\ol g^+$: $$ \ol g^+:=\inf_{T>0}\,\sup_{t>0} \frac{1}{T}\int_t^{t+T}g(s)\,ds. $$ In \cite{BHN} Berestycki, Hamel and the first author partially extended the result of \cite{AronsonWeinberger} to general space-time heterogeneous equations. They showed in particular that the level-lines of $u(t,\cdot)$ do not grow linearly and can oscillate. They obtained some estimates on the location of these level-lines, which are optimal when $t\mapsto f(t,u)$ converges as $t\to +\infty$ locally in $u$, but not when $f$ is periodic for example. These properties have been improved by Berestycki and the first author in \cite{BN}, by using the notion of {\em generalized principal eigenvalues} in order to estimate more precisely the maximal and the minimal linear growths of the location of the level-lines of $u(t,\cdot)$. 
When $f$ only depends on $t$, as in the present paper, they proved that if $u_0\in C^0(\R^N)$ is such that $0\leq u_0\leq 1$, $u_0\not\equiv 0$ and it is compactly supported, then the solution $u$ of \eq{Cauchy} satisfies \Fi{01} \fa e\in\mathbb{S}^{N-1},\quad\lim_{t\to+\infty}u(x+\gamma te,t)= \left\{\begin{array}{lll} 1 &\hbox{ if }& 0\leq\gamma <2\sqrt{\ul\mu_+}\\ 0 &\hbox{ if }& \gamma>2\sqrt{\ol\mu^+}\\ \end{array}\right. \quad\hbox{locally in } x. \Ff In \cite{BN}, this result follows from a more general statement, proved in the framework of space-time heterogeneous equations using homogenization techniques. Here we improve \eq{01} by decreasing the threshold for the convergence to 0. Our proof is based on direct arguments. \begin{prop}\label{prop-spreading} Assume that $f$ satisfies Hypothesis \ref{hyp:KPP} and let $u_0\in C^0 (\R^N)$ be such that $0\leq u_0 \leq 1$, $u_0\not\equiv0$. Then the solution $u$ of \eq{Cauchy} satisfies \begin{equation}\label{eq:spreadinginf} \fa\gamma<2\sqrt{\ul\mu_+},\quad \lim_{t\to +\infty}\inf_{|x|\leq \gamma t} u(x,t)=1. \end{equation} If in addition $u_0$ is compactly supported then \begin{equation}\label{eq:spreadingsup} \fa\sigma>0,\quad \lim_{t\to+\infty}\sup_{|x|\geq 2\sqrt{t\int_0^t \mu (s)ds} +\sigma t} u(x,t) =0. \end{equation} \end{prop} If $u_0$ is ``front-like'' in the direction $e$, then $|x|\geq 2\sqrt{t\int_0^t \mu (s)ds} +\sigma t$ can be replaced by $x\cdot e\geq 2\sqrt{t\int_0^t \mu (s)ds} +\sigma t$ in \eq{spreadingsup}. \begin{rmq} Proposition \ref{prop-spreading} still holds if (\ref{hyp-lower}), (\ref{hyp-vois1}) are not satisfied and, in the case of \eq{spreadinginf}, the KPP condition (\ref{hyp-KPP}) can also be dropped. \end{rmq} If $\frac1t\int_0^t \mu (s)ds\to\ul\mu_+$ as $t$ goes to $+\infty$ then the result of Proposition \ref{prop-spreading} is optimal. Otherwise it does not exhaustively describe the large-time behavior of $u$. \begin{opb} Assume that the hypotheses of Proposition \ref{prop-spreading} hold and that $$2\sqrt{\ul\mu_+}<\gamma<2\sqrt{\liminf_{t\to+\infty}\frac1t\int_0^t \mu (s)ds}.$$ What can we say about $\lim_{t\to+\infty}u(\gamma t e,t)$? \end{opb} \subsection{Examples} \label{sec:examples} We now present some examples in order to illustrate the notion of generalized transition waves and to discuss the optimality of our results. \begin{ex}\label{E:means} {\em Functions without uniform mean.}\\ Set $t_1:=2$ and, for $n\in\N$, $$\sigma_n:=t_n+n,\qquad \tau_n:=\sigma_n+n,\qquad t_{n+1}:=\tau_n+2^n.$$ The function $\mu$ defined by $$\mu(t):=\left\{\begin{array}{ll} 3 & \text{if }t_n<t<\sigma_n,\ n\in\N,\\ 1 & \text{if }\sigma_n<t<\tau_n,\ n\in\N,\\ 2 & \text{otherwise} \end{array} \right.$$ satisfies $$\ul\mu_+=1<\lim_{t\to+\infty}\frac1t\int_0^t \mu (s)ds=2<\ol\mu^+=3.$$ Indeed, for any fixed $T>0$, the windows $(t_n,t_n+T)$ eventually lie in plateaus where $\mu=3$ and the windows $(\sigma_n,\sigma_n+T)$ in plateaus where $\mu=1$, while the plateaus where $\mu=2$, whose lengths $2^n$ dominate all the others, drive the Ces\`aro average to $2$. Therefore, $\mu$ does not admit a uniform mean $\langle\mu\rangle$ (over $\R_+$). \end{ex} \begin{ex}\label{E:speeds} {\em Generalized transition waves with various choices of speeds.}\\ Fix $\alpha>1$ and consider the homogeneous reaction-diffusion equation $$ \partial_t u -\partial_{xx}u = u(1-u)(u+\alpha). $$ A straightforward computation yields that the function $$U (z) = \frac{1}{1+e^{z/\sqrt2}}$$ is the profile of a planar traveling wave of speed $c_0 = \sqrt{2} \alpha +\frac{1}{\sqrt{2}}$. Note that $c_0$ is strictly larger than the minimal speed $2\sqrt{\alpha}$ for the existence of traveling waves.
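For completeness, here is the computation behind this claim. Writing $u(x,t)=U(x-c_0t)$, the profile must satisfy $-c_0U'-U''=U(1-U)(U+\alpha)$. Since $U'=-\frac{1}{\sqrt2}\,U(1-U)$, and hence $U''=\frac12\,U(1-U)(1-2U)$, we get $$-c_0U'-U''=U(1-U)\Bigl(\frac{c_0}{\sqrt2}-\frac12+U\Bigr),$$ which coincides with $U(1-U)(U+\alpha)$ precisely when $\frac{c_0}{\sqrt2}-\frac12=\alpha$, that is, when $c_0=\sqrt2\,\alpha+\frac{1}{\sqrt2}$.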
We now perturb this equation by adding a time-heterogeneous bounded function $\xi\in C^0(\R)$ in the nonlinearity: \begin{equation}\label{example-2} \partial_t v -\partial_{xx}v= v(1-v)(v+\alpha-\frac{1}{\sqrt{2}} \xi (t)), \end{equation} with $\|\xi\|_\infty < \sqrt{2}(\alpha-1)$ so that this equation is still of KPP type. Let $$v(x,t):=U(x-c_0 t +\int_0^t \xi (s)ds).$$ Since $U' = -\frac{U}{\sqrt{2}} (1-U)$, one readily checks that $v$ is a generalized transition wave of equation (\ref{example-2}), with speed $c(t)= c_0 -\xi (t)$. If $\langle\xi \rangle =0$, then this generalized transition wave has a global mean speed $c_0$. But as $\xi$ is arbitrary, the fluctuations around this global mean speed can be large. This example shows that the speeds associated with generalized transition waves can be very general, depending on the structure of the heterogeneous equation. \end{ex} \begin{ex}\label{E:minimal} {\em Generalized transition waves with speeds $c$ satisfying $\ul c = 2\sqrt{\ul \mu}$.} We can generalize the method used in the previous example to obtain generalized transition waves with a speed with minimal least mean. Consider any homogeneous function $f: [0,1]\to [0,\infty)$ such that $f$ is Lipschitz-continuous, $f(0)=f(1)=0$, $f(s)>0$ if $s\in (0,1)$ and $s\mapsto f(s)/s$ is nonincreasing. Then we know from \cite{AronsonWeinberger} that for all $c\geq 2\sqrt{f'(0)}$, there exists a decreasing function $U_c\in C^2(\R)$ such that $U_c(-\infty)=1$, $U_c (+\infty)=0$ and $-U_c''-cU_c'=f(U_c)$ in $\R$. It is well-known that $-U_c''(x)/U_c'(x)\to \lambda_c$ as $x\to +\infty$, where $\lambda_c>0$ is the smallest root of $\lambda\mapsto-\lambda^2+c\lambda-f'(0)$. Moreover, writing the equation satisfied by $U_c'/U_c$, one can prove that $U_c'\geq -\lambda_c U_c$ in $\R$. It follows that the function $P_c : [0,1] \to[0,+\infty)$ defined by $P_c(u):=-U_c'(U_c^{-1}(u))$ for $u\in(0,1)$ and $P_c (0)=P_c (1)=0$ is Lipschitz-continuous. Furthermore, it is of KPP type, because $$P_c'(0)=-\lim_{x\to+\infty}\frac{U_c''(x)}{U_c'(x)}=\lambda_c, \qquad \frac{P_c(u)}u=-\frac{U_c'(U_c^{-1}(u))}{U_c(U_c^{-1}(u))} \leq\lambda_c.$$ We now consider a given function $\xi \in C^\alpha(\R)$ with least mean $\ul \xi=0$. The function $v$ defined by $v(x,t):= U_c (x-ct -\int_0^t \xi (s)ds)$ satisfies $$\partial_t v -\partial_{xx} v = f(v) +\xi (t) P_c(v)=: g(t,v)\quad \hbox{in } \R\times\R.$$ It is clearly a generalized transition wave of this equation, with speed $c_1 (t)= c+\xi (t)$. Let us now see what Theorem \ref{thm-existence} gives. Here, $\mu (t)= g_u'(t,0)=f'(0)+\lambda_c \xi (t)$ and thus $\ul \mu = f'(0)$ since $\ul \xi=0$. If $c>2\sqrt{f'(0)}$, then we know that there exists a generalized transition wave $w$ with a speed $c_2(t)=\frac{\mu (t)}{\kappa}+\kappa$, for some $0<\kappa<\sqrt{f'(0)}$ such that $\ul c_2= c$. These two conditions impose $\kappa=\lambda_c$ and thus $c_2 (t)=\frac{f'(0)}{\lambda_c}+\xi (t)+\lambda_c= c_1 (t)$. Hence, $c_1\equiv c_2$, which means that the speed obtained through Theorem \ref{thm-existence} is the speed of the generalized transition wave $v$. The case $c=2\sqrt{f'(0)}$ is not covered by Theorem \ref{thm-existence}. In this case, the speed $c_1$ of the generalized transition wave $v$ satisfies $c_1(t)=2\sqrt{f'(0)}+\xi(t)=\frac{\mu (t)}{\lambda_c} +\lambda_c$. Thus, in this example, it is possible to improve Theorem \ref{thm-existence} part 1) to the case $\ul c= 2\sqrt{\ul \mu}$.
\end{ex} \begin{ex}\label{E:2speeds} {\em Speeds which cannot be written as $c(t)=\frac{\mu(t)}{\kappa}+\kappa$.}\\ Consider a smooth positive function $\mu= \mu (t)$ such that $\mu (t)\to \mu_1$ as $t\to -\infty$ and $\mu (t) \to \mu_2$ as $t\to +\infty$, with $\mu_1>0$ and $\mu_2>0$. Let $f(t,u)= \mu (t) u (1-u)$. Then it has been proved by Berestycki and Hamel (in a more general framework, see \cite{BerestyckiHamelgtw2}) that if $\mu_1<\mu_2$, then for all $c_1\in [2\sqrt{\mu_1},+\infty)$, there exists a generalized transition wave of equation (\ref{eqprinc}) with speed $c$ such that $c(t)\to c_1$ as $t\to -\infty$ and $\frac{1}{t}\int_0^t c(s)ds\to c_2$ as $t\to +\infty$, where $c_2 =\frac{\mu_2}{\kappa_1}+\kappa_1$ and $\kappa_1$ is the smallest root of $\kappa^2-c_1\kappa +\mu_1=0$. This result can be deduced from Theorem \ref{thm-existence} when $c_1>2\sqrt{\mu_1}$, which even gives a stronger result since we get $c(t)= \frac{\mu (t)}{\kappa_1}+\kappa_1$ for all $t\in\R$. When $\mu_1>\mu_2$, Berestycki and Hamel obtained a different result. Namely, they proved that for all $c_1\in [2\sqrt{\mu_1},+\infty)$, if $\kappa_1\geq \sqrt{\mu_2}$ (which is true in particular when $c_1=2\sqrt{\mu_1}$), there exists a generalized transition wave of equation (\ref{eqprinc}) with speed $c$ such that $c(t)\to c_1$ as $t\to -\infty$ and $\frac{1}{t}\int_0^t c(s)ds\to 2\sqrt{\mu_2}$ as $t\to +\infty$. In this case the speed $c$ cannot be put in the form $c(t)= \frac{\mu (t)}{\kappa}+\kappa$ for some $\kappa>0$. Hence, the class of speeds we construct in Theorem \ref{thm-existence} is not exhaustive. Moreover, in this example, this class of speeds misses the most important generalized transition wave: the one which travels with speed $2\sqrt{\mu_1}$ when $t\to -\infty$ and $2\sqrt{\mu_2}$ when $t\to +\infty$. As these two speeds are minimal near $t=\pm\infty$, one can expect this generalized transition wave to be attractive in some sense, as in homogeneous media (see \cite{KPP}). \end{ex} \section{Proof of the results} As we said at the beginning of Section \ref{sec:hyp}, in this paper the terms (strong) sub and supersolution refer to functions in $W^{2,1}_{N+1,loc}$, satisfying the differential inequalities a.e. We say that a function is a {\em generalized subsolution} (resp.~{\em supersolution}) if it is the supremum (resp.~infimum) of a finite number of subsolutions (resp.~supersolutions). \subsection{Properties of the least mean} We first give an equivalent formulation of the least mean (see Definition \ref{def-mean}). \begin{prop} \label{reformulation} If $g\in L^\infty(\R)$ then its least mean $\ul g$ satisfies $$\ul g=\lim_{T\to+\infty}\essinf_{t\in\R}\frac{1}{T}\int_t^{t+T}g(s)\,ds.$$ In particular, if $g$ admits a mean value $\langle g\rangle$, defined by \eq{mv}, then $\ul g=\langle g\rangle$. \end{prop} \begin{proof} For $T>0$, define the following function: $$F(T):=\inf_{t\in\R}\int_t^{t+T}g(s)\,ds.$$ We have that $$\ul g=\sup_{T>0}\frac{F(T)}{T}\geq\limsup_{T\to+\infty}\frac{F(T)}{T}.$$ Therefore, to prove the statement we only need to show that $\liminf_{T\to+\infty}F(T)/T\geq\ul g$. For any $\e>0$, let $T_\e>0$ be such that $F(T_\e)/T_\e\geq\ul g-\e$.
We use the notation $\lfloor x \rfloor$ to indicate the floor of the real number $x$ (that is, the greatest integer $n\leq x$) and we compute $$\fa T>0,\quad F(T)=\inf_{t\in\R}\left(\int_t^{t+\left\lfloor\frac{T}{T_\e} \right\rfloor T_\e} g(s)\,ds+\int_{t+\left\lfloor\frac{T}{T_\e} \right\rfloor T_\e}^{t+T} g(s)\,ds\right)\geq \left\lfloor\frac{T}{T_\e}\right\rfloor F(T_\e)-\|g\|_{L^\infty(\R)}T_\e.$$ As a consequence, $$\liminf_{T\to+\infty}\frac{F(T)}{T}\geq \lim_{T\to+\infty}\left\lfloor\frac{T}{T_\e}\right\rfloor \frac{T_\e}{T}\frac{F(T_\e)}{T_\e}=\frac{F(T_\e)}{T_\e}\geq\ul g-\e.$$ The proof is thereby achieved due to the arbitrariness of $\e$. \end{proof} We now derive another characterization of the least mean. This is the property underlying the fact that the existence of generalized transition waves is expressed in terms of the least mean of their speeds. \begin{lem}\label{lem:lm>0} Let $B\in L^\infty(\R)$. Then $$\ul B=\sup_{A\in W^{1,\infty}(\R)}\essinf_{t\in\R}(A'+B)(t).$$ \end{lem} \begin{proof} If $B$ is a periodic function, then $g(t):= \underline{B}- B(t)$ is periodic with zero mean. Thus $A(t):=\int_0^t g(s)ds$ is bounded and satisfies $A'+B\equiv\underline{B}$. This shows that \Fi{lm<} \ul B\leq\sup_{A\in W^{1,\infty}(\R)}\essinf_{t\in\R}(A'+B)(t) \Ff in this simple case. We will now generalize this construction in order to handle general functions $B$. Fix $m<\ul B$. By definition, there exists $T>0$ such that $$\inf_{t\in\R}\frac1T\int_t^{t+T}B(s)\, ds>m.$$ We define $$ \forall k\in\Z,\ \text{for a.e.~} t\in [(k-1)T,kT),\quad g(t):=-B(t)+\beta_k,\qquad\text{where }\ \beta_k:=\frac{1}{T}\int_{(k-1)T}^{kT} B(s)\, ds.$$ Then we set $A(t):=\int_0^t g(s)ds$. It follows that $A'+B\geq m$ and, since $\int_{(k-1)T}^{kT} g(s)\, ds=0$, that $$\|A\|_{L^\infty(\R)}\leq\|g\|_{L^\infty(\R)} T\leq2T\|B\|_{L^\infty(\R)}.$$ Therefore \eq{lm<} holds due to the arbitrariness of $m<\ul B$. Consider now a function $A\in W^{1,\infty}(\R)$. Owing to Proposition \ref{reformulation} we derive \[\begin{split} \ul B &=\lim_{T\to+\infty}\inf_{t\in\R}\frac{1}{T}\int_t^{t+T}B(s)\,ds \geq\essinf_\R(A'+B)+ \lim_{T\to+\infty}\inf_{t\in\R}\frac{1}{T}\int_t^{t+T}(-A'(s))\,ds\\ &=\essinf_\R(A'+B)+\lim_{T\to+\infty}\inf_{t\in\R}\frac{1}{T}(A(t)-A(t+T))= \essinf_\R(A'+B). \end{split}\] This concludes the proof. \end{proof} \begin{rmq}\label{rmq:lm>0} In the proof of Lemma \ref{lem:lm>0} we have shown the following fact: if $\eta\in\N\cup\{+\infty\}$, $T>0$ are such that $$m:=\inf_{\su{k\in\N}{k\leq\eta}}\frac1T\int_{(k-1)T}^{kT} B(s)\, ds>0,$$ then there exists $A\in W^{1,\infty}((0,\eta T))$ satisfying $$\essinf_{[0,\eta T)}(A'+B)=m,\qquad \|A\|_{L^\infty((0,\eta T))}\leq2T \|B\|_{L^\infty((0,\eta T))}.$$ \end{rmq} \subsection{Construction of the generalized transition waves when $\ul c>2\sqrt{\ul \mu}$.} In order to construct generalized transition waves, we will use appropriate sub and supersolutions. The particular form of the speeds (\ref{formc}) will naturally emerge from constraints on the exponential supersolution.
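Before entering the proof, the construction can be illustrated numerically. The following Python sketch is an illustration of ours, not part of the argument: the sample function $\mu$ and all numerical values are arbitrary choices. It approximates least means by the sup--inf formula of Proposition \ref{reformulation} and checks that a speed of the form (\ref{formc}), $c(t)=\kappa+\kappa^{-1}\mu(t)$ with $\kappa=\frac12\bigl(\gamma-\sqrt{\gamma^2-4\ul\mu}\bigr)$, has least mean $\gamma$.
\begin{verbatim}
import numpy as np

# Sample rate: mu(t) = 2 + sin(2*pi*t), whose least mean is its average 2
# (by Proposition "reformulation", since mu admits a mean value).
t = np.linspace(0.0, 200.0, 200001)
dt = t[1] - t[0]
mu = 2.0 + np.sin(2.0 * np.pi * t)

def window_inf_mean(f, dt, T):
    """inf over t of (1/T) * integral of f over (t, t+T), sampled version."""
    n = int(round(T / dt))
    csum = np.concatenate(([0.0], np.cumsum(f) * dt))
    return ((csum[n:] - csum[:-n]) / (n * dt)).min()

ul_mu = window_inf_mean(mu, dt, T=50.0)              # ~ 2.0
gamma = 3.5                                          # any gamma > 2*sqrt(ul_mu)
kappa = 0.5 * (gamma - np.sqrt(gamma**2 - 4.0 * ul_mu))
c = kappa + mu / kappa                               # candidate wave speed
print(window_inf_mean(c, dt, T=50.0))                # ~ gamma = 3.5
\end{verbatim}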
\begin{prop}\label{pro:subsuper} Under the assumptions of Theorem \ref{thm-existence}, for all $\gamma> 2\sqrt{\ul\mu}$, there exists a function $c\in L^\infty(\R)$, with $\ul c=\gamma$, such that (\ref{def-tf}) admits some uniformly continuous generalized sub and supersolutions $\ul\phi(z,t)$, $\ol\phi(z)$ satisfying $$0\leq\ul\phi<\ol\phi\leq1,\qquad \ol\phi(+\infty)=0,\qquad\ol\phi(-\infty)=1,$$ $$\exists\xi\in\R,\ \ \inf_{t\in\R}\ul\phi(\xi,t)>0,\qquad \forall z\in\R,\ \ \inf_{t\in\R}(\ol\phi-\ul\phi)(z,t)>0.$$ $$\ol\phi\ \text{ is nonincreasing in }\R,\qquad \fa\tau>0,\ \ \lim_{z\to+\infty}\frac{\ol\phi(z+\tau)}{\ul\phi(z,t)}<1 \ \hbox{ uniformly in } t\in\R.$$ \end{prop} \begin{proof} Fix $\gamma>2\sqrt{\ul{\mu}}$. We choose $c$ in such a way that the linearized equation around $0$ associated with (\ref{def-tf}) admits an exponential solution of the type $\psi(z)=e^{-\kappa z}$, for some $\kappa>0$. Namely, $$0=\partial_t \psi-\partial_{zz}\psi -c(t)\partial_z \psi-\mu(t)\psi= [-\kappa^2+c(t)\kappa-\mu(t)]\psi,\quad \text{for a.e.~}t\in\R.$$ It follows that $c\equiv\kappa+\kappa^{-1}\mu$. Imposing $\ul c=\gamma$ yields $$\gamma=\lim_{T\to +\infty}\inf_{t\in\R} \frac{1}{T}\int_t^{t+T} [\kappa+\kappa^{-1}\mu(s)]ds=\kappa+\kappa^{-1}\ul\mu.$$ Since $\gamma>2\sqrt{\ul{\mu}}$, the equation $\kappa^2-\gamma\kappa+\ul\mu=0$ has two positive solutions. We take the smallest one, that is, $$\kappa=\frac{\gamma-\sqrt{\gamma^2-4\ul\mu}}2.$$ Extending $f(t,\.)$ linearly outside $[0,1]$, we can assume that $\psi$ is a global supersolution. We then set $\ol\phi(z):=\min(\psi(z),1)$. Let $C$, $\gamma$ be the constants in (\ref{hyp-lower}). Our aim is to find a function $A\in W^{1,\infty}(\R)$ and a constant $h>\kappa$ such that the function $\vp$ defined by $\vp(z,t):=\psi(z)-e^{A(t)-hz}$ satisfies \begin{equation}\label{ulphi} \partial_t\vp-\partial_{zz}\vp-c(t)\partial_z\vp \leq\mu(t)\vp-C\vp^{1+\gamma},\quad\text{for a.e.~} z>0,\ t\in\R. \end{equation} By direct computation we see that $$\partial_t \vp-\partial_{zz}\vp-c(t)\partial_z\vp-\mu(t)\vp =[-A'(t)+h^2-c(t)h+\mu(t)]e^{A(t)-hz}.$$ Hence (\ref{ulphi}) holds if and only if $$\text{for a.e.~} z>0,\ t\in\R,\quad A'(t)+B(t)\geq C\vp^{1+\gamma}e^{hz-A(t)},\qquad\text{where } B(t):=-h^2+c(t)h-\mu(t).$$ Let $\kappa< h<(1+\gamma)\kappa$. Since $$\forall z>0,\ t\in\R,\quad \vp^{1+\gamma}e^{hz-A(t)}\leq e^{[h-(1+\gamma)\kappa]z-A(t)} \leq e^{-A(t)},$$ if $\essinf_\R(A'+B)>0$ then the desired inequality follows by adding a large constant to $A$. Owing to Lemma \ref{lem:lm>0}, this condition is fulfilled by a suitable function $A\in W^{1,\infty}(\R)$ as soon as $\ul B>0$. Let us compute \[\begin{split} \ul B &= \lim_{T\to +\infty}\inf_{t\in\R} \frac{1}{T}\int_t^{t+T}h\left[ \kappa-h+\mu(s)\left(\frac1\kappa-\frac1h\right)\right ] ds \\ & =h\left[ \kappa-h+\ul\mu\left(\frac1\kappa-\frac1h\right)\right ]\\ &= -h^2+\gamma h-\ul\mu. \end{split}\] Since $\kappa$ is the smallest root of $-x^2+\gamma x-\ul\mu=0$, we can choose $h\in(\kappa,(1+\gamma)\kappa)$ in such a way that $\ul B>0$. Therefore, there exists $A\in W^{1,\infty}(\R)$ such that (\ref{ulphi}) holds. Up to adding a suitable constant $\alpha>0$ to $A$, it is not restrictive to assume that $\vp$ is less than the constant $\delta$ in (\ref{hyp-lower}). Hence, $\vp$ is a subsolution of (\ref{def-tf}) in $(0,+\infty)\times\R$. Since $\vp\geq0$ if and only if $z\geq(h-\kappa)^{-1}A(t)$, $\alpha$ can be chosen in such a way that $\vp\leq0$ for $z\leq0$.
Whence, due to the arbitrariness of the extension of $f(t,u)$ for $u<0$, it follows that $\ul\phi(z,t):=\max(\vp(z,t),0)$ is a generalized subsolution. Finally, for $\tau>0$, it holds that $$\lim_{z\to+\infty}\frac{\ol\phi(z+\tau)}{\ul\phi(z,t)}= \lim_{z\to+\infty}\frac{e^{-\kappa\tau}}{1-e^{A(t)-(h-\kappa)z}}=e^{-\kappa\tau} .$$ \end{proof} \begin{proof}[Proof of Theorem \ref{thm-existence} part 1)] Let $c,\ \ul\phi,\ \ol\phi$ be the functions given by Proposition \ref{pro:subsuper}, with $\ul c=\gamma$. For $n\in\N$, consider the solution $\phi_n$ of the problem \Fi{-n} \left\{\begin{array}{ll} \partial_t \phi-\partial_{zz}\phi -c(t)\partial_z \phi = f(t,\phi),& z\in\R,\ t>-n, \\ \phi(z,-n)=\ol\phi(z), & z\in\R. \end{array} \right. \Ff The comparison principle implies that $\ul\phi\leq\phi_n\leq\ol\phi$ and, since $\ol\phi$ is nonincreasing, that $\phi_n(\.,t)$ is nonincreasing too. Owing to the parabolic estimates and the embedding theorem, using a diagonal extraction method we can find a subsequence of $(\phi_n)_{n\in\N}$ converging weakly in $W^{2,1}_p(K)$ and strongly in $L^\infty(K)$, for any compact $K\subset\R\times\R$ and any $p<\infty$, to a solution $\phi$ of $$\partial_t \phi-\partial_{zz}\phi -c(t)\partial_z \phi = f(t,\phi),\quad z\in\R,\ t\in\R.$$ The function $\phi$ is nonincreasing in $z$ and satisfies $\ul\phi\leq\phi\leq\ol\phi$. Applying the parabolic strong maximum principle to $\phi(z-z_0,t)-\phi(z,t)$, for every $z_0>0$, we find that $\phi$ is decreasing in~$z$. It remains to prove that $\phi(-\infty,t)=1$ uniformly with respect to $t\in\R$. Set $$\theta:=\lim_{z\to-\infty}\inf_{t\in\R}\phi(z,t).$$ Our aim is to show that $\theta=1$. Let $(t_n)_{n\in\N}$ be such that $\lim_{n\to\infty}\phi(-n,t_n)=\theta$. We would like to pass to the limit in the sequence of equations satisfied by the $\phi(\.-n,\.+t_n)$, but this is not possible due to the presence of the drift term. To overcome this difficulty we come back to the fixed coordinate system by considering the functions $(v_n)_{n\in\N}$ defined by $$v_n(z,t):=\phi (z-n-\int_{t_n}^{t_n+t}c(s)ds,t+t_n).$$ These functions are solutions of $$\partial_t v_n-\partial_{zz}v_n= f(t+t_n,v_n),\quad z\in\R,\ t\in\R,$$ and satisfy $\lim_{n\to\infty}v_n(0,0)=\theta$ and $\liminf_{n\to\infty}v_n(z,t)\geq\theta$ locally uniformly in $(z,t)\in\R\times\R$. The same diagonal extraction method as before shows that $(v_n)_{n\in\N}$ converges (up to subsequences) weakly in $W^{2,1}_{p,loc}$ and strongly in $L^\infty_{loc}$ to some function $v$ satisfying $$\partial_t v-\partial_{zz}v=g(z,t)\geq0,\quad\text{for a.e.~}z\in\R,\ t\in\R,$$ where $g(z,t)$ is the weak limit in $L^p_{loc}(\R\times\R)$ of (a subsequence of) $f(t+t_n,v_n)$. Moreover, $v$ attains its minimum value $\theta$ at $(0,0)$. As a consequence, the strong maximum principle yields $v=\theta$ in $\R\times(-\infty,0]$. In particular, $g=0$ a.e.~in $\R\times(-\infty,0)$. Using the Lipschitz continuity of $f(t,\.)$, we then derive $$\text{for a.e.~}(z,t)\in\R\times(-\infty,0),\quad 0=g(z,t)\geq\essinf_{s\in\R}f(s,\theta).$$ Therefore, hypothesis \ref{hyp-pos} yields $\theta=0$ or $1$. The proof is then concluded by noticing that $$\theta=\lim_{z\to-\infty}\inf_{t\in\R}\phi(z,t) \geq\inf_{t\in\R}\phi(\xi, t)\geq\inf_{t\in\R} \ul\phi(\xi,t)>0$$ ($\xi$ being the constant in Proposition \ref{pro:subsuper}). 
\end{proof} \subsection{Non-existence of generalized transition waves when $\ul c < 2\sqrt{\ul \mu}$.}\label{sec:nonE} This section is dedicated to the proof of the lower bound for the least mean of admissible speeds, that is, Theorem \ref{thm-existence} part 2. This is achieved by comparing the generalized transition waves with some subsolutions whose level-sets propagate at speeds less than $2\sqrt{\ul \mu}$. The construction of the subsolution is based on an auxiliary result which is quoted from \cite{BHRossi} and recalled in the Appendix here. \begin{lem}\label{lem:0nT} Let $g\in L^\infty(\R)$, $\eta\in\N\cup\{+\infty\}$ and $T>0$ be such that $$\gamma_*:=\inf_{\su{k\in\N}{k\leq\eta}}\, 2\sqrt{\frac1T\,\int_{(k-1)T}^{kT}g(s)\, ds}>0.$$ Then for all $0<\gamma<\gamma_*$, there exists a uniformly continuous subsolution $\ul v$ of \Fi{0etaT} \partial_t v-\Delta v=g(t)v, \quad x\in\R^N,\ t\in(0,\eta T), \Ff such that $$0\leq\ul v\leq1,\qquad \ul v(x,0)=0 \text{ for }|x|\geq R,\qquad \inf_{\su{0\leq t<\eta T}{|x|\leq\gamma t}}\ul v(x,t)\geq C,$$ where $R$, $C$ only depend on $T$, $\gamma_*-\gamma$, $N$, $\|g\|_{L^\infty((0,\eta T))}$ and not on $\eta$. \end{lem} \begin{proof} Fix $\gamma\in(0,\gamma_*)$. By Lemma \ref{lem:h} in the Appendix, there exist $h\in C^{2,\alpha}(\R)$ and $r>0$, both depending on $\gamma$ and $\gamma_*-\gamma$, satisfying $$h=0\ \text{in }(-\infty,0],\qquad h'>0\ \text{in }(0,r),\qquad h=1\text{ in }[r,+\infty),$$ $$Q\leq \gamma+1,\quad 4C-Q^2\geq\frac12(\gamma_*-\gamma)^2 \qquad\Rightarrow\qquad-h''+Qh'-Ch<0\quad\text{in }(0,r).$$ Note that $r$, $h$ actually depend on $\gamma_*-\gamma$ and $\|g\|_\infty$, because $\gamma<\gamma_*\leq2\sqrt{\|g\|_\infty}$. We set $$\ul v(x,t):=e^{A(t)}h(R-|x|+\gamma t),$$ where $R>r$ and $A:\R\to\R$ will be chosen later. This function solves $$\partial_t \ul v-\Delta\ul v-g(t)\ul v =\left[-h''(\rho)+\left(\gamma+\frac{N-1}{|x|}\right)h'(\rho)-C(t)h(\rho)\right] e^{A(t)},\quad x\in\R^N,\ t\in(0,\eta T),$$ with $\rho=R-|x|+\gamma t$ and $C(t)=g(t)-A'(t)$. Since $h'$ is nonnegative and vanishes in $[r,+\infty)$, it follows that $$\partial_t \ul v-\Delta\ul v-g(t)\ul v \leq[-h''(\rho)+Qh'(\rho)-C(t)h(\rho)]e^{A(t)},\quad \text{for a.e.~} x\in\R^N,\ t\in(0,\eta T),$$ with $Q=\gamma+\frac{N-1}{R-r}$. We write $$4C(t)-Q^2=B(t)-4A'(t),\quad\text{where } B(t)=4 g(t)-\left(\gamma+\frac{N-1}{R-r}\right)^2,$$ and we compute $$m:=\inf_{\su{k\in\N}{k\leq\eta}}\frac1T \int_{(k-1)T}^{kT}B(s)\, ds=\gamma_*^2-\left(\gamma+\frac{N-1}{R-r}\right)^2.$$ Hence, since $\gamma_*^2-\gamma^2\geq(\gamma_*-\gamma)^2$, it is possible to choose $R$, depending on $N$, $r$, $\|g\|_\infty$ and $\gamma_*-\gamma$, such that $m\geq\frac12(\gamma_*-\gamma)^2$. By Remark \ref{rmq:lm>0} there exists a function $A\in W^{1,\infty}(\R)$ such that $$\min_{[0,\eta T)}(B-4A')=m,\qquad \|A\|_{L^\infty((0,\eta T))}\leq\frac T2 \|B\|_{L^\infty((0,\eta T))}\leq4T\|g\|_{L^\infty((0,\eta T))}.$$ Consequently, $4C-Q^2\geq\frac12(\gamma_*-\gamma)^2$ a.e.~in $(0,\eta T)$ and, up to increasing $R$ if need be, $Q\leq\gamma+1$. Therefore, $\ul v$ is a subsolution of \eq{0etaT}. This concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-existence} part 2)] Let $u$ be a generalized transition wave with speed $c$. Definition \ref{def-gtf} yields \Fi{Le} \lim_{L\to+\infty}\,\inf_{\su{x\.e<-L}{t\in\R}}u(x+e\int_0^tc(s)ds,t)=1, \qquad \lim_{L\to+\infty}\,\sup_{\su{x\.e>L}{t\in\R}}u(x+e\int_0^tc(s)ds,t)=0.
\Ff By the definition of least mean, for all $\e>0$, there exists $T>0$ such that $$\fa T'\geq T,\quad \frac1{T'}\inf_{t\in\R}\int_t^{t+T'}c(s)\,ds<\ul c+\e,\qquad \fa t\in\R,\quad\frac1T\int_t^{t+T}\mu(s)\,ds>\ul\mu-\e.$$ For $n\in\N$, let $t_n$ be such that $$\frac1{nT}\int_{t_n}^{t_n+nT}c(s)\,ds<\ul c+2\e.$$ Taking $\e$ small enough in such a way that $2\sqrt{\ul\mu-2\e}>\e$, we find that $$\gamma_*^n:=\inf_{k\in\{1,\dots,n\}}2\sqrt{\frac1T\,\int_{(k-1)T}^{kT} (\mu(s+t_n)-\e)\, ds}>2\sqrt{\ul\mu-2\e}>\e.$$ Let $(\ul v_n)_{n\in\N}$ be the functions obtained by applying Lemma \ref{lem:0nT} with $g(t)=\mu(t+t_n)-\e$ and $\gamma=\gamma^n:=\gamma_*^n-\e$, and let $R$, $C$ be the associated constants, which are independent of $n$. By the regularity hypothesis on $f$, there exists $\sigma\in(0,1)$ such that $f(t,w)\geq(\mu(t)-\e)w$ for $w\in[0,\sigma]$. As a consequence, the functions $\ul u_n(x,t):=\sigma \ul v_n(x,t-t_n)$ satisfy $$\partial_t \ul u_n-\Delta \ul u_n\leq f(t,\ul u_n), \quad \text{for a.e.~} x\in\R^N,\ t\in(t_n,t_n+nT).$$ By \eq{Le}, for $L$ large enough we have that $$\fa t\in\R,\quad\inf_{|x|<R}u(x-Le+e\int_0^tc(s)ds,t)\geq\sigma.$$ Thus, up to replacing $u(x,t)$ with $u(x-Le,t)$, it is not restrictive to assume that $$\fa n\in\N,\ x\in\R^N,\quad u(x+e\int_0^{t_n}c(s)ds,t_n)\geq \ul u_n(x,t_n).$$ The comparison principle then yields $$\fa n\in\N,\ x\in\R^N,\ t\in(t_n,t_n+nT),\quad u(x+e\int_0^{t_n}c(s)ds,t)\geq \ul u_n(x,t).$$ Therefore, $$\liminf_{n\to\infty}u(\gamma^n nT e+e\int_0^{t_n}c(s)ds,t_n+nT)\geq \liminf_{n\to\infty}\ul u_n(\gamma^n nT,t_n+nT)\geq\sigma C,$$ whence, owing to \eq{Le}, we deduce that \[\begin{split} +\infty &>\limsup_{n\to\infty}\left(\gamma^n nT +\int_0^{t_n}c(s)ds -\int_0^{t_n+nT}c(s)ds\right)\\ &=\limsup_{n\to\infty}\left(\gamma^n nT -\int_{t_n}^{t_n+nT}c(s)ds\right)\\ &\geq \limsup_{n\to\infty}\left(2\sqrt{\ul\mu-2\e}-\e-\ul c-2\e\right)nT. \end{split}\] That is, $\ul c\geq2\sqrt{\ul\mu-2\e}-3\e$. Since $\e$ can be chosen arbitrarily small, we eventually infer that $\ul c\geq2\sqrt{\ul\mu}$. \end{proof} \begin{rmq} The same arguments as in the above proof yield the non-existence of fronts $(\phi,c)$ such that $\ul c_{\pm}<2\sqrt{\ul\mu_\pm}$, where, for a given function $g\in L^\infty(\R)$, $$\underline{g}_+:=\sup_{T>0}\,\inf_{t>0} \frac{1}{T}\int_t^{t+T}g(s)\,ds,\qquad \underline{g}_-:=\sup_{T>0}\,\inf_{t<0} \frac{1}{T}\int_{t-T}^{t} g(s)\,ds.$$ \end{rmq} \section{The random stationary ergodic case} To start with, we show that the temporal least mean of a stationary ergodic function is almost surely independent of $\omega$. \begin{prop}\label{pro:lmergodic} For a given bounded measurable function $\tilde{g}:\O\to\R$, the mapping $$\omega\,\mapsto\,\sup_{T>0}\,\inf_{t\in\R} \frac{1}{T}\int_t^{t+T}\tilde{g}(\pi_s\omega)\,ds$$ is constant on a set of probability measure $1$. We call this constant value the {\em least mean} of the random stationary ergodic function $g$ defined by $g(t,\omega):= \tilde{g}(\pi_t \omega)$ and we denote it by $\ul g$. \end{prop} \begin{proof} The result follows immediately from the ergodicity of the process $(\pi_t)_{t\in\R}$. Indeed, setting $$G(\omega):=\sup_{T>0}\,\inf_{t\in\R}\frac{1}{T}\int_t^{t+T}\tilde{g}(\pi_s\omega)\,ds,$$ for given $\e>0$, there exists a set $\mc{A}_\e\in\mc{F}$, with $\P(\mc{A}_\e)>0$, such that $G(\omega)<\essinf_\O G+\e$ for $\omega\in\mc{A}_\e$. It is easily seen that $\mc{A}_\e$ is invariant under the action of $(\pi_t)_{t\in\R}$, and then $\P(\mc{A}_\e)=1$.
Owing to the arbitrariness of $\e$, we infer that $G$ is almost surely equal to $\essinf_\O G$. \end{proof} The proof of Theorem \ref{thm-existencerandom} relies on a general uniqueness result for the profile of generalized transition waves that share the same behavior at infinity. This, in turn, is derived using the following strong maximum principle-type property. \begin{lem}\label{lem:I} Let $c\in L^\infty(\R)$ and assume that $f$ satisfies the regularity conditions in Hypothesis \ref{hyp:KPP}. Let $I$ be an open interval and $\vp,\psi$ be respectively a generalized sub and supersolution of $$\partial_t\phi-\partial_{zz}\phi-c(t)\partial_z\phi =f(t,\phi),\quad z\in I,\ t\in\R,$$ which are uniformly continuous and satisfy $0\leq\vp\leq\psi\leq1$ in $I\times\R$. Then, the set $$\{z\in I\ : \ \inf_{t\in\R}(\psi-\vp)(z,t)=0\}$$ is either empty or coincides with $I$. \end{lem} \begin{proof} Clearly, it is sufficient to prove the result for strong sub and supersolutions. We achieve this by showing that the set $$J:=\{z\in I\ : \ \inf_{t\in\R}(\psi-\vp)(z,t)=0\}$$ is open and closed in the topology of $I$. That it is closed follows immediately from the uniform continuity of $\vp$ and $\psi$. Let us show that it is open. Suppose that there exists $z_0\in J$. There is a sequence $(t_n)_{n\in\N}$ such that $(\psi-\vp)(z_0,t_n)$ tends to $0$ as $n$ goes to infinity. For $n\in\N$, define $\Phi_n(z,t):=(\psi-\vp)(z,t+t_n)$. These functions satisfy $$\partial_t\Phi_n-\partial_{zz}\Phi_n-c(t+t_n)\partial_z\Phi_n -\zeta(z,t+t_n)\Phi_n\geq0\quad \text{for a.e.~}z\in I,\ t\in\R,$$ where $$\zeta(z,t):=\frac{f(t,\psi)-f(t,\vp)}{\psi-\vp}$$ belongs to $L^\infty(I\times\R)$ due to the Lipschitz-continuity of $f$. Let $\delta>0$ be such that $[z_0-\delta,z_0+\delta]\subset I$. We now make use of the parabolic weak Harnack inequality (see e.g. Theorem 7.37 in \cite{Lie}). It provides two constants $p,C>0$ such that $$\fa n\in\N,\quad \|\Phi_n\|_{L^p((z_0-\delta,z_0+\delta)\times(-2,-1))} \leq C\inf_{(z_0-\delta,z_0+\delta)\times(-1,0)}\Phi_n \leq C\Phi_n(z_0,0).$$ Whence $(\Phi_n)_{n\in\N}$ converges to $0$ in $L^p((z_0-\delta,z_0+\delta)\times(-2,-1))$. By the Arzel\`a-Ascoli theorem we then infer that, up to subsequences, $(\Phi_n)_{n\in\N}$ converges to $0$ uniformly in $(z_0-\delta,z_0+\delta)\times(-2,-1)$. This means that $(z_0-\delta,z_0+\delta)\subset J$. \end{proof} \begin{prop}\label{pro:!} Assume that $c\in L^\infty(\R)$ and that $f$ satisfies (\ref{hyp-vois1}). Let $\vp,\ \psi$ be a subsolution and a positive supersolution of (\ref{def-tf}) which are uniformly continuous, satisfy $\vp(+\infty,t)=0$ and $\psi(-\infty,t)=1$ uniformly in $t\in\R$, and $$0\leq\vp,\psi\leq1,\qquad \psi(\.,t) \text{ is nonincreasing},\qquad\fa\tau>0,\quad \lim_{R\to+\infty}\sup_{\su {z>R}{t\in\R}}\frac{\vp(z,t)}{\psi(z-\tau,t)}<1.$$ Then $\vp\leq\psi$ in $\R\times\R$. \end{prop} \begin{proof} Since $\vp,\ \psi$ are uniformly continuous and satisfy $\vp(+\infty,t)=0$, $\psi(-\infty,t)=1$ uniformly in $t$, applying Lemma \ref{lem:I}, first with $\psi\equiv1$ and then with $\vp\equiv0$, we derive \Fi{supinf} \fa r\in\R,\quad\sup_{(r,+\infty)\times\R}\vp<1,\quad \inf_{(-\infty,r)\times\R}\psi>0. \Ff Let $\delta\in(0,1)$ be the constant in (\ref{hyp-vois1}). By hypothesis, there exists $\rho\in\R$ such that $\psi>1-\delta$ in $(-\infty,\rho)\times\R$. Let $\chi:\R\to[0,1]$ be a smooth function satisfying $\chi=1$ in $(-\infty,\rho]$, $\chi=0$ in $[\rho+1,+\infty)$.
Define the family of functions $(\psi^{\e,\tau})_{\e,\tau\geq0}$ by setting $$\psi^{\e,\tau}(z,t):=[1+\e\chi(z)]\psi(z-\tau,t).$$ Since $\lim_{\e,\tau\to0^+}\psi^{\e,\tau}\equiv\psi$, the statement is proved if we show that $\psi^{\e,\tau}\geq\vp$ for all $\e,\tau>0$. The $\psi^{\e,\tau}$ are nondecreasing with respect to both $\e$ and $\tau$. Moreover, there exists $z_0\in\R$ such that $\psi^{0,1}>\vp$ in $(z_0,+\infty)\times\R$. On the other hand, for all $\e>0$, there exists $\tau\geq0$ such that $\psi^{\e,\tau}>1\geq\vp$ in $(-\infty,z_0]\times\R$. Consequently, for all $\e>0$, we have that $\psi^{\e,\tau}\geq\vp$ for $\tau$ large enough. Define $$\fa\e>0,\quad\tau(\e):=\min\{\tau\geq0\ : \ \psi^{\e,\tau}\geq\vp\}.$$ The function $\e\mapsto\tau(\e)$ is nonincreasing and it holds that $\psi^{\e,\tau(\e)}\geq\vp$. We argue by contradiction, assuming that there exists $\t\e>0$ such that $\tau(\t\e)>0$. By hypothesis, we have that \Fi{psi>vp} \exists\, h>1,\ R\in\R,\quad \psi^{0,\tau(\t\e)/2}\geq h\vp\quad\text{in }(R,+\infty)\times\R. \Ff Fix $\e\in(0,\t\e]$. We know that $\tau(\e)\geq\tau(\t\e)>0$ and, for $\tau\in(0,\tau(\e))$, $\inf_{\R\times\R}(\psi^{\e,\tau}-\vp)<0$. Hence, from \eq{psi>vp} it follows that, for $\tau\in[\tau(\t\e)/2,\tau(\e))$, $\inf_{(-\infty,R]\times\R}(\psi^{\e,\tau}-\vp)<0$. Thus, by the uniform continuity of $\psi$ we get $\inf_{(-\infty,R]\times\R}(\psi^{\e,\tau(\e)}-\vp)=0$. We now use the assumption (\ref{hyp-vois1}). Since $\psi^{0,\tau(\e)}\geq\psi>1-\delta$ in $(-\infty,\rho)\times\R$, for a.e.~$z\in(-\infty,\rho)$, $t\in\R$ we have $$\partial_t \psi^{\e,\tau(\e)}-\partial_{zz}\psi^{\e,\tau(\e)} -c(t)\partial_z \psi^{\e,\tau(\e)} \geq(1+\e)f(t,\psi^{0,\tau(\e)})\geq f(t,\psi^{\e,\tau(\e)})$$ (where we have extended $f$ by $0$ in $\R\times(1,+\infty)$). By hypothesis, we can find $R_\e<\rho$ such that $\inf_{(-\infty,R_\e]\times\R}\psi^{\e,\tau(\e)}>1\geq\sup\vp$. Consequently, Lemma \ref{lem:I} yields $\inf_{(-\infty,\rho-1]\times\R}(\psi^{\e,\tau(\e)}-\vp)>0$. It follows that $R>\rho-1$ and that \Fi{psi=vp} \inf_{(\rho-1,R]\times\R}(\psi^{\e,\tau(\e)}-\vp)=0. \Ff In order to pass to the limit $\e\to0^+$ in the above expression, we notice that, by \eq{supinf}, there exists $\tau_0>0$ such that $$\inf_{(-\infty,R]\times\R}\psi^{0,\tau_0}>\sup_{[\rho-1,R]\times\R}\vp.$$ As a consequence, \eq{psi=vp} implies that the nonincreasing function $\tau$ is bounded from above by $\tau_0$, and then there exists $\tau^*:=\lim_{\e\to0^+}\tau(\e)$. Letting $\e\to0^+$ in the inequality $\psi^{\e,\tau(\e)}\geq\vp$ and in \eq{psi=vp} yields $$\psi^{0,\tau^*}\geq\vp\ \text{ in }\R\times\R,\qquad\inf_{[\rho-1,R]\times\R}(\psi^{0,\tau^*} -\vp)=0.$$ Thus, using once again Lemma \ref{lem:I} and then the inequality \eq{psi>vp} we derive $$0=\inf_{t\in\R}(\psi^{0,\tau^*}-\vp)(R,t)\geq\inf_{t\in\R}(\psi^{0, \tau(\t\e)/2} -\vp)(R,t) \geq(1-h^{-1})\inf_{t\in\R}\psi^{0,\tau(\t\e)/2}(R,t).$$ This contradicts \eq{supinf}. We have shown that $\tau(\e)=0$ for all $\e>0$. That is, $\psi^{\e,\tau}\geq\vp$ for all $\e,\tau>0$. \end{proof} We are now in a position to prove Theorem \ref{thm-existencerandom}. \begin{proof}[Proof of Theorem \ref{thm-existencerandom}] First, we fix $\omega\in\Omega$ such that $\mu(\cdot,\omega)$ admits $\underline{\mu}$ as its least mean. By Theorem \ref{thm-existence} there exists a generalized transition wave in direction $e$ with a speed $c(t,\omega)$ such that $\ul c(\.,\omega)= \gamma$ and a profile $\phi(z,t,\omega)$ which is decreasing with respect to $z$.
Moreover, $c(t,\omega)=\kappa+\kappa^{-1}\mu(t,\omega)$, where $\kappa$ is the unique solution in $(0,\sqrt{\underline{\mu}})$ of $\kappa+\kappa^{-1} \underline{\mu} =\gamma$. For $s\in\R$, we set $\phi^s(z,t,\omega):=\phi(z,t-s,\pi_s\omega)$. As $f$ and $c$ are random stationary, the functions $\phi$ and $\phi^s$ satisfy the same equation $$\partial_t \phi-\partial_{zz}\phi -c(t,\omega)\partial_z\phi = f(t,\omega,\phi),\quad z\in\R,\ t\in\R.$$ We further know that $\ul\phi\leq\phi\leq\ol\phi$, where $\underline{\phi}=\underline{\phi}(z,t,\omega)$ and $\overline{\phi}=\overline{\phi}(z)$ are given by Proposition \ref{pro:subsuper}. We point out that $\ol\phi(z)\equiv\min(e^{-\kappa z},1)$ does not depend on $\omega$. For $\tau>0$, we get $$\lim_{R\to+\infty}\sup_{\su {z>R}{t\in\R}}\frac{\phi^s(z,t,\omega)}{\phi(z-\tau,t,\omega)}\leq \lim_{R\to+\infty}\sup_{\su {z>R}{t\in\R}}\frac{\ol\phi(z)} {\ul\phi(z-\tau,t,\omega)}<1.$$ Hence, Proposition \ref{pro:!} gives $\phi^s(\cdot,\cdot,\omega)\leq\phi (\cdot,\cdot,\omega)$. Exchanging the roles of $\phi^s$, $\phi$ we derive $\phi^s(\cdot,\cdot,\omega)\equiv\phi(\cdot,\cdot,\omega)$ for almost every $\omega$, that is, $$\fa (z,t,s,\omega)\in\R\times\R\times \R\times \Omega,\quad \phi (z,t+s,\omega)=\phi (z,t,\pi_s\omega).$$ Set $\tilde{\phi} (z,\omega):= \phi (z,0,\omega)$ and $\t c(\omega):=c(0,\omega)$. We see that the function $u$ defined by $u(x,t,\omega):= \t\phi (x\cdot e -\int_0^t \tilde{c}(\pi_s\omega)ds,\pi_t\omega)$ is a random transition wave with random speed $\tilde{c}$ and random profile $\tilde{\phi}$. \end{proof} \section{Proof of the spreading properties} This section is dedicated to the proof of Proposition \ref{prop-spreading}. We prove separately properties \eq{spreadinginf} and \eq{spreadingsup}. \begin{proof}[Proof of \eq{spreadinginf}] Let $\gamma<\gamma'<2\sqrt{\ul\mu_+}$. Take $0<\e<\ul\mu_+$ in such a way that $\gamma'<2\sqrt{\ul\mu_+-\e}$. By definition of least mean over $\R_+$ there exists $T>0$ such that $$2\sqrt{\inf_{t>0}\frac1T\int_t^{t+T}(\mu(s)-\e)ds}>\gamma'.$$ Hence $$\gamma_*:=\inf_{k\in\N}\,2\sqrt{\frac1T\int_{(k-1)T}^{kT}(\mu(s+1)-\e)ds} >\gamma'.$$ Let $\ul v$ be the subsolution given by Lemma \ref{lem:0nT} with $g(t)=\mu(t+1)-\e$, $\eta=+\infty$ and $\gamma$ replaced by $\gamma'$. Since $\ul v(\.,0)$ is compactly supported and $u(\.,1)$ is positive by the strong maximum principle, it is possible to normalize $\ul v$ in such a way that $\ul v(\.,0)\leq u(\.,1)$. Moreover, by further decreasing $\ul v$ if need be, it is not restrictive to assume that $$\partial_t\ul v-\Delta\ul v\leq(\mu(t+1)-\e)\ul v\leq f(t+1,\ul v), \quad \text{for a.e.~} x\in\R^N,\ t>0.$$ Therefore, $u(x,t+1)\geq \ul v (x,t)$ for $x\in\R^N$, $t>0$ by the comparison principle. Whence, \Fi{m} m:=\liminf_{t\to +\infty}\inf_{|x|\leq \gamma't} u(x,t+1) \geq\liminf_{t\to +\infty}\inf_{|x|\leq \gamma' t}\ul v(x,t)>0. \Ff If $f$ were smooth with respect to $t$, then the conclusion would follow from Hypothesis \ref{hyp:KPP} (condition (\ref{hyp-pos}) in particular) through classical arguments. But as we only assume $f$ to be bounded and measurable in $t$, some more arguments are needed here.
Set $$\theta:=\liminf_{t\to +\infty}\inf_{|x|\leq \gamma t} u(x,t),$$ and let $(x_n)_{n\in\N}$ and $(t_n)_{n\in\N}$ be such that $$\lim_{n\to\infty}t_n=+\infty,\qquad \forall n\in\N,\ \ |x_n|\leq\gamma t_n, \qquad\lim_{n\to\infty}u(x_n,t_n)=\theta.$$ Usual arguments show that, as $n\to\infty$, the functions $u_n(x,t):=u(x+x_n,t+t_n)$ converge (up to subsequences) weakly in $W^{2,1}_{p,loc}(\R^N\times\R)$, for any $p<\infty$, and strongly in $L^\infty_{loc}(\R^N\times\R)$ to a solution $v$ of $$\partial_t v-\Delta v= g(x,t),\quad x\in\R^N,\ t\in\R,$$ where $g$ is the weak limit in $L^p_{loc}(\R^N\times\R)$ of $f(t+t_n,u_n)$. We further see that $v(0,0)=\theta$ and that $v$ is bounded from below by the constant $m$ in \eq{m}. Let $(\xi_k,\tau_k)_{k\in\N}$ be a minimizing sequence for $v$, that is, $$\lim_{k\to\infty}v(\xi_k,\tau_k)=\eta:=\inf_{\R^N\times\R}v.$$ In particular, $0<m\leq\eta\leq\theta$. The same arguments as before, together with the strong maximum principle, imply that the $v_k(x,t):=v(x+\xi_k,t+\tau_k)$ converge (up to subsequences) to $\eta$ in, say, $L^\infty(B_1\times(-1,0))$. For $h\in\N\backslash \{0\}$, let $k_h, n_h\in\N$ be such that $$\|v_{k_h}-\eta\|_{L^\infty(B_1\times(-1,0))}<\frac1h,\qquad \|u_{n_h}-v\|_{L^\infty(B_1(\xi_{k_h})\times (\tau_{k_h}-1,\tau_{k_h}))}<\frac1h.$$ Hence, the functions $\t u_h(x,t):=u(x+x_{n_h}+\xi_{k_h}, t+t_{n_h}+\tau_{k_h})$ satisfy $\|\t u_h-\eta\|_{L^\infty(B_1\times(-1,0))}<\frac2h$ and then $(\t u_h)_{h\in\N}$ converges to $\eta$ uniformly in $B_1\times(-1,0)$. On the other hand, it converges (up to subsequences) to a solution $\t v$ of $$\partial_t\t v-\Delta\t v=\t g(x,t),\quad x\in B_1,\ t\in(-1,0),$$ where $\t g$ is the weak limit in $L^p(B_1\times(-1,0))$ of $f(t+t_{n_h}+\tau_{k_h},\t u_h)$. As a consequence, $$\text{for a.e.~}x\in B_1,\ t\in(-1,0),\quad 0=\t g(x,t)\geq\essinf_{s\in\R} f(s,\eta).$$ Hypothesis \ref{hyp-pos} then yields $1=\eta\leq\theta$. \end{proof} \begin{proof}[Proof of \eq{spreadingsup}] Let $R>0$ be such that $\supp u_0\subset B_R$. For all $ \kappa>0$ and $e\in\mathbb{S}^{N-1}$, we define $$v_{\kappa,e} (x,t):= e^{-\kappa (x\cdot e-R-\kappa t)+\int_0^t \mu (s)ds}. $$ Direct computation shows that the functions $v_{\kappa,e}$ satisfy $$ \partial_t v_{\kappa,e} -\Delta v_{\kappa,e} -\mu (t) v_{\kappa,e} =0,\quad x\in\R^N,\ t>0, \qquad\fa x\in B_R,\quad v_{\kappa,e}(x,0)>1.$$ Hence, by (\ref{hyp-KPP}), they are supersolutions of \eq{Cauchy} and are therefore greater than $u$, by the comparison principle. Let $\sigma>0$, $x\in\R^N$ and $t>0$ be such that $|x|\geq2\sqrt{t\int_0^t\mu(s)ds}+\sigma t$. Applying the inequality $u(x,t)\leq v_{\kappa,e}(x,t)$ with $e=\frac x{|x|}$ and $\kappa=\frac{|x|-R}{2t}$ yields $$u(x,t)\leq\exp\left(-\frac{(|x|-R)^2}{4t} +\int_0^t\mu(s)ds\right).$$ If in addition $t>R/\sigma$ then $|x|-R\geq2\sqrt{t\int_0^t \mu (s)ds}+\sigma t-R>0$, whence $$u(x,t)\leq\exp\left(-\frac{\sigma^2t}{4}-\frac{R^2}{4t}-\sigma\sqrt{t\int_0^t\mu(s)ds} +R\sqrt{\frac1t\int_0^t\mu(s)ds}+\frac{R\sigma}2\right).$$ Since the right hand side tends to $0$ as $t\to+\infty$, \eq{spreadingsup} follows. \end{proof}
\section{Introduction} \label{sec:intro} The Whirlpool Galaxy (NGC~5194, M51) and its companion (NGC~5195) are a nearby interacting galaxy pair at a distance of $8.58\pm0.1\text{ Mpc}$ \citep{McQuinn2016} in the constellation Canes Venatici. NGC 5194 is a face-on grand-design spiral galaxy that lends itself well to studies of its spiral arms, globular clusters, and X-ray binaries (XRBs). There have been a large number of studies of the X-ray sources in M51 going back decades. \cite{Terashima2004} studied the X-ray point source population observed in two of the earliest M51 {\it Chandra X-Ray Observatory} (\chandra) observations (ObsIds 354 \& 1622). In a follow-up, \cite{Terashima2006} investigated the candidate optical counterparts to those X-ray point sources using \hst\ (with the additional \chandra\ observation ObsId 3932). More recently, \cite{Kuntz2016} used most of the \chandra\ data available at the time. X-ray binaries are gravitationally bound systems containing a compact object (black hole or neutron star) accreting matter from a main-sequence or massive star companion. XRBs fall generally into two main classes: low-mass (LMXBs) and high-mass (HMXBs), distinguished by the mass of the companion star. An LMXB has X-ray emission originating in an accretion disk supplied by Roche-lobe overflow of a low-mass ($M_\text{donor}\le 1.0\,M_\odot$) stellar companion. An HMXB has X-ray emission originating in the accretion of the stellar wind of a high-mass ($M_\text{donor}>8.0\,M_\odot$) stellar companion. HMXBs are divided into two sub-classes: those with O-type companions and those with Be companions. Some excellent reviews on XRBs are found in \cite{Shapiro1983}, \cite{Tauris2006}, and \cite{Remillard2006}. Images of nearby spiral galaxies taken with \chandra\ reveal bright X-ray sources, many of which are believed to be HMXBs. \citet{Fabbiano1989,Fabbiano2006} present an extensive summary of the X-ray source populations in nearby spiral galaxies. X-ray properties such as X-ray luminosity, hardness ratios, and variability can be utilized to identify and study X-ray binaries \citep{Kaaret2001,Prestwich2003,Luan2018,Jin2019,Sell2019}. \hst\ provides another avenue to investigate XRBs. While the optical counterparts of LMXBs are too faint to detect, the massive donor stars of HMXBs can be detected with \hst\ at the distance of M51. For example, supergiant donors are bright, with $M_{V}$ brighter than $\approx-6.5$ \citep{CI1998}, while Be donors tend to be fainter, with typical $M_{V}$ ranging from $-2$ to $-5$ \citep{McBride2008}. Deep photometry with \hst\ can therefore be used to distinguish between the two main classes of HMXBs. In this paper we present an analysis of \chandra\ X-ray and \hst\ optical data for the X-ray sources and their candidate stellar counterparts in the M51 system. In \S\ref{sec:observations} we describe the X-ray and optical observations used in this study; we discuss the results in \S\ref{sec:results} and present our conclusions in \S\ref{sec:conclusions}. Due to the large amount of information, here (Paper I) we primarily present the methodology used to compile our results. We will present a more in-depth analysis of the X-ray source population, as well as analysis of individual sources, in a follow-up study (Paper II). \section{Observations and Data Reduction} \label{sec:observations} \subsection{Chandra X-ray Observatory}\label{subsec:Chandra} The Whirlpool Galaxy has been the focus of many \chandra\ observing programs since 2000.
For this work, we select data with exposure time $t_{\text{exp}} \ge 10.0\text{ ks}$, resulting in 13 \chandra\ observations, the longest of which is 189\,ks. Information about the X-ray data is listed in Table~\ref{tbl-1}. The data were taken with the Advanced CCD Imaging Spectrometer (ACIS) instrument onboard \chandra. The data were analyzed with the Chandra Interactive Analysis of Observations (CIAO) software version $4.10$ and Chandra Calibration Data Base (CALDB) version 4.7.9.\footnote{\href{http://cxc.harvard.edu/ciao/}{http://cxc.harvard.edu/ciao/}} We aligned all datasets with the \emph{US Naval Observatory Robotic Astrometric Telescope} (USNO URAT1\footnote{\href{https://www.usno.navy.mil/USNO/astrometry/optical-IR-prod/urat}{https://www.usno.navy.mil/USNO/astrometry/optical-IR-prod/urat}}) catalog using the CIAO scripts \texttt{wcs\_match} and \texttt{wcs\_update}. Taking into account the new aspect solution and bad-pixel files, the observation event files were merged into one event file using \texttt{merge\_obs}. The CIAO tool \texttt{mkpsfmap} was run on the full merged event file, taking the minimum PSF map size at each pixel location. We used CIAO's Mexican-hat wavelet source-detection routine \texttt{wavdetect} \citep{Freeman2002} on the merged data to create source lists. Wavelets of 1, 2, 4, 6, 8, 12, 16, 24, and 32 pixels and a detection threshold of $10^{-6}$ were used, which typically results in one spurious detection per million pixels. We followed standard CIAO procedures\footnote{\url{http://cxc.harvard.edu/ciao/threads/wavdetect_merged/}}, using an exposure-time-weighted average PSF map in the calculation of the merged PSF. We detected a total of 497 X-ray sources in the merged dataset. In this paper we focus on the sources that also lie within the \hst\ field of view, of which there are 334 (Figure~\ref{fig:m51}). The \texttt{srcflux} CIAO tool was then run individually on each observation (using the coordinates found by \texttt{wavdetect}). The data have been restricted to the energy range between 0.5 and 7.0\,keV and filtered in three energy bands, 0.5--1.2\,keV (soft), 1.2--2.0\,keV (medium), and 2.0--7.0\,keV (hard). We corrected our source catalog for the effects of neutral hydrogen absorption along the line of sight using the Galactic Neutral Hydrogen Density Calculator (COLDEN\footnote{\href{https://cxc.harvard.edu/toolkit/colden.jsp}{https://cxc.harvard.edu/toolkit/colden.jsp}}) tool, finding a mean neutral hydrogen column density along the lines of sight to the sources of $n_{\rm {H}} = (1.53\pm0.03)\times10^{20}\text{ cm}^{-2}$. Our fluxes are consistent with the Chandra Source Catalog v2 (CSC\footnote{\url{https://cxc.harvard.edu/csc/}}).
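For orientation, the detection step can be scripted through CIAO's Python interface. The following is a simplified sketch of ours, assuming the standard \texttt{ciao\_contrib.runtool} wrappers; the file names are placeholders and the parameters simply mirror the values quoted above, so it is not the literal pipeline used in this work.
\begin{verbatim}
# Simplified sketch of the wavdetect step (assumes a CIAO installation
# providing the ciao_contrib.runtool wrappers; file names are placeholders).
from ciao_contrib.runtool import wavdetect

wavdetect.punlearn()
wavdetect(infile="merged_broad.img",       # merged 0.5-7.0 keV image
          psffile="merged_psfmap.fits",    # exposure-weighted PSF map
          outfile="merged_src.fits",       # resulting source list
          scellfile="merged_scell.fits",
          imagefile="merged_recon.fits",
          defnbkgfile="merged_nbkg.fits",
          scales="1 2 4 6 8 12 16 24 32",  # wavelet scales in pixels
          sigthresh=1e-6)                  # ~1 spurious source per 1e6 pixels
\end{verbatim}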
\begin{deluxetable}{cccccc} \tablecaption{\chandra\ Observations\label{tbl-1}} \tablewidth{0pt} \tablehead{ \colhead{ObsId} & \colhead{Date} & \colhead{Detector} & \colhead{Mode\tablenotemark{a}} & \colhead{PI} & \colhead{Exp\tablenotemark{b}}} \startdata 354 & 2000-06-20 & ACIS-S & F & Wilson & 15 \\ 1622 & 2001-06-23 & ACIS-S & VF & Wilson & 29 \\ 3932 & 2003-08-07 & ACIS-S & VF & Terashima & 50 \\ 12562 & 2011-06-12 & ACIS-S & VF & Pooley & 10 \\ 12668 & 2011-07-03 & ACIS-S & VF & Soderberg & 10 \\ 13813 & 2012-09-09 & ACIS-S & F & Kuntz & 180 \\ 13812 & 2012-09-12 & ACIS-S & F & Kuntz & 180 \\ 15496 & 2012-09-19 & ACIS-S & F & Kuntz & 40 \\ 13814 & 2012-09-20 & ACIS-S & F & Kuntz & 190 \\ 13815 & 2012-09-23 & ACIS-S & F & Kuntz & 68 \\ 13816 & 2012-09-26 & ACIS-S & F & Kuntz & 74 \\ 15553 & 2012-10-10 & ACIS-S & F & Kuntz & 38 \\ 19522 & 2017-03-17 & ACIS-I & F & Brightman & 40 \\ \enddata \tablenotetext{a}{F = ``Faint'', VF = ``Very Faint''} \tablenotetext{b}{Proposed exposure in ks.} \end{deluxetable} \subsection{Hubble Space Telescope} \label{subsec:hst} A six-image mosaic of M51 with the {\it Hubble Space Telescope} (\hst) Advanced Camera for Surveys was obtained by the Hubble Heritage Team\footnote{\href{https://archive.stsci.edu/prepds/m51/index.html}{https://archive.stsci.edu/prepds/m51/index.html}} (PI: Beckwith, program GO~10452) in January 2005 (see \citealt{Mutchler2005}). The pixel scale of these observations is $0.05''$\,pix$^{-1}$, corresponding to 2.1\,pc\,pix$^{-1}$ at the observed distance of M51. The full mosaic was taken in four bands, $I$, $V$, $B$, and $H\alpha$, with exposure times of $1360$, $1360$, $2720$, and $2720$ seconds, respectively. The total exposure time is thus $t_{\text{exp}} = 8160$\,s over 96 separate exposures. We identified sources in each of the four \hst\ images to align with the URAT1 catalog and improve the absolute astrometry of the images (as was done for the \chandra\ data). The common sources totaled 43, distributed across the M51 system. In IRAF, the command \texttt{ccmap} was run on all four of the \hst\ images. The \texttt{ccmap} command finds a six-parameter linear coordinate transformation (plate solution) that takes the $(X,Y)$ centroids and maps them to the more accurate astrometric positions (URAT1 catalog). In the four bands ($I,V,H\alpha,B$) the mean $(\text{RA},\text{Dec})$ offsets were $(0.142'',0.119'')$, $(0.141'',0.124'')$, $(0.143'',0.124'')$, and $(0.144'',0.117'')$, respectively. We identified candidate \hst\ point sources that fell within $0.5''\,(10\,\text{px})$ of the 334 \chandra\ X-ray point source centroids in our X-ray catalog. We chose $0.5''$ to limit the total number of sources in the catalog while making sure all candidate optical counterparts were identified. We used the Astropy-affiliated package \texttt{photutils}\footnote{\href{https://photutils.readthedocs.io/en/stable/}{https://photutils.readthedocs.io/en/stable/}} to perform photometry calculations on the candidate \hst\ sources. Within \texttt{photutils} we created a circular aperture of radius $r=3.0$ px around each source. The background counts were summed within an annulus centered on each \hst\ point source with inner radius $r_{in}=8.0$\,px and outer radius $r_{out}=11.0$\,px. We corrected for the encircled energy fraction (EEF) using the most recent ACS encircled energy values\footnote{\href{https://www.stsci.edu/hst/instrumentation/acs/data-analysis/aperture-corrections}{https://www.stsci.edu/hst/instrumentation/acs/data-analysis/aperture-corrections}}.
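In outline, this aperture-photometry step can be reproduced with a few \texttt{photutils} calls. The snippet below is a minimal sketch of ours: the file name, source position, zeropoint, and encircled-energy fraction are placeholder values, not the ones used to build the catalog.
\begin{verbatim}
import numpy as np
from astropy.io import fits
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

data = fits.getdata("m51_I.fits")           # placeholder image name
positions = [(1024.3, 988.7)]               # placeholder source centroid (px)

src = CircularAperture(positions, r=3.0)    # source aperture
ann = CircularAnnulus(positions, r_in=8.0, r_out=11.0)  # background annulus

phot = aperture_photometry(data, [src, ann])
bkg_per_px = phot["aperture_sum_1"] / ann.area
net = phot["aperture_sum_0"] - bkg_per_px * src.area    # net source counts

eef = 0.85    # placeholder encircled-energy fraction at r = 3 px
zpt = 25.52   # placeholder VegaMag zeropoint for the band
mag = -2.5 * np.log10(net / eef) + zpt
\end{verbatim}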
The output of \texttt{photutils} on the \hst\ data includes the corrected $(I,V,H\alpha,B)$ magnitudes in the VegaMag system\footnote{\href{https://www.stsci.edu/hst/instrumentation/acs/data-analysis/zeropoints}{https://www.stsci.edu/hst/instrumentation/acs/data-analysis/zeropoints}} for each candidate point source. \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{m51_final.pdf} \caption{\label{fig:m51}Combined \hst\ (r: F658N, g: F555W, b: F435W) image with candidate X-ray point source centroids overlaid as white crosses. North is to the right.} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\columnwidth]{fb0-9.pdf} \includegraphics[width=\columnwidth]{fb10-19.pdf} \caption{\label{fig:fb09} Broad X-ray flux light curves for the top twenty (by net counts) X-ray sources during ObsIds 13813, 13812, 15496, 13814, 13815, 13816, and 15553. {\bf Left}: Broad X-ray flux light curves of the brightest sources 1-5 (black curves) and the next brightest sources 6-10 (copper curves). {\bf Right}: Broad X-ray flux light curves of the next brightest sources 11-15 (black curves) and 16-20 (copper curves). The black triangles indicate upper limits, while the dashed black and copper line indicates that one particular source ($x_{id}=199$: RA: 13:30:06.0397, DEC: +47:15:42.477) was outside the FOV for ObsIDs 13814, 13815, and 13816 and thus had no measured X-ray counts in those observations, except for a 90\% upper limit on the counts during ObsID 13814. In both panels, in the 30-day window of these observations, some sources vary in flux by approximately two orders of magnitude. {\bf Note}: Line thickness increases with decreasing net counts within each group of five curves. Also note the different $y$-axis scales. The time-averaged mean broad flux error of these twenty sources over the 30-day window of observations in this figure is $\langle\delta F_{X,b}\rangle\simeq\,5.7\times10^{-15}\,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}$, and for most of the data points the error is within the line thickness.} \end{center} \end{figure*} \subsection{Optical Counterparts}\label{subsec:counterparts} Candidate point source optical counterparts were found by identifying the brightest \hst\ point source within the \chandra\ positional uncertainty of each X-ray source. We used a 90\% confidence-level positional uncertainty of $0.5''$, typical for a $5'$ off-axis X-ray source with 50 counts (see Eq.~12 in \citealt{Kim2007}). This positional uncertainty corresponds to $20.8$\,pc at the distance of M51. In total, there are 173 such candidate optical counterparts. We adopt this criterion because the \hst\ source closest to the X-ray centroid is not always the brightest and is often not visible in all four \hst\ bands. It is possible that the true physical counterpart is undetected in all four \hst\ bands, in which case this method selects the wrong source, but it should correctly capture the majority of the counterparts. We used the $I$-band images to select the brightest candidate optical counterpart within the \chandra\ positional uncertainty. If we select the closest candidate optical counterpart in the same way, we pick up $\sim 65\%$ (113/173) of the same sources; that is, 60 of the candidate \hst\ counterparts are the brightest, but not the closest, sources to the X-ray centroids.
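The brightest-within-radius selection can be expressed in a few lines. In the following sketch, the coordinates and magnitudes are made-up stand-ins for the \texttt{wavdetect} and \texttt{photutils} catalogs.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Made-up stand-ins for the X-ray centroids and the HST I-band catalog.
xray = SkyCoord(ra=[202.4842] * u.deg, dec=[47.2306] * u.deg)
hst = SkyCoord(ra=[202.4841, 202.4843] * u.deg,
               dec=[47.2305, 47.2307] * u.deg)
i_mag = np.array([23.4, 21.9])

radius = 0.5 * u.arcsec
counterpart = {}
for j, x in enumerate(xray):
    sep = x.separation(hst)            # angular offsets to all HST sources
    inside = sep < radius
    if inside.any():                   # brightest = smallest I magnitude
        counterpart[j] = np.flatnonzero(inside)[np.argmin(i_mag[inside])]
\end{verbatim}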
\section{Results \& Discussion}\label{sec:results} \subsection{X-ray Variability}\label{subsec:xrayvariability} We look for short-term X-ray variability using seven \chandra\ observations from 2012 (ObsIds 13813, 13812, 15496, 13814, 13815, 13816, and 15553). These observations span a one-month period (see Table~\ref{tbl-1}). In Figure~\ref{fig:fb09}, we plot the broad X-ray flux light curves of the brightest twenty (by net counts) X-ray sources. The brightest ten are shown in the left panel, while the next brightest ten are shown in the right panel. In the 30-day window of these observations, some sources vary in flux by approximately two orders of magnitude. We calculate a reduced chi-square statistic, $\chi^2_\nu$, for each broad flux light curve in the 30-day window as a measure of variability. We assume the null hypothesis that the underlying broad X-ray flux light curve is described by a uniform function whose value is the weighted mean of the flux across the seven observations in the 30-day window. The chi-square statistic $\chi^2$ used here is defined as \begin{align} \chi^2 &\equiv \sum\limits_{i=1}^\nu \frac{(F_i-\mu)^2}{\sigma_i^2}, \end{align} where $F_i$ is the X-ray flux of the source in the $i$th observation, $\mu$ is the weighted mean of the flux measurements, $\sigma_i^2$ is the variance of the $i$th flux measurement, and $\nu$ is the number of observations. The reduced chi-square statistic is simply $\chi^2_\nu\equiv\chi^2/\nu$. The mean of the chi-square distribution is $\nu$, so that $\chi^2_\nu = 1$ is a natural value with which to compare results. The value of $\chi^2_\nu$ should be approximately unity if the null hypothesis is to be accepted. Large values of $\chi^2_\nu$ indicate that the null hypothesis should be rejected. Thus, the sources with $\chi^2_\nu \ge 1$ are sources whose flux varies greatly in the 30-day window and we label them ``variable'' sources. Sources with $\chi^2_\nu \lesssim 1$ are sources that have a light curve in the 30-day window that is consistent with the null hypothesis (uniform flux). Approximately $80\%$ ($266/334$) of the sources are considered variable by our $\chi^2_\nu \ge 1$ criterion. Approximately $69\%$ (120/173) of the sources with at least one detected candidate stellar counterpart and no cluster counterparts are variable (see our upcoming follow-up Paper II for a discussion of cluster counterparts), while about $77\%$ (124/161) of the sources without a stellar or cluster counterpart are variable. In addition, about $76\%$ (22/29) of the X-ray sources that have both an associated candidate stellar source and candidate cluster are considered variable. There is a strong positive correlation between the variability and flux of the X-ray sources. Our findings are consistent with the inter-observation variability reported in the CSC (for sources that overlap, which is the majority of sources), even though we have limited our variability study to data within this 30-day window. We speculate that the observed strong correlation is due to the small relative uncertainties associated with very bright sources (see Figure~\ref{fig:fb09}), i.e., the time-averaged mean broad flux error over the 30-day window of observations is $\langle\delta F_{X,b}\rangle\simeq\,5.7\times10^{-15}\,{\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}$.
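For concreteness, the statistic reduces to a few lines per source. The sketch below is our minimal implementation of the definition above, with $\nu$ taken to be the number of flux measurements, following the convention of the text.
\begin{verbatim}
import numpy as np

def reduced_chi2(flux, err):
    """Chi-square of a light curve about its inverse-variance weighted
    mean, divided by nu = number of measurements (convention of the text)."""
    flux, err = np.asarray(flux), np.asarray(err)
    w = 1.0 / err**2
    mu = np.sum(w * flux) / np.sum(w)          # weighted mean flux
    return np.sum((flux - mu)**2 / err**2) / flux.size

# A flat light curve gives chi2_nu of order 1; a varying one gives >> 1.
print(reduced_chi2([1.0, 1.1, 0.9, 1.0], 4 * [0.1]))   # ~ 0.5
print(reduced_chi2([1.0, 9.0, 1.1, 0.9], 4 * [0.1]))   # >> 1
\end{verbatim}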
\subsection{X-ray Hardness Ratios}\label{subsec:hardness} We calculate two X-ray hardness ratios (HRs), ``soft'' and ``hard'' ($HR_1$ and $HR_2$, respectively), for all X-ray sources using the same seven observations as follows: \begin{align} HR_1 &\equiv \frac{M-S}{M+S}\quad\text{and}\\ HR_2 &\equiv \frac{H-M}{H+M}, \end{align} \noindent where $S$, $M$, and $H$ are the X-ray counts in each of the \chandra\ bands (soft, medium, and hard) discussed in \S\S\ref{subsec:Chandra}. We also calculate the associated uncertainty in each of the hardness ratios. In Figure~\ref{fig:hrchi2}, top panel, we plot the X-ray color-color diagram for all sources, colored by the logarithm of their reduced chi-square statistic calculated in the 30-day window discussed in \S\S\ref{subsec:xrayvariability}. The two X-ray colors are the measurements from the longest of the observations in the 30-day window, ObsId 13814. Hardness ratio diagrams, such as our Figure~\ref{fig:hrchi2} and Figure~4 in \cite{Prestwich2003} (which uses a different definition\footnote{In \cite{Prestwich2003}, they define the hard and soft X-ray colors as $HR_1\equiv(M-S)/T$ and $HR_2\equiv(H-M)/T$, respectively, where $S$, $M$, $H$, and $T\equiv S+M+H$ are the soft, medium, hard, and total X-ray counts, respectively.} of the X-ray hardness ratios), have long been used to help reveal the nature of X-ray sources. The majority of the variable sources ($\chi^2_\nu \ge 1$) lie in the XRB (LMXB and HMXB) regions of the figure (see e.g., \citealt{Prestwich2003}), while most of the low-variability sources lie in the region of the diagram that is generally occupied by thermal supernova remnants. However, it is well established that X-ray information alone is not enough to accurately identify the nature of unknown X-ray sources. Therefore, we use the X-ray colors together with optical information (see Section~\ref{subsec:opticalcounterparts}) to classify these sources. In Figure~\ref{fig:hrchi2} we also plot the X-ray color-color diagrams of the brightest twenty (by net counts) X-ray sources: the ten brightest in the middle panel and the next ten brightest in the bottom panel. Overlaid in all three plots are the hardness color evolution tracks of various accretion disk models. In blue are power law models with photon index $\Gamma$ increasing from $0.4$ to $4$, in orange are absorbed power law models with increasing hydrogen column density, in green are disk blackbody models with temperature ranging from $0.02$ to $2.0\,{\rm keV}$, and in red are absorbed bremsstrahlung models with temperature ranging from $0.1$ to $10.0\,{\rm keV}$. These color-color diagrams contain X-ray colors from all available data in the 30-day window, with appropriate $1\sigma$ error bars. Each source contributes multiple points of the same plotted color to the diagram, so its color-color evolution can be traced. Typically, the color-color evolution is $\lesssim 0.5$ in either color over the entire data set. This suggests that while some spectral change may occur over the 30-day period, the accretion process for these sources does not change dramatically. There are a few sources that appear to significantly change their spectral properties, as indicated by movement in the plane of the X-ray color-color diagram; see, for example, the middle panel of Figure~\ref{fig:hrchi2}.
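The colors and their uncertainties follow directly from the band counts. A minimal sketch with first-order error propagation, assuming hypothetical background-subtracted counts with Poisson errors, is:

\begin{verbatim}
# Sketch: X-ray colors (a - b)/(a + b) with first-order error
# propagation assuming Poisson errors sqrt(a), sqrt(b).
# Hypothetical counts; low-count (Bayesian) corrections omitted.
import numpy as np

def color(a, b):
    hr = (a - b) / (a + b)
    # d(hr)/da = 2b/(a+b)^2 and d(hr)/db = -2a/(a+b)^2, so
    # sigma^2 = 4ab(a+b)/(a+b)^4
    err = 2.0 * np.sqrt(a * b * (a + b)) / (a + b)**2
    return hr, err

S, M, H = 35.0, 52.0, 21.0      # example soft/medium/hard counts
hr1, e1 = color(M, S)           # HR1 = (M - S)/(M + S)
hr2, e2 = color(H, M)           # HR2 = (H - M)/(H + M)
\end{verbatim}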
Such movement in the color-color diagram could, however, also arise from a drop in flux, which would introduce significant uncertainties in the location of a source in the diagram due to low count statistics. The error bars are large enough for many of these sources that the spectral evolution cannot be confidently confirmed, and attempting to track the spectral evolution of fainter sources is not meaningful given the large uncertainties associated with the X-ray hardness ratio measurements. Detailed analysis of bright sources will be presented in Paper II. \begin{figure} \begin{center} \includegraphics[trim= 5mm 5mm 5mm 5mm, clip, width=\columnwidth]{Fig_3_top.pdf} \includegraphics[trim= 5mm 5mm 5mm 3mm, clip, width=\columnwidth]{Fig_3_middle.pdf} \includegraphics[trim= 5mm 5mm 5mm 3mm, clip, width=\columnwidth]{Fig_3_bottom.pdf} \caption{\label{fig:hrchi2} {\bf Top}: X-ray color-color diagram for X-ray sources, colored by the logarithm of their reduced chi-square statistic calculated in the 30-day window. The values of $HR_1$ and $HR_2$ in the figure are those of the longest observation (ObsId 13814) in the 30-day window. Overlaid in all three plots are the hardness color evolution tracks of various accretion disk models, see text for details. {\bf Middle}: Color-color diagram of the brightest ten sources (by net counts), with $1\sigma$ error bar shown in black. {\bf Bottom}: Color-color diagram of the next brightest ten sources (by net counts), with $1\sigma$ error bar shown in black. } \end{center} \end{figure} \subsection{X-ray Luminosity Function}\label{subsec:xrayluminosity} The X-ray luminosity function (XLF) for all 334 X-ray point source candidates in M51 can be approximated as a power law within a certain luminosity range. We use the differential luminosity function defined as \begin{align} \frac{dN}{dL} &= A\biggl(\frac{L}{L_0}\biggr)^{-\alpha},\label{eq:dXLF} \end{align} \noindent where the luminosity $L_0$ is an arbitrary lower limit and $A$ is a normalization constant. Integrating Eq.~\ref{eq:dXLF} from $L$ upward gives the cumulative luminosity function (for $\alpha > 1$): \begin{align} N(>L) &= \frac{AL_0}{\alpha-1}\biggl(\frac{L}{L_0}\biggr)^{1-\alpha}, \end{align} and the fractional luminosity function is given by \begin{align} f(>L) &= \frac{AL_0}{N_{tot}(\alpha-1)}\biggl(\frac{L}{L_0}\biggr)^{1-\alpha}, \end{align} where $f(>L)$ is the fraction of sources with luminosity greater than $L$ and $N_{tot}$ is the total number of sources. An important luminosity is the Eddington luminosity of a $1.4\,M_\odot$ compact object (the typical mass of a NS) accreting at the Eddington rate: \begin{align} \nonumber L_E &\equiv \dot{M}_Ec^2 = \nonumber\frac{4\pi Gm_pc}{\sigma_T}M\\ &\simeq 1.76\times10^{38}\text{ erg$\,$s$^{-1}$}\,\biggl(\frac{M}{1.4\,M_\odot}\biggr),\label{eq:Eddi} \end{align} \noindent where $\dot{M}_E$ is the Eddington accretion rate, $M$ is the mass of the accretor, $G$ is Newton's gravitational constant, $m_p$ is the proton mass, $c$ is the speed of light, and $\sigma_T$ is the Thomson scattering cross section for electrons. In Figure~\ref{fig:XLF}, we plot the combined XLF (total and fractional) for various cuts of the data. The purple curve is the full sample of the $86\%$ (288/334) of X-ray sources that have a measured X-ray luminosity in ObsId 13814 (the observation with the longest exposure time). The green curve is the $39\%$ (130/334) of X-ray sources that have a stellar counterpart in \hst\ (within 10\,px).
Across a few orders of magnitude of X-ray luminosity, starting at $L_{X,b} \ge 2\times10^{36}\text{ erg$\,$s$^{-1}$}$, the curves follow a power law $N(>L_{X,b})\propto L_{X,b}^{-0.65}$; i.e., we fit a power law to the differential luminosity function with $\alpha = 1.65\pm0.03$. This is consistent with XLFs for star-forming galaxies dominated by HMXBs, e.g., \cite{Lehmer2019}, who find $\alpha = 1.59\pm0.05$ for M51. The blue curve represents the $40\%$ (133/334) of X-ray sources that have no stellar or cluster counterparts within 10\,px; it has the same slope. The black vertical dashed line indicates the Eddington luminosity of a canonical NS (i.e., $1.4\,M_\odot$) accretor, $L_{Edd}\simeq 1.8\times10^{38}\text{ erg$\,$s$^{-1}$}$. Fewer than $10\%$ of the sources have an X-ray luminosity greater than $L_{Edd}$ for the typical NS accretor. A major obstacle in studying the extragalactic XRB population is differentiating HMXBs from LMXBs, which cannot be done by their X-ray properties alone. One attempt to solve this problem was made by \cite{Mineo2012}, who used galactocentric distance to distinguish between the two types of XRBs. However, many galaxies, including spirals such as M51, show a spatially mixed population of ``young'' and ``old'' XRBs. Our results show that combining \chandra\ and \hst\ data can break this degeneracy. \begin{figure*} \begin{center} \includegraphics[width=\columnwidth]{XLF_10pix.pdf} \includegraphics[width=\columnwidth]{XLF_frac_10pix.pdf} \caption{\label{fig:XLF} {\bf Left}: Combined X-ray luminosity functions. The purple curve is the full sample of 288/334 X-ray sources that have a measured X-ray luminosity in ObsId 13814. For $L_{X,b}\ge 2\times10^{36}\text{ erg$\,$s$^{-1}$}$, the purple curve follows a power law $N(>L_{X,b})\propto L_{X,b}^{1-\alpha}$ where $\alpha = 1.65\pm0.03$; see text for details. The green curve is a cut of the full sample with 130/334 X-ray sources that have only a stellar \hst\ candidate counterpart within 10 px. The blue curve is a cut of the full sample with 133/334 X-ray sources that have no stellar or cluster counterparts within 10 px. The vertical dashed black line is the Eddington luminosity of a $1.4\,M_\odot$ accretor, i.e. $L_{Edd}\simeq 1.76\times10^{38}\text{ erg$\,$s$^{-1}$}$ (see Eq. \ref{eq:Eddi} in the text). {\bf Right}: Combined fractional X-ray luminosity functions. Same as the left panel but normalized. } \end{center} \end{figure*} \subsection{Optical Counterparts to X-ray Sources}\label{subsec:opticalcounterparts} Due to the distance to M51, there are issues with crowding and source confusion. Many X-ray sources ($51\%$; 173/334) have at least one \hst\ stellar counterpart within the $0\farcs5$ \chandra\ positional uncertainty, whereas $75\%$ (252/334) have at least one \hst\ stellar counterpart within the $2\sigma$ \chandra\ uncertainty. Just over half, $51\%$ ($88/173$), of the sources that have at least one detected \hst\ stellar counterpart within $1\sigma$ have at least two detected candidate stellar counterparts. Selecting the counterpart candidate can be challenging in cases where there are two or more optical sources in the search radius. One method of choosing the donor star candidate is to select the closest optical source to the \chandra\ position. On the other hand, a large fraction of the XRBs in M51 are expected to be HMXBs with early-type stars as the donors.
Therefore, an alternative method of selecting an optical counterpart is to select the brightest optical source within the $1\sigma$ radius ($10$ px). In Figure~\ref{fig:b-v_v-i} we plot the $B-V$ and $V-I$ color-magnitude diagrams for the candidate \hst\ optical counterparts that are the brightest or closest within $10$ px of the X-ray point source centroids. If we select the closest candidate \hst\ optical counterpart within $10$ px, out of 173 total candidate optical counterparts, $\sim65.3\%$ (113/173) of the sources are the same; that is, 113 of the \hst\ counterparts are both the brightest and the closest source within $10$ px. As expected, selecting the closest optical counterpart to the X-ray sources is biased toward fainter (and older) stellar sources. However, we performed a two-sample Kolmogorov-Smirnov (K-S) test on the following data from Figure~\ref{fig:b-v_v-i}: \begin{enumerate} \item $M_{V,closest}$ vs. $M_{V,brightest}$ ($y$-axis of both panels) \item $(m_B-m_V)_{closest}$ vs. $(m_B-m_V)_{brightest}$ ($x$-axis of left panel) \item $(m_V-m_I)_{closest}$ vs. $(m_V-m_I)_{brightest}$ ($x$-axis of right panel), \end{enumerate} and found that in each case the null hypothesis $H_0$, namely that the two samples in 1, 2, and 3 above are drawn from the same unknown underlying continuous distribution, cannot be rejected. The two-sample K-S test statistic $D$ and $p$-values for each of the three tests above are: \begin{enumerate} \item $D = 0.13194$ and $p = 0.15056$ \item $D = 0.07639$ and $p = 0.77899$ \item $D = 0.06250$ and $p = 0.93376$. \end{enumerate} At a level of significance $\alpha=0.05$, we cannot reject $H_0$, since in each case $p\ge\alpha$ (an illustrative sketch of this test is given after the acknowledgments). Thus we cannot claim a statistically significant difference in choosing either the closest or the brightest sources as the candidate optical counterpart to our X-ray sources. The mean photometric error is approximately $0.1$ mag in $V$ and $I$. In Table~\ref{tbl-2}, we select the brightest source as the donor star candidate in case of multiple matches. Also in Figure~\ref{fig:b-v_v-i} we plot four mass tracks: $5\,M_\odot$, $8\,M_\odot$, $20\,M_\odot$, and $40\,M_\odot$, respectively from bottom to top, taken from the MESA Isochrones \& Stellar Tracks\footnote{\href{http://waps.cfa.harvard.edu/MIST/index.html}{http://waps.cfa.harvard.edu/MIST/index.html}} (see \citealt{Dotter2016,Choi2016,Paxton2011,Paxton2013,Paxton2015,Paxton2018}). The initial protosolar bulk metallicity for the models used is $Z_i = 0.0147$, with extinction $A_V=0$ ($R_V=3.1$). It is clear from the color-magnitude diagram that most of the candidate \hst\ optical counterparts lie above the $8\,M_\odot$ mass track, indicating that most of our candidate sources are likely HMXBs. In classifying the candidate sources as HMXBs, there are no statistically significant differences in choosing either the brightest (black) or closest (light orange) sources (see the two-sample K-S test discussion above). \begin{figure*} \begin{center} \includegraphics[width=85mm]{b-v_final.pdf} \includegraphics[width=85mm]{v-i_final.pdf} \caption{\label{fig:b-v_v-i}Color-magnitude diagrams for the potential HST optical counterpart candidates that are the brightest (black) or closest (light orange) within 10 px of the X-ray source centroids. The dark orange indicates a candidate optical counterpart that is both the brightest and closest to the X-ray centroid. There are 173 candidate optical counterparts within 10 px of the 334 \emph{Chandra} X-ray point sources.
For both panels, the four mass tracks from bottom to top are from MESA Isochrones \& Stellar Tracks (MIST; see text) and have masses $5\,M_\odot$, $8\,M_\odot$, $20\,M_\odot$, and $40\,M_\odot$, respectively. {\bf Left}: $B-V$ color-magnitude diagram. {\bf Right}: $V-I$ color-magnitude diagram.} \end{center} \end{figure*} \section{Conclusions}\label{sec:conclusions} In this study we presented a catalog and statistical analysis of archival \chandra\ and \hst\ data of point sources in the interacting galaxy pair NGC 5194/5195 (M51). \begin{itemize} \item Using standard CIAO procedures, we detected 334 X-ray point sources in the merged thirteen \chandra\ observations. We corrected the data for neutral hydrogen absorption along the line of sight and improved the astrometry using the USNO URAT1 catalog. \item We identified 173 candidate optical counterparts to the X-ray sources in our catalog by finding the brightest \hst\ point sources within 10 px of the X-ray source. A two-sample Kolmogorov-Smirnov test showed no statistically significant difference when choosing the closest \hst\ point sources instead (see text for details). As with \chandra, the astrometry of the data was corrected using the USNO URAT1 catalog and a six-parameter plate transformation. \item We calculated a reduced chi-square statistic, $\chi^2_\nu$, as a measure of the broad flux variability in a 30-day window of the longest seven observations for the X-ray sources in our catalog and found that approximately $80\%$ of the sources are considered variable, i.e. $\chi^2_\nu\ge 1$. \item Approximately $69\%$ of the sources with at least one detected candidate stellar counterpart (but no cluster counterpart) are considered variable, and about $77\%$ of the sources without a stellar or cluster counterpart are variable (see our upcoming follow-up Paper II for a discussion of candidate cluster counterparts to our X-ray sources). \item The majority of optical counterparts are above the 8\,M$_\odot$ track in Figure~\ref{fig:b-v_v-i}, which is consistent with these sources being HMXB candidates. \item There is a strong positive correlation between the broad X-ray flux and the variability of the X-ray sources in the 30-day window, consistent with the inter-observation variability reported in the CSC. \item We calculated X-ray hardness ratios for all sources and found that the majority of the variable sources lie in the XRB region of the X-ray color-color diagram (e.g., hard or absorbed X-ray sources; see Figure~\ref{fig:hrchi2}). \item The broad X-ray luminosity function above a few times $10^{36}\, {\rm erg\,s^{-1}}$ follows a power law $N(>L_{X,b})\propto L_{X,b}^{1-\alpha}$ with $\alpha=1.65\pm0.03$, consistent with X-ray luminosity functions of star-forming galaxies dominated by HMXBs. \item Most of the brightest 20 sources do not show evidence of significant spectral (color-color) variability over the 30-day window. \item Fewer than $10\%$ of the X-ray sources have a broad X-ray luminosity greater than the Eddington luminosity of a typical NS accretor. \end{itemize} As mentioned earlier, a detailed analysis of individual sources will be presented in a follow-up paper. \software{CIAO \citep{Fruscione2006}, IRAF \citep{Tody1986,Tody1993}, SAOImageDS9 \citep{Joye2003}, NumPy \citep{Harris2020}, SciPy \citep{Virtanen2020}, Matplotlib \citep{Perez2007}, Astropy \citep{Astropy2013,Astropy2018}} \section{Acknowledgments}\label{sec:acknowledgments} We thank an anonymous referee for constructive comments.
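As an illustration of the two-sample K-S comparison used in \S\S\ref{subsec:opticalcounterparts}, the following minimal sketch reproduces the decision rule; the arrays are hypothetical stand-ins for the catalog magnitudes, not the actual data.

\begin{verbatim}
# Sketch: two-sample K-S test of "closest" vs. "brightest"
# counterpart samples. Stand-in data; not the catalog values.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
mv_closest = rng.normal(-6.0, 1.0, 173)    # stand-in M_V (closest)
mv_brightest = rng.normal(-6.2, 1.0, 173)  # stand-in M_V (brightest)

res = ks_2samp(mv_closest, mv_brightest)
alpha = 0.05
# Reject H0 (same parent distribution) only if p < alpha.
print(res.statistic, res.pvalue, res.pvalue < alpha)
\end{verbatim}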
\begin{deluxetable*}{cccccccccrr} \tablecaption{Catalog of X-ray sources\label{tbl-2}} \tablewidth{0pt} \tablehead{ \colhead{$X_\text{ID}$} & \colhead{RA} & \colhead{Dec} & \colhead{$C_\text{net}/10^3$} & \colhead{$F_{X,b}/(10^{-15}\,\text{erg}\,\text{cm}^{-2}\,\text{s}^{-1})$} & \colhead{$I$} & \colhead{$V$} & \colhead{$H\alpha$} & \colhead{$B$} & \colhead{$HR_1$} & \colhead{$HR_2$}} \startdata $178$ & $202.504274$ & $47.228885$ & $6.73$ & $441$ & $-8.26$ & $-7.58$ & $-8.08$ & $-6.62$ & $0.073$ & $0.012$\\ $62$ & $202.531533$ & $47.185057$ & $6.03$ & $263$ & $-6.70$ & $-5.10$ & $-7.27$ & $-4.69$ & $-0.083$ & $-0.37$\\ $166$ & $202.496180$ & $47.221801$ & $3.63$ & $240$ & $-6.18$ & $-3.83$ & $-4.91$ & $-2.62$ & $0.64$ & $0.069$\\ $190$ & $202.473939$ & $47.243281$ & $2.25$ & $153$ & $-$ & $-$ & $-$ & $-$ & $0.14$ & $0.027$\\ $154$ & $202.414552$ & $47.212141$ & $2.05$ & $109$ & $-6.35$ & $-6.58$ & $-6.34$ & $-6.67$ & $0.023$ & $-0.15$\\ $100$ & $202.470043$ & $47.194362$ & $1.90$ & $62.4$ & $-8.49$& $-7.47$ & $-8.05$ & $-6.49$ & $-0.64$ & $-0.79$\\ $94$ & $202.430548$ & $47.193022$ & $1.77$ & $70.7$ & $-6.97$ & $-$ & $-$ & $-$ & $-0.83$ & $-0.82$\\ $149$ & $202.416637$ & $47.210268$ & $1.65$ & $64.1$ & $-5.72$ & $-5.74$ & $-6.14$ & $-5.67$ & $-0.36$ & $-0.55$\\ $168$ & $202.517998$ & $47.222471$ & $1.38$ & $92.0$ & $-8.06$ & $-7.23$ & $-8.15$ & $-6.80$ & $0.22$ & $-0.033$\\ $54$ & $202.489953$ & $47.180183$ & $1.08$ & $56.7$ & $-7.11$ & $-5.65$ & $-5.86$ & $-5.42$ & $0.065$ & $-0.21$ \enddata \end{deluxetable*} \newpage
\section{Introduction} The discovery of the M{\"o}ssbauer effect is unique. The emission and absorption of x-rays by gases had been observed previously, and it was expected that resonance effects would also be found for gamma rays, which are created by nuclear transitions (as opposed to x-rays, which are typically produced by electron transitions). However, attempts to observe nuclear resonance produced by gamma rays in gases failed, because the recoil prevents resonance (the Doppler effect also broadens the gamma-ray spectrum). M{\"o}ssbauer was able to observe resonance in nuclei of solid iridium, in contrast to the absence of gamma-ray resonance in gases. He proposed that, for the case of atoms bound into a solid, the nuclear events can occur essentially without recoil. The photon emission energy for the iridium atom is approximately ${\rm 1\;eV}$, while the rest energy of the atom is $Mc^{2} \approx 178\;$GeV, so the recoil energy of optical emission is $E_{recoil}(optical) = E^{2}/2Mc^{2} \approx 10^{-11}\;$eV (Rohlf, 1994). On the other hand, the $\gamma$-emission energy of the iridium nucleus is $\approx {\rm 10^{5}\; eV}$. The recoil energy caused by the emission of such a $\gamma$-ray is then $E_{recoil} \approx 10^{-1}\;$eV (Rohlf, 1994). So, we see that the recoil caused by the $\gamma$-ray emission is substantially greater than the recoil caused by the optical emission, and we can expect a large shift of the spectral lines in the nuclear system. The motion of the decaying excited nucleus of iridium $^{191}{\rm Ir}^{*}$ causes the Doppler broadening and the Doppler shift of the gamma spectrum. Let us consider the motion of the excited iridium nucleus in the direction of the emitted photon. The Doppler formula for the boosted photon energy $E'$ is as follows (Rohlf, 1994): $$E' = E\sqrt{\frac{1 + v/c}{1 - v/c}} = E\gamma(1 + v/c) \approx E(1 + v/c).\eqno(1)$$ The fractional change of the photon energy is $(E'- E)/E = v/c$. The resonance (the overlapping of the emission and absorption curves) is destroyed if $v/c$ equals a few times $\Gamma/E = (\hbar/\tau)/E$ ($= 2.7 \times 10^{-11}$ for iridium), where $\Gamma$ is the natural spectral line width and $\tau$ is the lifetime of the excited state of the nucleus. For $v/c = \Gamma/E = {\rm 2.7\times 10^{-11}}$, the corresponding speed is $v = {\rm (3\times 10^{8}\; m/s)\times (2.7 \times 10^{-11}) \approx 10^{-2}\; m\,s^{-1}}$. So, a speed of centimeters per second destroys the resonance absorption. In other words, the overlap of the very narrow absorption and emission curves is zero for the nuclear system. In a solid, the nuclei are bound to the lattice and do not recoil in the same way as in a gas. The lattice as a whole recoils, but the recoil energy is negligible because the relevant mass $M$ is now the mass of the whole lattice. However, the energy in a decay can be taken up or supplied by lattice vibrations. The energy of these vibrations is quantized in units known as phonons. The M\"ossbauer effect occurs because there is a finite probability of a decay occurring involving no phonons. Thus, the entire crystal acts as the recoiling body, and these events are essentially recoilless. In these cases, since the recoil energy is negligible, the emitted gamma rays have the appropriate energy and resonance can occur. Gamma rays have very narrow line widths. This means they are very sensitive to small changes in the energies of nuclear transitions. In fact, gamma rays can be used as a probe to observe the effects of interactions between a nucleus and its electrons and those of its neighbors.
This is the basis for M{\"o}ssbauer spectroscopy, which combines the M{\"o}ssbauer effect with the Doppler effect to monitor such interactions. \section{The quantum theory of the M{\"o}ssbauer effect in the homogeneous magnetic field} We can define the M\"ossbauer effect in a homogeneous magnetic field as the analogue of the M\"ossbauer effect in a crystal, where the particle bound in the crystal is replaced by a charged particle in a homogeneous magnetic field. We consider the situation where the nucleus emitting the gamma rays is inbuilt (implanted) in the homogeneous magnetic field. Let the initial and the final states of the crystal both be $\psi_{crystal}$; i.e., the lattice state is unchanged by the emission. Then, according to Feynman (1972), there is a probability of no recoil after the emission of a photon with wave vector ${\bf k}$ from the nucleus. The amplitude of this probability is $({\bf k} = {\bf p}/\hbar)$ $$a = \left<\psi_{crystal}|e^{i({\bf k}\cdot{\bf r})}|\psi_{crystal}\right> ,\eqno(2)$$ where ${\bf r}$ is the displacement of the lattice atom. The probability that the basic magnetic state $\psi_{0}$ is unchanged (the analogue of the persistence of the vacuum in quantum field theory) after the $\gamma$-emission is then $P = |a|^{2}$. So, we have $$P = \left|\left<\psi_{0}\left|e^{i{\bf k}\cdot{\bf r}}\right|\psi_{0}\right>\right|^{2} = \left|\int e^{i{\bf k}\cdot{\bf r}}|\psi_{0}|^{2}d{\bf r}\right|^{2}, \eqno(3)$$ where the exponential function in (3) can be expanded using the partial amplitudes known from the quantum mechanics of scattering processes as follows: $$e^{i{\bf k}\cdot{\bf r}} = 4\pi\sum_{l=0}^{\infty}\sum_{m = -l}^{l}i^{l}j_{l}(kr)Y_{lm}(\Theta,\Phi) Y_{lm}^{*}(\theta,\phi), \eqno(4)$$ where the $j_{l}$ are spherical Bessel functions. The term $i{\bf k}\cdot{\bf r}$ can be written, using the azimuthal angle $\varphi$ for the process in the plane of motion in the magnetic field, as $ikr\cos\varphi$. So, we shall calculate the probability corresponding to the situation where the crystal is replaced by the homogeneous magnetic field. We take the basic function $\psi_{0}$ for one electron in the lowest Landau level as $$\psi_{0} = \left(\frac{m\omega_{c}}{2\pi\hbar}\right)^{1/2} \exp\left(-\frac{m\omega_{c}}{4\hbar}(x^{2} + y^{2})\right), \eqno(5)$$ which is the solution of the Schr\"odinger equation in the magnetic field with potentials ${\bf A} = (-Hy/2, Hx/2, 0)$, $A_{0} = 0$ (Drukarev, 1988): $$\left[\frac{p_{x}^{2}}{2m} + \frac{p_{y}^{2}}{2m} + \frac{m}{2} \left(\frac{\omega_{c}}{2}\right)^{2}(x^{2} + y^{2})\right]\psi = E\psi. \eqno(6)$$ The main problem is thus to calculate the integral in the polar coordinates $r, \varphi$ as follows: $$I = \int_{0}^{2\pi}\int_{0}^{\infty}d\varphi rdr |\psi_{0}|^{2}e^{ikr\cos\varphi}, \eqno(7)$$ which can be simplified by introducing the constants $C$ and $\alpha$ as follows ($Q$ is the charge of the M\"ossbauer particle, $c$ is the velocity of light): $$C = \left(\frac{m\omega_{c}}{2\pi\hbar}\right)^{1/2}; \quad \alpha = \left(\frac{m\omega_{c}}{4\hbar}\right); \quad \omega _{c} = \frac{|Q|H}{mc}.\eqno(8)$$ Then, $$I = C^{2}\int_{0}^{2\pi}\int_{0}^{\infty}d\varphi rdr e^{-2\alpha r^{2}}e^{ikr\cos\varphi}.
\eqno(9)$$ Let us first consider the calculation of the polar integral of the form $$I_{1} = \int_{0}^{2\pi}[\cos(kr\cos\varphi) + i\sin(kr\cos\varphi)]d\varphi.\eqno(10)$$ Using the identities $$\cos(a\cos\varphi) = J_{0}(a) + 2\sum_{n = 1}^{\infty}(-1)^{n}J_{2n}(a)\cos(2n\varphi), \eqno(11)$$ $$\sin(a\cos\varphi) = 2\sum_{n = 1}^{\infty}(-1)^{n + 1}J_{2n - 1}(a)\cos[(2n-1)\varphi], \eqno(12)$$ where the $J_{n}$ are Bessel functions, we get after integration over $\varphi$ that $$I_{1} = 2\pi J_{0}(kr),\eqno(13)$$ where the Bessel function $J_{0}$ can be expressed as the series $$J_{0}(x) = \sum_{k = 0}^{\infty} \frac{(-1)^{k}x^{2k}}{2^{2}4^{2}\cdots(2k)^{2}} = 1 - \frac{x^{2}}{2^{2}} + \frac{x^{4}}{2^{2}4^{2}} - \frac{x^{6}}{2^{2}4^{2}6^{2}} + \dots \eqno(14)$$ The next step is to calculate the integral $$I_{2} = \int_{0}^{\infty}J_{0}(kr)e^{-2\alpha r^{2}} rdr.\eqno(15)$$ If we restrict the calculation to the first two terms of the Bessel series, then we get for the probability of the persistence of the state $$P \approx C^{4}\left|2\pi\int_{0}^{\infty}rdr\left[e^{-2\alpha r^{2}} - e^{-2\alpha r^{2}}\frac{(kr)^{2}}{2^{2}}\right]\right|^{2}. \eqno(16)$$ Using the integrals $$\int_{0}^{\infty} e^{-2\alpha r^{2}} rdr = \frac{1}{4\alpha} \eqno(17)$$ and $$\int_{0}^{\infty} \frac{k^{2}r^{3}}{2^{2}}e^{-2\alpha r^{2}} dr = \frac{k^{2}}{32}\frac{1}{\alpha^{2}}, \eqno(18)$$ which are special cases of the table integral (Gradshteyn and Ryzhik, 2007a) $$\int_{0}^{\infty}x^{2n+1} e^{-p x^{2}} dx = \frac{n!}{2p^{n + 1}}; \quad p>0,\eqno(19)$$ we get the approximation formula for the existence of the M\"ossbauer effect in a magnetic field realized by the decay of the charged ion: $$ P \approx 4\pi^{2}C^{4}\left(\frac{1}{4\alpha} - \frac{k^{2}}{32\alpha^{2}}\right)^{2}. \eqno(20)$$ Using the explicit constants from Eq. (8), where $4\pi^{2}C^{4} = (m\omega_{c}/\hbar)^{2}$ and $1/4\alpha = \hbar/m\omega_{c}$, we get the final approximate form for the existence of the M\"ossbauer effect in a magnetic field: $$P \approx \left(1 - \frac{\hbar c k^{2}}{2|Q|H}\right)^{2}.\eqno(21)$$ Let us remark that we can instead use the approximation $e^{-2\alpha r^{2}} \approx 1 - 2\alpha r^{2}$. Then, instead of Eq. (16), we write $$P \approx C^{4}\left|2\pi\int_{0}^{\infty}rdr\left[J_{0}(kr) - 2\alpha r^{2} J_{0}(kr)\right]\right|^{2}. \eqno(22)$$ Then, using the table integral (Gradshteyn and Ryzhik, 2007b) $$\int_{0}^{\infty}x^{n}J_{l}(ax) dx = 2^{n}a^{-n-1}\frac{\Gamma\left(\frac{1}{2} + \frac{l}{2} + \frac{n}{2}\right)}{\Gamma\left(\frac{1}{2} + \frac{l}{2} - \frac{n}{2}\right)},\eqno(23)$$ we get $$P \approx C^{4}\left|\frac{4\pi}{k^{2}}\frac{\Gamma\left(1\right)}{\Gamma\left(0\right)} - \frac{32\pi\alpha}{k^{4}}\frac{\Gamma\left(2\right)}{\Gamma\left(-1\right)}\right|^{2}.\eqno(24)$$ To our surprise, this form of the magnetic M\"ossbauer effect was not published in the M\"ossbauer literature. \section{Discussion} The M\"ossbauer effect in a magnetic field is by no means the exact analogue of the M\"ossbauer effect in a crystal, because the magnetic field is a special physical reality (medium) with unique quantum electrodynamic properties.
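To get a sense of the scales in Eq. (21), the following minimal numeric sketch (SI units; the field strength and ion parameters are illustrative assumptions) compares the gamma-recoil frequency $\hbar k^{2}/2m$ with the cyclotron frequency $\omega_{c}$ for a singly charged ion with the mass and transition energy of $^{57}$Fe:

\begin{verbatim}
# Sketch (SI units): ratio hbar*k^2/(2*m*omega_c), the expansion
# parameter of Eq. (21), for a singly charged 57Fe-like ion in a
# 10 T field. All parameters here are illustrative assumptions.
hbar = 1.054571817e-34    # J s
e    = 1.602176634e-19    # C
c    = 2.99792458e8       # m/s
amu  = 1.66053906660e-27  # kg

E_gamma = 14.4e3 * e            # 14.4 keV transition, in J
m = 57.0 * amu                  # ion mass
B = 10.0                        # magnetic field, T (assumption)

k = E_gamma / (hbar * c)        # photon wave number
ratio = (hbar * k**2 / (2.0 * m)) / (e * B / m)
print(ratio)                    # ~2e5 >> 1 for these parameters
\end{verbatim}

For laboratory fields this ratio is enormous, so the quadratic expansion above is valid only for extremely soft photons or extremely strong fields, which is consistent with the remark that the magnetic case is by no means the exact analogue of the crystal case.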
If the decaying charged particle moves in the linear potential $V = -Fx$, where $F = - \partial V/\partial x$ is the accelerating force, then the corresponding Schr\"odinger equation in the momentum representation $(\hat x = i\hbar\partial/\partial p)$ is as follows (Drukarev, 1988): $$\left(-i\hbar F \frac{\partial}{\partial p} + \frac{p^{2}}{2m} - E\right)\left<p|E\right> = 0\eqno(25)$$ with the solution $$\left<p|E\right> = \frac{1}{\sqrt{2\pi\hbar F}}\exp\left[\frac{i}{\hbar F}\left(Ep - \frac{p^{3}}{6m}\right)\right].\eqno(26)$$ Then, $$\left<x|E\right> = \int_{-\infty}^{\infty}\left<x|p\right>\left<p|E\right>dp = \frac{1}{2\pi\hbar F^{1/2}}\int_{-\infty}^{\infty}\exp\left\{ \frac{i}{\hbar}\left[\left(x + \frac{E}{F}\right)p - \frac{p^{3}}{6mF}\right]\right\}dp,\eqno(27)$$ where we have used the relation $$\left<x|p\right> = \frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar}.\eqno(28)$$ The classical turning point is given by the relation $V = E$, from which follows the coordinate of the turning point $x_{0} = -E/F$. Then we write, with regard to the last statement: $$\left<x|E\right> = \frac{1}{2\pi\hbar F^{1/2}}\int_{-\infty}^{\infty}\exp\left\{ \frac{i}{\hbar}\left[(x - x_{0})p - \frac{p^{3}}{6mF}\right]\right\}dp.\eqno(29)$$ After introducing the new variables $$u = \frac{p}{(2m\hbar F)^{1/3}}; \quad z = \left(\frac{2mF}{\hbar^{2}}\right)^{1/3}(x_{0} - x), \eqno(30)$$ we get the solution of the Schr\"odinger equation for the charged particle moving in the accelerating potential in the final form: $$\left<x|E\right> = \left(\frac{2m}{\hbar^{2} F^{1/2}}\right)^{1/3}\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp \left[-i\left(\frac{u^{3}}{3} + zu\right)\right]du,\eqno(31)$$ where $$v(z) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\exp \left[-i\left(\frac{u^{3}}{3} + zu\right)\right]du\eqno(32)$$ is the so-called Airy function. The formula for the existence of the M\"ossbauer effect in the accelerating field is then $$ P = \left|\int e^{i{\bf k}\cdot{\bf x}}|\left<x|E\right>|^{2}d{\bf x}\right|^{2}.\eqno(33)$$ Ninio (1973) used, instead of the Feynman amplitude, the impulsive force $F(t) = const\,\delta(t-\lambda)$ to calculate the persistence of the harmonic oscillator. After applying such an impulsive force, the oscillator wave function is $$\psi_{0} = \exp\left(-\frac{1}{2}|\xi(t)|^{2}\right) \sum_{m=0}^{\infty}\frac{[\xi(t)]^{m}}{(m!)^{1/2}}\phi_{m}, \eqno(34)$$ where $$\xi(t) = i(2m\hbar\omega)^{-1/2}Ae^{i\omega\lambda} \quad (t>\lambda). \eqno(35)$$ The corresponding probability of the persistence of the basic state is $$P = |\left<\phi_{0}|\psi_{0}\right>|^{2} = e^{-|\xi(t)|^{2}} = e^{-A^{2}/2m\hbar\omega}. \eqno(36)$$ Thus, there is a nonzero probability that the impulse creates no phonons. However, it must be remembered that the oscillator particle is bound to a fixed center. No doubt it is possible to use the Ninio method to calculate the M\"ossbauer effect in the magnetic field and in the accelerating field. In addition, it is not excluded that the so-called maximal acceleration may play some role in the case of accelerated charged particles. A recent discussion of the specific application of the maximal acceleration in M\"ossbauer physics was presented by Potzel (2014). The introduction of the maximal acceleration into physics by means of transformations between accelerated reference systems was given by the author (Pardy, 2003).
The article represents, in some sense, a new stream of ideas related to the M{\"o}ssbauer effect in physics, and it can find application in chemistry, biology, geology, cosmology, medicine and other human activities. Let us remark that the discovery of the M{\"o}ssbauer effect was rewarded with the Nobel Prize in Physics in 1961, shared with Robert Hofstadter for his research on electron scattering in atomic nuclei. The M\"ossbauer effect in a magnetic or electric field represents a crucial problem for experimentalists, and it is not excluded that the experimental realization of this effect would lead to its adequate appreciation. \vspace{7mm} \noindent {\bf REFERENCES} \vspace{7mm} \noindent Berestetzkii, V. B.; Lifshitz, E. M. and Pitaevskii, L. P. (1989). {\it Quantum electrodynamics}, (Moscow, Nauka) (in Russian).\\[3mm] Drukarev, G. F. (1988). {\it Quantum mechanics}, (Leningrad University Press), (in Russian). \\[3mm] Feynman, R. (1972). {\it Statistical mechanics}, (W. B. Benjamin, Inc., Reading, Massachusetts).\\[3mm] Gradshteyn, I. S. and Ryzhik, I. M. (2007a). {\it Tables of integrals, series and products}, Seventh ed., (New York, Academic Press), section 3.461(3), page 364.\\[3mm] Gradshteyn, I. S. and Ryzhik, I. M. (2007b). {\it Tables of integrals, series and products}, Seventh ed., (New York, Academic Press), section 6.561(14), page 676.\\[3mm] Kuznetsov, D. S. (1962). {\it The special functions}, Moscow. (in Russian). \\[3mm] M\"ossbauer, R. L. (1958). Kernresonanzfluoreszenz von Gammastrahlung in $^{191}{\rm Ir}$, Z. Phys. {\bf 151}, 124. \\[3mm] M\"ossbauer, R. L. (1958). Kernresonanzfluoreszenz von Gammastrahlung in $^{191}{\rm Ir}$, Naturwissenschaften {\bf 45}, 538.\\[3mm] M\"ossbauer, R. L. (1959). Kernresonanzabsorption von $\gamma$-Strahlung in $^{191}{\rm Ir}$, Z. Naturforsch. {\bf A 14}, 211.\\[3mm] M\"ossbauer, R. L. (2000). The discovery of the M\"ossbauer effect, Hyperfine Interactions {\bf 126}, 1.\\[3mm] Ninio, F. (1973). The forced harmonic oscillator and the zero-phonon transition of the M\"ossbauer effect, AJP, {\bf 41}, 648-649.\\[3mm] Pardy, M. (2003). The space-time transformations between accelerated systems, arXiv:gr-qc/0302007, (gr-qc). \\[3mm] Potzel, W. (2014). Clock hypothesis of relativity theory, maximal acceleration, and M\"ossbauer spectroscopy, arXiv:1403.2412, (physics.ins-det). \\[3mm] Rohlf, J. W. (1994). {\it Modern physics from $\alpha$ to $Z^{0}$}, (John Willey \& Sons, Inc., New York). \end{document}
\section{Introduction} A \emph{planar graph} is a graph that can be \emph{embedded} in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. A planar graph already drawn in the plane without edge intersections is called a \emph{plane graph} or \emph{planar embedding} of the graph. A straight-line drawing $\Gamma$ of a plane graph $G$ is a graph embedding in which each vertex is drawn as a point and each edge is drawn as a straight line segment (as opposed to a curve, etc.). Given a plane graph $G$ with $n$ vertices and a set $S$ of $n$ points in the plane, a point-set embedding of $G$ on $S$ is a straight-line drawing of $G$ such that each vertex is represented as a distinct point of $S$. The problem of computing a point-set embedding of a graph, also referred to as the point-set embeddability problem in the literature, has been extensively studied both when the mapping of the vertices to the points is chosen by the drawing algorithm and when it is partially or completely given as part of the input. There exist a number of results on the point-set embeddability problem for different graph classes in the literature~\cite{DBLP:journals/comgeo/Bose02,DBLP:journals/dcg/IkebePTT94,DBLP:journals/dam/KanekoK00,DBLP:journals/gc/PachW01}. A number of variants of the original problem have also been studied in the literature. For example, in~\cite{DBLP:conf/wads/BadentGL07,DBLP:journals/jgaa/GiacomoDLMTW08}, a variant of the point-set embeddability problem has been studied where the vertex set of the given graph and the given set of points are divided into a number of partitions and a particular vertex subset is to be mapped to a particular point subset. Other variants have also been studied with great interest~\cite{DBLP:journals/ijcga/KanekoK05,DBLP:journals/ijfcs/GiacomoLT06}. Very recently, Nishat et al.~\cite{DBLP:conf/gd/NishatMR10} studied the point-set embeddability problem on a class of graphs known as \emph{plane 3-trees}. Plane 3-trees are an interesting class of graphs, and recently a number of different drawing algorithms for plane 3-trees have been presented in the literature~\cite{BiedlV10,MondalNRA10,DBLP:conf/gd/NishatMR10}. In this paper, we follow up the work of \cite{DBLP:conf/gd/NishatMR10} and improve upon their result from an algorithmic point of view. In~\cite{DBLP:conf/gd/NishatMR10}, Nishat et al. presented an $O(n^2 \log n)$ time algorithm that can decide whether a plane 3-tree $G$ of $n$ vertices admits a point-set embedding on a given set of $n$ points and, if so, compute such an embedding. In this paper, we show how to improve the running time of the above algorithm. In particular, we take their algorithmic ideas as the building block of our algorithm and, with some non-trivial modifications, we achieve a running time of $O(n^{4/3+\epsilon}\log n)$. The efficiency of our algorithm comes mainly from clever uses of triangular range search and counting queries \cite{Paterson1986441,ALGOR::ChazelleSW1992,journals/dcg/Chazelle97} and from bounding the number of such queries. Furthermore, we study a generalized version of the point-set embeddability problem where the point set $S$ has more points than the number of vertices of the input plane 3-tree, i.e., $|S| = k > n$. For this version of the problem, an $O(nk^8)$ time algorithm was presented in~\cite{DBLP:conf/gd/NishatMR10}. We present an improved algorithm running in $O(nk^4)$ time. The rest of this paper is organized as follows.
Section~\ref{sec:prel} presents some definitions and preliminary results. Section~\ref{sec:quad} presents a brief review of the algorithm presented in~\cite{DBLP:conf/gd/NishatMR10}. In Section~\ref{sec:subq} we present our main result. Section~\ref{sec:genProb} briefly discusses the generalized version of the problem, and we briefly conclude in Section~\ref{sec:con}. \section{Preliminaries}\label{sec:prel} In this section we present some preliminary notations, definitions and results that we use in our paper. We mainly follow the definitions and notations of~\cite{nishizeki2004planar}. We start with a formal definition of a \emph{straight-line drawing}. \begin{dfn} [Straight-Line Drawing] \label{prb:strln} Given a plane graph $G$, a straight-line drawing $\Gamma(G)$ of $G$ is a drawing of $G$ where vertices are drawn as points and edges are drawn as straight line segments. \end{dfn} The problem we handle in this paper is formally defined as follows. \begin{prb} [Point-Set Embeddability] \label{prb:ptemb} Let $G$ be a plane graph of $n$ vertices and $S$ be a set of $n$ points in the plane. The point-set embeddability problem asks for a straight-line drawing of $G$ such that the vertices of $G$ are mapped to the points of $S$. \end{prb} Finding a point-set embedding for an arbitrary plane graph has been proved NP-complete~\cite{journals/jgaa/Cabello06}, even for some restricted subclasses. On the other hand, polynomial-time algorithms exist for finding point-set embeddings of outerplanar graphs and trees~\cite{journals/dcg/IkebePTT94,DBLP:journals/comgeo/Bose02}. An interesting research direction in the literature is to investigate this problem on various other restricted graph classes. One such interesting graph class, known as the plane 3-tree, is formally defined below. \begin{definition}[Plane $3$-Tree] \label{def:plane3t} A plane 3-tree is a triangulated plane graph $G=(V,E)$ with $n$ vertices such that either $n=3$, or there exists a vertex $x$ such that the graph induced by $V-\{x\}$ is also a plane $3$-tree. \end{definition} \begin{figure}[t!] \begin{center} \leavevmode \scalebox{0.7}{ \includegraphics{newplane3tree1} } \end{center} \caption{A plane 3-tree of 17 vertices} \label{fig:plane3tree} \end{figure} Figure~\ref{fig:plane3tree} presents a plane 3-tree with 17 vertices. As mentioned above, the very recent work of Nishat et al.~\cite{DBLP:conf/gd/NishatMR10} proved that finding a point-set embedding is polynomially solvable if the input is restricted to a plane $3$-tree. Since a plane $3$-tree is triangulated, its outer face has only $3$ vertices, known as the \emph{outer vertices}. The following two interesting properties of a plane $3$-tree with $n>3$ will be required later. \begin{prop} [\cite{conf/gd/BiedlV09}] Let $G$ be a plane 3-tree with $n>3$ vertices. Then, there is a node $x$ with degree $3$ whose deletion gives a plane $3$-tree of $n-1$ vertices. \end{prop} \begin{prop} [\cite{conf/gd/BiedlV09}] \label{prop-P} Let $G$ be a plane 3-tree with $n>3$ vertices. Then, there exists exactly one vertex (say, $p$) that is a common neighbor of all $3$ outer vertices. \end{prop} For a plane 3-tree $G$, the vertex $p$ (as defined in Proposition~\ref{prop-P}) is referred to as the \textit{representative vertex} of $G$. For a plane graph $G$, and a cycle $C$ in it, we use $G(C)$ to denote the subgraph of $G$ inside $C$ (including $C$).
In what follows, if a cycle $C$ is a triangle involving the vertices $x,y$ and $z$, we will often use $\triangle xyz$ and $G(\triangle xyz)$ to denote $C$ and $G(C)$. The following interesting lemma was recently proved in \cite{DBLP:conf/gd/NishatMR10} and will be useful later in this paper. \begin{lemma} [\cite{DBLP:conf/gd/NishatMR10}] Let $G$ be a plane $3$-tree of $n>3$ vertices and $C$ be any triangle of $G$. Then, the subgraph $G(C)$ is a plane $3$-tree. \end{lemma} We now define an interesting structure related to a plane 3-tree, known as the \emph{representative tree}. \begin{definition} [Representative Tree]\label{defn-rtree} Let $G$ be a plane $3$-tree with $n$ vertices with outer vertices $a$, $b$ and $c$ and representative vertex $p$ (if $n>3$). The representative tree $T$ of $G$ is an ordered tree defined as follows: \begin{itemize} \item If $n=3$, then $T$ is a single vertex. \item Otherwise, the root of $T$ is $p$ and its subtrees are the representative trees of $G(\triangle apb)$, $G(\triangle bpc)$ and $G(\triangle cpa)$ in that order. \end{itemize} \end{definition} \begin{figure}[t!] \begin{center} \leavevmode \scalebox{0.75}{ \includegraphics{newcorrespondingTree} } \end{center} \caption{The representative tree of the plane 3-tree of Figure~\ref{fig:plane3tree}} \label{fig:rtree} \end{figure} The representative tree of the plane 3-tree of Figure~\ref{fig:plane3tree} is presented in Figure~\ref{fig:rtree}. Note that the representative tree $T$ has $n'=n-3$ internal nodes, each internal node having exactly three children. Also note that the outer vertices of $G$ are named $a$, $b$ and $c$ in counter-clockwise order around $p$. Therefore, the representative tree $T$ of a plane 3-tree $G$ is unique as per Definition~\ref{defn-rtree}. Now consider a plane 3-tree $G$ and its representative tree $T$. Assume that $G'$ is a subgraph of $G$ and $T'$ is a subtree of $T$. Then, $G'$ is referred to as the \emph{corresponding subgraph} of $T'$ if and only if $T'$ is the representative tree of $G'$. There is an $O(n)$ time algorithm to construct the representative tree from a given plane graph~\cite{DBLP:conf/gd/NishatMR10}. Given a set of points $S$, we use the symbol $P_S(\triangle xyz)$ to denote the set of points that are inside the triangle $\triangle xyz$. We use the symbol $N_S(\triangle xyz)$ to denote the size of the set $P_S(\triangle xyz)$. We will make extensive use of triangular range search and counting queries in our algorithm. Below we formally define these two types of queries. \begin{prb}[Triangular Range Search] \label{prb:trs} Given a set $S$ of points that can be preprocessed, we have to answer queries of the form $SetQuery(S,\triangle abc)$, where the query returns $P_S(\triangle abc)$. \end{prb} \begin{prb}[Triangular Range Counting] \label{prb:trc} Given a set $S$ of points that can be preprocessed, we have to answer queries of the form $CountQuery(S,\triangle abc)$, where the query returns $N_S(\triangle abc)$. \end{prb} In what follows, we will use the following convention: if an algorithm has preprocessing time $f(n)$ and query time $g(n)$, we will say its overall running time is $\langle f(n), g(n) \rangle$. We conclude this section with an example illustrating the point-set embedding of a plane 3-tree. \begin{figure}[p!] \centering \subfigure[An example of the point set $S$.]
{ \scalebox{0.55}{ \includegraphics{newPointSet} } \label{fig:pointset} } \\ \subfigure[An embedding of the plane 3-tree of Figure~\ref{fig:plane3tree} on the point set of Figure~\ref{fig:pointset}] { \scalebox{0.6}{ \includegraphics{newplane3tree2} } \label{fig:embedding} } \caption{An example of point set embedding of a plane 3-tree.} \label{fig:example} \end{figure} \begin{my_example} In Figure~\ref{fig:pointset} we present an example of the point set $S$ having $n = 17$ points. Then, in Figure~\ref{fig:embedding}, an embedding of the plane 3-tree of Figure~\ref{fig:plane3tree} is illustrated. \end{my_example} \section{Algorithm of~\cite{DBLP:conf/gd/NishatMR10}} \label{sec:quad} In this section, we briefly describe the quadratic time algorithm of~\cite{DBLP:conf/gd/NishatMR10}. To simplify the description, we first assume that the points of $S$ are in general position, i.e., no three points of $S$ are collinear. \begin{lemma}[\cite{DBLP:conf/gd/NishatMR10}] \label{lmm:conv} Let $G$ be a plane $3$-tree of $n$ vertices and $S$ be a set of $n$ points. If $G$ admits a point-set embedding on $S$, then the convex hull of $S$ contains exactly three points in $S$. \end{lemma} \begin{lemma}[\cite{DBLP:conf/gd/NishatMR10}] \label{lmm:uniqp} Let $G$ be a plane $3$-tree of $n$ vertices with $a$, $b$ and $c$ being the three outer vertices of $G$, and let $p$ be the representative vertex of $G$. Let $S$ be a set of $n$ points such that the convex hull of $S$ contains exactly three points. Assume that $G$ has a point-set embedding $\Gamma(G)$ on $S$ for a given mapping of $a$, $b$ and $c$ to the three points of the convex hull of $S$. Then $p$ has a unique valid mapping. \end{lemma} The algorithm of~\cite{DBLP:conf/gd/NishatMR10} performs the following steps to find a valid embedding of $G$ on a given point set $S$ if one exists. \begin{enumerate}[Step 1:] \label{algo:quad} \item Find the convex hull of the given points. If the convex hull does not have exactly $3$ points, return the message that no embedding exists. \item For each of the possible $6$ mappings of the outer vertices of $G$ to the three points of the convex hull, perform Steps~\ref{step:start} and~\ref{step:start1} (recursively). \item \label{step:start} Assume that at the beginning of this step, we are considering the representative (sub)tree $T'$ and the corresponding graph is $G'$ (obviously a subgraph of $G$). Let the three outer vertices of $G'$ be $a'$, $b'$ and $c'$, and let its representative vertex be $p'$. Note that, initially, $G' = G$, $T' = T$ and the outer vertices and the representative vertex are $a$, $b$, $c$ and $p$ respectively. Assume that the number of internal nodes in $T'$ is $n'$. Note that the number of vertices in the corresponding graph $G'$ is $n'+3$. If $n'=0$, then the embedding is trivially possible and this step returns immediately, terminating the recursion. Otherwise, the following step is executed to check whether an embedding is indeed possible. \item \label{step:start1} Let the root of $T'$ be $r$. Also let the three children of $r$ be $r_1$, $r_2$ and $r_3$ and the number of internal nodes in the subtrees rooted at $r_1$, $r_2$ and $r_3$ be $n_1'$, $n_2'$ and $n_3'$ respectively. Note that $n'=n_1'+n_2'+n_3'+1$. Let the three outer vertices $a'$, $b'$ and $c'$ of $G'$ be mapped to points $x$, $y$ and $z$ of $S$. Now, we find a point $u$ in $S$ such that $N_S(\triangle xuy)=n_1'$, $N_S(\triangle yuz)=n_2'$, and $N_S(\triangle zux)=n_3'$. By Lemma \ref{lmm:uniqp}, $u$ is unique if it exists.
To find $u$, all the points of $S$ lying within the triangle $\triangle xyz$ are checked. If $u$ can be found, then $p'$ is mapped to $u$ and Steps \ref{step:start} and \ref{step:start1} are executed recursively for all three subtrees of $T'$; otherwise no embedding is possible. \end{enumerate} In what follows, we will refer to this algorithm as the NMR Algorithm. A naive implementation of the NMR algorithm runs in $O(n^3)$ time~\cite{DBLP:conf/gd/NishatMR10}. By sorting all points of $S$ according to the polar angle with respect to each point of $S$ and employing some non-trivial observations, this algorithm can be made to run in $O(n^2)$ time~\cite{DBLP:conf/gd/NishatMR10}. Note that the $O(n^2)$ algorithm assumes that the points of $S$ are in general position. If this assumption is removed, the NMR algorithm runs in $O(n^2\log n)$ time. \section{Our Result} \label{sec:subq} In this section, we modify the algorithm of~\cite{DBLP:conf/gd/NishatMR10} described in Section \ref{sec:quad} and achieve a better running time. For the ease of exposition, we assume for the time being that triangular range search has running time $\langle f(|S|),g(|S|)+\ell\rangle$ and triangular range counting has running time $\langle f(|S|),g(|S|)\rangle$, where $S$ is the input set and $\ell$ is the output size for triangular range search. We will finally use the actual running times during the analysis of the algorithm. We first present our modified algorithm below, followed by a detailed running time analysis. \begin{enumerate}[Step 1:] \label{algo:subq} \item \label{step:subq:chull} Find the convex hull of the points of $S$. By Lemma \ref{lmm:conv} the convex hull should have $3$ points; otherwise no embedding exists. \item \label{step:subq:trp} Preprocess the points of $S$ for triangular range search and triangular range counting. \item \label{step:subq:map} For each of the possible $6$ mappings of the outer vertices of $G$ to the three points of the convex hull, perform Steps~\ref{step:subq:start} to~\ref{step:subq:check} (recursively). \item \label{step:subq:start} We take the same assumptions as we took at Step~\ref{step:start} of the NMR algorithm. Now, if $n'=0$ then the embedding is trivially possible and this step returns immediately, terminating the recursion. Otherwise, the following step is executed to check whether an embedding is indeed possible. \item \label{step:subq:prune2} Now we want to find a point $u$ such that $N_S(\triangle xuy)=n_1'$, $N_S(\triangle yuz)=n_2'$, and $N_S(\triangle zux)=n_3'$. Recall that, by Lemma \ref{lmm:uniqp}, such a point is unique if it exists. Without loss of generality we can assume that $n_2' \leq \min(n_1', n_3')$. In order to find $u$, we first find points $v_1$ and $v_2$ on the line $yz$ such that $N_S(\triangle xv_1y)=n_1'$ and $N_S(\triangle xv_2z)=n_3'$. Note carefully that, on the line $yz$, $v_1$ appears closer to $y$ than $v_2$; otherwise there would not be $n'$ points inside the triangle $\triangle xyz$. We will use a binary search and triangular range counting queries to find $v_1, v_2$ as follows. We first choose the midpoint $w$ of the segment $yz$. Then we compute $N_S(\triangle xwy)$ using a triangular range counting query. If $N_S(\triangle xwy) = n_1'$ we are done and we assign $v_1 = w$. Otherwise, if $N_S(\triangle xwy) > n_1'$ ($N_S(\triangle xwy) < n_1'$), then we choose the midpoint $w'$ of the segment $yw$ ($wz$). Then we perform similar checks on $\triangle xw'y$. The point $v_2$ can also be found similarly.
Clearly, such points always exist, and the number of binary search steps is bounded by $O(\log N)$, where $N$ is the maximum absolute value of a point of $S$ in any coordinate. \item \label{step:subq:check} With points $v_1$ and $v_2$ at our disposal, we now try to find point $u$. Note that the point $u$ cannot be inside either $\triangle xv_1y$ or $\triangle xv_2z$. This is because if $u$ is in $\triangle xv_1y$ then $N_S(\triangle xuy) < N_S(\triangle xv_1y)= n_1'$, implying $N_S(\triangle xuy) < n_1'$, a contradiction. A similar argument is possible for $\triangle xv_2z$. So, we must have $u \in P_S(\triangle xv_1v_2)$. Also note that $N_S(\triangle xv_1v_2)=N_S(\triangle xyz)-N_S(\triangle xv_1y)-N_S(\triangle xv_2z)=n'-n_1'-n_3'=n_2'+1$. Using triangular range search we now find the points $P_S(\triangle xv_1v_2)$. To find $u$, we now simply check whether any of these points satisfies the requirement $N_S(\triangle xuy)=n_1'$, $N_S(\triangle yuz)=n_2'$, and $N_S(\triangle zux)=n_3'$. If no such point exists, then we return stating that it is impossible to embed the graph on the points. Otherwise we find a point $u$, which is mapped to the representative vertex $p'$. Now Steps \ref{step:subq:start} to \ref{step:subq:check} are recursively executed for all three subtrees. \end{enumerate} \subsection{Analysis}\label{sec:analysis} Now we analyze our modified algorithm presented above. Step \ref{step:subq:chull} is executed once and can be done in $O(n \log n)$ time. Step \ref{step:subq:trp} is executed once and can be done in $f(|S|)$ time. Steps \ref{step:subq:start} to \ref{step:subq:check} are executed recursively. Step \ref{step:subq:start} basically gives us the terminating condition of the recursion. We focus on Step \ref{step:subq:prune2} and Step \ref{step:subq:check} separately below. In Step \ref{step:subq:prune2}, we find the two points $v_1$ and $v_2$ using binary search and triangular range counting queries. The time required for this step is $O(g(|S|)\log N)$. Note carefully that both the parameters $|S|$ and $N$ remain constant throughout the recursion. Also, it is easy to see that, overall, Step \ref{step:subq:prune2} is executed once for each node in $T$. Hence, the overall running time of this step is $O(g(|S|)n\log N)$. Now we focus on Step~\ref{step:subq:check}. The time required for triangular range search in Step \ref{step:subq:check} is $O(g(|S|)+n_2')$. In this step we also need $O(n_2')$ triangular range counting queries, which add up to $O(g(|S|)n_2')$ time. Recall that $n_2' \leq \min(n_1', n_3')$, i.e., $n_2'$ is the number of internal nodes of the subtree having the fewest internal nodes. Hence, we have $n_2' \leq n'/3$. Now, the overall running time of Step~\ref{step:subq:check} can be expressed using the following recursive formula: $T(n') = T(n_1')+T(n_2')+T(n_3')+ n_2'\, g(|S|)$, where $n_2' \leq \min(n_1',n_3')$. Now we have the following theorem: \begin{theorem} The overall running time of Step~\ref{step:subq:check} is $O(g(|S|)\, n\log n)$. \end{theorem} \begin{proof} Assume inductively that $T(m)\le c(m\log m)\, g(|S|)$ for all $m < n'$, with a constant $c\geq 1$.
Then we have, \begin{align*} T(n') &= T(n_1')+T(n_2')+T(n_3')+ n_2' g(|S|)\\ &\leq c(n_1'\log n_1') g(|S|) + c(n_2'\log n_2') g(|S|) + c(n_3'\log n_3') g(|S|) + n_2' g(|S|)\\ &< c(n_1'\log n') g(|S|) + c\Bigl(n_2'\log \frac{n'}{2}\Bigr) g(|S|) + c(n_3'\log n') g(|S|) + c\, n_2' g(|S|)\\ &= c(n_1'\log n') g(|S|) + c\bigl(n_2'(\log n'-1)\bigr) g(|S|) + c(n_3'\log n') g(|S|) + c\,n_2' g(|S|) \\ &= c\, g(|S|)\, (n_1' +n_2'+n_3') \log n' \\ &\leq c\, g(|S|)\, n' \log n', \end{align*} where the strict inequality uses $n_2' \leq n'/3 < n'/2$. This completes the proof. \qed \end{proof} Based on the above discussion, the total time required for this algorithm is $O(n\log n + f(|S|)+ n g(|S|) \log N + n g(|S|) \log n) = O(f(|S|) + n~g(|S|) (\log n + \log N))$. Now we are ready to replace $f(|S|)$ and $g(|S|)$ with some concrete values. To the best of our knowledge, the best results for triangular range search and counting queries are due to Chazelle et al.~\cite{ALGOR::ChazelleSW1992}. In particular, Chazelle et al. proposed a solution for triangular range search queries in~\cite{ALGOR::ChazelleSW1992} with time complexity $\langle O(m^{1+\epsilon}),O(n^{1+\epsilon}/m^{1/2})\rangle$, where $n<m<n^2$. Using this result, the running time of our algorithm becomes $O(m^{1+\epsilon}+ (\log n + \log N) n^{2+\epsilon}/m^{1/2})$, which reduces to $O(n^{4/3+\epsilon}+ (\log n + \log N) n^{4/3+\epsilon})$ if we choose $m=n^{4/3}$. Finally, we can safely ignore the $\log N$ component from our running time as follows. Firstly, the $\log N$ component becomes significant only when $N$ is doubly exponential in $n$ or larger, which is not really practical. Secondly, while we talk about the theoretical running time of algorithms, we often ignore the inherent $O(\log N)$ terms by assuming that two (large) numbers can be compared in $O(1)$ time. For example, in the comparison model, since sorting $n$ (large) numbers having maximum value $N$ requires $\Theta(n \log n)$ comparisons, we usually say that sorting requires $\Theta(n \log n)$ time; essentially, we ignore the fact that comparing two numbers actually requires $\Omega(\log N)$ time. Notably, the algorithm of~\cite{DBLP:conf/gd/NishatMR10} also has a hidden $O(\log N)$ factor, since it requires $O(n^2)$ comparisons, each of which actually requires $O(\log N)$ time. One final note is that for instances where the $\log N$ term does have a significant effect, we can in fact get rid of the term using standard techniques to transform a counting algorithm into a ranking algorithm at the cost of a $\log n$ factor increase in the running time. Similar techniques are also applicable to the algorithm of~\cite{DBLP:conf/gd/NishatMR10}. So, we have the following theorem. \begin{theorem}\label{thm-generalPos} The point-set embeddability problem can be solved in $O(n^{4/3+\epsilon}\log n)$ time if the input graph is a plane 3-tree and $S$ does not contain any three points that are collinear. \end{theorem} \subsection{For Points Not in General Position} So far we have assumed that the points of $S$ are in general position, i.e., no three points in $S$ are collinear. We now discuss how to get around this assumption. Note that the algorithm of Nishat et al.~\cite{DBLP:conf/gd/NishatMR10} shows improved performance of $O(n^2)$ when the points of $S$ are in general position. Now, if we remove our assumption, then we may have more than two points that are collinear. In this case, the only modification needed in our algorithm is in Step \ref{step:subq:prune2}. Now, the problem is that the two points $v_1$ and $v_2$ may not be found as readily as before.
More specifically, even if Step \ref{step:subq:prune2} reports that $v_1$ and $v_2$ do not exist, the point $u$ may still exist. Recall that, in Step \ref{step:subq:prune2}, we want to find $v_1$ and $v_2$ so as to ensure that $N_S(\triangle xv_1v_2)= n_2'+1$, where $n_2' \leq \min(n_1', n_3')$, i.e., $n_2' \leq n'/3$. Since we have to check each point of $P_S(\triangle xv_1v_2)$ (to find $u$), the bound $n_2' \leq n'/3$ provides the required efficiency in our algorithm. To achieve the same efficiency, we now slightly modify Step \ref{step:subq:prune2}. Suppose we are finding $v_1$ ($v_2$). We now try to find $v_1$ ($v_2$) such that $N_S(\triangle xv_1y) > n_1'$ ($N_S(\triangle xv_2z) > n_3'$) and $v_1$ ($v_2$) is as near as possible to $B$ ($C$) on the line $BC$. Let $\mathcal I$ denote the number of iterations now needed to find $v_1$ ($v_2$). We have the following bound on $\mathcal I$.

\begin{lemma} $\mathcal I$ is bounded by $O(\log N)$. \end{lemma}
\begin{proof} There may not be any candidate point for $v_1$ ($v_2$) with integer coordinates. But since $x$ can be the intersection of two lines, each of which passes through two points of $S$, either there exists a candidate point for $v_1$ whose coordinates have denominators less than $N^2$, or there is none. Either way, to find such a point or to be certain that none exists, we only need precision finer than $1/N^2$. Therefore, $O(\log N)$ iterations are sufficient. \qed \end{proof}

Again, the argument presented at the end of Section~\ref{sec:analysis} about the $\log N$ component applies here as well. Therefore, the result of Theorem~\ref{thm-generalPos} holds even when the points of $S$ are not in general position. We restate our stronger and more general result as follows.

\begin{theorem}\label{thm-nogeneralPos} The Point-Set Embeddability problem can be solved in $O(n^{4/3+\epsilon}\log n)$ time if the input graph is a plane 3-tree. \end{theorem}

\section{Generalized Version}\label{sec:genProb} A generalized version of the Point-Set Embeddability problem, in which $S$ has more points than the number of vertices of the input graph $G$, is also of interest in the graph drawing research community. More formally, the generalized problem is defined as follows.

\begin{prb}[Generalized Point-Set Embeddability]\label{prb:ptemb} Let $G$ be a plane graph of $n$ vertices and let $S$ be a set of $k$ points in the plane such that $k > n$. The Generalized Point-Set Embeddability problem asks for a straight-line drawing of $G$ such that the vertices of $G$ are mapped to some $n$ points of $S$. \end{prb}

In this section, we extend our study to solve the Generalized Point-Set Embeddability problem for plane 3-trees. This version of the problem was also handled in~\cite{DBLP:conf/gd/NishatMR10} for plane 3-trees, where an $O(nk^8)$-time algorithm was presented. Our target again is to improve upon that algorithm: we show how to improve the running time to $O(n k^4)$. We use dynamic programming (DP) for this purpose. For the DP, we define our (sub)problem to be $Embed(r',a',b',c')$, where $r'$ is the root of the (sub)tree $T'$ and $a'$, $b'$ and $c'$ are points in $S$. $Embed(r',a',b',c')$ returns true if and only if it is possible to embed the subgraph $G'$ corresponding to the subtree $T'$ rooted at $r'$ such that its three outer vertices are mapped to the points $a'$, $b'$ and $c'$. A sketch of the resulting recurrence is given below.
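Under the stated assumptions, the recurrence for $Embed$ translates directly into a memoized search. The sketch below is illustrative only; the tree-node interface and the triangular range search oracle \texttt{points\_in\_triangle} (standing in for $P_S$) are hypothetical.
\begin{verbatim}
from functools import lru_cache

def generalized_embed(root, S, points_in_triangle):
    """Memoized evaluation of Embed(r', a', b', c').  The node
    interface (is_leaf, children) and the triangular range search
    oracle points_in_triangle are assumptions for illustration;
    points are hashable coordinate tuples."""
    @lru_cache(maxsize=None)
    def embed(node, a, b, c):
        if node.is_leaf():
            return True                       # leaves embed trivially
        r1, r2, r3 = node.children
        # Try every point u inside triangle a'b'c' as the image of
        # the representative vertex of this subtree.
        for u in points_in_triangle(a, b, c):
            if (embed(r1, a, b, u) and
                embed(r2, u, b, c) and
                embed(r3, a, u, c)):
                return True
        return False

    # G embeds on S iff some triple of points works as the outer face.
    return any(embed(root, a, b, c) for a in S for b in S for c in S)
\end{verbatim}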
Now we can build the DP table by computing $Embed(r',a',b',c')$ for progressively larger subtrees, checking whether the corresponding subgraphs can be embedded for each combination of three points of $S$ as outer vertices of the corresponding subgraphs. In the end, the goal is to check whether $Embed(r,a,b,c)$ returns true for any particular points $a,b,c \in S$, where $r$ is the root of the representative tree $T$ of the input plane 3-tree $G$. Clearly, if there exist $a,b,c\in S$ such that $Embed(r,a,b,c)$ is true, then $G$ is embeddable on $S$. We now have the following theorem.

\begin{theorem} The Generalized Point-Set Embeddability problem can be solved in $O(n k^4)$ time. \end{theorem}
\begin{proof} If $r'$ is a leaf, then $Embed(r',a',b',c')$ is trivially true for any $a',b',c'\in S$. Now consider the computation of $Embed(r',a',b',c')$ when $r'$ is not a leaf, and assume that the children of $r'$ are $r_1'$, $r_2'$ and $r_3'$, in that order. Then $Embed(r',a',b',c')$ is true if and only if we can find a point $u \in P_S(\triangle a'b'c')$ such that $Embed(r_1',a',b',u)$, $Embed(r_2',u,b',c')$ and $Embed(r_3',a',u,c')$ are all true; otherwise $Embed(r',a',b',c')$ is false. Clearly, the time required to compute $Embed(r',a',b',c')$ is $O(g(|S|) + N_S(\triangle a'b'c'))$. Therefore, in the worst case, the time required to compute an entry of the DP table is $O(g(|S|) + k)$. It is easy to see that there are $nk^3$ entries in total in the DP table. Note that, to be able to compute $P_S(\triangle a'b'c')$, we need to spend $O(f(|S|))$ time on a one-time preprocessing. Hence, the total running time is $O(f(|S|) + nk^3 (g(|S|) + k)) = O(f(|S|) + nk^3 g(|S|) + nk^4)$. Using the $\langle O(m^{1+\epsilon}),O(k^{1+\epsilon}/m^{1/2})\rangle$ result of Chazelle et al.~\cite{ALGOR::ChazelleSW1992} on the $k$ points of $S$, the running time becomes $O(m^{1+\epsilon} + nk^3 \cdot k^{1+\epsilon}/m^{1/2} + nk^4)$. Choosing, e.g., $m=k^{4/3}$ (recall $k<m<k^2$), this running time is $O(nk^4)$ in the worst case. \qed \end{proof}

\section{Conclusion}\label{sec:con} In this paper, we have followed up on the work of \cite{DBLP:conf/gd/NishatMR10} and presented an algorithm that solves the Point-Set Embeddability problem in $O(n^{4/3+\epsilon} \log n)$ time. This improves the recent $O(n^2 \log n)$ time result of~\cite{DBLP:conf/gd/NishatMR10}. Whether this algorithm can be improved further is an interesting open problem. In fact, an $o(n^{4/3})$ algorithm would be an interesting avenue to explore, which, however, does not seem likely with our current technique: since there are $\Omega(n)$ nodes in the tree, any solution that uses triangular range search to check validity at least once for each node in the tree requires $\Omega(n)$ calls to the triangular range query. The lower bound for triangular range search is shown to be $\langle \Omega(m), \Omega(n/m^{1/2}) \rangle$~\cite{journals/dcg/Chazelle97}, which suggests an $\Omega(n^{4/3})$ lower bound for any algorithm, like ours, that uses triangular range search. We have also studied a generalized version of the Point-Set Embeddability problem where the point set $S$ has more than $n$ points. For this version of the problem, we have presented an algorithm that runs in $O(nk^4)$ time, where $k = |S|$. Nishat et al.~\cite{DBLP:conf/gd/NishatMR10} also handled this version of the problem and presented an $O(nk^8)$ time algorithm. It would be interesting to see whether further improvements in this case are possible.
Also, future research may be directed towards solving these problems for various other classes of graphs. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:introduction} It is well-documented in management, economics as well as operations research that businesses, even in narrowly defined industries, are quite different from one another in terms of productivity. These cross-firm productivity differentials are large, persistent and ubiquitous \citep[see][]{syverson2011}. Research on this phenomenon is therefore unsurprisingly vast and includes attempts to explain it from the perspective of firms' heterogeneous behaviors in research and development \citep[e.g.,][]{griffithetal2004}, corporate operational strategies \citep{smithreece1999}, ability of the managerial teams \citep{demerjianetal2012}, ownership structure \citep{ehrlichetal1994}, employee training and education \citep{moretti2004}, allocation efficiency \citep{songetal2011}, participation in globalization \citep{grossmanhelpman2015} and many others. In most such studies, a common production function/technology is typically assumed for all firms within the industry, and the differences in operations performance of firms are confined to variation in the ``total factor productivity,'' the Solow residual \citep{solow1957}.\footnote{A few studies alternatively specify an ``augmented'' production function which, besides the traditional inputs, also admits various firm-specific shifters such as the productivity-modifying factors mentioned above. But such studies continue to assume that the same technology frontier applies to all firms.} In this paper, we approach the heterogeneity in firm performance from a novel perspective in that we explicitly acknowledge the existence of locational effects on the operations technology of firms and their underlying productivity. We allow the firm-level production function to vary across space, thereby accommodating potential neighborhood influences on firm production. In doing so, we are able to examine the role of locational heterogeneity for cross-firm differences in operations performance/efficiency.

A firm's location is important for its operations technology. For example, \citet{ketokivi2017locate} show that hospital location is significantly related to its performance and that a hospital's choice of strategy can help moderate the effect of location through the interplay of local environmental factors with organizational strategy. As shown in Figure \ref{fig:number}, chemical enterprises in China, the focus of the empirical analysis in this paper, are widely (and unevenly) distributed across space. Given the sheer size of the country (it is the third largest by area), it is implausible that, even after controlling for firm heterogeneity, all these businesses operate using the same production technology. Organizations in all industries\textemdash not only hospitals and chemical manufacturers\textemdash develop strategies to respond to the local environment and the associated competitive challenges, and those strategies drive operational decisions regarding investments in new or updated technologies.

\begin{figure}[t] \centering \includegraphics[scale=0.26]{map_number.jpg} \caption{Spatial Distribution of Manufacturers of Chemicals in China, 2004--2006} \label{fig:number} \end{figure}

Theoretically, there are many reasons to believe that the production technology is location-specific. First, exogenous local endowments and institutional environments, such as laws, regulations and local supply chains, play a key role in determining firm performance.
The location of a firm determines key linkages between its production, market, supply chain and product development \citep{goldsteinetal2002}. If we look at the global distribution of the supply chains of many products, product development and design are usually conducted in developed countries such as the U.S. and European countries, while manufacturing and assembly are performed in East Asian countries such as China and Vietnam. This spatial distribution largely reflects the endowment differences in factors of production (e.g., skilled vs.~unskilled labor) and the consequent relative input price differentials across countries. Analogously, consider the heterogeneity in endowments and institutions across different locations \textit{within} a country. Many more of the world's leading universities are located on the East and West Coasts of the U.S.~than in the middle of the country, and they supply thousands of talented graduates each year for regional development, bolstering growth in flagship industries such as banking and high-tech in those locations. In China, which our empirical application focuses on, networking and political connections are, anecdotally, the key factors for the success of a business in the Northeast regions, whereas the economy on the Southeast Coast is more market-oriented. Furthermore, there are many broadly defined special economic zones (SEZs) in China, all characterized by a small designated geographical area, local management, unique benefits and separate customs and administrative procedures \citep[see][]{craneetal2018}. According to a report from the China Development Bank, in 2014 there were 6 SEZs, 14 open coastal cities, 4 pilot free-trade areas and 5 financial reform pilot areas. There were also 31 bonded areas, 114 national high-tech development parks, 164 national agricultural technology parks, 85 national eco-industrial parks, 55 national eco-civilization demonstration areas and 283 national modern agriculture demonstration areas. They are spread widely across China and support various economic functions, giving rise to locational heterogeneity in the country's production.

Second, most industries are geographically concentrated in general, whereby firms in the same or related industries tend to cluster spatially, benefiting from agglomeration economies reflected, among other things, in their production technologies that bring about localized \textit{aggregate} increasing returns. Ever since \citet{marshall1920} popularized these ideas, researchers have shown that industry concentration is too great to be explained solely by differences in exogenous locational factors and that there are at least three behavioral micro-foundations for agglomeration: benefits from labor market pooling/sharing, efficiency gains from the collocation of industries with input-output relationships that improves the quality of matches, and technology spillovers \citep[see][]{ellison_glaeser1999,duranton2004,ellisonetal2010,singh_marx2013}. The key idea of agglomeration economies is that geographic proximity reduces the transport costs of goods, people and, perhaps more importantly, ideas. While it is intuitive that the movement of goods and people is hindered by spatial distance, the empirical evidence from prior studies shows that technology spillovers are also highly localized because knowledge transfers require the interaction that proximity facilitates \citep[see][]{almeida_Kogut1999,alcacer_Chung2007,singh_marx2013}.
Therefore, owing to the role of local neighborhood influences, firms that produce the same or similar products but are located in regions with different industry concentration levels are expected to enjoy different agglomeration effects on their operations.

Because location is an important factor affecting firm performance, previous empirical studies rely heavily on spatial econometrics to examine locational/spatial effects on production. Oftentimes, spatially weighted averages of other firms' outputs and inputs are included as additional regressors in spatial autoregressive (SAR) production-function models \citep[e.g.,][]{glassetal2016,glassetal2020,glassetal2020b, vidoli_Canello2016,serpakrishnan2018, glassetal2019,kutluetal2020,houetal2020}. The appropriateness of such a conceptualization of firm-level production functions in the presence of locational influences however remains unclear because these SAR specifications are difficult to reconcile with the theory of the firm. For instance, the reduced form of such models effectively implies substitutability of the firm's inputs with those of its peers and does not rule out the possibility of the firm's output increasing when the neighboring firms use more inputs even if the firm itself keeps its own inputs fixed and its productivity remains the same. Further, these models continue to implausibly assume that all firms use the same production technology no matter their location. The practical implementation of SAR production-function models is, perhaps, even more problematic: (\textsl{i}) they imply additional, highly nonlinear parameter restrictions necessary to ensure that the conventional production axioms are not violated, and (\textsl{ii}) they are likely unidentifiable from the data given the inapplicability of available proxy-variable estimators and the pervasive lack of valid external instruments at the firm level. We discuss this in detail in Appendix \ref{sec:appx_sar}.\footnote{But we should note that studies of the nexus between location/geography and firm performance in operations research and management are not all confined to the production theory paradigm; e.g., see \citet{bannisterstolp1995}, \citet{goldsteinetal2002}, \citet{kalnins2004,kalnins2006}, \citet{dahlsorenson2012} and \citet{kulchina2016}.}

In this paper, we consider a semiparametric production function in which both the input-to-output transformation technology and productivity are location-specific. Concretely, using the location information for firms, we let the input-elasticity and productivity-process parameters be nonparametric functions of the firm's geographic location (latitude and longitude) and estimate these unknown functions via kernel methods. Our methodology captures cross-firm spatial influences through local smoothing, whereby the production technology for each location is calculated as the geographically weighted average of the input-output \textit{relationships} for firms in nearby locations, with larger weights assigned to firms that are more spatially proximate. This is fundamentally different from the SAR production-function models that formulate neighborhood influences using spatially weighted averages of the output/input \textit{quantities} while keeping the production technology the same for all firms. Consistent with the agglomeration literature, our approach implies that learning and knowledge spillovers are localized and that their chances/intensity diminish with distance.
Importantly, by utilizing data-driven selection of the smoothing parameters that regulate the spatial weighting of neighboring firms in kernel smoothing, we avoid the need to rely on \textit{ad hoc} specifications of the weighting schemes and spatial radii of neighborhood influences like the traditional SAR models do. It also allows us to be agnostic about the channels through which firm location affects production, and our methodology inclusively captures all possible mechanisms of agglomeration economies.

Our conceptualization of spatial influences by means of locationally varying parameters is akin to the idea of ``geographically weighted regressions'' (GWR) introduced and popularized in the field of geography by \citet{bfc1996}; also see \citet{fbc-book} and many references therein. Just like ours, the GWR technique aims to model processes that are not constant over space but exhibit local variations, and it does so using a varying-coefficient specification estimated via kernel smoothing over locations. However, the principal, and non-trivial, distinction of our methodology from the GWR approach is its emphasis on the \textit{identification} of the spatially varying relationship. Concretely, for consistency and asymptotic unbiasedness, the GWR methods rely on the assumption that the (non-spatial) regressors in the relationship of interest are mean-orthogonal to the stochastic disturbance, which rules out the presence of correlated unobservables as well as the potential simultaneity of the regressors and the outcome variable for reasons other than spatial autoregression. The latter two are, however, more the rule than the exception for economic relations affected by behavioral choices, including the firm-level production function. Recovering the data generating process underlying the firm's production operations from observational data (i.e., its identification) requires tackling the correlation between the regressors and the error term, which the GWR cannot handle, making it unable to consistently estimate the production technology and firm productivity. This is precisely our focus.\footnote{In effect, our methodology constitutes a generalization of the GWR technique to accommodate endogenous regressors in the context of production-function estimation.}

The identification of production functions in general, let alone with locational heterogeneity, is not trivial due to the endogeneity issue whereby the firm's input choices are correlated with its productivity. Complexity stems from the latency of firm productivity. Due to the rather unsatisfactory performance of the conventional approaches to the identification of production functions, such as fixed effects estimation or instrumentation using prices, there is a growing literature aimed at solving the endogeneity using a proxy-variable approach \citep[e.g., see][]{op1996,lp2003,acf2015,gnr2013}, which has gained wide popularity among empiricists. To identify the locationally varying production functions, we develop a semiparametric proxy-variable estimator that accommodates locational heterogeneity across firms. To this end, we build upon \citet{gnr2013}, whose framework we extend to incorporate spatial information about the firms in a semiparametric fashion. More specifically, we make use of the structural link between the production function (of the known varying-coefficient functional form) and the optimality condition for a flexible input derived from the firm's static expected profit maximization problem.
We propose a two-step estimation procedure and, to approximate the unknown functional coefficients, employ local-constant kernel fitting. Based on the estimated location-specific production functions, we further propose a locational productivity differential decomposition that breaks down the cross-region production differences which cannot be explained by input usage (i.e., the differential in the ``total productivity'' of firms across locations) into the contributions attributable to differences in available production technologies and to differences in the total-factor operations efficiency of firms.

We apply our model to study locationally heterogeneous production technology among Chinese manufacturing firms in the chemical industry in 2004--2006. Based on the results of the data-driven cross-validation as well as formal statistical tests, the empirical evidence provides strong support for the importance and relevance of location for production. Qualitatively, we find that both technology and firm productivity vary significantly across regions. Firms are more likely to exhibit higher (internal) returns to scale in regions of agglomeration. However, the connection between firm productivity and industry concentration across space is unclear. The decomposition analysis reveals that differences in \textit{technology} (as opposed to idiosyncratic firm heterogeneity) are the main source of cross-location total productivity differentials, on average accounting for 2/3 of the differential.

To summarize, our contribution is as follows. We propose a semiparametric methodology that accommodates locational heterogeneity in production-function estimation while maintaining the standard structural assumptions about firm production. Unlike the available SAR-type alternatives, our model explicitly estimates the cross-locational variation in production technology. To operationalize our methodology, we extend the widely used proxy-variable identification methods to incorporate firm location. Our model, as well as the proposed decomposition method for disentangling the effects of location on firm productivity from those on the technological input-output relationship, should provide a valuable addition to the toolkit of empiricists interested in studying agglomeration economies and technology spillovers. In the context of operations management in particular, our methodology will be most useful for empirical studies focused on the analysis of operations efficiency/productivity and its ``determinants'' \citep[e.g.,][are just a few recent examples of such analyses]{ross2004analysis,berenguer2016disentangling,jola2016effect,lam2016impact}. In the case of multi-input production, the ``total factor productivity'' is among the most popular comprehensive measures of the operations efficiency/productivity of the firm, and our paper shows how to measure the latter robustly when production relationships are not constant over space and are subject to neighborhood influences. This is particularly interesting because the effects of location, supply chain integration and agglomeration on firm performance have recently attracted much attention among researchers in operations management \citep[e.g.,][]{goldsteinetal2002,ketokivi2017locate,flynn2010impact}.

The rest of the paper is organized as follows. Section \ref{sec:model} describes the model of firm-level production exhibiting locational heterogeneity. We describe our identification and estimation strategy in Section \ref{sec:identification_estimation}.
We provide the locational productivity differential decomposition in Section \ref{sec:decomposition_level}. The empirical application is presented in Section \ref{sec:application}. Section \ref{sec:conclusion} concludes. Supplementary materials are relegated to the Appendix.

\section{Locational Heterogeneity in Production} \label{sec:model} Consider the production process of a firm $i$ ($i=1,\dots,n$) in time period $t$ ($t=1,\dots,T$) in which physical capital $K_{it} \in\Re_{+}$, labor $L_{it} \in\Re_{+}$ and an intermediate input such as materials $M_{it}\in\Re_{+}$ are transformed into the output $Y_{it}\in\Re_{+}$ via a production function given the (unobserved) firm productivity. Also, let $S_{i}$ be the (fixed) location of firm $i$, with the obvious choice being $S_{i}=(\text{lat}_i,\text{long}_i)'$, where $\text{lat}_i$ and $\text{long}_i$ are the latitude and longitude coordinates of the firm's location. Then, the locationally varying production function is \begin{equation}\label{eq:prodfn_hicks_f} Y_{it} = F_{|S_{i}}(K_{it},L_{it},M_{it})\exp\left\{\omega_{it}\right\}\exp\left\{\eta_{it}\right\} , \end{equation} where $F_{|S_{i}}(\cdot)$ is the firm's location-specific production function that varies over space (as captured by $S_i$) to accommodate locational heterogeneity in production technology, $\omega_{it}$ is the firm's persistent Hicks-neutral total factor productivity capturing its operations efficiency, and $\eta_{it}$ is a random transitory shock. Note that, so long as the firm's location is fixed, $\omega_{it}$, which persists for the same firm $i$, by implication also follows an evolution process that is specific to this firm's location $S_i$; we expand on this below.

As in \citet{gnr2013}, \citet{malikovetal2020} and \citet{malikovzhao2021}, physical capital $K_{it}$ and labor $L_{it}$ are said to be subject to adjustment frictions (e.g., time-to-install, hiring/training costs), and the firm optimizes them dynamically at time $t-1$, rendering these predetermined inputs quasi-fixed at time $t$. Materials $M_{it}$ is a freely varying (flexible) input and is determined by the firm statically at time $t$. Thus, both $K_{it}$ and $L_{it}$ are state variables with dynamic implications and follow their respective deterministic laws of motion: \begin{equation}\label{eq:k_lawofmotion} K_{it}=I_{it-1}+(1-\delta)K_{it-1}\quad \text{and}\quad L_{it}=H_{it-1}+L_{it-1} , \end{equation} where $I_{it}$, $H_{it}$ and $\delta$ are the gross investment, net hiring and the depreciation rate, respectively. Following convention, we assume that the risk-neutral firm maximizes a discounted stream of expected life-time profits in perfectly competitive output and factor markets subject to the state variables and expectations about the market structure variables, including prices, that are common to all firms.\footnote{We use the perfect competition and homogeneous price assumption mainly for two reasons: (\textsl{i}) it is the most widely used assumption in the literature on the structural identification of production functions and productivity, and (\textsl{ii}) this assumption has been repeatedly used when studying the same data as ours \citep[e.g.,][]{baltagietal2016,malikovetal2020}.
Relaxing the perfect competition assumption is possible but non-trivial: it requires additional assumptions about the output demand \citep[e.g.,][]{deLoecker2011product} and/or extra information on firm-specific output prices that is usually not available for manufacturing data \citep[e.g.,][]{deloeckeretal2016}, or imposing \textit{ex ante} structure on the returns to scale \citep[see][]{flynnetal2019}. It is still a subject of ongoing research. Given the emphasis of our contribution on incorporating \textit{technological} heterogeneity (associated with firm location, in our case) into the measurement of firm productivity, we opt to keep all other aspects of the modeling consistent with the convention in the literature to ensure meaningful comparability with most available methodologies.} Also, for convenience, we denote by $\mathcal{I}_{it}$ the information set available to firm $i$ for making the period-$t$ production decisions.

In line with the proxy variable literature, we model firm productivity $\omega_{it}$ as a first-order Markov process which we, however, endogenize \`a la \citet{dj2013} and \citet{deloecker2013} by incorporating productivity-enhancing and ``learning'' activities of the firm. To keep our model as general as possible, we denote all such activities via a generic variable $G_{it}$ which, depending on the empirical application of interest, may measure the firm's R\&D expenditures, foreign investments, export status/intensity, etc.\footnote{A scalar variable $G_{it}$ can obviously be replaced with a vector of such variables.} Thus, $\omega_{it}$ evolves according to a location-inhomogeneous controlled first-order Markov process with transition probability $\mathcal{P}^{\omega}_{|S_i}(\omega_{it}|\omega_{it-1},G_{it-1})$. This implies the following location-specific mean regression for firm productivity: \begin{equation}\label{eq:productivity_hicks_law} \omega_{it}=h_{|S_i}\left(\omega_{it-1},G_{it-1}\right)+\zeta_{it}, \end{equation} where $h_{|S_i}(\cdot)$ is the location-specific conditional mean function of $\omega_{it}$, and $\zeta_{it}$ is a random innovation unanticipated by the firm at period $t-1$ and normalized to zero mean: $\mathbb{E}\left[\zeta_{it} | \mathcal{I}_{it-1}\right]=\mathbb{E}\left[\zeta_{it}\right]=0$.

The evolution process in \eqref{eq:productivity_hicks_law} implicitly assumes that productivity-enhancing activities and learning take place with a delay, which is why the dependence of $\omega_{it}$ on the control $G_{it}$ is lagged, implying that improvements in firm productivity take a period to materialize. Further, in $\mathbb{E}[\zeta_{it} |\ \mathcal{I}_{it-1}]=0$ we assume that, due to adjustment costs, firms do not adjust their productivity-enhancing investments in light of expected \textit{future} productivity innovations. Since the innovation $\zeta_{it}$ represents inherent uncertainty about productivity evolution as well as uncertainty about the success of productivity-enhancing activities, the firm relies on its knowledge of the \textit{contemporaneous} productivity $\omega_{it-1}$ when choosing the level of $G_{it-1}$ in period $t-1$, while being unable to anticipate $\zeta_{it}$. These structural timing assumptions are commonly made in models with controlled productivity processes \citep[e.g.,][]{vanbiesebroeck2005, dj2013,dj2018, deloecker2013, malikovetal2020,malikovzhao2021} and are needed to identify the within-firm productivity-improving learning effects.
We now formalize the firm's optimization problem in line with the above discussion. Under risk neutrality, the firm's optimal choice of the freely varying input $M_{it}$ is described by the (static) restricted expected profit-maximization problem subject to the already optimal dynamic choice of quasi-fixed inputs: \begin{align}\label{eq:profitmax} \max_{M_{it}}\ P_{t}^Y F_{|S_i}(K_{it},L_{it},M_{it})\exp\left\{\omega_{it}\right\}\theta - P_{t}^M M_{it} , \end{align} where $P_{t}^Y$ and $P_{t}^M$ are respectively the output and material prices which, given the perfect competition assumption, need not vary across firms; and $\theta\equiv\mathbb{E}[\exp\{\eta_{it}\}|\ \mathcal{I}_{it}]$. The first-order condition corresponding to this optimization yields the firm's conditional demand for $M_{it}$.

Building on \citeauthor{dj2018}'s (2013, 2018) treatment of productivity-enhancing R\&D investments (a potential choice of $G_{it}$ in our framework) as a contemporaneous decision, we describe the firm's dynamic optimization problem by the following Bellman equation: \begin{align}\label{eq:bellman} \mathbb{V}_{t}\big(\Xi_{it}\big) = &\max_{I_{it},H_{it},G_{it}} \Big\{ \Pi_{t|S_i}(\Xi_{it}) - \text{C}^{I}_{t}(I_{it}) - \text{C}^{H}_{t}(H_{it})- \text{C}^{G}_{t}(G_{it}) + \mathbb{E}\Big[\mathbb{V}_{t+1}\big(\Xi_{it+1}\big) \Big| \Xi_{it},I_{it},H_{it},G_{it}\Big]\, \Big\} , \end{align} where $\Xi_{it}=(K_{it},L_{it},\omega_{it})'\in \mathcal{I}_{it}$ are the state variables;\footnote{The firm's location $S_i$ is suppressed in the list of state variables due to its time-invariance.} $\Pi_{t|S_i}(\Xi_{it})$ is the restricted profit function derived as the value function corresponding to the static problem in \eqref{eq:profitmax}; and $\text{C}^{\kappa}_t(\cdot)$ is the cost function for capital ($\kappa=I$), labor ($\kappa=H$) and productivity-enhancing activities ($\kappa=G$).\footnote{The assumption of separability of the cost functions is unimportant, and one can reformulate \eqref{eq:bellman} using one $\text{C}_{t}(I_{it},H_{it},G_{it})$ for all dynamic production variables.} In the above dynamic problem, the level of productivity-enhancing activities $G_{it+1}$ is chosen in time period $t+1$, unlike the amounts of the dynamic inputs $K_{it+1}$ and $L_{it+1}$ that are chosen by the firm in time period $t$ (via $I_{it}$ and $H_{it}$, respectively). Solving \eqref{eq:bellman} for $I_{it}$, $H_{it}$ and $G_{it}$ yields their respective optimal policy functions.

An important assumption of our structural model of firm production in the presence of locational heterogeneity is that firm location $S_i$ is both fixed and exogenous. However, the identification of locational heterogeneity in production may be complicated by the potentially endogenous spatial sorting problem, whereby more productive firms might \textit{ex ante} sort into what then become high-productivity locations. Under this scenario, when we compare firm productivity and technology across locations, we may mistakenly attribute gradients therein to locational effects such as agglomeration and neighborhood influences, while in actuality they may merely reflect the underlying propensity of all firms in a given location to be more productive \textit{a priori}.
While there has recently been notable progress in formalizing and understanding these coincident phenomena theoretically \citep[e.g.,][]{behrensetal2014,gaubert2018}, disentangling firm sorting and spatial agglomeration remains a non-trivial task empirically.\footnote{The urban economics literature also distinguishes a third endogenous process, usually referred to as ``selection,'' which differs from sorting in that it occurs \textit{ex post}, after the firms have self-sorted into locations, and determines their continuing survival. We abstract away from this low-productivity-driven attrition issue in light of the growing empirical evidence suggesting that it explains none of the spatial productivity differences which, in contrast, are mainly driven by agglomeration economies \citep[see][]{combesetal2012}. Relatedly, firm attrition out of the sample has also become commonly accepted as a practical non-issue in the productivity literature so long as the data are kept unbalanced. For instance, \citet[][p.324]{lp2003} write: ``The original work by Olley and Pakes devoted significant effort to highlighting the importance of not using an artificially balanced sample (and the selection issues that arise with the balanced sample). They also show once they move to the unbalanced panel, their selection correction does not change their results.''} However, by including the firm's own lagged productivity in the autoregressive $\omega_{it}$ process in \eqref{eq:productivity_hicks_law}, we are able (at least to some extent) to account for this potential self-sorting because sorting into locations is heavily influenced by the firm's own productivity (oftentimes stylized as ``talent'' or ``efficiency'' in theoretical models). That is, the locational heterogeneity in firm productivity and technology in our model is measured after partialling out the contribution of the firm's own past productivity. Incidentally, \citet{deloecker2013} argues the same in the context of the productivity effects of exporting and the self-selection of exporters.

\section{Methodology} \label{sec:identification_estimation} This section describes our strategy for the (structural) identification and estimation of the firm's location-specific production technology and unobserved productivity. Following popular practice in the literature \citep[e.g., see][]{op1996,lp2003, dj2013,acf2015,collarddeloecker2015, koningsvanormelingen2015}, we assume the Cobb-Douglas specification for the production function, which we adapt to allow for potential locational heterogeneity in production. We do so in a semiparametric fashion as follows: \begin{equation}\label{eq:prodfn_hicks_cd} \ln F_{|S_i}(\cdot) = \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\beta_M(S_i)m_{it} , \end{equation} where the lower-case variables denote the logs of the corresponding upper-case variables, and the input elasticity functions $[\beta_K(\cdot),\beta_L(\cdot),\beta_M(\cdot)]'$ are unspecified smooth functions of the firm's location $S_i$. The local smoothness of the production relationship, including both the input elasticities and the persistent productivity process, captures the effects of technology spillovers and agglomeration economies that give rise to local neighborhood influences. Our methodology can also accommodate more flexible specifications such as the log-quadratic translog, which provides a natural extension of the log-linear Cobb-Douglas form; see Appendix \ref{sec:appx_tl} for the details on this extension.
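For intuition, the following minimal simulation illustrates the locationally varying Cobb-Douglas specification in \eqref{eq:prodfn_hicks_cd} together with a linear AR(1) form for the productivity process (formalized in the next subsection). All coefficient functions and distributions below are illustrative assumptions and, for simplicity, the inputs are drawn exogenously, ignoring the input endogeneity that our estimator is designed to address.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, T = 500, 3
S = rng.uniform(0.0, 1.0, size=(n, 2))        # firm locations (fixed)

# Illustrative smooth coefficient functions of location:
beta_K = lambda S: 0.25 + 0.10 * S[:, 0]
beta_L = lambda S: 0.20 + 0.05 * S[:, 1]
beta_M = lambda S: 0.45 - 0.05 * S[:, 0]
rho1   = lambda S: 0.60 + 0.20 * S[:, 1]      # local AR(1) persistence

omega = rng.normal(0.0, 0.2, n)               # initial productivity
for t in range(T):
    k, l, m = rng.normal(0.0, 1.0, (3, n))    # exogenous inputs (simplification)
    eta = rng.normal(0.0, 0.1, n)             # transitory shock
    y = beta_K(S)*k + beta_L(S)*l + beta_M(S)*m + omega + eta
    omega = rho1(S)*omega + rng.normal(0.0, 0.1, n)   # innovation zeta
\end{verbatim}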
Before proceeding further, we formalize the location-specific autoregressive conditional mean function of $\omega_{it}$ in its evolution process \eqref{eq:productivity_hicks_law}. Following \citet{dj2013,dj2019}, \citet{acf2015}, \citet{griecoetal2016,griecoetal2019} and many others, we adopt a parsimonious first-order autoregressive specification of the Markovian evolution for productivity but take a step further by assuming a more flexible semiparametric location-specific formulation akin to that for the production technology in \eqref{eq:prodfn_hicks_cd}: \begin{equation}\label{eq:productivity_hicks_lawsp} h_{|S_i}(\cdot ) = \rho_0(S_i)+ \rho_1(S_i) \omega_{it-1}+\rho_2(S_i)G_{it-1}. \end{equation} \subsection{Proxy Variable Identification} \label{sec:identification} Substituting for $F_{|S_i}(\cdot)$ in the locationally varying production function \eqref{eq:prodfn_hicks_f} using \eqref{eq:prodfn_hicks_cd}, we obtain \begin{align} y_{it} &= \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\beta_M(S_i)m_{it} + \omega_{it}+ \eta_{it} \label{eq:prodfn_hicks_cd2} \\ &= \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\beta_M(S_i)m_{it} + \rho_0(S_i)+ \rho_1(S_i) \omega_{it-1}+\rho_2(S_i)G_{it-1} + \zeta_{it}+ \eta_{it}, \label{eq:prodfn_hicks_cd3} \end{align} where we have also used the Markov process for $\omega_{it}$ from \eqref{eq:productivity_hicks_law} combined with \eqref{eq:productivity_hicks_lawsp} in the second line. Under our structural assumptions, all right-hand-side covariates in \eqref{eq:prodfn_hicks_cd3} are predetermined and weakly exogenous with respect to $\zeta_{it} + \eta_{it}$, except for the freely varying input $m_{it}$ that the firm chooses in time period $t$ conditional on $\omega_{it}$ (among other state variables) thereby making it a function of $\zeta_{it}$. That is, $m_{it}$ is endogenous. Prior to finding ways to tackle the endogeneity of $m_{it}$, to consistently estimate \eqref{eq:prodfn_hicks_cd3}, we first need to address the latency of firm productivity $\omega_{it-1}$. A popular solution is a proxy variable approach \`a la \citet{lp2003} whereby latent productivity is controlled for by inverting the firm's conditional demand for an observable static input such as materials. However, such a standard proxy approach generally fails to identify the firm's production function and productivity due to the lack of a valid instrument (from within the production function) for the endogenous $m_{it}$ despite the abundance of predetermined lags of inputs. As recently shown by \citet{gnr2013}, identification cannot be achieved using the standard procedure because \textit{no} exogenous higher-order lag provides excluded relevant variation for $m_{it}$ after conditioning the model on the already included self-instrumenting variables. As a result, the production function remains unidentified in flexible inputs. In order to solve this under-identification problem, \citet{gnr2013} suggest exploiting a structural link between the production function and the firm's (static) first-order condition for the freely varying input. In what follows, we build on this idea which we modify along the lines of \citet{dj2013} and \citet{malikovzhao2021} in explicitly making use of the assumed functional form of production technology. \textsl{First step}.\textemdash We first focus on the identification of production function in its flexible input $m_{it}$. Specifically, given the technology specification in \eqref{eq:prodfn_hicks_cd}, we seek to identify the material elasticity function $\beta_M(S_i)$. 
To do so, we consider the equation for the firm's first-order condition for the static optimization in \eqref{eq:profitmax}. The optimality condition with respect to $M_{it}$, in logs, is given by \begin{equation}\label{eq:foc_cd} \ln P_{t}^Y+\beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\ln \beta_M(S_i)+[\beta_M(S_i)-1]m_{it} + \omega_{it}+ \ln \theta = \ln P_{t}^M , \end{equation} which can be transformed by subtracting the production function in \eqref{eq:prodfn_hicks_cd2} from it to obtain the following location-specific material share equation: \begin{equation}\label{eq:fst} v_{it} = \ln [\beta_M(S_i)\theta] - \eta_{it} , \end{equation} where $v_{it} \equiv \ln \left(P_{t}^M M_{it}\right)-\ln \left( P_{t}^Y Y_{it}\right)$ is the log nominal share of material costs in total revenue, which is observable in the data.

The material share equation in \eqref{eq:fst} is powerful in that it enables us to identify the unobservable material elasticity function $\beta_M(S_i)$ using the information about the log material share $v_{it}$. Specifically, we first identify a ``scaled'' material elasticity function $\beta_M(S_i)\times\theta$ using the moment condition $\mathbb{E}[\eta_{it} | \mathcal{I}_{it}] =\mathbb{E}\left[\eta_{it}|S_i\right]=0$, from where we have that \begin{equation}\label{eq:fst_ident_theta} \ln [\beta_M(S_i)\theta] = \mathbb{E}[v_{it}|S_i]. \end{equation} To identify the material elasticity function $\beta_M(S_i)$ net of the constant $\theta$, note that \begin{align}\label{eq:theta} \theta&\equiv \mathbb{E}\left[\exp\left\{ \eta_{it}\right\} |\ \mathcal{I}_{it}\right]=\mathbb{E}\left[\exp\left\{ \eta_{it}\right\} \right] =\mathbb{E}\left[\exp\left\{ \mathbb{E}[v_{it}|S_i]-v_{it}\right\} \right], \end{align} which allows us to isolate $\beta_M(S_i)$ via \begin{equation}\label{eq:fst_ident} \beta_M(S_i) = \exp\left\{ \mathbb{E}[v_{it}|S_i] \right\}\Big/ \mathbb{E}\left[\exp\left\{ \mathbb{E}[v_{it}|S_i]-v_{it}\right\} \right]. \end{equation} By having identified the material elasticity function $\beta_M(S_i)$, we have effectively pinned down the production technology in the dimension of its endogenous static input, thereby circumventing the \citet{gnr2013} critique. This is evident when \eqref{eq:prodfn_hicks_cd3} is rewritten as \begin{equation}\label{eq:prodfn_hicks_cd4} y_{it}^* = \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+ \rho_0(S_i)+ \rho_1(S_i) \omega_{it-1}+\rho_2(S_i)G_{it-1} + \zeta_{it}+ \eta_{it} , \end{equation} where $y_{it}^*\equiv y_{it} - \beta_M(S_i) m_{it}$ on the left-hand side is already identified/observable and, hence, the model in \eqref{eq:prodfn_hicks_cd4} now contains \textit{no} endogenous regressors that need instrumentation.

\textsl{Second step}.\textemdash To identify the rest of the production function, we proxy for latent $\omega_{it-1}$ using the known functional form of the conditional material demand function implied by the static first-order condition in \eqref{eq:foc_cd}, which we analytically invert for productivity.
Namely, using the inverted (log) material demand function $\omega_{it}=\ln[ P_{t}^M/P_{t}^Y]-\beta_K(S_i) k_{it}-\beta_L(S_i)l_{it}-\ln [\beta_M(S_i)\theta]+[1-\beta_M(S_i)]m_{it}$ to substitute for $\omega_{it-1}$ in \eqref{eq:prodfn_hicks_cd4}, we get \begin{align}\label{eq:sst} y_{it}^* &= \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+ \rho_0(S_i)+ \rho_1(S_i) \Big[\nu^*_{it-1}-\beta_K(S_i) k_{it-1}-\beta_L(S_i)l_{it-1}\Big]+\rho_2(S_i)G_{it-1} + \zeta_{it}+ \eta_{it} , \end{align} where $\nu^*_{it-1}=\ln[ P_{t-1}^M/P_{t-1}^Y]-\ln [\beta_M(S_i)\theta]+[1-\beta_M(S_i)]m_{it-1}$ is already identified/observable and predetermined with respect to $\zeta_{it}+ \eta_{it}$.\footnote{Following the convention in the literature \citep[e.g.,][]{op1996,lp2003,acf2015,dj2013,gnr2013}, we assume there is no measurement error in $m_{it}$. If the (log) material input is measured with error, due to reasons such as inventories, subcontracting and outsourcing, the error will affect both the first- and the second-step estimation. Adjusting the first step is less problematic if the measurement error is classical, since $m_{it}$ enters the dependent variable. However, the second-step equation \eqref{eq:sst} will then have a new endogeneity issue due to the measurement error. In such a case, additional identifying assumptions are often needed; see \citet{hu2020estimating} for an example.} All regressors in \eqref{eq:sst} are weakly exogenous, and this proxied model is identified based on the moment conditions: \begin{equation}\label{eq:sst_ident} \mathbb{E}[ \zeta_{it} + \eta_{it} |\ k_{it},l_{it},k_{it-1},l_{it-1},G_{it-1},\nu^*_{it-1}(m_{it-1}), S_{i} ] = 0. \end{equation}

With the production technology and the transitory shock ${\eta}_{it}$ successfully identified in the two previous steps, we can readily recover $\omega_{it}$ from \eqref{eq:prodfn_hicks_cd2} via $\omega_{it}=y_{it}- \beta_K(S_i)k_{it}-\beta_L(S_i)l_{it}-\beta_M(S_i)m_{it}-\eta_{it}$.

Our identification methodology is also robust to the \citet{acf2015} critique concerning the inability of structural proxy estimators to separably identify the additive production function and the productivity proxy. Such an issue normally arises in the wake of perfect functional dependence between the freely varying inputs appearing both inside the unknown production function and the productivity proxy. Our second-step equation \eqref{eq:sst} does not suffer from this problem because it contains no (endogenous) variable input on the right-hand side: the corresponding elasticity has already been identified from the material share equation in the first step.

\subsection{Semiparametric Estimation} \label{sec:estimation} Given the semiparametric varying-coefficient specifications adopted for both the production technology [in \eqref{eq:prodfn_hicks_cd}] and the productivity evolution [in \eqref{eq:productivity_hicks_lawsp}], we estimate the first- and second-step equations \eqref{eq:fst} and \eqref{eq:sst} via \textit{local} least squares, employing local-constant kernel fitting. Denote the unknown $\ln [\beta_M(S_i)\theta]$ as a nonparametric function $b_M(S_i)$. Under the assumption that the input elasticity functions are smooth and twice continuously differentiable in the neighborhood of $S_i=s$, the unknown $b_M(S_i)$ can be locally approximated around $s$ via $b_M(S_i) \approx b_M(s)$ at points $S_i$ close to $s$, $|S_i-s|=o(1)$.
Therefore, for locations $S_i$ in the neighborhood of $s$, we can approximate \eqref{eq:fst} by \begin{equation}\label{eq:fst_est} v_{it} \approx b_M(s) - \eta_{it} , \end{equation} with the corresponding local-constant kernel estimator of $\ln [\beta_M(s)\theta]$ given by \begin{equation}\label{eq:fst_est2_pre} \widehat{b}_M(s) = \left[\sum_i\sum_t \mathcal{K}_{{h}_1}(S_i,s)\right]^{-1}\sum_i\sum_t \mathcal{K}_{{h}_1}(S_i,s)v_{it}, \end{equation} where $\mathcal{K}_{h_1}(S_i,s)$ is a kernel that weights each observation on the basis of the proximity of its $S_i$ value to $s$. To avoid over-smoothing in dense ranges of the support of the data while under-smoothing in sparse tails, which a ``fixed'' bandwidth parameter is well-known to produce, we employ an ``adaptive'' bandwidth capable of adapting to the local distribution of the data. Specifically, to weight observations, we use an $h_1$-nearest-neighbor bandwidth $R_{h_1}(s)$ defined as the Euclidean distance between the fixed location $s$ and its $h_1$th nearest location among $\{S_i\}$, i.e., \begin{equation}\label{eq:knn-bw} R_{h_1}(s)=\lVert S_{(h_1)}-s\rVert, \end{equation} where $S_{(h_1)}$ is the $h_1$th nearest neighbor of $s$. Evidently, $R_{h_1}(s)$ is just the $h_1$th order statistic of the distances $\lVert S_i-s\rVert$. It is $s$-specific and, hence, adapts to the data distribution. Correspondingly, the kernel weight function is given by \begin{equation} \mathcal{K}_{h_1}(S_i,s)=\mathsf{K}\left(\frac{\lVert S_i-s\rVert}{R_{h_1}(s)}\right) , \end{equation} where $\mathsf{K}(\cdot)$ is a (non-negative) smooth kernel function such that $\int \mathsf{K}(\lVert u\rVert)du=1$; we use a second-order Gaussian kernel. The key parameter that controls the degree of smoothing in the first-step estimator \eqref{eq:fst_est2_pre} is the number of nearest neighbors (i.e., locations) $h_1$, which diverges to $\infty$ as $n\to \infty$ but slowly: $h_1/n\to 0$. We select the optimal $h_1$ using a data-driven cross-validation procedure. Also note that, despite the location $S_i$ being multivariate, the parameter $h_1$ is a scalar because it modulates a univariate quantity, namely the distance. Hence, the bandwidth $R_{h_1}(s)$ is also a scalar. That is, unlike in the case of more standard kernel fitting based on fixed bandwidths, when the data are weighted using the product of univariate kernels corresponding to each element in $S_i-s$, the adaptive kernel fitting weights data using a \textit{norm} of the vector $S_i-s$. For this reason, when employing nearest neighbor methods, the elements of the smoothing variables are typically rescaled so that they are all comparable because, when $S_i$ is multivariate, the nearest neighbor ordering is not scale-invariant. In our case, however, we do \textit{not} rescale the elements of $S_i$ (i.e., latitude and longitude) because they are already measured on the same scale and the (partial) distances therein have a concrete physical interpretation.

From \eqref{eq:fst_ident}, the first-step estimator of $\beta_M(s)$ is \begin{equation}\label{eq:fst_est2} \widehat{\beta}_M(s) = nT\exp\left\{ \widehat{b}_M(s) \right\}\Big/ \sum_i\sum_t\exp\left\{ \widehat{b}_M(S_i)-v_{it}\right\} , \end{equation} where the denominator is the sample analogue of $nT\,\mathbb{E}\left[\exp\left\{ \mathbb{E}[v_{it}|S_i]-v_{it}\right\}\right]$ evaluated at each observation's own location. We construct $\widehat{y}_{it}^*\equiv y_{it} - \widehat{\beta}_M(S_i) m_{it}$ and $\widehat{\nu}^*_{it-1}=\ln[ P_{t-1}^M/P_{t-1}^Y]-\ln [\widehat{\beta}_M(S_i)\theta]+[1-\widehat{\beta}_M(S_i)]m_{it-1}$ using the first-step local estimates of $\beta_M(S_i)$.
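A compact sketch of this first step, under the adaptive $h_1$-nearest-neighbor weighting described above, follows; the array shapes and the absence of distance ties are simplifying assumptions.
\begin{verbatim}
import numpy as np

def first_step(S, v, h1):
    """First-step local-constant estimator of beta_M(.).
    S : (N, 2) pooled firm locations; v : (N,) log material shares;
    h1: number of nearest neighbors.  Gaussian kernel."""
    def b_hat(s):                              # \hat b_M(s) = ln[beta_M(s) theta]
        d = np.linalg.norm(S - s, axis=1)
        R = np.sort(d)[h1 - 1]                 # adaptive bandwidth R_{h1}(s)
        w = np.exp(-0.5 * (d / R)**2)          # K(||S_i - s|| / R)
        return np.sum(w * v) / np.sum(w)

    b_obs = np.apply_along_axis(b_hat, 1, S)   # \hat b_M(S_i) at each observation
    theta_hat = np.mean(np.exp(b_obs - v))     # estimate of theta
    return lambda s: np.exp(b_hat(s)) / theta_hat
\end{verbatim}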
Analogous to the first-step estimation, we then locally approximate each unknown parameter function in \eqref{eq:sst} around $S_i=s$ via the local-constant approach. Therefore, for locations $S_i$ near $s$, we have \begin{align}\label{eq:sst_est} \widehat{y}_{it}^* &\approx \beta_K(s) k_{it}+\beta_L(s)l_{it}+ \rho_0(s)+ \rho_1(s) \Big[\widehat{\nu}^*_{it-1}-\beta_K(s) k_{it-1}-\beta_L(s)l_{it-1}\Big]+\rho_2(s)G_{it-1} + \zeta_{it}+ \eta_{it}. \end{align} Denoting all unknown parameters in \eqref{eq:sst_est} collectively as $\Theta(s)=[\beta_K(s),\beta_L(s),\rho_0(s),\rho_1(s),\rho_2(s)]'$, we estimate the second-step equation via locally weighted nonlinear least squares. The corresponding kernel estimator is \begin{align}\label{eq:sst_est2} \widehat{\Theta}(s) = \arg\min_{\Theta(s)}\ \sum_i\sum_t &\ \mathcal{K}_{{h}_2}(S_i,s)\Big( \widehat{y}_{it}^* - \beta_K(s) k_{it}-\beta_L(s)l_{it}\ - \notag \\ &\ \rho_0(s)- \rho_1(s) \Big[\widehat{\nu}^*_{it-1}-\beta_K(s) k_{it-1}-\beta_L(s)l_{it-1}\Big]-\rho_2(s)G_{it-1} \Big)^2, \end{align} where $h_2$ is the number of nearest neighbors of the fixed location $s$ in the second-step estimation. It diverges faster than the first-step smoothing parameter $h_1$ does, so that the first-step estimation has an asymptotically ignorable impact on the second step. Lastly, firm productivity is estimated as $\widehat{\omega}_{it}=y_{it}- \widehat{\beta}_K(S_i)k_{it}-\widehat{\beta}_L(S_i)l_{it}-\widehat{\beta}_M(S_i)m_{it}-\widehat{\eta}_{it}$ using the results from both steps. A schematic sketch of this second step is provided at the end of this section.

\medskip

\noindent\textbf{Finite-Sample Performance.} Before applying our proposed methodology to the data, we first study its performance in a small set of Monte Carlo simulations. The results are encouraging, and the simulation experiments show that our estimator recovers the true parameters well. As expected of a consistent estimator, the estimation becomes more stable as the sample size grows. For details, see Appendix \ref{sec:appex_sim}.

\medskip

\noindent\textbf{Inference.} Due to the multi-step nature of our estimator as well as the presence of nonparametric components, computation of the asymptotic variance of the estimators is not simple. For statistical inference, we therefore use the bootstrap. We approximate the sampling distributions of the estimators via a wild residual block bootstrap that takes into account the panel structure of the data, with all steps bootstrapped jointly owing to the sequential nature of our estimation procedure. The bootstrap algorithm is described in Appendix \ref{sec:appx_inference}.

\medskip

\noindent\textbf{Testing of Location Invariance.} Given that our semiparametric locationally varying production model nests a more traditional fixed-parameter specification, which implies locational invariance of the production function and the productivity evolution, as a special case, we can formally discriminate between the two models to see if the data support our more flexible modeling approach. We discuss this specification test in detail in Appendix \ref{sec:appx_test}.
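To fix ideas, here is a sketch of the second-step locally weighted nonlinear least squares in \eqref{eq:sst_est2} at a fixed location $s$; the \texttt{data} field names and the starting values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def second_step_at(s, d, h2):
    """Locally weighted NLS at location s; the dict `d` holds pooled
    arrays (assumed field names): ystar, k, l, k1, l1 (lagged inputs),
    G1 (lagged control), nu1 (nu*_{it-1}), S (locations)."""
    dist = np.linalg.norm(d["S"] - s, axis=1)
    R = np.sort(dist)[h2 - 1]                  # adaptive bandwidth R_{h2}(s)
    w = np.sqrt(np.exp(-0.5 * (dist / R)**2))  # sqrt of kernel weights

    def resid(theta):
        bK, bL, r0, r1, r2 = theta
        omega_lag = d["nu1"] - bK*d["k1"] - bL*d["l1"]   # proxied omega_{it-1}
        fit = bK*d["k"] + bL*d["l"] + r0 + r1*omega_lag + r2*d["G1"]
        return w * (d["ystar"] - fit)

    start = np.array([0.3, 0.3, 0.0, 0.5, 0.0])          # illustrative start
    return least_squares(resid, start).x       # [bK, bL, r0, r1, r2] at s
\end{verbatim}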
\section{Locational Productivity Differential Decomposition} \label{sec:decomposition_level} Since the production function can vary across space, a meaningful comparison of productivity for firms dispersed across space now requires that locational differences in technology be explicitly controlled for. That is, the productivity differential between two firms is no longer limited to the difference in their firm-specific total factor productivities $\omega_{it}$ (unless they both belong to the same location), because either one of the firms may have access to a more productive technology $F_{|S}(\cdot)$. Given that locational heterogeneity in production is the principal focus of our paper, in what follows we provide a procedure for measuring and decomposing firm productivity differentials across any two locations of choice.

Let $\mathcal{L}(s,t)$ represent the set of $n_{t}^s$ firms operating in location $s$ in year $t$. For each of these firms, the estimated Cobb-Douglas production function (net of random shocks) in logs is \begin{equation}\label{eq:prodfn_hicks_cd-s} \widehat{y}_{it}^s = \widehat{\beta}_K(s) k_{it}^s+\widehat{\beta}_L(s)l_{it}^s+\widehat{\beta}_M(s)m_{it}^s + \widehat{\omega}_{it}^s, \end{equation} where we have also explicitly indexed these firms' observable output/inputs as well as the estimated productivities by location. Averaging over these firms, we arrive at the ``mean'' production function for location $s$ in time $t$: \begin{equation}\label{eq:prodfn_hicks_cd-save} \overline{y}_{t}^s = \widehat{\beta}_K(s) \overline{k}_{t}^s+\widehat{\beta}_L(s)\overline{l}_{t}^s+\widehat{\beta}_M(s)\overline{m}_{t}^s + \overline{\omega}_{t}^s, \end{equation} where $\overline{y}_{t}^s= \tfrac{1}{n^s_t}\sum_i\widehat{y}_{it}^s\mathbbm{1}\{i\in \mathcal{L}(s,t)\}$, $\overline{x}_{t}^s=\tfrac{1}{n^s_t}\sum_i x_{it}^s\mathbbm{1}\{i\in \mathcal{L}(s,t)\}$ for $x\in \{k,l,m\}$, and $\overline{\omega}_{t}^s= \tfrac{1}{n^s_t}\sum_i\widehat{\omega}_{it}^s\mathbbm{1}\{i\in \mathcal{L}(s,t)\}$. Taking the difference between \eqref{eq:prodfn_hicks_cd-save} and the analogous mean production function for the benchmark location of interest $\kappa$ in the same year, we obtain the mean \textit{output} differential between these two locations (in logs): \begin{equation}\label{eq:prodfn_hicks_cd-skdif} \underbrace{\overline{y}_{t}^s - \overline{y}_{t}^{\kappa}}_{\Delta\overline{y}_{t}^{s,\kappa}}= \left[\widehat{\beta}_K(s) \overline{k}_{t}^s+\widehat{\beta}_L(s)\overline{l}_{t}^s +\widehat{\beta}_M(s)\overline{m}_{t}^s\right] - \left[\widehat{\beta}_K(\kappa) \overline{k}_{t}^{\kappa}+\widehat{\beta}_L(\kappa)\overline{l}_{t}^{\kappa} +\widehat{\beta}_M(\kappa)\overline{m}_{t}^{\kappa}\right] + \Big[ \overline{\omega}_{t}^s - \overline{\omega}_{t}^{\kappa}\Big] .
\end{equation}
To derive the mean \textit{productivity} differential (net of input differences) between these two locations, we add and subtract the $s$ location's production technology evaluated at the $\kappa$ location's inputs, i.e., \mbox{ $\left[\widehat{\beta}_K(s) \overline{k}_{t}^{\kappa}+\widehat{\beta}_L(s)\overline{l}_{t}^{\kappa} +\widehat{\beta}_M(s)\overline{m}_{t}^{\kappa}\right]$}, in \eqref{eq:prodfn_hicks_cd-skdif}:
\begin{align}\label{eq:prodfn_hicks_cd-decomp}
\Delta \overline{\text{PROD}}_{t}^{s,\kappa} &\equiv \Delta\overline{y}_{t}^{s,\kappa} - \widehat{\beta}_K(s) \Delta \overline{k}_{t}^{s,\kappa} -\widehat{\beta}_L(s)\Delta \overline{l}_{t}^{s,\kappa} -\widehat{\beta}_M(s)\Delta \overline{m}_{t}^{s,\kappa} \notag \\
& =\underbrace{\left[\widehat{\beta}_K(s)-\widehat{\beta}_K(\kappa)\right] \overline{k}_{t}^{\kappa}+ \left[\widehat{\beta}_L(s)-\widehat{\beta}_L(\kappa)\right] \overline{l}_{t}^{\kappa} + \left[\widehat{\beta}_M(s)-\widehat{\beta}_M(\kappa)\right] \overline{m}_{t}^{\kappa}}_{\Delta\overline{\text{TECH}}_{t}^{s,\kappa}} + \underbrace{\Big[ \overline{\omega}_{t}^s - \overline{\omega}_{t}^{\kappa}\Big]}_{\Delta\overline{\text{TFP}}_{t}^{s,\kappa}},
\end{align}
where $\Delta\overline{x}_{t}^{s,\kappa}=\overline{x}_{t}^{s}-\overline{x}_{t}^{\kappa}$ for $x\in \{k,l,m\}$.

Equation \eqref{eq:prodfn_hicks_cd-decomp} measures the mean productivity differential across space and provides a \textit{counterfactual} decomposition thereof. By utilizing the counterfactual output that, given its location-specific technology, the average firm in location $s$ would have produced using the mean inputs employed by the firms in location $\kappa$ in year $t$ $\big[\widehat{\beta}_K(s) \overline{k}_{t}^{\kappa}+\widehat{\beta}_L(s)\overline{l}_{t}^{\kappa} +\widehat{\beta}_M(s)\overline{m}_{t}^{\kappa}\big]$, we are able to measure the locational differential in the mean productivity of firms in the locations $s$ and $\kappa$ that is \textit{un}explained by their different input usage: $\Delta \overline{\text{PROD}}_{t}^{s,\kappa}$. More importantly, we can then decompose this locational differential in the total productivity into the contribution attributable to the difference in production technologies $\Delta\overline{\text{TECH}}_{t}^{s,\kappa}$ and to the difference in the average total-factor {operations efficiencies} $\Delta\overline{\text{TFP}}_{t}^{s,\kappa}$.

The locational productivity differential decomposition in \eqref{eq:prodfn_hicks_cd-decomp} is time-varying, but should one be interested in a scalar measure of locational heterogeneity for the entire sample period, time-specific averages can be replaced with the ``grand'' averages computed by pooling over all time periods.

\section{Empirical Application}
\label{sec:application}

Using our proposed model and estimation methodology, we explore the locationally heterogeneous production technology among manufacturers in the Chinese chemical industry. We report location-specific elasticity and productivity estimates for these firms and then decompose differences in their productivity across space to study whether the latter are mainly driven by the use of different production technologies or by the underlying total factor productivity differentials.

\subsection{Data}
\label{sec:data}

We use the data from \citet{baltagietal2016}. The dataset is a panel of $n=12,490$ manufacturers of chemicals continuously observed over the 2004--2006 period ($T=3$).
The industry includes manufacturing of basic chemical materials (inorganic acids and bases, inorganic salts and organic raw chemical materials), fertilizers, pesticides, paints, coatings and adhesives, synthetic materials (plastic, synthetic resin and fiber) as well as daily chemical products (soap and cleaning compounds). The original source of these firm-level data is the Chinese Industrial Enterprises Database survey conducted by China's National Bureau of Statistics (NBS), which covers all state-owned firms and all non-state-owned firms with sales above 5 million Yuan (about \$0.6 million). \citet{baltagietal2016} have geocoded the location of each firm at the zipcode level in terms of the longitude and latitude (the ``$S$'' variables) using their postcode information in the dataset. The coordinates are constructed for the location of each firm's headquarters and are time-invariant. By focusing on continuously operating firms, we mitigate the potential impact of spatial sorting (as well as attrition due to non-survival) on the estimation results and treat the firm location as fixed (exogenous). The total number of observations is 37,470.

Figure \ref{fig:number} shows the spatial distribution of firms in our dataset on a map of mainland China (we omit areas in the West with no data in our sample). The majority are located on the East Coast and in the Southeast of China, especially around the Yangtze River Delta, which generally comprises Shanghai and the surrounding areas, the southern Jiangsu province and the northern Zhejiang province.

\begin{table}[t]
\centering
\caption{Data Summary Statistics}\label{tab:datasummary}
\footnotesize
\makebox[\linewidth]{
\begin{tabular}{lrrrr}
\toprule[1pt]
Variables& Mean & 1st Qu. & Median & 3rd Qu.\\
\midrule
&\multicolumn{4}{c}{\it \textemdash Production Function Variables\textemdash} \\[2pt]
Output & 86,381.98 & 11,021.09 & 23,489.53 & 59,483.71 \\
Capital & 35,882.40 & 1,951.47 & 5,319.28 & 17,431.35 \\
Labor & 199.07 & 43.00 & 80.00 & 178.00 \\
Materials & 48,487.82 & 5,896.49 & 12,798.35 & 33,063.81 \\[2pt]
&\multicolumn{4}{c}{\it \textemdash Productivity Controls\textemdash} \\[2pt]
Skilled Labor Share & 0.174 & 0.042 & 0.111 & 0.242 \\
Foreign Equity Share & 0.140 & 0.000 & 0.000 & 0.000 \\
Exporter & 0.237 & & & \\
State-Owned & 0.051 & & & \\[2pt]
&\multicolumn{4}{c}{\it \textemdash Location Variables\textemdash} \\[2pt]
Longitude & 2.041 & 1.984 & 2.068 & 2.102 \\
Latitude & 0.557 & 0.504 & 0.547 & 0.632 \\
\midrule
\multicolumn{5}{p{9.3cm}}{\scriptsize Output, capital and materials are in 1,000s of 1998 RMB. Labor is measured in the number of employees. The skilled labor share and foreign equity share are unit-free proportions. The exporter and state-owned variables are binary indicators. The location coordinates are in radians.} \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}

The key production variables are defined as follows. Output ($Y$) is measured using sales. The labor input ($L$) is measured by the number of employees. Capital stock ($K$) is the net fixed assets for production and operation, and the materials ($M$) are defined as the expenditure on direct materials. Output, capital and materials are deflated to the 1998 values using the producer price index, the price index for investment in fixed assets and the purchasing price index for industrial inputs, respectively, where the price indices are obtained from the NBS. The unit of monetary values is thousands of RMB (Chinese Yuan).
We include four productivity-modifying variables in the evolution process of firm productivity $\omega_{it}$: the share of high-skilled workers ($G_1$), which is defined as the fraction of workers with a university or comparable education and is time-invariant because the data on workers' education level are only available for 2004; the foreign equity share ($G_2$), which is measured by the proportion of equity provided by foreign investors; a binary export status indicator ($G_3$), which takes value one if the firm is an exporter and zero otherwise; and a binary state/public ownership status indicator ($G_4$), which takes value one if the firm is state-owned and zero otherwise.

Table \ref{tab:datasummary} shows the summary statistics, including the mean, 1st quartile, median and 3rd quartile for the variables. For the production-function variables, the mean values are significantly larger than their medians, which suggests that their distributions are skewed to the right. Among firms in the chemical industry, 23.7\% are exporters and 5.1\% are state-owned. Most firms do not have foreign investors, and the average ratio of foreign to total equity is 0.14. On average, 17.4\% of employees in the industry have a college degree or equivalent.

\subsection{Estimation Results}
\label{sec:results}

In order to estimate the locationally varying (semiparametric) production function and firm productivity process in \eqref{eq:prodfn_hicks_cd}--\eqref{eq:productivity_hicks_lawsp}, we use the data-driven leave-one-location-out cross-validation method to choose the optimal {number of nearest neighboring locations in each step of the estimation ($h_1$ and $h_2$) to smooth over ``contextual'' location variables} $S_i$ inside the unknown functional coefficients. {These smoothing parameters regulate} spatial weighting of neighboring firms in kernel {fitting} and, as noted earlier, by selecting them via a data-driven procedure, we avoid the need to rely on \textit{ad hoc} specifications of both the spatial weights and the radii defining the extent of {neighborhood influences}. The optimal {$h_1$ and $h_2$ values are 520 and 340 firm-years in the first- and second-step estimation, respectively. On average across all $s$, the corresponding adaptive bandwidths are 0.0171 and 0.0169 radians.} These bandwidth values are reasonable, given our sample size and the standard deviations of the longitude and latitude,\footnote{For reference, the sample standard deviations for the longitude and latitude are respectively 0.0889 and 0.0941 radians.} and, evidently, are \textit{not} large enough to ``smooth out'' the firm location {and thus imply location invariance/homogeneity. In fact, we can argue this more formally if kernel-smoothing is done using \textit{fixed} bandwidths so that we can rely on the theoretical results by \citet{halletal2007}, whereby local-constant kernel methods can remove irrelevant regressors via data-driven over-smoothing (i.e., by selecting large bandwidths). When we re-estimate our locationally-varying model in this manner, the optimal fixed bandwidths for the longitude and latitude in the first-step estimation are 0.009 and 0.010 radians, respectively; the corresponding second-step bandwidths are 0.024 and 0.023 radians. Just like in the case of adaptive bandwidths, these bandwidth values are fairly small relative to the variation in the data, providing strong evidence in support of the overall relevance of geographic location for firm production (i.e., against location invariance).
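To illustrate how such data-driven smoothing-parameter selection can be carried out, the following is a schematic sketch of leave-one-location-out cross-validation in Python; \texttt{fit\_local} and \texttt{predict\_local} are hypothetical stand-ins for the local estimation and prediction steps at a candidate smoothing parameter $h$, and the squared-error loss is a simplification for illustration.
\begin{verbatim}
import numpy as np

def loo_cv_score(h, S, y, X, fit_local, predict_local):
    # Leave-one-location-out CV: hold out all firm-years at each location s,
    # estimate using the remaining observations, and accumulate the squared
    # out-of-sample prediction errors for the held-out location
    sse = 0.0
    for s in np.unique(S, axis=0):
        hold = np.all(S == s, axis=1)
        theta_s = fit_local(S[~hold], y[~hold], X[~hold], s, h)
        sse += np.sum((y[hold] - predict_local(theta_s, X[hold])) ** 2)
    return sse

# The optimal h minimizes loo_cv_score over a grid of candidate values, e.g.:
# h_star = min(h_grid, key=lambda h: loo_cv_score(h, S, y, X, fit, pred))
\end{verbatim}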
Our location-varying formulation of the production technology and productivity is also formally supported by the \citet{ullah1985} specification test described in Appendix \ref{sec:appx_test}. Using cross-validated fixed bandwidths,} the bootstrap $p$-value is {0.001.} At any conventional significance level, our locationally heterogeneous production model is confidently preferred to a location-invariant formulation.

{In what follows, we discuss our semiparametric results obtained using adaptive bandwidths. For inference, we use the bias-corrected bootstrap percentile intervals as described in Appendix \ref{sec:appx_inference}. The number of bootstrap replications is set to $B=1,000$.}

\paragraph{Production Function.}

We first report production-function estimates from our main model in which the production technology is locationally heterogeneous. We then compare these estimates with those obtained from the more conventional, {location-invariant} model that \textit{a priori} assumes a common production technology for all firms. {The latter ``global'' formulation of the production function postulates constancy of the production relationship over space. This model is therefore fully parametric (with constant coefficients) and a special case of our locationally-varying model when $S_i$ is fixed across all $i$. Its estimation is straightforward and follows directly from \eqref{eq:fst_est2_pre}--\eqref{eq:sst_est} by letting the adaptive bandwidths in both steps diverge to $\infty$ which, in effect, obviates the need to locally weight the data because all kernels will be the same (for details, see Appendix \ref{sec:appx_test}).}\footnote{{Following a suggestion provided by a referee, we also estimate the location-invariant model {with location fixed effects added to the production function during the estimation.} We find that the results change little when these location effects are included and therefore do not report them.}}

\begin{table}[p]
\centering
\caption{Input Elasticity Estimates}\label{tab:coef}
\footnotesize
\makebox[\linewidth]{
\begin{tabular}{lcccc|c}
\toprule[1pt]
& \multicolumn{4}{c}{\it Locationally Varying} & \it Location-Invariant \\
& Mean & 1st Qu. & Median & 3rd Qu.& Point Estimate\\
\midrule
Capital & 0.112 & 0.095 & 0.115 & 0.128 & 0.130 \\
& (0.104, 0.130) & (0.083, 0.116) & (0.110, 0.130) & (0.119, 0.147) & (0.118, 0.141) \\
Labor & 0.303 & 0.272 & 0.293 & 0.342 & 0.299 \\
& (0.285, 0.308) & (0.248, 0.284) & (0.278, 0.293) & (0.313, 0.356) & (0.280, 0.318) \\
Materials & 0.480 & 0.452 & 0.481 & 0.503 & 0.495 \\
& (0.466, 0.501) & (0.414, 0.467) & (0.437, 0.502) & (0.456, 0.524) & (0.460, 0.519) \\
\midrule
\multicolumn{6}{p{13cm}}{\scriptsize The left panel summarizes point estimates of $\beta_{\kappa}(S_i)\ \forall\ \kappa\in\{K, L, M\}$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. The right panel reports their counterparts from a fixed-coefficient location-invariant model. } \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}

\begin{figure}[p]
\centering
\includegraphics[scale=0.36]{capital.jpg}\includegraphics[scale=0.36]{labor.jpg}
\includegraphics[scale=0.36]{material.jpg}\includegraphics[scale=0.36]{RTS.jpg}
\caption{Input Elasticity Estimates \\ {\small (Notes: Vertical lines correspond to location-invariant estimates)}}
\label{fig:coef_elas}
\end{figure}

Since our model has location-specific input elasticities, there is a distribution of them (over space), and Table \ref{tab:coef} summarizes their point estimates.
The table also reports the elasticity estimates from the alternative, location-invariant model. The corresponding two-sided 95\% bias-corrected confidence intervals for these statistics are reported in parentheses. Based on our model, the mean (median) capital, labor and material elasticity estimates are {0.112, 0.303 and 0.480 (0.115, 0.293 and 0.481),} respectively. Importantly, these location-specific elasticities show significant variation. {For the capital and labor inputs, the first quartiles are significantly different from the third quartiles.} Within the inter-quartile interval of their point estimates, the elasticities of capital, labor and materials increase by {0.033, 0.070 and 0.051, respectively, which correspond to 35\%, 26\% and 11\% changes.} In comparison, the elasticity estimates from the location-invariant production function with fixed coefficients are all larger than the corresponding {median} estimates from our model and fall in between the second and third quartiles of our locationally-varying point estimates.

Figure \ref{fig:coef_elas} provides a visualization of the non-negligible heterogeneity in the chemicals production technology across different locations in China, which the traditional location-invariant model assumes away. The figure plots histograms of the estimated location-specific input elasticities (and the returns to scale) with the location-invariant counterpart estimates depicted by vertical lines. Consistent with the results in Table \ref{tab:coef}, all distributions show relatively wide dispersion, and the locationally homogeneous {model} is apparently unable to provide a reasonable representation of production technology across different regions.

\begin{table}[t]
\centering
\caption{Locationally Varying Returns to Scale Estimates}\label{tab:RTC}
\footnotesize
\makebox[\linewidth]{
\begin{tabular}{lcccc|cc}
\toprule[1pt]
& Mean & 1st Qu. & Median & 3rd Qu.& $= 1$ &$<1$ \\
\midrule
RTS & 0.895 & 0.875 & 0.903 & 0.929 & 21.6\% & 82.3\% \\
& (0.820, 0.931) & (0.801, 0.912) & (0.827, 0.942) & (0.855, 0.968) & & \\
\midrule
\multicolumn{7}{p{11.8cm}}{\scriptsize The left panel summarizes point estimates of $\sum_{\kappa}\beta_{\kappa}(S_i)$ with $\kappa\in\{K, L, M\}$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. The counterpart estimate of the returns to scale from a fixed-coefficient location-invariant model is 0.924 (0.865, 0.969). The right panel reports the shares of locations in which location-specific point estimates are (\textit{i}) not significantly different from 1 (constant returns to scale) and (\textit{ii}) statistically less than 1 (decreasing returns to scale). The former classification is based on a two-sided test, the latter is on a one-sided test.} \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}

Table \ref{tab:RTC} provides summary statistics of the estimated returns to scale (RTS) from our locationally varying production function (also see the bottom-right plot in Figure \ref{fig:coef_elas}). The mean RTS is {0.895, and the median is 0.903, with the inter-quartile range being 0.054.} The right panel of Table \ref{tab:RTC} reports the fraction of locations in which the Chinese manufacturers of chemicals exhibit constant or decreasing returns to scale. This classification is based on the RTS point estimate being statistically equal to or less than one, respectively, at the 5\% significance level. The ``$=1$'' classification is based on a two-sided test, whereas the ``$<1$'' test is one-sided.
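To make the classification rule concrete, a minimal sketch follows; \texttt{rts\_boot} is assumed to be a $B\times(\text{number of locations})$ array of bootstrap replications of the RTS estimates, and the simple percentile intervals used here are a simplified stand-in for the bias-corrected intervals employed in the paper.
\begin{verbatim}
import numpy as np

def classify_rts(rts_boot, alpha=0.05):
    # Two-sided test of H0: RTS = 1 -- classified as "constant returns"
    # if the (1 - alpha) percentile interval covers unity
    lo = np.percentile(rts_boot, 100 * alpha / 2, axis=0)
    hi = np.percentile(rts_boot, 100 * (1 - alpha / 2), axis=0)
    crs = (lo <= 1.0) & (1.0 <= hi)
    # One-sided test of H0: RTS >= 1 vs H1: RTS < 1 -- classified as
    # "decreasing returns" if the upper (1 - alpha) bound lies below unity
    drs = np.percentile(rts_boot, 100 * (1 - alpha), axis=0) < 1.0
    return crs.mean(), drs.mean()   # shares of locations in each category
\end{verbatim}
Note that the two shares need not sum to one because they come from different tests.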
In most locations in China {(82.3\%)}, the production technologies of the chemicals firms exhibit \textit{dis}economies of scale, but {21.6\%} of regions show evidence of constant returns to scale (i.e., scale efficiency).

\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{map.Returns.to.Scale.jpg}
\caption{Spatial Distribution of Returns to Scale Estimates \\ {\small (Notes: The color shade cutoffs correspond to the first, second (median) and third quartiles)} }
\label{fig:rts}
\end{figure}

To further explore the locational heterogeneity in the production technology for chemicals in China, we plot the spatial distribution of the RTS estimates in the country in Figure \ref{fig:rts}. We find that the firms {with the largest RTS are mainly located in the Southeast Coast provinces and some parts of the West and Northeast China.} The area near Beijing also exhibits larger RTS. There are a few possible explanations for such a geographic distribution of the returns to scale. As noted earlier, spillovers and agglomeration have positive effects on the marginal productivity of inputs, which typically take the form of scale effects, and they may explain the high RTS on the Southeast Coast and in the Beijing area. Locality-specific resources, culture and policies can also facilitate firms' production processes. For example, the rich endowment of raw materials such as coal, phosphate rock and sulfur makes provinces such as Guizhou, Yunnan and Qinghai among the largest fertilizer production zones in China. Furthermore, RTS is also related to the life cycle of a firm. Usually, it is the small, young and fast-growing firms that enjoy higher RTS, whereas the more mature firms that have grown bigger have transitioned to the low-RTS scale. This may explain the prevalence of higher-RTS firms in the West and Northeast of China.

\paragraph{Productivity Process.}

We now analyze our semiparametric estimates of the firm productivity process in \eqref{eq:productivity_hicks_lawsp}. Table \ref{tab:prod.coef} summarizes point estimates of the location-specific marginal effects of productivity determinants in the evolution process of $\omega_{it}$, with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. In the last column of the left panel, for each productivity-enhancing control $G_{it}$, we also report the share of locations in which location-specific point estimates are statistically positive (at a 5\% significance level) as inferred via a one-sided test.

\begin{table}[p]
\centering
\caption{Productivity Process Coefficient Estimates}\label{tab:prod.coef}
\footnotesize
\makebox[\linewidth]{
\begin{tabular}{lccccc|c}
\toprule[1pt]
& \multicolumn{5}{c}{\it Locationally Varying} & \it Location-Invariant \\
Variables& Mean & 1st Qu.
& Median & 3rd Qu.& $>0$ & Point Estimate\\
\midrule
Lagged Productivity & 0.576 & 0.518 & 0.597 & 0.641 & 99.9\% & 0.497 \\
& (0.540, 0.591) & (0.469, 0.541) & (0.553, 0.614) & (0.580, 0.665) & & (0.455, 0.530) \\
Skilled Labor Share & 0.387 & 0.287 & 0.419 & 0.500 & 85.7\% & 0.387 \\
& (0.346, 0.395) & (0.241, 0.309) & (0.345, 0.459) & (0.471, 0.493) & & (0.345, 0.425) \\
Foreign Equity Share & 0.054 & --0.001 & 0.062 & 0.103 & 47.7\% & 0.056 \\
& (0.006, 0.074) & (--0.034, 0.066) & (0.033, 0.069) & (0.099, 0.099) & & (0.036, 0.075) \\
Exporter & --0.001 & --0.032 & --0.005 & 0.038 & 24.0\% & 0.006 \\
& (--0.011, 0.018) & (--0.041, --0.016) & (--0.012, 0.013) & (0.025, 0.067) & & (--0.008, 0.018) \\
State-Owned & 0.005 & --0.052 & 0.007 & 0.073 & 29.6\% & --0.043 \\
& (--0.021, 0.010) & (--0.101, --0.014) & (--0.028, 0.025) & (0.062, 0.076) & & (--0.072, --0.009) \\
\midrule
\multicolumn{7}{p{16.5cm}}{\scriptsize The left panel summarizes point estimates of $\rho_j(S_i)\ \forall\ j=1,\dots,\dim(G)$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. Reported is also a share of locations in which location-specific point estimates are statistically positive as inferred via a one-sided test. The right panel reports the counterparts from a fixed-coefficient location-invariant model.} \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}

\begin{figure}[p]
\centering
\includegraphics[scale=0.33]{lag_productivity.jpg}
\includegraphics[scale=0.33]{lag_share_of_skilled_labor.jpg}\includegraphics[scale=0.33]{lag_capital_ratio.jpg}
\includegraphics[scale=0.33]{lag_export.jpg}\includegraphics[scale=0.33]{lag_state-owned.jpg}
\caption{Productivity Process Coefficient Estimates \\ {\small (Notes: Vertical lines correspond to location-invariant estimates)}}
\label{fig:coef_prod}
\end{figure}

The autoregressive coefficient on the lagged productivity, which measures the persistence of $\omega_{it}$, is {0.576 at the mean and 0.597 at the median, with the quartile statistics varying from 0.518 to 0.641.} It is significantly positive for firms in {virtually} all locations. For firms in most locations {(85.7\%)}, skilled labor has a large and significantly positive effect on productivity: a percentage point increase in the skilled labor share is associated with an improvement in the next period's firm productivity by about 0.4\%, on average. Point estimates of the foreign ownership effect are positive {in the majority of locations}, but firms in only about half the locations benefit from a statistically positive productivity-boosting effect of the inbound foreign direct investment, with the average magnitude only about {one-seventh} of that attributable to hiring more skilled labor. In line with the empirical evidence reported for China's manufacturing in the literature \citep[see][and references therein]{malikovetal2020}, firms in most regions show insignificant {(and negative)} effects of the export status on productivity. The ``learning by exporting'' effects are very limited and statistically positive in only {a quarter} of locations. Interestingly, we find that state/public ownership is a significantly positive contributor to the improvements in firm productivity in about a third of the locations in which the Chinese chemicals manufacturing firms operate.
This may be because the less productive state firms exited the market during the market-oriented transition in the late 1990s and early 2000s (prior to our sample period), and the remaining state-owned firms are larger and more productive \citep[also see][]{hsiehsong2015,zhaoqiankumbhakar2020}. Another potential reason may be that state ownership could have brought non-trivial financing benefits to these firms, which otherwise were usually financially constrained owing to the under-developed financial market in China during that period.

The far right panel of Table \ref{tab:prod.coef} reports productivity effects of the $G_{it}$ controls estimated using the location-invariant model. Note that, under the assumption of location-invariant production, the evolution process of $\omega_{it}$ becomes a parametric linear model, and there is only one point estimate of each fixed marginal effect for all firms. Comparing these estimates with the {median} estimates from our model, the location-invariant marginal effects tend to be smaller. While the persistence coefficient as well as the fixed coefficients on the skilled labor and foreign equity shares are positive and statistically significant, the location-invariant estimate of the state-ownership effect on productivity is significantly negative (for all firms, by design). Together with the location-invariant model's tendency to underestimate these effects, this underscores the importance of allowing sufficient flexibility in modeling heterogeneity across firms (across different locations, in our case) besides the usual Hicks-neutral TFP.

The contrast between the two models is even more apparent in Figure \ref{fig:coef_prod}, which plots the distributions of estimated marginal effects of the productivity-enhancing controls. As before, the location-invariant counterparts are depicted by vertical lines. The distribution of each productivity modifier spans a relatively wide range, and the corresponding location-invariant estimates are evidently not good representatives for the {centrality} of these distributions. For example, the productivity-boosting effect of the firm's skilled labor roughly varies between {0.06 and 0.61\% per percentage point increase in the skilled labor share, depending on the location.} The distribution of this marginal effect across locations is somewhat left-skewed, and the corresponding location-invariant effect estimate evidently does not measure the central tendency of these locationally-varying effects well. Similar observations can be made about other varying coefficients in the productivity process.

\paragraph{Productivity Decomposition.}

We now examine the average productivity differentials for firms in different regions. To this end, we perform the locational decomposition proposed in Section \ref{sec:decomposition_level} to identify the sources of production differences that cannot be explained by input usage. Recall that, by our decomposition, the locational differential in the mean total productivity ($\Delta \overline{\text{PROD}}^{s,\kappa}_t$) accounts for the cross-regional variation in both the input elasticities ($\Delta\overline{\text{TECH}}_{t}^{s,\kappa}$) and the total factor productivity ($\Delta\overline{\text{TFP}}_{t}^{s,\kappa}$). It is therefore more inclusive than the conventional analyses that rely on fitting a common production technology for all firms regardless of their locations and thus confine cross-firm heterogeneity to differences in $\omega_{it}$ only.
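As a computational illustration of \eqref{eq:prodfn_hicks_cd-decomp}, a minimal sketch follows; \texttt{beta}, \texttt{x\_bar} and \texttt{omega\_bar} are hypothetical containers holding, for each location, the estimated elasticity vector $[\widehat{\beta}_K,\widehat{\beta}_L,\widehat{\beta}_M]'$, the mean (log) input vector and the mean estimated productivity, pooled over time.
\begin{verbatim}
import numpy as np

def decompose(s, kappa, beta, x_bar, omega_bar):
    # TECH: part of the differential due to different technologies,
    # evaluated at the benchmark location's mean inputs
    tech = (np.asarray(beta[s]) - np.asarray(beta[kappa])) @ np.asarray(x_bar[kappa])
    # TFP: part due to different average total factor productivities
    tfp = omega_bar[s] - omega_bar[kappa]
    # PROD = TECH + TFP
    return tech + tfp, tech, tfp
\end{verbatim}
With dictionaries keyed by location, \texttt{decompose(s, kappa, ...)} returns the triple $(\Delta\overline{\text{PROD}}^{s,\kappa}, \Delta\overline{\text{TECH}}^{s,\kappa}, \Delta\overline{\text{TFP}}^{s,\kappa})$ for any pair of locations.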
\begin{table}[t]
\centering
\caption{Locational Productivity Differential Decomposition}\label{tab:decom}
\footnotesize
\makebox[\linewidth]{
\begin{tabular}{lcccc}
\toprule[1pt]
Components& Mean & 1st Qu. & Median & 3rd Qu.\\
\midrule
& \multicolumn{4}{c}{\it Locationally Varying Model} \\
$\Delta\overline{\text{TECH}}^{s,\kappa}$ & 1.292 & 1.135 & 1.331 & 1.521 \\
$\Delta\overline{\text{TFP}}^{s,\kappa}$ & 0.574 & 0.203 & 0.571 & 0.893 \\
$\Delta \overline{\text{PROD}}^{s,\kappa}$ & 1.866 & 1.652 & 1.869 & 2.086 \\
\midrule
& \multicolumn{4}{c}{\it Location-Invariant Model} \\
$\Delta \overline{\text{PROD}}^{s,\kappa}$ & 1.797 & 1.589 & 1.816 & 2.040 \\
\midrule
\multicolumn{5}{p{7cm}}{\scriptsize The top panel summarizes point estimates of the locational mean productivity differential $\Delta \overline{\text{PROD}}^{s,\kappa}=\Delta\overline{\text{TECH}}^{s,\kappa}+ \Delta\overline{\text{TFP}}^{s,\kappa}$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. The bottom panel reports the counterparts from a fixed-coefficient location-invariant model for which, by construction, $\Delta \overline{\text{PROD}}^{s,\kappa}= \Delta\overline{\text{TFP}}^{s,\kappa}$ with $\Delta\overline{\text{TECH}}^{s,\kappa}=0$. In both cases, the decomposition is pooled for the entire sample period and the benchmark/reference location $\kappa$ is the one with the smallest mean production: $\kappa=\arg\min_{s} \overline{y}^{s}$.} \\
\bottomrule[1pt]
\end{tabular}
}
\end{table}

Table \ref{tab:decom} presents the decomposition results (across locations $s$) following \eqref{eq:prodfn_hicks_cd-decomp}. Because we have just three years of data, we perform the decomposition by pooling over the entire sample period. Thus, reported are the average decomposition results across 2004--2006. Also note that, for a fixed benchmark location $\kappa$, the decomposition is done for each $s$-location separately. For the benchmark/reference location $\kappa$, we choose the zipcode with the smallest mean production, i.e., $\kappa=\arg\min_{s} \overline{y}^{s}$, where $\overline{y}^{s}$ is defined as the time average of \eqref{eq:prodfn_hicks_cd-save}.\footnote{Obviously, the choice of a reference location is inconsequential because its role is effectively that of a normalization.} Therefore, the numbers ($\times 100\%$) in Table \ref{tab:decom} can be interpreted as the percentage differences between the chemicals manufacturers operating in various locations ($s$) versus those from the least-production-scale region $(\kappa)$ in China. Because the reference location is fixed, the results are comparable across $s$.

Based on our estimates, the mean productivity differential is {1.866,} which means that, compared to the location with the smallest scale of chemicals production, other locations are, on average, {187\%} more productive (or more effective in the input usage). {The inter-quartile range of the average productivity differential spans from 1.652 to 2.086.
Economically, these differences are large: firms that are located at the third quartile of the locational productivity distribution are about 43\% more productive than firms at the first quartile.} When we decompose the productivity differential into the technology and TFP differentials, on average, {$\Delta\overline{\text{TECH}}^{s,\kappa}$ is 2.3 times as large as $\Delta\overline{\text{TFP}}^{s,\kappa}$ and accounts for about 69\%} of the total productivity differences across locations.\footnote{That is, the ratio of $\Delta\overline{\text{TECH}}^{s,\kappa}$ to $\Delta\overline{\text{PROD}}^{s,\kappa}$ is 0.69.} This suggests that the cross-location {technological} heterogeneity in China's chemicals industry explains most of the productivity differential and that the regional {TFP differences are \textit{relatively} more modest.}

Table \ref{tab:decom} also summarizes the locational productivity differential estimates from the standard location-invariant model. Given that this model assumes fixed coefficients (same technology for all firms), we cannot perform a decomposition here, and all cross-location variation in productivity is \textit{a priori} attributed to TFP by design. Compared with our locationally-varying model, this model {yields similar} total productivity differentials across regions but, due to its inability to recognize technological differences, it {grossly} over-estimates cross-location differences in TFP.

\begin{figure}[t]
\centering
\includegraphics[scale=0.29]{map.Difference.in.Tchnologies.jpg}\includegraphics[scale=0.29]{map.Difference.in.Productivities.jpg}
\caption{Locational Productivity Differential Decomposition Estimates Across Space \\ {\small (Notes: The color shade cutoffs correspond to the first, second (median) and third quartiles)} }
\label{fig:decomposition_map}
\end{figure}

To explore the spatial heterogeneity in the decomposition components, we plot the spatial distributions of $\Delta\overline{\text{TECH}}^{s,\kappa}$, $\Delta\overline{\text{TFP}}^{s,\kappa}$ and $\Delta \overline{\text{PROD}}^{s,\kappa}$ on the map in Figure \ref{fig:decomposition_map}. The spatial distribution of $\Delta\overline{\text{TECH}}^{s,\kappa}$ aligns {remarkably well} with that of RTS in Figure \ref{fig:rts}. Noticeably, the regions of agglomeration in the chemicals industry (see Figure \ref{fig:number}) tend to demonstrate large technology differentials. In contrast, the spatial distribution of $\Delta\overline{\text{TFP}}^{s,\kappa}$ shows quite a different pattern, whereby the locations of large TFP (differentials) are less concentrated. Unlike with the $\Delta\overline{\text{TECH}}^{s,\kappa}$ map, the dark-shaded regions on the $\Delta\overline{\text{TFP}}^{s,\kappa}$ map are widely dispersed and show no clear overlap with the main agglomeration regions in the industry. The comparison between these two maps suggests that, at least for the Chinese chemicals manufacturing firms, the widely-documented agglomeration effects on firm productivity are associated more with the scale effects of production technology than with improvements in overall TFP.
That is, by locating closer to other firms in the same industry, it may be easier for a firm to pick up production technologies and know-how that technologically improve the productiveness of inputs and thus expand the input requirement set corresponding to the firm's output level\footnote{And more generally, shifting the \textit{family} of the firm's isoquants corresponding to a fixed level of $\omega_{it}$ toward the origin.} \textit{given} its total factor productivity. Instead, agglomeration effects that increase the effectiveness of transforming all factors into output via the available technology (by adopting better business practices or management strategies) may be less likely to spill over among the Chinese manufacturers of chemicals. Importantly, if we \textit{a priori} assumed a fixed-coefficient production function common to all firms, the technological effects of agglomeration (via input elasticities) would be wrongly attributed to TFP differentials.

\section{Concluding Remarks}
\label{sec:conclusion}

Although it is widely documented {in the operations management literature} that the firm's location matters for its performance, few empirical studies {of operations efficiency} explicitly control for it. {This paper fills this gap by providing a semiparametric methodology for the} identification of production functions in which locational factors have heterogeneous effects on the firm's production technology and productivity evolution. {Our approach is novel in that we explicitly model spatial variation in parameters in the production-function estimation.} We generalize the popular Cobb-Douglas production function in a semiparametric fashion by writing the input elasticities and productivity parameters as unknown functions of the firm's {geographic} location. In doing so, not only do we render the production {technology} location-specific, but we also accommodate {neighborhood influences on firm operations}, with the strength thereof depending on the distance between {firms. Importantly, this enables us to examine the role of cross-location differences in explaining the variation in operational productivity among firms.
The proposed model is superior to the alternative SAR-type production-function formulations because it (i) explicitly estimates the locational variation in production functions, (ii) is readily reconcilable with the conventional production axioms and, more importantly, (iii) can be identified from the data by building on the popular proxy-variable methods, which we extend to incorporate locational heterogeneity in firm production.}

Our methodology provides a practical tool for examining the effects of agglomeration and technology spillovers on firm performance {and will be most useful for empiricists focused on the analysis of operations efficiency/productivity and its ``determinants.''} Using the methods proposed in our paper, we can {separate the effects of firm location on} production technology from those on firm productivity and find evidence consistent with the conclusion that agglomeration economies affect {the productivity of Chinese chemicals manufacturers mainly through the scale effects of production \textit{technology} rather than improvements in overall TFP.} Comparing our flexible semiparametric model with the more conventional parametric model that postulates a common technology for all firms regardless of their location, we show that the latter does not provide {an adequate representation of} the industry and that conclusions based on its results can be misleading. {For managerial implications, our study re-emphasizes the importance of firm location for operations efficiency in manufacturing industries. Our findings also suggest that hiring skilled labor has a larger productivity effect than other widely discussed productivity-enhancing channels, such as learning by exporting. }

\section{Introduction}
\label{sec:introduction}

It is well documented in management, economics and operations research that businesses, even in narrowly defined industries, are quite different from one another in terms of productivity. These cross-firm productivity differentials are large, persistent and ubiquitous \citep[see][]{syverson2011}. Research on this phenomenon is therefore unsurprisingly vast and includes attempts to explain it from the perspective of firms' heterogeneous behaviors in research and development \citep[e.g.,][]{griffithetal2004}, corporate operational strategies \citep{smithreece1999}, ability of the managerial teams \citep{demerjianetal2012}, ownership structure \citep{ehrlichetal1994}, employee training and education \citep{moretti2004}, allocation efficiency \citep{songetal2011}, participation in globalization \citep{grossmanhelpman2015} and many others. In most such studies, a common production function/technology is typically assumed for all firms within the industry, and the differences in { operations performance of firms} are {confined to} variation in the {``total factor productivity,''} the Solow residual {\citep{solow1957}}.\footnote{A few studies alternatively specify an ``augmented'' production function which, besides the traditional inputs, also admits various firm-specific shifters such as the productivity-modifying factors mentioned above. {But such studies continue to assume that the same technology frontier applies to all firms.}} In this paper, we approach the {heterogeneity in firm performance} from a novel perspective in that we explicitly acknowledge the existence of locational effects on the {operations technology of firms} and their underlying productivity.
{We} allow the firm-level production function to vary across space, thereby {accommodating potential neighborhood influences on firm production.} In doing so, we are able to examine the role of locational heterogeneity for cross-firm differences in {operations} performance{/efficiency}.

{A firm's location is important for its operations technology. For example, \citet{ketokivi2017locate} show that hospital location is significantly related to {its} performance and {that} a hospital's choice of strategy can {help} moderate the effect of location {through} the interplay of {local} environmental factors with organizational strategy. As shown in Figure \ref{fig:number}, chemical enterprises in China, the focus of empirical analysis in this paper, are widely {(and unevenly)} distributed {across} space. {Given the sheer size of the country (it is the third largest by area), it is implausible that, even after controlling for firm heterogeneity, all these businesses operate using} the same production technology. Organizations in all industries\textemdash not only hospitals and chemical manufacturers\textemdash develop strategies to respond to {the local environment} and {the associated} competitive challenges, and those strategies drive operational decisions regarding investments in new or updated technologies}.

\begin{figure}[t]
\centering
\includegraphics[scale=0.26]{map_number.jpg}
\caption{Spatial Distribution of Manufacturers of Chemicals in China, 2004--2006}
\label{fig:number}
\end{figure}

{Theoretically,} there are many reasons to believe that the production technology is location-specific. First, exogenous {local} endowments and institutional environments, such as {laws, regulations and local supply chains}, play a key role in determining firm performance. {The location of firms} {determines key linkages between {the} production, market, supply chain and product development \citep{goldsteinetal2002}}. If we look at the global distribution of the supply chains of many products, product development and design are usually conducted in developed economies such as the U.S. and Europe, while manufacturing and assembly are performed in East Asian countries such as China and Vietnam. This spatial distribution largely reflects the endowment differences in factors of production (e.g., skilled vs.~unskilled labor) and the consequent relative input price differentials across countries. Analogously, take the heterogeneity in endowment and institutions across different locations \textit{within} a country. Many more of the world's leading universities are located on the East and West Coasts of the U.S.~than in the middle of the country, and they supply thousands of talented graduates each year for regional development, bolstering growth in flagship industries such as banking and high-tech in those locations. In China, which our empirical application focuses on, networking and political connections are, anecdotally, the key factors for the success of a business in the Northeast regions, whereas the economy on the Southeast Coast is more market-oriented. Furthermore, there are many broadly defined special economic zones (SEZs) in China, all of which are characterized by a small designated geographical area, local management, unique benefits and separate customs and administrative procedures \citep[see][]{craneetal2018}. According to a report from the China Development Bank, in 2014, there were 6 SEZs, 14 open coastal cities, 4 pilot free-trade areas and 5 financial reform pilot areas.
There were also 31 bonded areas, 114 national high-tech development parks, 164 national agricultural technology parks, 85 national eco-industrial parks, 55 national eco-civilization demonstration areas and 283 national modern agriculture demonstration areas. They are spread widely across China and support various economic functions, giving rise to locational heterogeneity in the country's production.

Second, most industries are geographically concentrated in general, whereby firms in the same or related industries tend to spatially cluster, benefiting from agglomeration economies reflected, among other things, in their production technologies that bring about localized \textit{aggregate} increasing returns. Ever since \citet{marshall1920} popularized these ideas, researchers have shown that industry concentration is too great to be explained solely by the differences in exogenous locational factors and that there are at least three behavioral micro-foundations for agglomeration: benefits from labor-market pooling/sharing, efficiency gains from the collocation of industries with input-output relationships that improves the quality of matches, and technology spillovers \citep[see][]{ellison_glaeser1999,duranton2004,ellisonetal2010,singh_marx2013}. The key idea of agglomeration economies is that geographic proximity reduces the transport costs of goods, people and, perhaps more importantly, ideas. While it is more intuitive that the movement of goods and people is hindered by spatial distance, the empirical evidence from prior studies shows that technology spillovers are also highly localized because knowledge transfers require interaction that proximity facilitates \citep[see][]{almeida_Kogut1999,alcacer_Chung2007,singh_marx2013}. Therefore, {owing to the role of local neighborhood influences,} firms that produce the same/similar products but are located in regions with different industry concentration levels are expected to enjoy different agglomeration effects on their {operations}.

Because location is an important factor affecting firm performance, previous empirical studies heavily rely on spatial econometrics to examine the locational/spatial effects on production. Oftentimes, spatially-weighted averages of other firms' outputs and inputs are included as additional regressors in spatial autoregressive (SAR) production-function models \citep[e.g.,][]{glassetal2016,glassetal2020,glassetal2020b, vidoli_Canello2016,serpakrishnan2018, glassetal2019,kutluetal2020,houetal2020}. The appropriateness of such {a conceptualization of firm-level production functions in the presence of locational influences} however remains unclear because {these SAR specifications are difficult to reconcile with the theory of the firm.} For instance, the reduced form of such models effectively implies substitutability of the firm's inputs with those of its peers and does not rule out the possibility of the firm's output increasing when the neighboring firms use more inputs even if the firm itself keeps its own inputs fixed { and its productivity remains the same. Further, these models continue to implausibly assume that all firms use the same production technology regardless of their location.
The practical implementation of SAR production-function models is, perhaps, even more problematic: (\textsl{i}) they imply additional, highly nonlinear parameter restrictions necessary to ensure that the conventional production axioms are not violated, and (\textsl{ii}) they are likely unidentifiable from the data given the inapplicability of available proxy-variable estimators and the pervasive lack of valid external instruments at the firm level. We discuss this in detail in Appendix \ref{sec:appx_sar}.}\footnote{But we should note that studies of the nexus between location/geography and firm performance in operations research and management are not all confined to the production theory paradigm; e.g., see \citet{bannisterstolp1995}, \citet{goldsteinetal2002}, \citet{kalnins2004,kalnins2006}, \citet{dahlsorenson2012} and \citet{kulchina2016}.}

In this paper, we consider a semiparametric production function in which both the {input-to-output transformation technology} and productivity are location-specific. Concretely, using the location information for firms, we let the {input-elasticity} and productivity-process parameters be nonparametric functions of the firm's {geographic location (latitude and longitude)} and estimate these unknown functions via kernel methods. Our methodology captures the cross-firm spatial {influences} through local smoothing, whereby the production technology for each location is calculated as the geographically weighted average of the input-output \textit{relationships} for firms in the nearby locations, with larger weights assigned to the firms that are more spatially proximate. This is fundamentally different from the SAR production-function models that formulate {neighborhood influences} using spatially-weighted averages of the output/input \textit{quantities} while keeping the production technology the same for all firms. Consistent with the agglomeration literature, our approach implies that learning and knowledge spillovers are localized and that their chances/intensity diminish with distance. Importantly, by utilizing the data-driven selection of smoothing parameters that regulate spatial weighting of neighboring firms in kernel smoothing, we avoid the need to rely on \textit{ad hoc} specifications of the weighting schemes and spatial radii {of neighborhood influences the way} traditional SAR models do. It also allows us to be agnostic about the channels through which {firm location affects production,} and our methodology inclusively captures all possible mechanisms of agglomeration economies.

{Our conceptualization of spatial influences by means of locationally-varying parameters is akin to the idea of ``geographically weighted regressions'' (GWR) introduced and popularized in the field of geography by \citet{bfc1996}; also see \citet{fbc-book} and many references therein. Just like ours, the GWR technique aims to model processes that are not constant over space but exhibit local variations and does so using a varying-coefficient specification estimated via kernel smoothing over locations. However, the principal\textemdash and non-trivial\textemdash distinction of our methodology from the GWR approach is in its emphasis on \textit{identification} of the spatially varying relationship.
Concretely, for consistency and asymptotic unbiasedness the GWR methods rely on the assumption that (non-spatial) regressors in the relationship of interest are mean-orthogonal to the stochastic disturbance, which rules out the presence of correlated unobservables as well as the potential simultaneity of regressors and the outcome variable for reasons other than spatial autoregression. The latter two are, however, more the rule than the exception for economic relations, which are affected by behavioral choices, including the firm-level production function. Recovering the data generating process underlying the firm's production operations from observational data (i.e., its identification) requires tackling the correlation between regressors and the error term, which the GWR cannot handle, making the latter unable to consistently estimate the production technology and firm productivity. This is precisely our focus.\footnote{In effect, our methodology constitutes a generalization of the GWR technique to accommodate endogenous regressors in the context of production-function estimation.}}

The identification of production functions in general, let alone with locational heterogeneity, is not trivial due to the endogeneity issue whereby the firm's input choices are correlated with its productivity. Complexity stems from the latency of firm productivity. Due to the rather unsatisfactory performance of the conventional approaches to identification of production functions, such as fixed effects estimation or instrumentation using prices, there is a growing literature targeted at solving endogeneity using a proxy-variable approach \citep[e.g., see][]{op1996,lp2003,acf2015,gnr2013} which has gained wide popularity among empiricists. To identify the locationally-varying production functions, we develop a semiparametric proxy-variable estimator that accommodates locational heterogeneity across firms. To this end, we build upon \citet{gnr2013}, whose framework we extend to incorporate spatial information about the firms in a semiparametric fashion. More specifically, we make use of the structural link between the production function (of the known varying-coefficient functional form) and the optimality condition for a flexible input derived from the firm's static expected profit maximization problem. We propose a two-step estimation procedure and, to approximate the unknown functional coefficients, employ local-constant kernel fitting. Based on the estimated location-specific production functions, we further propose a locational productivity differential decomposition to break down the cross-region production differences that cannot be explained by input usage (i.e., the differential in ``total productivity'' of firms across locations) into the contributions attributable to differences in available production technologies and to differences in {total-factor operations efficiency of firms.}

We apply our model to study locationally heterogeneous production technology among Chinese manufacturing firms in the chemical industry in 2004--2006. Based on the results of the data-driven cross-validation as well as formal statistical tests, the empirical evidence provides strong support for the importance and relevance of location for production. Qualitatively, we find that both technology and {firm} productivity vary significantly across regions. Firms are more likely to exhibit higher {(internal)} returns to scale in regions of agglomeration.
However, the connection between {firm} productivity and industry concentration {across space} is unclear. The decomposition analysis reveals that differences in {\textit{technology} (as opposed to idiosyncratic firm heterogeneity)} are the main source of cross-location total productivity differentials, on average {accounting for 2/3} of the differential.

To summarize, our contribution is as follows. We propose a semiparametric methodology to accommodate locational heterogeneity in the production-function estimation while maintaining the standard structural assumptions about firm production. {Unlike the available SAR-type alternatives, our model explicitly estimates the cross-locational variation in production technology.} To operationalize our methodology, we extend the widely-used proxy-variable identification methods to incorporate {firm location}. Our model as well as the proposed decomposition method for disentangling the effects of location on {firm} productivity from those on the technological input-output relationship should provide a valuable addition to the toolkit of {empiricists} interested in studying agglomeration economies and technology spillovers. {In the context of operations management in particular, our methodology will be most useful for empirical studies focused on the analysis of operations efficiency/productivity and} {its ``determinants'' \citep[e.g.,][are just a few recent examples of such analyses]{ross2004analysis,berenguer2016disentangling,jola2016effect,lam2016impact}. {In the case of multi-input production, the ``total factor productivity'' is among the most popular comprehensive measures of operations efficiency/productivity of the firm,} and our paper shows how to measure the latter robustly when production relationships are not constant over space {and are subject to neighborhood influences. This is particularly interesting because the effects of location, supply chain integration and agglomeration on firm performance have recently attracted much attention among researchers in operations management} \citep[e.g.,][]{goldsteinetal2002,ketokivi2017locate,flynn2010impact}.}

The rest of the paper is organized as follows. Section \ref{sec:model} describes the model of firm-level production {exhibiting} locational heterogeneity. We describe our identification and estimation strategy in Section \ref{sec:identification_estimation}. We provide the locational productivity differential decomposition in Section \ref{sec:decomposition_level}. The empirical application is presented in Section \ref{sec:application}. Section \ref{sec:conclusion} concludes. Supplementary materials are relegated to the Appendix.

\section{Locational Heterogeneity in Production}
\label{sec:model}

Consider the production process of a firm $i$ ($i=1,\dots,n$) in the time period $t$ ($t=1,\dots,T$) in which physical capital $K_{it} {\in\Re_{+}}$, labor $L_{it} {\in\Re_{+}}$ and an intermediate input such as materials $M_{it}{\in\Re_{+}}$ are transformed into the output $Y_{it}{\in\Re_{+}}$ via a production function given the (unobserved) firm productivity. Also, let $S_{i}$ be the (fixed) location of firm $i$, with the obvious choice being $S_{i}=(\text{lat}_i,\text{long}_i)'$, where $\text{lat}_i$ and $\text{long}_i$ are the latitude and longitude coordinates of the firm's location.
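Because spatial proximity between the $S_i$'s is what the kernel weighting in our estimation will ultimately be based on, a distance metric on latitude-longitude pairs is needed; a minimal sketch follows, using the great-circle (haversine) distance as one natural, though by no means the only, choice for coordinates measured in radians.
\begin{verbatim}
import numpy as np

def haversine(s1, s2, radius=6371.0):
    # Great-circle distance (km) between two (lat, long) points in radians;
    # radius is the Earth's mean radius, an illustrative constant
    dlat, dlong = s2[0] - s1[0], s2[1] - s1[1]
    a = (np.sin(dlat / 2) ** 2
         + np.cos(s1[0]) * np.cos(s2[0]) * np.sin(dlong / 2) ** 2)
    return 2 * radius * np.arcsin(np.sqrt(a))
\end{verbatim}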
Then, the locationally varying production function is
\begin{equation}\label{eq:prodfn_hicks_f}
Y_{it} = F_{|S_{i}}(K_{it},L_{it},M_{it})\exp\left\{\omega_{it}\right\}\exp\left\{\eta_{it}\right\} ,
\end{equation}
where $F_{|S_{i}}(\cdot)$ is the firm's location-specific production {function} that {varies} over space (as captured by $S_i$) to accommodate locational heterogeneity in production technology, $\omega_{it}$ is the firm's persistent Hicks-neutral {total factor productivity capturing its operations efficiency}, and $\eta_{it}$ is a random transitory shock. Note that, so long as the firm's location is fixed, the persistent productivity $\omega_{it}$ of firm $i$ by implication also follows an evolution process that is specific to the firm's location $S_i$; we expand on this below.

As in \citet{gnr2013}, \citet{malikovetal2020} and \citet{malikovzhao2021}, physical capital $K_{it}$ and labor $L_{it}$ are assumed to be subject to adjustment frictions (e.g., time-to-install, hiring/training costs), and the firm optimizes them dynamically at time $t-1$, rendering these predetermined inputs quasi-fixed at time $t$. Materials $M_{it}$ is a freely varying (flexible) input and is determined by the firm statically at time $t$. Thus, both $K_{it}$ and $L_{it}$ are state variables with dynamic implications and follow their respective deterministic laws of motion:
\begin{equation}\label{eq:k_lawofmotion}
K_{it}=I_{it-1}+(1-\delta)K_{it-1}\quad \text{and}\quad L_{it}=H_{it-1}+L_{it-1} ,
\end{equation}
where $I_{it}$, $H_{it}$ and $\delta$ are the gross investment, net hiring and the depreciation rate, respectively.

Following convention, we assume that the risk-neutral firm maximizes a discounted stream of expected life-time profits in perfectly competitive output and factor markets subject to the state variables and expectations about market structure variables, including prices, that are common to all firms.\footnote{{ We use the perfect competition and homogeneous price assumption mainly for two reasons: (\textsl{i}) it is the most widely used assumption in the literature on structural identification of production functions and productivity, and (\textsl{ii}) this assumption has been {repeatedly} used {when} studying the same data as ours \citep[e.g.,][]{baltagietal2016,malikovetal2020}. Relaxing the perfect competition assumption is possible but non-trivial, and it requires additional assumptions {about} the output demand \citep[e.g.,][]{deLoecker2011product} and/or extra information on firm-specific output prices that are usually not available for manufacturing data \citep[e.g.,][]{deloeckeretal2016} {or imposing \textit{ex ante} structure on the returns to scale \citep[see][]{flynnetal2019}.} It is still a subject of ongoing research. Given {the emphasis of our contribution on incorporating \textit{technological} heterogeneity (associated with firm location, in our case) in the measurement of firm productivity, we opt} to keep all other aspects of {modeling} consistent with {the convention in the literature to ensure meaningful comparability with most available methodologies.}}} Also, for convenience, we let $\mathcal{I}_{it}$ denote the information set available to firm $i$ for making the period $t$ production decisions.
In line with the proxy variable literature, we model firm productivity $\omega_{it}$ as a first-order Markov process which we, however, endogenize \`a la \citet{dj2013} and \citet{deloecker2013} by incorporating productivity-enhancing and ``learning'' activities of the firm. To keep our model as general as possible, we denote all such activities via a generic variable $G_{it}$ which, depending on the empirical application of interest, may measure the firm's R\&D expenditures, foreign investments, export status/intensity, etc.\footnote{A scalar variable $G_{it}$ can obviously be replaced with a vector of such variables.} Thus, $\omega_{it}$ evolves according to a location-inhomogeneous controlled first-order Markov process with transition probability $\mathcal{P}^{\omega}_{|S_i}(\omega_{it}|\omega_{it-1},G_{it-1})$. This implies the following location-specific mean regression for firm productivity:
\begin{equation}\label{eq:productivity_hicks_law}
\omega_{it}=h_{|S_i}\left(\omega_{it-1},G_{it-1}\right)+\zeta_{it},
\end{equation}
where $h_{|S_i}(\cdot)$ is the location-specific conditional mean function of $\omega_{it}$, and $\zeta_{it}$ is a random innovation unanticipated by the firm at period $t-1$ and normalized to zero mean: $\mathbb{E}\left[\zeta_{it} | \mathcal{I}_{it-1}\right]=\mathbb{E}\left[\zeta_{it}\right]=0$.

The evolution process in \eqref{eq:productivity_hicks_law} implicitly assumes that productivity-enhancing activities and learning take place with a delay, which is why the dependence of $\omega_{it}$ on the control $G_{it}$ is lagged, implying that improvements in firm productivity take a period to materialize. Further, the assumption $\mathbb{E}[\zeta_{it} |\ \mathcal{I}_{it-1}]=0$ implies that, due to adjustment costs, firms do not adjust their productivity-enhancing investments in response to expected \textit{future} productivity innovations. Since the innovation $\zeta_{it}$ represents inherent uncertainty about productivity evolution as well as the uncertainty about the success of productivity-enhancing activities, the firm relies on its knowledge of the \textit{contemporaneous} productivity $\omega_{it-1}$ when choosing the level of $G_{it-1}$ in period $t-1$ while being unable to anticipate $\zeta_{it}$. These structural timing assumptions are commonly made in models with controlled productivity processes \citep[e.g.,][]{vanbiesebroeck2005,dj2013,dj2018,deloecker2013,malikovetal2020,malikovzhao2021} and are needed to identify the within-firm productivity-improving learning effects.

We now formalize the firm's optimization problem in line with the above discussion. Under risk neutrality, the firm's optimal choice of the freely varying input $M_{it}$ is described by the (static) restricted expected profit-maximization problem subject to the already optimal dynamic choice of quasi-fixed inputs:
\begin{align}\label{eq:profitmax}
\max_{M_{it}}\ P_{t}^Y F_{|S_i}(K_{it},L_{it},M_{it})\exp\left\{\omega_{it}\right\}\theta - P_{t}^M M_{it} ,
\end{align}
where $P_{t}^Y$ and $P_{t}^M$ are respectively the output and material prices that, given the perfect competition assumption, need not vary across firms; and $\theta\equiv\mathbb{E}[\exp\{\eta_{it}\}|\ \mathcal{I}_{it}]$. The first-order condition corresponding to this optimization yields the firm's conditional demand for $M_{it}$.
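To make the timing assumptions behind \eqref{eq:productivity_hicks_law} concrete, the controlled Markov process is easy to simulate. The following Python sketch is purely illustrative and is not part of our methodology: the linear transition coefficients and the policy rule for $G_{it}$ are hypothetical placeholders.
\begin{verbatim}
# Illustrative simulation of the controlled productivity process (hypothetical numbers)
import numpy as np

rng = np.random.default_rng(0)
rho0, rho1, rho2 = 0.1, 0.6, 0.4      # hypothetical h(w, G) = rho0 + rho1*w + rho2*G

T = 50
omega, G = np.empty(T), np.empty(T)
omega[0], G[0] = 0.0, 0.2

for t in range(1, T):
    zeta = rng.normal(scale=0.05)     # innovation with E[zeta | I_{t-1}] = 0
    omega[t] = rho0 + rho1 * omega[t - 1] + rho2 * G[t - 1] + zeta
    G[t] = 0.1 + 0.2 * omega[t]       # hypothetical policy rule: G_t set knowing omega_t only
print(omega[-3:])
\end{verbatim}
Note how $G_{it-1}$ is set before the innovation $\zeta_{it}$ realizes, mirroring the assumption $\mathbb{E}[\zeta_{it}|\mathcal{I}_{it-1}]=0$.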
Building on \citeauthor{dj2018}'s (2013, 2018) treatment of productivity-enhancing R\&D investments (a potential choice of $G_{it}$ in our framework) as a contemporaneous decision, we describe the firm's dynamic optimization problem by the following Bellman equation:
\begin{align}\label{eq:bellman}
\mathbb{V}_{t}\big(\Xi_{it}\big) = &\max_{I_{it},H_{it},G_{it}} \Big\{ \Pi_{t|S_i}(\Xi_{it}) - \text{C}^{I}_{t}(I_{it}) - \text{C}^{H}_{t}(H_{it})- \text{C}^{G}_{t}(G_{it}) + \mathbb{E}\Big[\mathbb{V}_{t+1}\big(\Xi_{it+1}\big) \Big| \Xi_{it},I_{it},H_{it},G_{it}\Big]\, \Big\} ,
\end{align}
where $\Xi_{it}=(K_{it},L_{it},\omega_{it})'\in \mathcal{I}_{it}$ are the state variables;\footnote{The firm's location $S_i$ is suppressed in the list of state variables due to its time-invariance.} $\Pi_{t|S_i}(\Xi_{it})$ is the restricted profit function derived as a value function corresponding to the static problem in \eqref{eq:profitmax}; and $\text{C}^{\kappa}_t(\cdot)$ is the cost function for capital ($\kappa=I$), labor ($\kappa=H$) and productivity-enhancing activities ($\kappa=G$).\footnote{The assumption of separability of the cost functions is unimportant, and one can reformulate \eqref{eq:bellman} using a single $\text{C}_{t}(I_{it},H_{it},G_{it})$ for all dynamic production variables.} In the above dynamic problem, the level of productivity-enhancing activities $G_{it+1}$ is chosen in time period $t+1$, unlike the amounts of the dynamic inputs $K_{it+1}$ and $L_{it+1}$ that are chosen by the firm in time period $t$ (via $I_{it}$ and $H_{it}$, respectively). Solving \eqref{eq:bellman} for $I_{it}$, $H_{it}$ and $G_{it}$ yields their respective optimal policy functions.

An important assumption of our structural model of firm production in the presence of locational heterogeneity is that firm location $S_i$ is both fixed and exogenous. However, the identification of locational heterogeneity in production may be complicated by the potentially endogenous spatial sorting problem, whereby more productive firms might \textit{ex ante} sort into what then become high-productivity locations. Under this scenario, when we compare firm productivity and technology across locations, we may mistakenly attribute gradients therein to locational effects such as agglomeration and neighborhood influences, while in actuality they may merely reflect the underlying propensity of all firms in a given location to be more productive \textit{a priori}. While there has recently been notable progress in formalizing and understanding these coincident phenomena theoretically \citep[e.g.,][]{behrensetal2014,gaubert2018}, disentangling firm sorting and spatial agglomeration remains a non-trivial task empirically.\footnote{The urban economics literature also distinguishes a third endogenous process, usually referred to as ``selection,'' which differs from sorting in that it occurs \textit{ex post}, after firms have self-sorted into locations, and which determines their continuing survival. We abstract away from this low-productivity-driven attrition issue in light of the growing empirical evidence suggesting that it explains none of the spatial productivity differences which, in contrast, are mainly driven by agglomeration economies \citep[see][]{combesetal2012}. Relatedly, firm attrition out of the sample has also become commonly accepted as a practical non-issue in the productivity literature so long as the data are kept unbalanced.
For instance, \citet[][p.324]{lp2003} write: ``The original work by Olley and Pakes devoted significant effort to highlighting the importance of not using an artificially balanced sample (and the selection issues that arise with the balanced sample). They also show once they move to the unbalanced panel, their selection correction does not change their results.''} However, by including the firm's own lagged productivity in the autoregressive $\omega_{it}$ process in \eqref{eq:productivity_hicks_law}, we are able (at least to some extent) to account for this potential self-sorting because sorting into locations is heavily influenced by the firm's own productivity (oftentimes stylized as ``talent'' or ``efficiency'' in theoretical models). That is, the locational heterogeneity in firm productivity and technology in our model is measured after partialling out the contribution of the firm's own past productivity. Incidentally, \citet{deloecker2013} argues the same in the context of the productivity effects of exporting and the self-selection of exporters.

\section{Methodology} \label{sec:identification_estimation}

This section describes our strategy for the (structural) identification and estimation of the firm's location-specific production technology and unobserved productivity. Following the popular practice in the literature \citep[e.g., see][]{op1996,lp2003,dj2013,acf2015,collarddeloecker2015,koningsvanormelingen2015}, we assume the Cobb-Douglas specification for the production function, which we adapt to allow for potential locational heterogeneity in production. We do so in a semiparametric fashion as follows:
\begin{equation}\label{eq:prodfn_hicks_cd}
\ln F_{|S_i}(\cdot) = \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\beta_M(S_i)m_{it} ,
\end{equation}
where the lower-case variables denote the logs of the corresponding upper-case variables, and the input elasticity functions $[\beta_K(\cdot),\beta_L(\cdot),\beta_M(\cdot)]'$ are unspecified smooth functions of the firm's location $S_i$. The local smoothness of the production relationship, including both the input elasticities and the persistent productivity process below, captures the effects of technology spillovers and agglomeration economies that give rise to local neighborhood influences. Our methodology can also accommodate more flexible specifications such as the log-quadratic translog, which provides a natural extension of the log-linear Cobb-Douglas form. See Appendix \ref{sec:appx_tl} for the details on this extension.

Before proceeding further, we formalize the location-specific autoregressive conditional mean function of $\omega_{it}$ in its evolution process \eqref{eq:productivity_hicks_law}. Following \citet{dj2013,dj2019}, \citet{acf2015}, \citet{griecoetal2016,griecoetal2019} and many others, we adopt a parsimonious first-order autoregressive specification of the Markovian evolution of productivity but take a step further by assuming a more flexible semiparametric location-specific formulation akin to that for the production technology in \eqref{eq:prodfn_hicks_cd}:
\begin{equation}\label{eq:productivity_hicks_lawsp}
h_{|S_i}(\cdot ) = \rho_0(S_i)+ \rho_1(S_i) \omega_{it-1}+\rho_2(S_i)G_{it-1}.
\end{equation}

\subsection{Proxy Variable Identification} \label{sec:identification}

Substituting for $F_{|S_i}(\cdot)$ in the locationally varying production function \eqref{eq:prodfn_hicks_f} using \eqref{eq:prodfn_hicks_cd}, we obtain
\begin{align}
y_{it} &= \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\beta_M(S_i)m_{it} + \omega_{it}+ \eta_{it} \label{eq:prodfn_hicks_cd2} \\
&= \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\beta_M(S_i)m_{it} + \rho_0(S_i)+ \rho_1(S_i) \omega_{it-1}+\rho_2(S_i)G_{it-1} + \zeta_{it}+ \eta_{it}, \label{eq:prodfn_hicks_cd3}
\end{align}
where we have also used the Markov process for $\omega_{it}$ from \eqref{eq:productivity_hicks_law} combined with \eqref{eq:productivity_hicks_lawsp} in the second line. Under our structural assumptions, all right-hand-side covariates in \eqref{eq:prodfn_hicks_cd3} are predetermined and weakly exogenous with respect to $\zeta_{it} + \eta_{it}$, except for the freely varying input $m_{it}$ that the firm chooses in time period $t$ conditional on $\omega_{it}$ (among other state variables), thereby making it a function of $\zeta_{it}$. That is, $m_{it}$ is endogenous.

Before tackling the endogeneity of $m_{it}$ in order to consistently estimate \eqref{eq:prodfn_hicks_cd3}, we first need to address the latency of firm productivity $\omega_{it-1}$. A popular solution is a proxy variable approach \`a la \citet{lp2003}, whereby latent productivity is controlled for by inverting the firm's conditional demand for an observable static input such as materials. However, such a standard proxy approach generally fails to identify the firm's production function and productivity due to the lack of a valid instrument (from within the production function) for the endogenous $m_{it}$, despite the abundance of predetermined lags of inputs. As recently shown by \citet{gnr2013}, identification cannot be achieved using the standard procedure because \textit{no} exogenous higher-order lag provides excluded relevant variation for $m_{it}$ after conditioning the model on the already included self-instrumenting variables. As a result, the production function remains unidentified in flexible inputs. To solve this under-identification problem, \citet{gnr2013} suggest exploiting a structural link between the production function and the firm's (static) first-order condition for the freely varying input. In what follows, we build on this idea, which we modify along the lines of \citet{dj2013} and \citet{malikovzhao2021} in explicitly making use of the assumed functional form of the production technology.

\textsl{First step}.\textemdash We first focus on the identification of the production function in its flexible input $m_{it}$. Specifically, given the technology specification in \eqref{eq:prodfn_hicks_cd}, we seek to identify the material elasticity function $\beta_M(S_i)$. To do so, we consider an equation for the firm's first-order condition for the static optimization in \eqref{eq:profitmax}.
The optimality condition with respect to $M_{it}$, in logs, is given by
\begin{equation}\label{eq:foc_cd}
\ln P_{t}^Y+\beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+\ln \beta_M(S_i)+[\beta_M(S_i)-1]m_{it} + \omega_{it}+ \ln \theta = \ln P_{t}^M ,
\end{equation}
which can be transformed by subtracting the production function in \eqref{eq:prodfn_hicks_cd2} from it to obtain the following location-specific material share equation:
\begin{equation}\label{eq:fst}
v_{it} = \ln [\beta_M(S_i)\theta] - \eta_{it} ,
\end{equation}
where $v_{it} \equiv \ln \left(P_{t}^M M_{it}\right)-\ln \left( P_{t}^Y Y_{it}\right)$ is the log nominal share of material costs in total revenue, which is observable in the data.

The material share equation in \eqref{eq:fst} is powerful in that it enables us to identify the unobservable material elasticity function $\beta_M(S_i)$ using the information about the log material share $v_{it}$. Specifically, we first identify a ``scaled'' material elasticity function $\beta_M(S_i)\times\theta$ using the moment condition $\mathbb{E}[\eta_{it} | \mathcal{I}_{it}] =\mathbb{E}\left[\eta_{it}|S_i\right]=0$, from which we have that
\begin{equation}\label{eq:fst_ident_theta}
\ln [\beta_M(S_i)\theta] = \mathbb{E}[v_{it}|S_i].
\end{equation}
To identify the material elasticity function $\beta_M(S_i)$ net of the constant $\theta$, note that \eqref{eq:fst} and \eqref{eq:fst_ident_theta} together imply $\eta_{it}=\mathbb{E}[v_{it}|S_i]-v_{it}$, so that
\begin{align}\label{eq:theta}
\theta\equiv \mathbb{E}\left[\exp\left\{ \eta_{it}\right\} \right] =\mathbb{E}\left[\exp\left\{ \mathbb{E}[v_{it}|S_i]-v_{it}\right\} \right],
\end{align}
which allows us to isolate $\beta_M(S_i)$ via
\begin{equation}\label{eq:fst_ident}
\beta_M(S_i) = \exp\left\{ \mathbb{E}[v_{it}|S_i] \right\}/ \mathbb{E}\left[\exp\left\{ \mathbb{E}[v_{it}|S_i]-v_{it}\right\} \right].
\end{equation}
By having identified the material elasticity function $\beta_M(S_i)$, we have effectively pinpointed the production technology in the dimension of its endogenous static input, thereby circumventing the \citet{gnr2013} critique. This is evident when \eqref{eq:prodfn_hicks_cd3} is rewritten as
\begin{equation}\label{eq:prodfn_hicks_cd4}
y_{it}^* = \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+ \rho_0(S_i)+ \rho_1(S_i) \omega_{it-1}+\rho_2(S_i)G_{it-1} + \zeta_{it}+ \eta_{it} ,
\end{equation}
where $y_{it}^*\equiv y_{it} - \beta_M(S_i) m_{it}$ on the left-hand side is already identified/observable and, hence, the model in \eqref{eq:prodfn_hicks_cd4} contains \textit{no} endogenous regressors that need instrumentation.
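As a quick sanity check on this identification argument, \eqref{eq:fst_ident_theta}--\eqref{eq:fst_ident} can be verified numerically. The following minimal Python sketch is purely illustrative: it simulates the share equation \eqref{eq:fst} for a single location with hypothetical values of $\beta_M$ and the shock scale, and recovers $\beta_M$ net of $\theta$.
\begin{verbatim}
# Numerical check of the first-step identification (one location, hypothetical values)
import numpy as np

rng = np.random.default_rng(1)
beta_M, sigma = 0.5, 0.3
theta = np.exp(sigma**2 / 2)            # theta = E[exp(eta)] for Gaussian eta

eta = rng.normal(scale=sigma, size=200_000)
v = np.log(beta_M * theta) - eta        # log material share equation

b_M = v.mean()                          # estimate of ln[beta_M * theta]
beta_M_hat = np.exp(b_M) / np.mean(np.exp(b_M - v))
print(round(beta_M_hat, 3))             # ~ 0.5, i.e., beta_M net of theta
\end{verbatim}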
\textsl{Second step}.\textemdash To identify the rest of the production function, we proxy for latent $\omega_{it-1}$ using the known functional form of the conditional material demand function implied by the static first-order condition in \eqref{eq:foc_cd}, which we analytically invert for productivity. Namely, using the inverted (log) material demand function $\omega_{it}=\ln[ P_{t}^M/P_{t}^Y]-\beta_K(S_i) k_{it}-\beta_L(S_i)l_{it}-\ln [\beta_M(S_i)\theta]+[1-\beta_M(S_i)]m_{it}$ to substitute for $\omega_{it-1}$ in \eqref{eq:prodfn_hicks_cd4}, we get
\begin{align}\label{eq:sst}
y_{it}^* &= \beta_K(S_i) k_{it}+\beta_L(S_i)l_{it}+ \rho_0(S_i)+ \rho_1(S_i) \Big[\nu^*_{it-1}-\beta_K(S_i) k_{it-1}-\beta_L(S_i)l_{it-1}\Big]+\rho_2(S_i)G_{it-1} + \zeta_{it}+ \eta_{it} ,
\end{align}
where $\nu^*_{it-1}=\ln[ P_{t-1}^M/P_{t-1}^Y]-\ln [\beta_M(S_i)\theta]+[1-\beta_M(S_i)]m_{it-1}$ is already identified/observable and predetermined with respect to $\zeta_{it}+ \eta_{it}$.\footnote{Following the convention in the literature \citep[e.g.,][]{op1996,lp2003,acf2015,dj2013,gnr2013}, we assume there is no measurement error in $m_{it}$. However, if the (log) material input is measured with error, due to reasons such as inventories, subcontracting and outsourcing, this will affect both the first- and the second-step estimation. More specifically, adjusting the first step is less problematic if the measurement error is classical, since $m_{it}$ enters the dependent variable. However, the second-step equation \eqref{eq:sst} will then have a new endogeneity issue due to the measurement error. In such a case, additional identifying assumptions are often needed; see \citet{hu2020estimating} for an example.} All regressors in \eqref{eq:sst} are weakly exogenous, and this proxied model is identified based on the moment conditions:
\begin{equation}\label{eq:sst_ident}
\mathbb{E}[ \zeta_{it} + \eta_{it} |\ k_{it},l_{it},k_{it-1},l_{it-1},G_{it-1},\nu^*_{it-1}(m_{it-1}), S_{i} ] = 0.
\end{equation}
With the production technology and the transitory shock ${\eta}_{it}$ successfully identified in the two previous steps, we can readily recover $\omega_{it}$ from \eqref{eq:prodfn_hicks_cd2} via $\omega_{it}=y_{it}- \beta_K(S_i)k_{it}-\beta_L(S_i)l_{it}-\beta_M(S_i)m_{it}-\eta_{it}$.

Our identification methodology is also robust to the \citet{acf2015} critique, which focuses on the inability of structural proxy estimators to separably identify the additive production function and productivity proxy. Such an issue normally arises from the perfect functional dependence between the freely varying inputs appearing both inside the unknown production function and the productivity proxy. Our second-step equation \eqref{eq:sst} does not suffer from this problem because it contains no (endogenous) variable input on the right-hand side: the corresponding elasticity has already been identified from the material share equation in the first step.

\subsection{Semiparametric Estimation} \label{sec:estimation}

Given the semiparametric varying-coefficient specifications adopted for both the production technology [in \eqref{eq:prodfn_hicks_cd}] and the productivity evolution [in \eqref{eq:productivity_hicks_lawsp}], we estimate both the first- and second-step equations \eqref{eq:fst} and \eqref{eq:sst} via \textit{local} least squares; we employ local-constant kernel fitting. Denote the unknown $\ln [\beta_M(S_i)\theta]$ as some nonparametric function $b_M(S_i)$. Under the assumption that the input elasticity functions are smooth and twice continuously differentiable in the neighborhood of $S_i=s$, the unknown $b_M(S_i)$ can be locally approximated around $s$ via $b_M(S_i) \approx b_M(s)$ at points $S_i$ close to $s$, $|S_i-s|=o(1)$.
Therefore, for locations $S_i$ in the neighborhood of $s$, we can approximate \eqref{eq:fst} by
\begin{equation}\label{eq:fst_est}
v_{it} \approx b_M(s) - \eta_{it} ,
\end{equation}
with the corresponding local-constant kernel estimator of $\ln [\beta_M(s)\theta]$ given by
\begin{equation}\label{eq:fst_est2_pre}
\widehat{b}_M(s) = \left[\sum_i\sum_t \mathcal{K}_{{h}_1}(S_i,s)\right]^{-1}\sum_i\sum_t \mathcal{K}_{{h}_1}(S_i,s)v_{it},
\end{equation}
where $\mathcal{K}_{h_1}(S_i,s)$ is a kernel that weights each observation on the basis of the proximity of its $S_i$ value to $s$.

To avoid over-smoothing in dense ranges of the support of the data while under-smoothing in sparse tails, which a ``fixed'' bandwidth parameter is well known to produce, we employ an ``adaptive'' bandwidth capable of adapting to the local distribution of the data. Specifically, to weight observations, we use an $h_1$-nearest-neighbor bandwidth $R_{h_1}(s)$ defined as the Euclidean distance between the fixed location $s$ and its $h_1$th nearest location among $\{S_i\}$, i.e.,
\begin{equation}\label{eq:knn-bw}
R_{h_1}(s)=\lVert S_{(h_1)}-s\rVert,
\end{equation}
where $S_{(h_1)}$ is the $h_1$th nearest neighbor of $s$. Evidently, $R_{h_1}(s)$ is just the $h_1$th order statistic of the distances $\lVert S_i-s\rVert$. It is $s$-specific and, hence, adapts to the data distribution. Correspondingly, the kernel weight function is given by
\begin{equation}
\mathcal{K}_{h_1}(S_i,s)=\mathsf{K}\left(\frac{\lVert S_i-s\rVert}{R_{h_1}(s)}\right) ,
\end{equation}
where $\mathsf{K}(\cdot)$ is a (non-negative) smooth kernel function such that $\int \mathsf{K}(\lVert u\rVert)du=1$; we use a second-order Gaussian kernel. The key parameter controlling the degree of smoothing in the first-step estimator \eqref{eq:fst_est2_pre} is the number of nearest neighbors (i.e., locations) $h_1$, which diverges to $\infty$ as $n\to \infty$ but slowly: $h_1/n\to 0$. We select the optimal $h_1$ using a data-driven cross-validation procedure. Also note that, despite the location $S_i$ being multivariate, the parameter $h_1$ is a scalar because it modulates a univariate quantity, namely the distance. Hence, the bandwidth $R_{h_1}(s)$ is also a scalar. That is, unlike in the case of more standard kernel fitting based on fixed bandwidths, when the data are weighted using the product of univariate kernels corresponding to each element in $S_i-s$, adaptive kernel fitting weights the data using a \textit{norm} of the vector $S_i-s$. For this reason, when employing nearest-neighbor methods, the elements of the smoothing variables are typically rescaled so that they are all comparable because, when $S_i$ is multivariate, the nearest-neighbor ordering is not scale-invariant. In our case, however, we do \textit{not} rescale the elements of $S_i$ (i.e., latitude and longitude) because they are already measured on the same scale and the (partial) distances therein have a concrete physical interpretation.

From \eqref{eq:fst_ident}, the first-step estimator of $\beta_M(s)$ is
\begin{equation}\label{eq:fst_est2}
\widehat{\beta}_M(s) = nT\exp\left\{ \widehat{b}_M(s) \right\}\Big/ \sum_i\sum_t\exp\left\{ \widehat{b}_M(s)-v_{it}\right\} .
\end{equation}
We construct $\widehat{y}_{it}^*\equiv y_{it} - \widehat{\beta}_M(S_i) m_{it}$ and $\widehat{\nu}^*_{it-1}=\ln[ P_{t-1}^M/P_{t-1}^Y]-\ln [\widehat{\beta}_M(S_i)\theta]+[1-\widehat{\beta}_M(S_i)]m_{it-1}$ using the first-step local estimates of $\beta_M(S_i)$.
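For concreteness, the first-step estimator in \eqref{eq:fst_est2_pre}--\eqref{eq:fst_est2} with the adaptive nearest-neighbor bandwidth can be coded compactly. The following Python sketch is an illustrative implementation under our stated assumptions; the array names are hypothetical placeholders (\texttt{S} stacks the firm-year locations in radians, \texttt{v} the log material shares), and the cross-validated choice of \texttt{h1} is omitted.
\begin{verbatim}
# Illustrative first-step local-constant estimator with an adaptive k-NN bandwidth
import numpy as np

def beta_M_hat(s, S, v, h1):
    """Estimate beta_M(s) from stacked locations S (N x 2) and log shares v (N,)."""
    dist = np.linalg.norm(S - s, axis=1)       # Euclidean distances ||S_i - s||
    R = np.sort(dist)[h1 - 1]                  # h1-th nearest-neighbor distance R_h1(s)
    K = np.exp(-0.5 * (dist / R) ** 2)         # Gaussian kernel (constant cancels)
    b = np.sum(K * v) / np.sum(K)              # local-constant fit of ln[beta_M(s)*theta]
    return np.exp(b) / np.mean(np.exp(b - v))  # divide out theta
\end{verbatim}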
Analogous to the first-step estimation, we then locally approximate each unknown parameter function in \eqref{eq:sst} around $S_i=s$ via the local-constant approach. Therefore, for locations $S_i$ near $s$, we have
\begin{align}\label{eq:sst_est}
\widehat{y}_{it}^* &\approx \beta_K(s) k_{it}+\beta_L(s)l_{it}+ \rho_0(s)+ \rho_1(s) \Big[\widehat{\nu}^*_{it-1}-\beta_K(s) k_{it-1}-\beta_L(s)l_{it-1}\Big]+\rho_2(s)G_{it-1} + \zeta_{it}+ \eta_{it}.
\end{align}
Denoting all unknown parameters in \eqref{eq:sst_est} collectively as $\Theta(s)=[\beta_K(s),\beta_L(s),\rho_0(s),\rho_1(s),\rho_2(s)]'$, we estimate the second-step equation via locally weighted nonlinear least squares. The corresponding kernel estimator is
\begin{align}\label{eq:sst_est2}
\widehat{\Theta}(s) = \arg\min_{\Theta(s)}\ \sum_i\sum_t &\ \mathcal{K}_{{h}_2}(S_i,s)\Big( \widehat{y}_{it}^* - \beta_K(s) k_{it}-\beta_L(s)l_{it}\ - \notag \\
&\ \rho_0(s)- \rho_1(s) \Big[\widehat{\nu}^*_{it-1}-\beta_K(s) k_{it-1}-\beta_L(s)l_{it-1}\Big]-\rho_2(s)G_{it-1} \Big)^2,
\end{align}
where $h_2$ is the number of nearest neighbors of a fixed location $s$ in the second-step estimation; it diverges faster than the first-step smoothing parameter $h_1$ so that the first-step estimation has an asymptotically ignorable impact on the second step. Lastly, the firm productivity is estimated as $\widehat{\omega}_{it}=y_{it}- \widehat{\beta}_K(S_i)k_{it}-\widehat{\beta}_L(S_i)l_{it}-\widehat{\beta}_M(S_i)m_{it}-\widehat{\eta}_{it}$ using the results from both steps.
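For illustration, this locally weighted nonlinear least-squares step can be sketched as follows, again with hypothetical placeholder arrays for the constructed variables ($\widehat{y}^*_{it}$, $\widehat{\nu}^*_{it-1}$) and the inputs; the square root of the kernel weight is absorbed into the residual so that a generic least-squares routine minimizes the kernel-weighted criterion in \eqref{eq:sst_est2}.
\begin{verbatim}
# Illustrative second-step locally weighted nonlinear least squares
import numpy as np
from scipy.optimize import least_squares

def Theta_hat(s, S, y_star, k, l, k_lag, l_lag, G_lag, nu_lag, h2):
    """Estimate Theta(s) = [bK, bL, r0, r1, r2] at a fixed location s."""
    dist = np.linalg.norm(S - s, axis=1)
    R = np.sort(dist)[h2 - 1]                    # adaptive k-NN bandwidth
    w = np.sqrt(np.exp(-0.5 * (dist / R) ** 2))  # sqrt of kernel weights

    def resid(p):
        bK, bL, r0, r1, r2 = p
        fit = (bK * k + bL * l + r0
               + r1 * (nu_lag - bK * k_lag - bL * l_lag) + r2 * G_lag)
        return w * (y_star - fit)                # kernel-weighted residuals

    return least_squares(resid, x0=np.array([0.1, 0.3, 0.0, 0.5, 0.1])).x
\end{verbatim}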
\medskip

\noindent\textbf{Finite-Sample Performance.} Before applying our proposed methodology to the data, we first study its performance in a small set of Monte Carlo simulations. The results are encouraging, and the simulation experiments show that our estimator recovers the true parameters well. As expected of a consistent estimator, the estimation becomes more stable as the sample size grows. For details, see Appendix \ref{sec:appex_sim}.

\medskip

\noindent\textbf{Inference.} Due to the multi-step nature of our estimator as well as the presence of nonparametric components, the computation of the asymptotic variance of the estimators is not simple. For statistical inference, we therefore use the bootstrap. We approximate the sampling distributions of the estimators via a wild residual block bootstrap that takes into account the panel structure of the data, with all the steps bootstrapped jointly owing to the sequential nature of our estimation procedure. The bootstrap algorithm is described in Appendix \ref{sec:appx_inference}.

\medskip

\noindent\textbf{Testing of Location Invariance.} Given that our semiparametric locationally varying production model nests a more traditional fixed-parameter specification, which implies locational invariance of the production function and the productivity evolution, as a special case, we can formally discriminate between the two models to see if the data support our more flexible modeling approach. We discuss this specification test in detail in Appendix \ref{sec:appx_test}.

\section{Locational Productivity Differential Decomposition} \label{sec:decomposition_level}

Since the production function can vary across space, a meaningful comparison of productivity for firms dispersed across space now requires that locational differences in technology be explicitly controlled for. That is, the productivity differential between two firms is no longer limited to the difference in their firm-specific total factor productivities $\omega_{it}$ (unless they both belong to the same location) because either one of the firms may have access to a more productive technology $F_{|S}(\cdot)$. Given that locational heterogeneity in production is the principal focus of our paper, in what follows we provide a procedure for measuring and decomposing firm productivity differentials across any two locations of choice.

Let $\mathcal{L}(s,t)$ represent the set of $n_{t}^s$ firms operating in location $s$ in year $t$. For each of these firms, the estimated Cobb-Douglas production function (net of random shocks) in logs is
\begin{equation}\label{eq:prodfn_hicks_cd-s}
\widehat{y}_{it}^s = \widehat{\beta}_K(s) k_{it}^s+\widehat{\beta}_L(s)l_{it}^s+\widehat{\beta}_M(s)m_{it}^s + \widehat{\omega}_{it}^s,
\end{equation}
where we have also explicitly indexed these firms' observable output/inputs as well as the estimated productivities using the location. Averaging over these firms, we arrive at the ``mean'' production function for location $s$ in time $t$:
\begin{equation}\label{eq:prodfn_hicks_cd-save}
\overline{y}_{t}^s = \widehat{\beta}_K(s) \overline{k}_{t}^s+\widehat{\beta}_L(s)\overline{l}_{t}^s+\widehat{\beta}_M(s)\overline{m}_{t}^s + \overline{\omega}_{t}^s,
\end{equation}
where $\overline{y}_{t}^s= \tfrac{1}{n^s_t}\sum_i\widehat{y}_{it}^s\mathbbm{1}\{i\in \mathcal{L}(s,t)\}$, $\overline{x}_{t}^s=\tfrac{1}{n^s_t}\sum_i x_{it}^s\mathbbm{1}\{i\in \mathcal{L}(s,t)\}$ for $x\in \{k,l,m\}$, and $\overline{\omega}_{t}^s= \tfrac{1}{n^s_t}\sum_i\widehat{\omega}_{it}^s\mathbbm{1}\{i\in \mathcal{L}(s,t)\}$. Taking the difference between \eqref{eq:prodfn_hicks_cd-save} and the analogous mean production function for the benchmark location of interest $\kappa$ in the same year, we obtain the mean \textit{output} differential between these two locations (in logs):
\begin{equation}\label{eq:prodfn_hicks_cd-skdif}
\underbrace{\overline{y}_{t}^s - \overline{y}_{t}^{\kappa}}_{\Delta\overline{y}_{t}^{s,\kappa}}= \left[\widehat{\beta}_K(s) \overline{k}_{t}^s+\widehat{\beta}_L(s)\overline{l}_{t}^s +\widehat{\beta}_M(s)\overline{m}_{t}^s\right] - \left[\widehat{\beta}_K(\kappa) \overline{k}_{t}^{\kappa}+\widehat{\beta}_L(\kappa)\overline{l}_{t}^{\kappa} +\widehat{\beta}_M(\kappa)\overline{m}_{t}^{\kappa}\right] + \Big[ \overline{\omega}_{t}^s - \overline{\omega}_{t}^{\kappa}\Big] .
\end{equation}
To derive the mean \textit{productivity} differential (net of input differences) between these two locations, we add and subtract the $s$ location's production technology evaluated at the $\kappa$ location's inputs, i.e., \mbox{$\left[\widehat{\beta}_K(s) \overline{k}_{t}^{\kappa}+\widehat{\beta}_L(s)\overline{l}_{t}^{\kappa} +\widehat{\beta}_M(s)\overline{m}_{t}^{\kappa}\right]$}, in \eqref{eq:prodfn_hicks_cd-skdif}:
\begin{align}\label{eq:prodfn_hicks_cd-decomp}
\Delta \overline{\text{PROD}}_{t}^{s,\kappa} &\equiv \Delta\overline{y}_{t}^{s,\kappa} - \widehat{\beta}_K(s) \Delta \overline{k}_{t}^{s,\kappa} -\widehat{\beta}_L(s)\Delta \overline{l}_{t}^{s,\kappa} -\widehat{\beta}_M(s)\Delta \overline{m}_{t}^{s,\kappa} \notag \\
& =\underbrace{\left[\widehat{\beta}_K(s)-\widehat{\beta}_K(\kappa)\right] \overline{k}_{t}^{\kappa}+ \left[\widehat{\beta}_L(s)-\widehat{\beta}_L(\kappa)\right] \overline{l}_{t}^{\kappa} + \left[\widehat{\beta}_M(s)-\widehat{\beta}_M(\kappa)\right] \overline{m}_{t}^{\kappa}}_{\Delta\overline{\text{TECH}}_{t}^{s,\kappa}} + \underbrace{\Big[ \overline{\omega}_{t}^s - \overline{\omega}_{t}^{\kappa}\Big]}_{\Delta\overline{\text{TFP}}_{t}^{s,\kappa}},
\end{align}
where $\Delta\overline{x}_{t}^{s,\kappa}=\overline{x}_{t}^{s}-\overline{x}_{t}^{\kappa}$ for $x\in \{k,l,m\}$.

Equation \eqref{eq:prodfn_hicks_cd-decomp} measures the mean productivity differential across space and provides a \textit{counterfactual} decomposition thereof. By utilizing the counterfactual output that, given its location-specific technology, the average firm in location $s$ would have produced using the mean inputs employed by the firms in location $\kappa$ in year $t$, $\big[\widehat{\beta}_K(s) \overline{k}_{t}^{\kappa}+\widehat{\beta}_L(s)\overline{l}_{t}^{\kappa} +\widehat{\beta}_M(s)\overline{m}_{t}^{\kappa}\big]$, we are able to measure the locational differential in the mean productivity of firms in locations $s$ and $\kappa$ that is \textit{un}explained by their different input usage: $\Delta \overline{\text{PROD}}_{t}^{s,\kappa}$. More importantly, we can then decompose this locational differential in total productivity into the contribution attributable to the difference in production technologies, $\Delta\overline{\text{TECH}}_{t}^{s,\kappa}$, and to the difference in the average total-factor operations efficiencies, $\Delta\overline{\text{TFP}}_{t}^{s,\kappa}$. The locational productivity differential decomposition in \eqref{eq:prodfn_hicks_cd-decomp} is time-varying, but should one be interested in a scalar measure of locational heterogeneity for the entire sample period, the time-specific averages can be replaced with the ``grand'' averages computed by pooling over all time periods.
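Computationally, the decomposition is a matter of a few inner products once the location-specific elasticities and the mean inputs and productivities are in hand. The following Python sketch illustrates \eqref{eq:prodfn_hicks_cd-decomp} with hypothetical numbers (not our estimates):
\begin{verbatim}
# Illustrative decomposition of the mean productivity differential (hypothetical inputs)
import numpy as np

def decompose(beta_s, beta_kap, xbar_kap, wbar_s, wbar_kap):
    """Return (PROD, TECH, TFP) differentials between locations s and kappa."""
    tech = (np.asarray(beta_s) - np.asarray(beta_kap)) @ np.asarray(xbar_kap)
    tfp = wbar_s - wbar_kap
    return tech + tfp, tech, tfp

# beta_* = [bK, bL, bM]; xbar_kap = mean (k, l, m) in the benchmark location kappa
print(decompose([0.11, 0.30, 0.48], [0.13, 0.30, 0.50], [8.0, 4.0, 9.0], 1.9, 1.4))
\end{verbatim}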
\section{Empirical Application} \label{sec:application}

Using our proposed model and estimation methodology, we explore the locationally heterogeneous production technology among manufacturers in the Chinese chemical industry. We report location-specific elasticity and productivity estimates for these firms and then decompose the differences in their productivity across space to study whether the latter are mainly driven by the use of different production technologies or by underlying total factor productivity differentials.

\subsection{Data} \label{sec:data}

We use the data from \citet{baltagietal2016}. The dataset is a panel of $n=12,490$ manufacturers of chemicals continuously observed over the 2004--2006 period ($T=3$). The industry includes the manufacturing of basic chemical materials (inorganic acids and bases, inorganic salts and organic raw chemical materials), fertilizers, pesticides, paints, coatings and adhesives, synthetic materials (plastic, synthetic resin and fiber) as well as daily chemical products (soap and cleaning compounds). The original source of these firm-level data is the Chinese Industrial Enterprises Database survey conducted by China's National Bureau of Statistics (NBS), which covers all state-owned firms and all non-state-owned firms with sales above 5 million Yuan (about \$0.6 million). \citet{baltagietal2016} have geocoded the location of each firm at the zipcode level in terms of the longitude and latitude (the ``$S$'' variables) using the postcode information in the dataset. The coordinates are constructed for the location of each firm's headquarters and are time-invariant. By focusing on continually operating firms, we mitigate a potential impact of spatial sorting (as well as attrition due to non-survival) on the estimation results and treat the firm location as fixed (exogenous). The total number of observations is 37,470.

Figure \ref{fig:number} shows the spatial distribution of firms in our dataset on a map of mainland China (we omit the area in the West with no data in our sample). The majority are located on the East Coast and in the Southeast of China, especially around the Yangtze River Delta, which generally comprises Shanghai and the surrounding areas, the southern Jiangsu province and the northern Zhejiang province.

\begin{table}[t] \centering \caption{Data Summary Statistics}\label{tab:datasummary} \footnotesize \makebox[\linewidth]{ \begin{tabular}{lrrrr} \toprule[1pt] Variables& Mean & 1st Qu. & Median & 3rd Qu.\\ \midrule &\multicolumn{4}{c}{\it \textemdash Production Function Variables\textemdash} \\[2pt] Output & 86,381.98 & 11,021.09 & 23,489.53 & 59,483.71 \\ Capital & 35,882.40 & 1,951.47 & 5,319.28 & 17,431.35 \\ Labor & 199.07 & 43.00 & 80.00 & 178.00 \\ Materials & 48,487.82 & 5,896.49 & 12,798.35 & 33,063.81 \\[2pt] &\multicolumn{4}{c}{\it \textemdash Productivity Controls\textemdash} \\[2pt] Skilled Labor Share & 0.174 & 0.042 & 0.111 & 0.242 \\ Foreign Equity Share & 0.140 & 0.000 & 0.000 & 0.000 \\ Exporter & 0.237 & & & \\ State-Owned & 0.051 & & & \\[2pt] &\multicolumn{4}{c}{\it \textemdash Location Variables\textemdash} \\[2pt] Longitude & 2.041 & 1.984 & 2.068 & 2.102 \\ Latitude & 0.557 & 0.504 & 0.547 & 0.632 \\ \midrule \multicolumn{5}{p{9.3cm}}{\scriptsize Output, capital and materials are in 1,000s of 1998 RMB. Labor is measured in the number of employees. The skilled labor share and foreign equity share are unit-free proportions. The exporter and state-owned variables are binary indicators. The location coordinates are in radians.} \\ \bottomrule[1pt] \end{tabular} } \end{table}

The key production variables are defined as follows. Output ($Y$) is measured using sales. The labor input ($L$) is measured by the number of employees. Capital stock ($K$) is the net fixed assets for production and operation, and materials ($M$) are defined as the expenditure on direct materials. Output, capital and materials are deflated to the 1998 values using the producer price index, the price index for investment in fixed assets and the purchasing price index for industrial inputs, respectively, where the price indices are obtained from the NBS. The unit of monetary values is thousands of RMB (Chinese Yuan).
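As an aside on units, the location coordinates enter the analysis in radians. A hypothetical snippet of the corresponding preprocessing (the coordinates shown are illustrative, not taken from the dataset) would be:
\begin{verbatim}
# Hypothetical conversion of zipcode-centroid coordinates from degrees to radians
import numpy as np

lat_deg, lon_deg = 31.23, 121.47      # illustrative coordinates (Shanghai area)
S_i = np.radians([lon_deg, lat_deg])  # -> about (2.120, 0.545) radians
print(S_i)
\end{verbatim}
Euclidean distances between such coordinate pairs are the quantities in which the adaptive bandwidths reported below are measured.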
We include four productivity-modifying variables in the evolution process of firm productivity $\omega_{it}$: the share of high-skilled workers ($G_1$), which is defined as the fraction of workers with a university or comparable education and is time-invariant because the data on workers' education level are only available for 2004; the foreign equity share ($G_2$), which is measured by the proportion of equity provided by foreign investors; a binary export status indicator ($G_3$), which takes value one if the firm is an exporter and zero otherwise; and a binary state/public ownership status indicator ($G_4$), which takes value one if the firm is state-owned and zero otherwise.

Table \ref{tab:datasummary} shows the summary statistics, including the mean, 1st quartile, median and 3rd quartile, for these variables. For the production-function variables, the mean values are significantly larger than their medians, which suggests that their distributions are skewed to the right. Among firms in the chemical industry, 23.7\% are exporters and 5.1\% are state-owned. Most firms do not have foreign investors, and the average ratio of foreign to total equity is 0.14. On average, 17.4\% of employees in the industry have a college degree or equivalent.

\subsection{Estimation Results} \label{sec:results}

In order to estimate the locationally varying (semiparametric) production function and firm productivity process in \eqref{eq:prodfn_hicks_cd}--\eqref{eq:productivity_hicks_lawsp}, we use the data-driven leave-one-location-out cross-validation method to choose the optimal number of nearest neighboring locations in each step of the estimation ($h_1$ and $h_2$) used to smooth over the ``contextual'' location variables $S_i$ inside the unknown functional coefficients. These smoothing parameters regulate the spatial weighting of neighboring firms in kernel fitting and, as noted earlier, by selecting them via a data-driven procedure, we avoid the need to rely on \textit{ad hoc} specifications of both the spatial weights and the radii defining the extent of neighborhood influences. The optimal $h_1$ and $h_2$ values are 520 and 340 firm-years in the first- and second-step estimation, respectively. On average across all $s$, the corresponding adaptive bandwidths are 0.0171 and 0.0169 radians. These bandwidth values are reasonable, given our sample size and the standard deviations of the longitude and latitude,\footnote{For reference, the sample standard deviations of the longitude and latitude are respectively 0.0889 and 0.0941 radians.} and, evidently, are \textit{not} so large as to ``smooth out'' the firm location and imply location invariance/homogeneity. In fact, we can argue this more formally if kernel-smoothing is done using \textit{fixed} bandwidths, so that we can rely on the theoretical results of \citet{halletal2007}, whereby local-constant kernel methods can remove irrelevant regressors via data-driven over-smoothing (i.e., by selecting large bandwidths). When we re-estimate our locationally varying model in this manner, the optimal fixed bandwidths for the longitude and latitude in the first-step estimation are 0.009 and 0.010 radians, respectively; the corresponding second-step bandwidths are 0.024 and 0.023 radians. Just like in the case of adaptive bandwidths, these bandwidth values are fairly small relative to the variation in the data, providing strong evidence in support of the overall relevance of geographic location for firm production (i.e., against location invariance).
Our location-varying formulation of the production technology and productivity is also formally supported by the \citet{ullah1985} specification test described in Appendix \ref{sec:appx_test}. Using cross-validated fixed bandwidths, the bootstrap $p$-value is 0.001. At conventional significance levels, our locationally heterogeneous production model is thus confidently preferred to the location-invariant formulation. In what follows, we discuss our semiparametric results obtained using adaptive bandwidths. For inference, we use the bias-corrected bootstrap percentile intervals as described in Appendix \ref{sec:appx_inference}. The number of bootstrap replications is set to $B=1,000$.

\paragraph{Production Function.} We first report the production-function estimates from our main model, in which the production technology is locationally heterogeneous. We then compare these estimates with those obtained from the more conventional, location-invariant model that \textit{a priori} assumes a common production technology for all firms. The latter ``global'' formulation of the production function postulates constancy of the production relationship over space. This model is therefore fully parametric (with constant coefficients) and a special case of our locationally varying model when $S_i$ is fixed across all $i$. Its estimation is straightforward and follows directly from \eqref{eq:fst_est2_pre}--\eqref{eq:sst_est} by letting the adaptive bandwidths in both steps diverge to $\infty$, which, in effect, obviates the need to locally weight the data because all kernels will be the same (for details, see Appendix \ref{sec:appx_test}).\footnote{Following a suggestion provided by a referee, we also estimate the location-invariant model with location fixed effects added to the production function during the estimation. We find that the results do not change much when these location effects are included and therefore do not report them.}

\begin{table}[p] \centering \caption{Input Elasticity Estimates}\label{tab:coef} \footnotesize \makebox[\linewidth]{ \begin{tabular}{lcccc|c} \toprule[1pt] & \multicolumn{4}{c}{\it Locationally Varying} & \it Location-Invariant \\ & Mean & 1st Qu. & Median & 3rd Qu.& Point Estimate\\ \midrule Capital & 0.112 & 0.095 & 0.115 & 0.128 & 0.130 \\ & (0.104, 0.130) & (0.083, 0.116) & (0.110, 0.130) & (0.119, 0.147) & (0.118, 0.141) \\ Labor & 0.303 & 0.272 & 0.293 & 0.342 & 0.299 \\ & (0.285, 0.308) & (0.248, 0.284) & (0.278, 0.293) & (0.313, 0.356) & (0.280, 0.318) \\ Materials & 0.480 & 0.452 & 0.481 & 0.503 & 0.495 \\ & (0.466, 0.501) & (0.414, 0.467) & (0.437, 0.502) & (0.456, 0.524) & (0.460, 0.519) \\ \midrule \multicolumn{6}{p{13cm}}{\scriptsize The left panel summarizes point estimates of $\beta_{\kappa}(S_i)\ \forall\ \kappa\in\{K, L, M\}$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. The right panel reports their counterparts from a fixed-coefficient location-invariant model. } \\ \bottomrule[1pt] \end{tabular} } \end{table}

\begin{figure}[p] \centering \includegraphics[scale=0.36]{capital.jpg}\includegraphics[scale=0.36]{labor.jpg} \includegraphics[scale=0.36]{material.jpg}\includegraphics[scale=0.36]{RTS.jpg} \caption{Input Elasticity Estimates \\ {\small (Notes: Vertical lines correspond to location-invariant estimates)}} \label{fig:coef_elas} \end{figure}

Since our model has location-specific input elasticities, there is a distribution of them over space, and Table \ref{tab:coef} summarizes their point estimates.
The table also reports the elasticity estimates from the alternative, location-invariant model. The corresponding two-sided 95\% bias-corrected confidence intervals for these statistics are reported in parentheses. Based on our model, the mean (median) capital, labor and material elasticity estimates are 0.112, 0.303 and 0.480 (0.115, 0.293 and 0.481), respectively. Importantly, these location-specific elasticities show significant variation: for the capital and labor inputs, the first quartiles are significantly different from the third quartiles. Within the inter-quartile interval of their point estimates, the elasticities of capital, labor and materials increase by 0.033, 0.070 and 0.051, respectively, which correspond to 35\%, 26\% and 11\% changes. In comparison, the elasticity estimates from the location-invariant production function with fixed coefficients are all larger than the corresponding median estimates from our model and fall in between the second and third quartiles of our locationally varying point estimates.

Figure \ref{fig:coef_elas} provides a visualization of the non-negligible technological heterogeneity in the chemicals production technology across different locations in China, which the traditional location-invariant model assumes away. The figure plots histograms of the estimated location-specific input elasticities (and the returns to scale) with the location-invariant counterpart estimates depicted by vertical lines. Consistent with the results in Table \ref{tab:coef}, all distributions show relatively wide dispersion, and the locationally homogeneous model is evidently unable to provide a reasonable representation of the production technology across different regions.

\begin{table}[t] \centering \caption{Locationally Varying Returns to Scale Estimates}\label{tab:RTC} \footnotesize \makebox[\linewidth]{ \begin{tabular}{lcccc|cc} \toprule[1pt] & Mean & 1st Qu. & Median & 3rd Qu.& $= 1$ &$<1$ \\ \midrule RTS & 0.895 & 0.875 & 0.903 & 0.929 & 21.6\% & 82.3\% \\ & (0.820, 0.931) & (0.801, 0.912) & (0.827, 0.942) & (0.855, 0.968) & & \\ \midrule \multicolumn{7}{p{11.8cm}}{\scriptsize The left panel summarizes point estimates of $\sum_{\kappa}\beta_{\kappa}(S_i)$ with $\kappa\in\{K, L, M\}$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. The counterpart estimate of the returns to scale from a fixed-coefficient location-invariant model is 0.924 (0.865, 0.969). The right panel reports the shares of locations in which location-specific point estimates are (\textit{i}) not significantly different from 1 (constant returns to scale) and (\textit{ii}) statistically less than 1 (decreasing returns to scale). The former classification is based on a two-sided test, the latter is on a one-sided test.} \\ \bottomrule[1pt] \end{tabular} } \end{table}

Table \ref{tab:RTC} provides summary statistics of the estimated returns to scale (RTS) from our locationally varying production function (also see the bottom-right plot in Figure \ref{fig:coef_elas}). The mean RTS is 0.895, and the median is 0.903, with the inter-quartile range being 0.054. The right panel of Table \ref{tab:RTC} reports the fraction of locations in which the Chinese manufacturers of chemicals exhibit constant or decreasing returns to scale. This classification is based on the RTS point estimate being statistically equal to or less than one, respectively, at the 5\% significance level.
The ``$=1$'' classification is based on a two-sided test, whereas the ``$<1$'' test is one-sided. In most locations in China (82.3\%), the production technologies of the chemicals firms exhibit \textit{dis}economies of scale, but 21.6\% of regions show evidence of constant returns to scale (i.e., scale efficiency).

\begin{figure}[t] \centering \includegraphics[scale=0.3]{map.Returns.to.Scale.jpg} \caption{Spatial Distribution of Returns to Scale Estimates \\ {\small (Notes: The color shade cutoffs correspond to the first, second (median) and third quartiles)} } \label{fig:rts} \end{figure}

To further explore the locational heterogeneity in the production technology for chemicals in China, we plot the spatial distribution of the RTS estimates in the country in Figure \ref{fig:rts}. We find that the firms with the largest RTS are mainly located in the Southeast Coast provinces and in some parts of West and Northeast China. The area near Beijing also exhibits larger RTS. There are a few possible explanations for such a geographic distribution of the returns to scale. As noted earlier, spillovers and agglomeration have positive effects on the marginal productivity of inputs, which typically take the form of scale effects, and they may explain the high RTS on the Southeast Coast and in the Beijing area. Locality-specific resources, culture and policies can also facilitate firms' production processes. For example, the rich endowment of raw materials like coal, phosphate rock and sulfur makes provinces such as Guizhou, Yunnan and Qinghai among the largest fertilizer-production zones in China. Furthermore, RTS is also related to the life cycle of a firm. Usually, it is the small, young and fast-growing firms that enjoy higher RTS, whereas more mature firms that have grown bigger will have transitioned to a lower-RTS scale. This may explain the prevalence of higher-RTS firms in West and Northeast China.

\paragraph{Productivity Process.} We now analyze our semiparametric estimates of the firm productivity process in \eqref{eq:productivity_hicks_lawsp}. Table \ref{tab:prod.coef} summarizes the point estimates of the location-specific marginal effects of the productivity determinants in the evolution process of $\omega_{it}$, with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. In the last column of the left panel, for each productivity-enhancing control $G_{it}$, we also report the share of locations in which the location-specific point estimates are statistically positive (at the 5\% significance level) as inferred via a one-sided test.

\begin{table}[p] \centering \caption{Productivity Process Coefficient Estimates}\label{tab:prod.coef} \footnotesize \makebox[\linewidth]{ \begin{tabular}{lccccc|c} \toprule[1pt] & \multicolumn{5}{c}{\it Locationally Varying} & \it Location-Invariant \\ Variables& Mean & 1st Qu.
& Median & 3rd Qu.& $>0$ & Point Estimate\\ \midrule Lagged Productivity & 0.576 & 0.518 & 0.597 & 0.641 & 99.9\% & 0.497 \\ & (0.540, 0.591) & (0.469, 0.541) & (0.553, 0.614) & (0.580, 0.665) & & (0.455, 0.530) \\ Skilled Labor Share & 0.387 & 0.287 & 0.419 & 0.500 & 85.7\% & 0.387 \\ & (0.346, 0.395) & (0.241, 0.309) & (0.345, 0.459) & (0.471, 0.493) & & (0.345, 0.425) \\ Foreign Equity Share & 0.054 & --0.001 & 0.062 & 0.103 & 47.7\% & 0.056 \\ & (0.006, 0.074) & (--0.034, 0.066) & (0.033, 0.069) & (0.099, 0.099) & & (0.036, 0.075) \\ Exporter & --0.001 & --0.032 & --0.005 & 0.038 & 24.0\% & 0.006 \\ & (--0.011, 0.018) & (--0.041, --0.016) & (--0.012, 0.013) & (0.025, 0.067) & & (--0.008, 0.018) \\ State-Owned & 0.005 & --0.052 & 0.007 & 0.073 & 29.6\% & --0.043 \\ & (--0.021, 0.010) & (--0.101, --0.014) & (--0.028, 0.025) & (0.062, 0.076) & & (--0.072, --0.009) \\ \midrule \multicolumn{7}{p{16.5cm}}{\scriptsize The left panel summarizes point estimates of $\rho_j(S_i)\ \forall\ j=1,\dots,\dim(G)$ with the corresponding two-sided 95\% bias-corrected confidence intervals in parentheses. Also reported is the share of locations in which location-specific point estimates are statistically positive as inferred via a one-sided test. The right panel reports the counterparts from a fixed-coefficient location-invariant model.} \\ \bottomrule[1pt] \end{tabular} } \end{table}

\begin{figure}[p] \centering \includegraphics[scale=0.33]{lag_productivity.jpg} \includegraphics[scale=0.33]{lag_share_of_skilled_labor.jpg}\includegraphics[scale=0.33]{lag_capital_ratio.jpg} \includegraphics[scale=0.33]{lag_export.jpg}\includegraphics[scale=0.33]{lag_state-owned.jpg} \caption{Productivity Process Coefficient Estimates \\ {\small (Notes: Vertical lines correspond to location-invariant estimates)}} \label{fig:coef_prod} \end{figure}

The autoregressive coefficient on the lagged productivity, which measures the persistence of $\omega_{it}$, is 0.576 at the mean and 0.597 at the median, with the quartile statistics varying from 0.518 to 0.641. It is significantly positive for firms in virtually all locations. For firms in most locations (85.7\%), skilled labor has a large and significantly positive effect on productivity: a percentage-point increase in the skilled labor share is associated with an improvement in the next period's firm productivity by about 0.4\%, on average. Point estimates of the foreign ownership effect are positive in the majority of locations, but firms in only about half of the locations benefit from a statistically positive productivity-boosting effect of inbound foreign direct investment, with an average magnitude of only about one-seventh of that attributable to hiring more skilled labor. In line with the empirical evidence reported for China's manufacturing in the literature \citep[see][and references therein]{malikovetal2020}, firms in most regions show insignificant (and negative) effects of the export status on productivity. The ``learning by exporting'' effects are very limited and statistically positive in only a quarter of the locations. Interestingly, we find that state/public ownership is a significantly positive contributor to improvements in firm productivity in about a third of the locations in which the Chinese chemicals manufacturing firms operate.
This may be because the less productive state firms exited the market during the market-oriented transition in the late 1990s and early 2000s (prior to our sample period), and the remaining state-owned firms are larger and more productive \citep[also see][]{hsiehsong2015,zhaoqiankumbhakar2020}. Another potential reason may be that state ownership could have brought non-trivial financing benefits to these firms, which otherwise were usually financially constrained given the under-developed financial market in China during that period.

The far right panel of Table \ref{tab:prod.coef} reports the productivity effects of the $G_{it}$ controls estimated using the location-invariant model. Note that, under the assumption of location-invariant production, the evolution process of $\omega_{it}$ becomes a parametric linear model, and there is only one point estimate of each fixed marginal effect for all firms. Compared with the median estimates from our model, the location-invariant marginal effects tend to be smaller. While the persistence coefficient as well as the fixed coefficients on the skilled labor and foreign equity shares are positive and statistically significant, the location-invariant estimate of the state ownership effect on productivity is, however, significantly negative (for all firms, by design). Together with its tendency to underestimate the marginal effects, this underscores the importance of allowing sufficient flexibility in modeling heterogeneity across firms (across different locations, in our case) beyond the usual Hicks-neutral TFP.

The contrast between the two models is even more apparent in Figure \ref{fig:coef_prod}, which plots the distributions of the estimated marginal effects of the productivity-enhancing controls. Like before, the location-invariant counterparts are depicted by vertical lines. The distribution of each productivity modifier spans a relatively wide range, and the corresponding location-invariant estimates are evidently not good representatives of the central tendency of these distributions. For example, the productivity-boosting effect of the firm's skilled labor roughly varies between 0.06 and 0.61\% per percentage-point increase in the skilled labor share, depending on the location. The distribution of this marginal effect across locations is somewhat left-skewed, and the corresponding location-invariant effect estimate evidently does not measure the central tendency of these locationally varying effects well. Similar observations can be made about the other varying coefficients in the productivity process.

\paragraph{Productivity Decomposition.} We now examine the average productivity differentials for firms in different regions. To this end, we perform the locational decomposition proposed in Section \ref{sec:decomposition_level} to identify the sources of production differences that cannot be explained by input usage. Recall that, by our decomposition, the locational differential in the mean total productivity ($\Delta \overline{\text{PROD}}^{s,\kappa}_t$) accounts for the cross-regional variation in both the input elasticities ($\Delta\overline{\text{TECH}}_{t}^{s,\kappa}$) and the total factor productivity ($\Delta\overline{\text{TFP}}_{t}^{s,\kappa}$). It is therefore more inclusive than the conventional analyses that rely on fitting a common production technology for all firms regardless of their locations and thus confine cross-firm heterogeneity to differences in $\omega_{it}$ only.
\begin{table}[t] \centering \caption{Locational Productivity Differential Decomposition}\label{tab:decom} \footnotesize \makebox[\linewidth]{ \begin{tabular}{lcccc} \toprule[1pt] Components& Mean & 1st Qu. & Median & 3rd Qu.\\ \midrule & \multicolumn{4}{c}{\it Locationally Varying Model} \\ $\Delta\overline{\text{TECH}}^{s,\kappa}$ & 1.292 & 1.135 & 1.331 & 1.521 \\ $\Delta\overline{\text{TFP}}^{s,\kappa}$ & 0.574 & 0.203 & 0.571 & 0.893 \\ $\Delta \overline{\text{PROD}}^{s,\kappa}$ & 1.866 & 1.652 & 1.869 & 2.086 \\ \midrule & \multicolumn{4}{c}{\it Location-Invariant Model} \\ $\Delta \overline{\text{PROD}}^{s,\kappa}$ & 1.797 & 1.589 & 1.816 & 2.040 \\ \midrule \multicolumn{5}{p{7cm}}{\scriptsize The top panel summarizes point estimates of the locational mean productivity differential $\Delta \overline{\text{PROD}}^{s,\kappa}=\Delta\overline{\text{TECH}}^{s,\kappa}+ \Delta\overline{\text{TFP}}^{s,\kappa}$. The bottom panel reports the counterparts from a fixed-coefficient location-invariant model for which, by construction, $\Delta \overline{\text{PROD}}^{s,\kappa}= \Delta\overline{\text{TFP}}^{s,\kappa}$ with $\Delta\overline{\text{TECH}}^{s,\kappa}=0$. In both cases, the decomposition is pooled over the entire sample period, and the benchmark/reference location $\kappa$ is the one with the smallest mean production: $\kappa=\arg\min_{s} \overline{y}^{s}$.} \\ \bottomrule[1pt] \end{tabular} } \end{table}

Table \ref{tab:decom} presents the decomposition results (across locations $s$) following \eqref{eq:prodfn_hicks_cd-decomp}. Because we have only three years of data, we perform the decomposition by pooling over the entire sample period. Thus, reported are the average decomposition results across 2004--2006. Also note that, for a fixed benchmark location $\kappa$, the decomposition is done for each $s$-location separately. For the benchmark/reference location $\kappa$, we choose the zipcode with the smallest mean production, i.e., $\kappa=\arg\min_{s} \overline{y}^{s}$, where $\overline{y}^{s}$ is defined as the time average of \eqref{eq:prodfn_hicks_cd-save}.\footnote{Obviously, the choice of a reference location is inconsequential because its role is effectively that of a normalization.} Therefore, the numbers ($\times 100\%$) in Table \ref{tab:decom} can be interpreted as the percentage differences between the chemicals manufacturers operating in various locations ($s$) and those in the least-production-scale region $(\kappa)$ in China. Because the reference location is fixed, the results are comparable across $s$.

Based on our estimates, the mean productivity differential is 1.866, which means that, compared to the location with the smallest scale of chemicals production, other locations are, on average, 187\% more productive (or more effective in input usage). The inter-quartile range of the average productivity differential spans from 1.652 to 2.086.
Economically, these differences are large: firms located at the third quartile of the locational productivity distribution are about 43\% more productive than firms at the first quartile. When we decompose the productivity differential into the technology and TFP differentials, on average, $\Delta\overline{\text{TECH}}^{s,\kappa}$ is 2.3 times as large as $\Delta\overline{\text{TFP}}^{s,\kappa}$ and accounts for about 69\% of the total productivity differences across locations.\footnote{That is, the ratio of $\Delta\overline{\text{TECH}}^{s,\kappa}$ to $\Delta\overline{\text{PROD}}^{s,\kappa}$ is 0.69.} This suggests that cross-location technological heterogeneity in China's chemicals industry explains most of the productivity differential and that the regional TFP differences are \textit{relatively} more modest. Table \ref{tab:decom} also summarizes the locational productivity differential estimates from the standard location-invariant model. Given that this model assumes fixed coefficients (the same technology for all firms), no decomposition can be performed here, and all cross-location variation in productivity is \textit{a priori} attributed to TFP by design. Compared with our locationally-varying model, this model yields similar total productivity differentials across regions but, due to its inability to recognize technological differences, it grossly overestimates cross-location differences in TFP. \begin{figure}[t] \centering \includegraphics[scale=0.29]{map.Difference.in.Tchnologies.jpg}\includegraphics[scale=0.29]{map.Difference.in.Productivities.jpg} \caption{Locational Productivity Differential Decomposition Estimates Across Space \\ {\small (Notes: The color shade cutoffs correspond to the first, second (median) and third quartiles)} } \label{fig:decomposition_map} \end{figure} To explore the spatial heterogeneity in the decomposition components, we plot the spatial distributions of $\Delta\overline{\text{TECH}}^{s,\kappa}$, $\Delta\overline{\text{TFP}}^{s,\kappa}$ and $\Delta \overline{\text{PROD}}^{s,\kappa}$ on the map in Figure \ref{fig:decomposition_map}. The spatial distribution of $\Delta\overline{\text{TECH}}^{s,\kappa}$ aligns remarkably well with that of RTS in Figure \ref{fig:rts}. Noticeably, the regions of agglomeration in the chemicals industry (see Figure \ref{fig:number}) tend to demonstrate large technology differentials. In contrast, the spatial distribution of $\Delta\overline{\text{TFP}}^{s,\kappa}$ shows quite a different pattern, whereby the locations with large TFP differentials are less concentrated. Unlike on the $\Delta\overline{\text{TECH}}^{s,\kappa}$ map, the dark-shaded regions on the $\Delta\overline{\text{TFP}}^{s,\kappa}$ map are widely spread and have no clear overlap with the main agglomeration regions in the industry. The comparison between these two maps suggests that, at least for the Chinese chemicals manufacturing firms, the widely-documented agglomeration effects on firm productivity are associated more with scale effects operating through production technology than with improvements in overall TFP.
That is, by locating closer to other firms in the same industry, a firm may find it easier to pick up production technologies and know-how that improve the productiveness of inputs technologically and thus expand the input requirement set corresponding to the firm's output level\footnote{And, more generally, shift the \textit{family} of the firm's isoquants corresponding to a fixed level of $\omega_{it}$ toward the origin.} \textit{given} its total factor productivity. In contrast, agglomeration effects that increase the effectiveness of transforming all factors into outputs via the available technology (say, by adopting better business practices or management strategies) appear less likely to spill over among the Chinese manufacturers of chemicals. Importantly, if we \textit{a priori} assume a fixed-coefficient production function common to all firms, the technological effects of agglomeration (via input elasticities) would be wrongly attributed to the TFP differentials. \section{Concluding Remarks} \label{sec:conclusion} Although it is widely documented in the operations management literature that the firm's location matters for its performance, few empirical studies of operations efficiency explicitly control for it. This paper fills this gap by providing a semiparametric methodology for the identification of production functions in which locational factors have heterogeneous effects on the firm's production technology and productivity evolution. Our approach is novel in that we explicitly model spatial variation in parameters in the production-function estimation. We generalize the popular Cobb-Douglas production function in a semiparametric fashion by writing the input elasticities and productivity parameters as unknown functions of the firm's geographic location. In doing so, not only do we render the production technology location-specific, but we also accommodate neighborhood influences on firm operations, with their strength depending on the distance between firms. Importantly, this enables us to examine the role of cross-location differences in explaining the variation in operational productivity among firms.
The proposed model is superior to the alternative SAR-type production-function formulations because it (i) explicitly estimates the locational variation in production functions, (ii) is readily reconcilable with the conventional production axioms and, more importantly, (iii) can be identified from the data by building on the popular proxy-variable methods, which we extend to incorporate locational heterogeneity in firm production. Our methodology provides a practical tool for examining the effects of agglomeration and technology spillovers on firm performance and will be most useful for empiricists focused on the analysis of operations efficiency/productivity and its ``determinants.'' Using the methods proposed in our paper, we can separate the effects of firm location on production technology from those on firm productivity, and we find evidence consistent with the conclusion that agglomeration economies affect the productivity of Chinese chemicals manufacturers mainly through the scale effects of production \textit{technology} rather than through improvements in overall TFP. Comparing our flexible semiparametric model with the more conventional parametric model that postulates a common technology for all firms regardless of their location, we show that the latter does not provide an adequate representation of the industry and that conclusions based on its results can be misleading. In terms of managerial implications, our study re-emphasizes the importance of the firm's location for its operations efficiency in manufacturing industries. Our findings also suggest that hiring skilled labor has a larger productivity effect than other widely-discussed productivity-enhancing techniques, such as learning by exporting.
\section{Introduction} Sign language is one of the most commonly-used communication tools for the deaf community in their daily life. It mainly conveys information through both manual components (hand/arm gestures) and non-manual components (facial expressions, head movements, and body postures)~\cite{dreuw2007speech,ong2005automatic}. However, mastering this language is rather difficult and time-consuming for hearing people, which hinders direct communication between the two groups. To relieve this problem, isolated sign language recognition tries to classify a video segment into an independent gloss\footnote{Gloss is the atomic lexical unit used to annotate sign languages.}. Continuous sign language recognition (CSLR) goes further by sequentially translating image streams into a series of glosses to express a complete sentence, making it more promising for bridging the communication gap. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/figure1.pdf} \caption{Visualization of class activation maps with Grad-CAM~\cite{selvaraju2017grad} for VAC~\cite{Min_2021_ICCV} (baseline). Top: original frames. Bottom: activation maps. It's observed that, without extra supervision, the baseline fails to precisely locate the discriminative face and hand regions. } \label{fig1} \end{figure} In sign language, the left hand, right hand, and face play the most important roles in expressing glosses. Mostly, they convey information through horizontal/vertical hand movements, finger activities, and static gestures, assisted by facial expressions and mouth shapes to holistically deliver messages~\cite{dreuw2007speech,ong2005automatic}. As a result, hand and face features are often specially leveraged and incorporated in sign language systems. In isolated sign language recognition, early methods~\cite{freeman1995orientation,sun2013discriminative} leveraged hand-crafted features to describe the gestures and motion of both hands. Recent methods either build a pure pose-based system~\cite{tunga2021pose,hu2021signbert} upon detected keypoints for both hands and face, or construct appearance-based systems~\cite{hu2021hand,boukhayma20193d} with cropped patches for hands and face as collaborative inputs. In CSLR, CNN-LSTM-HMM~\cite{koller2019weakly} builds a multi-stream (hands and face) Hidden Markov Model (HMM) to integrate multiple visual inputs to boost recognition accuracy. STMC~\cite{zhou2020spatial} explicitly inserts a pose-estimation network and uses the detected regions (hand and face) as multiple cues to perform recognition. More recently, C$^2$SLR~\cite{zuo2022c2slr} leverages pre-extracted pose keypoint heatmaps as additional supervision to guide models to focus on hand and face areas. Although incorporating hand and face features has proven effective for improving the recognition performance of sign language systems, previous methods usually incur heavy computations and increased training complexity, relying on additional pose-estimation networks or extra expensive supervision (e.g., heatmaps). However, without these supervision signals, we find that current CSLR methods~\cite{Min_2021_ICCV,hao2021self,cheng2020fully} fail to precisely locate the hand and face regions (Fig.~\ref{fig1}) and are thus unable to effectively leverage these features. To more effectively excavate these key cues while avoiding huge computations or expensive supervision, we propose a self-emphasizing network (SEN) to explicitly emphasize informative spatial regions in a self-motivated way.
Specifically, SEN first employs a lightweight subnetwork to incorporate local spatial-temporal features and identify informative regions, and then dynamically emphasizes or suppresses input features via attention maps. It's also observed that not all frames contribute equally to recognition. For example, frames with hand/arm movements of the signer are usually more important than transitional frames. We thus present a temporal self-emphasizing module to dynamically emphasize those discriminative frames and suppress redundant ones. Remarkably, SEN yields new state-of-the-art accuracy on four large-scale CSLR datasets, notably outperforming previous methods equipped with hand and face features, even though those methods incur heavy computations and rely on expensive supervision. Visualizations verify the effects of SEN in emphasizing spatial and temporal features. \section{Related Work} \subsection{Continuous Sign Language Recognition} Sign language recognition methods can be roughly categorized into isolated sign language recognition~\cite{tunga2021pose,hu2021signbert,hu2021hand} and continuous sign language recognition~\cite{pu2019iterative,cheng2020fully,cui2019deep,niu2020stochastic,Min_2021_ICCV} (CSLR); we focus on the latter in this paper. CSLR tries to translate image frames into corresponding glosses in a weakly-supervised way: only sentence-level labels are provided. Early methods in CSLR usually depend on hand-crafted features~\cite{gao2004chinese,freeman1995orientation} to provide visual information, especially on body gestures, hands, and face, or rely on HMM-based systems~\cite{koller2016deepsign,han2009modelling,koller2017re,koller2015continuous} to perform temporal modeling and then translate sentences step by step. The HMM-based methods typically first employ a feature extractor to capture visual representations and then adopt an HMM to perform long-term temporal modeling. The recent success of convolutional neural networks (CNNs) and recurrent neural networks has brought great progress to CSLR. The widely-used CTC loss~\cite{graves2006connectionist} enables end-to-end training for recent methods by aligning target glosses with inputs. In particular, recent methods pay close attention to hands and face. For example, CNN-LSTM-HMM~\cite{koller2019weakly} employs a multi-stream HMM (including hands and face) to integrate multiple visual inputs to improve recognition accuracy. STMC~\cite{zhou2020spatial} utilizes a pose-estimation network to estimate human body keypoints and then sends cropped patches (including hands and face) for integration. More recently, C$^2$SLR~\cite{zuo2022c2slr} leverages pre-extracted pose keypoints as supervision to guide the model. Despite their high accuracy, these methods consume substantial additional computation and add training complexity. Practically, recent methods~\cite{pu2019iterative,pu2020boosting,cheng2020fully,cui2019deep,niu2020stochastic,Min_2021_ICCV} usually first employ a feature extractor to capture frame-wise visual representations, and then adopt a 1D CNN and a BiLSTM to perform short-term and long-term temporal modeling, respectively. However, several methods~\cite{pu2019iterative,cui2019deep} found that, under this paradigm, the feature extractor is not well trained, and proposed iterative training strategies to refine it, at the cost of much more computation.
More recent methods try to directly enhance the feature extractor by adding visual alignment losses~\cite{Min_2021_ICCV} or adopting pseudo labels~\cite{cheng2020fully,hao2021self} for supervision. We propose the self-emphasizing network to emphasize informative spatial features, which can be viewed as enhancing the feature extractor in a self-motivated way. \subsection{Spatial Attention} Spatial attention has proven effective in many fields, including image classification~\cite{cao2019gcnet,hu2018gather,woo2018cbam,hu2018squeeze}, scene segmentation~\cite{fu2019dual} and video classification~\cite{wang2018non}. SENet~\cite{hu2018squeeze}, CBAM~\cite{woo2018cbam}, SKNet~\cite{li2019selective} and ECA-Net~\cite{wang2020eca} devise lightweight channel attention modules for image classification. The widely used self-attention operator~\cite{wang2018non} employs dot-product feature similarities to build attention maps and aggregate long-term dependencies. However, the computational complexity of the self-attention operator is quadratic in the number of incorporated pixels, incurring a heavy burden for video-based tasks~\cite{wang2018non}. Instead of feature similarities, our SEN employs a learnable subnetwork to aggregate local spatial-temporal representations and generate spatial attention maps for each frame, which is much more lightweight than self-attention operators. Some works also propose to leverage external supervision to guide the spatial attention module. For example, GALA~\cite{linsley2018learning} collects click maps from games to supervise the spatial attention for image classification. A relation-guided spatial attention module~\cite{li2020relation} is designed to globally explore the discriminative regions for video-based person re-identification. MGAN~\cite{pang2019mask} introduces an attention network to emphasize visible pedestrian regions by modulating full-body features. In contrast to external supervision, our self-emphasizing network strengthens informative spatial regions in a self-motivated way, thus greatly lowering the required computations and training complexity. \section{Method} \subsection{Framework Overview} As shown in fig.~\ref{fig2}, the backbone of CSLR models consists of a feature extractor (a 2D CNN\footnote{Here we only consider feature extractors based on 2D CNNs, because recent findings~\cite{adaloglou2021comprehensive,zuo2022c2slr} show that 3D CNNs cannot provide gloss boundaries as precise as those of 2D CNNs, leading to lower accuracy.}), a 1D CNN, a BiLSTM, and a classifier (a fully connected layer) to perform prediction. Given a sign language video with $T$ input frames $x = \{x_{t}\}_{t=1}^T \in \mathcal{R}^{T \times 3\times H_0 \times W_0}$, a CSLR model aims to translate the input video into a series of glosses $y=\{ y_i\}_{i=1}^{N}$ expressing a sentence, with $N$ denoting the length of the label sequence. Specifically, the feature extractor first processes the input frames into frame-wise features $v = \{v_t\}_{t=1}^{T} \in \mathcal{R}^{T\times d}$. Then the 1D CNN and the BiLSTM perform short-term and long-term temporal modeling, respectively, based on these extracted visual representations. Finally, the classifier, trained with the widely-used CTC loss, predicts the probability of the target gloss sequence $p(y|x)$.
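To make this pipeline concrete, the following is a minimal runnable sketch of such a backbone; the 2D trunk below merely stands in for ResNet18, and all layer sizes are illustrative assumptions rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

# Minimal sketch of the described CSLR backbone (sizes are assumptions).
class CSLRBackbone(nn.Module):
    def __init__(self, num_glosses, d=512):
        super().__init__()
        self.cnn2d = nn.Sequential(               # frame-wise feature extractor
            nn.Conv2d(3, d, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cnn1d = nn.Sequential(               # {K5, P2, K5, P2}
            nn.Conv1d(d, d, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(d, d, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(d, 1024, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 1024, num_glosses + 1)  # +1: CTC blank

    def forward(self, x):                          # x: (B, T, 3, H, W)
        B, T = x.shape[:2]
        v = self.cnn2d(x.flatten(0, 1)).view(B, T, -1)      # (B, T, d)
        v = self.cnn1d(v.transpose(1, 2)).transpose(1, 2)   # (B, T/4, d)
        h, _ = self.lstm(v)                                  # long-term modeling
        return self.classifier(h).log_softmax(-1)            # inputs to CTC
\end{verbatim}
Training then minimizes \texttt{nn.CTCLoss} between these per-step log-probabilities and the sentence-level gloss labels.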
Fig.~\ref{fig2} shows an example of a common feature extractor consisting of multiple stages with several blocks in each. We sequentially place the SSEM and TSEM before the $3\times 3$ spatial convolution in each block to emphasize informative spatial and temporal features, respectively. When designing the architecture, efficiency is our core consideration, to avoid heavy computational burdens like previous methods~\cite{zhou2020spatial,zuo2022c2slr} based on heavy pose-estimation networks or expensive heatmaps. We next introduce our SSEM and TSEM, respectively. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/figure2.pdf} \caption{A overview for our SEN. It first employs a feature extractor (2D CNN) to capture frame-wise features, and then adopts a 1D CNN and a BiLSTM to perform short-term and long-term temporal modeling, respectively, followed by a classifier to predict sentences. We place our proposed spatial self-emphasizing module (SSTM) and temporal self-emphasizing module (TSEM) into each block of the feature extractor to emphasize the spatial and temporal features, respectively.} \label{fig2} \end{figure} \subsection{Spatial Self-Emphasizing Module (SSEM)} From fig.~\ref{fig1}, we argue current CSLR models fail to effectively leverage the informative spatial features, e.g., hands and face. We try to enhance the capacity of the feature extractor of CSLR models to incorporate such discriminative features without affecting its original spatial modeling ability. Practically, our SSEM is designed to first leverage the closely correlated local spatial-temporal features to identify the informative regions for each frame, and then augment original representations in the form of attention maps. As shown in fig.~\ref{fig3}, SSEM first projects the input features $s = \{s_t\}_{t=1}^T \in \mathcal{R}^{T \times C\times H \times W}$ into $s_r\in \mathcal{R}^{T \times C/r\times H \times W}$ to decrease the computational costs brought by SSEM, with $r$ the reduction factor as 16 by default. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/figure3.pdf} \caption{Illustration for our spatial self-emphasizing module (SSEM).} \label{fig3} \end{figure} The frame-wise features $s$ in the feature extractor are independently extracted for each frame by 2D convolutions, failing to incorporate local spatial-temporal features to distinguish the informative spatial regions. Besides, as the signer has to throw his/her arms and hands to express glosses, the informative regions in adjacent frames are always misaligned. Thus, we devise a multi-scale architecture to perceive spatial-temporal features in a large neighborhood to help identify informative regions. Instead of a large spatial-temporal convolution kernel, we employ $N$ parallel factorized branches with group-wise convolutions of progressive dilation rates to lower computations and increase the model capacity. As shown in fig.~\ref{fig3}, these $N$ branches own the same spatial-temporal kernel size $K_t\times K_s\times K_s$, with different spatial dilation rates $[1\cdots N]$. Features from different branches are multiplied with learnable factors $\{\sigma_{1}\dots \sigma_k\}$ to control the importance of different branches via gradient-based backward propagation, and are then added to mix information from different receptive fields. 
This multi-scale architecture is expressed as: \begin{equation} \label{e1} s_m = \sum_{i=1}^{N}{\sigma_i \times {\rm Conv}_i(s_r)} \end{equation} where the group-wise convolution ${\rm Conv}_i$ captures spatial-temporal features from a receptive field with dilation rate $(1,i,i)$. Notably, as the channels are downsized by a factor of $r$ in SSEM and we employ group-wise convolutions with small spatial-temporal kernels to capture multi-scale features, the overall architecture is rather lightweight, adding few (\textless 0.1\%) extra computations compared to the original model, as demonstrated in our ablative experiments. Next, $s_m$ is sent into a $1\times 1 \times 1$ convolution to project the channels back into $C$, and then passed through a sigmoid activation function to generate attention maps $M_s\in \mathcal{R}^{T \times C\times H \times W}$ with values ranging in $[0, 1]$ as: \begin{equation} \label{e2} M_s = {\rm Sigmoid}({\rm Conv}_{1\times 1\times 1}(s_m)) \end{equation} Finally, the attention maps $M_s$ are used to emphasize informative spatial regions of the input features. To avoid hurting the original representations and degrading accuracy, we propose to emphasize the input features in a residual way as: \begin{equation} \label{e3} u = (M_s-0.5\times \mathds{1})\odot s+s \end{equation} where $\odot$ denotes element-wise multiplication and $u$ is the output. Specifically, we first subtract $0.5\times \mathds{1}$ from the attention maps $M_s$, with $\mathds{1}\in \mathcal{R}^{T \times C\times H \times W}$ denoting an all-one matrix, to shift the range of values in $M_s$ to $[-0.5, 0.5]$. Then we element-wisely multiply the resulting attention maps with the input features $s$ to dynamically emphasize the informative regions and suppress unnecessary areas: positions where $M_s$ exceeds 0.5 (i.e., positive after the shift) strengthen the corresponding input features, while the others weaken them. Finally, we add the modulated features to the input features $s$, emphasizing or suppressing certain spatial features while avoiding hurting the original representations.
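For concreteness, a minimal sketch of SSEM is given below in a channel-first $(B, C, T, H, W)$ layout; the depthwise grouping, padding scheme, and branch-weight initialization are our assumptions rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

# Minimal sketch of SSEM (layout and grouping are assumptions).
class SSEM(nn.Module):
    def __init__(self, C, r=16, Kt=9, Ks=3, N=3):
        super().__init__()
        Cr = C // r
        self.reduce = nn.Conv3d(C, Cr, kernel_size=1)   # channel reduction
        # N parallel group-wise branches with spatial dilation rates 1..N.
        self.branches = nn.ModuleList([
            nn.Conv3d(Cr, Cr, (Kt, Ks, Ks),
                      padding=(Kt // 2, d * (Ks // 2), d * (Ks // 2)),
                      dilation=(1, d, d), groups=Cr)
            for d in range(1, N + 1)
        ])
        self.sigma = nn.Parameter(torch.ones(N))        # learnable branch weights
        self.expand = nn.Conv3d(Cr, C, kernel_size=1)   # back to C channels

    def forward(self, s):                               # s: (B, C, T, H, W)
        s_r = self.reduce(s)
        s_m = sum(w * b(s_r) for w, b in zip(self.sigma, self.branches))
        M_s = torch.sigmoid(self.expand(s_m))           # attention map in [0, 1]
        return (M_s - 0.5) * s + s                      # residual emphasis
\end{verbatim}
Note that the residual form $(M_s-0.5)\odot s+s$ leaves a feature unchanged wherever the attention map is neutral ($M_s=0.5$), which is why it is safer than plain multiplicative gating.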
\begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{figures/figure4.pdf} \caption{Illustration for our temporal self-emphasizing module (TSEM).} \label{fig4} \end{figure} \subsection{Temporal Self-Emphasizing Module (TSEM)} We argue that not all frames in a video contribute equally to recognition: some frames are more discriminative than others. For example, frames in which the signer moves his/her arms to express a sign are usually more important than transitional frames or idle frames with meaningless contents. However, the feature extractor only employs 2D spatial convolutions to capture spatial features for each frame, treating frames equally without considering their temporal correlations. We propose a temporal self-emphasizing module (TSEM) to adaptively emphasize discriminative frames and suppress redundant ones. As shown in fig.~\ref{fig4}, input features $u \in \mathcal{R}^{T \times C\times H \times W}$ first undergo a global average pooling layer to eliminate the spatial dimensions, i.e., $H$ and $W$. Then these features pass through a convolution with kernel size of 1 to reduce the channels by a factor of $r$ into $u_r \in \mathcal{R}^{T \times C/r}$ as: \begin{equation} \label{e4} u_r = {\rm Conv}_{K=1}({\rm AvgPool}(u)) \end{equation} where $K$ denotes the kernel size. To better exploit local temporal movements for identifying the discriminative frames, we leverage the temporal difference operator to incorporate motion information between adjacent frames. Specifically, we calculate the difference between two adjacent frames of $u_r$ as approximate motion information, and then concatenate it with the appearance features $u_r$ as: \begin{equation} \label{e5} u_m = {\rm Concat}([u_r, u_r(t+1)-u_r]) \end{equation} Next, we send $u_m$ into a 1D temporal convolution with kernel size $P_t$ to capture short-term temporal information. As the size of $u_m$ is rather small, we employ a normal temporal convolution here instead of a multi-scale architecture. The features then undergo a convolution with kernel size of 1 to project the channels back into $C$, and pass through a sigmoid activation function to generate attention maps $M_t \in \mathcal{R}^{T \times C}$ as: \begin{equation} \label{e6} M_t = {\rm Sigmoid}({\rm Conv}_{K=1}(u_m)) \end{equation} Finally, we employ $M_t$ to emphasize the discriminative features of the input $u$ in a residual way as: \begin{equation} \label{e7} o = (M_t-0.5\times \mathds{1})\odot u+u \end{equation} where $\odot$ denotes element-wise multiplication and $o$ is the output.
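Analogously, a minimal sketch of TSEM in the same $(B, C, T, H, W)$ layout follows; the zero-padded last-frame difference and the intermediate channel widths are our assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of TSEM (padding and widths are assumptions).
class TSEM(nn.Module):
    def __init__(self, C, r=16, Pt=9):
        super().__init__()
        Cr = C // r
        self.reduce = nn.Conv1d(C, Cr, kernel_size=1)
        self.temporal = nn.Conv1d(2 * Cr, Cr, kernel_size=Pt, padding=Pt // 2)
        self.expand = nn.Conv1d(Cr, C, kernel_size=1)

    def forward(self, u):                        # u: (B, C, T, H, W)
        u_r = self.reduce(u.mean(dim=(3, 4)))    # spatial avg-pool -> (B, C/r, T)
        diff = F.pad(u_r[:, :, 1:] - u_r[:, :, :-1], (0, 1))  # u_r(t+1) - u_r(t)
        u_m = torch.cat([u_r, diff], dim=1)      # appearance + motion cues
        M_t = torch.sigmoid(self.expand(self.temporal(u_m)))  # (B, C, T)
        M_t = M_t.unsqueeze(-1).unsqueeze(-1)    # broadcast over H and W
        return (M_t - 0.5) * u + u               # residual frame emphasis
\end{verbatim}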
\section{Experiments} \subsection{Experimental Setup} \subsubsection{Datasets.} \textbf{PHOENIX14}~\cite{koller2015continuous} and \textbf{PHOENIX14-T}~\cite{camgoz2018neural} are both recorded from a German weather forecast broadcast in front of a clean background with a resolution of 210 $\times$ 260. They contain 6841/8247 sentences with a vocabulary of 1295/1085 signs, divided into 5672/7096 training samples, 540/519 development (Dev) samples and 629/642 testing (Test) samples. \textbf{CSL-Daily}~\cite{zhou2021improving} is recorded indoors with 20654 sentences, divided into 18401 training samples, 1077 development (Dev) samples and 1176 testing (Test) samples. \textbf{CSL}~\cite{huang2018video} is collected in a laboratory environment by fifty signers, with a vocabulary size of 178 across 100 sentences. It contains 25000 videos, divided into training and testing sets by a ratio of 8:2. \subsubsection{Training details.} We adopt ResNet18~\cite{he2016deep} as the 2D CNN, with ImageNet~\cite{deng2009imagenet} pretrained weights. We place SSEM and TSEM before the second convolution in each block. The 1D CNN consists of a sequence of \{K5, P2, K5, P2\} layers, where K$x$ and P$x$ denote a 1D convolutional layer and a pooling layer with kernel size $x$, respectively. We then adopt a two-layer BiLSTM with 1024 hidden states, followed by a fully connected layer for prediction. We train our model for 80 epochs with an initial learning rate of 0.0001, decayed by a factor of 5 after epochs 40 and 60. The Adam optimizer is adopted with weight decay 0.001 and batch size 2. All frames are first resized to 256$\times$256 and then randomly cropped to 224$\times$224, with 50\% horizontal flipping and $\pm$20\% random temporal scaling during training. During inference, simply a central 224$\times$224 crop is selected. We use the VE and VA losses from VAC~\cite{Min_2021_ICCV} for extra supervision. \subsubsection{Evaluation Metric.} We use Word Error Rate (WER) as the evaluation metric. It is defined as the minimal number of \textbf{sub}stitution, \textbf{ins}ertion, and \textbf{del}etion operations needed to convert the predicted sentence into the reference sentence, normalized by the reference length: \begin{equation} \label{e11} {\rm WER} = \frac{\#{\rm sub}+\#{\rm ins}+\#{\rm del}}{\#{\rm reference}}. \end{equation} Note that the \textbf{lower} the WER, the \textbf{better} the accuracy.
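Equivalently, WER can be computed with a standard edit-distance dynamic program; the snippet below is a minimal illustration of the metric (our own, not the authors' evaluation script), with made-up gloss sequences.
\begin{verbatim}
# Minimal WER sketch via edit distance (illustration only).
def wer(reference, prediction):
    """Word error rate between two gloss sequences (lists of tokens)."""
    R, P = len(reference), len(prediction)
    # dp[i][j]: edit distance between reference[:i] and prediction[:j]
    dp = [[0] * (P + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        dp[i][0] = i                                   # deletions
    for j in range(P + 1):
        dp[0][j] = j                                   # insertions
    for i in range(1, R + 1):
        for j in range(1, P + 1):
            sub = dp[i - 1][j - 1] + (reference[i - 1] != prediction[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[R][P] / max(R, 1)

print(wer("REGEN NORD WIND".split(), "REGEN WIND".split()))  # ~0.333
\end{verbatim}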
\begin{table}[t] \centering \begin{tabular}{cccc} \hline Configurations & FLOPs & Dev(\%) & Test(\%)\\ \hline - & 3.64G& 21.2 & 22.3\\ $K_t$=9, $K_s$=3, $N$=\textbf{1} & +0.4M & 20.5 & 22.0 \\ $K_t$=9, $K_s$=3, $N$=\textbf{2} & +0.6M& 20.2 & 21.8 \\ $K_t$=9, $K_s$=3, $N$=\textbf{3} & +0.8M & \textbf{19.9} & \textbf{21.4} \\ $K_t$=9, $K_s$=3, $N$=\textbf{4} & +1.0M & 20.2 & 21.7 \\ $K_t$=\textbf{7}, $K_s$=3, $N$=3 & +0.7M & 20.2 & 21.6 \\ $K_t$=\textbf{11}, $K_s$=3, $N$=3 & +1.0M & 20.3 & 21.8 \\ $K_t$=9, $K_s$=\textbf{7}, $N$=1 & +2.9M & 20.5 & 22.0 \\ \hline \end{tabular} \caption{Ablations for the multi-scale architecture of SSEM on the PHOENIX14 dataset.} \label{tab1} \end{table} \iffalse \begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ Stage 1 & 20.6 & 21.9 \\ Stage 1-2 & 20.4 & 21.7 \\ Stage 1-3 & 20.1 & 21.6 \\ Stage 1-4 & \textbf{19.9} & \textbf{21.4} \\ \hline \end{tabular} \caption{Ablations for the numbers of SSEMs on the PHOENIX14 dataset.} \label{tab2} \end{table} \fi \begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ $M_s \odot s$ & 22.3 & 23.4 \\ $M_s \odot s + s$ & 20.6 & 21.7 \\ $(M_s -0.5\times \mathds{1}) \odot s$ & 20.2 & 21.5 \\ $(M_s -0.5\times \mathds{1}) \odot s +s $ & \textbf{19.9} & \textbf{21.4}\\ \hline \end{tabular} \caption{Ablations for the implementations of SSEM to augment input features on the PHOENIX14 dataset.} \label{tab3} \end{table} \begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 19.9 & 21.4 \\ \hline $u_r$ & 19.8 & 21.2 \\ ${\rm Concat}([u_r, u_r(t+1)-u_r])$ & \textbf{19.5} & \textbf{21.0} \\ \hline $P_t$ = 7 & 19.6 & 21.2 \\ $P_t$ = 9 & \textbf{19.5} & \textbf{21.0} \\ $P_t$ = 11 & 19.7 & 21.3 \\ \hline \end{tabular} \caption{Ablations for TSEM on the PHOENIX14 dataset.} \label{tab4} \end{table} \begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ SSEM & 19.9 & 21.4\\ TSEM & 20.5 & 21.7 \\ SSEM + TSEM & 19.8 & 21.4 \\ TSEM + SSEM & 19.6 & 21.2 \\ Paralleled & \textbf{19.5} & \textbf{21.0} \\ \hline \end{tabular} \caption{Ablations for the effectiveness of SSEM and TSEM on the PHOENIX14 dataset.} \label{tab5} \end{table} \begin{table}[t] \centering \setlength\tabcolsep{3pt} \begin{tabular}{lcc} \hline Methods & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ w/ SENet~\cite{hu2018squeeze} & 20.7 & 21.6 \\ w/ CBAM~\cite{woo2018cbam} & 20.5 & 21.3 \\ \hline CNN+LSTM+HMM~\cite{koller2019weakly} & 26.0 & 26.0 \\ STMC~\cite{zhou2020spatial} & 21.1 & 20.7 \\ C$^2$SLR~\cite{zuo2022c2slr} & 20.5 & 20.4 \\ \hline SEN & \textbf{19.5} & \textbf{21.0} \\ \hline \end{tabular} \caption{Comparison with channel attention methods and with methods using hand and face features on the PHOENIX14 dataset.} \label{tab6} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/spatial_figure.pdf} \caption{Visualizations of class activation maps by Grad-CAM~\cite{selvaraju2017grad}. Top: raw frames; Middle: class activation maps of our baseline; Bottom: class activation maps of our SEN. Our baseline usually focuses on nowhere or only attends to a single hand or face. Our SEN could generally focus on the human body (light yellow areas) and pays special attention to informative regions like hands and face (dark red areas).} \label{fig5} \end{figure} \subsection{Ablation Study} We perform ablation studies on the PHOENIX14 dataset and report results on both the development (Dev) and testing (Test) sets. \subsubsection{Effects of the multi-scale architecture of SSEM.} Tab.~\ref{tab1} ablates the implementations of the multi-scale architecture of SSEM. Our baseline achieves 21.2\% and 22.3\% WER on the Dev and Test Sets. When fixing $K_t$=9 and $K_s$=3 and varying the number of branches $N$ to expand the spatial receptive field, performance consistently improves until $N$ reaches 3; a larger $N$ brings no further gain. We thus set $N$ to 3 by default and test the effects of $K_t$. One can see that either increasing $K_t$ to 11 or decreasing $K_t$ to 7 leads to worse performance. We thus adopt $K_t$=9 by default. Notably, SSEM brings few extra computations compared to our baseline. For example, the best-performing SSEM with $K_t$=9, $K_s$=3 and $N$=3 adds only 0.8M (\textless 0.1\%) extra FLOPs, which is negligible compared to the 3.64G FLOPs of our baseline model. Finally, we compare our proposed multi-scale architecture with a normal implementation of higher computational cost. The receptive field of SSEM with $K_t$=9, $K_s$=3 and $N$=3 is identical to that of a normal convolution with $K_t$=9 and $K_s$=7. As shown in the bottom row of tab.~\ref{tab1}, the normal convolution not only brings more computations than SSEM but also performs worse, verifying the effectiveness of our architecture. \subsubsection{Implementations of SSEM to augment input features.} Tab.~\ref{tab3} ablates the implementations of SSEM to augment the original features. It's first observed that directly multiplying the attention maps $M_s$ with the input features $s$ severely degrades performance, which we attribute to its destroying the input feature distributions. Implemented in a residual way by adding $s$, $M_s \odot s + s$ notably relieves this phenomenon and achieves +0.6\% \& +0.6\% on the Dev and Test Sets. Further, subtracting $0.5\times \mathds{1}$ from the attention maps $M_s$ before element-wisely multiplying with $s$, so as to either emphasize or suppress each position, brings a +1.0\% \& +0.8\% performance boost. Finally, updating this implementation in a residual way by adding the input features $s$, i.e., $(M_s -0.5\times \mathds{1}) \odot s +s$, achieves a notable performance boost of +1.3\% \& +0.9\%. \subsubsection{Study on TSEM.} Tab.~\ref{tab4} ablates the configurations of TSEM, with SSEM adopted as the baseline. It's first noticed that incorporating motion information by concatenating $u_r(t+1)-u_r$ with $u_r$ slightly outperforms using $u_r$ alone to capture short-term temporal dependencies, verifying the effectiveness of local motion information. Next, when varying $P_t$, it's observed that $P_t$=9 achieves the best performance among $P_t$=[7,9,11], and it is adopted by default in the following. \subsubsection{Study on the effectiveness of SSEM and TSEM.} Tab.~\ref{tab5} studies how to combine SSEM with TSEM. We first notice that using only SSEM or TSEM already brings a notable performance boost, by +1.3\% \& +0.9\% and +0.7\% \& +0.6\% on the Dev and Test Sets, respectively.
When further combining SSEM with TSEM, by sequentially placing SSEM before TSEM (SSEM+TSEM), placing TSEM before SSEM (TSEM+SSEM), or placing SSEM and TSEM in parallel, it's observed that the parallel placement performs best, with a +1.7\% \& +1.3\% performance boost on the Dev and Test Sets, respectively; we adopt it as the default setting. \subsubsection{Comparison with other methods.} We compare our SEN with well-known channel attention methods like SENet~\cite{hu2018squeeze} and CBAM~\cite{woo2018cbam}, and with previous CSLR methods that acquire hand and face features through extra pose-estimation networks or pre-extracted heatmaps. In the upper part of tab.~\ref{tab6}, one can see that SEN largely outperforms these channel attention methods, owing to its superior ability to emphasize informative hand and face features. In the bottom part of tab.~\ref{tab6}, it's observed that SEN surpasses previous CSLR methods equipped with hand and face features, even though they employ extra heavy networks or expensive supervision. These results verify the effectiveness of our SEN in leveraging hand and face features. \subsection{Visualizations} \subsubsection{Visualization for SSEM.} We sample a few frames expressing a gloss and plot the class activation maps of our baseline and SEN with Grad-CAM~\cite{selvaraju2017grad} in fig.~\ref{fig5}. The activation maps generated by our baseline usually focus on nowhere or only attend to a single hand or the face, failing to fully cover the informative regions (e.g., hands and face). Instead, our SEN generally focuses on the human body (light yellow areas) and pays special attention to those discriminative regions like hands and face (dark red areas). These visualizations show that, without additional expensive supervision, our SEN can still effectively leverage the informative spatial features in a self-motivated way. \begin{table*}[t] \centering \setlength\tabcolsep{3pt} \begin{tabular}{cccccccc} \hline \multirow{3}{*}{Methods} &\multirow{3}{*}{Backbone} & \multicolumn{4}{c}{PHOENIX14} & \multicolumn{2}{c}{PHOENIX14-T} \\ & &\multicolumn{2}{c}{Dev(\%)} & \multicolumn{2}{c}{Test(\%)} & \multirow{2}{*}{Dev(\%)} & \multirow{2}{*}{Test(\%)}\\ & &del/ins & WER & del/ins& WER & & \\ \hline Align-iOpt~\cite{pu2019iterative}& 3D-ResNet &12.6/2 & 37.1& 13.0/2.5 & 36.7 & -&-\\ Re-Sign~\cite{koller2017re}& GoogLeNet&- & 27.1 &- &26.8 &- &-\\ SFL~\cite{niu2020stochastic}& ResNet18 & 7.9/6.5 & 26.2 & 7.5/6.3& 26.8 & 25.1&26.1\\ FCN~\cite{cheng2020fully}& Custom & - & 23.7 & -& 23.9 & 23.3& 25.1\\ CMA~\cite{pu2020boosting} & GoogLeNet & 7.3/2.7 & 21.3 & 7.3/2.4 & 21.9 & -&-\\ VAC~\cite{Min_2021_ICCV}& ResNet18 & 7.9/2.5 & 21.2 &8.4/2.6 & 22.3 &- &-\\ SMKD~\cite{hao2021self}& ResNet18 &6.8/2.5 &20.8 &6.3/2.3 & 21.0 & 20.8 & 22.4\\ \hline SLT$^*$~\cite{camgoz2018neural}& GoogLeNet & - & - & - & - & 24.5 & 24.6\\ CNN+LSTM+HMM$^*$~\cite{koller2019weakly}& GoogLeNet & - &26.0 & - & 26.0 & 22.1 & 24.1 \\ DNF$^*$~\cite{cui2019deep}& GoogLeNet & 7.3/3.3 &23.1& 6.7/3.3 & 22.9 & - & -\\ STMC$^*$~\cite{zhou2020spatial}& VGG11 & 7.7/3.4 &21.1 & 7.4/2.6 & 20.7 & 19.6 & 21.0\\ C$^2$SLR$^*$~\cite{zuo2022c2slr} & ResNet18 & - & 20.5 &- & 20.4 & 20.2 & 20.4 \\ \hline Baseline & ResNet18 & 7.9/2.5 & 21.2 &8.4/2.6 & 22.3 & 21.1 & 22.8\\ \textbf{SEN (Ours)} & ResNet18 & 5.8/2.6 &\textbf{19.5} & 7.3/4.0 & \textbf{21.0} & \textbf{19.3} & \textbf{20.7} \\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on the PHOENIX14 and PHOENIX14-T datasets.
$*$ indicates that extra cues such as face or hand features are incorporated via additional networks or pre-extracted heatmaps.} \label{tab7} \end{table*} \subsubsection{Visualization for TSEM.} We visualize the temporal attention maps of TSEM in fig.~\ref{fig6}, sampling several frames corresponding to the output gloss 'nord' as an example. The darker the color, the higher the weight. One can find that TSEM tends to allocate higher weights to frames with rapid movements (the latter two frames in the first line; the middle three frames in the second line) and lower weights to static frames with few body movements. This observation is consistent with human perception, as we pay more attention to moving objects in the visual field to capture key movements. Such frames can also be considered to convey the more important patterns for expressing a sign. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/temporal_figure.pdf} \caption{Visualizations of temporal attention maps for TSEM. One can find that TSEM highlights frames with rapid movements and suppresses static frames.} \label{fig6} \end{figure} \subsection{Comparison with State-of-the-Art Methods} \textbf{PHOENIX14} and \textbf{PHOENIX14-T}. Tab.~\ref{tab7} shows a comprehensive comparison between our SEN and other state-of-the-art methods. We notice that, with few extra computations, SEN outperforms other state-of-the-art methods on both datasets. In particular, SEN outperforms previous CSLR methods whose hand and face features are acquired by heavy pose-estimation networks or pre-extracted heatmaps (marked with $*$), without requiring additional expensive supervision. \textbf{CSL-Daily}. CSL-Daily is a recently released large-scale dataset with the largest vocabulary size (2k) among commonly-used CSLR datasets, covering daily contents. Tab.~\ref{tab8} shows that our SEN achieves new state-of-the-art accuracy on this challenging dataset by a clear margin, demonstrating its ability to generalize to real-world scenarios. \textbf{CSL}. As shown in tab.~\ref{tab9}, our SEN achieves superior accuracy (0.8\% WER) on this well-examined dataset, outperforming existing CSLR methods. \begin{table}[t] \centering \setlength\tabcolsep{2pt} \begin{tabular}{ccc} \hline Methods& Dev(\%) & Test(\%)\\ \hline LS-HAN~\cite{huang2018video} & 39.0 & 39.4\\ TIN-Iterative~\cite{cui2019deep} & 32.8 & 32.4\\ Joint-SLRT~\cite{camgoz2020sign} & 33.1 & 32.0 \\ FCN~\cite{cheng2020fully} & 33.2 & 32.5 \\ BN-TIN~\cite{zhou2021improving} & 33.6 & 33.1 \\ \hline Baseline & 32.8 & 32.3\\ \textbf{SEN(Ours)} & \textbf{31.1} & \textbf{30.7} \\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on the CSL-Daily dataset~\cite{zhou2021improving}.} \label{tab8} \end{table} \begin{table}[t] \centering \setlength\tabcolsep{2pt} \begin{tabular}{cc} \hline Methods& WER(\%)\\ \hline SubUNet~\cite{cihan2017subunets} & 11.0\\ SF-Net~\cite{yang2019sf} & 3.8 \\ FCN~\cite{cheng2020fully} & 3.0 \\ STMC~\cite{zhou2020spatial} & 2.1 \\ VAC~\cite{Min_2021_ICCV} & 1.6 \\ C$^2$SLR~\cite{zuo2022c2slr} & 0.9 \\ \hline Baseline & 3.5\\ \textbf{SEN(Ours)} & \textbf{0.8} \\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on the CSL dataset~\cite{huang2018video}.} \label{tab9} \end{table} \section{Conclusion} This paper proposes a self-motivated architecture, coined SEN, to adaptively emphasize informative spatial and temporal features. Without extra expensive supervision, SEN outperforms existing CSLR methods on four CSLR datasets.
Visualizations confirm the effectiveness of SEN in leveraging discriminative hand and face features. \section{Introduction} Sign language is one of the most commonly-used communication tools for the deaf community in their daily life. It mainly conveys information by both manual components (hand/arm gestures), and non-manual components (facial expressions, head movements, and body postures)~\cite{dreuw2007speech,ong2005automatic}. However, mastering this language is rather difficult and time-consuming for the hearing people, thus hindering direct communications between two groups. To relieve this problem, isolated sign language recognition tries to classify a video segment into an independent gloss\footnote{Gloss is the atomic lexical unit to annotate sign languages.}. Continuous sign language recognition (CSLR) progresses by sequentially translating image streams into a series of glosses to express a complete sentence, more prospective towards bridging the communication gap. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/figure1.pdf} \caption{Visualization of class activation maps with Grad-CAM~\cite{selvaraju2017grad} for VAC~\cite{Min_2021_ICCV} (baseline). Top: Original frames. Bottom: activation maps. It's observed that without extra supervision, it fails to locate discriminative face and hand regions precisely. } \label{fig1} \end{figure} In sign language, the left hand, right hand, and face play the most important role in expressing glosses. Mostly, they convey the information through horizontal/vertical hand movements, finger activities, and static gestures, assisted with facial expressions and mouth shapes to holistically deliver messages~\cite{dreuw2007speech,ong2005automatic}. As a result, hand and face, are always especially leveraged and incorporated in sign language systems. In isolated sign language recognition, early methods~\cite{freeman1995orientation,sun2013discriminative} leveraged hand-crafted features to describe the gestures and motion of both hands. Recent methods either choose to build a pure pose-based system~\cite{tunga2021pose,hu2021signbert} based on detected keypoints for both hands and face, or construct appearance-based systems~\cite{hu2021hand,boukhayma20193d} with cropped patches for hands and face as collaborative inputs. In CSLR, CNN-LSTM-HMM~\cite{koller2019weakly} builds a multi-stream (hands and face) Hidden-Markov-Model (HMM) to integrate multiple visual inputs to boost recognition accuracy. STMC~\cite{zhou2020spatial} explicitly inserts a pose-estimation network and uses the detected regions (hand and face) as multiple cues to perform recognition. More recently, C$^2$SLR~\cite{zuo2022c2slr} leverages the pre-extracted pose keypoints heatmaps as additional supervision to guide models to focus on hand and face areas. Although it has been proven effective to incorporate hand and face features to improve recognition performance for sign language systems, previous methods usually come at huge computations with increased training complexity, and rely on additional pose estimation networks or extra expensive supervision (e.g., heatmaps). However, without these supervision signals, we find current methods~\cite{Min_2021_ICCV,hao2021self,cheng2020fully} in CSLR fail to precisely locate the hand and face regions (Fig.~\ref{fig1}), thus unable to effectively leverage these features. 
To more effectively excavate these key cues but avoid introducing huge computations or relying on expensive supervision, we propose a self-emphasizing network (SEN) to explicitly emphasize informative spatial regions in a self-motivated way. Specifically, SEN first employs a lightweight subnetwork to incorporate local spatial-temporal features to identify informative regions, and then dynamically emphasizes or suppresses input features via attention maps. It's also observed that not all frames contribute equally to recognition. For example, frames with hand/arm movements of the signer are usually more important than those transitional frames. We present a temporal self-emphasizing module to emphasize those discriminative frames and suppress redundant ones dynamically. Remarkably, SEN yields new state-of-the-art accuracy upon four large-scale CSLR datasets, especially outperforming previous methods equipped with hand and face features, even though they always come at huge computations and rely on expensive supervision. Visualizations verify the effects of SEN in emphasizing spatial and temporal features. \section{Related Work} \subsection{Continuous Sign Language Recognition} Sign language recognition methods can be roughly categorized into isolated sign language recognition~\cite{tunga2021pose,hu2021signbert,hu2021hand} and continuous sign language recognition~\cite{pu2019iterative,cheng2020fully,cui2019deep,niu2020stochastic,Min_2021_ICCV} (CSLR), and we focus on the latter in this paper. CSLR tries to translate image frames into corresponding glosses in a weakly-supervised way: only sentence-level label is provided. Early methods in CSLR usually depend on hand-crafted features~\cite{gao2004chinese,freeman1995orientation} to provide visual information, especially body gestures, hands, and face, or rely on HMM-based systems~\cite{koller2016deepsign,han2009modelling,koller2017re,koller2015continuous} to perform temporal modeling and then translate sentences step by step. The HMM-based methods typically first employ a feature extractor to capture visual representations and then adopt an HMM to perform long-term temporal modeling. The recent success of convolutional neural networks (CNNs) and recurrent neural networks brings huge progress for CSLR. The widely-used CTC loss~\cite{graves2006connectionist} enables end-to-end training for recent methods by aligning target glosses with inputs. Especially, hands and face are paid close attention to by recent methods. For example, CNN-LSTM-HMM~\cite{koller2019weakly} employs a multi-stream HMM (including hands and face) to integrate multiple visual inputs to improve recognition accuracy. STMC~\cite{zhou2020spatial} utilizes a pose-estimation network to estimate human body keypoints and then sends cropped patches (including hands and face) for integration. More recently, C$^2$SLR~\cite{zuo2022c2slr} leverages the pre-extracted pose keypoints as supervision to guide the model. Despite high accuracy, they consume huge additional computations and training complexity. Practically, recent methods~\cite{pu2019iterative,pu2020boosting,cheng2020fully,cui2019deep,niu2020stochastic,Min_2021_ICCV} usually first employ a feature extractor to capture frame-wise visual representations for each frame, and then adopt 1D CNN and BiLSTM to perform short-term and long-term temporal modeling, respectively. 
However, several methods~\cite{pu2019iterative,cui2019deep} found in such conditions the feature extractor is not well trained and propose the iterative training strategy to refine the feature extractor, but consume much more computations. More recent methods try to directly enhance the feature extractor by adding visual alignment losses~\cite{Min_2021_ICCV} or adopt pseudo label~\cite{cheng2020fully,hao2021self} for supervision. We propose the self-emphasizing network to emphasize informative spatial features, which can be viewed to enhance the feature extractor in a self-motivated way. \subsection{Spatial Attention } Spatial attention has been proven to be effective in many fields including image classification~\cite{cao2019gcnet,hu2018gather,woo2018cbam,hu2018squeeze}, scene segmentation~\cite{fu2019dual} and video classification~\cite{wang2018non}. SENet~\cite{hu2018squeeze}, CBAM~\cite{woo2018cbam}, SKNet~\cite{li2019selective} and ECA-Net~\cite{wang2020eca} devise lightweight channel attention modules for image classification. The widely used self-attention operator~\cite{wang2018non} employs dot-product feature similarities to build attention maps and aggregate long-term dependencies. However, the calculation complexity of the self-attention operator is quadratic to the incorporated pixels, incurring a heavy burden for video-based tasks~\cite{wang2018non}. Instead of feature similarities, our SEN employs a learnable subnetwork to aggregate local spatial-temporal representations and generates spatial attention maps for each frame, much more lightweight than self-attention operators. Some works also propose to leverage external supervision to guide the spatial attention module. For example, GALA~\cite{linsley2018learning} collects click maps from games to supervise the spatial attention for image classification. A relation-guided spatial attention module~\cite{li2020relation} is designed to explore the discriminative regions globally for Video-Based Person Re-Identification. MGAN~\cite{pang2019mask} introduces an attention network to emphasize visible pedestrian regions by modulating full body features. In contrast to external supervision, our self-emphasizing network strengthens informative spatial regions in a self-motivated way, thus greatly lowering required computations and training complexity. \section{Method} \subsection{Framework Overview} As shown in fig.~\ref{fig2}, the backbone of CSLR models is consisted of a feature extractor (2D CNN\footnote{Here we only consider the feature extractor based on 2D CNN, because recent findings~\cite{adaloglou2021comprehensive,zuo2022c2slr} show 3D CNN can not provide as precise gloss boundaries as 2D CNN, and lead to lower accuracy. }), a 1D CNN, a BiLSTM, and a classifier (a fully connected layer) to perform prediction. Given a sign language video with $T$ input frames $x = \{x_{t}\}_{t=1}^T \in \mathcal{R}^{T \times 3\times H_0 \times W_0} $, a CSLR model aims to translate the input video into a series of glosses $y=\{ y_i\}_{i=1}^{N}$ to express a sentence, with $N$ denoting the length of the label sequence. Specifically, the feature extractor first processes input frames into frame-wise features $v = \{v_t\}_{t=1}^{T} \in \mathcal{R}^{T\times d}$. Then the 1D CNN and BiLSTM perform short-term and long-term temporal modeling based on these extracted visual representations, respectively. Finally, the classifier employs widely-used CTC loss to predict the probability of target gloss sequence $p(y|x)$. 
To emphasize the informative spatial and temporal features for CSLR models, we present a spatial self-emphasizing module (SSEM) and a temporal self-emphasizing module (TSEM). Specifically, we incorporate them into the feature extractor to operate on each frame. Fig.~\ref{fig2} shows an example of a common feature extractor consisting of multiple stages with several blocks in each. We place the SSEM and TSEM in parallel before the $3\times 3$ spatial convolution in each block to emphasize informative spatial and temporal features, respectively. When designing the architecture, efficiency is our core consideration, to avoid heavy computational burdens like previous methods~\cite{zhou2020spatial,zuo2022c2slr} based on heavy pose-estimation networks or expensive heatmaps. We next introduce our SSEM and TSEM, respectively. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/figure2.pdf} \caption{A overview for our SEN. It first employs a feature extractor (2D CNN) to capture frame-wise features, and then adopts a 1D CNN and a BiLSTM to perform short-term and long-term temporal modeling, respectively, followed by a classifier to predict sentences. We place our proposed spatial self-emphasizing module (SSTM) and temporal self-emphasizing module (TSEM) into each block of the feature extractor to emphasize the spatial and temporal features, respectively.} \label{fig2} \end{figure} \subsection{Spatial Self-Emphasizing Module (SSEM)} From fig.~\ref{fig1}, we argue current CSLR models fail to effectively leverage the informative spatial features, e.g., hands and face. We try to enhance the capacity of the feature extractor of CSLR models to incorporate such discriminative features without affecting its original spatial modeling ability. Practically, our SSEM is designed to first leverage the closely correlated local spatial-temporal features to identify the informative regions for each frame, and then augment original representations in the form of attention maps. As shown in fig.~\ref{fig3}, SSEM first projects the input features $s = \{s_t\}_{t=1}^T \in \mathcal{R}^{T \times C\times H \times W}$ into $s_r\in \mathcal{R}^{T \times C/r\times H \times W}$ to decrease the computational costs brought by SSEM, with $r$ the reduction factor as 16 by default. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/figure3.pdf} \caption{Illustration for our spatial self-emphasizing module (SSEM).} \label{fig3} \end{figure} The frame-wise features $s$ in the feature extractor are independently extracted for each frame by 2D convolutions, failing to incorporate local spatial-temporal features to distinguish the informative spatial regions. Besides, as the signer has to throw his/her arms and hands to express glosses, the informative regions in adjacent frames are always misaligned. Thus, we devise a multi-scale architecture to perceive spatial-temporal features in a large neighborhood to help identify informative regions. Instead of a large spatial-temporal convolution kernel, we employ $N$ parallel factorized branches with group-wise convolutions of progressive dilation rates to lower computations and increase the model capacity. As shown in fig.~\ref{fig3}, these $N$ branches own the same spatial-temporal kernel size $K_t\times K_s\times K_s$, with different spatial dilation rates $[1\cdots N]$. 
Features from different branches are multiplied with learnable factors $\{\sigma_{1}\dots \sigma_k\}$ to control the importance of different branches via gradient-based backward propagation, and are then added to mix information from different receptive fields. This multi-scale architecture is expressed as: \begin{equation} \label{e1} s_m = \sum_{i=1}^{N}{\sigma_i \times {\rm Conv}_i(s_r)} \end{equation} where the group-wise convolution ${\rm Conv}_i$ at different levels captures spatial-temporal features from different receptive fields, with dilation rate $(1,i,i)$. Especially, as the channels are downsized by $r$ times in SSEM and we employ group-wise convolutions with small spatial-temporal kernels to capture multi-scale features, the overall architecture is rather lightweight with few (\textless 0.1\%) extra computations compared to the original model, as demonstrated in our ablative experiments. Next, $s_m$ is sent into a $1\times 1 \times 1$ convolution to project channels back into $C$, and then passed through a sigmoid activation function to generate attention maps $M_s\in \mathcal{R}^{T \times C\times H \times W}$ with values ranging between $[0, 1 ]$ as: \begin{equation} \label{e2} M_s = {\rm Sigmoid}({\rm Conv}_{1\times 1\times 1}(s_m)) \end{equation} Finally, the attention maps $M_s$ are used to emphasize informative spatial regions for input features. To avoid hurting original representations and degrading accuracy, we propose to emphasize input features via a residual way as: \begin{equation} \label{e3} u = (M_s-0.5\times \mathds{1})\odot s+s \end{equation} where $\odot$ denotes element-wise multiplication and $u$ is the output. In specific, we first subtract $0.5\times \mathds{1}$ from the attention maps $M_s$, with $\mathds{1}\in \mathcal{R}^{T \times C\times H \times W}$ denoting an all-one matrix, to change the range of values in $M_s$ into $[-0.5, 0.5]$. Then we element-wisely multiply the resulting attention maps with input features $s$ to dynamically emphasize the informative regions and suppress unnecessary areas. Here, the values in $M_s$ larger than 0 would strengthen the corresponding inputs features, otherwise they would weaken the input features. Finally, we add the modulated features with input features $s$ to emphasize or suppress certain spatial features, but avoid hurting original representations. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures/figure4.pdf} \caption{Illustration for our temporal self-emphasizing module (TSEM).} \label{fig4} \end{figure} \subsection{Temporal Self-Emphasizing Module} We argue that not all frames in a video contribute equally to recognition, where some frames are more discriminative than others. For example, frames in which the signer moves his/her arms to express a sign are usually more important than those transitional frames or idle frames with meaningless contents. However, the feature extractor only employs 2D spatial convolutions to capture spatial features for each frame, equally treating frames without considering their temporal correlations. We propose a temporal self-emphasizing module (TSEM) to adaptively emphasize discriminative frames and suppress redundant ones. As shown in fig.~\ref{fig4}, input features $u \in \mathcal{R}^{T \times C\times H \times W}$ first undergo a global average pooling layer to eliminate the spatial dimension, i.e., $H$ and $W$. 
Then these features pass through a convolution with kernel size 1 to reduce the channels by $r$ times into $u_r \in \mathcal{R}^{T \times C/r}$ as: \begin{equation} \label{e4} u_r = {\rm Conv}_{K=1}({\rm AvgPool}(u)) \end{equation} where $K$ denotes the kernel size. To better exploit local temporal movements to identify the discriminative frames, we leverage the temporal difference operator to incorporate motion information between adjacent frames. Specifically, we calculate the difference between two adjacent frames of $u_r$ as approximate motion information, and then concatenate it with the appearance features $u_r$ as: \begin{equation} \label{e5} u_m = {\rm Concat}([u_r, u_r(t+1)-u_r]) \end{equation} Next, we send $u_m$ into a 1D temporal convolution with kernel size $P_t$ to capture short-term temporal information. As the size of $u_m$ is rather small, we here employ a normal temporal convolution instead of a multi-scale architecture. The features then undergo a convolution with kernel size 1 to project the channels back into $C$, and pass through a sigmoid activation function to generate attention maps $M_t \in \mathcal{R}^{T \times C}$ as: \begin{equation} \label{e6} M_t = {\rm Sigmoid}({\rm Conv}_{K=1}(u_m)) \end{equation} Finally, we employ $M_t$ to emphasize the discriminative features of the input $u$ in a residual way as: \begin{equation} \label{e7} o = (M_t-0.5\times \mathds{1})\odot u+u \end{equation} where $\odot$ denotes element-wise multiplication and $o$ is the output.
\section{Experiments} \subsection{Experimental Setup} \subsubsection{Datasets.} \textbf{PHOENIX14}~\cite{koller2015continuous} and \textbf{PHOENIX14-T}~\cite{camgoz2018neural} are both recorded from a German weather forecast broadcast in front of a clean background with a resolution of 210 $\times$ 260. They contain 6841/8247 sentences with a vocabulary of 1295/1085 signs, divided into 5672/7096 training samples, 540/519 development (Dev) samples and 629/642 testing (Test) samples. \textbf{CSL-Daily}~\cite{zhou2021improving} is recorded indoors with 20654 sentences, divided into 18401 training samples, 1077 development (Dev) samples and 1176 testing (Test) samples. \textbf{CSL}~\cite{huang2018video} is collected in a laboratory environment by fifty signers, with a vocabulary of 178 signs across 100 sentences. It contains 25000 videos, divided into training and testing sets by a ratio of 8:2. \subsubsection{Training details.} We adopt ResNet18~\cite{he2016deep} as the 2D CNN with ImageNet~\cite{deng2009imagenet} pretrained weights. We place SSEM and TSEM before the second convolution in each block. The 1D CNN consists of a sequence of \{K5, P2, K5, P2\} layers, where $K$ and $P$ denote a 1D convolutional layer and a pooling layer with kernel sizes of 5 and 2, respectively. We then adopt a two-layer BiLSTM with 1024 hidden states and a fully connected layer for prediction. We train our model for 80 epochs with an initial learning rate of 0.0001, decayed by a factor of 5 after 40 and 60 epochs. The Adam optimizer is adopted with weight decay 0.001 and batch size 2. All frames are first resized to 256$\times$256 and then randomly cropped to 224$\times$224, with 50\% horizontal flip and $\pm$20\% random temporal scaling during training. During inference, a central 224$\times$224 crop is simply selected. We use the VE and VA losses from VAC~\cite{Min_2021_ICCV} for extra supervision.
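For concreteness, we sketch below how SSEM and TSEM could be implemented in PyTorch. This is a condensed illustration of eqs.~\ref{e1}--\ref{e7}, not our released code: the tensor layout $(B, C, T, H, W)$, the padding scheme, and the wrap-around handling of the temporal difference at the last frame are simplifying assumptions of the sketch.
\begin{verbatim}
import torch
import torch.nn as nn

class SSEM(nn.Module):
    # Spatial self-emphasizing module (eqs. 1-3): N parallel group-wise
    # branches with spatial dilation rates 1..N, mixed by learnable
    # weights sigma_i, then a 1x1x1 projection and a residual emphasis.
    def __init__(self, channels, r=16, N=3, Kt=9, Ks=3):
        super().__init__()
        c = channels // r
        self.reduce = nn.Conv3d(channels, c, kernel_size=1)
        self.branches = nn.ModuleList([
            nn.Conv3d(c, c, kernel_size=(Kt, Ks, Ks), groups=c,
                      padding=(Kt // 2, i * (Ks // 2), i * (Ks // 2)),
                      dilation=(1, i, i))
            for i in range(1, N + 1)])
        self.sigma = nn.Parameter(torch.ones(N))
        self.expand = nn.Conv3d(c, channels, kernel_size=1)

    def forward(self, s):                          # s: (B, C, T, H, W)
        s_r = self.reduce(s)
        s_m = sum(self.sigma[i] * conv(s_r)        # eq. 1
                  for i, conv in enumerate(self.branches))
        M_s = torch.sigmoid(self.expand(s_m))      # eq. 2
        return (M_s - 0.5) * s + s                 # eq. 3

class TSEM(nn.Module):
    # Temporal self-emphasizing module (eqs. 4-7): spatial pooling,
    # channel reduction, temporal-difference concatenation, a 1D
    # temporal convolution, and the same residual emphasis.
    def __init__(self, channels, r=16, Pt=9):
        super().__init__()
        c = channels // r
        self.reduce = nn.Conv1d(channels, c, kernel_size=1)
        self.temporal = nn.Conv1d(2 * c, 2 * c, Pt, padding=Pt // 2)
        self.expand = nn.Conv1d(2 * c, channels, kernel_size=1)

    def forward(self, u):                          # u: (B, C, T, H, W)
        u_r = self.reduce(u.mean(dim=(3, 4)))      # eq. 4, (B, C/r, T)
        diff = torch.roll(u_r, -1, dims=2) - u_r   # u_r(t+1) - u_r(t)
        u_m = torch.cat([u_r, diff], dim=1)        # eq. 5
        M_t = torch.sigmoid(self.expand(self.temporal(u_m)))  # eq. 6
        return (M_t[..., None, None] - 0.5) * u + u           # eq. 7
\end{verbatim}
In each block, both modules operate on the same input in parallel; how their two outputs are fused is not prescribed by the equations above, and summation is one natural choice.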
\subsubsection{Evaluation Metric.} We use Word Error Rate (WER) as the evaluation metric, defined as the minimal number of \textbf{sub}stitution, \textbf{ins}ertion, and \textbf{del}etion operations required to convert the predicted sentence into the reference sentence, normalized by the reference length: \begin{equation} \label{e11} \rm WER = \frac{ \#sub+\#ins+\#del}{\#reference}. \end{equation} Note that the \textbf{lower} the WER, the \textbf{better} the accuracy.
\begin{table}[t] \centering \begin{tabular}{cccc} \hline Configurations & FLOPs & Dev(\%) & Test(\%)\\ \hline - & 3.64G & 21.2 & 22.3\\ $K_t$=9, $K_s$=3, $N$=\textbf{1} & +0.4M & 20.5 & 22.0 \\ $K_t$=9, $K_s$=3, $N$=\textbf{2} & +0.6M & 20.2 & 21.8 \\ $K_t$=9, $K_s$=3, $N$=\textbf{3} & +0.8M & \textbf{19.9} & \textbf{21.4} \\ $K_t$=9, $K_s$=3, $N$=\textbf{4} & +1.0M & 20.2 & 21.7 \\ $K_t$=\textbf{7}, $K_s$=3, $N$=3 & +0.7M & 20.2 & 21.6 \\ $K_t$=\textbf{11}, $K_s$=3, $N$=3 & +1.0M & 20.3 & 21.8 \\ $K_t$=9, $K_s$=\textbf{7}, $N$=1 & +2.9M & 20.5 & 22.0 \\ \hline \end{tabular} \caption{Ablations for the multi-scale architecture of SSEM on the PHOENIX14 dataset.} \label{tab1} \end{table}
\iffalse \begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ Stage 1 & 20.6 & 21.9 \\ Stage 1-2 & 20.4 & 21.7 \\ Stage 1-3 & 20.1 & 21.6 \\ Stage 1-4 & \textbf{19.9} & \textbf{21.4} \\ \hline \end{tabular} \caption{Ablations for the numbers of SSEMs on the PHOENIX14 dataset.} \label{tab2} \end{table} \fi
\begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ $M_s \odot s$ & 22.3 & 23.4 \\ $M_s \odot s + s$ & 20.6 & 21.7 \\ $(M_s -0.5\times \mathds{1}) \odot s$ & 20.2 & 21.5 \\ $(M_s -0.5\times \mathds{1}) \odot s +s $ & \textbf{19.9} & \textbf{21.4}\\ \hline \end{tabular} \caption{Ablations for the implementations of SSEM to augment input features on the PHOENIX14 dataset.} \label{tab3} \end{table}
\begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 19.9 & 21.4 \\ \hline $u_r$ & 19.8 & 21.2 \\ ${\rm Concat}([u_r, u_r(t+1)-u_r])$ & \textbf{19.5} & \textbf{21.0} \\ \hline $P_t$ = 7 & 19.6 & 21.2 \\ $P_t$ = 9 & \textbf{19.5} & \textbf{21.0} \\ $P_t$ = 11 & 19.7 & 21.3 \\ \hline \end{tabular} \caption{Ablations for TSEM on the PHOENIX14 dataset.} \label{tab4} \end{table}
\begin{table}[t] \centering \begin{tabular}{ccc} \hline Configurations & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ SSEM & 19.9 & 21.4\\ TSEM & 20.5 & 21.7 \\ SSEM + TSEM & 19.8 & 21.4 \\ TSEM + SSEM & 19.6 & 21.2 \\ Parallel & \textbf{19.5} & \textbf{21.0} \\ \hline \end{tabular} \caption{Ablations for the effectiveness of SSEM and TSEM on the PHOENIX14 dataset.} \label{tab5} \end{table}
\begin{table}[t] \centering \setlength\tabcolsep{3pt} \begin{tabular}{lcc} \hline Methods & Dev(\%) & Test(\%)\\ \hline - & 21.2 & 22.3\\ w/ SENet~\cite{hu2018squeeze} & 20.7 & 21.6 \\ w/ CBAM~\cite{woo2018cbam} & 20.5 & 21.3 \\ \hline CNN+HMM+LSTM~\cite{koller2019weakly} & 26.0 & 26.0 \\ STMC~\cite{zhou2020spatial} & 21.1 & 20.7 \\ C$^2$SLR~\cite{zuo2022c2slr} & 20.5 & 20.4 \\ \hline SEN & \textbf{19.5} & \textbf{21.0} \\ \hline \end{tabular} \caption{Comparison with channel-attention methods and with CSLR methods using hand and face features on the PHOENIX14 dataset.} \label{tab6} \end{table}
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/spatial_figure.pdf} \caption{Visualizations of class activation maps by
Grad-CAM~\cite{selvaraju2017grad}. Top: raw frames; Middle: class activation maps of our baseline; Bottom: class activation maps of our SEN. Our baseline often attends to irrelevant regions or only to a single hand or the face. Our SEN generally focuses on the human body (light yellow areas) and pays special attention to informative regions like the hands and face (dark red areas).} \label{fig5} \end{figure}
\subsection{Ablation Study} We perform ablation studies on the PHOENIX14 dataset and report results on both the development (Dev) and testing (Test) sets. \subsubsection{Effects of the multi-scale architecture of SSEM.} Tab.~\ref{tab1} ablates the implementations of the multi-scale architecture of SSEM. Our baseline achieves 21.2\% and 22.3\% WER on the Dev and Test Sets, respectively. Fixing $K_t$=9, $K_s$=3 and varying the number of branches to expand the spatial receptive fields, we observe that performance consistently improves as $N$ grows up to 3, while a larger $N$ (e.g., 4) brings no further gain. We set $N$ to 3 by default and test the effects of $K_t$. One can see that either increasing $K_t$ to 11 or decreasing $K_t$ to 7 leads to worse performance. We thus adopt $K_t$=9 by default. Notably, SSEM adds few extra computations compared to our baseline. For example, the best-performing SSEM with $K_t$=9, $K_s$=3 and $N$=3 only adds 0.8M (\textless 0.1\%) extra FLOPs, which is negligible compared to the 3.64G FLOPs of our baseline model. Finally, we compare our proposed multi-scale architecture with a normal convolution of higher computational cost. The receptive field of SSEM with $K_t$=9, $K_s$=3 and $N$=3 is identical to that of a normal convolution with $K_t$=9 and $K_s$=7. As shown at the bottom of tab.~\ref{tab1}, the normal convolution not only brings more computations than SSEM but also performs worse, verifying the effectiveness of our architecture. \subsubsection{Implementations of SSEM to augment input features.} Tab.~\ref{tab3} ablates the implementations of SSEM to augment the original features. It is first observed that directly multiplying the attention maps $M_s$ with the input features $s$ severely degrades performance, which we attribute to the destruction of the input feature distributions. Implemented in a residual way by adding $s$, $M_s \odot s + s$ notably alleviates this issue and achieves +0.6\% \& +0.6\% on the Dev and Test Sets. Further, we first subtract $0.5\times \mathds{1}$ from the attention maps $M_s$ to emphasize or suppress certain positions, and then element-wisely multiply the result with $s$. This implementation brings a +1.0\% \& +0.8\% performance boost. Finally, we extend this implementation in a residual way by adding the input features $s$ as $(M_s -0.5\times \mathds{1}) \odot s +s $, achieving a notable performance boost of +1.3\% \& +0.9\%. \subsubsection{Study on TSEM.} Tab.~\ref{tab4} ablates the configurations of TSEM, adopting the model with SSEM as the baseline. It is first noticed that incorporating motion information by concatenating $u_r(t+1)-u_r$ with $u_r$ slightly outperforms using only $u_r$ to capture short-term temporal dependencies, verifying the effectiveness of local motion information. Next, when varying $P_t$, it is observed that $P_t$=9 achieves the best performance among $P_t$=[7,9,11] and is adopted by default in the following. \subsubsection{Study on the effectiveness of SSEM and TSEM.} Tab.~\ref{tab5} studies how to combine SSEM with TSEM.
We first notice that using SSEM or TSEM alone already brings a notable performance boost, of +1.3\% \& +0.9\% and +0.7\% \& +0.6\% on the Dev and Test Sets, respectively. When further combining SSEM with TSEM by sequentially placing SSEM before TSEM (SSEM+TSEM), placing TSEM before SSEM (TSEM+SSEM), or paralleling SSEM and TSEM, it is observed that the parallel configuration performs best, with a +1.7\% \& +1.3\% performance boost on the Dev and Test Sets, respectively; it is adopted as the default setting. \subsubsection{Comparison with other methods.} We compare our SEN with well-known channel attention methods like SENet~\cite{hu2018squeeze} and CBAM~\cite{woo2018cbam}, and with previous CSLR methods equipped with hand and face features from extra pose-estimation networks or pre-extracted heatmaps. In the upper part of tab.~\ref{tab6}, one can see that SEN largely outperforms these channel attention methods, owing to its superior ability to emphasize informative hand and face features. In the bottom part of tab.~\ref{tab6}, it is observed that SEN greatly surpasses previous CSLR methods equipped with hand and face features, even though they employ extra heavy networks or expensive supervision. These results verify the effectiveness of our SEN in leveraging hand and face features. \subsection{Visualizations} \subsubsection{Visualization for SSEM.} We sample a few frames expressing a gloss and plot the class activation maps of our baseline and SEN with Grad-CAM~\cite{selvaraju2017grad} in fig.~\ref{fig5}. The activation maps generated by our baseline often attend to irrelevant regions or only to a single hand or the face, failing to fully focus on the informative regions (e.g., hands and face). Instead, our SEN generally focuses on the human body (light yellow areas) and pays special attention to the discriminative regions like hands and face (dark red areas). These visualizations show that without additional expensive supervision, our SEN can still effectively leverage the informative spatial features in a self-motivated way.
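Since all comparisons that follow are reported in WER, we include a minimal reference implementation of the metric in eq.~\ref{e11}. This is an illustrative sketch based on standard edit-distance dynamic programming, not the evaluation script shipped with the benchmarks.
\begin{verbatim}
def wer(reference: str, hypothesis: str) -> float:
    # Minimal #substitutions + #insertions + #deletions turning the
    # hypothesis into the reference, divided by the reference length.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
\end{verbatim}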
\begin{table*}[t] \centering \setlength\tabcolsep{3pt} \begin{tabular}{cccccccc} \hline \multirow{3}{*}{Methods} &\multirow{3}{*}{Backbone} & \multicolumn{4}{c}{PHOENIX14} & \multicolumn{2}{c}{PHOENIX14-T} \\ & &\multicolumn{2}{c}{Dev(\%)} & \multicolumn{2}{c}{Test(\%)} & \multirow{2}{*}{Dev(\%)} & \multirow{2}{*}{Test(\%)}\\ & &del/ins & WER & del/ins& WER & & \\ \hline Align-iOpt~\cite{pu2019iterative}& 3D-ResNet &12.6/2 & 37.1& 13.0/2.5 & 36.7 & -&-\\ Re-Sign~\cite{koller2017re}& GoogLeNet&- & 27.1 &- &26.8 &- &-\\ SFL~\cite{niu2020stochastic}& ResNet18 & 7.9/6.5 & 26.2 & 7.5/6.3& 26.8 & 25.1&26.1\\ FCN~\cite{cheng2020fully}& Custom & - & 23.7 & -& 23.9 & 23.3& 25.1\\ CMA~\cite{pu2020boosting} & GoogLeNet & 7.3/2.7 & 21.3 & 7.3/2.4 & 21.9 & -&-\\ VAC~\cite{Min_2021_ICCV}& ResNet18 & 7.9/2.5 & 21.2 &8.4/2.6 & 22.3 &- &-\\ SMKD~\cite{hao2021self}& ResNet18 &6.8/2.5 &20.8 &6.3/2.3 & 21.0 & 20.8 & 22.4\\ \hline SLT$^*$~\cite{camgoz2018neural}& GoogLeNet & - & - & - & - & 24.5 & 24.6\\ CNN+LSTM+HMM$^*$~\cite{koller2019weakly}& GoogLeNet & - &26.0 & - & 26.0 & 22.1 & 24.1 \\ DNF$^*$~\cite{cui2019deep}& GoogLeNet & 7.3/3.3 &23.1& 6.7/3.3 & 22.9 & - & -\\ STMC$^*$~\cite{zhou2020spatial}& VGG11 & 7.7/3.4 &21.1 & 7.4/2.6 & 20.7 & 19.6 & 21.0\\ C$^2$SLR$^*$~\cite{zuo2022c2slr} & ResNet18 & - & 20.5 &- & 20.4 & 20.2 & 20.4 \\ \hline Baseline & ResNet18 & 7.9/2.5 & 21.2 &8.4/2.6 & 22.3 & 21.1 & 22.8\\ \textbf{SEN (Ours)} & ResNet18 & 5.8/2.6 &\textbf{19.5} & 7.3/4.0 & \textbf{21.0} & \textbf{19.3} & \textbf{20.7} \\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on the PHOENIX14 and PHOENIX14-T datasets. $*$ indicates that extra cues such as face or hand features are included via additional networks or pre-extracted heatmaps.} \label{tab7} \end{table*}
\subsubsection{Visualization for TSEM.} We visualize the temporal attention maps of TSEM in fig.~\ref{fig6}. We sample several frames corresponding to an output gloss `nord' as an example. The darker the color, the higher the weight. One can find that TSEM tends to allocate higher weights to frames with rapid movements (the latter two frames in the first line; the middle three frames in the second line), and assigns lower weights to static frames with few body movements. This observation is consistent with human perception, as we always pay more attention to moving objects in the visual field to capture key movements. Such frames can also be considered to convey more important patterns for expressing a sign.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/temporal_figure.pdf} \caption{Visualizations of temporal attention maps for TSEM. One can find that TSEM highlights frames with rapid movements and suppresses static frames.} \label{fig6} \end{figure}
\subsection{Comparison with State-of-the-Art Methods} \textbf{PHOENIX14} and \textbf{PHOENIX14-T}. Tab.~\ref{tab7} shows a comprehensive comparison between our SEN and other state-of-the-art methods. We notice that with few extra computations, SEN outperforms other state-of-the-art methods on both datasets. In particular, SEN outperforms previous CSLR methods equipped with hand and face features acquired by heavy pose-estimation networks or pre-extracted heatmaps (marked with $*$), without additional expensive supervision. \textbf{CSL-Daily}. CSL-Daily is a recently released large-scale dataset with the largest vocabulary (2k signs) among commonly-used CSLR datasets, covering daily-life contents.
Tab.~\ref{tab8} shows that our SEN achieves new state-of-the-art accuracy on this challenging dataset by a large margin, demonstrating that it generalizes well to real-world scenarios. \textbf{CSL}. As shown in tab.~\ref{tab9}, our SEN achieves superior accuracy (0.8\% WER) on this well-examined dataset, outperforming existing CSLR methods.
\begin{table}[t] \centering \setlength\tabcolsep{2pt} \begin{tabular}{ccc} \hline Methods& Dev(\%) & Test(\%)\\ \hline LS-HAN~\cite{huang2018video} & 39.0 & 39.4\\ TIN-Iterative~\cite{cui2019deep} & 32.8 & 32.4\\ Joint-SLRT~\cite{camgoz2020sign} & 33.1 & 32.0 \\ FCN~\cite{cheng2020fully} & 33.2 & 32.5 \\ BN-TIN~\cite{zhou2021improving} & 33.6 & 33.1 \\ \hline Baseline & 32.8 & 32.3\\ \textbf{SEN(Ours)} & \textbf{31.1} & \textbf{30.7} \\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on the CSL-Daily dataset~\cite{zhou2021improving}.} \label{tab8} \end{table}
\begin{table}[t] \centering \setlength\tabcolsep{2pt} \begin{tabular}{cc} \hline Methods& WER(\%)\\ \hline SubUNet~\cite{cihan2017subunets} & 11.0\\ SF-Net~\cite{yang2019sf} & 3.8 \\ FCN~\cite{cheng2020fully} & 3.0 \\ STMC~\cite{zhou2020spatial} & 2.1 \\ VAC~\cite{Min_2021_ICCV} & 1.6 \\ C$^2$SLR~\cite{zuo2022c2slr} & 0.9 \\ \hline Baseline & 3.5\\ \textbf{SEN(Ours)} & \textbf{0.8} \\ \hline \end{tabular} \caption{Comparison with state-of-the-art methods on the CSL dataset~\cite{huang2018video}.} \label{tab9} \end{table}
\section{Conclusion} This paper proposes a self-motivated architecture, coined SEN, to adaptively emphasize informative spatial and temporal features. Without extra expensive supervision, SEN outperforms existing CSLR methods on four CSLR datasets. Visualizations confirm the effectiveness of SEN in leveraging discriminative hand and face features.
\section{Introduction} \label{sec:intro} Given large amounts of training data (such as ImageNet~\cite{deng2009imagenet}) and compute power (powerful GPUs~\cite{dl_gpu}), deep models are highly accurate on the respective tasks for which they are trained. However, these models can easily get fooled when they encounter adversarial images as input that are crafted using an adversarial attack~\cite{szegedy2013intriguing}. Many efforts have been made in the literature to secure models against adversarial attacks. Adversarial training~\cite{madry2017towards,goodfellow2014explaining} is one of the most common ways to provide an empirical defense, where the models are trained to minimize the maximum training loss induced by adversarial samples. Such defenses are heuristic-based and are only robust to known or specific adversarial attacks. Powerful adversaries easily break them; hence, they are not truly robust against adversarial perturbations~\cite{athalye2018robustness,uesato2018adversarial,croce2020reliable}. This motivated researchers to develop methods where a trained model can be guaranteed to have a constant prediction in the input neighborhood. Such methods that provide formal guarantees are called certified defense methods~\cite{raghunathan2018certified,wong2018provable,ehlers2017formal,katz2017reluplex,mirman2018differentiable}. Randomized smoothing~\cite{cao2017mitigating, liu2018towards,lecuyer2019certified,li2019certified,cohen2019certified} is a certified defense technique used to provide provable robustness against $l_{2}$ adversarial perturbations. It outperforms other certification methods and is also scalable to deep neural networks due to its architectural independence. Using randomized smoothing, any base classifier can be converted to a smoothed classifier, which is certifiably robust against $l_{2}$ attacks as it has the desirable property of constant predictions within the $l_{2}$ ball around the input. The prediction of the smoothed classifier on an input image is simply the most probable class predicted by the base classifier on random gaussian perturbations of the input. Note that the higher the probability with which the base classifier predicts the most probable class as the correct class, the higher the certified radius~\cite{cohen2019certified}. However, a vanilla-trained / off-the-shelf classifier would not be robust to inputs corrupted by gaussian noise. The predicted most probable class can be incorrect or may be predicted with very low confidence, leading to poor certification. Thus, models are often trained from scratch using gaussian perturbed samples~\cite{cohen2019certified} as augmentation, and also with adversarial training~\cite{salman2019provably}. Training from scratch with gaussian-noise augmentation is not always a viable option, especially when large pretrained models are shared as an API, either as a white box or a black box. Also, retraining these large, cumbersome models adds a lot of computational overhead. To avoid this, Salman \etal~\cite{salman2020denoised} proposed a ``denoised smoothing'' technique where a custom-trained denoiser is prepended to the pretrained classifier. Though their approach provides certified robustness to pretrained models, they use the entire training data for training the denoiser. In fact, all previous certification methods also assume the availability of the entire training data. This assumption is unrealistic, as the API provider may not share the whole large-scale training set.
Due to heavy transmission costs associated with complete training data, or for proprietary reasons, they may provide access to only a few training samples. In such cases, when we perform denoised smoothing~\cite{salman2020denoised} directly, the certified accuracy decreases significantly as the number of training samples reduces from $100\%$ to $1\%$, as shown in Fig.~\ref{fig:motivation}.
\begin{figure}[htp] \centering \centerline{\includegraphics[width=0.49\textwidth]{figures/motivation_fig.pdf}} \caption{The certified accuracy (both standard and robust) of the existing method drops significantly as the training set size of CIFAR-$10$ reduces. Our method (\textit{DE-CROP}) overcomes this issue by obtaining significant gains in certified performance against $l_2$ perturbations within radius $0.25$, across different training data budgets. We use gaussian noise with $\sigma = 0.5$ for certification.} \label{fig:motivation} \end{figure}
In this work, we attempt to provide certified robustness to a pretrained non-robust model in the challenging setting of limited training data. We first explore whether the performance of denoised smoothing can be improved using different approaches such as weight-decay regularization, or by generating extra data using augmentation and mix-up strategies~\cite{zhang2017mixup} to avoid overfitting. However, these traditional approaches only show a marginal improvement (refer to Tables~\ref{tab:table1},~\ref{tab:table2}). Hence, we propose our approach of \textbf{d}ata-\textbf{e}fficient \textbf{c}ertified \textbf{ro}bustness (dubbed `\textit{DE-CROP}') (refer to Sec.~\ref{sec:proposed} and Fig.~\ref{fig:tsne},~\ref{fig:approach}) that generates better samples, yielding feature diversity on the pretrained classifier similar to that of the complete training data. Since generative models are hard to train and perform poorly (on downstream tasks) in the presence of limited data, we adopt a simple and intuitive approach to generate additional data to minimize overfitting. We achieve this by generating adversarial samples (corresponding to our limited data) that we term `boundary' samples. Adversarial samples act as an upper bound to the decision boundary~\cite{nanda2021fairness} while maintaining the semantic content of the image, allowing the denoiser to learn from samples in its neighborhood. Moreover, we also generate interpolated samples (lying between original and boundary samples) that further increase the data diversity in the feature space. We train the denoiser by maximizing the similarity between the pretrained classifier's features on the original data and on the denoised perturbed generated/original data. We achieve this by performing orthogonal modifications at two granularities: a) instance-level (using cosine similarity) and b) distribution-level (using Maximum Mean Discrepancy~\cite{gretton2012kernel} and negative gradients from a domain discriminator~\cite{ganin2015unsupervised}). In our experiments, we observed that although cosine similarity improved denoising performance by exploiting the discriminativeness of the pretrained classifier, its benefits were limited as it only operates at an instance level. Thus, inspired by fundamental ideas (such as maximum mean discrepancy and domain discriminators) in domain adaptation, we formulate the objective of obtaining correct predictions on denoised images by reducing the distribution discrepancy between the feature representations of the original clean inputs and the denoised gaussian perturbed outputs.
As shown in Fig.~\ref{fig:motivation}, our method DE-CROP significantly improves certified performance across different sample budgets ($100\%$, $20\%$, $10\%$, $5\%$, $1\%$) on CIFAR-$10$ compared to~\cite{salman2020denoised}. Our contributions are summarized as follows: \iffalse \begin{itemize} \item \textbf{Formulation}: We formulate a novel problem of certified defense in limited data settings. Given only a few training samples, we provide robustness guarantees for a non-robust pretrained classifier against $l_{2}$ perturbations for both white-box and black-box setups. To the best of our knowledge, certified defense using a few training samples hasn't been explored yet. \item \textbf{Sample Generation}: To avoid overfitting with limited data, our method generates class-boundary samples using an adversarial attack. We also interpolate between original training sample and its respective class-boundary sample in feature space to obtain diverse features of that class. These features are used as ground truth corresponding to which `interpolated’ sample is generated by perturbing the original sample (Sec.~\ref{subsec:generation}). \item \textbf{Training Loss}: We regularize the denoiser network through our proposed losses (Sec.~\ref{subsec:denoiser_training}) that align the feature representations of original and denoised gaussian perturbed generated/original samples at multiple granularities (both instance and distribution levels). \item \textbf{Experiments}: We show the contribution of our boundary and interpolated samples (Sec.~\cite{}) along with individual benefits of each of the proposed losses (Sec.~\cite{}), which leads to significant improvements in certified accuracy over the baselines across different sample budgets and noise levels in both white-box (Sec.~\cite{}) and black-box settings (Sec.~\cite{}). \end{itemize} \fi \begin{itemize} \item Given only limited training data, we provide robustness guarantees for a non-robust pretrained classifier against $l_{2}$ perturbations in both white-box and black-box setups. To the best of our knowledge, we are the first to provide certified adversarial defense using only a few training samples. \item To mitigate overfitting on limited training data, we propose a novel sample-generation strategy that synthesizes `boundary' and `interpolated' samples (Sec.~\ref{subsec:generation}) to augment the limited training data, leading to improved feature diversity on the pretrained classifier. \item The denoiser network trained with the regular cross-entropy loss provides limited benefit. To enhance the performance further, we propose additional losses (Sec.~\ref{subsec:denoiser_training}) that align the feature representations of the original and the denoised gaussian perturbed generated/original samples at multiple granularities (both instance and distribution levels). \item We show the benefit of our generated samples (Sec.~\ref{gen_data}) along with the contributions of each of the proposed losses (Sec.~\ref{dist_align}), reporting significant improvements across diverse sample budgets and noise levels in both white-box and black-box settings. \end{itemize} \vspace{-0.1in} \section{Related Works} \label{sec:related} We broadly categorize the relevant works that provide adversarial robustness and briefly discuss them below: \textbf{Empirical Robustness}: Empirically motivated adversarial robustness defenses can be broadly classified into: a) adversarial training (AT) and b) non-adversarial training regularizations.
AT \cite{madry2017towards, goodfellow2014explaining, szegedy2013intriguing} improves robustness by augmenting the training data with adversarial samples generated by a particular threat model. Although AT is widely regarded as the best empirical defense, it suffers from high computational costs due to the generation of adversarial samples at training time. Non-AT based approaches \cite{chan2019jacobian} attempt to reduce the computational burden (usually at the cost of a drop in adversarial robustness) by explicitly mimicking properties observed in robust networks. AT is also highly dependent on the quantity of training data. Carmon \etal \cite{carmon2019unlabeled} demonstrated that using additional unlabeled data in a pseudo-labelling setup leads to a significant increase in adversarial robustness. However, since the performance of pseudo-labelling itself depends on the amount of labeled data, the performance of their technique drops considerably as labeled data is reduced. To alleviate this problem, Sehwag \etal \cite{sehwag2021robust} illustrated the benefit of using additional data generated from generative models for improving adversarial robustness. Since empirical defenses are designed based on heuristics, they can be easily fooled as stronger adversarial attacks are developed in the future. In contrast, we attempt to provably robustify a pretrained classifier against $l_{2}$ adversarial attacks under the challenging constraint of limited training samples. \textbf{Certified Robustness}: Unlike empirical defenses, here the model prediction is guaranteed to remain constant within a ball of small radius around the input sample. The methods that provide certification are either `exact'~\cite{fischetti2017deep,ehlers2017formal,katz2017reluplex,tjeng2019evaluating} or `conservative'~\cite{wong2018provable,zhang2018efficient}. The former is not scalable to large architectures, is compute-intensive, and often uses less expressive networks, but can verify with guarantees whether any adversarial sample exists within a given radius of the input. The latter is more scalable and requires less computation, but can incorrectly decline certification even if no adversarial sample is present. Both techniques require either customized or specific architectures, hence are not suitable for modern deep architectures. Randomized smoothing is a popular technique that does not have any architectural dependency and was initially used as a heuristic defense~\cite{cao2017mitigating, liu2018towards}. It was first shown to provide certified guarantees by Lecuyer \etal~\cite{lecuyer2019certified}, where techniques from `differential privacy' were used for certification. It was later improved by Li \etal~\cite{li2019certified}, who provided better guarantees using ideas from `information theory'. Both these methods provide loose guarantees on the smoothed classifier. Cohen \etal~\cite{cohen2019certified} provided a tight certified guarantee against $l_{2}$ norm adversarial perturbations. After that, the certified accuracy was further improved by using adversarial training techniques in the randomized smoothing framework~\cite{salman2019provably}. However, all these techniques train the classifier from scratch while providing certified robustness. Recently, Salman \etal~\cite{salman2020denoised} provided provable robustness to pretrained models by prepending a custom-trained denoiser. We also train a custom denoiser, but using only a few training samples.
Unlike \cite{salman2020denoised}, our limited data setup is more challenging, and directly using their method yields poor certification results. Our generated samples, together with the added domain discriminator optimized using our proposed losses, handle the overfitting of the denoiser quite well and give significant improvements in certified accuracy. We now discuss the necessary preliminaries before explaining our approach. \vspace{-0.02in} \section{Preliminaries} \label{sec:prelims} \textbf{Notations}: The complete original dataset is denoted by $D_o = \{D_{train}, D_{test}\}$, where $D_{train}$ and $D_{test}$ are the training set and the test set respectively. The base classifier $B_c$ is pretrained on the entire $D_{train}$, which consists of $N$ training samples. The API provider has granted public access to the trained $B_c$, which can be used by clients to obtain predictions. However, only a limited amount of $D_{train}$ (denoted by $D_{train}^{lim}$) is shared with the clients. $D_{train}^{lim}$ is only $k\%$ of $D_{train}$, containing $N^k$ training samples such that $N^k \ll N$. The same relationship holds for each class of the classifier $B_c$, i.e., for any class $c$: $N_c^k \ll N_c$ and $N_c^k$ is $k\%$ of $N_c$. The $i^{th}$ sample of $D_{train}^{lim}$ (i.e., $x_o^i$) with label $y_o^i$ is perturbed by gaussian noise, denoted by $\bar{x}_o^i = x_o^i + \epsilon$ where $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$; the collection of perturbed samples constitutes $\bar{D}_{train}^{lim}$. The penultimate features, logits and label predictions from the pretrained classifier $B_c$ on $x_o^i$ are denoted by $F_{B_c}(x_o^i)$, $B_c(x_o^i)$ and $label(B_c(x_o^i))$ respectively. Similarly, $label(S_c(x_o^i))$ is the label predicted by the smoothed classifier $S_c$ on input $x_o^i$. The classifier $B_c$ is converted to the smoothed classifier $S_c$, which is used for certified defense via randomized smoothing. The certified radius of the $i^{th}$ test sample is $R_c^i$. The evaluations are performed for $l_{2}$ perturbations at radius $r$, denoted by $l_2^r$. The denoiser network $D_n$ and the domain discriminator $D_d$ are parameterized by $\theta$ and $\phi$ respectively. The boundary sample and interpolated sample generated from the $i^{th}$ training sample of $D_{train}^{lim}$ (i.e., $x_o^i$) are represented by $x_b^i$ and $x_{int}^i$ respectively. \textbf{Randomized Smoothing (RS)}: This technique builds a new smoothed classifier $S_c$ from the given base classifier $B_c$. For the $i^{th}$ sample of $D_{train}^{lim}$ (i.e., $x_o^i$), the output of the classifier $S_c$ on input $x_o^i$ is the class most likely to be predicted by $B_c$ on $\bar{x}_o^i$: \begin{equation} \begin{gathered} \label{eq1} label(S_c(x_o^i)) = \underset{c \in C}{\mathrm{argmax}}\hspace{0.1in} Prob(label(B_c(x_o^i + \epsilon)) = c)\\ \text{where} \hspace{0.1in} \epsilon \sim \mathcal{N}(0, \sigma^2 I) \end{gathered} \end{equation} Here, $\sigma$ is a hyperparameter which controls the noise level and $C$ is the set of unique target labels in $D_{train}^{lim}$. The process of RS makes no assumption about $B_c$'s architecture and thus permits $B_c$ to be an arbitrarily large deep neural network.
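To make eq.~\ref{eq1} concrete, the prediction of $S_c$ can be approximated by Monte Carlo sampling, as in the sketch below. This is an illustration under our own choices of function name and sampling budget; the actual PREDICT procedure of Cohen \etal additionally abstains when the top class is not statistically significant.
\begin{verbatim}
import torch

@torch.no_grad()
def smoothed_predict(B_c, x, num_classes, sigma=0.25, n=1000):
    # Monte Carlo estimate of eq. (1): S_c(x) is the class that the
    # base classifier B_c predicts most often under Gaussian
    # perturbations of the single input image x of shape (C, H, W).
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n):
        noisy = x.unsqueeze(0) + sigma * torch.randn_like(x)
        counts[B_c(noisy).argmax(dim=1)] += 1
    return counts.argmax().item(), counts
\end{verbatim}
The per-class counts are returned as well, since they are reused for certification below.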
\pagebreak \textbf{Certified Robustness using RS}: Lecuyer \etal~\cite{lecuyer2019certified} and Li \etal~\cite{li2019certified} gave robustness guarantees for the smoothed classifier $S_c$ using RS, but these were loose, as $S_c$ is provably more robust than the obtained guarantees indicate. Cohen~\etal~\cite{cohen2019certified} gave a tight bound on the $l_{2}$ robustness guarantee using RS. If the prediction of the base classifier $B_c$ on the gaussian perturbed copies of $x_o^i$, i.e., $\mathcal{N}(x_o^i, \sigma^2 I)$, is $c_1$ as the `most probable' class with probability $p_1$ and $c_2$ as the `runner-up' class with probability $p_2$, then the smoothed classifier $S_c$ is provably robust around the input $x_o^i$ within the radius $R_c = \sigma/2 (\Phi^{-1}(p_1)-\Phi^{-1}(p_2))$. Here $R_c$ is the certified radius, as the predictions are guaranteed to remain constant inside it, and $\Phi^{-1}$ denotes the inverse of the standard gaussian CDF. It is impossible to calculate $p_1$ and $p_2$ exactly when $B_c$ is a deep neural network. Hence, Cohen \etal estimate a lower bound on $p_1$ ($\underline{p_1}$) and an upper bound on $p_2$ ($\overline{p_2}$) using a Monte Carlo technique. \textbf{Theorem} [Cohen \etal~\cite{cohen2019certified}]: \textit{Let $B_c$ be any function that maps the input to one of the output class labels. Let $S_c$ be defined as in eq.~\ref{eq1}. The noise $\epsilon$ is sampled from the normal distribution, i.e., $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$. If $c_1 \in C$ and $\underline{p_1}, \overline{p_2} \in [0,1]$ satisfy}: \begin{equation} \begin{gathered} Prob(label(B_c(x_o^i+\epsilon))=c_1) \geq \underline{p_1} \geq \overline{p_2} \geq \\ \underset{c \neq c_1}{max}\; Prob(label(B_c(x_o^i+\epsilon))=c) \end{gathered} \end{equation} \textit{then $label(S_c(x_o^i+\delta))=c_1$ for all $\left\lVert\delta\right\rVert_{2} < R_c$, where} \begin{equation} \begin{gathered} \label{eq3} R_c = \sigma/2 \left( \Phi^{-1}(\underline{p_1}) - \Phi^{-1}(\overline{p_2}) \right) \end{gathered} \end{equation} In practice, Cohen \etal used $R_c = \sigma \Phi^{-1}(\underline{p_1})$ for $\underline{p_1}>1/2$, assuming $\overline{p_2} = 1-\underline{p_1}$, and abstained otherwise with $R_c =0$. The above expressions can be derived using the Neyman--Pearson lemma, for which we refer the reader to~\cite{cohen2019certified}. Now, we discuss our proposed approach in detail in the next section. \section{Proposed Approach} \label{sec:proposed} We aim to provide certified robustness to the given pretrained base classifier $B_c$. However, obtaining certification via randomized smoothing expects the model $B_c$ to be robust to random gaussian perturbations of the input, which may not be the case for the model $B_c$ supplied by the API provider. In order to make the base model $B_c$ suitable for randomized-smoothing-based certification without modifying/retraining $B_c$, a denoiser network $D_n$ is prepended to $B_c$. Thus, $B_c \circ D_n$ is the new base classifier, using which the prediction of the smoothed classifier $S_c$ on the $i^{th}$ training sample is defined as follows: \begin{equation} \begin{gathered} \label{eq4} label(S_c(x_o^i))=\underset{c \in C}{\mathrm{argmax}}\hspace{0.01in}Prob(label(B_c(D_n(x_o^i + \epsilon)))=c)\\ \text{where} \hspace{0.1in} \epsilon \sim \mathcal{N}(0, \sigma^2 I) \end{gathered} \end{equation} The above smoothed classifier $S_c$ is provably robust against $l_{2}$ perturbations with certified radius $R_c$ (refer to Sec.~\ref{sec:prelims}).
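The practical rule $R_c = \sigma \Phi^{-1}(\underline{p_1})$ can then be evaluated from the Monte Carlo counts of the earlier snippet, with the denoised classifier $B_c \circ D_n$ playing the role of the base classifier. The sketch below uses a Clopper--Pearson lower confidence bound; note that in the full CERTIFY procedure of Cohen \etal, the top class is first selected from a separate, smaller set of samples, which we omit for brevity.
\begin{verbatim}
from scipy.stats import beta, norm

def certified_radius(counts, sigma=0.25, alpha=0.001):
    # Lower-bound the top-class probability p1 with a one-sided
    # Clopper-Pearson (1 - alpha) confidence bound, then apply
    # R_c = sigma * Phi^{-1}(p1_low); abstain (radius 0) otherwise.
    n = int(counts.sum())
    k = int(counts.max())                    # votes for the top class
    p1_low = beta.ppf(alpha, k, n - k + 1)   # Clopper-Pearson lower bound
    if p1_low <= 0.5:
        return 0.0                           # abstain
    return sigma * norm.ppf(p1_low)
\end{verbatim}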
For high performance on the certified defense, a high $R_c$ is required, which increases monotonically with $\underline{p_1}$ (eq.~\ref{eq3}). The probability ($\underline{p_1}$) of predicting the most probable class ($c_1$), i.e., the confidence, depends on the performance of the denoiser network $D_n$. However, $D_n$ trained on the given limited training data $D_{train}^{lim}$ using the existing technique~\cite{salman2020denoised} yields poor certification (shown in Fig.~\ref{fig:motivation}). Even when we attempt to minimize the overfitting of $D_n$ on $D_{train}^{lim}$ ($\because$ $|D_{train}^{lim}| \ll |D_{train}|$) using different traditional approaches such as weight decay, augmentation, and mix-up strategies, we observe only a minor improvement (refer to Tables~\ref{tab:table1},~\ref{tab:table2}) in certified accuracy. Hence, we propose our method (`DE-CROP') that overcomes this limitation by crafting boundary and interpolated samples from $D_{train}^{lim}$, and then using them to train the denoiser with appropriate losses.
\begin{figure}[htp] \centering \centerline{\includegraphics[width=0.49\textwidth]{figures/tsne_multi_big.pdf}} \caption{t-SNE visualization of the pretrained base classifier's features for training samples of a particular class of CIFAR-$10$. Our generated samples (interpolated and boundary) increase the feature diversity of the limited original training data.} \label{fig:tsne} \end{figure}
\begin{figure*}[htp] \centering \centerline{\includegraphics[width=\textwidth]{figures/main_figure.pdf}} \caption{Different stages involved in the proposed method (\textit{DE-CROP}) that provides robustness guarantees on a pretrained classifier ($B_c$) against $l_2$ perturbations using only limited training samples ($x_o$). Samples crafted via an adversarial attack act as proxy \textit{boundary} samples ($x_b$). Using $x_o$ and $x_b$, \textit{interpolated} samples ($x_{int}$) are generated in stage $1$. The generated samples along with the limited training data are used in stage $2$ to train the denoiser network ($D_n^\theta$) by aligning the feature representations at instance and distribution levels using the $L_{lc}$, $L_{cs}$, $L_{mmd}$ and $L_{dd}$ losses. The forward passes to compute these losses are shown in different colors.} \label{fig:approach} \end{figure*}
\vspace{-0.15in} \subsection{Generation of Boundary \& Interpolated sample} \label{subsec:generation} In Fig.~\ref{fig:tsne}, we visualize the logit-layer features of the pretrained base classifier $B_c$ for a particular class via a t-SNE plot. The class features corresponding to the available limited class samples of $D_{train}^{lim}$ are shown in green (left). We focus on improving the feature diversity of the limited training samples. For this, we first estimate class-boundary samples whose features would lie in the boundary region of the t-SNE plot. Adversarial attacks~\cite{madry2017towards} synthesize samples via an optimization procedure that carefully perturbs the input sample with a small, human-imperceptible noise (i.e., adversarial noise). The model gets fooled on such samples (i.e., adversarial samples), as the prediction flips to some other class. Since these samples just cross the decision boundary while being constructed by adding only a small noise to the original input, they often lie very close to the decision boundary. Moreover, they are human-imperceptible and preserve the class semantics of the input sample.
Hence, adversarial samples serve as good candidates for proxy class-boundary samples. For the $i^{th}$ training sample of $D_{train}^{lim}$ (i.e., $x_o^i$), we obtain the boundary sample ($x_b^i$) by computing adversarial noise $\delta$ for which the following relation holds: \begin{equation} \begin{gathered} x_b^i=x_o^i+\delta, \\\left\lVert\delta\right\rVert_\infty<\epsilon, \hspace{0.1in}\epsilon>0, \hspace{0.1in}label(B_c(x_b^i)) \ne label(B_c(x_o^i)) \end{gathered} \end{equation} Next, we obtain interpolated features by performing a mixup between the features of the generated boundary sample $x_b^i$ and the original training sample $x_o^i$ as follows: \begin{equation} \begin{gathered} Logit_{int}^i = \alpha \times B_c (x_o^i) + (1-\alpha) \times B_c (x_b^i) \end{gathered} \end{equation} Here $\alpha$ is the mixing coefficient. The interpolated features, the class-boundary features, and the original limited class-sample features are highlighted in yellow, blue, and green respectively in the t-SNE plot (Fig.~\ref{fig:tsne}). Being label-preserving, these interpolated features help improve the feature diversity, leading to improved certified accuracy (refer to Sec.~\ref{gen_data}). We craft the input sample corresponding to the interpolated feature $Logit_{int}^i$ (i.e., $x_{int}^i$) by perturbing the original sample $x_o^i$ to match its feature response to $Logit_{int}^{i}$ using the mean square error loss ($L_{mse}$). Mathematically, we obtain the interpolated sample $x_{int}^i$ as follows: \begin{equation} \label{eq7} x_{int}^i \leftarrow \underset{x}{\mathrm{argmin}} \hspace{0.1in} L_{mse}(B_c(x), Logit_{int}^i) \end{equation} where $x$ is initialized with $x_o^i$ and kept trainable, with $Logit_{int}^i$ as the ground truth. The model $B_c$ is non-trainable, but gradients are allowed to backpropagate through the model to update the input $x$. Thus, we obtain the boundary and the interpolated samples corresponding to each training sample. Refer to the supplementary for visualizations of both the boundary and interpolated samples in the input space; all of them preserve the class semantics. Our generated boundary samples $x_b$ and interpolated samples $x_{int}$ are used along with the limited training samples $x_o$ to train the denoiser network $D_n$, which we discuss in the subsequent subsection. \subsection{Training of the denoiser network ($D_n$)} \label{subsec:denoiser_training} The denoiser network $D_n$ is attached before the pretrained base classifier $B_c$ to make it suitable for randomized smoothing. Apart from this, we also add a domain discriminator network $D_d$, whose input is the normalized features of the penultimate layer of $B_c$. The discriminator $D_d$ learns to distinguish between the distribution of clean samples and that of the denoised outputs of gaussian perturbed input samples. Motivated by the domain adaptation literature~\cite{ganin2016domain}, we use a gradient reversal layer (GRL) before feeding the normalized penultimate-layer features to the discriminator, which allows a normal forward pass but reverses the direction of the gradient in the backward pass. As a consequence, these negative gradients backpropagate to the denoiser network $D_n$, helping it produce denoised outputs that yield indistinguishable, domain-invariant features on the pretrained classifier $B_c$. Refer to the supplementary for architectural details of the networks $D_n$ and $D_d$. The overall steps involved in the proposed framework (`DE-CROP') are also shown in Fig.~\ref{fig:approach}.
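Before turning to the individual losses, the two generation steps of Sec.~\ref{subsec:generation} can be summarized in PyTorch as below. The attack hyper-parameters ($\epsilon$, step size, number of steps) and the optimizer settings for eq.~\ref{eq7} are illustrative choices of ours, not values prescribed above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def boundary_sample(B_c, x_o, y, eps=8/255, step=2/255, steps=10):
    # Proxy class-boundary sample: an untargeted l_inf PGD attack
    # (Madry et al.) that flips B_c's prediction with a small noise.
    x = x_o.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(B_c(x), y), x)[0]
        x = x.detach() + step * grad.sign()
        x = (x_o + (x - x_o).clamp(-eps, eps)).clamp(0, 1)
    return x

def interpolated_sample(B_c, x_o, x_b, alpha=0.5, steps=100, lr=0.01):
    # Eq. (7): perturb x_o so that its logits match the mixup of the
    # original and boundary logits. B_c stays frozen; only x is updated.
    with torch.no_grad():
        logit_int = alpha * B_c(x_o) + (1 - alpha) * B_c(x_b)
    x = x_o.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(B_c(x), logit_int).backward()   # L_mse of eq. (7)
        opt.step()
    return x.detach()
\end{verbatim}
Both generation steps are performed once before denoiser training, since $B_c$ is fixed.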
The network is trained with our generated data and the limited training data $D_{train}^{lim}$ using different losses aimed at different objectives: label consistency, to ensure correct predictions on $B_c$; and feature similarity between the denoised outputs of gaussian perturbed inputs and the clean original training samples on $B_c$, at both the sample level and the distribution level. The respective losses are described below:\\ \textbf{Ensuring label consistency}: Similar to~\cite{salman2020denoised}, we use the cross entropy loss ($L_{ce}$) to ensure that the labels predicted by the pretrained network $B_c$ on the original clean data and on the denoised output of its gaussian perturbed counterpart are the same. \begin{equation} \begin{split} L_{lc} = 1/N^k\sum\nolimits_{i=1}^{N^k} L_{ce}(softmax(B_c(D_n(\bar{x}_o^i))),\\ label(B_c(x_o^i))) \end{split} \end{equation} \textbf{Enforcing feature similarity at the sample level}: The pretrained base classifier $B_c$, trained on the complete training data $D_{train}$, is highly discriminative. To leverage this for our crafted data, we use a cosine similarity loss on the logits of the $B_c$ network, encouraging the logit features of the denoised outputs of our generated data to be as discriminative as the features of the limited original training samples $D_{train}^{lim}$ by maximizing this loss. \begin{equation} \begin{gathered} L_{cs} = 1/N^{k} \sum\nolimits_{i=1}^{N^k} (\hspace{0.05in}CS(B_c(D_n(\bar{x}_b^i)), B_c(x_o^i)) +\\ CS(B_c(D_n(\bar{x}_{int}^i)), B_c(x_o^i))\hspace{0.05in})\text{,} \hspace{0.1in} \text{s.t.} \hspace{0.05in}CS(w, z) = \frac{w^{T}z}{\norm{w}\norm{z}} \end{gathered} \end{equation} \textbf{Enforcing feature similarity at the distribution level}: Unlike $L_{cs}$, which is applied at the sample level, here we enforce distribution-level matching between the set of our denoised generated data and the set of limited original training data by using the maximum mean discrepancy (MMD)~\cite{gretton2012kernel} loss on the normalized pre-logit layer of the $B_c$ network. \begin{equation} \begin{gathered} L_{mmd} = MMD(F_{B_c}(D_n(\bar{x}_b)),F_{B_c}(x_o)) +\\ MMD(F_{B_c}(D_n(\bar{x}_{int})),F_{B_c}(x_o)) \end{gathered} \end{equation} Moreover, we also train the domain discriminator network $D_d$ (parameterized by $\phi$) using the binary cross entropy loss ($L_{bce}$) to distinguish the distribution of gaussian perturbed samples from that of clean samples. \begin{equation} \begin{gathered} L_{dd} = \sum_{x^i \in D_{train}^{lim}\cup \hspace{0.01in}D_n(\bar{D}_{train}^{lim})} L_{bce}(D_d(F_{B_c}(x^i)),d^i) \end{gathered} \end{equation} Here, $d^i=1$ if $x^i \in D_{train}^{lim}$ and $d^i=0$ if $x^i \in D_n(\bar{D}_{train}^{lim})$. The negative gradients are backpropagated via the GRL~\cite{ganin2015unsupervised} (which multiplies the calculated gradient by $-1$) to update the parameters $\theta$ of the denoiser network $D_n$, such that the features of the limited training data $D_{train}^{lim}$ and of its corresponding denoised gaussian-corrupted data ($D_n(\bar{D}_{train}^{lim})$) on the network $B_c$ are domain invariant. Hence, the total loss can be written as follows: \begin{equation} \label{eq12} L(D_n^\theta, D_d^\phi) = \beta_1 L_{lc} - \beta_2 L_{cs} + \beta_3 L_{mmd} + \beta_4 L_{dd} \end{equation} At test time, the trained denoiser network $D_n$ with optimal parameters $\theta^{*}$, prepended to the base classifier $B_c$, is used for evaluation.
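A minimal sketch of the training-specific pieces is given below: the gradient reversal layer, a simple RBF-kernel estimate of MMD, and the total objective of eq.~\ref{eq12} with the weights $(\beta_1,\beta_2,\beta_3,\beta_4)=(1,4,4,1)$ used in our experiments. The kernel bandwidth is an assumed hyper-parameter of the sketch.
\begin{verbatim}
import torch

class GradReverse(torch.autograd.Function):
    # Gradient reversal layer (Ganin et al.): identity in the forward
    # pass; multiplies the gradient by -1 in the backward pass, so the
    # denoiser is updated to fool the domain discriminator D_d.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def mmd_rbf(f_a, f_b, gamma=1.0):
    # Biased RBF-kernel estimate of MMD^2 between two feature batches
    # (Gretton et al.); gamma is an assumed bandwidth.
    def k(a, b):
        return torch.exp(-gamma * torch.cdist(a, b).pow(2)).mean()
    return k(f_a, f_a) + k(f_b, f_b) - 2 * k(f_a, f_b)

def total_loss(L_lc, L_cs, L_mmd, L_dd, betas=(1, 4, 4, 1)):
    # Eq. (12); L_cs is maximized, hence its negative sign.
    b1, b2, b3, b4 = betas
    return b1 * L_lc - b2 * L_cs + b3 * L_mmd + b4 * L_dd
\end{verbatim}
In use, the normalized penultimate features are passed through \texttt{GradReverse.apply} before the discriminator $D_d$, so that minimizing $L_{dd}$ with respect to $\phi$ simultaneously pushes $\theta$ toward domain-invariant features.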
\section{Experiments} \label{sec:experiments} We demonstrate the effectiveness of our proposed approach (DE-CROP) on two widely-popular image-classification datasets, namely CIFAR-10 \cite{krizhevsky2009learning} and Mini-ImageNet \cite{vinyals2016matching}. We limit the training set size of the above datasets by randomly selecting $1\%$ and $10\%$ of the samples respectively (ensuring class balance) from $D_{train}$. Our baseline corresponds to training the denoiser using the $L_{lc}$ loss (similar to~\cite{salman2020denoised}). We fix the selected samples and use ResNet-$110$ and ResNet-$12$ \cite{he2016deep} networks (for CIFAR-$10$ and Mini-ImageNet respectively) as our pretrained classifiers ($B_c$), with $\sigma$ set to $0.25$ for all our ablations and state-of-the-art comparisons, unless otherwise specified. Refer to the supplementary for additional ablations on different values of the noise strength $\sigma$ ($0.12$, $0.50$, $1.00$), the quantity of limited training data $D_{train}^{lim}$ ($5\%$, $10\%$, $20\%$, $100\%$) and the choice of architecture for the pretrained classifier $B_c$. We set the weights in the final loss (eq.~\ref{eq12}), i.e., $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$, to $1$, $4$, $4$, $1$ respectively. In the subsequent subsections, we first show the limited benefit of conventional techniques, followed by the benefit of each component of DE-CROP and a comparison with state-of-the-art techniques. \subsection{Improving Certification on Limited Training Data via Conventional Techniques} $D_{n}$ trained with the $L_{lc}$ objective proposed by \cite{salman2020denoised} severely overfits in the presence of limited data, resulting in poor certified accuracy (refer Fig.~\ref{fig:motivation}). In this section, we explore whether conventional supervised-learning techniques such as explicit regularization and data augmentation are equipped to meaningfully improve certified accuracy when dealing with limited data.
\begin{table}[htp] \centering \scalebox{0.8}{ \begin{tabular}{|c|c|ccc|} \hline \multirow{2}{*}{\textbf{Method}} & \textbf{\begin{tabular}[c]{@{}c@{}}Standard\\ Certified\end{tabular}} & \multicolumn{3}{c|}{\textbf{Robust Certified}} \\ \cline{2-5} & (r=0.00) & \multicolumn{1}{c|}{(r=0.25)} & \multicolumn{1}{c|}{(r=0.50)} & (r=0.75) \\ \hline Without Reg. & 20.60 & \multicolumn{1}{c|}{4.60} & \multicolumn{1}{c|}{0.80} & 0.00 \\ \hline $L_{1}$ Reg. & 22.60 & \multicolumn{1}{c|}{5.80} & \multicolumn{1}{c|}{0.60} & 0.00 \\ \hline $L_{2}$ Reg. & \textbf{27.80} & \multicolumn{1}{c|}{\textbf{7.80}} & \multicolumn{1}{c|}{\textbf{0.80}} & 0.00 \\ \hline \end{tabular} } \caption{Effect of adding weight decay regularizers ($L_{1}$ and $L_{2}$) on limited data certification against adversarial attacks. $L_{2}$ regularization obtains better certified standard and robust accuracies.} \label{tab:table1} \end{table}
In Table~\ref{tab:table1}, we regularize $D_{n}$ by applying $L_{1}$ and $L_{2}$ reg. (regularization) \cite{hanson1988comparing}. We observe that $L_{2}$ reg. improves certified performance, whereas $L_{1}$ reg. only results in a marginal improvement. Although $L_{2}$ reg. limits the overfitting in the presence of limited data, it cannot improve the inherent data diversity. Thus, in Table~\ref{tab:table2}, we explore whether traditional affine and specialized augmentation methods (such as mixup \cite{zhang2017mixup} and cutmix \cite{yun2019cutmix}), when combined with $L_{2}$ reg., further boost performance.
Rows $2$-$4$ in Table~\ref{tab:table2} correspond to affine transformations where the intensity of the augmentation is increased progressively (i.e., policy $1$ $<$ policy $2$ $<$ policy $3$). We also experiment with the widely-popular augmentation techniques `mixup' and `cutmix'. Surprisingly, we observe that policy $1$ (row $2$; the lightest augmentation) performs best, followed closely by no augmentation. We hypothesize that if the augmentation policy is too aggressive, $B_c$ can make incorrect predictions, leading to noisy gradients for $D_n$ to learn from and resulting in poor certified accuracy.
\begin{table}[htp] \centering \scalebox{0.8}{ \begin{tabular}{|c|c|ccc|} \hline \multirow{2}{*}{\textbf{Method}} & \textbf{\begin{tabular}[c]{@{}c@{}}Standard\\ Certified\end{tabular}} & \multicolumn{3}{c|}{\textbf{Robust Certified}} \\ \cline{2-5} & (r=0.00) & \multicolumn{1}{c|}{(r=0.25)} & \multicolumn{1}{c|}{(r=0.50)} & (r=0.75) \\ \hline No Aug. & 27.80 & \multicolumn{1}{c|}{7.80} & \multicolumn{1}{c|}{0.80} & 0.00 \\ \hline \begin{tabular}[c]{@{}c@{}}Aug.\\ (policy 1)\end{tabular} & \textbf{29.80} & \multicolumn{1}{c|}{\textbf{9.20}} & \multicolumn{1}{c|}{\textbf{1.40}} & 0.00 \\ \hline \begin{tabular}[c]{@{}c@{}}Aug.\\ (policy 2)\end{tabular} & 26.40 & \multicolumn{1}{c|}{7.40} & \multicolumn{1}{c|}{1.00} & 0.00 \\ \hline \begin{tabular}[c]{@{}c@{}}Aug.\\ (policy 3)\end{tabular} & 21.00 & \multicolumn{1}{c|}{3.00} & \multicolumn{1}{c|}{0.20} & 0.00 \\ \hline Mixup & 24.20 & \multicolumn{1}{c|}{5.40} & \multicolumn{1}{c|}{0.60} & 0.00 \\ \hline Cutmix & 19.80 & \multicolumn{1}{c|}{3.20} & \multicolumn{1}{c|}{0.00} & 0.00 \\ \hline \end{tabular} } \caption{Investigating the effect of augmentation at different intensity levels (policies), and of the mixup and cutmix strategies, in minimizing overfitting on limited training data. The light-intensity augmentation (policy 1) yields a marginal improvement against $l_2$ perturbations of different radii compared to the no-augmentation strategy.} \label{tab:table2} \end{table}
Thus, a combination of policy $1$ and $L_{2}$ reg. improves the certified standard and robust accuracies. We use this combination as a baseline for further experiments, upon which we make orthogonal improvements. \subsection{Effectiveness of our Generated Data} \label{gen_data} One of the critical reasons for the drop in certified accuracy is the lack of diversity in the limited data. We address this problem by generating synthetic samples that provide diversity in the feature space. As elaborated in the proposed approach (refer Sec.~\ref{subsec:generation}), adversarial samples (termed boundary samples, i.e., $x_b$) serve as good candidates for this task since they flip the classifier prediction with the minimum possible perturbation, thus allowing $D_n$ to train on samples from less-dense boundary regions. Moreover, we also generate samples whose features are an interpolation between those of the original and boundary samples, crafted by minimizing the $L_{mse}$ loss between the interpolated logits and the generated sample's logits (as described by eq.~\ref{eq7}). We empirically validate our motivation in Table~\ref{tab:table3}, where we observe that using $L_{cs}$ on both $x_{b}^{i}$ and the corresponding $x_{int}^{i}$ leads to a massive improvement over the baseline. Interestingly, the gain when using only $x_{b}^{i}$ is comparatively modest.
This observation further reinforces our intuition regarding the complementary nature of the information provided by $x_{b}^{i}$ and $x_{int}^{i}$. Moreover, contrary to adversarial training, which generates adversarial samples at every iteration during training, we only need to generate the boundary and interpolated samples once (amounting to a negligible increase in training time), as the pretrained classifier $B_c$ is fixed and not trainable. \vspace{-0.15in}
\begin{table}[htp] \centering \scalebox{0.8}{ \begin{tabular}{|c|c|ccc|} \hline \multirow{2}{*}{\textbf{Method}} & \textbf{\begin{tabular}[c]{@{}c@{}}Standard \\ Certified\end{tabular}} & \multicolumn{3}{c|}{\textbf{Robust Certified}} \\ \cline{2-5} & (r=0.00) & \multicolumn{1}{c|}{(r=0.25)} & \multicolumn{1}{c|}{(r=0.50)} & (r=0.75) \\ \hline Baseline & 29.80 & \multicolumn{1}{c|}{9.20} & \multicolumn{1}{c|}{1.40} & 0.00 \\ \hline \begin{tabular}[c]{@{}c@{}}Ours (with boundary\\ samples)\end{tabular} & 31.60 & \multicolumn{1}{c|}{7.80} & \multicolumn{1}{c|}{1.00} & 0.20 \\ \hline \begin{tabular}[c]{@{}c@{}}Ours (with boundary +\\ interpolated samples)\end{tabular} & \textbf{48.80} & \multicolumn{1}{c|}{\textbf{22.00}} & \multicolumn{1}{c|}{\textbf{6.00}} & \textbf{0.80} \\ \hline \end{tabular} } \caption{Benefit of our generated samples in improving certification. Using boundary and interpolated samples, we obtain a significant boost in certified accuracy on original and $l_2$-perturbed data.} \label{tab:table3} \end{table} \vspace{-0.15in}
\subsection{Distribution Alignment: Enhancing Certification in Limited Data Setup} \label{dist_align} In Table~\ref{tab:table4}, we observe that using the domain discriminator ($D_d$) along with the previously introduced $L_{cs}$ loss works quite well: the standard certified accuracy improves by over $5\%$ and the certified robust accuracy improves by $4\%$ (at r=$0.25$) compared to using only $L_{cs}$. We also explore whether equipping $D_n$ with an explicit distribution-discrepancy loss such as $L_{mmd}$ works well in combination with $D_d$. Intuitively, applying $L_{mmd}$ along with $L_{cs}$ should make the job of $D_d$ harder, leading to a better $D_d$ and, consequently, to improved feature representations of $B_c$ via the negative gradients of the domain-discriminator loss ($L_{dd}$). We indeed observe this in Table~\ref{tab:table4}, where using $L_{mmd}$ in combination with the $L_{cs}$ and $L_{dd}$ setup leads to an improvement in both standard and robust accuracies (across radii).
Thus, using $L_{mmd}$ and $L_{dd}$ in conjunction with the previously discussed $L_{cs}$ and $L_{lc}$ constitutes our final approach: DE-CROP. \begin{table}[htp] \centering \scalebox{0.8}{ \begin{tabular}{|c|c|ccc|} \hline \multirow{2}{*}{\textbf{Method}} & \textbf{\begin{tabular}[c]{@{}c@{}}Standard\\ Certified\end{tabular}} & \multicolumn{3}{c|}{\textbf{Robust Certified}} \\ \cline{2-5} & (r=0.00) & \multicolumn{1}{c|}{(r=0.25)} & \multicolumn{1}{c|}{(r=0.50)} & (r=0.75) \\ \hline Baseline & 29.80 & \multicolumn{1}{c|}{9.20} & \multicolumn{1}{c|}{1.40} & 0.00 \\ \hline Ours (instance level) & 48.80 & \multicolumn{1}{c|}{22.00} & \multicolumn{1}{c|}{6.00} & 0.80 \\ \hline \begin{tabular}[c]{@{}c@{}}Ours (instance + \\ distribution level via \\ discriminator)\end{tabular} & 54.00 & \multicolumn{1}{c|}{26.00} & \multicolumn{1}{c|}{6.80} & 1.80 \\ \hline \begin{tabular}[c]{@{}c@{}}Ours (instance + \\ distribution level via \\ discriminator and MMD)\end{tabular} & \textbf{57.60} & \multicolumn{1}{c|}{\textbf{27.20}} & \multicolumn{1}{c|}{\textbf{9.20}} & \textbf{2.20} \\ \hline \end{tabular} } \caption{Besides performance gains with instance-level feature matching, we observe further improvements in certified standard and robust accuracy when the distributions of denoised and clean data are aligned in feature space via the domain discriminator and MMD.} \label{tab:table4} \end{table} \subsection{Comparison with state-of-the-art} In this section, we compare the effectiveness of our approach DE-CROP against state-of-the-art robustness certification techniques, namely denoised smoothing by Salman \etal~\cite{salman2020denoised} and Gaussian augmentation by Cohen \etal~\cite{cohen2019certified}. Since Salman \etal~\cite{salman2020denoised} do not report performance on limited-data scenarios in their paper, we use the code from their official implementation\footnote{https://github.com/microsoft/denoised-smoothing} to evaluate their proposed method ($D_n$ with $L_{lc}$) in the presence of only $1\%$ (for CIFAR-$10$) and $10\%$ (for Mini-ImageNet) of $D_{train}$. Similarly, for Cohen \etal~\cite{cohen2019certified}, we re-train the classifier with Gaussian augmentation only on the available limited training data. Although Cohen \etal~\cite{cohen2019certified}'s technique is infeasible for our problem setup, as the API provider may not prefer re-training and replacing the deployed model, we still compare our performance, as Gaussian augmentation often outperforms previous denoiser-based approaches in the presence of the full training data. As shown in Fig.~\ref{fig:sota}, our proposed approach DE-CROP comfortably outperforms Salman \etal~\cite{salman2020denoised} on CIFAR-$10$, improving the certified standard accuracy by $27\%$ and consistently improving robust accuracy across radii. The performance of Cohen \etal~\cite{cohen2019certified} drops to $10\%$ across all radii, indicating that the network behaves like a random baseline (i.e. predicting each class with equal probability irrespective of the input). We observe similar trends for Mini-ImageNet, where we comfortably outperform both Salman \etal~\cite{salman2020denoised} and Cohen \etal~\cite{cohen2019certified}, further demonstrating the wide applicability of our approach. \begin{figure}[htp] \centering \centerline{\includegraphics[width=0.49\textwidth]{figures/sota_result_per.pdf}} \caption{Performance comparison of our approach (\textit{DE-CROP}) with other methods. We comfortably outperform them on both datasets.
We also compare with Cohen \etal, where the classifier is trained from scratch, unlike our setup where retraining is not feasible. } \label{fig:sota} \end{figure} \vspace{-0.11in} \subsection{Certification of Black Box Classifiers with \newline Limited Training Data} In previous sections, we assumed white-box access to the pre-trained base classifier $B_c$, i.e., we could backpropagate gradients through $B_c$ to optimize the denoiser ($D_n$). However, this may not always be the case, as the API provider can limit access to only $B_c$'s predictions (i.e. black-box) for proprietary reasons. Since the black-box setup restricts the gradient information of $B_c$, we first use a black-box model stealing technique \cite{barbalau2020black} to train a surrogate model $S_m$. We then use $S_m$, which allows gradient backpropagation, to train $D_n$ using our proposed approach DE-CROP (refer Fig.~\ref{fig:approach}). Finally, for evaluation, we use the denoiser (trained via $S_m$) to certify the robustness of the black-box classifier $B_c$. \begin{figure}[htp] \centering \centerline{\includegraphics[width=0.35\textwidth]{figures/black_box_excel.pdf}} \caption{Investigating the efficacy of our method in the black-box scenario (no access to pretrained model weights). We observe a minor drop in performance compared to the white-box setup for both certified standard accuracy ($l_2^r$=$0.00$) and robust accuracy ($l_2^r$=$0.25$).} \label{fig:blackbox} \end{figure} In Fig.~\ref{fig:blackbox}, we compare the certification performance (of $B_c$) using denoisers trained via $S_m$ (`black-box access') to that of the denoiser trained directly on $B_c$ (`white-box access'). We take $B_c$ as AlexNet and two different choices for $S_m$, namely ResNet-$18$ and Half-AlexNet. Our method DE-CROP yields very similar performance in the black-box setting across the different architectures of $S_m$. Also, the performance drop compared to the white-box setting is marginal, highlighting the suitability of our technique even when the pretrained classifier weights are not shared. \section{Conclusion} We presented our approach (DE-CROP), which, for the first time, solves the problem of providing provable robustness guarantees for a pretrained classifier in the challenging limited-training-data setting. Our method comprises a two-step process: (a) generation of boundary and interpolated samples, ensuring feature diversity, and (b) effective utilization of the generated samples along with the limited training samples to train the denoiser using the proposed losses, which ensure feature similarity between the denoised output and clean data at two different granularities (instance level and distribution level). We validate the efficacy of the generated data as well as the contribution of the individual losses by extensive ablations and experiments across the CIFAR-$10$ and Mini-ImageNet datasets. Moreover, our method works quite well in the black-box setting, as it provides similar certification performance compared to the white-box setup. \vspace{0.1in} \noindent{\textbf{Acknowledgements}} This work is partially supported by a Young Scientist Research Award (Sanction no. 59/20/11/2020-BRNS) from DAE-BRNS, India.
\section{Introduction} The first evidence for an accelerated expansion of the universe at the present epoch was presented in the late 1990s by Riess et al.~\cite{riess98} and Perlmutter et al.~\cite{perlmutter}. A cosmological model including a cosmological constant fit the type Ia supernovae of the considered data sets significantly better than a model without such a constant. In the following years, much effort was put into obtaining larger supernova (SN) data sets of increasing quality \cite{jha,snls,gold07,essence}. On the other hand, a variety of cosmological models have been developed that could be fit to the newest data sets. Those models are characterized by certain parameters which are used to derive a redshift-luminosity relation that can be compared to the observed values. The problem with this so-called dynamical approach is that, without assumptions on the matter and energy content of the universe, it is impossible to test whether there really is a phase of acceleration in the expansion history of the universe. Some authors tried to avoid this problem by taking a kinematical approach, i.e. they only considered the scale factor $a$ and its derivatives, such as the deceleration parameter $q(z)$, without using any model-specific density parameters or a dark energy equation of state. The first kinematical analysis of SN data was done by Turner and Riess \cite{turner}, who considered averaged values $q_1$ for redshift $z<z_1$ and $q_2$ for $z>z_1$, concluding that a present acceleration and a past deceleration are favoured by the data. Other authors tested a variety of special parameterizations of $q(z)$ \cite{gold04,elgaroy} or used $a(t)$ \cite{wang} or the Hubble rate $H(z)$ \cite{john}. Instead of considering a special parameterization, a more general approach was taken by Shapiro and Turner \cite{shapiro}, who expanded the deceleration parameter $q$ into principal components. Rapetti et al.~\cite{rapetti} expanded the jerk parameter $j$ into a series of orthonormal functions. However, one has to be careful when doing a series expansion in $z$, as SNe with a redshift larger than 1 are not within the radius of convergence. This problem can be solved by reparameterizing the redshift \cite{cattoen}. In the present work we will make no assumptions about the content of the universe, nor about the parameterization of $q(z)$ or any other kinematical quantity. Moreover, we do not need to assume the validity of Einstein's equations. Instead, we will only ask whether the hypothesis holds that the expansion of the universe never accelerated between the time when the light was emitted from a SN and today. Our assumption for the test is that the universe is isotropic and homogeneous. This is certainly not true for the real universe in a strict sense. However, we follow the standard approach in assuming that the cosmic structure does not modify the observed SN magnitudes and redshifts apart from random peculiar motion. The basic idea of this analysis has already been presented by Visser \cite{visser} and has been applied to SN data by different groups \cite{santos, gong}. But while Santos et al.~\cite{santos} made some mistakes in their analysis (which we will discuss later in section \ref{concl}) and Gong et al.~\cite{gong} only state that accelerated expansion is ``evident'', we are able to give a quantitative value for this evidence. Additionally, we study the size of systematic effects.
For the fit one usually marginalizes over a function of the absolute magnitude $M$ and the Hubble constant $H_0$, because these two values cannot be determined independently by considering SNe alone. Marginalization is not suitable for our analysis because, in order to do so, a special cosmological model or at least a parameterization of $q(z)$ has to be inserted, which is what we want to avoid. But the SNe can be calibrated by Cepheid measurements, and thus the values of $M$ and $H_0$ are determined. Using this additional information, no marginalization is needed. As there is still a controversy \cite{jackson} about the appropriate calibration method, we will take two quite different results of calibration into account \cite{riess, sandage}. In order to test the robustness of our analysis, we consider two different SN data sets (the 2007 Gold sample \cite{gold07} and the ESSENCE set \cite{essence}), where the data of the ESSENCE set are once obtained by the multicolour light-curve shape (MLCS2k2) \cite{jha} fitting method and once by the spectral adaptive light-curve template (SALT) \cite{guy}. Both calibration methods are applied to each set. The analysis shows that in all cases the data indicate an accelerated expansion. But the confidence level at which acceleration can be stated strongly depends on the data set, the fitting method and the calibration. We begin our analysis by assuming a flat universe, but later also consider the cases of an open and a closed universe. \section{Method} We want to keep our test as model-independent as possible, but still a few assumptions have to be made. We consider inflationary cosmology to be correct. This implies large-scale homogeneity and isotropy of the universe as well as spatial flatness. (Yet, we will give up the assumption of a flat universe later in section \ref{openclosed}.) We also assume that cosmological averaging (here along the line of sight) does not modify the result obtained in a Friedmann-Lema\^{i}tre model. In such a universe, the luminosity distance $d_{\mbox{\scriptsize L}}(z)$ is given by \begin{equation} d_{\mbox{\scriptsize L}}(z) = (1+z) \int_0^z \frac{\rmd \tilde{z}}{H(\tilde{z})}\;. \end{equation} As we are interested in the question of whether the universe accelerates or decelerates at the present epoch, we need to examine the deceleration parameter $q(z)$: if $q$ is positive, the expansion of the universe decelerates; for a negative $q$, it accelerates. $q(z)$ can be expressed in terms of the Hubble parameter $H(z)$: \begin{equation} q(z) = \frac{H'(z)}{H(z)} (1+z) -1 \,, \end{equation} where the prime denotes the derivative with respect to $z$. Integrating this equation yields \begin{equation} \ln \frac{H(z)}{H_0} = \int_0^z\frac{1+q(\tilde{z})}{1+\tilde{z}}\,\rmd \tilde{z}\;. \end{equation} Our null hypothesis is that the expansion of the universe never accelerated, i.e. $q(z)\ge 0$ for all $z$. Under this assumption, the above equation turns into the inequality \begin{equation} \ln \frac{H(z)}{H_0} \ge \int_0^z\frac{1}{1+\tilde{z}}\,\rmd \tilde{z} = \ln (1+z) \end{equation} or $H(z)\ge H_0(1+z)$. Thus, for the luminosity distance we have \begin{equation}\label{dlinequ} d_{\mbox{\scriptsize L}}(z) \le (1+z) \frac{1}{H_0} \int_0^z \frac{\rmd \tilde{z}}{1+\tilde{z}} = (1+z)\frac{1}{H_0}\ln (1+z)\;. \end{equation} In order to test our hypothesis, we consider different data sets of type Ia SNe.
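As a quick numerical illustration of inequality \eref{dlinequ} (not part of the original analysis), the following sketch compares the never-accelerating bound with the luminosity distance of a flat $\Lambda$CDM model; the parameter values $H_0=70$ km/s/Mpc and $\Omega_{\mbox{\scriptsize m}}=0.3$ are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: the q(z) >= 0 bound d_L <= (c/H0)(1+z)ln(1+z) versus a
# flat LambdaCDM model.  H0 and Omega_m are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

H0 = 70.0 / 299792.458          # Hubble constant in 1/Mpc (H0/c)
Om = 0.3                        # assumed matter density

def E(z):                       # dimensionless Hubble rate, flat LCDM
    return np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def dL_lcdm(z):                 # luminosity distance in Mpc
    I, _ = quad(lambda x: 1.0 / E(x), 0.0, z)
    return (1.0 + z) * I / H0

def dL_bound(z):                # never-accelerating upper bound in Mpc
    return (1.0 + z) * np.log(1.0 + z) / H0

for z in (0.2, 0.5, 1.0):
    print(z, dL_lcdm(z) > dL_bound(z))   # True: LCDM violates the bound
\end{verbatim}
An accelerating model therefore predicts luminosity distances above the bound, which is exactly the signature the test looks for in the data.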
If the observed luminosity distance is significantly larger than the luminosity distance of a universe with a constant $q=0$, the hypothesis can be rejected at a high confidence level. Note that the rejection of the hypothesis does not mean that there was no deceleration between the time the light was emitted from a SN and today; it only gives evidence that there was a phase of acceleration at some time. Thus, we cannot determine the transition redshift between deceleration and acceleration. This restriction of our analysis comes from the integral over redshift in the calculation of $d_{\mbox{\scriptsize L}}(z)$. As a first step, the data in the considered sets have to be calibrated consistently. The distance modulus $\mu$ is related to the luminosity distance by \begin{equation} \mu=m-M = 5\log d_{\mbox{\scriptsize L}} +25 \,, \end{equation} where $d_{\mbox{\scriptsize L}}$ is given in units of Mpc. The distance modulus of the SNe Ia in those sets is always given in arbitrary units because the absolute magnitude $M$ cannot be measured independently of the Hubble constant $H_0$. Only the apparent magnitude $m$, and thus the relative distance moduli, are measured at high precision. In order to determine the absolute magnitude, the SNe have to be calibrated by measuring the distance of Cepheids in the host galaxies. Then also the Hubble constant can be determined. But there is still disagreement between different groups about the correct calibration analysis. In this work we will consider two results for $M$ and $H_0$, namely that of Riess et al.~\cite{riess} and that of Sandage et al.~\cite{sandage}. In the following we will refer to these results as the Riess calibration and the Sandage calibration, respectively. The difference between the calibrations comes mainly from the different assumptions on the evolution of the Cepheid period-luminosity relation with host metallicity \cite{jackson}. So, in preparing the data for our analysis, first the distance moduli $\mu_i$ have to be adjusted to the assumed absolute magnitude. We define the magnitude $\Delta\mu_i$ as the observed distance modulus $\mu_i$ of the $i$-th SN minus the distance modulus in a universe with constant deceleration parameter $q=0$ at the same redshift $z_i$: \begin{equation}\label{deltamui} \Delta\mu_i = \mu_i - \mu(q=0) = \mu_i - 5\log\left[ \frac{1}{H_0}(1+z_i)\ln(1+z_i) \right] -25 \,. \end{equation} If the error in redshift and the peculiar velocities of the SNe are already included in the error $\sigma_{\mu_i}$ given in a certain data set, then the error $\sigma_i$ of $\Delta\mu_i$ equals $\sigma_{\mu_i}$. Otherwise, the resulting error of $\Delta\mu_i$ is calculated by: \begin{equation} \sigma_i = \left[\sigma_{\mu_i}^2 + \left( 5\frac{\ln(1+z_i) +1}{(1+z_i)\ln(1+z_i)\ln10} \right)^2 \left(\sigma_z^2 + \sigma_v^2 \right) \right]^{\frac{1}{2}} \,. \end{equation} Let $\mu_{\mbox{\scriptsize th}}(z)$ be the theoretical distance modulus of a cosmological model that describes the expansion of the universe correctly. Then \begin{equation} \Delta\mu_{\mbox{\scriptsize th}}(z_i) = \mu_{\mbox{\scriptsize th}}(z_i) - \mu(q=0) \end{equation} would be the ``true'' value corresponding to the measured SN value $\Delta\mu_i$. The null hypothesis for our test is that the expansion of the universe never accelerated, i.e. $\Delta\mu_{\mbox{\scriptsize th}} \le 0$ for each SN. We reject the hypothesis if the measured value $\Delta\mu_i$ lies above a certain action limit $A_{\mbox{\scriptsize a}}$; otherwise we accept it.
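The two quantities defined above translate directly into a short computation. The sketch below is an illustration under the assumptions that $1/H_0$ in equation \eref{deltamui} is expressed as $c/H_0$ in Mpc and that the peculiar-velocity dispersion enters as a redshift error $\sigma_v/c$; the numerical inputs are placeholders, not values from the data sets.
\begin{verbatim}
# Sketch of Delta mu_i and its error for a single SN.  H0 (km/s/Mpc),
# sigma_v (km/s) and the example inputs are illustrative assumptions.
import numpy as np

c = 299792.458                      # speed of light in km/s

def delta_mu(mu, z, H0=73.0):
    dL_q0 = (c / H0) * (1.0 + z) * np.log(1.0 + z)   # q = 0 distance, Mpc
    return mu - (5.0 * np.log10(dL_q0) + 25.0)

def sigma_delta_mu(sigma_mu, z, sigma_z=0.001, sigma_v=400.0):
    dmu_dz = 5.0 * (np.log(1.0 + z) + 1.0) / ((1.0 + z) * np.log(1.0 + z)
                                              * np.log(10.0))
    return np.sqrt(sigma_mu**2 + dmu_dz**2 * (sigma_z**2 + (sigma_v/c)**2))

print(delta_mu(42.40, 0.5), sigma_delta_mu(0.20, 0.5))
\end{verbatim}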
We want to keep the risk low of concluding a late-time acceleration of the universe when there is in fact no acceleration at all. Therefore the action limit must be relatively high. If we want the confidence level for concluding an accelerated expansion to be 99\%, the action limit must be $A_{\mbox{\scriptsize a}}(99\%)=2.326\sigma_i$. For a confidence level of 95\%, the limit is $A_{\mbox{\scriptsize a}}(95\%)=1.645\sigma_i$. Here we assume that the measured values $\Delta\mu_i$ at a given redshift follow a normal distribution. On the other hand, we can test the hypothesis that the expansion of the universe accelerated at all times from the light emission of a SN until today. This hypothesis can be rejected at a 99\% CL if $\Delta\mu_i$ is below the action limit $A_{\mbox{\scriptsize d}}(99\%)=-2.326\sigma_i$. That would mean that there must have been a phase of deceleration in the late-time universe, but it would not exclude a phase of acceleration. \section{Data Sets} The important parameters of the SNe are obtained by fitting their light curves. The results depend on the fitter that is used. The most common fitter is MLCS2k2 \cite{jha}. Here we will also consider the SALT fitting method \cite{guy}. The main difference between the two methods lies in how the peak luminosity corrections are determined from the colour of the SNe \cite{conley}. While MLCS2k2 assumes that the corrections that have to be made are only due to dust, SALT takes an empirical approach to determine the relation between the colour and the luminosity correction. For the analysis we take the following SN Ia data sets: \begin{description} \item{Gold 2007 (MLCS2k2):} the Gold sample by Riess et al.~(2007) \cite{gold07}, which was obtained by using the MLCS2k2 fitting method \item{ESSENCE (MLCS2k2):} the set given by Wood-Vasey et al.~(2007) \cite{essence}, which includes data from ESSENCE, SNLS \cite{snls} and nearby SNe \cite{jha}, fitted with MLCS2k2 \item{ESSENCE (SALT):} the same set fitted with SALT \end{description} As suggested by Riess et al.~\cite{gold07}, we discarded all SNe with a redshift of $z<0.0233$ from the Gold sample, which leaves us with 182 SNe with $0.0233<z<1.755$. In ESSENCE (MLCS2k2) and ESSENCE (SALT), the SNe with bad light-curve fits were rejected for each fitting method separately. This leaves 162 SNe for ESSENCE (MLCS2k2) and 178 for ESSENCE (SALT) with $0.015<z<1.01$; 153 of the SNe are contained in both sets. Due to the differences between the SNe contained in ESSENCE (MLCS2k2) and ESSENCE (SALT), we will refer to them as different sets in the following. Nevertheless, one should keep in mind that they share a large number of SNe and thus are not independent sets. The Riess calibration yields a value for the V-band magnitude of $M_{\mbox{\scriptsize V}}(t_0)=-19.17\pm 0.07$mag, where $t_0$ is the time of the B-band maximum. For the Gold 2007 set, $M_{\mbox{\scriptsize V}}(t_0)$ was assumed to be $-19.44$mag. Therefore, in order to get the appropriate distance modulus $\mu=m-M$, we need to subtract $0.27$mag from the values given in the Gold sample. From the distance modulus values in the ESSENCE sets, $0.22$mag have to be subtracted, because in these sets a B-band magnitude of $M_{\mbox{\scriptsize B}}=-19.5$mag is assumed and Riess et al.~\cite{riess} give the relation $M_{\mbox{\scriptsize B}}-M_{\mbox{\scriptsize V}}=-0.11$. For this calibration one gets a Hubble constant of $H_0=73\pm 4(\mbox{statistical}) \pm 5(\mbox{systematic})$.
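The one-sided action limits quoted above are simply Gaussian quantiles, and the calibration offsets are fixed magnitude shifts; the following sketch reproduces both kinds of numbers as a pure cross-check, not as new analysis.
\begin{verbatim}
# Action limits as one-sided Gaussian quantiles ...
from scipy.stats import norm

print(norm.ppf(0.99))        # 2.326 -> A_a(99%) = 2.326 sigma_i
print(norm.ppf(0.95))        # 1.645 -> A_a(95%) = 1.645 sigma_i

# ... and the Riess-calibration offsets: the shift applied to each
# catalogued distance modulus is M_set - M_calibration.
M_V_gold_set, M_V_riess = -19.44, -19.17
print(M_V_gold_set - M_V_riess)              # -0.27 mag (Gold sample)
M_B_essence_set, M_B_riess = -19.5, -19.17 - 0.11
print(M_B_essence_set - M_B_riess)           # -0.22 mag (ESSENCE sets)
\end{verbatim}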
The results of Sandage et al.~\cite{sandage} for the SN absolute magnitudes are $M_{\mbox{\scriptsize V}}=-19.46$mag and $M_{\mbox{\scriptsize B}}=-19.49$mag, and for the Hubble constant $H_0=62.3 \pm 1.3$(statistical) $\pm$ 5.0(systematic). So, considering the Sandage calibration, we have to add 0.02mag to the distance moduli of the Gold sample and subtract 0.01mag from the values given in the ESSENCE sets. \section{Results for a flat universe} \subsection{Single SNe} The relevant parameter for our analysis is $\Delta\mu_i$, given by equation \eref{deltamui}. Its values are plotted in figure \ref{deltamufig} for the SNe of the three data sets and the two calibration methods. The curve for a flat $\Lambda$CDM model (with $\Omega_{\mbox{\scriptsize m}}=0.3$) is shown in the plots in order to give the reader a notion of how the redshift dependence of $\Delta\mu$ could look. Most of the data values are positive, which indicates an accelerated expansion. A similar diagram has been presented by Gong et al.~\cite{gong} for single SNe as well as for binned SN data, where they used a combined Gold and ESSENCE data set. They kept the arbitrary value of the absolute magnitude $M$ given in the set and then determined the Hubble constant, by using SNe with $z\le 0.1$, to be $H_0=66.04$. At this point they stopped their analysis, concluding that, due to the large number of SNe that lie above the curve of a universe with $q=0$, accelerated expansion is evident. But the fact that there are also several data points below that curve, and that the errors $\sigma_i$ (with a typical value of 0.23mag for MLCS2k2 and 0.18mag for SALT) are approximately of the same size as the values $\Delta\mu_i$, raises the question of how certain acceleration really is. Thus, we will provide a more quantitative analysis in this work. \begin{figure} \centering \subfloat[Gold 2007 (MLCS2k2), Riess calibration]{ \includegraphics*[width=7.5cm]{gold_riess_1_a.eps}} \hfill \subfloat[Gold 2007 (MLCS2k2), Sandage calibration]{ \includegraphics*[width=7.5cm]{gold_sand_1_b.eps}} \\ \subfloat[ESSENCE (MLCS2k2), Riess calibration]{ \includegraphics*[width=7.5cm]{mlcs2k2_riess_1_c.eps}} \hfill \subfloat[ESSENCE (MLCS2k2), Sandage calibration]{ \includegraphics*[width=7.5cm]{mlcs2k2_sand_1_d.eps}} \\ \subfloat[ESSENCE (SALT), Riess calibration]{ \includegraphics*[width=7.5cm]{salt_riess_1_e.eps}} \hfill \subfloat[ESSENCE (SALT), Sandage calibration]{ \includegraphics*[width=7.5cm]{salt_sand_1_f.eps}} \caption{Magnitude $\Delta\mu_i$, as defined in equation \eref{deltamui}, for the three data sets and the two calibrations. SNe that indicate acceleration at 99\% CL are plotted in red ($\times$), those that indicate deceleration in blue ($\hexstar$). Also shown is the curve for a flat $\Lambda$CDM model with $\Omega_m=0.3$ (which is not a fit to the data). Note that the SALT method leads to a larger spread in $\Delta\mu_i$, whereas the Gold set extends to higher redshifts.} \label{deltamufig} \end{figure} We counted the SNe of each set whose values of $\Delta\mu_i$ lie above the action limit $A_{\mbox{\scriptsize a}}(95\%)=1.645\sigma_i$ and those above $A_{\mbox{\scriptsize a}}(99\%)=2.326\sigma_i$ (which indicates acceleration at a 95\% and a 99\% CL, respectively), as well as those with values below $A_{\mbox{\scriptsize d}}(95\%)=-1.645\sigma_i$ and below $A_{\mbox{\scriptsize d}}(99\%)=-2.326\sigma_i$ (which indicates deceleration). The results are shown in table \ref{acctab1}.
The clearest evidence for an accelerated expansion is given by the ESSENCE (SALT) set with the Riess calibration, with 64 SNe indicating acceleration at a 99\% confidence level and none indicating deceleration. Also in the other sets accelerated expansion is preferred, with the exception of ESSENCE (MLCS2k2) in the Sandage calibration, where neither of the two expansion histories is preferred in the test using the action limit $A(99\%)$. \begin{table} \begin{indented} \lineup \caption{\label{acctab1} Number of SNe indicating acceleration or deceleration at 95\% and 99\% CL for the different data sets and calibrations. Also given is the total number of SNe in each set. The most discrepant results are highlighted.} \item[] \begin{tabular}{@{}lllllll} \br &\centre{2}{Gold 2007 (MLCS2k2)} &\centre{2}{ESSENCE (MLCS2k2)} &\centre{2}{ESSENCE (SALT)} \\ & Riess & Sandage & Riess & Sandage & Riess & Sandage \\\mr acceleration (95\% CL) & \037 & \027 & \037 & \014 & \096 & \053\\ acceleration (99\% CL) & \013 &\0\07 & \011 &\0\0{\bf 3}&\0{\bf 64}&\030 \\\mr deceleration (95\% CL) &\0\02 &\0\04 &\0\03 &\0\07 &\0\01 &\0\07\\ deceleration (99\% CL) &\0\00 &\0\01 &\0\02 &\0\0{\bf 3}&\0\0{\bf 0}&\0\02 \\\mr number of SNe & 182 & 182 & 162 & 162 & 178 & 178 \\ \br \end{tabular} \end{indented} \end{table} It is noticeable that the two light-curve fitting methods for the ESSENCE set yield very different results. (For a discussion of these differences at low redshifts, see \cite{conley}.) This could be an indication that at least one of the two fitting methods does not give the correct result. In order to consider this possibility we take a look at the differences between $\Delta\mu_i$ obtained by SALT and $\Delta\mu_i$ obtained by MLCS2k2. They are shown in figure \ref{mudiff} for the 153 SNe contained in both ESSENCE sets. For some SNe the difference is very large (namely up to one magnitude). In order to see if the two sets are consistent, we need to know how large the difference is in terms of the error $\sigma_i=\left[\sigma^2_i\mbox{(SALT)} + \sigma^2_i\mbox{(MLCS2k2)}\right]^{1/2}$. The result is shown in figure \ref{weightmudiff}. We want the systematic error due to the different fitting methods to be smaller than the statistical error of the observational data. Thus, we discard all SNe with a difference in $\Delta\mu_i$ larger than $1\sigma_i$ and those that are only contained in one of the two ESSENCE sets, which leaves 129 SNe in the sets. Again we counted the SNe indicating acceleration or deceleration for each set and each calibration method. The result is shown in table \ref{acctab2}. The outcome of this test did not change qualitatively by discarding suspicious SNe from the sets. Thus, we will use all the SNe of the ESSENCE sets in the following. \begin{figure} \centering \subfloat[]{\label{mudiff} \includegraphics*[width=7.5cm]{mu_diff_2_a.eps}} \hfill \subfloat[]{\label{weightmudiff} \includegraphics*[width=7.5cm]{mu_diff_sigma_2_b.eps}} \caption{Differences of apparent magnitudes obtained by the SALT and the MLCS2k2 fitting method \subref{mudiff} and these differences divided by $\sigma_i$ \subref{weightmudiff}.} \end{figure} \begin{table} \begin{indented}\lineup \caption{\label{acctab2} Number of SNe indicating acceleration or deceleration for SNe of the ESSENCE sets with $|\Delta\mu_i\mbox{(SALT)}-\Delta\mu_i\mbox{(MLCS2k2)}|/\sigma_i \le 1$. The total number of SNe in each set is 129.
Again, the most discrepant results are highlighted.} \item[]\begin{tabular}{@{}lllll} \br &\centre{2}{ESSENCE (MLCS2k2)} &\centre{2}{ESSENCE (SALT)} \\ & Riess & Sandage & Riess & Sandage \\\mr acceleration (95\% CL) & 33 & 11 & 69 & 30 \\ acceleration (99\% CL) &\08 &\0{\bf 2} &{\bf 40} & 14 \\\mr deceleration (95\% CL) &\01 &\02 &\00 &\06 \\ deceleration (99\% CL) &\00 &\0{\bf 1} &\0{\bf 0} &\01 \\ \br \end{tabular} \end{indented} \end{table} The error of the distance modulus contains the peculiar velocities $v_{\mbox{\scriptsize pec}}$ of the SNe. In the ESSENCE sets, $\sigma_v$ is assumed to be 400km/s for all SNe. We wanted to know how sensitive our test is with respect to changes of the peculiar velocity. Table \ref{acctabvel} shows that varying $\sigma_v$ hardly changes the result, with the exception of ESSENCE (MLCS2k2) in the Sandage calibration, where the number of SNe indicating deceleration at a 95\% CL is 7 for $\sigma_v=400$km/s and 500km/s, but 11 for $\sigma_v=300$km/s. \begin{table} \begin{indented}\lineup \caption{\label{acctabvel} Number of SNe indicating acceleration or deceleration for SNe of the ESSENCE sets with different peculiar velocity dispersions $\sigma_v$.} \item[]\begin{tabular}{@{}llllll} \br &&\centre{2}{ESSENCE (MLCS2k2)} &\centre{2}{ESSENCE (SALT)} \\ $\sigma_v$[km/s] && Riess & Sandage & Riess & Sandage \\\mr 300 &acceleration (95\% CL) & \037 & \014 & \099 & \053 \\ &acceleration (99\% CL) & \011 &\0\03 & \064 & \030 \\ &deceleration (95\% CL) &\0\03 & \011 &\0\01 &\0\07 \\ &deceleration (99\% CL) &\0\02 &\0\03 &\0\00 &\0\02 \\ \mr 400 &acceleration (95\% CL) & \037 & \014 & \096 & \053 \\ &acceleration (99\% CL) & \011 &\0\03 & \064 & \030 \\ &deceleration (95\% CL) &\0\03 &\0\07 &\0\01 &\0\07 \\ &deceleration (99\% CL) &\0\02 &\0\03 &\0\00 &\0\02 \\ \mr 500 &acceleration (95\% CL) & \036 & \013 & \095 & \053 \\ &acceleration (99\% CL) & \010 &\0\03 & \063 & \030 \\ &deceleration (95\% CL) &\0\03 &\0\07 &\0\01 &\0\05 \\ &deceleration (99\% CL) &\0\02 &\0\03 &\0\00 &\0\02 \\ \mr \multicolumn{2}{@{}l}{number of SNe} & 162 & 162 & 178 & 178 \\ \br \end{tabular} \end{indented} \end{table} \subsection{Averaging over SN data} Until now, we have only performed tests for single SNe. The problem in combining these data is that the quantity $\Delta\mu$ depends on the redshift $z$. Thus the data from different redshifts do not have the same mean value. Nevertheless, it is possible to average over $\Delta\mu_i$. The result then, of course, depends on how many SNe from a certain redshift are used to calculate this mean value. Therefore, the average over all SNe of a set does not characterize the function $\Delta\mu(z)$. But still, when combined with its standard deviation, the mean value can provide evidence for acceleration or deceleration. Thus, we define \begin{equation} \overline{\Delta\mu} = \frac{\sum_{i=1}^Ng_i\Delta\mu_i}{\sum_{i=1}^Ng_i} \;, \end{equation} where $g_i=1/\sigma_i^2$. The mean value is calculated in such a way that data points with a small error are weighted more strongly than those with large errors. The standard deviation of the mean value is calculated by \begin{equation} \sigma_{\overline{\Delta\mu}} = \left[\frac{\sum_{i=1}^Ng_i\left(\Delta\mu_i - \overline{\Delta\mu}\right)^2}{(N-1)\sum_{i=1}^Ng_i}\right]^{\frac{1}{2}} \;. \end{equation} We start by averaging $\Delta\mu_i$ over redshift bins of width 0.2.
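The weighted average and its standard deviation defined above can be written in a few lines; the following sketch is an illustration with made-up numbers, not with values from the data sets.
\begin{verbatim}
# Inverse-variance weighted mean of Delta mu and the standard deviation
# of that mean, as defined above (g_i = 1/sigma_i^2).
import numpy as np

def weighted_mean_and_error(delta_mu, sigma):
    dmu, g = np.asarray(delta_mu), 1.0 / np.asarray(sigma)**2
    mean = np.sum(g * dmu) / np.sum(g)
    err = np.sqrt(np.sum(g * (dmu - mean)**2)
                  / ((len(dmu) - 1) * np.sum(g)))
    return mean, err

# Made-up example: the ratio mean/err is the evidence in units of sigma.
m, e = weighted_mean_and_error([0.25, 0.31, 0.18], [0.20, 0.23, 0.18])
print(m / e)
\end{verbatim}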
For all data sets, the averaged value of $\Delta\mu$ increases with redshift (see figure \ref{zbinfig}), as expected for an accelerated expansion. The curve for a flat $\Lambda$CDM model with $\Omega_m=0.3$ has its maximum at $z=1.2$ and then decreases again. Unfortunately, there are not enough SNe at high redshifts to establish a possible decrease of $\Delta\mu$ at some point. The differences in the data points of the two ESSENCE sets again seem to be too large. MLCS2k2 gives smaller values than the SALT fitter for all redshift bins. As $\Delta\mu(z)$ should be close to zero at low redshifts in a homogeneous and isotropic universe, the MLCS2k2 data look more consistent with our assumptions in the Riess calibration, whereas SALT gives the better results in the Sandage calibration. The Gold sample has sensible values in both calibrations. $\overline{\Delta\mu}$ divided by the error $\sigma_{\overline{\Delta\mu}}$ gives the evidence for acceleration in each redshift bin. As can be seen in table \ref{zbin}, the strongest evidence is found at redshifts between 0.4 and 0.8. \begin{figure} \centering \subfloat[Riess calibration]{ \includegraphics*[width=7.5cm]{zbin_riess_3_a.eps}} \hfill \subfloat[Sandage calibration]{ \includegraphics*[width=7.5cm]{zbin_sand_3_b.eps}} \caption{Magnitude $\Delta\mu$ averaged over redshift bins of width 0.2 for different data sets.} \label{zbinfig} \end{figure} \begin{table} \begin{indented}\lineup \caption{\label{zbin} Statistical evidence $\overline{\Delta\mu}/\sigma_{\overline{\Delta\mu}}$ within the given redshift range for a flat and an open universe.} \item[]\begin{tabular}{@{}lllllll} \br &\centre{2}{Gold 2007 (MLCS2k2)} &\centre{2}{ESSENCE (MLCS2k2)} &\centre{2}{ESSENCE (SALT)} \\ z & Riess & Sandage & Riess & Sandage & Riess & Sandage \\ \mr flat universe &&&&&&\\ 0.0 -- 0.2 & 2.1 & 0.4 & 1.6 &\-2.8 & \05.7 & 0.8 \\ 0.2 -- 0.4 & 4.2 & 2.9 & 3.9 & 0.7 & \07.3 & 4.3 \\ 0.4 -- 0.6 & 9.8 & 7.6 & 7.1 & 3.1 & 12.1 & 7.7 \\ 0.6 -- 0.8 & 5.4 & 4.0 & 7.7 & 3.7 & \08.5 & 4.8 \\ 0.8 -- 1.0 & 4.8 & 3.5 & 6.8 & 4.1 & \06.0 & 4.2 \\ 1.0 -- 1.2 & 2.8 & 2.2 &&&& \\ 1.2 -- 1.4 & 2.3 & 1.8 &&&& \\ \mr open universe &&&&&&\\ 0.0 -- 0.2 & 2.1 & 0.4 & 1.5 &\-2.8 &\05.7 & 0.7 \\ 0.2 -- 0.4 & 3.5 & 2.2 & 3.2 &\-0.0 &\06.6 & 3.6 \\ 0.4 -- 0.6 & 7.4 & 5.2 & 5.5 & 1.4 & 10.2 & 5.8 \\ 0.6 -- 0.8 & 2.8 & 1.4 & 4.9 & 0.9 &\05.9 & 2.2 \\ 0.8 -- 1.0 & 1.4 & 0.2 & 3.9 & 1.2 &\04.0 & 2.2 \\ 1.0 -- 1.2 & 0.9 & 0.4 &&&&\\ 1.2 -- 1.4 &\-0.2 &\-0.7\\ \br \end{tabular} \end{indented} \end{table} Next we average over all SNe of each data set with a redshift $z\ge0.2$. We discard SNe with smaller redshift for the following reasons: (a) $\Delta\mu(z)$ is expected to be relatively close to zero for small redshifts; therefore, nearby SNe do not contribute to the evidence for acceleration. (b) Considering only the nearby universe, local effects can modify the results for the distance modulus, as the cosmological principle is not valid on small scales. (c) Another disadvantage of nearby SNe is that they were observed with many different telescopes and thus the resulting systematic error is potentially higher. Table \ref{meantab2} shows the mean values $\overline{\Delta\mu}$ and their standard deviations $\sigma_{\overline{\Delta\mu}}$ evaluated by using only SN data with $z\ge0.2$. $\overline{\Delta\mu}$ is positive for all data sets. $\overline{\Delta\mu}$ divided by $\sigma_{\overline{\Delta\mu}}$ indicates the confidence level at which an accelerated expansion can be stated.
The weakest evidence for acceleration is given by ESSENCE (MLCS2k2) in the Sandage calibration. Here the mean value lies $5.2 \sigma$ above 0, i.e. above the value for a universe that neither accelerates nor decelerates. In the other cases the confidence level is even higher, up to $17.0\sigma$ for ESSENCE (SALT) in the Riess calibration. \begin{table} \begin{indented}\lineup \caption{\label{meantab2} Mean values and standard deviations of $\Delta\mu$ obtained by using only SNe with $z\ge 0.2$ for a flat universe.} \item[]\begin{tabular}{@{}lllllll} \br &\centre{2}{Gold 2007 (MLCS2k2)} &\centre{2}{ESSENCE (MLCS2k2)} &\centre{2}{ESSENCE (SALT)} \\ & Riess & Sandage & Riess & Sandage & Riess & Sandage \\ \mr $\overline{z}$ & \00.63 & 0.63 & \00.54 & 0.54 & \00.51 & \00.51 \\ $\overline{\Delta\mu}$ & \00.2196 & 0.1655 & \00.2398 & 0.1056 & \00.3457 & \00.2115 \\ $\sigma_{\overline{\Delta\mu}}$ & \00.0167 & 0.0167 & \00.0201 & 0.0201 & \00.0203 & \00.0203 \\ $\overline{\Delta\mu}/\sigma_{\overline{\Delta\mu}}$ & 13.1 & 9.9 & 11.9 & 5.2 & 17.0 & 10.4 \\ \br \end{tabular} \end{indented} \end{table} \section{Open and closed universe}\label{openclosed} Although there are good reasons to believe that the universe is flat, we give up this assumption in the following section. In an open universe the luminosity distance of a universe that neither accelerates nor decelerates is given by \begin{equation} d_{\mbox{\scriptsize L}}(z) = \frac{1+z}{H_0\sqrt{\Omega_k}} \sinh\left( \sqrt{\Omega_k}\ln(1+z)\right) \end{equation} and thus depends on the density parameter of the scalar curvature $\Omega_k$. $d_{\mbox{\scriptsize L}}$ increases with increasing $\Omega_k$, and thus the evidence for acceleration becomes weaker. As we are interested in the lower limit of this evidence, we have to take the highest possible value for the scalar curvature, i.e. $\Omega_k=1$, which corresponds to an empty universe. (Here we allow ourselves to make use of the Einstein equation.) Then equation \eref{deltamui} for $\Delta\mu_i$ changes to \begin{equation} \Delta\mu_i = \mu_i - \mu(q=0) = \mu_i - 5\log\left[ \frac{1}{H_0}(1+z_i)\sinh[\ln(1+z_i)] \right] -25 \,. \end{equation} \begin{table} \begin{indented}\lineup \caption{\label{opentab} Statistical evidence $\overline{\Delta\mu}/\sigma_{\overline{\Delta\mu}}$ for an open universe (obtained by using SNe within the redshift range $0.2\le z <1.2$), a flat and a closed universe ($0.2\le z$).} \item[]\begin{tabular}{@{}lllllll} \br &\centre{2}{Gold 2007 (MLCS2k2)} &\centre{2}{ESSENCE (MLCS2k2)} &\centre{2}{ESSENCE (SALT)} \\ & Riess & Sandage & Riess & Sandage & Riess & Sandage \\ \mr open universe & \08.0 & \04.9 & \08.8 & 1.8 & 13.8 & \07.2 \\ flat universe & 13.1 & \09.9 & 11.9 & 5.2 & 17.0 & 10.4 \\ closed universe & 13.1 & \09.9 & 11.9 & 5.2 & 17.0 & 10.4 \\\br \end{tabular} \end{indented} \end{table} The evidence for accelerated expansion is then calculated in the same way as for a flat universe. The result obtained by averaging over redshift bins is shown in table \ref{zbin}. For the overall average we only used SNe between redshift 0.2 and 1.2, because including higher redshifts would weaken the evidence. This is due to the fact that the values of $\Delta\mu_i$ become negative when the phase of deceleration within the redshift range over which one integrates is large enough. The result is shown in table \ref{opentab}. As could be expected, the evidence is now much weaker than for a flat universe. But we still find a hint of acceleration at 1.8$\sigma$.
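To see explicitly how the $\Omega_k=1$ reference distance weakens the evidence, the following sketch (same illustrative unit conventions as the earlier snippets) compares the two $q=0$ reference distances.
\begin{verbatim}
# For Omega_k = 1, sinh(ln(1+z)) > ln(1+z), so the q = 0 reference
# distance grows and Delta mu_i shrinks -- hence the weaker evidence.
import numpy as np

def dL_q0_flat(z, H0_inv=1.0):
    return H0_inv * (1.0 + z) * np.log(1.0 + z)

def dL_q0_open(z, H0_inv=1.0):          # empty universe, Omega_k = 1
    return H0_inv * (1.0 + z) * np.sinh(np.log(1.0 + z))

for z in (0.2, 0.5, 1.0):
    print(z, dL_q0_open(z) / dL_q0_flat(z))   # ratio > 1, grows with z
\end{verbatim}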
For a closed universe we have a different situation: here $d_{\mbox{\scriptsize L}}$ decreases with increasing spatial curvature. Thus the lower limit of the evidence for acceleration is given in the case of the lowest possible curvature, which corresponds to a flat universe. \section{Conclusion}\label{concl} We tested the cosmic expansion without specifying any density parameters or parameterizing kinematical quantities. This was not possible without assuming a certain calibration of $M$ and $H_0$. Therefore, we considered two very different calibrations. We also considered two different data sets and two different light-curve fitters, and varied the peculiar velocity dispersion. Using single SNe for the test already gives a clear indication of acceleration in a flat universe, although a few SNe strongly favour deceleration. We find large systematic effects already at the level of individual SNe. Similar analyses have already been done by other groups \cite{santos, gong}. But note that the work of Santos et al.~\cite{santos} exhibits two major shortcomings. First, they did not calibrate the SN data consistently: they took a certain value for $H_0$ but kept the arbitrary value of the absolute magnitude $M$ given in the set, which led to distance moduli that are too high. Thus, they concluded erroneously that there was a recent phase of super-acceleration. The second problem is that they did not realize that, due to the integration over redshift, this method is not suitable for determining the transition redshift $z_{\mbox{\scriptsize t}}$ between deceleration and acceleration. Thus, their value of $z_{\mbox{\scriptsize t}}$ does not hold. The shortcoming of the work of Gong et al.~\cite{gong} is that they did not apply any statistics at all. Instead of only considering single SNe, we obtained a more significant result by averaging over the SN data of each set. Here it is justified to discard nearby SNe in order to decrease systematics from using different telescopes or from local effects. We argue that we can reject the null hypothesis of no acceleration for a spatially flat, homogeneous and isotropic universe at high confidence ($>5\sigma$). However, the results show large differences for each data set, calibration and light-curve fitter. For example, changing from the Sandage to the Riess calibration in a flat universe using ESSENCE (MLCS2k2) increases the evidence for acceleration from 5.2$\sigma$ to 11.9$\sigma$. For the two different light-curve fitters applied to the ESSENCE SNe using the Riess calibration, we get values of 11.9$\sigma$ and 17.0$\sigma$. Wood-Vasey et al.~argue that the differences due to different light-curve fitters are not important, as they disappear when marginalization is applied \cite{essence}. This is true if the data are only used to determine the parameter values of a certain cosmological model, as in this case no calibration of $M$ and $H_0$ is needed. But nevertheless, the fitters should give the same result for a given absolute magnitude $M$. If this is not the case, a systematic error in the determination of the apparent magnitude $m$ is introduced by at least one of the fitters. Systematics due to different fitters have also been described in \cite{conley}, and those within the Gold sample in \cite{jassal1,jassal2,nesseris}. The different results obtained for the two calibrations are very surprising. Our test only depends on the reduced distance modulus $\mathcal{M}=M-5\log H_0+25$ and not on the absolute magnitude or the Hubble constant individually.
(Note that this is the quantity which is marginalized over when fitting cosmological parameters.) Starting from different calibrations $M_1$ and $M_2$ leads to different values of $H_0$, determined by the observation of SNe, in such a way that $\mathcal{M}$ should in principle be the same for both values of $M$. Thus our test should not depend on the calibration. However, the fact that the calibration changes our result significantly can be explained as follows: $H_0$ is determined by observing SNe. Riess et al.~and Sandage et al.~use different sets of SNe and different fitting methods for this determination. The systematic errors due to different sets and fitters then of course also influence the result for the Hubble constant, leading to different values $\mathcal{M}_1$ and $\mathcal{M}_2$. For a conservative conclusion, we need to take the set that gives the weakest evidence for acceleration, namely ESSENCE (MLCS2k2) in the Sandage calibration. Using this set, accelerated expansion can be stated at 5$\sigma$ if we assume a spatially flat or closed universe. In an open universe the evidence is much weaker, namely 1.8$\sigma$. These results only hold if the correct analysis of SNe is somewhere in the range of the cases we considered here. But as we observe enormous systematic effects in our test, we cannot be sure of this assumption. Thus, it is a major issue to better understand how SNe have to be observed and analyzed. Remember that the results of this work are only valid if the universe is homogeneous and isotropic. The situation changes dramatically if we give up those assumptions. There exist enormously large structures in the universe; an example is the Sloan Great Wall with an extension of $\sim$400 Mpc \cite{gott}. Besides, there are also large voids and superclusters on a 100 Mpc scale. This roughness of the universe could spoil the validity of equation (1). Indeed it is possible to construct inhomogeneous models that can describe the observational data without the need for an average acceleration \cite{celerier,havard,enqvist,ishak}. Some observational evidence for inhomogeneity and anisotropy in SN Hubble diagrams has recently been presented by \cite{dyer,weinhorst}, probably due to large-scale bulk motion and perhaps systematic effects. A non-trivial scale dependence of the Hubble rate due to the so-called cosmological backreaction \cite{buchert,schwarz,rasanen,kolb,vanderveld,wiltshire} has been shown in \cite{li}. This averaging effect can also mimic curvature effects \cite{li1} and thus strongly influence the reconstruction of the dark energy equation of state, as has been shown in \cite{bassett}. Thus we think a major task must be to establish the acceleration of the universe independently of the assumption of strict homogeneity and isotropy. \ack We thank Dragan Huterer, Thanu Padmanabhan, Aleksandar Raki\'{c}, Adam Riess and Bastian Weinhorst for discussions, comments and references to the literature. This work is supported by the DFG under grant GRK 881. \section*{References}
\section{Introduction} A drawing of a graph $G$ in the plane is a mapping in which every vertex of $G$ is mapped to a point in the plane, and every edge to a continuous curve connecting the images of its endpoints. We assume that no three curves meet at the same point, and no curve contains an image of any vertex other than its endpoints. A \emph{crossing} in such a drawing is a point where the images of two edges intersect, and the \emph{crossing number} of a graph $G$, denoted by $\optcro{G}$, is the smallest number of crossings achievable by any drawing of $G$ in the plane. The goal in the {\sf Minimum Crossing Number}\xspace problem is to find a drawing of the input graph $G$ with the minimum number of crossings. We denote by $n$ the number of vertices in $G$, and by $d_{\mbox{\textup{\footnotesize{max}}}}$ its maximum vertex degree. The concept of the graph crossing number dates back to 1944, when P\'al Tur\'an posed the question of determining the crossing number of the complete bipartite graph $K_{m,n}$. This question was motivated by improving the performance of workers at a brick factory, where Tur\'an was working at the time (see Tur\'an's account in \cite{turan_first}). Later, Anthony Hill (see~\cite{Guy-complete-graphs}) posed the question of computing the crossing number of the complete graph $K_n$, and Erd\H{o}s and Guy~\cite{erdos_guy73} noted that \emph{``Almost all questions one can ask about crossing numbers remain unsolved.''} Since then, the problem has become a subject of intense study, with hundreds of papers written on the subject (see, e.g., the extensive bibliography maintained by Vrt'o \cite{vrto_biblio}). Despite this enormous stream of results and ideas, some of the most basic questions about the crossing number problem remain unanswered. For example, the crossing number of $K_{11}$ was established just a few years ago \cite{K11}, while the answer for $K_t$, $t\geq 13$, remains elusive. We note that in general $\optcro{G}$ can be as large as $\Omega(n^4)$, for example for the complete graph. In particular, one of the famous results in this area, due to Ajtai et al.~\cite{ajtai82} and Leighton~\cite{leighton_book}, states that if $|E(G)|\geq 4n$, then $\optcro{G}=\Omega(|E(G)|^3/n^2)$. In this paper we focus on the algorithmic aspect of the problem. The first non-trivial algorithm for {\sf Minimum Crossing Number}\xspace was obtained by Leighton and Rao \cite{LR}, who combined their breakthrough result on balanced separators with the techniques of Bhatt and Leighton~\cite{bhatt84} for VLSI design to obtain an algorithm that finds a drawing of any bounded-degree $n$-vertex graph with at most $O(\log^4 n) \cdot (n + \optcro{G})$ crossings. This bound was later improved to $O(\log^3 n) \cdot (n+\optcro{G})$ by Even, Guha and Schieber \cite{EvenGS02}, and the new approximation algorithm of Arora, Rao and Vazirani~\cite{ARV} for Balanced Cut gives a further improvement to $O(\log^2 n) \cdot (n+\optcro{G})$, thus implying an $O(n \cdot \log^2 n)$-approximation for {\sf Minimum Crossing Number}\xspace on bounded-degree graphs. This result can also be extended to general graphs with maximum vertex degree $d_{\mbox{\textup{\footnotesize{max}}}}$, where the approximation factor becomes $O(n\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}})\cdot \log^2n)$.
Chuzhoy, Makarychev and Sidiropoulos~\cite{CMS10} have recently improved this result to an $O(n\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}})\cdot \log^{3/2} n)$-approximation. On the negative side, the problem was shown to be NP-complete by Garey and Johnson \cite{crossing_np_complete}, and remains NP-complete even on cubic graphs~\cite{Hlineny06a}. More surprisingly, even in the very restricted case where the input graph $G$ is obtained by adding a single edge to a planar graph, the problem is still NP-complete~\cite{cabello_edge}. The NP-hardness proof of~\cite{crossing_np_complete}, combined with the inapproximability result for Minimum Linear-Arrangement \cite{Ambuhl07}, implies that there is no PTAS for {\sf Minimum Crossing Number}\xspace unless NP has randomized subexponential-time algorithms. To summarize, although current lower bounds do not rule out the possibility of a constant-factor approximation for the problem, the state of the art, prior to this work, only gives an $\tilde O(n\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}))$-approximation. In view of this glaring gap in our understanding of the problem, a natural question is whether we can obtain good algorithms for the case where the optimal solution cost is low --- arguably, the most interesting setting for this problem. A partial answer was given by Grohe~\cite{Grohe04}, who showed that the problem is fixed-parameter tractable. Specifically, Grohe designed an exact $O(n^2)$-time algorithm for the case where the optimal solution cost is bounded by a constant. Later, Kawarabayashi and Reed \cite{KawarabayashiR07} showed a linear-time algorithm for the same setting. Unfortunately, the running time of both algorithms depends super-exponentially on the optimal solution cost. Our main result is an efficient randomized algorithm that, given any $n$-vertex graph with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, produces a drawing of $G$ with $O\left ((\optcro{G})^{10}\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$ crossings with high probability. In particular, we obtain an $O\left (n^{9/10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$-approximation for general graphs, and an $\tilde{O}(n^{9/10})$-approximation for bounded-degree graphs, thus breaking the long-standing barrier of $\tilde{O}(n)$-approximation for this setting. We note that many special cases of the {\sf Minimum Crossing Number}\xspace problem have been extensively studied, with better approximation algorithms known for some. Examples include $k$-apex graphs~\cite{crossing_apex,CMS10}, bounded-genus graphs~\cite{BorozkyPT06,crossing_genus,crossing_projective,crossing_torus,CMS10} and minor-free graphs~\cite{WoodT06}. A further overview of work on {\sf Minimum Crossing Number}\xspace can be found in the expositions of Richter and Salazar \cite{richter_survey}, Pach and T\'{o}th \cite{pach_survey}, Matou\v{s}ek \cite{matousek_book}, and Sz\'{e}kely \cite{szekely_survey}. \noindent {\bf Our results and techniques.} Our main result is summarized in the following theorem.
\begin{theorem}\label{theorem: main-crossing-number} There is an efficient randomized algorithm, that, given any $n$-vertex graph $G$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, finds a drawing of $G$ in the plane with $O\left ((\optcro{G})^{10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$ crossings with high probability. \end{theorem} Combining this theorem with the algorithm of Even et al.~\cite{EvenGS02}, we obtain the following corollary. \begin{corollary}\label{corollary: main-approx-crossing-number} There is an efficient randomized $O\left (n^{9/10}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$-approximation algorithm for {\sf Minimum Crossing Number}\xspace. \end{corollary} We now give an overview of our techniques. Instead of directly solving the {\sf Minimum Crossing Number}\xspace problem, it is more convenient to work with a closely related problem -- {\sf Minimum Planarization}\xspace. In this problem, given a graph $G$, the goal is to find a minimum-cardinality subset $E^*$ of edges, such that the graph $G\setminus E^*$ is planar. The two problems are closely related, and this connection was recently formalized by~\cite{CMS10} in the following theorem: \begin{theorem}[\cite{CMS10}]\label{thm:CMS10} Let $G=(V,E)$ be any $n$-vertex graph of maximum degree $d_{\max}$, and suppose we are given a subset $E^*\subset E$ of edges, $|E^*|=k$, such that $G\setminus E^*$ is planar. Then there is an efficient algorithm to find a drawing of $G$ in the plane with at most $O\left(d_{\max}^3 \cdot k \cdot (\optcro{G} + k)\right)$ crossings. \end{theorem} Therefore, in order to solve the {\sf Minimum Crossing Number}\xspace problem, it is sufficient to find a good solution to the {\sf Minimum Planarization}\xspace problem on the same graph. We note that an $O(\sqrt {n\log n}\cdot d_{\mbox{\textup{\footnotesize{max}}}})$-approximation algorithm for the {\sf Minimum Planarization}\xspace problem follows easily from the Planar Separator theorem of Lipton and Tarjan~\cite{planar-separator} (see e.g.~\cite{CMS10}), and we are not aware of any other algorithmic results for the problem. Our main technical result is the proof of the following theorem, which, combined with Theorem~\ref{thm:CMS10}, implies Theorem~\ref{theorem: main-crossing-number}. \begin{theorem}\label{thm:main} There is an efficient randomized algorithm, that, given an $n$-vertex graph $G=(V,E)$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, finds a subset $E^*\subseteq E$ of edges, such that $G\setminus E^*$ is planar, and with high probability $|E^*| = O\left ((\optcro{G})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$. \end{theorem} We now describe our main ideas and techniques. Given an optimal solution $\phi$ to the {\sf Minimum Crossing Number}\xspace problem on graph $G$, we say that an edge $e\in E(G)$ is \emph{good} iff it does not participate in any crossings in $\phi$. For convenience, we consider a slightly more general version of the problem, where, in addition to the graph $G$, we are given a simple cycle $X\subseteq G$, which we call the \emph{bounding box}, and our goal is to find a drawing of $G$ such that the edges of $X$ do not participate in any crossings, and all vertices and edges of $G\setminus X$ appear on the same side of the closed curve to which $X$ is mapped.
In other words, if $\gamma_X$ is the simple closed curve to which $X$ is mapped, and $F_1,F_2$ are the two faces into which $\gamma_X$ partitions the plane, then one of the faces $F\in \set{F_1,F_2}$ must contain the drawings of all the edges and vertices of $G\setminus X$. We call such a drawing \emph{a drawing of $G$ inside the bounding box $X$}. Since we allow $X$ to be empty, this is indeed a generalization of the {\sf Minimum Crossing Number}\xspace problem. In fact, from Theorem~\ref{thm:CMS10}, it is enough to find what we call a \emph{weak solution} to the problem, namely, a small-cardinality subset $E^*$ of edges with $E^*\cap E(X)=\emptyset$, such that there is a planar drawing of the remaining graph $G\setminus E^*$ inside the bounding box $X$. Our proof consists of three major ingredients that we describe below. The algorithm is iterative. Throughout the algorithm, we gradually remove some edges from the graph, and gradually build a planar drawing of the remaining graph. One of the central notions we use is that of \emph{graph skeletons}. A skeleton $K$ of graph $G$ is simply a sub-graph of $G$, that contains the bounding box $X$, and has a unique planar drawing (for example, it may be convenient to think of $K$ as being $3$-vertex connected). Given a skeleton $K$, and a small subset $E'$ of edges (that we eventually remove from the graph), we say that $K$ is an \emph{admissible skeleton} iff all the edges of $K$ are good, and every connected component of $G\setminus (K\cup E')$ only contains a small number of vertices (say, at most $(1-1/\rho)n$, for some balance parameter $\rho$). Since $K$ has a unique planar drawing, and all its edges are good, we can find its unique planar drawing efficiently, and it must be identical to the drawing $\phi_K$ of $K$ induced by the optimal solution $\phi$. Let ${\mathcal{F}}$ be the set of faces in this drawing. Since $K$ only contains good edges, for each connected component $C$ of $G\setminus (K\cup E')$, all edges and vertices of $C$ must be drawn completely inside one of the faces $F_C\in {\mathcal{F}}$ in $\phi$. Therefore, if, for each such connected component $C$, we can identify the face $F_C$ inside which it needs to be embedded, then we can recursively solve the problems induced by each such component $C$, together with the bounding box formed by the boundary of $F_C$. In fact, given an admissible skeleton $K$, we show that we can find a good assignment of the connected components of $G\setminus (K\cup E')$ to the faces of ${\mathcal{F}}$, so that, on the one hand, all resulting sub-problems have solutions of total cost at most $\optcro{G}$, while, on the other hand, if we combine weak solutions to these sub-problems with the set $E'$ of edges, we obtain a feasible weak solution to the original problem. The assignment of the components to the faces of ${\mathcal{F}}$ is done by reducing the problem to an instance of the Min-Uncut problem. We defer the details of this part to later sections, and focus here on finding an admissible skeleton $K$. Our second main ingredient is the use of well-linked sets of vertices, and well-linked balanced bi-partitions. Given a set $S$ of vertices, let $G[S]$ be the sub-graph of $G$ induced by $S$, and let $\Gamma(S)$ be the subset of vertices of $S$ adjacent to the edges in $E(S,\overline S)$. Informally, we say that $S$ is $\alpha$-well-linked, iff every pair of vertices in $\Gamma(S)$ can send one flow unit to each other, with overall congestion bounded by $\alpha|\Gamma(S)|$. 
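As a purely illustrative aside (not the algorithm used in this paper), the flow condition in this definition can be probed with a single max-flow computation: attaching a super-source to one half of $\Gamma(S)$ and a super-sink to the other half checks whether $|\Gamma(S)|/2$ flow units can cross that particular bipartition inside $G[S]$. The function below is a hypothetical sketch of this check; all names in it are placeholders.
\begin{verbatim}
# Illustrative sketch only: test one necessary condition for the
# well-linkedness of S by a max-flow between two halves of Gamma(S)
# inside G[S], with unit edge capacities.  Names are hypothetical.
import networkx as nx

def half_to_half_flow(G_S, gamma):
    half = len(gamma) // 2
    A, B = gamma[:half], gamma[half:]
    H = nx.DiGraph()
    for u, v in G_S.edges():         # unit capacities in both directions
        H.add_edge(u, v, capacity=1)
        H.add_edge(v, u, capacity=1)
    for a in A:                      # super-source / super-sink gadget
        H.add_edge("source", a, capacity=1)
    for b in B:
        H.add_edge(b, "sink", capacity=1)
    value, _ = nx.maximum_flow(H, "source", "sink")
    return value                     # close to |A| suggests good linkage
\end{verbatim}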
We say that a bi-partition $(S,\overline S)$ of the vertices of $G$ is $\rho$-balanced and $\alpha$-well-linked, iff $|S|,|\overline S|\geq n/\rho$, and both $S$ and $\overline S$ are $\alpha$-well-linked. Suppose we can find a $\rho$-balanced, $\alpha$-well-linked bi-partition of $G$ (it is convenient to think of $\rho,\alpha=\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)$). In this case, we show a randomized algorithm, that w.h.p. constructs an admissible skeleton $K$, as follows. Let ${\mathcal{P}},{\mathcal{P}}'$ be the collections of the flow-paths in $G[S]$ and $G[\overline S]$ respectively, guaranteed by the well-linkedness of $S$ and $\overline S$. Since the congestion on all edges is relatively low, only a small number of paths in ${\mathcal{P}}\cup {\mathcal{P}}'$ contain bad edges. Therefore, if we choose a random collection of paths from ${\mathcal{P}}$ and ${\mathcal{P}}'$ with appropriate probability, the resulting skeleton $K$, obtained from the union of these paths, is unlikely to contain bad edges. Moreover, we can show that w.h.p., every connected component of $G\setminus K$ only contains a small number of edges in $E(S,\overline S)$. It is still possible that some connected component $C$ of $G\setminus K$ contains many vertices of $G$. However, only one such component $C$ may contain more than $n/2$ vertices. Let $E'$ be the subset of edges in $E(S,\overline S)$, that belong to $C$. Then, since the original cut $(S,\overline S)$ is $\rho$-balanced, once we remove the edges of $E'$ from $C$, it will decompose into small enough components. This will ensure that all connected components of $G\setminus (K\cup E')$ are small enough, and $K$ is admissible. Using these ideas, given an efficient algorithm for computing $\rho$-balanced $\alpha$-well-linked cuts, we can obtain an algorithm for the {\sf Minimum Crossing Number}\xspace problem. Unfortunately, we do not have an efficient algorithm for computing such cuts. We can only compute such cuts in graphs that do not contain a certain structure, that we call \emph{nasty vertex sets}. Informally, a subset $S$ of vertices is a nasty set, iff $|S|\gg|E(S,\overline S)|^2$, and the sub-graph $G[S]$ induced by $S$ is planar. We show an algorithm, that, given any graph $G$, either produces a $\rho$-balanced $\alpha$-well-linked cut, or finds a nasty set $S$ in $G$. Therefore, if $G$ does not contain any nasty sets, we can compute the $\rho$-balanced $\alpha$-well-linked bi-partition of $G$, and hence obtain an algorithm for {\sf Minimum Crossing Number}\xspace. Moreover, given {\bf any} graph $G$, if our algorithm fails to produce a good solution to {\sf Minimum Crossing Number}\xspace on $G$, then w.h.p. it returns a nasty set of vertices in $G$. The third major component of our algorithm is handling the nasty sets. Suppose we are given a nasty set $S$, and assume for now that it is also $\alpha$-well-linked for some parameter $\alpha=1/\operatorname{poly}(\log n)$. Let $\Gamma(S)$ denote the endpoints of the edges in $E(S,\overline S)$ that belong to $S$, and let $|\Gamma(S)|=z$. Recall that $|S|\gg z^2$, and $G[S]$ is planar. Intuitively, in this case we can use the $z\times z$ grid to ``simulate'' the sub-graph $G[S]$. More precisely, we replace the sub-graph $G[S]$ with the $z\times z$ grid $Z_S$, and identify the vertices of the first row of the grid with the vertices in $\Gamma(S)$. We call the resulting graph the \emph{contracted graph}, and denote it by $G_{|S}$.
Notice that the number of vertices in $G_{|S}$ is smaller than that in $G$. When $S$ is not well-linked, we perform a simple well-linked decomposition procedure to partition $S$ into a collection of well-linked subsets, and replace each one of them with a grid separately. Given a drawing of the resulting contracted graph $G_{|S}$, we say that it is a \emph{canonical drawing} if the edges of the newly added grids do not participate in any crossings. Similarly, we say that a planarizing subset $E^*$ of edges is a weak canonical solution for $G_{|S}$, iff the edges of the grids do not belong to $E^*$. We show that the crossing number of $G_{|S}$ is bounded by $\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\cdot\optcro{G}$, and this bound remains true even for canonical drawings. On the other hand, we show that given any weak canonical solution $E^*$ for $G_{|S}$, we can efficiently find a weak solution of comparable cost for $G$. Therefore, it is enough to find a weak feasible canonical solution for graph $G_{|S}$. However, even the contracted graph $G_{|S}$ may still contain nasty sets. We then show that, given any nasty set $S'$ in $G_{|S}$, we can find another subset $S''$ of vertices in the original graph $G$, such that the contracted graph $G_{|S''}$ contains fewer vertices than $G_{|S}$. The crossing number of $G_{|S''}$ is again bounded by $\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\cdot\optcro{G}$ even for canonical drawings, and a weak canonical solution to $G_{|S''}$ gives a weak solution to $G$ as before. Our algorithm then consists of a number of stages. In each stage, it starts with the current contracted graph $G_{|S}$ (where in the first stage, $S=\emptyset$, and $G_{|S}=G$). It then either finds a good weak canonical solution for $G_{|S}$, thus giving a feasible solution to the original problem, or returns a nasty set $S'$ in graph $G_{|S}$. We then construct a new contracted graph $G_{|S''}$, that contains fewer vertices than $G_{|S}$, and becomes the input to the next stage. \noindent{\bf Organization.} We start with some basic definitions, notation, and general results on cuts and flows in Section~\ref{sec: Prelims}. We then present a more detailed algorithm overview in Section~\ref{sec: overview}. Section~\ref{sec: graph contraction} is devoted to the graph contraction step, and the rest of the algorithm appears in Sections~\ref{sec: alg} and~\ref{sec: iteration}. For convenience, the list of all main parameters appears in Section~\ref{sec: param-list} of the Appendix. Our conclusions appear in Section~\ref{sec: conclusions}. \label{------------------------------------------------Preliminaries--------------------------------------------} \section{Preliminaries and Notation}\label{sec: Prelims} In order to avoid confusion, throughout the paper, we denote the input graph by ${\mathbf{G}}$, with $|V({\mathbf{G}})|=n$, and maximum vertex degree $d_{\mbox{\textup{\footnotesize{max}}}}$. In statements regarding arbitrary graphs, we will denote them by $G$, to distinguish them from the specific input graph ${\mathbf{G}}$. \noindent{\bf General Notation.} We use the words ``drawing'' and ``embedding'' interchangeably. Given any graph $G$, a drawing $\phi$ of $G$, and any sub-graph $H$ of $G$, we denote by $\phi_H$ the drawing of $H$ induced by $\phi$, and by $\operatorname{cr}_{\phi}(G)$ the number of crossings in the drawing $\phi$ of $G$. Notice that we can assume w.l.o.g. that no edge crosses itself in any drawing.
For any pair $E_1,E_2\subseteq E(G)$ of subsets of edges, we denote by $\operatorname{cr}_{\phi}(E_1,E_2)$ the number of crossings in $\phi$ in which the images of edges of $E_1$ intersect the images of edges of $E_2$, and by $\operatorname{cr}_{\phi}(E_1)$ the number of crossings in $\phi$ in which the images of edges of $E_1$ intersect with each other. Given two disjoint sub-graphs $H_1,H_2$ of $G$, we will sometimes write $\operatorname{cr}_{\phi}(H_1,H_2)$ instead of $\operatorname{cr}_{\phi}(E(H_1),E(H_2))$, and $\operatorname{cr}_{\phi}(H_1)$ instead of $\operatorname{cr}_{\phi}(E(H_1))$. If $G$ is a planar graph, and $\phi$ is a drawing of $G$ with no crossings, then we say that $\phi$ is a \emph{planar} drawing of $G$. For a graph $G=(V,E)$, and subsets $V'\subseteq V$, $E'\subseteq E$ of its vertices and edges respectively, we denote by $G[V']$, $G\setminus V'$, and $G\setminus E'$ the sub-graphs of $G$ induced by $V'$, $V\setminus V'$, and $E\setminus E'$, respectively. \begin{definition} Let $\gamma$ be any simple closed curve, and let $F_1,F_2$ be the two faces into which $\gamma$ partitions the plane. Given any drawing $\phi$ of a graph $G$, we say that $G$ is \emph{embedded inside $\gamma$}, iff one of the faces $F\in\set{F_1,F_2}$ contains the images of all edges and vertices of $G$ (the images of the vertices of $G$ may lie on $\gamma$). Similarly, if $C\subseteq G$ is a simple cycle, then we say that $G$ is embedded inside $C$, iff the edges of $C$ do not participate in any crossings, and $G\setminus E(C)$ is embedded inside $\gamma_C$ -- the simple closed curve to which $C$ is mapped. \end{definition} Given a graph $G$ and a bounding box $X$, we define the problem $\pi(G,X)$, that we use extensively. \begin{definition} Given a graph $G$ and a simple (possibly empty) cycle $X\subseteq G$, called the \emph{bounding box}, a \emph{strong solution} for problem $\pi(G,X)$ is a drawing $\psi$ of $G$, in which $G$ is embedded inside the bounding box $X$, and its cost is the number of crossings in $\psi$. A \emph{weak solution} to problem $\pi(G,X)$ is a subset $E'\subseteq E(G)\setminus E(X)$ of edges, such that $G\setminus E'$ has a \emph{planar drawing}, in which it is embedded inside the bounding box $X$. \end{definition} Notice that in order to prove Theorem~\ref{thm:main}, it is enough to find a weak solution for problem $\pi({\mathbf{G}},X_0)$, where $X_0=\emptyset$, of cost $O\left ((\optcro{{\mathbf{G}}})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$. \begin{definition} For any graph $G=(V,E)$, a subset $V'\subseteq V$ of vertices is called a $c$-separator, iff $|V'|=c$, and the graph $G\setminus V'$ is not connected. We say that $G$ is $c$-connected iff it does not contain $c'$-separators, for any $0<c'<c$. \end{definition} We will use the following four well-known results: \begin{theorem} (Whitney~\cite{Whitney})\label{thm:Whitney} Every 3-connected planar graph has a unique planar drawing. \end{theorem} \begin{theorem}(Hopcroft-Tarjan~\cite{planar-drawing})\label{thm:planar drawing} For any graph $G$, there is an efficient algorithm to determine whether $G$ is planar, and if so, to find a planar drawing of $G$. \end{theorem} \begin{theorem}(Ajtai et al.~\cite{ajtai82}, Leighton~\cite{leighton_book})\label{thm: large average degree large crossing number} Let $G$ be any graph with $n$ vertices and $m\geq 4n$ edges. Then $\optcro{G}=\Omega(m^3/n^2)=\Omega(n)$.
\end{theorem} \begin{theorem}(Lipton-Tarjan~\cite{planar-separator})\label{thm: planar separator} Let $G$ be any $n$-vertex planar graph. Then there is a constant $q$, and an efficient algorithm to partition the vertices of $G$ into three sets $A,B,C$, such that $|A|,|C|\geq n/3$, $|B|\leq q\sqrt{n}$, and there are no edges in $G$ connecting the vertices of $A$ to the vertices of $C$. \end{theorem} \subsection{Well-linkedness} \begin{definition} Let $G=(V,E)$ be any graph, and $J\subseteq V$ any subset of its vertices. We denote $\operatorname{out}_G(J)=E_G(J,V\setminus J)$, and we call the edges in $\operatorname{out}_G(J)$ the \emph{terminal edges for $J$}. For each terminal edge $e=(u,v)$, with $u\in J$, $v\not \in J$, we call $u$ the \emph{interface vertex} and $v$ the \emph{terminal vertex} for $J$. We denote by $\Gamma_G(J)$ and $T_G(J)$ the sets of all interface and terminal vertices for $J$, respectively, and we omit the subscript $G$ when clear from context (see Figure~\ref{fig: terminal interface vertices}). \end{definition} \begin{figure}[h] \scalebox{0.3}{\rotatebox{0}{\includegraphics{terminal-vertices-edges-cut.pdf}}} \caption{Terminal vertices and edges for set $J$ are red; interface vertices are blue. \label{fig: terminal interface vertices}} \end{figure} \begin{definition} Given a graph $G$, a subset $J$ of its vertices, and a parameter $\alpha>0$, we say that $J$ is $\alpha$-well-linked, iff for any partition $(J_1,J_2)$ of $J$, if we denote by $T_1=\operatorname{out}(J_1)\cap \operatorname{out}(J)$, and by $T_2=\operatorname{out}(J_2)\cap \operatorname{out}(J)$, then $|E(J_1,J_2)|\geq \alpha\cdot \min\set{|T_1|,|T_2|}$. \end{definition} Notice that if $G$ is a connected graph and $J\subset V(G)$ is $\alpha$-well-linked for any $\alpha>0$, then $G[J]$ must be connected. Finally, we define $\rho$-balanced $\alpha$-well-linked bi-partitions. \begin{definition} Let $G$ be any graph, and let $\rho>1, 0<\alpha\leq 1$ be any parameters. We say that a bi-partition $(S,\overline S)$ of $V(G)$ is $\rho$-balanced and $\alpha$-well-linked, iff $|S|,|\overline S|\geq |V(G)|/\rho$ and both $S$ and $\overline S$ are $\alpha$-well-linked. \end{definition} \subsection{Sparsest Cut and Concurrent Flow}\label{subsec: sparsest cut} In this section we summarize some well-known results on graph cuts and flows that we use throughout the paper. We start by defining the non-uniform sparsest cut problem. Suppose we are given a graph $G=(V,E)$, with weights $w_v$ on vertices $v\in V$. Given any partition $(A,B)$ of $V$, the \emph{sparsity} of the cut $(A,B)$ is $\frac{|E(A,B)|}{\min\set{W(A),W(B)}}$, where $W(A)=\sum_{v\in A}w_v$ and $W(B)=\sum_{v\in B}w_v$. In the non-uniform sparsest cut problem, the input is a graph $G$ with weights on vertices, and the goal is to find a cut of minimum sparsity. Arora, Lee and Naor~\cite{sparsest-cut} have shown an $O(\sqrt{\log n}\cdot \log\log n)$-approximation algorithm for the non-uniform sparsest cut problem. We denote this algorithm by $\ensuremath{{\mathcal{A}}_{\mbox{\textup{\footnotesize{ALN}}}}}\xspace$, and its approximation factor by $\ensuremath{\alpha_{\mbox{\textup{\footnotesize{ALN}}}}}=O(\sqrt{\log n}\cdot\log\log n)$. We will usually work with a special case of the sparsest cut problem, where we are given a subset $T\subseteq V$ of vertices, called terminals, and the vertex weights are $w_v=1$ for $v\in T$, and $w_v=0$ otherwise. A problem dual to sparsest cut is the maximum concurrent multicommodity flow problem.
Here, we need to compute the maximum value $\lambda$, such that $\lambda/|T|$ flow units can be simultaneously sent in $G$ between every pair of terminals, with congestion at most $1$. The flow-cut gap is the maximum possible ratio, in any graph, between the value of the sparsest cut and the value of the maximum concurrent flow. The value of the flow-cut gap in undirected graphs, that we denote by $\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} $ throughout the paper, is $\Theta(\log n)$~\cite{LR, GVY,LLR,Aumann-Rabani}. In particular, if the value of the sparsest cut is $\alpha$, then every pair of terminals can send $\frac{\alpha}{|T|\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }$ flow units to each other with congestion at most $1$. Let $G$ be any graph, let $S$ be a subset of vertices of $G$, and let $0 < \alpha < 1$, such that $S$ is $\alpha$-well-linked. We now define the sparsest cut and the concurrent flow instances corresponding to $S$, as follows. For each edge $e\in \operatorname{out}(S)$, we sub-divide the edge by adding a new vertex $t_e$ to it. Let $G'$ denote the resulting graph, and let $T$ denote the set of all vertices $t_e$ for $e\in \operatorname{out}_G(S)$. Consider the graph $H=G'[S]\cup \operatorname{out}_{G'}(S)$. We can naturally define an instance of the non-uniform sparsest cut problem on $H$, where the set of terminals is $T$. The fact that $S$ is $\alpha$-well-linked is equivalent to the value of the sparsest cut in the resulting instance being at least $\alpha$. We obtain the following simple well-known consequence: \begin{observation}\label{observation: existence of flow in well-linked instance} Let $G$, $S$, $H$, and $T$ be defined as above, and let $0<\alpha<1$, such that $S$ is $\alpha$-well-linked. Then every pair of vertices in $T$ can send one flow unit to each other in $H$, such that the maximum congestion on any edge is at most $\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} |T|/\alpha$. Moreover, if $M$ is any partial matching on the vertices of $T$, then we can send one flow unit between every pair $(u,v)\in M$ in graph $H$, with maximum congestion at most $2\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} /\alpha$. \end{observation} \begin{proof} The first part is immediate from the definition of the flow-cut gap. Let $F$ denote the resulting flow. In order to obtain the second part, for every pair $(u,v)\in M$, $u$ will send $1/|T|$ flow units to every vertex in $T$, and $v$ will collect $1/|T|$ flow units from every vertex in $T$, via the flow $F$. It is easy to see that every flow-path is used at most twice. \end{proof} For convenience, when given an $\alpha$-well-linked subset $S$ of vertices in a graph $G$, we will omit the subdivision of the edges in $\operatorname{out}(S)$, and we will say that the edges $e\in \operatorname{out}(S)$ send flow to each other, instead of the corresponding vertices $t_e$. We will also use the algorithm of Arora, Rao and Vazirani~\cite{ARV} for balanced cut, summarized below. \begin{theorem}[Balanced Cut~\cite{ARV}]\label{thm: ARV} Let $G$ be any $n$-vertex graph, and suppose there is a partition of the vertices of $G$ into two sets, $A$ and $B$, with $|A|,|B|\geq \epsilon n$ for some constant $\epsilon>0$, and $|E(A,B)|=c$. Then there is an efficient algorithm to find a partition $(A',B')$ of the vertices of $G$, such that $|A'|,|B'|\geq \epsilon' n$ for some constant $0<\epsilon'<\epsilon$, and $|E(A',B')|\leq O(c\sqrt{\log n})$.
\end{theorem} \subsection{Canonical Vertex Sets and Solutions} As already mentioned in the Introduction, we will perform a number of graph contraction steps on the input graph ${\mathbf{G}}$, where in each such graph contraction step, a sub-graph of ${\mathbf{G}}$ will be replaced with a grid. So in general, if $H$ is the current graph, we will also be given a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $H$, such that for each $Z\in {\mathcal{Z}}$, $H[Z]$ is the $k_Z\times k_Z$ grid, for some $k_Z\geq 2$. We will also ensure that $\Gamma_H(Z)$ is precisely the set of the vertices in the first row of the grid $H[Z]$, and the edges in $\operatorname{out}_H(Z)$ form a matching between $\Gamma_H(Z)$ and $T_H(Z)$. Given such a graph $H$, and a collection ${\mathcal{Z}}$ of vertex subsets, we will be looking for solutions in which the edges of the grids $H[Z]$ do not participate in any crossings. This motivates the following definitions of canonical vertex sets and canonical solutions. Assume that we are given a graph $G$ and a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $G$, such that each subset $Z\in{\mathcal{Z}}$ is $1$-well-linked (but some vertices of $G$ may not belong to any subset $Z\in {\mathcal{Z}}$). \begin{definition}\label{definiton: canonical subset} We say that a subset $J\subseteq V$ of vertices is \emph{canonical} for ${\mathcal{Z}}$ iff for each $Z\in{\mathcal{Z}}$, either $Z\subseteq J$, or $Z\cap J=\emptyset$. \end{definition} We next define canonical drawings and canonical solutions w.r.t. the collection ${\mathcal{Z}}$ of subsets of vertices: \begin{definition}\label{definition: canonical drawing} Let $G=(V,E)$ be any graph, and ${\mathcal{Z}}$ any collection of disjoint subsets of vertices of $G$. We say that a drawing $\phi$ of $G$ is \emph{canonical} for ${\mathcal{Z}}$ iff for each $Z\in {\mathcal{Z}}$, no edge of $G[Z]$ participates in crossings. Similarly, we say that a solution $E^*$ to the {\sf Minimum Planarization}\xspace problem on $G$ is \emph{canonical} for ${\mathcal{Z}}$, iff for each $Z\in {\mathcal{Z}}$, no edge of $G[Z]$ belongs to $E^*$. \end{definition} \begin{definition} Given a graph $G$, a simple cycle $X\subseteq G$ (that may be empty), and a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $G$, a \emph{strong solution} to problem $\pi(G,X,{\mathcal{Z}})$ is a drawing $\psi$ of $G$, in which the edges of $E(X)\cup\left(\bigcup_{Z\in{\mathcal{Z}}}E(G[Z])\right )$ do not participate in any crossings, and $G$ is embedded inside the bounding box $X$. The cost of the solution is the number of edge crossings in $\psi$. A \emph{weak solution} to problem $\pi(G,X,{\mathcal{Z}})$ is a subset $E'\subseteq E(G)\setminus E(X)$ of edges, such that graph $G\setminus E'$ has a \emph{planar drawing} inside the bounding box $X$, and for all $Z\in {\mathcal{Z}}$, $E'\cap E(G[Z])=\emptyset$. \end{definition} We will sometimes use the above definition for problem $\pi(G',X,{\mathcal{Z}})$, where $G'$ is a sub-graph of $G$. That is, some sets $Z\in {\mathcal{Z}}$ may not be contained in $G'$, or only partially contained in it. We can then define ${\mathcal{Z}}'$ to contain, for each $Z\in {\mathcal{Z}}$, the set $Z\cap V(G')$. We will sometimes use the notion of weak or strong solution to problem $\pi(G',X,{\mathcal{Z}})$ to mean weak or strong solutions to $\pi(G',X,{\mathcal{Z}}')$, to simplify notation. \subsection{Cuts in Grids} The following simple claim about grids and its corollary are used throughout the paper. 
\begin{claim}\label{claim: cut of grids} Let $Z$ be the $k\times k$ grid, for any integer $k\geq 2$, and let $\Gamma$ denote the set of vertices in the first row of $Z$. Let $(A,B)$ be any partition of the vertices of $Z$, with $A,B\neq \emptyset$. Then $|E(A,B)|\geq\min\set{|A\cap \Gamma|, |B\cap \Gamma|}+1$. \end{claim} \begin{proof} Let $\Gamma_A=\Gamma\cap A$, $\Gamma_B=\Gamma\cap B$, and assume w.l.o.g. that $|\Gamma_A|\leq |\Gamma_B|$. If $\Gamma_A=\emptyset$, then the claim is clearly true. Otherwise, there is some vertex $t\in \Gamma_A$, such that a vertex $t'$ immediately to the right or to the left of $t$ in the first row of the grid belongs to $\Gamma_B$. Let $e=(t,t')$ be the corresponding edge in the first row of $Z$. We can find a collection of $|\Gamma_A|$ edge-disjoint paths, connecting vertices in $\Gamma_A$ to vertices in $\Gamma_B$, that do not include the edge $e$, as follows: assign a distinct row of $Z$ (different from the first row) to each vertex in $\Gamma_A$, and assign a distinct vertex of $\Gamma_B$ to each vertex in $\Gamma_A$ (recall that $|\Gamma_B|\geq |\Gamma_A|$). Route each such vertex inside its column to its designated row, inside this row to the column of its designated vertex of $\Gamma_B$, and finally inside that column to the designated vertex itself. If we add the path consisting of the single edge $e$, we will obtain a collection of $|\Gamma_A|+1$ edge-disjoint paths, connecting vertices in $\Gamma_A$ to vertices in $\Gamma_B$. Each of these paths must contain at least one edge of $E(A,B)$, and so $|E(A,B)|\geq |\Gamma_A|+1$. \end{proof} \begin{corollary}\label{corollary: canonical s-t cut} Let $G$ be any graph, ${\mathcal{Z}}$ any collection of disjoint subsets of vertices of $G$, such that for each $Z\in {\mathcal{Z}}$, $G[Z]$ is the $k_Z\times k_Z$ grid, for $k_Z\geq 2$. Moreover, assume that each vertex in the first row of $Z$ is adjacent to exactly one edge in $\operatorname{out}_G(Z)$, and no other vertex of $Z$ is adjacent to edges in $\operatorname{out}_G(Z)$. Let $s,t$ be any pair of vertices of $G$, that do not belong to any set $Z\in {\mathcal{Z}}$, and let $(A,B)$ be the minimum $s$--$t$ cut in $G$. Then both sets $A$ and $B$ are canonical w.r.t. ${\mathcal{Z}}$. \end{corollary} \begin{proof} Assume for contradiction that some set $Z\in {\mathcal{Z}}$ is split between the two sides, $A$ and $B$. Let $\Gamma=\Gamma(Z)$ denote the set of vertices in the first row of $Z$, and let $\Gamma_A=\Gamma\cap A$, $\Gamma_B=\Gamma\cap B$. Assume w.l.o.g. that $|\Gamma_A|\leq |\Gamma_B|$. Then by Claim~\ref{claim: cut of grids}, $|E(A\cap Z,B\cap Z)|\geq|\Gamma_A|+1$. Consider the cut $(A\setminus Z,B\cup Z)$: since $s,t\not\in Z$, it is still an $s$--$t$ cut. Moving $Z$ to the other side removes at least $|\Gamma_A|+1$ edges from the cut, and adds at most $|\Gamma_A|$ new edges -- the edges of $\operatorname{out}_G(Z)$ incident on $\Gamma_A$. Therefore, the value of the cut $(A\setminus Z,B\cup Z)$ is smaller than the value of the cut $(A,B)$, a contradiction. \end{proof} \begin{claim}\label{claim: cutting the grid} Let $Z$ be the $k\times k$ grid, for any integer $k\geq 2$, and let $\Gamma$ be the set of vertices in the first row of $Z$. Suppose we are given any partition $(A,B)$ of $V(Z)$, denote $\Gamma_A=\Gamma\cap A$, $\Gamma_B=\Gamma\cap B$, and assume that $|\Gamma_B|\leq |\Gamma_A|$. Then $|B|\leq 4|E(A,B)|^2$. \end{claim} \begin{proof} Denote $M=|E(A,B)|$. Let $C_A$ denote the set of columns associated with the vertices in $\Gamma_A$, and similarly, $C_B$ is the set of columns associated with the vertices in $\Gamma_B$. Notice that $(C_A,C_B)$ define a partition of the columns of $Z$. We consider three cases. The first case is when no column is completely contained in $A$. In this case, for every column in $C_A$, at least one edge must belong to $E(A,B)$, and so $M\geq |\Gamma_A|\geq k/2$. Since $|B|\leq |Z|\leq k^2\leq 4M^2$, the claim follows.
From now on we assume that there is some grid column, denoted by $c$, that is completely contained in $A$. The second case is when some grid column $c'$ is completely contained in $B$. In this case, it is easy to see that $M\geq k$ must hold, as there are $k$ edge-disjoint paths connecting vertices of $c$ to vertices of $c'$ in $Z$. So $|B|\leq |Z|\leq k^2\leq M^2$, as required. Finally, assume that no column is contained in $B$. Let $C'_B$ be the set of columns that have at least one vertex in $B$. Clearly, $M\geq |C'_B|$. Let $M'$ be the maximum number of vertices in any column $c'\in C'_B$, which are contained in $B$. Then $M\geq M'$ must hold, since there are $M'$ edge-disjoint paths between the vertices of column $c$, and the vertices of $c'\cap B$. On the other hand, $|B|\leq |C'_B|\cdot M'\leq M^2$. \end{proof} \subsection{Well-linked Decompositions} The next theorem summarizes well-linked decomposition of graphs, which has been used extensively in graph decomposition (e.g., see~\cite{CSS,Raecke}). For completeness we provide its proof in the Appendix. \begin{theorem}[Well-linked decomposition]\label{thm: well-linked} Given any graph $G=(V,E)$, and any subset $J\subseteq V$ of vertices, we can efficiently find a partition ${\mathcal{J}}$ of $J$, such that each set $J'\in {\mathcal{J}}$ is $\alpha^*$-well-linked for $\alpha^*=\Omega(1/(\log^{3/2} n\log\log n))$, and $\sum_{J'\in {\mathcal{J}}}|\operatorname{out}(J')|\leq 2|\operatorname{out}(J)|$. \end{theorem} We now define some additional properties that a set $J$ may possess, that we use throughout the paper. We will then show that if a set $J$ has any collection of these properties, then we can find a well-linked decomposition ${\mathcal{J}}$ of $J$, such that every set $J'\in {\mathcal{J}}$ has these properties as well. \begin{definition} Given a graph $G$ and any subset $J\subseteq V(G)$ of its vertices, we say that $J$ has property (P1) iff the vertices of $T(J)$ are connected in $G\setminus J$. We say that it has property (P2) iff there is a planar drawing of $G[J]$ in which all interface vertices $\Gamma(J)$ lie on the boundary of the same face, that we refer to as the \emph{outer face}. We denote such a planar drawing by $\pi(J)$. If there are several such drawings, we select any of them arbitrarily. \end{definition} The next theorem is an extension of Theorem~\ref{thm: well-linked}, and its proof appears in the Appendix. \begin{theorem}\label{thm: well-linked-general} Suppose we are given any graph $G=(V,E)$, a subset $J\subseteq V$ of vertices, and a collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $V$, such that each set $Z\in {\mathcal{Z}}$ is $1$-well-linked. Then we can efficiently find a partition ${\mathcal{J}}$ of $J$, such that each set $J'\in {\mathcal{J}}$ is $\alpha^*$-well-linked for $\alpha^*=\Omega(1/(\log^{3/2} n\log\log n))$, and $\sum_{J'\in {\mathcal{J}}}|\operatorname{out}(J')|\leq 2|\operatorname{out}(J)|$. Moreover, if $J$ has any combination of the following three properties: (1) property (P1); (2) property (P2); (3) it is a canonical set for ${\mathcal{Z}}$, then each set $J'\in {\mathcal{J}}$ will also have the same combination of these properties.\end{theorem} Throughout the paper, we use $\alpha^*$ to denote the parameter from Theorem~\ref{thm: well-linked-general}. \label{--------------------------------sec: high level overview----------------------------------} \section{High Level Algorithm Overview}\label{sec: overview} In this section we provide a high-level overview of the algorithm.
We start by defining the notion of nasty vertex sets. \begin{definition} Given a graph $G$, we say that a subset $S\subseteq V(G)$ of vertices is \emph{nasty} iff it has properties (P1) and (P2), and $|S|\geq \frac{2^{16}\cdot d_{\mbox{\textup{\footnotesize{max}}}}^6}{(\alpha^*)^2}\cdot|\Gamma(S)|^2$, where $\alpha^*$ is the parameter from Theorem~\ref{thm: well-linked}. \end{definition} Note that we do not require that $G[S]$ is connected. For the sake of clarity, let us first assume that the input graph ${\mathbf{G}}$ contains no nasty sets. Our algorithm then proceeds as follows. We use a balancing parameter $\rho=O(\optcro{{\mathbf{G}}}\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$ whose exact value is set later. The algorithm has $O(\rho\cdot\log n)$ iterations. At the beginning of each iteration $h$, we are given a collection $G_1,\ldots, G_{k_h}$ of $k_h\leq \optcro{{\mathbf{G}}}$ disjoint sub-graphs of ${\mathbf{G}}$, together with bounding boxes $X_i\subseteq G_i$ for all $i$. We are guaranteed that w.h.p., there are strong solutions to the problems $\pi(G_i,X_i)$, of total cost at most $\optcro{{\mathbf{G}}}$. In the first iteration, $k_1=1$, and the only graph is $G_1={\mathbf{G}}$, whose bounding box is $X_1=\emptyset$. We now proceed to describe each iteration. The idea is to find a \emph{skeleton} $K_i$ for each graph $G_i$, with $X_i\subseteq K_i$, such that $K_i$ only contains good edges --- that is, edges that do not participate in any crossings in the optimal solution $\phi$, and $K_i$ has a unique planar drawing, in which $X_i$ serves as the bounding box. Therefore, we can efficiently find the drawing $\phi_{K_i}$ of the skeleton $K_i$, induced by the optimal drawing $\phi$. We then decompose the remaining graph $G_i\setminus E(K_i)$ into \emph{clusters}, by removing a small subset of edges from it, so that, on the one hand, for each such cluster $C$, we know the face $F_C$ of $\phi_{K_i}$ where we should embed it, while on the other hand, different clusters $C,C'$ do not interfere with each other, in the sense that we can find an embedding of each one of these clusters separately, and their embeddings do not affect each other. For each such cluster $C$, we then define a new problem $\pi(C,\gamma(F_C))$, where $\gamma(F_C)$ is the boundary of the face $F_C$. We will ensure that all resulting sub-problems have strong solutions whose total cost is at most $\optcro{{\mathbf{G}}}$. In particular, there are at most $\optcro{{\mathbf{G}}}$ resulting sub-problems, for which $\emptyset$ is not a feasible weak solution. Therefore, in the next iteration we will need to solve at most $\optcro{{\mathbf{G}}}$ new sub-problems. The main challenge is to find $K_i$, such that the number of vertices in each such cluster $C$ is bounded by roughly $(1-1/\rho)|V(G_i)|$, so that the number of iterations is indeed bounded by $O(\rho \log n)$. We need this bound on the number of iterations, since the probability of successfully constructing the skeletons in each iteration is only $(1-1/\rho)$. Roughly speaking, we are able to build the skeleton as required, if we can find a $\rho$-balanced $\alpha$-well-linked bi-partition of the vertices of $G_i$, where $\alpha=1/\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)$. We are only able to find such a partition if no nasty sets exist in ${\mathbf{G}}$. More precisely, we show an efficient algorithm, that either finds the desired bi-partition, or returns a nasty vertex set.
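To make the cut-based well-linkedness condition concrete, the following minimal sketch (in Python, assuming the \texttt{networkx} package; all names are illustrative) tests the definition from Section~\ref{sec: Prelims} by brute force. It is feasible only for toy instances and is not part of our algorithm, which instead relies on the sparsest cut machinery of Section~\ref{subsec: sparsest cut}:

\begin{verbatim}
from itertools import combinations
import networkx as nx

def is_well_linked(G, J, alpha):
    # Brute-force test of the definition: for every partition (J1,J2)
    # of J, |E(J1,J2)| >= alpha * min(|T1|,|T2|), where Ti is the set
    # of edges of out(J) whose endpoint inside J lies in Ji.
    J = set(J)
    out_J = [(u, v) for u, v in G.edges() if (u in J) != (v in J)]
    for r in range(1, len(J)):
        for J1 in map(set, combinations(J, r)):
            T1 = sum(1 for u, v in out_J if u in J1 or v in J1)
            T2 = len(out_J) - T1
            cut = sum(1 for u, v in G.edges()
                      if u in J and v in J and (u in J1) != (v in J1))
            if cut < alpha * min(T1, T2):
                return False
    return True

# Toy check: a 3x3 grid with one pendant terminal edge per first-row
# vertex is 1-well-linked, consistent with the claim on cuts in grids
# from the preliminaries.
k = 3
G = nx.grid_2d_graph(k, k)
for j in range(k):
    G.add_edge((0, j), ('t', j))
print(is_well_linked(G, [(i, j) for i in range(k) for j in range(k)], 1))
\end{verbatim}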
In order to obtain the whole algorithm, we therefore need to deal with nasty sets. We do so by performing a graph contraction step, which is formally defined in the next section. Informally, given a nasty set $S$, we find a partition ${\mathcal{X}}$ of $S$, such that for every pair $X,X'\in {\mathcal{X}}$, the graphs ${\mathbf{G}}[X],{\mathbf{G}}[X']$ share at most one interface vertex and no edges. Each such graph ${\mathbf{G}}[X]$ is also $\alpha^*$-well-linked, has properties (P1) and (P2), and $\sum_{X\in {\mathcal{X}}}|\Gamma(X)|\leq O(|\Gamma(S)|)$. We then replace each sub-graph ${\mathbf{G}}[X]$ of ${\mathbf{G}}$ by a grid $Z_X$, whose interface is $\Gamma(X)$. After we do so for each $X\in {\mathcal{X}}$, we denote by ${\mathbf{G}}_{|S}$ the resulting contracted graph. Notice that we have replaced ${\mathbf{G}}[S]$ by a much smaller graph, whose size is bounded by $O(|\Gamma(S)|^2)$. Let ${\mathcal{Z}}$ denote the collection of sets $V(Z_X)$ of vertices, for $X\in {\mathcal{X}}$. We then show that the cost of the optimal solution to problem $\pi({\mathbf{G}}_{|S},\emptyset,{\mathcal{Z}})$ is at most $\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n)\cdot\optcro{{\mathbf{G}}}$. Therefore, we can restrict our attention to canonical solutions only. We also show that it is enough to find a weak solution to problem $\pi({\mathbf{G}}_{|S},\emptyset,{\mathcal{Z}})$, in order to obtain a weak solution for the whole graph ${\mathbf{G}}$. Unfortunately, we do not know how to find a nasty set $S$, such that the corresponding contracted graph ${\mathbf{G}}_{|S}$ contains no nasty sets. Instead, we do the following. Let $H={\mathbf{G}}_{|S}$ be the current graph, which is a result of the graph contraction step on some set $S$ of vertices, and let ${\mathcal{Z}}$ be the corresponding collection of sub-sets of vertices representing the grids. Suppose we can find a nasty canonical set $R$ in the graph $H$. We show that this allows us to find a new set $S'$ of vertices in ${\mathbf{G}}$, such that the contracted graph ${\mathbf{G}}_{|S'}$ contains fewer vertices than ${\mathbf{G}}_{|S}$. Returning to our algorithm, let ${\mathbf{G}}_{|S}$ be the current contracted graph. We show that with high probability, the algorithm either returns a weak solution for ${\mathbf{G}}_{|S}$ of cost $O\left ((\optcro{{\mathbf{G}}})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$, or it returns a nasty canonical subset $S'$ of ${\mathbf{G}}_{|S}$. In the former case, we can recover a good weak solution for the original graph ${\mathbf{G}}$. In the latter case, we find a subset $S''$ of vertices in the original graph ${\mathbf{G}}$, and perform another contraction step on ${\mathbf{G}}$, obtaining a new graph ${\mathbf{G}}_{|S''}$, whose size is strictly smaller than that of ${\mathbf{G}}_{|S}$. We then apply the algorithm to graph ${\mathbf{G}}_{|S''}$. Since the total number of graph contraction steps is bounded by $n$, after $n$ such iterations, we are guaranteed w.h.p. to obtain a weak feasible solution of cost $O\left ((\optcro{{\mathbf{G}}})^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)\right )$ to $\pi({\mathbf{G}},\emptyset)$, thus satisfying the requirements of Theorem~\ref{thm:main}. We now turn to a formal description of the algorithm. One of the main ingredients is the graph contraction step, summarized in the next section.
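Before doing so, the following minimal sketch summarizes the outer loop just described; all helper names are hypothetical stand-ins for the procedures developed in the subsequent sections:

\begin{verbatim}
def planarize(G):
    # Outer loop (sketch only). H is the current contracted graph and
    # Zs the collection of grids introduced by contraction steps.
    H, Zs = G, []
    while True:
        # Hypothetical subroutine: either a weak canonical solution for
        # pi(H, empty, Zs), or a nasty canonical vertex set in H.
        kind, answer = solve_or_find_nasty(H, Zs)
        if kind == 'solution':
            # A weak canonical solution for the contracted graph can be
            # converted into a planarizing edge set for G itself.
            return lift_solution(answer, H, G)
        # Otherwise re-contract; the new contracted graph is strictly
        # smaller, so there are at most |V(G)| stages overall.
        H, Zs = contract_via_nasty_set(answer, H, G)
\end{verbatim}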
\section{Graph Contraction Step}\label{sec: graph contraction} The input to the graph contraction step consists of the input graph ${\mathbf{G}}$, and a subset $S\subseteq V({\mathbf{G}})$ of vertices, for which properties (P1) and (P2) hold. It will be convenient to think of $S$ as a nasty set, but we do not require it. Let ${\mathcal{C}}=\set{G_1,\ldots,G_q}$ be the set of all connected components of ${\mathbf{G}}[S]$. For each $1\leq i\leq q$, let $\Gamma_i=V(G_i)\cap \Gamma(S)=\Gamma(V(G_i))$ be the set of the interface vertices of $G_i$. The goal of the graph contraction step is to find, for each $1\leq i\leq q$, a partition ${\mathcal{X}}_i$ of the set $V(G_i)$, that has the following properties. Let ${\mathcal{X}}=\bigcup_{i=1}^q{\mathcal{X}}_i$. \begin{properties}{C} \item Each set $X\in {\mathcal{X}}$ is $\alpha^*$-well-linked, and has properties (P1) and (P2). Moreover, there is a planar drawing $\pi'(X)$ of ${\mathbf{G}}[X]$, and a simple closed curve $\gamma_X$, such that ${\mathbf{G}}[X]$ is embedded inside $\gamma_X$ in $\pi'(X)$, and the vertices of $\Gamma(X)$ lie on $\gamma_X$.\label{property: subsets-first} \item For each $X\in{\mathcal{X}}$, either $|\Gamma(X)|=2$, or there is a partition $(C^*_X,R_1,\ldots,R_t)$ of $X$, such that ${\mathbf{G}}[C^*_X]$ is $2$-connected and $\Gamma(X)\subseteq C^*_X$. Moreover, for each $1\leq t'\leq t$, there is a vertex $u_{t'}\in C^*_X$, whose removal from ${\mathbf{G}}[X]$ separates the vertices of $R_{t'}$ from the remaining vertices of $X$. \label{property: structure of X} \item For each pair $X,X'\in {\mathcal{X}}$, the two sets of vertices are completely disjoint, except for possibly sharing one interface vertex, $v\in \Gamma(X)\cap \Gamma(X')$. \label{property: disjointness} \item For each $1\leq i\leq q$, if $\Gamma_i'=\bigcup_{X\in{\mathcal{X}}_i}\Gamma(X)$, then $|\Gamma_i'|\leq 9|\Gamma_i|$. \label{property: subsets - last} \item For each $X\in {\mathcal{X}}$, $|X|\geq (\alpha^*|\Gamma(X)|)^2/64d_{\mbox{\textup{\footnotesize{max}}}}^2$.\label{property: size-last} \end{properties} For each set $X\in {\mathcal{X}}$, we now define a new graph $Z'_X$, that will eventually replace the sub-graph ${\mathbf{G}}[X]$ in ${\mathbf{G}}$. Intuitively, we need $Z'_X$ to contain the vertices of $\Gamma(X)$ and to be $1$-well-linked w.r.t. these vertices. We also need it to have a unique planar embedding where the vertices of $\Gamma(X)$ lie on the boundary of the same face, and finally, we need the size of the graph $Z'_X$ to be relatively small, since this is a graph contraction step. The simplest graph satisfying these properties is a grid of size $|\Gamma(X)|\times |\Gamma(X)|$. Specifically, we first define a graph $Z_X$ as follows: if $|\Gamma(X)|=1$, then $Z_X$ consists of a single vertex, and if $|\Gamma(X)|=2$, then $Z_X$ consists of a single edge. Otherwise, $Z_X$ is a grid of size $|\Gamma(X)|\times |\Gamma(X)|$. In order to obtain the graph $Z'_X$, we add the set $\Gamma(X)$ of vertices to $Z_X$, and add a matching between the vertices of the first row of the grid and the vertices of $\Gamma(X)$. This is done so that the order of the vertices of $\Gamma(X)$ along the first row of the grid is the same as their order along the curve $\gamma_X$ in the drawing $\pi'(X)$. We refer to these new edges as the \emph{matching edges}. For the cases where $|\Gamma(X)|=1$ and $|\Gamma(X)|=2$, we obtain $Z'_X$ by adding the vertices of $\Gamma(X)$ to $Z_X$, and adding an arbitrary matching between $\Gamma(X)$ and the vertices of $Z_X$.
(See Figure~\ref{fig: grids}). \begin{figure}[h] \centering \subfigure[General case]{ \scalebox{0.4}{\includegraphics{grid-new-cut.pdf}} \label{fig: grid} } \hfill \subfigure[$|\Gamma(X)|=1$]{ \scalebox{0.3}{\includegraphics{grid-1-interface-cut.pdf}} \label{fig: grid 1 interface} } \hfill \subfigure[$|\Gamma(X)|=2$]{ \scalebox{0.3}{\includegraphics{grid-2-interface-cut.pdf}} \label{fig: grid 2 interface} } \caption{Graph $Z'_X$. The matching edges and the interface vertices are blue; the grid $Z_X$ is black.}\label{fig: grids} \end{figure} The contracted graph ${\mathbf{G}}_{|S}$ is obtained from ${\mathbf{G}}$, by replacing, for each $X\in{\mathcal{X}}$, the subgraph ${\mathbf{G}}[X]$ of ${\mathbf{G}}$, with the graph $Z'_X$. This is done as follows: first, delete all vertices and edges of ${\mathbf{G}}[X]$, except for the vertices of $\Gamma(X)$, from ${\mathbf{G}}$, and add the edges and the vertices of $Z'_X$ instead. Next, identify the copies of the interface vertices $\Gamma(X)$ in the two graphs. Let $H={\mathbf{G}}_{|S}$ denote the resulting contracted graph. Notice that \begin{equation}\label{eq: final size} \sum_{i=1}^q\sum_{X\in {\mathcal{X}}_i}|V(Z'_X)|\leq \sum_{i=1}^q\sum_{X\in {\mathcal{X}}_i}2|\Gamma(X)|^2\leq \sum_{i=1}^q2|\Gamma_i'|^2d_{\mbox{\textup{\footnotesize{max}}}}^2\leq 162d_{\mbox{\textup{\footnotesize{max}}}}^2|\Gamma(S)|^2 \end{equation} (we have used the fact that a vertex may belong to the interface of at most $d_{\mbox{\textup{\footnotesize{max}}}}$ sets $X\in {\mathcal{X}}_i$, and Property~(\ref{property: subsets - last})). Therefore, if the initial vertex set $S$ is nasty, then we have indeed reduced the graph size, as $|V(H)|<|V({\mathbf{G}})|$. We now define a collection ${\mathcal{Z}}$ of subsets of vertices of $H$, as follows: ${\mathcal{Z}}=\set{V(Z_X)\mid X\in {\mathcal{X}}}$. Notice that these sets are completely disjoint, as $Z_X$ does not contain the interface vertices $\Gamma(X)$. Moreover, for each $Z\in {\mathcal{Z}}$, $H[Z]$ is a grid, $\Gamma_H(Z)$ consists of the vertices in the first row of the grid, and $\operatorname{out}_H(Z)$ consists of the set of the matching edges, each of which connects a vertex in the first row of the grid $Z$ to a distinct vertex in $T_H(Z)$. Using Definitions~\ref{definiton: canonical subset} and~\ref{definition: canonical drawing}, we can now define canonical subsets of vertices, canonical drawings and canonical solutions to the {\sf Minimum Planarization}\xspace problem on $H$, with respect to ${\mathcal{Z}}$. Our main result for graph contraction is summarized in the next theorem, whose proof appears in the Appendix. \begin{theorem}\label{thm: graph contraction} Let $S\subseteq V({\mathbf{G}})$ be any subset of vertices with properties (P1) and (P2), and let $\set{G_1,\ldots,G_q}$ be the set of all connected components of graph ${\mathbf{G}}[S]$. Then for each $1\leq i\leq q$, we can efficiently find a partition ${\mathcal{X}}_i$ of $V(G_i)$, such that the resulting partition ${\mathcal{X}}=\bigcup_{i=1}^q{\mathcal{X}}_i$ of $S$ has properties~(\ref{property: subsets-first})--(\ref{property: size-last}). Moreover, there is a canonical drawing of the resulting contracted graph $H={\mathbf{G}}_{|S}$ with $O(d_{\max}^9 \cdot \log^{10} n \cdot (\log\log n)^4 \cdot \optcro{{\mathbf{G}}})$ crossings. \end{theorem} The next claim shows that, in order to find a good solution to the {\sf Minimum Planarization}\xspace problem on ${\mathbf{G}}$, it is enough to solve it on ${\mathbf{G}}_{|S}$.
\begin{claim}\label{claim: enough to solve contracted graph} Let $S$ be any subset of vertices of ${\mathbf{G}}$, ${\mathcal{X}}$ any partition of $S$ with properties~(\ref{property: subsets-first})--(\ref{property: size-last}), $H={\mathbf{G}}_{|S}$ the corresponding contracted graph and ${\mathcal{Z}}$ the collection of grids $Z_X$ for $X\in {\mathcal{X}}$. Then given any canonical solution $E^*$ to the {\sf Minimum Planarization}\xspace problem on $H$, we can efficiently find a solution of cost $O(d_{\mbox{\textup{\footnotesize{max}}}}\cdot |E^*|)$ to {\sf Minimum Planarization}\xspace on ${\mathbf{G}}$.\end{claim} \begin{proof} Partition the set $E^*$ of edges into two subsets: $E^*_1$ contains all edges that belong to sub-graphs $Z'_X$ for $X\in {\mathcal{X}}$, and $E^*_2$ contains all remaining edges. Notice that since $E^*$ is a canonical solution, each edge $e\in E^*_1$ must be a matching edge for some graph $Z'_X$. Also from the construction of the contracted graph $H$, all edges in $E^*_2$ belong to $E({\mathbf{G}})$. Consider some set $X\in {\mathcal{X}}$, and let $\Gamma'(X)\subseteq \Gamma(X)$ denote the subset of the interface vertices of $Z'_X$, whose matching edges belong to $E^*_1$. Let $\Gamma'=\bigcup_{X\in {\mathcal{X}}}\Gamma'(X)$. We now define a subset $E^{**}_1$ of edges of ${\mathbf{G}}$ as follows: for each vertex $v\in \Gamma'$, add all edges incident to $v$ in ${\mathbf{G}}$ to $E^{**}_1$. Finally, we set $E^{**}=E^{**}_1\cup E^*_2$. Notice that $E^{**}$ is a subset of edges of ${\mathbf{G}}$, and $|E^{**}|\leq|E_1^{**}|+|E_2^*|\leq d_{\mbox{\textup{\footnotesize{max}}}}|E_1^*|+|E_2^*|\leq d_{\mbox{\textup{\footnotesize{max}}}} |E^*|$. In order to complete the proof of the claim, it is enough to show that $E^{**}$ is a feasible solution to the {\sf Minimum Planarization}\xspace problem on ${\mathbf{G}}$. Let ${\mathbf{G}}'={\mathbf{G}}\setminus E^{**}$, let $H'=H\setminus E^*$, and let $\psi$ be a planar drawing of $H'$. It is now enough to construct a planar drawing $\psi'$ of ${\mathbf{G}}'$. In order to do so, we start from the planar drawing $\psi$ of $H'$. We then consider the sets $X\in {\mathcal{X}}$ one-by-one. For each such set, we replace the drawing of $Z'_X\setminus \Gamma'(X)$ with a drawing of ${\mathbf{G}}[X]\setminus \Gamma'(X)$. The drawings of the vertices in $\Gamma(X)$ are not changed by this procedure. After all sets $X\in {\mathcal{X}}$ are processed, we will obtain a planar drawing of graph ${\mathbf{G}}'$ (that may also contain drawings of some edges in $E^{**}$, that we can simply erase). Consider some such set $X\in {\mathcal{X}}$. Let $G$ be the current graph (obtained from $H'$ after a number of such replacement steps), and let $\psi$ be the current planar drawing of $G$. Observe that the grid $Z_X$ has a unique planar drawing. We say that a planar drawing of graph $Z'_X\setminus \Gamma'(X)$ is \emph{standard} in $\psi$, iff we can draw a simple closed curve $\gamma'_X$, such that $Z_X$ is embedded completely inside $\gamma'_X$; no other vertices or edges of $G$ are embedded inside $\gamma'_X$; the only edges that $\gamma'_X$ intersects are the matching edges of $Z'_X\setminus \Gamma'(X)$, and each such matching edge is intersected exactly once by $\gamma'_X$ (see Figure~\ref{fig: standard drawing}).
\begin{figure}[h] \scalebox{0.4}{\rotatebox{0}{\includegraphics{standard-drawing-cut.pdf}}}\caption{A standard drawing of $Z'_X\setminus \Gamma'(X)$ \label{fig: standard drawing}} \end{figure} It is possible that the drawing of $Z'_X\setminus \Gamma'(X)$ in $\psi$ is not standard. However, since $\psi$ is planar, this can only happen for the following three reasons: (1) some connected component $C$ of the current graph $G$ is embedded inside some face of the grid $Z_X$: in this case we can simply move the drawing of $C$ elsewhere; (2) there is some subset $C$ of $V(G)$, and a vertex $v\in \Gamma(X)\setminus \Gamma'(X)$, such that $\Gamma_G(C)=\set{v}$, and $G[C]$ is embedded inside one of the faces of the grid $Z_X$ incident to the other endpoint of the matching edge of $v$; and (3) there is some subset $C$ of $V(G)$, and two consecutive vertices $u,v\in \Gamma(X)\setminus \Gamma'(X)$, such that $\Gamma_G(C)=\set{u,v}$, and $G[C]$ is embedded inside the unique face of the grid $Z_X$ incident to the other endpoints of the matching edges of $u$ and $v$ (See Figure \ref{fig: any to standard drawing}). In the latter two cases, we simply move the drawing of $C$ right outside the grid, so that the corresponding matching edges now cross the curve $\gamma'_X$. \begin{figure}[h] \centering \subfigure{ \scalebox{0.4}{\includegraphics{any-to-standard-drawing1-cut.pdf}} \label{fig: any to standard before} } \hfill \subfigure{ \scalebox{0.4}{\includegraphics{any-to-standard-drawing2-cut.pdf}} \label{fig: any to standard after} } \caption{Transforming drawing $\psi$ to obtain a standard drawing of $Z'_X\setminus \Gamma'(X)$. Cases 1, 2 and 3 are illustrated by clusters $C_1$, $C_2$ and $C_3$, respectively. \label{fig: any to standard drawing}} \end{figure} To conclude, we can transform the current planar drawing $\psi$ of the graph $G$ into another planar drawing $\tilde{\psi}$, such that the induced drawing of $Z'_X\setminus \Gamma'(X)$ is standard. We can now draw a simple closed curve $\gamma''_X$, such that $Z'_X\setminus \Gamma'(X)$ is embedded inside $\gamma''_X$, no other vertices or edges are embedded inside $\gamma''_X$, and the set of vertices whose drawings lie on $\gamma''_X$ is precisely $\Gamma(X)\setminus \Gamma'(X)$. Notice that the ordering of the vertices of $\Gamma(X)\setminus \Gamma'(X)$ along this curve is exactly the same as their ordering along the curve $\gamma_X$ in the planar embedding $\pi'(X)$ of ${\mathbf{G}}[X]$, guaranteed by Property~(\ref{property: subsets-first}). Let $\pi''(X)$ be the drawing of ${\mathbf{G}}[X]\setminus \Gamma'(X)$ induced by $\pi'(X)$. We can now simply replace the drawing of $Z'_X\setminus \Gamma'(X)$ with the drawing $\pi''(X)$ of ${\mathbf{G}}[X]\setminus \Gamma'(X)$, identifying the curves $\gamma_X$ and $\gamma''_X$, and the drawings of the vertices in $\Gamma(X)\setminus \Gamma'(X)$ on them. The resulting drawing remains planar, and the drawings of the vertices in $\Gamma(X)$ do not change. \end{proof} Finally, we show that if we find a nasty canonical set in ${\mathbf{G}}_{|S}$, then we can contract ${\mathbf{G}}$ even further. The proof of the following theorem appears in the Appendix.
\begin{theorem}\label{thm: nasty canonical set to contraction} Let $S$ be any subset of vertices of ${\mathbf{G}}$, ${\mathcal{X}}$ any partition of $S$ with properties~(\ref{property: subsets-first})--(\ref{property: size-last}), $H={\mathbf{G}}_{|S}$ the corresponding contracted graph, and ${\mathcal{Z}}$ the corresponding collection of grids $Z_X$ for $X\in {\mathcal{X}}$. Then given any nasty canonical vertex set $R\subseteq V(H)$, we can efficiently find a subset $S'\subseteq V({\mathbf{G}})$ of vertices, and a partition ${\mathcal{X}}'$ of $S'$, such that properties~(\ref{property: subsets-first})--(\ref{property: size-last}) hold for ${\mathcal{X}}'$, and if $H'={\mathbf{G}}_{|S'}$ is the corresponding contracted graph, then $|V(H')|<|V(H)|$. Moreover, there is a canonical drawing $\phi'$ of $H'$ with $\operatorname{cr}_{\phi'}(H') = O(d_{\max}^9 \cdot \log^{10} n \cdot (\log\log n)^4 \cdot \optcro{{\mathbf{G}}})$. \end{theorem} Notice that Claim~\ref{claim: enough to solve contracted graph} applies to the new contracted graph as well. \label{---------------------------------------------------------sec algorithm---------------------------------------------------} \section{The Algorithm}\label{sec: alg} The algorithm consists of a number of stages. In each stage $j$, we are given as input a subset $S$ of vertices of ${\mathbf{G}}$, the contracted graph $\H={\mathbf{G}}_{|S}$, and the collection ${\mathcal{Z}}$ of disjoint sub-sets of vertices of $\H$, corresponding to the grids $Z_X$ obtained during the contraction step. The goal of stage $j$ is to either produce a nasty canonical set $R$ in $\H$, or to find a weak feasible solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$. We prove the following theorem. \begin{theorem}\label{thm: alg summary} There is an efficient randomized algorithm, that, given a contracted graph $\H$, a corresponding collection ${\mathcal{Z}}$ of disjoint subsets of vertices of $\H$, and a bound $\mathsf{OPT}'$ on the cost of the strong optimal solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$, with probability at least $1/\operatorname{poly}(n)$, produces either a nasty canonical subset $R$ of vertices of $\H$, or a weak feasible solution $E^*$, $|E^*|\leq O((\mathsf{OPT}')^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n))$ for problem $\pi(\H,\emptyset,{\mathcal{Z}})$. (Here, $n=|V({\mathbf{G}})|$.) \end{theorem} We prove this theorem in the rest of this section, but we first show how Theorems~\ref{thm:main}, \ref{theorem: main-crossing-number} and Corollary~\ref{corollary: main-approx-crossing-number} follow from it. We start by proving Theorem~\ref{thm:main}, by showing an efficient randomized algorithm to find a subset $E^*\subseteq E({\mathbf{G}})$ of edges, such that ${\mathbf{G}}\setminus E^*$ is planar, and $|E^*|\leq O((\optcro{{\mathbf{G}}})^5\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$. We assume that we know the value $\optcro{{\mathbf{G}}}$, by using the standard practice of guessing this value, running the algorithm, and then adjusting the guessed value accordingly. It is enough to ensure that whenever the guessed value $\mathsf{OPT}\geq \optcro{{\mathbf{G}}}$, the algorithm indeed returns a subset $E^*$ of edges, $|E^*|\leq O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$, such that ${\mathbf{G}}\setminus E^*$ is a planar graph w.h.p. Therefore, from now on we assume that we are given a value $\mathsf{OPT}\geq \optcro{{\mathbf{G}}}$.
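The guessing step can be implemented by the standard doubling search, since the planarity of ${\mathbf{G}}\setminus E^*$ can be verified efficiently (Theorem~\ref{thm:planar drawing}). A minimal sketch, in which \texttt{run\_algorithm} is a hypothetical stand-in for the algorithm described below, and \texttt{bound} maps a guess $\mathsf{OPT}$ to the target bound $O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$:

\begin{verbatim}
import networkx as nx

def planarize_with_guessing(G, run_algorithm, bound):
    # Standard guess-and-double wrapper: try OPT = 1, 2, 4, ... and
    # accept the first run that returns a planarizing set within the
    # target bound; runs with a guess >= the true OPT succeed w.h.p.
    guess = 1
    while True:
        E_star = run_algorithm(G, guess)
        H = G.copy()
        H.remove_edges_from(E_star)
        if nx.check_planarity(H)[0] and len(E_star) <= bound(guess):
            return E_star
        guess *= 2
\end{verbatim}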
The algorithm consists of a number of stages. The input to stage $j$ is a contracted graph $\H$, with the corresponding family ${\mathcal{Z}}$ of vertex sets. In the input to the first stage, $\H={\mathbf{G}}$, and ${\mathcal{Z}}=\emptyset$. In each stage $j$, we run the algorithm from Theorem~\ref{thm: alg summary} on the current contracted graph $\H$, and the family ${\mathcal{Z}}$ of vertex subsets. From Theorem~\ref{thm: graph contraction}, there is a strong feasible solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$ of cost $O(\mathsf{OPT} \cdot\operatorname{poly}(\log n\cdot d_{\mbox{\textup{\footnotesize{max}}}}))$, and so we can set the parameter $\mathsf{OPT}'$ to this value. Whenever the algorithm returns a nasty canonical set $R$ in graph $\H$, we terminate the current stage, and compute a new contracted graph $\H'$, guaranteed by Theorem~\ref{thm: nasty canonical set to contraction}. Graph $\H'$, together with the corresponding family ${\mathcal{Z}}'$ of vertex subsets, becomes the input to the next stage. Alternatively, if, after $\operatorname{poly}(n)$ executions of the algorithm from Theorem~\ref{thm: alg summary}, no nasty canonical set is returned, then with high probability, one of the algorithm executions has returned a weak feasible solution $E^*$, $|E^*|\leq O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n))$ for problem $\pi(\H,\emptyset,{\mathcal{Z}})$. From Claim~\ref{claim: enough to solve contracted graph}, we can recover from this solution a planarizing set $E^{**}$ of edges for graph ${\mathbf{G}}$, with $|E^{**}|=O(\mathsf{OPT}^{5}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n))$. Since the size of the contracted graph $\H$ goes down after each contraction step, the number of stages is bounded by $n$, thus implying Theorem~\ref{thm:main}. Combining Theorem~\ref{thm:main} with Theorem~\ref{thm:CMS10} immediately gives Theorem~\ref{theorem: main-crossing-number}. Finally, we obtain Corollary~\ref{corollary: main-approx-crossing-number} as follows. Recall that the algorithm of Even et al.~\cite{EvenGS02} computes a drawing of any $n$-vertex bounded degree graph ${\mathbf{G}}$ with $O(\log^2 n) \cdot (n+\optcro{{\mathbf{G}}})$ crossings. It was shown in~\cite{CMS10} that this algorithm can be extended to arbitrary graphs, where the number of crossings becomes $O(\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}})\cdot \log^2 n) \cdot (n+\optcro{{\mathbf{G}}})$. We run their algorithm, and the algorithm presented in this section, on graph ${\mathbf{G}}$, and output the better of the two solutions. If $\optcro{{\mathbf{G}}}<n^{1/10}$, then our algorithm is an $O(n^{9/10}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$-approximation; otherwise, the algorithm of~\cite{EvenGS02} gives an $O(n^{9/10}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$-approximation. The remainder of this section is devoted to proving Theorem~\ref{thm: alg summary}. Recall that we are given the contracted graph $\H$, and a collection ${\mathcal{Z}}$ of vertex-disjoint subsets of $V(\H)$. For each $Z\in {\mathcal{Z}}$, $\H[Z]$ is a grid, and $E(Z,V(\H)\setminus Z)$ consists of a set $M_Z$ of matching edges. Each such edge connects a vertex in the first row of $Z$ to a distinct vertex in $T_{\H}(Z)$, and these edges form a matching between the first row of $Z$ and $T_{\H}(Z)$.
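For concreteness, the following minimal sketch (again assuming \texttt{networkx}; the names are illustrative) constructs the grid-with-matching structure just described, for an ordered set of at least three terminals; the degenerate cases $|\Gamma(X)|\in\set{1,2}$ of Section~\ref{sec: graph contraction}, in which the grid is replaced by a single vertex or edge, are omitted:

\begin{verbatim}
import networkx as nx

def grid_gadget(terminals):
    # Build the k x k grid Z together with the matching edges between
    # its first row and the terminals, preserving the order in which
    # the terminals are listed; assumes k = len(terminals) >= 3.
    k = len(terminals)
    Z = nx.grid_2d_graph(k, k)      # grid nodes are (row, column) pairs
    for j, t in enumerate(terminals):
        Z.add_edge(t, (0, j))       # matching edge into the first row
    return Z

H = grid_gadget(['t0', 't1', 't2'])
print(sorted(d for _, d in H.degree(['t0', 't1', 't2'])))  # [1, 1, 1]
\end{verbatim}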
Abusing the notation, we denote the bound on the cost of the strong optimal solution to $\pi(\H,\emptyset,{\mathcal{Z}})$ by $\mathsf{OPT}$ from now on, and the number of vertices in $\H$ by $n$. For each $Z\in {\mathcal{Z}}$, we use $Z$ to denote both the set of vertices itself, and the grid $\H[Z]$. We assume throughout the rest of the section that $\mathsf{OPT}\cdot d_{\mbox{\textup{\footnotesize{max}}}}^6< \sqrt n$: otherwise, if $\mathsf{OPT}\cdot d_{\mbox{\textup{\footnotesize{max}}}}^6\geq \sqrt n$, then the set $E'$ of all edges of $\H$ that do not participate in grids $Z\in {\mathcal{Z}}$ is a feasible weak canonical solution for problem $\pi(\H,\emptyset,{\mathcal{Z}})$. It is easy to see that $|E'|\leq O(\mathsf{OPT}^2\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}))$: this is clearly the case if $|E'|\leq 4n$, since then $|E'|\leq 4n\leq 4\mathsf{OPT}^2\cdot d_{\mbox{\textup{\footnotesize{max}}}}^{12}$; otherwise, if $|E'|>4n$, then by Theorem~\ref{thm: large average degree large crossing number}, $\mathsf{OPT}=\Omega(n)$, and so $|E'|=O(n^2)=O(\mathsf{OPT}^2)$. We use two parameters: $\rho=O(\mathsf{OPT}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n))$ and $m^*=O(\mathsf{OPT}^3\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n))$, whose exact values we set later. The algorithm consists of $2\rho\log n$ iterations. The input to iteration $h$ is a collection $G_1,\ldots,G_{k_h}$ of $k_h\leq \mathsf{OPT}$ sub-graphs of $\H$, together with bounding boxes $X_i\subseteq G_i$ for all $1\leq i\leq k_h$. We denote $H_i=G_i\setminus V(X_i)$ and $n(H_i)=|V(H_i)|$. Additionally, we have collections $\edges1,\ldots,\edges{h-1}$ of edges of $\H$, where for each $1\leq h'\leq h-1$, set $\edges{h'}$ has been computed in iteration $h'$. We say that $(G_1,X_1),\ldots,(G_{k_h},X_{k_h})$, and $\edges1,\ldots,\edges{h-1}$ form a \emph{valid} input to iteration $h$, iff the following invariants hold: \label{start invariants for the whole algorithm-------------------------------------------------------------} \begin{properties}{V} \item For all $1\leq i<j\leq k_h$, graphs $H_i$ and $H_j$ are completely disjoint. \label{invariant 1: disjointness} \item For all $1\leq i\leq k_h$, $G_i\subseteq \H\setminus (\edges 1,\ldots,\edges{h-1})$, and $H_i$ is the sub-graph of $\H$ induced by $V(H_i)$. In particular, no edge with both endpoints in $V(H_i)$ belongs to any of $\edges 1,\ldots,\edges{h-1}$. Moreover, every edge $e\in E(\H)$ belongs to either $\bigcup_{h'=1}^{h-1}\edges{h'}$ or to $\bigcup_{i=1}^{k_h}E(G_i)$. \label{invariant 2: proper subgraph} \item For all $Z\in {\mathcal{Z}}$, for all $1\leq i\leq k_h$, either $Z\cap V(H_i)=\emptyset$, or $Z\subseteq V(H_i)$. Let ${\mathcal{Z}}_i=\set{Z\in {\mathcal{Z}}\mid Z\subseteq V(H_i)}$.\label{invariant 2: canonical} \item For all $1\leq i\leq k_h$, there is a strong solution $\phi_i$ to $\pi(G_i,X_i,{\mathcal{Z}}_i)$, with $\sum_{i=1}^{k_h}\operatorname{cr}_{\phi_i}(G_i)\leq \mathsf{OPT}$. \label{invariant 3: there is a cheap solution} \item If we are given {\bf any} weak solution $E_i'$ to problem $\pi(G_i,X_i,{\mathcal{Z}}_i)$, for all $1\leq i\leq k_h$, and denote $\tilde{E}^{(h)}=\bigcup_{i=1}^{k_h}E'_i$, then $\edges1\cup\cdots\cup\edges {h-1}\cup \tilde{E}^{(h)}$ is a feasible weak solution to problem $\pi(\H,\emptyset,{\mathcal{Z}})$.\label{invariant 4: any weak solution is enough} \item For each $1\leq h'<h$, and $1\leq i\leq k_h$, the number of edges in $\edges{h'}$ incident on vertices of $H_i$ is at most $m^*$, and $|\edges{h'}|\leq \mathsf{OPT} \cdot m^*$.
Moreover, no edges of the grids $Z\in {\mathcal{Z}}$ belong to $\bigcup_{h'=1}^{h-1}\edges {h'}$.\label{invariant 4.5: number of edges removed so far} \item Let $n_h=(1-1/\rho)^{(h-1)/2}\cdot n$. For each $1\leq i\leq k_h$, either $n(H_i)\leq n_h$, or $X_i=\emptyset$ and $n(H_i)\leq n_{h-1}$. \label{invariant 5: bound on size} \end{properties} \label{end invariants for the whole algorithm-------------------------------------------------------------} The input to the first iteration consists of a single graph, $G_1=\H$, with the bounding box $X_1=\emptyset$. It is easy to see that all invariants hold for this input. We end the algorithm at the first iteration $h^*$ for which $n_{h^*}\leq (m^*\cdot \rho\cdot \log n)^2$. Clearly, $h^*\leq 2\rho\log n$, from Invariant~(\ref{invariant 5: bound on size}). Let ${\mathcal{G}}$ be the set of all instances that serve as input to iteration $h^*$. We need the following theorem, whose proof appears in the Appendix.
\begin{theorem}\label{thm: stopping condition}
There is an efficient algorithm, that, given any problem $\pi(G,X,{\mathcal{Z}}')$, where $V(G\setminus X)$ is canonical for ${\mathcal{Z}}'$, and $\pi(G,X,{\mathcal{Z}}')$ has a strong solution of cost $\overline{\mathsf{OPT}}$, finds a weak feasible solution to $\pi(G,X,{\mathcal{Z}}')$ of cost $O(\overline{\mathsf{OPT}}\cdot \sqrt{n'}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n')+\overline{\mathsf{OPT}}^3)$, where $n'=|V(G\setminus X)|$, and $d_{\mbox{\textup{\footnotesize{max}}}}$ is the maximum degree in $G$.
\end{theorem}
For each $1\leq i\leq k_{h^*}$, let $\edges{h^*}_i$ be the weak solution that the algorithm from Theorem~\ref{thm: stopping condition} returns for instance $\pi(G_i,X_i,{\mathcal{Z}}_i)$, and let $\edges{h^*}=\bigcup_{i=1}^{k_{h^*}}\edges{h^*}_i$. Let $\mathsf{OPT}_i$ denote the cost of the strong optimal solution to $\pi(G_i,X_i,{\mathcal{Z}}_i)$. Then $|\edges{h^*}|=\sum_{i=1}^{k_{h^*}}O(\mathsf{OPT}_i\cdot \sqrt{n(H_i)}\cdot\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)+\mathsf{OPT}_i^3)$. Since $n(H_i)\leq n_{h^*-1}\leq 2n_{h^*}$ for all $i$, this is bounded by $\sum_{i=1}^{k_{h^*}}O(\mathsf{OPT}_i\cdot m^*\cdot \rho\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\log n)+\mathsf{OPT}_i^3)\leq O(\mathsf{OPT}\cdot m^*\cdot \rho\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \log n)+\mathsf{OPT}^3)$, as $\sum_{i=1}^{k_{h^*}}\mathsf{OPT}_i\leq \mathsf{OPT}$ from Invariant~(\ref{invariant 3: there is a cheap solution}). The final solution is $E^*=\bigcup_{h=1}^{h^*}\edges{h}$, and
\[\begin{split} |E^*|&\leq \sum_{h=1}^{h^*-1}|\edges{h}|+|\edges{h^*}|\\ &\leq (2\rho\log n)(\mathsf{OPT}\cdot m^*)+O(\mathsf{OPT}\cdot m^*\cdot \rho\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}} \cdot \log n)+\mathsf{OPT}^3)\\ &=O(\mathsf{OPT}^5\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)). \end{split}\]
We say that the execution of iteration $h$ is \emph{successful}, iff it either produces a valid input to the next iteration, together with the set $\edges{h}$ of edges, or finds a nasty canonical set in $\H$. We show how to execute each iteration, so that it is successful with probability at least $(1-1/\rho)$, if all previous iterations were successful. If any iteration returns a nasty canonical set, then we stop the algorithm and return this vertex set as the output. Since there are at most $2\rho\log n$ iterations, the probability that all iterations are successful is at least $(1-1/\rho)^{2\rho\log n}\geq 1/\operatorname{poly}(n)$.
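For completeness, both bounds just used follow from the elementary estimates $1-1/\rho\leq e^{-1/\rho}$ and $(1-1/\rho)^{\rho}\geq e^{-2}$ for $\rho\geq 2$ (the choice of $\rho$ below is much larger). First,
\[n_h=(1-1/\rho)^{(h-1)/2}\cdot n\leq e^{-(h-1)/(2\rho)}\cdot n,\]
so $n_h\leq 1\leq (m^*\cdot\rho\cdot\log n)^2$ once $h-1\geq 2\rho\ln n$, giving $h^*\leq 2\rho\log n$. Second,
\[(1-1/\rho)^{2\rho\log n}=\left((1-1/\rho)^{\rho}\right)^{2\log n}\geq e^{-4\log n}\geq \frac 1{\operatorname{poly}(n)}.\]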
In order to complete the proof of Theorem~\ref{thm: alg summary}, it is now enough to show an algorithm for executing each iteration, such that, given a valid input to the current iteration, the algorithm either finds a nasty canonical set in $\H$, or returns a valid input to the next iteration, with probability at least $1-\frac 1 \rho$. We do so in the next section.

\section{Iteration Execution}\label{sec: iteration}
\label{------------------------------------------------iteration execution-----------------------------------------------------------------------}
Throughout this section, we denote $n=|V(\H)|$, we let $\phi$ denote the optimal canonical solution for the {\sf Minimum Crossing Number}\xspace problem on $\H$, and we let $\mathsf{OPT}$ denote its cost. We start by setting the values of the parameters $\rho$ and $m^*$. The value of the parameter $\rho$ depends on two other parameters, which we define later. Specifically, we will define two functions $\lambda: \mathbb{N}\rightarrow {\mathbb R}$ and $N:\mathbb{N}\rightarrow {\mathbb R}$, with \[\lambda(n')=\Omega\left(\frac 1{\log n'\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2}\right )\] and \[N(n')=O(d_{\mbox{\textup{\footnotesize{max}}}}\sqrt{n'\log n'})\] for all $n'>0$. Also, recall that $\alpha^*=\Omega\left(\frac 1 {\log^{3/2}n\cdot \log\log n}\right )$ is the well-linkedness parameter from Theorem~\ref{thm: well-linked-general}. We need the value of $\rho$ to satisfy the following two inequalities:
\begin{equation}\label{eq: value of rho 1}
\forall 0<n'\leq n\quad \quad \rho>\frac{25\cdot 2^{24}d_{\mbox{\textup{\footnotesize{max}}}}^6\cdot N^2(n')}{n'\cdot \lambda^2(n')\cdot (\alpha^*)^2}
\end{equation}
\begin{equation}\label{eq: value of rho 2}
\forall 0<n'\leq n\quad \quad \rho >\frac{9\mathsf{OPT}}{\lambda(n')}
\end{equation}
Substituting the values of $N(n'),\lambda(n')$ and $\alpha^*$ in the above inequalities, we get that it is sufficient to set:
\[\rho=\Theta(\log n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2)\cdot\max\set{d_{\mbox{\textup{\footnotesize{max}}}}^{10}\log^5 n(\log\log n)^2,\mathsf{OPT}}=O\left (\mathsf{OPT}\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot\log n)\right ).\]
The value of the parameter $m^*$ is:
\[m^*=O\left (\frac{\mathsf{OPT}^2\cdot\rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )=O\left (\mathsf{OPT}^3\cdot \operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right ).\]
We now turn to describe each iteration $h$. Our goal is to either find a nasty canonical subset of vertices in $\H$, or produce a feasible input to the next iteration, $h+1$. Throughout the execution of iteration $h$, we construct a set ${\mathcal{G}}_{h+1}$ of new problem instances, for which Invariants~(\ref{invariant 1: disjointness})--(\ref{invariant 5: bound on size}) hold. We do not need to worry about the number of instances in ${\mathcal{G}}_{h+1}$ being bounded by $\mathsf{OPT}$: from Invariant~(\ref{invariant 3: there is a cheap solution}), the number of instances in ${\mathcal{G}}_{h+1}$ that do not have a solution of cost $0$ is bounded by $\mathsf{OPT}$, and since the instances that do have solutions of cost $0$ can be identified efficiently, only the remaining instances become the input to the next iteration. We will also gradually construct the set $\edges h$ of edges that we remove from the problem instance in this iteration. The iteration is executed on each one of the graphs $G_i$ separately.
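Schematically, a single iteration thus has the following structure. This is an illustrative sketch only; the helper names (\texttt{process\_instance}, \texttt{has\_zero\_cost\_solution}) are hypothetical placeholders for the procedures developed in the rest of this section.
\begin{verbatim}
# Illustrative control flow of iteration h (hypothetical helper names).
def execute_iteration(instances, Z, h):
    # instances: the valid input (G_1,X_1),...,(G_{k_h},X_{k_h})
    new_instances, removed_edges = [], set()
    for (G_i, X_i) in instances:
        out = process_instance(G_i, X_i, Z, h)
        if out.nasty_set is not None:
            return out.nasty_set          # halt: output the nasty set
        new_instances += out.instances    # candidates for G_{h+1}
        removed_edges |= out.edges        # edges removed in iteration h
    # only instances without cost-0 solutions are kept as input
    kept = [I for I in new_instances if not has_zero_cost_solution(I)]
    return kept, removed_edges
\end{verbatim}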
We fix one such graph $G_i$, for $1\leq i\leq k_h$, and focus on executing iteration $h$ on $G_i$. We need a few definitions.
\begin{definition}
Given any graph $H$, we say that a simple path $P\subseteq H$ is a \emph{$2$-path} iff the degrees of all inner vertices of $P$ are $2$. We say that it is a \emph{maximal $2$-path} iff it is not contained in any other $2$-path.
\end{definition}
\begin{definition}
We say that a connected graph $H$ is \emph{rigid} iff either $H$ is a simple cycle, or, after we replace every maximal $2$-path in $H$ with an edge, we obtain a $3$-vertex-connected graph, with no self-loops or parallel edges.
\end{definition}
Observe that if $H$ is rigid, then it has a unique planar drawing. We now define the notion of a valid skeleton.
\begin{definition}
Assume that we are given an instance $\pi=\pi(G,X,{\mathcal{Z}}')$ of the problem, and let $\phi'$ be the optimal strong solution for this instance. Given a subset $\tilde{E}$ of edges of $G$, and a sub-graph $K\subseteq G$, we say that $K$ is a \emph{valid skeleton} for $\pi,\tilde{E},\phi'$, iff the following conditions hold:
\begin{itemize}
\item Graph $K$ is rigid, and the edges of $K$ do not participate in crossings in $\phi'$. Moreover, the set $V(K)$ of vertices is canonical for ${\mathcal{Z}}'$.
\item $X\subseteq K$, and no edges of $\tilde{E}$ belong to $K$.
\item Every connected component of $G\setminus (K\cup \tilde E)$ contains at most $n_{h+1}$ vertices.
\end{itemize}
\end{definition}
Notice that if $K$ is a valid skeleton, then we can efficiently find the drawing $\phi_K'$ induced by $\phi'$ --- this is the unique planar drawing of $K$. Each connected component $C$ of $G\setminus (K\cup \tilde E)$ must then be embedded entirely inside some face $F_C$ of $\phi_K'$. Once we determine the face $F_C$ for each such component $C$, we can solve the problem recursively on these components, where for each component $C$, the bounding box becomes the boundary of $F_C$. This is the main idea of our algorithm. In fact, we will be able to find a valid skeleton $K_i$ for each instance $\pi(G_i,X_i,{\mathcal{Z}}_i)$ and drawing $\phi_i$, for $1\leq i\leq k_h$, w.h.p., but we cannot ensure that this skeleton will contain the bounding box $X_i$. If there is a large collection of edge-disjoint paths connecting $K_i$ to $X_i$ in $G_i$, we can still connect $X_i$ to $K_i$, by choosing a small subset of these paths at random. This will give the desired final valid skeleton that contains $X_i$. However, if there is only a small number of such paths, then we cannot find a single valid skeleton that contains $X_i$ (in particular, it is possible that all edges incident on $X_i$ participate in crossings in $\phi_i$, so such a skeleton does not exist). Instead, in this case, we can find a small subset $E'_i$ of edges, whose removal disconnects $X_i$ from many vertices of $G_i$. In particular, after we remove $E'_i$ from $G_i$, graph $G_i$ will decompose into two connected components: one containing $X_i$ and at most $n_{h+1}$ other vertices, and another that does not contain $X_i$. The first component is denoted by $G_i^X$, and the second by $G_i'$. The sub-instance defined by $G_i'$ is now completely disconnected from the rest of the graph, and it has no bounding box, so we can add it directly to ${\mathcal{G}}_{h+1}$. For the sub-instance $G_i^X$, we show that $X_i$ is a valid skeleton. The edges in $E_i'$ are then added to $\edges h$. We now define these notions more formally.
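Before doing so, we note as an aside that the rigidity condition above is easy to test algorithmically. The following is a minimal illustrative sketch using the \texttt{networkx} library; the function name \texttt{is\_rigid} and all implementation choices are ours, and the sketch is not part of the formal development.
\begin{verbatim}
# Illustrative rigidity test (sketch; assumes a connected simple graph H).
import networkx as nx

def is_rigid(H):
    # A simple cycle is rigid by definition.
    if all(d == 2 for _, d in H.degree()):
        return nx.is_connected(H)
    G = nx.MultiGraph(H)
    # Replace every maximal 2-path by a single edge, by suppressing
    # degree-2 vertices (their degrees never change during suppression).
    for v in [u for u in G.nodes if G.degree(u) == 2]:
        a, b = [nbr for _, nbr in G.edges(v)]
        G.remove_node(v)
        G.add_edge(a, b)  # may create a self-loop or a parallel edge
    if any(u == w for u, w in G.edges()):
        return False      # self-loop after suppression
    simple = nx.Graph(G)
    if simple.number_of_edges() != G.number_of_edges():
        return False      # parallel edges after suppression
    # Rigid iff the suppressed graph is 3-vertex-connected.
    return len(simple) >= 4 and nx.node_connectivity(simple) >= 3
\end{verbatim}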
Recall that for each $i: 1\leq i\leq k_h$, problem $\pi(G_i,X_i,{\mathcal{Z}}_i)$ is guaranteed to have a strong feasible solution $\phi_i$ of cost at most $\mathsf{OPT}_i$. For each such instance, we will find two subsets $E'_i$ and $E''_i$ of edges, where $|E'_i|=O(\mathsf{OPT}^2\cdot \rho\cdot d_{\mbox{\textup{\footnotesize{max}}}})$, and $|E''_i|=O\left (\frac{\mathsf{OPT}^2\cdot\rho\cdot \log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}}}{\alpha^*}\right)$, that will be added to $\edges h$. Assume first that $X_i\neq \emptyset$. Then, by Invariant~(\ref{invariant 5: bound on size}), $|V(G_i\setminus X_i)|\leq n_h$. The graph $G_i\setminus E'_i$ consists of two connected sub-graphs: $G_i^X$, which contains the bounding box $X_i$, and the remaining graph $G'_i$. We will find a subset $E''_i$ of edges and a skeleton $K_i$ for graph $G_i^X$, such that w.h.p., $K_i$ is a valid skeleton for the instance $\pi(G_i^X,X_i,{\mathcal{Z}}_i)$, the set $E''_i$ of edges, and the solution $\phi_i$. Therefore, each one of the connected components of $G_i^X\setminus (K_i\cup E''_i)$ contains at most $n_{h+1}$ vertices. We will process these components, to ensure that we can solve them independently, and then add them to the set ${\mathcal{G}}_{h+1}$, where they will serve as input to the next iteration. The remaining graph, $G'_i$, contains at most $n_h$ vertices from Invariant~(\ref{invariant 5: bound on size}), and has no bounding box. So we can add $\pi(G'_i,\emptyset,{\mathcal{Z}}_i)$ to ${\mathcal{G}}_{h+1}$ directly. If $X_i=\emptyset$, then we will ensure that $E'_i=\emptyset$, $G'_i=\emptyset$ and $G_i^X=G_i$. Recall that in this case, from Invariant~(\ref{invariant 5: bound on size}), $|V(G_i)|\leq n_{h-1}$. We will find a valid skeleton $K_i$ for $\pi(G_i,X_i,{\mathcal{Z}}_i),E''_i,\phi_i$, and then process the connected components of $G_i\setminus (K_i\cup E''_i)$ as in the previous case, before adding them to the set ${\mathcal{G}}_{h+1}$. The algorithm consists of three steps. Given a graph $G_i\in \set{G_1,\ldots,G_{k_h}}$ with the bounding box $X_i$, the goal of the first step is to either produce a nasty canonical vertex set in the whole contracted graph $\H$, or to find a $\rho$-balanced $\alpha^*$-well-linked partition $(A,B)$ of $V(G_i)$, where $A$ and $B$ are canonical, and $|E(A,B)|$ is small. The goal of the second step is to find the sets $E'_i,E''_i$ of edges and a valid skeleton $K_i$ for instance $\pi(G^X_i,X_i,{\mathcal{Z}}_i)$. In the third step, we produce a new collection of instances from the connected components of graphs $G_i\setminus (E''_i\cup K_i)$, which, together with the graphs $G_i'$, for $1\leq i\leq k_h$, are then added to ${\mathcal{G}}_{h+1}$, to become the input to the next iteration.

\subsection{Step 1: Partition}\label{sec: step 1}
Throughout this step, we fix some graph $G\in \set{G_1,\ldots,G_{k_h}}$. We denote by $X$ its bounding box, and let $H^0=G\setminus V(X)$. Notice that graph $H^0$ is not necessarily connected. We denote by $H$ the largest connected component of $H^0$, and by ${\mathcal{H}}^0$ the set of the remaining connected components. We focus on $H$ only in the current step. Let $n'=|V(H)|$. If $n'\leq (m^*\cdot \rho\cdot \log n)^2$, then we can simply proceed to the third step, as the size of every connected component of $H^0$ is then bounded by $n'\leq (m^*\cdot \rho\cdot \log n)^2\leq n_{h+1}$, by the choice of $h^*$. We then define $E'=E''=\emptyset$, $G^X=G$, $G'=\emptyset$, and we use $X$ as the skeleton $K$ for $G$.
It is easy to see that it is a valid skeleton. Therefore, we assume from now on that:
\begin{equation}\label{eq: upper bound on rho in terms of n'}
n'\geq (m^*\cdot \rho\cdot \log n)^2
\end{equation}
Recall that from Invariant~(\ref{invariant 2: canonical}), $H$ is canonical w.r.t. ${\mathcal{Z}}$, so we define ${\mathcal{Z}}'=\set{Z\in {\mathcal{Z}}: Z\subseteq V(H)}$. Throughout this step, whenever we say that a set is canonical, we mean that it is canonical w.r.t. ${\mathcal{Z}}'$. Recall that the goal of the current step is to produce a partition $(A,B)$ of the vertices of $H$, such that $A$ and $B$ are both canonical, the partition is $\rho$-balanced and $\alpha^*$-well-linked, and $|E(A,B)|$ is small, or to find a nasty canonical vertex set in $\H$. In fact, we will define four different cases. The first two cases are the easy cases, for which it is easy to find a suitable skeleton, even though we do not obtain a $\rho$-balanced $\alpha^*$-well-linked bipartition. The third case will give the desired bipartition $(A,B)$, and the fourth case will produce a partition with slightly different, but still sufficient properties. We then show that if none of these four cases happen, then we can find a nasty canonical set in $\H$. The first case is when there is some grid $Z\in {\mathcal{Z}}'$ with $|Z|\geq n'/2$. If this case happens, we continue directly to the second step (this is the simple case where eventually the skeleton will be simply $Z$ itself, after we connect it to the bounding box). In the rest of this step we assume that for each $Z\in {\mathcal{Z}}'$, $|Z|< n'/2$. The initial partition is summarized in the next theorem, whose proof appears in the Appendix.
\begin{theorem}\label{thm: initial partition}
Assume that for each $Z\in {\mathcal{Z}}'$, $|Z|< n'/2$. Then we can efficiently find a partition $(A,B)$ of $V(H)$, such that:
\begin{itemize}
\item Both $A$ and $B$ are canonical.
\item $|A|, |B|\geq \lambda n'$, for $\lambda=\Omega\left(\frac{1}{\log n'\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2}\right )$, and $|E(A,B)|\leq O(d_{\mbox{\textup{\footnotesize{max}}}}\sqrt {n'\log n'})$.
\item Set $A$ is $\alpha^*$-well-linked.
\end{itemize}
\end{theorem}
We say that Case 2 happens iff $|E(A,B)|\leq \frac{10^7\mathsf{OPT}^2\cdot \rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}$. If Case 2 happens, we continue directly to Step 2 (this is also a simple case, in which the eventual skeleton is the bounding box $X$ itself, and $E''=E(A,B)$). Let $N=\Theta(d_{\mbox{\textup{\footnotesize{max}}}}\sqrt{n'\log n'})$, so that $|E(A,B)|\leq N$. Notice that set $B$ has property (P1) in $H$, since set $A$ is connected. Our next step is to use Theorem~\ref{thm: well-linked-general} to produce an $\alpha^*$-well-linked decomposition ${\mathcal{C}}$ of $B$, where each set $C\in {\mathcal{C}}$ has property (P1) and is canonical w.r.t. ${\mathcal{Z}}'$, with $\sum_{C\in {\mathcal{C}}}|\operatorname{out}_H(C)|\leq 2N$. It is easy to see that the decomposition will give a slightly stronger property than (P1): namely, for each $C\in {\mathcal{C}}$, for every edge $e\in \operatorname{out}_H(C)$, there is a path $P\subseteq H\setminus C$, connecting $e$ to some vertex of $A$. We will use this property later. We are now ready to define the third case. This case happens if there is some set $C\in {\mathcal{C}}$ with $|C|\geq n'/\rho$.
So if Case 3 happens, we have found two disjoint sets $A,C$ of vertices of $H$, with $|A|,|C|\geq n'/\rho$, both sets being canonical w.r.t. ${\mathcal{Z}}'$ and $\alpha^*$-well-linked. In the next lemma, whose proof appears in the Appendix, we show that we can expand this partition to the whole graph $H$.
\begin{lemma}\label{lemma: decomposition for Case 2}
If Case 3 happens, then we can efficiently find a partition $(A',B')$ of $V(H)$, such that $|A'|,|B'|\geq n'/\rho$, both sets are canonical w.r.t. ${\mathcal{Z}}'$, and $\alpha^*$-well-linked w.r.t. $\operatorname{out}_H(A'),\operatorname{out}_H(B')$, respectively.
\end{lemma}
If Case 3 happens, we continue directly to the second step. From now on, we assume that Case 3 does not happen. Notice that the above decomposition is done in the graph $H$, that is, the sets $C\in {\mathcal{C}}$ are well-linked w.r.t. $\operatorname{out}_H(C)$, and $\sum_{C\in {\mathcal{C}}}|\operatorname{out}_H(C)|\leq 2N$. Property (P1) is also only ensured for $T_H(C)$, and not necessarily for $T_G(C)$. For each $C\in {\mathcal{C}}$, let $\operatorname{out}^X(C)=\operatorname{out}_G(C)\setminus \operatorname{out}_H(C)$; that is, $\operatorname{out}^X(C)$ contains all edges connecting $C$ to the bounding box $X$. We do not have any bound on the size of $\operatorname{out}^X(C)$, and $C$ is not guaranteed to be well-linked w.r.t. these edges. The purpose of the final partitioning step is to take care of this. This step is only performed if $X\neq \emptyset$. We perform the final partitioning step on each cluster $C\in {\mathcal{C}}$ separately. We start by setting up an $s$-$t$ min-cut/max-flow instance, as follows. We construct a graph $\tilde{C}$ by starting with $H[C]\cup \operatorname{out}_G(C)$, identifying all vertices in $T_H(C)$ into a source $s$, and identifying all vertices in $T_G(C)\setminus T_H(C)$ into a sink $t$. Let $F$ be the maximum $s$-$t$ flow in $\tilde{C}$, and let $(\tilde{C}_1,\tilde{C}_2)$ be the corresponding minimum $s$-$t$ cut, with $s\in \tilde{C}_1,t\in \tilde{C}_2$. From Corollary~\ref{corollary: canonical s-t cut}, both $\tilde C_1$ and $\tilde C_2$ are canonical. We let $C_1$ be the set of vertices of $\tilde{C}_1$, excluding $s$, and $C_2$ be the set of vertices of $\tilde{C}_2$, excluding $t$. Notice that both $C_1$ and $C_2$ are also canonical. We say that $C_1$ is a cluster of type $1$, and $C_2$ is a cluster of type $2$. Recall that we have computed a max-flow $F$ connecting $s$ to $t$ in $\tilde{C}$. Since all capacities are integral, and all edges of $H[C]$ have unit capacity, $F$ consists of a collection ${\mathcal{P}}$ of edge-disjoint paths in the graph $H[C]\cup\operatorname{out}_G(C)$. Each such path $P$ connects an edge in $\operatorname{out}_H(C)$ to an edge in $\operatorname{out}^X(C)$. Path $P$ consists of two consecutive segments: one completely contained in $C_1$, and the other completely contained in $C_2$. If the first segment is non-empty, then it defines a path $P_1\subseteq H[C_1]\cup \operatorname{out}_G(C_1)$, connecting an edge in $\operatorname{out}_H(C)$ to an edge in $E(\tilde C_1,\tilde C_2)$. Similarly, if the second segment is non-empty, then it defines a path $P_2\subseteq H[C_2]\cup \operatorname{out}_G(C_2)$, connecting an edge in $E(\tilde C_1,\tilde C_2)$ to an edge in $\operatorname{out}^X(C)$. Every edge in $E(C_1,C_2)$ participates in one such path $P_1\subseteq H[C_1]\cup\operatorname{out}_G(C_1)$, and one such path $P_2\subseteq H[C_2]\cup\operatorname{out}_G(C_2)$.
Similarly, if $e\in \operatorname{out}^X(C)\cap \operatorname{out}_G(C_1)$, then it is also an endpoint of exactly one path $P_1\subseteq H[C_1]\cup\operatorname{out}_G(C_1)$, and if $e\in \operatorname{out}_G(C_2)\setminus \operatorname{out}^X(C)$, then it is an endpoint of exactly one such path $P_2\subseteq H[C_2]\cup\operatorname{out}_G(C_2)$.
\begin{figure}[h]
\scalebox{0.5}{\rotatebox{0}{\includegraphics{step1-last-partition-cut.pdf}}}
\caption{Partition of cluster $C$. Edges of $\operatorname{out}_H(C)$ are blue, edges of $\operatorname{out}^X(C)$ are red; edges participating in the min-cut are marked by $*$. The black edges belong to both $E_2(C_1)$ and $E_1(C_2)$.}
\label{fig: step 1 last partition}
\end{figure}
For the cluster $C_1$, let $E_1(C_1)=\operatorname{out}_H(C_1)\cap \operatorname{out}_H(C)$, and $E_2(C_1)=\operatorname{out}_G(C_1)\setminus \operatorname{out}_H(C)$. All edges in $E_2(C_1)$ belong to either $E(C_1,C_2)$ or $\operatorname{out}^X(C)$. By the above discussion, we have a collection ${\mathcal{P}}(C_1)$ of edge-disjoint paths in $H[C_1]\cup\operatorname{out}_G(C_1)$, each path connecting an edge in $E_1(C_1)$ to an edge in $E_2(C_1)$, and every edge in $E_2(C_1)$ is an endpoint of a path in ${\mathcal{P}}(C_1)$. An important property of cluster $C_1$ that we will use later is that if $C_1\neq \emptyset$, then $E_1(C_1)\neq \emptyset$. All edges in $E_1(C_1)$ can reach set $A$ in graph $H\setminus C_1$, and all edges in $E_2(C_1)$ can reach the set $V(X)$ of vertices in the graph $G\setminus C_1$. Moreover, if $E_2(C_1)\neq \emptyset$, then there is a path $P(C_1)$, connecting a vertex of $C_1$ to a vertex of $X$, whose inner vertices all belong to $C_2$. In particular, it does not contain vertices of any other type-$1$ clusters. Similarly, for the cluster $C_2$, let $E_2(C_2)=\operatorname{out}_G(C_2)\cap \operatorname{out}^X(C)$, and $E_1(C_2)=\operatorname{out}_G(C_2)\setminus \operatorname{out}^X(C)$. All edges in $E_1(C_2)$ belong to either $E(C_1,C_2)$, or to $\operatorname{out}_H(C)$. From the above discussion, we have a set ${\mathcal{P}}(C_2)$ of edge-disjoint paths in $H[C_2]\cup\operatorname{out}_G(C_2)$, each such path connecting an edge in $E_1(C_2)$ to an edge in $E_2(C_2)$, and every edge in $E_1(C_2)$ is an endpoint of one such path. Let ${\mathcal T}_1$ be the set of all non-empty clusters of type $1$, and ${\mathcal T}_2$ the set of clusters of type $2$. For the case where $X=\emptyset$, all clusters $C\in {\mathcal{C}}$ are type-$1$ clusters, and ${\mathcal T}_2=\emptyset$. We are now ready to define the fourth case. We say that Case 4 happens, iff clusters in ${\mathcal T}_2$ contain at least $\lambda n'/2$ vertices altogether. Notice that Case 4 can only happen if $X\neq \emptyset$. The proof of the next lemma appears in the Appendix.
\begin{lemma}\label{lemma: decomposition for Case 3}
If Case 4 happens, then we can find a partition $(A',B')$ of $V(H)$, such that $|A'|,|B'|\geq n'/\rho$, both $A'$ and $B'$ are canonical, and $A'$ is $\alpha^*$-well-linked w.r.t. $E(A',B')$.
Moreover, if we denote $\operatorname{out}^X(B')=\operatorname{out}_G(B')\setminus E(A',B')$, then there is a collection ${\mathcal{P}}$ of edge-disjoint paths in graph $H[B']\cup \operatorname{out}_G(B')$, connecting the edges in $E(A',B')$ to edges in $\operatorname{out}^X(B')$, such that each edge $e\in E(A',B')$ is an endpoint of exactly one such path.
\end{lemma}
We will show below that for Cases 1--4, we can successfully construct a skeleton and produce an input to the next iteration, with high probability. In the next theorem, whose proof appears in the Appendix, we show that if none of these cases happen, then we can efficiently find a nasty canonical set.
\begin{theorem}\label{thm: case 4}
If none of the Cases 1--4 happen, then we can efficiently find a nasty canonical set in the original contracted graph $\H$.
\end{theorem}

\subsection{Step 2: Skeleton Construction}\label{subsection: skeleton construction}
Let $(G,X)\in\set{(G_1,X_1),\ldots,(G_{k_h},X_{k_h})}$, let $\phi'$ be the strong solution to problem $\pi(G,X,{\mathcal{Z}}')$, guaranteed by Invariant~(\ref{invariant 3: there is a cheap solution}), and let $\mathsf{OPT}'$ denote its cost. Recall that $H$ is the largest connected component in $G\setminus X$, and ${\mathcal{Z}}'=\set{Z\in {\mathcal{Z}}: Z\subseteq V(H)}$. We say that an edge $e\in E(G)$ is \emph{good} iff it does not participate in any crossings in $\phi'$. Recall that for each $Z\in {\mathcal{Z}}'$, all edges of $G[Z]$ are good. In the second step we define the subsets $E',E''$ of edges, the two sub-graphs $G^X$ and $G'$ of $G$, and construct a valid skeleton $K$ for $\pi(G^X,X,{\mathcal{Z}}')$, $E''$ and $\phi'$, for Cases 1--4. We define a set $T\subseteq E(G)$ of edges, which we refer to as ``terminals'' throughout the rest of this section, as follows. For Case 1, $T=\emptyset$. For Case 2, $T=E(A,B)$, where $(A,B)$ is the partition of $H$ from Theorem~\ref{thm: initial partition}. For Cases 3 and 4, $T=E(A',B')$, where $(A',B')$ are the partitions of $H$ given by Lemmas~\ref{lemma: decomposition for Case 2} and \ref{lemma: decomposition for Case 3}, respectively. For convenience, we rename $(A',B')$ as $(A,B)$ for these two cases. Since the partition $(A,B)$ of $H$ is canonical for Cases 2--4, we are guaranteed that $T$ does not contain any edges of grids $Z\in {\mathcal{Z}}'$. The easiest case is Case 2. The skeleton $K$ for this case is simply the bounding box $X$, and we set $E''=T$. Recall that $|T|\leq \frac{10^7\mathsf{OPT}^2\cdot \rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}$ for this case. Since $|A|,|B|\geq n'/\rho$, it is easy to verify that $X$ is a valid skeleton for $G$, $\phi'$ and $E''$. In particular, $|A|,|B|\leq n'(1-1/\rho)\leq n_{h-1}(1-1/\rho)\leq n_{h+1}$. We set $E'=\emptyset$, $G^X=G$, and $G'=\emptyset$. From now on we focus on Cases 1, 3 and 4. We first build an initial skeleton $K'$ of $G$, and a subset $E''$ of edges, such that $K'$ has all the required properties, except that it is possible that $X\not\subseteq K'$. Specifically, we will ensure that $K'$ only contains good edges, is rigid, and every connected component of $H\setminus (K'\cup E'')$ contains at most $n_{h+1}$ vertices. In the end, we will either connect $K'$ to $X$, or find a small subset $E'$ of edges separating the two sets. The initial skeleton $K'$ for Case 1 is simply the grid $Z\in{\mathcal{Z}}'$ with $|Z|\geq n'/2$, and we set $E''=\emptyset$.
Observe that $K'$ is good, rigid, canonical, and every connected component of $H\setminus K'$ contains at most $n'/2\leq n_{h-1}/2\leq n_{h+1}$ vertices. The construction of the initial skeleton for Cases 3 and 4 is summarized in the next theorem, whose proof is deferred to the Appendix.
\begin{theorem}\label{thm: initial skeleton for Cases 3 and 4}
Assume that Case 3 or Case 4 happens. Then we can efficiently construct a skeleton $K'\subseteq G$, such that with probability at least $\left (1-\frac 1{2\rho\cdot \mathsf{OPT}}\right )$, $K'$ is good, rigid, and every connected component of $H\setminus K'$ contains at most $O\left ( \frac{\mathsf{OPT}^2\cdot \rho\cdot \log^2 n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2 \cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )$ terminals.
\end{theorem}
Let ${\mathcal{C}}$ be the set of all connected components of $H\setminus K'$. Observe that at most one of the components may contain more than $n'/2$ vertices. Let $C$ denote this component (if it exists), and let $E''$ be the set of terminals contained in $C$, $E''=T\cap E(C)$. Let ${\mathcal{C}}'$ be the set of all connected components of $C\setminus E''$. Then for each $C'\in {\mathcal{C}}'$, $|V(C')|\leq n'(1-1/\rho)$ must hold: otherwise, $V(C')$ would contain vertices that belong to both $A$ and $B$, and so $E(C')$ would contain at least one terminal. Therefore, the size of every connected component of $H\setminus (K'\cup E'')$ is bounded by $n'(1-1/\rho)\leq n_{h-1}(1-1/\rho)\leq n_{h+1}$ from Invariant~(\ref{invariant 5: bound on size}). Recall that the terminals do not belong to the grids $Z\in {\mathcal{Z}}'$. Observe that it is possible that $V(K')$ is not canonical. Consider some grid $Z\in{\mathcal{Z}}$, such that $V(Z)\cap V(K')\neq \emptyset$. If $Z\cap K'$ is a simple path, then we will deal with such grids at the end of the third step. Let ${\mathcal{Z}}''(G)$ denote the set of all such grids. Assume now that $Z\cap K'$ is not a simple path. Since graph $K'$ is rigid, it must be the case that there are at least three matching edges from $\operatorname{out}_G(Z)$ that belong to $K'$. In this case, we can simply add the whole grid $Z$ to the skeleton $K'$; the new skeleton $K'$ remains good and rigid, and every connected component of $H\setminus (K'\cup E'')$ contains at most $n_{h+1}$ vertices. So from now on we assume that if $V(Z)\cap V(K')\neq \emptyset$ for some $Z\in {\mathcal{Z}}$, then $Z\cap K'$ is a simple path, and so $Z\in {\mathcal{Z}}''(G)$. We denote by $K^+$ the union of $K'$ with all the grids in ${\mathcal{Z}}''(G)$. Clearly, $K^+$ is connected and canonical, but it is not necessarily rigid. Consider Cases 1, 3 and 4. If $X=\emptyset$, then we define $E'=\emptyset$, $G^X=G$, $G'=\emptyset$ and the final skeleton $K=K'$. It is easy to see that $K$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus{\mathcal{Z}}''(G))$, $E''$ and $\phi'$. Otherwise, if $X\neq \emptyset$, we now try to connect the skeleton $K'$ to the bounding box $X$ (observe that some of the vertices of $X$ may already belong to $K'$). In order to do so, we will try to find a set ${\mathcal{P}}'$ of $24\mathsf{OPT}^2\rho$ vertex-disjoint paths in $G\setminus E''$, connecting the vertices of $X$ to the vertices of $K^+$ (where some of these paths can be simply vertices in $X\cap K^+$). We distinguish between three cases. The first case is when such a collection of paths does not exist in $G\setminus E''$.
Then there must be a set $V'\subseteq V(G)$ of at most $24\mathsf{OPT}^2\rho$ vertices, whose removal from $G\setminus E''$ separates $X$ from $K^+$. Therefore, the size of the edge min-cut separating $X$ from $K^+\setminus X$ in $G\setminus E''$ is at most $24\mathsf{OPT}^2\rho\cdot d_{\mbox{\textup{\footnotesize{max}}}}$. Observe that both $K^+$ and $X$ are canonical w.r.t. ${\mathcal{Z}}'$, and the vertices in $V(X)\cap V(K^+)$ cannot belong to sets $Z\in {\mathcal{Z}}'$, by the definition of ${\mathcal{Z}}'$. Therefore, from Corollary~\ref{corollary: canonical s-t cut}, there is a subset $E'$ of at most $24\mathsf{OPT}^2\rho\cdot d_{\mbox{\textup{\footnotesize{max}}}}$ edges (a canonical edge min-cut), whose removal partitions graph $G\setminus E''$ into two connected sub-graphs, $G^X$ containing $X$, and $G'=G\setminus V(G^X)$; moreover, $V(G^X)$ and $V(G')$ are both canonical, and the edges of $E'$ do not belong to any grids $Z\in {\mathcal{Z}}'$. We add the instance $\pi(G',\emptyset,{\mathcal{Z}}')$ directly to ${\mathcal{G}}_{h+1}$. From Invariant~(\ref{invariant 5: bound on size}), since $X\neq \emptyset$, $|V(G')|\leq n_h$, and since the bounding box of the new instance is $\emptyset$, it is a valid input to the next iteration. For graph $G^X$, we use $X$ as its skeleton. Observe that every connected component of $G^X\setminus (X\cup E'')$ must be either a sub-graph of some connected component of $H\setminus (K'\cup E'')$ (and then its size is bounded by $n_{h+1}$), or it must belong to ${\mathcal{H}}^0$ (and then its size is bounded by $n_{h-1}/2\leq n_{h+1}$). Therefore, $X$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus {\mathcal{Z}}''(G))$, $E''$, and $\phi'$. The second case is when there is some grid $Z\in {\mathcal{Z}}''(G)$, such that for any collection ${\mathcal{P}}'$ of $24\mathsf{OPT}^2\rho$ vertex-disjoint paths, connecting the vertices of $X$ to the vertices of $K^+$ in $G$, at least half the paths contain vertices of $\Gamma(Z)$ as their endpoints. Recall that only $2$ edges of $\operatorname{out}_H(Z)$ belong to $K'$. Then there is a collection $E'$ of at most $12d_{\mbox{\textup{\footnotesize{max}}}}\mathsf{OPT}^2\rho+2$ edges in $G\setminus E''$, whose removal separates $V(X)\cup Z$ from $V(K^+)\setminus (Z\cup X)$. Again, we can ensure that the edges of $E'$ do not belong to the grids $Z\in {\mathcal{Z}}'$. Let $G^X$ denote the resulting sub-graph that contains $X$, and $G'=G\setminus G^X$. Then both $V(G^X)$ and $V(G')$ are canonical, and we can add the instance $\pi(G',\emptyset,{\mathcal{Z}}')$ to ${\mathcal{G}}_{h+1}$, as before. In order to build a valid skeleton for graph $G^X$, we consider the subset ${\mathcal{P}}''\subseteq {\mathcal{P}}'$ of $12\mathsf{OPT}^2\rho$ vertex-disjoint paths, connecting the vertices of $X$ to the vertices of $\Gamma(Z)$, and we randomly choose three such paths. We then let the skeleton $K$ of $G^X$ consist of the union of $X$, $Z$, and the three selected paths. It is easy to see that the resulting graph $K$ is rigid, and with probability at least $(1-\frac 1{2\rho\cdot \mathsf{OPT}})$, it only contains good edges. Moreover, every connected component of $G^X\setminus (K\cup E'')$ is either a sub-graph of a connected component of $H\setminus (K'\cup E'')$ (and may contain at most $n_{h+1}$ vertices), or it belongs to ${\mathcal{H}}^0$ (and then its size is bounded by $n_{h+1}$). Therefore, $K$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus {\mathcal{Z}}''(G))$, $E''$, and $\phi'$.
The third case is when we can find the desired collection ${\mathcal{P}}'$ of paths, and moreover, for each grid $Z\in {\mathcal{Z}}''(G)$, at most half the paths in ${\mathcal{P}}'$ contain vertices of $\Gamma(Z)$. We then randomly select three paths from ${\mathcal{P}}'$, making sure that at most two paths containing vertices of $\Gamma(Z)$ are selected for any grid $Z\in {\mathcal{Z}}''(G)$. Since at most $2\mathsf{OPT}$ of the paths in ${\mathcal{P}}'$ are bad (that is, contain edges that participate in crossings in $\phi'$), with probability at least $1-1/(2\mathsf{OPT}\rho)$, none of the selected paths is bad. We then define $K$ to be the union of $K'$, $X$, and the three selected paths. Additionally, if, for some grid $Z\in {\mathcal{Z}}''(G)$, one or two of the selected paths contain vertices in $\Gamma(Z)$, then we remove $Z$ from ${\mathcal{Z}}''(G)$ and add it to $K$. It is easy to verify that the resulting skeleton is rigid, and it only contains good edges. Moreover, every connected component of $G\setminus (K\cup E'')$ is either a sub-graph of a connected component of $H\setminus (K'\cup E'')$, or it is a sub-graph of one of the graphs in ${\mathcal{H}}^0$. In the former case, its size is bounded by $n_{h+1}$ as above, while in the latter case, its size is bounded by $|V(G\setminus X)|/2\leq n_{h-1}/2<n_{h-1}(1-1/\rho)\leq n_{h+1}$. We set $E'=\emptyset$, $G^X=G$, and $G'=\emptyset$. To summarize this step, we have started with the instance $\pi(G,X,{\mathcal{Z}}')$, and defined two subsets $E',E''$ of edges, with $|E'|\leq O(\mathsf{OPT}^2d_{\mbox{\textup{\footnotesize{max}}}}\rho)$ and $|E''|\leq O\left ( \frac{\mathsf{OPT}^2\cdot \rho\cdot \log^2 n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2 \cdot \ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )$, whose removal disconnects $G$ into two connected sub-graphs: $G^X$ containing $X$, and $G'$. Moreover, both sets $V(G^X)$, $V(G')$ are canonical, and $E',E''$ do not contain edges belonging to grids $Z\in{\mathcal{Z}}'$. We have added instance $\pi(G',\emptyset,{\mathcal{Z}}')$ to ${\mathcal{G}}_{h+1}$, and we have defined a skeleton $K$ for $G^X$. We have shown that $K$ is a valid skeleton for $\pi(G^X,X,{\mathcal{Z}}'\setminus {\mathcal{Z}}''(G))$, $E''$, and $\phi'$. The probability that this step is successful for a fixed graph $G\in\set{G_1,\ldots,G_{k_h}}$ is at least $(1-1/(\rho\cdot \mathsf{OPT}))$, and so the probability that it is successful across all graphs is at least $(1-1/\rho)$. We can assume w.l.o.g. that every edge in set $E'$ has one endpoint in $G^X$ and one endpoint in $G'$: otherwise, this edge does not separate $G^X$ from $G'$, and can be removed from $E'$. Similarly, we can assume w.l.o.g. that for every edge $e\in E''$, the two endpoints of $e$ either belong to distinct connected components of $G^X\setminus (K\cup E'')$, or one endpoint belongs to $G^X$, and the other to $G'$. We will use these facts later, when we claim that Invariant~(\ref{invariant 2: proper subgraph}) holds for the resulting instances.

\subsection{Step 3: Producing Input to the Next Iteration}\label{sec: step 3}
Recall that so far, for each $1\leq i\leq k_h$, we have found two collections $E_i',E_i''$ of edges, two sub-graphs $G_i^X$ and $G_i'$ with $X_i\subseteq G_i^X$, and a valid skeleton $K_i$ for $\pi(G_i^X,X_i,{\mathcal{Z}}\setminus {\mathcal{Z}}''(G_i))$, $E''_i$, $\phi_i$.
The set $E_i'\cup E_i''$ does not contain any edges of the grids $Z\in {\mathcal{Z}}$, and each edge in $E_i'\cup E''_i$ either connects a vertex of $G_i^X$ to a vertex of $G_i'$, or vertices of two distinct connected components of $G_i^X\setminus (K_i\cup E''_i)$. Recall that $G_i'$ contains at most $n_{h}$ vertices, and there are no edges in $G_i\setminus (E_i'\cup E_{i}'')$ connecting the vertices of $G_i'$ to those of $G_i^X$. Let ${\mathcal{C}}'$ denote the set of all connected components of $G_i^X\setminus (K_i\cup E_i'')$. Then for each $C\in {\mathcal{C}}'$, $|V(C)|\leq n_{h+1}$. Since graph $K_i$ is rigid, we can find the planar drawing $\phi_i(K_i)$ of $K_i$ induced by $\phi_i$ efficiently. Since all edges of $K_i$ are good for $\phi_i$, each connected component $C\in{\mathcal{C}}'$ is embedded inside a single face $F_C^*$ of $\phi_i(K_i)$. Intuitively, we would like to find this face $F_C^*$ for each such connected component $C$, and then solve the problem recursively on $C$, together with the bounding box $\gamma(F_C^*)$ --- the boundary of the face $F_C^*$. Apart from the difficulty in identifying the face $F_C^*$, a problem with this approach is that it is not clear that we can solve the problems induced by different connected components separately. For example, if both $C$ and $C'$ need to be embedded inside the same face $F$, then even if we find weak solutions for problems $\pi(C,\gamma(F),{\mathcal{Z}})$ and $\pi(C',\gamma(F),{\mathcal{Z}})$, it is not clear that these two solutions can be combined together to give a feasible weak solution for the whole problem, since the drawings of $C\cup \gamma(F)$ and $C'\cup \gamma(F)$ may interfere with each other. We define below a condition under which two clusters are considered independent and can be solved separately. We will then find an assignment of each cluster $C$ to one of the faces of $\phi_i(K_i)$, and find a further partition of each cluster $C\in {\mathcal{C}}'$, such that all resulting clusters assigned to the same face are independent, and their corresponding problems can therefore be solved separately. We now focus on some graph $G=G_i^X\in \set{G_1^X,\ldots,G_{k_h}^X}$, and we denote its bounding box by $X$, its skeleton $K_i$ by $K$, and the two sets $E_i',E_i''$ of edges by $E'$ and $E''$, respectively. We let $\phi'$ denote the drawing of $G_i^X$ induced by the drawing $\phi_i$, guaranteed by Invariant~(\ref{invariant 3: there is a cheap solution}). As before, ${\mathcal{C}}'$ is the set of all connected components of $G\setminus (K\cup E'')$. While further partitioning the clusters $C\in{\mathcal{C}}'$ to ensure independence, we may have to remove edges that connect the vertices of $C$ to the skeleton $K$. However, such edges do not strictly belong to the cluster $C$. We next perform a simple transformation of the graph $G\setminus (E'\cup E'')$ in order to take care of this technicality. Consider the graph $G\setminus (E'\cup E'')$. We perform the following transformation: let $e=(v,x)$ be any edge in $E(G)\setminus (E'\cup E'')$, with $x\in K$ and $v\not\in K$. We add an artificial vertex $z_e$ that subdivides $e$ into two edges: an artificial edge $(x,z_e)$, and a non-artificial edge $(v,z_e)$. We denote $x_{z_e}=x$. Similarly, if $e=(x,x')$ is any edge in $E(G)\setminus (E'\cup E'')$, with $x,x'\in K$, then we add two artificial vertices $z_e,z'_e$ that subdivide $e$ into three edges: artificial edges $(x,z_e)$ and $(x',z'_e)$, and a non-artificial edge $(z_e,z'_e)$.
We denote $x_{z_e}=x$, and $x_{z_{e}'}=x'$. If edge $e$ belonged to any grid $Z\in {\mathcal{Z}}$ (which can happen if $Z\in {\mathcal{Z}}''(G)$), then we consider all edges obtained from subdividing $e$ to also be part of $Z$. Let $\tilde{G}$ denote the resulting graph, $\Gamma$ the set of all these artificial vertices, and let $E_{\tilde{G}}(\Gamma,K)$ be the set of all artificial edges in $\tilde G$. Let $\tilde{\phi}$ be the drawing of $\tilde{G}$ induced by $\phi'$. Notice that we can assume w.l.o.g. that the edges of $E_{\tilde G}(\Gamma,K)$ do not participate in any crossings in $\tilde{\phi}$. We use this assumption throughout the current section. For any sub-graph $C$ of $\tilde G\setminus K$, we denote $\Gamma(C)=\Gamma\cap V(C)$, and we let $\operatorname{out}_K(C)$ be the subset of artificial edges adjacent to the vertices of $C$, that is, $\operatorname{out}_K(C)=E_{\tilde G}(\Gamma(C),K)$. We also denote $C^+=C\cup \operatorname{out}_K(C)$, and we denote by $\delta(C)$ the set of endpoints of the edges in $\operatorname{out}_K(C)$ that belong to $K$. Let ${\mathcal{C}}$ be the set of all connected components of $\tilde G\setminus K$. We next formally define the notion of independence of clusters. Eventually, we will find a further partition of each one of the clusters $C\in {\mathcal{C}}$, so that the resulting clusters are independent, and can be solved separately in the next iteration. Let $\phi_K'$ be the drawing of $K$ induced by $\phi'$. Recall that this is the unique planar drawing of $K$, and that it can be found efficiently. Let ${\mathcal{F}}$ be the set of faces of $\phi_K'$. For each face $F\in {\mathcal{F}}$, let $\gamma(F)$ denote the set of edges and vertices lying on its boundary. Since $K$ is rigid, $\gamma(F)$ is a simple cycle. Since all edges of $K$ are good for $\phi'$, for every component $C\in {\mathcal{C}}$, $C^+$ is embedded completely inside some face $F^*_C$ of ${\mathcal{F}}$ in the drawing $\tilde {\phi}$, and so $\delta(C)\subseteq \gamma(F^*_C)$ must hold. Therefore, there are three possibilities. The first is that there is a unique face $F_C\in {\mathcal{F}}$, such that $\delta(C)\subseteq \gamma(F_C)$; in this case we say that $C$ is of type 1, and $F_C=F^*_C$ must hold. The second is that there are two faces $F_1(C),F_2(C)$, whose boundaries both contain $\delta(C)$, so $F^*_C\in\set{F_1(C),F_2(C)}$; in this case we say that $C$ is of type 2. The third possibility is that $|\delta(C)|\leq 1$; in this case we say that $C$ is of type 3, and we can embed $C$ inside any face whose boundary contains the vertex of $\delta(C)$. The embedding of such clusters does not affect other clusters. For convenience, when $C$ is of type 1, we denote $F_1(C)=F_2(C)=F_C$, and if it is of type 3, then we denote $F_1(C)=F_2(C)=F$, where $F$ is any face of ${\mathcal{F}}$ whose boundary contains $\delta(C)$. We now formally define when two clusters $C,C'\in {\mathcal{C}}$ are independent. Let $C,C'\in {\mathcal{C}}$ be any two clusters, such that there is a face $F\in {\mathcal{F}}$ with $\delta(C),\delta(C')\subseteq \gamma(F)$. The set $\delta(C)$ of vertices defines a partition $\Sigma$ of $\gamma(F)$ into segments, where every segment $\sigma\in \Sigma$ contains two vertices of $\delta(C)$ as its endpoints, and does not contain any other vertices of $\delta(C)$. Similarly, the set $\delta(C')$ of vertices defines a partition $\Sigma'$ of $\gamma(F)$.
\begin{definition}
We say that the two clusters $C,C'$ are \emph{independent}, iff $\delta(C)$ is completely contained in some segment $\sigma'\in \Sigma'$.
Notice that in this case, $\delta(C')$ must also be completely contained in some segment $\sigma\in \Sigma$.
\end{definition}
Our goal in this step is to assign to each cluster $C\in {\mathcal{C}}$ a face $F(C)\in\set{F_1(C),F_2(C)}$, and to find a partition ${\mathcal{Q}}(C)$ of the vertices of the cluster $C$. Intuitively, each such cluster $Q\in{\mathcal{Q}}(C)$ will become an instance in the input to the next iteration, with $\gamma(F(C))$ as its bounding box. Suppose we are given such an assignment $F(C)$ of faces, and the partition ${\mathcal{Q}}(C)$ for each $C\in {\mathcal{C}}$. We will use the following notation. For each $C\in {\mathcal{C}}$, let $E^*(C)$ denote the set of edges cut by ${\mathcal{Q}}(C)$, that is, $E^*(C)=\bigcup_{Q\neq Q'\in {\mathcal{Q}}(C)}E_{\tilde{G}}(Q,Q')$, and let $E^*=\bigcup_{C\in {\mathcal{C}}}E^*(C)$. For each $Q\in {\mathcal{Q}}(C)$, we denote $X_Q=\gamma(F(C))$, the boundary of the face inside which $C$ is to be embedded. For each face $F\in {\mathcal{F}}$, we denote by ${\mathcal{Q}}(F)=\bigcup_{C:F(C)=F}{\mathcal{Q}}(C)$ the set of all clusters to be embedded inside $F$, and we denote ${\mathcal{Q}}=\bigcup_{C\in {\mathcal{C}}}{\mathcal{Q}}(C)$. Abusing notation, for each cluster $Q\in {\mathcal{Q}}$, we will refer to $Q$ both as the set of vertices, and as the sub-graph $\tilde G[Q]$ induced by it. As before, we denote $Q\cup \operatorname{out}_K(Q)$ by $Q^+$. The next theorem shows that it is enough to find an assignment of every cluster $C\in {\mathcal{C}}$ to a face $F(C)\in \set{F_1(C),F_2(C)}$, and a partition ${\mathcal{Q}}(C)$ of the vertices of $C$, such that the resulting clusters assigned to each face of ${\mathcal{F}}$ are pairwise independent.
\begin{theorem}\label{thm: no conflict case}
Suppose we are given, for each cluster $C\in {\mathcal{C}}$, a face $F(C)\in \set{F_1(C),F_2(C)}$, and a partition ${\mathcal{Q}}(C)$ of the vertices of $C$. Moreover, assume that for every face $F\in {\mathcal{F}}$, every pair $Q,Q'\in {\mathcal{Q}}(F)$ of clusters is independent, and for each $Z\in {\mathcal{Z}}$, $E^*\cap E(Z)=\emptyset$. Then:
\begin{itemize}
\item For each $Q\in {\mathcal{Q}}$, there is a strong solution to the problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, such that the total cost of these solutions, over all $Q\in {\mathcal{Q}}$, is bounded by $\operatorname{cr}_{\tilde \phi}(\tilde G)\leq \operatorname{cr}_{\phi'}(G)$.
\item For each $Q\in {\mathcal{Q}}$, let $E^{**}_Q$ be any feasible weak solution to the problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, and let $E^{**}=\bigcup_{Q\in {\mathcal{Q}}}E^{**}_Q$. Then $E'\cup E''\cup E^*\cup E^{**}$ is a feasible weak solution to problem $\pi(G,X,{\mathcal{Z}})$.
\end{itemize}
\end{theorem}
We remark that this theorem does not require that the sets $C\in {\mathcal{C}}$ are canonical vertex sets.
\begin{proof}
Fix some $Q\in {\mathcal{Q}}$, and let $\tilde{\phi}_{Q^+}$ be the drawing of $Q^+\cup X_Q$ induced by $\tilde{\phi}$. Recall that the edges of the skeleton $K$ do not participate in any crossings in $\tilde \phi$, and every pair $Q,Q'\in {\mathcal{Q}}$ of graphs is completely disjoint. Therefore, $\sum_{Q\in {\mathcal{Q}}}\operatorname{cr}_{\tilde \phi_{Q^+}}(Q^+)\leq \operatorname{cr}_{\tilde \phi}(\tilde G)$. Observe that every edge of $\tilde G$ belongs either to $K$, or to $E^*$, or to $Q^+$ for some $Q\in {\mathcal{Q}}$.
Therefore, it is now enough to show that for each $Q\in {\mathcal{Q}}$, $\tilde \phi_{Q^+}$ is a feasible strong solution to problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$. Since $\phi'$ is canonical, so is $\tilde \phi_{Q^+}$. It now only remains to show that $Q^+$ is completely embedded on one side (that is, inside or outside) of the cycle $X_Q$ in $\tilde \phi_{Q^+}$. Let $C\in {\mathcal{C}}$ be such that $Q\in {\mathcal{Q}}(C)$. Recall that $C$ is a connected component of $\tilde G\setminus K$. Since $K$ is good, $C$ is embedded completely inside one face in ${\mathcal{F}}$. In particular, since $X_Q$ is the boundary of one of the faces in ${\mathcal{F}}$, all vertices and edges of $C$ (and therefore of $Q$) are completely embedded on one side of $X_Q$. Therefore, $X_Q$ can be viewed as the bounding box in the embedding $\tilde \phi_{Q^+}$. We now prove the second part of the theorem. For each $Q\in {\mathcal{Q}}$, let $E^{**}_Q$ be any feasible weak solution to the problem $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, and let $E^{**}=\bigcup_{Q\in {\mathcal{Q}}}E^{**}_Q$. We first show that $E'\cup E''\cup E^*\cup E^{**}$ is a feasible weak solution to the problem $\pi(\tilde G,X,{\mathcal{Z}})$. Let $F\in {\mathcal{F}}$ be any face of $\phi'_K$. For each $Q\in {\mathcal{Q}}(F)$, let $\tilde Q=Q\setminus E^{**}_Q$, and let $\tilde Q^+=Q^+\setminus E^{**}_Q$. Since $E^{**}_Q$ is a weak solution for instance $\pi(Q^+\cup X_Q,X_Q,{\mathcal{Z}})$, there is a planar drawing $\psi_{Q}$ of $\tilde{Q}^+\cup X_Q$, inside the bounding box $X_Q=\gamma(F)$. It is enough to show that for each face $F\in {\mathcal{F}}$, we can find a planar embedding of the graphs $\tilde{Q}^+$, for all $Q\in{\mathcal{Q}}(F)$, inside $\gamma(F)$. Fix an arbitrary ordering ${\mathcal{Q}}(F)=\set{Q_1,\ldots,Q_r}$. We now gradually construct a planar drawing of the graphs $\tilde{Q}_j^+$ inside $\gamma(F)$. For convenience, we will also be adding new artificial edges to this drawing. We perform $r$ iterations, and the goal of iteration $j$, for $1\leq j\leq r$, is to add the graph $\tilde{Q}_j^+$ to the drawing. We will maintain the following invariant: at the beginning of every iteration $j$, for each $j'\geq j$, there is a face $F'$ in the current drawing, such that $\delta(Q_{j'})\subseteq \gamma(F')$. In the first iteration, we simply use the drawing $\psi_{Q_1}$ of $\tilde{Q}_1^+\cup \gamma(F)$. The vertices of $\delta(Q_1)$ define a partition $\Sigma_1$ of $\gamma(F)$ into segments, such that every segment contains two vertices of $\delta(Q_1)$ as its endpoints, and no other vertices of $\delta(Q_1)$. For each such segment $\sigma$, we add to the drawing a new artificial edge $e_{\sigma}$ connecting its endpoints. All such edges can be added without creating any crossings. Since every pair of clusters in ${\mathcal{Q}}(F)$ is independent, for each graph $Q_j$, $j> 1$, the vertices of $\delta(Q_j)$ are completely contained in one of the resulting segments $\sigma\in \Sigma_1$. The face $F'$ of the current drawing whose boundary consists of $\sigma$ and $e_{\sigma}$ then has the property that $\delta(Q_j)\subseteq \gamma(F')$. Consider now some iteration $j+1$, and let $F'$ be the face of the current drawing, such that $\delta(Q_{j+1})\subseteq \gamma(F')$. We add the drawing $\psi_{Q_{j+1}}$ of $\tilde Q_{j+1}^+\cup \gamma(F)$, with $\gamma(F')$ replacing $\gamma(F)$ as the bounding box. We can do so since $\delta(Q_{j+1})\subseteq \gamma(F')$.
We can therefore add this drawing, so that no crossings with edges that already belong to the drawing are introduced. The bounding box $\gamma(F')$ is then subdivided by the vertices of $\delta(Q_{j+1})$ into a set $\Sigma'$ of sub-segments. Again, for each such segment $\sigma'$, we add to the drawing, inside the face $F'$, an artificial edge $e_{\sigma'}$ connecting its endpoints, such that no crossings are introduced. Since there are no conflicts between clusters in ${\mathcal{Q}}(F)$, for each $Q_{j'}$ with $j'>j+1$, such that $\delta(Q_{j'})\subseteq \gamma(F')$, there is a segment $\sigma'\in \Sigma'$ containing all vertices of $\delta(Q_{j'})$. The corresponding new face $F''$, formed by $\sigma'$ and the edge $e_{\sigma'}$, will then have the property that $\delta(Q_{j'})\subseteq \gamma(F'')$. We have thus shown that $\tilde{G}\setminus (E^*\cup E^{**})$ has a planar drawing. The same drawing induces a planar drawing for $G\setminus (E'\cup E''\cup E^*\cup E^{**})$.
\end{proof}
In the rest of this section, we will show an efficient algorithm to find the assignment of the faces of ${\mathcal{F}}$ to the clusters $C\in {\mathcal{C}}$, and the partition ${\mathcal{Q}}(C)$ of each such cluster, satisfying the requirements of Theorem~\ref{thm: no conflict case}. Our goal is also to ensure that $|E^*|$ is small, as these edges are eventually removed from the graph. If two clusters $C,C'\in {\mathcal{C}}$ with $\delta(C),\delta(C')\subseteq \gamma(F)$, for some $F\in {\mathcal{F}}$, are not independent, then we say that they have a \emph{conflict}. The process of partitioning both clusters into sub-clusters to ensure that the sub-clusters are independent is called \emph{conflict resolution}. The next theorem shows how to perform conflict resolution for a pair of clusters. The proof of this theorem is due to Yury Makarychev~\cite{Yura}. We provide it here for completeness.
\begin{theorem}\label{thm: conflict resolution for 2 clusters}
Let $C,C'\in {\mathcal{C}}$ be such that both $C$ and $C'$ are embedded inside the same face $F\in {\mathcal{F}}$ in $\tilde{\phi}$. Then we can efficiently find a subset $E_{C,C'}\subseteq E(C)$ of edges, with $|E_{C,C'}|\leq 30 \operatorname{cr}_{\tilde\phi}(E(C),E(C'))$, such that if ${\mathcal{C}}'$ denotes the collection of all connected components of $C\setminus E_{C,C'}$, then for every cluster $Q\in {\mathcal{C}}'$, $Q$ and $C'$ are independent. Moreover, $E_{C,C'}$ does not contain any edges of the grids $Z\in {\mathcal{Z}}$.
\end{theorem}
\begin{proof}
We say that a set $\tilde{E}$ of edges is valid iff it satisfies the conditions of the theorem. For simplicity, we will assign weights $w_e$ to edges as follows: edges that belong to grids $Z\in {\mathcal{Z}}$ have infinite weight, and all other edges have weight $1$. We first claim that there is a valid set of weight at most $\operatorname{cr}_{\tilde{\phi}}(C, C')$. Indeed, let $\tilde{E}$ be the set of edges of $C$ that are crossed by the edges of $C'$ in $\tilde \phi$. Clearly, $|\tilde E| \leq \operatorname{cr}_{\tilde{\phi}}(C,C')$, and this set does not contain any edges in grids $Z\in {\mathcal{Z}}$, or edges adjacent to the vertices of $K$ (this was our assumption when we defined $\tilde\phi$). Let ${\mathcal{C}}'$ be the set of all connected components of $C\setminus \tilde E$, and consider some cluster $Q\in {\mathcal{C}}'$. Assume for contradiction that $Q$ and $C'$ are not independent.
Then there are four vertices $a,b,c,d\in \gamma(F)$, whose ordering along $\gamma(F)$ is $(a,b,c,d)$, such that $a,c\in \delta(Q)$, while $b,d\in \delta(C')$. But then there must be a path $P\subseteq Q\cup\operatorname{out}_K(Q)$ connecting $a$ to $c$, and a path $P'\subseteq C'\cup \operatorname{out}_K(C')$ connecting $b$ to $d$, as both $Q$ and $C'$ are connected graphs. Moreover, since $Q$ and $C'$ are completely disjoint, the two paths must cross in $\tilde{\phi}$. Recall that we have assumed that the artificial edges adjacent to $K$ do not participate in any crossings in $\tilde{\phi}$. Therefore, the crossing is between an edge of $Q$ and an edge of $C'$. This is impossible, since we have removed all edges that participate in such crossings from $C$. We now show how to \emph{efficiently} find a valid set $\tilde E$ of edges, of weight at most $30 \operatorname{cr}_{\tilde{\phi}}(E(C),E(C'))$. Let $\Sigma' = \{\sigma'_1,\sigma'_2,\dots, \sigma'_k\}$ be the set of segments of $\gamma(F)$ defined by $\delta(C')$, in circular order. Throughout the rest of the proof, we identify index $k+1$ with index $1$. Consider the set $\Gamma(C)$ of vertices. We partition this set into a number of subsets, as follows. For $1\leq i\leq k$, let $\Gamma_i\subseteq \Gamma(C)$ denote the subset of vertices $z\in \Gamma(C)$ for which $x_z$ lies strictly inside the segment $\sigma_i'$. Let $\Gamma_{i,i+1}\subseteq \Gamma(C)$ denote the subset of vertices $z\in \Gamma(C)$ for which $x_z$ is the vertex separating segments $\sigma'_i$ and $\sigma'_{i+1}$. We now restate the problem of finding a valid cut $E_{C,C'}$ as an assignment problem. We need to assign each vertex of $C$ to one of the segments $\sigma'_1, \dots, \sigma'_k$ so that
\begin{itemize}
\item every vertex in $\Gamma_i$ is assigned to the segment $\sigma'_i$;
\item every vertex in $\Gamma_{i,i+1}$ is assigned to either $\sigma'_i$ or $\sigma'_{i+1}$.
\end{itemize}
We say that an edge of $C$ is cut by such an assignment, iff its endpoints are assigned to different segments. Given any such assignment of finite weight, let $\tilde E$ be the set of cut edges. We prove that set $\tilde E$ is valid. Since the weight of $\tilde E$ is finite, it cannot contain edges of grids $Z\in {\mathcal{Z}}$. Let ${\mathcal{C}}'$ be the collection of all connected components of $C\setminus \tilde E$. It is easy to see that for each $Q\in {\mathcal{C}}'$, $Q$ and $C'$ are independent. This is because, for all edges in $\operatorname{out}_K(Q)$, their endpoints that belong to $K$ must all be contained inside a single segment $\sigma'$ of $\Sigma'$. On the other hand, every finite-weight valid set $\tilde E$ of edges corresponds to a valid assignment. Let ${\mathcal{C}}'$ be the set of all connected components of $C\setminus \tilde E$, and let $Q\in {\mathcal{C}}'$. Since there are no conflicts between $Q$ and $C'$, all vertices of $\delta(Q)$ that serve as endpoints of the set $\operatorname{out}_K(Q)$ of edges must be contained inside a single segment $\sigma'\in \Sigma'$. If this subset of $\delta(Q)$ contains a single vertex, there can be two such segments of $\Sigma'$, and we choose one of them arbitrarily; if this subset of $\delta(Q)$ is empty, then we choose an arbitrary segment of $\Sigma'$. We then assign all vertices of $Q$ to $\sigma'$.
Since $\tilde E$ does not contain any edges that are adjacent to the vertices of $K$ (as such edges are not part of $E(C)$), we are guaranteed that every vertex in $\Gamma_i$ is assigned to the segment $\sigma'_i$, and every vertex in $\Gamma_{i,i+1}$ is assigned to either $\sigma'_i$ or $\sigma'_{i+1}$, for all $1\leq i\leq k$. We now show how to approximately solve the assignment problem, and therefore the original problem, using linear programming. We will ensure that the weight of the solution $E_{C,C'}$ is at most $30$ times the optimum, and so $|E_{C,C'}|\leq 30 \operatorname{cr}_{\tilde \phi}(E(C),E(C'))$. For each vertex $u$ of $C$ and segment $\sigma'_i$ we introduce an indicator variable $y_{u,i}$, for assigning $u$ to segment $\sigma'_i$. All variables for vertex $u$ form a vector $y_u = (y_{u,1}, \dots, y_{u,k}) \in {\mathbb R}^k$. We denote the standard basis of ${\mathbb R}^k$ by $e_1,\dots, e_k$. In the intended integral solution, $y_u = e_i$ if $u$ is assigned to $\sigma'_i$; that is, $y_{u,i} = 1$ and $y_{u,j} = 0$ for $j\neq i$. Equip the space ${\mathbb R}^k$ with the $\ell_1$ norm $\|y_u\|_1 = \sum_{i=1}^k |y_{u,i}|$. We solve the following linear program. \begin{align*} \text{minimize } &&\frac{1}{2} \sum_{e=(u,v)\in E(C)} w_e\cdot \|y_u - y_v\|_1\\ \text{subject to }&& \\ && \|y_u\|_1 = 1 &&& \forall u \in V(C);\\ && y_{u,i} = 1 &&& \forall 1\leq i\leq k, \forall u \in \Gamma_i;\\ && y_{u,i} + y_{u,i+1} = 1 &&& \forall 1\leq i\leq k,\forall u \in \Gamma_{i,i+1};\\ && y_{u,i} \geq 0 &&& \forall u \in V(C), \forall 1\leq i\leq k. \end{align*} Let $\mathsf{OPT}_{LP}$ be the value of the optimal solution of the LP. For all $1\leq i\leq k$ and $r\in (1/2,3/5)$, define balls $B_i^r = \{u: y_{u,i} \geq r\}$ and $B_{i,i+1}^r = \{u: u\not\in B_i^r\cup B_{i+1}^r; y_{u,i}+y_{u,i+1} \geq 5r/3\}$. Note that, since for each $u\in V(C)$ at most one coordinate $y_{u,i}$ can be greater than $\ensuremath{\frac{1}{2}}$, the balls $B_i^r$ and $B_j^r$ are disjoint for all $i\neq j$, whenever $r\geq \ensuremath{\frac{1}{2}}$. Similarly, balls $B_{i,i+1}^r$ and $B_{j,j+1}^{r}$ are disjoint for $i\neq j$ when $r \geq 1/2$: this is since, if $u\in B_{i,i+1}^r$, then $y_{u,i}+y_{u,i+1}\geq 5/6$ must hold, while $y_{u,i},y_{u,i+1}<\ensuremath{\frac{1}{2}}$. Therefore, $y_{u,i},y_{u,i+1}>1/3$ must hold, and there can be at most two coordinates $1\leq j\leq k$, for which $y_{u,j}>1/3$. For each value of $r\in (1/2,3/5)$, we let $E^r$ denote the set of all edges that have exactly one endpoint in one of the balls $B_i^r$ and $B_{i,i+1}^r$, for $1\leq i\leq k$. We choose the value $r\in (1/2,3/5)$ that minimizes $|E^r|$, and we let $E_{C,C'}$ denote the set $E^r$ for this value of $r$. We assign all vertices in the balls $B_i^r$ and $B_{i,i+1}^r$ to the segment $\sigma'_i$. We assign all unassigned vertices to an arbitrary segment. We need to verify that this assignment is valid; that is, vertices from $\Gamma_i$ are assigned to $\sigma'_i$ and vertices from $\Gamma_{i,i+1}$ are assigned to either $\sigma'_i$ or $\sigma'_{i+1}$, for all $1\leq i\leq k$. Indeed, if $u\in\Gamma_i$, then $y_{u,i} = 1$, and so $u\in B^r_i$; similarly, if $u\in\Gamma_{i,i+1}$ then $y_{u,i} + y_{u,i+1} = 1$, and so $u\in B^r_i\cup B^r_{i+1}\cup B^r_{i,i+1}$. Finally, we need to show that the cost of the assignment is at most $30 \mathsf{OPT}_{LP}$. In fact, we show that if we choose $r\in (1/2,3/5)$ uniformly at random, then the expected cost is at most $30\mathsf{OPT}_{LP}$. Consider an edge $e=(u,v)$. 
We compute the probability that $e$ has exactly one endpoint in $B^r_i$, for each $1\leq i\leq k$. Assume w.l.o.g. that $y_{v,i}\leq y_{u,i}$; this is then the probability that $y_{u,i}\geq r$ but $y_{v,i}<r$. Since $r$ is chosen uniformly from an interval of length $1/10$, this probability is bounded by $10|y_{u,i}-y_{v,i}|$. Similarly, the probability that $u\in B^r_{i,i+1}$ but $v\not \in B^r_{i,i+1}$ is bounded by the probability that $y_{u,i}+y_{u,i+1}\geq 5r/3$, but $y_{v,i}+y_{v,i+1}< 5r/3$, or vice versa. This probability is at most $10\left|(y_{u,i}+y_{u,i+1})-(y_{v,i}+y_{v,i+1})\right|\leq 10(|y_{u,i}-y_{v,i}|+|y_{u,i+1}-y_{v,i+1}|)$. Therefore, overall, the probability that $e=(u,v)$ belongs to the cut is at most: \[\sum_{i=1}^k10 |y_{u,i}-y_{v,i}|+\sum_{i=1}^k 10(|y_{u,i}-y_{v,i}|+|y_{u,i+1}-y_{v,i+1}|)\leq 10 \norm{y_u-y_v}_1+20\norm{y_u-y_v}_1=30\norm{y_u-y_v}_1.\] \end{proof} We now show how to find the assignment $F(C)$ of faces of ${\mathcal{F}}$ to all clusters $C\in {\mathcal{C}}$, together with the partition ${\mathcal{Q}}(C)$ of the vertices of $C$. We will reduce this problem to an instance of the min-uncut problem. Recall that the input to the min-uncut problem is a collection $X$ of Boolean variables, together with a collection $\Psi$ of constraints. Each constraint $\psi\in \Psi$ has a non-negative weight $w_{\psi}$, and involves exactly two variables of $X$. All constraints $\psi\in \Psi$ are required to be of the form $x\neq y$, for $x,y\in X$. The goal is to find an assignment to all variables of $X$, minimizing the total weight of unsatisfied constraints. Agarwal et al.~\cite{ACMM} have shown an $O(\sqrt{\log n})$-approximation algorithm for the min-uncut problem. Fix any pair $C,C'\in {\mathcal{C}}$ of clusters, and a face $F\in {\mathcal{F}}$, such that $\delta(C),\delta(C')\subseteq \gamma(F)$. Let $E'_{C,C'}$ denote the union of the sets $E_{C,C'}$ and $E_{C',C}$ of edges from Theorem~\ref{thm: conflict resolution for 2 clusters}, and let $w_{C,C'}=|E'_{C,C'}|$, so that $w_{C,C'}\leq 60\operatorname{cr}_{\tilde{\phi}}(E(C),E(C'))$. For each face $F\in {\mathcal{F}}$, we denote by ${\mathcal{C}}(F)\subseteq {\mathcal{C}}$ the set of all clusters $C\in {\mathcal{C}}$, of the first type, for which $\delta(C)\subseteq \gamma(F)$. Recall that for each such cluster, $F^*_C=F$ must hold. Let $E'(F)=\bigcup_{C,C'\in {\mathcal{C}}(F)}E'_{C,C'}$, and let $w_F=|E'(F)|$. Let ${\mathcal{P}}$ be the set of all maximal $2$-paths in $K$. For every path $P\in {\mathcal{P}}$, we denote by ${\mathcal{C}}(P)\subseteq {\mathcal{C}}$ the set of all type-2 clusters $C$, for which $\delta(C)\subseteq P$. Let $F_1(P),F_2(P)$ be the two faces of $K$, whose boundaries contain $P$. Recall that for each $C\in {\mathcal{C}}(P)$, $F^*_C\in \set{F_1(P),F_2(P)}$. For every $C\in {\mathcal{C}}(P)$ and $F\in \set{F_1(P),F_2(P)}$, let $w_{C,F}=\sum_{C'\in {\mathcal{C}}(F)}w_{C,C'}$. If we decide to assign $C$ to face $F_1(P)$, then we will pay $w_{C,F_1(P)}$ for this assignment, and similarly, if $C$ is assigned to face $F_2(P)$, we will pay $w_{C,F_2(P)}$. We now set up an instance of the min-uncut problem, as follows. The set of variables $X$ contains, for each path $P\in {\mathcal{P}}$, for each $F\in \set{F_1(P),F_2(P)}$, a Boolean variable $y_{P,F}$, and for each path $P\in {\mathcal{P}}$ and cluster $C\in {\mathcal{C}}(P)$, a Boolean variable $y_C$. Intuitively, if $y_C=y_{P,F_1(P)}$, then $C$ is assigned to $F_1(P)$, and if $y_C=y_{P,F_2(P)}$, then $C$ is assigned to $F_2(P)$. 
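Although the analysis above is probabilistic, the rounding step itself is deterministic and simple to implement: the cut $E^r$ changes only at finitely many breakpoints of $r$, so one can scan midpoints between consecutive breakpoints and keep the best threshold. The following minimal Python sketch illustrates this; the data layout (\texttt{y} maps each vertex to its fractional vector $(y_{u,1},\dots,y_{u,k})$, 0-indexed, and \texttt{edges} is a list of triples $(u,v,w_e)$) and all function names are our own illustrative assumptions, not part of the formal algorithm.
\begin{verbatim}
# Sketch: threshold rounding for the conflict-resolution LP.
# y[u] is the fractional vector (y_{u,1},...,y_{u,k}); edges is a
# list of (u, v, w) triples; segment indices are 0-based, and
# segment i+1 wraps around to (i + 1) % k.

def ball(yu, k, r):
    """Index i if the vertex lies in B_i^r or B_{i,i+1}^r, else None."""
    for i in range(k):
        if yu[i] >= r:                       # u lies in B_i^r
            return i
    for i in range(k):                       # u is in no B_j^r here
        if yu[i] + yu[(i + 1) % k] >= 5 * r / 3:
            return i                         # u lies in B_{i,i+1}^r
    return None

def cut_weight(y, edges, k, r):
    # E^r: edges whose endpoints do not land in the same ball.
    assign = {u: ball(yu, k, r) for u, yu in y.items()}
    return sum(w for u, v, w in edges if assign[u] != assign[v])

def choose_threshold(y, edges, k):
    # |E^r| changes only at r = y_{u,i} or r = 3(y_{u,i}+y_{u,i+1})/5,
    # so testing midpoints between consecutive breakpoints suffices.
    pts = {0.5, 0.6}
    for yu in y.values():
        for i in range(k):
            pts.add(yu[i])
            pts.add(3 * (yu[i] + yu[(i + 1) % k]) / 5)
    pts = sorted(p for p in pts if 0.5 <= p <= 0.6)
    cands = [(a + b) / 2 for a, b in zip(pts, pts[1:]) if a < b] or [0.55]
    return min(cands, key=lambda r: cut_weight(y, edges, k, r))
\end{verbatim}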
The set $\Psi$ of constraints contains constraints of three types: first, for each path $P\in {\mathcal{P}}$, we have the constraint $y_{P,F_1(P)}\neq y_{P,F_2(P)}$ of infinite weight. Second, for each $P\in {\mathcal{P}}$, for each pair $C,C'\in {\mathcal{C}}(P)$ of clusters, there is a constraint $y_C\neq y_{C'}$, of weight $w_{C,C'}$. Finally, for each $P\in {\mathcal{P}}$, $F\in \set{F_1(P),F_2(P)}$, and for each $C\in {\mathcal{C}}(P)$, we have a constraint $y_C\neq y_{P,F}$ of weight $w_{C,F}$. \begin{claim} There is a solution to the min-uncut problem, whose cost, together with $\sum_{F\in {\mathcal{F}}}w_F$, is bounded by $60\operatorname{cr}_{\tilde{\phi}}(G)$. \end{claim} \begin{proof} We simply consider the optimal solution $\tilde{\phi}$. For each path $P\in {\mathcal{P}}$, we assign $y_{P,F_1(P)}=0$ and $y_{P,F_2(P)}=1$. For each cluster $C\in {\mathcal{C}}(P)$, if $F^*_C=F_1(P)$, then we set $y_C=y_{P,F_1(P)}$, and otherwise we set $y_C=y_{P,F_2(P)}$. From Theorem~\ref{thm: conflict resolution for 2 clusters}, for every pair $C,C'$ of clusters with $F^*_C=F^*_{C'}$, $w_{C,C'}\leq 60\operatorname{cr}_{\tilde{\phi}}(C,C')$. \end{proof} We can therefore find an $O(\sqrt{\log n})$-approximate solution to the resulting instance of the min-uncut problem, using the algorithm of~\cite{ACMM}. This solution naturally defines an assignment of faces to clusters. Namely, if $C$ is a type-1 cluster, then we let $F(C)=F$, where $F$ is the unique face with $\delta(C)\subseteq \gamma(F)$. If $C$ is a type-2 cluster, and $C\in {\mathcal{C}}(P)$, for some path $P\in {\mathcal{P}}$, then we assign $C$ to $F_1(P)$ if $y_C=y_{P,F_1(P)}$, and we assign it to $F_2(P)$ otherwise. If $C$ is a type-3 cluster, then we assign it to any face that contains the unique vertex in $\delta(C)$. For each face $F$, let ${\mathcal{C}}'(F)$ denote the set of all clusters $C$ that are assigned to $F$. Let $\tilde E(F)$ denote the union of the sets $E'_{C,C'}$ of edges for all $C,C'\in {\mathcal{C}}'(F)$, and let $\tilde E=\bigcup_{F\in{\mathcal{F}}}\tilde E(F)$. For each cluster $C\in {\mathcal{C}}$, we now obtain a partition ${\mathcal{Q}}'(C)$ of its vertices that corresponds to the connected components of the graph $C\setminus \tilde E$. For each $Q\in {\mathcal{Q}}'(C)$, we let $Q$ denote both the set of vertices in the connected component of $C\setminus \tilde{E}$, and the sub-graph of $\tilde{G}$ induced by $Q$. From Theorem~\ref{thm: conflict resolution for 2 clusters}, we are guaranteed that for every face $F\in {\mathcal{F}}$, for all $C,C'\in {\mathcal{C}}'(F)$, if $Q\in {\mathcal{Q}}'(C)$ and $Q'\in {\mathcal{Q}}'(C')$, then $Q$ and $Q'$ are independent. It is however possible that for some $C\in {\mathcal{C}}$, there is a pair $Q,Q'\in {\mathcal{Q}}'(C)$ of clusters, such that there is a conflict between $Q$ and $Q'$. In order to avoid this, we perform the following grouping procedure: for each $F\in{\mathcal{F}}$, for each $C\in {\mathcal{C}}'(F)$, while there is a pair $Q,Q'\in {\mathcal{Q}}'(C)$ of clusters that are not independent, remove $Q,Q'$ from ${\mathcal{Q}}'(C)$, and replace them with $Q\cup Q'$. For each $C\in {\mathcal{C}}$, let ${\mathcal{Q}}(C)$ be the resulting partition of the vertices of $C$. Clearly, each pair $Q,Q'\in {\mathcal{Q}}(C)$ is independent. 
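The grouping procedure is easy to implement once independence is phrased combinatorially: two clusters are independent iff their boundary vertices do not interleave along $\gamma(F)$. The following minimal Python sketch uses our own representation (each cluster is given by the set of cyclic positions of its boundary vertices on $\gamma(F)$); it is an illustration of the merge loop, not the paper's formal procedure.
\begin{verbatim}
import bisect

# Sketch of the grouping procedure.  Each cluster Q is represented
# by the set of cyclic positions of delta(Q) on gamma(F).

def independent(dQ1, dQ2):
    # Q1 and Q2 are independent iff delta(Q1) fits inside a single
    # arc between consecutive positions of delta(Q2) (i.e., the two
    # boundary sets do not interleave around gamma(F)).
    anchors = sorted(dQ2)
    if len(anchors) <= 1 or not dQ1:
        return True
    # Positions before the first anchor and after the last one lie
    # on the same wrap-around arc, hence the modulo.
    arcs = {bisect.bisect_left(anchors, p) % len(anchors)
            for p in dQ1 if p not in dQ2}
    return len(arcs) <= 1

def group(partition):
    """Merge clusters of Q'(C) until they are pairwise independent."""
    clusters = [set(c) for c in partition]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if not independent(clusters[i], clusters[j]):
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
\end{verbatim}
A naive quadratic merge loop suffices here, since each merge strictly decreases the number of clusters.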
\begin{claim} For each $F\in {\mathcal{F}}$, for each pair $C,C'\in {\mathcal{C}}'(F)$ of clusters, and for each $Q\in {\mathcal{Q}}(C), Q'\in {\mathcal{Q}}(C')$, clusters $Q$ and $Q'$ are independent.\end{claim} \begin{proof} Consider the partitions ${\mathcal{Q}}'(C)$, ${\mathcal{Q}}'(C')$, as they change throughout the grouping procedure. Before the start of the grouping procedure, every pair $Q\in {\mathcal{Q}}'(C)$, $Q'\in {\mathcal{Q}}'(C')$ was independent. Consider the first step of the grouping procedure, such that this property held for ${\mathcal{Q}}'(C),{\mathcal{Q}}'(C')$ before this step, but no longer holds after it. Assume w.l.o.g. that the grouping step was performed on a pair $Q_1,Q_2\in {\mathcal{Q}}'(C)$. Since no other clusters in ${\mathcal{Q}}'(C)$ or ${\mathcal{Q}}'(C')$ were changed, there must be a cluster $Q'\in {\mathcal{Q}}'(C')$, such that both pairs $Q_1,Q'$ and $Q_2,Q'$ are independent, but $Q_1\cup Q_2$ and $Q'$ are not independent. We now show that this is impossible. Let $\Sigma$ be the partition of $\gamma(F)$ defined by the vertices of $\delta(Q')$. Since $Q_1$ and $Q'$ are independent, there is a segment $\sigma\in \Sigma$, such that $\delta(Q_1)\subseteq \sigma$. Similarly, since $Q_2$ and $Q'$ are independent, there is a segment $\sigma'\in \Sigma$, such that $\delta(Q_2)\subseteq \sigma'$. However, since $Q_1$ and $Q_2$ are not independent, $\sigma=\sigma'$ must hold. But then all vertices of $\delta(Q_1\cup Q_2)$ are contained in the segment $\sigma\in \Sigma$, contradicting the fact that $(Q_1\cup Q_2)$ and $Q'$ are not independent. \end{proof} To summarize, we have shown how to find an assignment $F(C)\in \set{F_1(C),F_2(C)}$ for every cluster $C\in {\mathcal{C}}$, and a partition ${\mathcal{Q}}(C)$ of the vertices of every cluster $C$, such that for every face $F\in {\mathcal{F}}$, every pair $Q,Q'\in {\mathcal{Q}}(F)$ of clusters is independent. Moreover, if $E^*$ denotes the set of edges $\bigcup_{Q,Q'\in {\mathcal{Q}}}E_{\tilde{G}}(Q,Q')$, then we have ensured that $|E^*|\leq O(\sqrt{\log n})\operatorname{cr}_{\tilde {\phi}}(\tilde{G})=O(\sqrt{\log n})\operatorname{cr}_{\phi'}(G)$, and the set $E^*$ does not contain edges of the grids $Z\in {\mathcal{Z}}$, or artificial edges. Therefore, the conditions of Theorem~\ref{thm: no conflict case} hold. We now define the set $\edges h(G)$, as follows: $\edges h(G)=E'\cup E''\cup E^*$. Recall that $|E'|\leq O(\mathsf{OPT}^2\cdot\rho\cdot d_{\mbox{\textup{\footnotesize{max}}}})$, $|E''|\leq O\left(\frac{\mathsf{OPT}^2\cdot\rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )$, and $|E^*|\leq O(\sqrt{\log n})\operatorname{cr}_{\phi'}(G)$. Therefore, \[|\edges h(G)|\leq O\left(\frac{\mathsf{OPT}^2\cdot\rho\cdot\log^2n\cdot d_{\mbox{\textup{\footnotesize{max}}}}^2\cdot\ensuremath{\beta_{\mbox{\textup{\footnotesize{FCG}}}}} }{\alpha^*}\right )+O(\sqrt{\log n})\operatorname{cr}_{\phi'}(G)\leq m^*.\] We also set $\edges h=\bigcup_{G\in\set{G_1^X,\ldots,G_{k_h}^X}}\edges h(G)$, so $|\edges h|\leq m^*\cdot \mathsf{OPT}$, as required. We now define a collection ${\mathcal{G}}_{h+1}$ of instances. Recall that for all $1\leq i\leq k_h$, this collection already contains the instance $\pi(G_i',\emptyset,{\mathcal{Z}})$. Let $G=G_i^X$, and $Q\in {\mathcal{Q}}$. Let $Q'$ denote the subset of vertices of $Q$ without the artificial vertices, and let $H_Q$ be the sub-graph of $\H$ induced by $Q'\cup X_Q$. 
We then add the instance $\pi_G(H_Q,X_Q,{\mathcal{Z}})$ to ${\mathcal{G}}_{h+1}(G)$. This finishes the definition of the collection ${\mathcal{G}}_{h+1}$. From Theorem~\ref{thm: no conflict case}, for each $1\leq i\leq k_h$, there is a strong solution to each resulting sub-instance of $G_i^X$, such that the total cost of these solutions is at most $\operatorname{cr}_{\phi_i}(G_i^X)$. Clearly, $\phi_i$ also induces a strong solution to instance $\pi(G_i',\emptyset,{\mathcal{Z}})$ of cost $\operatorname{cr}_{\phi_i}(G_i')$. Therefore, there is a strong solution for each instance in ${\mathcal{G}}_{h+1}$, of total cost at most $\sum_{i=1}^{k_h}\operatorname{cr}_{\phi_i}(G_i)\leq \mathsf{OPT}$, and so the number of instances in ${\mathcal{G}}_{h+1}$ for which $\emptyset$ is not a feasible weak solution is bounded by $\mathsf{OPT}$. We let ${\mathcal{G}}'_{h+1}\subseteq {\mathcal{G}}_{h+1}$ denote the set of all instances for which $\emptyset$ is not a feasible solution. Observe that we can efficiently verify whether $\emptyset$ is a feasible solution for a given instance, so we can compute ${\mathcal{G}}'_{h+1}$ efficiently. We now claim that ${\mathcal{G}}'_{h+1}$ is a valid input to the next iteration, except that it may not satisfy Invariant~(\ref{invariant 2: canonical}) due to the grid sets ${\mathcal{Z}}''(G)$ -- we deal with this issue at the end of this section. We have already established Invariant~(\ref{invariant 3: there is a cheap solution}) in the above discussion. Also, from Theorem~\ref{thm: no conflict case}, if we find a weak feasible solution $\tilde{E}_H$ for each instance $H\in {\mathcal{G}}_{h+1}$, then the union of these solutions, together with $\edges h$, gives a weak feasible solution to all instances $\pi(G_i,X_i,{\mathcal{Z}}_i)$ for $1\leq i\leq k_h$, thus giving Invariant~(\ref{invariant 4: any weak solution is enough}). In order to establish Invariant~(\ref{invariant 4.5: number of edges removed so far}), observe that the number of edges in $\edges h$ incident on any new instance is bounded by the maximum number of edges in $\edges h$ that belong to any original instance $G_1,\ldots,G_{k_h}$, which is bounded by $m^*$, and the total number of edges in $\edges h$ is bounded by $m^*\cdot \mathsf{OPT}$. Invariant~(\ref{invariant 5: bound on size}) follows from the fact that for each $1\leq i\leq k_h$, $|V(G_i')|\leq n_h$, and these graphs have empty bounding boxes. All sub-instances of $G_i^X$ were constructed by further partitioning the clusters of $G_i^X\setminus (K_i\cup E'')$, and each such cluster contains at most $n_{h+1}$ vertices. Invariant~(\ref{invariant 1: disjointness}) is immediate, as is Invariant~(\ref{invariant 2: proper subgraph}) (recall that we have ensured that all edges in $E',E'',E^*$ connect vertices in distinct sub-instances). Finally, if we assume that ${\mathcal{Z}}''(G)=\emptyset$ for all $G\in \set{G_1,\ldots,G_{k_h}}$, then the resulting sub-instances are canonical, as we have ensured that the edges in the sets $E',E'',E^*$ do not belong to the grids $Z\in {\mathcal{Z}}$, thus giving Invariant~(\ref{invariant 2: canonical}). Therefore, we have shown how to produce a valid input to the next iteration, for the case where ${\mathcal{Z}}''(G)=\emptyset$ for all $G\in \set{G_1,\ldots,G_{k_h}}$. It now only remains to show how to deal with the grids in the sets ${\mathcal{Z}}''(G)$. \paragraph{Dealing with grids in sets ${\mathcal{Z}}''(G)$} Let $G\in \set{G_1^X,\ldots,G_{k_h}^X}$, and let $Z\in {\mathcal{Z}}''(G)$. 
Recall that this means that $Z\cap K$ is a simple path, that we denote by $P_Z$, and in particular, $K$ contains exactly two edges of $\operatorname{out}(Z)$, that we denote by $e_Z$ and $e'_Z$. The difficulty in dealing with such grids is that, on the one hand, we need to ensure that all new sub-instances are canonical, so we would like to add such grids to the skeleton $K$. On the other hand, since $Z\cap K$ is a simple path, the graph $K\cup Z$ is not rigid, and has two different planar drawings (obtained by ``flipping'' $Z$ around the axis $P_Z$), so we cannot claim that we can efficiently find the optimal drawing $\phi'_{K\cup Z}$ of $K\cup Z$. Our idea in dealing with this problem is to use the conflict resolution procedure to establish which face of the skeleton $K$ each such grid $Z\in{\mathcal{Z}}''(G)$ must be embedded in. Once this face is established, we can simply add $Z$ to $K$. Even though the resulting skeleton is not rigid, its drawing is now fixed. More specifically, let $Z\in {\mathcal{Z}}''(G)$ be any such grid, and let $v,v'$ be the two vertices in the first row of $Z$ adjacent to the edges $e_Z$ and $e'_Z$, respectively. We start by replacing the path $P_Z$ in the skeleton $K$ with the unique path connecting $v$ and $v'$ that only uses the edges of the first row of $Z$. Let $P'_Z$ denote this path. We perform this transformation for each $Z\in {\mathcal{Z}}''(G)$. The resulting skeleton $K$ is still rigid and good. It is now possible that the size of some connected component of $G\setminus (K\cup E'')$ becomes larger. However, since we eventually add all vertices of all such grids $Z\in {\mathcal{Z}}''(G)$ to the skeleton, this will not affect the final outcome of the current iteration. We then run the conflict resolution procedure exactly as before, and obtain the collection ${\mathcal{G}}_{h+1}$ of new instances as before. Consider some such instance $\pi(H_Q,X_Q,{\mathcal{Z}})$, and assume that $H_Q$ is a sub-graph of $G\in\set{G_1^X,\ldots,G^X_{k_h}}$. Let $Q=H_Q\setminus X_Q$. From the above discussion, $Q$ is canonical w.r.t. ${\mathcal{Z}}\setminus {\mathcal{Z}}''(G)$. The only problem is that for some grids $Z\in {\mathcal{Z}}''(G)$, $Q$ may contain the vertices of $Z\setminus P'_Z$. This can only happen if $P'_Z$ belongs to the bounding box $X_Q$. Recall that we are guaranteed that there is a strong solution to instance $\pi(H_Q,X_Q,{\mathcal{Z}})$, and the total cost of all such solutions, over all instances in ${\mathcal{G}}_{h+1}$, is at most $\mathsf{OPT}$. In particular, the edges of $Z$ do not participate in crossings in this solution. Therefore, we can simply consider the grid $Z$ to be part of the skeleton, remove its vertices from $Q$, and update the bounding box of the resulting instance if needed. In other words, the conflict resolution procedure, by assigning every cluster $C\in {\mathcal{C}}$ to a face of ${\mathcal{F}}$, has implicitly defined a drawing of the graph $K\cup(\bigcup_{Z\in{\mathcal{Z}}''(G)}Z)$. Even though this drawing may be different from the drawing induced by $\phi'$, we are still guaranteed that the resulting sub-problems all have strong feasible solutions of total cost bounded by $\mathsf{OPT}$. The final instances in ${\mathcal{G}}'_{h+1}$ are now guaranteed to satisfy all Invariants~(\ref{invariant 1: disjointness})--(\ref{invariant 5: bound on size}). 
\section{Conclusions} \label{sec: conclusions} We have shown an efficient randomized algorithm to find a drawing of any graph ${\mathbf{G}}$ in the plane with at most $O\left ((\optcro{{\mathbf{G}}})^{10}\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)\right )$ crossings. We did not make an effort to optimize the powers of $\mathsf{OPT}$, $d_{\mbox{\textup{\footnotesize{max}}}}$, and $\log n$ in this guarantee, or the constant hidden in the $O(\cdot)$ notation, and we believe that they can be improved. We hope that the technical tools developed in this paper will help obtain better algorithms for the {\sf Minimum Crossing Number}\xspace problem. A specific possible direction is obtaining efficient algorithms for $\rho$-balanced $\alpha$-well-linked bi-partitions. In particular, an interesting open question is whether there is an efficient algorithm that, given an $n$-vertex graph $G$ with maximum degree $d_{\mbox{\textup{\footnotesize{max}}}}$, finds a $\rho$-balanced $\alpha$-well-linked bi-partition of $G$, for $\rho,\alpha=\operatorname{poly}(d_{\mbox{\textup{\footnotesize{max}}}}\cdot \log n)$. In fact, it is not even clear whether such a bi-partition exists in every graph. We note that the dependence of $\rho$ on $d_{\mbox{\textup{\footnotesize{max}}}}$ is necessary, as the example of the star graph shows. This question appears to be interesting in its own right, and its positive resolution would greatly simplify our algorithm and improve its performance guarantee. We also note that if we only require that one of the two sets in the bi-partition is well-linked, then there is an efficient algorithm for finding such bi-partitions, similar to the proof of Theorem~\ref{thm: initial partition}. \paragraph{Acknowledgements.} The author thanks Yury Makarychev and Anastasios Sidiropoulos for many fruitful discussions, and for reading earlier drafts of the paper.
\section{Introduction} \label{sec:intro} A cancellative and commutative (additive) monoid is called \textit{atomic} if every non-invertible element is the sum of \textit{atoms} (i.e., \textit{irreducibles}). Much recent literature has focused on the arithmetic of such monoids; the monograph \cite{GH06} contains an extensive bibliography of such work. Many of these recent works have centered on important classes of monoids such as Krull monoids, the multiplicative monoids of integral domains, numerical monoids, and congruence monoids. In their landmark study of the multiplicative monoid of an integral domain \cite{AAZ90}, Anderson, Anderson, and Zafrullah introduced the properties of bounded and finite factorizations. These ideas can easily be extended to all commutative cancellative monoids, and we include below a diagram~\eqref{diag:AAZ's chain for monoids} containing their factorization properties modified for the general case. \begin{equation} \label{diag:AAZ's chain for monoids} \begin{tikzcd} \textbf{ UFM } \ \arrow[r, Rightarrow] \arrow[d, Rightarrow] & \ \textbf{ HFM } \arrow[d, Rightarrow] \\ \textbf{ FFM } \ \arrow[r, Rightarrow] & \ \textbf{ BFM } \arrow[r, Rightarrow] & \textbf{ ACCP monoid} \arrow[r, Rightarrow] & \textbf{ atomic monoid} \end{tikzcd} \end{equation} While it is well known in the general case that each implication in~\eqref{diag:AAZ's chain for monoids} holds, it is also known that none of the implications is reversible (even in the class of integral domains; see~\cite{AAZ90}). \smallskip The fundamental purpose of this work is to survey the recent results regarding the atomicity of additive submonoids of the nonnegative cone of the real line. Such a survey is important, as in many cases these monoids offer simpler examples of intricate factorization behavior than those currently available in the commutative algebra literature. We begin with the following definition. \begin{definition} An additive submonoid of $(\mathbb{R}_{\ge 0}, +)$ is called a \emph{positive monoid}. \end{definition} Submonoids of $(\mathbb{N}_0,+)$ are clearly positive monoids, and they are called \emph{numerical monoids}. An introduction to numerical monoids is offered in~\cite{GR09}. In addition, submonoids of $(\mathbb{Q}_{\ge 0},+)$ are called \emph{Puiseux monoids}. Every numerical monoid is clearly a Puiseux monoid, and one can readily verify that a Puiseux monoid is finitely generated if and only if it is isomorphic to a numerical monoid. Puiseux monoids are perhaps the positive monoids that have been most systematically investigated in the last five years (see \cite{GGT21} and references therein). A survey on the atomicity of Puiseux monoids can be found in the recent Monthly article~\cite{CGG21}. Positive monoids that are increasingly generated have been studied in~\cite{mBA20,BG21,fG19,GG18}. On the other hand, positive semirings (i.e., positive monoids closed under multiplication) have been studied in \cite{BCG21,BG20}, while the special case of positive monoids generated by a geometric sequence (necessarily positive semirings) has been studied in~\cite{CGG20,CG20}. Finally, Furstenberg positive monoids (i.e., those each of whose nonzero elements is divisible by an atom) have been recently investigated in~\cite{fG21}. We break our remaining work into five sections. In Section 2, we lay out the basic necessary definitions and notation. 
Section 3 explores the factorization properties of cyclic semirings (i.e., additive monoids generated by the powers $\{\alpha^n\}_{n\in \mathbb{N}_0}$ of a positive real number $\alpha$). In Theorem 3.3, we characterize when these monoids are atomic, and in those cases we determine completely, in Proposition 3.6, which elements are atoms. In Section 4, we explore constructing atomic positive monoids that do not satisfy the ACCP. These examples are vital, as such examples in the realm of integral domains are extremely difficult to construct. While Proposition 4.4 shows how to do this in arbitrary rank, Proposition 4.6 constructs a positive monoid that satisfies the ACCP but is not a BFM. Section 5 explores bounded factorizations, and Proposition 5.5 constructs positive monoids that are BFMs. As a byproduct of these results, we offer several examples of BFMs that are not FFMs. We conclude in Section 6 by exploring in detail the finite factorization property; we prove in Theorem 6.1 that every positive monoid generated by an increasing sequence is an FFM. In some sense, our entire paper is motivated by diagram \eqref{diag:AAZ's chain for monoids}. We offer counterexamples using positive monoids to all the reverse implications of \eqref{diag:AAZ's chain for monoids} (see Remarks 3.1, 4.5, 4.7, 5.4, 6.5, and 6.7). \bigskip \section{Preliminaries} \label{sec:prelim} We let $\mathbb{P}$, $\mathbb{N}$, and $\mathbb{N}_0$ denote the set of primes, positive integers, and nonnegative integers, respectively. If $X$ is a subset of $\mathbb{R}$ and $r$ is a real number, we let $X_{\ge r}$ denote the set $\{s \in X : s \ge r\}$. In a similar fashion, we use the notations $X_{> r}, X_{\le r}$, and $X_{< r}$. For a positive rational $q$, the positive integers $a$ and $b$ with $q = a/b$ and $\gcd(a,b) = 1$ are denoted by $\mathsf{n}(q)$ and $\mathsf{d}(q)$, respectively. \smallskip The following definition of a monoid, albeit not the most standard\footnote{A monoid is most commonly defined as a semigroup with an identity element.}, will be the most appropriate in the context of this paper. \begin{definition} A \emph{monoid} is a semigroup with identity that is cancellative and commutative. \end{definition} Monoids will be written additively, unless we say otherwise. In addition, we shall tacitly assume that every monoid here is reduced, that is, its only invertible element is zero. Let $M$ be a monoid. We set $M^\bullet = M \setminus \{0\}$. For a subset $S$ of $M$, we let $\langle S \rangle$ denote the submonoid of $M$ generated by $S$, i.e., the intersection of all submonoids of $M$ containing $S$. We say that a monoid is \emph{finitely generated} if it can be generated by a finite set. \smallskip A nonzero element $a \in M$ is called an \emph{atom} if whenever $a = b_1 + b_2$ for some $b_1, b_2 \in M$ either $b_1 = 0$ or $b_2 = 0$. As is customary, we let $\mathcal{A}(M)$ denote the set consisting of all atoms of $M$. If $\mathcal{A}(M)$ is empty, $M$ is said to be \emph{antimatter}. \begin{definition} A monoid is \emph{atomic} if every nonzero element of the monoid is the sum of atoms. \end{definition} If $I$ is a subset of $M$, then $I$ is called an \emph{ideal} provided that $I + M = I$. Ideals of the form $b + M$, where $b \in M$, are called \emph{principal}. The monoid $M$ satisfies the \emph{ascending chain condition on principal ideals} (\emph{ACCP} for short) if every ascending chain of principal ideals of $M$ becomes stationary from some point onward. 
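To make these definitions concrete in the simplest setting, the following brute-force Python sketch (ours, purely illustrative) computes the atoms of a finitely generated submonoid of $(\mathbb{N}_0,+)$ directly from the definition of an atom.
\begin{verbatim}
# Sketch: atoms of the submonoid <gens> of (N_0, +).  An element
# a != 0 is an atom iff it cannot be written as b_1 + b_2 with
# b_1, b_2 nonzero elements of the monoid.

def members_up_to(gens, bound):
    """All elements of <gens> that are at most bound."""
    reachable, frontier = {0}, {0}
    while frontier:
        frontier = {x + g for x in frontier for g in gens
                    if x + g <= bound} - reachable
        reachable |= frontier
    return reachable

def atoms_up_to(gens, bound):
    M = members_up_to(gens, bound)
    nonzero = sorted(M - {0})
    return [a for a in nonzero
            if not any(a - b in M for b in nonzero if 0 < b < a)]

# The numerical monoid <6, 9, 20>: its atoms are exactly 6, 9, 20.
print(atoms_up_to([6, 9, 20], 60))   # -> [6, 9, 20]
\end{verbatim}
Every nonzero element of this monoid is a sum of the atoms listed, so the monoid is atomic.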
It is not hard to prove that every monoid satisfying the ACCP is atomic (see \cite[Proposition~1.1.4]{GH06}). \smallskip A monoid $F$ is called a \emph{free commutative monoid} with basis~$A$ if every element $b \in F$ can be written uniquely as the sum of elements in $A$. It is well known that for every set $A$ there exists, up to isomorphism, a unique free commutative monoid on $A$, which we denote by $F(A)$. It is also well known that every map $A \to M$, where $M$ is a monoid, uniquely extends to a monoid homomorphism $F(A) \to M$. \smallskip The \emph{Grothendieck group} of a monoid $M$, here denoted by $\textsf{gp}(M)$, is the abelian group (unique up to isomorphism) satisfying the property that any abelian group containing a homomorphic image of $M$ will also contain a homomorphic image of $\textsf{gp}(M)$. The \emph{rank} of the monoid $M$ is then defined to be the rank of $\textsf{gp}(M)$ as a $\mathbb{Z}$-module or, equivalently, the dimension of the $\mathbb{Q}$-vector space $\mathbb{Q} \otimes_\mathbb{Z} \textsf{gp}(M)$. \smallskip For an atomic monoid $M$, we let $\mathsf{Z}(M)$ denote the free commutative monoid on the set $\mathcal{A}(M)$. The elements of $\mathsf{Z}(M)$ are called \emph{factorizations}. Then we can think of factorizations in $\mathsf{Z}(M)$ as formal sums of atoms. The unique monoid homomorphism $\pi \colon \mathsf{Z}(M) \to M$ such that $\pi(a) = a$ for all $a \in \mathcal{A}(M)$ is called the \emph{factorization homomorphism}. For every element $b \in M$, \[ \mathsf{Z}(b) := \pi^{-1}(b) \subseteq \mathsf{Z}(M) \] is called the \emph{set of factorizations} of $b$. If for every $b \in M$ the set $\mathsf{Z}(b)$ is finite, then $M$ is called a \emph{finite factorization monoid} (\emph{FFM}). Also, if $\mathsf{Z}(b)$ is a singleton for every $b \in M$, then $M$ is called a \emph{unique factorization monoid} (\emph{UFM}). Note that every UFM is an FFM. It follows from \cite[Proposition~2.7.8]{GH06} that every finitely generated monoid is an FFM. \smallskip Let $z \in \mathsf{Z}(M)$ be a factorization in $M$. If we let $|z|$ denote the number of atoms (counting repetitions) in the formal sum defining $z$ in $\mathsf{Z}(M)$, then $|z|$ is called the \emph{length} of $z$. For each element $b \in M$, \[ \mathsf{L}(b) := \{|z| : z \in \mathsf{Z}(b)\} \] is called the \emph{set of lengths} of $b$. If the set $\mathsf{L}(b)$ is finite for each $b \in M$, then $M$ is called a \emph{bounded factorization monoid} (\emph{BFM}). It is clear that FFMs are BFMs. The finite and bounded factorization properties were introduced by Anderson, Anderson, and Zafrullah in \cite{AAZ90} in the context of integral domains. Bounded and finite factorization monoids were first studied by Halter-Koch in~\cite{fHK92}. A recent survey on the finite and bounded factorization properties can be found in~\cite{AG20}. The monoid $M$ is called a \emph{half-factorial monoid} (\emph{HFM}) provided that for every $b\in M$ the set $\mathsf{L}(b)$ is a singleton. It follows directly from the definition that every HFM is a BFM. The study of half-factoriality, mainly in the context of algebraic number theory, dates back to the 1960s (see \cite{lC60}). The term ``half-factorial'' was coined by Zaks in \cite{aZ76}. A survey on half-factoriality can be found in~\cite{CC00}. \bigskip \section{A Class of Atomic Positive Monoids} \label{sec:atomicity} Recall that a positive monoid consisting of rational numbers is called a \emph{Puiseux monoid}. 
The class of Puiseux monoids will be a convenient source of examples throughout our exposition. None of the implications of Diagram~\eqref{diag:AAZ's chain for monoids} is reversible in the class of positive monoids. Moreover, as is illustrated in~\cite{CGG21}, none of the implications (except \textbf{UFM} $\Rightarrow$ \textbf{HFM}) is reversible in the subclass of Puiseux monoids. An example of a half-factorial positive monoid that is not a UFM is given in Example~\ref{ex:HFM not UFM}. \begin{remark} In this section, we primarily focus on atomic monoids. However, there are many positive monoids that are not atomic. Indeed, the Puiseux monoid $\langle 1/p^n : n \in \mathbb{N} \rangle$ is antimatter for every $p \in \mathbb{P}$. \end{remark} Perhaps the class of non-finitely generated positive monoids that has been most thoroughly studied is that consisting of cyclic semirings~\cite{CGG20}. \begin{definition} For $\alpha \in \mathbb{R}_{> 0}$, we let $\mathbb{N}_0[\alpha]$ denote the positive monoid $\langle \alpha^n : n \in \mathbb{N}_0 \rangle$. \end{definition} Observe that $\mathbb{N}_0[\alpha]$ is closed under multiplication and, therefore, $(\mathbb{N}_0[\alpha]^\bullet, \cdot)$ is also a monoid (not necessarily reduced). Positive monoids closed under multiplication are called \emph{positive semirings} and have been recently studied in \cite{BCG21} by Baeth, Gotti, and the first author. We will only be concerned here with the additive structure of the semiring $\mathbb{N}_0[\alpha]$. For $q \in \mathbb{Q}_{> 0}$, the atomicity of $\mathbb{N}_0[q]$ was first considered in \cite[Section~6]{GG18}; later, in~\cite{CGG20}, several factorization invariants of $\mathbb{N}_0[q]$ were compared and contrasted with those of numerical monoids generated by arithmetic sequences. \smallskip In the next theorem, we characterize when $\mathbb{N}_0[\alpha]$ is atomic. In addition, we give two sufficient conditions for atomicity. First, we recall Descartes' Rule of Signs. Given a polynomial $f(x) = c_n x^n + \cdots + c_1x + c_0 \in \mathbb{R}[x]$, the \emph{number of variations of the sign} of $f(x)$ is the cardinality of the set $\{j \in \ldb 1, n \rdb : c_j c_{j-1} < 0\}$. Descartes' Rule of Signs states that the number of variations of the sign of a polynomial $f(x) \in \mathbb{R}[x]$ is at least, and has the same parity as, the number of positive roots of $f(x)$, provided that we count each root with multiplicity. \begin{theorem}\cite[Theorem~4.1]{CG20} \label{thm:atomic characterization} For every $\alpha \in \mathbb{R}_{>0}$, the following conditions are equivalent. \begin{enumerate} \item[(a)] The monoid $\mathbb{N}_0[\alpha]$ is atomic. \smallskip \item[(b)] The monoid $\mathbb{N}_0[\alpha]$ is not antimatter. \smallskip \item[(c)] The element $1$ is an atom of $\mathbb{N}_0[\alpha]$. \end{enumerate} In addition, if $\alpha$ is an algebraic number and $m(x)$ is the minimal polynomial of $\alpha$, then the following statements hold. \begin{enumerate} \item If $\alpha$ is not rational and $|m(0)| \neq 1$, then $\mathbb{N}_0[\alpha]$ is atomic. \smallskip \item If $m(x)$ has at least two positive roots, counting repetitions, then $\mathbb{N}_0[\alpha]$ is atomic. \end{enumerate} \end{theorem} \begin{proof} (a) $\Rightarrow$ (b): This is clear. \smallskip (b) $\Rightarrow$ (c): Suppose that $1 \notin \mathcal{A}(\mathbb{N}_0[\alpha])$. Then $1 = \sum_{i=1}^k c_i \alpha^i$ for some $c_1, \dots, c_k \in \mathbb{N}_0$ with $\sum_{i=1}^k c_i \ge 2$. 
As a result, $\alpha^n = \sum_{i=1}^k c_i \alpha^{i+n}$ for all $n \in \mathbb{N}_0$. This implies that $\mathbb{N}_0[\alpha]$ is antimatter. \smallskip (c) $\Rightarrow$ (a): If $\alpha \ge 1$, then for each $n \in \mathbb{N}$ the set $\mathbb{N}_0[\alpha] \cap [0,n]$ is finite, and so the elements of $\mathbb{N}_0[\alpha]$ are the terms of an increasing sequence. Therefore $\mathbb{N}_0[\alpha]$ is atomic by~\cite[Theorem~5.6]{fG19}. Now suppose that $\alpha \in (0,1)$. As $\alpha < 1$, we see that $\alpha^i \nmid_{\mathbb{N}_0[\alpha]} \alpha^j$ whenever $i < j$. Because $1 \in \mathcal{A}(\mathbb{N}_0[\alpha])$, it follows that $\alpha^n \in \mathcal{A}(\mathbb{N}_0[\alpha])$ for all $n \in \mathbb{N}_0$. Thus, $\mathbb{N}_0[\alpha]$ is atomic, as desired. \smallskip We now proceed to argue statements (1) and (2). \smallskip (1) Suppose, for the sake of a contradiction, that the monoid $\mathbb{N}_0[\alpha]$ is not atomic. By the already-established equivalence of (a) and (c), $1$ is not an atom of $\mathbb{N}_0[\alpha]$ and, therefore, there exist $c_1, \dots, c_n \in \mathbb{N}_0$ with $1 = \sum_{i=1}^n c_i \alpha^i$. Hence $\alpha$ is a root of $p(x) := 1 - \sum_{i=1}^n c_ix^i \in \mathbb{Q}[x]$. Then write $p(x) = m(x)q(x)$ for some $q(x) \in \mathbb{Q}[x]$. Observe that Gauss' Lemma guarantees that $q(x)$ belongs to $\mathbb{Z}[x]$. Since $p(0) = 1$, the equality $|m(0)| = 1$ holds, which gives the desired contradiction. \smallskip (2) Assume, by way of contradiction, that the monoid $\mathbb{N}_0[\alpha]$ is not atomic, and write $1 = \sum_{i=1}^n c_i \alpha^i$ for some $c_1, \dots, c_n \in \mathbb{N}_0$. As we have seen in the previous paragraph, $\alpha$ is a root of the polynomial $p(x) = 1 - \sum_{i=1}^n c_ix^i \in \mathbb{Q}[x]$. It follows now from Descartes' Rule of Signs that $\alpha$ is the only positive root which $p(x)$ can have. Since $m(x)$ is a divisor of $p(x)$ in $\mathbb{Q}[x]$, each root of $m(x)$ must be a root of $p(x)$. Hence $\alpha$ is the only positive root of $m(x)$, a contradiction. \end{proof} The condition $1 \in \mathcal{A}(\mathbb{N}_0[\alpha])$ in part~(c) of Theorem~\ref{thm:atomic characterization} does not hold in general, which means that there are algebraic numbers $\alpha$ giving antimatter monoids $\mathbb{N}_0[\alpha]$. \begin{example} Take $\alpha = \frac{\sqrt{5} - 1}{2}$, whose minimal polynomial is $m(x) = x^2 + x - 1$. Since $\alpha$ is a root of $m(x)$, we see that $1 = \alpha^2 + \alpha$. As a consequence, $1$ is not an atom of $\mathbb{N}_0[\alpha]$, and so Theorem~\ref{thm:atomic characterization} guarantees that $\mathbb{N}_0[\alpha]$ is antimatter. \end{example} As the next example illustrates, neither of the sufficient conditions for atomicity we gave as part of Theorem~\ref{thm:atomic characterization} implies the other. \begin{example} Consider the polynomial $m_1(x) = x^2 - 4x + 1$. It is clearly irreducible, and it has two distinct positive real roots: $2 \pm \sqrt{3}$. However, we see that $|m_1(0)| = 1$. Now consider the polynomial $m_2(x) = x^2 + 2x - 2$. It is also irreducible, and it has only one positive real root, namely, $\alpha = \sqrt{3} - 1$. However, $|m_2(0)| \neq 1$. \end{example} \smallskip We proceed to describe the set of atoms of $\mathbb{N}_0[\alpha]$ when it is atomic. \begin{prop} \cite[Theorem~4.1]{CG20} \label{prop:atoms of cyclic algebraic semirings} If $\mathbb{N}_0[\alpha]$ is atomic, then the following statements hold. 
\begin{enumerate} \item If $\alpha$ is transcendental, then $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^n : n \in \mathbb{N}_0\}$. \smallskip \item If $\alpha$ is algebraic and $\sigma := \min \{n \in \mathbb{N} \cup \{\infty\} : \alpha^n \in \langle \alpha^j : j \in \ldb 0,n-1 \rdb \rangle \}$, then \begin{itemize} \item if $\sigma < \infty$, then $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^n : n \in \ldb 0, \sigma-1 \rdb\}$, and \smallskip \item if $\sigma = \infty$, then $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{ \alpha^n : n \in \mathbb{N}_0 \}$. \end{itemize} \end{enumerate} \begin{proof} (1) Suppose that $\alpha$ is transcendental. If $\alpha^n = \sum_{i=0}^N c_i \alpha^i$ for some $n \in \mathbb{N}_0$, $N \in \mathbb{N}_{\ge n}$, and coefficients $c_0, \dots, c_N \in \mathbb{N}_0$, then $\alpha$ is a root of the polynomial $x^n - \sum_{i=0}^N c_i x^i \in \mathbb{Q}[x]$. Since $\alpha$ is transcendental, $c_i = 0$ for every $i \in \ldb 0,N \rdb \setminus \{n\}$ and $c_n = 1$, which implies that $\alpha^n$ is an atom. Hence $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^n : n \in \mathbb{N}_0\}$, as desired. \smallskip (2) Now suppose that $\alpha$ is an algebraic number. First, we will assume that $\sigma \in \mathbb{N}$. Since $\alpha^{\sigma} \in \langle \alpha^n : n \in \ldb 0, \sigma-1 \rdb \rangle$, it follows that $\alpha \ge 1$. Note that $\alpha^{\sigma + j} \in \langle \alpha^{n + j} : n \in \ldb 0, \sigma-1 \rdb \rangle$ for all $j \in \mathbb{N}_0$ and, as a consequence, $\alpha^n \notin \mathcal{A}(\mathbb{N}_0[\alpha])$ for any $n \ge \sigma$. Now fix $m \in \ldb 0, \sigma - 1 \rdb$, and write $\alpha^m = \sum_{i=0}^k c_i \alpha^i$ for some $c_0, \dots, c_k \in \mathbb{N}_0$ such that $c_k > 0$. Because $\alpha \ge 1$, we see that $k \le m$. It follows from the minimality of $\sigma$ that $k = m$, and then $c_m = 1$ while $c_i = 0$ for all $i < m$, as otherwise the right-hand side would strictly exceed $\alpha^m$. As a consequence, $\alpha^m \in \mathcal{A}(\mathbb{N}_0[\alpha])$. Then we can conclude that $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^n : n \in \ldb 0, \sigma-1 \rdb\}$. \smallskip Finally, we suppose that $\sigma = \infty$. This implies, in particular, that $\alpha \neq 1$. Assume first that $\alpha > 1$. In this case, $\alpha^n$ does not divide $\alpha^m$ in $\mathbb{N}_0[\alpha]$ for any $n > m$. Therefore $\alpha^m$ is an atom of $\mathbb{N}_0[\alpha]$ if and only if $\alpha^m$ is not in $\langle \alpha^n : n \in \ldb 0, m-1 \rdb \rangle$. Thus, $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^n : n \in \mathbb{N}_0\}$. Now assume that $\alpha < 1$. Take $m \in \mathbb{N}_0$ and suppose that $\alpha^m = \sum_{i=m}^k c_i \alpha^i$ for $c_m, \dots, c_k \in \mathbb{N}_0$. Observe that $c_m > 0$, as otherwise $1 = \sum_{i=m+1}^k c_i \alpha^{i-m}$, so that $1 \notin \mathcal{A}(\mathbb{N}_0[\alpha])$ and $\mathbb{N}_0[\alpha]$ would not be atomic. Then $c_m = 1$, and so $\alpha^m \in \mathcal{A}(\mathbb{N}_0[\alpha])$. Hence $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{ \alpha^n : n \in \mathbb{N}_0 \}$, as desired. \end{proof} Observe that we did not use the atomicity of $\mathbb{N}_0[\alpha]$ to argue that $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^n : n \in \mathbb{N}_0\}$ in part~(1) of Proposition~\ref{prop:atoms of cyclic algebraic semirings}. Thus, we have that $\mathbb{N}_0[\alpha]$ is atomic for every transcendental number $\alpha$; indeed, in this case, $\mathbb{N}_0[\alpha]$ is a free commutative monoid and, therefore, a UFM. The next corollary follows immediately from Proposition~\ref{prop:atoms of cyclic algebraic semirings}. 
\begin{cor} \cite[Corollary~4.3]{CG20} For $\alpha \in \mathbb{R}_{>0}$, the monoid $\mathbb{N}_0[\alpha]$ is finitely generated if and only if there is an $n \in \mathbb{N}_0$ such that $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^j : j \in \ldb 0,n \rdb \}$. \end{cor} \bigskip \section{Atomic Positive Monoids Without the ACCP} \label{sec:ACCP} It is not hard to argue that every monoid satisfying the ACCP is atomic. However, the converse does not hold in general. Indeed, there are atomic integral domains that do not satisfy the ACCP. The first of such examples was constructed in 1974 by Grams~\cite{aG74}, and further examples were given by Zaks in~\cite{aZ82} and, more recently, by Boynton and Coykendall in~\cite{BC19}. It turns out that there exist monoids of the form $\mathbb{N}_0[\alpha]$ (i.e., images of $\mathbb{N}_0[x]$ under evaluation) that are atomic but do not satisfy the ACCP, and we will construct some of them in this section. Before offering a necessary condition for $\mathbb{N}_0[\alpha]$ to satisfy the ACCP, we introduce some needed terminology. \smallskip For a polynomial $f(x) \in \mathbb{Q}[x]$, we call the set of exponents of the monomial summands of $f(x)$ the \emph{support} of $f(x)$, and we denote it by $\text{supp} \, f(x)$, i.e., $\text{supp} \, f(x) := \{n \in \mathbb{N}_0 : f^{(n)}(0) \neq 0 \}$, where $f^{(n)}$ denotes the $n$-th formal derivative of $f$. Assume that $\alpha \in \mathbb{C}$ is algebraic over $\mathbb{Q}$, and let $m(x)$ be the minimal polynomial of $\alpha$. Clearly, there exists a unique $\ell \in \mathbb{N}$ such that $\ell m(x)$ has content~$1$. Also, there exist unique polynomials $p(x)$ and $q(x)$ in $\mathbb{N}_0[x]$ such that $\ell m(x) = p(x) - q(x)$ and $\text{supp} \, p(x) \cap \text{supp} \, q(x) = \emptyset$. We say that $(p(x), q(x))$ is the \emph{minimal pair} of $\alpha$. \begin{prop} \cite[Theorem~4.7]{CG20} \label{prop:ACCP necessary condition} Let $\alpha$ be an algebraic number in $(0,1)$ with minimal pair $(p(x),q(x))$. If $\mathbb{N}_0[\alpha]$ satisfies the ACCP, then $p(x) - x^m q(x)$ is not in $\mathbb{N}_0[x]$ for any $m \in \mathbb{N}_0$. \end{prop} \begin{proof} Assume that the monoid $\mathbb{N}_0[\alpha]$ satisfies the ACCP. Now suppose, by way of contradiction, that there exists $m \in \mathbb{N}_0$ with $f(x) := p(x) - x^mq(x) \in \mathbb{N}_0[x]$. For each $n \in \mathbb{N}$, we see that \[ q(\alpha)\alpha^{nm} = p(\alpha) \alpha^{nm} = \big( f(\alpha) + \alpha^m q(\alpha) \big) \alpha^{nm} = f(\alpha) \alpha^{nm} + q(\alpha) \alpha^{(n+1)m}. \] Therefore $\big( q(\alpha)\alpha^{nm} + \mathbb{N}_0[\alpha] \big)_{n \in \mathbb{N}}$ is an ascending chain of principal ideals in $\mathbb{N}_0[\alpha]$. Since $\mathbb{N}_0[\alpha]$ satisfies the ACCP, this chain must eventually stabilize. However, this would imply that $q(\alpha)\alpha^{nm} = \min (q(\alpha)\alpha^{nm} + \mathbb{N}_0[\alpha]) = \min (q(\alpha)\alpha^{(n+1)m} + \mathbb{N}_0[\alpha]) = q(\alpha)\alpha^{(n+1)m}$ for some $n \in \mathbb{N}$, which is clearly a contradiction. \end{proof} As a consequence of Proposition~\ref{prop:ACCP necessary condition}, we obtain the following. \begin{cor} \cite[Theorem~6.2]{GG18}, \cite[Corollary~4.4]{CGG21} \label{cor:atomic rational cyclic semirings without ACCP} If $q$ is a rational number in $(0,1)$ such that $\mathsf{n}(q) \ge 2$, then $\mathbb{N}_0[q]$ is an atomic monoid that does not satisfy the ACCP. \end{cor} As we have mentioned before, for each $\alpha \in \mathbb{R}_{> 0}$, the monoid $\mathbb{N}_0[\alpha]$ is indeed a semiring. 
When $\alpha$ is a transcendental number, $\mathbb{N}_0[\alpha] \cong \mathbb{N}_0[x]$ as semirings, and, therefore, a simple degree argument shows that the multiplicative monoid $\mathbb{N}_0[\alpha]^\bullet$ is atomic. Factorizations of the multiplicative monoid $\mathbb{N}_0[x]^\bullet$ were studied by Campanini and Facchini in~\cite{CF19}. However, the following question remains unanswered. \begin{question}\footnote{A version of this question is stated in \cite[Section~3]{BG20} as a conjecture.} For which algebraic numbers $\alpha \in \mathbb{R}_{> 0}$ is the multiplicative monoid $\mathbb{N}_0[\alpha]^\bullet$ atomic? \end{question} We can actually use Corollary~\ref{cor:atomic rational cyclic semirings without ACCP} to construct positive monoids of any prescribed rank that are atomic but do not satisfy the ACCP. As far as we know, the following result does not appear in the current literature. \begin{prop} \label{prop:atomic positive monoid without the ACCP} For any rank $s \in \mathbb{N}$, there exists an atomic positive monoid with rank $s$ that does not satisfy the ACCP. \end{prop} \begin{proof} Fix $s \in \mathbb{N}$. Since $\mathbb{R}$ is an infinite-dimensional vector space over $\mathbb{Q}$, we can take $S \subset \mathbb{R}_{> 0}$ to be a linearly independent set over $\mathbb{Q}$ such that $|S| = s-1$ and $\mathbb{Q} \cap S = \emptyset$. Take then $q \in \mathbb{Q} \cap (0,1)$ with $\mathsf{n}(q) \geq 2$. Because $\mathbb{N}_0[q]$ is a Puiseux monoid, $\textsf{gp}(\mathbb{N}_0[q])$ is an additive subgroup of $\mathbb{Q}$, and so $\text{rank}(\mathbb{N}_0[q]) = 1$. Now consider the positive monoid $M := \langle \mathbb{N}_0[q] \cup S \rangle$. It is not hard to see that $M = \mathbb{N}_0[q] \oplus \bigoplus_{r \in S} r\mathbb{N}_0$. Since $\text{rank}(\mathbb{N}_0[q]) = 1$, it follows that $\text{rank}(M) = \text{rank}(\mathbb{N}_0[q]) + s - 1 = s$. Because all direct summands of $M$ are atomic, $M$ must be atomic. Consider the sequence of principal ideals $(\mathsf{n}(q)q^n + \mathbb{N}_0[q])_{n \in \mathbb{N}_0}$ of $\mathbb{N}_0[q]$. Since \[ \mathsf{n}(q)q^n = \mathsf{d}(q)q^{n+1} = (\mathsf{d}(q) - \mathsf{n}(q))q^{n+1} + \mathsf{n}(q)q^{n+1}, \] we see that $\mathsf{n}(q)q^{n+1} \mid_{\mathbb{N}_0[q]} \mathsf{n}(q)q^n$ for every $n \in \mathbb{N}_0$. Therefore $(\mathsf{n}(q)q^n + \mathbb{N}_0[q])_{n \in \mathbb{N}_0}$ is an ascending chain of principal ideals. In addition, it is clear that such a chain of ideals does not stabilize. As a result, $\mathbb{N}_0[q]$ does not satisfy the ACCP, from which we obtain that $M$ does not satisfy the ACCP. \end{proof} Proposition~\ref{prop:atomic positive monoid without the ACCP} allows us to state the following remark in connection to Diagram~\eqref{diag:AAZ's chain for monoids}. \begin{remark} The converse of the implication \textbf{ACCP} $\Rightarrow$ \textbf{atomic} does not hold in the class of positive monoids. \end{remark} \smallskip Our next task will be to construct, for each $s \in \mathbb{N}$, a class of positive monoids of rank $s$ satisfying the ACCP but failing to be BFMs. To do so, we first construct a Puiseux monoid that satisfies the ACCP but is not a BFM, and then we achieve positive monoids with any prescribed rank by mimicking the technique used in the proof of Proposition~\ref{prop:atomic positive monoid without the ACCP}. 
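The identity at the heart of the proof above can be checked with exact rational arithmetic. The following minimal Python sketch (ours) verifies it for $q = 2/3$; since $(\mathsf{d}(q)-\mathsf{n}(q))q^{n+1}$ is a nonzero element of $\mathbb{N}_0[q]$, the verified identity shows that $\mathsf{n}(q)q^{n+1}$ properly divides $\mathsf{n}(q)q^n$, so the chain of principal ideals ascends strictly and never stabilizes.
\begin{verbatim}
from fractions import Fraction

# Sketch: verify n(q) q^n = (d(q) - n(q)) q^(n+1) + n(q) q^(n+1)
# exactly, for q = 2/3.

q = Fraction(2, 3)
nq, dq = q.numerator, q.denominator        # n(q) = 2, d(q) = 3

for n in range(8):
    lhs = nq * q**n
    rhs = (dq - nq) * q**(n + 1) + nq * q**(n + 1)
    assert lhs == rhs
    print(f"n = {n}:  {lhs} = {dq - nq}*q^{n+1} + {nq}*q^{n+1}")
\end{verbatim}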
\begin{prop} \cite[Example~2.1]{AAZ90}, \cite[Proposition~4.2.2]{fG21} \label{prop:a class of ACCP monoids} For any $s \in \mathbb{N}$, there exists a positive monoid with rank $s$ that satisfies the ACCP but is not a BFM. \end{prop} \begin{proof} Let $(d_n)_{n \in \mathbb{N}}$ be a strictly increasing sequence of positive integers with $d_1 \ge 2$ such that $\gcd(d_i, d_j) = 1$ for any distinct $i,j \in \mathbb{N}$. Consider the monoid $M := \langle 1/d_n : n \in \mathbb{N} \rangle$. It is not hard to verify that $1/d_j \in \mathcal{A}(M)$ for every $j \in \mathbb{N}$ and, therefore, $M$ is an atomic monoid with $\mathcal{A}(M) = \{1/d_j : j \in \mathbb{N}\}$. In addition, we can easily check that for each $q \in M^\bullet$, we can take $n \in \mathbb{N}_0$ and $c_1, \dots, c_k \in \mathbb{N}_0$ with $c_k \neq 0$ satisfying \begin{align} \label{eq:decomposition existence} q = n + \sum_{i=1}^k c_i \frac{1}{d_i}, \end{align} where $c_i \in \ldb 0,d_i - 1 \rdb$ for each $i \in \ldb 1,k \rdb$. We claim that the decomposition in~\eqref{eq:decomposition existence} is unique. To argue our claim, take $n' \in \mathbb{N}_0$ and $c'_1, \dots, c'_m \in \mathbb{N}_0$ with $c'_m \neq 0$ and $c'_i \in \ldb 0, d_i - 1 \rdb$ for all $i \in \ldb 1, m \rdb$ such that \begin{align} \label{eq:decomposition uniqueness} n + \sum_{i=1}^k c_i \frac{1}{d_i} = n' + \sum_{i=1}^m c'_i \frac{1}{d_i}. \end{align} We can complete with zero coefficients if necessary to assume, without loss of generality, that $m = k$. Set $d = d_1 \cdots d_k$, and $n_i := d/d_i \in \mathbb{N}$ for every $i \in \ldb 1, k \rdb$. Now, for each $j \in \ldb 1,k \rdb$, we can rewrite~\eqref{eq:decomposition uniqueness} as follows: \[ (c_j - c'_j) n_j = (n' - n)d + \sum_{i \neq j} (c'_i - c_i)n_i. \] Since $d_j \mid d$ and $d_j \mid n_i$ for every $i \in \ldb 1,k \rdb \setminus \{j\}$, the right-hand side of the last equality is divisible by $d_j$. Thus, $(c_j - c'_j) n_j$ is divisible by $d_j$, and because $\gcd(n_j, d_j) = 1$, we see that $d_j \mid (c_j - c'_j)$. This implies that $c'_j = c_j$ for each $j \in \ldb 1, k \rdb$, and so $n' = n$. Hence the decomposition in \eqref{eq:decomposition existence} is unique, as claimed. With notation as in~\eqref{eq:decomposition existence}, define $N(q) := n$ and $S(q) := \sum_{i=1}^k c_i$. If $q'$ divides $q$ in $M$, then it is clear that $N(q') \le N(q)$. Also, if $q'$ properly divides $q$ in $M$, then the equality $N(q') = N(q)$ guarantees that $S(q') < S(q)$. Putting the last two observations together, we conclude that there is no sequence $(q_n)_{n \in \mathbb{N}}$ in $M$ such that $q_{n+1}$ properly divides $q_n$ in $M$ for every $n \in \mathbb{N}$. Hence the Puiseux monoid $M$ satisfies the ACCP. Since $M$ is a Puiseux monoid, $\text{rank}(M) = 1$. In addition, we observe that $d_n \in \mathsf{L}_M(1)$ for every $n \in \mathbb{N}$ because $1 = d_n \frac{1}{d_n}$, whence $M$ is not a BFM. Thus, we have found a rank-one positive monoid that satisfies the ACCP but is not a BFM. Now, as we did in the proof of Proposition~\ref{prop:atomic positive monoid without the ACCP}, let us take a $\mathbb{Q}$-linearly independent set $S \subset \mathbb{R}_{> 0}$ such that $|S| = s-1$ and $\mathbb{Q} \cap S = \emptyset$. Then consider the positive monoid $M_s := M \oplus \bigoplus_{r \in S} r\mathbb{N}_0$. As $\text{rank}(M) = 1$, it follows that $\text{rank}(M_s) = s$. In addition, since $M$ satisfies the ACCP and each summand $r\mathbb{N}_0 \cong \mathbb{N}_0$ does as well, every direct summand of $M_s$ satisfies the ACCP, which implies that $M_s$ also satisfies the ACCP. 
Since $M$ is a divisor-closed submonoid of $M_s$, the fact that $M$ is not a BFM immediately implies that $M_s$ is not a BFM. Thus, the positive monoid $M_s$ has rank $s$, satisfies the ACCP, but is not a BFM. \end{proof} We can then state the following remark in connection to Diagram~\eqref{diag:AAZ's chain for monoids}. \begin{remark} The converse of the implication \textbf{BFM} $\Rightarrow$ \textbf{ACCP} does not hold in the class of positive monoids. \end{remark} \smallskip \bigskip \section{The Bounded Factorization Property} We begin this section by providing two equivalent sufficient conditions for a positive monoid to be a BFM. \begin{prop} \cite[Proposition~4.5]{fG19} \label{prop:BF sufficient condition} For a positive monoid $M$, the following statements are equivalent. \begin{enumerate} \item $\inf M^\bullet > 0$. \smallskip \item $M$ is atomic and $\inf \mathcal{A}(M) > 0$. \end{enumerate} In addition, any of the above conditions implies that $M$ is a BFM. \end{prop} \begin{proof} (1) $\Rightarrow$ (2): Since $\inf M^\bullet > 0$, the inclusion $\mathcal{A}(M) \subseteq M^\bullet$ guarantees that $\inf \mathcal{A}(M) > 0$. Let us now verify that $M$ is atomic. Because $\inf M^\bullet > 0$, we can take $\epsilon \in \mathbb{R}_{> 0}$ satisfying $\epsilon < \inf M^\bullet$. Take now $y \in M^\bullet$ such that $y = b_1 + \cdots + b_n$ for some $b_1, \dots, b_n \in M^\bullet$. Then $y \ge n \min\{b_1, \dots, b_n\} \ge n \epsilon$, which implies that $n \le y/\epsilon$. Then there exists a maximum $m \in \mathbb{N}$ such that $y = a_1 + \cdots + a_m$ for some $a_1, \dots, a_m \in M^\bullet$. In this case, the maximality of $m$ ensures that $a_1, \dots, a_m \in \mathcal{A}(M)$. As a result, $M$ must be atomic. \smallskip (2) $\Rightarrow$ (1): Take $\epsilon \in \mathbb{R}_{> 0}$ such that $\epsilon < \inf \mathcal{A}(M)$. For each $r \in M^\bullet$, the fact that $M$ is atomic guarantees the existence of $a \in \mathcal{A}(M)$ dividing $r$ in $M$, and so $r \ge a > \epsilon$. As a result, $\inf M^\bullet > 0$. \smallskip We have seen in the first paragraph that if we take $\epsilon \in \mathbb{R}_{> 0}$ with $\epsilon < \inf M^\bullet$, then each $y \in M^\bullet$ can be written as the sum of at most $\lfloor y/\epsilon \rfloor$ atoms, and this implies that $\mathsf{L}(y)$ is bounded. As a consequence, $M$ is a BFM. \end{proof} The reverse implication in the last statement of Proposition~\ref{prop:BF sufficient condition} does not hold, as the following example shows. \begin{example} \label{ex:an FF PM having 0 as a limit point} Since $\mathbb{R}$ is an infinite-dimensional vector space over $\mathbb{Q}$, we can take a sequence $(r_n)_{n \in \mathbb{N}}$ of real numbers whose underlying set is linearly independent over $\mathbb{Q}$. After dividing each $r_n$ by a large enough positive integer $d_n$, one can further assume that $(r_n)_{n \in \mathbb{N}}$ decreases to zero. Therefore $M = \langle r_n : n \in \mathbb{N} \rangle$ is a UFM with $\mathcal{A}(M) = \{r_n : n \in \mathbb{N}\}$. In particular, $M$ is a BFM with $\inf M^\bullet = 0$. \end{example} Let us now identify a class of positive monoids that are BFMs but are neither FFMs nor HFMs. \begin{example} \label{ex:BFM that is neither FFM nor HFM} Consider the positive monoid $M := \{0\} \cup \mathbb{R}_{\ge 1}$. It follows from Proposition~\ref{prop:BF sufficient condition} that~$M$ is a BFM. Note that $\mathcal{A}(M) = [1,2)$. Let us show that $M$ is not an FFM. 
To do this, note that for each $b \in (2,3]$ the formal sum $(1 + 1/n) + (b - 1 - 1/n)$ is a factorization of length $2$ in $\mathsf{Z}(b)$ provided that $n \ge \max\big\{2, \big\lceil \frac{1}{b-2} \big\rceil\big\}$, and distinct values of $n$ yield distinct factorizations, so that $|\mathsf{Z}(b)| = \infty$ for all $b \in (2,3]$. Splitting off additional atoms of the form $1 + 1/n$, one similarly finds that $|\mathsf{Z}(b)| = \infty$ for all $b \in M_{>2}$. To see that $M$ is not an HFM, it suffices to observe that $3 = 1 + 1 + 1 = \frac{3}{2} + \frac{3}{2}$, which implies that $\{2,3\} \subseteq \mathsf{L}(3)$. \end{example} In light of Example~\ref{ex:BFM that is neither FFM nor HFM}, we make the following observation. \begin{remark} The converses of the implications \textbf{HFM} $\Rightarrow$ \textbf{BFM} and \textbf{FFM} $\Rightarrow$ \textbf{BFM} do not hold in the class of positive monoids. \end{remark} \smallskip We can generalize the monoid in Example~\ref{ex:BFM that is neither FFM nor HFM} and create two classes of positive monoids that are BFMs whose sets of atoms can be nicely described. These classes of monoids are quite suitable for producing counterexamples, as we just did in Example~\ref{ex:BFM that is neither FFM nor HFM}. \begin{prop} \cite[Example~4.7]{AG20}, \cite[Proposition~3.14]{BG20} \label{thm:BF sufficient condition} Let $r,s \in \mathbb{R}_{> 0}$ with $r>1$. Then the following statements hold. \begin{enumerate} \item $M_s = \{0\} \cup \mathbb{R}_{\ge s}$ is a BFM with $\mathcal{A}(M_s) = [s, 2s)$. \smallskip \item $S_r = \mathbb{N}_0 \cup \mathbb{R}_{\ge r}$ is a BFM with $\mathcal{A}(S_r) = \big( \{1\} \cup [r,r+1) \big) \setminus \{ \lceil r \rceil \}$. \end{enumerate} \end{prop} \begin{proof} (1) As $\inf M_s^\bullet = s > 0$, it follows from Proposition~\ref{prop:BF sufficient condition} that $M_s$ is a BFM. In addition, since $2s$ is a lower bound for the set $[s,2s) + [s,2s)$, it follows that $[s,2s) \subseteq \mathcal{A}(M_s)$. Finally, it is clear that $[s,2s)$ generates $M_s$, which implies that $\mathcal{A}(M_s) = [s,2s)$. \smallskip (2) Once again, it follows from Proposition~\ref{prop:BF sufficient condition} that $S_r$ is a BFM. In addition, it is clear that $\mathbb{R}_{\ge r+1} \subseteq 1 + S_r^\bullet$. As a consequence, $\mathcal{A}(S_r) \subseteq S^\bullet_r \cap \mathbb{R}_{< r+1} = \ldb 1, \lceil r \rceil \rdb \cup [r, r+1)$. Clearly $1 \in \mathcal{A}(S_r)$, while no $m$ in the discrete interval $\ldb 2, \lceil r \rceil \rdb$ is an atom, being a sum of $m$ copies of $1$. On the other hand, every non-integer $x \in [r, r+1)$ is an atom: in any decomposition $x = u + v$ with $u, v \in S_r^\bullet$, either both summands lie in $\mathbb{N}$ (impossible, as $x \notin \mathbb{N}$), or some summand lies in $\mathbb{R}_{\ge r}$; since every nonzero element of $S_r$ is at least $1$, the latter forces $x \ge r+1$, a contradiction. We can therefore conclude that $\mathcal{A}(S_r) = \big( \{1\} \cup [r,r+1) \big) \setminus \{ \lceil r \rceil \}$. \end{proof} \bigskip \section{The Finite Factorization Property} We turn our discussion to the finite factorization property on the class of positive monoids. A positive monoid $M$ is called \emph{increasing} provided that it can be generated by an increasing sequence of positive real numbers. We show that increasing positive monoids are FFMs. \begin{theorem} \cite[Proposition~3.3]{GG18} \label{thm:increasing PM of Archimedean fields are FF} Every increasing positive monoid is an FFM. In addition, if $(r_n)_{n \in \mathbb{N}}$ is an increasing sequence of positive real numbers generating a positive monoid $M$, then $\mathcal{A}(M) = \{r_n : r_n \notin \langle r_1, \dots, r_{n-1} \rangle\}$. \end{theorem} \begin{proof} It is clear that $M$ is atomic; indeed, since $\inf M^\bullet$ is the first term of an increasing sequence of generators and hence positive, it follows from Proposition~\ref{prop:BF sufficient condition} that $M$ is a BFM. Suppose for the sake of contradiction that $M$ is not an FFM. Then the set \[ X := \{r \in M : \, |\mathsf{Z}(r)| = \infty\} \] is nonempty. Set $s = \inf X$ and note that $s \ge \inf M^\bullet > 0$.
Since $M$ is increasing, it follows that $m := \inf M^\bullet \in M$. Take $\epsilon \in (0,m)$ and then $r \in X$ with $s \le r < s + \epsilon$. Observe that every $a \in \mathcal{A}(M)$ appears in only finitely many factorizations of $r$: indeed, if $a$ appears in some factorization of $r$, then $r - a \in M$ and $r - a < s + \epsilon - m < s$, so $r - a \notin X$; deleting one copy of $a$ thus injects the set of factorizations of $r$ containing $a$ into the finite set $\mathsf{Z}(r-a)$. Because $M$ is a BFM, $|\mathsf{L}(r)| < \infty$, and so we can choose $\ell \in \mathsf{L}(r)$ so that $Z_\ell := \{z \in \mathsf{Z}(r) : \, |z| = \ell\}$ has infinite size. Let $z = a_1 \cdots a_\ell \in Z_\ell$ for some atoms $a_1, \dots, a_\ell$ of $M$ with $a_1 \le \cdots \le a_\ell$. As each atom appears in only finitely many of the factorizations in the infinite set $Z_\ell$, we can take $z' = a'_1 \cdots a'_\ell \in Z_\ell$ for some atoms $a'_1, \dots, a'_\ell$ of $M$ satisfying $a_\ell < \min\{a'_1, \dots, a'_\ell\}$. Accordingly, we obtain \[ a_1 + \cdots + a_\ell \le \ell a_\ell < a'_1 + \cdots + a'_\ell, \] which contradicts the fact that $a_1 \cdots a_\ell$ and $a'_1 \cdots a'_\ell$ are factorizations of the same element, namely $r$. Hence $M$ is an FFM. \smallskip In order to argue the second statement, let $A$ denote the set $\{r_n : r_n \notin \langle r_1, \dots, r_{n-1} \rangle\}$. It follows immediately that $A = \mathcal{A}(M)$ if $M$ is finitely generated, in which case $M$ is an FFM by \cite[Corollary~3.7]{AG20}. We assume, therefore, that $|A| = \infty$. Let $(a_n)_{n \in \mathbb{N}}$ be a strictly increasing sequence with underlying set $A$. Because $a_1$ is the minimum of $M^\bullet$, it must be an atom. In addition, as $(a_n)_{n \in \mathbb{N}}$ is strictly increasing, for each $n \ge 2$ the fact that $a_n \notin \langle a_1,\dots, a_{n-1} \rangle$ guarantees that $a_n \in \mathcal{A}(M)$. As a result, $\mathcal{A}(M) = A$, which concludes the proof. \end{proof} There are positive monoids that are FFMs but not increasing; that is, the converse of Theorem~\ref{thm:increasing PM of Archimedean fields are FF} does not hold in general. \begin{example} Consider the monoid $M = \langle r_n : n \in \mathbb{N} \rangle$ constructed in Example~\ref{ex:an FF PM having 0 as a limit point}, where $(r_n)_{n \in \mathbb{N}}$ is a sequence of real numbers that strictly decreases to zero and whose terms are linearly independent over $\mathbb{Q}$. Since $M$ is a UFM, it is clearly an FFM. However, the fact that $0$ is a limit point of $M^\bullet$ guarantees that $M$ cannot be generated by an increasing sequence of real numbers. Hence $M$ is not an increasing positive monoid. \end{example} \medskip For every $n \in \mathbb{N}$, we can take elements $r_1, \dots, r_n \in \mathbb{R}_{> 0}$ that are linearly independent over $\mathbb{Q}$. Consider the positive monoid $M_n := \langle r_1, \dots, r_n \rangle$. It is clear that $M_n$ is a UFM of rank $n$. In the same way, we can create (and have created in previous examples) positive monoids of infinite rank. Since every UFM is an FFM, we have finite factorization positive monoids of any rank. It turns out that, already within the class of positive monoids $\{\mathbb{N}_0[\alpha] : \alpha \in \mathbb{R}_{>0}\}$ discussed in Section~\ref{sec:atomicity}, there are FFMs of any rank that are not UFMs. \begin{prop} \cite[Theorem~5.4]{CG20} \label{prop:UFM characterization} For $\alpha \in \mathbb{R}_{>0}$, the following statements hold. \begin{enumerate} \item If $\alpha$ is transcendental, then $\mathbb{N}_0[\alpha]$ is a UFM of infinite rank. \smallskip \item If $\alpha > 1$, then $\mathbb{N}_0[\alpha]$ is an FFM.
\smallskip \item If $\alpha$ is algebraic with minimal polynomial $m(x)$, then $\mathbb{N}_0[\alpha]$ is a UFM if and only if $\deg m(x) = |\mathcal{A}(\mathbb{N}_0[\alpha])|$. \end{enumerate} \end{prop} \begin{proof} (1) Because $\alpha$ is transcendental, there is no nonzero polynomial in $\mathbb{Q}[x]$ having $\alpha$ as a root and, therefore, the set $\{\alpha^n : n \in \mathbb{N}_0\}$ is linearly independent over $\mathbb{Q}$. Hence $\mathbb{N}_0[\alpha]$ is a UFM and, as its generating set $\{\alpha^n : n \in \mathbb{N}_0\}$ is infinite and linearly independent over $\mathbb{Q}$, it has infinite rank. \smallskip (2) Since $\alpha > 1$, we see that $\alpha^n < \alpha^{n+1}$ for every $n \in \mathbb{N}_0$. As a result, $(\alpha^n)_{n \in \mathbb{N}_0}$ is an increasing sequence generating $\mathbb{N}_0[\alpha]$. Hence $\mathbb{N}_0[\alpha]$ is an increasing positive monoid, and it follows from Theorem~\ref{thm:increasing PM of Archimedean fields are FF} that it is an FFM. \smallskip (3) For the direct implication, suppose that $\mathbb{N}_0[\alpha]$ is a UFM (and so an HFM). If $\alpha \in \mathbb{Q}$, then $\deg m(x) = 1$ and, since $\mathbb{N}_0[\alpha]$ is a UFM, it follows from \cite[Proposition~4.2]{fG20} that $\mathbb{N}_0[\alpha]$ is isomorphic to the additive monoid $\mathbb{N}_0$. So in this case, the equalities $\deg m(x) = 1 = |\mathcal{A}(\mathbb{N}_0[\alpha])|$ hold. We assume, therefore, that $\alpha \notin \mathbb{Q}$, that is, $\deg m(x) > 1$. Now set \[ \sigma = \min \{n \in \mathbb{N} : \alpha^n \in \langle \alpha^j : j \in \ldb 0, n-1 \rdb \rangle \}. \] As $\mathbb{N}_0[\alpha]$ is atomic, $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^j : j \in \ldb 0, \sigma-1 \rdb\}$ by Theorem~\ref{thm:atomic characterization}. Because $m(x)$ divides any polynomial in $\mathbb{Q}[x]$ having $\alpha$ as a root, we obtain that $\deg m(x) \le \sigma$. Suppose for the sake of contradiction that $\deg m(x) < \sigma$. Take $d \in \mathbb{N}$ with $dm(x) \in \mathbb{Z}[x]$ and write $dm(x) = p(x) - q(x)$ for polynomials $p(x), q(x) \in \mathbb{N}_0[x]$ having no monomials in common, so that $\max\{\deg p(x), \deg q(x)\} = \deg m(x) < \sigma$. Since $m(\alpha) = 0$, we see that $p(\alpha) = q(\alpha)$ and, as all the exponents involved are smaller than $\sigma$, the expressions $p(\alpha)$ and $q(\alpha)$ induce two factorizations of the same element of $\mathbb{N}_0[\alpha]$. Since $\mathbb{N}_0[\alpha]$ is a UFM, these factorizations must coincide, whence $m(x) = \frac{1}{d}\big(p(x) - q(x)\big) = 0$, which is a contradiction. Thus, $\deg m(x) = \sigma = |\mathcal{A}(\mathbb{N}_0[\alpha])|$, as desired. For the reverse implication, suppose that $\deg m(x) = |\mathcal{A}(\mathbb{N}_0[\alpha])|$. As the set $\mathcal{A}(\mathbb{N}_0[\alpha])$ is not empty, $\mathbb{N}_0[\alpha]$ is atomic by virtue of Theorem~\ref{thm:atomic characterization} and, with $\sigma$ defined as in the previous paragraph, $\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^j : j \in \ldb 0, \sigma-1 \rdb\}$. The hypothesis then yields $\sigma = d$, where $d$ is the degree of $m(x)$, and so \[ \mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^j : j \in \ldb 0,d-1 \rdb\}. \] Then $\alpha^d = \sum_{i=0}^{d-1} c_i \alpha^i$ for some $c_0, \dots, c_{d-1} \in \mathbb{N}_0$, which implies that $m(x) = x^d - \sum_{i=0}^{d-1} c_i x^i$. Now identify each factorization $z \in \mathsf{Z}(\mathbb{N}_0[\alpha])$ with the polynomial $z(x) \in \mathbb{N}_0[x]$ whose coefficient of $x^j$ is the multiplicity of the atom $\alpha^j$ in $z$. For any two factorizations $z_1, z_2$ of the same element of $\mathbb{N}_0[\alpha]$, we have that $\max \{ \deg z_1(x), \deg z_2(x) \} < d$ and $z_1(\alpha) = z_2(\alpha)$. This implies that $m(x)$ divides the polynomial $z_1(x) - z_2(x)$, whose degree is strictly less than $\deg m(x)$. As a result, $z_1(x) = z_2(x)$, which implies that $z_1 = z_2$. As a consequence, $\mathbb{N}_0[\alpha]$ is a UFM. \end{proof} Let us show now that, for any rank $n$, there is a positive monoid of rank $n$ that is an FFM but not a UFM. As far as we know, the following result does not appear in the current literature.
\begin{prop} For every $n \in \mathbb{N}$, there exists an algebraic element $\alpha \in \mathbb{R}_{> 0}$ such that $\mathbb{N}_0[\alpha]$ is a rank-$n$ FFM that is not a UFM. \end{prop} \begin{proof} For $n=1$, we can take $M = \mathbb{N}_0[q] = \langle q^k : k \in \mathbb{N}_0 \rangle$, where $q \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$ (for $q \in \mathbb{N}$ the monoid $\mathbb{N}_0[q] = \mathbb{N}_0$ is a UFM). It is clear that $M$ has rank $1$. Since $M$ is generated by the increasing sequence $(q^k)_{k \in \mathbb{N}_0}$, Theorem~\ref{thm:increasing PM of Archimedean fields are FF} ensures that $M$ is an FFM. Also, it follows from \cite[Proposition~4.2]{fG20} that $M$ is not a UFM. For $n \ge 2$, consider the polynomial $m(x) = x^n - 4x + 2 \in \mathbb{Z}[x]$. Since $m(1) = -1$ and $m(4) > 0$, the polynomial $m(x)$ has a root $\alpha$ in the interval $(1,4)$. It follows from Eisenstein's Criterion at the prime $2$ that $m(x)$ is irreducible. As a result, $m(x)$ is the minimal polynomial of $\alpha$. Consider now the monoid $M = \mathbb{N}_0[\alpha]$. It follows from~\cite[Proposition~3.2]{CG19} that the rank of $M$ equals the degree of $m(x)$, that is, $\text{rank} \, M = n$. Because $\alpha > 1$, the monoid $M$ is an FFM by virtue of Theorem~\ref{thm:increasing PM of Archimedean fields are FF}. Finally, let us show that $M$ is not a UFM. Suppose, by way of contradiction, that $\deg m(x) = |\mathcal{A}(M)|$. In this case, it follows from Proposition~\ref{prop:atoms of cyclic algebraic semirings} that \[ \mathcal{A}(M) = \{\alpha^j : j \in \ldb 0,n-1 \rdb\}. \] Then $\alpha^n \in \langle \alpha^j : j \in \ldb 0, n-1 \rdb \rangle$, and so we can take $c_0, \dots, c_{n-1} \in \mathbb{N}_0$ such that $\alpha^n = \sum_{i=0}^{n-1} c_i \alpha^i$. Now the fact that $\alpha$ is a root of the polynomial $f(x) = x^n - \sum_{i=0}^{n-1} c_i x^i$, which is monic of degree $\deg m(x)$, implies that $f(x) = m(x)$. However, in this case one finds that $c_0 = -f(0) = -m(0) = -2$, contradicting the fact that $c_0 \in \mathbb{N}_0$. As a consequence, $\deg m(x) \neq |\mathcal{A}(M)|$, and so Proposition~\ref{prop:UFM characterization} guarantees that $M$ is not a UFM, which concludes our proof. \end{proof} Let us record the following remark in connection to Diagram~\eqref{diag:AAZ's chain for monoids}. \begin{remark} The converse of the implication \textbf{UFM} $\Rightarrow$ \textbf{FFM} does not hold in the class of positive monoids. \end{remark} For the sake of completeness, we conclude with an example of a positive monoid that is an HFM but not a UFM; this is \cite[Example~7.2]{BCG21}. \begin{example} \label{ex:HFM not UFM} Take $n \in \mathbb{N}$ and consider the positive monoid $M_n = \big\langle \pi, n, \frac{\pi + n}{2} \big\rangle$. One can easily show that $\mathcal{A}(M_n) = \big\{ \pi, n, \frac{\pi + n}{2} \big\}$. As the formal sums $\pi + n$ and $\frac{\pi + n}{2} + \frac{\pi + n}{2}$ are distinct factorizations of the element $\pi + n$, we see that $M_n$ is not a UFM. Now observe that any two factorizations of the same element of $M_n$ give rise, upon taking their difference, to integers $c_1, c_2, c_3 \in \mathbb{Z}$ such that \[ c_1 \pi + c_2 n +c_3 \frac{\pi + n}2 = 0. \] Since $\pi$ is irrational, $c_1 + c_3/2 = c_2 + c_3/2 = 0$ and, therefore, $c_1 + c_2 + c_3 = 0$; that is, any two factorizations of the same element have the same length. Thus, the positive monoid $M_n$ is an HFM. \end{example} We conclude with the following remark in connection to Diagram~\eqref{diag:AAZ's chain for monoids}. \begin{remark} The converse of the implication \textbf{UFM} $\Rightarrow$ \textbf{HFM} does not hold in the class of positive monoids. \end{remark} \bigskip \section*{Acknowledgments} The authors would like to thank Felix Gotti for helpful conversations during the preparation of this paper. \bigskip
\chapter{Introduction} A foliation $\mathcal F$ on a manifold $M$ can be thought of as a partition of the manifold into injectively immersed submanifolds of $M$ called the leaves of the foliation. The tangent spaces of the leaves combine together to define the tangent bundle of the foliation $\mathcal F$, which we denote by $T\mathcal F$. The quotient bundle $TM/T\mathcal F$ is referred to as the normal bundle of the foliation and is denoted by $\nu\mathcal F$; it plays an important role in the study of foliations. The simplest foliations on a manifold are those defined by submersions: in this case the level sets of the submersion are the leaves of a regular foliation. More generally, if a map $f:M\to N$ is transverse to a foliation $\mathcal F_N$ on $N$, then the inverse image of $\mathcal F_N$ under $f$ is a foliation on $M$. If $M$ is an open manifold, then it follows from the Gromov-Phillips Transversality Theorem (\cite{gromov},\cite{phillips},\cite{phillips1}) that the homotopy classes of maps $M\to N$ transversal to $\mathcal F_N$ are in one-to-one correspondence with the homotopy classes of epimorphisms $TM\to \nu(\mathcal F_N)$. The Gromov-Phillips Theorem can be translated into the language of the $h$-principle and can be deduced from a general theorem due to Gromov (\cite{gromov_pdr}). In the vocabulary of the $h$-principle, a subset $\mathcal R$ of $J^r(M,N)$, the space of $r$-jets of maps from a manifold $M$ to $N$, is called an $r$-th order \emph{partial differential relation} or simply a relation. If $\mathcal R$ is open then it is called an open relation. A (continuous) section $\sigma:M\to J^r(M,N)$ of the $r$-jet bundle whose image is contained in $\mathcal R$ is referred to as a section of $\mathcal R$. A \emph{solution} of $\mathcal R$ is a smooth map $f:M\to N$ whose $r$-jet extension $j^r_f:M\to J^r(M,N)$ is a section of $\mathcal R$. The space of solutions, $Sol(\mathcal R)$, has the $C^\infty$-compact open topology, whereas the space of sections of $\mathcal R$, $\Gamma(\mathcal R)$, has the $C^0$-compact open topology. The relation $\mathcal R$ is said to satisfy the \emph{parametric $h$-principle} if the $r$-jet map $j^r:Sol(\mathcal R)\to\Gamma(\mathcal R)$ is a weak homotopy equivalence. Thus the $h$-principle reduces a differential topological problem to a problem in algebraic topology. The diffeomorphism group of $M$ acts on the space of maps $M\to N$ by the pull-back operation. This extends to an action of \emph{Diff}$(M)$, the pseudogroup of local diffeomorphisms of $M$, on the space of $r$-jets. If $\mathcal R$ is invariant under this action then we say that $\mathcal R$ is \emph{Diff}$(M)$-invariant. Gromov proved in \cite{gromov} that if $M$ is an open manifold then every open, \emph{Diff}$(M)$-invariant relation on $M$ satisfies the parametric $h$-principle. We shall refer to this result as the Open Invariant Theorem for future reference. Using the full strength of the hypothesis on $\mathcal R$, one first proves that $\mathcal R$ satisfies the parametric $h$-principle near any submanifold $K$ of positive codimension. A key point about an open manifold $M$ is that it has the homotopy type of a CW complex $K$ of dimension strictly less than that of $M$. Furthermore, $M$ admits deformations into arbitrarily small open neighbourhoods of $K$. As a result, open manifolds exhibit a tremendous amount of flexibility. This allows the $h$-principle to be lifted from an open neighbourhood of $K$ to all of $M$.
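For a classical illustration of the Open Invariant Theorem (the example below is standard and is included here only for orientation), suppose that $\dim M<\dim N$ and consider the immersion relation
\[\mathcal R_{imm}=\{(x,y,F)\in J^1(M,N):\ F:T_xM\to T_yN \text{ is injective}\},\]
where a $1$-jet is identified with a triple consisting of a source point, a target point and a linear map between the corresponding tangent spaces. Injectivity of a linear map is an open condition, so $\mathcal R_{imm}$ is an open relation, and it is clearly \emph{Diff}$(M)$-invariant, since precomposing a derivative with the derivative of a local diffeomorphism does not affect injectivity. The solutions of $\mathcal R_{imm}$ are precisely the immersions $M\to N$, and for open $M$ the Open Invariant Theorem recovers the Smale-Hirsch classification: the space of immersions $M\to N$ is weakly homotopy equivalent to the space of bundle monomorphisms $TM\to TN$.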
Since transversality to a foliation is a condition on the derivative of a map, the maps transverse to $\mathcal F_N$ are precisely the solutions of a first order differential relation $\mathcal R_T$. Transversality being a stable property, the relation $\mathcal R_T$ is open. Furthermore, the relation is clearly \emph{Diff}$(M)$-invariant since the pull-back of a map $M\to N$ transverse to a foliation $\mathcal F_N$ on $N$ by a diffeomorphism of $M$ is also transverse to $\mathcal F_N$. Thus the Gromov-Phillips Theorem says that $\mathcal R_T$ satisfies the parametric $h$-principle. The transversality theorem mentioned above plays a central role in the classification of foliations on open manifolds. Formally, a codimension $q$ foliation on a manifold $M$ is defined by local submersions $f_i:U_i\to \R^q$ for some open covering $\mathcal U=\{U_i,i\in I\}$, such that there are local diffeomorphisms $g_{ij}:f_i(U_i\cap U_j)\to f_j(U_i\cap U_j)$ satisfying the relations $g_{ij}f_i=f_j$ on $U_i\cap U_j$ and the cocycle conditions. The germs of the diffeomorphisms $g_{ij}$ at the points $f_i(x)$, $x\in U_i\cap U_j$, define maps $\gamma_{ij}:U_i\cap U_j\to \Gamma_q$, where $\Gamma_q$ is the topological groupoid of germs of local diffeomorphisms of $\R^q$. For any topological groupoid $\Gamma$, there is a notion of $\Gamma$-structure (\cite{haefliger},\cite{haefliger1}). Following Milnor's topological join construction (\cite{husemoller}) used to define classifying spaces of principal $G$-bundles, one can construct a topological space $B\Gamma$ with a universal $\Gamma$-structure $\Omega$ such that $[M,B\Gamma]$, the set of homotopy classes of maps $M\to B\Gamma$, classifies the $\Gamma$-structures on $M$ up to homotopy (\cite{haefliger}, \cite{haefliger1}). In particular, when $\Gamma=\Gamma_q$, the derivative map $d:\Gamma_q\to GL_q(\R)$ induces a continuous map $Bd:B\Gamma_q\to BGL_q(\R)$ into the classifying space $BGL_q(\R)$ of real vector bundles of rank $q$. If $\tilde{f}$ is a classifying map of a $\Gamma_q$ structure $\omega$, then $Bd\circ\tilde{f}$ classifies the normal bundle $\nu(\omega)$ associated to $\omega$. Furthermore, there is a vector bundle $\nu\Omega_q$ over $B\Gamma_q$ which is `universal' for the bundles $\nu(\omega)$ as $\omega$ runs over all $\Gamma_q$ structures on $M$. The Haefliger cocycles of a foliation $\mathcal F$ on $M$ naturally give rise to a $\Gamma_q$-structure $\omega_{\mathcal F}$ on $M$. It is a general fact that $\nu(\omega_{\mathcal F})$ is isomorphic to the normal bundle of the foliation $\mathcal F$. Hence, $\nu(\omega_{\mathcal F})$ admits an embedding into the tangent bundle $TM$ and consequently, the classifying map $\tilde{f}$ can be covered by an epimorphism $F:TM\to \nu\Omega_q$. Haefliger observed that any $\Gamma_q$-structure on $M$ can indeed be obtained as the inverse image of a foliation under an embedding $e:M\to (N,\mathcal F_N)$ into a foliated manifold $N$; the $\Gamma_q$ structure is a foliation if and only if $e$ is transverse to $\mathcal F_N$. Thus, he reduced the homotopy classification of foliations on open manifolds to the Gromov-Phillips Theorem and showed that the `integrable' homotopy classes of codimension $q$ foliations on an open manifold $M$ are in one-to-one correspondence with the homotopy classes of epimorphisms $F:TM\to \nu\Omega_q$ (\cite{haefliger1}). In particular, it shows that if a map $f:M\to BGL_q(\R)$ classifying the normal bundle of a codimension $q$ distribution $D$ on $M$ lifts to a map $\tilde{f}:M\to B\Gamma_q$, then the distribution $D$ is homotopic to one which is integrable, provided $M$ is open.
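As a simple illustration of Haefliger's theorem (the following deduction is standard; the hypotheses are chosen only for convenience), suppose that $M$ is an open parallelizable $n$-manifold and $0<q<n$. Choosing a trivialization $TM\cong M\times\R^n$ and a constant map $\tilde{f}:M\to B\Gamma_q$, the pull-back $\tilde{f}^*\nu\Omega_q$ is the trivial bundle $M\times\R^q$, and any fixed surjective linear map $\R^n\to\R^q$ then furnishes an epimorphism $F:TM\to\nu\Omega_q$ covering $\tilde{f}$. Consequently, every open parallelizable $n$-manifold admits a codimension $q$ foliation for each $0<q<n$.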
Soon after the work of Haefliger, Thurston extended the classification of foliations to closed manifolds, thereby completing the classification problem (\cite{thurston},\cite{thurston1}). Thurston showed that the `concordance classes' of foliations are in one-to-one correspondence with the homotopy classes of $\Gamma_q$ structures $\mathcal H$ together with the `concordance classes' of bundle monomorphisms $\nu(\mathcal H)\to TM$. The proofs of these results are quite involved and beyond the scope of our study. In the thesis, we study foliations whose leaves carry some specific geometric structures. In particular, we are interested in foliations whose leaves are symplectic, locally conformal symplectic or contact manifolds. In his seminal thesis, Gromov showed that the obstruction to the existence of a contact or a symplectic form on open manifolds is purely topological. Gromov obtained these results as applications of the Open Invariant Theorem mentioned above. In a recent article (\cite{fernandes}), Fernandes and Frejlich proved that a foliation with a leafwise non-degenerate 2-form is homotopic through such pairs to a foliation with a leafwise symplectic form. Symplectic foliations on a manifold $M$ can be explained in terms of regular Poisson structures on the manifold (\cite{vaisman}). Recall that a Poisson structure $\pi$ is a bivector field satisfying the condition $[\pi,\pi]=0$, where the bracket denotes the Schouten bracket of multivector fields (\cite{vaisman}). The bivector field $\pi$ induces a vector bundle morphism $\pi^\#:T^*M\to TM$ by $\pi^\#(\alpha)(\beta)=\pi(\alpha,\beta)$ for all $\alpha,\beta\in T^*_xM$, $x\in M$. The characteristic distribution $\mathcal D=\text{Image }\pi^\#$ is, in general, a singular distribution which, however, integrates to a foliation. The restriction of the Poisson structure to a leaf of this foliation has maximal rank, and so dualizing $\pi$ yields a symplectic form on each leaf. Thus, the characteristic foliation is a (singular) symplectic foliation. A Poisson bivector field $\pi$ is said to be \emph{regular} if the rank of $\pi^\#$ is constant. In this case the characteristic foliation is a regular symplectic foliation on $M$. On the other hand, given a regular symplectic foliation $\mathcal F$ on $M$ one can associate to it a Poisson bivector field $\pi$ having $\mathcal F$ as its characteristic foliation. Since a symplectic form on a manifold corresponds to a non-degenerate Poisson structure, Gromov's result on the existence of symplectic forms is equivalent to saying that a non-degenerate bivector field on an open manifold is homotopic to a non-degenerate Poisson structure. In the same light, the result of Fernandes and Frejlich \cite{fernandes} can be translated into the statement that a regular bivector field $\pi_0$ is homotopic to a Poisson bivector field, provided the manifold is \emph{open} and the characteristic distribution of $\pi_0$ is integrable. However, this cannot be done without deforming the underlying characteristic foliation Im$\,\pi_0^{\#}$. It would be pertinent to recall a result of Bertelson which preceded \cite{fernandes}. She showed that a leafwise non-degenerate 2-form on a foliation need not be homotopic to a leafwise symplectic form on the same foliation even if $M$ is open (\cite{bertelson}); in order to keep the underlying foliation constant during the homotopy, one needs to impose some additional `openness' condition on the foliation itself.
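For a minimal concrete illustration of these notions (the example is standard and is not taken from the sources cited above), let $M=\R^3$ with coordinates $(x,y,z)$ and consider the constant bivector field
\[\pi=\partial_x\wedge\partial_y.\]
Since $\pi$ has constant coefficients, $[\pi,\pi]=0$, so $\pi$ is a Poisson structure; it is regular because $\pi^\#$ has constant rank $2$. The characteristic distribution is $\text{Image}\,\pi^\#=\text{span}\{\partial_x,\partial_y\}$, which integrates to the foliation of $\R^3$ by the planes $z=\text{const}$, and dualizing $\pi$ endows each leaf with the symplectic form $dx\wedge dy$. Thus $\pi$ defines a regular symplectic foliation of codimension $1$ on $\R^3$.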
Poisson structures admit a further generalisation to Jacobi structures, which are given by pairs $(\Lambda,E)$ consisting of a bivector field $\Lambda$ and a vector field $E$ on $M$ satisfying the following conditions: \[[\Lambda,\Lambda]=2E\wedge\Lambda,\ \ \ \ \ [\Lambda,E]=0.\] If $E=0$ then clearly $\Lambda$ is a Poisson structure on $M$. A Jacobi structure, as in the Poisson case, is associated with an integrable singular distribution, namely $\mathcal D=\text{Im\,}\Lambda^{\#}+\langle E\rangle$, where $\langle E\rangle$ denotes the distribution generated by the vector field $E$. The leaves of $\mathcal D$ inherit the structures of locally conformal symplectic or contact manifolds according as the dimension of the leaf is even or odd (\cite{kirillov}). In particular, if the characteristic distribution $\mathcal D$ is regular then we obtain either a locally conformal symplectic foliation or a contact foliation on $M$. Motivated by a comment in \cite{fernandes}, we extend the work of Fernandes and Frejlich to give a homotopy classification of contact and locally conformal symplectic foliations. We prove that if an open manifold admits a foliation with a leafwise non-degenerate 2-form, then it admits a locally conformal symplectic foliation with its foliated Lee class defined by a given cohomology class $\xi \in H_{deR}^{1}(M)$. In the same vein, we show that if there is a foliation on an open manifold with a leafwise almost contact structure, then the manifold must admit a contact foliation. We also interpret these results in terms of regular Jacobi structures. In the second part of the thesis, following the steps of Haefliger, we study foliations on open manifolds $M$ in the presence of a contact form $\alpha$ such that the leaves of the foliations are contact submanifolds of $(M,\alpha)$. We first classify those foliations which are obtained by means of maps into a foliated manifold, as in the Gromov-Phillips Theorem. To state it explicitly, let $Tr_\alpha(M,\mathcal F_N)$ denote the space of maps $f:M\to N$ which are transversal to a given foliation $\mathcal F_N$ on $N$ and for which the inverse foliations $f^*\mathcal F_N$ are contact foliations on $M$. Since being a contact form is a stable property of 1-forms, the space $Tr_\alpha(M,\mathcal F_N)$ is realised as the space of solutions of a first order open differential relation $\mathcal R_\alpha$. The space $Tr_\alpha(M,\mathcal F_N)$ is clearly not invariant under \emph{Diff}$(M)$, though it is invariant under the action of contact diffeomorphisms of $M$. This suffices for the $h$-principle of $\mathcal R_\alpha$ near a core $K$ of $M$. In order to lift the $h$-principle to all of $M$, we cannot use the ordinary deformations of $M$ into Op\,$K$: since the relation is not invariant under \emph{Diff}$(M)$, such a deformation would not give a homotopy within $Tr_\alpha(M,\mathcal F_N)$. We would have liked to get deformations of $M$ into $Op\,K$ which keep the contact form invariant. We can, however, only show that if $M$ is an open manifold, then there exists a regular homotopy $\varphi_t$ of isocontact immersions of $M$ into itself such that $\varphi_0=id_M$ and $\varphi_1(M)$ is contained in an arbitrarily small neighbourhood of $K$. In fact, we prove a weaker version of Gray's Stability Theorem for contact forms on open contact manifolds, which is one of the main results of the thesis. It may be recalled that a similar result for open symplectic manifolds was earlier obtained by Ginzburg in \cite{ginzburg}.
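As a concrete low-dimensional illustration (the computation below is ours, with signs depending on the convention used for the Schouten bracket), take $M=\R^3$ with coordinates $(x,y,z)$ and set
\[\Lambda=(\partial_y-x\,\partial_z)\wedge\partial_x,\qquad E=\partial_z.\]
For decomposable bivector fields one has $[X\wedge Y,X\wedge Y]=2\,[X,Y]\wedge X\wedge Y$, and since $[\partial_y-x\,\partial_z,\partial_x]=\partial_z$, we get $[\Lambda,\Lambda]=2E\wedge\Lambda$; moreover, $[\Lambda,E]=0$ because the coefficients of $\Lambda$ do not depend on $z$. Hence $(\Lambda,E)$ is a Jacobi structure. Its characteristic distribution $\text{Im\,}\Lambda^\#+\langle E\rangle$ is all of $T\R^3$, and on the single (odd-dimensional) leaf $\R^3$ the hyperplane field $\text{Im\,}\Lambda^\#=\text{span}\{\partial_x,\ \partial_y-x\,\partial_z\}=\ker(dz+x\,dy)$ is a contact structure, in accordance with the dichotomy described above.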
Coming back to the contact set-up, since the composition of an $f\in Tr_\alpha(M,\mathcal F_N)$ with a contact immersion $\varphi$ of $M$ is again an element of $Tr_\alpha(M,\mathcal F_N)$, we can lift the $h$-principle near $K$ to a global $h$-principle on $M$ using the homotopy $\varphi_t$. More generally, we prove an extension of the Open Invariant Theorem of Gromov on open contact manifolds $(M,\alpha)$. A similar result was obtained for open symplectic manifolds in \cite{datta-rabiul}. Proceeding as in Haefliger, we then prove that the `integrable' homotopy classes of contact foliations are in one-to-one correspondence with the homotopy classes of epimorphisms $F:TM\to \nu\Omega_q$ such that $\ker F\cap \ker\alpha$ is a symplectic subbundle of $\ker\alpha$ relative to the symplectic form defined by $d\alpha$. The thesis is organised as follows. We discuss the preliminaries in Chapter 2, which consists of five parts. In the first two sections we recall the preliminaries of symplectic and contact manifolds and review the basic definitions and examples of foliations. In the third section we introduce foliations with geometric structures and review the basic theory of Poisson and Jacobi structures. In the last two sections we discuss the language of the $h$-principle and some major results, including Haefliger's classification theorem, which serve as background for the problems treated in the thesis. In Chapter 3, we give a classification of contact and locally conformal symplectic foliations and then interpret these results in terms of regular Jacobi structures. Chapter 4 is again divided into several sections. In Section 1 we recall a homotopy classification of submersions with symplectic fibres on open symplectic manifolds (\cite{datta-rabiul}) and note that a generalisation of this result leads to a homotopy classification of symplectic foliations on open symplectic manifolds. In Section 2 we prove a `stability theorem' for contact forms on open contact manifolds. In Section 3 we obtain an extension of the Open Invariant Theorem of Gromov in the contact set-up. In Sections 4 and 5 we prove a contact version of the Gromov-Phillips Theorem and discuss some of its special cases. In the final section we obtain a homotopy classification of contact foliations on open contact manifolds. \chapter{Preliminaries} \section{Preliminaries of symplectic and contact manifolds} In this section we review various geometric structures on manifolds which are defined by differential forms. These are standard in the mathematical literature and can be found in \cite{mcduff-salamon} and \cite{geiges1}. \subsection{Symplectic manifolds} \begin{definition} {\em An antisymmetric bilinear form $\omega$ on a vector space $V$ defines a linear map $\tilde{\omega}:V\to V^*$ given by $\tilde{\omega}(v)(v')=\omega(v,v')$ for all $v,v'\in V$. The dimension of the image of $\tilde{\omega}$ (which is an even integer) is called the \emph{rank} of $\omega$. A 2-form $\omega$ is said to be \emph{non-degenerate} if $\tilde{\omega}$ is an isomorphism; equivalently, if $\omega(v,w)=0$ for all $w\in V$ implies that $v=0$. A vector space $V$ is called a \emph{symplectic vector space} if there exists a non-degenerate 2-form $\omega$ on it. } \end{definition} Since the rank of a 2-form is an even integer, a symplectic vector space is even dimensional. If $\dim V=2n$, then $\omega$ is non-degenerate if and only if $\omega^n\neq 0$.
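As a quick application of this criterion (a standard computation, included here for illustration), consider the form $\omega_0=\sum_{i=1}^n dx_i\wedge dy_i$ on $V=\R^{2n}$. The $2$-forms $dx_i\wedge dy_i$ commute with one another under the wedge product and square to zero, so the multinomial expansion yields
\[\omega_0^n=n!\;dx_1\wedge dy_1\wedge\cdots\wedge dx_n\wedge dy_n\neq 0,\]
showing that $\omega_0$ is non-degenerate.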
The \emph{symplectic complement} of a subspace $W$ in a symplectic vector space $(V,\omega)$, denoted by $W^{\perp_\omega}$, is defined as \[W^{\perp_{\omega}}=\{v\in V:\omega(v,w)=0 \text{ for all }\ w\in W\}.\] A subspace $W$ of a symplectic vector space $(V,\omega)$ is said to be \emph{symplectic} if the restriction of $\omega$ to $W$ is symplectic. The symplectic complement of a symplectic subspace $W$ is also a symplectic subspace of $V$, and $V=W\oplus W^{\perp_\omega}$. \begin{definition} \em{A 2-form $\omega$ on a manifold $M$ is said to be an \emph{almost symplectic form} \index{almost symplectic form} if its restrictions to the tangent spaces $T_xM$, $x\in M$, are non-degenerate. An almost symplectic form which is also closed is called a \emph{symplectic form}\index{symplectic form} on the manifold. Manifolds equipped with such forms are called almost symplectic and symplectic manifolds respectively.} \end{definition} \begin{example}\label{ex:symp}\end{example} \begin{enumerate}\item The Euclidean space $\mathbb{R}^{2n}$ has a canonical symplectic form given by $\omega_0 = \Sigma_idx_i\wedge dy_i$, where $(x_1,\dots,x_n,y_1,\dots,y_n)$ is the canonical coordinate system on $\R^{2n}$. \item All oriented surfaces are symplectic manifolds. \item The 2-sphere $\mathbb{S}^2$ is a symplectic manifold, but the spheres $\mathbb{S}^{2n}$ are not for $n>1$. \item The 6-dimensional sphere $\mathbb{S}^6$ is an example of an almost symplectic manifold which is not symplectic. \item The total space of the cotangent bundle of any manifold has a canonical symplectic form which is exact.\end{enumerate} \begin{definition} \label{D:symplectic vector bundle} \em{ Let $p:E\to B$ be a vector bundle over a topological space $B$. Let $\wedge^k(E^*)$ denote the $k$-th exterior bundle associated with the dual $E^*$. A section $\omega$ of $\wedge^2(E^*)$ is called a symplectic form on $E$ if $\omega_b$ is a symplectic form on the fiber $E_b$ for all $b\in B$. The pair $(E,\omega)$ is then called a symplectic vector bundle.} \end{definition} Clearly, the tangent bundle of a symplectic manifold is a symplectic vector bundle. \begin{definition}{\em Let $(M,\omega)$ and $(N,\omega')$ be two symplectic manifolds. A diffeomorphism $f:M\to N$ is said to be a \emph{symplectomorphism} if $f^*\omega'=\omega$.} \end{definition} The following theorem implies that there are no local invariants for symplectic manifolds. \begin{theorem}(Darboux) \label{T:darboux} Any symplectic manifold $(M^{2n},\omega)$ is locally symplectomorphic to the Euclidean manifold $(\R^{2n},\omega_0)$, where $\omega_0$ is the standard symplectic form defined as in Example~\ref{ex:symp}. \end{theorem} \begin{definition}{\em Two symplectic forms $\omega_0,\omega_1$ on a manifold $M$ are said to be \emph{isotopic} if there is an isotopy $\delta_t$, $t\in [0,1]$, with $\delta_0=id_M$, such that $\delta_1^*\omega_0=\omega_1$}. \end{definition} Therefore, if $\omega_0$ and $\omega_1$ are isotopic, they can be joined by a path $\omega_t$ in the space of symplectic forms such that the cohomology class of $\omega_t$ is independent of $t$. Explicitly, one can take $\omega_t=\delta_t^*\omega_0$ for $t\in\mathbb I=[0,1]$. The following theorem due to Moser says that the converse of this is true on a closed manifold. \begin{theorem}(Moser's Stability Theorem \cite{moser}) \label{T:moser's stability} Let $M$ be a closed manifold (that is, compact and without boundary) and let $\omega_t$, $t\in \mathbb I$, be a family of symplectic forms belonging to the same de Rham cohomology class.
Then there exists an isotopy $\{\phi_t\}_{t\in \mathbb I}$ of $M$ such that $\phi_0=id_M$ and $\phi_t^*\omega_t=\omega_0$. \end{theorem} A version of Moser's stability theorem for open manifolds was proved by Ginzburg in \cite{ginzburg}. Here we give a version due to Eliashberg. \begin{theorem}(\cite{eliashberg}) \label{T:equidimensional-symplectic-immersion} Let $(\tilde{M},\tilde{\omega})$ be a symplectic manifold without boundary and let $M$ be an equidimensional submanifold of $\tilde{M}$ with boundary. Suppose that $\omega_t,\ t\in \mathbb I$, is a family of symplectic forms on $M$ representing the same cohomology class. If $\tilde{\omega}|_M=\omega_0$, then there exists a regular homotopy $f_t:M\to \tilde{M}$ (that is, a homotopy of immersions) such that $f_0$ is the inclusion $M\to \tilde{M}$ and $f_t^*\tilde{\omega}=\omega_t,\ t\in \mathbb I$. \end{theorem} We shall obtain a contact analogue of this result in Chapter 4. \subsection{Locally conformal symplectic manifolds} \begin{definition}{\em A non-degenerate 2-form $\omega$ on a manifold $M$ is said to be \emph{conformal symplectic} \index{conformal symplectic} if there is a nowhere vanishing $C^\infty$ function $f$ on $M$ such that $f\omega$ is a symplectic form.}\end{definition} \begin{definition}{\em A \emph{locally conformal symplectic} structure on a manifold $M$ is given by a pair $(\omega,\theta)$, where $\omega$ is a non-degenerate 2-form and $\theta$ is a closed 1-form on $M$ satisfying the relation \begin{equation}d\omega+\theta\wedge\omega=0.\label{def_lcs} \index{locally conformal symplectic} \end{equation} The form $\theta$ is called the \emph{Lee form} of $\omega$. \index{Lee form} If $\dim M\geq 4$ then $\omega\wedge -:\Omega^1(M)\to \Omega^3(M)$ is injective because of the non-degeneracy of $\omega$. In this case, $\theta$ is uniquely determined by the relation (\ref{def_lcs}). }\end{definition} If $\omega$ is a locally conformal symplectic form, then there is an open covering $\{U_i\}_{i\in I}$ of $M$ such that $d\omega=df_i\wedge \omega$ on $U_i$ for some smooth functions $f_i$ defined on $U_i$. This implies that $d(e^{-f_i}\omega)=0$, that is, $\omega$ is conformal symplectic on each $U_i$. This can be taken as an alternative definition of a locally conformal symplectic structure if $\dim M\geq 4$. \noindent\textbf{Lichnerowicz cohomology.} A closed 1-form $\theta$ on a manifold $M$ defines a coboundary operator $d_\theta:\Omega^*(M)\to \Omega^{*+1}(M)$ by \[d_\theta=d+\theta\wedge\ \index{$d_\theta$},\] where $d$ is the exterior differential operator on differential forms. Indeed, it is easy to verify that $d_\theta^2\alpha=d\theta\wedge \alpha$ for any differential form $\alpha$, and therefore $d_\theta^2=0$ if and only if $\theta$ is closed. The resulting cohomology is called the Lichnerowicz cohomology; up to isomorphism, it depends only on the de Rham cohomology class of $\theta$. A locally conformal symplectic form with Lee form $\theta$ is therefore a $d_\theta$-closed non-degenerate 2-form on $M$. \subsection{Contact manifolds} A hyperplane distribution $\xi$ on a manifold $M$ can be locally written as $\xi=\ker \alpha$ for some local 1-form $\alpha$ on $M$. The form $\alpha$ is unique only up to multiplication by a nowhere vanishing function. If $\xi$ is coorientable, i.e.
when the quotient bundle $TM/\xi$ is a trivial line bundle, then $\xi$ is obtained as the kernel of a global 1-form on $M$ given by the following composition \[TM\stackrel{q}{\to}TM/\xi\cong M\times\R\stackrel{p_1}{\to}\R,\] where $q$ is the quotient map and $p_1$ is the projection onto the $\R$-factor. \begin{definition}{\em Let $M$ be a $2n+1$ dimensional manifold. A hyperplane distribution $\xi$ is called a \emph{contact structure} \index{contact structure} if $\alpha \wedge (d\alpha)^n$ is nowhere vanishing for any local 1-form $\alpha$ defining $\xi$. A global 1-form $\alpha$ for which $\alpha \wedge (d\alpha)^n$ is nowhere vanishing is called a \emph{contact form} \index{contact form} on $M$. The distribution $\ker\alpha$ is then called the \emph{contact distribution of} $\alpha$.}\label{contact_form} \end{definition} \begin{example}\end{example} \begin{enumerate}\item Every odd dimensional Euclidean space $\R^{2n+1}$ has a canonical contact form given by $\alpha=dz+\sum_{i=1}^nx_i\,dy_i$, where $(x_1,\dots,x_n,y_1,\dots,y_n,z)$ is the canonical coordinate system on $\R^{2n+1}$. \item Every even dimensional Euclidean space $\R^{2n}$ has a canonical 1-form $\lambda=\sum_{i=1}^n(x_idy_i-y_idx_i)$, called the Liouville form of $\R^{2n}$, where $(x_1,\dots,x_n$, $y_1,\dots,y_n)$ is the canonical coordinate system on $\R^{2n}$. The restriction of $\lambda$ to the unit sphere in $\R^{2n}$ defines a contact form. \item For any manifold $M$, the total space of the vector bundle $T^*M\times\R\to M$ has a canonical contact form. \end{enumerate} If $\alpha$ is a contact form then \[d'\alpha=d\alpha|_{\ker\alpha}\index{$d'\alpha$}\] defines a symplectic structure on the contact distribution $\xi=\ker\alpha$. Also, there is a global vector field $R_\alpha$ on $M$ defined by the relations \begin{equation}\alpha(R_\alpha)=1,\ \ \ i_{R_\alpha}d\alpha=0,\label{reeb} \index{Reeb vector field} \end{equation} where $i_X$ denotes the interior multiplication by the vector field $X$. Thus, $TM$ has the following decomposition: \begin{equation}TM=\ker\alpha \oplus \ker\,d\alpha,\label{decomposition}\end{equation} where $\ker\alpha$ is a symplectic vector bundle and $\ker\,d\alpha$ is the 1-dimensional subbundle generated by $R_\alpha$. The vector field $R_\alpha$ is called the \emph{Reeb vector field} of the contact form $\alpha$. A contact form $\alpha$ also defines a canonical isomorphism $\phi:TM\to T^*M$ between the tangent and the cotangent bundles of $M$ given by \begin{equation}\phi(X)=i_X d\alpha+\alpha(X)\alpha, \text{ for } X\in TM.\label{tgt_cotgt}\end{equation} It is easy to see that the Reeb vector field $R_\alpha$ corresponds to the 1-form $\alpha$ under $\phi$. \begin{definition} {\em Let $(N,\xi)$ be a contact manifold. A monomorphism $F:TM\to (TN,\xi)$ is called \textit{contact} if $F$ is transversal to $\xi$ and $F^{-1}(\xi)$ is a contact structure on $M$. A smooth map $f:M\to (N,\xi)$ is called \textit{contact} if its differential $df$ is contact. If $M$ is also a contact manifold with a contact structure $\xi_0$, then a monomorphism $F:TM\to TN$ is said to be \textit{isocontact} if $\xi_0=F^{-1}\xi$ and $F:\xi_0\to\xi$ is conformal symplectic with respect to the conformal symplectic structures on $\xi_0$ and $\xi$. A smooth map $f:M\to N$ is said to be \textit{isocontact} if $df$ is isocontact. A diffeomorphism $f:(M,\xi)\to (N,\xi')$ is said to be a \emph{contactomorphism} \index{contactomorphism} if $df$ is isocontact.
\index{isocontact map}} \end{definition} If $\xi=\ker\alpha$ for a globally defined 1-form $\alpha$ on $N$, then $f$ is contact if $f^*\alpha$ is a contact form on $M$. Furthermore, if $\xi_0=\ker\alpha_0$ then $f$ is isocontact if $f^*\alpha=\varphi \alpha_0$ for some nowhere vanishing function $\varphi:M\to\R$. \begin{definition}{\em A vector field $X$ on a contact manifold $(M,\alpha)$ is called a \emph{contact vector field} if it satisfies the relation $\mathcal L_X\alpha=f\alpha$ for some smooth function $f$ on $M$, where $\mathcal L_X$ denotes the Lie derivation operator with respect to a vector field $X$.}\label{D:contact_vector_field} \index{contact vector field}\end{definition} Every smooth function $H$ on a contact manifold $(M,\alpha)$ gives a contact vector field $X_H=X_0+\bar{X}_H$ defined as follows: \begin{equation}X_0=HR_\alpha \ \ \ \text{ and }\ \ \ \bar{X}_H\in \Gamma(\xi) \text{ such that }i_{\bar{X}_H}d\alpha|_\xi = -dH|_\xi,\label{contact_hamiltonian} \index{contact hamiltonian}\end{equation} where $\xi=\ker\alpha$; equivalently, \begin{equation}\alpha(X_H)=H\ \ \text{ and }\ \ i_{X_H}d\alpha=-dH+dH(R_{\alpha})\alpha.\label{contact_hamiltonian1} \end{equation} The vector field $X_H$ is called the \emph{contact Hamiltonian vector field} of $H$. If $\phi_t$ is a local flow of a contact vector field $X$, then \[\frac{d}{dt}\phi_t^*\alpha = \phi_t^*(i_X d\alpha+d(\alpha(X)))=\phi_t^*(f\alpha)=(f\circ\phi_t)\phi_t^*\alpha.\] Therefore, $\phi_t^*\alpha=\lambda_t\alpha$, where $\lambda_t=e^{\int_0^t f\circ\phi_s \,ds}$. Thus the flow of a contact vector field preserves the contact structure. \begin{theorem}Every contact form $\alpha$ on a manifold $M$ of dimension $2n+1$ can be locally represented as $dz-\sum_{i=1}^np_i\,dq_i$, where $(z,q_1,\dots,q_n,p_1,\dots,p_n)$ is a local coordinate system on $M$.\end{theorem} \begin{theorem}(Gray's Stability Theorem \cite{gray}) \label{T:gray stability} If $\xi_t,\ t\in \mathbb I$, is a smooth family of contact structures on a closed manifold $M$, then there exists an isotopy $\psi_t,\ t\in \mathbb I$, of $M$ such that \[d\psi_t(\xi_0)=\xi_t\ \text{ for all }\ t\in \mathbb I.\] \end{theorem} Next we shall give some examples of compact domains $U$ with piecewise smooth boundary in a contact manifold which contract into themselves by isocontact embeddings. We shall first recall the formal definition of such domains (\cite{eliashberg}). \begin{definition} \em{Let $U$ be a compact domain with piecewise smooth boundary in a contact manifold $(M,\alpha)$; $U$ is called \emph{contactly contractible} if there exists a contact vector field $X$ which is inward transversal to the boundary of $U$ and is such that its flow $\psi_t$ satisfies the following property: \[\psi_t^*\alpha =h_t\alpha, \text{ where } h_t\to 0\ \text{ as }\ t\to +\infty.\] } \end{definition} \begin{example} \label{L:contactly contractible domains}\end{example} \begin{enumerate}\item The Euclidean ball in $(\mathbb{R}^{2n+1},dz-\Sigma_1^n(x_jdy_j-y_jdx_j))$ centered at the origin; \item the semi-ball centred at the origin, i.e., one half of the Euclidean ball cut by a hyperplane; \item the image of a contactly contractible domain under a $C^1$-small diffeomorphism.\end{enumerate} \begin{remark}{\em In Chapter 4, we shall see an extension of Theorem~\ref{T:gray stability} to non-closed contact manifolds, which is one of the main results in the thesis (see Theorem~\ref{T:equidimensional_contact immersion}).} \end{remark} We end this section with the concept of a contact submanifold.
\begin{definition} {\em A submanifold $N$ of a contact manifold $(M,\xi)$ is said to be a \emph{contact submanifold} if the inclusion map $i:N\to M$ is a contact map.}\label{contact_submanifold} \index{contact submanifold}\end{definition} \begin{lemma} A submanifold $N$ of a contact manifold $(M,\alpha)$ is a contact submanifold if and only if $TN$ is transversal to $\xi|_N$ and $TN\cap\xi|_N$ is a symplectic subbundle of $(\xi,d'\alpha)$.\label{L:contact_submanifold}\end{lemma} \newpage \section{Preliminaries of foliations} In this section we recall the definition of foliations and some basic examples; our main reference is \cite{moerdijk}. We also review the notion of $\Gamma_q$-structures and their relation with foliations, following \cite{haefliger}. \subsection{Foliations} Foliations on $n$-dimensional manifolds are modelled on the product structure $\R^q\times\R^{n-q}$ of $\R^n$ for some $q>0$. We will call a diffeomorphism $f:\R^q\times\R^{n-q}\to \R^q\times\R^{n-q}$ \emph{admissible} if there are smooth functions $g: \R^q \to \R^q$ and $h:\R^q\times\R^{n-q}\to \R^{n-q}$ such that \[f(x,y)=(g(x),h(x,y)) \text{ for all } (x,y)\in \R^q\times\R^{n-q}.\] \begin{definition}\label{D:foliation atlas} \em{ A codimension $q$ \emph{foliation atlas} on a manifold $M$ is defined by an atlas $\{U_i,\phi_i\}_{i\in I}$, where $\{U_i\}$ is an open cover of $M$ and \[\phi_i:U_i\to \phi_i(U_i)\subset \R^{q}\times\R^{n-q}\] are diffeomorphisms such that the transition maps \[\phi_j\phi_i^{-1}:\phi_i(U_i\cap U_j)\to \phi_j(U_i\cap U_j)\] are admissible maps. A codimension $q$ \emph{foliation} \index{foliation} on a manifold is a maximal foliation atlas on it. For any foliation chart $(U_i,\phi_i)$, the sets $\phi_i^{-1}(x\times\R^{n-q})$ are called \emph{plaques}. Since the transition maps are admissible, the plaques defined by $\phi_i$ and $\phi_j$ through a point $p\in U_i\cap U_j$ coincide in a neighbourhood of $p$. We define an equivalence relation on $M$ as follows: Two points $p$ and $q$ in $M$ are equivalent if there is a sequence of points $p=p_0,p_1,\dots,p_k=q$ such that any two consecutive points $p_i$ and $p_{i+1}$ lie on a common plaque. The equivalence classes of this relation are called \emph{leaves} of the foliation. These are injectively immersed submanifolds of $M$.}\end{definition} \subsection{Foliations as involutive distributions} The tangent spaces of the plaques (or leaves) of a foliation $\mathcal F$ define a subbundle $T\mathcal F$ of $TM$, called the \emph{tangent bundle of $\mathcal F$},\index{$T\mathcal F$} which is clearly an involutive distribution. A subbundle $D$ of $TM$ is said to be \emph{involutive} if the space of sections of $D$ is closed under the Lie bracket of vector fields, that is, if $X,Y\in \Gamma(D)$ then so is $[X,Y]=XY-YX$. Conversely, if $D$ is an involutive distribution on a manifold $M$, then the Frobenius Theorem (\cite{warner}) says that $D$ is integrable; that is, through any point $x\in M$ there exists a maximal integral submanifold of $D$. The integral submanifolds of $D$ are the leaves of a foliation $\mathcal F$ on $M$.
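The following pair of elementary examples on $\R^3$ (included here for illustration) contrasts the involutive and non-involutive situations. The distribution $D_1=\ker dz=\text{span}\{\partial_x,\partial_y\}$ is involutive and integrates to the foliation of $\R^3$ by the planes $z=\text{const}$. On the other hand, the distribution $D_2=\ker(dz-x\,dy)=\text{span}\{\partial_x,\ \partial_y+x\,\partial_z\}$ satisfies
\[[\partial_x,\ \partial_y+x\,\partial_z]=\partial_z\notin D_2,\]
so $D_2$ is not involutive and hence is not the tangent distribution of any foliation; in fact, $dz-x\,dy$ is a contact form on $\R^3$ (compare the examples of the previous section, up to sign).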
\subsection{Foliations as Haefliger cocycles} A foliation $\mathcal F$ on a manifold can also be defined by the following data: \begin{enumerate}\item an open covering $\{U_i, i\in I\}$ of $M$; \item submersions $s_i:U_i\to \R^q$ for each $i\in I$; \item local diffeomorphisms $h_{ij}:s_i(U_i\cap U_j)\to s_j(U_i\cap U_j)$ for all $i,j\in I$ for which $U_i\cap U_j\neq\emptyset$, \end{enumerate} satisfying the commutativity relations \[h_{ij}s_i=s_j \text{ on } U_i\cap U_j \text{ for all }(i,j) \] and the cocycle conditions \[h_{jk}h_{ij}=h_{ik} \text{ on }s_i(U_i\cap U_j\cap U_k).\] The diffeomorphisms $\{h_{ij}\}$ are referred to as \emph{Haefliger cocycles}\index{Haefliger cocycles}. Since the $s_i$ are submersions, the sets $s_i^{-1}(x)$ are submanifolds of $U_i$ of codimension $q$. Furthermore, since \[s_i^{-1}(x)=s_j^{-1}(h_{ij}(x)) \ \text{ for all }\ x\in s_i(U_i\cap U_j),\] the sets $s_i^{-1}(x)$ patch up to define a decomposition of $M$ into immersed submanifolds of codimension $q$. These submanifolds are the leaves of a foliation $\mathcal F$ on $M$. The tangent distribution $T\mathcal F$ is given by the local data $\ker ds_i$, $i\in I$. On the other hand, if the foliation data is given by $\{U_i,\phi_i\}$ as in Definition~\ref{D:foliation atlas} then $s_i:U_i\to \R^q$ defined by $s_i=p_1\circ \phi_i$ are submersions, where $p_1:\R^q\times\R^{n-q}\to \R^q$ is the projection onto the first factor. Since $\phi_j\phi_i^{-1}$ is an admissible map, $h_{ij}:s_i(U_i\cap U_j)\to s_j(U_i\cap U_j)$ given by $h_{ij}(s_i(x))=s_j(x)$ is well-defined on $s_i(U_i\cap U_j)$. Furthermore, $\{h_{ij}\}$ satisfy the cocycle conditions. \begin{definition}{\em Let $\mathcal F$ be a foliation on a manifold $M$. The quotient bundle $TM/T\mathcal F$ is called the \emph{normal bundle of the foliation $\mathcal F$} and is denoted by $\nu\mathcal F$.\index{$\nu(\mathcal F)$}}\end{definition} If a foliation is given by the Haefliger data $\{U_i,s_i,h_{ij}\}$, then note that $(ds_i)_x:T_xM\to \mathbb{R}^q$ are surjective linear maps and $\ker(ds_i)_x=T_x\mathcal{F}$ for all $x\in U_i$. Therefore, $s_i$ induces an isomorphism $\tilde{s}_i:\nu(\mathcal F)|_{U_i}\to U_i\times\mathbb{R}^q$ given by \[\tilde{s}_i(v+T_x\mathcal F)=(ds_i)_x(v)\ \text{ for all }v\in T_xM.\] Noting that $(ds_j)_x\circ (ds_i)_x^{-1}$ is well defined for all $x\in U_i\cap U_j$ (because $\ker(ds_i)_x=\ker(ds_j)_x=T_x\mathcal F$), the transition maps of the normal bundle of $\mathcal F$ are given as follows: \[\tilde{s}_j(x)\tilde{s}_i(x)^{-1}=(ds_j)_x\circ (ds_i)_x^{-1}=(dh_{ij})_{s_i(x)},\] where the second equality follows from the relation $h_{ij}s_i=s_j$. \begin{definition} A smooth map $f:(M,\mathcal F)\to (M',\mathcal F')$ between foliated manifolds is said to be a \emph{foliation preserving map} if the derivative map of $f$ takes $T\mathcal F$ into $T\mathcal F'$. \end{definition} \subsection{Maps transversal to a foliation} The simplest type of foliations on manifolds are defined by submersions. Indeed, if $f:M\to N$ is a submersion then the fibres $f^{-1}(x)$ define (the leaves of) a foliation on the manifold $M$. In this case the leaves turn out to be embedded submanifolds of $M$. Now let $N$ itself be equipped with a foliation $\mathcal{F}_N$ of codimension $q$. In general, the inverse images of the leaves of a foliation on $N$ under a smooth map $f:M\to N$ need not give a foliation on $M$. We require some additional condition on the map, and this brings us to the notion of maps transversal to a foliation.
Let $N$ be a manifold with a foliation $\mathcal F_N$ and let $q:TN\to \nu(\mathcal F_N)$ denote the quotient map. A smooth map $f:M\to N$ is said to be \emph{transversal to the foliation} $\mathcal F_N$ if $q\circ df:TM\to \nu(\mathcal F_N)$ is an epimorphism; in other words, \[df_x(T_xM) +(T\mathcal F_N)_{f(x)}=T_{f(x)}N \mbox{ \ for all \ }x\in M.\] If $\mathcal{F}_N$ is represented by the Haefliger data $\{U_i,s_i,h_{ij}\}$, then $\{f^{-1}(U_i),s_i\circ f, h_{ij}\}$ gives a Haefliger structure on $M$; indeed, the transversality of $f$ ensures that each $s_i\circ f$ is a submersion. The associated foliation is referred to as the \emph{inverse image foliation of $\mathcal F_N$ under} $f$ and is denoted by $f^*\mathcal F_N$. The leaves of $f^*\mathcal F_N$ are the preimages of the leaves of $\mathcal F_N$ under $f$. Hence the codimension of $f^*\mathcal F_N$ is the same as that of $\mathcal F_N$. \subsection{$\Gamma_q$ structures\label{classifying space}} In this section we review some basic facts about $\Gamma$-structures for a topological groupoid $\Gamma$, following \cite{haefliger}. We also recall the connection between foliations on manifolds and $\Gamma_q$ structures, where $\Gamma_q$ is the groupoid of germs of local diffeomorphisms of $\R^q$\index{$\Gamma_q$}. For preliminaries on topological groupoids we refer to \cite{moerdijk}. \begin{definition}\label{GS}{\em Let $X$ be a topological space with an open covering $\mathcal{U}=\{U_i\}_{i\in I}$ and let $\Gamma$ be a topological groupoid over a space $B$. A 1-cocycle on $X$ over $\mathcal U$ with values in $\Gamma$ is a collection of continuous maps \[\gamma_{ij}:U_i \cap U_j\rightarrow \Gamma\] such that \[\gamma_{ik}(x)=\gamma_{ij}(x)\gamma_{jk}(x),\ \text{ for all }\ x\in U_i \cap U_j \cap U_k. \] The above conditions imply that $\gamma_{ii}$ has its image in the space of units of $\Gamma$, which can be identified with $B$ via the unit map $1:B\to \Gamma$. We call two 1-cocycles $(\{U_i\}_{i\in I},\gamma_{ij})$ and $(\{\tilde{U}_k\}_{k\in K},\tilde{\gamma}_{kl})$ equivalent if for each $i\in I$ and $k\in K$, there are continuous maps \[\delta_{ik}:U_i \cap \tilde{U}_k\rightarrow \Gamma\] such that \[\delta_{ik}(x)\tilde{\gamma}_{kl}(x)=\delta_{il}(x)\ \text{for}\ x\in U_i \cap \tilde{U}_k \cap \tilde{U}_l\] \[\gamma_{ji}(x)\delta_{ik}(x)=\delta_{jk}(x)\ \text{for}\ x\in U_i \cap U_j \cap \tilde{U}_k.\] An equivalence class of 1-cocycles is called a $\Gamma$-\emph{structure}\index{$\Gamma$-structure}. These structures have also been referred to as Haefliger structures in the later literature.} \end{definition} For a continuous map $f:Y\rightarrow X$ and a $\Gamma$-structure $\Sigma=(\{U_i\}_{i\in I},\gamma_{ij})$ on $X$, the \emph{pullback $\Gamma$-structure} $f^*\Sigma$ is defined by the covering $\{f^{-1}U_i\}_{i \in I}$ together with the cocycles $\gamma_{ij}\circ f$. If $f,g:Y\to X$ are homotopic maps and $\Sigma$ is a $\Gamma$-structure on $X$, then the pull-back structures $f^*\Sigma$ and $g^*\Sigma$ need not be the same; however, they are homotopic in the following sense. \begin{definition}{\em Two $\Gamma$-structures $\Sigma_0$ and $\Sigma_1$ on a topological space $X$ are called \emph{homotopic} if there exists a $\Gamma$-structure $\Sigma$ on $X\times I$, such that $i_0^*\Sigma=\Sigma_0$ and $i_1^*\Sigma=\Sigma_1$, where $i_0:X\to X\times I$ and $i_1:X\to X\times I$ are the canonical injections defined by $i_t(x)=(x,t)$ for $t=0,1$.}\end{definition} \begin{definition}{\em Let $\Gamma$ be a topological groupoid with space of units $B$, source map $\mathbf{s}$ and target map $\mathbf{t}$.
Consider the infinite sequences \[(t_0,x_0,t_1,x_1,...)\] with $t_i \in [0,1]$, $x_i \in \Gamma$, such that all but finitely many of the $t_i$ are zero, $\sum_i t_i = 1$ and $\mathbf{t}(x_i)=\mathbf{t}(x_j)$ for all $i,j$. Two such sequences \[(t_0,x_0,t_1,x_1,...)\] and \[(t'_0,x'_0,t'_1,x'_1,...)\] are called equivalent if $t_i=t'_i$ for all $i$ and $x_i=x'_i$ for all $i$ with $t_i\neq 0$. Denote the set of all equivalence classes by $E\Gamma$. The topology on $E\Gamma$ is defined to be the weakest topology making the following set maps continuous: \[t_i:E\Gamma \rightarrow [0,1]\ \text{ given by }\ (t_0,x_0,t_1,x_1,...)\mapsto t_i \] \[x_i: t_i^{-1}(0,1] \rightarrow \Gamma \ \text{ given by }\ (t_0,x_0,t_1,x_1,...)\mapsto x_i.\] There is also a `$\Gamma$-action' on $E\Gamma$ as follows: Two elements $(t_0,x_0,t_1,x_1,...)$ and $(t'_0,x'_0,t'_1,x'_1,...)$ in $E\Gamma$ are said to be $\Gamma$-equivalent if $t_i=t'_i$ for all $i$, and if there exists a $\gamma\in \Gamma$ such that $x_i=\gamma x'_i$ for all $i$ with $t_i\neq 0$. The set of equivalence classes with the quotient topology is called the \emph{classifying space of} $\Gamma$, and is denoted by $B\Gamma$\index{$B\Gamma$}.} \end{definition} Let $p: E\Gamma \rightarrow B\Gamma$ denote the quotient map. The maps $t_i:E\Gamma \rightarrow [0,1]$ project down to maps $u_i:B\Gamma \rightarrow [0,1]$ such that $u_i \circ p=t_i$. The classifying space $B\Gamma$ has a natural $\Gamma$-structure $\Omega=(\{V_i\}_{i\in I},\gamma_{ij})$, where $V_i=u_i^{-1}(0,1]$ and $\gamma_{ij}:V_i \cap V_j \rightarrow \Gamma$ is given by \[(t_0,x_0,t_1,x_1,...)\mapsto x_i x_j^{-1}.\] We shall refer to this $\Gamma$ structure as the \emph{universal $\Gamma$-structure}\index{universal $\Gamma$-structure}. For any two topological groupoids $\Gamma_1,\Gamma_2$ and any groupoid homomorphism $f:\Gamma_1\rightarrow \Gamma_2$ there exists a continuous map \[Bf:B\Gamma_1\rightarrow B\Gamma_2,\] defined by the functorial construction. \begin{definition}{\em (Numerable $\Gamma$-structure) Let $X$ be a topological space. An open covering $\mathcal{U}=\{U_i\}_{i\in I}$ of $X$ is called \emph{numerable} if it admits a locally finite partition of unity $\{u_i\}_{i\in I}$ such that $u_i^{-1}(0,1]\subset U_i$. If a $\Gamma$-structure can be represented by a 1-cocycle whose covering is numerable, then the $\Gamma$-structure is called \emph{numerable}. } \end{definition} It can be shown that every $\Gamma$-structure on a paracompact space is numerable. \begin{definition}{\em Let $X$ be a topological space. Two numerable $\Gamma$-structures on $X$ are called \emph{numerably homotopic} if there exists a homotopy of numerable $\Gamma$-structures joining them.} \end{definition} Haefliger proved that the homotopy classes of numerable $\Gamma$-structures on a topological space $X$ are in one-to-one correspondence with the homotopy classes of continuous maps $X\to B\Gamma$. \begin{theorem}(\cite{haefliger1}) \label{CMT} Let $\Gamma$ be a topological groupoid and $\Omega$ the universal $\Gamma$ structure on $B\Gamma$. Then \begin{enumerate} \item $\Omega$ is numerable. \item If $\Sigma$ is a numerable $\Gamma$-structure on a topological space $X$, then there exists a continuous map $f:X\rightarrow B\Gamma$ such that $f^*\Omega$ is homotopic to $\Sigma$. \item If $f_0,f_1:X\rightarrow B\Gamma$ are two continuous functions, then $f_0^*\Omega$ is numerably homotopic to $f_1^*\Omega$ if and only if $f_0$ is homotopic to $f_1$.
\end{enumerate} \end{theorem} \subsection{$\Gamma_q$-structures and their normal bundles} We now specialise to the groupoid $\Gamma_q$ of germs of local diffeomorphisms of $\mathbb{R}^{q}$. The source map $\mathbf s:\Gamma_q\to \R^q$ and the target map $\mathbf t:\Gamma_q\to \R^q$ are defined as follows: If $\phi\in\Gamma_q$ represents a germ at $x$, then \[{\mathbf s}(\phi)=x\ \ \text{ and }\ \ {\mathbf t}(\phi)=\phi(x).\] The units of $\Gamma_q$ consist of the germs of the identity map at points of $\R^q$. $\Gamma_q$ is topologised as follows: For a local diffeomorphism $f:U\rightarrow f(U)$, where $U$ is an open set in $\mathbb{R}^q$, define $U(f)$ as the set of germs of $f$ at the points of $U$. The collection of all such $U(f)$ forms a basis for a topology on $\Gamma_q$ which makes it a topological groupoid. The derivative map gives a groupoid homomorphism \[\bar{d}:\Gamma_q \rightarrow GL_q(\mathbb{R})\] which takes the germ of a local diffeomorphism $\phi$ of $\R^q$ at $x$ to $d\phi_x$. Thus, to each $\Gamma_q$-structure $\omega$ on a topological space $M$ there is an associated (isomorphism class of) $q$-dimensional vector bundle $\nu(\omega)$ over $M$, which is called the \emph{normal bundle of} $\omega$. In fact, if $\omega$ is defined by the cocycles $\gamma_{ij}$ then the cocycles $\bar{d}\circ \gamma_{ij}$ define the vector bundle $\nu(\omega)$. Moreover, equivalent cocycles give rise to isomorphic normal bundles. Thus the normal bundle of a $\Gamma_q$ structure is the isomorphism class of the normal bundle of any representative cocycle. If two $\Gamma_q$ structures $\Sigma_0$ and $\Sigma_1$ are homotopic then there exists a $\Gamma_q$ structure $\Sigma$ on $X\times I$ such that $i_0^*\Sigma=\Sigma_0$ and $i_1^*\Sigma=\Sigma_1$, where $i_0:X\to X\times \{0\}\hookrightarrow X\times I$ and $i_1:X\to X\times \{1\}\hookrightarrow X\times I$ are the canonical injective maps. Then $\nu(\Sigma_0)=\nu(i_0^*\Sigma)\cong i_0^*\nu(\Sigma)\cong i_1^*\nu(\Sigma)\cong \nu(i_1^*\Sigma)=\nu(\Sigma_1)$, where the middle isomorphism holds because $i_0$ and $i_1$ are homotopic maps. Hence, normal bundles of homotopic $\Gamma_q$ structures are isomorphic. In particular, we have a vector bundle $\nu\Omega_q$ on $B\Gamma_q$ associated with the universal $\Gamma_q$-structure $\Omega_q$ \index{$\Omega_q$} on $B\Gamma_q$. \begin{proposition}If a continuous map $f:X\to B\Gamma_q$ classifies a $\Gamma_q$-structure $\omega$ on a topological space $X$, then $B\bar{d}\circ f$ classifies the vector bundle $\nu(\omega)$. In particular, $\nu\Omega_q\cong (B\bar{d})^*E(GL_q(\R))$, where $E(GL_q(\R))$ denotes the universal vector bundle over $BGL_q(\R)$, and hence $\nu(\omega)\cong f^*\nu\Omega_q$. \end{proposition} \subsection{$\Gamma_q$-structures vs. foliations} If a foliation $\mathcal F$ on a manifold $M$ is represented by the Haefliger data $\{U_i,s_i,h_{ij}\}$, then we can define a $\Gamma_q$ structure on $M$ by $\{U_i,g_{ij}\}$, where \[g_{ij}(x) = \text{ the germ of } h_{ij} \text{ at } s_i(x) \text{ for }x\in U_i\cap U_j.\] In particular, $g_{ii}(x)$ is the germ of the identity map of $\R^q$ at $s_i(x)$ and hence $g_{ii}$ takes values in the units of $\Gamma_q$. If we identify the units of $\Gamma_q$ with $\R^q$, then $g_{ii}$ may be identified with $s_i$ for all $i$. Thus, one arrives at a $\Gamma_q$-structure $\omega_{\mathcal F}$ represented by 1-cocycles $(U_i,g_{ij})$ such that \[g_{ii}:U_i\rightarrow \mathbb{R}^q\subset \Gamma_q\] are submersions for all $i$. The functions $\tau_{ij}:U_i\cap U_j\to GL_q(\R)$ defined by $\tau_{ij}(x)=(\bar{d}\circ g_{ij})(x)$ for $x\in U_i\cap U_j$, define the normal bundle of $\omega_{\mathcal F}$.
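The simplest instance of this correspondence is perhaps worth recording; the following routine example is included here only as an illustration. \begin{example}{\em Suppose a codimension $q$ foliation $\mathcal F$ on $M$ is given by the Haefliger data $\{M,s,\mathrm{id}\}$ consisting of a single global submersion $s:M\to\R^q$. The associated $\Gamma_q$-structure $\omega_{\mathcal F}$ is then represented by the 1-cocycle with the single map $g:M\to \R^q\subset \Gamma_q$ sending $x$ to the germ of the identity map at $s(x)$; under the identification of the units of $\Gamma_q$ with $\R^q$, this is just $s$ itself. Since $\bar{d}\circ g$ is the constant map at the identity matrix, the normal bundle $\nu(\omega_{\mathcal F})$ is trivial, in agreement with the identification $\nu(\omega_{\mathcal F})\cong \nu(\mathcal F)\cong s^*(T\R^q)$ established below.}\end{example}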
Since $\tau_{ij}(x)=dh_{ij}(s_i(x))$, the bundle $\nu(\omega_{\mathcal F})$ is isomorphic to the quotient bundle $\nu(\mathcal F)$. Thus a foliation on a manifold $M$ defines a $\Gamma_q$-structure whose normal bundle is a quotient of $TM$. As we have noted above, foliations do not behave well under the pullback operation, unless the maps are transversal to the foliations. However, in view of the relation between foliations and $\Gamma_q$ structures, it follows that the inverse image of a foliation under an arbitrary smooth map gives a $\Gamma_q$-structure. The following result due to Haefliger says that every $\Gamma_q$ structure arises in this way. \begin{theorem}(\cite{haefliger1}) \label{HL} Let $\Sigma$ be a $\Gamma_{q}$-structure on a manifold $M$. Then there exists a manifold $N$, a closed embedding $s:M \hookrightarrow N$ and a $\Gamma_{q}$-foliation $\mathcal{F}_N$ on $N$ such that $s^*(\mathcal{F}_N)=\Sigma$ and $s$ is a cofibration. \end{theorem} \newpage \section{Foliations with geometric structures\label{forms_foliations}} \subsection{Foliated de Rham cohomology} Let $\Omega^r(M)$ denote the space of differential $r$-forms on a manifold $M$. For any foliation $\mathcal F$ on a manifold $M$, let $I^r(\mathcal{F})$ denote the subspace of $\Omega^r(M)$ consisting of all $r$-forms which vanish whenever all $r$ arguments are vectors in $T\mathcal F$. In other words, $I^r(\mathcal F)$ consists of all forms whose pull-backs to the leaves of $\mathcal F$ are zero. Define \[\Omega^r(M,\mathcal{F})=\frac{\Omega^r(M)}{I^r(\mathcal{F})}\] and let $q:\Omega^r(M)\to \Omega^r(M,\mathcal{F})$ be the quotient map. Since the leaves are integral submanifolds of $M$, the exterior differential operator $d$ maps $I^r(\mathcal{F})$ into $I^{r+1}(\mathcal{F})$ for all $r\geq 0$, and thus we obtain a coboundary operator $d_{\mathcal{F}}:\Omega^r(M,\mathcal{F})\to \Omega^{r+1}(M,\mathcal{F})$ \index{$d_{\mathcal{F}}$} defined by $d_{\mathcal F}(\omega+ I^r(\mathcal{F}))=d\omega + I^{r+1}(\mathcal{F})$ so that the following diagram commutes: \[ \xymatrix@=2pc@R=2pc{ \Omega^r(M) \ar@{->}[r]^-{d}\ar@{->}[d]_-{q} & \Omega^{r+1}(M)\ar@{->}[d]^-{q}\\ \Omega^r(M,\mathcal{F})\ar@{->}[r]_-{d_{\mathcal{F}}} & \Omega^{r+1}(M,\mathcal{F}) } \] The cohomology groups of the cochain complex $(\Omega^r(M,\mathcal F),d_{\mathcal F})$ are called the \emph{foliated de Rham cohomology} \index{foliated de Rham cohomology} groups of $(M,\mathcal F)$ and are denoted by $H^r(M,\mathcal{F})$, $r\geq 0$. \begin{definition} {\em Let $M$ be a manifold with a foliation $\mathcal F$. A differential form $\omega$ on $M$ will be called $\mathcal F$-leafwise closed (resp. leafwise exact or leafwise symplectic) if the pull-backs of $\omega$ to the leaves of $\mathcal F$ are closed forms (resp. exact forms, symplectic forms).} \end{definition} Let $T^*\mathcal F$ denote the dual bundle of $T\mathcal F$. The space $\Omega^r(M,\mathcal{F})$ can be identified with the space of sections of the exterior bundle $\wedge^r(T^*\mathcal F)$ by the correspondence $\omega+I^r(\mathcal{F})\mapsto \omega|_{\Lambda^r(T\mathcal F)}$, $\omega\in \Omega^r(M)$. The induced coboundary map $\Gamma(\wedge^r(T^*\mathcal F))\to \Gamma(\wedge^{r+1}(T^*\mathcal F))$ will also be denoted by the same symbol $d_{\mathcal F}$. The sections of $\wedge^r(T^*\mathcal F)$ will be referred to as \emph{tangential $r$-forms} or \emph{foliated $r$-forms}\index{foliated $r$-forms}, or simply, $r$-forms on $(M,\mathcal F)$. \begin{definition} {\em Let $\mathcal F$ be a foliation on a manifold $M$.
A foliated $k$-form $\alpha$ is said to be \emph{foliated closed} or $d_{\mathcal F}$-\emph{closed} if $d_\mathcal F\alpha=0$. It is \emph{foliated exact} or $d_{\mathcal F}$-\emph{exact} if there exists a foliated $(k-1)$-form $\tau$ on $(M,\mathcal F)$ such that $\alpha=d_{\mathcal F}\tau$.} \end{definition} \begin{definition}{\em Let $\mathcal F$ be an even-dimensional foliation on a manifold $M$. A smooth section $\omega$ of $\wedge^2(T^*\mathcal F)$ will be called a \emph{symplectic form} on $\mathcal F$ if the following conditions are satisfied: \begin{enumerate} \item $\omega$ is non-degenerate (i.e., $\omega_x$ is non-degenerate on the tangent space $T_x\mathcal F$ for all $x\in M$), and \item $\omega$ is $d_{\mathcal F}$-closed. \end{enumerate} The pair $(\mathcal F,\omega)$ will be called a \emph{symplectic foliation} on $M$. \index{symplectic foliation} }\end{definition} \begin{definition}{\em Let $\mathcal F$ be an even-dimensional foliation on a manifold $M$. A smooth section $\omega$ of $\wedge^2(T^*\mathcal F)$ will be called a \emph{locally conformal symplectic form} on $\mathcal F$ if the following conditions are satisfied: \begin{enumerate} \item $\omega$ is non-degenerate and \item there exists a $d_\mathcal F$-closed foliated 1-form $\theta$ satisfying the relation $d_{\mathcal F}\omega+\theta\wedge \omega=0$.\end{enumerate} The foliated de Rham cohomology class of $\theta$ will be referred to as the (foliated) \emph{Lee class} of $\omega$. The pair $(\mathcal F,\omega)$ will be called a \emph{locally conformal symplectic foliation} on $M$.} \end{definition} \begin{definition}{\em Let $\mathcal F$ be a foliation of dimension $2k+1$ on a manifold $M$. A foliated 1-form $\alpha$ (that is, a section of $T^*\mathcal F$) is said to be a \emph{contact form} on $\mathcal F$ if $\alpha\wedge (d_{\mathcal F}\alpha)^k$ is nowhere vanishing. The pair $(\mathcal F,\alpha)$ will be referred to as a \emph{contact foliation} \index{contact foliation} on $M$. A pair $(\alpha,\beta)$ consisting of a foliated 1-form $\alpha$ and a foliated 2-form $\beta$ is said to be an \emph{almost contact structure} on $(M,\mathcal F)$ if $\alpha\wedge \beta^k$ is nowhere vanishing. The triple $(\mathcal F,\alpha,\beta)$ will be called an \emph{almost contact foliation} on $M$. }\end{definition} \subsection{Poisson and Jacobi manifolds} We shall now consider some higher geometric structures which are defined by multi-vector fields, in contrast with the structures described in the previous section, which were defined by differential forms. These geometric structures are intimately related to foliations whose leaves are equipped with locally conformal symplectic or contact forms. \begin{definition} {\em Let $M$ be a smooth manifold. A (smooth) section of the vector bundle $\wedge^p(TM)$ will be called a \emph{$p$-vector field}.
The space of $p$-vector fields for all $p\geq 0$ will be referred to as the space of \emph{multi-vector fields}\index{multi-vector fields}.} \end{definition} If $X,Y$ are two vector fields on $M$ written locally as $X=\sum_i a_i\frac{\partial}{\partial x_i}$ and $Y=\sum_i b_i\frac{\partial}{\partial x_i}$ then the Lie bracket of $X$ and $Y$ is given by the formula \[[X,Y]=\sum_i a_i(\sum_j\frac{\partial b_j}{\partial x_i}\frac{\partial}{\partial x_j})-\sum_i b_i(\sum_j\frac{\partial a_j}{\partial x_i}\frac{\partial}{\partial x_j}).\] If we use the notation $\zeta_i$ for $\frac{\partial}{\partial x_i}$ then the vector fields $X$ and $Y$ can be thought of as functions of the $x_i$'s and $\zeta_i$'s which are linear with respect to the $\zeta_i$'s. With this notation, the formula for the Lie bracket turns out to be \[[X,Y]=\sum_i\frac{\partial X}{\partial \zeta_i}\frac{\partial Y}{\partial x_i}-\sum_i\frac{\partial Y}{\partial \zeta_i}\frac{\partial X}{\partial x_i}.\] Now let $X=\Sigma_{i_1<\dots<i_p}X_{i_1,\dots,i_p}\zeta_{i_1}\wedge\dots\wedge\zeta_{i_p}$ and $Y=\Sigma_{i_1<\dots<i_q}Y_{i_1,\dots,i_q}\zeta_{i_1}\wedge\dots\wedge\zeta_{i_q}$ be $p$- and $q$-vector fields respectively. Define the bracket of $X$ and $Y$, in analogy with the formula for the Lie bracket of vector fields, as \begin{equation} \label{schouten bracket} [X,Y]=\sum_i\frac{\partial X}{\partial \zeta_i}\frac{\partial Y}{\partial x_i}-(-1)^{(p-1)(q-1)}\sum_i\frac{\partial Y}{\partial \zeta_i}\frac{\partial X}{\partial x_i} \index{Schouten-Nijenhuis bracket}\end{equation} \begin{theorem}(\cite{vaisman}) \label{schouten-nijenhuis} The formula (\ref{schouten bracket}) satisfies the following: \begin{enumerate} \item Let $X$ and $Y$ be $p$- and $q$-vector fields respectively; then \[[X,Y]=-(-1)^{(p-1)(q-1)}[Y,X].\] \item Let $X,Y$ and $Z$ be $p$-, $q$- and $r$-vector fields respectively; then \[[X,Y\wedge Z]=[X,Y]\wedge Z+(-1)^{(p-1)q}Y\wedge [X,Z],\] \[[X\wedge Y,Z]=X\wedge [Y,Z]+(-1)^{(r-1)q}[X,Z]\wedge Y.\] \item \[(-1)^{(p-1)(r-1)}[X,[Y,Z]]+(-1)^{(q-1)(p-1)}[Y,[Z,X]]\]\[+(-1)^{(r-1)(q-1)}[Z,[X,Y]]=0.\] \item If $X$ is a vector field and $f$ is a real valued function on $M$ then \[[X,Y]=\mathcal{L}_XY \ \ \text{ and } \ \ [X,f]=X(f).\] \end{enumerate} \end{theorem} \begin{definition} \em{The bracket in (\ref{schouten bracket}) is called the Schouten-Nijenhuis bracket.} \end{definition} The second assertion in Theorem~\ref{schouten-nijenhuis} implies that the definition of the Schouten-Nijenhuis bracket given by (\ref{schouten bracket}) is independent of the choice of local coordinates.
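As a quick illustration of formula (\ref{schouten bracket}), in which the $\zeta_i$'s are treated as anticommuting variables, we include the following routine verification; it is not part of the main development. \begin{example}{\em Let $\pi=f(x_1,x_2)\,\zeta_1\wedge\zeta_2$ be an arbitrary bivector field on $\R^2$. Here $p=q=2$, so that $(-1)^{(p-1)(q-1)}=-1$ and formula (\ref{schouten bracket}) gives \[[\pi,\pi]=2\sum_{i=1}^{2}\frac{\partial \pi}{\partial \zeta_i}\frac{\partial \pi}{\partial x_i} =2\Big(f\frac{\partial f}{\partial x_1}\,\zeta_2\wedge\zeta_1\wedge\zeta_2-f\frac{\partial f}{\partial x_2}\,\zeta_1\wedge\zeta_1\wedge\zeta_2\Big)=0,\] since each summand contains a repeated $\zeta_i$. The same local computation shows that $[\pi,\pi]=0$ for every bivector field on a $2$-dimensional manifold; in the terminology of the next definition, every such $\pi$ is a Poisson bivector field.}\end{example}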
\begin{definition}{\em A bivector field $\pi$ on $M$ is called a \emph{Poisson bivector field} if it satisfies the relation $[\pi,\pi]=0$, where [\ ,\ ] is the Schouten-Nijenhuis bracket (\cite{vaisman}).}\index{Poisson bivector field}\end{definition} A Poisson structure on a smooth manifold $M$ can also be defined by an $\mathbb{R}$-bilinear antisymmetric operation \[\{,\}:C^{\infty}(M,\mathbb{R})\times C^{\infty}(M,\mathbb{R})\to C^{\infty}(M,\mathbb{R})\] which satisfies the Jacobi identity: \[\{f,\{g,h\}\}+\{g,\{h,f\}\}+\{h,\{f,g\}\}=0 \ \text{for all }f,g,h\in C^\infty(M);\] and the Leibniz identity for each $f\in C^\infty(M)$: \[\{f,gh\}=\{f,g\}h+g\{f,h\}\ \ \ \text{ for all } g,h\in C^\infty(M).\] The relation between a \emph{Poisson bracket} $\{\ ,\ \}$ \index{Poisson bracket} and the associated Poisson bivector field is given as follows: For any two functions $f,g\in C^\infty(M)$, \[\{f,g\}=\pi(df,dg).\] \begin{example} {\em Let $M$ be a $2n$-dimensional manifold with a symplectic form $\omega$. The non-degeneracy condition implies that $b:TM\to T^*M$, given by $b(X)=i_X\omega$, is a vector bundle isomorphism, where $i_X$ denotes the interior multiplication by $X\in TM$. Then $M$ has a Poisson structure defined by \[\pi(\alpha,\beta)=\omega(b^{-1}(\alpha),b^{-1}(\beta)), \ \text{ for all }\alpha,\beta\in T^*_xM, x\in M.\]}\label{ex_symplectic}\end{example} In \cite{kirillov}, Kirillov further generalised the Poisson bracket. The underlying motivation was to understand the geometric properties of all manifolds $M$ which admit a local Lie algebra structure on $C^{\infty}(M)$. \begin{definition} {\em A \emph{local Lie algebra} structure on $C^{\infty}(M)$ is an antisymmetric $\R$-bilinear map \[\{,\}:C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)\] such that \begin{enumerate}\item the bracket satisfies the Jacobi identity, namely \[\{f,\{g,h\}\}+\{g,\{h,f\}\}+\{h,\{f,g\}\}=0 \ \text{for all }f,g,h\in C^\infty(M);\] \item supp\,$(\{f,g\})\subset \text{supp\,}(f)\cap \text{supp\,}(g)$ for $f,g\in C^{\infty}(M)$ (that is, $\{\ ,\ \}$ is local). \end{enumerate} }\end{definition} The bracket defined above is called a \emph{Jacobi bracket}\index{Jacobi bracket}. \begin{definition} {\em A \emph{Jacobi structure} on a smooth manifold $M$ is given by a pair $(\Lambda,E)$, where $\Lambda$ is a bivector field and $E$ is a vector field on $M$, satisfying the following two conditions: \begin{equation}[\Lambda,\Lambda] = 2E\wedge \Lambda,\ \ \ \ \ \ [E,\Lambda] = 0.\label{jacobi_def}\index{Jacobi structure}\end{equation} If $E=0$ then $\Lambda$ is a Poisson bivector field on $M$.}\end{definition} The notion of a local Lie algebra structure on $C^\infty(M)$ is equivalent to that of a Jacobi structure on $M$ (\cite{kirillov}). If $(\Lambda, E)$ is a Jacobi pair, then we can define the associated Jacobi bracket by the following relation: \begin{equation}\{f,g\}=\Lambda(df,dg)+fE(g)-gE(f),\ \text{for}\ f,g\in C^{\infty}(M). \label{eqn:jacobi}\index{Jacobi bracket}\end{equation} Taking $E=0$ we get the relation between the Poisson bracket and the Poisson bivector field. \begin{example} {\em Every locally conformal symplectic manifold (in short, an l.c.s manifold) is a Jacobi manifold: if $\omega$ denotes the locally conformal symplectic form and $\theta$ the corresponding Lee form, then the Jacobi pair is given by \[\Lambda(\alpha,\beta)=\omega(b^{-1}(\alpha),b^{-1}(\beta))\ \ \text{ and }\ \ \ E=b^{-1}(\theta),\] where $b:TM\to T^*M$ is defined as in Example~\ref{ex_symplectic}.}\end{example} \begin{example} {\em Every manifold with a contact form is a Jacobi manifold.
If $\alpha$ is a contact form on $M$, then recall that there is an isomorphism $\phi:TM\to T^*M$ defined by $\phi(X)=i_X d\alpha+\alpha(X)\alpha$ for all $X\in TM$. A Jacobi pair on $(M,\alpha)$ can be defined as follows: \[\Lambda(\beta,\beta')=d\alpha(\phi^{-1}(\beta),\phi^{-1}(\beta')),\ \ \ \mbox{and }\ \ \ \ E=\phi^{-1}(\alpha),\] where $\beta,\beta'$ are 1-forms on $M$; note that $E$ is precisely the Reeb vector field $R_\alpha$ of $\alpha$. The bivector field $\Lambda$ defines a bundle homomorphism ${\Lambda}^\#:T^*M\to TM$ by \[{\Lambda}^\#(\alpha)(\beta)=\Lambda(\alpha,\beta),\] where $\alpha,\beta\in T^*_xM$, $x\in M$. The image of the vector bundle morphism $\Lambda^\#:T^*M\to TM$ is $\ker\alpha$ and $\ker\Lambda^\#$ is spanned by $\alpha$. The contact Hamiltonian vector field $X_H$ can then be expressed as $X_H=HR_\alpha+\Lambda^{\#}(dH)$. }\end{example} Let $(M,\Lambda,E)$ be a Jacobi manifold. The Jacobi pair $(\Lambda, E)$ defines a distribution $\mathcal D$, called the \emph{characteristic distribution} \index{characteristic distribution} of the Jacobi pair, as follows: \begin{equation}{\mathcal D}_x={\Lambda}^\#(T^*_xM)+\langle E_x\rangle, x\in M,\label{distribution_jacobi}\end{equation} where $\langle E_x\rangle$ denotes the subspace of $T_xM$ spanned by the vector $E_x$. \begin{remark}{\em In general, $\mathcal D$ is only a singular distribution; however, it is completely integrable in the sense of Sussmann (\cite{vaisman}).}\end{remark} \begin{definition}{\em A Jacobi pair $(\Lambda,E)$ is called \emph{regular} if $x\mapsto \dim\mathcal D_x$ is a locally constant function on $M$. It is said to be a \emph{non-degenerate} Jacobi structure if $\mathcal D$ equals $TM$.}\end{definition} Every $C^\infty$ function $f$ on a Jacobi manifold $(M,\Lambda,E)$ defines a vector field $X_f$ by $X_f=\Lambda^{\#}(df)$, so that $\Lambda(df,dg)=X_f(g)$. Then we have the following relations (\cite{kirillov}): \begin{equation}\begin{array}{rcl} \ [E, X_f] & = & X_{Ef}\\ \ [X_f,X_g] & = & X_{\{f,g\}}-fX_{Eg}+gX_{Ef}-\{f,g\}E \label{jacobi_bracket_hamiltonian}\end{array}\end{equation} where $[,]$ is the usual Lie bracket of vector fields. The characteristic distribution $\mathcal D$ is spanned by the vector fields $E$ and $X_f$, $f\in C^\infty(M)$. Thus, it follows easily from the relations in (\ref{jacobi_bracket_hamiltonian}) that $\mathcal D$ is involutive; in particular, when $(\Lambda,E)$ is regular, $\mathcal D$ is integrable. \begin{lemma} A Jacobi structure $(\Lambda, E)$ restricts to a non-degenerate Jacobi structure on the leaves of its characteristic distribution.\label{L:jacobi_leaf}\end{lemma} \begin{proof} Let $f,g$ be two smooth functions on a leaf $L$ of the characteristic distribution $\mathcal D$. The induced Jacobi bracket on $L$ is given as follows: \[\{f,g\}(x)=\{\tilde{f},\tilde{g}\}(x) \text{ for all }x\in L,\] where $\tilde{f}$ and $\tilde{g}$ are arbitrary extensions of $f$ and $g$ respectively to some open neighbourhood of $L$. Since $E(x)\in \mathcal D_x$, $E\tilde{f}(x)$ depends only on the values of $f$ on the leaf $L$ through $x$. Also, $X_{\tilde{f}}\tilde{g}(x)=d\tilde{g}_x(X_{\tilde{f}}(x))=dg_x(X_{\tilde{f}}(x))$ since $X_{\tilde{f}}(x)\in \mathcal D_x=T_xL$. This shows that the value of $X_{\tilde{f}}\tilde{g}(x)$ is independent of the extension of $\tilde{g}$. Similarly, it is also independent of the choice of the extension $\tilde{f}$. Thus $\{f,g\}$ is well-defined by (\ref{eqn:jacobi}). A routine calculation then shows that this defines a Jacobi bracket.
The non-degeneracy of the bracket on $L$ is immediate from the definition of $\{f,g\}$. \end{proof} \begin{theorem}(\cite{kirillov}) Every non-degenerate Jacobi manifold is either a locally conformal symplectic manifold or a contact manifold.\label{T:jacobi_leaf1}\end{theorem} \begin{proof} First suppose that $M$ is of even dimension. Since $(\Lambda,E)$ is non-degenerate and the rank of $\Lambda^\#$ is even, $\Lambda^{\#}:T^*M\to TM$ must be an isomorphism. Define a 2-form $\omega$ and a 1-form $\theta$ on $M$ as follows: \begin{center}$\begin{array}{rcll}\omega(\Lambda^\#(\alpha),\Lambda^\#(\beta)) & = & \Lambda(\alpha,\beta) & \text{for all } \alpha,\beta\in T^*_xM, x\in M,\\ \Lambda^{\#}\circ\theta & = & E. & \end{array}$\end{center} We shall show that $\omega$ is a locally conformal symplectic form with Lee form $\theta$; in other words, we need to show that $\theta$ is closed and $d\omega+\theta\wedge \omega=0$. Since the vector fields $X_f=\Lambda^\#(df)$, $f\in C^\infty(M)$, generate $TM$, it is enough to verify these relations on the $X_f$'s. In the following we shall use the notation $\Sigma_{\circlearrowright}$ for the cyclic sum over $f,g,h$. First note that \[(\theta \wedge \omega)(X_f,X_g,X_h)=-\Sigma_{\circlearrowright}Ef.X_g(h)\] since $\theta(X_f)=-Ef$. Next, we have \[ \begin{array}{rcl} d\omega(X_f,X_g,X_h)&=&\Sigma_{\circlearrowright}X_f \omega(X_g,X_h)-\Sigma_{\circlearrowright}\omega([X_f,X_g],X_h)\\ &=&\Sigma_{\circlearrowright}X_fX_g(h)-\Sigma_{\circlearrowright}[X_f,X_g]h\\ &=&\Sigma_{\circlearrowright}X_fX_g(h)-\Sigma_{\circlearrowright}X_fX_g(h)-\Sigma_{\circlearrowright}X_gX_h(f)\\ &=&-\Sigma_{\circlearrowright}X_gX_h(f)\\ &=&-\Sigma_{\circlearrowright}X_g[\{h,f\}-hEf+fEh]\\ &=&-\Sigma_{\circlearrowright}X_g\{h,f\}+\Sigma_{\circlearrowright}X_g(hEf)-\Sigma_{\circlearrowright}X_g(fEh)\\ &=&-\Sigma_{\circlearrowright}[\{g,\{h,f\}\}-gE\{h,f\}+\{h,f\}Eg]+\Sigma_{\circlearrowright}hX_g(Ef)\\ & & +\Sigma_{\circlearrowright}EfX_g(h)-\Sigma_{\circlearrowright}fX_g(Eh)-\Sigma_{\circlearrowright}EhX_g(f)\\ &=&\Sigma_{\circlearrowright}gE\{h,f\}-\Sigma_{\circlearrowright}\{h,f\}Eg+\Sigma_{\circlearrowright}hX_g(Ef)+\Sigma_{\circlearrowright}EfX_g(h)\\ & &-\Sigma_{\circlearrowright}fX_g(Eh)-\Sigma_{\circlearrowright}EhX_g(f)\\ \end{array} \] (In the last step, the cyclic sum $\Sigma_{\circlearrowright}\{g,\{h,f\}\}$ has been dropped, as it vanishes by the Jacobi identity.) The second summand in the last expression cancels the fourth summand, as follows from the identity below: \[ \begin{array}{rcl} -\Sigma_{\circlearrowright}\{h,f\}Eg &=&-\Sigma_{\circlearrowright}[hEf-fEh+X_h(f)]Eg\\ &=&-\Sigma_{\circlearrowright}hEfEg+\Sigma_{\circlearrowright}fEhEg-\Sigma_{\circlearrowright}X_hf.Eg\\ &=& -\Sigma_{\circlearrowright}X_h(f).Eg\\ \end{array} \] Furthermore, the first summand can be written as \[ \begin{array}{rcl} \Sigma_{\circlearrowright}gE\{h,f\}&=& \Sigma_{\circlearrowright}gE[hEf-fEh+X_hf]\\ &=&\Sigma_{\circlearrowright}g[E(hEf)-E(fEh)+E(X_hf)]\\ &=&\Sigma_{\circlearrowright}g[hEEf+EfEh-fEEh-EhEf+E(X_hf)]\\ &=&\Sigma_{\circlearrowright}gEX_hf\\ \end{array} \] Thus we get \[ \begin{array}{rcl} d\omega(X_f,X_g,X_h)&=&\Sigma_{\circlearrowright}gEX_hf+\Sigma_{\circlearrowright}hX_g(Ef)-\Sigma_{\circlearrowright}fX_g(Eh)-\Sigma_{\circlearrowright}Eh.X_gf\\ &=& -\Sigma_{\circlearrowright}g[X_h,E]f+\Sigma_{\circlearrowright}hX_g(Ef)-\Sigma_{\circlearrowright}Eh.X_gf\\ &=& \Sigma_{\circlearrowright}gX_{Eh}f+\Sigma_{\circlearrowright}hX_g(Ef)-\Sigma_{\circlearrowright}Eh.X_gf\\ &=&-\Sigma_{\circlearrowright}Eh.X_gf\\ &=&\Sigma_{\circlearrowright}Eh.X_fg\\ &=& -\theta \wedge \omega(X_f,X_g,X_h)\\ \end{array} \] To show
that $\theta$ is closed, we observe that \[ \begin{array}{rcl} d\theta(X_f,X_g)&=&X_f\theta(X_g)-X_g\theta(X_f)-\theta([X_f,X_g])\\ &=& -X_fEg+X_gEf-\theta(X_{\{f,g\}}-fX_{Eg}+gX_{Ef}-\{f,g\}E)\\ &=& -X_fEg+X_gEf+E(\{f,g\})-fEEg+gEEf\\ \end{array} \] (where we have used that $\theta(E)=\Lambda(\theta,\theta)=0$) and \[ \begin{array}{rcl} E(\{f,g\})&=&E(fEg-gEf+X_fg)\\ &=&fEEg+EgEf-gEEf-EgEf+E(X_fg)\\ &=&fEEg-gEEf+E(X_fg)\\ \end{array} \] Combining the above relations we get \[ \begin{array}{rcl} d\theta(X_f,X_g) &=& -X_fEg+X_gEf+fEEg-gEEf+E(X_fg)-fEEg+gEEf\\ &=& -[X_f,E]g+X_gEf\\ &=& X_{Ef}g+X_g(Ef)\\ &=&0,\\ \end{array} \] where the last equality holds since $X_{Ef}g=\Lambda(d(Ef),dg)=-X_g(Ef)$. Thus, we have proved that $M$ is a locally conformal symplectic manifold when $M$ is even dimensional. If $\dim M$ is odd then $E\notin \text{Im}(\Lambda^{\#})$, since the rank of $\Lambda^{\#}$ is even and $\text{Im}(\Lambda^{\#})+\langle E\rangle=TM$. In this case, we can define a 1-form $\alpha$ by \[\alpha(E)=1,\ \alpha(X_f)=0,\text{ for all }f\in C^\infty(M).\] Then, \[ \begin{array}{rcl} d\alpha(X_f,X_g)&=&X_f\alpha(X_g)-X_g\alpha(X_f)-\alpha([X_f,X_g])\\ &=&-\alpha([X_f,X_g])\\ &=&-\alpha(X_{\{f,g\}}-fX_{Eg}+gX_{Ef}-\{f,g\}E)\\ &=&\{f,g\}\\ &=&fEg-gEf+X_fg\\ \end{array} \] To show that $d\alpha$ is non-degenerate on Im\,$\Lambda^\#$, suppose that $d\alpha(X_f,X_g)=0 \text{ for all }g\in C^\infty(M)$; that is, \[fEg-gEf+X_fg=0 \text{ for all }g\in C^\infty(M).\] In particular, if we take $g=1$ in the above, we get $Ef=0$. Hence, \[X_fg+fEg=(X_f+fE)g=0 \text{ for all } g\in C^\infty(M),\] which can only happen if $f=0$, since $X_f$ lies in Im\,$\Lambda^{\#}$ whereas $E$ does not. Thus, $X_f=0$, proving that $d\alpha|_{\text{Im\,}\Lambda^{\#}}$ is nondegenerate. Finally we observe that \[ \begin{array}{rcl} d\alpha(E,X_f)&=&E\alpha(X_f)-X_f\alpha(E)-\alpha([E,X_f])\\ &=&0, \end{array} \] since $\alpha(X_f)=0$, $\alpha(E)=1$ and $[E,X_f]=X_{Ef}$. This proves that $\alpha$ is a contact form with Reeb vector field $E$. \end{proof} Combining Lemma~\ref{L:jacobi_leaf} and Theorem~\ref{T:jacobi_leaf1} we obtain the following theorem. \begin{theorem} The characteristic foliation of a regular Jacobi structure is either a locally conformal symplectic foliation or a contact foliation. Conversely, a locally conformal symplectic foliation or a contact foliation defines a regular Jacobi structure.\label{T:jacobi_foliation} \end{theorem} Results of this section will be used in Chapter 3. \newpage \section{Preliminaries of $h$-principle} In this section we recall some preliminaries of the $h$-principle following \cite{eliashberg}, \cite{geiges} and \cite{gromov_pdr}. The theory of the $h$-principle addresses questions related to partial differential equations, or more general relations, which appear in topology and geometry. As Gromov mentions in the foreword of his book `Partial Differential Relations' (\cite{gromov_pdr}), these equations or relations are mostly underdetermined, in contrast with those which arise in physics. As a result, there are plenty of solutions to these equations/relations and one can hope to classify the solution space using homotopy theory. The $r$-jet bundle $X^{(r)}$ associated with sections of a fibration $X\to M$ has the structure of an affine bundle over the $(r-1)$-jet bundle $X^{(r-1)}$. An $r$-th order partial differential relation for smooth sections of $X$ determines a subset $\mathcal R$ in the $r$-jet space $X^{(r)}$. The theory of the $h$-principle studies to what extent the topological and geometric properties of this set $\mathcal R$ govern the solution space. \subsection{Jet bundles}(\cite{golubitsky}) An ordered tuple $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_m)$ of non-negative integers will be called a multi-index.
For any $x=(x_1,x_2,\dots,x_m)\in\R^m$ and any multi-index $\alpha$, the notation $x^\alpha$ will represent the monomial $x_1^{\alpha_1}\dots x_m^{\alpha_m}$ and $\partial^\alpha$ will stand for the operator \[\frac{\partial^{|\alpha|}}{{\partial x_1}^{\alpha_1}{\partial x_2}^{\alpha_2}\dots{\partial x_m}^{\alpha_m}},\] where $|\alpha|=\alpha_1+\alpha_2+\dots+\alpha_m$. Two smooth maps $f,g:\R^m\to \R^n$ are said to be \emph{$k$-equivalent} at $x\in \R^m$ if $f(x)=g(x)=y$ and $\partial^{\alpha}f(x)=\partial^{\alpha}g(x)$, for every multi-index $\alpha$ with $|\alpha|\leq k$. The equivalence class of $(f,x)$ is called the $k$-jet of $f$ at $x$ and is denoted by $j^k_f(x)$. Thus a $k$-jet $j^k_f(x)$ can be represented by a polynomial $\sum_{|\alpha|\leq k}\partial^\alpha f(x) x^\alpha$. Let $B^k_{n,m}$ be the vector space of polynomials of degree at most $k$ in $m$ variables, with values in $\mathbb{R}^n$ and vanishing constant term. Then the space of $k$-jets of maps $\R^m\to \R^n$, denoted by $J^k(\R^m,\R^n)$, can be identified with the set $\R^m\times\R^n\times B^k_{n,m}$. \begin{definition} \label{D:Jet bundles} \em{Let $M,N$ be $C^{\infty}$-manifolds. Two $C^\infty$ maps $f,g:M\to N$ are said to be $k$-equivalent at $x\in M$ if $f(x)=g(x)=y$ and, with respect to some local coordinates around $x$ and $y$, $\partial^{\alpha}f(x)=\partial^{\alpha}g(x)$ for every multi-index $\alpha$ with $|\alpha|\leq k$. Using the chain rule one can see that the partial derivative condition does not depend on the choice of the local coordinate systems around $x$ and $y$. As before, the equivalence class of a map $f$ defined on an open neighbourhood of $x$ will be called the $k$-jet of $f$ at $x$ and will be denoted by $j^k_f(x)$. The set of $k$-jets of germs of all functions from $M$ to $N$ will be denoted by $J^k(M,N)$ and will be called the \emph{$k$-jet bundle} associated with the function space $C^\infty(M,N)$.}\end{definition} \begin{remark}{\em In particular, $J^0(M,N)=M\times N$. The 1-jet bundle $J^1(M,N)$ can be identified with \emph{Hom}$(TM,TN)$ consisting of all linear maps $T_xM\to T_yN$, $x\in M$ and $y\in N$, under the correspondence \[j^1_f(x)\mapsto (x,f(x),df_x),\] where $f:U\to N$ is a smooth map defined on an open set $U$ containing $x$.}\label{1-jet}\end{remark} If $(U,\phi)$ and $(V,\psi)$ are two charts of $M$ and $N$ respectively, then there is an obvious bijection \[T_{U,V}:J^k(U,V)\to \R^m\times \R^n\times B^k_{n,m}.\] The jet bundle $J^k(M,N)$ is topologised by declaring the sets $J^k(U,V)$ to be open; a manifold structure is obtained by taking the maps $T_{U,V}$ as charts. We can generalise the notion of jet bundle to sections of a smooth fibration $p:X\to M$ as well. \begin{definition}{\em Let $X^{(k)}_x$ denote the set of all $k$-jets of germs of smooth sections of $p$ defined on an open neighbourhood of $x\in M$. The $k$-th \emph{jet bundle of sections} of $X$ is defined as follows: \[X^{(k)}=\cup_{x\in M}X^{(k)}_x. \index{$X^{(k)}$}\]\index{jet bundle} Clearly, $X^{(0)}=X$. } \end{definition} \begin{remark}{\em If $X$ is a trivial fibration over a manifold $M$ with fibre $N$, then the sections of $X$ are in one-to-one correspondence with the maps from $M$ to $N$. Therefore, we can identify the jet space $X^{(k)}$ with $J^k(M,N)$.}\end{remark} If $f$ and $g$ are two local sections of a fibration $p:X\to M$ which represent the same $k$-jet at a point $x\in M$, then they also represent the same $l$-jet at $x$ for any $l\leq k$.
Therefore, we have natural projection maps: \[p_l^k:X^{(k)}\to X^{(l)}\ \ \ \text{for }l\leq k.\] Set $p^{(k)}=p\circ p^k_0:X^{(k)}\to M$. If $g$ is a section of $p$ then $x\mapsto j^k_g(x)$ defines a section of $p^{(k)}$. We shall denote this section by $j^k_g$ or $j^kg$. \begin{theorem}(\cite{golubitsky}) Let $p:X\to M$ be a smooth fibration over a manifold $M$ of dimension $m$. Suppose that the dimension of the fibre is $n$. Then \begin{enumerate}\item $X^{(k)}$ is a smooth manifold of dimension $m+n+\dim(B^k_{n,m})$; \item $p^{(k)}:X^{(k)}\to M$ is a fibration; \item for any smooth section $g:M\to X$, $j^kg:M\to X^{(k)}$ is smooth.\end{enumerate} \end{theorem} \subsection{Weak and fine topologies} Let $p:X\to M$ be a smooth fibration. We shall denote the space of $C^k$-sections of $X$ by $\Gamma^k(X)$ for $0\leq k\leq \infty$\index{$\Gamma^k(X)$}. \begin{definition}{\em (\cite{hirsch}) The \emph{weak $C^0$ topology} on $\Gamma^0(X)$ is the usual compact open topology. If $k$ is finite, then the \emph{weak $C^k$-topology} (or the $C^k$ \emph{compact open topology}) on $\Gamma^\infty(X)$ is the topology induced by the $k$-jet map $j^k:\Gamma^\infty(X)\to \Gamma^0(X^{(k)})$, where $\Gamma^0(X^{(k)})$ has the $C^0$ compact open topology. The \emph{weak $C^\infty$ topology} (or the $C^\infty$ compact open topology) is the union of the weak $C^k$ topologies for $k$ finite.}\label{weak topology}\index{weak $C^k$ topology}\end{definition} We shall now describe the fine topologies on the function spaces. For any set $C\subset X^{(k)}$ define a subset $B(C)$ of $\Gamma^{\infty}(X)$ as follows: \[B(C)=\{f\in \Gamma^{\infty}(X):j^k_f(M)\subset C\}. \] Then observe that $B(C)\cap B(D)= B(C\cap D)$. \begin{definition} \label{D:fine topology} \em{(\cite{golubitsky}) The collection $\{B(U): U \text{ open in }X^{(k)}\}$ forms a basis for a topology on $\Gamma^{\infty}(X)$, which we call the \emph{fine $C^k$-topology}. The \emph{fine $C^{\infty}$-topology} on $\Gamma^{\infty}(X)$ is the inverse limit of these $C^k$-topologies. The maps $p^k_{k-1}:X^{(k)}\to X^{(k-1)}$ define a spectrum with respect to the fine topologies. }\index{fine topology} \end{definition} \begin{remark} {\em The fine $C^{k}$-topology on $\Gamma^{\infty}(X)$ is induced from the fine $C^0$-topology on $\Gamma^0(X^{(k)})$ by the $k$-jet map \[j^k:\Gamma^{\infty}(X)\to \Gamma^0(X^{(k)}), \ \ f\mapsto j^kf.\] }\end{remark} The fine $C^k$ topology is, in general, finer than the weak $C^k$ topology. However, if $M$ is compact then the two topologies coincide. For a better understanding of the fine $C^k$-topologies we describe a basis of the neighbourhood system of an $f\in \Gamma^\infty(X)$. Let us first fix a metric on $X^{(k)}$. For any smooth section $f$ of $X$ and a positive smooth function $\delta:M\to \R_+$ define \[\mathcal{N}^k_{\delta}(f)=\{g\in \Gamma^{\infty}(X): \text{dist}(j^k_f(x),j^k_g(x))<\delta(x) \text{ for all }x\in M\}.\] The sets $\mathcal{N}^k_{\delta}(f)$ form a neighbourhood basis of $f$ in the fine $C^k$-topology. \begin{remark} {\em If $\mathcal R$ is an open subset of $X^{(k)}$ then the space of sections of $X^{(k)}$ with images contained in $\mathcal R$ is an open subset of $\Gamma^0(X^{(k)})$ in the fine $C^0$-topology.
Consequently, $(j^k)^{-1}(\Gamma(\mathcal R))$ is an open subspace of $\Gamma^\infty(X)$ in the fine $C^\infty$ topology.} \end{remark} \subsection{Holonomic Approximation Theorem} \begin{definition}{\em A section of the jet-bundle $p^{(k)}:X^{(k)}\to M$ is said to be a \emph{holonomic} section if it is the $k$-jet map $j^kf$ of some section $f:M\to X$.}\index{holonomic}\end{definition} We now recall the Holonomic Approximation Theorem from \cite{eliashberg}. Throughout the thesis, for any subset $A\subset M$, $Op\,A$ \index{$Op\,A$} will denote an unspecified open neighbourhood of $A$ (which may change in course of an argument). \begin{theorem} Let $A$ be a polyhedron (possibly non-compact) in $M$ of positive codimension. Let $\sigma$ be any section of the $k$-jet bundle $X^{(k)}$ over $Op\,A$. Given any positive functions $\varepsilon$ and $\delta$ on $M$ there exist a diffeotopy $\delta_t:M\to M$ and a holonomic section $\sigma':Op\,\delta_1(A)\to X^{(k)}$ such that \begin{enumerate}\item $\delta_t(A)\subset domain\,(\sigma)$ for all $t$; \item $dist(x,\delta_t(x))<\delta(x)$ for all $x\in M$ and $t\in [0,1]$ and \item $dist(\sigma(x),\sigma'(x))<\varepsilon(x)$ for all $x\in Op\,(\delta_1(A))$. \end{enumerate}$($Any diffeotopy $\delta_t$ satisfying (2) will be referred to as a $\delta$-small diffeotopy.$)$ \label{T:holonomic_approximation}\index{Holonomic Approximation Theorem}\end{theorem} We now mention the main steps in the proof of the theorem when $\dim M=2$ and $\dim A=1$. To start with, $A$ is covered by small coordinate neighbourhoods $\{U_i\}$ of $M$. If the intersection $U_i\cap U_j$ is non-empty then we choose a small hypersurface $S_{i,j}$ in $U_i\cap U_j$ transversal to $A$. The map $\sigma$ is then approximated by holonomic sections $\sigma_i$ on the open sets $U_i$. On the intersection $U_i\cap U_j$, the two holonomic approximations do not match, in general. However, the set of points in $U_i\cap U_j$ where $\sigma_i$ is not equal to $\sigma_j$ can be made to lie in an arbitrarily small neighbourhood of $S_{i,j}$. Let $S$ be the union of the transversals $S_{i,j}$. The main task is to modify the local holonomic sections $\sigma_i$ on $U_i$ to get a holonomic section $\sigma'$ defined on the subset $U\setminus S$, where $U$ is an open neighbourhood of $A$. It can also be ensured that $\sigma'$ lies sufficiently $C^0$-close to $\sigma$. The next step is to get a small isotopy which moves $A$ away from the set $S$, into the region $U\setminus S$ where $\sigma'$ is already defined. Indeed, if the transversals $S_{i,j}$ are small enough then there exist diffeotopies $\delta_t, t\in\mathbb I$, which have the following properties: \begin{enumerate}\item $\delta_0$ is the identity map of $U$; \item $\delta_t$ is identity outside a small neighbourhood of $S$; \item $\delta_1$ maps $Op\,A$ into $U\setminus S$. \end{enumerate} \begin{picture}(100,120)(-100,5)\setlength{\unitlength}{1cm} \linethickness{.075mm} \multiput(-1,1.5)(2,0){5} {\line(0,1){1}} \multiput(-2,2)(0,2){2} {\line(1,0){10}} \put(-2,1) {\line(1,0){10}} \multiput(-2,1)(10,0){2} {\line(0,1){3}} \multiput(-1,2)(2,0){5}{\circle*{.1}} \multiput(-1,0)(2,0){5}{\qbezier(-1,2)(-.3,2)(-.2,2.5) \qbezier(-.2,2.5)(-.1,3.3)(0,3.3)} \multiput(-1,0)(2,0){5}{\qbezier(0,3.3)(.1,3.3)(.2,2.5) \qbezier(.2,2.5)(.3,2)(1.1,2)} \put(6,1.5){$A$} \put(3.2,3){$\delta_1(A)$} \put(1.1,1.4){$S_i$} \put(3,.5){$U$} \end{picture}\\ In the above diagram, $A$ is represented by the horizontal line and the rectangle represents a neighbourhood $U$ of $A$ in $M$.
The small vertical segments represent the set $S$. The intersection of $S$ with $A$ is shown by bullets. The curve in $U\setminus S$ represents the locus of $\delta_1(A)$. \begin{remark}{\em The diffeotopies characterized by (1)-(3) above are referred to as \emph{sharply moving diffeotopies} by Gromov (\cite{gromov_pdr}). They will appear once again in Definition~\ref{D:sharp_diffeotopy}.}\end{remark} \subsection{Language of $h$-principle} Let $p:X\to M$ be a smooth fibration. \begin{definition}{\em A subset $\mathcal{R} \subset X^{(k)}$ of the $k$-jet space is called a \emph{partial differential relation of order} $k$ \index{partial differential relation} (or simply a \emph{relation}\index{relation}). If $\mathcal{R}$ is an open subset of the jet space then we call it an \emph{open relation}. }\end{definition} A $C^k$ section $f:M\rightarrow X$ is said to be a \emph{solution} of $\mathcal{R}$ if the image of its $k$-jet extension $j^k_f:M\rightarrow X^{(k)}$ lies in $\mathcal{R}$. We denote by $\Gamma(\mathcal{R})$\index{$\Gamma(\mathcal{R})$} the space of sections of the $k$-jet bundle $X^{(k)}\to M$ having images in $\mathcal{R}$. The space of $C^\infty$ solutions of $\mathcal{R}$ is denoted by $Sol(\mathcal{R})$\index{$Sol(\mathcal{R})$}. The $k$-jet map $j^k$ maps $Sol(\mathcal R)$ into $\Gamma(\mathcal R)$: \[j^k:Sol(\mathcal R)\to \Gamma(\mathcal R),\] and the image of $Sol(\mathcal R)$ under $j^k$ consists of all holonomic sections of $\mathcal R$. The function spaces $Sol(\mathcal R)$ and $\Gamma(\mathcal R)$ will be endowed with the weak $C^{\infty}$ topology and the weak $C^0$ topology respectively. \begin{definition}{\em A differential relation $\mathcal{R}$ is said to satisfy the \emph{$h$-principle} if every element $\sigma_{0} \in \Gamma(\mathcal{R})$ admits a homotopy $\sigma_{t}\in \Gamma(\mathcal{R})$ such that $\sigma_{1}$ is holonomic. We shall also say, in this case, that the solutions of $\mathcal R$ satisfy the $h$-principle. The relation $\mathcal R$ satisfies the \emph{parametric $h$-principle} \index{parametric $h$-principle} if the $k$-jet map $j^{k}:Sol(\mathcal{R})\rightarrow \Gamma(\mathcal{R})$ is a weak homotopy equivalence. }\index{$h$-principle}\end{definition} \begin{remark}{\em We shall often talk about (parametric) $h$-principle for certain function spaces without referring to the relations of which they are solutions.} \end{remark} Since $j^k$ is an injective map, $Sol(\mathcal R)$ can be identified with the space of holonomic sections of $\mathcal R$. Thus, if $\mathcal R$ satisfies the parametric $h$-principle, then it follows from the homotopy exact sequence of pairs that $\pi_i(\Gamma(\mathcal R),Sol(\mathcal R))=0$ for all integers $i\geq 0$. In other words, every continuous map $F_0:({\mathbb D}^{i},{\mathbb S}^{i-1})\to (\Gamma(\mathcal R),Sol(\mathcal R))$, $i\geq 1$, admits a homotopy $F_t$ such that $F_1$ takes all of ${\mathbb D}^{i}$ into $Sol(\mathcal R)$. \begin{remark}{\em The space $\Gamma(\mathcal R)$ is referred to as the space of \emph{formal solutions} of $\mathcal R$. Finding a formal solution is a purely (algebraic) topological problem which can be addressed with obstruction theory. Finding a solution of $\mathcal R$ is, on the other hand, a differential topological problem. Thus, the $h$-principle reduces a differential topological problem to a problem in algebraic topology.}\end{remark} Let $Z$ be any topological space. Any continuous map $F:Z\to \Gamma(X)$ will be referred to as a \emph{parametrized section} of $X$ with parameter space $Z$.
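Before proceeding, we illustrate this terminology with a simple and well-known toy example; the details are routine and are included here only for orientation. \begin{example}{\em Let $M=\R$ and let $X=\R\times\R$ be the trivial fibration over $\R$, so that sections of $X$ are smooth functions $f:\R\to\R$, $X^{(1)}\cong \R^3$ with coordinates $(x,y,v)$ and $j^1_f(x)=(x,f(x),f'(x))$. Consider the open relation \[\mathcal R=\{(x,y,v)\in X^{(1)}: v\neq 0\}.\] A solution of $\mathcal R$ is a function with nowhere vanishing derivative, that is, an immersion $\R\to\R$, while an element of $\Gamma(\mathcal R)$ is a pair $(f,g)$ of smooth functions with $g$ nowhere zero. Both $Sol(\mathcal R)$ and $\Gamma(\mathcal R)$ have exactly two components, distinguished by the sign of $f'$ (resp.\ of $g$), and each component is convex, hence contractible. Therefore $j^1:Sol(\mathcal R)\to \Gamma(\mathcal R)$ is a weak homotopy equivalence; that is, $\mathcal R$ satisfies the parametric $h$-principle, as is also guaranteed by Theorem~\ref{open-invariant} below, since $\mathcal R$ is an open \emph{Diff}$(\R)$-invariant relation and $\R$ is an open manifold.}\end{example}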
\begin{definition}{\em Let $M_0$ be a submanifold of $M$. We shall say that a relation $\mathcal R$ satisfies the \emph{$h$-principle near $M_0$} (or on $Op(M_0)$) if given a section $F:U\to \mathcal{R}|_U$ defined on an open neighbourhood $U$ of $M_0$, there exists an open neighbourhood $\tilde{U}\subset U$ of $M_0$ such that $F|_{\tilde{U}}$ is homotopic to a holonomic section $\tilde{F}:\tilde{U}\to \mathcal{R}$ in $\Gamma(\mathcal R)$. The parametric $h$-principle is said to hold for $\mathcal R$ near $M_0$ if given any open set $U$ containing $M_0$ and a parametrized section $F_0:{\mathbb D}^i\to \Gamma(\mathcal{R}|_U)$ such that $F_0(z)$ is holonomic on $U$ for all $z\in {\mathbb S}^{i-1}$, there exists an open set $\tilde{U}$, $M_0\subset \tilde{U}\subset U$, and a homotopy $F_t:{\mathbb D}^i\to \Gamma(\mathcal{R}|_{\tilde{U}})$ satisfying the following conditions: \begin{enumerate}\item $F_t(z)=F_0(z)$ for all $z\in {\mathbb S}^{i-1}$ and \item $F_1$ maps ${\mathbb D}^i$ into $Sol(\mathcal R|_{\tilde{U}})$. \end{enumerate}} \end{definition} \subsection{Open relations on open manifolds} We shall here apply the Holonomic Approximation Theorem to open relations on open manifolds. \begin{definition}{\em A manifold is said to be \emph{closed} if it is compact and without boundary. A manifold is \emph{open} if it is not closed.} \end{definition} \begin{remark}{\em Every open manifold admits a Morse function $f$ without local maxima. The codimension of the Morse complex of such a function is, therefore, strictly positive (\cite{milnor_morse},\cite{milnor}). The gradient flow of $f$ brings the manifold into an arbitrarily small neighbourhood of the Morse complex. In fact, one can get a polyhedron $K\subset M$ such that codim\,$K>0$, and an isotopy $\phi_t:M\to M$, $t\in[0,1]$, such that $K$ remains pointwise fixed and $\phi_1$ takes $M$ into an arbitrarily small neighbourhood $U$ of $K$. The polyhedron $K$ is called a \emph{core} of $M$.}\label{core}\index{core}\end{remark} \begin{proposition} Let $p:X\to M$ be a smooth vector bundle over an open manifold $M$. Let $\mathcal R$ be an open subset of the jet space $X^{(k)}$. Then given any section $\sigma$ of $\mathcal R$ there exist a core $K$ of $M$ and a holonomic section $\sigma':Op\,K\to X^{(k)}$ such that the linear homotopy between $\sigma$ and $\sigma'$ lies completely within $\Gamma(\mathcal R)$ over $Op\,K$. \label{h-principle}\end{proposition} \begin{proof} We fix a metric on $X^{(k)}$. Since $\mathcal R$ is an open subset of $X^{(k)}$, the space of sections of $\mathcal R$ is an open subset of $\Gamma(X^{(k)})$ in the fine $C^0$-topology. Therefore, given a section $\sigma$ of $\mathcal R$, there exists a positive function $\varepsilon$ on $M$ satisfying the following condition: \[\tau\in\Gamma(X^{(k)}) \text{ and } dist\,(\sigma(x),\tau(x))<\varepsilon(x) \text{ for all }x\in M\ \ \ \Rightarrow \tau \text{ is a section of } \mathcal R.\] Consider a core $A$ of $M$ and a $\delta$-tubular neighbourhood of $A$ for some positive $\delta$. By the Holonomic Approximation Theorem (Theorem~\ref{T:holonomic_approximation}) there exist a diffeotopy $\delta_t$ and a holonomic section $\sigma'$ such that \begin{enumerate}\item $dist\,(x,\delta_t(x))<\delta(x)$ for all $x\in M$ and $t\in [0,1]$ and \item $dist\,(\sigma(x),\sigma'(x))<\varepsilon(x)$ for all $x\in U_\rho$, \end{enumerate} where $U_\rho$ is a $\rho$-tubular neighbourhood of $K=\delta_1(A)$ for some real number $\rho>0$.
Now take a smooth map $\chi :M\rightarrow [0,1]$ satisfying the following conditions: \[\chi \equiv 1 \text{ on } U_{\rho/2}\ \ \ \ \text{and}\ \ \ \text{supp\,}\chi \subset U_{\rho}.\] Define a homotopy $\sigma_t$ as follows: \[\sigma_t = \sigma +t\chi (\sigma'-\sigma), \ \ t\in [0,1].\] Then \begin{enumerate} \item $\sigma_0=\sigma$ and each $\sigma_t$ is globally defined; \item $\sigma_t=\sigma$ outside $U_\rho$ for each $t$; \item $\sigma_1$ is holonomic on $U_{\rho/2}$. \end{enumerate} Moreover, since the above homotopy between $\sigma$ and $\sigma'$ is linear, $\sigma_t$ lies in the $\varepsilon$-neighbourhood of $\sigma$ for each $t$. Hence the homotopy $\sigma_t$ lies completely within $\Gamma(\mathcal R)$ by the choice of $\varepsilon$. This completes the proof of the proposition since $K=\delta_1(A)$ is also a core of $M$.\end{proof} \begin{remark}\end{remark}\begin{enumerate}\item[(a)] Note that the core $K$ cannot be fixed a priori in the statement of the proposition. \item[(b)] The proposition, in fact, remains valid in the general set-up where $X$ is a smooth fibration (\cite{gromov_pdr}). In this case, however, the linearity condition on the homotopy $\sigma_t$ has to be dropped for obvious reasons.\end{enumerate} \subsection{Open Diff invariant relations and $h$-principle} The set of all diffeomorphisms of a manifold is a group under composition of maps. Let \emph{Diff}$(M)$ \index{\emph{Diff}$(M)$} denote the set of all local diffeomorphisms of $M$, i.e., all diffeomorphisms $f:U\to V$ where $U$, $V$ are open subsets of $M$. The composition of maps in \emph{Diff}$(M)$ is not defined for every pair of local diffeomorphisms. However, if $f,g$ are local diffeomorphisms of $M$, then $g\circ f\in Diff(M)$ if and only if the domain of $g$ is equal to the codomain of $f$. A subset $\mathcal D$ of \emph{Diff}$(M)$ is called a \emph{pseudogroup} if the following conditions are satisfied (\cite{geiges}): \begin{enumerate} \item If $f\in \mathcal D$ and $V$ is an open subset of the domain of $f$, then $f|_V:V\to f(V)$ is in $\mathcal D$. \item If the domain $U$ of $f$ has the decomposition $U=\cup_i U_i$ and if $f|_{U_i}:U_i\to f(U_i)\in \mathcal D$ for all $i$, then $f\in \mathcal D$. \item For any open set $U$, the identity map $id_U\in \mathcal D$. \item For any $f\in \mathcal D$, $f^{-1}\in \mathcal D$. \item If $f_1,f_2\in \mathcal D$ are such that $f_1\circ f_2$ is well defined then $f_1\circ f_2\in \mathcal D$. \end{enumerate} \begin{example}\end{example} \begin{enumerate}\item \emph{Diff}$(M)$ has all the above properties. \item The set of all local symplectomorphisms of a symplectic manifold $(M,\omega)$ (i.e., local diffeomorphisms preserving the symplectic form $\omega$) is a pseudogroup. \item The set of all local contactomorphisms of a contact manifold $(M,\xi)$ is a pseudogroup. \end{enumerate} \begin{definition}(\cite{geiges}) \em{A fibration $p:X\to M$ is said to be \emph{natural} if there exists a map $\Phi:Diff(M)\to Diff(X)$ having the following properties: \begin{enumerate} \item For $f\in Diff(M)$ with domain $U$ and target $V$, $\Phi(f):p^{-1}(U)\to p^{-1}(V)$ is such that $p\circ \Phi(f)=f\circ p$. \begin{equation} \xymatrix@=2pc@R=2pc{ p^{-1}(U) \ar@{->}[r]^-{\Phi(f)}\ar@{->}[d] & p^{-1}(V)\ar@{->}[d]\\ U \ar@{->}[r]_-{f} & V }\label{D:extension} \end{equation} \item $\Phi(id_U)=id_{p^{-1}(U)}$. \item If $f,g\in Diff(M)$ are composable, then $\Phi(f\circ g)=\Phi(f)\circ \Phi(g)$.
\item For any open set $U$ in $M$, $\Phi:Diff(U)\to Diff(p^{-1}(U))$ is continuous with respect to the $C^{\infty}$ compact open topologies. \end{enumerate} The map $\Phi$ satisfying (1) - (4) above is called a \emph{continuous extension} of \emph{Diff}$(M)$. } \end{definition} \begin{example}\label{ex:extension} \end{example} \begin{enumerate}\item Let $X=M\times N$ be the trivial bundle over a manifold $M$ with fibre $N$, where $N$ is also a manifold. The group of diffeomorphisms of $M$ has a natural action on the space $C^\infty(M,N)$ given by $\delta\mapsto \delta^*f=f\circ\delta$. This gives an extension of \emph{Diff}$(M)$ to \emph{Diff}$(X)$ as follows: If $\delta:U\to V$ belongs to \emph{Diff}$(M)$ then \[\Phi(\delta):(\text{id}_U, f)\mapsto (\text{id}_V,f\circ\delta^{-1}),\ \ \ f\in C^\infty(U,N).\] \item All exterior bundles are natural. The pull-back operation on forms defines an extension of \emph{Diff}$(M)$ to \emph{Diff}$(\wedge^k(T^*M))$ for all $k\geq 1$: If $\delta:U\to V$ is a local diffeomorphism of $M$ then \[\Phi(\delta):\omega_x\mapsto (d\delta^{-1})_{\delta(x)}^*\omega_x,\ \ \ \omega_x\in \wedge^k(T_x^*M), x\in U.\] \end{enumerate} Any continuous extension $\Phi$ defines an `action' of \emph{Diff}$(M)$ on the space of local sections of $X$. Furthermore, $\Phi$ naturally gives an extension of \emph{Diff}$(M)$ to \emph{Diff}$(X^{(k)})$ which we shall denote by $\Phi^k:Diff(M)\to Diff(X^{(k)})$. For any $f:U\to V$ in \emph{Diff}$(M)$, \begin{equation}\Phi^k(f)(j^k_x\sigma)=j^k_{f(x)}(\Phi(f)\circ \sigma \circ f^{-1}), \ \ x\in U.\label{D:extension_jet}\end{equation} This gives an `action' of \emph{Diff}$(M)$ on the jet space $X^{(k)}$. For brevity, we shall denote the $k$-jet $\Phi^k(f)(j^k_x\sigma)$ by $f^*(j^k_x\sigma)$. \begin{definition}(\cite{geiges}) \em{Let $X\to M$ be a natural fibration with an extension $\Phi$. A relation $\mathcal{R}\subset X^{(k)}$ is said to be $\mathcal D$-invariant (for some pseudogroup $\mathcal D$)\index{$\mathcal D$-invariant} if $\Phi^k(f)$ maps $\mathcal{R}$ into itself for all $f\in\mathcal D$. We also say, in this case, that $\mathcal R$ is invariant under the action of $\mathcal D$.} \end{definition} \begin{example}\label{ex:invariant_relation}\end{example} \begin{enumerate}\item Let $\mathcal R$ denote the relation consisting of 1-jets of germs of local immersions of a manifold $M$ into another manifold $N$. Then $\mathcal R$ can be identified with the subset of \emph{Hom}$(TM,TN)$ consisting of all injective linear maps; since injectivity is an open condition, $\mathcal R$ is open. Also, it is easy to see that $\mathcal R$ is invariant under the natural action of \emph{Diff}$(M)$ (see Example~\ref{ex:extension}(1)). Similarly, the relation consisting of 1-jets of germs of local submersions is also open and \emph{Diff}$(M)$-invariant. \item Let $\mathcal R$ denote the set of 1-jets of germs of 1-forms $\alpha$ on a manifold $M$ such that $d\alpha$ is non-degenerate. Since non-degeneracy is an open condition, it can be shown that $\mathcal R$ is open (see Lemma~\ref{lemma_lcs}). Furthermore, it is easy to see that if $\omega$ is a symplectic form then so is $f^*\omega$ for any diffeomorphism $f$ of $M$. Hence, $\mathcal R$ is clearly invariant under the natural action of \emph{Diff}$(M)$ on $(T^*M)^{(1)}$ (see Example~\ref{ex:extension}(2)). \item Let $\mathcal R$ be the set of 1-jets of germs of contact forms on an odd-dimensional manifold $M$.
The defining condition of contact forms (Definition~\ref{contact_form}) is an open condition; therefore, $\mathcal R$ is an open relation. Moreover, if $\alpha$ is a contact form then $f^*\alpha$ is also a contact form for any diffeomorphism $f$ of $M$. Thus $\mathcal R$ is invariant under the natural action of \emph{Diff}$(M)$ on $(T^*M)^{(1)}$ (see Example~\ref{ex:extension}(2)). \end{enumerate} The following result, due to Gromov, is the first general result in the theory of $h$-principle. We shall refer to it as the Open Invariant Theorem\index{Open Invariant Theorem}. \begin{theorem}(\cite{gromov}) Every open, Diff$(M)$-invariant relation $\mathcal R$ on an open manifold $M$ satisfies the parametric $h$-principle.\label{open-invariant} \end{theorem} \begin{proof} We give a very brief outline of the proof of the ordinary $h$-principle. Since $M$ is an open manifold, it has a core $K$ which is by definition a polyhedron of positive codimension. Hence by Proposition~\ref{h-principle} and part (b) of the previous remark, any section $\sigma_0$ of $\mathcal R$ can be homotoped to a holonomic section $\sigma_1$ on an open neighbourhood $U$ of $K$ such that the homotopy $\sigma_t$ lies in $\Gamma(\mathcal R|_U)$. Now, $K$ being a core of the open manifold $M$, there exists an isotopy $\delta_t$ of $M$ such that $\delta_1$ maps $M$ into $U$. Since $\mathcal R$ is invariant under the action of \emph{Diff}$(M)$, the sections $\delta_1^*(\sigma_t)$, $t\in \mathbb I$, lie in $\Gamma(\mathcal R)$; moreover, $\delta_1^*\sigma_1$ is holonomic. On the other hand, the homotopy $\delta_t^*\sigma_0$ also lies in $\Gamma(\mathcal R)$. The concatenation of the two homotopies defines a homotopy between $\sigma_0$ and $\delta_1^*\sigma_1$, which is a holonomic section of $\mathcal R$. Thus $\mathcal R$ satisfies the $h$-principle. \end{proof} For a detailed proof of the above result we refer to \cite{haefliger2}. \subsection{Open, non-Diff invariant relations and $h$-principle} If a relation is invariant under the action of a smaller pseudogroup of diffeomorphisms, say $\mathcal D$, we may still expect the $h$-principle to hold, provided $\mathcal D$ has some additional properties. \begin{definition} {\em (\cite{gromov_pdr}) Let $M_{0}$ be a submanifold of $M$ of positive codimension and let $\mathcal{D}$ be a pseudogroup of local diffeomorphisms of $M$. We say that $M_0$ \emph{is sharply movable} by $\mathcal{D}$, if given any hypersurface $S$ in an open set $U$ in $M_0$ and any $\varepsilon>0$, there is an isotopy $\delta_{t}$, $t\in\mathbb I$, in $\mathcal{D}$ and a positive real number $r$ such that the following conditions hold: \begin{enumerate}\item[$(i)$] $\delta_{0}|_U=id_{U}$, \item[$(ii)$] $\delta_{t}$ fixes all points outside the $\varepsilon$-neighbourhood of $S$, \item[$(iii)$] $dist(\delta_{1}(x),M_{0})\geq r$ for all $x\in S$,\end{enumerate} where $dist$ denotes the distance with respect to any fixed metric on $M$.}\label{D:sharp_diffeotopy}\index{sharp diffeotopy}\end{definition} The diffeotopy $\delta_t$ will be referred to as a \emph{sharply moving diffeotopy}. A pseudogroup $\mathcal D$ is said to have the \emph{sharply moving property} if every submanifold $M_0$ of positive codimension is sharply movable by $\mathcal D$. \begin{example}\end{example} \begin{enumerate}\item Let $M$ be a smooth manifold.
A diffeomorphism $f:M\times\R\to M\times\R$ is called a fibre-preserving diffeomorphism if $\pi\circ f=\pi$, where $\pi:M\times\R\to M$ is the projection onto the first factor. The set $\mathcal{D}(M\times\R,\pi)$ consisting of fibre-preserving diffeomorphisms of $M\times\R$ forms a subgroup of \emph{Diff}$(M\times\R)$. It is also easy to see that $\mathcal{D}(M\times\R,\pi)$ sharply moves $M=M\times \{0\}$ in $M\times\R$. \item Symplectomorphisms of a symplectic manifold $(M,\omega)$ have the sharply moving property (\cite{gromov_pdr}). \item Contactomorphisms of a contact manifold $(M,\alpha)$ also have the sharply moving property. (We refer to Theorem~\ref{CT} for a proof of this fact.) \label{ex:sharp_diffeotopy}\end{enumerate} We end this section with the following result due to Gromov (\cite{gromov_pdr}). \begin{theorem} \label{T:gromov-invariant} Let $p:X\to M$ be a smooth fibration and $\mathcal{R}\subset X^{(r)}$ an open relation which is invariant under the action of a pseudogroup $\mathcal{D}$. If $\mathcal{D}$ sharply moves a submanifold $M_{0}$ in $M$ of positive codimension then the parametric h-principle holds for $\mathcal{R}$ on $Op\,(M_{0})$. \end{theorem} \begin{proof}Let $\sigma_0$ be a section of $\mathcal R$ on $Op\,M_0$. We apply the Holonomic Approximation Theorem to $\sigma_0$ as in Proposition~\ref{h-principle}, and obtain a homotopy $\sigma_t$ in $\Gamma(\mathcal R)$ defined over $Op\,(\delta_1(M_0))$. However, this time we take the diffeotopies $\delta_t$ from $\mathcal D$; this can be done because $\mathcal D$ sharply moves $M_0$. Since $\mathcal R$ is invariant under the action of $\mathcal D$, we can bring the homotopy onto an open neighbourhood of $M_0$ by the action of $\mathcal D$. Indeed, the two homotopies $\delta_t^*\sigma_0$ and $\delta_1^*\sigma_t$ lie in $\Gamma(\mathcal R)$ over $Op\,(M_0)$. The concatenation of these two gives a path between $\sigma_0$ and the holonomic section $\delta_1^*\sigma_1$ within $\Gamma(\mathcal R|_{Op\,M_0})$, proving the $h$-principle. \end{proof} \newpage \section{Some examples of $h$-principle} \subsection{Early evidence of $h$-principle} Early evidence of the $h$-principle can be found in the work of Nash on isometric immersions (\cite{nash1}, \cite{nash2}) and in the works of Smale and Hirsch (\cite{hirsch},\cite{smale1},\cite{smale2}), Phillips (\cite{phillips},\cite{phillips1}) and Feit (\cite{feit}). The general framework of the $h$-principle developed by Gromov unified these works and gave many new results. We state some of these results here, which will be referred to in the later chapters. \begin{theorem}(Smale-Hirsch Immersion theorem \cite{hirsch}) Let $M$ and $N$ be smooth manifolds with $\dim M<\dim N$. Then the space of smooth immersions $M\to N$ is weak homotopy equivalent to the space of bundle monomorphisms $TM\to TN$.\end{theorem} \begin{theorem}(Phillips Submersion theorem \cite{phillips}) \label{Phillips Submersion theorem} Let $M$ be an open manifold such that $\dim M\geq \dim N$. Then the space of smooth submersions $M\to N$ is weak homotopy equivalent to the space of bundle epimorphisms $TM\to TN$.\end{theorem} \begin{theorem}$($Gromov-Phillips Theorem \cite{gromov}, \cite{phillips1}$)$ \index{Gromov-Phillips Theorem} \label{T:Gromov Phillip} Let $M$ be an open manifold and $N$ a foliated manifold with a foliation $\mathcal F_N$. Let $\pi:TN\to\nu(\mathcal F_N)$ denote the projection onto the normal bundle of $\mathcal F_N$.
Then the space of smooth maps $f:M\to (N,\mathcal F_N)$ transversal to $\mathcal F_N$ has the same weak homotopy type as the space of all bundle homomorphisms $F:TM\to TN$ such that $\pi\circ F:TM\to \nu(\mathcal F_N)$ is an epimorphism. \end{theorem} \begin{proof} Smooth maps $f:M\to N$ transversal to the foliation $\mathcal F_N$ are solutions of a first order relation $\mathcal{R}^{\pitchfork}$ on $M$ defined as follows: \[\mathcal{R}^{\pitchfork}=\{(x,y,F)\in J^1(M,N)| \pi\circ F:T_xM\to \nu_y(\mathcal{F}_N) \text{ is an epimorphism}\} \] The relation $\mathcal{R}^{\pitchfork}$ is open, as the set of all surjective linear maps $T_xM\to \nu_y(\mathcal{F}_N)$ is an open subset of \emph{Hom}$(T_xM,\nu_y(\mathcal F_N))$. Furthermore, $\mathcal{R}^{\pitchfork}$ is invariant under the action of \emph{Diff}$(M)$. Indeed, if $\delta:U\to V$ is in \emph{Diff}$(M)$ and $f:V\to N$ is transversal to $\mathcal F_N$ then clearly $f\circ\delta$ is transversal to $\mathcal F_N$. Hence, by Theorem~\ref{open-invariant}, $\mathcal{R}^{\pitchfork}$ satisfies the parametric $h$-principle provided $M$ is open. Observe that a section of $\mathcal{R}^{\pitchfork}$ can be realised as a bundle morphism $F:TM\to TN$ such that $\pi\circ F:TM\to \nu(\mathcal{F}_N)$ is an epimorphism. This completes the proof.\end{proof} \subsection{$h$-principle in symplectic and contact geometry} We have already noted in Example~\ref{ex:extension} that the diffeomorphism group of a manifold $M$ has a natural action on the space of differential forms on the manifold and the space of symplectic forms (resp. the space of contact forms) is invariant under this action (Example~\ref{ex:invariant_relation}). Furthermore, the non-degeneracy conditions on symplectic and contact forms are open conditions. The following results in symplectic and contact geometry were obtained as applications of the Open Invariant Theorem (Theorem~\ref{open-invariant}). \begin{theorem}(\cite{gromov}) Let $M$ be an open manifold and $\zeta$ be a fixed de Rham cohomology class in $H^2(M)$. Then the space of symplectic forms in the cohomology class $\zeta$ has the same weak homotopy type as the space of almost symplectic forms on $M$.\label{gromov_symplectic} \end{theorem} In Corollary~\ref{conformal_symp}, we shall obtain a similar classification for locally conformal symplectic forms on open manifolds. \begin{definition}{\em Let $M$ be a manifold of dimension $2n+1$. An almost contact structure on $M$ is a pair $(\alpha,\beta)\in \Omega^1(M)\times \Omega^2(M)$ such that $\alpha \wedge \beta^n$ is a nowhere vanishing form on $M$.}\index{almost contact structure}\end{definition} \begin{theorem}(\cite{gromov}) The space of contact forms on an open manifold has the same weak homotopy type as the space of almost contact structures on it.\label{gromov_contact} \end{theorem} The above results show that the obstruction to the existence of a symplectic form (resp. a contact form) on an open manifold is purely topological. These results are not true for closed manifolds. \subsection{Homotopy classification of foliations\label{haefliger_map}} Let $M$ be a smooth manifold and $Fol^q(M)$ be the set of all codimension $q$ foliations on $M$\index{$Fol^q(M)$}. Recall the classifying space $B\Gamma_q$ and the universal $\Gamma_q$ structure $\Omega_q$ on it (see Subsection \ref{classifying space}). If $\mathcal F\in Fol^q(M)$ and $f:M\to B\Gamma_{q}$ is a classifying map of $\mathcal F$, then $f^*\Omega_q= \mathcal F$ as $\Gamma_q$-structures.
We define a vector bundle epimorphism $TM\to \nu\Omega_q$ by the following diagram (see \cite{haefliger1}) \begin{equation} \xymatrix@=2pc@R=2pc{ TM \ar@{->}[r]^-{\pi}\ar@{->}[rd] & \nu \mathcal{F}\cong f^*(\nu \Omega_q) \ar@{->}[r]^-{\bar{f}}\ar@{->}[d] & \nu \Omega_q \ar@{->}[d]\\ & M \ar@{->}[r]_-{f} & B\Gamma_{q} }\label{F:H(foliation)} \end{equation} where $TM\to \nu(\mathcal F)$ is the quotient map and $(\bar{f},f)$ defines the pull-back diagram. The morphism $\bar{f}\circ \pi$ is defined uniquely only up to homotopy. Thus, there is a function \[H':Fol^q(M)\to \pi_0(\mathcal E(TM,\nu\Omega_q)),\] where $\mathcal E(TM,\nu\Omega_q)$ is the space of all vector bundle epimorphisms $F:TM\to \nu \Omega_q$ and $\pi_0(\mathcal E(TM,\nu\Omega_q))$ is the set of its components. \begin{definition} {\em Two foliations $\mathcal F_0$ and $\mathcal F_1$ on a manifold $M$ are said to be \emph{integrably homotopic} if there exists a foliation $\tilde{\mathcal F}$ on $M\times\R$ which is transversal to the trivial foliation of $M\times\R$ by leaves $M\times\{t\}$ ($t\in \R$) and such that the induced foliations on $M\times\{0\}$ and $M\times\{1\}$ coincide with $\mathcal F_0$ and $\mathcal F_1$ respectively.}\label{D:integrably_homotopic} \end{definition} If $\mathcal F_0$ and $\mathcal F_1$ are integrably homotopic as in Definition~\ref{D:integrably_homotopic} and if $F:M\times[0,1]\to B\Gamma_q$ is a classifying map of $\tilde{\mathcal F}$ then we have a diagram similar to (\ref{F:H(foliation)}) given as follows: \[ \xymatrix@=2pc@R=2pc{ T(M\times[0,1]) \ar@{->}[r]^-{\bar{\pi}}\ar@{->}[rd] & \nu \tilde{\mathcal F} \ar@{->}[r]^-{\bar{F}}\ar@{->}[d] & \nu \Omega_q \ar@{->}[d]\\ & M\times [0,1] \ar@{->}[r]_-{F} & B\Gamma_{q} } \] Let $i_t:M\to M\times\{t\}\hookrightarrow M\times\R$ denote the canonical injective map of $M$ into $M\times\{t\}$ and $f_t:M\to B\Gamma_q$ be defined as $f_t(x)=F(x,t)$ for $(x,t)\in M\times[0,1]$. Then $\bar{F}\circ \bar{\pi}\circ di_t:TM\to \nu(\Omega_q)$ defines a homotopy between $\bar{f}_0\circ\pi$ and $\bar{f}_1\circ\pi$, where $(\bar{f}_i,f_i):\nu(\mathcal F_i)\to \nu\Omega_q$, $i=0,1$, denote the pull-back diagrams. Thus, we get $H'(\mathcal F_0)= H'(\mathcal F_1)$. Hence, $H'$ induces a function \[H:\pi_0(Fol^q(M))\longrightarrow \pi_0(\mathcal E(TM,\nu\Omega_q)),\] where $\pi_0(Fol^q(M))$ denotes the set of integrable homotopy classes of codimension $q$ foliations on $M$. We shall refer to $H$ as the \emph{Haefliger map}\index{Haefliger map}. \begin{theorem}(\cite{haefliger1}) \label{HCF} If $M$ is an open manifold, then the Haefliger map induces a bijection between the sets $\pi_0(Fol^{q}(M))$ and $\pi_0(\mathcal E(TM,\nu\Omega_q))$.\index{Haefliger's Classification Theorem} \end{theorem} Let $Tr(M,\mathcal{F}_N)$ \index{$Tr(M,\mathcal{F}_N)$} be the space of smooth maps $f:M\to (N,\mathcal F_N)$ into a foliated manifold $(N,\mathcal F_N)$ and $\mathcal E(TM,\nu(\mathcal F_N))$ \index{$\mathcal E(TM,\nu(\mathcal F_N))$} denote the space of epimorphisms $F:TM\to \nu(\mathcal F_N)$. Then we have a commutative diagram \[\xymatrix@=2pc@R=2pc{ \pi_0(Tr(M,\mathcal{F}_N))\ar@{->}[r]^-{P}\ar@{->}[d]_-{\cong}^-{\pi_0(\pi \circ d)} & \pi_0(Fol^{q}(M))\ar@{->}[d]^-{H}\\ \pi_0(\mathcal E(TM,\nu \mathcal{F}_N))\ar@{->}[r] & \pi_0(\mathcal E(TM,\nu\Omega_q)) }\] in which the left vertical arrow is a bijection by the Gromov-Phillips Theorem (Theorem~\ref{T:Gromov Phillip}).
The function $P$ is induced by the natural map which takes an $f\in Tr(M,\mathcal F_N)$ onto the pull-back foliation $f^*\mathcal F_N$. On the other hand, there is a reverse path from $Fol^{q}(M)$ to $Tr(M,\mathcal{F}_N)$ for some foliated manifold $(N,\mathcal F_N)$ as suggested in Theorem~\ref{HL}. These two observations reduce the classification of foliations to the Gromov-Phillips Theorem. \begin{corollary}(\cite{haefliger1}) Let $M$ be an open manifold of dimension $n$ and let $\tau:M\to BGL(n)$ be a classifying map of the tangent bundle $TM$. There is a one-to-one correspondence between the integrable homotopy classes of codimension $q$ foliations on $M$ and the homotopy classes of lifts of $\tau$ to $B\Gamma_q\times BGL(n-q)$. In particular, a codimension $q$ distribution $D$ on $M$ is homotopic to a foliation if the classifying map of $TM/D$ lifts to $B\Gamma_q$.\label{C:haefliger} \end{corollary} In \cite{thurston}, Thurston generalized Haefliger's result to closed manifolds. He viewed a $\Gamma_q$ structure on a manifold $M$ as a triple $\Sigma=(\nu,Z,\mathcal{F})$, where $\nu$ is a $q$-dimensional vector bundle on $M$ with a section $Z$ and $\mathcal{F}$ is a foliation of codimension $q$ on a neighbourhood $U$ of $Z(M)$ which is transversal to the fibres of $\nu$. If $\mathcal{G}$ is a foliation of codimension $q$ then we can associate a $\Gamma_q$ structure $\Sigma(\mathcal{G})$ on $M$ to it by taking $\nu=\nu(\mathcal{G})$, $Z:M\hookrightarrow \nu(\mathcal{G})$ the zero section, and $\mathcal{F}=(\exp|_\nu)^*\mathcal{G}$. The vector bundle $\nu$ in this case embeds in $TM$. In this setting Thurston proved the following. \begin{theorem}(\cite{thurston}) Let $\Sigma=(\nu,Z,\mathcal{F})$ be a $\Gamma_q$ structure on a manifold $M$ with $q>1$. Then for any vector bundle monomorphism $i:\nu \to TM$, there exists a codimension $q$ foliation on $M$ whose induced $\Gamma_q$ structure is homotopic to $\Sigma$. \end{theorem} \chapter{Regular Jacobi structures on open manifolds} In this chapter we shall prove that locally conformal symplectic foliations and contact foliations on open manifolds satisfy the $h$-principle. We also interpret these results in terms of regular Jacobi structures. For basic definitions of foliations with geometric structures, we refer to Section~\ref{forms_foliations}. \section{Background of the problem - $h$-principle in Poisson geometry} In a recent article (\cite{fernandes}), Fernandes and Frejlich have proved the following $h$-principle for symplectic foliations. \begin{theorem} Let $M$ be an open manifold equipped with a foliation $\mathcal{F}_0$ and a nondegenerate 2-form $\Omega_0$ on $\mathcal{F}_0$. Then $(\mathcal{F}_0,\Omega_0)$ can be homotoped through such pairs to a pair $(\mathcal{F}_1,\Omega_1)$ such that $\Omega_1$ is a symplectic form on $\mathcal F_1$.\label{T:Fernandes_Poisson} \end{theorem} In the statement of Theorem~\ref{T:Fernandes_Poisson}, we cannot replace $\mathcal F_0$ by an arbitrary distribution, since it need not be homotopic to any integrable distribution at all (see Corollary~\ref{C:haefliger}). However, we can replace $\mathcal F_0$ by a distribution which is homotopic to a foliation. Taking this into account, Fernandes and Frejlich interpreted the above theorem in terms of regular Poisson structures as follows.
\begin{theorem} $($\cite{fernandes}$)$ Every regular bivector field $\pi_0$ on an open manifold can be homotoped to a regular Poisson bivector provided the distribution \text{Im\,}$\pi_0^\#$ is homotopic to an integrable one.\label{hprinciple_fernandes}\end{theorem} \noindent Since a symplectic form on a manifold corresponds to a non-degenerate Poisson structure, the above result may be seen as a generalisation of Theorem~\ref{gromov_symplectic} due to Gromov. The authors further remarked in \cite{fernandes} that there should be analogues of Theorem~\ref{hprinciple_fernandes} for Jacobi manifolds, in other words, for locally conformal symplectic foliations and contact foliations on open manifolds. The results of this chapter are inspired by this remark. In this connection we also recall a result of M. Bertelson. She observed that symplectic forms on a given foliation $\mathcal F$ may not satisfy the $h$-principle, even if the leaves of $\mathcal F$ are open manifolds (\cite{bertelson1}). However, she proved an $h$-principle under some `strong open-ness' condition on $\mathcal F$ (\cite{bertelson}). Following Bertelson we shall refer to such foliated manifolds $(M,\mathcal F)$ as \emph{open foliated manifolds}. Bertelson, in fact, obtained an $h$-principle for general relations on open foliated manifolds $(M,\mathcal F)$ which can be stated as follows: \begin{theorem}(\cite{bertelson}) If $(M,\mathcal{F})$ is an open foliated manifold, then any relation $\mathcal R$ which is open and invariant under foliation preserving diffeotopies of $(M,\mathcal F)$ satisfies the parametric $h$-principle.\label{foliation_hprinciple} \end{theorem} The $h$-principle for foliated symplectic forms was derived as a corollary of the above theorem by observing that the associated differential relation is open and invariant under the action of foliation preserving diffeotopies. \begin{theorem} (\cite{bertelson}) Let $(M,\mathcal{F})$ be an open foliated manifold. Then every non-degenerate foliated 2-form $\omega_0$ on $(M,\mathcal F)$ is homotopic through such 2-forms to a symplectic form $\omega_1$ on $\mathcal F$. \label{bertelson}\end{theorem} Theorem~\ref{bertelson} can also be viewed as an $h$-principle for regular Poisson structures with prescribed characteristic foliation. The requirement of an additional condition on the foliation is better understood when the result is stated in the following form:\\ \emph{Let $\pi_0$ be a regular bivector field (on a manifold $M$) for which the distribution $\mathcal D=\text{Im\ }{\pi_0}^{\#}$ integrates to a foliation satisfying some `strong open-ness' condition. Then $\pi_0$ can be homotoped through regular bivector fields $\pi_t$ to a Poisson bivector field $\pi_1$ such that the underlying distribution $\text{Im\ }{\pi_t}^{\#}$ remains constant.} \\ \begin{remark} {\em A contact analogue of Theorem~\ref{bertelson} also follows from Theorem~\ref{foliation_hprinciple}. Suppose that $(M,\mathcal F)$ is an open foliated manifold, where the dimension of $\mathcal F$ is $2n+1$. Let $(\alpha,\beta)$ be a section of $E= T^*{\mathcal F}\oplus\Lambda^2(T^*\mathcal F)$ which gives an almost contact structure on $\mathcal F$. The nowhere vanishing condition on $\alpha\wedge\beta^n$ is an open condition and hence defines an open subset $\mathcal R$ in the 1-jet space $E^{(1)}$. The non-vanishing condition is also invariant under the action of foliation preserving diffeotopies and hence the general theorem of Bertelson applies to this relation.
Therefore, the pair $(\alpha,\beta)$ can be homotoped in the space of almost contact structures on $\mathcal F$ to $(\eta,d_{\mathcal F}\eta)$ for some foliated 1-form $\eta$ on $(M,\mathcal F)$, where $d_{\mathcal F}$ is the coboundary map of the foliated de Rham complex. Note that $\eta$ is then a contact form on the foliation $\mathcal F$.}\end{remark} \section{Locally conformal symplectic foliations} In this section we prove an $h$-principle for locally conformal symplectic foliations on open manifolds. \begin{lemma} Let $M^{n}$ be a smooth manifold with a 1-form $\theta$. Then there exists an epimorphism $D_{\theta}:E^{(1)}=(T^*M)^{(1)}\rightarrow \wedge^{2}(T^*M)$ satisfying $D_{\theta}\circ j^{1}_\alpha=d_{\theta}\alpha$ so that the following diagram is commutative: \[\begin{array}{rcl} E^{(1)} & \stackrel{D_\theta}{\longrightarrow} & \wedge^{2}(T^*M)\\ \downarrow & & \downarrow \\ M & \stackrel{\text{id}_{M}}{\longrightarrow} & M \end{array}\] In particular, given any 2-form $\omega$ there exists a section $F_\omega:M\to E^{(1)}$ such that $D_\theta\circ F_\omega=\omega$.\label{lemma_lcs} \end{lemma} \begin{proof} Let $\theta$ be as in the hypothesis. Define $D_\theta(j^1_\alpha(x_0))=d_\theta\alpha(x_0)$ for any local 1-form $\alpha$ on $M$. To prove that the right hand side is independent of the choice of a representative $\alpha$, choose a local coordinate system $(x^1,...,x^n)$ around $x_{0}\in M$. We may then express $\alpha$ and $\theta$ as follows: \[\alpha=\Sigma_{i=1}^n \alpha_idx^i, \ \ \ \theta=\Sigma_{i=1}^n \theta_idx^i\] where $\alpha_i$ and $\theta_i$ are smooth (local) functions defined in a neighbourhood of $x_0$. The 1-jet $j^1_\alpha(x_0)$ is completely determined by the ordered tuple $(a_i,a_{ij})\in\mathbb{R}^{(n+n^{2})}$, where \[a_i=\alpha_i(x_0), \ \ \ a_{ij}=\frac{\partial \alpha_{i}}{\partial x^{j}}(x_{0}), \ \ \ i,j=1,2,\dots,n.\] Now, \[\begin{array}{rcl}d_\theta\alpha(x_0) & = & d\alpha(x_0)+\theta(x_0)\wedge\alpha(x_0)\\ & = & \Sigma_{i<j}[(a_{ji}-a_{ij})+(\theta_i(x_0)a_j-a_i\theta_j(x_0))]dx^i\wedge dx^j\end{array}\] This shows that $d_\theta\alpha(x_0)$ depends only on the 1-jet $j^1_\alpha$ at $x_0$ and the value of $\theta(x_0)$. Since $\theta$ is fixed, $D_\theta$ is well-defined. Clearly, $D_{\theta}\circ j^1_\alpha = d_\theta\alpha$ for any 1-form $\alpha$. It is easy to check that $D_\theta$ is a vector bundle epimorphism. Indeed, given a set of real numbers $b_{ij}$, $1\leq i<j\leq n$, the following system of linear equations \[(a_{ji}-a_{ij})+(\theta_{i}(x_{0})a_{j}-a_{i}\theta_{j}(x_{0}))=b_{ij}\] has a solution, namely $a_i=0$, $a_{ji}=-a_{ij}=\frac{b_{ij}}{2}$. Therefore, the fibres of $D_\theta$ are affine subspaces and hence contractible. This implies that $D_\theta$ has a right inverse. Hence every section $\omega :M \rightarrow \wedge^{2}(T^*M)$ can be lifted to a section $F_{\omega}:M \rightarrow (T^*M)^{(1)}$ such that $D_{\theta} F_{\omega}=\omega$. Moreover, any two such lifts of $\omega$ are homotopic.\end{proof} \begin{proposition}Let $M$ be an open manifold and $\mathcal F_0$ be a foliation on $M$. Let $\theta$ be a closed 1-form on $M$.
Then any $\mathcal F_0$-leafwise non-degenerate 2-form $\omega_0$ on $M$ can be homotoped through such forms to a 2-form $\omega_1$ which is $d_\theta$-exact on a neighbourhood $U$ of some core $K$ of $M$.\label{approx_lcs} \end{proposition} \begin{proof} Let $\mathcal S$ denote the set of all elements $\omega_x$ in $\wedge^2(T_x^*M)$, $x\in M$, such that the restriction of $\omega_x$ to $T_x\mathcal F_0$ is non-degenerate. Since non-degeneracy is an open condition, $\mathcal S$ is an open subset of $\wedge^2(T^*M)$. Let \[{\mathcal R}_\theta = D_\theta^{-1}(\mathcal S)\subset (T^*M)^{(1)},\] where $D_\theta:(T^*M)^{(1)}\to \wedge^2(T^*M)$ is defined as in Lemma~\ref{lemma_lcs}. Then ${\mathcal R}_\theta$ is an open relation. Let $\sigma_0$ be a section of ${\mathcal R}_\theta$ such that $D_\theta\circ \sigma_0=\omega_0$. By Proposition~\ref{h-principle}, there exists a homotopy of sections $\sigma_t:M\to \mathcal R_{\theta}$ such that $\sigma_1$ is holonomic on an open neighbourhood $U$ of $K$, where $K$ is a core of $M$. Therefore, $\sigma_1=j^1_\alpha$ for some 1-form $\alpha$ on $U$. The 2-forms $\omega_t=D_\theta\circ \sigma_t$, $t\in [0,1]$, are sections of $\wedge^2(T^*M)$ with values in $\mathcal S$. Hence, \begin{enumerate} \item $\omega_t$ is $\mathcal{F}_{0}$-leafwise non-degenerate for all $t\in [0,1]$, and \item $\omega_1=d_\theta\alpha$ on $U$; in particular $\omega_1$ is $d_\theta$-closed on $U$. \end{enumerate} \end{proof} \begin{theorem} Let $M^{2n+q}$ be an open manifold with a codimension $q$ foliation $\mathcal{F}_{0}$ and a 2-form $\omega_0$ on $M$ which is $\mathcal{F}_{0}$-leafwise non-degenerate. Let $\xi \in H_{deR}^{1}(M,\R)$ be a fixed de Rham cohomology class. Then there exists a homotopy $(\mathcal{F}_t, \omega_t)$ and a closed 1-form $\theta_0$ representing $\xi$ such that \begin{enumerate}\item $\omega_t$ is $\mathcal{F}_t$-leafwise non-degenerate and \item $\omega_1$ is $d_{\theta_0}$-closed, that is, $d\omega_1+\theta_0\wedge \omega_1=0$. \end{enumerate} \label{lcs}\end{theorem} \begin{proof} To prove the result we proceed as in \cite{fernandes}. Consider the canonical Grassmann bundle $G_{2n}(TM)\stackrel{\pi}{\longrightarrow}M$ for which the fibre $\pi^{-1}(x)$ over a point $x\in M$ is the Grassmannian of $2n$-planes in $T_x M$. The space $Dist_q(M)$ of codimension $q$ distributions on $M$ can be identified with the section space $\Gamma(G_{2n}(TM))$. We topologize $Dist_q(M)$ by the $C^\infty$ compact open topology. The space $Fol^{q}(M)$ consisting of codimension $q$ foliations can be viewed as a subspace of $Dist_q(M)$ if we identify a foliation with its tangent distribution. Let $\Phi_q$ be the subspace of $Dist_q(M) \times \Omega^2(M)$ defined as follows: \[\Phi_q=\{(\mathcal F,\omega)|\omega \text{ is } \mathcal F\text{-leafwise non-degenerate}\}\] Fix a closed 1-form $\theta$ which represents the class $\xi$. By Proposition~\ref{approx_lcs}, there exists a homotopy of 2-forms, $\omega'_t$, $0\leq t\leq 1$, such that \begin{enumerate}\item $\omega_0'=\omega_0$, \item $\omega'_t$ is $\mathcal{F}_0$-leafwise non-degenerate and \item $\omega'_1$ is $d_\theta$-closed on some open set $U$ containing a core $K$ of $M$. \end{enumerate} Then $(\mathcal{F}_0,\omega'_t) \in \Phi_q$ for $0\leq t\leq 1$. Since $M$ is an open manifold there exists an isotopy $g_t$, $0\leq t\leq 1$, with $g_{0}=id_{M}$ such that $g_1$ takes $M$ into $U$ (see Remark~\ref{core}).
Now, we define $(\mathcal{F}''_t, \omega''_t) \in \Phi_{q}$ for $t\in [0,1]$ by setting \begin{center}$\mathcal F''_t=g_t^*\mathcal F_0, \ \ \ \omega''_t=g_t^* \omega'_1.$\end{center} Then $\omega''_t$ is $\mathcal F''_t$-leafwise non-degenerate. Further, it is easy to see that $\omega''_1$ is $d_{g_1^*\theta}$-closed: indeed, \[\begin{array}{rcl} d_{g_1^*{\theta}}\omega''_1 & = & d_{g_1^*{\theta}}(g_1^* \omega'_1)\\ & = & d(g_1^* \omega'_1) + g_1^*\theta \wedge g_1^* \omega'_1\\ &=& g_1^*[d \omega'_1+ \theta \wedge \omega'_1]\\ &=& g_1^*d_\theta \omega'_1=0 \end{array}\] since $\omega'_1$ is $d_\theta$-closed on $U$ and $g_1$ maps $M$ into $U$. Since $g_1$ is homotopic to the identity map of $M$, the de Rham cohomology class $[g_1^*\theta]=[\theta]=\xi$. The desired homotopy is obtained by the concatenation of the two homotopies, namely $(\mathcal{F}_0,\omega'_t)$ and $(\mathcal{F}''_t,\omega''_t)$, and taking $\theta_0=g_1^*\theta$.\end{proof} \begin{remark}{\em Theorem~\ref{T:Fernandes_Poisson} follows as a particular case of the above result by taking $\theta$ equal to zero.}\end{remark} \begin{theorem} Let $M$ be an open manifold and $\xi$ be any de Rham cohomology class in $H^1(M,\R)$. Then every almost symplectic foliation $(\mathcal F_0,\omega_0)$ is homotopic to a locally conformal symplectic foliation $(\mathcal F_1,\omega_1)$ with foliated Lee form $\theta$ such that the canonical morphism $H^1(M,\R)\to H^1(M,\mathcal F_1)$ maps $\xi$ onto the foliated de Rham cohomology class of $\theta$. \end{theorem} \begin{proof} Let $\tilde{\omega}_0$ be a global 2-form on $M$ which extends $\omega_0$. By Theorem~\ref{lcs}, we get a homotopy $(\mathcal F_t, \tilde{\omega}_t)$ and a closed 1-form $\tilde{\theta}$ representing $\xi$ satisfying the following: \begin{enumerate} \item $\tilde{\omega}_t$ is $\mathcal F_t$-leafwise non-degenerate, \item $d\tilde{\omega}_1+\tilde{\theta}\wedge\tilde{\omega}_1=0$. \end{enumerate} Let $\omega_t$ be the foliated 2-form obtained by restricting $\tilde{\omega}_t$ to $T\mathcal F_t$ and let $\theta$ be the restriction of $\tilde{\theta}$ to $T\mathcal F_1$. Note that we have a commutative diagram as follows: \[ \xymatrix@=2pc@R=2pc{ \Omega^k(M) \ar@{->}[r]^-{d}\ar@{->}[d]_-{r} & \Omega^{k+1}(M)\ar@{->}[d]^-{r}\\ \Gamma(\wedge^k(T^*\mathcal F_1))\ar@{->}[r]_-{d_{\mathcal{F}_1}} & \Gamma(\wedge^{k+1}(T^*\mathcal{F}_1)) } \] where the vertical arrows are the restriction maps. Hence, relation (2) above implies that $d_{{\mathcal F}_1}\omega_1+\theta\wedge\omega_1=0$; thus, $(\mathcal F_1, \omega_1)$ is a locally conformal symplectic foliation on $M$ and $\theta$ is the foliated Lee form of $\omega_1$. Further, the foliated de Rham cohomology class of $\theta$ in $H^1(M,\mathcal F_1)$ is the image of $\xi$ under the induced morphism $H_{deR}^1(M,\R)\to H^1(M,\mathcal F_1)$. \end{proof} \begin{corollary}Let $M$ be an open manifold and $\omega$ be a non-degenerate 2-form on $M$. Given any de Rham cohomology class $\xi\in H^1(M,\R)$, $\omega$ can be homotoped through non-degenerate 2-forms to a locally conformal symplectic form $d_\theta \alpha$, where the de Rham cohomology class of $\theta$ is $\xi$.\label{conformal_symp}\end{corollary} \begin{proof}This is a direct consequence of Theorem~\ref{lcs}. Indeed, the form $\omega_1$ in the theorem can be taken to be $d_\theta$-exact (see Proposition~\ref{approx_lcs}).\end{proof} \section{Contact foliations} In this section we prove an $h$-principle for contact foliations on open manifolds.
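Throughout this section the standard local model may be kept in mind; we record it here purely as an illustration. On $\R^{2n+1}$ with coordinates $(z,x_1,y_1,\dots,x_n,y_n)$, the 1-form \[\alpha_{st}=dz-\sum_{i=1}^{n}y_i\,dx_i\] satisfies \[\alpha_{st}\wedge(d\alpha_{st})^{n}=n!\ dz\wedge dx_1\wedge dy_1\wedge\cdots\wedge dx_n\wedge dy_n,\] which is nowhere vanishing. Thus $(\alpha_{st},d\alpha_{st})$ is an almost contact structure in which the 2-form is exact, i.e., $\alpha_{st}$ is a contact form; by the Darboux theorem, the leafwise contact forms produced below are locally of this type.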
\begin{lemma} Let $M^n$ be a smooth manifold and $E=T^*M$ be the cotangent bundle of $M$. Then there exists a vector bundle epimorphism $\bar{D}$ \[\begin{array}{rcl} E^{(1)} & \stackrel{\bar{D}}{\longrightarrow} & T^*M\oplus \wedge^{2}(T^*M)\\ \downarrow & & \downarrow \\ M & \stackrel{id_{M}}{\longrightarrow} & M \end{array}\] such that $\bar{D}\circ j^{1}_\alpha=(\alpha,d\alpha)$ for any 1-form $\alpha$ on $M$. Moreover, any section of $T^*M\oplus\wedge^2(T^*M)$ can be lifted to a section of $E^{(1)}$ through $\bar{D}$. \label{lemma_contact}\end{lemma} \begin{proof} Define $\bar{D}$ by \[\bar{D}(j^1_\alpha(x_0))=(\alpha(x_0),d\alpha(x_0))\] for any local 1-form $\alpha$ defined near a point $x_0$. It follows from the proof of Lemma~\ref{lemma_lcs} that this map is well defined. Hence $\bar{D}\circ j^1_\alpha=(\alpha, d\alpha)$ for any 1-form $\alpha$. Let $(x^{1},...,x^{n})$ be a local coordinate system around $x_{0}\in M$ and $\alpha=\Sigma_{i=1}^{n}\alpha_{i}dx^{i}$ be the representation of $\alpha$ with respect to these coordinates. Then $j^{1}_{\alpha}(x_{0})$ is uniquely determined by the ordered tuple $(a_{i},a_{ij})\in \mathbb{R}^{n+n^{2}}$ as in Lemma~\ref{lemma_lcs} and \[\bar{D}(j^{1}_{\alpha}(x_{0}))= (\alpha(x_{0}),d\alpha(x_{0}))= (\Sigma_{i=1}^n a_i dx^i, \Sigma_{i<j}(a_{ij}-a_{ji})dx^i \wedge dx^j)\] It is easy to see that the following system of equations \begin{center} $a_i=b_i$ \ \ and \ \ $a_{ij}-a_{ji}=b_{ij}$ \ \ for all $i<j$, $i,j=1,...,n$, \end{center} is solvable in $a_i$ and $a_{ij}$. Hence, $\bar{D}$ is an epimorphism, and the fibres of $\bar{D}$ are affine subspaces. Consequently, any section $(\theta,\omega):M\rightarrow T^*M\oplus \wedge^{2}(T^*M)$ can be lifted to a section $F_{(\theta,\omega)}:M\rightarrow E^{(1)}$ such that $\bar{D}\circ F_{(\theta,\omega)}=(\theta,\omega)$, and any two such lifts of a given $(\theta,\omega)$ are homotopic.\end{proof} \begin{proposition}Let $M$ be an open manifold and $\mathcal F_0$ be a foliation on $M$ with leaves of dimension $2n+1$. Let $(\theta_0,\omega_0)$ be a pair consisting of a 1-form $\theta_0$ and a 2-form $\omega_0$ on $M$ such that the restrictions of $(\theta_0,\omega_0)$ to the leaves of $\mathcal F_0$ are almost contact structures. Then $(\theta_0,\omega_0)$ can be homotoped through such pairs to a pair $(\theta_1, \omega_1)$, where $\omega_1=d\theta_1$ on a neighbourhood $U$ of some core $K$ of $M$.\label{approx_contact} \end{proposition} \begin{proof} Let $\mathcal C$ denote the set of all pairs $(\theta_x, \omega_x)\in T_x^*M\times \wedge^2(T_x^*M)$, $x\in M$, such that $\iota_D^*\theta_x\wedge(\iota_D^*\omega_x)^n\neq 0$, where $D=T_x\mathcal F_0$. Then $\mathcal C$ is an open subset of $T^*M\oplus \wedge^2(T^*M)$. Let \[{\mathcal R} = \bar{D}^{-1}(\mathcal C)\subset E^{(1)},\] where $E=T^*M$ and $\bar{D}$ is as in Lemma~\ref{lemma_contact}. Then ${\mathcal R}$ is an open first order relation. Let $\sigma_0$ be a section of $\mathcal R$ such that $\bar{D}\circ\sigma_0=(\theta_0,\omega_0)$. By Proposition~\ref{h-principle}, there exists a homotopy of sections $\sigma_t$ lying in ${\mathcal R}$ such that $\sigma_1$ is holonomic on an open neighbourhood $U$ of some core $K$ of $M$. Thus, there exists a $1$-form $\theta_1$ on $U$ such that $\sigma_1=j^1_{\theta_1}$. Evidently, the pairs $(\theta_t,\omega_t)=\bar{D}\circ \sigma_t$, $t\in [0,1]$, are sections of $T^*M\oplus \wedge^2(T^*M)$ with values in $\mathcal C$.
Hence, \begin{enumerate} \item $(\theta_t,\omega_t)$ is an $\mathcal{F}_{0}$-leafwise almost contact structure and \item $\omega_1=d\theta_1$ on $U$. \end{enumerate} This completes the proof. \end{proof} \begin{theorem} Let $M^{(2n+1)+q}$ be an open manifold and $\mathcal{F}_{0}$ a codimension $q$ foliation on $M$. Let $(\theta_{0},\omega_{0})\in \Omega^{1}(M)\times \Omega^{2}(M)$ be an $\mathcal F_0$-leafwise almost contact structure. Then there exists a homotopy $(\mathcal{F}_{t},\theta_{t},\omega_{t})$ such that \begin{enumerate} \item $(\theta_{t},\omega_{t})$ is an $\mathcal{F}_{t}$-leafwise almost contact structure and \item $\omega_{1}=d\theta_{1}$. \end{enumerate} In particular, $\theta_1$ is a leafwise contact form on $(M,\mathcal F_1)$. \label{contact}\end{theorem} \begin{proof} Let $Dist_q(M)$ denote the space of all codimension $q$ distributions on $M$, as in Theorem~\ref{lcs}. Define a subset $\Phi_q$ of $Dist_{q}(M)\times \Omega^1(M) \times \Omega^2(M)$ as follows: \[\Phi_{q} =\{(\mathcal F,\alpha,\beta): (\alpha,\beta) \text{ restricts to an almost contact structure on }\mathcal F\}.\] By the given hypothesis, $(\mathcal{F}_0,\theta_0,\omega_0)$ is in $\Phi_q$. By Proposition~\ref{approx_contact} there exists a homotopy $(\theta'_t,\omega'_t)$ such that \begin{enumerate} \item $(\theta'_0,\omega'_0)=(\theta_0,\omega_0)$, \item $(\theta'_t,\omega'_t)$ is an $\mathcal{F}_{0}$-leafwise almost contact structure and \item $d\theta'_1=\omega'_1$ on some open set $U$ containing a core of $M$. \end{enumerate} Then $(\mathcal{F}_0,\theta'_t,\omega'_t)$ belongs to $\Phi_{q}$ for $0\leq t\leq 1$. Choose an isotopy $g_{t}:M\rightarrow M$ such that $g_{0}=id_{M}$ and $g_{1}(M)\subset U$. Now, we define $(\mathcal{F}''_t,\theta''_t,\omega''_t)\in \Phi_{q}$, $t\in[0,1]$, by setting \begin{center}$\mathcal{F}''_t= g_t^*(\mathcal{F}_0),\ \ \ \theta''_t=g_t^*\theta'_1,\ \ \ \omega''_t=g_t^*\omega'_1.$\end{center} Observe that \[d\theta''_1=dg_1^*\theta'_1= g_1^*d\theta'_1= g_1^*\omega'_1=\omega''_1,\] since $g_1(M)\subset U$ and $d\theta'_1=\omega'_1$ on $U$. Therefore, $\theta''_1$ is an ${\mathcal F}''_1$-leafwise contact form. Concatenating the homotopies $(\mathcal F_0,\theta'_t,\omega'_t)$ and $(\mathcal F''_t,\theta''_t,\omega''_t)$ we obtain the desired homotopy. \end{proof} \begin{theorem} Let $M$ be an open manifold. Then every almost contact foliation $(\mathcal F_0, \theta_0,\omega_0)$ is homotopic to a contact foliation. \label{T:contact} \end{theorem} \begin{proof} Choose global differential forms $\tilde{\theta}_0,\tilde{\omega}_0$ on $M$ which extend $\theta_0$ and $\omega_0$ respectively. By Theorem~\ref{contact}, we get a homotopy $(\mathcal F_t, \tilde{\theta}_t,\tilde{\omega}_t)$ satisfying the following: \begin{enumerate} \item $(\tilde{\theta}_t,\tilde{\omega}_t)$ restricts to an almost contact structure on $\mathcal F_t$, \item $d\tilde{\theta}_1= \tilde{\omega}_1$. \end{enumerate} Let $\omega_t$ and $\theta_t$ be the foliated forms obtained by restricting $\tilde{\omega}_t$ and $\tilde{\theta}_t$ to $T\mathcal F_t$. Clearly, $(\mathcal F_t,\theta_t,\omega_t)$, $0\leq t\leq 1$, is an almost contact foliation on $M$. Also, by restricting both sides of relation (2) to $T\mathcal F_1$ we get $d_{{\mathcal F}_1}\theta_1=\omega_1$; thus, $(\mathcal F_1, \theta_1)$ is, in fact, a contact foliation on $M$.\end{proof} \section{Regular Jacobi structures on open manifolds} We now reformulate Theorems~\ref{lcs} and \ref{contact} in terms of Jacobi structures.
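Let us first recall the notion on which the reformulation rests; we state it here for the reader's convenience, with the caveat that sign conventions for the bracket vary in the literature. A pair $(\Lambda,E)$ consisting of a bivector field $\Lambda$ and a vector field $E$ on $M$ is called a \emph{Jacobi pair} if \[ [\Lambda,\Lambda]=2\,E\wedge\Lambda \ \ \text{ and } \ \ [\Lambda,E]=0, \] where $[\ ,\ ]$ denotes the Schouten bracket (see \cite{kirillov}). When $E=0$ these conditions reduce to $[\Lambda,\Lambda]=0$, that is, to the condition that $\Lambda$ is a Poisson bivector.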
Let $\nu^k(M)$ \index{$\nu^k(M)$} denote the space of sections of the exterior bundle $\wedge^k(TM)$. We shall refer to these sections as $k$-\emph{multivector fields} on $M$. We may recall that every bivector field $\Lambda$ defines a bundle homomorphism $\Lambda^\#:T^*M\to TM$ by $\Lambda^\#(\alpha)=\Lambda(\alpha,\,\cdot\,)$ for all $\alpha\in T^*M$. \begin{definition}{\em A bivector field $\Lambda$ is said to be \emph{regular} if rank\,$\Lambda^{\#}$ is constant. A pair $(\Lambda,E)\in\nu^2(M)\times\nu^1(M)$ will be called a regular pair if $\mathcal D=\Lambda^\#(T^*M)+\langle E\rangle$ is a regular distribution on $M$.}\end{definition} If $\Lambda$ is a regular bivector field and $\text{Im\,}\Lambda^\#=\mathcal D$, then there exists a bundle homomorphism $\phi:\mathcal D^*\to \mathcal D$ such that the following square is commutative: \begin{equation} \begin{array}{rcl} T^*M & \stackrel{\Lambda^\#}{\longrightarrow} & TM\\ i^* {\downarrow} & & {\uparrow} i\\ \mathcal D^*& \stackrel{\phi}{\longrightarrow} & \mathcal D \end{array}\label{bivector lambda} \end{equation} where $i:\mathcal D\to TM$ is the inclusion map. In fact, we can define $\phi$ by $\phi(i^*\alpha)=\Lambda^\#\alpha$ for all $\alpha\in T^*M$. If $i^*\alpha=0$ then $\alpha|_{\mathcal D}=0$, and the skew symmetry property of $\Lambda$ implies that $\alpha\in\ker\Lambda^\#$. Hence $\phi$ is well defined. Moreover, it is an isomorphism as $\text{Im\,}\phi=\text{Im\,}\Lambda^{\#}=\mathcal D$. We can define a section $\omega$ of $\wedge^2\mathcal D^*$ by \[\omega(\Lambda^{\#}\eta,\Lambda^{\#}\eta')=\Lambda(\eta,\eta'),\] for any two 1-forms $\eta,\eta'$ on $M$. This is well-defined since $\Lambda(\eta,\eta')=\eta'(\Lambda^{\#}\eta)$ depends on $\eta$ only through $\Lambda^{\#}\eta$, and similarly in the second argument. Moreover $\omega$ is non-degenerate, since $\omega(\Lambda^{\#}\eta,\Lambda^{\#}\eta')=0$ for all $\eta'$ implies that $\eta'(\Lambda^\#(\eta))=0$ for all $\eta'$, and therefore $\Lambda^\#(\eta)=0$. If $\tilde{\omega}:\mathcal D\to \mathcal D^*$ is given by $\tilde{\omega}(X)=i_X\omega$ for all $X\in\Gamma\mathcal D$, then we have the relation $\tilde{\omega}\circ\Lambda^\#=-i^*$, and since $\tilde{\omega}$ is an isomorphism we have $\Lambda^\#=-\tilde{\omega}^{-1}\circ i^*$. Thus $\tilde{\omega}$ is the inverse of $-\phi$. Conversely, any section $\omega$ of $\wedge^2(\mathcal D^*)$ which is fibrewise non-degenerate defines a bivector field $\Lambda$ by the relation $\Lambda^\#=-\tilde{\omega}^{-1}\circ i^*$. Observe that the image of $\Lambda^\#$ is $\mathcal D$. In view of the above correspondence, we can interpret Theorem~\ref{lcs} as follows. \begin{theorem} Let $M$ be an open manifold with a regular bivector field $\Lambda_0$ such that the distribution $\mathcal D_0=\text{Im\,}\Lambda_0^\#$ is integrable. Let $\xi$ be a fixed de Rham cohomology class in $H^1(M,\R)$. Then there is a homotopy $\Lambda_t$ of regular bivector fields and a vector field $E_1$ on $M$ such that \begin{enumerate}\item $\mathcal D_t=\text{Im\,}\Lambda_t^\#$ is an integrable distribution for all $t\in [0,1]$, \item $E_1$ is a section of $\mathcal D_1$ and \item $(\Lambda_1,E_1)$ is a regular Jacobi pair.\end{enumerate} Furthermore, we can choose $E_1$ such that the foliated de Rham cohomology class of $\phi_1^{-1} (E_1)$ in $H^1(M,\mathcal F_1)$ is equal to the image of $\xi$ under $i^*:H^1(M,\R)\to H^1(M,\mathcal F_1)$, where $\mathcal F_1$ is the characteristic foliation of the Jacobi pair $(\Lambda_1,E_1)$. \label{even_jacobi}\end{theorem} \begin{proof} Suppose that $\mathcal D_0=\text{Im\,}\Lambda_0^\#$ integrates to a foliation $\mathcal F_0$.
It follows from the above discussion that the associated section $\omega_0\in \Gamma(\wedge^2(\mathcal D_0^*))$ is non-degenerate. By Theorem~\ref{lcs}, there exists a homotopy $(\mathcal F_t,\omega_t)$ of $(\mathcal F_0,\omega_0)$ such that $(\mathcal F_1,\omega_1)$ is a locally conformal symplectic foliation. Let $\mathcal D_t=T\mathcal F_t$ and define $\Lambda_t$ by a diagram analogous to (\ref{bivector lambda}). If $\theta_1$ is the Lee form of $\omega_1$ then define $E_1$ by the relation $i_{E_1}\omega_1=\theta_1$. This proves that $(\Lambda_1,E_1)$ is a regular Jacobi pair (Theorem~\ref{T:jacobi_foliation}). \end{proof} \begin{theorem} Let $(\Lambda_{0},E_{0})\in \nu^2(M)\times\nu^1(M)$ be a regular pair on an open manifold $M$. Suppose that the distribution $\mathcal{D}_0:= \text{Im\,}\Lambda_0^\#+\langle E_0\rangle$ is odd-dimensional and integrable. Then there is a homotopy of regular pairs $(\Lambda_t,E_t)$ starting at $(\Lambda_0,E_0)$ such that \begin{enumerate}\item $\mathcal{D}_t=\text{Im\,}\Lambda_t^\#+\langle E_t\rangle$, $t\in [0,1]$, are integrable distributions and \item $(\Lambda_1,E_1)$ is a Jacobi pair. \end{enumerate}\label{odd-jacobi}\end{theorem} \begin{proof} If $(\Lambda,E)$ is a regular pair and the distribution $\mathcal D=\Lambda^\#(T^*M)+\langle E\rangle$ is odd-dimensional, then we can define a section $\alpha$ of $\mathcal D^*$ by the relations \begin{equation}\alpha(\text{Im}\,\Lambda^\#)=0 \ \ \text{ and }\ \ \ \alpha(E)=1.\label{alpha}\end{equation} Also, we can define a section $\beta$ of $\wedge^2(\mathcal D^*)$ by \begin{equation}i_E\beta=0,\ \ \ \ \ \beta(\Lambda^\#\eta,\Lambda^\#\eta')=\Lambda(\eta,\eta') \text{ for all }\eta,\eta'\in \Omega^1(M),\label{beta}\end{equation} where $i_E$ denotes the interior multiplication by $E$. It can be shown easily that $\beta$ is non-degenerate on Im\,$\Lambda^\#=\ker\alpha$. Hence $\alpha\wedge \beta^n$ is nowhere vanishing. On the other hand, suppose that $\mathcal D$ is a $(2n+1)$-dimensional distribution. If $\alpha$ is a section of $\mathcal D^*$ and $\beta$ is a section of $\wedge^2(\mathcal D^*)$ such that $\alpha\wedge\beta^n$ is nowhere vanishing, then we can write $\mathcal D=\ker\alpha\oplus\ker \beta$. Define a vector field $E$ on $M$ satisfying the relations \begin{equation}i_E\beta=0, \ \ \text{and}\ \ \alpha(E)=1.\label{vector E}\end{equation} Since $\beta$ is non-degenerate on $\ker\alpha$ by our hypothesis, $\tilde{\beta}:\ker\alpha \to (\ker\alpha)^*$ is an isomorphism. For any $\eta\in T^*(M)$ define $\Lambda^\#(\eta)$ to be the unique element in $\ker\alpha$ such that $\tilde{\beta}(\Lambda^{\#}\eta)=-\eta|_{\ker\alpha}$. In other words, \begin{equation} \Lambda^\#=-\tilde{\beta}^{-1}\circ i^*.\label{jacobi_lambda}\end{equation} This relation shows that the image of $\Lambda^\#$ is equal to $\ker \alpha$ and $\ker\beta$ is spanned by $E$. Hence $\mathcal D=\text{Im\,}\Lambda^\#\oplus \langle E\rangle$, which means that $(\Lambda,E)$ is a regular pair. Thus there is a one-to-one correspondence between regular pairs $(\Lambda,E)$ and the triples $(\mathcal D,\alpha,\beta)$ such that $\alpha\wedge\beta^n$ is nowhere vanishing. Further, under this correspondence the regular contact foliations correspond to regular Jacobi pairs with odd-dimensional characteristic distributions \cite{kirillov}. The result now follows from Theorem~\ref{contact}: let $(\Lambda_0,E_0)$ be as in the hypothesis and $\mathcal F_0$ be the foliation such that $T\mathcal F_0=\mathcal D_0$.
We can define $(\alpha_0,\beta_0)$ by the equations (\ref{alpha}) and (\ref{beta}) so that $\alpha_0\wedge \beta_0^n$ is nowhere vanishing on $\mathcal D_0$. By Theorem~\ref{contact}, we obtain a homotopy $(\mathcal F_t,\alpha_t,\beta_t)$ of $(\mathcal F_0, \alpha_0,\beta_0)$ such that $\alpha_t\wedge\beta_t^n$ is a nowhere vanishing form on $T\mathcal F_t$ and $\beta_1=d\alpha_1$ on $\mathcal F_1$, so that $(\mathcal F_1,\alpha_1)$ is a contact foliation. The desired homotopy $(\Lambda_t,E_t)$ is then obtained from $(\alpha_t,\beta_t)$ by (\ref{vector E}) and (\ref{jacobi_lambda}). \end{proof} We conclude with the following remark. \begin{remark} {\em The integrability condition on the initial distribution in Theorems~\ref{lcs} and \ref{contact} can be relaxed to the extent that we can take the initial distribution to be \emph{homotopic} to an integrable one. We refer to \cite{fernandes} for a detailed argument.}\end{remark} \chapter{Contact foliations on open contact manifolds} In this chapter we shall give a complete homotopy classification of contact foliations on open contact manifolds. On our way to the classification result, we study equidimensional contact immersions, which play a very significant role in the proof. We also prove a general $h$-principle for open relations on open contact manifolds which are invariant under an action of local contactomorphisms. This leads to an extension of the Gromov-Phillips Theorem in the contact setting. We shall begin with a review of similar results in the context of symplectic manifolds. \section{Background: Symplectic foliations on symplectic manifolds} In \cite{datta-rabiul}, M. Datta and Md. R. Islam proved an extension of Theorem~\ref{open-invariant} which can be stated as follows. \begin{theorem} \label{ST} Let $(M,\omega)$ be an open symplectic manifold and $\mathcal{R}\subset J^{r}(M,N)$ be an open relation which is invariant under the action of the pseudogroup of local symplectomorphisms of $(M,\omega)$. Then $\mathcal R$ satisfies the $h$-principle. \end{theorem} The symplectic diffeotopies have the sharply moving property (Definition~\ref{D:sharp_diffeotopy}, Example~\ref{ex:sharp_diffeotopy}); hence the relation satisfies the local $h$-principle near a core $K$ by Theorem~\ref{T:gromov-invariant}. The global $h$-principle then follows from a consequence of Ginzburg's theorem (Theorem~\ref{T:equidimensional-symplectic-immersion}), which guarantees a deformation of $M$ through isosymplectic immersions into a neighbourhood of $K$ (\cite{datta-rabiul}). As a corollary, the authors obtained the following result. \begin{theorem}(\cite{datta-rabiul}) If $(M,\omega)$ is an open symplectic manifold then submersions $f:M\to N$ whose level sets are symplectic submanifolds of $(M,\omega)$ satisfy the $h$-principle. \end{theorem} In fact, we can obtain a generalisation of the above result for maps which are transversal to a foliation $\mathcal F_N$ on $N$. We denote by $\pi:TN\rightarrow \nu\mathcal{F}_N$ the projection onto the normal bundle of the foliation $\mathcal F_N$. Let $Tr_{\omega}(M,\mathcal{F}_N)$ \index{$Tr_{\omega}(M,\mathcal{F}_N)$} be the set of all smooth maps $f:M\rightarrow N$ transversal to $\mathcal{F}_N$ such that $\ker (\pi\circ df)$ is a symplectic subbundle of $(TM,\omega)$.
Let $\mathcal E_{\omega}(TM,\nu{\mathcal{F}_N})$ \index{$\mathcal E_{\omega}(TM,\nu{\mathcal{F}_N})$} be the set of all vector bundle morphisms $F:TM\rightarrow TN$ such that \begin{enumerate} \item $\pi\circ F$ is an epimorphism onto $\nu{\mathcal{F}_N}$ and \item $\ker (\pi\circ F)$ is a symplectic subbundle of $(TM,\omega)$. \end{enumerate} These spaces, as before, will be equipped with the $C^\infty$ compact open topology and the $C^0$ compact open topology respectively. Then we have the following extension of the Gromov-Phillips Theorem in the symplectic setting: \begin{theorem} Let $(M^{2m},\omega)$ be an open symplectic manifold and $N$ be any manifold with a foliation $\mathcal{F}_N$ of codimension $2q$, where $m>q$. Then the map \[\begin{array}{rcl}\pi\circ d:Tr_{\omega}(M,\mathcal{F}_N) & \rightarrow & \mathcal E_{\omega}(TM,\nu{\mathcal{F}_N})\\ f & \mapsto & \pi\circ df\end{array}\] is a weak homotopy equivalence.\label{T:symplectic transverse} \end{theorem} The maps in $Tr_{\omega}(M,\mathcal{F}_N)$ are solutions of an open relation $\mathcal{R}$ which is invariant under the action of local symplectomorphisms. Hence the result follows as a direct application of Theorem~\ref{ST}. We would like to observe that the relation in Theorem~\ref{ST}, in fact, satisfies the parametric $h$-principle. \begin{definition} {\em A foliation $\mathcal F$ on a symplectic manifold $(M,\omega)$ will be called a \emph{symplectic foliation subordinate to $\omega$} if its leaves are symplectic submanifolds of $(M,\omega)$. We shall often refer to these foliations simply as \emph{symplectic foliations on} $(M,\omega)$.}\end{definition} \begin{definition} {\em Two symplectic foliations $\mathcal F_0$ and $\mathcal F_1$ on a symplectic manifold $(M,\omega)$ are said to be \emph{integrably homotopic relative to $\omega$} if there exists a foliation $\tilde{\mathcal F}$ on $(M\times\mathbb I,\omega\oplus 0)$ transversal to the trivial foliation of $M\times\mathbb I$ by leaves $M\times\{t\}$ ($t\in [0,1]$) such that the following conditions are satisfied: \begin{enumerate}\item the induced foliation on $M\times\{t\}$ for each $t\in [0,1]$ is a symplectic foliation subordinate to $\omega$; \item the induced foliations on $M\times\{0\}$ and $M\times\{1\}$ coincide with $\mathcal F_0$ and $\mathcal F_1$ respectively,\end{enumerate} where $\omega\oplus 0$ denotes the pull-back of $\omega$ by the projection map $p_1:M\times\mathbb I\to M$.}\end{definition} Let $Fol^{2q}_{\omega}(M)$ be the space of all codimension $2q$ symplectic foliations on the symplectic manifold $(M,\omega)$ and let $\pi_0(Fol^{2q}_{\omega}(M))$ denote the set of integrable homotopy classes of symplectic foliations on $(M,\omega)$. The map $H'$ defined in Subsection~\ref{haefliger_map} induces a map \[H_\omega:\pi_0(Fol^{2q}_{\omega}(M)) \longrightarrow \pi_0(\mathcal E_{\omega}(TM,\nu\Omega_{2q})),\] where $\Omega_{2q}$ is the universal $\Gamma_{2q}$-structure on $B\Gamma_{2q}$ (Subsection~\ref{classifying space}) and $\mathcal E_{\omega}(TM,\nu\Omega_{2q})$ is the space of all vector bundle epimorphisms $F:TM\to \nu \Omega_{2q}$ such that $\ker F$ is a symplectic subbundle of $(TM,\omega)$. Indeed, if $\mathcal{F}$ is a symplectic foliation on $M$ (subordinate to $\omega$), then the kernel of $H'(\mathcal F)$ is $T\mathcal F$, which is by hypothesis a symplectic subbundle of $(TM,\omega)$. Therefore, $H_\omega$ is well-defined. Proceeding as in \cite{haefliger} we can then obtain the following classification result.
\begin{theorem} \label{T:symplectic foliation} The map $\pi_0(Fol^{2q}_{\omega}(M)) \stackrel{H_{\omega}}\longrightarrow \pi_0(\mathcal E_{\omega}(TM,\nu\Omega_{2q}))$ is bijective. \end{theorem} We have omitted the proofs of Theorem~\ref{T:symplectic transverse} and Theorem~\ref{T:symplectic foliation} here to avoid repetition of arguments. In the subsequent sections we shall deal with the classification problem of contact foliations on open contact manifolds in full detail. The proofs of the above theorems are very similar to those of Theorem~\ref{T:contact-transverse} and Theorem~\ref{haefliger_contact}. \section{Equidimensional contact immersions} In this section we obtain an analogue of Ginzburg's theorem (Theorem~\ref{T:equidimensional-symplectic-immersion}) in the contact setting. We begin with a simple observation. \begin{observation}{\em Let $(M,\alpha)$ be a contact manifold. The product manifold $M\times\R^2$ has a canonical contact form given by $\tilde{\alpha}=\alpha- y\,dx$, where $(x,y)$ are the coordinate functions on $\R^2$. We shall denote the contact structure associated with $\tilde{\alpha}$ by $\tilde{\xi}$. Now suppose that $H:M\times\R\to \R$ is a smooth function which vanishes on some open set $U\subset M\times\R$. Define $\bar{H}:M\times\R\to M\times\R^2$ by $\bar{H}(u,t)=(u,t,H(u,t))$ for all $(u,t)\in M\times\R$. Since $\bar{H}(u,t)=(u,t,0)$ for all $(u,t)\in U$, the image of $d\bar{H}_{(u,t)}$ is $T_uM\times\R\times\{0\}$. On the other hand, $\tilde{\xi}_{(u,t,0)}=\xi_u\times\R^2$. Therefore, $\bar{H}$ is transversal to $\tilde{\xi}$ on $U$. }\end{observation} \begin{proposition} Let $M$ be a contact manifold with contact form $\alpha$. Suppose that $H$ is a smooth real-valued function on $M\times(-\varepsilon,\varepsilon)$ with compact support such that its graph $\Gamma$ in $M\times\R^2$ is transversal to the kernel of $\tilde{\alpha}=\alpha-y\,dx$. Then there is a diffeomorphism $\Psi:M\times(-\varepsilon,\varepsilon)\to \Gamma$ which pulls back $\tilde{\alpha}|_{\Gamma}$ onto $h(\alpha\oplus 0)$, where $h$ is a nowhere-vanishing smooth real-valued function on $M\times(-\varepsilon,\varepsilon)$.\label{characteristic} \end{proposition} \begin{proof} Since the graph $\Gamma$ of $H$ is transversal to $\tilde{\xi}$, the restriction of $\tilde{\alpha}$ to $\Gamma$ is a nowhere vanishing 1-form on it. Define a function $\bar{H}:M\times(-\varepsilon,\varepsilon)\to M\times\R^2$ by $\bar{H}(u,t)=(u,t,H(u,t))$. The map $\bar{H}$ defines a diffeomorphism of $M\times(-\varepsilon,\varepsilon)$ onto $\Gamma$, which pulls back the form $\tilde{\alpha}|_{\Gamma}$ onto $\alpha-H\,dt$. It is therefore enough to obtain a diffeomorphism $F:M\times(-\varepsilon,\varepsilon)\to M\times(-\varepsilon,\varepsilon)$ which pulls back the 1-form $\alpha-H\,dt$ onto a multiple of $\alpha\oplus 0$ by a nowhere vanishing function. For each $t$, define a smooth function $H^t$ on $M$ by $H^t(u)=H(u,t)$ for all $u\in M$. Let $X_{H^t}$ denote the contact Hamiltonian vector field on $M$ associated with $H^t$. Consider the vector field $\bar{X}$ on $M\times\R$ defined as follows: \[\bar{X}(u,t)=(X_{H^t}(u),1),\ \ (u,t)\in M\times(-\varepsilon,\varepsilon).\] Let $\{\bar{\phi}_s\}$ denote a local flow of $\bar{X}$ on $M\times\R$. Then writing $\bar{\phi}_s(u,t)$ as \begin{center}$\bar{\phi}_s(u,t)=(\phi_s(u,t),s+t)$ for all $u\in M$ and $s,t\in \R$,\end{center} we get the following relation: \[\frac{d\phi_s}{ds}(u,t)=X_{t+s}(\phi_s(u,t)), \] where $X_t$ stands for the vector field $X_{H^t}$ for all $t$.
In particular, we have \begin{equation}\frac{d\phi_t}{dt}(u,0)=X_t(\phi_t(u,0)).\label{flow_eqn}\end{equation} Define a level-preserving map $F:M\times(-\varepsilon,\varepsilon)\to M\times(-\varepsilon,\varepsilon)$ by \[F(u,t)=\bar{\phi}_t(u,0)=(\phi_t(u,0),t).\] Since the support of $H$ is contained in $K\times (-\varepsilon,\varepsilon)$ for some compact set $K$, the flow $\bar{\phi}_s$ starting at $(u,0)$ remains within $M\times (-\varepsilon,\varepsilon)$ for $s\in (-\varepsilon,\varepsilon)$. Note that \begin{center}$dF(\frac{\partial}{\partial t})=\frac{\partial}{\partial t}\bar{\phi}_t(u,0)=\bar{X}(\bar{\phi}_t(u,0))=\bar{X}(\phi_t(u,0),t)=(X_{H^t}(\phi_t(u,0)),1).$\end{center} This implies that \begin{center}$\begin{array}{rcl}F^*(\alpha\oplus 0) (\frac{\partial}{\partial t}|_{(u,t)}) & = & (\alpha\oplus 0)(dF(\frac{\partial}{\partial t}|_{(u,t)}))\\ & = & \alpha(X_{H^t}(\phi_t(u,0)))\\ & = &H^t(\phi_t(u,0))\ \ \ \ \ \text{by equation }(\ref{contact_hamiltonian1}) \\ & = & H(\bar{\phi}_t(u,0))=H(F(u,t)) \end{array}$\end{center} Also, \begin{center}$F^*(H\,dt)(\frac{\partial}{\partial t})=(H\circ F)\,dt(dF(\frac{\partial}{\partial t}))=H\circ F$\end{center} Hence, \begin{equation}F^*(\alpha-H\,dt)(\frac{\partial}{\partial t})=0.\label{eq:F1}\end{equation} On the other hand, \begin{equation}F^*(\alpha-H\,dt)|_{M\times\{t\}}=F^*\alpha|_{M\times\{t\}}=\psi_t^*\alpha,\label{eq:F2}\end{equation} where $\psi_t(u)=\phi_t(u,0)$, $\psi_0(u)=u$. Thus, $\{\psi_t\}$ are the integral curves of the time-dependent vector field $\{X_t\}$ on $M$ (see (\ref{flow_eqn})), and we get \[\begin{array}{rcl}\frac{d}{dt}\psi_t^*\alpha & = & \psi^*_t(i_{X_t}d\alpha+d(i_{X_t}\alpha))\\ & = & \psi^*_t(dH^t(R_\alpha)\alpha-dH^t+dH^t) \ \ \text{by equation }(\ref{contact_hamiltonian1})\\ & = & \psi^*_t(dH^t(R_\alpha)\alpha)\\ & = & \theta(t)\psi_t^*\alpha,\end{array}\] where $\theta(t)=\psi_t^*(dH^t(R_\alpha))$. Hence $\psi_t^*\alpha=e^{\int_0^t\theta(s)ds}\psi_0^*\alpha=e^{\int_0^t\theta(s)ds}\alpha$. We conclude from equations (\ref{eq:F1}) and (\ref{eq:F2}) that $F^*(\alpha-H\,dt)=e^{\int_0^t\theta(s)ds}(\alpha\oplus 0)$. Finally, take $\Psi=\bar{H}\circ F$, which has the desired properties. \end{proof} \begin{remark}{\em If there exists an open subset $\tilde{U}$ of $M$ such that $H$ vanishes on $\tilde{U}\times (-\varepsilon,\varepsilon)$ then the contact Hamiltonian vector fields $X_t$ defined above are identically zero on $\tilde{U}$ for all $t\in(-\varepsilon,\varepsilon)$. Since $\psi_t=\phi_t(\ \ ,0)$ are the integral curves of the time-dependent vector fields $X_t=X_{H^t}$, we must have $\psi_t(u)=u$ for all $u\in \tilde{U}$. Therefore, $F(u,t)=(u,t)$ and hence $\Psi(u,t)=(u,t,0)$ for all $u\in\tilde{U}$ and all $t\in(-\varepsilon,\varepsilon)$.}\label{R:characteristic}\end{remark} \begin{remark}{\em If $\Gamma$ is a codimension 1 submanifold of a contact manifold $(N,\tilde{\alpha})$ such that the tangent planes of $\Gamma$ are transversal to $\tilde{\xi}=\ker\tilde{\alpha}$ then there is a codimension 1 distribution $D$ on $\Gamma$ given by the intersection of $\ker \tilde{\alpha}|_\Gamma$ and $T\Gamma$. Since $D=\ker\tilde{\alpha}|_\Gamma\cap T\Gamma$ is an odd dimensional distribution, $d\tilde{\alpha}|_D$ has a 1-dimensional kernel. If $\Gamma$ is locally defined by a function $\Phi$ then $d\Phi_x$ does not vanish identically on $\ker\tilde{\alpha}_x$, for $\ker d\Phi_x$ is transversal to $\ker\tilde{\alpha}_x$.
Thus there is a unique non-zero vector $Y_x$ in $\ker\tilde{\alpha}_x$ satisfying the relation $i_{Y_x}d\tilde{\alpha}_x=d\Phi_x$ on $\ker\tilde{\alpha}_x$. Clearly, $Y_x$ is tangent to $\Gamma$ at $x$ (since $d\Phi_x(Y_x)=d\tilde{\alpha}_x(Y_x,Y_x)=0$) and it is defined uniquely only up to multiplication by a non-zero real number (as $\Phi$ is not unique). However, the 1-dimensional distribution on $\Gamma$ defined by $Y$ is uniquely determined by the contact form $\tilde{\alpha}$. The integral curves of $Y$ are called \emph{characteristics} of $\Gamma$ (\cite{arnold}). It can be checked from the proof of the above proposition that the diffeomorphism $\Psi$ maps the lines $\{u\}\times(-\varepsilon,\varepsilon)$ in $M\times\R$ onto the characteristics of $\Gamma$.} \end{remark} The following lemma is a parametric version of a result proved in \cite{eliashberg}. As we shall see later, it is a key ingredient in the proof of the equidimensional contact immersion theorem for open manifolds. \begin{lemma} \label{EM2} Let $\alpha_{t}$, $t\in[0,1]$, be a continuous family of contact forms on a compact manifold $M$, possibly with non-empty boundary. Then for each $t\in [0,1]$ there exists a sequence of primitive 1-forms $\beta_t^l=r_t^l\,ds_t^l$, $l=1,\dots,N$, such that \begin{enumerate} \item $\alpha_t=\alpha_0+\sum_{l=1}^N \beta_t^l$ for all $t\in [0,1]$, \item for each $j=0,\dots,N$ the form $\alpha^{(j)}_t=\alpha_{0}+\sum_{l=1}^{j}\beta_t^{l}$ is contact, \item for each $j=1,\dots,N$ the functions $r_t^j$ and $s_t^j$ are compactly supported within a coordinate neighbourhood. \end{enumerate} Furthermore, the forms $\beta_t^l$ depend continuously on $t$. If $\alpha_t=\alpha_0$ on $Op\,V_0$, where $V_0$ is a compact subset contained in the interior of $M$, then the functions $r^l_t$ and $s^l_t$ can be chosen to be equal to zero on an open neighbourhood of $V_0$. \end{lemma} \begin{proof} If $M$ is compact with boundary, then we can embed it in a bigger manifold, say $\tilde{M}$, of the same dimension. We may assume that $\tilde{M}$ is obtained from $M$ by attaching a collar along the boundary of $M$. Using the compactness of $M$, one can cover $M$ by finitely many coordinate neighbourhoods $U^i$, $i=1,2,\dots,L$. Choose a partition of unity $\{\rho^i\}$ subordinate to $\{U^i\}$. \begin{enumerate}\item Since $M$ is compact, the set of all contact forms on $M$ is an open subspace of $\Omega^1(M)$ in the weak topology. Hence, there exists a $\delta>0$ such that $\alpha_t+s(\alpha_{t'}-\alpha_t)$ is contact for all $s\in[0,1]$, whenever $|t-t'|<\delta$. \item Choose an integer $n$ such that $1/n<\delta$. Define for each $t$ a finite sequence of contact forms, namely $\alpha^j_t$, interpolating between $\alpha_0$ and $\alpha_t$ as follows: \[\alpha^j_t=\alpha_{[nt]/n}+\sum_{i=1}^j\rho^i(\alpha_t-\alpha_{[nt]/n}),\] where $[x]$ denotes the largest integer which is less than or equal to $x$ and $j$ takes values $1,2,\dots,L$. In particular, for $k/n\leq t\leq (k+1)/n$, we have \[\alpha^j_t=\alpha_{k/n}+\sum_{i=1}^j\rho^i(\alpha_t-\alpha_{k/n}),\] and $\alpha^L_t=\alpha_t$ for all $t$. \item Let $\{x_j^i:j=1,\dots,m\}$ denote the coordinate functions on $U^i$, where $m$ is the dimension of $M$. There exists a unique set of smooth functions $y_{t,k}^{ij}$ defined on $U^i$ satisfying the following relation: \[\alpha_t-\alpha_{k/n}=\sum_{j=1}^m y_{t,k}^{ij} dx^i_j\ \ \text{on } U^i \text{ for }k/n\leq t\leq (k+1)/n\] Further, note that $y_{t,k}^{ij}$ depends continuously on the parameter $t$ and $y_{t,k}^{ij}=0$ when $t=k/n$, $k=0,1,\dots,n$.
\item Let $\sigma^i$ be a smooth function such that $\sigma^i\equiv 1$ on a neighbourhood of $\text{supp\,}\rho^i$ and $\text{supp}\,\sigma^i\subset U^i$. Define functions $r^{ij}_{t,k}$ and $s^{ij}$, $j=1,\dots,m$, as follows: \[ r_{t,k}^{ij}=\rho^i y_{t,k}^{ij}, \ \ \ s^{ij}=\sigma^i x^i_j.\] These functions are compactly supported, with supports contained in $U^i$. It is easy to see that $r^{ij}_{t,k}=0$ when $t=k/n$ and \[\rho^i(\alpha_t-\alpha_{k/n})=\sum_{j=1}^m r_{t,k}^{ij}\,ds^{ij} \ \text{ for }\ t\in [k/n,(k+1)/n].\] \end{enumerate} It follows from the above discussion that $\alpha_t-\alpha_{k/n}$ can be expressed as a sum of primitive forms which depend continuously on $t$ in the interval $[k/n,(k+1)/n]$. We can now complete the proof by a finite induction argument. Suppose that $(\alpha_{t}-\alpha_0)=\sum_l\alpha_{t,k}^l$ for $t\in [0,k/n]$, where each $\alpha_{t,k}^l$ is a primitive 1-form. Define \[\begin{array}{rcl}\tilde{\alpha}_{t,k}^l& = & \left\{ \begin{array}{cl} \alpha_{t,k}^l & \text{if } t\in [0,k/n]\\ \alpha_{k/n,k}^l & \text{if } t\in [k/n,(k+1)/n]\end{array}\right.\end{array}\] Further define for $j=1,\dots,m$, $i=1,\dots,L$, \[\begin{array}{rcl}\beta_{t,k}^{ij}& = & \left\{ \begin{array}{cl} 0 & \text{if } t\in [0,k/n]\\ r^{ij}_{t,k}\,ds^{ij} & \text{if } t\in [k/n,(k+1)/n]\end{array}\right.\end{array}\] Finally note that for $t\in [0,(k+1)/n]$, we can write $\alpha_t-\alpha_0$ as the sum of all the above primitive forms. Indeed, if $k/n\leq t<(k+1)/n$, then \begin{eqnarray*}\alpha_t-\alpha_0 & = & (\alpha_t-\alpha_{k/n})+(\alpha_{k/n}-\alpha_0)\\ & = & \sum_{i=1}^L\sum_{j=1}^m r^{ij}_{t,k}\,ds^{ij}+\sum_l \alpha^l_{k/n,k}\\ & = & \sum_{i,j}\beta^{ij}_{t,k}+\sum_l \tilde{\alpha}^l_{t,k}.\end{eqnarray*} The same relation holds for $0\leq t\leq k/n$, since $\beta^{ij}_{t,k}$ vanish for all such $t$. This proves the first part of the lemma. Now suppose that $\alpha_t=\alpha_0$ on an open neighbourhood $U$ of $V_0$. Choose two compact neighbourhoods of $V_0$, namely $K_0$ and $K_1$, such that $K_0\subset \text{Int\,}K_1$ and $K_1\subset U$. Since $M\setminus\text{Int\,}K_1$ is compact we can cover it by finitely many coordinate neighbourhoods $U^i$, $i=1,2,\dots,L$, such that $(\bigcup_{i=1}^L U^i)\cap K_0=\emptyset$. Proceeding as above we get a decomposition of $\alpha_t-\alpha_0$ on $\bigcup_{i=1}^L U^i$ into primitive 1-forms $r^l_t\,ds^l_t$. Observe that $\{U^i:i=1,\dots,L\}\cup\{U\}$ is an open covering of $M$ in this case. The functions $r^l_t$ and $s^l_t$ can be extended to all of $M$ without disturbing their supports. Hence, the functions $r^l_t$ and $s^l_t$ vanish on $K_0$, and in particular on an open neighbourhood of $V_0$. This completes the proof of the lemma. \end{proof} \begin{theorem} \label{GRAY} Let $\xi_{t}$, $t\in[0,1]$, be a family of contact structures defined by the contact forms $\alpha_t$ on a compact manifold $M$ with boundary. Let $(N,\tilde{\xi}=\ker\eta)$ be a contact manifold without boundary. Then every isocontact immersion $f_0:(M,\xi_0)\to (N,\tilde{\xi})$ admits a regular homotopy $\{f_t\}$ such that $f_t:(M,\xi_t)\to (N,\tilde{\xi})$ is an isocontact immersion for all $t\in[0,1]$.
In addition, if $M$ contains a compact submanifold $V_{0}$ in its interior and $\xi_{t}=\xi_{0}$ on $Op\,(V_{0})$, then $f_{t}$ can be chosen to be a constant homotopy on $Op\,(V_{0})$.\label{T:equidimensional_contact immersion} \end{theorem} \begin{proof} In view of Lemma~\ref{EM2}, it is enough to assume that $\alpha_{t}=\alpha_{0}+r_t\,ds_t$, $t\in [0,1]$, where $r_t,s_t$ are smooth real valued functions (compactly) supported in an open set $U$ of $M$. We shall first show that $f_{0}:(M,\xi_{0})\rightarrow (N,\tilde{\xi})$ can be homotoped to an immersion $f_{1}:M\to N$ such that $f_{1}^{*}\tilde{\xi}=\xi_{1}$. The stated result is a parametric version of this. For simplicity of notation we write $(r,s)$ for $(r_1,s_1)$ and define a smooth embedding $\varphi:U\to U\times\R^2$ by \[\varphi(u)=(u,s(u),-r(u)) \text{ \ for \ }u\in U.\] Since $r,s$ are compactly supported, $\varphi(u)=(u,0,0)$ for all $u\in Op\,(\partial U)$, and there exist positive constants $\varepsilon_1$ and $\varepsilon_2$ such that $\text{Im}\,\varphi$ is contained in $U\times I_{\varepsilon_1}\times I_{\varepsilon_2}$, where $I_\varepsilon$ denotes the open interval $(-\varepsilon,\varepsilon)$ for $\varepsilon>0$. Denoting the coordinates on the $\R^2$ factor by $(x,y)$, we have $\varphi^*(y\,dx)=-r\,ds$; hence $\varphi^*(\alpha_0-y\,dx)=\alpha_0+r\,ds$ and so \begin{equation}\varphi:(U,\xi_{1})\rightarrow (U\times \R^2,\ker(\alpha_{0}- y\,dx)) \label{eq:equidimensional_1}\end{equation} is an isocontact embedding. The image of $\varphi$ is the graph of a smooth function $k=(s,-r):U\rightarrow I_{\varepsilon_{1}}\times I_{\varepsilon_{2}}$, which is compactly supported with support contained in the interior of $U$. Let $\pi:U\times I_{\varepsilon_{1}}\times I_{\varepsilon_{2}}\rightarrow U\times I_{\varepsilon_{1}}$ be the projection onto the first two coordinates. Since $\text{Im}\,\varphi$ is the graph of $k$, $\pi|_{\text{Im\,}\varphi}$ is an embedding onto the set $\pi(\varphi(U))$, which is the graph of $s$ and hence a submanifold of $U\times I_{\varepsilon_1}$. Now observe that $\text{Im}\,\varphi$ can also be viewed as the graph of a smooth function, namely $h:\pi(\varphi(U))\rightarrow I_{\varepsilon_{2}}$ defined by $h(u,s(u))=-r(u)$. It is easy to see that $h$ is compactly supported.
\begin{center} \begin{picture}(300,140)(-100,5)\setlength{\unitlength}{1cm} \linethickness{.075mm} \multiput(-1,1.5)(6,0){2} {\line(0,1){3}} \multiput(-.25,2)(4.5,0){2} {\line(0,1){2}} \multiput(-1,1.5)(0,3){2} {\line(1,0){6}} \multiput(-.25,2)(0,2){2} {\line(1,0){4.5}} \put(1.7,1){$U\times I_{\varepsilon_{1}}$} \put(1.2,2.7){\small{$U$}} \put(2,3.4){\small{$\pi(\varphi(U))$}} \multiput(-.9,1.6)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,1.7)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,1.8)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,1.9)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.7,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.5,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.3,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.3,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.5,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.7,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.9,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.9,4.1)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.2)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.3)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.4)(.2,0){30}{\line(1,0){.05}} \multiput(-1,3)(.3,0){20}{\line(1,0){.1}} \multiput(-1,3)(5.1,0){2}{\line(1,0){.9}} \qbezier(-.1,3)(1,3.1)(1.5,3.8) \qbezier(1.5,3.8)(1.7,4)(1.9,3.7) \qbezier(1.9,3.7)(2.1,3)(2.2,2.3) \qbezier(2.2,2.3)(2.4,2)(2.6,2.2) \qbezier(2.6,2.2)(3.2,2.9)(4.3,3) \end{picture}\end{center} In the above figure, the bigger rectangle represents the set $U\times I_{\varepsilon_{1}}$ and the central dotted line represents $U\times 0$. The curve within the rectangle stands for the domain of $h$, which is also the graph of $s$. We can now extend $h$ to a compactly supported function $H:U\times I_{\varepsilon_{1}}\rightarrow I_{\varepsilon_{2}}$ (see \cite{whitney}) which vanishes on the shaded region and is such that its graph is transversal to $\ker(\alpha_0- y\,dx)$. Indeed, since $\varphi$ is an isocontact embedding, it is transversal to $\ker(\alpha_0-y\,dx)$, and hence the graph of $H$ is transversal to $\ker(\alpha_0-y\,dx)$ on an open neighbourhood of $\pi(\varphi(U))$ for any extension $H$ of $h$. Since transversality is a generic property, we can assume (possibly after a small perturbation) that the graph of $H$ is transversal to $\ker(\alpha_0- y\,dx)$. Let $\Gamma$ be the graph of $H$; then the image of $\varphi$ is contained in $\Gamma$. By Lemma~\ref{characteristic} there exists a diffeomorphism $\Phi:\Gamma\to U\times I_{\varepsilon_1}$ with the property that \begin{equation}\Phi^*(\ker(\alpha_{0}\oplus 0))=\ker((\alpha_{0}- y\,dx)|_\Gamma).\label{eq:equidimensional_2}\end{equation} Next we use $f_0$ to define an immersion $F_{0}:U\times \mathbb R\rightarrow N\times \mathbb{R}$ as follows: \begin{center} $F_0(u,x)=(f_0(u),x)$ for all $u\in U$ and $x\in \R$.\end{center} It is straightforward to see that \begin{itemize} \item $F_{0}(u,0)\in N \times 0$ for all $u\in U$ and \item $F_0^*(\eta \oplus 0)$ is a multiple of $\alpha_{0}\oplus 0$ by a nowhere vanishing function on $U\times\R$. \end{itemize} Therefore, the following composition is defined: \[U\stackrel{\varphi}{\longrightarrow} \Gamma\stackrel{\Phi}{\longrightarrow} U\times I_{\varepsilon_1} \stackrel{F_0}{\longrightarrow} N\times\mathbb{R} \stackrel{\pi_{N}}{\longrightarrow} N, \] where $\pi_{N}:N\times \mathbb{R}\rightarrow N$ is the projection onto $N$.
Observe that $\pi_{N}^*\eta=\eta \oplus 0$ and therefore, it follows from equations (\ref{eq:equidimensional_1}) and (\ref{eq:equidimensional_2}) that the composition map $f_1=\pi_{N} F_{0}\Phi \varphi:(U,\xi_1)\rightarrow (N,\tilde{\xi})$ is isocontact. Such a map is necessarily an immersion. Let $K=\text{supp}\,r\cup\text{supp}\,s$. Take a compact set $K_1$ in $U$ such that $K\subset \text{Int\,}K_1$, and let $\tilde{U}=U\setminus K_1$. If $u\in \tilde{U}$ then $\varphi(u)=(u,0,0)$. This gives $h(u,0)=0$ for all $u\in\tilde{U}$. We can choose $H$ such that $H(u,t)=0$ for all $(u,t)\in\tilde{U}\times I_{\varepsilon_1}$. Then, by Remark~\ref{R:characteristic}, $\Phi(u,0,0)=(u,0)$ for all $u\in\tilde{U}$. Consequently, \[f_1(u)=\pi_{N} F_{0}\Phi \varphi(u)=\pi_{N} F_{0}(u,0)=\pi_N(f_0(u),0)=f_0(u) \ \text{for all } u\in\tilde{U}.\] In other words, $f_1$ coincides with $f_0$ outside an open neighbourhood of $K$, and hence extends to an immersion of $M$ which agrees with $f_0$ outside $U$. Now observe that if we have a one parameter family of compactly supported functions $(r_t,s_t)$ which depend continuously on the parameter $t$, then $\varphi$ and $\Phi$ can be made to vary continuously with respect to the parameter $t$. Thus we get the desired homotopy $f_t$. This completes the proof of the theorem. \end{proof} The above result may be viewed as an extension of Gray's Stability Theorem for open manifolds. We shall now prove the existence of isocontact immersions of an open manifold $M$ into itself which compress $M$ into arbitrarily small neighbourhoods of its core. \begin{corollary} \label{CO} Let $(M,\xi=\ker\alpha)$ be an open contact manifold and let $K$ be a core of it. Then for a given neighbourhood $U$ of $K$ in $M$ there exists a homotopy of isocontact immersions $f_{t}:(M,\xi)\rightarrow (M,\xi)$, $t\in[0,1]$, such that $f_{0}=id_{M}$ and $f_{1}(M)\subset U$. \end{corollary} \begin{proof}Since $K$ is a core of $M$ there is an isotopy $g_t$ such that $g_0=id_M$ and $g_1(M)\subset U$ (see Remark~\ref{core}). Using $g_t$, we can express $M$ as $M=\bigcup_{0}^{\infty}V_{i}$, where $V_{0}$ is a compact neighbourhood of $K$ in $U$ and $V_{i+1}$ is diffeomorphic to $V_i\cup (\partial V_{i}\times [0,1])$, so that $\bar{V_i}\subset \text{Int}\,(V_{i+1})$ and $V_{i+1}$ deformation retracts onto $V_{i}$. If $M$ is a manifold with boundary then this sequence is finite. We shall inductively construct a homotopy of immersions $f^{i}_{t}:M\rightarrow M$ with the following properties: \begin{enumerate} \item $f^i_0=id_M$ \item $f^i_1(M)\subset U$ \item $f^i_t=f^{i-1}_t$ on $V_{i-1}$ \item $(f^i_t)^*\xi=\xi$ on $V_{i}$. \end{enumerate} Assuming the existence of $f^{i}_{t}$, let $\xi_{t}=(f^{i}_{t})^{*}(\xi)$ (so that $\xi_0=\xi$), and consider a 2-parameter family of contact structures defined by $\eta_{t,s}=\xi_{t(1-s)}$. Then for all $t,s\in\mathbb I$, we have: \[\eta_{t,0}=\xi_t,\ \ \eta_{t,1}=\xi_0=\xi\ \text{ and }\ \eta_{0,s}=\xi.\] The parametric version of Theorem~\ref{GRAY} gives a homotopy of immersions $\tilde{f}_{t,s}:V_{i+2}\rightarrow M$, $(t,s)\in \mathbb I\times\mathbb I$, satisfying the following conditions: \begin{enumerate} \item $\tilde{f}_{t,0},\tilde{f}_{0,s}:V_{i+2}\hookrightarrow M$ are the inclusion maps \item $(\tilde{f}_{t,s})^*\xi_t=\eta_{t,s}$; in particular, $(\tilde{f}_{t,1})^*\xi_t=\xi$ \item $\tilde{f}_{t,s}=id$ on $V_i$, since $\eta_{t,s}=\xi_0$ on $V_i$. \end{enumerate} We now extend the homotopy $\{\tilde{f}_{t,s}|_{V_{i+1}}\}$ to all of $M$ as immersions such that $\tilde{f}_{0,s}=id_M$ for all $s$.
By an abuse of notation, we denote the extended homotopy by the same symbol. Define the next level homotopy as follows: \[f^{i+1}_{t}=f^{i}_{t}\circ \tilde{f}_{t,1} \ \text{ for }\ t\in [0,1].\] This completes the induction step, since $(f^{i+1}_t)^*(\xi)=(\tilde{f}_{t,1})^*\xi_t=\xi$ on $V_{i+2}$ for all $t$, and $f^{i+1}_t|_{V_{i}}=f^i_t|_{V_{i}}$. To start the induction, we use the isotopy $g_t$ and let $\xi_t=g_t^*\xi$; note that $\xi_t$ is a family of contact structures on $M$ defined by the contact forms $g_t^*\alpha$. We then construct $f^{0}_{t}$ as above by setting $V_{-1}=\emptyset$. Having constructed the family of homotopies $\{f^i_t\}$ as above, we set $f_t=\lim_{i\to\infty}f^i_t$, which makes sense since the sequence $f^i_t$ is eventually constant on each $V_j$; this is the desired homotopy of isocontact immersions. \end{proof} \section{An $h$-principle for open relations on open contact manifolds} In this section we prove an extension of Theorem~\ref{open-invariant} for certain open relations on open contact manifolds. The main result of this section can be stated as follows: \begin{theorem} \label{CT} Let $(M,\alpha)$ be an open contact manifold and $\mathcal{R}\subset J^{r}(M,N)$ be an open relation invariant under the action of the pseudogroup of local contactomorphisms of $(M,\alpha)$. Then the parametric $h$-principle holds for $\mathcal{R}$. \end{theorem} \begin{proof} Let $\mathcal{D}$ denote the pseudogroup of contact diffeomorphisms of $M$. We shall first show that $\mathcal{D}$ has the sharply moving property (see Definition~\ref{D:sharp_diffeotopy}). Let $M_0$ be a submanifold of $M$ of positive codimension. Take a closed hypersurface $S$ in $M_0$ and an open set $U\subset M$ containing $S$. We take a vector field $X$ along $S$ which is transversal to $M_{0}$. Let $H:M\rightarrow \mathbb{R}$ be a function such that \[\alpha(X)=H,\ \ \ \ i_X d\alpha|_\xi=-dH|_\xi, \ \ \text{at points of } S\] (see equations~(\ref{contact_hamiltonian1})). The contact-Hamiltonian vector field $X_H$ is clearly transversal to $M_0$ at points of $S$. As transversality is a stable property, we can assume, shrinking $U$ if necessary, that $X_{H}$ is transversal to $M_0$ on $U$. Now consider the initial value problem \[\frac{d}{dt}\delta_{t}(x)=X_{H}(\delta_t(x)), \ \ \delta_0(x)=x.\] The solution to this problem exists for small time $t$, say for $t\in [0,\bar{\varepsilon}]$, for all $x$ lying in some small enough neighbourhood of $S$. Moreover, since $X_H$ is transversal to $M_0$ along $S$, there exists a positive real number $\varepsilon$ such that the integral curves $\delta_t(x)$ for $x\in S$ do not meet $M_0$ during the time interval $(0,\varepsilon)$. Let \[S_{\varepsilon}=\cup_{t\in[0,\varepsilon/2]}\delta_{t}(S).\] Take a smooth function $\varphi$ which is identically equal to 1 on a small neighbourhood of $S_{\varepsilon}$ and $\text{supp}\,\varphi\subset \cup_{t\in[0,\varepsilon)}\delta_t(S)$. We then consider the initial value problem with $X_{H}$ replaced by $X_{\varphi H}$. Since $X_{\varphi H}$ is compactly supported, the flow of $X_{\varphi H}$, say $\bar{\delta}_{t}$, is defined for all time $t$. Because of the choice of $\varphi$, the integral curves $\bar{\delta}_t(x_0)$, $x_0\in M_0$, cannot come back to $M_0$ for $t>0$. Hence, we have the following: \begin{itemize} \item $\bar{\delta}_0|_U=id_U$ \item $\bar{\delta}_t=id$ outside a small neighbourhood of $S_{\varepsilon}$ \item $dist(\bar{\delta}_1(x),M_0)>r$ for all $x\in S$ and for some $r>0$. \end{itemize} This proves that $\mathcal{D}$ sharply moves any submanifold of $M$ of positive codimension.
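For concreteness, the following model computation (ours, included only as an illustration and assuming the sign conventions of equations~(\ref{contact_hamiltonian1})) makes the contact-Hamiltonian construction explicit. On $(\R^{3},\alpha=dz+x\,dy)$ we have $d\alpha=dx\wedge dy$, and $\xi=\ker\alpha$ is framed by $\partial_x$ and $\partial_y-x\,\partial_z$; writing $X_H=a\,\partial_x+b\,\partial_y+c\,\partial_z$ and evaluating the two defining conditions on this frame gives
\[X_H=(x\,\partial_zH-\partial_yH)\,\partial_x+\partial_xH\,\partial_y+(H-x\,\partial_xH)\,\partial_z.\]
In particular, $H\equiv 1$ yields the Reeb vector field $\partial_z$. Note that $X_H$ at a point depends only on the $1$-jet of $H$ there, which is why prescribing the value of $H$ and of $dH|_\xi$ along $S$, as above, determines $X_H$ along $S$.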
Since $M$ is open it has a core $K$ which is of positive codimension. Since the relation $\mathcal R$ is open and invariant under the action of $\mathcal D$, we can apply Theorem~\ref{T:gromov-invariant} to conclude that $\mathcal R$ satisfies the parametric $h$-principle near $K$. We now need to lift the $h$-principle from $Op\,K$ to all of $M$. By the local $h$-principle near $K$, the restriction of an arbitrary section $F_0$ of $\mathcal R$ to a suitable open neighbourhood $U$ of $K$ in $M$ admits a homotopy $F_{t}$ in $\Gamma(\mathcal{R}|_U)$ such that $F_1$ is holonomic on $U$. Let $f_{t}=p^{(r)}\circ F_t$, where $p^{(r)}:J^{r}(M,N)\rightarrow N$ is the canonical projection map of the jet bundle. By Corollary~\ref{CO} above we get a homotopy of isocontact immersions $g_{t}:(M,\xi)\rightarrow (M,\xi)$ satisfying $g_{0}=id_{M}$ and $g_{1}(M)\subset U$, where $\xi=\ker\alpha$. The concatenation of the homotopies $g_t^*(F_0)$ and $g_1^*(F_t)$ gives the desired homotopy in $\Gamma(\mathcal R)$ between $F_0$ and the holonomic section $g_1^*(F_1)$. This proves that $\mathcal R$ satisfies the ordinary $h$-principle. To prove the parametric $h$-principle, take a parametrized section $F_z\in \Gamma(\mathcal{R})$, $z\in \mathbb D^n$, such that $F_z$ is holonomic for all $z\in \mathbb S^{n-1}$. This implies that there is a family of smooth maps $f_z\in Sol(\mathcal R)$, parametrized by $z\in \mathbb S^{n-1}$, such that $F_z=j^r_{f_z}$. We shall homotope the parametrized family $F_z$ to a family of holonomic sections in $\mathcal R$ such that the homotopy remains constant on $\mathbb S^{n-1}$. By the parametric $h$-principle near $K$, there exists an open neighbourhood $U$ of $K$ and a homotopy $H:\mathbb D^n\times\mathbb I\to \Gamma(\mathcal R|_U)$, such that $H^0_z=F_z$ and $H_z^1$ is holonomic for all $z\in \mathbb D^n$; furthermore, $H_z^t=j^r_{f_z}$ on $U$ for all $z\in \mathbb S^{n-1}$. Let $\delta:[0,1/2]\to [0,1]$ be the linear homeomorphism such that $\delta(0)=0$ and $\delta(1/2)=1$. Define a function $\mu$ as follows: \begin{eqnarray*}\mu(z) = & \delta(\|z\|)z/\|z\| & \text{ if }\ 0<\|z\|\leq 1/2,\end{eqnarray*} with $\mu(0)=0$. First deform $F_z$ to $\widetilde{F}_z$, where \[\begin{array}{rcl}\widetilde{F}_z & = & \left\{ \begin{array}{ll} F_{\mu(z)} & \text{if } \|z\|\leq 1/2\\ F_{z/\|z\|} & \text{if } 1/2\leq\|z\|\leq 1\end{array}\right.\end{array}\] Let $\bar{\delta}:[1/2,1]\to [0,1]$ be the linear homeomorphism such that $\bar{\delta}(1/2)=1$ and $\bar{\delta}(1)=0$. Define a homotopy $\widetilde{F}^s_z$ of $\widetilde{F}_z$ as follows: \[\begin{array}{rcl}\widetilde{F}_z^s & = & \left\{ \begin{array}{ll} g_s^*(F_{\mu(z)}), & \|z\|\leq 1/2\\ g^*_{s\bar{\delta}(\|z\|)}(F_{z/\|z\|}), & 1/2\leq\|z\|\leq 1\end{array}\right.\end{array}\] Note that \[\begin{array}{rcl}\widetilde{F}^1_z & = & \left\{ \begin{array}{ll} g_1^*(F_{\mu(z)}), & \|z\|\leq 1/2\\ g^*_{\bar{\delta}(\|z\|)}(F_{z/\|z\|}), & 1/2\leq\|z\|\leq 1\end{array}\right.\end{array}\] Finally we consider a parametrized homotopy given as follows: \[\begin{array}{rcl}\widetilde{H}^s_z & = & \left\{ \begin{array}{ll} g_1^*(H^s_{\mu(z)}), & \|z\|\leq 1/2\\ g^*_{\bar{\delta}(\|z\|)}(F_{z/\|z\|}), & 1/2\leq\|z\|\leq 1\end{array}\right.\end{array}\] Note that $\widetilde{H}^1_z$ is holonomic for all $z\in\mathbb D^n$ and $\widetilde{H}^s_z=j^r_{f_z}$ for all $z\in \mathbb S^{n-1}$. The concatenation of the three homotopies now gives a homotopy between the parametrized sections $F_z$ and $\widetilde{H}^1_z$ relative to $\mathbb S^{n-1}$.
This proves the parametric $h$-principle for $\mathcal R$. \end{proof} \section{Gromov-Phillips Theorem on open contact manifolds} Recall that a leaf $L$ of an arbitrary foliation on $M$ admits an injective immersion $i_L:L\to M$. We shall say that $L$ is a contact submanifold of $(M,\alpha)$ if the pullback form $i_L^*\alpha$ is a contact form on $L$. \begin{definition} {\em Let $M$ be a smooth manifold with a contact form $\alpha$. A foliation $\mathcal F$ on $M$ will be called a \emph{contact foliation subordinate to} $\alpha$ or a \emph{contact foliation on} $(M,\alpha)$, if the leaves of $\mathcal F$ are contact submanifolds of $(M,\alpha)$.\label{subordinate_contact_foliation}} \end{definition} \begin{remark}{\em In view of Lemma~\ref{L:contact_submanifold}, $\mathcal F$ is a contact foliation on $(M,\alpha)$ if and only if $T\mathcal F$ is transversal to the contact distribution $\ker\alpha$ and $T\mathcal F\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$.}\label{R:tangent_contact_foliation}\end{remark} Let $(M,\alpha)$ be a contact manifold and $N$ a manifold with a smooth foliation $\mathcal F_N$ of even codimension. We denote by $Tr_\alpha(M,\mathcal F_N)$ \index{$Tr_\alpha(M,\mathcal F_N)$} the space of smooth maps $f:M\to N$ transversal to $\mathcal F_N$ for which the inverse foliations $f^*\mathcal F_N$ are contact foliations on $M$ subordinate to $\alpha$. Let $\mathcal E_\alpha(TM,\nu\mathcal F_N)$ \index{$\mathcal E_\alpha(TM,\nu\mathcal F_N)$} be the space of all vector bundle morphisms $F:TM\rightarrow TN$ such that \begin{enumerate} \item $\pi\circ F:TM\to\nu\mathcal F_N$ is an epimorphism and \item $\ker(\pi\circ F)\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$,\end{enumerate} where $\pi:TN\to \nu\mathcal F_N$ is the quotient map. We endow $Tr_\alpha(M,\mathcal F_N)$ and $\mathcal{E}_\alpha(TM,\nu\mathcal F_N)$ with the $C^{\infty}$ compact-open topology and the $C^{0}$ compact-open topology respectively. The main result of this section can now be stated as follows: \begin{theorem}\label{T:contact-transverse} Let $(M,\alpha)$ be an open contact manifold and $(N,\mathcal F_N)$ be any foliated manifold. Suppose that the codimension of $\mathcal F_N$ is even and is strictly less than the dimension of $M$. Then \[\pi\circ d:Tr_\alpha(M,\mathcal F_N)\to\mathcal{E}_\alpha(TM,\nu\mathcal F_N)\] is a weak homotopy equivalence.\end{theorem} Let $\mathcal R$ denote the first-order differential relation consisting of all 1-jets represented by triples $(x,y,G)$, where $x\in M, y\in N$ and $G:T_{x}M\rightarrow T_{y}N$ is a linear map such that \begin{enumerate}\item $\pi\circ G:T_xM\to \nu(\mathcal F_N)_y$ is an epimorphism \item $\ker(\pi\circ G)\cap \ker\alpha_x$ is a symplectic subspace of $(\ker\alpha_x,d'\alpha_x)$. \end{enumerate} The space of sections of $\mathcal R$ can be identified with $\mathcal E_\alpha(TM,\nu(\mathcal F_N))$ defined above. \begin{observation} {\em Theorem~\ref{T:contact-transverse} states that the relation $\mathcal R$ satisfies the parametric $h$-principle. Indeed, the solution space of $\mathcal R$ is the same as $Tr_\alpha(M,\mathcal F_N)$. To see this, it is sufficient to note (see Definition~\ref{contact_submanifold}) that the following two statements are equivalent: \begin{enumerate} \item[(S1)] $f:M\to N$ is transversal to $\mathcal F_N$ and the leaves of the inverse foliation $f^*\mathcal F_N$ are (immersed) contact submanifolds of $M$.
\item[(S2)] $\pi\circ df$ is an epimorphism and $\ker (\pi\circ df)\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$. \end{enumerate}}\label{P:solution space}\end{observation} We will now show that the relation $\mathcal R$ is open and invariant under the action of local contactomorphisms. \begin{lemma} \label{OR} The relation $\mathcal{R}$ defined above is an open relation. \end{lemma} \begin{proof} Let $V$ be a $(2m+1)$-dimensional vector space with a (linear) 1-form $\theta$ and a 2-form $\tau$ on it such that $\theta \wedge \tau^{m}\neq 0$. We shall call $(\theta,\tau)$ an almost contact structure on $V$. Note that the restriction of $\tau$ to $\ker\theta$ is then non-degenerate. A subspace $K$ of $V$ will be called an almost contact subspace if the restrictions of $\theta$ and $\tau$ to $K$ define an almost contact structure on $K$. In this case, $K$ must be transversal to $\ker\theta$ and $K\cap \ker\theta$ will be a symplectic subspace of $\ker\theta$. Let $W$ be a vector space of even dimension and $Z$ a subspace of $W$ of codimension $2q$. Denote by $L_Z^\pitchfork(V,W)$ the set of all linear maps $L:V\to W$ which are transversal to $Z$. This is clearly an open subset in the space of all linear maps from $V$ to $W$. Define a subset $\mathcal L$ of $L_Z^\pitchfork(V,W)$ by \[\mathcal L=\{L\in L_Z^\pitchfork(V,W)\,|\, \ker(\pi\circ L) \text{ is an almost contact subspace of }V\}.\] We shall prove that $\mathcal L$ is an open subset of $L_Z^\pitchfork(V,W)$. Consider the map \[E:L_Z^\pitchfork(V,W)\rightarrow Gr_{2(m-q)+1}(V)\] \[L\mapsto \ker (\pi\circ L),\] where $\pi:W\to W/Z$ is the quotient map. Let $\mathcal U_c$ denote the subset of $Gr_{2(m-q)+1}(V)$ consisting of all almost contact subspaces $K$ of $V$. Observe that $\mathcal L=E^{-1}(\mathcal U_c)$. We shall now prove that \begin{itemize}\item $E$ is a continuous map and \item $\mathcal U_c$ is an open subset of $Gr_{2(m-q)+1}(V)$. \end{itemize} To prove that $E$ is continuous, take $L_0\in L_Z^\pitchfork(V,W)$ and let $K_0=\ker (\pi\circ L_0)$. Consider the subbasic open set $U_{K_0}$ consisting of all subspaces $Y$ of $V$ such that the canonical projection $p:K_0\oplus K_0^\perp\to K_0$ maps $Y$ isomorphically onto $K_0$. The inverse image of $U_{K_0}$ under $E$ consists of all $L:V\to W$ such that $p|_{\ker (\pi\circ L)}:\ker (\pi\circ L)\to K_0$ is onto. It may be seen easily that if $L\in L_Z^\pitchfork(V,W)$ then \begin{eqnarray*} p \text{ maps } \ker (\pi\circ L) \text{ onto }K_0 & \Leftrightarrow & \ker (\pi\circ L)\cap K_0^\perp=\{0\} \\ & \Leftrightarrow & \pi\circ L|_{K_0^\perp}:K_0^\perp\to W/Z \text{ is an isomorphism}.\end{eqnarray*} Now, the set of all $L$ such that $\pi\circ L|_{K_0^\perp}$ is an isomorphism is an open subset. Hence $E^{-1}(U_{K_0})$ is open and therefore $E$ is continuous. To prove the openness of $\mathcal U_c$ take $K_0\in\mathcal U_c$. Recall that a subbasic open set $U_{K_0}$ containing $K_0$ can be identified with the space $L(K_0,K_0^\perp)$, where $K_0^\perp$ denotes the orthogonal complement of $K_0$ with respect to some inner product on $V$ (\cite{milnor_stasheff}). Let $\Theta$ denote the following composition of continuous maps: \[\begin{array}{rcccl}U_{K_0}\cong L(K_0,K_0^{\perp}) & \stackrel{\Phi}{\longrightarrow} & L(K_0,V)& \stackrel{\Psi}{\longrightarrow} & \Lambda^{2(m-q)+1}(K_0^*)\cong\R\end{array}\] where $\Phi(L)=I+L$ and $\Psi(L)=L^*(\theta\wedge\tau^{m-q})$.
It may be noted that if $K\in U_{K_0}$ corresponds to $L\in L(K_0,K_0^{\perp})$, then the image of $\Phi(L)=I+L$ is $K$. Hence it follows that \[{\mathcal U}_c\cap U_{K_0}=(\Psi\circ\Phi)^{-1}(\R\setminus 0),\] which proves that ${\mathcal U}_c\cap U_{K_0}$ is open. Since $U_{K_0}$ is a subbasic open set in the topology of the Grassmannian, this proves the openness of $\mathcal U_c$. Thus $\mathcal L$ is an open subset. We now show that $\mathcal R$ is an open relation. First note that each tangent space $T_xM$ has an almost contact structure given by $(\alpha_x,d\alpha_x)$. Let $U$ be a trivializing neighbourhood of the tangent bundle $TM$. We can choose a trivializing neighbourhood $\tilde{U}$ for the tangent bundle $TN$ such that $T\mathcal F_N|_{\tilde{U}}$ is isomorphic to $\tilde{U}\times Z$ for some subspace $Z$ of codimension $2q$. This implies that $\mathcal R\cap J^1(U,\tilde{U})$ is diffeomorphic to $U\times\tilde{U}\times\mathcal L$. Since the sets $J^1(U,\tilde{U})$ cover the jet space $J^1(M,N)$, this completes the proof of the lemma. \end{proof} \begin{lemma} \label{IV} $\mathcal{R}$ is invariant under the action of the pseudogroup of local contactomorphisms of $(M,\alpha)$. \end{lemma} \begin{proof} Let $\delta$ be a local diffeomorphism on an open neighbourhood of $x\in M$ such that $\delta^*\alpha=\lambda\alpha$, where $\lambda$ is a nowhere vanishing function on $Op\, x$. This implies that $d\delta_x(\xi_x)=\xi_{\delta(x)}$ and $d\delta_x$ preserves the conformal symplectic structure determined by $d\alpha$ on $\xi$. If $f$ is a local solution of $\mathcal R$ at $\delta(x)$, then \[d\delta_x(\ker d(f\circ\delta)_x\cap \xi_x)=\ker df_{\delta(x)}\cap\xi_{\delta(x)}.\] Hence $f\circ\delta$ is also a local solution of $\mathcal R$ at $x$. Since $\mathcal R$ is open, every representative function of a jet in $\mathcal R$ is a local solution of $\mathcal R$. Thus local contactomorphisms act on $\mathcal R$ by $\delta.j^1_f(\delta(x)) = j^1_{f\circ\delta}(x)$. \end{proof} \emph{Proof of Theorem~\ref{T:contact-transverse}}: In view of Theorem~\ref{CT} and Lemmas~\ref{OR} and \ref{IV}, it follows that the relation $\mathcal R$ satisfies the parametric $h$-principle. This completes the proof by Observation~\ref{P:solution space}.\qed\\ \begin{definition} \emph{A smooth submersion $f:(M,\alpha)\to N$ is called a \emph{contact submersion} if the level sets of $f$ are contact submanifolds of $M$.} \end{definition} We shall denote the space of contact submersions $(M,\alpha)\to N$ by $\mathcal C_\alpha(M,N)$. The space of epimorphisms $F:TM\to TN$ for which $\ker F\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$ will be denoted by $\mathcal E_\alpha(TM,TN)$. If $\mathcal F_N$ in Theorem~\ref{T:contact-transverse} is the zero-dimensional foliation then we get the following result. \begin{corollary} Let $(M,\alpha)$ be an open contact manifold. The derivative map \[d:\mathcal C_\alpha(M,N)\to \mathcal E_\alpha(TM,TN)\] is a weak homotopy equivalence.\label{T:contact_submersion} \end{corollary} \begin{remark}{\em Suppose that $F_0\in \mathcal E_\alpha(TM,TN)$ and $D$ is the kernel of $F_0$. Then $(D,\alpha|_D,d\alpha|_D)$ is an almost contact distribution. Since $M$ is an open manifold, the bundle epimorphism $F_0:TM\to TN$ can be homotoped (in the space of bundle epimorphisms) to the derivative of a submersion $f:M\to N$ (\cite{phillips}). Hence the distribution $\ker F_0$ is homotopic to an integrable distribution, namely the one given by the submersion $f$.
It then follows from Theorem~\ref{T:contact} that $(D,\alpha|_D,d\alpha|_D)$ is homotopic to the distribution associated to a contact foliation $\mathcal F$ on $M$. Theorem~\ref{T:contact-transverse} further implies that it is possible to get a foliation $\mathcal F$ which is subordinate to $\alpha$ and is defined by a submersion.}\end{remark} \section{Contact Submersions into Euclidean spaces} In this section we interpret the homotopy classification of contact submersions of $M$ into $\R^{2n}$ in terms of certain $2n$-frames on $M$. We then apply this result to show the existence of contact foliations on certain open subsets of odd-dimensional spheres obtained by deleting lower-dimensional spheres. Throughout this section $M$ is a contact manifold with a contact form $\alpha$ and $\xi$ is the contact distribution $\ker\alpha$. Recall from Section 2 that the tangent bundle $TM$ of a contact manifold $(M,\alpha)$ splits as $\ker\alpha\oplus\ker \,d\alpha$. Let $P:TM\to\ker\alpha$ be the projection morphism onto $\ker\alpha$ relative to this splitting. We shall denote the projection of a vector field $X$ on $M$ under $P$ by $\bar{X}$. For any smooth function $h:M\to \R$, $X_h$ will denote the contact Hamiltonian vector field defined as in the preliminaries (see equations (\ref{contact_hamiltonian1})). \begin{lemma} Let $(M,\alpha)$ be a contact manifold and $f:M\to \R^{2n}$ be a submersion with coordinate functions $f_1,f_2,\dots,f_{2n}$. Then the following statements are equivalent: \begin{enumerate}\item[(C1)] $f$ is a contact submersion. \item[(C2)] The restriction of $d\alpha$ to the bundle spanned by $X_{f_1},\dots,X_{f_{2n}}$ defines a symplectic structure. \item[(C3)] The vector fields $\bar{X}_{f_1},\dots,\bar{X}_{f_{2n}}$ span a symplectic subbundle of $(\xi,d'\alpha)$. \end{enumerate}\end{lemma} \begin{proof} If $f:(M,\alpha)\to\R^{2n}$ is a contact submersion then the following relation holds pointwise: \begin{equation}\ker df\cap \ker\alpha=\langle \bar{X}_{f_1},\dots,\bar{X}_{f_{2n}}\rangle^{\perp_{d'\alpha}},\end{equation} where the right hand side represents the symplectic complement of the subbundle spanned by $\bar{X}_{f_1},\dots,\bar{X}_{f_{2n}}$ with respect to $d'\alpha$. Indeed, for any $v\in \ker\alpha$, \[ d'\alpha(\bar{X}_{f_i},v)=-df_i(v)\ \ \text{ for all }i=1,\dots,2n. \] Therefore, $v\in\ker\alpha\cap\ker df$ if and only if $d'\alpha(\bar{X}_{f_i},v)=0$ for all $i=1,\dots,2n$, that is, $v\in \langle \bar{X}_{f_1},\dots,\bar{X}_{f_{2n}}\rangle^{\perp_{d'\alpha}}$. Thus, the equivalence of (C1) and (C3) is a consequence of the equivalence between (S1) and (S2). The equivalence of (C2) and (C3) follows from the relation $d\alpha(X,Y)=d\alpha(\bar{X},\bar{Y})$, where $X,Y$ are any two vector fields on $M$. \end{proof} An ordered set of vectors $e_{1}(x),\dots,e_{2n}(x)$ in $\xi_x$ will be called a \emph{symplectic $2n$-frame} \index{symplectic $2n$-frame} in $\xi_x$ if the subspace spanned by these vectors is a symplectic subspace of $\xi_x$ with respect to the symplectic form $d'\alpha_x$. Let $T_{2n}\xi$ be the bundle of symplectic $2n$-frames in $\xi$ and $\Gamma(T_{2n}\xi)$ denote the space of sections of $T_{2n}\xi$ with the $C^{0}$ compact-open topology. For any smooth submersion $f:(M,\alpha)\rightarrow \mathbb{R}^{2n}$, define the \emph{contact gradient} of $f$ by \[\Xi f(x)=(\bar{X}_{f_{1}}(x),\dots,\bar{X}_{f_{2n}}(x)),\] where $f_{i}$, $i=1,2,\dots,2n$, are the coordinate functions of $f$.
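As a simple illustration (ours, in the lowest-dimensional case $2m+1=3$, $2n=2$, and assuming the sign conventions of equations~(\ref{contact_hamiltonian1})), consider $(\R^3,\alpha=dz+x\,dy)$, for which $\ker d\alpha$ is spanned by $\partial_z$ and $\xi$ is framed by $\partial_x$ and $\partial_y-x\,\partial_z$. For the submersion $f=(f_1,f_2)=(x,y)$ one computes $X_{f_1}=\partial_y$ and $X_{f_2}=-\partial_x+y\,\partial_z$, so that projecting onto $\xi$ along $\partial_z$ gives
\[\Xi f=(\bar{X}_{f_1},\bar{X}_{f_2})=\big(\partial_y-x\,\partial_z,\ -\partial_x\big),\qquad d'\alpha(\bar{X}_{f_1},\bar{X}_{f_2})=1.\]
Thus $\Xi f$ is a symplectic $2$-frame, in accordance with (C3); indeed the level sets of $f$ are the vertical lines, on which $\alpha$ restricts to the nowhere vanishing form $dz$.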
If $f$ is a contact submersion then $\bar{X}_{f_{1}}(x),\dots,\bar{X}_{f_{2n}}(x)$ span a symplectic subspace of $\xi_x$ for all $x\in M$, and hence $\Xi f$ becomes a section of $T_{2n}\xi$. \begin{theorem} \label{ED} Let $(M^{2m+1},\alpha)$ be an open contact manifold. Then the contact gradient map $\Xi:\mathcal{C}_\alpha(M,\mathbb{R}^{2n})\rightarrow \Gamma(T_{2n}\xi)$ is a weak homotopy equivalence. \end{theorem} \begin{proof} As $T\mathbb{R}^{2n}$ is a trivial vector bundle, the map \[i_{*}:\mathcal{E}_\alpha(TM,\mathbb{R}^{2n})\rightarrow \mathcal{E}_\alpha(TM,T\mathbb{R}^{2n})\] induced by the inclusion $i:0 \hookrightarrow \mathbb{R}^{2n}$ is a homotopy equivalence, where $\mathbb{R}^{2n}$ is regarded as the vector bundle over $0\in \mathbb{R}^{2n}$. The homotopy inverse $c$ is given by the following diagram. For any $F\in \mathcal E_\alpha(TM,T\R^{2n})$, $c(F)$ is defined as $p_2\circ F$, \[\begin{array}{ccccc} TM & \stackrel{F}{\longrightarrow} & T\mathbb{R}^{2n}=\mathbb{R}^{2n}\times \mathbb{R}^{2n} & \stackrel{p_2}{\longrightarrow} & \mathbb{R}^{2n}\\ \downarrow & & \downarrow & & \downarrow\\ M & \longrightarrow & \mathbb{R}^{2n} & \longrightarrow & 0 \end{array}\] where $p_2$ is the projection map onto the second factor. Since $d'\alpha$ is non-degenerate, the contraction of $d'\alpha$ with a vector $X\in\ker\alpha$ defines an isomorphism \[\phi:\ker\alpha \rightarrow (\ker\alpha)^*,\qquad \phi(X)=i_X\,d'\alpha.\] We define a map $\sigma:\oplus_{i=1}^{2n}T^*M\to \oplus_{i=1}^{2n}\xi$ by \[\sigma(G_1,\dots,G_{2n})=-(\phi^{-1}(\bar{G}_1),\dots,\phi^{-1}(\bar{G}_{2n})),\] where $\bar{G}_i=G_i|_{\ker\alpha}$. Then noting that \[\ker(G_1,\dots,G_{2n})\cap \ker\alpha=\langle\phi^{-1}(\bar{G}_1),\dots,\phi^{-1}(\bar{G}_{2n})\rangle^{\perp_{d'\alpha}},\] we get a map $\tilde{\sigma}$ by restricting $\sigma$ to $\mathcal E_\alpha(TM,\R^{2n})$: \[\tilde{\sigma}:{\mathcal E}_\alpha(TM,\mathbb{R}^{2n})\longrightarrow \Gamma(T_{2n}\xi).\] Moreover, the contact gradient map $\Xi$ factors as $\Xi= \tilde{\sigma} \circ c \circ d$: \begin{equation}\mathcal{C}_\alpha(M,\mathbb{R}^{2n})\stackrel{d}\rightarrow \mathcal{E}_\alpha(TM,T\mathbb{R}^{2n})\stackrel{c}\rightarrow \mathcal{E}_\alpha(TM,\mathbb{R}^{2n})\stackrel{\tilde{\sigma}}\rightarrow \Gamma(T_{2n}\xi).\end{equation} To see this take any $f:M\to \R^{2n}$. Then $c(df)=(df_{1},\dots,df_{2n})$, and hence, since $i_{\bar{X}_{f_i}}d'\alpha=-df_i|_\xi$, \[ \tilde{\sigma} c (df)=-(\phi^{-1}(df_1|_\xi),\dots,\phi^{-1}(df_{2n}|_\xi)) = (\bar{X}_{f_1},\dots,\bar{X}_{f_{2n}})=\Xi f,\] which gives $\tilde{\sigma} \circ c \circ d(f)=\Xi f$. We claim that $\tilde{\sigma}: \mathcal{E}_\alpha(TM,\mathbb{R}^{2n})\to \Gamma(T_{2n}\xi)$ is a homotopy equivalence. To prove this we define a map $\tau: \oplus_{i=1}^{2n}\xi \to \oplus_{i=1}^{2n} T^*M$ by the formula \[\tau(X_1,\dots,X_{2n})=-(i_{X_1}d\alpha,\dots,i_{X_{2n}} d\alpha),\] which induces a map $\tilde{\tau}: \Gamma(T_{2n}\xi) \to \mathcal{E}_\alpha(TM,\mathbb{R}^{2n})$. It is easy to verify that $\tilde{\sigma} \circ \tilde{\tau}=id$. In order to show that $\tilde{\tau}\circ\tilde{\sigma}$ is homotopic to the identity, take any $G\in \mathcal E_\alpha(TM,\R^{2n})$ and let $\widehat{G}=(\tau\circ \sigma)(G)$. Then $\widehat{G}$ equals $G$ on $\ker\alpha$. Define a homotopy between $G$ and $\widehat{G}$ by $G_t=(1-t)G+t\widehat{G}$. Then $G_t=G$ on $\ker\alpha$ and hence $\ker G_t\cap \ker\alpha=\ker G\cap \ker\alpha$. This also implies that each $G_t$ is an epimorphism. Thus, the homotopy $G_t$ lies in $\mathcal E_\alpha(TM,\R^{2n})$.
This shows that $\tilde{\tau}\circ \tilde{\sigma}$ is homotopic to the identity map. This completes the proof of the theorem since $d:\mathcal{C}_\alpha(M,\mathbb{R}^{2n}) \rightarrow \mathcal{E}_\alpha(TM,T\mathbb{R}^{2n})$ is a weak homotopy equivalence (Corollary~\ref{T:contact_submersion}) and $c$, $\tilde{\sigma}$ are homotopy equivalences.\end{proof} \begin{example} {\em Let $\mathbb{S}^{2N-1}$ denote the $(2N-1)$-sphere in $\R^{2N}$, \[\mathbb{S}^{2N-1}=\{(z_{1},\dots,z_{2N})\in \mathbb{R}^{2N}: \sum_{1}^{2N}|z_{i}|^{2}=1\}.\] This is a standard example of a contact manifold, where the contact form $\eta$ is induced from the 1-form $\sum_{i=1}^{N} (x_i\,dy_i-y_i\,dx_i)$ on $\R^{2N}$. For $N>K$, we consider the open manifold $\mathcal S_{N,K}$ obtained from $\mathbb{S}^{2N-1}$ by deleting a $(2K-1)$-sphere: \begin{center}$\mathcal{S}_{N,K}=\mathbb S^{2N-1}\setminus \mathbb{S}^{2K-1}$,\end{center} where \[\mathbb{S}^{2K-1}=\{(z_{1},\dots,z_{2K},0,\dots,0)\in \mathbb{R}^{2N}: \sum_{1}^{2K}|z_{i}|^{2}=1\}.\] Then $\mathcal{S}_{N,K}$ is a contact submanifold of $\mathbb S^{2N-1}$. Let $\xi$ denote the contact structure associated to the contact form $\eta$ on $\mathcal S_{N,K}$. Since $\xi\to \mathcal S_{N,K}$ is a symplectic vector bundle, we can choose a complex structure $J$ on $\xi$ such that $d'\eta$ is $J$-invariant. Thus, $(\xi,J)$ becomes a complex vector bundle of rank $N-1$. We define a homotopy $F_t:\mathcal S_{N,K}\to \mathcal S_{N,K}$, $t\in [0,1]$, as follows: For $(x,y)\in (\mathbb{R}^{2K}\times \mathbb{R}^{2(N-K)})\cap \mathcal{S}_{N,K}$, \[F_t(x,y)=\frac{(1-t)(x,y)+t(0,y/\|y \|)}{\|(1-t)(x,y)+t(0,y/\| y \|) \|}.\] This is well defined since $y\neq 0$. It is easy to see that $F_0=id$, that $F_1$ maps $\mathcal S_{N,K}$ onto $\mathbb{S}^{2(N-K)-1}$, and that the homotopy fixes $\mathbb{S}^{2(N-K)-1}$ pointwise. Define $r:\mathcal S_{N,K}\rightarrow (\{0\}\times \mathbb{R}^{2(N-K)})\cap \mathcal S_{N,K}= \mathbb{S}^{2(N-K)-1}$ by \[r(x,y)= (0,y/\|y\|), \ \ \ (x,y)\in(\R^{2K}\times\R^{2(N-K)})\cap \mathcal{S}_{N,K}.\] Then $F_1$ factors as $F_1=i\circ r$, where $i$ is the inclusion map, and we have the following diagram: \[ \begin{array}{lcccl} r^*(i^*\xi)&\longrightarrow&i^*\xi&\longrightarrow&\xi\\ \downarrow&&\downarrow&&\downarrow\\ \mathcal{S}_{N,K}&\stackrel{r}{\longrightarrow}&\mathbb{S}^{2(N-K)-1}&\stackrel{i}{\longrightarrow}&\mathcal{S}_{N,K}\end{array}\] Hence, $\xi=F_0^*\xi\cong F_1^*\xi=r^*(\xi|_{\mathbb S^{2(N-K)-1}})$ as complex vector bundles. Since $\xi$ is a (complex) vector bundle of rank $N-1$, $\xi|_{\mathbb S^{2(N-K)-1}}$ will have a decomposition of the following form (\cite{husemoller}): \[\xi|_{\mathbb S^{2(N-K)-1}}\cong \tau^{N-K-1}\oplus \theta^K,\] where $\theta^K$ is a trivial complex vector bundle of rank $K$ and $\tau^{N-K-1}$ is a complementary subbundle. Hence $\xi$ must also have a trivial direct summand $\theta$ of rank $K$. Moreover, $\theta$ will be a symplectic subbundle of $\xi$ since the complex structure $J$ is compatible with the symplectic structure $d'\eta$ on $\xi$. Thus, $\mathcal S_{N,K}$ admits a symplectic $2K$-frame spanning $\theta$. Hence, by Theorem~\ref{ED}, there exist contact submersions of $\mathcal S_{N,K}$ into $\R^{2K}$. Consequently, $\mathcal S_{N,K}$ admits contact foliations of codimension $2K$ for each $K<N$. }\end{example} \section{Classification of contact foliations on contact manifolds} Throughout this section $M$ is a contact manifold with a contact form $\alpha$.
As before $\xi$ will denote the associated contact structure $\ker\alpha$ and $d'\alpha=d\alpha|_{\xi}$. Let $Fol_\alpha^{2q}(M)$ denote the space of contact foliations on $M$ of codimension $2q$ subordinate to $\alpha$ (Definition~\ref{subordinate_contact_foliation}). Recall the classifying space $B\Gamma_{2q}$ and the universal $\Gamma_{2q}$ structure $\Omega_{2q}$ on it (see Subsection~\ref{classifying space}). Let $\mathcal E_{\alpha}(TM,\nu\Omega_{2q})$ be the space of all vector bundle epimorphisms $F:TM\to \nu \Omega_{2q}$ such that $\ker F$ is transversal to $\ker\alpha$ and $\ker\alpha\cap \ker F$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$. If $\mathcal F\in Fol^{2q}(M)$ and $f:M\to B\Gamma_{2q}$ is a classifying map of $\mathcal F$, then $f^*\Omega_{2q}= \mathcal F$ as $\Gamma_{2q}$-structures. Recall that we can define a vector bundle epimorphism $TM\to \nu\Omega_{2q}$ by the following diagram (see \cite{haefliger1}) \begin{equation} \xymatrix@=2pc@R=2pc{ TM \ar@{->}[r]^-{\pi_M}\ar@{->}[rd] & \nu \mathcal{F}\cong f^*(\nu \Omega_{2q}) \ar@{->}[r]^-{\bar{f}}\ar@{->}[d] & \nu \Omega_{2q} \ar@{->}[d]\\ & M \ar@{->}[r]_-{f} & B\Gamma_{2q} }\label{F:H(foliation)} \end{equation} where $\pi_M:TM\to \nu(\mathcal F)$ is the quotient map and $(\bar{f},f)$ is a pull-back diagram. Note that the kernel of this morphism is $T\mathcal F$ and therefore, if $\mathcal F\in Fol^{2q}_\alpha(M)$, then $\bar{f}\circ \pi_M \in \mathcal E_\alpha(TM,\nu\Omega_{2q})$ (see Remark~\ref{R:tangent_contact_foliation}). However, the morphism $\bar{f}\circ \pi_M$ is defined uniquely only up to homotopy. Thus, there is a function \[H'_\alpha:Fol^{2q}_\alpha(M)\to \pi_0(\mathcal E_\alpha(TM,\nu\Omega_{2q})).\] \begin{definition} {\em Two contact foliations $\mathcal F_0$ and $\mathcal F_1$ on $(M,\alpha)$ are said to be \emph{integrably homotopic relative to $\alpha$} if there exists a foliation $\tilde{\mathcal F}$ on $(M\times\mathbb I,\alpha\oplus 0)$ such that the following conditions are satisfied: \begin{enumerate} \item $\tilde{\mathcal F}$ is transversal to the trivial foliation of $M\times\mathbb I$ by the leaves $M\times\{t\}$, $t\in \mathbb I$; \item the foliation $\mathcal F_t$ on $M$ induced by the canonical injective map $i_t:M\to M\times\mathbb I$ (given by $x\mapsto (x,t)$) is a contact foliation subordinate to $\alpha$ for each $t\in\mathbb I$; \item the induced foliations on $M\times\{0\}$ and $M\times\{1\}$ coincide with $\mathcal F_0$ and $\mathcal F_1$ respectively,\end{enumerate} where $\alpha\oplus 0$ denotes the pull-back of $\alpha$ by the projection map $p_1:M\times\mathbb I\to M$.}\end{definition} Let $\pi_0(Fol^{2q}_{\alpha}(M))$ denote the space of integrable homotopy classes of contact foliations on $(M,\alpha)$. Define \[H_\alpha:\pi_0(Fol^{2q}_\alpha(M))\to \pi_0(\mathcal E_\alpha(TM,\nu\Omega_{2q}))\] by $H_{\alpha}([\mathcal{F}])=H'_\alpha(\mathcal F)$, where $[\mathcal F]$ denotes the integrable homotopy class of $\mathcal F$ relative to $\alpha$. To see that $H_\alpha$ is well-defined, let $\tilde{\mathcal F}$ be an integrable homotopy relative to $\alpha$ between two contact foliations $\mathcal F_0$ and $\mathcal F_1$. Then the induced foliations $\mathcal F_t$ are contact foliations subordinate to $\alpha$. If $F:M\times\mathbb I\to B\Gamma_{2q}$ is a classifying map of $\widetilde{\mathcal F}$ then $F^*\Omega_{2q}=\widetilde{\mathcal F}$. Let $f_t:M\to B\Gamma_{2q}$ be defined by $f_t(x)=F(x,t)$, for all $x\in M$, $t\in \mathbb I$.
Then it follows that $f_t^*\Omega_{2q}=\mathcal F_t$ for all $t\in \mathbb I$. Hence, $H'_\alpha(\mathcal F_0)=H'_\alpha(\mathcal F_1)$. This shows that $H_\alpha$ is well-defined. The classification of contact foliations may now be stated as follows: \begin{theorem} \label{haefliger_contact} If $M$ is open then $H_\alpha:\pi_0(Fol^{2q}_{\alpha}(M)) \longrightarrow \pi_0(\mathcal E_{\alpha}(TM,\nu\Omega_{2q}))$ is bijective. \end{theorem} We first prove a lemma. \begin{lemma}Let $N$ be a smooth manifold with a foliation $\mathcal F_N$ of codimension $2q$. If $g:N\to B\Gamma_{2q}$ classifies $\mathcal F_N$ then we have a commutative diagram as follows: \begin{equation} \xymatrix@=2pc@R=2pc{ \pi_0(Tr_{\alpha}(M,\mathcal{F}_N))\ar@{->}[r]^-{P}\ar@{->}[d]_-{\cong}^-{\pi_0(\pi \circ d)} & \pi_0(Fol^{2q}_{\alpha}(M))\ar@{->}[d]^-{H_{\alpha}}\\ \pi_0(\mathcal E_{\alpha}(TM,\nu \mathcal{F}_N))\ar@{->}[r]_{G_*} & \pi_0(\mathcal E_{\alpha}(TM,\nu\Omega_{2q})) }\label{Figure:Haefliger} \end{equation} where the left vertical arrow is the isomorphism defined by Theorem~\ref{T:contact-transverse}, $P$ is induced by a map which takes an $f\in Tr_\alpha(M,\mathcal F_N)$ onto the inverse foliation $f^*\mathcal F_N$ and $G_*$ is induced by the bundle homomorphism $G:\nu\mathcal F_N\to \nu\Omega_{2q}$ covering $g$.\label{L:haefliger} \end{lemma} \begin{proof} We shall first show that the horizontal arrows in (\ref{Figure:Haefliger}) are well defined. If $f\in Tr_{\alpha}(M,\mathcal{F}_N)$ then the inverse foliation $f^*\mathcal F_N$ belongs to $Fol^{2q}_\alpha(M)$. Furthermore, if $f_t$ is a homotopy in $Tr_{\alpha}(M,\mathcal{F}_N)$, then the map $F:M\times\mathbb I\to N$ defined by $F(x,t)=f_t(x)$ is clearly transversal to $\mathcal F_N$ and so $\tilde{\mathcal F}=F^*\mathcal F_N$ is a foliation on $M\times\mathbb I$. The restriction of $\tilde{\mathcal F}$ to $M\times\{t\}$ is the same as the foliation $f^*_t(\mathcal F_N)$, which is a contact foliation subordinate to $\alpha$. Hence, we get a map \[\pi_0(Tr_{\alpha}(M,\mathcal{F}_N))\stackrel{P}\longrightarrow \pi_0(Fol^{2q}_{\alpha}(M))\] defined by \[[f]\longmapsto [f^*\mathcal{F}_N].\] On the other hand, since $g:N\to B\Gamma_{2q}$ classifies the foliation $\mathcal F_N$, there is a vector bundle homomorphism $G:\nu\mathcal F_N\to \nu\Omega_{2q}$ covering $g$. This induces a map \[G_*: \mathcal E_\alpha(TM,\nu(\mathcal F_N))\to \mathcal E_\alpha(TM,\nu\Omega_{2q})\] which takes an element $F\in \mathcal E_\alpha(TM,\nu(\mathcal F_N))$ onto $G\circ F$. We now prove the commutativity of (\ref{Figure:Haefliger}). Note that if $f\in Tr_{\alpha}(M,\mathcal{F}_N)$ then $g\circ f:M\to B\Gamma_{2q}$ classifies the foliation $f^*\mathcal F_N$. Let $\widetilde{df}:\nu(f^*\mathcal F_N)\to \nu(\mathcal F_N)$ be the unique map making the following diagram commutative: \[ \xymatrix@=2pc@R=2pc{ TM\ar@{->}[r]^-{df}\ar@{->}[d]_-{\pi_M} & TN\ar@{->}[d]^-{\pi_N}\\ \nu (f^*\mathcal{F}_N)\ar@{->}[r]_{\widetilde{df}} & \nu(\mathcal F_N) } \] where $\pi_M:TM\to\nu(f^*\mathcal F_N)$ is the quotient map onto the normal bundle of $f^*\mathcal F_N$. Observe that $G\circ\widetilde{df}:\nu(f^*\mathcal F_N)\to \nu(\Omega_{2q})$ covers the map $g\circ f$ and $(G\circ \widetilde{df},g\circ f)$ is a pullback diagram. Therefore, we have \[H_\alpha([f^*\mathcal F_N])=[(G\circ\widetilde{df})\circ \pi_M]=[G\circ(\pi\circ df)].\] This proves the commutativity of (\ref{Figure:Haefliger}).\end{proof} {\em Proof of Theorem~\ref{haefliger_contact}}.
The proof closely parallels that of Haefliger's classification theorem. We can reduce the classification to Theorem~\ref{T:contact-transverse} by using Theorem~\ref{HL} and Lemma~\ref{L:haefliger}. For the sake of completeness we reproduce the proof here following \cite{francis}. For simplicity of notation we shall denote the universal $\Gamma_{2q}$ structure by $\Omega$ in place of $\Omega_{2q}$. To prove surjectivity of $H_\alpha$, take $(\hat{f},f)\in \mathcal E_{\alpha}(TM,\nu\Omega)$, which can be factored as follows: \begin{equation} \xymatrix@=2pc@R=2pc{ TM \ar@{->}[r]^-{\bar{f}}\ar@{->}[rd] & f^*(\nu \Omega) \ar@{->}[r]\ar@{->}[d] & \nu \Omega \ar@{->}[d]\\ & M \ar@{->}[r]_-{f} & B\Gamma_{2q} }\label{Figure:Haefliger1} \end{equation} By Theorem~\ref{HL} there exists a manifold $N$ with a codimension-$2q$ foliation $\mathcal{F}_N$ and a closed embedding $M\stackrel{s}\hookrightarrow N$ such that $s^*\mathcal{F}_N=f^*\Omega$. Let $f':N\rightarrow B\Gamma_{2q}$ be a map classifying $\mathcal{F}_N$, i.e., $f'^*\Omega\cong\mathcal{F}_N$. Hence $(f'\circ s)^*\Omega\cong f^*\Omega$ and $(f'\circ s)^*\nu(\Omega)\cong f^*\nu(\Omega)$. Therefore $f'\circ s$ must also be covered by a bundle epimorphism which splits as in the following diagram: \begin{equation} \xymatrix@=2pc@R=2pc{ TM \ar@{->}[r]^-{\bar{f}}\ar@{->}[rd] & f^*(\nu\Omega)\ar@{->}[r]^-{}\ar@{->}[d] & \nu \mathcal{F}_N\ar@{->}[r]\ar@{->}[d] & \nu \Omega \ar@{->}[d]\\ & M\ar@{->}[r]_-{s} & N\ar@{->}[r]_-{f'} & B\Gamma_{2q} }\label{Figure:Haefliger2}\end{equation} Let $\hat{s}$ denote the composition $TM\stackrel{\bar{f}}{\rightarrow} f^*(\nu\Omega)\cong s^*(\nu\mathcal{F}_N)\rightarrow \nu \mathcal{F}_N$. It is not difficult to see that $(\hat{s},s)$ is an element of $\mathcal E_\alpha(TM,\nu\mathcal F_N)$. Lastly we show that the image of $(\hat{s},s)$ under the bottom map of diagram (\ref{Figure:Haefliger}) is homotopic to $(\hat{f},f)$. Since $f^*\Omega\cong (f'\circ s)^*\Omega$, by Theorem~\ref{CMT} there exists a homotopy \[M\times \mathbb I\stackrel{G}\longrightarrow B\Gamma_{2q}\] starting at $f'\circ s$ and ending at $f$. As $s$ is a cofibration, the following diagram can be solved for some $F$ so that $F(\ ,0)=f'$ and $F(s(x),1)=f(x)$ for all $x\in M$. \begin{equation} \xymatrix@=2pc@R=2pc{ M\times \{0\}\ar@{->}[rr]^-{i_M}\ar@{->}[dd]_-{s\times id_0} & & M\times \mathbb I \ar@{->}[dl]^-{G}\ar@{->}[dd]^-{s\times id_{\mathbb I}}\\ & B\Gamma_{2q} & \\ N\times \{0\}\ar@{->}[ru]^-{f'}\ar@{->}[rr]_{i_N} & & N\times \mathbb I \ar@{-->}[ul]^-{F} }\label{Figure:Haefliger3} \end{equation} If we set $f_t'(x)=F(x,t)$ for $x\in N$ and $t\in [0,1]$ then $f$ factors as $f_1'\circ s$. Since $f'_t$ is a homotopy, $f_t'^*\nu(\Omega)\cong f'^*\nu(\Omega) \cong\nu(\mathcal F_N)$. Thus we get the following homotopy of vector bundle morphisms: \begin{equation} \xymatrix@=2pc@R=2pc{ TM\ar@{->}[r]^{\hat{s}} \ar@{->}[d] & \nu(\mathcal F_N) \ar@{->}[r]^-{a_t}\ar@{->} [d] & \nu \Omega \ar@{->}[d]\\ M\ar@{->}[r]_-{s} & N\ar@{->}[r]_-{f'_t} & B\Gamma_{2q} }\label{Figure:Haefliger4} \end{equation} This homotopy starts at the morphism shown in diagram (\ref{Figure:Haefliger2}) and ends at the morphism shown in diagram (\ref{Figure:Haefliger1}). Now the left square of diagram (\ref{Figure:Haefliger2}) represents an element $(\hat{s},s)$ of $\mathcal E_{\alpha}(TM,\nu \mathcal{F}_N)$ whose homotopy class is mapped to $[(\hat{f},f)]$ by the bottom map of diagram (\ref{Figure:Haefliger}). So in diagram (\ref{Figure:Haefliger}), $P \circ(\pi_0(\pi\circ d))^{-1}[(\hat{s},s)]$ is the required preimage of $[(\hat{f},f)]$ under $H_{\alpha}$.
This proves surjectivity. To prove injectivity, suppose that $\mathcal{F}_0,\mathcal{F}_1$ are two contact foliations on $M$ such that $H_{\alpha}([\mathcal{F}_0])=H_{\alpha}([\mathcal{F}_1])$, and let $(\hat{f}_0,f_0)$ and $(\hat{f}_1,f_1)$ be representatives of these classes. If $\hat{f}:TM\times[0,1]\to \nu\Omega$ is a homotopy between $\hat{f}_0$ and $\hat{f}_1$ in the space $\mathcal E_{\alpha}(TM,\nu\Omega)$, then we have the following factorization of $\hat{f}$: \begin{equation} \xymatrix@=2pc@R=2pc{ TM\times[0,1] \ar@{->}[r]^-{\bar{f}}\ar@{->}[rd] & f^*(\nu \Omega) \ar@{->}[r]\ar@{->}[d] & \nu \Omega \ar@{->}[d]\\ & M \times [0,1]\ar@{->}[r]_-{f} & B\Gamma_{2q} }\label{Figure:Haefliger5} \end{equation} Without loss of generality we can assume that $f_0^*\Omega=\mathcal{F}_0$ and $f_1^*\Omega=\mathcal{F}_1$. By Theorem~\ref{HL} there exists a manifold $N$ with a foliation $\mathcal{F}_N$ and a closed embedding \[M\times \mathbb I \stackrel{s}\longrightarrow N \] such that $s^*\mathcal{F}_N=f^*\Omega$. Writing $s_t=s|_{M\times\{t\}}$, we have $s_0^*\mathcal{F}_N=f_0^*\Omega=\mathcal{F}_0$ and $s_1^*\mathcal{F}_N=f_1^*\Omega=\mathcal{F}_1$, so that $s_0,s_1 \in Tr_{\alpha}(M,\mathcal{F}_N)$. We shall show that $\pi\circ ds_0$ and $\pi\circ ds_1$ are homotopic in $\mathcal E_{\alpha}(TM,\nu \mathcal{F}_N)$. Proceeding as in the first half of the proof, we can define a path between these elements by the following diagram: \[ \xymatrix@=2pc@R=2pc{ TM \times \mathbb I\ar@{->}[r]^-{\bar{f}}\ar@{->}[rd] & f^*(\nu\Omega)\cong s^*\nu(\mathcal F_N)\ar@{->}[r]^-{}\ar@{->}[d] & \nu \mathcal{F}_N \ar@{->}[d] \\ & M\times \mathbb I\ar@{->}[r]_-{s} & N }\] Since the left vertical arrow in diagram (\ref{Figure:Haefliger}) is an isomorphism, this proves that $s_0,s_1$ are homotopic in $Tr_{\alpha}(M,\mathcal{F}_N)$. This implies that $\mathcal F_0$ is integrably homotopic to $\mathcal F_1$. This completes the proof of injectivity.\qed \begin{theorem}Let $(M,\alpha)$ be an open contact manifold and let $\tau:M\to BU(n)$ be a map classifying the symplectic vector bundle $\xi=\ker\alpha$. Then there is a bijection between the elements of $\pi_0(\mathcal E_{\alpha}(TM,\nu\Omega))$ and the homotopy classes of triples $(f,f_0,f_1)$, where $f_0:M\to BU(q)$, $f_1:M\to BU(n-q)$ and $f:M\to B\Gamma_{2q}$ are such that \begin{enumerate}\item $(f_0,f_1)$ is homotopic to $\tau$ in $BU(n)$ and \item $Bd\circ f$ is homotopic to $Bi\circ f_0$ in $BGL(2q)$.\end{enumerate} In other words, the following diagrams are homotopy commutative:\\ \[\begin{array}{ccc} \xymatrix@=2pc@R=2pc{ & &\ \ B\Gamma_{2q}\ar@{->}[d]^{Bd}\\ M \ar@{->}[r]_-{f_0}\ar@{-->}[urr]^{f} & BU(q)\ar@{->}[r]_{Bi} & BGL(2q) } & \hspace{1cm}& \xymatrix@=2pc@R=2pc{ &\ \ BU(q)\times BU(n-q)\ar@{->}[d]^{\oplus}\\ M \ar@{->}[r]_-{\tau}\ar@{-->}[ur]^{(f_0,f_1)}& BU(n) }\end{array}\] \end{theorem} \begin{proof} An element $(F,f)\in \mathcal E_{\alpha}(TM,\nu\Omega)$ defines a (symplectic) splitting of the bundle $\xi$ as \[\xi \cong (\ker F\cap \xi)\oplus (\ker F\cap \xi)^{\perp_{d'\alpha}},\] since $\ker F\cap \xi$ is a symplectic subbundle of $\xi$. Let $F'$ denote the restriction of $F$ to $(\ker F\cap \xi)^{\perp_{d'\alpha}}$. It is easy to see that $(F',f):(\ker F\cap \xi)^{\perp_{d'\alpha}}\to \nu(\Omega)$ is a vector bundle map which is a fibrewise isomorphism.
If $f_0:M\to BU(q)$ and $f_1:M\to BU(n-q)$ are continuous maps classifying the complex vector bundles $(\ker F\cap \xi)^{\perp_{d'\alpha}}$ and $\ker F\cap \xi$ respectively, then the classifying map $\tau$ of $\xi$ must be homotopic to $(f_0,f_1):M\to BU(q)\times BU(n-q)$ in $BU(n)$. (Recall that the isomorphism classes of symplectic vector bundles are classified by homotopy classes of continuous maps into $BU$ \cite{husemoller}.) Furthermore, note that $(\ker F\cap \xi)^{\perp_{d'\alpha}}\cong f^*(\nu\Omega)=f^*(Bd^*EGL_{2q}(\R))$; therefore, $Bd\circ f$ is homotopic to $Bi\circ f_0$ in $BGL(2q)$. Conversely, take a triple $(f,f_0,f_1)$ such that \[Bd\circ f\sim Bi\circ f_0 \text{ and } (f_0,f_1)\sim \tau.\] Then $\xi$ has a symplectic splitting given by $f_0^*EU(q)\oplus f_1^*EU(n-q)$. Further, since $Bd\circ f\sim Bi\circ f_0$, we have $f_0^*EU(q)\cong f^*\nu(\Omega)$. Hence there is an epimorphism $F:\xi\to f_0^*EU(q) \cong f^*\nu(\Omega)$, given by the projection onto the first factor of the splitting, whose kernel $f_1^*EU(n-q)$ is a symplectic subbundle of $\xi$. Finally, $F$ can be extended to an element of $\mathcal E_\alpha(TM,\nu\Omega)$ by defining its value on $R_\alpha$ to be zero.\end{proof} \begin{definition}{\em Let $N$ be a contact submanifold of $(M,\alpha)$ such that $T_xN$ is transversal to $\xi_x$ for all $x\in N$. Then $TN\cap \xi|_N$ is a symplectic subbundle of $\xi|_N$. The symplectic complement of $TN\cap \xi|_N$ with respect to $d'\alpha$ will be called \emph{the normal bundle of the contact submanifold $N$}.} \end{definition} The following result is a direct consequence of the above classification theorem. \begin{corollary} Let $B$ be a symplectic subbundle of $\xi$ with a classifying map $g:M\to BU(q)$. The integrable homotopy classes of contact foliations on $M$ with normal bundles isomorphic to $B$ are in one-to-one correspondence with the homotopy classes of lifts of $Bi\circ g$ to $B\Gamma_{2q}$. \end{corollary} We end this article with an example to show that a contact foliation on a contact manifold need not be transverse symplectic, even if its normal bundle is a symplectic vector bundle. \begin{definition}{\em (\cite{haefliger1}) A codimension ${2q}$ foliation $\mathcal F$ on a manifold $M$ is said to be \emph{transverse symplectic} if $\mathcal F$ can be represented by Haefliger cocycles which take values in the groupoid of local symplectomorphisms of $(\R^{2q},\omega_0)$.} \end{definition} Thus the normal bundle of a transverse symplectic foliation has a symplectic structure. It can be shown that if $\mathcal F$ is transverse symplectic then there exists a closed 2-form $\omega$ on $M$ such that $\omega^q$ is nowhere vanishing and $\ker\omega=T\mathcal F$. \begin{example} {\em Let us consider a closed almost-symplectic manifold $V^{2n}$ which is not symplectic (e.g., we may take $V$ to be $\mathbb S^6$) and let $\omega_V$ be a non-degenerate 2-form on $V$ defining the almost symplectic structure. Set $M=V\times\mathbb{R}^3$ and let $\mathcal{F}$ be the foliation on $M$ defined by the fibres of the projection map $\pi:M\to V$. Thus the leaves are $\{x\}\times\mathbb{R}^3$, $x\in V$. Consider the standard contact form $\alpha=dz+x\,dy$ on the Euclidean space $\R^3$ and let $\tilde{\alpha}$ denote the pull-back of $\alpha$ by the projection map $p_2:M\to\R^3$. The 2-form $\beta=\omega_V\oplus d\alpha$ on $M$ is of maximal rank and it is easy to see that $\beta$ restricted to $\ker\tilde{\alpha}$ is non-degenerate. Therefore $(\tilde{\alpha},\beta)$ is an almost contact structure on $M$.
Moreover, $(\tilde{\alpha}\wedge \beta)|_{T\mathcal{F}}$ is nowhere vanishing. We claim that there exists a contact form $\eta$ on $M$ such that its restrictions to the leaves of $\mathcal F$ are contact. Recall that there exists a surjective map \[(T^*M)^{(1)}\stackrel{D}{\rightarrow}\wedge^1T^*M \oplus \wedge^2T^*M\] such that $D\circ j^1(\theta)=(\theta,d\theta)$ for any 1-form $\theta$ on $M$. Let \[r:\wedge^1T^*M \oplus \wedge^2T^*M\rightarrow \wedge^1T^*\mathcal{F} \oplus \wedge^2T^*\mathcal{F}\] be the restriction map defined by the pull-back of forms. Let $A\subset \wedge^1T^*M \oplus \wedge^2T^*M$ be the set of all pairs $(\eta,\Omega)$ such that $\eta \wedge \Omega^{n+1}$ is non-zero, and let $B\subset \wedge^1T^*\mathcal{F} \oplus\wedge^2T^*\mathcal{F}$ be the set of all pairs $(\eta',\Omega')$ such that $\eta'\wedge\Omega'$ is non-zero. Now set $\mathcal{R}\subset (T^*M)^{(1)}$ as \[\mathcal{R}=D^{-1}(A)\cap (r\circ D)^{-1}(B).\] Since both $A$ and $B$ are open, so is $\mathcal{R}$. Now if we consider the fibration $M\stackrel{\pi}{\rightarrow}V$, then it is easy to see that the diffeotopies of $M$ preserving the fibres of $\pi$ sharply move $V\times 0$, and $\mathcal{R}$ is invariant under the action of such diffeotopies. So by Theorem~\ref{T:gromov-invariant} there exists a contact form $\eta$ on $Op(V\times 0)=V\times\mathbb{D}^3_{\varepsilon}$, for some $\varepsilon>0$, such that the restriction of $\eta$ to each leaf of the foliation $\mathcal F$ is also contact. Now take a diffeomorphism $g:\mathbb{R}^3\rightarrow \mathbb{D}^3_{\varepsilon}$. Then $\eta'=(id_V\times g)^*\eta$ is a contact form on $M$. Further, $\mathcal{F}$ is a contact foliation relative to $\eta'$ since $id_V\times g$ is foliation preserving. But $\mathcal{F}$ cannot be transverse symplectic, because then there would exist a closed 2-form $\omega$ on $M$ whose restriction to $\nu \mathcal{F}\cong\pi^*(TV)$ would be non-degenerate. This would imply that $V$ is a symplectic manifold, contradicting our hypothesis.} \end{example} \newpage \printindex
\section{Introduction} Tree estimators and random forests are nonparametric estimators that enjoy widespread popularity in applied data science, powering models in e-commerce, finance and macroeconomics, and medicine. The random forest, introduced by Breiman \cite{breiman01}, is a bagging algorithm that aggregates estimates from a collection of tree estimators. Since each tree has a low bias but high variance, aggregation helps performance by balancing the bias-variance trade-off. Other than good empirical performance, random forests also enjoy the following advantages over other nonparametric methods: they scale naturally with high dimensional data, as the optimal cut at each node may be computed in parallel by quantizing each covariate; categorical variables and missing data can be easily incorporated; and they are more interpretable, since variable importance can be explicitly characterized. One fruitful application of random forests in economics is in estimating heterogeneous treatment effects, i.e., a function of the form $f(x) = \E(Y_1 - Y_0 \mid X = x)$. In order to conduct inference in econometric applications (e.g., to test the null $H_0: f(x) = 0$), knowledge about the rate of convergence or asymptotic distribution of the estimator $\hat f(x)$ is required. Moreover, functionals of $f$ are often of practical interest: for example, when we wish to study the difference in treatment effects for two different subpopulations, the quantity of interest is \begin{equation} f(x) - f(\bar x), \quad \text{where $x$ and $\bar x$ describe the two subpopulations}. \end{equation} More generally, we might also be interested in a weighted treatment effect, where a subpopulation $x$ is given an importance weight $\mu$. In this case, the functional of $f$ is \begin{equation} \int_{\mathcal{X}} f(x)\, d\mu(x), \quad \text{where $\mu$ is not necessarily the distribution of $X$}, \end{equation} and the integral is taken over the feature space $\mathcal{X}$. Inference on functionals of $f$ requires not only the asymptotic distribution of the point estimate $f(x)$, but also the correlation between estimates at distinct points $f(x)$ and $f(\bar x)$. This paper studies the correlation structure of a class of random forest models whose asymptotic distributions were worked out in \cite{crf}. We find sufficient conditions under which the asymptotic covariance of estimates at distinct points vanishes relative to the variance; moreover, we provide finite sample heuristics based on our calculations. To the best of our knowledge, this is the first set of results on the correlation of random forest estimates. The present paper builds on and extends the results in \cite{crf}, which in turn builds on related work \cite{Wager2015AdaptiveCO} on the concentration properties of tree estimators and randomized estimators in general. See also \cite{athey2019}, which extends the random forest model considered here to estimate moment conditions. Stability results established in this paper have also appeared in \cite{arsov2019stability}, which studies notions of algorithmic stability for random forests and logistic regression and derives generalization error guarantees. Also closely related to our paper are \cite{chernozhukov2017} and \cite{chen2018}, concerning finite sample Gaussian approximations of sums and U-statistics in high dimensions. In this context, our paper provides a stepping stone towards applying the theory of finite sample U-statistics to random forests, where bounds on the correlation structure play a central role.
The paper is structured as follows. In Section 2, we introduce the random forest model and state the assumptions required for our results; Section 3 contains our main theoretical contributions; Section 4 builds on Section 3 and discusses heuristics useful in finite sample settings; Section 5 evaluates our heuristics with numerical experiments, and Section 6 concludes. All proofs are found in the appendix. \section{Setup} In this paper, we study Gaussian approximations of random forest estimators. Throughout, we assume that a random sample $(Z_i)_{i=1}^n = (X_i, Y_i)_{i=1}^n$ is given, where $X_i \in \mathcal{X} \subseteq \mathbf{R}^p$ is a vector of regressors and $Y_i \in \mathbf{R}$ the response. A tree estimator $T(\xi; Z_1, \dots, Z_s)$ works by recursively partitioning the space of covariates $\mathcal{X}$ into a collection of non-overlapping hyperrectangles. The estimator starts with the entire space $\mathcal{X}$ (assumed to be bounded), and uses the points $Z_1$, \dots, $Z_s$ to produce a coordinate $j \in \{ 1, \dots, p\}$ and an index $t \in \mathbf{R}$. This divides $\mathcal{X}$ into two hyperrectangles, \begin{equation} \{ x \in \mathcal{X} : x_j \leq t \} \quad \text{and} \quad \{ x \in \mathcal{X}: x_j > t \}. \end{equation} In the next step, each of these two sets is partitioned into two hyperrectangles, using the points that land in each. The process continues until a particular hyperrectangle satisfies a stopping criterion. This process corresponds to a tree in the natural way, so we will refer to a generic hyperrectangle as a ``node'', and to the leaves of the tree, where splitting ceases, as ``terminal nodes''. Given the partition of $\mathcal{X}$ into terminal nodes $N_1, \dots, N_q$, the prediction is the average of the responses \begin{equation} T(x; \xi, Z_{i_1}, \dots, Z_{i_s}) = \sum_{j=1}^q \mathbf{1}(x \in N_j) \cdot \frac{1}{|N_j|} \sum_{i: X_i \in N_j} Y_i, \end{equation} where $|N_j|$ is defined to be the number of samples $X_i$ inside $N_j$. Here, $\xi$ is a randomization device that governs any random choices made by the splitting procedure. Given a base learner $T$, a random forest estimator at $x$ is defined to be the order-$s$ $U$-statistic with kernel $T$, i.e., \begin{equation} \RF(x; Z_1, \dots, Z_n) = \frac{1}{\binom{n}{s}} \sum_{i_1 < \dots < i_s} \E_{\xi} T(x; \xi, Z_{i_1}, \dots, Z_{i_s}), \end{equation} where the sum ranges over the size-$s$ subsets of $\{1, \dots, n\}$ and the randomization device $\xi$ is marginalized over. In this paper, we study the asymptotic distribution of the vector \begin{equation} \RF(x_1, \dots, x_q; Z_1, \dots, Z_n) \coloneqq \begin{bmatrix} \RF(x_1; Z_1, \dots, Z_n) \\ \vdots\\ \RF(x_q; Z_1, \dots, Z_n) \end{bmatrix} \in \mathbf{R}^q, \end{equation} where the random forest is employed to produce an estimate at $q$ points $x_1, \dots, x_q$. Previous work by Athey and Wager \cite{crf} has shown that under regularity conditions \begin{equation} \frac{\RF(x, Z_1, \dots, Z_n) - m(x)}{\sigma_n(x)} \xRightarrow{\;\rm dist\;} N(0,1) \qquad \text{where } m(x) = \E(Y \mid X = x) \end{equation} and $\sigma_n^2(x)$ is the variance of $\RF(x)$. In this paper, we extend their result to cover multivariate predictions and establish the joint normality \begin{equation} \Sigma^{-1/2}\{ \RF(x_1, \dots, x_q) - m(x_1, \dots, x_q) \} \xRightarrow{\;\rm dist\;} N(0, I_{q \times q}) \end{equation} where $m(x_1, \dots, x_q)$ is the vector with components $\E(Y \mid X = x_k)$. This is the main technical result of the paper.
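As a concrete, deliberately simplified illustration of the estimator just defined, the following Python sketch grows trees on random size-$s$ subsamples and averages them jointly at several query points. The splits are data-independent medians on the covariates only (so the honesty condition introduced below holds trivially), the $\xi$-randomization is reduced to the coordinate choice, and the complete $U$-statistic is replaced by a Monte Carlo average over random subsets; all numerical choices are illustrative, not the estimator analyzed in our proofs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def tree_predict(X, y, queries, k=5):
    # One simplified tree: split at the median of a random coordinate
    # (splits use X only), stop when a node holds fewer than 2k points,
    # and predict the mean response of the query's terminal node.
    def fit(idx):
        if len(idx) < 2 * k:
            m = y[idx].mean()
            return lambda x: m
        j = rng.integers(X.shape[1])          # randomized coordinate (the xi)
        t = np.median(X[idx, j])
        left, right = idx[X[idx, j] <= t], idx[X[idx, j] > t]
        if len(left) == 0 or len(right) == 0:
            m = y[idx].mean()
            return lambda x: m
        fl, fr = fit(left), fit(right)
        return lambda x: fl(x) if x[j] <= t else fr(x)
    root = fit(np.arange(len(y)))
    return np.array([root(q) for q in queries])

def random_forest(X, y, queries, s=50, n_trees=500):
    # Monte Carlo version of the U-statistic: average tree predictions
    # over random size-s subsamples, jointly at several query points.
    preds = np.zeros((n_trees, len(queries)))
    for b in range(n_trees):
        sub = rng.choice(len(y), size=s, replace=False)
        preds[b] = tree_predict(X[sub], y[sub], queries)
    return preds.mean(axis=0), np.cov(preds.T)

# toy demo: m(x) = (x1 + x2) / 2 on the unit square
X = rng.uniform(size=(2000, 2))
y = X.mean(axis=1) + 0.2 * rng.normal(size=2000)
est, tree_cov = random_forest(X, y, np.array([[0.3, 0.3], [0.7, 0.7]]))
print(est)                                # joint estimate at the two points
\end{verbatim}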
In addition, we provide numerical simulations to gauge the finite sample behavior of the multivariate random forest estimator. \subsection{Assumptions} Our results build on \cite{crf}, so we will work with the same model of random forests. Specifically, we will assume the following properties of the underlying tree algorithm. \begin{assumption}[Honesty] Conditional on $\{ X_i \}$, the tree structure (i.e., the splitting coordinates and splitting indices) is independent of the tree estimates. \end{assumption} There are several ways to satisfy this assumption. The first is to calculate splits based only on $X_i$: for example, a splitting coordinate and index is chosen to minimize the average squared distance to the center of each of the two nodes, as in clustering algorithms. The second way is to perform data splitting: partition the data into two sets $\mathcal{I}_1$ and $\mathcal{I}_2$. Observations in $\mathcal{I}_1$, together with the covariates $X_i$ for $i \in \mathcal{I}_2$, may be freely used during the splitting process, while the responses $Y_i$ for $i \in \mathcal{I}_2$ are used to determine terminal node values. Finally, a third method assumes the existence of side information $\{ W_i \}$: splits are made in the domain of $\{ X_i \}$ using $\{ W_i \}$ as surrogate responses, with $\{ Y_i \}$ being used for prediction at the terminal nodes. For simplicity, we will assume that the first scheme is used, namely that the splitting algorithm uses only $X_i$ and not $Y_i$ in placing the splits. Our results remain valid for the third scheme. \begin{assumption}[Randomized Cyclic Splits] At each non-terminal node, the splitting algorithm decides to make a data-independent split with probability $\delta > 0$, by flipping a ``$\delta$-coin'' whose outcome is independent of everything else. The first time the coin lands heads, the first coordinate is chosen as the splitting coordinate; the second time, the second coordinate is chosen, and so on, such that on the $J$-th time the coin lands heads, the $((J-1) \text{ mod } p) + 1$-th coordinate is chosen. \end{assumption} This is a modification of the assumption in \cite{crf}, in which each coordinate has a probability $\delta$ of being chosen at each split; the latter is most easily implemented by flipping a $p\delta$-coin, and selecting one of the $p$ coordinates uniformly at random to split when the coin lands heads. Our assumption does away with the randomization in the second step. \begin{assumption}[The Splitting Algorithm is $(\alpha, k)$-Regular] There exists some $\alpha \in (0, 1/2)$ such that whenever a split occurs in a node with $m$ sample points, the two hyperrectangles contain at least $\alpha m$ points each. Moreover, splitting ceases at a node only if the node contains fewer than $2k-1$ points for some $k$. In particular, this implies that $\Var T$ has bounded entries, since the number of points in each terminal node is bounded. \end{assumption} As shown in \cite{Wager2015AdaptiveCO}, this implies that with all but exponentially small probability, each splitting axis also shrinks by a factor between $\alpha$ and $1-\alpha$. The following assumption is specific to our model. It specifies that the \emph{potential} splits do not depend on the data; in practice, this is ``essentially'' without loss of generality as the domain of $X$ is fixed. For example, the assumption is satisfied if all splits occur at indices that lie on a specified grid, e.g., points which can be represented by a 64-bit floating point number.
In keeping with the $(\alpha, k)$-regularity assumption above, we also assume that no potential split divides the corresponding axis more unevenly than the proportions $\alpha$ and $1-\alpha$. \begin{assumption}[Predetermined Splits] The potential splits at each node do not depend on the realizations of the data. Furthermore, the number of potential splits at each node is finite, and each potential split leaves each side of the corresponding axis with at least a proportion $\alpha$ of its length. \end{assumption} We will also require other assumptions regarding the ``stability'' of splits; these assumptions will be introduced later. Finally, we follow \cite{crf} and place the following distributional assumptions on the data generating process. \begin{assumption}[Distributional Assumptions on $(X,Y)$] The covariate $X$ is distributed on $[0,1]^p$ with a density that is bounded away from zero and infinity; without loss of generality, we may assume that $X$ is uniformly distributed on the $p$-dimensional hypercube. Furthermore, the regression functions $x \mapsto \E(Y \mid X = x)$, $x \mapsto \E(Y^2 \mid X = x)$, and $x \mapsto \E(Y^3\mid X=x)$ are uniformly Lipschitz continuous. \end{assumption} \section{Gaussianity of Multivariate $U$-Statistics} \subsection{Hajek Projections} For $m \geq 1$ and a statistic $f(Z_1, \dots, Z_m) \in \mathbf{R}^q$, recall that the Hajek projection of $f$ is defined to be \begin{equation} \mathring f(Z_1, \dots, Z_m) = \sum_{i=1}^m \E[f(Z_1, \dots, Z_m) \mid Z_i] - (m-1) \E f(Z_1, \dots, Z_m). \end{equation} In particular, when $Z_1, \dots, Z_m$ is an IID sequence and $f$ is symmetric in its arguments, we have \begin{equation} \mathring f(Z_1, \dots, Z_m) = \sum_{i=1}^m f_1(Z_i) - (m-1)\E f, \end{equation} where $f_1(z) = \E[f \mid Z_1 = z]$. Applying the previous calculations to our setting, where $f$ is the random forest estimator, yields \begin{equation} \mathring\RF(Z_1, \dots, Z_n) - \mu = \sum_{i=1}^n \E(\RF - \mu\mid Z_i) = \frac{1}{\binom{n}{s}} \sum_{i=1}^n \E \biggl[ \sum_{i_1 < \dots < i_s} \E_{\xi}T(\xi; Z_{i_1}, \dots, Z_{i_s}) - \mu \,\Big|\, Z_i \biggr]. \end{equation} Since our samples are independent, $\E(\E_{\xi}T(\xi;Z_{i_1}, \dots, Z_{i_s}) \mid Z_i) = \mu$ whenever $i \notin \{ i_1, \dots, i_s \}$. As $\{ i_1, \dots, i_s \}$ runs over the size-$s$ subsets of $\{ 1, \dots, n \}$, there are exactly $\binom{n-1}{s-1}$ subsets which contain~$i$. For each of these subsets, \begin{equation} \E(\E_{\xi} T(\xi; Z_{i_1}, \dots, Z_{i_s}) - \mu \mid Z_i) \eqqcolon T_1(Z_i) - \mu, \end{equation} where $T_1(z) = \E[\E_{\xi} T(\xi; Z_1, Z_2, \dots, Z_s) \mid Z_1 = z]$. Therefore, \begin{equation} \mathring\RF - \mu = \frac{1}{\binom{n}{s}} \sum_{i=1}^n \binom{n-1}{s-1} (T_1(Z_i) - \mu) = \frac{s}{n} \sum_{i=1}^n (T_1(Z_i) - \mu). \end{equation} \subsection{Asymptotic Gaussianity via Hajek Projections} The standard technique to derive the asymptotic distribution of a $U$-statistic is to establish a lower bound on the variance of its Hajek projection; this is the approach taken by Athey and Wager \cite{crf}, and we follow it here. Let $V$ be the variance of $\mathring\RF$; using our previous result, we can write \begin{equation} \label{eq:Vvariance} V = \Var\biggl[ \frac{s}{n} \sum_{i=1}^n (T_1(Z_i) - \mu) \biggr] = \frac{s^2}{n} \Var(T_1(Z_1)) = \frac{s}{n} \Var\biggl[ \sum_{i=1}^s T_1(Z_i) \biggr] = \frac{s}{n} \Var \mathring T, \end{equation} where $\mathring T$ is the Hajek projection of the statistic $(Z_1, \dots, Z_s) \mapsto \E_{\xi}T(\xi; Z_1, \dots, Z_s)$.
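The identity $\Var \mathring T = s \Var(T_1(Z_1))$ implied by \eqref{eq:Vvariance} can be checked numerically. The following Python sketch (our own illustration, with a simple nearest-neighbor average standing in for the tree kernel $T$, purely for speed) estimates $T_1$ by a nested Monte Carlo over $Z_2, \dots, Z_s$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
s, x0 = 25, 0.5

def sample(m):                        # toy data: X ~ U[0,1], Y = X + noise
    X = rng.uniform(size=m)
    return X, X + 0.2 * rng.normal(size=m)

def T(X, Y):                          # stand-in kernel: 5-NN average at x0
    return Y[np.argsort(np.abs(X - x0))[:5]].mean()

def T1(z, reps=400):                  # T1(z) = E[T(Z_1,...,Z_s) | Z_1 = z]
    vals = np.empty(reps)
    for r in range(reps):
        X, Y = sample(s - 1)
        vals[r] = T(np.append(X, z[0]), np.append(Y, z[1]))
    return vals.mean()

# Var of the Hajek projection of T equals s * Var(T1(Z_1))
Xz, Yz = sample(200)
t1 = np.array([T1((Xz[i], Yz[i])) for i in range(200)])
print("estimated Var of the Hajek projection:", s * t1.var())
\end{verbatim}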
Under standard regularity conditions for a multivariate triangular array Central Limit Theorem, \begin{equation} V^{-1/2}(\mathring\RF - \mu) \xRightarrow{\;\rm dist\;} N(0, I). \end{equation} To establish the joint normality of $\RF$, write \begin{equation} V^{-1/2}(\RF - \mu) = V^{-1/2}(\RF - \mathring \RF) + V^{-1/2}(\mathring \RF - \mu), \end{equation} so all that is required is to prove that $V^{-1/2}(\RF - \mathring \RF) \xrightarrow{\;\P\;} 0$. We will show that the quantity converges in squared mean. Setting $e = V^{-1/2}(\RF - \mathring \RF)$, \begin{equation} \label{eq:error squared} \begin{split} \E (e^{\intercal} e) &= \E(\RF - \mathring \RF)^{\intercal} V^{-1} (\RF - \mathring \RF) = \E \tr V^{-1}(\RF - \mathring \RF)(\RF - \mathring \RF)^{\intercal} \\ &= \tr V^{-1} \E(\RF - \mathring \RF)(\RF - \mathring \RF)^{\intercal} = \tr V^{-1/2} \Var(\RF - \mathring \RF)V^{-1/2}. \end{split} \end{equation} By Proposition~\ref{prop:hoeffding}, we have the following Hoeffding decomposition with respect to the matrix $V^{-1}$: \begin{equation} \RF - \mathring \RF = \frac{1}{\binom{n}{s}} \biggl[ \sum_{i<j} \binom{n-2}{s-2} (T_2(Z_i, Z_j) - \mu) + \sum_{i < j < k} \binom{n-3}{s-3} (T_3(Z_i, Z_j, Z_k) - \mu) + \cdots \biggr], \end{equation} where $T_2$, $T_3$, etc.\ are the second and third order projections of $T$ obeying the orthogonality conditions \begin{equation} \label{eq:4} \E[ (T_k - \mu)^{\intercal} V^{-1} (T_{k'} - \mu)] = 0, \qquad \text{for $k \neq k'$.} \end{equation} In addition, being projections of $T$, the higher order projections also satisfy \begin{equation} \label{eq:7} \E[(T_k - \mu)^{\intercal} V^{-1} (T_k - \mu)] \leq \E[(T - \mu)^{\intercal} V^{-1} (T - \mu)]. \end{equation} Therefore, \eqref{eq:Vvariance} and \eqref{eq:error squared} imply that \begin{equation} \label{eq:1} \E(e^{\intercal}e) \leq \frac{s}{n} \tr((\Var \mathring T)^{-1} \Var T). \end{equation} Athey and Wager \cite{crf} obtain a bound on the diagonal elements of $(\Var \mathring T)^{-1}$ and $\Var T$, namely \begin{equation} \label{eq:3} \frac{(\Var T)_{kk}}{(\Var \mathring T)_{kk}} \leq c (\log s)^p, \qquad \text{for each $k = 1, \dots, q$}, \end{equation} for a constant $c$. In the next section, we shall bound the off-diagonal elements of $\Var \mathring T$ and $\Var T$ in order to establish $\E(e^{\intercal} e) \to 0$. \begin{proposition}[Hoeffding Decomposition for Multivariate $U$-statistics] \label{prop:hoeffding} Fix a positive definite matrix $M$. Let $f(x_1, \dots, x_n) \in \mathbf{R}^p$ be a vector-valued function that is symmetric in its arguments and let $X_1, \dots, X_n$ be a random sample such that $f(X_1, \dots, X_n)$ has finite variance. Then there exist functions $f_1, f_2, \dots, f_n$ such that \begin{equation} \label{eq:5} f(X_1, \dots, X_n) = \E(f) + \sum_{i=1}^n f_1(X_i) + \sum_{i < j} f_2(X_i, X_j) + \dots + f_n(X_1, \dots, X_n) \end{equation} where $f_k$ is a function of $k$ arguments, such that \begin{equation} \label{eq:6} \E f_k(X_1, \dots, X_k) = 0 \quad \text{and}\quad \E [f_k(X_1, \dots, X_k)^{\intercal} M f_{\ell}(X_1, \dots, X_{\ell})] = 0 \quad \text{for $k \neq \ell$.} \end{equation} \end{proposition} \subsection{Covariance Calculations} The aim of this subsection is to prove the following asymptotic behavior of the covariance matrix $\Var \mathring T$ under suitable stability conditions on the splitting algorithm: \begin{equation} \label{eq:covariance bounds} (\Var \mathring T)_{k, l} = o(s^{-\epsilon}) \text{ for all $k \neq l$ and some $\epsilon > 0$}.
\end{equation} The next proposition shows that this is enough to establish $\E(e^{\intercal} e) \to 0$. \begin{proposition}\label{prop:trace-bound} Suppose $\Var T$ and $\Var \mathring T$ satisfy the conditions in \eqref{eq:covariance bounds}. Then \begin{equation} \label{eq:2} \frac{s}{n} \tr (\Var \mathring T)^{-1} \Var T \to 0. \end{equation} \end{proposition} \begin{remark} Proposition~\ref{prop:trace-bound} does not require that the off-diagonal entries of $\Var T$ satisfy $(\Var T)_{k, l} \to 0$, only that they are bounded. Moreover, we only require the much weaker bound $(\Var \mathring T)_{k, l} = o(\frac{1}{\log^p s})$. However, tighter bounds are still useful for applications: for one, they allow us to control the ``approximation'' error $\E(e^{\intercal} e)$; second, they are useful for bounding the off-diagonal terms in $V^{-1}(\RF - \mu)$. \end{remark} The rest of this section will establish the covariance bound for $\Var \mathring T$. Recall that \begin{equation} \label{eq:10} \mathring T - \mu = \sum_{i=1}^s \left[ \E(T \mid Z_i) - \mu \right] \quad \text{so that}\quad \Var \mathring T = s \Var(\E(T \mid Z_1)) \text{ due to independence}. \end{equation} By the orthogonality condition, \begin{equation} \label{eq:11} \Var \E(T \mid Z_1) = \Var[\E(T \mid Z_1) - \E(T \mid X_1)] + \Var[\E(T \mid X_1)], \end{equation} and furthermore, honesty implies \begin{equation} \label{eq:12} \E(T_k \mid Z_1) - \E(T_k \mid X_1) = \E(I_k \mid X_1) (Y_1 - \E(Y_1 \mid I_k = 1, X_1)) \end{equation} where $T_k$ is the $k$-th coordinate of $T$ (i.e., the estimate of the tree at $x_k$) and $I_k$ is the indicator variable for the event that $X_1$ belongs to the terminal node containing $x_k$. In particular, the off-diagonal entry at $(j, k)$ of $\Var[\E(T \mid Z_1) - \E(T \mid X_1)]$ is equal to \begin{equation} \label{eq:15} \E[\E(I_k \mid X_1) \E(I_j \mid X_1) (Y_1 - \E(Y_1 \mid X_1, I_k = 1)) (Y_1 - \E(Y_1 \mid X_1, I_j = 1))]. \end{equation} Following the same truncation argument as in (Athey and Wager \cite{crf}, Theorem 5), this quantity is bounded by \begin{equation} \E[\E(I_k \mid X_1) \E(I_{j} \mid X_1)]. \end{equation} Alternatively, this bound also holds in the case when $Y$ is bounded. (Note that a direct application of the Cauchy-Schwarz inequality yields the weaker bound $\sqrt{\E[\E(I_k \mid X_1)^2 \E(I_{j} \mid X_1)^2]}$.) Recall that $\E(I_k \mid X_1 = x)$ is the probability that $x$ and $x_k$ belong to the same terminal node; likewise $\E(I_j \mid X_1 = x)$ is the probability that $x$ and $x_j$ belong to the same terminal node. The following proposition shows the intuitive result that for sufficiently large $s$, at least one of these probabilities is small. \begin{proposition} \label{prop:terminal-node-probability} Fix two distinct points $x$ and $\bar x \in [0,1]^p$, and let $M(x, \bar x)$ be the probability that $x$ and $\bar x$ belong to the same terminal node. If $\delta > 1/2$, then \begin{equation} \label{eq:19} M(x, \bar x) = o(s^{-(1+\epsilon)}) \quad \text{for some $\epsilon > 0$}. \end{equation} The same holds for the quantity $M(x, \bar x \mid X_1 = \bar x)$, the probability conditional on $X_1 = \bar x$.
\end{proposition} \todo{Discuss the significance of the metric $M(x, \bar x)$ that measures how likely $x$ and $\bar x$ are to lie in the same terminal node.} \todo{Simulations should show that the off-diagonal terms are strongly correlated with $M(x, \bar x)$.} This proposition shows that the contribution of $\Var[\E(T \mid Z_1) - \E(T \mid X_1)]$ to the cross covariances of $\Var \E(T \mid Z_1)$ is small (i.e., smaller than the required $(\log s) / s$). The requirement that $\delta > 1/2$, while needed for our theoretical results, may not be needed in practice. The reason is that $\delta > 1/2$ is required for a uniform bound on the quantity \begin{equation} \E(I_k \mid X_1) \E(I_{j} \mid X_1), \end{equation} while what the proposition demands is a bound on the expectation. Indeed, in the extreme case $x = 0$ and $\bar x = 1$, it is easy to see that the expectation satisfies the required bound even when $\delta \leq 1/2$. In light of this, we could instead impose the high-level condition that \begin{equation} M(x, \bar x) = o\biggl( \frac{1}{\log^p s \cdot s} \biggr), \end{equation} where $M(x, \bar x) = \E[\E(I_k \mid X_1) \E(I_{j} \mid X_1)]$ as above. \subsubsection{Bounding $\Var \E(T \mid X_1)$} We turn next to bounding the cross covariances of $\Var[\E(T \mid X_1)]$. It will be convenient to change notation, so that $x \mapsto x_j$, $\bar x \mapsto x_k$, and let $x_1$ denote the value of $X_1$. Thus, we need to show that \begin{equation} \label{eq:covariance-bound} \E[ (\E(T \mid X_1 = x_1) - \mu) (\E(\bar T \mid X_1 = x_1) - \bar \mu) ] = o\biggl( \frac{\log s}{s} \biggr), \end{equation} where $T$ and $\bar T$ are the tree estimates at $x$ and $\bar x$, and $\mu$ and $\bar \mu$ their (unconditional) expectations. The plan is to show that if $\|x - x_1\|_{\infty}$ is bounded away from zero, then \begin{equation} \label{eq:20} \E(T \mid X_1 = x_1) - \mu = \E(T \mid X_1 = x_1) - \E(T) \end{equation} is small. Intuitively, the knowledge of the sample point $X_1 = x_1$ changes the expectation of the tree estimate at $x$ only when $x_1$ affects the position of the terminal node containing $x$. This is unlikely to happen when $X_1 = x_1$ is far away; as soon as $X_1$ leaves the node containing $x$ in the splitting process, its effect on the terminal node diminishes. Toward this end, fix $x$, and let $\Pi$ denote the terminal node that contains $x$. Since the set of potential splits is unchanged when conditioning on $X_1$, $\Pi$ is a discrete random variable and we may write \begin{equation} \label{eq:tree-formula} \E(T) = \sum_{\pi} \P(\Pi = \pi) \mu_{\pi} \quad \text{and}\quad \E(T \mid X_1 = x_1) = \sum_{\pi} \P(\Pi = \pi \mid X_1 = x_1) \mu_{\pi}' \end{equation} where $\mu_{\pi} = \E(T \mid \Pi = \pi)$ and $\mu_{\pi}' = \E(T \mid \Pi = \pi, X_1 = x_1)$. Recall that $\Pi$ is generated by a recursive splitting procedure, and we can make a natural correspondence between \eqref{eq:tree-formula} and an expectation taken over a directed acyclic graph (DAG) in the following way. Let $[0, 1]^p$ be the root of the DAG; for every potential split at $[0,1]^p$, there is a directed edge to a new vertex, where the vertex is the hyperrectangle that contains $x$. If a vertex is a leaf (i.e., splitting ceases at that node), then the vertex has no outgoing edges; otherwise, a vertex corresponding to a non-terminal node has an outgoing edge for each potential split, with each edge going to another vertex that is the halfspace containing $x$.
To each terminal vertex $v$ in the DAG, which naturally corresponds to a terminal node $\pi$ containing $x$, associate a value $f(v) \coloneqq \mu_{\pi}$. Each edge $e = (v \to w)$ corresponds to a split $s$ from a node $v$ to a halfspace $w$; associate with this edge the transition probability \begin{equation} \label{eq:24} p(e) \coloneqq \P(\text{$s$ is chosen at $v$} \mid \text{current node is $v$}) \eqqcolon \P(w \mid v). \end{equation} Given the transition probabilities, the value $f$ may be extended to each vertex $v$ via the recursion \begin{equation} \label{eq:25} f(v) \coloneqq \sum_{e: v \to w} \P(w \mid v) f(w). \qquad \text{Thus, $f(v)$ is the continuation value at $v$.} \end{equation} In this way, the expectation $\E(T) = f(\text{root}) = f([0,1]^p)$. If instead we assign the terminal values $f'(v) = \mu'_v$ and the transition probabilities \begin{equation} \label{eq:27} p'(e) = \P(\text{$s$ is chosen at $v$} \mid \text{current node is $v$}, X_1 = x_1) = \P'(w \mid v), \end{equation} then we recover $\E(T \mid X_1 = x_1) = f'([0,1]^p)$. Therefore, bounding $\E(T \mid X_1 = x_1) - \E(T)$ requires bounding the difference in the continuation values. We will need to assume that $p'(e) \approx p(e)$; that is, we will require the following assumption regarding differences in the splitting probabilities. \begin{splitting-stability} For any node $v$, the total variation distance between the distributions $\{ p(e) \}_{e: v \to w}$ and $\{ p'(e) \}_{e: v \to w}$ is bounded by the effective volume of $v$. Specifically, there exists some $\delta>0$ such that for all $v$, \begin{equation} \label{eq:29} \operatorname{TV}(p, p') \leq \biggl( \frac{1}{s|v|} \biggr)^{1+\delta} \end{equation} up to some constant. Here, $|v|$ denotes the volume of the hyperrectangle at $v$, i.e., \begin{equation} \label{eq:30} |v| = \biggl| \prod_{j=1}^p (a_j, b_j) \biggr| = \prod_{j=1}^p |b_j - a_j|. \end{equation} \end{splitting-stability} Recall that with probability at least $1 - 1/s$, the number of sample points $X_1, \dots, X_s$ in the hyperrectangle $v$ is approximately $s|v|$, by our uniform distribution assumption. In general, since the density of $X$ is bounded away from zero and infinity, the number of sample points will be bounded (above and below) by a constant times $s|v|$. Therefore, the stability assumption places a restriction on the procedure used to select optimal splits: namely, if the decision is made on the basis of $m$ points, then moving any one of the points changes the optimal split with probability bounded by $m^{-(1+\delta)}$. In practice, most splitting procedures satisfy a stronger bound. A set of sufficient conditions is given in the following proposition. \begin{proposition} \label{prop:split-stability-sufficient} Assume that the optimal split is chosen based on the quantities \begin{equation} f_1(\mu_1, \dots, \mu_Q), \dots, f_P(\mu_1, \dots, \mu_Q) \end{equation} for some $Q \geq 1$, where $\mu_1, \dots, \mu_Q$ are the sample averages of the points being split, \begin{equation} \mu_k = \frac{1}{n_v} \sum_{i: X_i \in v} m_k(X_i). \end{equation} Specifically, the optimal split depends on which $f_i$ achieves the largest value, i.e., on $\argmax_i f_i(\mu)$. Here, $f_1, \dots, f_P$ are assumed to be Lipschitz, and the functions $m_1, \dots, m_Q$ are such that $m_k(X)$ is 1-sub-exponential. Then the splitting stability assumption is satisfied.
\end{proposition} \begin{remark} Recall that the $X_i$ are bounded---therefore, $m_k(X)$ being sub-exponential allows the use of the squared loss to compute the optimal split. \end{remark} In general, the conditions in Proposition~\ref{prop:split-stability-sufficient} are sufficient to guarantee an exponential bound rather than a polynomial one. Thus, the proposition should be viewed simply as providing a ``plausibility argument'' that stable splitting rules are common in practice. The next proposition shows that when the differences in splitting probabilities are bounded by $(s|v|)^{-(1+\delta)}$ as in the splitting stability assumption, so are the differences in continuation values. \begin{proposition} \label{prop:value-bounds} Suppose the splitting probabilities satisfy a generic bound \begin{equation} \TV(p, p') \leq \log(s) \Delta(s|v|) \quad \text{at each node $v$}. \end{equation} Then for any node $v$ containing $x$ but not $x_1$, \begin{equation} |f(v) - f'(v)| \leq C \Delta(s|v|) \quad \text{for some constant $C$}. \end{equation} (The constant $C$ does not depend on $v$.) \end{proposition} Finally, we can put the bounds on $\TV(p, p')$ and $|f-f'|$ together and prove the required bound on the off-diagonal entries. \begin{proposition}\label{prop:2-bound} Suppose that the splitting rule is stable as in \eqref{eq:29} and that $\delta > 1-\alpha$. For $x \neq x_1$, \begin{equation} \label{eq:26} |\E(T \mid X_1 = x_1) - \E(T)| = o\biggl( \frac{1}{s^{1+\epsilon}} \biggr) \end{equation} for some $\epsilon > 0$. In particular, the off-diagonal entries of $\Var \E(T \mid X_1)$ are $o(s^{-(1+\epsilon)})$. \end{proposition} Note that Proposition~\ref{prop:terminal-node-probability} required that $\delta > 1/2$. Since $\alpha < 1/2$ by definition, the present bound is necessarily more restrictive. As before, this requirement is plausibly looser in applications: we use it to give the bound \begin{equation} |v| \geq \alpha^L, \end{equation} even though the right-hand side is tight for at most a proportion $1/2^L$ of the nodes. Note that the ``average'' node has volume approximately $(1/2)^L$, so that $\delta > 1/2$ may be more appropriate. Putting everything together, we have shown the required result that \begin{equation} \label{eq:31} \Var(\mathring T)_{k, l} = o(s^{-\epsilon}) \qquad \text{for some $\epsilon > 0$}. \end{equation} Therefore, by Proposition~\ref{prop:trace-bound} and the trace calculations, the asymptotic joint normality of the random forest estimator is established. \section{Heuristics and Practical Recommendations} The previous sections established the asymptotic normality result \begin{equation} \label{eq:32} V^{-1/2}(\mathring \RF - \mu) \xRightarrow{\;\rm dist\;} N(0,I), \qquad \text{where } V = \Var \mathring \RF = \frac{s}{n} \Var \mathring T. \end{equation} Recall that $\mu$ is the expectation of $\RF$, while the target function is $m(x) = \E(Y \mid X = x)$. Athey and Wager \cite{crf} show that $(\RF - \mu)/\sqrt{V} \xRightarrow{\;\rm dist\;} N(0, 1)$ in the univariate case; since we have shown that $V$ is diagonally dominant, their result carries over, so that $V^{-1/2}(\RF - m) \xRightarrow{\;\rm dist\;} N(0, I)$. Moreover, Athey and Wager \cite{crf} propose a jackknife estimator that consistently estimates $\sqrt{V}$ in the scalar case.
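A natural question is how such a jackknife estimate might extend to the off-diagonal entries. The following Python sketch is our own entrywise extension of the infinitesimal-jackknife-style variance estimate and should be read as a heuristic only: the finite-$B$ and finite-subsample bias corrections of the scalar case are omitted.
\begin{verbatim}
import numpy as np

def ij_covariance(inbag, preds):
    # inbag: (B, n) counts of each observation in each subsample
    # preds: (B, q) per-tree predictions at the q query points
    # Entry (k, l) sums, over observations i, the product of the
    # covariances Cov_b(N_bi, T_b(x_k)) * Cov_b(N_bi, T_b(x_l)); for
    # q = 1 this reduces to a sum of squared covariances (bias
    # corrections are omitted in this sketch).
    B, n = inbag.shape
    dN = inbag - inbag.mean(axis=0)
    dT = preds - preds.mean(axis=0)
    C = (dN.T @ dT) / B               # (n, q): Cov_b(N_bi, T_b(x_k))
    return C.T @ C                    # (q, q) covariance estimate
\end{verbatim}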
Our diagonal dominance result suggests that the random forest estimates at $x$ and $\bar x \in [0, 1]^p$ are independent in the limit $n\to\infty$: \begin{equation} \label{eq:33} \begin{split} \Var(\RF(x) + \RF(\bar x)) &= \Var(\RF(x)) + \Var(\RF(\bar x)) + 2 \Cov(\RF(x), \RF(\bar x)) \\&\approx \Var(\RF(x)) + \Var(\RF(\bar x)), \end{split} \end{equation} so that the jackknife estimator for the scalar case can be used to obtain confidence intervals. However, the approximation above depends on the decay of the off-diagonal terms. In this section, we provide a back-of-the-envelope bound for the covariance term that may be useful for practitioners. We stress that the following calculations are mostly heuristics: as we have shown above, the covariance term depends on quantities such as $M(x, \bar x)$ which are heavily dependent on the splitting algorithm. Since the aim is to produce a ``usable'' result, we focus on heuristics. To begin, recall that the covariance term is upper bounded by\footnote{This is a very crude upper bound, as we have dropped the quantity $\Delta(\alpha^{\ell} s)$ from the infinite series.} \begin{equation} M(x, \bar x) + \log^2s \biggl(\sum_{\ell = 0}^{\infty} p_{\ell}\biggr) \biggl(\sum_{\ell = 0}^{\infty} \bar p_{\ell}\biggr) \end{equation} where $p_{\ell} = \P(L \geq \ell)$ is the probability that $x$ and $x_1$ are not separated after $\ell$ splits (c.f.\ the proof of Proposition~\ref{prop:2-bound}), and likewise for $\bar p_{\ell}$. If we let $I$ (resp.\ $\bar I$) denote the indicator that $X_1$ is in the terminal node of $x$ (resp.\ $\bar x$), then $I = 1$ is equal to the event that $L = \log_2 s$, so that \begin{equation} \E(I \mid X_1 = x_1) = \P(L = \log_2 s) \leq \frac{\E L}{\log_2 s}. \end{equation} Replacing the inequality with an approximation, we have $\E L = (\log_2 s) \E(I \mid X_1 = x_1)$. All of this shows that the covariance term is bounded by \begin{equation} (\log^4s) M(x, \bar x), \quad \text{up to appropriate constants}. \end{equation} Towards a useful heuristic, we will consider a bound on the correlation instead of the covariance. In our notation, Athey and Wager \cite{crf} lower bound $M(x, x)$ (and $M(\bar x, \bar x)$) in their proof, while we upper bound $M(x, \bar x)$. Ignoring the logarithmic terms, we have \begin{equation} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx \frac{M(x, \bar x)}{\sqrt{M(x, x) M(\bar x, \bar x)}}. \end{equation} Recall that $M(x, \bar x) = \E[\E(I \mid X_1) \E(\bar I \mid X_1)]$ and decays superlinearly as $\bar x$ moves away from $x$. Using the previous expression (note that $M(x,x) \approx M(\bar x, \bar x)$ due to the symmetry between $x$ and $\bar x$), we can bound the correlation from purely geometric considerations. Since the integrand \begin{equation} \E(I \mid X_1) \E(\bar I \mid X_1) \end{equation} decays as $X_1$ moves away from $x$ (and $\bar x$), we may imagine that the integral $M(x,x)$ comes from contributions of points $X_1$ near $x$, say those points in an $L_{\infty}$-box of side length $d$ with volume $d^p$, i.e., points $\{ y \in [0,1]^p : \| x - y \|_{\infty} \leq d/2\}$.
For $M(x, \bar x)$, the contribution would come from points that are within $d/2$ of both $x$ \emph{and} $\bar x$, and to a first order approximation, the volume of these points $\{ y \in [0,1]^p : \| x - y \|_{\infty} \leq d/2, \| \bar x - y \|_{\infty} \leq d/2 \}$ is \begin{equation} (d - z_1) \dots (d - z_p) \approx d^p - (z_1 + \dots + z_p)d^{p-1}, \quad \text{where $z_i = |x_i - \bar x_i|$}, \end{equation} where the approximation is accurate if $z_i \ll d$. The proportion of the volume of these points is therefore $1 - \frac{1}{d}\| x - \bar x \|_1$, which leads to the heuristic \begin{equation} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}} \biggr| \approx 1 - c \| x - \bar x \|_1, \qquad \text{for some constant $c$}. \end{equation} The RHS has the correct scaling when $x = \bar x$, i.e., the correlation equals one when $\| x - \bar x \|_1 = 0$. To maintain the correct scaling at the other extreme with $\| x - \bar x \|_1 = p$, we should take $c = 1/p$, so that \begin{equation} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx 1 - \frac{1}{p}\sum_{i=1}^p |x_i - \bar x_i|. \end{equation} Of course, this heuristic is ``wrong'' in that it does not depend on $s$; our theoretical results show that even for non-diametrically opposed points, the correlation drops to zero as $s \to \infty$. Therefore, another recommendation is to use \begin{equation} \label{linear-heuristic} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx \max \biggl( 1 - \frac{s^{\epsilon}}{p}\sum_{i=1}^p |x_i - \bar x_i|, \; 0 \biggr), \end{equation} for some $\epsilon > 0$, where the dependence on $s^{\epsilon}$ comes from matching the heuristic $c = 1/d$ above with the proof of Proposition~\ref{prop:terminal-node-probability}. \section{Simulations} In this section, we conduct numerical experiments of varying sample sizes to gauge the empirical relevance of our theoretical results and heuristics. In our experiments, we set $p = 2$, so that the covariates $X$ are distributed on the unit square. Instead of adopting a uniform distribution, for ease of interpretation, the distribution of $X$ is \begin{equation} \label{eq:16} X \sim \begin{cases} \bar N(\mu_1, I_2) & \text{with probability $1/4$} \\ \bar N(\mu_2, I_2) & \text{with probability $1/4$}\\ \bar N(\mu_3, I_2) & \text{with probability $1/4$}\\ \bar N(\mu_4, I_2) & \text{with probability $1/4$} \end{cases} \quad \text{where}\quad \begin{cases} \mu_1 = (0.3, 0.3)^{\intercal} \\ \mu_2 = (0.3, 0.7)^{\intercal} \\ \mu_3 = (0.7, 0.3)^{\intercal} \\ \mu_4 = (0.7, 0.7)^{\intercal} \\ \end{cases} \end{equation} and $\bar N$ denotes a truncated multivariate Gaussian distribution on the unit square.\footnote{ That is, $\bar N(\mu, \Sigma)$ denotes the conditional distribution of $x \sim N(\mu, \Sigma)$ on the event $x \in [0, 1]^2$.} Thus, $X$ has a bounded density on the unit square, with four peaks at $\mu_1, \dots, \mu_4$. The distribution of $Y$ conditional on $X = (x_1, x_2)$ is \begin{equation} \label{eq:18} Y \sim \frac{x_1 + x_2}{2} + \frac{1}{5} N(0, 1). \end{equation} The random splitting probability is $\delta = 1/2$, and the regularity parameters are $\alpha = 0.01$ and $k = 1$, so that the tree is grown to the fullest extent (i.e., terminal nodes may contain a single observation), with each terminal node lying on the $101 \times 101$ grid of the unit square.
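Before reporting the results, the following Python sketch shows how the co-membership probability $M(x, \bar x)$ and the two correlation heuristics can be computed under a deliberately crude, data-free caricature of an $(\alpha, k)$-regular splitting rule; the depth, $\alpha$, $\epsilon$, and $\lambda$ values are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def same_node(x, xbar, depth=10, alpha=0.25):
    # One random recursive partition of [0,1]^p (a data-free caricature
    # of an (alpha, k)-regular tree): each split picks a uniform
    # coordinate and a cut at a fraction in [alpha, 1-alpha] of the side.
    lo, hi = np.zeros(len(x)), np.ones(len(x))
    for _ in range(depth):
        j = rng.integers(len(x))
        t = lo[j] + (hi[j] - lo[j]) * rng.uniform(alpha, 1 - alpha)
        if (x[j] <= t) != (xbar[j] <= t):
            return False              # x and xbar separated at this split
        if x[j] <= t:
            hi[j] = t
        else:
            lo[j] = t
    return True

def M_hat(x, xbar, trees=20000):      # Monte Carlo estimate of M(x, xbar)
    return np.mean([same_node(x, xbar) for _ in range(trees)])

def corr_linear(x, xbar, s, eps=0.1): # conservative linear heuristic
    return max(1.0 - (s**eps / len(x)) * np.abs(x - xbar).sum(), 0.0)

def corr_exp(x, xbar, lam=5.0):       # exponential shape suggested below
    return np.exp(-lam * np.abs(x - xbar).sum())

x = np.array([0.4, 0.4])
for d in (0.0, 0.05, 0.1, 0.2):
    xbar = x + d
    print(d, M_hat(x, xbar), corr_linear(x, xbar, s=200), corr_exp(x, xbar))
\end{verbatim}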
For each sample size $n$, five thousand trees are grown, and the estimates are aggregated to compute the correlation. Figure~\ref{fig:1} plots the correlation of estimates at $x$ and $\bar x$ as a function of the $L_1$ norm $\|x - \bar x\|_1$. The calculation is performed by first fixing $x$, then calculating the sample correlation (across five thousand trees) as $\bar x$ ranges over each cell; the correlation is associated with the $L_1$ norm $\|x - \bar x\|_1$. This process is then repeated by varying the reference point $x$, and the correlation at $\| x - \bar x \|_1$ is the average of the correlations observed. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{C.pdf} \caption{Correlation as a function of sample size and $L_1$ norm.} \label{fig:1} \end{figure} The figure demonstrates that the linear heuristic \eqref{linear-heuristic} given in the previous section is conservative: it is evident that the correlation decreases super-linearly as $x$ and $\bar x$ become separated. Figure~\ref{fig:2} plots the correlation on a logarithmic scale, which shows that the correlation decay is exponential in a neighborhood of unity. In other words, the simulations suggest that the correct heuristic is of the shape \begin{equation} \label{eq:28} \biggl|\frac{\Cov(T(x), T(\bar x))}{\sqrt{\Var T(x) \cdot \Var T(\bar x)}}\biggr| \approx e^{-\lambda\| x - \bar x \|_1} \end{equation} for some suitable $\lambda$. Note, however, that the rate of decay of the correlation depends heavily on the exact splitting algorithm employed. Thus, an avenue for future empirical work is to investigate the decay behavior of popular random forest algorithms used in practice. \begin{figure}[t] \centering \includegraphics{Clog.pdf} \caption{Logarithm of correlation as a function of sample size and $L_1$ norm.} \label{fig:2} \end{figure} \FloatBarrier \section{Conclusion and Points for Future Work} Regression trees and random forests are popular and effective nonparametric estimators in practical applications. A recent paper by Athey and Wager \cite{crf} shows that the random forest estimate at any point is asymptotically Gaussian; in this paper, we extend this result to the multivariate case and show that the vector of estimates at multiple points is jointly normal. Specifically, the covariance matrix of the limiting normal distribution is diagonal, so that the estimates at any two distinct points are asymptotically independent for sufficiently deep trees. Moreover, the off-diagonal terms are bounded by quantities capturing how likely it is that two points belong to the same terminal node of the resulting tree. Stability properties of the base learner are central to our results, c.f.\ Propositions~\ref{prop:terminal-node-probability} and \ref{prop:split-stability-sufficient}. We test our proposed covariance bound and the associated coverage rates of confidence intervals in numerical simulations. This paper provides a theoretical basis for performing inference on functionals of the target function (e.g., a heterogeneous treatment effect) when the functional is based on values of the target function at multiple points in the feature space. Specifically, we show that the covariance vanishes in the limit relative to the variance, and we provide heuristics on the size of the correlation in finite samples. There are a couple of interesting avenues for future work: the first is to extend our framework to cover categorical or discrete-valued features.
Here, new assumptions would be required in order to maintain the guarantee that node sizes are ``not too small.'' Another direction is to use our bounds---with possible improvements---on the covariance matrix of the underlying $U$-statistic and leverage recent results \cite{chernozhukov2017,chen2018} in order to provide finite sample bounds. This would give a sounder theoretical underpinning for our heuristics.
\section{Introduction} The Dynkin game as introduced in \cite{dyn} is a two-person game extension (or a variant) of an optimal stopping problem. It is extensively used in various applications including wars of attrition (see, e.g. \cite{ghem, hendricks, maynard, ekstrom2}), pre-emption games (see, e.g. \cite{fudenberg}), duels (see, e.g. \cite{bellman, blackwell, shapley} and the survey by Radzik and Raghavan \cite{radzig}), and financial applications including game options (see, e.g. \cite{bieleckia, ekstrom1, grenadier,h06, kif00} and the survey by Kifer \cite{kif13}). The general setup for a zero-sum Dynkin game over a finite time interval $[0,T]$ (henceforth sometimes called a Dynkin game) consists of Player 1 choosing to stop the game at a stopping time $\tau$ and Player 2 choosing to stop it at a stopping time $\sigma$. At $\tau\wedge \sigma:=\min(\tau,\sigma)$ the game is over and Player 1 pays Player 2 the amount $$ \mathcal{J}(\tau,\sigma):=\chi_{\tau}{\bf 1}_{\{\tau\le \sigma<T\}}+\zeta_{\sigma}{\bf 1}_{\{\sigma< \tau \}}+\xi{\bf 1}_{\{\tau\wedge \sigma=T\}}, $$ where the payoffs $\chi, \zeta$ and $\xi$ are given processes satisfying $\chi_t \le \zeta_t, \,0\le t<T$ and $\chi_T=\zeta_T=\xi$. The objective of Player 1 is to choose $\tau$ from a set of admissible stopping times to minimize the expected value $J_{\tau,\sigma}:=\mathbb{E}[\mathcal{J}(\tau,\sigma)]$, while Player 2 chooses $\sigma$ from the same set of admissible stopping times to maximize it. In the last few decades, Dynkin games have been extensively studied under several sets of assumptions, including \cite{alariot, bay, bensoussan74, bismut, karatzas01, laraki05, laraki13,lep84, morimoto, stettner, touzi} (the list being far from complete). The two main questions addressed in all these papers are: (1) whether the Dynkin game is fair (or has a value), i.e. whether the following equality holds: $$ \underset{\tau}{\inf}\,\underset{\sigma}{\sup}\,J_{\tau,\sigma}=\underset{\sigma}{\sup}\,\underset{\tau}{\inf}\,J_{\tau,\sigma}; $$ (2) whether the game has a saddle-point, i.e. whether there exists a pair of admissible strategies (stopping times) $(\tau^*,\sigma^*)$ for which we have $$ J_{\tau^*,\sigma}\le J_{\tau^*,\sigma^*}\le J_{\tau,\sigma^*}. $$ By a simple change of variable, the expected payoff can be generalized to include an instantaneous payoff process $g(t)$, so that $$ J_{\tau,\sigma}:=\mathbb{E}\left[\mathcal{R}_0(\tau,\sigma)\right], $$ where $$ \mathcal{R}_t(\tau,\sigma):=\int_t^{\tau\wedge \sigma} g(s)ds+\chi_{\tau}{\bf 1}_{\{\tau\le \sigma<T\}}+\zeta_{\sigma}{\bf 1}_{\{\sigma< \tau \}}+\xi{\bf 1}_{\{\tau\wedge \sigma=T\}}, \quad 0\le t\le T. $$ Cvitani{\'c} and Karatzas \cite{CK96} were the first to establish a link between these Dynkin games and doubly reflected backward stochastic differential equations (DRBSDEs) with driver $g(t)$ and obstacles $\chi$ and $\zeta$, in the Brownian case when $\chi$ and $\zeta$ are continuous processes. This link turned out to be decisive for subsequent work considering more general forms of zero-sum Dynkin games, including the case where the obstacles are merely right continuous with left limits, and also when the underlying filtration is generated by both a Brownian motion and an independent Poisson random measure; see \cite{hp00, hh06, h06} and the references therein.
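Before recalling the precise link, it may help to see the dynamic programming structure in a discrete toy model: on a binomial random walk, the game value follows the doubly reflected backward recursion $Y_t = \zeta_t \wedge \bigl(\chi_t \vee (\mathbb{E}[Y_{t+1} \mid \mathcal{F}_t] + g\,\Delta t)\bigr)$, which mirrors the DRBSDE representation recalled below. The Python sketch below uses illustrative payoff choices only.
\begin{verbatim}
import numpy as np

n = 200                                      # time steps
dt = 1.0 / n
h = np.sqrt(dt)                              # walk step: X_{k+1} = X_k +/- h
xi_fun   = lambda x: np.minimum(x**2, 1.0)   # terminal payoff xi
chi_fun  = lambda x: xi_fun(x) - 0.1         # lower payoff (chi <= zeta)
zeta_fun = lambda x: xi_fun(x) + 0.1         # upper payoff
g = 0.05                                     # instantaneous payoff g(t)

Y = xi_fun(h * np.arange(-n, n + 1))         # Y_T = xi on terminal states
for k in range(n - 1, -1, -1):
    x = h * np.arange(-k, k + 1)             # reachable states at step k
    cont = 0.5 * (Y[2:] + Y[:-2]) + g * dt   # E[Y_{k+1} | X_k] + g dt
    Y = np.minimum(zeta_fun(x), np.maximum(chi_fun(x), cont))
print("game value Y_0 ~", Y[0])
\end{verbatim}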
Motivated by problems in which the players use risk measures to evaluate their payoffs, Dumitrescu {\it et al.} \cite{dqs16} have considered 'generalized' Dynkin games where the 'classical' expectation $\mathbb{E}[\,\cdot\,]$ is replaced by the more general nonlinear expectation $\mathcal{E}^g(\cdot)$, induced by a BSDE with jumps and a nonlinear driver $g$. The link between Dynkin games and DRBSDEs suggested in \cite{CK96} goes as follows. Assume that, under suitable conditions on $g,\chi,\zeta$ and $\xi$ defined on the filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_t,\mathbb{P})$ carrying a Brownian motion $B$, there is a unique solution $(Y,Z,K^1,K^2)$ to the following DRBSDE \begin{align}\label{RBSDE-0} \begin{cases} (i)\quad Y_t = \xi +\int_t^T g(s)ds + (K^1_T - K^1_t)-(K^2_T-K^2_t)-\int_t^T Z_s dB_s, \quad t \in [0,T],\\ (ii) \quad \zeta_t \geq Y_{t}\geq \chi_t, \quad \,\, t \in [0,T], \\ (iii) \quad \int_0^T (Y_{t}-\chi_t)dK^1_t = 0, \quad \int_0^T (Y_{t}-\zeta_t)dK^2_t = 0, \end{cases} \end{align} where the processes $K^1$ and $K^2$ are continuous increasing processes, such that $K^1_0=K^2_0=0$. Then the process $Y$ admits the following representation \begin{equation}\label{value-intro} Y_t=\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\mathbb{E}[\mathcal{R}_t(\tau,\sigma)| \mathcal{F}_t]=\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\mathbb{E}[\mathcal{R}_t(\tau,\sigma)| \mathcal{F}_t],\quad 0\le t\le T. \end{equation} The formula \eqref{value-intro} tells us that the 'upper value' $\overline{V}_t:=\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\mathbb{E}[\mathcal{R}_t(\tau,\sigma)| \mathcal{F}_t]$ and the 'lower value' $\underline{V}_t:=\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\mathbb{E}[\mathcal{R}_t(\tau,\sigma)| \mathcal{F}_t]$ of the game coincide and are equal to the first component of the solution $(Y,Z,K^1,K^2)$ to the DRBSDE \eqref{RBSDE-0}, in which case the zero-sum Dynkin game has a value, given by $$ Y_0=\underset{\tau}{\inf}\,\underset{\sigma}{\sup}\,\mathbb{E}[\mathcal{R}_0(\tau,\sigma)]=\underset{\sigma}{\sup}\,\underset{\tau}{\inf}\,\mathbb{E}[\mathcal{R}_0(\tau,\sigma)]. $$ \medskip \paragraph{Main contributions.} In this paper, motivated by applications to life insurance (see the example below), we suggest a generalization of the above zero-sum Dynkin game, where we consider the case where the payoff depends on the 'values' $V$ of the game and their probability laws $\mathbb{P}_{V}$: $$ g(t):=f(t,V_t,\mathbb{P}_{V_t}),\quad \chi_t:=h_1(t,V_t,\mathbb{P}_{V_t}),\quad \zeta_t:=h_2(t,V_t,\mathbb{P}_{V_t}), $$ so that, for every $t\in [0,T]$, $$ \mathcal{R}_t(\tau,\sigma,V):=\int_t^{\tau\wedge \sigma} f(s,V_s,\mathbb{P}_{V_s})ds+h_1(\tau, V_{\tau}, \mathbb{P}_{V_s}|_{s=\tau}){\bf 1}_{\{\tau\le \sigma<T\}}+h_2(\sigma,V_{\sigma},\mathbb{P}_{V_s}|_{s=\sigma}){\bf 1}_{\{\sigma< \tau \}}+\xi{\bf 1}_{\{\tau\wedge \sigma=T\}}. $$
The upper and lower values of the game satisfy \begin{equation}\label{recur-0} \overline{V}_t:=\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\mathbb{E}[\mathcal{R}_t(\tau,\sigma,\overline{V})| \mathcal{F}_t],\qquad \underline{V}_t:=\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\mathbb{E}[\mathcal{R}_t(\tau,\sigma,\underline{V})| \mathcal{F}_t]. \end{equation} Due to the dependence of the payoff on the probability law of the 'value', we call the game whose upper and lower values satisfy \eqref{recur-0} a \textit{zero-sum mean-field Dynkin game}. This new type of game is more involved than standard Dynkin games, since the first question one has to answer is the existence of the upper (resp. lower) value. The first main result of this paper shows that, when the underlying filtration is generated by both a Brownian motion and an independent Poisson random measure, under mild regularity assumptions on the payoff process, the upper and lower values $\overline{V}$ and $\underline{V}$ of the game exist and are unique. Then, we show this game has a value (i.e. $\overline{V}_t=\underline{V}_t, \,0\le t\le T,\,\, \mathbb{P}$-a.s.), which can be characterized in terms of the component $Y$ of the solution of \textit{a new class of mean-field doubly reflected BSDEs}, whose obstacles might depend on the solution and its distribution (see Section \ref{MFDG}). We prove the results in a general setting where the driver of the doubly reflected BSDE might also depend on $z$ and $u$, and the method we propose to show the existence of the value of the game (and of the solution of the corresponding mean-field doubly reflected BSDE) is based on a new approach which avoids the standard penalization technique. We also provide sufficient conditions under which the game admits \textit{a saddle-point}, the main difficulty in our framework being due to the dependence of the obstacles on the value of the game. The second main result consists in proving the existence of the value of the following system of {\it interacting Dynkin games}, which takes the form \begin{equation}\label{recur-1} \overline{V}^{i,n}_t:=\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\mathbb{E}[\mathcal{R}^{i,n}_t(\tau,\sigma,\overline{V}^{n})| \mathcal{F}_t],\qquad \underline{V}^{i,n}_t:=\underset{\sigma\ge t}{{\rm ess}\,\sup\limits}\,\underset{\tau\ge t}{{\rm ess}\, \inf\limits}\,\mathbb{E}[\mathcal{R}^{i,n}_t(\tau,\sigma,\underline{V}^{n})| \mathcal{F}_t], \end{equation} where $$\begin{array}{lll} \mathcal{R}^{i,n}_t(\tau,\sigma,V^{n}):=\int_t^{\tau\wedge \sigma} f(s,V^{i,n}_s,\frac{1}{n}\sum_{j=1}^n\delta_{V^{j,n}_s})ds+h_1(\tau, V^{i,n}_{\tau}, \frac{1}{n}\sum_{j=1}^n\delta_{V^{j,n}_s}|_{s=\tau}){\bf 1}_{\{\tau\le \sigma<T\}} \\ \qquad\qquad \qquad\qquad +h_2(\sigma,V^{i,n}_{\sigma}, \frac{1}{n}\sum_{j=1}^n\delta_{V^{j,n}_s}|_{s=\sigma}){\bf 1}_{\{\sigma< \tau\}}+\xi^{i,n}{\bf 1}_{\{\tau\wedge \sigma=T\}}, \end{array} $$ and in providing sufficient conditions under which a saddle point exists. We also show the link with the solution of a system of interacting doubly reflected BSDEs, with obstacles depending on the solution, for which well-posedness is addressed in the general case when $f$ might also depend on $z$ and $u$.
The third main contribution is a convergence result, which shows that, under appropriate assumptions on the involved coefficients, the value $V$ is the limit (as $n\to \infty$), under appropriate norms, of ${V}^{i,n}$ (the value of the interacting zero-sum Dynkin game). A related propagation of chaos type result is derived. \subsection{Motivating example from life insurance} One of the main motivations for studying the class of zero-sum mean-field Dynkin games \eqref{recur-0} and \eqref{recur-1} is the pricing of the following prospective reserves in life insurance. Consider a portfolio of a large number $n$ of homogeneous life insurance policies, indexed by $\ell$. Denote by $Y^{\ell,n}$ the prospective reserve of each policy $\ell=1,\ldots,n$. Life insurance is a business which reflects the cooperative aspect of the pool of insurance contracts. To this end, the prospective reserve is constructed and priced based on {\it the averaging principle}, where an individual reserve $Y^{\ell,n}$ is compared with the average reserve $\frac{1}{n}\sum_{j=1}^n Y^{j,n}$, a.k.a. the {\it model point} among actuaries. In particular, in nonlinear reserving, the driver $f$, i.e. the reward per unit time, the solvency/guarantee level $h_1$ (lower barrier), and the allocated bonus level $h_2$ (upper barrier), i.e. the fraction of the value of the global market portfolio of the insurance company allocated to the prospective reserve, depend on the reserve of the particular contract and on the average reserve characteristics over the $n$ contracts (since $n$ is very large, averaging over the remaining $n-1$ policies has roughly the same effect as averaging over all $n$ policies): For each $\ell=1,\ldots,n$, \begin{equation}\label{MF-N}\begin{array}{lll} f(t,Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell}):=\alpha_t-\delta_tY^{\ell,n}_t+\beta_t\max(\theta_t, Y^{\ell,n}_t-\frac{1}{n}\sum_{k=1}^nY^{k,n}_t),\\ h_1(Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell})=\left(u-c^1(Y^{\ell,n}_t)+\mu(\frac{1}{n}\sum_{k=1}^nY^{k,n}_t-u)^+\right) \wedge S_t, \\ h_2(Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell})=\left(c^2(Y^{\ell,n}_t)+c^3(\frac{1}{n}\sum_{k=1}^nY^{k,n}_t)\right) \vee S^{\prime}_t, \end{array} \end{equation} where $0<\mu<1$, $S$ is the value of the 'benchmark' global portfolio of the company and $S^{\prime}$ some higher value of that global portfolio used by the company as a reference (threshold) to apply the bonus allocation program, where at each time $t$, $S_t\le S^{\prime}_t$, and the involved functions $c^1,c^2$ and the parameters $u,\mu$ are chosen so that $h_1(Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell})\le h_2(Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell})$. The driver $f$ includes the discount rate $\delta_t$ and deterministic positive functions $\alpha_t, \beta_t$ and $\theta_t$ which constitute the elements of the withdrawal option. The solvency level $h_1$ consists of a required minimum of a benchmark return (guarantee) $u$, a reserve-dependent management fee $c^1(Y^{\ell,n}_t)$ (usually much smaller than $u$) and a 'bonus' option $(\frac{1}{n}\sum_{k=1}^nY^{k,n}_t-u)^+$, which is the possible surplus realized by the average of all involved contracts. The allocated bonus level $h_2$ is usually prescribed by the contract and includes a function $c^2(Y^{\ell,n}_t)$ which reflects a possible bonus scheme based on the individual reserve level, and another function $c^3(\frac{1}{n}\sum_{k=1}^nY^{k,n}_t)$ which reflects the average reserve level.
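For concreteness, the following Python sketch implements the maps in \eqref{MF-N} for a finite pool, with hypothetical numerical choices for $\alpha_t, \beta_t, \theta_t, \delta_t, u, \mu$ and the functions $c^1, c^2, c^3$ (here taken constant in $t$); note that with $S_t \le S'_t$, the ordering $h_1 \le h_2$ holds automatically in this parametrization.
\begin{verbatim}
import numpy as np

# hypothetical parameter choices (all illustrative)
alpha, beta, theta, delta = 0.02, 0.5, 0.0, 0.03
u, mu = 1.0, 0.5
c1 = lambda y: 0.01 * y                 # management fee
c2 = lambda y: 1.1 * y                  # individual bonus scheme
c3 = lambda m: 0.1 * m                  # average-based bonus scheme

def f(y, m):                            # driver; m = average reserve
    return alpha - delta * y + beta * max(theta, y - m)

def h1(y, m, S):                        # solvency level (lower barrier)
    return min(u - c1(y) + mu * max(m - u, 0.0), S)

def h2(y, m, Sp):                       # allocated bonus level (upper barrier)
    return max(c2(y) + c3(m), Sp)

Y = np.array([0.9, 1.0, 1.2])           # toy reserves of n = 3 policies
m = Y.mean()
print([h1(y, m, S=1.5) for y in Y])     # h1 <= S <= S' <= h2 by construction
print([h2(y, m, Sp=1.6) for y in Y])
\end{verbatim}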
The Dynkin game is between two players, the insurer, Player (I), and each of the $n$ insured (holders of the insurance contracts), Player ($H_{\ell}$), where each of them can decide to stop it, i.e. exit the contract, at a random time of her choice. Player (I) stops the game when the solvency level $h_1(Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell})$ of the $\ell$th contract is reached, while Player ($H_{\ell}$) stops the game when her allocated bonus level $h_2(Y^{\ell,n}_t,(Y^{m,n}_t)_{m\neq\ell})$ is reached. The prospective reserve $Y^{\ell,n}$ is an upper value (resp. lower value) for the game if Player ($H_{\ell}$) (resp. Player (I)) acts first and then Player (I) (resp. Player ($H_{\ell}$)) chooses an optimal response. \medskip Sending $n$ to infinity in \eqref{MF-N} yields the following forms of {\it upper or lower value dependent} payoffs of the prospective reserve of a representative (model-point) life insurance contract: \begin{equation*}\begin{array}{lll} f(t,Y_t,\mathbb{E}[Y_t]):=\alpha_t-\delta_tY_t+\beta_t\max(\theta_t, Y_t- \mathbb{E}[Y_t]),\\ h_1(Y_t, \mathbb{E}[Y_t])=\left(u-c^1(Y_t)+\mu(\mathbb{E}[Y_t]-u)^+\right)\wedge S_t, \\ h_2(Y_t, \mathbb{E}[Y_t])=\left(c^2(Y_t)+c^3(\mathbb{E}[Y_t])\right)\vee S^{\prime}_t. \end{array} \end{equation*} \subsection{Organization of the paper} In Section \ref{MFDG} we introduce \textit{a class of zero-sum mean-field Dynkin games}. Under mild regularity assumptions on the coefficients involved in the payoff function, we show existence and uniqueness of the upper and lower values of the game. We also show that the game has a value and characterize it as the unique solution to a mean-field doubly reflected BSDE. Moreover, we give a sufficient condition on the obstacles which guarantees existence of a saddle-point. In Section \ref{WIDG}, we introduce a system of $n$ interacting zero-sum Dynkin games, show that it has a value, and characterize it as the unique solution to a system of interacting doubly reflected BSDEs. Furthermore, we give sufficient conditions on the barriers which guarantee existence of a saddle-point for each component of the system. Finally, in Section \ref{chaos}, we show that the value of the system of interacting zero-sum Dynkin games converges, as $n\to \infty$, to the value of the zero-sum mean-field Dynkin game in an appropriate norm. As a consequence of this limit theorem, we establish a propagation of chaos property for the value of the system of interacting zero-sum Dynkin games. \subsection*{Notation.} \quad Let $(\Omega, \mathcal{F},\mathbb{P})$ be a complete probability space. $B=(B_t)_{0\leq t\leq T}$ is a standard $d$-dimensional Brownian motion and $N(dt,de)$ is a Poisson random measure, independent of $B$, with compensator $\nu(de)dt$ such that $\nu$ is a $\sigma$-finite measure on $I\!\!R^*$, equipped with its Borel field $\mathcal{B}(I\!\!R^*)$. Let $\tilde{N}(dt,de)$ be the associated compensated process. We denote by $\mathbb{F} = \{\mathcal{F}_t\}$ the natural filtration associated with $B$ and $N$. Let $\mathcal{P}$ be the $\sigma$-algebra on $\Omega \times [0,T]$ of $\mathcal{F}_t$-progressively measurable sets. Next, we introduce the following spaces, with $p>1$: \begin{itemize} \item $\mathcal{T}_t$ is the set of $\mathbb{F}$-stopping times $\tau$ such that $\tau \in [t,T]$ a.s.
\item $L^p(\mathcal{F}_T)$ is the set of random variables $\xi$ which are $\mathcal{F}_T$-measurable and satisfy $\mathbb{E}[|\xi|^p]<\infty$. \item $\mathcal{S}_{\beta}^p$ is the set of real-valued c{\`a}dl{\`a}g adapted processes $y$ such that $||y||^p_{\mathcal{S}_{\beta}^p} :=\mathbb{E}[\underset{0\leq u\leq T}{\sup} e^{\beta p u}|y_u|^p]<\infty$. We set $\mathcal{S}^p=\mathcal{S}_{0}^p$. \item $\mathcal{S}_{\beta,i}^p$ is the subset of non-decreasing processes $k \in \mathcal{S}_{\beta}^p$ such that $k_0 = 0$. We set $\mathcal{S}_{i}^p=\mathcal{S}_{0,i}^p$. \item $\mathbb{L}_{\beta}^p$ is the set of real-valued c{\`a}dl{\`a}g adapted processes $y$ such that $||y||^p_{\mathbb{L}_{\beta}^p} :=\underset{\tau\in\mathcal{T}_0}{\sup}\mathbb{E}[e^{\beta p \tau}|y_{\tau}|^p]<\infty$. $\mathbb{L}_{\beta}^p$ is a Banach space (see Theorem 22, p.~60, in \cite{DM82} for the case $p=1$). We set $\mathbb{L}^p=\mathbb{L}_{0}^p$. \item $\mathcal{H}^{p,d}$ is the set of $\mathcal{P}$-measurable, $I\!\!R^d$-valued processes $v$ such that $\mathbb{E}[(\int_0^T|v_s|^2ds)^{p/2}] <\infty$. \item $L^p_{\nu}$ is the set of measurable functions $l:I\!\!R^*\to I\!\!R$ such that $\int_{I\!\!R^*}|l(u)|^p\nu(du)<+\infty$. The set $L^2_{\nu}$ is a Hilbert space equipped with the scalar product $\langle\delta,l\rangle_{\nu}:=\int_{I\!\!R^*}\delta(u)l(u)\nu(du)$ for all $(\delta,l)\in L^2_{\nu}\times L^2_{\nu}$, and the norm $|l|_{\nu,2}:=\left(\int_{I\!\!R^*}|l(u)|^2\nu(du)\right)^{1/2}$. If there is no risk of confusion, we sometimes write $|l|_{\nu}:=|l|_{\nu,2}$. \item $\mathcal{B}(I\!\!R^d)$ (resp. $\mathcal{B}(L^p_{\nu})$) is the Borel $\sigma$-algebra on $I\!\!R^d$ (resp. on $L^p_{\nu}$). \item $\mathcal{H}^{p,d}_{\nu}$ is the set of predictable processes $l$, i.e. measurable maps \[l:([0,T]\times \Omega \times I\!\!R^*,\mathcal{P}\otimes \mathcal{B}(I\!\!R^*))\to (I\!\!R^d, \mathcal{B}(I\!\!R^d)); \quad (\omega,t,u)\mapsto l_t(\omega,u)\] such that $\|l\|_{\mathcal{H}^{p,d}_{\nu}}^p:=\mathbb{E}\left[\left(\int_0^T\sum_{j=1}^d|l^j_t|^2_{\nu}dt\right)^{\frac{p}{2}}\right]<\infty$. For $d=1$, we write $\mathcal{H}^{p,1}_{\nu}:=\mathcal{H}^{p}_{\nu}$. \item $\mathcal{P}_p(I\!\!R)$ is the set of probability measures on $I\!\!R$ with finite $p$th moment. We equip the space $\mathcal{P}_p(I\!\!R)$ with the $p$-Wasserstein distance, denoted by $\mathcal{W}_p$ and defined as \begin{align*} \mathcal{W}_p(\mu, \nu) := \inf \left\{\int_{I\!\!R \times I\!\!R} |x - y|^p \pi(dx, dy) \right\}^{1/p}, \end{align*} where the infimum is taken over probability measures $\pi \in\mathcal{P}_p(I\!\!R\times I\!\!R)$ with first and second marginals $\mu$ and $\nu$, respectively. \end{itemize} \section{Zero-sum mean-field Dynkin games and link with mean-field doubly reflected BSDEs}\label{MFDG} \noindent In this section, we introduce a new class of zero-sum mean-field Dynkin games which have the particularity that the payoff depends on the value of the game, which is shown to exist under specific assumptions. For this purpose, we first recall the notion of $f$-conditional expectation introduced by Peng (see e.g. \cite{Peng07}), which is denoted by $\mathcal{E}^{f}$ and extends the standard conditional expectation to the nonlinear case.
\begin{Definition}[Conditional $f$-expectation] We recall that if $f$ is a Lipschitz driver and $\xi$ a random variable belonging to $L^p(\mathcal{F}_T)$, then there exists a unique solution $(X,\pi, \psi) \in \mathcal{S}^p \times \mathcal{H}^{p,d} \times \mathcal{H}_\nu^{p}$ to the following BSDE
\begin{align*}
X_t=\xi+\int_t^T f(s,X_s,\pi_s,\psi_s)ds-\int_t^T\pi_sdB_s -\int_t^T \int_{I\!\!R^\star}\psi_s(e) \tilde{N}(ds,de)\text{ for all } t \in [0,T] {\rm \,\, a.s. }
\end{align*}
For $t\in [0,T]$, the nonlinear operator $\mathcal{E}^{f}_{t,T} : L^p(\mathcal{F}_T) \to L^p(\mathcal{F}_t)$ which maps a given terminal condition $\xi \in L^p(\mathcal{F}_T)$ to the first component $X_t$ at time $t$ of the solution of the above BSDE is called the conditional $f$-expectation at time $t$. It is well known that this notion can be extended to the case where the (deterministic) terminal time $T$ is replaced by a general stopping time $\tau \in \mathcal{T}_0$ and $t$ is replaced by a stopping time $S$ such that $S \leq \tau$ a.s.
\end{Definition}
\noindent In the sequel, given a Lipschitz continuous driver $f$ and a process $Y$, we denote by $f \circ Y$ the map $(f \circ Y)(t,\omega,y,z,u):= f(t,\omega,y,z,u, \mathbb{P}_{Y_t})$.\\
\noindent We introduce the following definitions.
\begin{Definition} Consider the map $\underline{\Psi}:\mathbb{L}_\beta^p \longrightarrow \mathbb{L}_\beta^p$ given by
$$\underline{\Psi}(Y)_t:=\underset{\tau \in \mathcal{T}_t}{{\rm ess}\,\sup\limits}\,\underset{\sigma \in \mathcal{T}_t}{{\rm ess}\, \inf\limits}\,\mathcal{E}_{t,\tau\wedge \sigma}^{f \circ Y}[h_1(\tau, Y_\tau, \mathbb{P}_{Y_s}|_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma, Y_\sigma, \mathbb{P}_{Y_s}|_{s=\sigma}){\bf 1}_{\{\sigma < \tau\}}+\xi{\bf 1}_{\{\sigma \wedge \tau = T\}}].$$
We define the \textit{first} or \textit{lower value function} of the zero-sum mean-field Dynkin game, denoted by $\underline{V}$, as the \textit{fixed point} of the map $\underline{\Psi}$, i.e. it satisfies
\begin{align}\label{lower}
\underline{V}_t=\underline{\Psi}(\underline{V})_t.
\end{align}
\end{Definition}
\begin{Definition} Let the map $\overline{\Psi}:\mathbb{L}_\beta^p \longrightarrow \mathbb{L}_\beta^p$ be given by
$$\overline{\Psi}(Y)_t:=\underset{\sigma \in \mathcal{T}_t}{{\rm ess}\, \inf\limits}\, \underset{\tau \in \mathcal{T}_t}{{\rm ess}\,\sup\limits}\,\mathcal{E}_{t,\tau\wedge \sigma}^{f \circ Y}[h_1(\tau, Y_\tau, \mathbb{P}_{Y_s}|_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma, Y_\sigma, \mathbb{P}_{Y_s}|_{s=\sigma}){\bf 1}_{\{\sigma < \tau\}}+\xi{\bf 1}_{\{\sigma \wedge \tau = T\}}].$$
We define the \textit{second} or \textit{upper value function} of the zero-sum mean-field Dynkin game, denoted by $\overline{V}$, as the \textit{fixed point} of the map $\overline{\Psi}$, i.e. it satisfies
\begin{align}\label{upper}
\overline{V}_t=\overline{\Psi}(\overline{V})_t.
\end{align}
\end{Definition}
\begin{Definition}
The zero-sum mean-field Dynkin game is said to admit a \textit{common value function}, called the \textit{value of the game}, if $\underline{V}$ and $\overline{V}$ exist and $\underline{V}_t=\overline{V}_t$ for all $t \in [0,T]$, $\mathbb{P}$-a.s.
\end{Definition}
More precisely, the value of the game, denoted by $V$, corresponds to the common fixed point of the maps $\overline{\Psi}$ and $\underline{\Psi}$, i.e. it satisfies $V_t=\overline{\Psi}(V)_t=\underline{\Psi}(V)_t$.
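Although the general game is driven by a jump--diffusion and the coupling runs through the law $\mathbb{P}_{Y_t}$, the fixed-point structure of $\overline{\Psi}$ and $\underline{\Psi}$ can already be seen in a degenerate toy case. The following minimal Python sketch (all coefficients are illustrative assumptions of ours, not those of the paper) runs the Picard iteration $Y^{(m)}=\Psi(Y^{(m-1)})$ with deterministic data, so that each law collapses to a Dirac mass and one application of $\Psi$ reduces to a backward Euler step reflected between the two obstacles, both evaluated along the frozen iterate:
\begin{verbatim}
import numpy as np

# Minimal sketch (illustrative assumptions only) of the Picard iteration
# Y^(m) = Psi(Y^(m-1)). Deterministic data: the law P_{Y_t} is a Dirac mass,
# so the mean-field argument is the scalar E[Y_t] = Y_t evaluated along the
# frozen iterate, and one application of Psi is a reflected backward step.

T, n_steps, n_iter = 1.0, 200, 100
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

f  = lambda s, y, m: 0.1 - 0.5 * y + 0.2 * max(0.05, y - m)        # hypothetical driver
h1 = lambda s, y, m: -1.0 + 0.1 * np.sin(2 * np.pi * s) + 0.1 * m  # lower obstacle
h2 = lambda s, y, m:  1.0 + 0.1 * np.cos(2 * np.pi * s) + 0.1 * m  # upper obstacle
xi = 0.0   # terminal condition, lying between the obstacles at time T

Y = np.zeros(n_steps + 1)            # Y^(0) = 0
for m_it in range(n_iter):
    Y_prev = Y.copy()                # frozen input of Psi (value and "law")
    Y = np.empty(n_steps + 1)
    Y[-1] = xi
    for k in range(n_steps - 1, -1, -1):
        # driver step for f o Y_prev, then reflection between the obstacles,
        # both evaluated along the frozen iterate, as in the definition of Psi
        y = Y[k + 1] + f(t[k], Y[k + 1], Y_prev[k + 1]) * dt
        Y[k] = min(max(y, h1(t[k], Y_prev[k], Y_prev[k])),
                   h2(t[k], Y_prev[k], Y_prev[k]))
    if np.max(np.abs(Y - Y_prev)) < 1e-10:
        break

print(f"fixed point reached after {m_it + 1} iterations, V_0 ~ {Y[0]:.6f}")
\end{verbatim}
In this degenerate setting the ordering of ${\rm ess}\sup$ and ${\rm ess}\inf$ is immaterial, so the sketch does not distinguish $\overline{\Psi}$ from $\underline{\Psi}$; it is only meant to make the fixed-point definitions concrete.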
The main result of this section consists in providing conditions under which the game admits a value, showing existence and uniqueness of the solution of a \textit{new class} of mean-field doubly reflected BSDEs given below, and establishing the connection between the value of the mean-field Dynkin game and the solution of this mean-field doubly reflected BSDE.\\
\noindent Let us introduce the new class of doubly reflected BSDEs of mean-field type associated with the driver $f$, the terminal condition $\xi$, the lower barrier $h_1$ and the upper barrier $h_2$.
\begin{Definition}
We say that the quintuple of progressively measurable processes $(Y_t, Z_t, U_t, K^1_t, K_t^2)_{t\leq T}$ is a solution of the mean-field doubly reflected BSDE associated with $(f,\xi, h_1, h_2)$ if, for $p\geq 2$,
\begin{align}\label{BSDE1}
\begin{cases}
(i) \quad Y \in \mathcal{S}^p, Z\in \mathcal{H}^{p,d}, U\in \mathcal{H}^{p}_{\nu}, K^1 \in \mathcal{S}_i^p \text{ and } K^2 \in \mathcal{S}_i^p,\\
(ii)\quad Y_t = \xi +\int_t^T f(s,Y_{s}, Z_{s}, U_{s}, \mathbb{P}_{Y_s})ds + (K^1_T - K^1_t)-(K^2_T-K^2_t) \\
\qquad\qquad -\int_t^T Z_s dB_s - \int_t^T \int_{I\!\!R^\star} U_s(e) \tilde N(ds,de), \quad t \in [0,T],\\
(iii) \quad h_2(t,Y_{t},\mathbb{P}_{Y_t}) \geq Y_{t}\geq h_1(t,Y_{t},\mathbb{P}_{Y_t}), \quad \forall t \in [0,T],\\
(iv) \quad \int_0^T (Y_{t-}-h_1(t,Y_{t-},\mathbb{P}_{Y_{t-}}))dK^1_t = 0, \,\,\,\int_0^T (Y_{t-}-h_2(t,Y_{t-},\mathbb{P}_{Y_{t-}}))dK^2_t = 0, \\
(v) \quad dK^1_t \perp dK^2_t,\quad t \in [0,T].
\end{cases}
\end{align}
\end{Definition}
The last condition, $dK^1_t \perp dK^2_t$, is imposed in order to ensure the uniqueness of the solution. For the reader's convenience, we recall here the definition of mutually singular measures associated with increasing predictable processes.
\begin{Definition}
Let $A=(A_t)_{0 \leq t \leq T}$ and $A'=(A'_t)_{0 \leq t \leq T}$ be elements of $\mathcal{S}_{i}^p$. The measures $dA_t$ and $dA'_t$ are said to be mutually singular, and we write $dA_t \perp dA'_t$, if there exists $D \in \mathcal{P}$ such that
$$\mathbb{E}\left[\int_0^T 1_{D^c}dA_t\right]=\mathbb{E}\left[\int_0^T 1_{D}dA'_t\right]=0.$$
\end{Definition}
We make the following assumption on $(f,h_1,h_2,\xi)$.
\begin{Assumption} \label{generalAssump} The coefficients $f,h_1$, $h_2$ and $\xi$ satisfy the following properties.
\begin{itemize}
\item[(i)] $f$ is a mapping from $[0,T] \times \Omega \times I\!\!R \times I\!\!R \times L^p_{\nu} \times \mathcal{P}_p(I\!\!R)$ into $I\!\!R$ such that
\begin{itemize}
\item[(a)] the process $(f(t,0,0,0,\delta_0))_{t\leq T}$ is $\mathcal{P}$-measurable and belongs to $\mathcal{H}^{p,1}$;
\item[(b)] $f$ is Lipschitz continuous w.r.t. $(y,z,u,\mu)$ uniformly in $(t,\omega)$, i.e. there exists a positive constant $C_f$ such that $\mathbb{P}$-a.s. for all $t\in [0,T],$
\[|f(t,y_1,z_1,u_1,\mu_1)-f(t,y_2,z_2,u_2,\mu_2)|\leq C_f(|y_1-y_2|+|z_1-z_2|+|u_1-u_2|_{\nu}+\mathcal{W}_p(\mu_1, \mu_2))\]
for any $y_1,y_2\in I\!\!R,$ $z_1, z_2 \in I\!\!R$, $u_1, u_2 \in L^p_{\nu}$ and $\mu_1, \mu_2 \in \mathcal{P}_p(I\!\!R)$;
\item[(c)] $d\mathbb{P} \otimes dt$-a.e.,
for each $(y,z,u_1,u_2,\mu) \in I\!\!R^2 \times (L^2_\nu)^2 \times \mathcal{P}_p(I\!\!R),$
\begin{align}
f(t,y,z,u_1,\mu)-f(t,y,z,u_2,\mu) \geq \langle\gamma_t^{y,z,u_1,u_2,\mu},u_1-u_2\rangle_\nu,
\end{align}
where
\begin{align*}
\gamma:[0,T] \times \Omega \times I\!\!R^2 \times (L^2_\nu)^2 \times \mathcal{P}_p(I\!\!R) \to L^2_\nu;\\
(\omega,t,y,z,u_1,u_2,\mu) \mapsto \gamma_t^{y,z,u_1,u_2,\mu}(\omega,\cdot)
\end{align*}
is $\mathcal{P} \otimes \mathcal{B}(I\!\!R^2) \otimes \mathcal{B}((L^2_\nu)^2) \otimes \mathcal{B}(\mathcal{P}_p(I\!\!R))$-measurable and satisfies $\|\gamma_t^{y,z,u_1,u_2,\mu}(\cdot)\|_\nu \leq C$ for all $(y,z,u_1,u_2,\mu) \in I\!\!R^2 \times (L^2_\nu)^2 \times \mathcal{P}_p(I\!\!R)$, $d\mathbb{P} \otimes dt$-a.e., where $C$ is a positive constant, as well as $\gamma_t^{y,z,u_1,u_2,\mu}(e) \geq -1$, for all $(y,z,u_1,u_2,\mu) \in I\!\!R^2 \times (L^2_\nu)^2 \times \mathcal{P}_p(I\!\!R)$, $d\mathbb{P} \otimes dt \otimes d\nu(e)$-a.e.
\end{itemize}
\item[(ii)] $h_1$, $h_2$ are measurable mappings from $[0,T] \times \Omega \times I\!\!R \times \mathcal{P}_p(I\!\!R)$ into $I\!\!R$ such that $h_1(t,y,\mu):=\tilde{h}_1(t,y,\mu) \wedge S_t$ and $h_2(t,y,\mu):=\tilde{h}_2(t,y,\mu) \vee S'_t$, where
\begin{itemize}
\item[(a)] $S$ and $S'$ are quasimartingales in $\mathcal{S}^p$, with $S_t \leq S'_t$ $\mathbb{P}$-a.s. for all $0 \leq t \leq T$,
\item[(b)] the processes $\left(\sup_{(y,\mu)\in I\!\!R\times\mathcal{P}_p(I\!\!R)}|h_1(t,y,\mu)|\right) _{0\le t\leq T}$ and $\left(\sup_{(y,\mu)\in I\!\!R\times\mathcal{P}_p(I\!\!R)}|h_2(t,y,\mu)|\right) _{0\le t\leq T}$ belong to $\mathcal{S}^p$,
\item[(c)] $\tilde{h}_1$ (resp. $\tilde{h}_2$) is Lipschitz continuous w.r.t. $(y,\mu)$ uniformly in $(t,\omega)$, i.e. there exist two positive constants $\gamma_1$ (resp. $\kappa_1$) and $\gamma_2$ (resp. $\kappa_2$) such that $\mathbb{P}$-a.s. for all $t\in [0,T],$
\[|\tilde{h}_1(t,y_1,\mu_1)-\tilde{h}_1(t,y_2,\mu_2)|\leq \gamma_1|y_1-y_2|+\gamma_2\mathcal{W}_p(\mu_1, \mu_2),\]
\[|\tilde{h}_2(t,y_1,\mu_1)-\tilde{h}_2(t,y_2,\mu_2)|\leq \kappa_1|y_1-y_2|+\kappa_2\mathcal{W}_p(\mu_1, \mu_2)\]
for any $y_1,y_2\in I\!\!R$ and $\mu_1,\mu_2 \in \mathcal{P}_p(I\!\!R)$,
\item[(d)] $h_1(t,y,\mu) \leq h_2(t,y,\mu)$ for all $y \in I\!\!R$ and $\mu \in \mathcal{P}_p(I\!\!R)$.
\end{itemize}
\item[(iii)] $\xi \in L^p({\cal F}_T)$ and satisfies $h_2(T,\xi,\mathbb{P}_{\xi}) \geq \xi \geq h_1(T,\xi,\mathbb{P}_{\xi})$.
\end{itemize}
\end{Assumption}
\begin{Remark}\label{moko}
The above assumption (ii) on the obstacles $h_1$ and $h_2$ implies the Mokobodzki condition
$$ h_1(t,y,\mu)\le S^{\prime}_t\le h_2(t,y,\mu), \quad \text{for all} \,\,\, (t,y,\mu)\in [0,T]\times I\!\!R \times \mathcal{P}_p(I\!\!R), \quad \mathbb{P}\text{-a.s.}, $$
since $S^{\prime}$ is a quasimartingale.
\end{Remark}
\begin{Remark}
We note that if $\tilde{h}_1$ (resp. $\tilde{h}_2$) depends only on $\mu$, i.e. $\tilde{h}_1(t,y,\mu)=\tilde{h}_1(t,\mu)$ (resp. $\tilde{h}_2(t,y,\mu)=\tilde{h}_2(t,\mu)$), then the domination condition (ii)(b) can be dropped.
\end{Remark}
\begin{Theorem}[\textit{Existence of the value and link with mean-field doubly reflected BSDEs}]\label{Dynkin}
Suppose that Assumption \ref{generalAssump} is in force for some $p\geq 2$. Assume that $\gamma_1$, $\gamma_2$, $\kappa_1$ and $\kappa_2$ satisfy
\begin{align} \label{ExistenceCond}
\gamma_1^p+\gamma_2^p+\kappa_1^p+ \kappa_2^p < 2^{3-\frac{5p}{2}}.
\end{align}
Then,
\begin{itemize}
\item[(i)] The \textit{mean-field Dynkin game} admits a value $V\in \mathcal{S}^{p}$.
\item[(ii)] The mean-field doubly reflected BSDE \eqref{BSDE1} has a unique solution $(Y,Z,U,K^{1},K^{2})$ in $\mathcal{S}^{p} \otimes \mathcal{H}^{p} \otimes \mathcal{H}_{\nu}^{p} \otimes \mathcal{S}_{i}^{p} \otimes \mathcal{S}_{i}^p$.
\item[(iii)] We have $V_\cdot=Y_\cdot$.
\end{itemize}
\end{Theorem}
\begin{Remark}
In the Brownian case, existence and uniqueness of the solution to \eqref{BSDE1} is derived in \cite{CHM20} in the particular case where the mean-field coupling is in terms of $\mathbb{E}[Y_t]$ and the driver $f$ does not depend on $z$ and $u$, under the so-called `strict separation of obstacles' condition. The result in \cite{CHM20} is obtained under a smallness condition different from \eqref{ExistenceCond}.
\end{Remark}
\begin{proof}
\noindent \uline{\textit{Step 1}} (\textit{Well-posedness and contraction property of the operators $\overline{\Psi}$ and $\underline{\Psi}$}). We derive these properties only for the operator $\overline{\Psi}$, since a similar proof holds for the operator $\underline{\Psi}$.
\medskip

We first show that $\overline{\Psi}$ is a well-defined map from $\mathbb{L}^{p}_{\beta}$ to itself. Indeed, let $\bar{Y} \in \mathbb{L}^{p}_{\beta}$. Since $h_1$ and $h_2$ satisfy Assumption \ref{generalAssump} (ii), it follows that $h_1(t,\bar{Y}_t,\mathbb{P}_{\bar{Y}_t}) \in \mathcal{S}^p$ and $h_2(t,\bar{Y}_t,\mathbb{P}_{\bar{Y}_t}) \in \mathcal{S}^p$. Therefore, there exists a unique solution $(\hat{Y}, \hat{Z}, \hat{U}, \hat{K}^1, \hat{K}^2) \in \mathcal{S}^p \times \mathcal{H}^{p,1} \times \mathcal{H}^p_\nu \times (\mathcal{S}_{i}^p)^2$ of the doubly reflected BSDE associated with the obstacle processes $h_1(t,\bar{Y}_t,\mathbb{P}_{\bar{Y}_t})$ and $h_2(t,\bar{Y}_t,\mathbb{P}_{\bar{Y}_t})$, the terminal condition $\xi$ and the driver $(f \circ {\bar{Y}})(t,\omega,y,z,u)$. Since $f$ satisfies Assumption \ref{generalAssump} (i)(c), by classical results on the link between the $Y$-component of the solution of a doubly reflected BSDE and optimal stopping games with nonlinear expectations (see e.g. \cite{dqs16}), we obtain $\overline{\Psi}(\bar{Y})=\hat{Y} \in \mathcal{S}^p \subset \mathbb{L}^{p}_{\beta}$.

Let us now show that $\overline{\Psi}: \mathbb{L}_{\beta}^p \longrightarrow \mathbb{L}_{\beta}^p$ is a contraction on the time interval $[T-\delta,T]$, for some small $\delta >0$ to be chosen appropriately.
\medskip

First, note that by the Lipschitz continuity of $f$, $h_1$ and $h_2$, for $Y,\bar{Y} \in \mathcal{S}^{p}_\beta$, $Z,\bar{Z} \in \mathcal{H}^{p,1}$, and $U,\bar{U} \in \mathcal{H}_\nu^{p}$,
\begin{equation}
\begin{array}{lll}\label{f-h-lip-3}
|f(s,Y_s,Z_s,U_s, \mathbb{P}_{Y_s})-f(s,\bar{Y}_s,\bar{Z}_s,\bar{U}_s,\mathbb{P}_{\bar{Y}_s})| \le C_f(|Y_s-\bar{Y}_s|+|Z_s-\bar{Z}_s|+|U_s-\bar{U}_s|_{\nu}+\mathcal{W}_p(\mathbb{P}_{Y_s},\mathbb{P}_{\bar{Y}_s})), \\ \\
|h_1(s,Y_s,\mathbb{P}_{Y_s})-h_1(s,\bar{Y}_s,\mathbb{P}_{\bar{Y}_s})|\le \gamma_1|Y_s-\bar{Y}_s|+\gamma_2 \mathcal{W}_p(\mathbb{P}_{Y_s},\mathbb{P}_{\bar{Y}_s}), \\ \\
|h_2(s,Y_s,\mathbb{P}_{Y_s})-h_2(s,\bar{Y}_s,\mathbb{P}_{\bar{Y}_s})|\le \kappa_1|Y_s-\bar{Y}_s|+\kappa_2 \mathcal{W}_p(\mathbb{P}_{Y_s},\mathbb{P}_{\bar{Y}_s}).
\end{array}
\end{equation}
For the $p$-Wasserstein distance, we have the following inequality: for $0\leq s \leq t \leq T$,
\begin{eqnarray} \label{Wass-property}
\sup_{u\in [s,t]} \mathcal{W}_p(\mathbb{P}_{Y_u},\mathbb{P}_{\bar{Y}_u}) \leq \sup_{u\in [s,t]}(\mathbb{E}[|Y_u-\bar{Y}_u|^p])^{1/p},
\end{eqnarray}
from which we derive the useful inequality
\begin{eqnarray}
\sup_{u\in [s,t]} \mathcal{W}_p(\mathbb{P}_{Y_u},\delta_0) \leq \sup_{u\in [s,t]}(\mathbb{E}[|Y_u|^p])^{1/p}.
\end{eqnarray}
Fix $Y,\bar{Y} \in \mathbb{L}_\beta^p$. For any $T-\delta \leq t \leq T$, by the a priori estimates (A.1) on BSDEs (see the Appendix below), we have
\begin{equation*}\begin{array} {lll}
|\overline{\Psi}(Y)_t-\overline{\Psi}(\bar{Y})_t|^{p} \\
\quad =|\underset{\sigma\in\mathcal{T}_t}{{\rm ess}\, \inf\limits\,} \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,}\mathcal{E}_{t,\tau \wedge \sigma}^{f \circ Y }[h_1(\tau,Y_{\tau},\mathbb{P}_{Y_s|s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,Y_{\sigma},\mathbb{P}_{Y_s|s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi {\bf 1}_{\{\tau \wedge \sigma=T\}}] \\
\quad -\underset{\sigma\in\mathcal{T}_t}{{\rm ess}\, \inf\limits\,} \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,}\mathcal{E}_{t,\tau \wedge \sigma}^{f \circ \bar{Y} }[h_1(\tau,\bar{Y}_{\tau},\mathbb{P}_{\bar{Y}_s|s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,\bar{Y}_{\sigma},\mathbb{P}_{\bar{Y}_s|s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi {\bf 1}_{\{\tau \wedge \sigma=T\}}]|^{p} \\
\quad \le \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,}\left|\mathcal{E}_{t,\tau \wedge \sigma}^{f \circ Y}[h_1(\tau,Y_{\tau},\mathbb{P}_{Y_s|s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,Y_{\sigma},\mathbb{P}_{Y_s|s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi {\bf 1}_{\{\tau \wedge \sigma=T\}}] \right. \\
\left. \qquad - \mathcal{E}_{t,\tau \wedge \sigma}^{f \circ \bar{Y} }[h_1(\tau,\bar{Y}_{\tau},\mathbb{P}_{\bar{Y}_s|s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,\bar{Y}_{\sigma},\mathbb{P}_{\bar{Y}_s|s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi {\bf 1}_{\{\tau \wedge \sigma=T\}}] \right|^{p}\\
\quad \le \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \eta^p 2^{\frac{p}{2}-1}\mathbb{E}\left[\left(\int_t^{\tau \wedge \sigma} e^{2 \beta (s-t)} |(f \circ Y) (s,\widehat{Y}^{\tau, \sigma}_s,\widehat{Z}^{\tau, \sigma}_s,\widehat{U}^{\tau, \sigma}_s) \right.\right. \\
\left.\left. \qquad\qquad\qquad -(f \circ \bar{Y}) (s,\widehat{Y}^{\tau, \sigma}_s, \widehat{Z}^{\tau, \sigma}_s,\widehat{U}^{\tau, \sigma}_s)|^2 ds\right)^{p/2} \right. \\
\left. \qquad\qquad\qquad\qquad +2^{p/2-1} e^{p \beta(\tau \wedge \sigma-t)}|\left(h_1(\tau,Y_\tau,\mathbb{P}_{Y_s|s=\tau})-h_1(\tau,\bar{Y}_\tau, \mathbb{P}_{\bar{Y}_s|s=\tau})\right){\bf 1}_{\{\tau \leq \sigma<T\}} \right. \\
\left. \qquad\qquad\qquad\qquad\qquad +\left(h_2(\sigma,Y_\sigma,\mathbb{P}_{Y_s|s=\sigma})-h_2(\sigma,\bar{Y}_\sigma, \mathbb{P}_{\bar{Y}_s|s=\sigma})\right){\bf 1}_{\{\sigma < \tau\}}|^p |\mathcal{F}_t\right] \\
\quad \le \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \eta^p 2^{\frac{p}{2}-1} \mathbb{E}\left[\left(\int_t^{\tau \wedge \sigma} e^{2 \beta (s-t)}|f(s,\widehat{Y}^{\tau, \sigma}_s,\widehat{Z}^{\tau, \sigma}_s,\widehat{U}^{\tau, \sigma}_s, \mathbb{P}_{Y_s}) \right.\right. \\
\left.\left.
\qquad\qquad\qquad\qquad\qquad\qquad\qquad -f(s,\widehat{Y}^{\tau, \sigma}_s,\widehat{Z}^{\tau, \sigma}_s,\widehat{U}^{\tau, \sigma}_s, \mathbb{P}_{\bar{Y}_s})|^2 ds \right)^{p/2} \right. \\
\left. \qquad\qquad\qquad\qquad\qquad\qquad \qquad+2^{\frac{p}{2}-1} e^{p \beta(\tau \wedge \sigma-t)}(|h_1(\tau,Y_\tau,\mathbb{P}_{Y_s|s=\tau})-h_1(\tau,\bar{Y}_\tau, \mathbb{P}_{\bar{Y}_s|s=\tau})|\right. \\
\left. \qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+|h_2(\sigma,Y_\sigma,\mathbb{P}_{Y_s|s=\sigma})-h_2(\sigma,\bar{Y}_\sigma, \mathbb{P}_{\bar{Y}_s|s=\sigma})|)^p |\mathcal{F}_t\right],
\end{array}
\end{equation*}
with $\eta$, $\beta>0$ such that $\eta \leq \frac{1}{C_f^2}$ and $\beta \geq 2 C_f+\frac{3}{\eta}$, where $(\widehat{Y}^{\tau, \sigma},\widehat{Z}^{\tau, \sigma},\widehat{U}^{\tau, \sigma})$ is the solution of the BSDE associated with driver $f\circ \bar{Y}$, terminal time $\tau \wedge \sigma$ and terminal condition $h_1(\tau,{\bar{Y}}_\tau,\mathbb{P}_{\bar{Y}_s|s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,{\bar{Y}}_\sigma,\mathbb{P}_{\bar{Y}_s|s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi{\bf 1}_{\{\tau \wedge \sigma=T\}}$. Therefore, using \eqref{f-h-lip-3} and the fact that, for $\rho=\tau,\sigma$,
\begin{equation} \label{Wass-property-1}
\mathcal{W}_p^{p} (\mathbb{P}_{Y_s|s=\rho},\mathbb{P}_{\bar{Y}_s|s=\rho})\le \mathbb{E}[|Y_s-\bar{Y}_s|^p]_{|s=\rho}\le \underset{\rho\in \mathcal{T}_t}{\sup}\mathbb{E}[|Y_{\rho}-\bar{Y}_{\rho}|^p],
\end{equation}
we have, for any $t\in [T-\delta,T]$,
\begin{equation*}\label{ineq-evsn-3}
\begin{array} {lll}
e^{p \beta t}|\overline{\Psi}(Y)_t-\overline{\Psi}(\bar{Y})_t|^p \le \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \mathbb{E}\left[\int_t^{\tau \wedge \sigma} e^{p \beta s}\delta^{\frac{p-2}{2}}2^{\frac{p}{2}-1} \eta^pC_f^p\mathbb{E}[|Y_s-\bar{Y}_s|^p]ds \right. \\
\left. \qquad +2^{\frac{p}{2}-1} e^{p \beta(\tau \wedge \sigma)}\left\{\gamma_1|Y_\tau-\bar{Y}_\tau|+\gamma_2\left(\mathbb{E}[|Y_s-\bar{Y}_s|^p]_{|s=\tau}\right)^{1/p}+\kappa_1|Y_\sigma-\bar{Y}_\sigma|+\kappa_2\left(\mathbb{E}[|Y_s-\bar{Y}_s|^p]_{|s=\sigma}\right)^{1/p}\right\}^p |\mathcal{F}_t\right] \\
\qquad \le \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \mathbb{E}\left[\int_t^{\tau \wedge \sigma} e^{p \beta s}2^{\frac{p}{2}-1} \delta^{\frac{p-2}{2}} \eta^pC_f^p\mathbb{E}[|Y_s-\bar{Y}_s|^p]ds \right. \\
\left. \qquad\qquad\qquad\qquad +2^{\frac{p}{2}-1} e^{p \beta(\tau \wedge \sigma)}\left\{4^{p-1}\gamma_1^p|Y_\tau-\bar{Y}_\tau|^p+4^{p-1}\gamma_2^p\mathbb{E}[|Y_s-\bar{Y}_s|^p]_{|s=\tau} \right. \right. \\
\left. \left. \qquad\qquad\qquad\qquad\qquad +4^{p-1}\kappa_1^p|Y_\sigma-\bar{Y}_\sigma|^p+4^{p-1}\kappa_2^p\mathbb{E}[|Y_s-\bar{Y}_s|^p]_{|s=\sigma}\right\} |\mathcal{F}_t\right].
\end{array}
\end{equation*}
Therefore,
$$
e^{p \beta t}|\overline{\Psi}(Y)_t-\overline{\Psi}(\bar{Y})_t|^p \le \underset{\tau\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \mathbb{E}[G^1(\tau)|\mathcal{F}_t]+\underset{\sigma\in\mathcal{T}_t}{{\rm ess}\,\sup\limits\,} \mathbb{E}[G^2(\sigma)|\mathcal{F}_t]:=V^1_t+V^2_t,
$$
where
\begin{equation}\begin{array}{lll}
G^1(\tau):=\int_{T-\delta}^{\tau}e^{p \beta s}2^{\frac{p}{2}-1} \eta^p \delta^{\frac{p-2}{2}} C_f^p\mathbb{E}[|Y_s-\bar{Y}_s|^p]ds \\
\qquad\qquad\qquad +2^{\frac{p}{2}-1}e^{p \beta \tau}(4^{p-1}\gamma_1^p|Y_{\tau}-\bar{Y}_{\tau}|^p+4^{p-1}\gamma_2^p\mathbb{E}[|Y_{s}-\bar{Y}_{s}|^p]_{|s=\tau}), \\
G^2(\sigma) :=2^{\frac{p}{2}-1} e^{p \beta \sigma}(4^{p-1}\kappa_1^p|Y_{\sigma}-\bar{Y}_{\sigma}|^p+4^{p-1}\kappa_2^p\mathbb{E}[|Y_{s}-\bar{Y}_{s}|^p]_{|s=\sigma}),
\end{array}
\end{equation}
which yields
\begin{equation}\label{est-1}
\underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[e^{p \beta \tau}|\overline{\Psi}(Y)_{\tau}-\overline{\Psi}(\bar{Y})_{\tau}|^p] \le \underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[V^1_{\tau}+V^2_{\tau}].
\end{equation}
We claim that
\begin{equation}\label{est-2}
\underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[V_{\tau}]\le \alpha \underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[e^{p \beta \tau}|Y_{\tau}-\bar{Y}_{\tau}|^p],
\end{equation}
where $\alpha:=2^{\frac{p}{2}-1}\delta^{1+\frac{p-2}{2}} \eta^p C_f^p+2^{\frac{p}{2}-1}4^{p-1}(\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p)$ and $V_t:=V_t^{1}+V_t^2$.
Indeed, let $\sigma \in \mathcal{T}_0$. By Lemma D.1 in \cite{KS98}, there exist sequences $(\tau^1_n)_n$ and $(\tau^2_n)_n$ of stopping times in $\mathcal{T}_{\sigma}$ such that
$$ V^1_{\sigma}=\underset{n\to\infty}{\lim}\mathbb{E}[G^1(\tau^1_n)|\mathcal{F}_{\sigma}] \quad \text{and} \quad V^2_{\sigma}=\underset{n\to\infty}{\lim}\mathbb{E}[G^2(\tau^2_n)|\mathcal{F}_{\sigma}]. $$
Therefore, by Fatou's Lemma, we have
$$ \mathbb{E}[V^1_{\sigma}]+\mathbb{E}[V^2_{\sigma}]\le \underset{n\to\infty}{\underline{\lim}}\mathbb{E}[G^1(\tau^1_n)]+\underset{n\to\infty}{\underline{\lim}}\mathbb{E}[G^2(\tau^2_n)]\le \underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[G^1(\tau)]+\underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[G^2(\tau)]. $$
Using \eqref{Wass-property-1} and noting that $e^{p\beta\tau}\mathbb{E}[|Y_s-\bar{Y}_s|^p]_{|s=\tau}=\mathbb{E}[e^{p\beta s}|Y_s-\bar{Y}_s|^p]_{|s=\tau}$, we obtain
$$ \underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[G^1(\tau)]+\underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[G^2(\tau)] \le \alpha \underset{\tau\in\mathcal{T}_{T-\delta}}{\sup}\mathbb{E}[e^{p \beta \tau}|Y_{\tau}-\bar{Y}_{\tau}|^p], $$
which in turn yields \eqref{est-2}. Assuming that $(\gamma_1,\gamma_2, \kappa_1, \kappa_2)$ satisfies
\begin{equation*}
\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p<4^{1-p}2^{1-\frac{p}{2}}
\end{equation*}
(which is precisely condition \eqref{ExistenceCond}, since $4^{1-p}2^{1-\frac{p}{2}}=2^{3-\frac{5p}{2}}$), we can choose
$$ 0<\delta<\left(\frac{1}{2^{\frac{p}{2}-1} \eta^p C_f^p} \left(1-4^{p-1}2^{\frac{p}{2}-1}(\gamma^p_1+\gamma^p_2+\kappa^p_1+\kappa^p_2)\right)\right)^{\frac{p}{2p-2}} $$
to make $\overline{\Psi}$ a contraction on $\mathbb{L}^p_\beta$ over the time interval $[T-\delta,T]$, i.e. $\overline{\Psi}$ admits a unique fixed point over $[T-\delta,T]$.
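To make the smallness condition quantitative, consider for instance the case $p=2$ (a worked instance of the formulas above, not an additional assumption): condition \eqref{ExistenceCond} then reads
$$
\gamma_1^2+\gamma_2^2+\kappa_1^2+\kappa_2^2<2^{-2}=\tfrac14,
$$
and, since $2^{\frac{p}{2}-1}=1$, $4^{p-1}=4$ and $\frac{p}{2p-2}=1$, the admissible window for the contraction is
$$
0<\delta<\frac{1}{\eta^2C_f^2}\Big(1-4(\gamma_1^2+\gamma_2^2+\kappa_1^2+\kappa_2^2)\Big).
$$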
\medskip
\textit{\underline{Step 2} (Existence of the value of the game and link with the mean-field doubly reflected BSDE \eqref{BSDE1}).} Let $\overline{V} \in \mathbb{L}_\beta^p$ be the fixed point of $\overline{\Psi}$ obtained in \textit{Step 1} and let $(\hat{Y},\hat{Z},\hat{U},\hat{K}^1, \hat{K}^2) \in \mathcal{S}^p \times \mathcal{H}^{p,1} \times \mathcal{H}^p_\nu \times (\mathcal{S}_{i}^p)^2$ be the unique solution of the standard doubly reflected BSDE with barriers $h_1(s,\overline{V}_s,\mathbb{P}_{\overline{V}_s})$ and $h_2(s,\overline{V}_s,\mathbb{P}_{\overline{V}_s})$ and driver $g(s,y,z,u):=f(s,y,z,u,\mathbb{P}_{\overline{V}_s})$, i.e.
$$
\begin{array}{lll}
\hat{Y}_t = \xi +\int_t^T f(s,\hat{Y}_{s}, \hat{Z}_s,\hat{U}_s, \mathbb{P}_{\overline{V}_s})ds + (\hat{K}^1_T - \hat{K}^1_t)-(\hat{K}^2_T -\hat{K}^2_t) \\
\qquad\qquad\qquad\qquad -\int_t^T \hat{Z}_s dB_s - \int_t^T \int_{I\!\!R^\star} \hat{U}_s(e) \tilde N(ds,de), \qquad\quad T-\delta \leq t \leq T.
\end{array}
$$
Then, by Theorem 4.1 in \cite{dqs16}, we have
$$ \hat{Y}_t= \overline{\Psi}(\overline{V})_t=\underline{\Psi}(\overline{V})_t, $$
which, combined with \textit{Step 1}, gives $\hat{Y}_\cdot=\overline{V}_{\cdot}$ and $\overline{V}_{\cdot}=\underline{\Psi}(\overline{V})_{\cdot}$. Thus, $\overline{V}_{\cdot}=\underline{V}_{\cdot}=\hat{Y}_{\cdot}$. This relation also yields existence of a solution of \eqref{BSDE1} on $[T-\delta, T]$.
Hence, by the uniqueness of the solution of a doubly reflected BSDE, we obtain uniqueness of the associated processes $(\hat{Z},\hat{U},\hat{K}^1, \hat{K}^2)$, and, combined with the fixed-point property of $V$, we derive existence and uniqueness of the solution of \eqref{BSDE1} on $[T-\delta,T]$.

\noindent Applying the same method as in \textit{Step 1} on each time interval $[T-(j+1)\delta,T-j\delta]$, $1 \leq j \leq m$, with the same operator $\overline{\Psi}$ but with terminal condition $Y_{T-j\delta}$ at time $T-j\delta$, we build recursively, for $j=1$ to $m$, a solution $(Y^j,Z^j,U^j,K^{1,j},K^{2,j})$. Pasting these processes together, we obtain a unique solution $(Y,Z,U,K^{1},K^{2})$ of \eqref{BSDE1} on $[0,T]$.

\noindent By using again the relation between classical Dynkin games and doubly reflected BSDEs, we also get the existence of a \textit{value of the game} $V$, which satisfies $V_\cdot=Y_\cdot$ and therefore also belongs to $\mathcal{S}^p$.
\end{proof}

We now introduce the definition of $S$-saddle points in our setting and provide sufficient conditions on the barriers which ensure their existence.

\paragraph{Existence of an $S$-saddle point.} Assume that the mean-field Dynkin game admits a \textit{common value} $(V_t)$. The associated payoff is denoted by
$$P(\tau,\sigma):= h_1(\tau,V_\tau, \mathbb{P}_{V_\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,V_\sigma, \mathbb{P}_{V_\sigma}){\bf 1}_{\{\sigma < \tau\}}+\xi {\bf 1}_{\{\tau \wedge \sigma=T\}}.$$
We now give the definition of an $S$-saddle point for this game problem.
\begin{Definition} Let $S \in \mathcal{T}_0$. A pair $(\tau^\star, \sigma^\star) \in (\mathcal{T}_S)^2$ is called an $S$-saddle point for the \textit{mean-field Dynkin game problem} if for each $(\tau, \sigma) \in (\mathcal{T}_S)^2$ we have
\begin{align}
\mathcal{E}^{f \circ V}_{S, \tau \wedge \sigma^{\star}}(P(\tau,\sigma^\star)) \leq \mathcal{E}^{f \circ V}_{S, \tau^\star \wedge \sigma^{\star}}(P(\tau^\star,\sigma^\star)) \leq \mathcal{E}^{f \circ V}_{S, \tau^\star \wedge \sigma}(P(\tau^\star,\sigma)).
\end{align}
\end{Definition}
We now provide sufficient conditions which ensure the existence of an $S$-saddle point.
\begin{Theorem}[Existence of $S$-saddle points]\label{saddle}
Suppose that $\gamma_1$, $\gamma_2$, $\kappa_1$ and $\kappa_2$ satisfy condition \eqref{ExistenceCond}. Assume that $h_1$ (resp. $h_2$) is of the form $h_1(t,\omega,y,\mu):=\xi_t(\omega)+\kappa^1(y,\mu)$ (resp. $h_2(t,\omega,y,\mu):=\zeta_t(\omega)+\kappa^2(y,\mu)$), where $\xi$ and $-\zeta$ belong to $\mathcal{S}^p$ and are left-upper semicontinuous processes along stopping times, and $\kappa^1$ (resp. $\kappa^2$) is a bounded Lipschitz function with respect to $(y,\mu)$.
\noindent For each $S \in \mathcal{T}_0$, consider the pair of stopping times $(\tau_{S}^{\star}, \sigma_{S}^\star)$ defined by
\begin{align}\label{S}
\tau_{S}^\star:=\inf \{t \geq S:\,\, V_t=h_1(t,V_t,\mathbb{P}_{V_t})\}\,\,\, \text{ and } \,\,\,\sigma_{S}^\star:=\inf \{t \geq S:\,\, V_t=h_2(t,V_t,\mathbb{P}_{V_t})\}.
\end{align}
Then the pair of stopping times $(\tau_{S}^\star, \sigma_{S}^\star)$ given by \eqref{S} is an $S$-saddle point.
\end{Theorem}
\begin{proof}
Consider the following iterative scheme. Let $V^{(0)} \equiv 0$ (with $\mathbb{P}_{V^{(0)}_t}=\delta_0$) be the starting point, and define
$$V^{(m)}:=\overline{\Psi}(V^{(m-1)}),\quad m\ge 1.$$
By applying the results on standard doubly reflected BSDEs and their relation with classical Dynkin games, we obtain that, for $1 \le i \le m$, $V^{(i)}$ coincides with the component $Y$ of the solution of the doubly reflected BSDE associated with driver $f \circ V^{(i-1)}$ and obstacles $h_1(t,V^{(i-1)}_t,\mathbb{P}_{V^{(i-1)}_t})$ and $h_2(t,V^{(i-1)}_t,\mathbb{P}_{V^{(i-1)}_t})$. Due to the assumptions on $h_1$ and $h_2$, $V^{(1)}$ jumps only at totally inaccessible stopping times and, by induction, the same holds for $V^{(i)}$, for all $i$. Since condition \eqref{ExistenceCond} is satisfied, by Theorem \ref{Dynkin} the sequence $(V^{(m)})_m$ is Cauchy for the $\mathbb{L}_\beta^{p}$-norm and therefore converges in $\mathbb{L}_\beta^{p}$ to the fixed point of the map $\overline{\Psi}$.\\
Let $\tau \in \mathcal{T}_0$ be a predictable stopping time. Since $\Delta V^{(m)}_\tau=0$ a.s. for all $m$, we obtain
\begin{align}
\mathbb{E}\left[|\Delta V_\tau|^p\right]=\mathbb{E}\left[|\Delta V_\tau-\Delta V^{(m)}_\tau|^p\right] \leq 2^p \sup_{\rho \in \mathcal{T}_0}\mathbb{E}\left[|V_\rho-V^{(m)}_\rho|^p\right] \underset{m\to\infty}{\longrightarrow} 0,
\end{align}
which implies that $\Delta V_\tau=0$ a.s. Therefore, $h_1(t,V_t,\mathbb{P}_{V_t})$ and $-h_2(t,V_t,\mathbb{P}_{V_t})$ are left-upper semicontinuous along stopping times. By Theorem 3.7 (ii) in \cite{dqs16}, $(\tau_{S}^\star, \sigma_{S}^\star)$ given by \eqref{S} is an $S$-saddle point.
\end{proof}
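As a purely numerical illustration: in the deterministic sketch given after the definitions of $\overline{\Psi}$ and $\underline{\Psi}$ above (hypothetical coefficients, not those of the paper), the candidate saddle point \eqref{S} with $S=0$ can be read off the discretized value as the first grid times at which the value touches each obstacle, with the convention $\inf\emptyset=T$:
\begin{verbatim}
import numpy as np

# Continuation of the earlier deterministic sketch: the grid t, the fixed
# point Y (playing the role of the value V), the horizon T and the obstacle
# functions h1, h2 are reused from there.
tol = 1e-8
L = np.array([h1(s, v, v) for s, v in zip(t, Y)])   # h1(t, V_t, P_{V_t})
U = np.array([h2(s, v, v) for s, v in zip(t, Y)])   # h2(t, V_t, P_{V_t})

hit_lo = np.flatnonzero(Y <= L + tol)   # grid times where V touches h1
hit_hi = np.flatnonzero(Y >= U - tol)   # grid times where V touches h2

tau_star   = t[hit_lo[0]] if hit_lo.size else T   # inf over empty set: T
sigma_star = t[hit_hi[0]] if hit_hi.size else T
print(f"tau* = {tau_star:.3f}, sigma* = {sigma_star:.3f}")
\end{verbatim}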
\section{Weakly interacting zero-sum Dynkin games}\label{WIDG}
\noindent We now interpret the mean-field Dynkin game studied in the previous section at the particle level and introduce the \textit{weakly interacting zero-sum Dynkin games}.\\
\noindent Given a vector $\textbf{x} := (x^1, \ldots, x^n) \in I\!\!R^n$, denote by
$$ L_n [\textbf{x}] := \frac{1}{n} \sum_{k=1}^n \delta_{x^k} $$
the empirical measure associated with $\bf x$.
\medskip

\noindent Let $\{B^i\}_{1\leq i \leq n}$, $\{\tilde N^i\}_{1\leq i \leq n}$ be independent copies of $B$ and $\tilde N$, and denote by $\mathbb{F}^n:=\{\mathcal{F}_t^n\}_{t \in [0,T]}$ the completion of the filtration generated by $\{B^i\}_{1\leq i \leq n}$ and $\{\tilde N^i\}_{1\leq i \leq n}$. Thus, for each $1\le i\le n$, the filtration generated by $(B^i,\tilde{N}^i)$ is a sub-filtration of $\mathbb{F}^n$, and we denote by $\widetilde{\mathbb{F}}^i:=\{\widetilde{\mathcal{F}}^i_t\}$ its completion. Let $\mathcal{T}^n_t$ be the set of $\mathbb{F}^n$-stopping times with values in $[t,T]$.\\
\noindent In this section, we make the following additional assumption.
\begin{Assumption}\label{Assump-PS} We assume that
\begin{itemize}
\item $f$, $h_1$ and $h_2$ are deterministic functions of $(t,y,z,u,\mu)$ and $(t,y,\mu)$, respectively;
\item $\xi^{i,n}\in L^p(\mathcal{F}_T^n), \quad i=1,\ldots,n$;
\item $h_1(T,\xi^{i,n},L_n[\boldsymbol{\xi}^n]) \le \xi^{i,n}\le h_2(T,\xi^{i,n},L_n[\boldsymbol{\xi}^n])\quad \mathbb{P}\text{-a.s.},\quad i=1,\ldots,n.$
\end{itemize}
\end{Assumption}
Since we will use the whole filtration $\mathbb{F}^n$, the first assumption is imposed to avoid unnecessary adaptedness issues which would make parts of the proofs heavier.
\medskip

\noindent Endow the product spaces $\mathcal{S}^{p,\otimes n}_\beta:=\mathcal{S}_\beta^p\times \mathcal{S}_\beta^p \times \cdots \times \mathcal{S}_\beta^p$ and $\mathbb{L}^{p,\otimes n}_{\beta}:=\mathbb{L}_{\beta}^p\times \mathbb{L}_{\beta}^p \times \cdots \times \mathbb{L}_{\beta}^p$ with the respective norms
\begin{equation}\label{n-norm-2}
\|h\|^p_{\mathcal{S}^{p,\otimes n}_\beta}:=\sum_{1 \le i\le n}\|h^i\|^p_{\mathcal{S}_\beta^p},\qquad \|h\|^p_{\mathbb{L}^{p,\otimes n}_\beta}:=\sum_{1 \le i\le n}\|h^i\|^p_{\mathbb{L}_\beta^p}.
\end{equation}
\noindent Note that $\mathcal{S}^{p,\otimes n}_\beta$ and $\mathbb{L}^{p,\otimes n}_\beta$ are complete metric spaces. We set $\mathcal{S}^{p,\otimes n}:=\mathcal{S}^{p,\otimes n}_0$ and $\mathbb{L}^{p,\otimes n}:=\mathbb{L}^{p,\otimes n}_0$.\\
Consider the maps
$$
\begin{array}{lll}
\textbf{f}^{i} \circ \textbf{Y}^n:[0,T] \times \Omega \times I\!\!R \times I\!\!R^n \times (L^p_{\nu})^n \to I\!\!R, \\
\qquad\qquad (\textbf{f}^{i} \circ \textbf{Y}^n)(t,\omega, y,z,u)=f(t, y,z^i,u^i,L_{n}[\textbf{Y}^n_{t}](\omega)), \quad i=1,\ldots,n.
\end{array}
$$
\noindent We now introduce the notions of \textit{lower value} and \textit{upper value} of \textit{interacting Dynkin games}.
\begin{Definition}
Let $\underline{\Psi}: \,\mathbb{L}^{p,\otimes n}_\beta\longrightarrow \mathbb{L}^{p,\otimes n}_\beta$ be the mapping that associates with a process $\textbf{Y}^n:=(Y^{1,n},Y^{2,n},\ldots,Y^{n,n})$ the process $\underline{\Psi}(\textbf{Y}^n)=(\underline{\Psi}^1(\textbf{Y}^n),\underline{\Psi}^2(\textbf{Y}^n),\ldots, \underline{\Psi}^n(\textbf{Y}^n))$ defined by the following system: for every $i=1,\ldots, n$ and $t\le T$,
\begin{equation}\label{snell-i-2}
\begin{array}{lll}
\underline{\Psi}^i({\bf Y}^n)_t := \underset{\tau\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in \mathcal{T}^n_t}{{\rm ess}\, \inf\limits\,} \mathcal{E}^{\textbf{f}^{i} \circ \textbf{Y}^n}_{t,\tau \wedge \sigma}\left[ h_1(\tau,Y^{i,n}_{\tau},L_{n}[\textbf{Y}^n_{s}]|_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}} \right. \\
\left. \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +h_2(\sigma,Y^{i,n}_{\sigma},L_{n}[\textbf{Y}^n_{s}]|_{s=\sigma}){\bf 1}_{\{\sigma < \tau\}} +\xi^{i,n}{\bf 1}_{\{\tau \wedge \sigma=T\}}\right].
\end{array}
\end{equation}
We define the \textit{lower value function} of the \textit{interacting Dynkin game}, denoted by $\underline{\textbf{V}}^n$, as the \textit{fixed point} of the map $\underline{\Psi}$, that is $\underline{\Psi}^i(\underline{\textbf{V}}^n)=\underline{V}^{i,n}$, for all $1 \leq i \leq n$.
\end{Definition}
\begin{Definition}
Let $\overline{\Psi}: \,\mathbb{L}^{p,\otimes n}_\beta\longrightarrow \mathbb{L}^{p,\otimes n}_\beta$ be the mapping that associates with a process $\textbf{Y}^n:=(Y^{1,n},Y^{2,n},\ldots,Y^{n,n})$ the process $\overline{\Psi}(\textbf{Y}^n)=(\overline{\Psi}^1(\textbf{Y}^n),\overline{\Psi}^2(\textbf{Y}^n),\ldots, \overline{\Psi}^n(\textbf{Y}^n))$ defined by the following system: for every $i=1,\ldots, n$ and $t\le T$,
\begin{equation}\label{snell-i-2-up}
\begin{array}{lll}
\overline{\Psi}^i({\bf Y}^n)_t := \underset{\sigma \in \mathcal{T}^n_t}{{\rm ess}\, \inf\limits\,}\underset{\tau\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \mathcal{E}^{\textbf{f}^{i} \circ \textbf{Y}^n}_{t,\tau \wedge \sigma}\left[ h_1(\tau,Y^{i,n}_{\tau},L_{n}[\textbf{Y}^n_{s}]|_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}} \right. \\
\left. \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +h_2(\sigma,Y^{i,n}_{\sigma},L_{n}[\textbf{Y}^n_{s}]|_{s=\sigma}){\bf 1}_{\{\sigma < \tau\}} +\xi^{i,n}{\bf 1}_{\{\tau \wedge \sigma=T\}}\right].
\end{array}
\end{equation}
We define the \textit{upper value function} of the \textit{interacting Dynkin game}, denoted by $\overline{\textbf{V}}^n$, as the \textit{fixed point} of the map $\overline{\Psi}$, that is $\overline{\Psi}^i(\overline{\textbf{V}}^n)=\overline{V}^{i,n}$, for all $1 \leq i \leq n$.
\end{Definition}
We are now in a position to provide the definition of a \textit{common value} for this type of game.
\begin{Definition}
The interacting Dynkin game is said to admit a \textit{common value}, called the \textit{value of the game} and denoted by $\textbf{V}^n$, if $\underline{\textbf{V}}^n$ and $\overline{\textbf{V}}^n$ exist and $V^{i,n}_t=\underline{V}^{i,n}_t=\overline{V}^{i,n}_t$ for all $t \in [0,T]$ and all $1 \leq i \leq n$.
\end{Definition}
More precisely, the value of the game corresponds to the common fixed point of the maps $\overline{\Psi}$ and $\underline{\Psi}$, i.e. it satisfies $V^{i,n}_t=\overline{\Psi}^{i}(\textbf{V}^n)_t=\underline{\Psi}^{i}(\textbf{V}^n)_t$.
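Since the interaction enters only through the empirical measure $L_n[\cdot]$, distances between particle configurations will be controlled by the elementary coupling bound \eqref{ineq-wass-1} stated below. As a numerical sanity check (arbitrary sample data of ours, not part of the argument), recall that on $I\!\!R$ the $p$-Wasserstein distance between two $n$-point empirical measures is attained by the monotone (sorted) coupling, while pairing $x_j$ with $y_j$ is merely an admissible coupling:
\begin{verbatim}
import numpy as np

# Check on R: W_p^p(L_n[x], L_n[y]) <= (1/n) sum_j |x_j - y_j|^p.
# The monotone (sorted) coupling is optimal in dimension one, while
# the identity coupling only gives an upper bound.
rng = np.random.default_rng(0)
n, p = 1000, 2
x = rng.normal(size=n)
y = rng.standard_t(df=3, size=n)

wpp_exact = np.mean(np.abs(np.sort(x) - np.sort(y)) ** p)  # W_p^p, optimal
wpp_bound = np.mean(np.abs(x - y) ** p)                    # identity coupling

assert wpp_exact <= wpp_bound + 1e-12
print(f"W_p^p = {wpp_exact:.4f} <= bound = {wpp_bound:.4f}")
\end{verbatim}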
The main result of this section gives conditions under which the game admits a value and characterizes this common value through the following system of interacting doubly reflected BSDEs, which will be shown to admit a unique solution:
\begin{align}\label{BSDEParticle}
\begin{cases}
Y^{i,n}_t = \xi^{i,n} +\int_t^T f(s,Y^{i,n}_{s},Z^{i,n}_{s},U^{i,n}_{s}, L_n[\textbf{Y}^n_{s}])ds + K^{1,i,n}_T - K^{1,i,n}_t+ K^{2,i,n}_t-K^{2,i,n}_T \\
\qquad\qquad \qquad - \int_t^T Z^{i,n}_s dB^i_s - \int_t^T \int_{I\!\!R^\star} U^{i,n}_s(e) \tilde N^i(ds,de), \quad t \in [0,T],\\
h_2(t,Y^{i,n}_{t},L_n[\textbf{Y}^n_{t}]) \geq Y^{i,n}_{t} \geq h_1(t,Y^{i,n}_{t},L_n[\textbf{Y}^n_{t}]), \quad t \in [0,T], \\
\int_0^T (Y^{i,n}_{t-}-h_1(t,Y^{i,n}_{t-},L_n[\textbf{Y}^n_{t-}]))dK^{1,i,n}_t = 0,\\
\int_0^T (Y^{i,n}_{t-}-h_2(t,Y^{i,n}_{t-},L_n[\textbf{Y}^n_{t-}]))dK^{2,i,n}_t = 0,\\
dK_t^{1,i,n} \perp dK_t^{2,i,n}.
\end{cases}
\end{align}
Note that we have the inequality
\begin{equation}\label{ineq-wass-1}
\mathcal{W}_p^p(L_n[\textbf{x}], L_n[\textbf{y}]) \le \frac{1}{n}\sum_{j=1}^{n}|x_j-y_j|^p;
\end{equation}
in particular,
\begin{equation}\label{ineq-wass-2}
\mathcal{W}_p^p(L_n[\textbf{x}], L_n[\textbf{0}]) \le \frac{1}{n}\sum_{j=1}^{n}|x_j|^p.
\end{equation}
\begin{Theorem}[\textit{Existence of the value and link with an interacting system of doubly reflected BSDEs}]\label{Dynkin2}
Suppose that Assumptions \ref{generalAssump} and \ref{Assump-PS} are in force for some $p\ge 2$. Assume that $\gamma_1$, $\gamma_2$, $\kappa_1$ and $\kappa_2$ satisfy
\begin{align*}
\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p<2^{3-\frac{5p}{2}}.
\end{align*}
Then,
\begin{itemize}
\item[(i)] the \textit{interacting Dynkin game} admits a value $\textbf{V}^n \in \mathcal{S}^{p, \otimes n}$;
\item[(ii)] the interacting system of doubly reflected BSDEs \eqref{BSDEParticle} has a unique solution $(\textbf{Y}^n,\textbf{Z}^n,\textbf{U}^n,\textbf{K}^{1,n},\textbf{K}^{2,n})$ in $\mathcal{S}^{p,\otimes n} \otimes \mathcal{H}^{p,\otimes n} \otimes \mathcal{H}_{\nu}^{p,\otimes n} \otimes \mathcal{S}_{i}^{p,\otimes n} \otimes \mathcal{S}_{i}^{p,\otimes n}$;
\item[(iii)] we have $V_\cdot^{i,n}=Y_\cdot^{i,n}$, for all $1 \leq i \leq n$.
\end{itemize}
\end{Theorem}
\begin{proof}
\uline{\textit{Step 1}} (\textit{Well-posedness and contraction property of the operators $\overline{\Psi}$ and $\underline{\Psi}$}). The well-posedness of the operators $\overline{\Psi}$ and $\underline{\Psi}$ follows by arguments similar to those used in \textit{Step 1} of the proof of Theorem \ref{Dynkin}. We therefore only show that $\overline{\Psi}$ is a contraction on the time interval $[T-\delta,T]$, for some well-chosen $\delta$; the same proof holds for the operator $\underline{\Psi}$.
\medskip

Fix $\mathbf{Y}^n=(Y^{1,n},\dots,Y^{n,n}),\bar{\mathbf{Y}}^n=(\bar Y^{1,n},\dots,\bar Y^{n,n}) \in \mathbb{L}^{p,\otimes n}_\beta$, $(\hat{Y},\tilde{Y}) \in (\mathcal{S}_\beta^{p})^2$, $(\hat{Z},\tilde{Z}) \in (\mathcal{H}^{p,1})^2$ and $(\hat{U},\tilde{U}) \in (\mathcal{H}_\nu^{p})^2$. By the Lipschitz continuity of $f$, $h_1$ and $h_2$, we get
\begin{eqnarray}\label{f-h-lip-4}
&&|f(s,\hat{Y}_s,\hat{Z}_s,\hat{U}_s, L_{n}[\textbf{Y}^n_s])-f(s,\tilde{Y}_s,\tilde{Z}_s,\tilde {U}_s,L_{n}[\bar{\textbf{Y}}^n_s])| \le C_f(|\hat{Y}_s-\tilde{Y}_s|+|\hat{Z}_s-\tilde{Z}_s|\nonumber \\
&&\qquad \qquad\qquad \qquad \qquad\qquad +|\hat{U}_s-\tilde{U}_s|_\nu +\mathcal{W}_p(L_{n}[\textbf{Y}^n_s], L_{n}[\bar{\textbf{Y}}^n_s])), \nonumber\\
&&|h_1(s,Y^{i,n}_s,L_{n}[\textbf{Y}^n_s])-h_1(s,\bar{Y}^{i,n}_s,L_{n}[\bar{\textbf{Y}}^n_s])| \le \gamma_1|Y^{i,n}_s-\bar{Y}^{i,n}_s|+\gamma_2\mathcal{W}_p(L_{n}[\textbf{Y}^n_s], L_{n}[\bar{\textbf{Y}}^n_s]), \nonumber\\
&& |h_2(s,Y^{i,n}_s,L_{n}[\textbf{Y}^n_s])-h_2(s,\bar{Y}^{i,n}_s,L_{n}[\bar{\textbf{Y}}^n_s])| \le \kappa_1|Y^{i,n}_s-\bar{Y}^{i,n}_s|+\kappa_2\mathcal{W}_p(L_{n}[\textbf{Y}^n_s], L_{n}[\bar{\textbf{Y}}^n_s]).
\end{eqnarray}
By \eqref{ineq-wass-1}, we have
\begin{equation}\label{ineq-wass-1-1-3}
\mathcal{W}_p^p(L_{n}[\textbf{Y}^n_s], L_{n}[\bar{\textbf{Y}}^n_s]) \le \frac{1}{n}\sum_{j=1}^n |Y_s^{j,n}-\bar{Y}_s^{j,n}|^p.
\end{equation}
Then, using the estimates from Proposition A.1 (see the Appendix), for any $t\leq T$ and $i=1,\ldots, n$, we have
\begin{equation*}\begin{array} {lll}
|\overline{\Psi}^i({\bf Y}^n)_t-\overline{\Psi}^i(\bar{{\bf Y}}^n)_t|^{p} \\
\le \underset{\tau\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,}\left|\mathcal{E}_{t,\tau \wedge \sigma}^{\textbf{f}^{i} \circ {\bf Y}^n}[h_1(\tau,Y^{i,n}_\tau,L_{n}[{\bf Y}^n_s]_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}} \right. \\
\left. \qquad\qquad \qquad\qquad \qquad\qquad +h_2(\sigma,Y^{i,n}_\sigma,L_{n}[{\bf Y}^n_s]_{s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi^{i,n}{\bf 1}_{\{\sigma \wedge \tau=T\}}] \right. \\
\left. \qquad\qquad - \mathcal{E}_{t,\tau \wedge \sigma}^{\textbf{f}^{i} \circ \bar{{\bf Y}}^n }[h_1(\tau,\bar{Y}^{i,n}_\tau,L_{n}[\bar{{\bf Y}}^n_s]_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,\bar{Y}^{i,n}_\sigma,L_{n}[\bar{{\bf Y}}^n_s]_{s=\sigma}){\bf 1}_{\{\sigma< \tau\}}+\xi^{i,n}{\bf 1}_{\{\tau \wedge \sigma=T\}}] \right|^{p}\\
\le \underset{\tau\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,}\underset{\sigma\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,}\eta^p 2^{\frac{p}{2}-1}\mathbb{E}\left[\left(\int_t^{\tau \wedge \sigma} e^{2 \beta (s-t)} |(\textbf{f}^{i} \circ {\bf Y}^n) (s,\widehat{Y}^{i,\tau, \sigma}_s,\widehat{Z}^{i,\tau, \sigma}_s,\widehat{U}^{i,\tau, \sigma}_s) \right.\right. \\
\left. \left. \qquad\qquad\qquad -(\textbf{f}^{i} \circ \bar{{\bf Y}}^n) (s,\widehat{Y}^{i,\tau, \sigma}_s, \widehat{Z}^{i,\tau, \sigma}_s,\widehat{U}^{i,\tau, \sigma}_s)|^2 ds \right)^{\frac{p}{2}} \right. \\
\left. \qquad\qquad \qquad\qquad +2^{\frac{p}{2}-1} e^{p \beta(\tau \wedge \sigma-t)}|\left(h_1(\tau,Y^{i,n}_\tau,L_{n}[{\bf Y}^n_s]_{s=\tau})-h_1(\tau,\bar{Y}^{i,n}_\tau, L_{n}[\bar{{\bf Y}}^n_s]_{s=\tau})\right){\bf 1}_{\{\tau \leq \sigma<T\}} \right. \\
\left. \qquad\qquad\qquad\qquad\qquad +\left(h_2(\sigma,Y^{i,n}_\sigma,L_{n}[{\bf Y}^n_s]_{s=\sigma})-h_2(\sigma,\bar{Y}^{i,n}_\sigma, L_{n}[\bar{{\bf Y}}^n_s]_{s=\sigma})\right){\bf 1}_{\{\sigma < \tau\}}|^p |\mathcal{F}_t\right] \\
= \underset{\tau\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \eta^p 2^{\frac{p}{2}-1}\mathbb{E}\left[\left(\int_t^{\tau \wedge \sigma} e^{2 \beta (s-t)} |f(s,\widehat{Y}^{i,\tau, \sigma}_s,\widehat{Z}^{i,\tau, \sigma}_s,\widehat{U}^{i,\tau, \sigma}_s, L_{n}[{\bf Y}^n_s]) \right.\right. \\
\left. \left. \qquad\qquad\qquad -f(s,\widehat{Y}^{i,\tau,\sigma}_s,\widehat{Z}^{i,\tau, \sigma}_s,\widehat{U}^{i,\tau, \sigma}_s, L_{n}[\bar{{\bf Y}}^n_s])|^2 ds \right)^{p/2}\right. \\
\left. \qquad\qquad \qquad\qquad +2^{\frac{p}{2}-1} e^{p \beta(\tau \wedge \sigma-t)}|\left(h_1(\tau,Y^{i,n}_\tau,L_{n}[{\bf Y}^n_s]_{s=\tau})-h_1(\tau,\bar{Y}^{i,n}_\tau, L_{n}[\bar{{\bf Y}}^n_s]_{s=\tau})\right){\bf 1}_{\{\tau \leq \sigma<T\}} \right. \\
\left.
\qquad\qquad\qquad \qquad\qquad +\left(h_2(\sigma,Y^{i,n}_\sigma,L_{n}[{\bf Y}^n_s]_{s=\sigma})-h_2(\sigma,\bar{Y}^{i,n}_\sigma, L_{n}[\bar{{\bf Y}}^n_s]_{s=\sigma})\right){\bf 1}_{\{\sigma < \tau\}}|^p |\mathcal{F}_t\right],
\end{array}
\end{equation*}
where $(\widehat{Y}^{i,\tau, \sigma},\widehat{Z}^{i,\tau, \sigma},\widehat{U}^{i,\tau, \sigma})$ is the solution of the BSDE associated with driver $\textbf{f}^{i}\circ \bar{{\bf Y}}^n$, terminal time $\tau \wedge \sigma$ and terminal condition $h_1(\tau,\bar{Y}^{i,n}_\tau,L_{n}[\bar{{\bf Y}}^n_s]_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,\bar{Y}^{i,n}_\sigma,L_{n}[\bar{{\bf Y}}^n_s]_{s=\sigma}){\bf 1}_{\{\sigma < \tau\}}+\xi^{i,n}{\bf 1}_{\{\tau \wedge \sigma=T\}}$.

\noindent Therefore, using \eqref{f-h-lip-4} and \eqref{ineq-wass-1-1-3}, we have, for any $t\le T$ and $i=1,\ldots, n$,
\begin{equation}\label{ineq-evsn-2}\begin{array} {lll}
e^{p \beta t}|\overline{\Psi}^i({\bf Y}^n)_t-\overline{\Psi}^i(\bar{{\bf Y}}^n)_t|^p \\
\qquad\qquad \le \underset{\tau \in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,}\mathbb{E}\left[ \int_t^{\tau \wedge \sigma} 2^{\frac{p}{2}-1} \delta^{\frac{p-2}{2}} \eta^{p}C_f^{p}\left( \frac{1}{n}\sum_{j=1}^n e^{p\beta s}|Y_s^{j,n}-\bar{Y}_s^{j,n}|^p \right) ds \right. \\
\left. \qquad \qquad\qquad\qquad +2^{\frac{p}{2}-1} \left(\gamma_1 e^{\beta \tau}|Y^{i,n}_{\tau}-\bar{Y}^{i,n}_{\tau}|+ \gamma_2 \left(\frac{1}{n}\sum_{j=1}^ne^{p\beta \tau}|Y_{\tau}^{j,n}-\bar{Y}_{\tau}^{j,n}|^p \right)^{\frac{1}{p}} \right.\right. \\
\left. \left. \qquad\qquad\qquad \qquad\qquad +\kappa_1 e^{\beta \sigma}|Y^{i,n}_{\sigma}-\bar{Y}^{i,n}_{\sigma}|+ \kappa_2 \left(\frac{1}{n}\sum_{j=1}^ne^{p\beta \sigma}|Y_{\sigma}^{j,n}-\bar{Y}_{\sigma}^{j,n}|^p \right)^{\frac{1}{p}}\right)^{p}|\mathcal{F}_t\right].
\end{array}
\end{equation}
Next, let $\delta >0$, $t\in [T-\delta,T]$ and
\begin{align*}
\alpha:&=\max(\delta^{1+\frac{p-2}{2}} 2^{\frac{p}{2}-1} \eta^{p}C^{p}_f+2^{\frac{p}{2}-1} 4^{p-1}\gamma^p_1, \,\delta^{1+\frac{p-2}{2}} 2^{\frac{p}{2}-1} \eta^{p}C^{p}_f+2^{\frac{p}{2}-1} 4^{p-1}\gamma^p_2, \nonumber \\
& \qquad\qquad \delta^{1+\frac{p-2}{2}} 2^{\frac{p}{2}-1} \eta^{p}C^{p}_f+2^{\frac{p}{2}-1} 4^{p-1}\kappa^p_1, \,\delta^{1+\frac{p-2}{2}} 2^{\frac{p}{2}-1} \eta^{p}C^{p}_f+2^{\frac{p}{2}-1} 4^{p-1}\kappa^p_2).
\end{align*}
By applying the same arguments as in the previous section and using the definition \eqref{n-norm-2}, we obtain
\begin{equation*}\begin{array} {lll} \|\overline{\Psi}({\bf Y}^n)-\overline{\Psi}(\bar{{\bf Y}}^n)\|^p_{\mathbb{L}^{p,\otimes n}_\beta[T-\delta,T]}\le \alpha \|{\bf Y}^n-\bar{{\bf Y}}^n\|^p_{\mathbb{L}^{p,\otimes n}_\beta[T-\delta,T]}. \end{array} \end{equation*} Therefore, if $\gamma_1, \gamma_2, \kappa_1$ and $\kappa_2$ satisfy $$ \gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p<4^{1-p} 2^{1-\frac{p}{2}}, $$ we can choose $$ 0<\delta<\left(\frac{1}{2^{\frac{p}{2}-1}\eta^p C_f^p}\left(1-4^{p-1}2^{\frac{p}{2}-1}(\gamma^p_1+\gamma^p_2+\kappa_1^p+\kappa_2^p)\right)\right)^{\frac{p}{2p-2}} $$ to make $\overline{\Psi}$ a contraction on $\mathbb{L}^{p,\otimes n}_\beta([T-\delta,T])$, i.e. $\overline{\Psi}$ admits a unique fixed point over $[T-\delta,T]$. Moreover, in view of Assumption \ref{generalAssump} (ii)(a), it holds that ${\bf Y}^n\in \mathcal{S}^{p,\otimes n}([T-\delta,T])$.\\
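\noindent For orientation, we spell out the case $p=2$ (this instantiation is ours and serves only as a sanity check of the constants): the smallness condition then reads $\gamma_1^2+\gamma_2^2+\kappa_1^2+\kappa_2^2<\frac{1}{4}$, since $4^{1-p}2^{1-\frac{p}{2}}=\frac{1}{4}$ for $p=2$, and the admissible step size becomes $$ 0<\delta<\frac{1}{\eta^2 C_f^2}\left(1-4(\gamma_1^2+\gamma_2^2+\kappa_1^2+\kappa_2^2)\right), $$ which is positive precisely under the above condition; for instance, $\gamma_1=\gamma_2=\kappa_1=\kappa_2=\frac{1}{5}$ is admissible, as $\frac{4}{25}<\frac{1}{4}$.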
\medskip \noindent \underline{\textit{Step 2.}} (\textit{Existence of a value of the interacting Dynkin game and link with interacting doubly reflected BSDEs}). Let us now show that the game admits a \textit{common value} on $[0,T]$ and that the interacting system of doubly reflected BSDEs \eqref{BSDEParticle} has a unique solution, which is related to the value of the interacting Dynkin game.\\ \noindent Let $\textbf{V}^n$ be the fixed point associated with the map $\overline{\Psi}$ on $[T-\delta,T]$ obtained in \textit{Step 1}. By classical results on doubly reflected BSDEs (see e.g. \cite{dqs16}), for each $i$ between $1$ and $n$, there exists a unique solution $(Y^{i,n}, Z^{i,n}, U^{i,n}, K^{1,i,n}, K^{2,i,n})$ on $[T-\delta, T]$ of the doubly reflected BSDE associated with driver $\textbf{f}^{i} \circ \textbf{V}^n$ and obstacles $h_1(t,V^{i,n}_{t},L_n[\textbf{V}^n_{t}])$ and $h_2(t,V^{i,n}_{t},L_n[\textbf{V}^n_{t}])$. Furthermore, by using the relation between classical Dynkin games and doubly reflected BSDEs (see Theorem 4.10 in \cite{dqs16}), as well as the fixed point property of $\overline{\Psi}$ and $\underline{\Psi}$ shown above, we get that $Y^{i,n}_t=V_t^{i,n}$, for $t \in [T-\delta, T]$ and $1 \leq i \leq n$. Applying the same method as in \textit{Step 1} on each time interval $[T-(j+1)\delta,T-j\delta]$, $1 \leq j \leq m$, with the same operator $\overline{\Psi}$, but with terminal condition $\textbf{Y}^{n}_{T-j\delta}$ at time $T-j\delta$, we build recursively, for $j=1$ to $m$, a solution $(\textbf{Y}^{n,(j)},\textbf{Z}^{n,(j)},\textbf{U}^{n,(j)},\textbf{K}^{1,n,(j)}, \textbf{K}^{2,n,(j)})$ on each time interval $[T-(j+1)\delta,T-j\delta].$ Properly pasting these processes, we obtain a solution $(\textbf{Y}^{n},\textbf{Z}^{n},\textbf{U}^{n},\textbf{K}^{1,n}, \textbf{K}^{2,n})$ of \eqref{BSDEParticle} on $[0,T]$. The uniqueness of the solution of the system \eqref{BSDEParticle} follows from the fixed point property of $\textbf{Y}^n$ and the uniqueness of the associated processes $(\textbf{Z}^{n},\textbf{U}^{n},\textbf{K}^{1,n}, \textbf{K}^{2,n})$, which in turn follows from the uniqueness of the solution of standard doubly reflected BSDEs.
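\noindent Note, in passing, that the pasting terminates (this observation is implicit in the construction above; we record it for clarity): the bound on the step size from \textit{Step 1}, $$ 0<\delta<\left(\frac{1}{2^{\frac{p}{2}-1}\eta^p C_f^p}\left(1-4^{p-1}2^{\frac{p}{2}-1}(\gamma^p_1+\gamma^p_2+\kappa_1^p+\kappa_2^p)\right)\right)^{\frac{p}{2p-2}}, $$ involves only $p$, $\eta$, $C_f$ and the Lipschitz constants, and not the terminal data, so the same $\delta$ may be reused on every subinterval and the backward construction covers $[0,T]$ after at most $\lceil T/\delta\rceil$ steps.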
Moreover, using again the relation between standard Dynkin games and doubly reflected BSDEs, we obtain that \textit{the value of the game} $\textbf{V}^n$ exists on the full time interval $[0,T]$ and, furthermore, $V_\cdot^{i,n}={Y}_\cdot^{i,n}$. \end{proof} \paragraph{Existence of an $S$-saddle point.} Assume that the \textit{interacting Dynkin game} admits a \textit{common value}, denoted by $\textbf{V}^n$. We introduce the sequence of payoffs $\left\{P^{i,n}\right\}_{1 \leq i \leq n}$, which, for $1 \leq i \leq n$, is given by $$P^{i,n}(\tau,\sigma):= h_1(\tau,V^{i,n}_\tau, L_n[\textbf{V}^n_\tau]){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,V^{i,n}_\sigma, L_n[\textbf{V}^n_\sigma]){\bf 1}_{\{\sigma < \tau\}}+\xi^{i,n} {\bf 1}_{\{\tau \wedge \sigma=T\}}.$$ \medskip We now give the definition of an $S$-saddle point for the \textit{interacting Dynkin game problem}. \begin{Definition} Let $S \in \mathcal{T}_0$. The sequence of pairs of stopping times $(\tau^{\star,i,n}, \sigma^{\star,i,n}) \in \mathcal{T}^n_S \times \mathcal{T}^n_S$ is called a \textit{system of $S$-saddle points} for the \textit{interacting Dynkin game problem} if for each $1\le i\le n$ and for each $(\tau, \sigma) \in (\mathcal{T}^n_S)^2$ we have $\mathbb{P}$-a.s. \begin{align} \mathcal{E}^{\textbf{f}^{i} \circ \textbf{V}^n}_{S, \tau \wedge \sigma^{\star,i,n}}(P^{i,n}(\tau,\sigma^{\star,i,n})) \leq \mathcal{E}^{\textbf{f}^{i} \circ \textbf{V}^n}_{S, \tau^{\star,i,n} \wedge \sigma^{\star,i,n}}(P^{i,n}(\tau^{\star,i,n},\sigma^{\star,i,n})) \leq \mathcal{E}^{\textbf{f}^{i} \circ \textbf{V}^n}_{S, \tau^{\star,i,n} \wedge \sigma}(P^{i,n}(\tau^{\star,i,n},\sigma)). \end{align} \end{Definition} \medskip In the next theorem, we provide sufficient conditions which ensure the existence of \textit{a system of $S$-saddle points}. \begin{Theorem}[Existence of a system of $S$-saddle points]\label{saddle1} Suppose that $\gamma_1$, $\gamma_2$, $\kappa_1$ and $\kappa_2$ satisfy the condition \eqref{ExistenceCond}. Assume that $h_1$ (resp. $h_2$) takes the form $h_1(t,\omega,y,\mu):=\xi_t(\omega)+\kappa^1(y,\mu)$ (resp. $h_2(t,\omega,y,\mu):=\zeta_t(\omega)+\kappa^2(y,\mu)$), where $\xi$ and $-\zeta$ belong to $\mathcal{S}^p$ and are left upper semicontinuous processes along stopping times, and $\kappa^1$ (resp. $\kappa^2$) is a bounded Lipschitz function with respect to $(y,\mu)$. \noindent For each $S \in \mathcal{T}_0$, consider the system of pairs of stopping times $(\tau_{S}^{\star,i,n}, \sigma_{S}^{\star,i,n})$ defined by \begin{eqnarray}\label{S} \begin{array}{lll} \tau_{S}^{\star,i,n}:=\inf \{t \geq S:\, V^{i,n}_t=h_1(t,V^{i,n}_t,L_n[\textbf{V}^n_t])\},\\ \sigma_{S}^{\star,i,n}:=\inf \{t \geq S:\, V^{i,n}_t=h_2(t,V^{i,n}_t,L_n[\textbf{V}^n_t])\}. \end{array} \end{eqnarray} Then the sequence of pairs of stopping times $(\tau_{S}^{\star,i,n}, \sigma_{S}^{\star,i,n})$ given by \eqref{S} is a system of $S$-saddle points. \end{Theorem} \begin{proof} Consider the following iterative algorithm. Let $\textbf{V}^{(0),n} \equiv 0$ be the starting point and define $$\textbf{V}^{(m),n}:=\overline{\Psi}(\textbf{V}^{(m-1),n}), \quad m\ge 1,$$ where the equality is understood component by component.
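\noindent For the reader's convenience, we recall the standard Banach fixed-point estimate behind this scheme (a well-known fact, made explicit here): since $\|\overline{\Psi}({\bf Y})-\overline{\Psi}(\bar{{\bf Y}})\|^p_{\mathbb{L}^{p,\otimes n}_\beta}\le \alpha\,\|{\bf Y}-\bar{{\bf Y}}\|^p_{\mathbb{L}^{p,\otimes n}_\beta}$ on each subinterval with $\alpha<1$ (\textit{Step 1} above), the iterates converge geometrically to the fixed point $\textbf{V}^{n}$: $$ \|\textbf{V}^{(m),n}-\textbf{V}^{n}\|_{\mathbb{L}^{p,\otimes n}_\beta}\le \alpha^{\frac{m}{p}}\,\|\textbf{V}^{(0),n}-\textbf{V}^{n}\|_{\mathbb{L}^{p,\otimes n}_\beta}, \quad m\ge 1. $$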
By using classical results on doubly reflected BSDEs and their relation with classical Dynkin games, we obtain that, for $1 \le i \le n$, $V^{(m),i,n}$ coincides with the component $Y^{i,n}$ of the solution of the doubly reflected BSDE associated with $\textbf{f}^{i} \circ \textbf{V}^{(m-1),n}$ and obstacles $h_1(t,V^{(m-1),i,n}_t,L_n[\textbf{V}^{(m-1),n}_t])$ and $h_2(t,V^{(m-1),i,n}_t,L_n[\textbf{V}^{(m-1),n}_t])$. Due to the assumptions on $h_1$ and $h_2$, we get that, for each $i$, $V^{(1),i,n}$ only admits jumps at totally inaccessible stopping times. By induction, the same holds for $V^{(m),i,n}$, for all $m$. Since condition \eqref{ExistenceCond} is satisfied, by Theorem \ref{Dynkin2}, the sequence $\textbf{V}^{(m),n}$ is a Cauchy sequence for the norm $\mathbb{L}_\beta^{p, \otimes n}$ and therefore converges in $\mathbb{L}_\beta^{p, \otimes n}$ to the fixed point of the map $\overline{\Psi}$.\\ Let $\tau \in \mathcal{T}_0$ be a predictable stopping time. Since $\Delta V^{(m),i,n}_\tau=0$ a.s. for all $m$ and for all $1 \leq i \leq n$, we obtain \begin{align} \mathbb{E}\left[|\Delta V^{i,n}_\tau|^p\right]=\mathbb{E}\left[|\Delta V^{i,n}_\tau-\Delta V^{(m),i,n}_\tau|^p\right] \leq 2^p \sup_{\tau \in \mathcal{T}_0}\mathbb{E}\left[|V^{i,n}_\tau-V^{(m),i,n}_\tau|^p\right], \end{align} which, letting $m\to\infty$, implies that $\Delta V^{i,n}_\tau=0$ a.s. for all $1 \le i \le n$; consequently, $h_1(t,V^{i,n}_t,L_n[\textbf{V}^n_t])$ and $-h_2(t,V^{i,n}_t,L_n[\textbf{V}^n_t])$ are left upper semicontinuous along stopping times. By Theorem 3.7 (ii) in \cite{dqs16}, for each $1 \le i \le n$, we get that $(\tau_{S}^{\star,i,n}, \sigma_{S}^{\star,i,n})$ given by \eqref{S} is an $S$-saddle point. \end{proof} \section{Propagation of chaos}\label{chaos} This section is concerned with the convergence of the sequence of value functions $V^{i,n}$ of the interacting Dynkin games to i.i.d. copies of the value function $V$ of the mean-field Dynkin game which, by the results from the previous section, amounts to showing the convergence of the component $Y^{i,n}$ of the solution of the interacting system of doubly reflected BSDEs \eqref{BSDEParticle} to i.i.d. copies of the component $Y$ of the mean-field doubly reflected BSDE with driver $f\circ Y$ and terminal condition $h_1(\tau,Y_\tau,\mathbb{P}_{Y_s|s=\tau}){\bf 1}_{\{\tau\le \sigma<T\}}+h_2(\sigma,Y_\sigma,\mathbb{P}_{Y_s|s=\sigma}){\bf 1}_{\{\sigma<\tau\}}+\xi^{i}{\bf 1}_{\{\tau \wedge \sigma=T\}}$. These convergence results yield the propagation of chaos result, as will be explained below. \medskip To establish the propagation of chaos property of the particle system \eqref{BSDEParticle}, we make the following additional assumptions. \begin{Assumption}\label{Assump:chaos} \begin{itemize} \item [(i)] The sequence $ \xi^n=(\xi^{1,n},\xi^{2,n},\ldots,\xi^{n,n})$ is exchangeable, i.e., the sequence of probability laws $\mu^n$ of $\xi^n$ on $I\!\!R^n$ is symmetric. \item[(ii)] For each $i\ge 1$, $\xi^{i,n}$ converges in $L^p$ to $ \xi^i$, i.e. $$\lim_{n\to\infty}\mathbb{E}[|\xi^{i,n}-\xi^i|^p]=0,$$ where the random variables $\xi^{i}\in L^p({\cal F}^i_T)$ are independent and identically distributed (i.i.d.) with probability law $\mu$. \item[(iii)] The component $(Y_t)$ of the unique solution of the mean-field doubly reflected BSDE \eqref{BSDE1} has jumps only at totally inaccessible stopping times.
\end{itemize} \end{Assumption} For $m\ge 1$, introduce the Polish spaces $\mathbb{H}^{2,m}:=L^2([0,T];I\!\!R^m)$ and $\mathbb{H}_{\nu}^{2,m}$, the space of measurable functions $\ell: \,[0,T]\times I\!\!R^*\longrightarrow I\!\!R^m$ such that $\int_0^T\int_{I\!\!R^*}\sum_{j=1}^m |\ell^j(t,u)|^2\nu(du)dt< \infty$. \medskip In the following proposition, we show that the \textit{exchangeability property} transfers from the terminal conditions to the associated solution processes. \begin{Proposition}[Exchangeability property]\label{exchange} Assume the sequence $ \xi^n=(\xi^{1,n},\xi^{2,n},\ldots,\xi^{n,n})$ is exchangeable, i.e., the sequence of probability laws $\mu^n$ of $\xi^n$ on $I\!\!R^n$ is symmetric. Then the processes $(Y^{i,n},Z^{i,n},U^{i,n},K^{1,i,n},K^{2,i,n}),\,i=1,\ldots, n,$ solutions of the system \eqref{BSDEParticle}, are exchangeable. \end{Proposition} The proof is similar to that of \cite[Proposition 4.1]{ddz21}. We omit it. Consider the product space $$ G:=\mathbb{D}\times \mathbb{H}^{2,n}\times \mathbb{H}_{\nu}^{2,n}\times \mathbb{D} $$ endowed with the product metric $$ \delta(\theta,\theta^{\prime}):=\left(d^o(y,y^{\prime})^p+\|z-z^{\prime}\|^p_{\mathbb{H}^{2,n}}+\|u-u^{\prime}\|^p_{\mathbb{H}_{\nu}^{2,n}}+d^o(k,k^{\prime})^p\right)^{\frac{1}{p}}, $$ where $\theta:=(y,z,u,k)$ and $\theta^{\prime}:=(y^{\prime},z^{\prime},u^{\prime},k^{\prime})$. We define the Wasserstein metric on $\mathcal{P}_p(G)$ by \begin{equation}\label{W-simultan} D_G(P,Q)=\inf\left\{\left(\int_{G\times G} \delta(\theta,\theta^{\prime})^p R(d\theta,d\theta^{\prime})\right)^{1/p}\right\}, \end{equation} where the infimum is taken over $R\in\mathcal{P}(G\times G)$ with marginals $P$ and $Q$. Since $(G,\delta)$ is a Polish space, $(\mathcal{P}_p(G), D_G)$ is a Polish space, and convergence in $D_G$ implies weak convergence. \medskip Let $(Y^i,Z^i,U^i,K^{1,i},K^{2,i}), i=1,\ldots, n$, with independent terminal values $Y^i_T=\xi^i,i=1,\ldots, n$, be independent copies of $(Y,Z,U,K^1, K^2)$, the solution of \eqref{BSDE1}.
More precisely, for each $i=1,\ldots,n$, $(Y^i,Z^i,U^i,K^{1,i},K^{2,i})$ is the unique solution of the reflected MF-BSDE \begin{align}\label{BSDEParticle-Theta} \begin{cases} \quad Y^{i}_t = \xi^{i} +\int_t^T f(s,Y^{i}_{s},Z^{i}_{s},U^{i}_{s},\mathbb{P}_{Y_s^{i}})ds + K^{1,i}_T - K^{1,i}_t- K^{2,i}_T + K^{2,i}_t\\ \qquad\qquad\qquad \qquad -\int_t^T Z^{i}_s dB^i_s - \int_t^T \int_{I\!\!R^\star} U^{i}_s(e) \tilde N^i(ds,de), \\ \quad h_2(t,Y^{i}_{t},\mathbb{P}_{Y_t^{i}}) \geq Y^{i}_{t} \geq h_1(t,Y^{i}_{t},\mathbb{P}_{Y_t^{i}}), \quad \forall t \in [0,T]; \,\, Y_T^{i}=\xi^{i}, \\ \quad \int_0^T (Y^{i}_{t^-}-h_1(t^-,Y^{i}_{t^-},\mathbb{P}_{Y_{t^-}^{i}}))dK^{1,i}_t = 0, \\ \quad \int_0^T (Y^{i}_{t^-}-h_2(t^-,Y^{i}_{t^-},\mathbb{P}_{Y_{t^-}^{i}}))dK^{2,i}_t = 0,\\ \quad dK^{1,i}_t \perp dK^{2,i}_t. \end{cases} \end{align} In the sequel, we denote $(f \circ Y^{i})(t,y,z,u):=f(t,y,z, u,\mathbb{P}_{Y^{i}_t})$. Introduce the notation \begin{equation}\label{W} W:=K^1-K^2,\quad W^i:=K^{1,i}-K^{2,i},\quad W^{i,n}:=K^{1,i,n}-K^{2,i,n}, \end{equation} and consider the processes \begin{equation}\label{Theta} \Theta:=(Y,Z,U,W),\quad \Theta^{i}:=(Y^i,Z^i,U^i,W^i),\quad \Theta^{i,n}:=(Y^{i,n},Z^{i,n},U^{i,n},W^{i,n}). \end{equation} \noindent For any fixed $1\leq k\leq n$, let \begin{equation}\label{P-Theta} \mathbb{P}^{k,n}:=\text{Law}\,(\Theta^{1,n},\Theta^{2,n},\ldots,\Theta^{k,n}),\quad \mathbb{P}_{\Theta}^{\otimes k}:=\text{Law}\,(\Theta^1,\Theta^2,\ldots,\Theta^k) \end{equation} be the joint probability laws of the processes $(\Theta^{1,n},\Theta^{2,n},\ldots,\Theta^{k,n})$ and $(\Theta^1,\Theta^2,\ldots,\Theta^k)$, respectively. From the definition of the distance $D_G$, we obtain the following inequality: \begin{align}\label{chaos-0} D_G^p(\mathbb{P}^{k,n},\mathbb{P}_{\Theta}^{\otimes k})&\le k\sup_{i\le k}\left(\|Y^{i,n}-Y^i\|^p_{\mathcal{S}^p}+\|Z^{i,n}-Z^i{\bf e}_i\|^p_{\mathcal{H}^{p,n}} \right. \nonumber \\ &\left. \qquad \qquad\qquad +\|U^{i,n}-U^i{\bf e}_i\|^p_{\mathcal{H}^{p,n}_{\nu}} +\|W^{i,n}-W^{i}\|^p_{\mathcal{S}^p}\right), \end{align} where for each $i=1,\ldots, n$, ${\bf e}_i:=(0,\ldots,0,\underbrace{1}_{i},0,\ldots,0)$. Before we state and prove the propagation of chaos result, we give here a convergence result for the empirical laws of the i.i.d. copies $Y^1,Y^2,\ldots,Y^n$, solutions of \eqref{BSDEParticle-Theta}; it is the key ingredient in the convergence of the particle system \eqref{BSDEParticle} to the solution of \eqref{BSDE1}. \begin{Theorem}[Law of Large Numbers]\label{LLN-2} Let $Y^1,Y^2,\ldots,Y^n$ with terminal values $Y^i_T=\xi^i$ be independent copies of the solution $Y$ of \eqref{BSDE1}. Then, we have \begin{equation}\label{GC-2} \lim_{n\to\infty}\mathbb{E}\left[\underset{0\le t\le T}\sup\, \mathcal{W}_p^p(L_n[\textbf{Y}_t],\mathbb{P}_{Y_t})\right]=0. \end{equation} \end{Theorem} The proof is similar to that of \cite[Theorem 4.1]{ddz21}, so we omit it.
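\noindent Let us also record the elementary coupling bound behind \eqref{chaos-0} (a standard argument, spelled out here for convenience). Since the vectors $(\Theta^{1,n},\ldots,\Theta^{k,n})$ and $(\Theta^{1},\ldots,\Theta^{k})$ are defined on the same probability space, their joint law is an admissible coupling for $D_G$ (extended to $G^k$ via the $\ell^p$-product of the metric $\delta$), whence $$ D_G^p(\mathbb{P}^{k,n},\mathbb{P}_{\Theta}^{\otimes k})\le \mathbb{E}\left[\sum_{i=1}^k \delta(\Theta^{i,n},\Theta^{i})^p\right]\le k\,\sup_{i\le k}\,\mathbb{E}\left[\delta(\Theta^{i,n},\Theta^{i})^p\right]; $$ bounding the Skorokhod distance $d^o$ by the uniform distance then yields \eqref{chaos-0}.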
We now provide the following convergence result for the solution $Y^{i,n}$ of \eqref{BSDEParticle}. \medskip \begin{Proposition}[Convergence of the $Y^{i,n}$'s]\label{chaos-Y-1} Assume that, for some $p\ge 2$, $\gamma_1$, $\gamma_2$, $\kappa_1$ and $\kappa_2$ satisfy \begin{align}\label{smallnessCond-chaos-1} 2^{\frac{p}{2}-1} 7^{p-1}(\gamma_1^p+\gamma_2^p+\kappa_1^p +\kappa_2^p)<1. \end{align} Then, under Assumptions \ref{generalAssump}, \ref{Assump-PS} and \ref{Assump:chaos}, we have \begin{equation}\label{chaos-Y-1-1} \underset{n\to\infty}\lim \,\underset{0\le t\le T}{\sup}\,\mathbb{E}[|Y^{i,n}_t-Y^i_t|^p]=0. \end{equation} In particular, \begin{equation}\label{chaos-Y-1-2} \underset{n\to\infty}\lim \, \|Y^{i,n}-Y^i\|_{\mathcal{H}^{p,1}}=0. \end{equation} \end{Proposition} \begin{proof} Given $t\in[0,T]$, let $\vartheta \in \mathcal{T}^n_t$. By the estimates on BSDEs in Proposition \ref{estimates} (see the appendix below), we have \begin{equation}\label{Y-n-estimate}\begin{array} {lll} |Y^{i,n}_\vartheta-Y^{i}_\vartheta|^{p} \\ \le \underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}\underset{\sigma \in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}\left|\mathcal{E}_{\vartheta,\tau \wedge \sigma}^{\textbf{f}^i \circ \textbf{Y}^{n}}[h_1(\tau,Y^{i,n}_\tau,L_{n}[\textbf{Y}^n_\tau]){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,Y^{i,n}_\sigma,L_{n}[\textbf{Y}^n_\sigma]){\bf 1}_{\{\sigma < \tau\}}+\xi^{i,n}{\bf 1}_{\{\tau \wedge \sigma=T\}}] \right. \\ \left. \qquad\qquad -\mathcal{E}_{\vartheta,\tau \wedge \sigma}^{\textbf{f}^i \circ Y^i }[h_1(\tau,Y^i_\tau,\mathbb{P}_{Y_s|s=\tau}){\bf 1}_{\{\tau \le \sigma<T\}}+h_2(\sigma,Y^i_\sigma,\mathbb{P}_{Y_s|s=\sigma}){\bf 1}_{\{\sigma < \tau\}}+\xi^i {\bf 1}_{\{\tau \wedge \sigma=T\}}] \right|^{p} \\ \le \underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}2^{\frac{p}{2}-1}\mathbb{E}\left[\eta^p\left(\int_\vartheta^{\tau \wedge \sigma} e^{2 \beta (s-\vartheta)} |(\textbf{f}^i \circ \textbf{Y}^{n}) (s,\widehat{Y}^{i,\tau, \sigma}_s,\widehat{Z}^{i,\tau, \sigma}_s,\widehat{U}^{i,\tau, \sigma}_s) \right.\right. \\ \left. \left. \qquad\quad -(\textbf{f}^i \circ Y^i) (s,\widehat{Y}^{i,\tau, \sigma}_s, \widehat{Z}^{i,\tau, \sigma}_s,\widehat{U}^{i,\tau, \sigma}_s)|^2 ds\right)^{\frac{p}{2}} \right. \\ \left.
\qquad\qquad +\left(e^{ \beta(\tau \wedge \sigma-\vartheta)}\left[|h_1(\tau,Y^{i,n}_\tau,L_{n}[\textbf{Y}^n_\tau])-h_1(\tau,Y^i_\tau, \mathbb{P}_{Y_s|s=\tau})|{\bf 1}_{\{\tau \leq \sigma<T\}} \right. \right. \right. \nonumber\\ \left. \left. \left. \qquad\qquad\qquad + |h_2(\sigma,Y^{i,n}_\sigma,L_{n}[\textbf{Y}^n_\sigma])-h_2(\sigma,Y^i_\sigma, \mathbb{P}_{Y_s|s=\sigma})|{\bf 1}_{\{\sigma < \tau\}}\right] + e^{\beta(T-\vartheta)}|\xi^{i,n}-\xi^i|{\bf 1}_{\{\tau \wedge \sigma=T\}}\right)^p |\mathcal{F}_\vartheta\right]\\ \le \underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,} \underset{\sigma \in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,} 2^{\frac{p}{2}-1}\mathbb{E}\left[\int_\vartheta^{\tau \wedge \sigma}T^{\frac{p-2}{2}} e^{p \beta (s-\vartheta)}\eta^p C_f^{p}\mathcal{W}^p_p(L_{n}[\textbf{Y}_s^n],\mathbb{P}_{Y_s})ds +\left( \gamma_1 e^{ \beta(\tau-\vartheta)}|Y^{i,n}_{\tau}-Y^{i}_{\tau}| \right. \right. \\ \left.\left. \qquad\qquad\qquad +\gamma_2 e^{ \beta(\tau-\vartheta)} \mathcal{W}_p(L_{n}[\textbf{Y}_{\tau}^n],\mathbb{P}_{Y_s|s=\tau})+\kappa_1 e^{ \beta(\sigma-\vartheta)}|Y^{i,n}_{\sigma}-Y^{i}_{\sigma}| \right. \right. \\ \left.\left. \qquad\qquad\qquad\qquad\qquad +\kappa_2 e^{ \beta(\sigma-\vartheta)} \mathcal{W}_p(L_{n}[\textbf{Y}_{\sigma}^n],\mathbb{P}_{Y_s|s=\sigma}) +e^{ \beta(T-\vartheta)}|\xi^{i,n}-\xi^i|{\bf 1}_{\{\tau \wedge \sigma=T\}} \right)^p|\mathcal{F}_\vartheta\right], \end{array} \end{equation} with $\eta$, $\beta>0$ such that $\eta \leq \frac{1}{C_f^2}$ and $\beta \geq 2 C_f+\frac{3}{\eta}$, where $(\widehat{Y}^{i,\tau,\sigma},\widehat{Z}^{i,\tau, \sigma},\widehat{U}^{i,\tau, \sigma})$ is the solution of the BSDE associated with driver $\textbf{f}^{i}\circ {\bf Y}^n$, terminal time $\tau \wedge \sigma$ and terminal condition $h_1(\tau,Y^{i,n}_\tau,L_{n}[{\bf Y}^n_s]_{s=\tau}){\bf 1}_{\{\tau \leq \sigma<T\}}+h_2(\sigma,Y^{i,n}_\sigma,L_{n}[{\bf Y}^n_s]_{s=\sigma}){\bf 1}_{\{\sigma < \tau\}}+\xi^{i,n}{\bf 1}_{\{\tau \wedge \sigma=T\}}$. Therefore, we have \begin{equation*} e^{p\beta \vartheta}|Y^{i,n}_\vartheta-Y^{i}_\vartheta|^{p}\le \underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}\mathbb{E}[G^{i,n}_{\vartheta,\tau}|\mathcal{F}_\vartheta]+\underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}\mathbb{E}[H^{i,n}_{\vartheta,\tau}|\mathcal{F}_\vartheta], \end{equation*} where \begin{equation*}\begin{array} {lll} G^{i,n}_{\vartheta,\tau}:=2^{\frac{p}{2}-1} \left\{\int_\vartheta^{\tau}T^{\frac{p-2}{2}}2^{p-1}\eta^p C_f^{p} \left(\frac{1}{n}\sum_{j=1}^ne^{p \beta s}|Y^{j,n}_s-Y^j_s|^p\right) ds \right. \\ \left. \qquad\qquad\qquad +7^{p-1}(\gamma_1^p+\gamma_2^p) \left(e^{p\beta\tau}|Y^{i,n}_{\tau}-Y^{i}_{\tau}|^p+ \frac{1}{n}\sum_{j=1}^ne^{p\beta\tau}|Y^{j,n}_{\tau}-Y^j_{\tau}|^p\right) \right. \\ \left. \qquad\qquad\qquad + \left(7^{p-1}\gamma_2^p+2^{p-1}T\eta^p T^{\frac{p-2}{2}} C_f^{p}\right)\underset{0\le s\le T}{\sup}\, e^{p \beta s}\mathcal{W}^p_p(L_{n}[\textbf{Y}_s],\mathbb{P}_{Y_s})+ 7^{p-1}e^{p\beta T}|\xi^{i,n}-\xi^i|^p{\bf 1}_{\{\tau=T\}}
\right\} \end{array} \end{equation*} and \begin{equation*}\begin{array} {lll} H^{i,n}_{\vartheta,\tau}:=2^{\frac{p}{2}-1} 7^{p-1}(\kappa_1^p+\kappa_2^p) \left(e^{p\beta\tau}|Y^{i,n}_{\tau}-Y^{i}_{\tau}|^p+ \frac{1}{n}\sum_{j=1}^ne^{p\beta\tau}|Y^{j,n}_{\tau}-Y^j_{\tau}|^p\right) \\ \qquad\qquad\qquad + 2^{\frac{p}{2}-1} 7^{p-1}\kappa_2^p \underset{0\le s\le T}{\sup}\, e^{p \beta s}\mathcal{W}^p_p(L_{n}[\textbf{Y}_s],\mathbb{P}_{Y_s}). \end{array} \end{equation*} Setting $V^{n,p}_t:=\frac{1}{n}\sum_{j=1}^ne^{p \beta t}|Y^{j,n}_t-Y^j_t|^p$ and $$ \Gamma_{n,p}:=2^{\frac{p}{2}-1} \left(7^{p-1}\gamma_2^p+7^{p-1}\kappa_2^p+2^{p-1}T\eta^p T^{\frac{p-2}{2}} C_f^{p}\right)\underset{0\le s\le T}{\sup}\, e^{p \beta s}\mathcal{W}^p_p(L_{n}[\textbf{Y}_s],\mathbb{P}_{Y_s})+ 2^{\frac{p}{2}-1} 7^{p-1}e^{p\beta T}|\xi^{i,n}-\xi^i|^p, $$ we obtain \begin{align}\label{V-n-p} V^{n,p}_\vartheta &\le \underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}\mathbb{E}[\int_\vartheta^{\tau} 2^{\frac{p}{2}-1} 2^{p-1}T^{\frac{p-2}{2}}\eta^p C_f^{p} V^{n,p}_sds+2^{\frac{p}{2}-1} 7^{p-1}(\gamma_1^p+\gamma_2^p)V^{n,p}_{\tau}+\Gamma_{n,p}|\mathcal{F}_\vartheta] \nonumber \\ &+\underset{\tau\in\mathcal{T}^n_\vartheta}{{\rm ess}\,\sup\limits\,}\mathbb{E}[2^{\frac{p}{2}-1} 7^{p-1}(\kappa_1^p+\kappa_2^p)V^{n,p}_{\tau}|\mathcal{F}_\vartheta]. \end{align} Therefore, we get $$ \mathbb{E}[V^{n,p}_\vartheta]\le 2^{\frac{p}{2}-1} 7^{p-1}(\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p)\underset{\tau\in\mathcal{T}^n_\vartheta}{\sup}\,\mathbb{E}[V^{n,p}_{\tau}]+\mathbb{E}[\int_\vartheta^T 2^{p-1}2^{\frac{p}{2}-1} T^{\frac{p-2}{2}}\eta^p C_f^{p} V^{n,p}_sds+\Gamma_{n,p}]. $$ Since $\mathcal{T}^n_\vartheta \subset \mathcal{T}^n_t$ and by arbitrariness of $\vartheta \in \mathcal{T}^n_t$, we obtain $$ \lambda \mathbb{E}[V^{n,p}_{t}] \leq \lambda \underset{ \vartheta \in \mathcal{T}_t^n}{\sup}\,\mathbb{E}[V^{n,p}_{\vartheta}]\le \mathbb{E}[\int_t^T 2^{p-1} 2^{\frac{p}{2}-1} \eta^p T^{\frac{p-2}{2}} C_f^{p} V^{n,p}_sds+\Gamma_{n,p}], $$ where $\lambda:=1-2^{\frac{p}{2}-1} 7^{p-1}(\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p) >0$ by assumption \eqref{smallnessCond-chaos-1}. By Gronwall's inequality, applied backward in time to $t\mapsto \mathbb{E}[V^{n,p}_t]$, we have $$ \underset{0\le t\le T}{\sup}\,\mathbb{E}[V^{n,p}_{t}]\le \frac{ e^{K_p}}{\lambda}\mathbb{E}[\Gamma_{n,p}], $$ where $K_p:=\frac{1}{\lambda}2^{p-1}2^{\frac{p}{2}-1} T^{\frac{p-2}{2}}\eta^p C_f^{p}T$. But, in view of the exchangeability of the processes $(Y^{i,n},Y^i),i=1,\ldots,n$ (see Proposition \ref{exchange}), we have $\mathbb{E}[V^{n,p}_{t}]=\mathbb{E}[e^{p\beta t}|Y^{i,n}_t-Y^i_t|^p]$. Thus, $$ \underset{0\le t\le T}{\sup}\,\mathbb{E}[e^{p\beta t}|Y^{i,n}_t-Y^i_t|^p]\le \frac{ e^{K_p}}{\lambda}\mathbb{E}[\Gamma_{n,p}] \to 0 $$ as $n\to\infty$, in view of Theorem \ref{LLN-2} and Assumption \ref{Assump:chaos}, as required. \qed \end{proof} We now derive the following propagation of chaos result. \begin{Theorem}[Propagation of chaos of the $Y^{i,n}$'s]\label{prop-y} Under the assumptions of Proposition \ref{chaos-Y-1}, the solution $Y^{i,n}$ of the particle system \eqref{BSDEParticle} satisfies the propagation of chaos property, i.e. for any fixed positive integer $k$, $$ \underset{n\to\infty}{\lim} \text{Law}\,(Y^{1,n},Y^{2,n},\ldots,Y^{k,n})=\text{Law}\,(Y^1,Y^2,\ldots,Y^k).
$$ \end{Theorem} \begin{proof} Set $P^{k,n}:=\text{Law}\,(Y^{1,n},Y^{2,n},\ldots,Y^{k,n})$ and $P^{\otimes k}:=\text{Law}\,(Y^1,Y^2,\ldots,Y^k)$. Consider the Wasserstein metric on $\mathcal{P}_2(\mathbb{H}^2)$ defined by \begin{equation}\label{W-Y} D_{\mathbb{H}^2}(P,Q)=\inf\left\{\left(\int_{\mathbb{H}^2\times \mathbb{H}^2} \|y-y^{\prime}\|_{\mathbb{H}^2}^2 R(dy,dy^{\prime})\right)^{1/2}\right\}, \end{equation} where the infimum is taken over $R\in\mathcal{P}(\mathbb{H}^2\times \mathbb{H}^2)$ with marginals $P$ and $Q$. Note that, since $p \geq 2$, it is enough to prove the convergence with respect to $D_{\mathbb{H}^2}$. Since $\mathbb{H}^2$ is a Polish space, $(\mathcal{P}_2(\mathbb{H}^2), D_{\mathbb{H}^2})$ is a Polish space, and convergence in $D_{\mathbb{H}^2}$ implies weak convergence. Thus, we obtain the propagation of chaos property for the $Y^{i,n}$'s if we can show that $\underset{n\to\infty}{\lim}D_{\mathbb{H}^2}(P^{k,n},P^{\otimes k})=0$. But, this follows from the fact that $$ D^2_{\mathbb{H}^2}(P^{k,n},P^{\otimes k})\le k\sup_{i\le k}\|Y^{i,n}-Y^i\|_{\mathcal{H}^{2,1}}^2 $$ and \eqref{chaos-Y-1-2} for $p=2$. \qed \end{proof} \medskip In the next proposition, we show, for $p>2$, a convergence result for the whole solution $(Y^{i,n},Z^{i,n},U^{i,n},W^{i,n})$ of the system \eqref{BSDEParticle}. \begin{Proposition}\label{chaos-1} Assume that, for some $p>2$, $\gamma_1$, $\gamma_2$, $\kappa_1$ and $\kappa_2$ satisfy \begin{align}\label{smallnessCond-chaos} 2^{9p/2-3}(\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p)<\left(\frac{p-\kappa}{2p}\right)^{p/\kappa} \end{align} for some $\kappa\in [2,p)$. Then, under Assumptions \ref{generalAssump}, \ref{Assump-PS} and \ref{Assump:chaos}, we have \begin{equation}\label{chaos-1-1}\begin{array}{ll} \underset{n\to\infty}\lim \,\left(\|Y^{i,n}-Y^i\|_{\mathcal{S}^p}+\|Z^{i,n}-Z^i{\bf e}_i\|_{\mathcal{H}^{p,n}}+ \|U^{i,n}-U^i{\bf e}_i\|_{\mathcal{H}^{p,n}_\nu}+ \|W^{i,n}-W^i\|_{\mathcal{S}^p}\right)=0. \end{array} \end{equation} Here, ${\bf e}_1,\ldots,{\bf e}_n$ denote the standard unit vectors in $I\!\!R^n$. \end{Proposition} \begin{proof} \noindent \uline{Step 1.} In view of \eqref{Y-n-estimate}, for any $\kappa \ge 2$ and any $t\leq T$, we have \begin{equation*}\begin{array} {lll} |Y^{i,n}_t-Y^{i}_t|^{\kappa} \le \underset{\tau\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,} \underset{\sigma\in\mathcal{T}^n_t}{{\rm ess}\,\sup\limits\,}2^{\frac{\kappa}{2}-1}\mathbb{E}\left[\int_t^{\tau \wedge \sigma}e^{\kappa \beta (s-t)}\eta^{\kappa} T^{\frac{\kappa-2}{\kappa}}C_f^{\kappa}\mathcal{W}^{\kappa}_{p}(L_{n}[\textbf{Y}_s^n],\mathbb{P}_{Y_s})ds \right. \\ + \left. \left( \gamma_1 e^{ \beta(\tau-t)}|Y^{i,n}_{\tau}-Y^{i}_{\tau}|+\gamma_2 e^{ \beta(\tau-t)} \mathcal{W}_{p}(L_{n}[\textbf{Y}_{\tau}^n],\mathbb{P}_{Y_s|s=\tau}) \right. \right. \\ \left. \left. +\kappa_1 e^{ \beta(\sigma-t)}|Y^{i,n}_{\sigma}-Y^{i}_{\sigma}|+\kappa_2 e^{ \beta(\sigma-t)} \mathcal{W}_{p}(L_{n}[\textbf{Y}_{\sigma}^n],\mathbb{P}_{Y_s|s=\sigma}) +e^{ \beta(T-t)}|\xi^{i,n}-\xi^i|{\bf 1}_{\{\tau \wedge \sigma=T\}} \right)^{\kappa}|\mathcal{F}_t\right], \end{array} \end{equation*} where $\eta,\beta>0$ are such that $\eta \leq \frac{1}{C_f^2}$ and $\beta \geq 2 C_f+\frac{3}{\eta}$.
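\noindent Let us make explicit the role of the auxiliary exponent $\kappa<p$ (a standard observation, recorded here for clarity): for a nonnegative random variable $G$ with $\mathbb{E}[G^{p/\kappa}]<\infty$, the martingale $M_t:=\mathbb{E}[G|\mathcal{F}_t]$ satisfies Doob's maximal inequality with exponent $q:=\frac{p}{\kappa}>1$, $$ \mathbb{E}\Big[\underset{0\le t\le T}{\sup}\,M_t^{q}\Big]\le \Big(\frac{q}{q-1}\Big)^{q}\,\mathbb{E}\big[M_T^{q}\big], \qquad \frac{q}{q-1}=\frac{p}{p-\kappa}, $$ which is exactly the constant appearing in \eqref{Y-n-Doob} below, applied with $G=\mathcal{G}_T^{i,n}$.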
Therefore, for any $p>\kappa \ge 2$, we have \begin{equation*} e^{p\beta t}|Y^{i,n}_t-Y^{i}_t|^{p}\le \mathbb{E}[\mathcal{G}^{i,n}_{T}|\mathcal{F}_t]^{p/\kappa}, \end{equation*} where $$ \begin{array}{lll} \mathcal{G}_T^{i,n}:=2^{\frac{\kappa}{2}-1} \left\{\int_0^TT^{\frac{\kappa-2}{\kappa}}e^{\kappa \beta s}\eta^{\kappa} C_f^{\kappa}\mathcal{W}^{\kappa}_{p}(L_{n}[\textbf{Y}_s^n],\mathbb{P}_{Y_s})ds \right. \\ \left. \qquad\qquad\qquad +\left( (\gamma_1+\kappa_1) \underset{0\le s\le T}{\sup}\,e^{\beta s}|Y^{i,n}_{s}-Y^{i}_{s}| +(\gamma_2+\kappa_2) \underset{0\le s\le T}{\sup}\,e^{ \beta s} \mathcal{W}_{p}(L_{n}[\textbf{Y}_{s}^n],\mathbb{P}_{Y_s}) +e^{ \beta T}|\xi^{i,n}-\xi^i|\right)^{\kappa}\right\}. \end{array} $$ Thus, by Doob's inequality, we get \begin{equation}\label{Y-n-Doob} \mathbb{E}[\underset{0\le t\le T}{\sup}\,e^{p\beta t}|Y^{i,n}_t-Y^{i}_t|^{p}]\le \left(\frac{p}{p-\kappa}\right)^{p/\kappa}\mathbb{E}\left[\left(\mathcal{G}_T^{i,n}\right)^{p/\kappa}\right]. \end{equation} Therefore, we have $$\begin{array}{lll} 2^{1-\frac{p}{\kappa}}\left(\mathcal{G}_T^{i,n}\right)^{p/\kappa}\le C_1\int_0^T \underset{0\le t\le s}{\sup}\,e^{p\beta t}\mathcal{W}_p^p(L_{n}[\textbf{Y}_{t}^n],L_{n}[\textbf{Y}_{t}])ds \\ \qquad\qquad\qquad\qquad +2^{3p/2-2}\left((\gamma_1+\kappa_1)\underset{0\le t\le T}{\sup}\,e^{\beta t}|Y^{i,n}_t-Y^{i}_t|+(\gamma_2+\kappa_2)\underset{0\le t\le T}{\sup}\,e^{\beta t}\mathcal{W}_{p}(L_{n}[\textbf{Y}_{t}^n],L_{n}[\textbf{Y}_{t}])\right)^p +\Lambda_n, \end{array} $$ where $C_1:=2^{3p/2-2}T^{2\frac{p}{\kappa}-1}\eta^pC_f^p$ and $$ \Lambda_n:=C_1 T\underset{0\le s\le T}{\sup}\,e^{ p\beta s} \mathcal{W}^p_{p}(L_{n}[\textbf{Y}_{s}],\mathbb{P}_{Y_s})+2^{3p/2-2}\left( (\gamma_2+\kappa_2) \underset{0\le s\le T}{\sup}\,e^{ \beta s} \mathcal{W}_{p}(L_{n}[\textbf{Y}_{s}],\mathbb{P}_{Y_s}) +e^{ \beta T}|\xi^{i,n}-\xi^i|\right)^{p}. $$ \noindent But, in view of the exchangeability of the processes $(Y^{i,n},Y^i),i=1,\ldots,n$ (see Proposition \ref{exchange}), for each $s \in [0,T]$ we have $$ \mathbb{E}\left[\underset{0\le t\le s}{\sup}e^{\beta p t}\mathcal{W}_p^p(L_n[\textbf{Y}_t^n],L_n[\textbf{Y}_t])\right]\leq \mathbb{E}\left[\underset{0\le t\le s}{\sup}e^{\beta p t}|Y^{i,n}_t-Y^{i}_t|^p\right],\quad i=1,\ldots,n, $$ and so $$\begin{array}{lll} 2^{1-\frac{p}{\kappa}}\mathbb{E}[\left(\mathcal{G}_T^{i,n}\right)^{p/\kappa}]\le C_1\mathbb{E}\left[\int_0^T \underset{0\le t\le s}{\sup}\,e^{p\beta t}\mathcal{W}_p^p(L_{n}[\textbf{Y}_{t}^n],L_{n}[\textbf{Y}_{t}])ds\right]+\mathbb{E}\left[\Lambda_n\right]\\ \qquad\qquad\qquad\qquad\qquad\qquad +2^{9p/2-3}(\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p)\mathbb{E}[\underset{0\le t\le T}{\sup}\,e^{p\beta t}|Y^{i,n}_t-Y^{i}_t|^{p}]. \end{array} $$ Therefore, \eqref{Y-n-Doob} becomes \begin{equation*} \begin{array} {lll} \mu\mathbb{E}\left[\underset{0\le t\le T}{\sup}e^{p\beta t}|Y^{i,n}_t-Y^{i}_t|^p\right] \le C_1\mathbb{E}\left[\int_0^T \underset{0\le t\le s}{\sup}e^{\beta p t}|Y^{i,n}_t-Y^{i}_t|^pds\right]+\mathbb{E}\left[\Lambda_n\right], \end{array} \end{equation*} where $\mu:=2^{1-\frac{p}{\kappa}}\left(\frac{p}{p-\kappa}\right)^{-p/\kappa}-2^{9p/2-3} (\gamma_1^p+\gamma_2^p+\kappa_1^p+\kappa_2^p)$. Using condition \eqref{smallnessCond-chaos}, which ensures that $\mu>0$, and Gronwall's inequality, we finally obtain \begin{equation*} \mathbb{E}\left[\underset{0\le t\le T}{\sup}e^{p\beta t}|Y^{i,n}_t-Y^{i}_t|^p\right] \le e^{\frac{C_1}{\mu}T}\mathbb{E}\left[\Lambda_n\right].
\end{equation*} Next, by \eqref{GC-2} together with Assumption \ref{Assump:chaos} (ii), we have $$ \underset{n\to\infty}\lim\, \mathbb{E}\left[\Lambda_n\right]= 0, $$ which yields the desired result. \medskip \noindent \uline{Step 2.} We now prove that $\underset{n\to\infty}{\lim}\|Z^{i,n}-Z^i{\bf e}_i\|_{\mathcal{H}^{p,n}}=0, \,\underset{n\to\infty}{\lim}\|U^{i,n}-U^i{\bf e}_i\|_{\mathcal{H}^{p,n}_\nu}=0$ and $\underset{n\to\infty}{\lim}\|W^{i,n}-W^i\|_{\mathcal{S}^{p}}=0$. We start by showing that $\underset{n\to\infty}{\lim}\|Z^{i,n}-Z^i{\bf e}_i\|_{\mathcal{H}^{p,n}}=0$ and $\underset{n\to\infty}{\lim}\|U^{i,n}-U^i{\bf e}_i\|_{\mathcal{H}^{p,n}_\nu}=0$. For $s \in [0,T]$, denote $\delta Y^{i,n}_s:= Y_s^{i,n}-Y_s^{i}$, $\delta Z_s^{i,n}:= Z_s^{i,n}-Z_s^{i}{\bf e}_i$, $\delta U^{i,n}_s:= U_s^{i,n}-U_s^{i}{\bf e}_i$, $\delta K_s^{1,i,n}:= K_s^{1,i,n}-K_s^{1,i}$, $\delta K_s^{2,i,n}:= K_s^{2,i,n}-K_s^{2,i}$, $\delta f^{i,n}_s:=f(s,Y_s^{i,n}, Z_s^{i,i,n}, U_s^{i,i,n}, L_{n}[\textbf{Y}_s^n])-f(s,Y_s^{i}, Z_s^{i}, U_s^{i}, \mathbb{P}_{Y^{i}_s})$, $\delta \xi^{i,n}:=\xi^{i,n}-\xi^{i}$, $\delta h^{1,i,n}_s:=h_1(s,Y_s^{i,n},L_{n}[\textbf{Y}^n_s])-h_1(s,Y_s^{i}, \mathbb{P}_{Y^{i}_s})$ and $\delta h^{2,i,n}_s:=h_2(s,Y_s^{i,n},L_{n}[\textbf{Y}^n_s])-h_2(s,Y_s^{i}, \mathbb{P}_{Y^{i}_s})$. By applying It\^{o}'s formula to $|\delta Y^{i,n}_t|^2$, we obtain \begin{align*} &|\delta Y^{i,n}_t|^2+\int_t^T |\delta Z^{i,n}_s|^2ds+\int_t^T\int_{I\!\!R^*}\sum_{j=1}^n|\delta U^{i,j,n}_s(e)|^2 N^j(ds,de)+\sum_{t<s\leq T}|\Delta W_s^{i,n}-\Delta W_s^{i}|^2= |\delta \xi^{i,n}|^2 \nonumber \\ & + 2\int_t^T \delta Y_s^{i,n}\delta f_s^{i,n}ds - 2 \int_t^T \delta Y_s^{i,n} \sum_{j=1}^n\delta Z_s^{i,j,n} dB^j_s - 2 \int_t^T \int_{I\!\!R^*} \delta Y_{s^-}^{i,n} \sum_{j=1}^n\delta U_s^{i,j,n}(e) \Tilde{N}^j(ds,de) \nonumber \\ & +2\int_t^T \delta Y_{s^-}^{i,n} d(\delta K_s^{1,i,n})-2\int_t^T \delta Y_{s^-}^{i,n} d(\delta K_s^{2,i,n}). \end{align*} By standard estimates, using the assumptions on the driver $f$, we get, for all $\varepsilon>0$, \begin{equation*}\begin{array}{lll} \int_t^T\delta Y_s^{i,n}\delta f_s^{i,n}ds \leq \int_t^T C_f|\delta Y_s^{i,n}|^2ds+\int_t^T\frac{3}{\varepsilon}C^2_f|\delta Y_s^{i,n}|^2ds+\int_t^T 4\varepsilon \left\{|\delta Z^{i,n}_s|^2+|\delta U^{i,n}_s|^2_\nu\right\}ds \\ \qquad\qquad\qquad\qquad\qquad +\int_t^T4\varepsilon\mathcal{W}^2_p(L_{n}[\textbf{Y}^n_s],\mathbb{P}_{Y^{i}_s}) ds. \end{array} \end{equation*} It follows that, for some constant $C_p>0$ independent of $n$, \begin{equation}\begin{array}{lll}\label{eq1} |\delta Y^{i,n}_0|^p+\left(\int_0^T |\delta Z^{i,n}_s|^2ds\right)^{\frac{p}{2}}+\left(\int_0^T\int_{I\!\!R^*}\sum_{j=1}^n|\delta U^{i,j,n}_s(e)|^2 N^j(ds,de)\right)^{\frac{p}{2}} \leq C_p |\delta \xi^{i,n}|^p \\ \qquad +C_p \left\{\left(2C_f+\frac{6}{\varepsilon}C_f^2\right)T\right\}^{\frac{p}{2}} \underset{0\le s\le T}{\sup} |\delta Y_s^{i,n}|^p +C_p \varepsilon^{\frac{p}{2}} \left(\int_0^T |\delta Z^{i,n}_s|^2 ds\right)^{\frac{p}{2}}+C_p \varepsilon^{\frac{p}{2}} \left(\int_0^T |\delta U^{i,n}_s|^2_\nu ds\right)^{\frac{p}{2}} \\ \qquad +C_p \left(\int_0^T \mathcal{W}^2_p(L_{n}[\textbf{Y}^n_s],\mathbb{P}_{Y^{i}_s})ds\right)^{\frac{p}{2}} +C_p \left\{ \left|\int_0^T \delta Y_s^{i,n} \sum_{j=1}^n\delta Z_s^{i,j,n} dB^j_s \right|^{\frac{p}{2}} \right. \\ \left.
\qquad + \left|\int_0^T \int_{I\!\!R^*} \delta Y_{s^-}^{i,n} \sum_{j=1}^n\delta U_s^{i,j,n}(e) \Tilde{N}^j(ds,de) \right|^{\frac{p}{2}} +\left|\int_0^T \delta Y_{s^-}^{i,n} d(\delta K_s^{1,i,n})\right|^{\frac{p}{2}}+\left|\int_0^T \delta Y_{s^-}^{i,n} d(\delta K_s^{2,i,n})\right|^{\frac{p}{2}}\right\}. \end{array} \end{equation} By applying the Burkholder-Davis-Gundy inequality, we derive that there exist some constants $m_p>0$ and $l_p>0$ such that \begin{align*} C_p \mathbb{E}\left[\left|\int_0^T \delta Y_s^{i,n} \sum_{j=1}^n\delta Z_s^{i,j,n} dB^j_s \right|^{\frac{p}{2}}\right] & \leq m_p \mathbb{E} \left[\left(\int_0^T (\delta Y_s^{i,n})^2 |\delta Z_s^{i,n}|^2 ds\right)^{\frac{p}{4}}\right] \\ & \leq \frac{m^2_p}{2} \|\delta Y^{i,n}\|^p_{\mathcal{S}^p}+\frac{1}{2}\mathbb{E}\left[\left(\int_0^T |\delta Z^{i,n}_s|^2ds\right)^{\frac{p}{2}}\right] \end{align*} and \begin{align*} C_p \mathbb{E}\left[\left|\int_0^T \int_{I\!\!R^*} \delta Y_{s^-}^{i,n} \sum_{j=1}^n \delta U_s^{i,j,n}(e) \Tilde{N}^j(ds,de) \right|^{\frac{p}{2}}\right] \leq l_p \mathbb{E} \left[\left(\int_0^T (\delta Y_{s^-}^{i,n})^2 \int_{I\!\!R^*}\sum_{j=1}^n(\delta U_s^{i,j,n}(e))^2 N^j(ds,de)\right)^{\frac{p}{4}}\right] \nonumber \\ \qquad\qquad \leq \frac{l^2_p}{2} \|\delta Y^{i,n}\|^p_{\mathcal{S}^p}+\frac{1}{2}\mathbb{E}\left[\left(\int_0^T \int_{I\!\!R^*}\sum_{j=1}^n(\delta U^{i,j,n}_s(e))^2 N^j(ds,de)\right)^{\frac{p}{2}}\right]. \end{align*} Also recall that, for some constant $e_p>0$, we have \begin{align*} \mathbb{E}\left[\left(\int_0^T \int_{I\!\!R^*}|\delta U^{i,n}_s(e)|^2\nu(de)ds\right)^{\frac{p}{2}}\right] \leq e_p \mathbb{E}\left[\left(\int_0^T \int_{I\!\!R^*} \sum_{j=1}^n |\delta U^{i,j,n}_s(e)|^2N^j(de,ds)\right)^{\frac{p}{2}}\right]. \end{align*} Now, taking the expectation in \eqref{eq1}, using the above inequalities and choosing $\varepsilon>0$ small enough, we obtain \begin{align}\label{ii1} &\mathbb{E}\left[\left(\int_0^T |\delta Z^{i,n}_s|^2ds\right)^{\frac{p}{2}}+\left(\int_0^T \|\delta U^{i,n}_s\|_\nu^2ds\right)^{\frac{p}{2}}\right] \leq C_p \mathbb{E}\left[|\delta \xi^{i,n}|^p\right] \nonumber \\ &+K_{C_f,\varepsilon,T,p} \|\delta Y^{i,n}\|^p_{\mathcal{S}^p}+C_p \mathbb{E}\left[\sup_{0 \leq s \leq T} \mathcal{W}^p_p(L_{n}[\textbf{Y}^n_s],\mathbb{P}_{Y^{i}_s})\right]\nonumber \\ &+\mathbb{E}\left[\left(\sup_{0 \leq s \leq T}|\delta Y_s^{i,n}|(K_T^{1,i,n}+K_T^{1,i})\right)^{\frac{p}{2}}\right]+\mathbb{E}\left[\left(\sup_{0 \leq s \leq T}|\delta Y_s^{i,n}|(K_T^{2,i,n}+K_T^{2,i})\right)^{\frac{p}{2}}\right]. \end{align} From Step 1, we have $\|\delta Y^{i,n}\|^p_{\mathcal{S}^p} \to 0$, which also implies the uniform boundedness of the sequence $\left(\|Y^{i,n}\|^p_{\mathcal{S}^p}\right)_{n \geq 0}$. Furthermore, by Assumption \ref{Assump:chaos} and Proposition \ref{KK}, we obtain that $\mathbb{E}[(K_T^{1,i,n})^p]$ (resp. $\mathbb{E}[(K_T^{2,i,n})^p]$) are uniformly bounded. Taking the limit with respect to $n$ in \eqref{ii1}, and using Theorem \ref{LLN-2}, we get the convergence $\underset{n\to\infty}{\lim}\|Z^{i,n}-Z^i{\bf e}_i\|_{\mathcal{H}^{p,n}}=0, \,\underset{n\to\infty}{\lim}\|U^{i,n}-U^i{\bf e}_i\|_{\mathcal{H}^{p,n}_\nu}=0$.
From the equations satisfied by $W^{i,n}$ and $W^{i}$, that is \begin{align}\label{eq2} W_T^{i,n}=Y_0^{i,n}-\xi^{i,n}& -\int_0^Tf(s,Y_s^{i,n}, Z_s^{i,i,n}, U_s^{i,i,n}, L_{n}[\textbf{Y}_s^n])ds \nonumber \\ & +\sum_{j=1}^n\int_0^TZ_s^{i,j,n}dB^j_s+\int_0^T\int_{I\!\!R^*} \sum_{j=1}^nU_s^{i,j,n}(e)\Tilde{N}^j(ds,de) \end{align} and \begin{align}\label{eq3} W_T^{i}=Y_0^{i}-\xi^{i}-\int_0^Tf(s,Y_s^{i}, Z_s^{i}, U_s^{i}, \mathbb{P}_{Y_s^{i}})ds+\int_0^TZ_s^{i}dB^i_s+\int_0^T\int_{I\!\!R^*} U_s^{i}(e)\Tilde{N}^i(ds,de), \end{align} and the convergence of $(Y^{i,n},Z^{i,n}, U^{i,n})$ shown above, we derive that $\underset{n\to\infty}{\lim}\|W^{i,n}-W^i\|_{\mathcal{S}^{p}}=0$.\qed \end{proof} \medskip \begin{Corollary}[Propagation of chaos]\label{prop-y-u-z} Under the assumptions of Proposition \ref{chaos-1}, the particle system \eqref{BSDEParticle} satisfies the propagation of chaos property, i.e. for any fixed positive integer $k$, $$ \underset{n\to\infty}{\lim} \text{Law}\,(\Theta^{1,n},\Theta^{2,n},\ldots,\Theta^{k,n})=\text{Law}\,(\Theta^1,\Theta^2,\ldots,\Theta^k), $$ where $$ \Theta^{i,n}:=(Y^{i,n},Z^{i,n},U^{i,n},K^{1,i,n}-K^{2,i,n}),\quad \Theta^{i}:=(Y^i,Z^i,U^i,K^{1,i}-K^{2,i}). $$ \end{Corollary} \begin{proof} We obtain the propagation of chaos if we can show that $\underset{n\to\infty}{\lim}D_{G}^p(\mathbb{P}^{k,n},\mathbb{P}_{\Theta}^{\otimes k})=0$. But, this follows from the inequality \eqref{chaos-0} and Proposition \ref{chaos-1}. \qed \end{proof} \begin{Remark} The question of convergence of $S$-saddle points of the particle system to those of the limit process is more delicate and will be addressed in a forthcoming paper. \end{Remark}
\section{Introduction} Coronal lines are collisionally excited forbidden transitions within low-lying levels of highly ionized species (IP $>$ 100 eV). As such, these lines form in extremely energetic environments and thus are unique tracers of AGN activity; they are not seen in starburst galaxies. Coronal lines appear from the X-rays to the IR and are common in Seyfert galaxies regardless of their type (Penston et al. 1984; Marconi et al. 1994; Prieto \& Viegas 2000). The strongest ones are seen in the IR; in the near-IR they can even dominate the line spectrum (Reunanen et al. 2003). Owing to the high ionization potential, these lines are expected to be limited to a few tens to a hundred parsecs around the active nucleus. On the basis of spectroscopic observations, Rodriguez-Ardila et al. (2004, 2005) unambiguously established the size of the coronal line region (CLR) in NGC~1068 and the Circinus Galaxy, using the coronal lines [SiVII] 2.48~$\rm \mu m$, [SiVI] 1.96~$\rm \mu m$, [Fe\,{\sc vii}] 6087~\AA, [Fe\,{\sc x}] 6374~\AA\ and [Fe\,{\sc xi}] 7892~\AA. They find these lines extending up to 20 to 80 pc from the nucleus, depending on ionization potential. Given those sizes, we started an adaptive-optics-assisted imaging program with the ESO/VLT aimed at revealing the detailed morphology of the CLR in some of the nearest Seyfert galaxies. We use as a tracer the isolated IR line [Si VII] 2.48~$\rm \mu m$\ (IP = 205.08 eV). This letter presents the resulting narrow-band images of the [Si VII] emission line, which reveal for the first time the detailed morphology of the CLR, with a resolution suitable for comparison with radio and optical lower-ionization-gas images. The morphology of the CLR is sampled with a spatial resolution almost a factor of 5 better than any previously obtained, corresponding to scales $\lesssim 10$ pc. The galaxies presented are all Seyfert type 2: Circinus, NGC~1068, ESO~428-G014 and NGC~3081. Ideally, we would have liked to image type 1 objects but, in the Southern Hemisphere, there are as yet no known, suitable type~1 sources at sufficiently low redshift to guarantee the inclusion of [Si VII] 2.48~$\rm \mu m$\ entirely in the filter pass-band. \section{Observations, image registration and astrometry} Observations were done with the adaptive-optics-assisted IR camera NACO at the ESO/VLT. Two narrow-band filters were used, one centered on the coronal [SiVII] 2.48~$\mu m$ line and an adjacent band centered on the 2.42~$\mu m$ line-free continuum. The image scale was 0.027 arcsec pixel$^{-1}$ in all cases except NGC\,1068, where it was 0.013 arcsec pixel$^{-1}$. Integration times were chosen to keep the counts within the linearity range: $\sim 20$ minutes per filter and source. For each filter, the photometry was calibrated against standard stars observed after each science target. These stars were further used as PSF references when needed and for deriving a correction factor that normalizes both narrow-band filters to provide an equal number of counts for a given flux. In deriving this factor it is assumed that the continuum level of the stars is the same in both filters and that no emission lines are present. The wavefront sensor of the adaptive optics system used the optical nucleus of the galaxies as reference to determine the seeing corrections. The achieved spatial resolution was estimated from stars available in the field of the galaxies when possible; this was not possible in NGC 3081 and NGC 1068 (cf. Table 1). The resolutions were comparable in both filters within the errors reported in Table 1.
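For reference, the conversion between angular and linear scales in Table 1 follows from the small-angle relation below; the distances used in this illustration ($D\simeq 4$ Mpc for Circinus and $D\simeq 14.4$ Mpc for NGC 1068) are the commonly adopted values and are assumed here only for this cross-check, as they are not restated in this letter: $$ 1~{\rm arcsec} \;\longleftrightarrow\; \frac{D\,[{\rm pc}]}{206265}~{\rm pc} \;\simeq\; \begin{cases} 19~{\rm pc} & (D\simeq 4~{\rm Mpc},\ {\rm Circinus}),\\ 70~{\rm pc} & (D\simeq 14.4~{\rm Mpc},\ {\rm NGC~1068}), \end{cases} $$ consistent with Table 1; the $0.19''$ resolution achieved for Circinus thus corresponds to $0.19\times 19\simeq 3.6$ pc.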
Continuum-free [SiVII] 2.48~$\rm \mu m$\ line images are shown in Figs. 1 and 2 for each galaxy. These were produced after applying the normalization factor derived from the standard stars. The total integrated coronal line emission derived from these images is listed in Table 2. For comparison, [SiVII] 2.48~$\rm \mu m$\ fluxes derived from long-slit spectroscopy are also provided. Also shown in these figures are images of the standard stars taken with the 2.48~$\rm \mu m$\ filter, which were also used as PSF controls. These images provide a rough assessment of the image quality and resolution achieved in the science frames. For Circinus and ESO 428-G014, a more accurate evaluation is possible from the images of a field star; one of these field stars is shown in both filters in Figs. 1e and 2b, respectively. To assess the image quality at the lowest signal levels, the images of the field stars in particular are normalized to the galaxy peak in the corresponding filter. These stars are much fainter than the galaxy nucleus: the star peak is a mere $\sim$5\% of the galaxy peak. \begin{table*} \centering \caption{Galaxy scales and achieved NACO angular resolution. $*$: for NGC 1068, the size of the nucleus is given, as K-band interferometry sets an upper limit for the core of 5 mas (Weigelt et al. 2004); for NGC 3081, the size of a PSF star taken after the science frames is given.} \begin{tabular}{ccccccc} \hline AGN & Seyfert & 1 arcsec & Stars & FWHM & FWHM & Size of nucleus \\ &type &in pc & in field & arcsec & pc & FWHM arcsec \\ \hline Circinus & 2 & 19 & 2 & 0.19$\pm$0.02 & 3.6 & 0.27 \\ NGC 1068 & 2 & 70 & 0 & 0.097$^*$ & 6.8 & $<$0.097 \\ ESO\,428-G014 & 2 & 158 & 3 & 0.084$\pm$0.006 & 13 & 0.15$\pm$0.01 \\ NGC\,3081 & 2 & 157 & 0 & 0.095$^*$ & 14 & $<$0.32 \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Size and photometry of the 2.48~$\mu m$ coronal line region. $*$: from a 1'' $\times$ 1.4'' aperture \label{flxTb}} \begin{tabular}{ccccc} \hline {AGN} & {Radius from nucleus} & {Flux NACO} & {Flux long-slit} & {Reference long-slit} \\ &pc &\multicolumn{2}{c}{in units of $10^{-14}$~erg~s$^{-1}$ cm$^{-2}$} \\ \hline Circinus & 30 & 20 & 16 & Oliva et al. 1994 \\ NGC 1068 & 70 & 72 & 47$^*$ & Reunanen et al. 2003 \\ ESO\,428-G014 & 120--160 & 2.0 & 0.7$^*$ & Reunanen et al. 2003 \\ NGC\,3081 & 120 & 0.8 & 0.8$^*$ & Reunanen et al. 2003 \\ \hline \end{tabular} \end{table*} Radio and HST images were used, where available, to establish an astrometric reference frame for the CLR in each of the galaxies. For NGC~1068 (Figs. 1a, b \& c), the registration of radio, optical HST and adaptive-optics IR images by Marco et al. (1997; accuracy $\sim$0.05'') was adopted. The comparison of the [SiVII] 2.48~$\rm \mu m$\ line image with the HST [OIII] 5007~\AA\ image followed by assuming that the peak emission in the Marco et al. K-band image coincides with that in the NACO 2.42~$\rm \mu m$\ continuum image. The comparison with the MERLIN 5~GHz image of Gallimore et al. (2004) was done assuming that the nuclear radio source 'S1' and the peak emission in the NACO 2.42~$\rm \mu m$\ image coincide. In Circinus (Figs. 1d, e \& f), the registration of NACO and HST/$H\alpha$ images was done on the basis of 3--4 stars or unresolved star clusters available in all fields, which provides a registration accurate to better than 1 pixel (see Prieto et al. 2004). No radio image of comparable resolution is available for this galaxy. For ESO\,428-G014 (Fig.
2a, b \& c), NACO images were registered on the basis of 3 stars available in the field. Further registration with a VLA 2~cm image (beam 0.2''; Falcke, Wilson \& Simpson 1998) was made on the assumption that the continuum peak at 2.42~$\mu m$ coincides with that of the VLA core. We adopted the astrometry provided by Falcke et al. (uncertainty $\sim$0.3''), who performed the registration of the 2~cm and the HST/H$\alpha$ images, and plotted the HST/H$\alpha$ image atop the NACO coronal line image following that astrometry. NGC~3081 (Figs. 2d, e \& f) has no stars in the field. In this case the NACO 2.42~$\rm \mu m$\ and 2.48~$\rm \mu m$\ images, and an additional NACO deep Ks-band image, were registered using the fact that the NACO adaptive-optics system always centers the images at the same position on the detector to within 1 pixel (0.027''). The registration with an HST/WFPC2 image at 7910~\AA\ (F791W) employed as a reference the outer isophotes of the Ks-band image, which show a morphology very similar to that seen in the HST 7910~\AA\ image. Further comparison with an HST PC2 $H\alpha$ image relied on the astrometry of Ferruit et al. (2000). The registration with an HST/FOC UV image at 2100~\AA\ (F210M) was based on the assumption that the UV nucleus and the continuum peak emission at 2.42~$\rm \mu m$\ coincide. The radio images available for this galaxy have a beam resolution $>0.5''$ (Nagar et al. 1999), which encompasses all the detected extended coronal emission, and they are therefore not used in this work. \section{The size and morphology of the coronal line region} In the four galaxies, the CLR resolves into a bright nucleus and extended emission along a preferred position angle, which usually coincides with that of the extended lower-ionization gas. The size of the CLR is a factor of 3 to 10 smaller than that of the extended narrow line region (NLR). The maximum radius (Table 2) varies from 30 pc in Circinus, to 70 pc in NGC 1068, to $\gtrsim$120 pc in NGC 3081 and ESO~428-G014. The emission in all cases is diffuse or filamentary, and it is difficult to determine whether it further breaks down into compact knots or blobs such as those found in H$\alpha$, [OIII] 5007~\AA\ or radio images, even though the resolutions are comparable. In Circinus, [SiVII] 2.48~$\rm \mu m$\ emission extends across the nucleus and aligns with the orientation of its one-sided ionization cone, seen in H$\alpha$ or in [OIII] 5007~\AA. In these lines the counter-cone is not seen (Wilson et al. 2002), but in [SiVII], presumably owing to the reduced extinction, extended diffuse emission is detected at the counter-cone position (Fig. 1f; Prieto et al. 2004). This has been further confirmed with VLT/ISAAC spectroscopy, which shows both [SiVII] 2.48~$\rm \mu m$\ and [SiVI] 1.96~$\rm \mu m$\ extending up to a 30 pc radius from the nucleus (Rodriguez-Ardila et al. 2004). In the coronal line image, the north-west emission defines a cone opening angle larger than that seen in $H\alpha$. The morphology of [SiVII] in this region suggests that the coronal emission traces the walls of the ionization cone (see Fig. 1f). In ESO~428-G014, the coronal emission is remarkably aligned with the radio jet (Fig. 2c). The 2~cm emission is stronger in the northwest direction, and [SiVII] is stronger in that direction too. H$\alpha$ emission is also collimated along the radio structure, but it spreads farther from the projected collimation axis and extends out to a much larger radius from the nucleus than the coronal or radio emission (Fig. 2b).
Both H$\alpha$ and the 2~cm emission resolve into several blobs, but the coronal emission is more diffuse. In NGC 3081, the coronal emission resolves into a compact nuclear region and a detached faint blob $\sim$120 pc north of it. The HST [OIII] 5007~\AA\ and H$\alpha$ images show a rather collimated structure extending across the nucleus along the north-south direction over a $\sim$300 pc radius (Ferruit et al. 2000). Besides the nucleus, the second brightest region in those lines coincides with the detached [Si VII] emission blob (Fig. 2d). At this same position, we also find UV emission in an HST/FOC image at 2100~\AA. NGC~1068 shows the strongest [Si VII] 2.48~$\rm \mu m$\ emission among the four galaxies, a factor of three larger than in Circinus, and is the only case where the nuclear emission shows detailed structure. At a $\sim$7 pc radius from the radio core S1, the [Si VII] emission divides into three bright blobs. The position of S1 falls in between the blobs. The southern blob looks like a concentric shell. The northern blob coincides with the central [OIII] peak emission at the vertex of the ionization cone; the other two blobs are not associated with a particular enhancement in [OIII] or radio emission (Figs. 1b \& c). The [Si VII] depression at the position of S1 may indicate a very high ionization level already at a 7 pc radius (our resolution) from the center; the interior region might instead be filled with gas at a much higher ionization level, e.g. [FeX], [Si IX] and higher. This central structure, $\sim$14 pc in radius in total, is surrounded in all directions by much lower surface brightness gas, extending up to at least a 70 pc radius. The presence of this diffuse region is confirmed by VLT/ISAAC spectra along the north-south direction, which reveal [SiVI] 1.96~$\rm \mu m$\ and [Si VII] 2.48~$\rm \mu m$\ extending on both sides of the nucleus up to comparable radii (Rodriguez-Ardila et al. 2004, 2005). This diffuse emission shows a slight enhancement on both sides of the 5 GHz jet, but otherwise there appears to be no direct correspondence between the CLR and the radio morphology. \section{Discussion} ESO~428-G014 and NGC~3081 show the largest and best collimated [SiVII] emission, out to a 150 pc radius from the nucleus. To reach those distances by nuclear photoionization alone would require rather low electron densities or a very strong (collimated) radiation field. Density measurements in the CLR are scarce: Moorwood et al. (1996) estimate a density $n_e \sim 5000~\rm cm^{-3}$ in Circinus on the basis of [NeV]~14.3~$\rm \mu m$/24.3~$\rm \mu m$; Erkens et al. (1997) derive $n_e < 10^{7}~\rm cm^{-3}$ in several Seyfert 1 galaxies on the basis of several optical [FeVII] ratios. The latter result may be uncertain because the optical [Fe VII] lines are weak and heavily blended. Taking $n_e \sim 10^4~\rm cm^{-3}$ as a reference value, the resulting ionization parameter at 150~pc from the nucleus is $U \lesssim 10^{-3}$, far too low to produce strong [SiVII] emission (see e.g. Ferguson et al. 1997; Rodriguez-Ardila et al. 2005). We argue that, in addition to photoionization, shocks must contribute to the coronal emission. This proposal is primarily motivated by a parallel spectroscopic study of the kinematics of the CLR gas in several Seyfert galaxies (Rodriguez-Ardila et al. 2005), which reveals coronal line profiles with velocities 500 $\rm km\, s^{-1}$ $< v <$ 2000 $\rm km\, s^{-1}$. Here we assess the proposal in a qualitative manner, by looking for evidence for shocks in the morphology of the gas emission.
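The ionization parameter quoted above follows from the standard definition $U = Q/(4\pi r^2 n_e c)$. A minimal Python sketch, with an assumed ionizing photon rate $Q \sim 10^{54}$~photons~s$^{-1}$ (a generic Seyfert-like value adopted purely for illustration, not a measurement of these objects), reproduces the order of magnitude: \begin{verbatim}
import math

PC_CM = 3.086e18   # parsec in cm
C = 2.998e10       # speed of light in cm/s

def ionization_parameter(q_ion, r_pc, n_e):
    # U = Q / (4 pi r^2 n_e c): q_ion in photons/s, r in pc,
    # n_e in cm^-3; U is dimensionless.
    r = r_pc * PC_CM
    return q_ion / (4.0 * math.pi * r * r * n_e * C)

# Assumed Q ~ 1e54 photons/s and n_e ~ 1e4 cm^-3 at r = 150 pc:
print(ionization_parameter(1e54, 150.0, 1e4))   # ~1e-3
\end{verbatim}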
In ESO~428-G014, the remarkable alignment between [Si VII] and the radio emission is a strong indication of interaction between the radio jet and the ISM. There is spectroscopic evidence of a highly turbulent ISM in this object: asymmetric emission line profiles on either side of the nucleus indicate gas velocities of up to 1400 $\rm km\, s^{-1}$ (Wilson \& Baldwin 1989). Shocks with those velocities heat the gas to temperatures of $\gtrsim 10^7$~K, which will locally produce the bremsstrahlung continuum in the UV -- soft X-rays (Contini et al. 2004) necessary to produce coronal lines. [Si VII] 2.48~$\rm \mu m$, with IP = 205.08 eV, will certainly be enhanced in this process. The concentric shell-like structure seen in NGC 3081 in [OIII] 5007~\AA\ and H$\alpha$ (Ferruit et al. 2000) is even more suggestive of propagating shock fronts. From the [OIII]/H$\alpha$ map of Ferruit et al., the excitation level at the position of the [Si VII] northern blob is similar to that of the nucleus, which points to a similar ionization parameter despite the increasing distance from the nucleus. The cloud density might then decrease with distance to balance the ionization parameter, but this would demand a strong radiation field to keep the line emission efficient. Alternatively, a local source of excitation is needed. The presence of cospatial UV continuum, possibly locally generated bremsstrahlung, and [Si VII] line emission circumstantially supports the shock-excitation proposal. In the case of Circinus and NGC~1068, the direct evidence for shocks from the [Si VII] images is less obvious. In NGC~1068, the orientation of the three-blob nuclear structure shows no obvious correspondence with the radio jet; it may still be possible that our narrow-band filter missed the high-velocity coronal gas component measured in NGC 1068. In Circinus, there are no radio maps of sufficient resolution for a meaningful comparison. However, both galaxies present high-velocity nuclear outflows, inferred from the asymmetric and blueshifted profiles measured in the [OIII] 5007 gas in the case of Circinus (Veilleux \& Bland-Hawthorn 1997), and in the Fe and Si coronal lines in both. In the latter, velocities of $\sim$500 $\rm km\, s^{-1}$ in Circinus and $\sim$2000 $\rm km\, s^{-1}$ in NGC 1068 are inferred from the coronal line profiles (Rodriguez-Ardila et al. 2004, 2005). An immediate prediction of the presence of shocks is the production of free-free emission from the shock-heated gas, with a maximum in the UV -- X-rays. We make here a first-order assessment of this contribution using results from the composite photoionization--shock models of Contini et al. (2004), and compare it with the observed soft X-rays. For each galaxy, we derive the 1 keV free-free emission from models computed for a nuclear ionizing flux $F_h = 10^{13}~\rm photons~cm^{-2}~s^{-1}~eV^{-1}$, a pre-shock density $n_o = 300~\rm cm^{-3}$, and the shock velocity closest to the gas velocities measured in these galaxies (we use figure A3 in Contini et al.). The selection of this high ionizing-flux value has a relatively low impact on the 1 keV emission estimate, as the bremsstrahlung emission from this flux drops sharply shortward of the Lyman limit; the results depend more on the strength of the post-shock bremsstrahlung component, which is mainly set by the shock velocity and peaks in the soft X-rays (see fig. A3 in Contini et al. for illustrative examples).
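The galaxy-by-galaxy estimates that follow all apply the same scaling: the model 1 keV free-free output is scaled to the assumed size of the [Si VII] emitting region, optionally reduced by a volume filling factor, and compared with the observed soft X-ray flux. A schematic Python sketch of that arithmetic is given below; every numerical value in it is a placeholder, not a Contini et al. model output. \begin{verbatim}
import math

def inferred_flux(model_flux_per_area, area_pc2, filling_factor=1.0):
    # Scale the model 1 keV free-free output (placeholder units,
    # per pc^2 of emitting area) by the assumed [Si VII] area and
    # by an optional volume filling factor.
    return model_flux_per_area * area_pc2 * filling_factor

# An NGC 1068-like geometry: a pi*(70 pc)^2 disk, filling 5-10%:
area = math.pi * 70.0 ** 2
for f in (1.0, 0.10, 0.05):
    print(f, inferred_flux(1.0e-18, area, f))   # stub model value
\end{verbatim}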
Regarding the selection of densities, pre-shock densities of a few hundred $\rm cm^{-3}$ actually imply densities downstream (from where the shock-excited lines are emitted) a factor of 10--100 higher (the higher for higher velocities), and thus within the range of those estimated from coronal line measurements (see above). Having selected the model parameters, we further assume that the estimated 1 keV emission comes from a region with a size equal to that of the observed [Si VII] emission. Under those premises, the results are as follows. For NGC 1068, assuming the free-free emission extends uniformly over a $\pi \times (70~\rm pc)^2$ region (cf. Table 1), and using the models for shock velocities of 900 $\rm km\, s^{-1}$, the inferred X-ray flux is larger by a factor of 20 than the nuclear 1 keV Chandra flux derived by Young et al. (2001). One could in principle account for this difference by assuming a volume filling factor of 5--10\%, which in turn would account for the fact that the free-free emission should mostly be produced locally at the shock fronts. In the case of Circinus, following the same procedure, we assume a free-free emission size of $\pi \times (30~\rm pc)^2$ (cf. Table 1) and models with shock velocities of 500 $\rm km\, s^{-1}$ (see above). In this case, the inferred X-ray flux is lower than the 1 keV BeppoSAX flux, as estimated in Prieto et al. (2004), by an order of magnitude. For the remaining two galaxies, we assume respective free-free emission areas (cf. Table 1) of 300 pc $\times$ 50 pc for ESO 428-G014 -- the width of [Si VII] is $\sim$50 pc in the direction perpendicular to the jet -- and $2 \times \pi \times (14~\rm pc)^2$ for NGC 3081 -- in this case, the free-free emission is assumed to come from the nucleus and the detached [Si VII] region north of it only. Taking the models for shock velocities of 900 $\rm km\, s^{-1}$, the inferred X-ray fluxes, when compared with the 1 keV fluxes estimated from the BeppoSAX data analysis of Maiolino et al. (1998), are of the same order for ESO 428-G014 and about an order of magnitude lower for NGC 3081. The above results are clearly dominated by the assumed size of the free-free emission region, which is unknown. The only purpose of this exercise is to show that, under reasonable assumptions for the shock velocities as derived from the line profiles, the free-free emission generated by these shocks in the X-rays can be accommodated within the observed soft X-ray fluxes. We thank Heino Falcke, who provided us with the 2 cm radio image of ESO 428-G014, and Marcella Contini for a thorough review of the manuscript.
\section{Small $x$ Resummation} \subsection{Motivation} Current and forthcoming particle collider experiments involve very high energies, such that the momentum fractions $x$ of initial state partons are extremely small. The splitting functions that govern the evolution of parton densities $f_i(x,Q^2)$ with momentum scale $Q^2$, together with the coefficients that relate these partons to proton structure functions, are unstable at low Bjorken $x$ values due to terms behaving like $x^{-1}\alpha_S^n\log^m(1/x)$ where $n\geq m+1$. Although the standard DGLAP theory (where the splitting and coefficient functions are considered at a fixed order in $\alpha_S$) works well in parton fits, there is some evidence that a resummation of small $x$ logarithms is necessary. Previous work has shown that a LL analysis fails to describe data well. One resums small $x$ logarithms in the gluon density by solving the \emph{BFKL equation}~\cite{BFKL}, an integral equation for the unintegrated gluon 4-point function. One then relates this gluon to structure functions using the $k_T$ factorisation formalism~\cite{CollinskT,CatanikT} to obtain the resummed splitting and coefficient functions. \subsection{Solution of the BFKL equation} Introducing the double Mellin transformed unintegrated gluon density: \begin{equation} f(\gamma,N)=\int_0^\infty dk^2\,(k^2)^{-\gamma-1}\int_0^1 dx\, x^N f(x,k^2), \label{Mellin} \end{equation} the NLL BFKL equation in $(N,\gamma)$ space is a second-order differential equation in $\gamma$: \begin{align*} \frac{d^2f(\gamma,N)}{d\gamma^2}&=\frac{d^2f_I(\gamma,Q_0^2)}{d\gamma^2}-\frac{1}{\bar{\beta}_0 N}\frac{d(\chi_0(\gamma)f(\gamma,N))}{d\gamma}\notag\\ &+\frac{\pi}{3\bar{\beta}_0^2 N}\chi_1(\gamma)f(\gamma,N), \end{align*} with $\bar{\beta}_0=3/(\pi\beta_0)$. The derivatives in $\gamma$ arise from the use of the LO running coupling $\alpha_S(k^2)=1/(\beta_0\log{k^2/\Lambda^2})$ in momentum space, and $\chi_n(\gamma)$ is the Mellin transform of the $n^{\text{th}}$-order BFKL kernel. One may solve this to give: \begin{equation} f(N,\gamma)=\exp\left(-\frac{X_1(\gamma)}{\bar{\beta}_0 N}\right)\int_\gamma^\infty A(\tilde{\gamma})\exp\left(\frac{X_1(\tilde{\gamma})}{\bar{\beta}_0 N}\right)d\tilde{\gamma} \label{sol} \end{equation} for some $A(\tilde{\gamma})$ and $X_1(\tilde{\gamma})$. One would ideally like to factorise the perturbative from the non-perturbative physics to make contact with the collinear factorisation framework. This can be achieved (up to power-suppressed corrections) by shifting the lower limit of the integral in equation (\ref{sol}) from $\gamma$ to $0$. Then one finds for the integrated gluon: \begin{equation} {\cal G}(N,t)={\cal G}_E(N,t){\cal G}_I(Q_0^2,N), \end{equation} where the perturbative piece is: \begin{equation} {\cal G}_E^1(N,t)=\frac{1}{2\pi\imath}\int_{1/2-\imath\infty}^{1/2+\imath\infty}\frac{f^{\beta_0}}{\gamma}\exp\left[\gamma t-X_1(\gamma,N)/(\bar{\beta}_0 N)\right]d\gamma, \end{equation} where $X_1$ can be derived from $\chi_0(\gamma)$ and $\chi_1(\gamma)$, and $f^{\beta_0}$ is a known function of $\gamma$. Structure functions have a similar form: \begin{equation} {\cal F}_E^1(N,t)=\frac{1}{2\pi\imath}\int_{1/2-\imath\infty}^{1/2+\imath\infty}\frac{h(\gamma,N)f^{\beta_0}}{\gamma}\exp\left[\gamma t-X_1(\gamma,N)/(\bar{\beta}_0 N)\right]d\gamma, \end{equation} where $h(\gamma,N)$ is an NLL order impact factor coupling the virtual photon to the BFKL gluon.
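In practice such contour integrals are evaluated numerically along $\mathrm{Re}(\gamma)=1/2$. The Python sketch below shows the quadrature only; the kernel integral $X_1$ and the prefactor are toy stand-ins, not the true NLL quantities. \begin{verbatim}
import numpy as np

def contour_integral(integrand, t, N, nu):
    # (1/2 pi i) * int dgamma F(gamma) along gamma = 1/2 + i*nu,
    # via a trapezoidal sum; dgamma = i dnu cancels the 1/i.
    gam = 0.5 + 1j * nu
    vals = integrand(gam, t, N)
    total = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(nu))
    return total.real / (2.0 * np.pi)

def toy_integrand(gam, t, N, beta0bar=0.7):
    # Toy stand-ins: X1 here is NOT the true NLL kernel integral,
    # and the f^{beta_0} prefactor is set to unity.
    X1 = gam * (1.0 - gam)
    return (1.0 / gam) * np.exp(gam * t - X1 / (beta0bar * N))

nu = np.linspace(-40.0, 40.0, 8001)
print(contour_integral(toy_integrand, t=6.0, N=0.2, nu=nu))
\end{verbatim}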
If all impact factors are known, one can derive all the necessary splitting and coefficient functions in double Mellin space (within a particular factorisation scheme) by taking ratios of the above quantities. The non-perturbative dependence then cancels, and one obtains results in momentum and $x$ space by performing the inverse Mellin integrals either numerically or analytically. The exact NLL impact factors are not in fact known, but the LL results supplemented with the correct kinematic behaviour of the gluon have been calculated~\cite{Peschanski,WPT}. We have shown that one expects them to approximate well the missing NLL information in the true impact factors~\cite{WT1}. \\ Consistent implementation of small $x$ resummations in the massive sector requires the definition of a variable flavour number scheme that allows the massive impact factors to be disentangled in terms of heavy coefficient functions and matrix elements. We have devised such a scheme, the DIS($\chi$) scheme~\cite{WT2}. With resummations in both the massive and massless sectors, one has everything necessary to carry out a global fit to DIS and related data. First, the resummed splitting and coefficient functions are combined with the NLO DGLAP results using the prescription: \begin{equation*} P^{tot.}=P^{NLL}+P^{NLO}-\left[P^{NLL(0)}+P^{NLL(1)}\right], \end{equation*} where the subtractions remove the double-counted terms, namely the LO and NLO (in $\alpha_S$) parts of the resummed results. The resulting improved splitting and coefficient functions then interpolate between the resummed results at low $x$ and the conventional DGLAP results at high $x$. \section{Results} The resummed splitting functions $P_{+}$ ($\simeq P_{gg}+4/9 P_{qg}$ at small $x$) and $P_{qg}$ are shown in figure \ref{pgg}. One sees that the LL BFKL results are much more divergent than the standard NLO results, which are known to describe data well. The addition of the running coupling to the LL BFKL equation suppresses this divergence, but it is still unacceptable. Inclusion of the NLL BFKL kernel, however, leads to a significant dip of the splitting functions below the NLO results. This dip is also observed in other resummation approaches~\cite{ABF,CCSS} and has an important consequence in the global fit, in that it resolves the tension between the Tevatron jet data (which favour a larger high $x$ gluon) and the H1 and ZEUS data (which prefer a higher low $x$ gluon). By momentum conservation, one cannot increase the gluon at both low and high $x$ in a standard NLO DGLAP fit. This is possible in the resummed approach, due to the dip in the splitting functions. \\ \begin{wrapfigure}{}{0.6\columnwidth} \centerline{\includegraphics[width=0.6\columnwidth]{white_chris.fig1.ps}} \caption{Splitting functions in the DIS scheme for $n_f=4$, $t=\log(Q^2/\Lambda^2)=6$: NLL+NLO (solid); LL with running coupling + LO (dashed); LL + LO (dot-dashed); NLO (dotted).}\label{pgg} \end{wrapfigure} Indeed, the gluon distribution at the parton input scale of $Q_0^2=1~\text{GeV}^2$ is positive definite over the entire $x$ range. This is in contrast to a NLO fit, where the gluon distribution is negative at small $x$ for low $Q^2$ values. Whilst a negative gluon is not disallowed, it can lead to negative structure functions, which are unphysical.
The resummed gluon, however, leads to a prediction for the longitudinal structure function that is positive and growing at small $x$ and $Q^2$, in contrast to fixed order results which show a significant perturbative instability.\\ A consequence of a more sensible description for $F_L$ is that a turnover is observed in the reduced cross-section $\tilde{\sigma}=F_2-y^2/[1+(1-y)^2]\,F_L$ at high $y$. As seen in figure \ref{redxsec}, this is required by the HERA data. Furthermore, this feature is missing in NLO fits (but present at NNLO). Thus the resummations lead to qualitatively different behaviour, consistent with known consequences of higher orders in the fixed order expansion. Overall, we find very compelling evidence of the need for BFKL effects in describing proton structure \cite{WT3}. \begin{figure}[t!] \begin{center} \scalebox{0.5}{\includegraphics{white_chris.fig2.ps}} \caption{Reduced cross-section data, compared with both resummed and fixed order predictions.}\label{redxsec} \end{center} \end{figure}
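As a concrete illustration of the matching prescription given in the previous section, the sketch below combines toy stand-ins for the resummed and fixed-order splitting functions while subtracting the doubly counted low orders; the functional shapes are placeholders, not the real splitting functions. \begin{verbatim}
def matched(P_nll, P_nlo, P_nll_lo, P_nll_nlo):
    # P^tot = P^NLL + P^NLO - [P^NLL(0) + P^NLL(1)]: the bracket
    # removes the O(alpha_s) and O(alpha_s^2) pieces of the
    # resummation already contained in the NLO result.
    return lambda x: (P_nll(x) + P_nlo(x)
                      - (P_nll_lo(x) + P_nll_nlo(x)))

# Toy x-shapes only:
P_tot = matched(lambda x: 1.0 / x,
                lambda x: 1.0 / x + (1.0 - x),
                lambda x: 1.0 / x,
                lambda x: 0.5 / x)
print(P_tot(1e-3))   # resummed at small x, DGLAP-like at large x
\end{verbatim}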
\section{Introduction} One of the more surprising results of the Compton Gamma-Ray Observatory (CGRO) was the EGRET discovery of an 18 GeV photon associated with GRB 940217 (Hurley 1994). About a half-dozen bursts with photons above 100 MeV were seen over the course of the CGRO mission (Catelli 1998; Dingus 2003). Since the GRB spectral energy distribution at lower energies has been well characterized by a modified power law with peak fluxes at energies of the order of 200 keV, the existence of photons at energies $10^4$ times higher puts a significant constraint on any viable model of the GRB phenomenon. This has been a subject of great interest for missions that followed EGRET. Prior to the launch of the Fermi Gamma-ray Space Telescope, it was possible to speculate that the LAT instrument would detect more than 200 GRB events per year (Dingus 2003). In the two-year period since the launch of the Fermi Gamma-ray Space Telescope, the Gamma-ray Burst Monitor (GBM) has reported approximately 475 GRBs, i.e., a rate of about 250 per year. Over essentially the same period, only 17 bursts have been identified by the Fermi/LAT. We now see that the range of GRB photon energies extends over a scale of $10^6$, but the physical dynamics of these phenomena are still not understood. This is coupled to the question of whether high energy photons are associated with all GRBs or only with a small sub-class. Since the Fermi mission is unlikely to be duplicated any time soon, there is some urgency to assuring that the maximum information is being extracted from this valuable facility. Thus, our group has set about developing techniques for enlarging the number of gamma-ray bursts identified with high energy photon emission, i.e., above 100 MeV. The first result of this effort established the correlation of two $Swift$/XRT-localized bursts, GRB 080905A and GRB 091208B, with high energy photons in the $Fermi$/LAT detector (Akerlof et al. 2010, hereafter A10). The statistical technique employed is the $matched~filter$ method, most familiar to those detecting signals in the time domain. The underlying assumption is that the characteristics of both the signal and background are $a~priori$ known functions of one or more variables. Since the matched filter maximizes the signal-to-noise ratio, moderate departures from optimality degrade the filter performance relatively slowly, making this a valuable tool for investigating the possible existence of faint signals. The details of the filter algorithm are explicitly described in A10. In this paper, we take the next, harder step of dropping our reliance on precision burst coordinates provided by Swift or other similar high resolution instruments. Instead, we use the approximate localization of the $Fermi$/GBM to map a region of interest on the $Fermi$/LAT field of view. By identifying high energy photon clusters, provisional burst coordinates can be determined with significantly smaller errors than available from the GBM. From there, the burst identification follows along the lines set out in A10. \section{Sample Selection} As a first step in this program, a list of all GBM triggers was obtained from the $fermigbrst$ catalog maintained by the $Fermi$ Science Support Center\footnote{http://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html}. The catalog contains 497 GRB triggers from launch to July 9, 2010. This list was cross-matched with Table 1 in Guetta \& Pian (2009) and Table 2 in Guetta et al. (2010) to identify the burst GCN designations and the low energy fluences.
For triggers occurring after February 18, 2010, fluences were obtained from individual GCN circulars. GBM triggers were also checked against XRT locations from $Swift$\footnote{http://heasarc.gsfc.nasa.gov/docs/swift/archive/grb$\_$table/} to remove events already considered in A10. Using data from the Fermi spacecraft attitude file, we further selected those triggers with a boresight angle less than 52$^\circ$ and an estimated GBM error circle less than 10$^\circ$. Events without GBM fluence information or with previously claimed LAT detections\footnote{ http://fermi.gsfc.nasa.gov/ssc/observations/types/grbs/grb$\_$table/} were also discarded. Applying a final cut on GBM fluence (8--1000 keV), requiring greater than 5.0 $\mu$erg/cm$^2$, reduced the number to 22 events (see Table 1). These are termed the ``GBM'' data. 464 additional fields were taken at random on the sky with similar criteria to study the background behavior and are identified as the ``random'' data in the following text. \begin{deluxetable}{llrrr} \tabcolsep 0.4mm \tablewidth{0pt} \tablecaption{List of 22 GBM trigger GRBs} \tablehead{\colhead{GRB} & \colhead{Trigger} & \colhead{RA} & \colhead{Dec} & \colhead{$S_{GBM}$ } \\ \colhead{} & \colhead{} & \colhead{($^{\circ}$)} & \colhead{($^{\circ}$)} & \colhead{$\mu$erg/cm$^2$} } \startdata 080830 & 080830368 & 160.10 & 30.80 & 9.2 \\ 080904 & 080904886 & 214.20 & -30.30 & 5.0 \\ 080906B & 080906212 & 182.80 & -6.40 & 10.9 \\ 080925 & 080925775 & 96.10 & 18.20 & 19.4 \\ 081122A & 081122520 & 339.10 & 40.00 & 9.6 \\ 081231 & 081231140 & 208.60 & -35.80 & 12.0 \\ 090112A & 090112332 & 110.90 & -30.40 & 5.2 \\ 090131 & 090131090 & 352.30 & 21.20 & 22.3 \\ 090227A & 090227310 & 3.30 & -43.00 & 9.0 \\ 090228A & 090228204 & 106.80 & -24.30 & 6.1 \\ 090319 & 090319622 & 283.30 & -8.90 & 7.5 \\ 090330 & 090330279 & 160.20 & -8.20 & 11.4 \\ 090514 & 090514006 & 12.30 & -10.90 & 8.1 \\ 090516B & 090516137 & 122.20 & -71.62 & 30.0 \\ 090829A & 090829672 & 329.23 & -34.19 & 102.0 \\ 090829B & 090829702 & 354.99 & -9.36 & 6.4 \\ 090922A & 090922539 & 17.16 & 74.30 & 11.4 \\ 091120 & 091120191 & 226.81 & -21.79 & 30.2 \\ 100122A & 100122616 & 79.20 & -2.71 & 10.0 \\ 100131A & 100131730 & 120.40 & 16.45 & 7.7 \\ 100423B & 100423244 & 119.67 & 5.78 & 12.3 \\ 100511A & 100511035 & 109.29 & -4.65 & 7.1 \\ \enddata \end{deluxetable} \section{Signal Detection Technique} The core task of this search procedure is the identification of triplet clusters of photons in the $Fermi$/LAT instrument, whose spatial accuracy is considerably better than the GBM's. The set of photon data for each candidate burst is confined to lie within a $16^\circ$ cone angle of the GBM direction and a time window extending from zero to 47.5 s after the GBM burst trigger. The procedure first computes a signal weight for each photon pair based on photon energy, detection time, photon event class and angular separation relative to the expected LAT PSF errors. The photon pair weights are subject to a weak threshold cut designed to avoid combinatorial overload should large photon numbers be encountered. In practice, this was not a severe problem and can be ignored.
The formula for the pair weights is given by: \begin{equation} Q_{ij} = w_i \cdot w_j \cdot \Delta_{ij}, \end{equation} where \begin{equation} w_i = w_E(i) \cdot w_t(i) \cdot w_c(i) \cdot 4\pi {\sigma}^2_{PSF}(E_i), \end{equation} \begin{equation} \Delta_{ij}=\frac{e^{-\delta_{ij}}}{4\pi ({\sigma}^2_{PSF}(E_i) + {\sigma}^2_{PSF}(E_j))}, \end{equation} \begin{equation} \delta_{ij} = \frac{1}{2}\frac{{\theta}^2_{ij}}{{\sigma}^2_{PSF}(E_i) + {\sigma}^2_{PSF}(E_j)}, \end{equation} and ${\theta}_{ij}$ is the angle between the $i$'th and $j$'th photons. The definitions of $w_E$, $w_t$ and $w_c$ can be found in equations 1, 3 and 4 of A10. The next step is to link photon pairs so that the three pairs, \{i, j\}, \{j, k\} and \{i, k\}, become identified as the triplet, \{i, j, k\}. The triplet weight value is computed by the formula: \begin{equation} R_{ijk} = (w_i \cdot w_j \cdot w_k \cdot \Delta_{ij} \cdot \Delta_{jk} \cdot \Delta_{ik})^{\frac{1}{3}} \end{equation} The triplet weights are ranked by value and the set is pruned by removing any triplet $R_{ilm}$ that shares a photon with a higher-ranked triplet $R_{ijk}$, i.e., whenever $R_{ilm} < R_{ijk}$. This leaves a set of triplet clusters, each with a discrete complement of three photons. For each of these clusters, a PSF-weighted estimate of the burst direction is performed and the matched filter angle weight, $w_\theta$, is computed with respect to this vector as described in equation 2 of A10. At this point, an event weight for each cluster is computed by the formulas given in A10 with one small modification. In the scheme described here, the GRB direction is not initially defined with any precision. Thus, it is inappropriate to include a $1/\sigma^2_{PSF}$ factor for all values of $w_E$. For the highest energy photon in each triplet, the $4\pi\sigma_{PSF}^2$ factor is removed to reflect the fact that this leading photon plays the principal role in fixing the apparent GRB direction. Although the calculations carry each cluster through the same computational path, the expectation is that the cluster with the highest matched filter weight is the most probable identification. The 22 GBM fields described earlier were the targets of this investigation. We recognized that the most convincing argument for a true LAT identification should rely on the statistical distributions of the matched filter weights in LAT fields with similar characteristics. To increase that number as much as possible, the LAT fields of view were segmented into 12 circular tiles embedded on a spherical surface. Each tile subtends a cone with a half-angle of 16.0$^\circ$. This tiling scheme was applied to both the GBM and random field data sets to realize 182 and 3440 independent directions in space satisfying all the criteria described previously. Taking advantage of the fact that each field observation was blocked into a 250-s segment, the number of independent observations was multiplied by five by regarding each 50-s time slice as a separate sample. Thus, there are 910 background measurements taken from LAT observations obtained simultaneously with the candidate GBM fields and an additional 17200 samples taken under similar but not identical conditions.
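For concreteness, a minimal Python sketch of the pair kernel, the triplet weight and the pruning step defined above is given below; the per-photon weights, PSF widths and angular separations are stubs, not the A10 definitions. \begin{verbatim}
import itertools
import math

def delta(theta, sig_a, sig_b):
    # The Delta_ij kernel above: Gaussian PSF overlap for photons
    # with angular separation theta and PSF widths sig_a, sig_b.
    s2 = sig_a ** 2 + sig_b ** 2
    return math.exp(-0.5 * theta ** 2 / s2) / (4.0 * math.pi * s2)

def triplet_weights(w, sig, ang):
    # w[i] stubs the per-photon weight w_E*w_t*w_c*4*pi*sigma_PSF^2;
    # ang(a, b) returns the angular separation of photons a and b.
    R = {}
    for i, j, k in itertools.combinations(range(len(w)), 3):
        prod = w[i] * w[j] * w[k]
        for a, b in ((i, j), (j, k), (i, k)):
            prod *= delta(ang(a, b), sig[a], sig[b])
        R[(i, j, k)] = prod ** (1.0 / 3.0)   # R_ijk above
    return R

def prune(R):
    # Rank triplets by weight and drop any triplet sharing a photon
    # with a higher-ranked one, leaving disjoint photon clusters.
    kept, used = [], set()
    for trip, r in sorted(R.items(), key=lambda kv: -kv[1]):
        if not used & set(trip):
            kept.append((trip, r))
            used |= set(trip)
    return kept

# Tiny synthetic example: four photons at fixed mutual separation.
w, sig = [1.0, 2.0, 1.5, 0.5], [0.1] * 4
print(prune(triplet_weights(w, sig, lambda a, b: 0.05)))
\end{verbatim}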
One particular concern for an analysis of this type is that false positives will selectively occur as the sample photon rate rises substantially above the mean. Evidence that this is not the case here is shown in Figure 1, which plots the cumulative distribution of the total number of photons within the LAT field of view over a 250-s interval. These rates explicitly exclude contamination from photons beyond the 105$^\circ$ zenith angle cut. As shown in the plot, the prominent GBM trigger event reported here is not associated specifically with fields with high ambient background rates. The similarity of the distributions for GBM and random fields also shows that the GBM data are not pathological as far as rates are concerned. \begin{figure} \includegraphics[scale=0.475]{fgst_GBM_graph_b.eps} \caption{Cumulative distributions of LAT photon rates over 250-s intervals for the GBM (blue) and random background (red) fields. These rates reflect the entire LAT FoV except for photons that lie outside the 105$^\circ$ zenith angle cut. The rate corresponding to the most prominent GBM event is indicated by the arrow.} \end{figure} \section{Results} Our statistical localization and weighting scheme identified one outstanding candidate for high energy photon emission, GRB 090228A. The best estimate for the probability of such an occurrence by chance alone was obtained by performing identical searches on random LAT fields with the same criteria. Thus, 11 out of 17200 random fields generated matched filter weights exceeding the value for our candidate event. Multiplying by a trials factor of 22 for the number of GBM localized fields considered yields a false positive probability of 1.4\%. To check that these correlations were not simply due to preferentially higher background rates for the GBM exposures, we also performed similar calculations for the LAT data confined to an average of 8 uncorrelated directions per exposure and five independent time intervals from the same GBM data sets. In this case, 2 fields out of 910 exceed the candidate signal, for a false positive rate of 4.8\%. The cumulative distributions are plotted in Figure 2. The statistical similarity of the GBM off-axis and random field data demonstrates that the GBM data set is not correlated with anomalous environmental conditions such as higher cosmic ray background rates. A list of photons associated with this burst is provided in Table 2. The GBM data for GRB 090228A are described in GCN 8918 (von Kienlin et al. 2009). According to this note, the burst was localized to a $1-\sigma$ accuracy of better than $1^\circ$, with an additional systematic uncertainty of the order of $2.5^\circ$. The coordinate values obtained by the GBM group and by this analysis are listed in Table 2. The GBM value generated some concern, since our cluster finder position disagreed by 8.7$^\circ$. Fortuitously, as this manuscript was being drafted, a paper was posted to astro-ph (Guiriec et al. 2010) providing a burst direction with an estimated accuracy of $0.2^\circ$ and lying $0.5^\circ$ from our own estimate. We believe that this establishes the validity of our identification to near certainty. In addition, the positive GRB correlation of Event Class 2 and 3 photon rates discussed in A10 was observed for the ensemble of GBM fields as well. The photon clustering is easily observed in the sky map shown in Figure 3.
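The trials corrections quoted above are the usual binomial ones; for small per-field probabilities they reduce to multiplication by the number of fields searched, as this short sketch reproduces: \begin{verbatim}
def false_positive_prob(n_exceed, n_background, n_trials):
    # Per-field exceedance probability estimated from background
    # fields, followed by the binomial trials correction; for
    # small p1 this is approximately n_trials * p1.
    p1 = n_exceed / n_background
    return 1.0 - (1.0 - p1) ** n_trials

print(false_positive_prob(11, 17200, 22))   # ~0.014, the 1.4%
print(false_positive_prob(2, 910, 22))      # ~0.047, the 4.8%
\end{verbatim}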
\begin{figure} \includegraphics[scale=0.475]{fgst_GBM_graph_a.eps} \caption{Complements of the cumulative distributions for ${\zeta}{\sum}w_i$ for 22 GBM fields (blue), 910 random fields obtained nearly simultaneously with the GBM data (red) and 17200 random fields obtained at random times (green).} \end{figure} \begin{deluxetable}{crrrrr} \tabcolsep 0.4mm \tablewidth{0pt} \tablecaption{~ GRB 090228A high energy photon list and celestial coordinate estimates} \tablehead{\colhead{$i$} & \colhead{$t$} & \colhead{$\theta$} & \colhead{$E$} & \colhead{$c$} & \colhead{$w_i$} \\ \colhead{} & \colhead{(s)} & \colhead{($^{\circ}$)} & \colhead{(MeV)} & \colhead{} & \colhead{} } \startdata 1 & 2.007 & 1.692 & 125.241 & 2 & 52.814 \\ 2 & 3.752 & 2.611 & 206.983 & 3 & 46.545 \\ 3 & 25.141 & 0.592 & 308.638 & 3 & 44.278 \\ 4 & 3.243 & 1.978 & 638.692 & 3 & 1.356 \\ 5 & 4.966 & 0.063 & 2787.028 & 3 & $^*$0.376 \\ 6 & 33.621 & 5.063 & 340.623 & 1 & 0.002 \\ \\ \multicolumn{6}{c}{$\zeta$ = 0.99722~~~~~$\zeta\sum w_i$ = 144.969} \\ \\ \hline \hline \\ & source & $\alpha$ & $\delta$ & $\sigma_\theta$ & $\theta_{i-1, i}^d$ \\ & & ($^\circ$) & ($^\circ$) & ($^\circ$) & ($^\circ$) \\ \\ \hline \\ & GBM$^a$ & 106.80 & -24.30 & $\lesssim3.0$ & \\ & LAT$^b$ & 98.56 & -28.86 & 0.18 & 8.66 \\ & IPN$^c$ & 98.30 & -28.40 & 0.02 & 0.51 \\ \enddata \tablenotetext{*}{indicates diminished $w_E$ for highest energy photon} \tablenotetext{a}{von Kienlin et al. (2009)} \tablenotetext{b}{this paper} \tablenotetext{c}{Guiriec et al. (2010)} \tablenotetext{d}{$\theta_{i-1, i}$ is the angle between the spatial directions for the GBM and LAT or the LAT and IPN} \end{deluxetable} \begin{figure} \includegraphics[scale=0.475]{fgst_GBM_map_h.eps} \caption{Sky map of $>100$ MeV photons for GRB 090228A. The diameter of each dot is proportional to its statistical weight. Thus, the largest diameters represent Event Class 3, etc. The dotted circles around each point indicate the $1-\sigma$ errors. The figure is centered on the nominal coordinates furnished by the GBM; the blue dot on the lower left shows the GRB coordinates computed by the cluster algorithm described in the text. The large green circle depicts the boundaries of the 16.0$^\circ$ cone that defines the fiducial boundaries for the cluster search. North is up and East is to the right.} \end{figure} \section{Discussion} The one event identified in this paper establishes the validity of our statistical techniques to a level of near certainty. By using GBM triggers to guide the discovery of photon clusters in the LAT, the phase space for finding counterparts can be reduced from hundreds of square degrees to one square degree or less. This makes a very significant difference for those seeking to identify GRB optical counterparts. If the algorithms used here could be adapted to the real-time environment, the number of bursts with high energy associations could be increased appreciably. The additional computational load is negligible: about 30 ms per day. For such real-time applications, the high selectivity employed here is overkill; any identification that can be corroborated optically will suffice. Thus, effective signal-to-noise rates of the order of unity are extremely valuable. As shown here, these techniques greatly enhance the dynamic range over which high energy radiation can be explored. It is not too outrageous to claim that this is the equivalent of making the LAT three to ten times larger in size.
As was noted in A10, the most surprising aspect of our recent work is the very small number of GRBs that can be positively identified with high energy emission, despite the substantially lower fluence thresholds. This is a mystery which deserves serious consideration. We hope that raising new questions is sometimes more useful than answering old problems. \vspace{0.1cm} \acknowledgments We thank Chris Shrader, director of the $Fermi$ Science Support Center, for his considerable help in obtaining and interpreting the $Fermi$ mission data products. Fang Yuan provided valuable assistance in the initial process of learning how to access and manipulate the $Fermi$ data. This research is supported by NASA grant NNX08AV63G and NSF grant PHY-0801007.
\section{\label{sec:INTRO}Introduction} One of the simplest and most commonly studied systems for investigating convection dynamics is the so-called Rayleigh-B\'{e}nard configuration, consisting of a Boussinesq fluid layer of depth $H$ confined between plane-parallel boundaries, and heated from below. The constant gravity vector $\mathbf{g} = - g {\bf\widehat z}$ points vertically downwards. Two limiting cases for thermal boundary conditions are often considered when posing the problem mathematically: (1) `perfectly conducting', or fixed temperature (FT), boundary conditions in which the temperature is held fixed along the bounding surfaces; and (2) `perfectly insulating', or fixed flux (FF), boundary conditions in which the normal derivative of the temperature is fixed at the boundaries \citep[e.g.][]{cC80}. Thermal boundary conditions of geophysical and astrophysical relevance are often considered to reside somewhere between these fixed flux and fixed temperature limits. For a Newtonian fluid of constant thermal expansivity $\alpha$, kinematic viscosity $\nu$, and thermal diffusivity $\kappa$, the non-dimensional Rayleigh number quantifies the strength of the buoyancy force. For the FT and the FF cases we have, respectively, \begin{equation} Ra_{FT} = \frac{\alpha g \Delta T H^3}{\nu \kappa}, \quad Ra_{FF} = \frac{\alpha g \beta H^4}{\nu \kappa}, \end{equation} where $\Delta T$ is the fixed temperature difference between the top and bottom boundaries and $\beta$ is the fixed temperature gradient maintained at the boundaries. The Prandtl number quantifies the relative importance of viscous and thermal diffusion as $Pr = \nu/\kappa$. Upon defining the non-dimensional measure of heat transfer via the Nusselt number, \begin{equation} Nu = \frac{\textnormal{total heat transfer}}{\textnormal{conductive heat transfer}} = \frac{\beta H}{ \Delta T} , \end{equation} it is straightforward to show that the two Rayleigh numbers defined above are related simply by $Ra_{FF} = Nu Ra_{FT}$. We thus see that for linear convection, in which $Nu \equiv 1$, the two Rayleigh numbers are equivalent. For nonlinear convection in which the critical Rayleigh number has been surpassed, $Nu > 1$ is achieved by adjustment of the temperature gradient $\beta$ at fixed $\Delta T$ for FT boundaries, and vice versa for FF boundaries. Linear stability shows that for the case of non-rotating convection the most unstable wavenumber is finite for FT boundary conditions \citep[e.g.][]{sC61}, but is zero for FF boundary conditions \citep{dH67}. Although previous work suggests that these differences for linear convection also hold for nonlinear convection \citep{cC80}, numerical simulations of convection show that the statistics for the two cases converge as the Rayleigh number is increased and the flow becomes turbulent \citep{hJ09}. When the system is rotating with rotation vector $\mathbf{\Omega} = \Omega {\bf\widehat z}$, the Ekman number, $E_H = \nu/2 \Omega H^2 $, is an additional non-dimensional number required to specify the strength of viscous forces relative to the Coriolis force. The rapidly rotating, quasi-geostrophic convection limit is characterized by $E_H \rightarrow 0$. As of this writing, only two investigations of FF boundary conditions for the rotating plane layer geometry have been published in the literature, with \cite{tD88} and \cite{sT02} examining the weakly rotating and rapidly rotating linear cases, respectively.
\cite{sT02} utilized a modal truncation approach to show that the critical parameters for the two cases should converge as $E_H\rightarrow 0$; the present work confirms this suggestion. Here we distinguish between `interior' and `boundary layer' dynamics, and show that the interior governing equations are identical for the two different thermal boundary conditions upon a simple rescaling of the Rayleigh number and temperature. Because the $E_H\rightarrow 0$ limit is a singular perturbation of the governing equations, the interior equations cannot satisfy the FF boundary conditions at leading order; a double boundary layer structure is necessary to adjust both the horizontal viscous stresses and the normal derivative of the temperature fluctuation to zero \cite[cf.][]{wH71}. The boundary layer corrections are shown to be asymptotically weak, however, so that to leading order the interior quasi-geostrophic convection dynamics are equivalent for both thermal boundary conditions. In section \ref{S:Lin} we present the linear stability of the full Boussinesq Navier-Stokes equations. In section \ref{S:Asymp} we present the asymptotic reduction of the Navier-Stokes equations in the rapidly rotating limit, and concluding remarks are given in section \ref{S:discuss}. \section{Linear stability of the Navier-Stokes equations} \label{S:Lin} In the present section we briefly present the linear stability of the Boussinesq Navier-Stokes equations for both FT and FF thermal boundary conditions. Upon scaling lengths with the depth of the fluid layer $H$ and time with the viscous diffusion time $H^2/\nu$, the linear system becomes \begin{equation} {\partial_t} \mathbf{u} + \frac{1}{E_H} \, {\bf\widehat z} \times \mathbf{u} = - \frac{1}{E_H} \nabla p + \frac{Ra}{Pr} \, \vartheta^{\prime} \, {\bf\widehat z} + \nabla^2 \mathbf{u}, \label{E:momlin} \end{equation} \begin{equation} {\partial_t} \vartheta^{\prime} - w = \frac{1}{Pr} \nabla^2 \vartheta^{\prime}, \label{E:heatlin} \end{equation} \begin{equation} {\nabla \cdot} \mathbf{u} = 0 , \label{E:contlin} \end{equation} where the velocity vector is denoted by $\mathbf{u}=(u,v,w)$, and the temperature is decomposed into mean and fluctuating variables according to $\vartheta = \overline{\vartheta} + \vartheta^{\prime}$. For both sets of thermal boundary conditions $\overline{\vartheta} = 1-z$, and the fluctuating thermal boundary conditions therefore become \begin{equation} \vartheta^{\prime} = 0, \quad \textnormal{at} \quad z = 0, 1, \quad (FT) \label{E:FT2} \end{equation} \begin{equation} {\partial_z} \vartheta^{\prime} = 0, \quad \textnormal{at} \quad z = 0, 1 . \quad (FF) \label{E:FF2} \end{equation} Stress-free, impenetrable mechanical boundary conditions on the top and bottom boundaries are assumed throughout and given by \begin{equation} w = {\partial_z} u = {\partial_z} v = 0, \quad \textnormal{at} \quad z = 0, 1. \end{equation} The system \eqref{E:momlin}-\eqref{E:contlin} is discretized in the vertical and horizontal dimensions with Chebyshev polynomials and Fourier modes, respectively, and formulated as a generalized eigenvalue problem. We solve the system in primitive variable form and enforce the boundary conditions via the tau method. The eigenvalue problem is solved with Matlab's `sptarn' function. For further details of the numerical methods the reader is referred to \cite{mC13}.
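The workflow is the standard one: build collocation differentiation matrices, assemble the generalized eigenvalue problem $A\mathbf{x}=\lambda B\mathbf{x}$ with boundary rows replaced, and examine the spectrum. The Python/NumPy sketch below illustrates this on a one-dimensional diffusion stand-in; it is not the rotating Boussinesq operator solved in this work. \begin{verbatim}
import numpy as np
from scipy.linalg import eig

def cheb(n):
    # Chebyshev collocation points and differentiation matrix on
    # [-1, 1] (Trefethen, "Spectral Methods in MATLAB", chap. 6).
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0])
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Toy problem: growth rates of u_t = u_zz + Ra*u with u = 0 at
# both walls, posed as A u = lambda B u with tau-like boundary rows.
n, Ra = 32, 10.0
D, z = cheb(n)
A = D @ D + Ra * np.eye(n + 1)
B = np.eye(n + 1)
for row in (0, n):
    A[row, :] = 0.0
    A[row, row] = 1.0
    B[row, :] = 0.0
lam = eig(A, B, right=False)
lam = lam[np.isfinite(lam)]
print(lam.real.max())   # ~ Ra - (pi/2)^2 for this toy problem
\end{verbatim}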
Figure \ref{F:linNS} shows results from the linear stability calculations. Results are given for both steady ($Pr=1$) and oscillatory ($Pr=0.1$) convection; we note that oscillatory convection does not exist for $Pr\ge1$ and becomes the primary instability for $Pr\lesssim 0.68$ \citep{sC61}. For $E_H \lesssim 10^{-5}$, both the asymptotically scaled critical Rayleigh number $Ra_c E^{4/3}$ (Figure \ref{F:linNS}a) and wavenumber $k_c E^{1/3}$ (Figure \ref{F:linNS}b) obtained from FT and FF boundary conditions are observed to converge to nearly equivalent values. The open circle shows the Ekman number $E_H=0.0745$ calculated by \cite{tD88} at which the instability becomes characterized by $k_c \ne 0$ with $Ra_c = 341.05$. Figure \ref{F:linNS}c shows the horizontal ($x$) velocity eigenfunction for $E_H=10^{-4}$, where Ekman layers can be seen at the top and bottom boundaries for the FF (dashed curve) case; a magnified view of the bottom Ekman layer is shown in the inset figure. The temperature perturbation eigenfunctions shown in Figure \ref{F:linNS}d show that both the FT and FF cases have identical structure in the fluid interior, whereas a thermal boundary layer is observed in the FF case. In the following section we present the asymptotic reduction of the Navier-Stokes equations to better understand and quantify this behavior. \begin{figure} \begin{center} \subfloat[]{ \includegraphics[height=5cm]{Rc_vs_E}} \qquad \subfloat[]{ \includegraphics[height=5cm]{kc_vs_E}} \\ \subfloat[]{ \includegraphics[height=5cm]{u_eigTa1e8P1}} \qquad \subfloat[]{ \includegraphics[height=5cm]{T_eigTa1e8P1}} \end{center} \caption{Linear stability of the Navier-Stokes equations for fixed temperature (FT) and fixed flux (FF) thermal boundary conditions. (a) Asymptotically scaled critical Rayleigh number and (b) critical wavenumber as a function of the inverse Ekman number for both steady ($Pr=1$) and oscillatory ($Pr=0.1$) convection. (c) Horizontal velocity and (d) temperature eigenfunctions for $Pr=1$ and $E_H=10^{-4}$; the inset figure in (c) shows a magnified view of the Ekman layer along the bottom boundary. In (a) and (b) the Ekman number of $E_H=0.0745$ at which the critical wavenumber for FF boundary conditions becomes non-zero is shown by the open circle, as first determined by \cite{tD88}.} \label{F:linNS} \end{figure} \section{Asymptotics} \label{S:Asymp} To proceed with the asymptotic development, we follow the work of \cite{mS06} and write the governing equations using a generic non-dimensionalization such that \begin{gather} D_t \mathbf{u} + \frac{1}{Ro} {\bf\widehat z} \times \mathbf{u} = - Eu \nabla p + \Gamma \theta {\bf\widehat z} + \frac{1}{ Re} \nabla^2 \mathbf{u}, \label{E:mom1} \\ \nabla \cdot \mathbf{u} = 0, \label{E:mass1} \\ D_t \vartheta = \frac{1}{Pr Re} \nabla^2 \vartheta , \label{E:energy1} \end{gather} where $D_t (\cdot) = {\partial_t}(\cdot) + \mathbf{u} \cdot \nabla(\cdot)$ and the velocity, pressure and temperature are denoted by $\mathbf{u}$, $p$, and $\vartheta$, respectively. The above system has been non-dimensionalized utilizing the velocity scale $U$, length $L$, time $L/U$, pressure $P$, and temperature $\widetilde{T}$. For the FT and FF cases the temperature scale becomes $\Delta T$ and $Nu \Delta T$, respectively. The Rossby, Euler, buoyancy and Reynolds numbers are defined by \begin{equation} Ro = \frac{U}{2 \Omega L}, \quad Eu = \frac{P}{\rho_0 U^2}, \quad \Gamma = \frac{g \alpha \widetilde{T}}{U^2}, \quad Re = \frac{U L}{\nu} . \end{equation} In the present work we are interested in the $\epsilon \equiv Ro \rightarrow 0$ limit.
In the fluid interior we employ multiple scales in the axial space direction and time such that \begin{equation} {\partial_z} \rightarrow {\partial_z} + \epsilon {\partial_Z} , \quad {\partial_t} \rightarrow {\partial_t} + \epsilon^2 {\partial_{\tau}} , \end{equation} where $Z = \epsilon z$ is the large-scale vertical coordinate and $\tau = \epsilon^2 t$ is the `slow' timescale. It has been shown that the following distinguished limits can be taken to reduce the governing equations to accurately model quasi-geostrophic convection \citep[e.g.][]{mS06} \begin{equation} Eu = \frac{1}{\epsilon^2}, \quad \Gamma = \frac{\widetilde{\Gamma}}{\epsilon}, \quad Re = O(1) , \quad Pr = O(1), \end{equation} where $\widetilde{\Gamma}=O(1)$. Scaling the velocity viscously such that $U = \nu/L$ we have \begin{equation} \epsilon = E^{1/3}, \quad \widetilde{\Gamma} = \frac{E^{4/3} Ra}{Pr}, \quad Re = 1 , \end{equation} where the $L$-scale Ekman number is related to the $H$-scale Ekman number via $E=E_H\epsilon^{-2}$, i.e.~$L=E^{1/3} H$. We keep the notation for the Rayleigh number generic in the sense that $Ra$ denotes either $Ra_{FT}$ or $Ra_{FF}$ depending upon the particular boundary conditions employed. Hereafter, we define the asymptotically reduced Rayleigh number as $\widetilde{Ra} \equiv E^{4/3} Ra$. We utilize a composite asymptotic expansion approach \citep[e.g.][]{aN08} and, following \cite{wH71}, decompose each variable into interior $(i)$, middle $(m)$, and Ekman layer $(e)$ components. For instance, the dependent variable $f$ can be written as \begin{equation} f= f^{(i)}({\bf x},Z,t,\tau) + f^{(m)}(x,y,\xi,t) + f^{(e)}(x,y,\eta,t) , \label{E:decomp} \end{equation} where $\xi=z$ and $\eta = \epsilon^{-1/2}z$ are boundary layer variables. The above representation ensures that each dependent variable is uniformly valid throughout the domain. The boundary layer variables consist of a sum of contributions from the top and bottom boundary layers; for brevity, we focus on the bottom boundary layers. In the present work we make use of the following limits and notation \begin{equation} \lim_{\xi \rightarrow \infty} \left ( f^{(m)} \right ) = \left ( f^{(m)} \right ) ^{(i)} = \lim_{\eta \rightarrow \infty} \left ( f^{(e)} \right ) = \left ( f^{(e)} \right ) ^{(i)} = 0 , \label{E:limits1} \end{equation} \begin{equation} \lim_{\xi \rightarrow 0} \left ( f^{(i)} \right ) = \lim_{\eta \rightarrow 0} \left ( f^{(i)} \right ) = f^{(i)}\left ( Z = 0\right ) = f^{(i)}(0) . \label{E:limits2} \end{equation} We then expand each variable in a power series according to \begin{equation} f^{(i)}({\bf x},Z,t,\tau) = f_0^{(i)}({\bf x},Z,t,\tau) + \epsilon^{1/2} f_{1/2}^{(i)}({\bf x},Z,t,\tau) + \epsilon f_{1}^{(i)}({\bf x},Z,t,\tau) + O(\epsilon^{3/2}) . \label{E:expand} \end{equation} Each dependent variable is further decomposed into mean and fluctuating components such that \begin{equation} f^{(i)}({\bf x},Z,t,\tau) = \overline{f}^{(i)}(Z,\tau) + f^{\prime (i)}({\bf x},Z,t,\tau) , \end{equation} where the averaging operator is defined by \begin{equation} \overline{f}(Z,\tau) = \lim_{[\tau], [A] \rightarrow \infty} \, \frac{1}{[\tau] [A]} \int_{[\tau],[A]} f \, dx dy , \qquad\mbox{and}\qquad \overline{f'} \equiv 0.
\end{equation} \subsection{The interior equations} By substituting decompositions for each variable of the form \eqref{E:decomp} into the governing equations and utilizing the limits \eqref{E:limits1}-\eqref{E:limits2}, equations for each region can be derived; expansions of the form \eqref{E:expand} are then utilized to determine the asymptotic behavior of each fluid region. Because the derivation of the interior equations has been given many times previously, we present only the salient features and direct the reader to previous work \citep[e.g.][]{mS06} for details on their derivation. The main point is that the interior convection is geostrophically balanced and horizontally divergence-free to leading order \begin{equation} {\bf\widehat z} \times \mathbf{u}^{(i)}_{0} = - \nabla_\perp p^{(i)}_{1}, \quad {\nabla_\perp \cdot} \mathbf{u}^{(i)}_{0,\perp} = 0, \end{equation} where $\nabla_{\perp} = ({\partial_x},{\partial_y},0)$. The above relations allow us to represent the geostrophic velocity via the geostrophic streamfunction $\psi^{(i)}_0 \equiv p^{\prime(i)}_1$ such that $\mathbf{u}^{(i)}_{0,\perp} = -\nabla \times \psi^{(i)}_0{\bf\widehat z}$. The vertical vorticity is then $\zeta^{(i)}_{0} = \nabla_\perp^2 \psi^{(i)}_0$. The interior vertical vorticity, vertical momentum, fluctuating heat, and mean heat equations then become \begin{gather} D^{\perp}_{t} \zeta^{(i)}_{0} - {\partial_Z} w^{\prime(i)}_0 = {\nabla_\perp^2} \zeta^{(i)}_{0}, \label{E:vortint} \\ D^{\perp}_{t} w^{\prime(i)}_0 + {\partial_Z} \psi^{(i)}_0 = \frac{\widetilde{Ra}}{Pr} \vartheta_1^{\prime(i)} + {\nabla_\perp^2} w^{\prime(i)}_0, \label{E:momint} \\ D^{\perp}_{t}\vartheta_1^{\prime(i)} + w^{\prime(i)}_0 {\partial_Z} \overline{\vartheta}^{(i)}_0 = \frac{1}{Pr} {\nabla_\perp^2} \vartheta_1^{\prime(i)}, \label{E:heatint} \\ {\partial_{\tau}} \overline{\vartheta}_0^{(i)} + {\partial_Z} \overline{\left ( w^{\prime(i)}_0 \vartheta_1^{\prime(i)} \right ) } = \frac{1}{Pr} \partial^2_{Z} \overline{\vartheta}_0^{(i)}, \label{E:mheatint} \end{gather} where $D_t^{\perp} (\cdot) = {\partial_t}(\cdot) + \mathbf{u} \cdot \nabla_{\perp}(\cdot)$ . The mean interior velocity field $\overline{\bf u}_0^{(i)}$ is zero and the mean momentum equation reduces to hydrostatic balance in the vertical, ${\partial_Z} \overline{p}^{(i)}_0 = (\widetilde{Ra}/Pr) \overline{\vartheta}_0^{(i)}$. The interior system is fourth order with respect to the large-scale vertical coordinate $Z$. Two boundary conditions are supplied by impenetrability such that $w^{\prime(i)}_0(0) = w^{\prime(i)}_0(1) = 0$. Although no $Z$-derivatives of $\vartheta^{\prime(i)}_1$ are present in equation \eqref{E:heatint}, evaluating this equation at the boundaries shows the FT conditions $\vartheta^{\prime(i)}_1(0)= \vartheta^{\prime(i)}_1(1)=0$ are satisfied implicitly for the fluctuating temperature. Evaluating equation \eqref{E:momint} at either the top or bottom boundary with the use of impenetrability shows that stress-free boundary conditions are implicitly satisfied as well, since ${\partial_Z} \psi^{(i)}_0(0) = {\partial_Z} \psi^{(i)}_0(1) = 0$. For the case of FT thermal boundary conditions, we have \begin{equation} \overline{\vartheta}_0^{(i)}(0) = 1, \quad \textnormal{and} \quad \overline{\vartheta}_0^{(i)}(1) = 0. \quad (FT) \end{equation} Thus, for the FT case the boundary layer corrections are identically zero and the above system is complete.
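As a quick consistency check of the geostrophic representation above, the following sympy sketch (illustrative only) verifies that the streamfunction form satisfies both the geostrophic balance and the horizontal divergence-free condition: \begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.Function('psi')(x, y)

# Streamfunction representation u = -curl(psi zhat):
u = -sp.diff(psi, y)
v = sp.diff(psi, x)

# zhat x u = (-v, u) must equal -grad_perp(psi), componentwise:
print(sp.simplify(-v + sp.diff(psi, x)))            # 0
print(sp.simplify(u + sp.diff(psi, y)))             # 0
# Horizontally divergence-free:
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))   # 0
\end{verbatim}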
Numerous investigations have employed the above system of equations to study rapidly rotating convection in the presence of stress-free mechanical boundary conditions and have shown excellent agreement with direct numerical simulations (DNS) of the Navier-Stokes equations and laboratory experiments \citep{sS14,jmA15}. For the FF case the mean temperature boundary conditions become \begin{equation} {\partial_Z} \overline{\vartheta}_0^{(i)}(0) = -1, \quad \textnormal{and} \quad {\partial_Z} \overline{\vartheta}_0^{(i)}(1) = -1. \quad (FF) \end{equation} We further require ${\partial_Z} \vartheta_1^{\prime(i)}(0) = {\partial_Z} \vartheta_1^{\prime(i)}(1)= 0$; boundary layer corrections are therefore required since these conditions are not automatically satisfied by solutions of \eqref{E:heatint}. In the following two subsections we determine the magnitude of these boundary layer corrections. \subsection{The middle layer equations} The first non-trivial fluctuating middle layer momentum equation occurs at $O(\epsilon)$ and yields the thermal wind balance \begin{equation} {\bf\widehat z} \times \mathbf{u}^{\prime (m)}_2 = - \nabla p^{\prime (m)}_{3} + \frac{\widetilde{Ra}}{Pr} \vartheta^{\prime(m)}_2 {\bf\widehat z} , \label{E:mommid2} \end{equation} such that ${\nabla_\perp \cdot} \mathbf{u}^{\prime (m)}_{2} = 0$ and $w^{\prime (m)}_{2} \equiv 0$. The mean velocity field $\overline{\bf u}^{(m)} \equiv 0$. The leading order temperature equation for the middle layer is \begin{equation} {\partial_t} \vartheta^{\prime(m)}_2 + \mathbf{u}^{\prime (i)}_0 (0) \cdot {\nabla_{\perp}} \vartheta^{\prime(m)}_2 = \frac{1}{Pr} \nabla^2 \vartheta^{\prime(m)}_2 , \end{equation} with corresponding boundary conditions \begin{equation} {\partial_Z} \vartheta^{\prime(i)}_{1}(0) + {\partial_\xi} \vartheta^{\prime(m)}_{2}(0) = 0, \quad \vartheta^{\prime(m)}_{2}(\xi \rightarrow \infty) \rightarrow 0 . \label{E:midthermBC} \end{equation} We find the first non-trivial mean temperature correction to be of magnitude $O(\epsilon^5)$ and therefore omit any further consideration of it. The first three orders of the stress-free mechanical boundary conditions along the bottom boundary become \begin{equation} {\partial_Z} \mathbf{u}^{\prime (i)}_{0,\perp}(0) = 0, \quad {\partial_Z} \mathbf{u}^{\prime (i)}_{1/2,\perp}(0) = 0, \quad {\partial_Z} \mathbf{u}^{\prime (i)}_{1,\perp}(0) + {\partial_\xi} \mathbf{u}^{\prime (m)}_{2,\perp}(0) + {\partial_\eta} \widetilde{\mathbf{u}}^{\prime (e)}_0(0) = 0 . \label{E:stressBC} \end{equation} Thus, the first two orders of the interior velocity satisfy stress-free conditions on their own and therefore need no boundary layer correction. Here we have rescaled the Ekman layer velocity according to $\mathbf{u}^{\prime (e)}_{5/2} = \epsilon^{5/2} \widetilde{\mathbf{u}}^{\prime (e)}_0$; this rescaling simply highlights the fact that the Ekman layer velocities are significantly weaker than those in the interior. \subsection{The Ekman layer equations} The Ekman layer equations have been studied in great detail in previous work \citep[e.g.][]{hG68}, so we simply state the leading order continuity and momentum equations as \begin{equation} {\nabla_\perp \cdot} \widetilde{\mathbf{u}}^{\prime (e)}_0 + \partial_{\eta} \widetilde{w}^{\prime (e)}_{\frac{1}{2}} = 0, \quad {\bf\widehat z} \times \widetilde{\mathbf{u}}^{\prime (e)}_0 = \partial^2_{\eta} \widetilde{\mathbf{u}}^{\prime (e)}_0 \end{equation} where $w_3^{\prime(e)}=\epsilon^3 \widetilde{w}^{\prime (e)}_{\frac{1}{2}} $.
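Writing $\mathcal{Z} = \widetilde{u}^{\prime(e)}_0 + i \widetilde{v}^{\prime(e)}_0$, the Ekman momentum balance above reduces to $\mathcal{Z}'' = i\mathcal{Z}$, whose decaying solution $\propto e^{-(1+i)\eta/\sqrt{2}}$ underlies the $e^{-\eta/\sqrt{2}}\cos(\eta/\sqrt{2})$ structure of the pumping velocity derived below; a minimal symbolic check (ours, in Python's sympy) is
\begin{verbatim}
import sympy as sp

eta = sp.symbols('eta', real=True)
# Z = u + i*v; the balance  z-hat x u = d^2_eta u  becomes  Z'' = i Z
Z = sp.exp(-(1 + sp.I)/sp.sqrt(2)*eta)            # decaying root of lam^2 = i
print(sp.simplify(sp.diff(Z, eta, 2) - sp.I*Z))   # -> 0
print(sp.re(sp.expand_complex(Z)))   # exp(-eta/sqrt(2))*cos(eta/sqrt(2))
\end{verbatim}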
All of the mean Ekman layer variables can be shown to be zero. A key component of the present analysis that differs from previous work is the middle, thermal wind layer that enters the Ekman layer solution via the stress-free boundary conditions \eqref{E:stressBC}. Utilizing the thermal wind relations for the middle layer that follow from equation \eqref{E:mommid2}, \begin{equation} {\partial_\xi} u^{\prime (m)}_2 = -\frac{\widetilde{Ra}}{Pr} {\partial_y} \vartheta^{\prime(m)}_2, \quad {\partial_\xi} v^{\prime (m)}_2 = \frac{\widetilde{Ra}}{Pr} {\partial_x} \vartheta^{\prime(m)}_2 , \end{equation} the stress-free boundary conditions along the bottom boundary can be written as \begin{equation} {\partial_Z} \mathbf{u}^{\prime (i)}_{1,\perp}(0) + \frac{\widetilde{Ra}}{Pr} \nabla^{\perp} \vartheta^{\prime(m)}_2(0) + {\partial_\eta} \widetilde{\mathbf{u}}^{\prime (e)}_0(0) = 0 , \end{equation} where $\nabla^{\perp} = (-{\partial_y},{\partial_x},0)$. Solving the Ekman layer momentum equations for the horizontal components of the velocity field with the additional requirement that $(\widetilde{u}^{\prime (e)}_0,\widetilde{v}^{\prime (e)}_0) \rightarrow 0$ as $\eta \rightarrow \infty$, the continuity equation is then used to find the Ekman pumping velocity \begin{equation} \widetilde{w}^{\prime (e)}_{\frac{1}{2}} = \left [ {\partial_Z} \zeta^{(i)}_1(0) + \frac{\widetilde{Ra}}{Pr} {\nabla_\perp^2} \vartheta^{\prime(m)}_2(0) \right ] e^{-\frac{\eta}{\sqrt{2}}} \cos \left ( \frac{\eta}{\sqrt{2}} \right ) . \label{E:ekpump} \end{equation} Thus, vertical velocities of magnitude $O(\epsilon^{3})$ are induced by FF thermal boundary conditions and result from both finite viscous stresses within the fluid interior and horizontal variations of the temperature within the middle layer. This finding is closely analogous to the Ekman pumping effect first reported by \cite{rH64} for shallow layer quasi-geostrophic flow in the presence of lateral temperature variations along a free surface. Evaluating equation \eqref{E:ekpump} at $\eta=0$ provides a parameterized boundary condition for the effects of Ekman pumping. The small magnitude of the Ekman pumping velocity \eqref{E:ekpump} results in very weak $O(\epsilon^5)$ temperature fluctuations within the Ekman layer. Because of this, the dominant correction to the FF thermal boundary conditions occurs within the middle layer and we do not consider the Ekman layer temperature any further. \subsection{Synthesis} The thermal boundary layer correction given by equation \eqref{E:midthermBC} is passive in the sense that $\vartheta^{\prime(m)}_2$ can be calculated a posteriori with knowledge of $\vartheta^{\prime(i)}_1$. Thus, the leading order interior dynamics are insensitive to the thermal boundary conditions. The Ekman layer analysis shows that the first six orders of the interior vertical velocity satisfy the impenetrable mechanical boundary conditions $w^{\prime(i)}_i(0) = 0$, for $i=0,\ldots, 5/2$. At $O(\epsilon^3)$ we have the Ekman pumping boundary condition \begin{equation} w^{\prime(i)}_3(0) = - w^{\prime(m)}_3(0) - {\partial_Z} \zeta^{(i)}_1(0) - \frac{\widetilde{Ra}}{Pr} {\nabla_\perp^2} \vartheta^{\prime(m)}_2(0) , \end{equation} where we have used the Ekman pumping relation \eqref{E:ekpump} evaluated at $\eta=0$. From the standpoint of linear theory, the first correction to the critical Rayleigh number will therefore occur at $O(\epsilon^3)$; this explains the linear behavior previously discussed in section \ref{S:Lin}.
\section{Discussion} \label{S:discuss} In light of the boundary layer analysis, we conclude that the leading order quasi-geostrophic dynamics are described by equations \eqref{E:vortint}-\eqref{E:mheatint} for \textit{both} FT and FF thermal boundary conditions. Indeed, inspection of the system shows that it is invariant under the following rescaling of the Rayleigh numbers and temperature variables, \begin{equation} \widetilde{Ra}_{FT} = \frac{\widetilde{Ra}_{FF}}{Nu}, \quad \vartheta_{1,FT}^{\prime(i)} = Nu \, \vartheta_{1,FF}^{\prime(i)}, \quad \overline{\vartheta}_{0,FT}^{(i)} = Nu \, \overline{\vartheta}_{0,FF}^{(i)} . \label{E:scaling} \end{equation} Integrating the time-averaged mean heat equation with respect to $Z$ yields \begin{equation} Pr \overline{\left ( w^{\prime(i)}_0 \vartheta_{1,FT}^{\prime(i)} \right ) } = {\partial_Z} \overline{\vartheta}_{0,FT}^{(i)} + Nu, \quad (FT) \label{E:mFT} \end{equation} \begin{equation} Pr \overline{\left ( w^{\prime(i)}_0 \vartheta_{1,FF}^{\prime(i)} \right ) } = {\partial_Z} \overline{\vartheta}_{0,FF}^{(i)} + 1, \quad (FF) \label{E:mFF} \end{equation} for the FT and FF cases, respectively. The appropriate thermal boundary conditions have been applied at $Z=0$ in the above relations. Taking either equation \eqref{E:mFT} or \eqref{E:mFF} and utilizing \eqref{E:scaling} shows that the mean interior temperature gradient is described by identical equations for the two cases. This leading order correspondence is the result of the anisotropic spatial structure of rapidly rotating convection. The above results indicate that the findings of previous work on low Rossby number convection employing FT thermal boundary conditions can be accurately applied to the case of FF thermal boundary conditions by use of the rescalings given by equations \eqref{E:scaling}. \cite{kJ12} identified four flow regimes that occur in rapidly rotating convection as a function of the Prandtl and (FT) Rayleigh numbers. The so-called `convective Taylor column' (CTC) regime is distinguished by coherent, vertically aligned convective structures that span the depth of the fluid. Figure \ref{F:nonlin}(a) shows a volumetric rendering of the temperature perturbation for $Pr=7$ and $\widetilde{Ra}_{FT}=46.74$, or $\widetilde{Ra}_{FF}=1000$ and $Nu=21.39$. The CTC regime occurs over the FT Rayleigh number range of $20\lesssim \widetilde{Ra}_{FT} \lesssim 55$, corresponding to a FF Rayleigh number range of $82\lesssim \widetilde{Ra}_{FF} \lesssim 1656$ \citep[e.g.~see][]{dN14}. Figure \ref{F:nonlin}(b) shows mean temperature profiles obtained utilizing the FT and FF thermal boundary conditions, along with the remapped FF mean temperature profile. The Nusselt number $Nu=21.39$ corresponds to a mean temperature difference of $0.0468$ between the top and bottom boundaries for the FF case. Of particular interest in convection studies is how the heat transfer scales with the strength of the thermal forcing, quantified via Nusselt-Rayleigh number scalings of the form $Nu \sim \widetilde{Ra}_{FT}^\alpha$. With the rescaling given in \eqref{E:scaling} the FF equivalent of this relation becomes $Nu \sim \widetilde{Ra}_{FF}^\beta$ where $\beta = \alpha/(\alpha+1)$. For the CTC regime the exponent is $\alpha \approx 2.1$ \citep{kJ12}, yielding $\beta \approx 0.68$. Additionally, the final regime of geostrophic turbulence achieves a dissipation-free scaling law with $\alpha = 3/2$ such that $\beta = 3/5$ \citep{kJ12b}.
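These remappings are easily verified numerically; the short Python script below (ours) reproduces the values quoted above directly from the rescaling \eqref{E:scaling} and the exponent relation $\beta = \alpha/(\alpha+1)$:
\begin{verbatim}
# Check the FT <-> FF remapping quoted in the text
Ra_FF, Nu = 1000.0, 21.39

Ra_FT = Ra_FF / Nu                 # rescaling (E:scaling): 46.75 ~ 46.74
dT_FF = 1.0 / Nu                   # FF mean temperature difference: 0.0468

beta = lambda alpha: alpha / (alpha + 1.0)
print(Ra_FT, dT_FF)
print(beta(2.1), beta(1.5))        # CTC: 0.677 ~ 0.68; turbulence: 0.6 = 3/5
\end{verbatim}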
Similarly, the dependence of all other variables of interest on the Rayleigh number (e.g.~the mean temperature gradient or the vorticity) can also be remapped to the case of FF thermal boundary conditions. \begin{figure} \begin{center} \subfloat[]{ \includegraphics[height=5cm]{R1000P7_T}} \hspace{1.5cm} \subfloat[]{ \includegraphics[height=4.5cm]{theta_bar_profiles}} \end{center} \caption{(a) An example volumetric rendering of the temperature perturbation from a simulation of the quasi-geostrophic convection equations showing the `convective Taylor column' (CTC) regime. (b) Mean temperature profiles obtained with both FT (solid blue) and FF (dashed black) boundary conditions, and the rescaled FF temperature profile (red open circles). The parameters are $Pr=7$, $\widetilde{Ra}_{FT} = 46.74$, $\widetilde{Ra}_{FF} = 1000$, and $Nu=21.39$. } \label{F:nonlin} \end{figure} \section{Conclusion} \label{S:conclude} In the present work we have shown that the leading order dynamics of rapidly rotating convection in a plane layer geometry are equivalent for both FT and FF thermal boundary conditions. FF thermal boundary conditions give rise to a double boundary layer structure that is asymptotically weak in the limit of rapid rotation. Our findings suggest that all previous work employing FT thermal boundary conditions also accurately describes FF thermal boundary conditions within the regime of asymptotic validity, i.e.~$Ro\sim E^{1/3}\ll1$ and $\widetilde{Ra} \lesssim \mathcal{O} (E^{-1/3})$ \citep{kJ12b}. This result adds to the robustness of the reduced, quasi-geostrophic model in geophysical and astrophysical applications where stress-free boundary conditions are typically assumed; recent work has also extended the model to the case of no-slip boundary conditions \citep{kJ15} that are of relevance for laboratory experiments and planetary interiors \citep[e.g.][]{jmA15,jC15}. An interesting consequence of these findings is that any horizontal thermal variation along the boundaries that varies on the scale of the convection has no leading order influence on the interior convection. This finding helps to explain the results of previous spherical convection studies investigating the role of temperature variations along the outer boundary \citep[e.g.][]{cD09}. \bibliographystyle{jfm}
\section{Introduction} \IEEEPARstart{A}{utonomous} vehicle technologies for self-driving have attracted significant public attention by promising substantial benefits such as lower vehicle emissions, less traffic congestion, better fuel economy, and shorter travel times \cite{RN757, skeete2018level}. Current self-driving cars in the transportation system remain at simplified levels of autonomy and must still address practical issues including precise control, cost, liability, privacy, and safety \cite{RN758, shneiderman2020bridging}. Although traditional control theory and supervised learning applied to path planning for autonomous driving have been investigated since the last century \cite{RN760, bazzan2009opportunities, hussain2018autonomous}, reinforcement learning (RL) has recently demonstrated powerful performance by enabling a vehicle to learn optimal control policies through interaction with the environment, without the need for prior knowledge or a large amount of training data \cite{wu2017flow}. Furthermore, deep reinforcement learning (DRL) models have achieved some notable successes \cite{RN676}. For example, AlphaGo \cite{RN674} defeated a human professional in the board game Go in 2015, the first time a computer Go program had defeated a professional human player. The deep Q network (DQN) \cite{RN673}, an off-policy method that adds a target network for stability and experience replay for efficient sampling, demonstrated high efficiency in playing Atari 2600 video games and outperformed humans just by learning from raw image pixels. Furthermore, advantage actor-critic (A2C) \cite{mnih2016asynchronous}, an on-policy method that uses advantage estimates to calculate the value proposition for each state-action pair, is a lightweight framework whose synchronous gradient updates keep the training more cohesive and potentially make convergence faster. In addition, proximal policy optimisation (PPO) \cite{schulman2017proximal} is an on-policy method that provides reliable convergence and is simple to implement and tune in most benchmark RL testing environments; it has been applied to traffic oscillations \cite{li2021reinforcement} and hierarchical guidance competitions \cite{cao2021reinforcement,cao2021weak}. These DRL research initiatives have stimulated recent research on the application of DRL to autonomous driving and provided promising solutions in predictive perception, path and motion planning, driving strategies, and low-level controller design \cite{RN749}. However, most of this work has focused on evaluating cumulative reward returns while ignoring the crucial safety factors in autonomous driving. \section{Related Work} An autonomous vehicle needs to learn from experience, usually at the cost of collisions, to gain driving competence and achieve higher rewards. Recent research on the DRL-based driving domain has focused on the long-term accumulative reward or averaged reward as the critical performance metrics \cite{zhou2020smarts, RN655}. Nevertheless, the literature has rarely paid attention to safety measures for evaluating the autonomous performance of DRL models. One recent study \cite{RN775} placed a focal point on the vehicle's safety issue but only applied it to a straight four-lane highway road instead of a road traffic junction driving scenario.
To investigate the self-awareness mechanism for DRL, we paid heed to the attention mechanism first introduced by \cite{RN654} to resolve the inability to translate complex sentences in natural language processing tasks, which enables the memorisation of long source sentences by introducing a dynamic context vector of importance weights. Afterwards, \cite{RN662} developed the transformer model, built on multi-head self-attention, which achieved outstanding performance on translation tasks. Furthermore, \cite{RN726} used a variant of the transformer model to accommodate the varying size of input data from the surrounding vehicles in autonomous driving, in particular addressing the limitations of list-of-features representations in a function approximation setting and enhancing the interpretability of the interaction patterns between traffic participants. However, existing explorations have not applied an attention mechanism to address the safety concerns of an autonomous vehicle in the driving environment, specifically in challenging road traffic junction scenarios. In this work, we \textbf{aim} to (1) evaluate safety concerns using four metrics (collision, success, freezing, and reward) across three state-of-the-art baseline DRL models (DQN, A2C, and PPO) in autonomous driving, especially in complex road traffic junction driving environments such as intersection and roundabout scenarios; and (2) propose a self-awareness module for the DRL model to improve safety performance in road traffic junction scenarios. The outcomes of this work will contribute to safe DRL for autonomous driving. \section{Methodology} \subsection{Baseline DRL Models} \subsubsection{DQN} DQN \cite{RN673} is a relatively typical DRL model inspired by \cite{RN744}, where the goal of the agent is to interact with the environment by selecting actions in a way that maximises cumulative future rewards. DQN approximates the optimal action-value function as follows: \begin{equation} {Q^*(s,a)=\mathbb{E}_{s'\sim\epsilon}\left[r+\gamma\max_{a'}Q^*(s',a')|s,a\right]} \end{equation} where $Q^*(s, a)$ is the maximum sum of rewards discounted by $\gamma$ at each time step, achievable by following any strategy in a state $s$ and taking action $a$. This function follows the Bellman equation: if the optimal value $Q^*(s', a')$ of the state $s'$ at the next time step were known for all possible actions, then the optimal strategy would be to select the action $a'$ maximising the expected value of $r+\gamma Q^*(s', a')$. DQN has some advancements suited to autonomous driving. First, DQN employs a mechanism called experience replay that stores the agent's experience $e_t=(s_t, a_t, r_t, s_{t+1})$ at each time step $t$ in a replay memory $D_t=\{e_1,\dotsc,e_t\}$. During learning, samples of experience $(s, a, r, s')\sim U(D)$ are drawn uniformly at random from the replay memory, which removes the correlations in the observation sequence and smooths over changes in the data distribution. Second, DQN uses an iterative update to adjust the action values Q towards the target values. A neural network function approximator (Q-network) with weights $\theta$ is used to estimate the Q function.
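Before giving the training loss, a minimal sketch (ours; function and variable names are illustrative and not from the original implementation) of these two mechanisms, uniform experience replay and a held-fixed target network producing the Bellman target of the equation above:
\begin{verbatim}
import random
from collections import deque
import numpy as np

class ReplayBuffer:
    """Uniform experience replay: store (s, a, r, s', done) tuples."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # uniform sampling removes correlations in the observation sequence
        batch = random.sample(self.buffer, batch_size)
        return [np.array(x) for x in zip(*batch)]

def td_target(q_target, r, s_next, done, gamma=0.99):
    """Bellman target r + gamma * max_a' Q(s', a'; theta^-); q_target is
    the held-fixed target network mapping a state batch to action values."""
    q_next = q_target(s_next).max(axis=1)
    return r + gamma * (1.0 - done) * q_next
\end{verbatim}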
The Q-network updates only the target values periodically to reduce correlations with the target, and the Q-network is trained by minimising the following loss function $L$ at iteration $i$: \begin{equation} \newcommand*{\Scale}[2][4]{\scalebox{#1}{$#2$}}% \Scale[0.9]{L_i(\theta_i)= \mathbb{E}_{(s,a,r,s')\sim U(D)}\left[{\left(r+\gamma\max_{a'}Q(s',a';{\theta_i}^-)-Q(s,a;\theta_i)\right)}^2\right]} \end{equation} where $\theta_i$ denotes the parameters of the Q-network at iteration $i$. ${\theta_i}^-$ are the parameters of the target network at iteration $i$, which are held fixed between individual updates and are only synchronised with the Q-network parameters $\theta_i$ every fixed number of steps. \subsubsection{A2C} A2C is a synchronous, deterministic variant of asynchronous advantage actor-critic (A3C) \cite{mnih2016asynchronous}. For autonomous driving, A2C maintains a policy $\pi(a_t|s_t;\theta)$ and an estimate of the value function $V(s_t;\theta_v)$, both of which are updated using the same mix of $n$-step returns. The policy and the value function are updated every $t_{max}$ actions or when a terminal state is reached. The update performed by A2C can be denoted as $\nabla_{\theta^{\prime}} \log \pi\left(a_{t} \mid s_{t} ; \theta^{\prime}\right) A\left(s_{t}, a_{t} ; \theta, \theta_{v}\right)$, where $A(s_t,a_t;\theta,\theta_v)$ denotes the advantage function as follows: \begin{equation} \Scale[1]{A(s_t,a_t;\theta,\theta_v)=\sum_{i=0}^{k-1}\gamma^i r_{t+i}+\gamma^kV(s_{t+k};\theta_v)-V(s_t;\theta_v)} \end{equation} where $k$ denotes the number of actions taken since time step $t$ and is upper-bounded by $t_{max}$. \subsubsection{PPO} PPO is a model-free, actor-critic, policy-gradient approach designed to maintain data efficiency and reliable performance \cite{schulman2017proximal}. In PPO, $\pi$ denotes the policy network optimised with its parameterisation $\theta$; the policy network takes the state $s$ as input and outputs an action $a$. PPO uses an actor-critic architecture to enable learning of better policies by reformulating reward signals in terms of the advantage $A$. The advantage function measures how good an action is compared to the other actions available in the state $s$. PPO maximises the surrogate objective function as follows: \begin{equation} \Scale[1]{L(\theta)=\hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\, \mathrm{clip}(r_t(\theta),1-\epsilon,1+\epsilon)\hat{A}_t\right)\right]} \end{equation} where $L(\theta)$ is the policy gradient loss under parameterisation $\theta$. $\hat{\mathbb{E}}_t$ denotes the empirically obtained expectation over a finite batch of samples, and $\hat{A}_t$ denotes an estimator of the advantage function at timestep $t$. $\epsilon$ is a hyperparameter for clipping, and $r_t(\theta)$ is the probability ratio formulated as: \begin{equation} r_t(\theta)=\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)} \end{equation} For autonomous driving, to increase sample efficiency, PPO uses importance sampling to obtain the expectation of samples gathered from an old policy $\pi_{\theta_{old}}$ under the new policy $\pi_\theta$. As $\pi_\theta$ is refined, the two policies will diverge, and the estimation variance will increase. Therefore, the old policy is periodically updated to match the new policy. Clipping the probability ratio to the range $[1-\epsilon,1+\epsilon]$ ensures that the new policy does not deviate too far from the old one.
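As a concrete illustration, a minimal PyTorch-style sketch (ours; argument names are illustrative) of the clipped surrogate objective above, negated so that gradient descent maximises it:
\begin{verbatim}
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective; all arguments are 1-D tensors."""
    ratio = torch.exp(logp_new - logp_old)         # r_t(theta)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    # pessimistic (minimum) bound of the unclipped and clipped objectives
    return -torch.min(ratio * advantages, clipped * advantages).mean()
\end{verbatim}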
The use of clipping discourages extensive policy updates that are outside of the comfortable zone. \subsection{Self-Awareness Safety DRL} The intuition behind a self-awareness safety mechanism driven by attention modules is that it enables the ego vehicle to filter information and emphasise the relevant vehicles that have potential conflicts with its planned route; the ego vehicle is then better informed to make decisions that avoid collisions and stay safe. Following this motivation, we first proposed a self-awareness safety DRL architecture that employs self-attention \cite{RN726} and connects a normalisation layer to improve the training speed of multi-head attention, based on the transformer framework \cite{RN765}. Second, we incorporated the proposed self-attention architecture with a baseline DRL model to build a self-awareness safety DRL, namely attention-DQN, and used it to evaluate the safety performance of the ego vehicle in intersection and roundabout driving scenarios. \subsubsection{Attention-DQN} \begin{figure*}[!t] \centering \includegraphics[width=13cm]{figures/fig1_DQN_architecture.pdf} \caption{Our proposed self-awareness safety DRL (attention-DQN). } \label{fig_DQN_architecture} \end{figure*} As shown in \textbf{Figure \ref{fig_DQN_architecture}}, we present our self-awareness safety DRL, attention-DQN, in a simulated road traffic junction environment including intersection and roundabout scenarios. The ego vehicle is an agent that observes the state $s_t$ of the environment and decides on an action $a_t$; for instance, the vehicle accelerates forward. The environment then gives a reward $r_t$ as feedback and enters a new state $s_{t+1}$. A DQN agent contains two deep networks: the Q network and the target network; the Q network receives all updates during training, while the target network is used to retrieve Q values. The target network parameters are only updated with the Q network parameters every 512 steps to reduce the correlations between Q-value updates and thus make the distribution of the Q-network output stationary \cite{RN766}. DQN also uses the concept of experience replay to form a sufficiently stable input dataset for training, i.e., all experiences in the form of $e_t=\left(s_t, a_t, r_t, s_{t+1}\right)$ are stored in the replay memory and sampled uniformly at random. Furthermore, the two deep networks in \textbf{Figure \ref{fig_DQN_architecture}} are attention networks, and the architecture of an attention network is illustrated in \textbf{Figure \ref{fig_architecture_attention_network}}. For an attention network, the observations containing vehicle features (i.e., location and velocity) are first split into the ego vehicle's inputs and those of the other surrounding vehicles. The input observation information is normalised and then passed to a multilayer perceptron (MLP) network, and the outputs from both MLPs are passed to a multi-head (i.e., two-head) attention block. Specifically, the multi-head attention block contains a single query $Q$ from the ego vehicle, keys $K$ of all the vehicles, and values $V$ of all the vehicles. Linear transformations are applied to $Q$, $K$, and $V$, and scaled dot-product attention is calculated from the transformed data. The attention data from the multiple heads are then concatenated and linearly transformed before being passed to the next layer. Afterwards, the combined attention data are added to the MLP output of the ego vehicle and fed to a LayerNorm layer.
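A minimal sketch (ours; tensor shapes and names are illustrative, not the authors' code) of the single-ego-query scaled dot-product attention inside each head, together with the residual connection into the normalisation layer just described:
\begin{verbatim}
import torch

def ego_attention(q_ego, k_all, v_all):
    """q_ego: (B, 1, d_k) query from the ego vehicle only;
    k_all, v_all: (B, N, d_k) keys/values from all N vehicles."""
    d_k = q_ego.size(-1)
    scores = q_ego @ k_all.transpose(1, 2) / d_k ** 0.5  # (B, 1, N)
    weights = torch.softmax(scores, dim=-1)              # weight per vehicle
    return weights @ v_all                               # (B, 1, d_k)

# Residual connection and normalisation, as described above:
# out = torch.nn.LayerNorm(d_model)(ego_mlp_out + combined_heads)
\end{verbatim}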
Finally, the regularised data from the normalisation layer are passed to the output MLP to produce the action values \footnote {Source code is available at \href{https://github.com/czh513/DRL_Self-Awareness-Safety}{GitHub}. }. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/fig2_attention_architecture.pdf} \caption{The architecture of a self-attention network.} \label{fig_architecture_attention_network} \end{figure} \subsubsection{Default Single-query and Multi-query Attention-DQN} Here, we introduce \textbf{Figure \ref{fig_attention_head}} to further present how an attention head works. In terms of the single-query attention architecture shown in \textbf{Figure \ref{fig_attention_head}-A}, first, the ego vehicle emits a default single query $Q= [q_0]$ to select a subset of vehicles according to the context. The query is computed with a linear projection $L_q$. Second, the query is compared to a set of keys $K= [k_0, \ldots, k_N ]$ that contain descriptive features $k_i$ for each vehicle, and the keys are computed with a shared linear projection $L_k$. Third, the dot product of the query and keys $QK^T$ is computed to assess the similarities between them. The result is scaled by $( 1 / {\sqrt{d_k}})$, where $d_k$ is the dimension of the keys, and a softmax function is applied to obtain the weights on the values $V= [v_0, \ldots, v_N ]$. The values are also computed with a shared linear projection $L_v$. Please note that the keys $K$ and values $V$ are concatenated from all vehicles, whereas the query $Q$ is only produced by the ego vehicle. The output attention matrix is formalised as: \begin{equation} \label{eqn_attention} \mathrm{output}=\mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \end{equation} In addition, we explored a multi-query attention architecture, marked as ``MultiQ-attention-DQN'' and demonstrated in \textbf{Figure \ref{fig_attention_head}-B}, which is tweaked from the single-query attention architecture to observe performance variations; the main differences are highlighted in orange. Here, we include queries from all the vehicles instead of using the default single query from the ego vehicle. \begin{figure}[!t] \centering \includegraphics[width=7cm]{figures/fig3_attention_head.pdf} \caption{The architecture of an attention head: (A) single-query attention head; (B) multi-query attention head.} \label{fig_attention_head} \end{figure} \section{Experiments} \subsection{Driving Environment} In this study, we used two representative driving scenarios, intersection and roundabout, as shown in \textbf{Figure \ref{fig_scenario_eg}}, to investigate the safety of DRL for road traffic junction environments based on the highway simulation system \cite{RN754}. We selected the intersection (\textbf{Figure \ref{fig_scenario_eg}-A}) and roundabout (\textbf{Figure \ref{fig_scenario_eg}-B}) scenarios as relatively challenging interweaving traffic environments. Each includes one ego vehicle trying to cross the traffic and arrive at a destination, and several surrounding vehicles spawned randomly. The ego vehicle needs to make decisions such as turning left, changing lanes, accelerating and decelerating while avoiding collisions with other vehicles. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/fig4_intersection_roundabout_scenario.pdf} \caption{Two representative driving scenarios in the road traffic junctions: (A) intersection; (B) roundabout.
The ego vehicle with green colour is driving as guided by the red arrow to reach the goal.} \label{fig_scenario_eg} \end{figure} \subsubsection{Intersection Scenario} \begin{itemize}[leftmargin=*] \item \textit{Settings}. The intersection scene comprises two roads crossing perpendicularly and stretching in four directions (north, south, east, and west). The roads are populated with the ego vehicle and several surrounding vehicles. The positions, velocities and destinations of the other vehicles are randomly initialised. The ego vehicle starts 60 metres south of the intersection, and its task is to cross the intersection and turn left (west). The goal is achieved (i.e., the destination is reached) if the ego vehicle turns left at the intersection and drives 25 metres or more from the turning point within 13 seconds. Note that, in terms of road priority, the horizontal road has the right of way in the intersection scenario. \textbf{Figure \ref{fig_scenario_eg}-A} is a showcase of the intersection scenario, where the goal of the ego vehicle (in green) is to turn left as guided by the red arrow. \item \textit{Observations}. The vehicle observations of the intersection scenario follow the kinematic bicycle model \cite{RN755}, which implements simplistic behaviour and considers same-lane interactions to prevent potential lateral collisions \cite{RN754}: each vehicle predicts the future positions of neighbouring vehicles over a three-second horizon. If a collision with a neighbour is predicted, the yielding vehicle is decided according to the road priorities. The chosen yielding vehicle will brake until the collision prediction clears. \item \textit{Actions}. The ego vehicle in the intersection scenario is designed to operate by selecting one action from a finite set \textit{A = \{slower, no-operation, faster\}}, where the vehicle can choose to slow down, keep a constant speed, or accelerate. \item \textit{Rewards}. The reward of the ego vehicle for the intersection scenario is designed as follows: the ego vehicle receives a reward of 1 if it drives at the maximum speed or if it arrives at the destination, and receives a reward of -5 if a collision happens. This reward design encourages the ego vehicle to cross the intersection and arrive at the destination as soon as possible while avoiding collisions. \end{itemize} \subsubsection{Roundabout Scenario} \begin{itemize}[leftmargin=*] \item \textit{Settings}. The scene of the roundabout scenario is composed of a two-lane roundabout with four entrances/exits (north, south, east, and west), and the roads are occupied by the ego vehicle and several surrounding vehicles. The positions, velocities and destinations of the other vehicles are initialised randomly. The ego vehicle starts 125 metres south of the roundabout, and its task is to cross the roundabout and take the second exit (north). The goal is accomplished if the ego vehicle successfully takes the second exit at the roundabout and drives 10 metres or more from the exit point within 13 seconds. Note that, in terms of road priority, vehicles already in the roundabout have the right of way. As presented in \textbf{Figure \ref{fig_scenario_eg}-B} for a showcase of the roundabout scenario, the goal of the ego vehicle is to exit the roundabout following the red arrow. \item \textit{Observations}. The vehicle observations of the roundabout scenario also use the kinematics type.
The ego vehicle observes and learns from a $V \times F$ array, where $V$ is the number of nearby vehicles and $F$ is the number of features observed. In our setting, the ego vehicle observes a feature set of size 7: $\{p,x,y,v_x,v_y,cos_h,sin_h\}$, where $p$ represents whether a vehicle is in the row (i.e., whether a vehicle was observed), $x$ and $y$ describe the location of the vehicle, $v_x$ and $v_y$ denote the $x$ and $y$ components of the velocity, and $cos_h$ and $sin_h$ describe the heading direction of the vehicle. \item \textit{Actions}. The actions of the ego vehicle in the roundabout scenario are selected from a finite set: \textit{A = \{lane\_{left}, lane\_{right}, idle, faster, slower\}}, implying that the vehicle can choose to change to the left/right lane, keep the same speed and lane, accelerate, or decelerate. \item \textit{Rewards}. The reward of the ego vehicle for the roundabout scenario is arranged as follows: the ego vehicle receives a reward of 0.5 if it drives at the maximum speed and receives a reward of -1 if a collision happens. Every time the vehicle changes lanes, it receives a reward of -0.05. We also designed a success reward of 1 if the ego vehicle arrives at the destination, encouraging the ego vehicle to reach the destination. We expect this reward design to encourage the vehicle to cross the roundabout and arrive at the destination as soon as possible while avoiding collisions and unnecessary lane changes. \end{itemize} \subsection{Evaluation Metrics} Our study defined four evaluation metrics: three that directly capture safety concerns (collision, success, and freezing) alongside the standard total reward, to assess the three baseline DRL models (DQN, PPO, and A2C) and our proposed self-awareness safety DRL, attention-DQN. \begin{itemize}[leftmargin=*] \item \textit{Collision rate}. It calculates the percentage of episodes in which a collision happens to the ego vehicle, reflecting the safety level of the DRL models used to train and test the performance of an autonomous vehicle. \item \textit{Success rate}. It counts the percentage of episodes in which the ego vehicle successfully reaches the destination within the allowed duration. It also reflects the functionality of the autonomous vehicle, evaluating the ability of a DRL model to learn behaviour that completes the desired task. In our experiment, the driving scenario has a timeout setting representing the maximum duration to cross the junction. Vehicles exceeding this duration are deemed unacceptably slow, which aligns with real-world driving, where cars moving extremely slowly on the road are usually unacceptable. \item \textit{Freezing rate}. It determines the percentage of episodes in which the ego vehicle freezes or moves extremely slowly in the driving environment. In transportation systems, it is common to observe the freezing robot problem, where the ego vehicle acts overcautiously and freezes or moves extremely slowly on the road. In this case, the ego vehicle can be a serious safety hazard to traffic, for example by causing serial rear-end collisions. This metric is vital for straightforward performance comparisons to evaluate whether our proposed module can help alleviate the freezing robot problem, since a frozen ego vehicle contributes to neither the collision rate nor the success rate.
In our experiment, the freezing rate is calculated as the residual of the success rate and collision rate as follows: \begin{equation} Rate_{freezing}=100\%-Rate_{collision}-Rate_{success} \end{equation} \item \textit{Total reward}. It is a typical metric commonly used to evaluate the overall performance of the ego vehicle over several episodes. Our experiment collected the total reward that the ego vehicle receives after taking a set of actions in each episode. As the ego vehicle learns to drive better, the total reward acquired in each episode would increase with more learning episodes. \end{itemize} \subsection{Experimental Procedure} We conducted two experiments in this study: in experiment 1, three baseline DRL models, DQN, A2C, and PPO, were selected to train the ego vehicle, and the safety performance was then evaluated based on the four defined metrics in the intersection and roundabout driving scenarios; in experiment 2, we trained the ego vehicle in the intersection and roundabout driving scenarios using the proposed single-query and multi-query attention-DQN to compare their performance with the baseline DQN model. In addition, we report the safety performance of the above DRL models from the two experiments during both the training and testing phases. \subsubsection{Experiment 1} During the training phase, the ego vehicle was trained with three baseline DRL models (DQN, A2C, and PPO) in the intersection and roundabout driving scenarios. We observed that the collision rate and success rate of each DRL model converged after 60,000 training episodes with 5 runs. In the testing phase, we evaluated the trained DRL models over 100 episodes per trial for 100 trials in total, testing each of the 5 trained runs for 20 trials. For each trial, the number of collisions and successes during the 100 testing episodes was recorded to represent the collision rate and success rate. Additionally, the freezing rate was calculated as the residual of the collected collision rate and success rate. \subsubsection{Experiment 2} We trained the ego vehicle using the default single-query and multi-query attention-DQN in the intersection and roundabout driving scenarios for 60,000 episodes with 5 runs. Please note that the hyperparameters we used for the attention-based DQN are the same as for the baseline DQN, for a fair comparison. In the testing phase, we evaluated the trained single-query and multi-query attention-DQN models over 100 episodes per trial for 100 trials in total, testing each of the 5 trained runs for 20 trials. We recorded the number of collisions and the number of successes during the 100 episodes to indicate the collision rate and success rate for each trial. The freezing rate is the residual calculated from the collision rate and the success rate. \section{Results} \subsection{Evaluation of Experiment 1} \subsubsection{Training Phase} \begin{itemize}[leftmargin=*] \item \textit{Intersection scenario}. As shown in \textbf{Figure \ref{fig_exp1_intersection}}, we present the training evolution of the collision rate (\textbf{Figure \ref{fig_exp1_intersection}-A}), success rate (\textbf{Figure \ref{fig_exp1_intersection}-B}), freezing rate (\textbf{Figure \ref{fig_exp1_intersection}-C}) and total reward (\textbf{Figure \ref{fig_exp1_intersection}-D}) of the ego vehicle in the intersection scenario, trained by the three baseline DRL models (DQN, PPO, and A2C) over 60,000 episodes and repeated 5 times; the displayed values are averaged with a 95\% confidence interval.
We found that the collision rate of the ego vehicles trained by the three baseline DRL models reached above 60\% at the end of the training steps, with DQN showing the lowest collision rate during the training process. We also noticed that the success rates of PPO and A2C reached approximately the same value at 35\%, while the success rate of DQN reached approximately 28\%, which was reflected in the total reward curves. In addition, the freezing rates of the ego vehicles trained by PPO and A2C reached approximately zero from 15,000 episodes onwards, whereas the freezing rate of DQN only decreased to approximately 8\%. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/exp1_training_inter.pdf} \caption{Intersection scenario: the training performance of the ego vehicle with four metrics: (A) collision rate, (B) success rate, (C) freezing rate, and (D) total reward using the baseline DRL models (DQN, A2C, and PPO).} \label{fig_exp1_intersection} \end{figure} \item \textit{Roundabout scenario}. As presented in \textbf{Figure \ref{fig_exp1_roundabout}}, we demonstrate the training evolution of the collision rate (\textbf{Figure \ref{fig_exp1_roundabout}-A}), success rate (\textbf{Figure \ref{fig_exp1_roundabout}-B}), freezing rate (\textbf{Figure \ref{fig_exp1_roundabout}-C}) and total reward (\textbf{Figure \ref{fig_exp1_roundabout}-D}) of the ego vehicle during training in the roundabout scenario, trained by the three baseline DRL models (DQN, PPO, and A2C) over 60,000 episodes and repeated 5 times; the displayed values are averaged with a 95\% confidence interval. We recognised that the collision rate of the ego vehicles trained by all baseline DRL models was below 25\% at the end of the training, with A2C visibly the lowest at approximately 5\%. In contrast, the freezing rates of the ego vehicles trained by A2C and DQN both reached above 60\%, approximately 73\% for A2C and 63\% for DQN. In addition, PPO showed the lowest freezing rate and the highest success rate among the baseline DRL models. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/exp1_training_round.pdf} \caption{Roundabout scenario: the training performance of the ego vehicle with four metrics: (A) collision rate, (B) success rate, (C) freezing rate, and (D) total reward using the baseline DRL models (DQN, A2C, and PPO).} \label{fig_exp1_roundabout} \end{figure} \end{itemize} \subsubsection{Testing Phase} \textbf{Table \ref{table_1}} shows the collision rates, success rates, and freezing rates of the ego vehicles tested in the intersection and roundabout scenarios using the three baseline DRL models over 100 episodes and averaged across 100 runs (10,000 testing episodes in total for each model). Regarding the intersection scenario, the collision rates of the ego vehicles trained by all three baseline models were above 50\%, with A2C being the highest at 65.51\% and DQN the lowest at 56.88\%. In terms of success rate, DQN also had the lowest value, at 29.23\%. This was caused by DQN being the only case that experienced the freezing robot problem, with a freezing rate of 13.89\%. The PPO and A2C models did not have freezing vehicles during the testing phase. Moving to the testing phase of the roundabout scenario, A2C demonstrated the lowest collision rate of 7.2\%, followed by DQN at 14.7\% and PPO at 20.47\%.
DQN had the lowest success rate and the highest freezing rate, whereas PPO had the highest success rate and the lowest freezing rate, with A2C second in both metrics. \begin{table}[!ht] \caption {The testing performance of three baseline DRL models in intersection and roundabout scenarios.} \label{table_1} \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{@{}ccccc@{}} \toprule \textbf{Scenarios} & \textbf{\begin{tabular}[c]{@{}c@{}}Metrics\\ mean (std) \%\end{tabular}} & \textbf{DQN} & \textbf{A2C} & \textbf{PPO} \\ \midrule \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{Intersection}}} & \textit{Collision Rate} & 56.88 (5.39) & 65.51 (5.08) & 64.45 (4.93) \\ \multicolumn{1}{c|}{} & \textit{Success Rate} & 29.23 (4.99) & 34.49 (5.08) & 35.54 (4.93) \\ \multicolumn{1}{c|}{} & \textit{Freezing Rate} & 13.89 (3.81) & 0 (0) & 0 (0) \\ \midrule \multicolumn{1}{c|}{\multirow{3}{*}{\textbf{Roundabout}}} & \textit{Collision Rate} & 14.7 (7.24) & 7.2 (3.03) & 20.47 (4.78) \\ \multicolumn{1}{c|}{} & \textit{Success Rate} & 33.99 (5.66) & 46.35 (7.70) & 74.42 (4.91) \\ \multicolumn{1}{c|}{} & \textit{Freezing Rate} & 51.31 (10.33) & 46.45 (8.58) & 5.11 (5.39) \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \subsubsection{Our Findings} We observed significant room to improve safety performance in the testing phase, as shown in \textbf{Table \ref{table_1}}, especially in the collision rates of the baseline DRL models. In the intersection scenario, we observed a high collision rate of more than 50\% for all three baseline DRL models (56.88\% for DQN, 65.51\% for A2C, and 64.45\% for PPO), suggesting that the autonomous vehicle in the road traffic junction driving environment is far from safe. Furthermore, we noticed that the collision rate in the roundabout scenario is lower (<20\%) than that in the intersection scenario, but this is mainly caused by the high freezing rate, which does not genuinely alleviate the safety concerns. Specifically, the freezing robot problem observed in the roundabout scenario was evident during the training phase for all baseline DRL models, especially for DQN, which froze in more than half of the testing episodes (51.31\%). Considering the above performance results, we chose DQN to investigate further whether the proposed self-attention module could decrease the collision rate and alleviate the freezing robot problem in the intersection and roundabout driving scenarios. In addition, we demonstrate a showcase in \textbf{Figure \ref{fig_exp1_findings}} of testing episodes using DQN in the intersection and roundabout scenarios to further clarify our findings. The goal of the ego vehicle is to turn left in the intersection scenario (\textbf{Figure \ref{fig_exp1_findings}-A}), and the observed behaviour of the ego vehicle is as follows: (1) the ego vehicle began from the starting point, driving towards the intersection; (2) the ego vehicle reached the intersection entry, and some other vehicles came from the left and right. The surrounding vehicle on the left probably would not collide with the ego vehicle since it was turning right, but the surrounding vehicle from the right might collide with the ego vehicle; (3) the ego vehicle made the decision to turn left and collided with the incoming car from the right.
For the roundabout scenario (\textbf{Figure \ref{fig_exp1_findings}-B}), the goal of the ego vehicle is to take the north (second) exit from the roundabout, and the observed behaviour of the ego vehicle is as follows: (1) the ego vehicle began from the starting point, driving towards the roundabout, while other vehicles in the roundabout might collide with it; (2) the ego vehicle stayed essentially where it was even when it seemed safe to enter the roundabout; (3) only when the roundabout was completely clear of vehicles did the ego vehicle finally move forward, before time ran out and the episode ended. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/exp1_findings.pdf} \caption{A showcase of testing episodes using DQN in (A) intersection and (B) roundabout scenarios.} \label{fig_exp1_findings} \end{figure} \subsection{Evaluation of Experiment 2} \subsubsection{Training Phase} \begin{itemize}[leftmargin=*] \item \textit{Intersection scenario}. As shown in \textbf{Figure \ref{fig_exp2_intersection}}, we present the training evolution of the collision rate (\textbf{Figure \ref{fig_exp2_intersection}-A}), success rate (\textbf{Figure \ref{fig_exp2_intersection}-B}), freezing rate (\textbf{Figure \ref{fig_exp2_intersection}-C}) and total reward (\textbf{Figure \ref{fig_exp2_intersection}-D}) of the ego vehicle in the intersection scenario, learned by DQN, default (single-query) and multiQ (multi-query) attention-DQN over 60,000 episodes, displayed as mean values averaged over 5 runs with a 95\% confidence interval. We observed that the ego vehicle trained by the default attention-DQN (orange) achieved the lowest collision rate and the highest success rate, demonstrated by a significant drop in the collision rate, from approximately 67\% for DQN to approximately 20\% for attention-DQN, as well as a considerable increase in the success rate, from 28\% for DQN to 49\% for attention-DQN. The changes in collision and success rates were also reflected in the total reward. We also noticed that multiQ-attention-DQN (green) exhibited similar, though slightly weaker, performance compared with the default attention-DQN. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/exp2_training_inter.pdf} \caption{Intersection scenario: the training performance of the ego vehicle with four metrics: (A) collision rate, (B) success rate, (C) freezing rate, and (D) total reward using DQN, single-query and multi-query attention-DQN.} \label{fig_exp2_intersection} \end{figure} \item \textit{Roundabout scenario}. \textbf{Figure \ref{fig_exp2_roundabout}} demonstrates the training evolution of the collision rate (\textbf{Figure \ref{fig_exp2_roundabout}-A}), success rate (\textbf{Figure \ref{fig_exp2_roundabout}-B}), freezing rate (\textbf{Figure \ref{fig_exp2_roundabout}-C}) and total reward (\textbf{Figure \ref{fig_exp2_roundabout}-D}) of the ego vehicle in the roundabout scenario, learned by DQN, default (single-query) and multiQ (multi-query) attention-DQN over 60,000 episodes, displayed as mean values averaged over 5 runs with a 95\% confidence interval. We observed that the ego vehicle trained by attention-DQN was the quickest to learn to avoid collisions during the early training period, as seen from the steepness of the curve between 1 and 10,000 episodes, and finally reached a low collision rate of 12\% at episode 60,000.
For the success rate, the ego vehicle trained by the attention-DQN achieved the highest performance at approximately 74\%, a significant improvement compared to the 27\% achieved by DQN. In addition, the freezing rate dropped from 63\% for DQN to approximately 15\% for attention-DQN. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/exp2_training_round.pdf} \caption{Roundabout scenario: the training performance of the ego vehicle with four metrics: (A) collision rate, (B) success rate, (C) freezing rate, and (D) total reward using DQN, single-query and multi-query attention-DQN.} \label{fig_exp2_roundabout} \end{figure} \end{itemize} \subsubsection{Testing Phase} As shown in \textbf{Table \ref{table_2}}, we summarise the collision rate, success rate, and freezing rate of the ego vehicle tested with the trained DRL models (DQN, default attention-DQN and multiQ-attention-DQN) over 100 episodes and averaged across 100 runs (10,000 testing episodes in total for each DRL model) in the intersection and roundabout scenarios. In the intersection scenario, the ego vehicle trained by attention-DQN showed the lowest collision rate of 14.57\% and the highest success rate of 52.58\%. For the roundabout scenario, similarly, attention-DQN achieved the lowest collision rate of 12.80\% and presented a significant increase in the success rate, from 33.99\% for DQN to 83.92\% for attention-DQN. The freezing rate of attention-DQN was also the lowest at 3.28\%, reduced significantly compared to that of DQN (51.31\%). We also noticed that the performance of multiQ-attention-DQN ranked second, slightly below the default attention-DQN but very close. \begin{table}[!ht] \caption {The testing performance of DQN, single-query and multi-query attention-DQN in intersection and roundabout scenarios.} \label{table_2} \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{@{}c|cccc@{}} \toprule \multirow{2}{*}{\textbf{Scenarios}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Metrics\\ mean (std) \%\end{tabular}}} & \multirow{2}{*}{\textbf{DQN}} & \multicolumn{2}{c}{\textbf{Attention-DQN}} \\ \cmidrule(l){4-5} & & & \textbf{Default} & \textbf{MultiQ} \\ \midrule \multirow{3}{*}{\textbf{Intersection}} & \textit{Collision Rate} & 56.88 (5.39) & 14.57 (3.76) & 18.54 (6.40) \\ & \textit{Success Rate} & 29.23 (4.99) & 52.58 (6.35) & 49.18 (6.74) \\ & \textit{Freezing Rate} & 13.89 (3.81) & 32.85 (6.03) & 32.28 (8.88) \\ \midrule \multirow{3}{*}{\textbf{Roundabout}} & \textit{Collision Rate} & 14.70 (7.24) & 12.80 (5.30) & 12.93 (4.73) \\ & \textit{Success Rate} & 33.99 (5.66) & 83.92 (4.30) & 80.90 (8.06) \\ & \textit{Freezing Rate} & 51.31 (10.33) & 3.28 (3.17) & 6.17 (9.93) \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \subsubsection{Our Findings} The evaluation outcomes show that attention-DQN effectively improved the safety of the autonomous vehicle in the intersection and roundabout driving scenarios. In the intersection scenario, this translates as the attention mechanism helping the ego vehicle pay better attention to the other vehicles in the traffic, thus avoiding collisions and completing the task more effectively and successfully.
In the roundabout scenario, attention-DQN showed a meagre freezing rate of only 3.28\% in the testing phase, suggesting that the attention mechanism alleviates the freezing robot problem because the ego vehicle is more confident in moving forward given supplementary attention information about the surrounding vehicles. \begin{figure}[!htp] \centering \includegraphics [width=\columnwidth] {figures/exp2_findings.pdf} \caption{A showcase of testing episodes using attention-DQN in (A) intersection and (B) roundabout scenarios. The two attention heads are visualised as green and blue lines connecting the ego vehicle and other surrounding vehicles, and the width of a line corresponds to the weight of the attention. } \label{fig_exp2_findings} \end{figure} In addition, we demonstrate a showcase \footnote {A testing demonstration of replay videos is available at \href{https://github.com/czh513/DRL_Self-Awareness-Safety}{GitHub}. } in \textbf{Figure \ref{fig_exp2_findings}} of testing episodes using attention-DQN in the intersection and roundabout scenarios to further clarify our findings. Here, please note that ``attention'' is visualised as lines connecting the ego vehicle and other surrounding vehicles. The two colours (green and blue) represent the two attention heads, and the width of a line corresponds to the weight of the attention. The goal of the ego vehicle is to turn left in the intersection scenario (\textbf{Figure \ref{fig_exp2_findings}-A}), and the observed attention behaviour of the ego vehicle is as follows: (1) the ego vehicle paid attention to the surrounding vehicles from the left, front, and right directions; (2) as the ego vehicle came closer to the intersection, the attention to the incoming vehicles strengthened; (3) the surrounding vehicles from the left turning right, and from the front turning left, at the intersection were no longer threats to the ego vehicle, so the main attention switched to the surrounding vehicles from the right, whose intention was not yet clear; (4) because the vehicles from the right showed the intention to turn right and drove almost out of the intersection, the attention on them decreased; (5) the intersection was clear of all surrounding vehicles, so the attention came back to the ego vehicle itself as it crossed the intersection; (6) the ego vehicle drove on the destination lane with slight attention to the vehicle in front of it to keep its distance. Moreover, for the roundabout scenario (\textbf{Figure \ref{fig_exp2_findings}-B}), the ego vehicle needs to learn to cross the roundabout and take the desired exit, and the observed attention behaviour of the ego vehicle is as follows: (1) the ego vehicle paid substantial attention to the surrounding vehicles coming from the left that were likely to collide with it; (2) the ego vehicle waited until the surrounding vehicle from the left passed the entry point and switched its primary attention to the surrounding vehicles coming from the following entry; (3) the ego vehicle entered the roundabout and kept its attention on the other surrounding vehicles; (4) the attention to the front vehicle strengthened as the ego vehicle got closer to it; (5) the ego vehicle exited the roundabout but still kept its attention on the front vehicle to keep a reasonable distance; and (6) the attention switched back to the ego vehicle when it was safe enough.
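The rates reported throughout this section can be reproduced from logged episode outcomes; a minimal helper (ours, with illustrative outcome labels) implementing the residual definition of the freezing rate given earlier:
\begin{verbatim}
def junction_metrics(outcomes):
    """outcomes: per-episode labels, 'collision', 'success' or 'timeout'."""
    n = len(outcomes)
    collision = 100.0 * outcomes.count('collision') / n
    success = 100.0 * outcomes.count('success') / n
    freezing = 100.0 - collision - success   # residual definition
    return collision, success, freezing

# e.g. junction_metrics(['success']*84 + ['collision']*13 + ['timeout']*3)
\end{verbatim}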
\section{Discussion} Based on our findings, we observed that attention-DQN significantly lowered the freezing rate of the ego vehicle in the roundabout scenario, from above 50\% in both training and testing to only 3.28\% in testing and approximately 15\% in training. This translates into a significant improvement on the freezing-robot problem in the roundabout scenario. One possible reason, we speculate, is that the attention mechanism enables the ego vehicle to be more confident in moving forward by paying more attention to the surrounding vehicles. Interestingly, the attention mechanism did not have the same effect on the ego vehicle in the intersection scenario: the freezing rates of both the training and testing results after applying the attention mechanism were higher than those of the baseline algorithm. The collision rate did decrease considerably in the intersection scenario, but not all of the decrease transferred to the success rate; part of it transferred to the freezing rate instead. The cause is unclear at the current stage, but one observation during the experiments was that crashes between other vehicles happened in intersections and sometimes blocked the path of the ego vehicle. In such cases, the ego vehicle without attention collided with the crashed vehicles, whereas with attention it chose to freeze, as shown in \textbf{Figure \ref{fig_11}}. \begin{figure}[!htp] \centering \includegraphics [width=4cm] {figures/fig11.pdf} \caption{A showcase of the frozen vehicle in the intersection scenario.} \label{fig_11} \end{figure} For DRL in a complex autonomous driving environment, our research still presents two limitations. First, the simulated driving environment exhibits stochastic behaviours: our testing scenarios are complex scenes populated with vehicles, and one vehicle on the road may collide with another. During the experiments, we observed that crashes between other vehicles sometimes blocked the path of our ego vehicle; as a result, the ego vehicle could not move forward and could only wait until timeout, which was counted towards the freezing rate, suggesting that the freezing rate may be inflated by such cases. Second, our proposed model is still not entirely safe for real-world autonomous driving, so it is worth exploring the combination of optimisation measures and richer sensing information with DRL-trained agents for further safety improvement. \section{Conclusion} In this study, we evaluated the safety performance of autonomous driving in complex road traffic junction environments, namely challenging intersection and roundabout scenarios. We designed four evaluation metrics (collision rate, success rate, freezing rate, and total reward) and compared three baseline DRL models (DQN, A2C, and PPO) with the proposed attention-DQN model for self-awareness safety improvement. Our results from two experiments exposed drawbacks of the current DRL models and showed that introducing the attention mechanism significantly reduces the collision rate and freezing rate, which validates the effectiveness of our proposed attention-DQN model in enhancing the safety of the ego vehicle. Our findings have the potential to contribute to and benefit safe DRL in transportation applications. \balance \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} The anomalous magnetic moment of the muon $a_\mu$ provides an important test of the Standard Model (SM) and is potentially sensitive to contributions from New Physics~\cite{JN_09}. In fact, for several years now a deviation has been observed between the experimental measurement and the SM prediction, $a_\mu^{\rm exp} - a_\mu^{\rm SM} \sim (250-300) \times 10^{-11}$, corresponding to about $3 - 3.5$~standard deviations~\cite{JN_09,recent_estimates}. Hadronic effects dominate the uncertainty in the SM prediction of $a_\mu$ and make it difficult to interpret this discrepancy as a sign of New Physics. In particular, in contrast to the hadronic vacuum polarization in the $g-2$, which can be related to data, the estimates for the hadronic light-by-light (LbyL) scattering contribution $a_\mu^{{\rm had.\ LbyL}} = (105 \pm 26) \times 10^{-11}$~\cite{PdeRV_09} and $a_\mu^{{\rm had.\ LbyL}} = (116 \pm 40) \times 10^{-11}$~\cite{Nyffeler_09,JN_09} rely entirely on calculations using {\it hadronic models} which employ form factors for the interaction of hadrons with photons. The more recent papers~\cite{Schwinger_Dyson,GdeR_12} yield a larger central value and a larger error of about $(150 \pm 50) \times 10^{-11}$. For a brief review of had.\ LbyL scattering in $a_\mu$ see Ref.~\cite{B_ZA}, which also includes a reanalysis of the charged pion loop contribution; see also Ref.~\cite{pion_loop}. To fully profit from future planned $g-2$ experiments with a precision of $15 \times 10^{-11}$, these large model uncertainties have to be reduced. Lattice QCD may at some point give a reliable number; see the talk~\cite{Blum_Lattice_2012} for some recent encouraging progress. Meanwhile, experimental measurements and theoretical constraints of the relevant form factors can help to constrain the models and to reduce the uncertainties in $a_\mu^{{\rm had.\ LbyL}}$. In most model calculations, pion-exchange gives the numerically dominant contribution\footnote{Apart from the recent papers~\cite{Schwinger_Dyson,GdeR_12}, where the (dressed) quark loop gives the largest contribution.}; it has therefore received a lot of attention. In our paper~\cite{KLOE-2_impact} we studied the impact of planned measurements at KLOE-2 of the $\pi^0\to\gamma\gamma$ decay width to 1\% statistical precision and the $\gamma^\ast\gamma\to\pi^0$ transition form factor ${\cal F}_{\pi^0\gamma^\ast\gamma}(Q^2)$ for small space-like momenta, $0.01~\mbox{GeV}^2 \leq Q^2 \leq 0.1~\mbox{GeV}^2$, to 6\% statistical precision in each bin, on estimates of the pion-exchange contribution $a_\mu^{{\rm LbyL}; \pi^0}$. We would like to stress that a realistic calculation of $a_\mu^{{\rm LbyL}; \pi^0}$ is {\it not} the purpose of this paper. The estimates given below are performed to demonstrate, within several models, the improvement in uncertainty which will become possible when the KLOE-2 data appear. The simulations in Ref.~\cite{KLOE-2_impact} have been performed with the dedicated Monte-Carlo program EKHARA~\cite{EKHARA} for the process $e^+ e^- \to e^+ e^- \gamma^* \gamma^* \to e^+ e^- P$ with $P = \pi^0, \eta, \eta^\prime$, followed by the decay $\pi^0 \to \gamma\gamma$ and combined with a detailed detector simulation. \section{Impact of KLOE-2 measurements on $a_\mu^{\mathrm{LbyL};\pi^0}$} \label{sec:KLOE} Any experimental information on the neutral pion lifetime and the transition form factor is important in order to constrain the models used for calculating the pion-exchange contribution.
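For orientation, we recall the standard two-photon width formula (a textbook relation, not specific to any of the models considered here),
\begin{equation*}
\Gamma_{\pi^0 \to \gamma\gamma} = \frac{\pi \alpha^2 m_{\pi}^3}{4} \left| {\cal F}_{{\pi^0}^*\gamma^*\gamma^*}(m_\pi^2, 0, 0) \right|^2 ,
\end{equation*}
so that with the WZW normalization ${\cal F}_{{\pi^0}^*\gamma^*\gamma^*}(m_\pi^2, 0, 0) = 1/(4 \pi^2 F_\pi)$ one obtains $\Gamma_{\pi^0 \to \gamma\gamma} = \alpha^2 m_\pi^3 / (64 \pi^3 F_\pi^2) \approx 7.76$~eV for $F_\pi = 92.2$~MeV, close to the measured values quoted below. A width measurement therefore directly constrains the normalization parameter of a given model ($F_\pi$ for VMD, $\bar{h}_7$ for LMD+V).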
However, having a good description of, e.g., the transition form factor is only necessary, not sufficient, in order to uniquely determine $a_\mu^{\mathrm{LbyL};\pi^0}$. As stressed in Ref.~\cite{Jegerlehner_off-shell}, what enters in the calculation of $a_\mu^{{\rm LbyL}; \pi^0}$ is the fully off-shell form factor ${\cal F}_{{\pi^0}^*\gamma^*\gamma^*}((q_1 + q_2)^2, q_1^2, q_2^2)$ (vertex function), where also the pion is off-shell with 4-momentum $(q_1 + q_2)$. Such a (model dependent) form factor can for instance be defined via the QCD Green's function $\langle VVP \rangle$, see Ref.~\cite{Nyffeler_09} for details and references to earlier work. The form factor with on-shell pions is then given by ${\cal F}_{\pi^0\gamma^*\gamma^*}(q_1^2, q_2^2) \equiv {\cal F}_{{\pi^0}^*\gamma^*\gamma^*}(m_\pi^2, q_1^2, q_2^2)$. Measurements of the transition form factor ${\cal F}_{\pi^0\gamma^\ast\gamma}(Q^2) \equiv {\cal F}_{{\pi^0}^\ast\gamma^\ast\gamma^\ast}(m_{\pi}^2, -Q^2, 0)$ are in general only sensitive to a subset of the model parameters and do not allow one to reconstruct the full off-shell form factor. For different models, the effects of the off-shell pion can vary considerably. In Ref.~\cite{Nyffeler_09} an off-shell form factor (LMD+V) was proposed, based on large-$N_C$ QCD matched to short-distance constraints from the operator product expansion, see also Ref.~\cite{KN_EPJC_2001}. This yields the estimate $a_{\mu; {\rm LMD+V}}^{{\rm LbyL}; \pi^0} = (72 \pm 12) \times 10^{-11}$. The error estimate comes from the variation of all model parameters, where the uncertainty of the parameters related to the off-shellness of the pion completely dominates the total error and is {\it not} included in the errors quoted in Table~\ref{tab:amu} below. In contrast to the off-shell LMD+V model, many models, e.g.\ the VMD model, constituent quark models or the ans\"atze for the transition form factor used in Ref.~\cite{Cappiello:2010uy}, do not have these additional sources of uncertainty related to the off-shellness of the pion. These models often have only very few parameters, which can all be fixed by measurements of the transition form factor or from other observables. Therefore, the precision of the KLOE-2 measurement can dominate the total accuracy of $a_\mu^{\mathrm{LbyL};\pi^0}$ in such models. It was noted in Ref.~\cite{Nyffeler:2009uw} that essentially all evaluations of the pion-exchange contribution use the normalization ${\cal F}_{{\pi^0}^*\gamma^*\gamma^*}(m_\pi^2, 0, 0) = 1 / (4 \pi^2 F_\pi)$ for the form factor, as derived from the Wess-Zumino-Witten (WZW) term. Then the value $F_\pi = 92.4~\mbox{MeV}$ is used without any error attached to it, i.e.\ a value close to $F_\pi = (92.2 \pm 0.14)~\mbox{MeV}$, obtained from $\pi^+ \to \mu^+ \nu_\mu(\gamma)$~\cite{Nakamura:2010zzi}. Instead, if one uses the decay width $\Gamma_{\pi^0 \to \gamma\gamma}$ for the normalization of the form factor, an additional source of uncertainty enters, which has not been taken into account in most evaluations. In our calculations we account for this normalization issue, using in the fit: \begin{itemize} \item $\Gamma^{{\rm PDG}}_{\pi^0 \to \gamma\gamma} = 7.74 \pm 0.48$~eV from the PDG 2010~\cite{Nakamura:2010zzi}, \item $\Gamma^{{\rm PrimEx}}_{\pi^0 \to \gamma\gamma} = 7.82 \pm 0.22$~eV from the PrimEx experiment~\cite{Larin:2010kq}, \item $\Gamma^{{\rm KLOE-2}}_{\pi^0 \to \gamma\gamma} = 7.73 \pm 0.08$~eV for the KLOE-2 simulation (assuming a $1\%$ precision).
\end{itemize} The assumption that the KLOE-2 measurement will be consistent with the LMD+V and VMD models allowed us in Ref.~\cite{KLOE-2_impact} to use the simulations as new ``data'' and evaluate the impact of such ``data'' on the precision of the $a_\mu^{{\rm LbyL}; \pi^0}$ calculation. In order to do that, we fit the LMD+V and VMD models to the data sets~\cite{TFF_data} from CELLO, CLEO and BaBar for the transition form factor and the values for the decay width given above: \begin{displaymath} \label{eq:fitdatasets} \begin{array}{ll} A0: & \mbox{CELLO, CLEO, PDG} \\ A1: & \mbox{CELLO, CLEO, PrimEx} \\ A2: & \mbox{CELLO, CLEO, PrimEx, KLOE-2} \\ B0: & \mbox{CELLO, CLEO, BaBar, PDG} \\ B1: & \mbox{CELLO, CLEO, BaBar, PrimEx} \\ B2: & \mbox{CELLO, CLEO, BaBar, PrimEx, KLOE-2} \end{array} \end{displaymath} The BaBar measurement of the transition form factor does not show the $1/Q^2$ behavior expected from earlier theoretical considerations by Brodsky-Lepage~\cite{Brodsky-Lepage} and seen in the CELLO and CLEO data and the recent measurements from Belle~\cite{Belle}. The VMD model always shows a $1/Q^2$ fall-off and is therefore not compatible with the BaBar data. The LMD+V model has another parameter, $h_1$, which determines the behavior of the transition form factor for large $Q^2$. To get the $1/Q^2$ behavior according to Brodsky-Lepage, one needs to set $h_1 = 0$. However, one can simply leave $h_1$ as a free parameter and fit it to the BaBar data, yielding $h_1 \neq 0$~\cite{Nyffeler:2009uw}. In this case the form factor does not vanish for $Q^2 \to \infty$. Since VMD and LMD+V with $h_1 = 0$ are not compatible with the BaBar data, the corresponding fits are very bad, and we do not include these results in the current paper; see Ref.~\cite{KLOE-2_impact} for details. For illustration, we use the following two approaches to calculate $a_\mu^{{\rm LbyL}; \pi^0}$: \begin{itemize} \item Jegerlehner-Nyffeler (JN) approach~\cite{Nyffeler_09,JN_09} with the off-shell pion form factor; \item Melnikov-Vainshtein (MV) approach~\cite{Melnikov:2003xd}, where one uses the on-shell pion form factor at the internal vertex and a constant (WZW) form factor at the external vertex. \end{itemize} Table~\ref{tab:amu} shows the impact of the PrimEx and the future KLOE-2 measurements on the model parameters and, consequently, on the $a_\mu^{{\rm LbyL}; \pi^0}$ uncertainty. The other parameters of the (on-shell and off-shell) LMD+V model have been chosen as in the papers~\cite{Nyffeler_09,JN_09,Melnikov:2003xd}. We stress again that our estimate of the $a_\mu^{{\rm LbyL}; \pi^0}$ uncertainty is given only by the propagation of the errors of the fitted parameters in Table~\ref{tab:amu}, and we therefore do not necessarily reproduce the total uncertainties given in the original papers. \begin{table*} \caption{KLOE-2 impact on the accuracy of $a_\mu^{{\rm LbyL}; \pi^0}$ in the case of one year of data taking ($5$~fb$^{-1}$).
The values marked with asterisk (*) do not contain additional uncertainties coming from the ``off-shellness'' of the pion (see the text).} \label{tab:amu} {\scriptsize \begin{tabularx}{\textwidth}{clcllll} Model&Data& $\chi^2/d.o.f.$ & & Parameters && $a_\mu^{{\rm LbyL}; \pi^0} \times 10^{11}$\\ \hline VMD & A0 & $6.6/19$ & $M_V = 0.778(18)$~GeV & $F_\pi = 0.0924(28)$~GeV && $(57.2 \pm 4.0)_{JN}$\\ VMD & A1 & $6.6/19$ & $M_V = 0.776(13)$~GeV & $F_\pi = 0.0919(13)$~GeV && $(57.7 \pm 2.1)_{JN}$\\ VMD & A2 & $7.5/27$ & $M_V = 0.778(11)$~GeV & $F_\pi = 0.0923(4)$~GeV && $(57.3 \pm 1.1)_{JN}$\\ \hline LMD+V, $h_1 = 0$ & A0 & $6.5/19$ & $\bar{h}_5 = 6.99(32)$~GeV$^4$ & $\bar{h}_7 = -14.81(45)$~GeV$^6$ && $(72.3 \pm 3.5)_{JN}^*$\\ & & & & && $(79.8 \pm 4.2)_{MV}$\\ LMD+V, $h_1 = 0$ & A1 & $6.6/19$ & $\bar{h}_5 = 6.96(29)$~GeV$^4$ & $\bar{h}_7 = -14.90(21)$~GeV$^6$ && $(73.0 \pm 1.7)_{JN}^*$\\ & & & & && $(80.5 \pm 2.0)_{MV}$\\ LMD+V, $h_1 = 0$ & A2 & $7.5/27$ & $\bar{h}_5 = 6.99(28)$~GeV$^4$ & $\bar{h}_7 = -14.83(7)$~GeV$^6$ && $(72.5 \pm 0.8)_{JN}^*$\\ & & & & && $(80.0 \pm 0.8)_{MV}$\\ \hline LMD+V, $h_1 \neq 0$ & A0 & $6.5/18$ & $\bar{h}_5 = 6.90(71)$~GeV$^4$ & $\bar{h}_7 = -14.83(46)$~GeV$^6$& $h_1 = -0.03(18)$~GeV$^2$ & $(72.4 \pm 3.8)_{JN}^*$\\ LMD+V, $h_1 \neq 0$ & A1 & $6.5/18$ & $\bar{h}_5 = 6.85(67)$~GeV$^4$ & $\bar{h}_7 = -14.91(21)$~GeV$^6$& $h_1 = -0.03(17)$~GeV$^2$ & $(72.9 \pm 2.1)_{JN}^*$\\ LMD+V, $h_1 \neq 0$ & A2 & $7.5/26$ & $\bar{h}_5 = 6.90(64)$~GeV$^4$ & $\bar{h}_7 = -14.84(7)$~GeV$^6$ & $h_1 = -0.02(17)$~GeV$^2$ & $(72.4 \pm 1.5)_{JN}^*$\\ LMD+V, $h_1 \neq 0$ & B0 & $18/35$ & $\bar{h}_5 = 6.46(24)$~GeV$^4$ & $\bar{h}_7 = -14.86(44)$~GeV$^6$ & $h_1 = -0.17(2)$~GeV$^2$ & $(71.9 \pm 3.4)_{JN}^*$\\ LMD+V, $h_1 \neq 0$ & B1 & $18/35$ & $\bar{h}_5 = 6.44(22)$~GeV$^4$ & $\bar{h}_7 = -14.92(21)$~GeV$^6$ & $h_1 = -0.17(2)$~GeV$^2$ & $(72.4 \pm 1.6)_{JN}^*$\\ LMD+V, $h_1 \neq 0$ & B2 & $19/43$ & $\bar{h}_5 = 6.47(21)$~GeV$^4$ & $\bar{h}_7 = -14.84(7)$~GeV$^6$ & $h_1 = -0.17(2)$~GeV$^2$ & $(71.8 \pm 0.7)_{JN}^*$ \\ \hline \end{tabularx} } \end{table*} We can clearly see from Table~\ref{tab:amu} that for each given model and each approach (JN or MV), there is a trend of reduction in the error for $a_\mu^{{\rm LbyL}; \pi^0}$ (related only to the given model parameters) by about half when going from A0 (PDG) to A1 (including PrimEx) and by about another half when going from A1 to A2 (including KLOE-2). Very roughly, we can write: \begin{itemize} \item Sets A0, B0: $\delta a_\mu^{{\rm LbyL}; \pi^0} \approx 4 \times 10^{-11}$ (with $\Gamma^{{\rm PDG}}_{\pi^0 \to \gamma\gamma}$) \item Sets A1, B1: $\delta a_\mu^{{\rm LbyL}; \pi^0} \approx2 \times 10^{-11}$ (with $\Gamma^{{\rm PrimEx}}_{\pi^0 \to \gamma\gamma}$) \item Sets A2, B2: $\delta a_\mu^{{\rm LbyL}; \pi^0} \approx (0.7 - 1.1) \times 10^{-11}$ (with simulated KLOE-2 data) \end{itemize} This is mainly due to the improvement in the normalization of the form factor, related to the decay width $\pi^0 \to \gamma\gamma$, controlled by the parameters $F_\pi$ or $\bar{h}_7$, respectively, but more data also better constrain the other model parameters $M_V$ or $\bar{h}_5$. This trend of improvement is also visible in the last part of the Table (LMD+V, $h_1 \neq 0$), when we fit the sets B0, B1 and B2 which include the BaBar data. The central values of the final results for $a_\mu^{{\rm LbyL}; \pi^0}$ are only slightly changed, if we include the BaBar data. 
They shift only by about $-0.5 \times 10^{-11}$ compared to the corresponding data sets A0, A1 and A2. This is due to a partial compensation in $a_\mu^{{\rm LbyL}; \pi^0}$ when the central values for $\bar{h}_5$ and $h_1$ change, see Ref.~\cite{Nyffeler:2009uw}. Finally, note that both VMD and LMD+V with $h_1 = 0$ can fit the data sets A0, A1 and A2 for the transition form factor very well with essentially the same $\chi^2$ per degree of freedom for a given data set (see the first and second parts of the table). Nevertheless, the results for the pion-exchange contribution differ by about $20\%$ in these two models. For VMD the result is $a_\mu^{{\rm LbyL}; \pi^0} \sim 57.5 \times 10^{-11}$, while for LMD+V with $h_1 = 0$ it is $72.5 \times 10^{-11}$ with the JN approach and $80 \times 10^{-11}$ with the MV approach. This is due to the different dependence, in these two models, of the fully off-shell form factor ${\cal F}_{{\pi^0}^*\gamma^*\gamma^*}((q_1 + q_2)^2, q_1^2, q_2^2)$ on all momentum variables, which is what enters the pion-exchange contribution~\cite{Jegerlehner_off-shell}. The VMD model is known to have a wrong high-energy behavior with too strong damping, which underestimates the contribution. For the VMD model, measurements of the neutral pion decay width and the transition form factor completely determine the model parameters $F_\pi$ and $M_V$, and the error given in Table~\ref{tab:amu} is the total model error. Note that a smaller error, compared to the off-shell LMD+V model, does not necessarily imply that the VMD model is better, i.e.\ closer to reality; perhaps the model is simply too simplistic. We conclude that the KLOE-2 data with a total integrated luminosity of $5$~fb$^{-1}$ will give a reasonable improvement in the part of the $a_\mu^{{\rm LbyL}; \pi^0}$ error associated with the parameters accessible via the $\pi^0 \to \gamma\gamma$ decay width and the $\gamma^\ast\gamma \to \pi^0$ transition form factor. Depending on the modelling of the off-shellness of the pion, there might be other, potentially larger sources of uncertainty which cannot be improved by the KLOE-2 measurements. \section*{Acknowledgements} I would like to thank my coauthors on Ref.~\cite{KLOE-2_impact} for the pleasant collaboration on this work. I am grateful to the organizers of MESON 2012 for the opportunity to present our paper and for providing a stimulating atmosphere during the meeting. This work is supported by funding from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute.
\section{Introduction} \subsection{Convex functions.} The Hermite-Hadamard inequality dates back to an 1883 observation of Hermite \cite{hermite} with an independent use by Hadamard \cite{hada} in 1893: it says that for convex functions $f:[a,b] \rightarrow \mathbb{R}$ $$ \frac{1}{b-a} \int_{a}^{b}{f(x) dx} \leq \frac{f(a) + f(b)}{2}.$$ This inequality is rather elementary and has been refined in many ways -- we refer to the monograph of Dragomir \& Pearce \cite{drago}. However, there is relatively little work outside of the one-dimensional case; we refer to \cite{cal1, cal2, jianfeng, nicu2, nicu, choquet, past, stein1}. The strongest possible statement that one could hope for is, for convex functions $f:\Omega \rightarrow \mathbb{R}$ defined on convex domains $\Omega \subset \mathbb{R}^n$, $$ \frac{1}{|\Omega|} \int_{\Omega}{f ~d x} \leq \frac{1}{|\partial \Omega|} \int_{\partial \Omega}{f ~d\sigma}.$$ This inequality has been shown to be true for many special cases: it is known for $\Omega = \mathbb{B}_3$ the $3-$dimensional ball by Dragomir \& Pearce \cite{drago} and $\Omega = \mathbb{B}_n$ by de la Cal \& Carcamo \cite{cal1} (other proofs are given by de la Cal, Carcamo \& Escauriaza \cite{cal2} and Pasteczka \cite{past}), the simplex \cite{bes}, the square \cite{square}, triangles \cite{chen} and Platonic solids \cite{past}. It was pointed out by Pasteczka \cite{past} that if the inequality holds for a domain $\Omega$ with constant 1, then plugging in affine functions shows that the center of mass of $\Omega$ and the center of mass of $\partial \Omega$ coincide, which is not generally true for convex bodies; therefore the inequality cannot hold with constant 1 in higher dimensions uniformly over all convex bodies. The first uniform estimate was shown in \cite{stein1}: if $f:\Omega \rightarrow \mathbb{R}$ is a convex, positive function on the convex domain $\Omega \subset \mathbb{R}^n$, then we have \begin{equation} \label{full} \frac{1}{|\Omega|} \int_{\Omega}{f ~dx} \leq \frac{c_n}{|\partial \Omega|} \int_{\partial \Omega}{f~d\sigma} \end{equation} with $c_n \leq 2 n^{n+1}$. In this paper, we will improve this uniform estimate and show that the optimal constant satisfies $n-1 \leq c_n \leq 2 n^{3/2}$. We do not have a characterization of the extremal convex functions $f$ on a given domain $\Omega$ (however, see below: we do have such a characterization in the larger family of subharmonic functions). \subsection{Subharmonic functions.} Niculescu \& Persson \cite{choquet} (see also \cite{cal2, nicu2}) have pointed out that one could also seek such inequalities for subharmonic functions, i.e. functions satisfying $\Delta f \geq 0$. We note that all convex functions are subharmonic. Jianfeng Lu and the last author \cite{jianfeng} showed that for all positive, subharmonic functions $f:\Omega \rightarrow \mathbb{R}$ on convex domains $\Omega \subset \mathbb{R}^n$ \begin{equation} \label{jianfeng} \int_{\Omega}{f ~d x} \leq |\Omega|^{1/n} \int_{\partial \Omega}{f ~d \sigma}. \end{equation} Estimates relating the integral of a positive subharmonic function $f$ over $\Omega$ to the integral over the boundary $\partial \Omega$ are linked to the torsion function on $\Omega$ given by \begin{align*} - \Delta u &= 1 \quad \mbox{in}~\Omega\\ u &= 0 \quad \mbox{on}~\partial \Omega.
\end{align*} Integration by parts and the inequalities $u \geq 0, \Delta f \geq 0$ show that \begin{align*} \int_{\Omega}{ f dx} = \int_{\Omega}{ f (-\Delta u) dx} &= \int_{\partial \Omega}{ \frac{\partial u}{\partial \nu} f d\sigma} - \int_{\Omega}{(\Delta f) u dx} \\ &\leq \int_{\partial \Omega}{ \frac{\partial u}{ \partial \nu} f d\sigma} \\ &\leq \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} \int_{\partial \Omega} { f d\sigma}, \end{align*} where $\nu$ is the inward pointing normal vector. This computation suggests that we may have the following characterization of the optimal constant for a given convex domain $\Omega$. \begin{proposition}[see e.g. \cite{dragomirk, choquet}] The optimal constant $c(\Omega)$ in the inequality $$\int_{\Omega}{f ~d x} \leq c(\Omega) \int_{\partial \Omega}{f ~d \sigma}$$ for positive subharmonic functions is given by $$ c(\Omega) = \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)}.$$ \end{proposition} The lower bound on $c(\Omega)$ follows from setting $f$ to be the Poisson extension of a Dirac measure located at the point at which the normal derivative assumes its maximum. The derivation also shows that it suffices to consider the case of harmonic functions $f$. Implicitly, this also gives a characterization of extremizing functions (via the Green's function). Jianfeng Lu and the last author \cite{jianfeng} used this proposition in combination with a gradient estimate for the torsion function to show that the best constant in (\ref{jianfeng}) is uniformly bounded in the dimension. We will follow a similar strategy to obtain an improved bound for the optimal constant in (\ref{jianfeng}). \section{The Results} Our first result improves the constant $c_n$ from (\ref{full}) in all dimensions for subharmonic functions and shows that the growth is at most polynomial. \begin{theorem} Let $\Omega \subset \mathbb{R}^n$ be convex and let $f:\Omega \rightarrow \mathbb{R}$ be a positive, subharmonic function. Then \begin{equation} \label{thm1} \frac{1}{|\Omega|} \int_{\Omega}{f dx} \leq \frac{c_n}{ |\partial \Omega| } \int_{\partial \Omega}{ f d\sigma}, \end{equation} where the optimal constant $c_n$ satisfies $$ c_n \leq \begin{cases} n^{3/2}~\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \mbox{if}~n~\mbox{is odd}, \\ \frac{n^2+n}{\sqrt{n+2}} ~\qquad\mbox{if}~n~\mbox{is even}. \end{cases}$$ \end{theorem} In particular, for $n=2$ dimensions, our proof shows the inequality $$ \frac{1}{|\Omega|} \int_{\Omega}{f dx} \leq \frac{3}{ |\partial \Omega| } \int_{\partial \Omega}{ f d\sigma},$$ where the constant 3 improves on the constant 8 obtained earlier for convex functions in \cite{stein1}. To complement the result in Theorem 1 we prove that any constant for which \eqref{thm1} is valid must grow at least linearly with the dimension. \begin{theorem} The optimal constant $c_n$ in (\ref{thm1}) is non-decreasing in $n$ and satisfies \begin{equation} \label{lowerbound} c_n \geq \max\{n-1, 1\}. \end{equation} \end{theorem} In order to prove Theorem 2 we establish a connection to an isoperimetric problem that is of interest in its own right. Specifically, we prove the following Lemma. \begin{lemma} In any dimension $n\geq 1$, \begin{equation} \label{convex} \sup \left\{ \frac{|\partial \Omega_1|}{|\Omega_1|} \frac{| \Omega_2|}{|\partial \Omega_2|}: \Omega_2 \subset \Omega_1 ~ \emph{both convex domains in}~\mathbb{R}^{n}\right\}=n. \end{equation} \end{lemma} We are not aware of any prior treatment of this shape optimization problem in the literature.
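As a quick sanity check (added here for illustration), consider the case $n=1$: every convex domain in $\mathbb{R}$ is an interval with $|\partial \Omega| = 2$, so
$$ \frac{|\partial \Omega_1|}{|\Omega_1|} \frac{|\Omega_2|}{|\partial \Omega_2|} = \frac{|\Omega_2|}{|\Omega_1|} \leq 1,$$
and the supremum $1 = n$ is approached as $\Omega_2 \rightarrow \Omega_1$. For $n \geq 2$ the supremum is likewise only approached in the limit, along increasingly elongated pairs of convex bodies, as the construction in \S 5 shows.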
Problem (\ref{convex}) can be equivalently written as $$ \sup \left\{ \frac{|\partial \Omega|}{|\Omega|} \frac{1}{h(\Omega)}: \Omega ~ \mbox{a convex set in}~\mathbb{R}^{n-1}\right\},~ \mbox{where}~ h(\Omega) = \inf_{X \subset \overline{\Omega}}{ \frac{|\partial X|}{|X|}}$$ denotes the Cheeger constant. We refer to Alter \& Caselles \cite{alter} and Kawohl \& Lachand-Robert \cite{kawohl}. The result of Kawohl \& Lachand-Robert \cite{kawohl} will be a crucial ingredient in the proof of Theorem 2 (we note that the infimum runs over all subsets; it is known that the Cheeger set is unique and convex). We also obtain a slight improvement of the constant in (\ref{jianfeng}). \begin{theorem} Let $\Omega \subset \mathbb{R}^n$ be convex and let $f:\Omega \rightarrow \mathbb{R}$ be a positive, subharmonic function. Then $$ \int_{\Omega}{f dx} \leq \frac{ |\Omega|^{1/n}}{ \omega_n^{1/n} \sqrt{n} } \int_{\partial \Omega}{ f d\sigma},$$ where $\omega_n$ is the volume of the unit ball in $n-$dimensions. \end{theorem} We observe that, as $n$ tends to infinity, $\omega_n^{1/n} \sqrt{n} \rightarrow \sqrt{2 \pi e}.$ We also note a construction from \cite{jianfeng} which shows that the constant in Theorem 3 is at most a factor $\sqrt{2}$ from optimal in high dimensions. \section{Proof of Theorem 1} \subsection{Convex functions.} We first give a proof of Theorem 1 under the assumption that $f$ is convex; this argument is fairly elementary and is perhaps useful in other settings. A full proof of Theorem 1 is given in \S 3.2. \begin{proof} This proof combines three different arguments. The first argument is that \begin{equation} \label{b1} \int_{\Omega}{f dx} \leq \frac{w(\Omega)}{2} \int_{\partial \Omega}{ f d\sigma} \end{equation} from the one-dimensional Hermite-Hadamard inequality applied along fibers that are orthogonal to the hyperplanes realizing the width $w(\Omega)$. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=1.6] \draw [very thick] (0,0) -- (5,0); \draw [dashed] (-0.5,0) -- (5.5,0); \draw [dashed] (-0.5,1.05) -- (5.5,1.05); \draw [very thick] (0,0) to[out=45, in=135] (5,0); \draw [thick, <->] (6,0) -- (6,1); \node at (6.3, 0.5) {$w(\Omega)$}; \node at (2.5, 0.5) {$\Omega$}; \draw [dashed] (1,0) -- (1,0.7); \draw [dashed] (1.1,0) -- (1.1,0.7); \draw [dashed] (1.2,0) -- (1.2,0.75); \draw [dashed] (1.3,0) -- (1.3,0.8); \end{tikzpicture} \end{center} \caption{Application of the one-dimensional inequality on a one-dimensional fiber. This step is lossy if the boundary is curved.} \end{figure} Steinhagen \cite{steinhagen} showed that the width can be bounded in terms of the inradius \begin{equation} \label{b2} w(\Omega) \leq \begin{cases} 2 \sqrt{n} \cdot \mbox{inrad}(\Omega)~\,\,\,\,\,\qquad \mbox{if}~n~\mbox{is odd,} \\ 2 \frac{n+1}{\sqrt{n+2}} \cdot\mbox{inrad}(\Omega) ~\qquad\mbox{if}~n~\mbox{is even}.
\end{cases} \end{equation} The last inequality follows from \cite{larson1}: if $\Omega \subset \mathbb{R}^n$ is a convex body and \begin{equation*} \Omega_t = \{x\in \Omega: d(x, \partial\Omega)>t\}, \end{equation*} where $d(x, \partial \Omega)$ denotes the distance to the boundary $$ d(x, \partial \Omega) = \inf_{y \in \partial \Omega}{ \|x-y\|},$$ then $$|\partial \Omega_t| \geq |\partial \Omega| \left(1 - \frac{t}{\mbox{inrad}(\Omega)}\right)_+^{n-1}.$$ Since $|\nabla d(x, \partial \Omega)| = 1$ almost everywhere, the coarea formula implies \begin{align*} |\Omega| &= \int_{0}^{ \tiny \mbox{inrad}(\Omega)} |\partial \Omega_t| dt \\ &\geq |\partial \Omega|\int_{0}^{ \tiny \mbox{inrad}(\Omega)}{ \left(1 - \frac{t}{\mbox{inrad}(\Omega)}\right)^{n-1} dt} = |\partial \Omega| \frac{\mbox{inrad}(\Omega)}{n} \end{align*} and thus we obtain (also stated in \cite[Eq. 13]{larson2}) \begin{equation} \label{b3} \mbox{inrad}(\Omega) \leq n \frac{|\Omega|}{|\partial \Omega|}. \end{equation} Combining inequalities (\ref{b1}), (\ref{b2}) and (\ref{b3}) implies the result. \end{proof} Both Steinhagen's inequality and inequality (\ref{b3}) are sharp for the regular simplex. However, as Fig.~1 shows, an application of the one-dimensional Hermite-Hadamard inequality can only be sharp if the fibers hit the boundary at points where they are normal to it; otherwise there is a Jacobian factor determined by the slope of the boundary, and better results are expected. It is not clear to us how to reconcile these two competing factors. \subsection{A Proof of Theorem 1} \begin{proof} We have, for all positive, subharmonic functions $f:\Omega \rightarrow \mathbb{R}$, $$ \int_{\Omega}{ f dx} \leq \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} \int_{\partial \Omega} { f d\sigma},$$ where $u$ is the torsion function. A classic bound on the torsion function is given in Sperb \cite[Eq. (6.12)]{sperb}, $$ \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} \leq \sqrt{2} \|u\|^{\frac{1}{2}}_{L^{\infty}}.$$ Moreover, using Steinhagen's inequality in combination with (\ref{b3}), we know that $\Omega$ is contained within a strip of width $$ w(\Omega) \leq \frac{|\Omega|}{|\partial \Omega|} \begin{cases} 2 n^{3/2} ~\,\,\,\,\,\qquad \mbox{if}~n~\mbox{is odd,} \\ 2 \frac{n^2+n}{\sqrt{n+2}} ~\qquad\mbox{if}~n~\mbox{is even}. \end{cases}$$ We can now use the maximum principle to argue that the torsion function in $\Omega$ is bounded from above by the torsion function in the strip of width $w(\Omega)$ (see Fig. 2). That torsion function, however, is easy to compute since the problem becomes one-dimensional. Orienting the strip to be given by $$ S = \left\{(x,y) \in \mathbb{R}^{n-1} \times \mathbb{R}: |y| \leq \frac{w(\Omega)}{2} \right\},$$ we see that the torsion function on the strip is given by $$ v(x,y) = \frac{w(\Omega)^2}{8} - \frac{y^2}{2}.$$ \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[scale=0.8] \draw[very thick] (0,0) ellipse (3cm and 1cm); \draw[thick] (-4, 1) -- (4, 1); \draw[thick] (-4, -1) -- (4, -1); \draw [<->] (5, -1) -- (5,1); \node at (5.5,0) {$w(\Omega)$}; \node at (0,0) {$\Omega$}; \end{tikzpicture} \end{center} \caption{The torsion function in $\Omega$ is bounded from above by the torsion function of the strip.} \end{figure} This shows $$\|u\|_{L^{\infty}} \leq \frac{w(\Omega)^2}{8}$$ and thus \begin{align*} \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} &\leq \sqrt{2} \|u\|_{L^{\infty}}^{1/2} \\ &\leq \frac{w(\Omega)}{2} \leq \frac{|\Omega|}{|\partial \Omega|} \begin{cases} n^{3/2} ~\,\,\,\,\,\qquad \mbox{if}~n~\mbox{is odd} \\ \frac{n^2+n}{\sqrt{n+2}} ~\qquad\mbox{if}~n~\mbox{is even}. \end{cases} \end{align*} \end{proof} \section{Proof of Theorem 2} The purpose of this section is to prove $c_{n+1} \geq c_n$ as well as the inequality $$ c_n \geq \sup \left\{ \frac{|\partial \Omega_1|}{|\Omega_1|} \frac{| \Omega_2|}{|\partial \Omega_2|}: \Omega_2 \subset \Omega_1 ~ \mbox{both convex domains in}~\mathbb{R}^{n-1}\right\}.$$ Theorem 2 is then implied by this statement together with the proof of the Geometric Lemma in \S 5. \begin{proof} The proof is based on explicit constructions. We first show that $c_{n+1} \geq c_{n}$. This is straightforward and based on an extension in the $(n+1)$-st coordinate: for any $\varepsilon>0$, we can find a convex domain $\Omega_{\varepsilon} \subset \mathbb{R}^n$ and a positive, convex function $f_{\varepsilon}:\Omega_{\varepsilon} \rightarrow \mathbb{R}$ such that $$ \frac{1}{|\Omega_{\varepsilon}|} \int_{\Omega_{\varepsilon}}{f_{\varepsilon} dx} \geq \frac{c_n - \varepsilon}{ |\partial \Omega_{\varepsilon}| } \int_{\partial \Omega_{\varepsilon}}{ f_{\varepsilon} d\sigma}.$$ We define, for any $z > 0$, $$ \Omega_{z,\varepsilon} = \left\{(x, y): x \in \Omega_{\varepsilon} ~\mbox{and}~ 0 \leq y \leq z\right\} \subset \mathbb{R}^{n+1}$$ and $f_{z, \varepsilon}:\Omega_{z, \varepsilon} \rightarrow \mathbb{R}$ via $$ f_{z, \varepsilon}(x,y) = f_{\varepsilon}(x).$$ Then $$ \frac{1}{|\Omega_{z,\varepsilon}|} \int_{\Omega_{z,\varepsilon}}{f_{z,\varepsilon} dx dy} = \frac{1}{|\Omega_{\varepsilon}|} \int_{\Omega_{\varepsilon}}{f_{\varepsilon} dx}.$$ The boundary integral simplifies for large $z$ since $$ \lim_{z \rightarrow \infty} \frac{1}{ |\partial \Omega_{z,\varepsilon}| } \int_{\partial \Omega_{z,\varepsilon}}{ f_{z,\varepsilon} d\sigma} = \frac{1}{ |\partial \Omega_{\varepsilon}| } \int_{\partial \Omega_{\varepsilon}}{ f_{\varepsilon} d\sigma}.$$ Picking $\varepsilon$ sufficiently small and $z$ sufficiently large shows that $c_{n+1} < c_n$ would lead to a contradiction.\\ We now establish the inequality $$ c_n \geq \sup \left\{ \frac{|\partial \Omega_1|}{|\Omega_1|} \frac{| \Omega_2|}{|\partial \Omega_2|}: \Omega_2 \subset \Omega_1 ~ \mbox{both convex domains in}~\mathbb{R}^{n-1}\right\}.$$ To this end pick $0 \in \Omega_2 \subset \Omega_1 \subset \mathbb{R}^{n-1}$ in such a way that both domains are convex. We will now define a domain $\Omega_N \subset \mathbb{R}^n$ and a convex function $f_N:\Omega_N \rightarrow \mathbb{R}$ where $N \gg 1$ will be a large parameter. We first define the convex sets $$ C_1 = \left\{(x, y): x \in \Omega_1 \,~\mbox{and}\,~ y \geq - N^3\right\}$$ and $$ C_2 = \left\{(x,y): x \in \left(1- \frac{y}{N^2}\right)\Omega_2 \,~\mbox{and}~ y \leq N \right\}.$$ The set $\Omega_N$ is then given as the intersection $\Omega_N = C_1 \cap C_2$ (see Fig.~3).
We observe that $\Omega_N$ is the intersection of two convex sets and is therefore convex. Moreover, the scaling shows that $C_1$ dominates: viewed from ``far away'', $\Omega_N$ looks essentially like a truncated $C_1$. We now make this precise: note that there exists a constant $\lambda \geq 1$ such that $\Omega_1 \subseteq \lambda \Omega_2$ and then $$ \Omega_N \cap \left\{(x,y) \in \mathbb{R}^{n}: y \leq -(\lambda-1) N^2 \right\} = C_1 \cap \left\{(x,y) \in \mathbb{R}^{n}: y \leq -(\lambda-1) N^2 \right\}.$$ This means that for $N \gg \lambda$, the `left' part of the convex domain dominates area and volume. We also observe that \begin{align*} |\Omega_N| &= N^3 |\Omega_1| + \mathcal{O}(N^2) \\ |\partial \Omega_N| &= N^3 |\partial \Omega_1| + \mathcal{O}(N^2), \end{align*} where the implicit constants depend on $\Omega_1$ and $\Omega_2$. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=1.6] \draw [<->, thick] (-1,0) -- (6,0); \node at (6, -0.2) {$y$}; \draw[thick] (1,0) ellipse (0.25cm and 0.5cm); \draw[thick] (0,0) ellipse (0.25cm and 0.5cm); \filldraw (0,0) circle (0.02cm); \node at (0, -0.15) {$-N^3$}; \draw [dashed] (0, 0.5) -- (5,0.5); \draw [dashed] (0, -0.5) -- (5,-0.5); \filldraw (4,0) circle (0.02cm); \node at (4, -0.15) {0}; \draw[thick] (5,0) ellipse (0.2cm and 0.3cm); \filldraw (5,0) circle (0.02cm); \node at (5, -0.15) {$N$}; \draw[thick] (-1,0) ellipse (0.5cm and 1cm); \draw[dashed] (-1, 1) -- (5, 0.3); \draw[dashed] (-1, -1) -- (5, -0.3); \end{tikzpicture} \end{center} \caption{The construction of $C_1$ and $C_2$.} \end{figure} Since $\Omega_2 \subset \Omega_1$, we have that $$ \Omega_N \cap \left\{(x,y) \in \mathbb{R}^{n}: y \geq 0 \right\} = C_2 \cap \left\{(x,y) \in \mathbb{R}^{n}: y \geq 0 \right\}.$$ We now define a convex function on $\mathbb{R}^{n}$ via $$ f(x,y) = \begin{cases} y \quad &\mbox{if}~y \geq 0, \\ 0 \quad &\mbox{otherwise.} \end{cases}$$ We obtain $$ \int_{\Omega_N} f~ dx dy = \int_{\Omega_N \cap \left\{y > 0\right\}} f~ dx dy= \int_{C_2 \cap \left\{y > 0\right\}} f ~dx dy = (1+o(1)) \frac{N^2}{2} |\Omega_2|$$ $$ \int_{\partial \Omega_N} f~ d\sigma = \int_{ \partial \Omega_N \cap \left\{y > 0\right\}} f~ d\sigma = \int_{ \partial C_2 \cap \left\{y > 0\right\}} f ~d\sigma = (1+o(1)) \frac{N^2}{2} |\partial \Omega_2|.$$ This shows that $$ \frac{1}{|\Omega_{N}|} \int_{\Omega_{N}}{f_{} ~ dx} = \frac{(1+o(1))}{2N} \frac{|\Omega_2|}{|\Omega_1|}$$ and $$ \frac{1}{|\partial \Omega_{N}|} \int_{\partial \Omega_{N}}{f_{} ~ d\sigma} = \frac{(1+o(1))}{2N} \frac{|\partial \Omega_2|}{|\partial \Omega_1|}$$ which implies the desired result for $N \rightarrow \infty$. \end{proof} \section{Proof of the Geometric Lemma} \begin{proof} By the inequality $$\frac{|\Omega|}{|\partial\Omega|}\leq \mbox{inrad}(\Omega)\leq n \frac{|\Omega|}{|\partial\Omega|},$$ a proof of which can be found in~\cite{larson2}, the supremum is no larger than $n$ since \begin{equation}\label{eq:upper bound} \frac{|\partial \Omega|}{|\Omega|} \frac{| \Omega'|}{|\partial \Omega'|}\leq \frac{n \cdot \mbox{inrad}(\Omega')}{\mbox{inrad}(\Omega)}\leq n. \end{equation} What remains is to prove that this upper bound is saturated. The underlying idea of our proof is a theorem of Kawohl and Lachand-Robert \cite{kawohl} characterizing the Cheeger set of a convex set $\Omega\subset \mathbb{R}^2$.
Specifically, their theorem states that for a convex $\Omega\subset \mathbb{R}^2$ the Cheeger problem \begin{equation*} h(\Omega)=\inf\biggl\{\frac{|\partial\Omega'|}{|\Omega'|}: \Omega'\subset \overline{ \Omega}\biggr\} \end{equation*} is solved by the set $$ \Omega' = \{x\in \Omega: \exists y \in \Omega \mbox{ such that } x\in B_{1/h(\Omega)^{}}(y)\subset \Omega\}, $$ where $B_{r}(x_0)$ is a ball of radius $r$ centered at $x_0$. We recall our use of the notation \begin{equation*} \Omega_t = \{x\in \Omega: d(x, \partial\Omega)>t\}, \end{equation*} where $d(x, \partial \Omega)$ denotes the distance to the boundary $$ d(x, \partial \Omega) = \inf_{y \in \partial \Omega}{ \|x-y\|},$$ and we can equivalently write the Cheeger set of $\Omega$ as $\Omega'= \Omega_{1/h(\Omega)}+B_{1/h(\Omega)}$, where $B_r$ denotes a ball of radius $r$ centered in 0. Here and in what follows the sum of two sets is to be interpreted in the sense of the Minkowski sum: \begin{equation*} A+B = \{x: x=a+b, a\in A, b\in B\}. \end{equation*} The situation when $n\geq 3$ is more complicated, and as far as we know a precise solution of the Cheeger problem is not available~\cite{alter}. Nonetheless, our aim in what follows is to prove that by taking $\Omega$ as a very thin $n$-simplex we can find a good enough candidate for $\Omega'$ among the one-parameter family of sets \begin{equation}\label{eq:Kawohl_trialsest} \Omega_t + B_t, \quad 0\leq t\leq \mbox{inrad}(\Omega). \end{equation} We construct our candidate for $\Omega$ as follows. Let $\Omega(\eta)\subset \mathbb{R}^n$ be the $n$-simplex obtained by taking a regular $(n-1)$-simplex of sidelength $\eta\gg 1$ in the hyperplane $\{x \in \mathbb{R}^n:x_1=-1\}$ with $(-1, 0, \ldots, 0)$ as center of mass and adding the last vertex at $(h(\eta), 0, \ldots, 0)$, where $h(\eta)$ is chosen so that $\mbox{inrad}(\Omega(\eta))=1$. Note that, as $\eta$ becomes large, $h(\eta)$ is approximately $1$ and $|\Omega(\eta)|\sim \eta^{n-1}$. \\ By construction $B_1\subset \Omega(\eta)$, and it is the unique ball of maximal radius contained in $\Omega(\eta)$. Moreover, the set $\Omega(\eta)$ is a tangential body to this ball (that is, a convex body all of whose supporting hyperplanes are tangential to the same ball). Since every tangential body to a ball is homothetic to its form body~\cite{Schneider1} (in our case $\Omega(\eta)$ is in fact equal to its form body), the main result in~\cite{larson1} implies \begin{equation*} |\partial(\Omega(\eta))_t| = (1-t)^{n-1}|\partial\Omega(\eta)|, \quad \mbox{ for all }t \in [0, 1]. \end{equation*} An application of the coarea formula now yields the identity \begin{equation}\label{eq:vol_per_ratio} |\Omega(\eta)|= \int_0^{1}|\partial(\Omega(\eta))_t|dt = \frac{|\partial\Omega(\eta)|}{n}. \end{equation} We also note that $\Omega(\eta)\subset B_{2\eta}$. To see why this is true, we first note that the inradius of the regular $n$-simplex (by which we mean $n+1$ points all at distance 1 from each other embedded in $\mathbb{R}^n$) is given by $$ r_n = \frac{1}{\sqrt{2n(n+1)}}.$$ The regular simplex is the convex body for which John's theorem is sharp; the circumradius is thus given by $$ R_n = n\cdot r_n = \frac{\sqrt{n}}{\sqrt{2(n+1)}} \leq \frac{1}{\sqrt{2}}.$$ This shows that $\Omega(\eta) \subset B_{2\eta}$ (for the purpose of the proof, the constant 2 is not important and could be replaced by a much larger (absolute) constant).
Since it makes the computations somewhat simpler we consider, for a suitably chosen number $t$, the set $(1+t)\Omega(\eta)$. By construction $B_1\subset \Omega(\eta)$ which implies the inclusion $$ \Omega(\eta)+B_t\subset \Omega(\eta)+t\Omega(\eta)= (1+t)\Omega(\eta). $$ In particular, we can test~\eqref{convex} with $\Omega=(1+t)\Omega(\eta)$ and $\Omega'=\Omega(\eta)+B_t$ for any values of $t, \eta\gg 1$. We note that up to rescaling by $(1+t)^{-1}$ this is exactly the family of sets in~\eqref{eq:Kawohl_trialsest}. Indeed, for each $t$ the set $((1+t)\Omega(\eta))_t=\Omega(\eta)$.\\ The final step of the proof is to show that by letting $t, \eta \to \infty$ appropriately \begin{equation}\label{eq:goal} \frac{|\partial((1+t)\Omega(\eta))|}{|(1+t)\Omega(\eta)|}\frac{|\Omega(\eta)+B_t|}{|\partial(\Omega(\eta)+B_t)|} \to n. \end{equation} To prove~\eqref{eq:goal} we recall the definition and some basic properties of mixed volumes~\cite[p. 275ff]{Schneider1}. Let $\mathcal{K}$ denote the set of convex bodies in $\mathbb{R}^n$ with nonempty interior. The mixed volume is defined as the unique symmetric function $W\colon \mathcal{K}^n \to \mathbb{R}_+$ satisfying \begin{equation*} |\eta_1\Omega_1 + \ldots + \eta_m\Omega_m|= \sum_{j_1=1}^m\cdots \sum_{j_n=1}^m \eta_{j_1}\!\cdots \eta_{j_n}W(\Omega_{j_1}, \ldots, \Omega_{j_n}), \end{equation*} for any $\Omega_{1}, \ldots, \Omega_m \in \mathcal{K}$ and $\eta_1, \ldots, \eta_m\geq 0$~\cite{Schneider1}. Then $W$ satisfies the following properties: \begin{enumerate} \item\label{itma} $W(\Omega_1, \ldots, \Omega_n)>0$ for $\Omega_1, \ldots, \Omega_n \in \mathcal{K}$. \item\label{itmb} $W$ is a multilinear function with respect to Minkowski addition. \item\label{itmc} $W$ is increasing with respect to inclusions in each of its arguments. \item\label{itme} The volume and perimeter of $\Omega\in \mathcal{K}$ can be written in terms of $W$: $$ |\Omega|=W( \underbrace{\Omega, \ldots, \Omega}_{n~\mbox{\tiny times}}) \quad \mbox{and} \quad |\partial\Omega|= n W(\underbrace{\Omega, \ldots, \Omega}_{n-1~\mbox{\tiny times}}, B_1). $$ \end{enumerate} Since only mixed volumes of two distinct sets appear in our proof we introduce the shorthand notation $$ W_j(K, L) = W(\underbrace{K, \ldots, K}_{n-j}, \underbrace{L, \ldots, L}_{j}). $$ By~\eqref{eq:vol_per_ratio}, we have $$ \frac{|\partial((1+t)\Omega(\eta))|}{|(1+t)\Omega(\eta)|} = \frac{n}{1+t}.$$ The definition of $W$ implies $$ |\Omega(\eta)+B_t| = W(\Omega(\eta)+B_t, \dots, \Omega(\eta)+B_t).$$ Multilinearity allows us to expand this term as $$ W(\Omega(\eta)+B_t, \dots, \Omega(\eta)+B_t) = \sum_{j=0}^n \binom{n}{j}t^j W_j(\Omega(\eta), B_1)$$ and the same argument shows $$ |\partial(\Omega(\eta)+B_t)| = n\sum_{j=0}^{n-1}\binom{n-1}{j}t^jW_{j+1}(\Omega(\eta), B_1).$$ Altogether, we can write the expression of interest as \begin{equation}\label{eq:expansion in W} \frac{|\partial((1+t)\Omega(\eta))|}{|(1+t)\Omega(\eta)|}\frac{|\Omega(\eta)+B_t|}{|\partial(\Omega(\eta)+B_t)|} = \frac{n}{(1+t)}\frac{\sum_{j=0}^n \binom{n}{j}t^j W_j(\Omega(\eta), B_1)}{n\sum_{j=0}^{n-1}\binom{n-1}{j}t^jW_{j+1}(\Omega(\eta), B_1)}. \end{equation} In order to prove~\eqref{eq:goal} we need a bound from below. For the sum in the numerator it suffices to keep the first two terms in the expansion and to use~Property (\ref{itma}) of $W$ resulting in \begin{align*} \sum_{j=0}^n \binom{n}{j}t^j W_j(\Omega(\eta), B_1)&\geq W_0(\Omega(\eta), B_1) + n t W_1(\Omega(\eta), B_1) \\ &= |\Omega(\eta)|+ t|\partial\Omega(\eta)|. 
\end{align*} To bound the sum in the denominator we wish to keep the term with $j=0$ as is. For $j\geq 1$, we now use that $\Omega(\eta)\subset B_{2\eta}$ together with Property (\ref{itmc}) to bound $$ W_{j+1}(\Omega(\eta), B_1) \leq W_{j+1}(B_{2\eta}, B_{1})=(2\eta)^{n-j-1}|B_1|. $$ Inserting the two bounds above into~\eqref{eq:expansion in W} yields \begin{align*} \frac{|\partial((1+t)\Omega(\eta))|}{|(1+t)\Omega(\eta)|}&\frac{|\Omega(\eta)+B_t|}{|\partial(\Omega(\eta)+B_t)|}\\ &\geq \frac{n}{(1+t)}\frac{|\Omega(\eta)|+t|\partial\Omega(\eta)|}{|\partial\Omega(\eta)|+n\sum_{j=1}^{n-1}\binom{n-1}{j}2^{n-j-1}t^j\eta^{n-j-1} |B_1|}. \end{align*} We recall that $|\Omega(\eta)| = n^{-1} |\partial \Omega(\eta)|$ and therefore \begin{align*} \frac{|\partial((1+t)\Omega(\eta))|}{|(1+t)\Omega(\eta)|}&\frac{|\Omega(\eta)+B_t|}{|\partial(\Omega(\eta)+B_t)|}\\ &\geq \frac{n t}{(1+t)}\frac{1 + 1/(nt)}{1+n|B_1|\sum_{j=1}^{n-1}\binom{n-1}{j}2^{n-j-1}t^j\eta^{n-j-1}/ |\partial\Omega(\eta)| }. \end{align*} By construction $|\partial\Omega(\eta)|\sim \eta^{n-1}$ as $\eta \to \infty$. Consequently, by choosing $t=\sqrt{\eta}$ (though the more general choice $t=\eta^{\alpha}$ for $0<\alpha<1$ would also work) we find $t^j \eta^{n-j-1}/|\partial\Omega(\eta)|\sim \eta^{-j/2}$. Therefore, taking $\eta$ (and thus $t$) to infinity, we obtain \begin{equation*} \liminf_{\eta \to \infty}\frac{|\partial((1+\sqrt{\eta})\Omega(\eta))|}{|(1+\sqrt{\eta})\Omega(\eta)|}\frac{|\Omega(\eta)+B_{\sqrt{\eta}}|}{|\partial(\Omega(\eta)+B_{\sqrt{\eta}})|} \geq n, \end{equation*} which, when combined with the matching upper bound~\eqref{eq:upper bound}, completes the proof. \end{proof} \section{Proof of Theorem 3} \begin{proof} We use the estimate $$ \int_{\Omega}{ f dx} \leq \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} \int_{\partial \Omega} { f d\sigma}$$ introduced in Proposition 1 above and combine it, inspired by the argument in \cite{jianfeng}, with estimates for the torsion function. One such estimate comes from $P-$functions; we refer to the classic book of Sperb \cite[Eq. (6.12)]{sperb}, $$ \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} \leq \sqrt{2} \|u\|^{\frac{1}{2}}_{L^{\infty}}.$$ It remains to estimate the largest value of the torsion function. There are two different approaches: we can interpret it as the maximal expected lifetime of Brownian motion inside a domain of given measure, or we can interpret it as the solution of a partial differential equation to which Talenti's theorem \cite{talenti} can be applied. In both cases, we end up with a standard isoperimetric estimate \cite{talenti} (that was also used in \cite{jianfeng}) $$ \|u\|_{L^{\infty}} \leq \frac{1}{2n}\left( \frac{|\Omega|}{\omega_n}\right)^{\frac{2}{n}}$$ to obtain $$ \max_{x \in \partial \Omega}{ \frac{\partial u}{\partial \nu}(x)} \leq \frac{ |\Omega|^{1/n}}{\omega_n^{1/n} \sqrt{n}}.$$ \end{proof} \textbf{Acknowledgment.} This research was initiated at the workshop `Shape Optimization with Surface Interactions' at the American Institute of Mathematics in June 2019. The authors are grateful to the organizers of the workshop as well as the Institute. KB's research was supported in part by Simons Foundation Grant 506732. JL's research was supported in part by a Bucknell University Scholarly Development Grant. SL acknowledges financial support from Swedish Research Council Grant No. 2012-3864. SS's research was supported in part by the NSF (DMS-1763179) and the Alfred P. Sloan foundation.
\section{Introduction} The Stellar Observations Network Group (SONG) is an initiative started in Denmark to create a global network of highly specialized 1m telescopes aimed at doing time-domain astronomy. In particular the goals are to produce exquisite data for asteroseismic studies of stars across most of the HR diagram (with focus on solar-like stars), and to search for, and characterize, the population of low-mass planets in orbit around other stars via the microlensing and radial-velocity methods. A ground-based network with a sufficient number of nodes will ensure observations with a high duty-cycle, needed to obtain stellar oscillation spectra with high frequency precision and without aliasing problems. Furthermore, the light-curve anomalies in microlensing events occur at unpredictable times and thus, to find and characterize these, near-continuous observations are needed. The group behind SONG has obtained funding for the design and construction of a full prototype network node which shall be completed in late 2011. In the following we describe some of the aspects of the ongoing work and the expected performance of the prototype. \section{Observational goals} Below we briefly account for the main goals and requirements that the SONG instruments must fulfil in order to do asteroseismology and measure microlensing light curves. \subsection{Asteroseismology} To study the oscillations of solar-like stars efficiently, the best strategy for ground-based observations is to measure the change in radial velocity of their surface. This requires a high radial velocity precision in the few m/s range. For the best ground-based instruments this level of precision is now routine, and some are at the sub-meter per second precision level \citep{harps, iodine1} over short timescales. For the SONG project the aim is to achieve a velocity precision better than 1m/s for a $V=0$ star per minute of observation. This requires an efficient, high-resolution spectrograph and a CCD camera with a fast readout in order to allow a high observing duty cycle. Our aim is to obtain a network duty cycle near 80\%. This will be achieved by having 8 nodes distributed at existing northern and southern hemisphere sites, and well distributed in longitude. Experience from BiSON \citep{bison1} shows that it is realistic to reach this level with 6--8 stations, consistent with the results of \citet{mosser}. It is clear that targets in the equatorial zone will have the highest degree of coverage since these can be observed from both northern and southern nodes. With both northern and southern sites in the network, full-sky coverage will be possible. \subsection{Gravitational microlensing} For the study of microlensing events towards the bulge of the Milky Way, SONG will employ the lucky-imaging method \citep{lucky1}. By observing the target field with a CCD camera which can read out at high speeds ($\approx30$Hz), and then only selecting the images of best quality for subsequent co-addition, it is possible to obtain images with high spatial resolution. Since the bulge fields are in general quite crowded, this offers a big advantage in the achievable photometric precision and depth. Observations for this purpose only require a small field of view, and the current design foresees a field of 46\arcsec$\times$46\arcsec. The microlensing observations have similar requirements to the asteroseismic observations with respect to the observing duty-cycle.
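In algorithmic terms, the selection and co-addition step can be sketched as follows; this is a deliberately minimal Python illustration (frame ranking by the peak counts of a bright star, integer-pixel recentring), not the actual SONG reduction pipeline, which would use a proper sharpness estimate and sub-pixel registration.
\begin{verbatim}
import numpy as np

def lucky_stack(frames, keep_fraction=0.05):
    """frames: (N, H, W) stack of short exposures of the same field.

    Rank frames by the peak counts of the brightest star (a crude
    sharpness proxy), keep the best fraction, shift each selected
    frame so its peak lands on a common reference pixel, co-add."""
    n_keep = max(1, int(keep_fraction * len(frames)))
    peaks = frames.reshape(len(frames), -1).max(axis=1)
    best = np.argsort(peaks)[-n_keep:]          # indices of the sharpest frames
    ref = np.unravel_index(frames[best[-1]].argmax(), frames[0].shape)
    stack = np.zeros(frames[0].shape, dtype=float)
    for i in best:
        y, x = np.unravel_index(frames[i].argmax(), frames[i].shape)
        stack += np.roll(frames[i], (ref[0] - y, ref[1] - x), axis=(0, 1))
    return stack / n_keep
\end{verbatim}
With a readout of $\approx30$Hz, a keep fraction of a few percent still yields thousands of co-added frames per hour, which is what makes the method attractive in crowded bulge fields.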
\citet{lensing1} discusses the prospects for microlensing studies with SONG in more detail. \subsection{Other possibilities} One of the possibilities we are considering for SONG is to use the spectrograph to observe the oscillations of the Sun during daytime. This is done by pointing the telescope at the blue sky and measuring the velocities in exactly the same way (through the iodine cell) as the stars are observed at night. In this way we will observe the sun-as-a-star and complement the existing facilities (ground and space) for solar observations. See \citet{sun1} for an example of this with the HARPS and UCLES spectrographs. \section{Layout of the instrumentation} The instrumentation at each site consists of a 1m telescope equipped with a high-resolution spectrograph at a Coud{\'e} focus and two lucky-imaging cameras at one of the two Nasmyth foci. This allows simultaneous two-colour imaging. \subsection{Dome and enclosure} The telescope will be housed in a dome of approximately 4.5m diameter, and the Coud{\'e} room will be located in a 20 foot shipping container with a significant level of insulation. Computers and hard drives for observatory control and data reduction will be located in this container as well. The use of a container for housing instrumentation and computers allows cost savings and provides for a rather small footprint, since the only permanent structures needed will be the telescope concrete pier and the footings on which the container is attached to the ground. A separate concrete foundation will carry the support for the spectrograph such that it is mechanically de-coupled from the container. Figure~\ref{fig:pier} shows the general layout. Since any potential SONG node should be at an existing observatory, access to electrical power and internet will already be available. A weather station with a cloud monitor is also part of the instrumentation. The expected date for arrival of the telescope at the first site is mid-2010. \begin{figure}[!ht] \plotone{grundahl_fig1.ps} \caption{ \label{fig:pier} The basic structures of a SONG node, comprising a concrete pier for supporting the telescope, a 20 foot container and a dome support structure. In the final version the structure carrying the dome will be approximately 1m higher with the dome floor at the level of the container roof. Side-ports will be installed in the extra 1m height to improve nighttime ventilation of the dome. The inside of the container will be split into two compartments, one for the instruments nearest to the telescope pier, and one at the other end for control computers and other electronics. } \end{figure} \subsection{Telescope optics and imaging} Lucky-imaging for the microlensing science will be carried out with two cameras located at one of the two Nasmyth stations. We decided not to place these at the Coud{\'e} focus since this would lead to less than optimal image quality (many optical surfaces), lower overall efficiency and higher costs. At the Nasmyth station a mirror slide with several positions will send the light to the two imaging cameras, or allow the light to pass on to the Coud{\'e} train. For lucky-imaging it is important to have correction for atmospheric dispersion (ADC), as well as field de-rotation (the telescope is alt-az mounted). The field de-rotation will be done with an optical de-rotator of the Abbe-K{\"o}nig type for both imaging and spectroscopy. The imaging at Nasmyth will be in two wavelength regions, with a split at 6700{\AA}.
In the ``short'' wavelength channel a beam splitter will be placed, which sends a small fraction of the light to a (cheap) CCD camera for continuous focus monitoring. In this way the telescope will always be at its optimal focus. Both of the two imaging channels will be equipped with filter wheels. The pixel scale for the lucky-imaging cameras is $\sim0\farcs09$ in order to provide adequate sampling (in the red channel) for nearly diffraction-limited imaging. At a seeing of 1\arcsec\, FWHM, the 1m SONG telescope has $D/r_0$ of 5--6 at a wavelength of 8000{\AA}. In this regime the lucky-imaging method works extremely well, which implies that a very significant improvement in image quality can be expected via the use of this method. At the best observing sites, imaging with a FWHM close to 0\farcs3 can be expected. We should note that the CCD cameras will of course also offer the possibility to do photometry of other objects, such as variable stars and gamma-ray bursts. \subsection{The Coud{\'e} train and spectrograph focal plane} The second main instrument for SONG is the spectrograph which is located at a Coud{\'e} focus. By removing the mirror that directs the light to the Nasmyth station the light from M3 will instead go to other mirrors (M4, M5, M6, M7 and M8, see Fig.~\ref{fig:optical_layout}) and ultimately end in the instrument container where the spectrograph is located. Radial velocities are measured using the iodine technique \citep{iodine1}. This implies that only a limited wavelength range, 5000--6200{\AA} is needed (discussed in more detail below). With such a short wavelength interval to be ``transported'' to the Coud{\'e} focus, highly optimized anti-reflective and reflective coatings can be used. This results in the total efficiency of the Coud{\'e} train being nearly 95\%. At the pre-slit assembly several functions will be available; these include calibration light (ThAr and halogen lamps), slit viewing, focus monitoring, tip-tilt correction and telescope pupil monitoring and correction. In addition to this, a temperature stabilized iodine cell is also available for providing an accurate velocity reference. The spectrograph will be located in a temperature-stabilized box for improved stability. \begin{figure}[!ht] \plotone{grundahl_fig2.ps} \caption{ \label{fig:optical_layout} A schematic layout for the SONG prototype node. The relative sizes of the components are not completely correct, but only intended to show the layout. } \end{figure} The light path from M4 to the Coud{\'e} room will be in vacuum tubes -- this will ensure a minimal level of maintenance of the optics as well as a low impact of thermal differences along the light path from the instrument room to the dome. The instrument room is expected to be kept at a temperature of 15--20\deg C. \subsection{Auxiliary instrumentation} The telescope has two Nasmyth ports of which only one will be used initially. In order to allow for future instrument upgrades, the telescope will, however, be designed such that installation of a rotating tertiary mirror will be relatively simple. This allows auxiliary instrumentation to be placed at the second platform. The secondary mirror is designed to allow a field of view of 10\arcmin$\times$10\arcmin. \section{Spectrograph} For the asteroseismic observations, the Coud{\'e} spectrograph is the principal in\-stru\-ment. In order to determine the required high-precision velocities an iodine cell will be used for wavelength reference. 
The spectro\-graph will have a (2-pixel) resolution of 120,000 for a 1\arcsec\, slit and employ an R4 echelle with a collimated F/6 beam of 75mm diameter. The slit is 10\arcsec\, long, and via the instrument rotator it can be rotated in any desired direction, allowing the possibility of doing asteroseismology for close binary stars such as $\alpha$\,Cen\,A and B. The spectrograph is designed to cover a wavelength range of 4800--6800{\AA}. The orders are fully covered at wavelengths shorter than 5200{\AA}; in the future a larger detector will allow full coverage of the spectral orders. The spectrograph has been designed by P. Span{\`o}, with inspiration from the design of UVES, HARPS and other modern high-resolution spectrographs. It is very compact, roughly $50\times90\times15$cm (without mounts), which is expected to be an advantage with respect to temperature control. In order to provide a well behaved instrumental profile, emphasis was put on the design of the spectroscopic camera. This resulted in a design with an instrumental profile which is nearly diffraction limited over the entire area of the detector. The detector system will be a 2K$\times$2K system from Andor of Belfast. This camera has an advanced Peltier cooling system that does not need liquid nitrogen or closed-cycle cooling, and furthermore the vacuum is ``permanent'' -- so the camera is essentially maintenance free. The electronics allow full-frame readout in 5s with a readout noise less than 10\,electrons, thus giving only very small overheads for the spectroscopic time-series observations. The CCD camera is capable of reading out frames at a pixel rate of 5\,Mpix per second. \subsection{Spectrograph efficiency and performance} Due to the small wavelength range covered by the spectrograph it is possible to employ coatings with very high efficiency for both reflective and transmissive optics. Many optical companies can deliver optics with a reflectivity higher than 99.5\% and anti-reflection coatings with transmissions better than 99.5\% over the spectrograph design wavelength interval. Our calculations of the spectrograph efficiency (no slit or detector included) show that a throughput in excess of 50\% can be expected. With a relatively small telescope and a 75mm beam the slit product is a generous 120,000, much higher than for larger telescopes, implying smaller slit losses under typical seeing conditions. \begin{figure}[!ht] \plotone{grundahl_fig3.ps} \caption{ \label{fig:velocity} The expected velocity precision for SONG. The calculation is based on the assumption that the velocity precision scales with signal-to-noise ratio as found by \cite{iodine2}. We note that at the present time the {\it i}SONG iodine reduction software has achieved a level of precision of $\approx0.8$m/s on UVES archival data for $\alpha$\,Cen\,A; thus with the current generation of the software a 1m/s precision can realistically be expected. } \end{figure} We have carried out calculations of the {\it total} efficiency of the spectroscopic system, including all sources of light loss (from atmosphere to detector), and these indicate that in 1\arcsec\, seeing the total system throughput is in excess of 8\%. For the design of the prototype we have included provisions for continuous control of the location of the telescope pupil on the grating, as well as for tip-tilt correction to provide a very stable illumination of the slit.
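Since the attainable velocity precision scales roughly inversely with the signal-to-noise ratio of the spectra, its dependence on stellar magnitude can be sketched in a few lines. The snippet below (Python) assumes pure photon-noise scaling anchored at the 1m/s, $V=0$ design aim; the full calculation behind Fig.~\ref{fig:velocity} includes further effects (exposure overheads, stellar line content), so these numbers are only indicative.
\begin{verbatim}
# Rough photon-noise scaling of velocity precision with magnitude.
# Anchor: ~1 m/s at V = 0 per minute of observation (the design aim);
# pure photon noise then implies sigma grows as 10**(0.2*V).
sigma0 = 1.0                        # m/s at V = 0
for V in range(0, 7):
    print(V, round(sigma0 * 10**(0.2 * V), 1), 'm/s per minute')
\end{verbatim}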
On the assumption that our data reduction code reaches a level of performance equal to that of \citet{iodine2} for $\alpha$\,Cen\,A (their Fig.~5), we arrive at a predicted velocity precision of better than 1m/s per minute of observation for stars brighter than $V\approx1$. Stars with lower metallicity, shallower lines and higher rotational velocities will not allow such a high precision to be obtained. In Fig.~\ref{fig:velocity} the calculated velocity precision vs. stellar magnitude is shown. For stars where a sample time of 1\,minute (5s read $+$ 55s exposure) is sufficient, it will be feasible to carry out asteroseismic campaigns to $V\approx6$. It should be noted that the iodine cell is not located permanently in the light path; therefore ``conventional'' spectroscopy with ThAr calibration exposures prior to, and after, science exposures will be possible. Thus programmes which do not demand the highest velocity precision are possible in the same way as for other telescopes. \subsection{Iodine-based velocities with {\it i}SONG} We have developed an IDL-based data reduction code, called {\it i}SONG, for the extraction of velocities based on iodine-cell observations. At this point, the code has been tested on the \citet{iodine2} UVES observations of {$\alpha$}\,Cen\,A (the raw data were retrieved from the ESO archive) and the SARG observations of $\mu$\,Herculis \citep{muher}. For the analysis of the {$\alpha$}\,Cen\,A data we achieve a velocity precision of 77cm/s per data point, only slightly poorer than the 70cm/s achieved by Butler et al. on the same 688 frames. The current (incomplete) analysis of the data for $\mu$\,Her indicates that we reach a similar precision as that of \citet{muher}. For an example of velocity time-series with this code we refer the reader to \citet{grundahl1}. \section{Summary and status of SONG} As of late 2008 the SONG project has obtained full funding for the development of a prototype node. Our design revolves around a high-performance 1m telescope equipped with a dual-colour lucky-imaging camera system and a highly efficient high-resolution spectrograph which will allow 1m/s precision velocities to be obtained for the brightest stars in the sky. Better than 10m/s precision per minute of observation is expected for stars brighter than $V=6$. SONG is now in its detailed design phase. The optical design is completed, thus allowing final design of the remaining elements. Mechanical design of the spectrograph, Nasmyth focal plane and pre-slit systems is on-going. These systems, with control software, will be tested and integrated in Aarhus during 2010, with expected delivery to the telescope at the first site (Tenerife) in late 2010. The telescope is expected to be delivered to the site in mid-2010. The ongoing prototype work aside, the main challenge for the coming years will be to develop the international consortium that will enable the setup of the full network. Work towards this purpose will be kick-started with a workshop in Aarhus in March 2009 (see {\tt http://astro.phys.au.dk/SONG}). \acknowledgements Funding for the SONG project is provided through substantial grants from the Villum Kann Rasmussen Foundation, The Danish Natural Sciences Research Council and the Carlsberg foundation. FG wishes to acknowledge many fruitful conversations with Michael I. Andersen. Henrik K. Bechtold and Anton Norup S{\o}rensen are thanked for providing Figs 1 and 2. G. Marcy is thanked for advice and encouragement during the development of the {\it i}SONG code. S. Leccia and A.
Bonanno kindly permitted the use of the raw $\mu$\,Her\, data for further tests of the code.
\section{Introduction} Physical processes unfold over time. Our minds grasp physical mechanisms largely via narrative. So it is not surprising that some of the most vivid physics demonstrations also play out over time. Simulations of physics that unfold over time are similarly powerful; interactive simulations are better; and simulations created by the student can be best of all. This view is gaining ground in introductory courses \cite{Bchab15a}, but the benefits of animated simulation extend farther than this. Here we wish to show that the behavior of strongly nonequilibrium statistical systems can be illustrated via stochastic simulations that are simple enough to serve as undergraduate projects. Recently developed, free, open-source programming resources sidestep the laborious coding chores that were once required for such work. In particular, we believe that the error-correction mechanism known as kinetic proofreading can be more clearly understood when a student views its linear temporal sequence, as opposed to solving deterministic rate equations. Coding this and other simple processes opens the door for the student to study other systems, including those too complex for the rate-equation approach to yield insight. \section{Double-well hopping} \subsection{The phenomenon} We tell students that a simple chemical reaction, for example isomerization of a macromolecule, can be regarded as a barrier-passing process. A micrometer-size bead in a double optical trap serves as a mesoscopic model system with this character \cite{Simon:1992cr}, and it is well worthwhile for students to watch it undergo a few dozen sharp transitions in between episodes of Brownian motion near its two stable positions (see supplementary video \ref{sv1}). A simple model for this behavior states that the hopping transitions occur at random times drawn from an exponential distribution. That is, many rapid transitions are interspersed with a few long pauses. \subsection{Simulation: Waiting times drawn from an exponential distribution\label{ss:wtde}} With this physical motivation, students can explore how to generate simulated waiting times of the sort just described. Any computer math system has a pseudorandom number generator that generates floating-point numbers uniformly distributed between 0 and 1. Many students are surprised (and some are intrigued) to learn that applying a nonlinear function to samples from a random variable yields samples with a different distribution, and in particular that $y=-\tau\ln x$ is exponentially distributed, with mean $\tau$, if $x$ is uniform on (0,1] \cite{Bnels15a}. Starting from that insight, it takes just one line of code to generate a list of simulated waiting times for transitions in a symmetric double well; finding the cumulative sums of that list gives the actual transition times (see supplementary computer code \ref{sc1}). The freely accessible VPython programming system (or its Web-based version Glowscript) makes it very easy to create an animation of an object whose spatial position is supplied as a function of time \cite{Evpyt17a}.
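For concreteness, here is a minimal version of that construction (a sketch in NumPy; the variable names are ours, and the VPython animation step is omitted):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

tau = 2.0                      # mean waiting time, arbitrary units
x = 1.0 - rng.random(500)      # uniform on (0,1], so the log is finite
waits = -tau * np.log(x)       # exponentially distributed waiting times
t_hop = np.cumsum(waits)       # the actual transition times
\end{verbatim}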
The only challenging part is to pass from a list of irregularly-spaced transition times to particle positions at each of many (regularly-spaced) video frames (see supplementary computer code \ref{sc1}). The payoff is immediate: Visually, the simulated hopping has a very similar character to the actual Brownian hopping of a bead in a double trap (see supplementary video \ref{sv2}). \subsection{Upgrade to 1D random walks} It may be of interest to make a small modification of the code: Instead of hopping between two wells (reversing direction on every step), consider one-dimensional diffusion on a symmetric many-well potential, for example, one of the form $U(x)=\sin(x)$. In such a potential, for each transition the system must also decide whether to increase or decrease a ``position'' coordinate. The resulting random walk will display the same long-time scaling behavior as any unbiased 1D walk, but with trajectories that undergo hops at random times, not periodic steps as in the simplest realization \cite{Bnels15a}. \section{Birth-death process} We can now generalize from situations with essentially only one kind of transition (or two symmetric kinds), to the more interesting case where several inequivalent choices are possible, and where the relevant probabilities depend on the current state. This general situation can describe a chemical reaction that requires, and depletes, molecules of some substrate. Most science students know that living cells synthesize each of their messenger RNAs ({mRNAs}) from a single copy (or a small fixed number) of the corresponding gene. Even a constitutive (unregulated) gene must wait for the transcription apparatus to arrive, bind, and begin transcription \cite{Balbe09a}. We consider a situation in which that apparatus is in short enough supply that this waiting is the primary determinant for the initiation of transcription. Once a mRNA transcript has formed, it has a limited lifetime until it is degraded by other cellular machinery. We assume that this process, too, relies on chance encounters with degradation enzymes. Moreover, the many species of mRNA must all share the attentions of a limited number of degradation enzymes, so each mRNA copy has a fixed probability per unit time to be removed from the system. The physical hypotheses in the preceding paragraph amount to a model called the ``birth-death process,'' which has many other applications in physics and elsewhere. As in the 1D walk, we characterize the system's state by an integer, in this case the population of the mRNA of interest. Synthesis is a transition that increases this number, with a fixed probability per unit time $k_\mathrm{s}$ (called the ``mean rate'' of synthesis). Degradation is a transition that decreases it, with a probability per unit time that is the current population $n$ times another constant $k_\mathrm{d}$ (the ``rate constant'' for degradation). \subsection{Simulation\label{ss:bdp.s}} D.~Gillespie extended and popularized a simple but powerful method, the ``stochastic simulation algorithm,'' for simulating systems of this sort \cite{Gillespie:1976p1686}.
In the case just described, the algorithm repeatedly executes the following steps (see supplementary computer code \ref{sc2}):\begin{itemize} \item Determine the probability per time $k_{\mathrm{tot}}$ for \emph{any} of the allowed transitions to occur by summing all the mean rates. In a birth-death process, we have $k_\mathrm{tot}=k_\mathrm{s}+nk_\mathrm{d}$. \item Draw a waiting time from the exponential distribution with mean given by the reciprocal of $k_{\mathrm{tot}}$, via the method in \ssref{wtde}. \item Determine which of the allowed processes happens at that transition time. In the birth-death process, we make a Bernoulli trial with probability $p=k_{\mathrm s}/k_\mathrm{tot}$ to increase population $n$ by one, and $1-p$ to decrease it. \item Update $n$ and repeat.\end{itemize} The beauty of this algorithm, besides its correctness \cite{Gillespie:1977p1466}, is that no computation is wasted on time steps at which nothing happened: By definition, there is a state transition at \emph{every} chosen time. \subsection{Convergence to the continuous, deterministic approximation\label{ss:ccda}} Students will probably find it reasonable that, when $n$ is sufficiently large, we may neglect its discrete character. Students who have been exposed to probability ideas may also find it reasonable that in this case, the relative fluctuations of $n$ from one realization to the next will be small, and so $n$ effectively behaves as a continuous, deterministic variable, subject to the differential equation ${\mathrm d} n/{\mathrm d} t=-k_\mathrm{d}n+k_\mathrm{s}$. That equation predicts exponential relaxation from an initial value $n_0$ to the steady value $n_*=k_\mathrm{s}/k_\mathrm{d}$ with e-folding time $1/k_\mathrm{d}$: \begin{equation}n(t)=n_*+(n_0-n_*)\ex{-k_\mathrm{d}t}. \label{e:bdcts}\end{equation} The simulation bears out this expectation (\frefpanel{f1}{a,b}). \begin{figure}\begin{center}\includegraphics{g178BDsim3.pdf}\end{center} \caption{\label{f:f1}\small \arttitle{Behavior of a birth-death process.} \capitem aThe \capelement{bumpy traces} show two examples of simulated time series with $k_\mathrm{s}=12\,\ensuremath{\mathrm{min}}^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$, $k_\mathrm{d}=0.015\,\ensuremath{\mathrm{min}}^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$, and hence $n_*=800$. The initial population was $n_0=3200$. The \capelement{smooth trace} shows the exponential relaxation predicted by the continuous, deterministic approximation (\eref{bdcts}). \capitem bAfter the system comes to steady state, there is a tight distribution of $\NX$ values across 100 runs of the simulation (\capelement{bars}). The \capelement{curve} shows the Poisson distribution\Nindex{Poisson!distribution}{} with mean $n_*$ for comparison. \capitem{c,d}The same but with $k_\mathrm{s}=0.15\,\ensuremath{\mathrm{min}}^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$ and $n_0=40$. Although individual instances (runs of the simulation) deviate strongly from the continuous, deterministic approximation, nevertheless the sample mean of the population $\NX(t)$ over 150 runs does follow that prediction. The distribution of steady-state fluctuations is again Poisson (\capelement{curve} in d). (See also \cite{Bnels15a}.)
\instructor{Made by \<transcrip2rxn.py, viewTransc.py>.} }\end{figure} Actually, mRNA populations in living cells are often \emph{not} large. Nevertheless, although individual realizations of $n(t)$ may differ significantly, the \emph{ensemble average} of many such trajectories does follow the prediction of the continuous/deterministic idealization (\fref{f1}c). Within individual cells, there will be significant deviation around that mean behavior (\fref{f1}c again). In particular, the ``steady'' state will have fluctuations of $n$ that follow a Poisson distribution (\frefpanel{f1}{d}). That key result is more memorable for students when they discover it empirically in a simulation than it would be if they just watched the instructor prove it with abstract mathematics (by solving a master equation \cite{Bnels15a}). State fluctuations of the sort just mentioned may suffice to pop a more complex system out of one ``steady'' state and into a very different one. Indeed, even the simplest living cells do make sudden, random state transitions of this sort. Such unpredictable behavior, not seen in the differential-equation approach, seems to be useful to bacteria, implementing a population-level ``bet hedging'' strategy \cite{Choi:2008p2537,Eldar:2010p9554,Lidstrom:2010gj}. A real bacterium is not simply a beaker of reagents. Bacteria periodically divide, partitioning a randomly chosen subset of each mRNA species into each daughter cell. That extra level of realism is hard to introduce into an analytical model, but straightforward in a simulation. The results are similar to the ones just described, with a larger effective value of the clearance rate constant \cite{Bnels15a}. \subsection{Upgrade to cover bursting processes\label{ss:ucbp}} Bacteria are supposedly simple organisms. The birth-death process is simple, too, and it fits with the cartoons we see in textbooks. So it is interesting to follow the recent discovery that the model makes quantitative predictions for mRNA production that were experimentally disproven \cite{Golding:2005p1478,Golding:2008p1486,Bnels15a}. For example, recent advances in single-molecule imaging permit the direct measurement of $n(t)$ in individual cells, and these measurements disproved the model's prediction that the distribution of $n$ in the ``steady'' state should be Poisson (\frefpanel{f2}{c}). Researchers found, however, that a simple modification of the birth-death model could accommodate these and other discrepant data. The required extension amounts to assuming that mRNA transcripts are generated in \emph{bursts,} that the bursts themselves are initiated with a fixed probability per unit time, and that once initiated, a burst is also terminated with a fixed probability per unit time. Although this ``bursting model'' has two additional parameters compared to the original birth-death model, nevertheless it was overconstrained by the experimental data, so its success was a nontrivial test \cite{Golding:2008p1486,Bnels15a}. Remarkably, detailed biochemical mechanisms for this behavior were found only some years after its indirect inference \cite{Chong:2014p15102,Sevier:2016ew,Sevier:2018ek,klindz}, an important lesson for students to appreciate.
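The bursting extension is equally easy to prototype. The sketch below augments the birth-death loop of \ssref{bdp.s} with a two-state gene switch; the synthesis and clearance rates echo \fref{f1}, while the burst initiation and termination rates are arbitrary placeholders chosen only for illustration:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

k_init, k_term = 0.05, 0.5   # burst start/stop, 1/min (placeholders)
k_s, k_d = 12.0, 0.015       # synthesis in a burst; clearance, 1/min

t, n, bursting = 0.0, 0, False
traj = [(t, n)]
while t < 2000.0:
    # mean rates of every transition allowed in the current state
    rates = {'switch': k_term if bursting else k_init,
             'degrade': n * k_d}
    if bursting:
        rates['make'] = k_s
    k_tot = sum(rates.values())
    t += -np.log(1.0 - rng.random()) / k_tot   # exponential waiting time
    r = rng.random() * k_tot                   # choose which event fired
    for event, k in rates.items():
        if r < k:
            break
        r -= k
    if event == 'switch':
        bursting = not bursting
    elif event == 'make':
        n += 1
    else:
        n -= 1
    traj.append((t, n))
\end{verbatim}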
\begin{figure}\begin{center}\includegraphics{g180goldingF2abcOutline.pdf}\end{center} \caption{\label{f:f2}\small \arttitle{Indirect evidence for transcriptional bursting.} \capitem{a}\capelement{Symbols:} Time course of the number of {mRNA}\FNindex{RNA!messenger (mRNA)} transcripts in a cell, $\Nx(t)$, averaged over 50 or more cells in each of three separate, identical experiments. Data from the three trials are shown with three different symbols. All of the cells were induced\FNindex{induction} to begin gene expression at time zero. The \capelement{dashed curve} shows a fit of the birth-death (BD) process (\ereffarcomma{bdcts}) to data, determining the apparent synthesis rate $\NSrate\approx\BDsynVAL/\ensuremath{\mathrm{min}}$ and clearance rate constant $\NrateconX{\Nzilch}\approx0.014/\ensuremath{\mathrm{min}}$. The \capelement{solid curve} shows the corresponding result from a computer simulation\FNindex{simulation!random process (Gillespie algorithm)!bursting model} of the \FIindex{bursting} model discussed in \ssref{ucbp} (see also \cite{Bnels15a}). \capitem{b}Semilog plot of the fraction of observed cells that have zero copies of {mRNA} versus elapsed time. \capelement{Symbols} show data from the same experiments as in \capitemrefer{a}. \capelement{Dashed line:} The \FIindex{birth-death process} predicts that initially $\Nprob_{{{\mathrm{\Ndrvbare}}}\rm(t)}(0)$ falls with time as $\exp(-\NSrate t)$, where $\NSrate$ has the value found by fitting the data in \capitemrefer a. (Degradation was negligible in this experiment.) The experimental data instead yield initial slope $-0.028/\ensuremath{\mathrm{min}}$. \capelement{Solid line:} Computer simulation of the bursting model. \capitem{c}Experiments were performed at each of many different levels of gene induction. For each level, a \capelement{cross} shows the variance of the late-time mRNA population $n_\infty$ versus its sample mean. This log-log plot of the data shows that they fall roughly on a line of slope $1$, indicating that the \FIindex{Fano factor} $(\variance\Nx)/\exv{\Nx}$ is roughly a constant. The simple \FIindex{birth-death process} predicts that this constant is equal to 1 (\capelement{dashed line}), because mean equals variance for any Poisson distribution, but the data instead give the value $\approx5$. The \capelement{circle} shows the result of the same simulation shown in \capitemrefer{a,b}. \artfrom{Figure adapted from \cite{Bnels15a}; experimental data from \protect\cite{Golding:2005p1478}\pnonly{Actually taken from \protect\cite{Golding:2008p1486}}.}\instructor{Made by \texttt{bursting/mRNAsimMain.m} et al.} }\end{figure} \section{Kinetic proofreading}\subsection{Word model\label{ss:wmkp}} \instructor{``Already in 1957, before the advent of molecular biology, Linus Pauling postulated that this intrinsic (thermodynamic) limitation to the selection of similar amino acids would give rise to very large amino acid substitution errors in intracellular proteins [Pauling L: The probability of errors in the process of synthesis of protein molecules. Festschrift Arthur Stoll.
Birkh\"ause Verlag; 1957:597--602.]'' -- Johansson etal\\ ``The free energy differences due to base pair mismatches between the mRNA codon and the tRNA anticodon are too small to provide the observed high accuracy of tRNA selection (Ogle and Ramakrishnan, 2005; Xia et al., 1998), even if kinetic proofreading (Hopfield, 1974; Ninio, 1975) is taken into account'' -- \cite{Savir:2013ez}}% Ask a student, ``What is the big secret of life?'' and the answer will probably be ``DNA,'' or perhaps ``evolution by natural selection.'' Indeed, DNA's high, but not perfect, degree of stability underlies life's ability to replicate with occasional random modifications. But it is less well appreciated that the \emph{stability} of a {molecule} of DNA does not guarantee the \emph{accuracy} of its {replication} and {transcription.} There is another big secret here, just as essential to life as the well known ones. In fact, a wide range of molecular recognition events must have extremely high accuracy for cells and their organisms to function. Think of our immune cells, which must ignore the vast majority of antigens they encounter (from ``self''), yet reliably attack a tiny subpopulation of foreign antigens differing only slightly from the self. Translation of mRNA into proteins is an emblematic example of such a puzzle. It is true that artificial machines now exist that can read the sequence of mRNA. Then another artificial machine can take the resulting sequence of base triplets, decode it, and synthesize a corresponding polymer of amino acids (a polypeptide), which in some cases will then fold into a functional protein without further help. But the cells in our bodies, and even bacteria, do these jobs reliably \emph{without} those huge and expensive machines, despite the incessant nanoscale thermal motion! Merely intoning that a wonderful molecular machine called the ribosome accomplishes this feat doesn't get us over the fundamental problem: At each step in translation, the triplet codon at the ribosome's active site fits one of the many available transfer RNA (tRNA) species \emph{somewhat} better than it fits the other 19 options. But the binding energy difference, which quantifies ``somewhat better,'' only amounts to two or three hydrogen bonds. This translates into a fraction of time spent bound to the wrong tRNAs that is about $1/100$ times as great as the corresponding quantity for the correct amino acid \cite{Bphil08a}. If the fraction of incorrect amino acids incorporated into a polypeptide chain were that high, then \emph{every} protein copy longer than a few hundred amino acids would be defective! In fact, the error rate of amino acid incorporation is more like $10^{-4}$. The fact that this figure is so much smaller than the one seemingly demanded by thermodynamics remained puzzling for decades. After all, the ribosome is rather complicated, but it is still a nanoscale machine. \emph{Which} of its features could confer this vast improvement in accuracy? J.~Hopfield and J.~Ninio proposed an elegant physical mechanism, based on a known but seemingly pointless feature of the ribosome \cite{Hopfield:1974p15095,Ninio:1975vv,Bphil08a}. To explore it, we begin by paraphrasing a metaphor due to U.~Alon \cite{Balon06a}. Imagine that you run an art museum and wish to find a mechanism that picks out Picasso lovers from among all your museum's visitors. You could open a door from the main hallway into a room with a Picasso painting. 
Visitors would wander in at random, but those who do not love Picasso would not remain as long as those who do. Thus, the concentration of Picasso lovers in the room would arrive at a steady value (with fluctuations, of course) that is enriched for the desired subpopulation. To improve the enrichment factor further, you could hire an employee who occasionally closes the door to the main hallway, stopping the dilution of your enriched group by random visitors. Then open a new exit doorway onto an empty corridor. Some of the trapped visitors will gratefully escape, but die-hard Picasso lovers will still remain, leading to a second level of enrichment. After an appropriate time has elapsed, you can then reward everyone still in the room with, say, tickets to visit the Picasso museum in Paris. The original authors realized that in the ribosome, the initial, reversible binding of a tRNA was followed by a transformation analogous to closing the door in the preceding metaphor. This transformation involved hydrolysis of a GTP (guanosine triphosphate) molecule complexed with the tRNA, and hence it was nearly \emph{irreversible,} due to the highly nonequilibrium concentration of GTP, compared to the hydrolysis products GDP and \ensuremath{\mathrm P_\mathrm i}{} (inorganic phosphate). Such hydrolysis reactions were well known to supply the free energy needed to drive otherwise unfavorable reactions in cells, but here their role is more subtle. Hopfield and Ninio were aware that after the hydrolysis, incorporation of the amino acid was delayed and could still be preempted by unbinding of the tRNA complex. The existence of this pathway had previously seemed wasteful: An energy-rich GTP had been ``spent'' without anything ``useful'' (protein synthesis) being done. On the contrary, however, the authors argued that this second step implemented the mechanism in the art museum metaphor, giving the ribosome an independent second chance to dismiss a wrong tRNA that accidentally stayed bound long enough to progress to this stage. After all, spending some GTPs may be a modest price to pay compared to creating and then having to recycle an entire defective protein. Hopfield coined the name ``kinetic proofreading'' for this mechanism, but we will refer to it as the ``classic Hopfield--Ninio'' (HN) mechanism because the original term is somewhat misleading. In chemical reaction contexts, a ``kinetic'' mechanism generally implies bias toward a product with lower activation barrier, even if it is less stable than another product with higher barrier. This preference is most pronounced at high, far-from-equilibrium catalytic rates \cite{Sartori:2013fv}. In contrast, the classic HN proofreading model involves two sequential thermodynamic (quasiequilibrium) discriminations. Moreover, these discriminations take place prior to reading even the very next codon, in contrast to editorial proofreading, which generally happens after an entire manuscript is written. (Our choice of term also distinguishes the classic scheme from later models that are sometimes also called ``kinetic proofreading.'') The qualitative word-model given earlier in this section may seem promising. But the corresponding kinetic equations make for difficult reading and understanding.
Better intuition could emerge from a presentation that stays closer to the concrete ideas of discrete actors randomly arriving, binding, unbinding, and so on, visibly implementing the ideas behind the ``museum'' metaphor. The following sections will argue that stochastic simulation can realize that goal. In a nutshell, \begin{quote}\textsl{An effectively irreversible step, or at least a step far from equilibrium, gives rise to enhanced accuracy. The free energy of GTP hydrolysis is the price paid for this accuracy.}\end{quote} It would also be valuable to confirm a key result of the analytic approach, which predicts that the enhancement of accuracy depends on GTP, GDP, and \ensuremath{\mathrm P_\mathrm i}{} being held far from chemical equilibrium, so that the hydrolysis step is nearly irreversible (the ``door shuts tightly'' in the museum metaphor). In fact, the model predicts \emph{no enhancement} of accuracy when this chemical driving force is low \cite{Hopfield:1974p15095}. Far from equilibrium, however, the predicted error fraction can be as low as the \emph{square} of the equilibrium value (or even a higher power if multiple rounds of sequential testing are employed). \subsection{A single ribosome in a bath of precursors\label{ss:srbp}} This section's goal is to formulate the word-model of \ssref{wmkp} in the context of mRNA translation, then set up a stochastic simulation (see also \cite{Ezuck17a}). Later sections will show how students can explore the expectations raised at the end of the preceding section. \begin{figure}\hbox to 6in{\includegraphics{ribocartoon.pdf} \qquad\includegraphics{riborx.pdf}} \caption{\label{f:f4}\small\arttitle{Two representations of the classic Hopfield--Ninio mechanism.} \capitem aTraditional cartoon expressing the catalytic cycle of the ribosome (after \cite{Banerjee:2017en}). \capitem bCorresponding kinetic diagram. The large pale arrows indicate the net circulation in each cycle under cellular conditions, where \ensuremath{\mathrm{GTP}}{} is held out of equilibrium with \ensuremath{\mathrm{GDP}}{} and \ensuremath{\mathrm P_\mathrm i}. The symbol R denotes a ribosome complexed with mRNA; R$^*$ is the corresponding complex ``activated'' by GTP hydrolysis. At far right, R indicates the ribosome with one additional amino acid added to the nascent polypeptide chain. The classic Hopfield--Ninio proofreading model assumes that unbinding rates $k\ssr c$ and $\ell\ssr c$ are smaller than their mismatched counterparts $k\ssr w$ and $\ell\ssr w$, but that other constants are all equal for correct and wrong tRNA. }\end{figure} We will assume that a single ribosome is complexed with a single mRNA and has arrived at a particular codon. This complex sits in an infinite bath containing several free, dissolved species at fixed concentrations (\fref{f4}a): \begin{itemize} \item \ensuremath{\mathrm{C}}{} denotes \underline correct tRNA (that is, the species that matches the codon currently being read), loaded with the corresponding amino acid. We will neglect the possibility of a tRNA being incorrectly loaded; accurate loading is the concern of another proofreading mechanism that we are not studying now \cite{Hopfield:1976p14958,Yamane:1977p14957}. \item \ensuremath{\mathrm W}{} is similar to \ensuremath{\mathrm{C}}, but refers to the \underline wrong tRNA for the codon under study.
\item Some reactions form complexes of tRNA with guanosine phosphates: \text{C$\cdot$GTP}, \text{C$\cdot$GDP}, \text{{W}$\cdot${GTP}}, and \text{{W}$\cdot${GDP}}. (For simplicity, we suppress any mention of elongation factors, one of which, ``EF-Tu,'' is also included in these complexes but is only implicit in the classic HN mechanism.) \end{itemize} \fref{f4}b denotes the ribosome-mRNA complex by \ensuremath{\mathrm R}. In state \textsl0, this complex is not bound to any tRNA. (More precisely, no tRNA is bound at the ``A'' site of the ribosome; a previously bound tRNA, together with the nascent polypeptide chain, is bound at another site (\fref{f4}a), which we do not explicitly note.) Surrounding this state, \fref{f4}b shows four other states $\mathsl1$--$\mathsl4$ in which the ribosome is bound to the complexes introduced earlier. The upper part of the figure describes wrong tRNA binding and possible incorporation; the lower part corresponds to the correct tRNA. Horizontal arrows at the top and bottom denote hydrolysis of GTP, which is coupled to a transformation of the ribosome into an activated state, $\ensuremath{\mathrm R}^*$. Although any chemical reaction is fundamentally reversible, under cellular conditions the concentration ratio $[\ensuremath{\mathrm P_\mathrm i}][\text{C$\cdot$GDP}]/[\text{C$\cdot$GTP}]$ is far below the equilibrium value, so that the reactions in \fref{f4}b are predominantly in the direction shown by the pale arrows. This was one of the conditions in Hopfield's original proposal. (\ssref{rtdf} will explore relaxing it.) Again, we are assuming that a \emph{single} ribosome bounces around this state diagram in the presence of fixed concentrations of feedstocks either imposed in vitro by the experimenter or supplied by a cellular milieu. There are two ways to ``exit the museum exhibit by the second door'': After hydrolysis, the ribosome can reject its tRNA-GDP complex with probability per unit time $\ell$. Or, with probability per unit time $k_{\rm add} $ it can {add} its amino acid to the nascent polypeptide, translocate the tRNA to the second binding site, and eject any tRNA already bound there. Either way, the main binding site becomes vacant and, for the purposes of this state diagram, the ribosome returns to state \textsl0. Supplementary computer code \ref{sc3} implements a Gillespie simulation on the five states of the ribosome (\fref{f4}b). \subsection{Visualization of the simulation results\label{ss:vsr}} To keep the project modular, we constructed a simulation code that writes its state trajectory to a file. A second code then reads that file and creates a visual output. The first of these codes operates similarly to \ssref{bdp.s}, but with a four-way choice of what transition to make after each waiting interval. The second code can be almost as simple as the one described in \ssref{wtde}. However, students with more time (perhaps in a capstone project) can make a more informative display with a reasonable additional effort, as follows. The supplementary videos not only show the state that is current at the end of each video frame; they also animate the pending arrivals of new complexes that are about to bind and the departures of old ones that have unbound without incorporation. By this means, the videos give a rough sense of the ``narrative'' in the trajectory being shown. These improvements are not difficult to add once the basic code is working. Alternatively, students can construct the basic version, then be shown these videos.
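To make the loop structure concrete, here is a compact sketch of such a five-state simulation (the state numbering and data layout are our own; supplementary computer code \ref{sc3} is organized differently and also writes each trajectory to a file for the separate viewer). The rates are the ``classic HN'' values of Table I:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

# Classic HN rates (1/s) from Table I; here phi_{-1} = phi_3 = 5.
k = dict(kc_on=40.0, kc_off=0.5, mhc=0.01, msc=0.001, lc_on=0.001,
         lc_off=0.085, kaddC=0.01,
         kw_on=40.0, kw_off=2.5, mhw=0.01, msw=0.001, lw_on=0.001,
         lw_off=0.425, kaddW=0.01)

# States: 0 = empty A site; 1, 2 = wrong tRNA before/after GTP
# hydrolysis; 3, 4 = correct tRNA before/after GTP hydrolysis.
# Each move is (next state, rate name, letter added to the chain).
moves = {0: [(3, 'kc_on', ''), (1, 'kw_on', ''),
             (4, 'lc_on', ''), (2, 'lw_on', '')],
         1: [(0, 'kw_off', ''), (2, 'mhw', '')],
         2: [(1, 'msw', ''), (0, 'lw_off', ''), (0, 'kaddW', 'w')],
         3: [(0, 'kc_off', ''), (4, 'mhc', '')],
         4: [(3, 'msc', ''), (0, 'lc_off', ''), (0, 'kaddC', 'c')]}

state, t, chain = 0, 0.0, []
while len(chain) < 100:                   # grow a 100-residue chain
    opts = moves[state]
    rates = np.array([k[name] for _, name, _ in opts])
    t += -np.log(1.0 - rng.random()) / rates.sum()
    j = rng.choice(len(opts), p=rates / rates.sum())
    state, _, added = opts[j]
    if added:
        chain.append(added)
print(chain.count('w'), 'wrong out of', len(chain))
\end{verbatim}
A typical run yields a handful of wrong incorporations per hundred, in line with the results discussed below.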
The exponential distribution of waiting times implies that there will be episodes with several events happening rapidly, interspersed with long pauses. For this reason, it is useful to view the simulation in two ways: Once with a shorter time step that resolves most individual events but covers only a limited time interval (supplementary video \ref{sv3}), and then with a coarser time step to see the entire synthesis trajectory (supplementary video \ref{sv4}). \begin{figure}\begin{center} \includegraphics{expdist.pdf} \end{center} \caption{\small\arttitle{Modified waiting times.} \textit{Solid line:} Exponential distribution of waiting times. \emph{Dashed line:} Shifted exponential distribution obtained by adding the constant $t_\mathrm{min}$ to each sample. \label{f:alterwait}}\end{figure} We also found it useful (solely for visualization purposes) to alter the distribution of waiting times in a simple way that relieves visual congestion without, we think, too much damage to the realism of the simulation. Our modification, shown in the supplementary videos, was to add a small fixed delay, for example one half of one video frame, to every transition waiting time (\fref{alterwait}). \subsection{Classic Hopfield--Ninio model\label{ss:ckp}} Following Hopfield, we initially assume that the rate constant for incorporation, $k_{\rm add} $, is the same regardless of whether the tRNA is correct or incorrect. We also suppose that the binding rates $k'\ssr c=k'\ssr w$ and $\ell'\ssr c=\ell'\ssr w$ also have this property; for example, all of them may be diffusion-limited \cite{Bphil08a}. Only the \emph{unbinding} rates differ in the classic HN model: $$k\ssr w=\phi_{-1}k\ssr c\text{ and }\ell\ssr w=\phi_3\ell\ssr c.$$ Here $\phi_{-1}=\phi_3\approx100$ is the preference factor for unbinding the wrong tRNA (relative to the correct one). Again following Hopfield, we will also take the hydrolysis rate constants to be equal: $m'\ssr w=m'\ssr c$ (and $m\ssr w=m\ssr c$). 
\begin{table}\small\begin{tabular}{l|l|l||l|l|l}\textsl{Description}&\textsl{Symbol}&\textsl{Name in code}&\textsl{Classic HN}&\textsl{Realistic model}&\textsl{Equilibrium}\\\hline binding GTP complex, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}\vphantom{1^{\strut}}$&$k'\ssr c$&\texttt{kc\_{}on}&$40$&40& 40\\ unbinding GTP complex, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$&$k\ssr c$&\texttt{kc\_{}off}&$0.50$&0.5& 0.5\\ binding GDP complex, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$&$\ell'\ssr c$&\texttt{lc\_{}on}&0.001\pnonly{$^\ddag$}&0.001& 0.26\\ unbinding GDP complex, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$&$\ell\ssr c$&\texttt{lc\_{}off}&$0.085$&0.085& 0.085\\ hydrolysis and P$_\mathrm i$ release, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$&$m'\ssr c$&\texttt{mhc}&$0.01$&25& 0.01\\ condensation/P$_\mathrm i$ binding, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$&$m\ssr c$&\texttt{msc}&0.001\pnonly{$^\ddag$}&0.001& 0.26\\ \hline binding GTP $\vphantom{1^{\strut}}$&$k'\ssr w=\phi_{1}k'\ssr c$&\texttt{kw\_{}on}&$\phi_{1}=1$&$0.68$& 1\\ unbinding GTP &$k\ssr w=\phi_{-1}k\ssr c$&\texttt{kw\_{}off}&$\phi_{-1}=5$&94& 5\\ binding GDP &$\ell'\ssr w=\phi_{-3}\ell'\ssr c$&\texttt{lw\_{}on}&$\phi_{-3}=1$&0.0027\pnonly{$^\star$}& 1\\ unbinding GDP &$\ell\ssr w=\phi_{3}\ell\ssr c$&\texttt{lw\_{}off}&$\phi_{3}=5\pnonly{^\dagger}$&7.9& 5\\ hydrolysis&$m'\ssr w=\phi_2m'\ssr c$&\texttt{mhw}&$\phi_2=1$&0.048& 1\\ condensation&$m\ssr w=\phi_{-2}m\ssr c$&\texttt{msw}&$\phi_{-2}=1$&1& 1\\ \hline incorporation$\pnonly{^\#}\vphantom{1^{\strut}}$, $\mathrm s^{\raise.15ex\hbox{${\scriptscriptstyle -}$}\kern-.05em 1}$&$k_{\rm add,c}$&\texttt{kaddC}&$0.01$&4.14& 0.01\\ incorporation&$k_{\rm add,w}=\phi_\mathrm{add}k_{\rm add,c}$&\texttt{kaddW}&$\phi_\mathrm{add}=1$&$0.017$& 1\\ \hline\end{tabular} \caption{\small Illustrative values for the rates shown in \fref{f4}b. The third column gives variable names used in supplementary computer code \ref{sc3}. See the Appendix for discussion of the numerical values. The fifth column follows \cite{Banerjee:2017en,ISI:000398884800036}, who refer to our $\phi_i$ as $f_i$. The last column uses rates from the HN model, but with the condensation and GDP-complex binding rates raised so that the GTP hydrolysis step is at equilibrium. \hfil\break \pnonly{Footnotes: $\star$:~Banerjee et al.\ chose $\phi_{-2}=1$, then got $\phi_{-3}$ by consistency: $\phi_1\phi_2\phi_3=\phi_{-1}\phi_{-2}\phi_{-3}$. $\dagger$:~Hopfield chose $\phi_3=\phi_{-1}$; here it is required by the consistency condition. $\ddag$:~$m\ssr c$ and $\ell'\ssr c$ were chosen to be 0.001 to ensure irreversibility of the GTP hydrolysis step \cite{ISI:000398884800036}. $\#$:~For $k_{\rm add,c}$ we depart from \cite{Banerjee:2017en} and use the rate they quote for the irreversible next-to-last step.}} \end{table} To visualize wrong incorporations within a reasonable time frame, we raised the probability of incorrect choices: The preference ratios $\phi_{-1}$ and $\phi_3$ were lowered from their realistic value of 100 to just 5. Other values in column 4 of Table I were loosely inspired by rate constants estimated from experimental data in a simplified form (see Appendix \ref{a:1}). (\ssref{mrm} will follow the experimental values more closely.)
These effective rate constants are either a constant probability per unit time (unbinding and hydrolysis) or else a probability per unit time with the substrate concentration already lumped in (binding and condensation). The values we chose were appropriate for the concentrations of reactants present in the experiment. Supplementary videos \ref{sv3}--\ref{sv4} show the resulting behavior. Perhaps the most important impression we get from viewing these animations is that \emph{the cell is a busy place.} The riot of activity, the constant binding events that end with no ``progress'' (and often not even GTP hydrolysis), are hallmarks of chemical dynamics that are hard to appreciate in textbook discussions, yet vividly apparent in the simulation. This is especially apparent in supplementary video \ref{sv4}, which shows a typical run of 25 amino acid incorporations. Because there are many unproductive binding and unbinding events in the simulation, not every event is shown in detail in video \ref{sv4}. However, focusing on the GDP-tRNA rejections shows that more correct tRNAs than incorrect tRNAs make it past GTP hydrolysis, and that the few incorrect tRNAs that do make it past are quickly rejected in the second proofreading step. In the instance shown, only one incorrect amino acid was incorporated out of 25 incorporations, much lower than the 1/5 error rate expected from single-step equilibrium binding. Supplementary video \ref{sv3} provides a more detailed look at this process. The videos also show clearly the jerky, nonuniform progress of synthesis, with some amino acid incorporations happening after much longer delays than others. That feature is by now well documented by single-molecule experiments. A typical run created a chain of 100 amino acids, of which 6 were wrong. This error rate of $\approx(6/100)=0.06$ is far smaller than the naive expectation of $1/\phi_3=0.20$. This is the essence of the classic HN mechanism; we see it taking shape in the animation, as many wrong tRNA complexes bind but are rejected, either prior to or after GTP hydrolysis. We also see many \emph{correct} complexes bind and get rejected, before or after GTP hydrolysis. This is the price paid for accuracy in the classic HN proofreading model. The error rate in the simulation is slightly larger than $1/(\phi_3)^2 = 1/5^2 = 0.04$, however. The discrepancy is expected, because the limiting value $1/(\phi_3)^2$ is only achieved in the limit as the incorporation and hydrolysis catalytic rates are sent to zero \cite{ISI:000398884800036,Hopfield:1974p15095}.
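To summarize the preceding paragraph in a formula, the two sequential quasiequilibrium discriminations multiply, so in the slow-catalysis limit the error floor is $$f_{\rm min}=\frac{1}{\phi_{-1}}\times\frac{1}{\phi_{3}}=\frac{1}{25}=0.04 \qquad(\phi_{-1}=\phi_3=5),$$ compared with $1/\phi_3=0.20$ for a single equilibrium discrimination.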
\subsection{More realistic model\label{ss:mrm}} Much has been learned about ribosome dynamics after Hopfield's and Ninio's original insights \cite{Arodn11a,Bbaha17a}. We now know that each step in our model consists of substeps. For example, GTP hydrolysis is subdivided into GTPase activation followed by actual hydrolysis; the latter step probably depends on a rearrangement of ``monitoring bases'' in the ribosomal RNA, and so on \cite{Satpati:2014kz}. The model studied in \ssref{ckp} was designed to show the HN mechanism in its ``classic,'' or pure, form, and how it can enhance fidelity even without help from the effects just described. For example, we assumed that the only dependence on right versus wrong tRNA was via unbinding rates. Indeed, such dependence was later seen at the single-molecule level \cite{Blanchard:2004p7821}. But it now appears that some of the forward rates also depend on the identity of the tRNA \cite{Zaher:2010ga,ISI:000403920000010,ISI:000393403600005}, an effect sometimes called ``internal discrimination.'' In the limit that the ribosome uses only internal discrimination (activation barrier heights of correct and incorrect tRNA binding differ and the equilibrium constants are the same), minimum error is obtained at fast catalytic rates \cite{Bennett:1979tb}. This is in contrast to the HN scheme, which achieves minimum error as catalytic rate tends to zero. In our stochastic simulation model, it is straightforward to add internal discrimination effects by altering rate constants (Table I column 5). See Appendix \ref{a:2} for discussion of the values. Supplementary video \ref{sv5} shows that the ribosome with experimentally measured rates can be faster and more efficient than a ribosome with only classic HN proofreading. We see both a bias for correct tRNA binding/hydrolysis and a bias for rejection of wrong tRNAs before GTP hydrolysis. Of the 26 correct tRNA binding events in this run, 25 resulted in successful incorporation. This is compared to the fraction $24/10245=0.002$ of productive correct tRNA binding events in a typical run of the classic HN ribosome simulation. In addition, of the 30 incorrect tRNA binding events on the realistic ribosome, all 30 resulted in rejection. The ribosome with realistic rates is also more accurate than the classic HN ribosome: of 10\bn000 amino acids simulated, 18 wrong amino acids were incorporated (an error fraction of 0.0018), compared to 6/100 for the classic HN ribosome.
This simulated error fraction of 0.0018 is consistent with the analytic prediction of 0.0017 from first-passage times \cite{ISI:000398884800036}. However, the simulated ribosome with in-vitro measured rates is still not as fast or accurate as the real \emph{E.~coli} ribosome in vivo, which translates at 15--20 amino acids per second with an error rate of 1/10\bn000 \cite{Bmilo15a}. Thus, the real ribosome likely evolved to combine HN proofreading (quasiequilibrium, energetic proofreading) with internal discrimination (unequal forward rates) to optimize speed, efficiency, and accuracy \cite{Rao:2015vy}. Despite this, simulating the classic HN model of proofreading is still a valuable exercise for students. By visualizing discrimination via only a difference in unbinding rates, students see the minimal components necessary to attain high accuracy in a broad class of biological reactions. Also, the classic HN mechanism illustrates an essential part of biological proofreading, which fundamentally relies on non-equilibrium physics. There is also recent evidence pointing to \emph{two} kinetic proofreading steps, that is, two sequential, nearly irreversible steps each of which can be followed by unbinding of tRNA \cite{ISI:000388835700066,Chen:2016bx}. Our simulation could be extended to include such effects, whereas analytic methods would quickly become intractable. Finally, additional interesting steps arise during ``translocation,'' in which the previous tRNA, from which the nascent peptide chain has been released, and the current tRNA, now carrying that chain with an additional amino acid, are both shifted one step inside the ribosome, freeing the binding site so that the entire cycle can begin again (\fref{f4}a). Because this step is not related to accuracy, we have simplified by omitting it from our model. \subsection{Role of thermodynamic driving force\label{ss:rtdf}} For comparison, we return to the classic Hopfield--Ninio model, this time operating at nearly equilibrium concentrations of GTP, GDP, and $\ensuremath{\mathrm P_\mathrm i}$ to demonstrate the importance of the ``one-way door'' (the GTP hydrolysis step). Table I column 6 summarizes the rates for this undriven model (see Appendix \ref{a:3}). With these rates, the reaction still creates a chain, because we assumed a fixed probability per unit time to irreversibly add an amino acid whenever the ribosome visits its activated state. But this time a typical run gave 17 errors in a chain of length 100, illustrating the significance of the thermodynamic driving force in reducing the error rate. This error rate of about 0.17 is consistent with the Michaelis--Menten predicted error of $1/\phi_3=0.20$, which is the lowest the error can be for a classic HN model in equilibrium conditions \cite{Hopfield:1974p15095}. Analysis of the events in the simulation showed that of the 17 wrong amino acids incorporated, 10 were through direct binding of GDP$\cdot$tRNA.
Thus, for many amino acids, the first discrimination step was bypassed, resulting in the high error rate observed in the simulation. To gain more insight into the role of the irreversible GTP hydrolysis step, some students may wish to rerun the simulation with different incorporation and hydrolysis rates. For example, running with $m'\ssr c = 25\,\ensuremath{\mathrm s}^{-1}$ and $k_{\rm add} = 4.14\,\ensuremath{\mathrm s}^{-1}$ produces many tRNAs flipping between GDP and GTP states, another way in which the two discrimination steps become coupled into one. \section{Conclusion} The models described here show fairly elementary physical principles that lie at the heart of cell biology. Specifically, gene expression and kinetic proofreading are two important, fundamental topics that are well within reach of undergraduates. A module that introduces stochastic simulation need not dominate a semester course: One class week is enough for the first exposure. Indeed, the entire simulation plus visualization in supplementary computer code \ref{sc1} consists of just \emph{seven short lines} of code, and yet it creates a valuable educational experience not available in a static textbook. Moreover, the opening material is not specifically biological in character; it can serve as a stepping stone to more complex simulations relevant for a variety of courses. \section*{Acknowledgments} We are grateful to Ned Wingreen and Anatoly Kolomeisky for correspondence and to Bruce Sherwood for help with software. This work was partially supported by the United States National Science Foundation under Grants PHY-1601894 and MCB-1715823. Some of the work was done at the Aspen Center for Physics, which is supported by NSF grant PHY-1607611, and at the Physical Biology of the Cell summer school (Marine Biological Laboratory).
\section{Introduction} \label{sec:Intro} The convection zone (CZ) of the Sun, despite being highly turbulent, shows a well--organized large--scale axisymmetric rotation profile depending on both depth and latitude. The entire CZ rotates faster at the equator than at the poles, and the rotation rate decreases mildly with depth except near the radial boundaries, where there are regions of strong shear \citep{MT96,JS98}. Additionally, a large--scale circulation in the meridional plane, known as the meridional flow (MC), is present. The amplitude of the MC is about 15--20~m\,s$^{-1}$, which is two orders of magnitude smaller than the rotational velocity \citep{TD:79,DH96}. The near--surface shear layer (NSSL) occupies about 17\% of the CZ, or roughly $35\,{\rm Mm}$ in depth, from the photosphere. Recently, two further properties of the NSSL have been reported. First, the value of the logarithmic radial gradient of the rotation rate is reported to be \begin{equation} \frac{{\rm d} {}\ln\Omega}{{\rm d} {}\ln r} \approx -1 \end{equation} in the upper 13 Mm of the NSSL, independent of latitude up to $60^\circ$ \citep{BSG14}. Second, the gradient evolves in time, by an amount between 5 and 10\% of its time--averaged value, closely following the magnetic activity cycle \citep{BSG16}. On the other hand, the MC maintains its poleward motions throughout the cycle \citep{HU14}. Shear flows play an important role in generating and maintaining the solar magnetic field and its activity cycle \citep[e.g.][]{1980opp..bookR....K}. In particular, radial shear is important in the $\alpha\Omega$ dynamo model for explaining the equatorward migration of the magnetic activity \citep{EP95,HY75}. In this model, negative radial shear in combination with positive $\alpha$ is required to produce the correct equatorward migration of the activity. Such negative shear exists only in the NSSL in the solar CZ. The effect of the NSSL has been tested numerically in mean--field dynamo models by \cite{KKT06}, where it was found to aid equatorward migration. More observational and theoretical arguments for the NSSL strongly shaping the solar dynamo process were presented by \cite{AB05}. The role of the NSSL can easily be investigated in mean--field models, where it can be added or removed by hand. In contrast, global 3D convection simulations typically fail to generate a realistic NSSL self--consistently \citep[e.g.][]{2013ApJ...779..176G,HRY15}, and thus its role in the resulting dynamo solutions is unclear. Therefore, understanding the role that the NSSL plays for the dynamo requires that we first understand its formation mechanism and why global simulations do not capture it. The equations governing the generation of large--scale flows in the solar CZ are the following: First, the azimuthally averaged angular momentum equation describes the time evolution of the differential rotation. This equation is obtained using the Reynolds decomposition, where each physical quantity, $A$, is decomposed into its mean $\mean A$ and fluctuations around the mean, $a$, and where averages are taken over the azimuthal direction.
Then, we obtain the equation \begin{eqnarray} \frac{\partial }{\partial t}(\mean \rho \varpi ^2 \Omega)=&-& \bm\nabla\bm\cdot \{ \varpi[\varpi \overline{\rho {\bm U}^m}\ \Omega + \mean \rho Q_{\phi i}-2\nu \mean \rho \overline{\bm S}\bm\cdot \hat{\bm\phi} \nonumber\\ &-& (\overline{B}_\phi\overline{\bm B} / \mu_0 +M_{\phi i} )] \}, \label{eq:AM} \end{eqnarray} where $\mean\rho$, $\overline{\bm U}^m=(\overline{U}_r,\overline{U}_\theta,0)$, $\Omega=\overline{U}_\phi/(r \sin\theta)$, $\nu$, $\mu_0$, and $\overline{\bm B}$ are the density, meridional flow, angular velocity, molecular viscosity, vacuum permeability, and magnetic field, respectively. Furthermore, $\varpi=r\sin\theta$, where $\theta$ is the colatitude, $Q_{\phi i}$ and $M_{\phi i}$ are the Reynolds and Maxwell stresses, and $\overline{\bm S}$ is the mean rate of strain tensor. The Reynolds and Maxwell stresses are the correlations of fluctuating components, $Q_{\phi j}=\overline{u_{\phi}u_j}$ and $M_{\phi j}=\overline{b_{\phi}b_j}/\mu_0$, respectively. Density fluctuations are omitted, corresponding to an anelastic approximation. Second, the azimuthally averaged equation for the azimuthal component of vorticity describes the time evolution of the MC: \begin{eqnarray} \frac{\partial \mean w_{\phi}}{\partial t}&=&\varpi \frac{\partial \Omega^2}{\partial z}\!+\! (\bm\nabla\mean s \times \bm\nabla \mean T)_{\phi} \!-\!\left[ \bm\nabla \!\times\! \frac{1}{\mean \rho}[\bm\nabla\!\bm\cdot\!(\mean \rho {\bm Q}\!-\!2\nu\mean \rho \overline{\bm S})] \right]_{\phi} \nonumber \\ &+&\left[\bm\nabla\times\bm\nabla\bm\cdot(\overline{{\bm B}{\bm B}}^T+{\bm M})\right]_{\phi}, \label{eq:MC} \end{eqnarray} where $\overline{\bm w}=\bm\nabla \times \overline{\bm{U}}$ is the vorticity, $s$ is the specific entropy, $T$ is the temperature, and $\partial/\partial z$ is the derivative along the rotation axis. The first and second terms describe the centrifugal and baroclinic effects, respectively. From these two equations it becomes clear that the meridional flow can drive differential rotation, and vice versa. Additionally, any misalignment of the density and temperature gradients can drive meridional circulation through the baroclinic term, while turbulent stresses are important in driving both flows. Theoretical studies have shown that the major players in generating stellar differential rotation are the first two terms in both \Eqs{eq:AM}{eq:MC} \citep{GR89,LK13}. Additionally, the Coriolis number $\Omega_{\star}$, describing the degree of rotational influence on the flow and defined as \begin{equation} \Omega_{\star}=2\tau\mean{\Omega}, \label{eq:con} \end{equation} where $\mean{\Omega}$ is the rotation rate of the star and $\tau$ is the turnover time of the turbulence, has been found to be a key parameter. It describes the role rotation plays in different parts of the CZ, which in particular leads to a completely different rotation profile within the NSSL in comparison to the rest of the CZ. In the solar structure model of \cite{Stix:2000}, $\Omega_{\star}$ changes from the surface to the bottom of the CZ as $10^{-3}\lesssim\Omega_{\star}^{\rm NSSL}\lesssim 1 \lesssim\Omega_{\star}^{\rm CZ}\lesssim 10$. Non--rotating density--stratified convection is dominated by vertical motions, in which case the vertical anisotropy parameter $A_{\rm V} \propto u_{\rm H}^2 - u_r^2 < 0$, where $u_{\rm H}$ and $u_r$ are the turbulent horizontal and radial velocities.
Rotation tends to suppress convection \citep[e.g.][]{Ch61}, and typically $A_{\rm V}$ decreases when $\Omega_\star$ increases, such that the maximum of $A_{\rm V}$ is achieved for $\Omega_\star = 0$ \citep[e.g.][]{Chan2001,KKT04}. On the other hand, rotation introduces a horizontal anisotropy $A_{\rm H} \propto u_\phi^2 - u_\theta^2$, where $u_\phi$ and $u_\theta$ are the longitudinal and latitudinal velocities. Typically $A_{\rm H}$ is positive and increases with $\Omega_\star$. Furthermore, $A_{\rm H} \rightarrow 0$ as $\Omega_\star \rightarrow 0$. Thus, $\Omega_{\star}$ in the solar CZ also reflects the anisotropy of the turbulence, which arises due to the Coriolis force and density stratification. Consequently, the rotation and gravity vectors define the two misaligned preferred directions necessary for non--zero off--diagonal Reynolds stresses \citep{GR89}. A theoretical model that reproduces the entire rotation profile of the Sun including the NSSL was presented in \citet[][hereafter KR05]{LKGR05}. They utilized a hydrodynamic mean--field (MF) model, considering the properties of the turbulent flow explained above, and parameterized the Reynolds stresses in the form of turbulent transport coefficients \citep[][see also \Sec{sec:theory}]{GR80,GR89}. They obtain the NSSL by taking the anisotropy of turbulence near the surface into account, such that $|A_{\rm V}|\gg |A_{\rm H}|$ for $\Omega_{\star}\lesssim 1$. This leads to a strong inward transport of angular momentum in the NSSL and ultimately to the generation of the radial shear. The remarkable agreement of the recently observed latitudinal independence of the gradient with their model motivated a further development of the theory that includes the effect of the magnetic field in the NSSL \citep{LK16}. This leads to a prediction of the time variation of the angular velocity gradient during the solar cycle that qualitatively agrees with the observations. As the variations are caused by the magnetic field, \cite{LK16} suggested that measurements of the rotational properties of the NSSL can be used as an indirect probe of the sub--surface magnetic field. In their model, however, the Reynolds stresses were computed using the second--order correlation approximation (SOCA), the validity of which in astrophysical regimes with high Reynolds numbers is questionable. To avoid such simplifications, it is desirable to build numerical simulations of stellar convection that directly solve the relevant hydro-- or magnetohydrodynamic equations in spherical geometry. Such models have been developed and utilized since the 1970s \citep[e.g.][]{Gi77,Gi83,Gl85}, but reproducing the NSSL has turned out to be a serious challenge for these models. With such global convection simulations it is possible to generate a shear layer close to the equator, mostly confined outside the tangent cylinder, where rotation--aligned, elongated large--scale convection cells form \citep[see e.g.][]{RC01,KMB11,GSdGDPKM16,WKKB16,MT19}. Only when a higher density stratification was used has a shear layer extending to $60^\circ$ latitude been found \citep{HRY15}. In this case, however, the radial gradient of the rotation rate was positive in the range $0^\circ<\theta<45^\circ$, contrary to the helioseismic inferences of the NSSL. They concluded that the meridional Reynolds stress, originating from the radial gradient of the poleward meridional flow, is the most important driver of the NSSL.
In their model, the luminosity was decreased to obtain an accelerated equator, hence the influence of rotation on convection (the Coriolis number) was overestimated; they also speculated about an unfavourable influence of the boundary conditions on their results. Hence, it is unclear whether these results really are applicable to the NSSL. Overall, using global direct numerical simulations (GDNS) to study the origin of the NSSL is cumbersome due to the high computational cost, the multitude of effects present, and the difficulty of reliably separating them from each other. A simpler modelling strategy is therefore required, which is what we attempt in this paper. In addition to MF and GDNS models, the NSSL has also been studied from the point of view of different types of equilibria. The most recent of these studies, \cite{GB19}, considers the formation of the NSSL in a magnetohydrostatic equilibrium model, where it is driven by a poleward meridional flow near the surface. In addition to the assumption of stationarity, which is at odds with the oscillatory magnetic field of the Sun, the model considers only non--turbulent states. Nevertheless, a large--scale poloidal flow, when inserted on top of the equilibrium configuration, is seen to reduce the rotational velocity near the surface, hence leading to an NSSL--like condition there. In the study of \cite{MH11}, the Reynolds and Maxwell stresses were accounted for in the governing equations, hence allowing for turbulent effects. They considered a case where an equilibrium exists for the angular momentum transport, Eq.~(\ref{eq:AM}), in which the meridional circulation and the relevant stresses must balance. Any imbalance in the term encompassing the stresses was then postulated not only to drive differential rotation, but more importantly to induce a meridional flow. Similarly, the azimuthal vorticity equation, Eq.~(\ref{eq:MC}), in a steady state, was postulated not only to drive meridional flow, but more importantly to contribute to maintaining the differential rotation profile. In the earliest scenario explaining the NSSL, \cite{FJ75} proposed that it exists because rising and falling convective fluid parcels locally conserve their angular momentum, which would lead to inward angular momentum transport. In the scenario of \cite{MH11}, however, such angular momentum transport is not a sufficient condition to sustain the NSSL; another necessary ingredient is the meridional force balance between the turbulent stresses and the centrifugally driven circulation within the NSSL. In the bulk of the convection zone, the meridional force balance would instead be provided by the baroclinic effect, and the bottom of the NSSL would be determined by the transition point from baroclinic to Reynolds stress balance. Some agreement with this scenario was found in the study by \cite{HRY15}, whose models showed that in the region of the NSSL, the force caused by the turbulent stresses was balanced by the Coriolis force. In this paper we adopt an entirely different approach from those reviewed above. We formulate a model with minimal ingredients for the generation of large--scale flows to study the role of rotation--induced Reynolds stresses specifically in the rotational regime relevant for the NSSL. This involves replacing convection with anisotropically forced turbulence and omitting density stratification, magnetic fields, and spherical geometry.
The simplicity of the model allows unambiguous identification of the drivers of mean flows, which can be used to assess the generation mechanisms of the solar NSSL. \section{The NSSL in terms of mean--field hydrodynamics} \label{sec:theory} In this section we briefly explain the theory of the $\Lambda$--effect and its relevance for the formation of the NSSL \citep{LK13,LK16}. We refer the reader to \cite{GR89} for a thorough treatment. In this theory, rotating and anisotropic turbulence contributes to diffusive and non--diffusive transport of angular momentum. The non--diffusive part is known as the $\Lambda$--effect \citep{Leb41}. Accordingly, the Reynolds stress consists of two parts, \begin{eqnarray} Q_{ij}&=&Q_{ij}^{(\nu)}+Q_{ij}^{(\Lambda)},\label{eq:R1}\\ Q_{ij}&=&N_{ijkl}\overline{U}_{k,l}+\Lambda_{ijk}\Omega_k,\label{eq:R2} \end{eqnarray} where $N_{ijkl}$ and $\Lambda_{ijk}$ are fourth-- and third--rank tensors describing the turbulent viscosity and the $\Lambda$--effect, respectively. In spherical geometry, $Q_{r\phi}$, $Q_{\theta \phi}$, and $Q_{r\theta}$ are the vertical, horizontal, and meridional stresses, respectively. We note here that the meridional stress appears only in the vorticity equation, and in the model of KR05 it does not play a role in the generation of the NSSL. Ignoring magnetic fields, the vertical and horizontal stresses are given by \begin{eqnarray} Q_{r\phi}&=&\nu_{\parallel}\sin \theta \left( V\Omega - r\frac{\partial \Omega}{\partial r}\right) + \nu_{\perp}\Omega^2\sin^2\theta\cos \theta \frac{\partial \Omega}{\partial \theta}, \label{eq:qrp}\\ Q_{\theta\phi}&=&\nu_{\parallel}\!\left(\!\cos \theta H \Omega\!-\!\sin \theta \frac{\partial \Omega}{\partial \theta} \right)\!+\!\nu_{\perp}\Omega^2\sin^2\theta\cos \theta\, r\frac{\partial \Omega}{\partial r},\label{eq:qtp} \end{eqnarray} where $\nu_{\parallel}$ and $\nu_{\perp}$ are the diagonal and off--diagonal components of the turbulent viscosity tensor $N_{ijkl}$, respectively. The latter component, $\nu_{\perp}$, appears due to the effect of rotation on the turbulent motions \citep{RKKS19}. $V$ and $H$ are the vertical and horizontal $\Lambda$--effect coefficients, which are, to lowest order, proportional to $A_{\rm V}$ and $A_{\rm H}$ \citep{GR80}. These coefficients are typically expanded in latitude in powers of $\sin^2 \theta$ as \begin{eqnarray} V=\sum_{i=0}^{j}V^{(i)}\sin^{2i}\theta,\label{eq:v}\\ H=\sum_{i=1}^{j}H^{(i)}\sin^{2i}\theta.\label{eq:h} \end{eqnarray} In the NSSL, $\Omega_{\star}\leqslant 1$ and $A_{\rm H} \approx 0$, such that the $\Lambda$--effect contribution $Q_{\theta\phi}^{(\Lambda)}$ vanishes. The off--diagonal viscosity $\nu_{\perp}$ is non--zero but small, such that its influence is negligible \citep{RKKS19}. It has been shown analytically \citep{LKGR05} and numerically \citep{Kap19} that in the slow rotation regime only the first term $V^{(0)}$ in the expansion of the vertical coefficient survives and tends to a constant. Furthermore, applying a stress--free boundary condition at the radial boundaries, one realizes that $Q_{r\phi}=Q_{r\theta}=0$. Using this in Eq.~(\ref{eq:qrp}) and equating the diffusive and non--diffusive stresses, we get \begin{equation} \frac{\partial\ln \Omega}{\partial \ln r}=V^{(0)}<0, \end{equation} which is in reasonable agreement with the observations, in which the radial rotational gradient is independent of latitude \citep{BSG14}. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.8\columnwidth]{sk.pdf} \end{center}\caption[]{Schematic representation of the geometry of the current models and their relation to the NSSL. The depth of the layer is exaggerated. The simulation boxes are located at nine depths (not all shown) and seven latitudes. $\Omega_{\star}$ increases gradually from the surface to the bottom of the NSSL. }\label{fig:sk}\end{figure} \section{The model} \label{sec:model} We use a hydrodynamic model in a Cartesian domain similar to those of \cite{KB08} and \cite{Kap19}. We explain it here briefly and refer the reader to the relevant parts of the above--mentioned works for details. In this model gravity is neglected, and an external random forcing by non--helical transversal waves with direction--dependent amplitude is applied. The ensuing flow is turbulent and anisotropic. The medium is considered to be isothermal and to obey the ideal gas equation. The governing equations are \begin{eqnarray} \frac{D \ln \rho}{Dt}&=&-\bm\nabla \bm\cdot \bm U,\\ \frac{D\bm U}{Dt}&=&-c_s^2 \bm\nabla \ln \rho-2\,\bm\Omega\times\bm U +\bm{F}^{\rm visc} +\bm{F}^{\rm f}, \label{momentum} \end{eqnarray} where $D/Dt=\partial / \partial t+\bm U \bm\cdot \bm\nabla$ is the advective derivative, $\rho$ and $c_s$ are the density and sound speed, respectively, and $\bm\Omega=\Omega_0(-\cos\theta,0,\sin\theta)^{\rm T}$ is the rotation vector. The viscous force is given by \begin{equation} \bm{F}^{\rm visc}=\nu\left(\nabla^2\bm U+\frac{1}{3} \bm\nabla \bm\nabla \bm\cdot \bm U+2\bm S \bm\cdot \bm\nabla \ln \rho \right), \end{equation} where $S_{ij}=\frac{1}{2}( U_{i,j}+ U_{j,i})- \frac{1}{3}\delta_{ij}U_{k,k}$ is the traceless rate of strain tensor, $\delta_{ij}$ is the Kronecker delta, and the commas denote differentiation. The forcing function is given by \begin{equation} \bm{F}^{\rm f}( \bm{x},t)={\rm Re} \{{\sf N} \bm\cdot \bm f_{\bm k(t)} \exp [i \bm{k}(t)\bm\cdot \bm{x} - i\phi(t)]\}, \end{equation} where $\bm{x}$, $\bm{k}$, and $\phi$ are the position, wavevector, and a random phase, respectively. The desired vertical ($z$) anisotropy can be enforced using a tensorial normalization factor ${\sf N}_{ij}=(f_0\delta_{ij}+\delta_{iz} \cos^2 \Theta_k\, f_1/f_0) (kc_s^3/\delta t)^{1/2}$ of the forcing, where $f_0$ and $f_1$ are the amplitudes of the isotropic and anisotropic parts, respectively. $\delta t$ and $\Theta_k$ are the time step and the angle between the vertical direction $z$ and $\bm{k}$, respectively, and $k=|\bm{k}|$ determines the dominant size of the eddies. In the forcing, $\bm f_{\bm k}$ is given by \begin{equation} \bm f_{\bm k}=\frac{\bm{k}\times \hat{\bm e} }{\sqrt{k^2-(\bm{k}\bm\cdot\hat{\bm e})^2}}, \end{equation} which makes the forcing transversal; $\hat{\bm e}$ is an arbitrary unit vector. The details of the forcing can be found in \cite{AB01}. \begin{table}[t!]\caption{ Summary of the runs varying the Taylor number and latitude. The values of $A_{\rm V}$ are shown at the equator and the pole, and the values of $A_{\rm H}$ at $15^\circ$ and $75^\circ$ latitude.
}\vspace{12pt}\centerline{\begin{tabular}{lcccc} \hline set & Ta $(10^6)$ & $\Omega_{\star}$ & $A_{\rm V}$ $(0^\circ ... 90^\circ)$ & $A_{\rm H} [10^{-3}]\,(15^\circ ... 75^\circ)$\\ \hline\hline \textbf{C0} & 0 & 0 & -0.51 & 0.56\\ \textbf{C02} & 0.05 & 0.02 & -0.51 & $0.73 ... 0.57$\\ \textbf{C04} & 0.15 & 0.04 & -0.51 & $0.72 ... 0.66$\\ \textbf{C06} & 0.30 & 0.06 & -0.51 & $0.16 ... 0.32$\\ \textbf{C13} & 1.40 & 0.13 & -0.51 & $0.73 ... 0.30$\\ \textbf{C24} & 4.54 & 0.24 & -0.51 & $2.85 ... 0.19$\\ \textbf{C46} & 15.58 & 0.46 & $-0.52 ... -0.50$ & $8.84 ... 1.41$ \\ \textbf{C64} & 30.54 & 0.64 & $-0.52 ... -0.49$ & $14.3 ... 1.12$ \\ \textbf{C83} & 50.49 & 0.83 & $-0.52 ... -0.48$ & $20.7 ... 1.11$ \\ \textbf{C1} & 75.43 & 1.01 & $-0.52 ... -0.46$ & $27.0 ... 0.89$ \\ \hline \label{tab:summary}\end{tabular}} \tablefoot{ The grid resolution of all runs is $144^3$, the forcing amplitudes are $f_0=10^{-6}$ and $f_1=0.04$, ${\rm Re}\approx 13$, and $\nu=3.3\cdot10^{-4}~c_s k_1^{-1}$. } \end{table} \section{Simulation setup} \label{sec:setup} We used the {\sc Pencil Code}\footnote{https://github.com/pencil-code} \citep{PC20} to run the simulations. We consider a cubic box of size $(2\pi)^3$ discretized over 144$^3$ grid points, where $z$, $x$, and $y$ correspond to the vertical, latitudinal, and azimuthal directions, respectively, the latter two being referred to as the horizontal directions. The horizontal boundaries are periodic, and stress--free conditions are imposed at the vertical boundaries with \begin{equation} U_{x,z}=U_{y,z}=U_z=0 \quad\mbox{on $\quad z=z_{\rm bot}$, $z_{\rm top}$}, \end{equation} where $z_{\rm bot}$ and $z_{\rm top}$ represent the bottom and top of the domain. The box size is represented by the wavenumber $k_1=2\pi/L$, and we choose a forcing wavenumber $k_f/k_1=10$. The units of length, time, and density are $k_1^{-1}$, $(c_sk_1)^{-1}$ and $\rho_0$, respectively, where $\rho_0$ is the initial uniform value of the density. The forcing parameters $f_0=10^{-6}$ and $f_1=0.04$ are chosen such that the effects of compressibility are weak, with a Mach number ${\rm Ma}=u_{\rm rms}/c_s \approx 0.04$ in all simulations. Moreover, with $f_1 \gg f_0$, we fulfill the NSSL condition in which $|A_{\rm V}|\gg |A_{\rm H}|$; see \Tab{tab:summary}. The vigor of the turbulence is quantified by the Reynolds number \begin{equation} {\rm Re}=\frac{u_{\rm rms}}{\nu k_f}, \end{equation} where $u_{\rm rms}=(\overline{\bm U^2}-\overline{\bm U}^2)^{1/2}$ is the root mean square of the fluctuating velocity field. Using a fixed value of the kinematic viscosity, $\nu=3.3\cdot10^{-4}~c_s k_1^{-1}$, the Reynolds number is about 13 for all simulations. We place the box at seven equidistant latitudes from the equator to the pole by setting the latitude angle $\theta$ of the rotation vector, as shown in \Fig{fig:sk}. The vertical placement is determined by the value of $\Omega_0$, which is varied such that the range of $\Omega_{\star}$ in \Eq{eq:con} is the one relevant for the NSSL. The turnover time is defined as $\tau=\ell/u_{\rm rms}$, where $\ell$ is the size of the eddies. In our simulations the energy--carrying scale of the turbulence is the forcing scale $\ell=2\pi/k_f$. Hence, the Coriolis number in the simulations is given by \begin{equation} \Omega_{\star}=\frac{4\pi\Omega_0}{u_{\rm rms} k_f}. \end{equation} The corresponding input parameter is the Taylor number \begin{equation} {\rm{Ta}}=\left (\frac{2\Omega_0L^2}{\nu}\right)^2.
\end{equation} The values of ${\rm Ta}$, $\Omega_{\star}$, and the anisotropy parameters are given in \Tab{tab:summary}. An additional run with $\Omega_0=0$ was performed to remove a contribution to the Reynolds stress appearing in the non--rotating case; see \Sec{res:RS}. Mean quantities are defined as horizontal ($xy$) averages. The local Cartesian quantities are related to their counterparts in spherical polar coordinates via $(r,\theta,\phi)\rightarrow (z,x,y)$, $(\overline{U}_r,\overline{U}_{\theta},\overline{U}_{\phi})\rightarrow (\overline{U}_z,\overline{U}_x,\overline{U}_y)$, $Q_{\theta \phi}\rightarrow Q_{xy}$, $Q_{\theta r}\rightarrow Q_{xz}$, and $Q_{r \phi }\rightarrow Q_{yz}$. We normalize quantities such that $\widetilde{U}_{i}=\overline{U}_i/u_{\rm rms}$ and $\widetilde{Q}_{ij}=Q_{ij}/u_{\rm rms}^2$, the tilde denoting this operation. Additionally, the error on the measured physical quantities, which are obtained directly from the simulations, is estimated by dividing the time series into three parts and comparing their time--averaged values with the one obtained from the whole time series. The maximum deviation from the latter is considered to be the error of the measurement. \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{om05_90.png} \includegraphics[width=\columnwidth]{om05_30.png} \end{center}\caption[]{Streamlines of the velocity field. The color table shows the amplitude of the azimuthal component of the velocity field normalized by the sound speed. Panels A and B show $U_y/c_s$ at the equator and at $\theta = 30^\circ$ for set \textbf{C46}, respectively. }\label{fig:Uy}\end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{pa_QA_con.pdf} \end{center}\caption[]{Top panel: Time--averaged and normalized diagonal components of the Reynolds stress as functions of $z$. The dotted (solid) line shows $\widetilde{Q}_{zz}$ ($\widetilde{Q}_{xx}$ and $\widetilde{Q}_{yy}$) of set \textbf{C24} at $15^\circ$ latitude. The vertical dashed lines mark the part of the domain from which $A_{\rm V}$ and $A_{\rm H}$ were measured. The anisotropy parameters $A_{\rm V}$ (middle panel) and $A_{\rm H}$ (bottom panel) are shown as functions of $\Omega_{\star}$ at the latitudes indicated in the legend. }\label{fig:AvAh}\end{figure} \section{Results} \subsection{Velocity field} A statistically stationary turbulent state appears after a few $\tau$ independent of $\Omega_{\star}$ everywhere except at the equator, where the statistically stationary state is reached after between a few and about 300~$\tau$, from the lowest to the highest $\Omega_\star$, respectively. As an example, we show snapshots of the zonal flow normalized by the sound speed at about 1000~$\tau$ for set \textbf{C46} at the equator and at $30^\circ$ latitude in panels A and B of \Fig{fig:Uy}, respectively. The other components of the velocity field are very similar to the zonal one shown in panel B. The dominant scale of the turbulence is the forcing scale $k_f/k_1=10$. The expected large--scale zonal flow similar to the actual NSSL is generated only at the equator, as shown in panel A. All other sets show similar behaviour. \begin{figure*}[t!]
\begin{center} \includegraphics[width=0.75\textwidth]{uxyz_om27_paper_1.pdf} \end{center}\caption[]{Normalized mean components of the velocity field versus time, in units of the turnover time, for representative runs in set \textbf{C24}. The rows from top to bottom show $\widetilde{U}_y$, $\widetilde{U}_x$ and $\widetilde{U}_z$, respectively. The left and right columns show the mean velocities at the equator and at $15^\circ$ latitude, respectively. To make the comparison of the velocity components feasible, we clip the color table of panel (A) at 50 per cent of the maximum value. }\label{fig:uxy27}\end{figure*} \subsection{Anisotropy of the flow} \label{sec:Aniso} We start our analysis by measuring the diagonal components of the Reynolds stresses and the anisotropy parameters, which are given by \begin{eqnarray} A_{\rm V}&=&\frac{Q_{xx}+Q_{yy}-2Q_{zz}}{u_{\rm rms}^2},\\ A_{\rm H}&=&\frac{Q_{yy}-Q_{xx}}{u_{\rm rms}^2}. \end{eqnarray} We show representative time--averaged diagonal stresses in the top panel of \Fig{fig:AvAh}. The stresses are almost constant in the entire domain except at the boundaries, where $Q_{zz}=0$ and the horizontal components rise to values twice as large. Additionally, in the interior the values of $\widetilde{Q}_{zz}$ are about twice as large as the other two components, reflecting the fact that $A_{\rm V}\approx -0.5$. We show the volume--averaged $A_{\rm V}$ and $A_{\rm H}$ as functions of $\Omega_{\star}$ at different latitudes in the middle and bottom panels of \Fig{fig:AvAh}, respectively. We consider data in the range $-2 \le z k_1 \le 2$ for the volume averages to avoid boundary effects. The vertical anisotropy parameter $A_{\rm V}$ is always at least two orders of magnitude greater in magnitude than $A_{\rm H}$. Neither shows appreciable variation as a function of the Coriolis number for $\Omega_\star \lesssim 0.15$. The vertical anisotropy parameter is almost independent of $\Omega_{\star}$ at the equator, in contrast to the other latitudes, where its absolute value decreases with increasing $\Omega_{\star}$. It decreases by about 15\% at the bottom of the NSSL at $15^\circ$ and by about 5\% at latitudes above $45^\circ$. The horizontal anisotropy parameter shows almost no dependence on latitude above $45^\circ$, but it becomes 100 times greater from the top to the bottom of the NSSL below this latitude. The behaviour of both anisotropy parameters is similar to that obtained by \cite{KB08}, who used a similar set--up to ours but applied fully periodic boundary conditions. This shows that the anisotropy of the flow is insensitive to the boundary conditions. \subsection{Mean flows} The development of mean flows in the rotating cases means that reaching a statistically steady state takes significantly longer than in non--rotating runs. Furthermore, long time averages are needed for statistical convergence of the turbulent quantities. We run all the simulations for at least 1100 turnover times. As an example, we show a subset of the time evolution of the three components of the normalized mean velocity field for about 1200 turnover times for set \textbf{C24} at the equator and at $\theta=15^\circ$ in \Fig{fig:uxy27}. At the equator, a large zonal flow $\overline{U}_y$ with a negative vertical gradient developed gradually over $100\tau$, as shown in panel (A). All other sets show a similar zonal flow profile at the equator, but both the amplitude and the steepness of the gradient increase with increasing $\Omega_{\star}$.
Moving away from the equator, the amplitude of the mean zonal flow reduces significantly and the negative gradient disappears, as shown in panel (B) of \Fig{fig:uxy27}. The dependence of the mean zonal flow on rotation can be seen in panel (A) of \Fig{fig:muco}, where we show the time--averaged $\widetilde{U}_y$ at selected $\Omega_{\star}$ at $15^\circ$ latitude. With increasing $\Omega_{\star}$, the gradient of $\widetilde{U}_y$ changes sign and becomes steeper up to $\Omega_{\star}=0.46$, after which it becomes shallower and slowly vanishes in the middle at $\Omega_{\star}=1$. The latitudinal dependence of $\widetilde{U}_y$ is shown for sets \textbf{C06} and \textbf{C46} in panels (C) and (E) of \Fig{fig:muco}, respectively. We find that $\widetilde{U}_y$ decreases as a function of latitude, vanishes at the pole, and that its amplitude is less than 5\% of $u_{\rm rms}$ everywhere apart from the boundaries. \label{result} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{pa_uxuy_con_07.pdf} \end{center}\caption[]{ Time--averaged normalized mean velocity components versus the vertical direction. Panels (A) and (B) show $\widetilde{U}_y$ and $\widetilde{U}_x$ at $15^\circ$. The second and third rows show the mean horizontal velocities for sets \textbf{C06} and \textbf{C46}, respectively, at the latitudes indicated by the legends. }\label{fig:muco}\end{figure} The time--averaged meridional component of the mean flow, $\overline{U}_x$, is consistent with zero at the equator for all runs, similar to the one shown in panel (C) of \Fig{fig:uxy27}. In contrast to the zonal flow, its value increases when moving away from the equator; see panel (D) of \Fig{fig:uxy27}. The time--averaged value of this component at $15^\circ$ is shown in panel (B) of \Fig{fig:muco} for selected values of $\Omega_{\star}$. The negative gradient persists up to $\Omega_{\star}=0.24$. Above this $\Omega_{\star}$, the shear slowly vanishes at the center of the box and becomes slightly positive with increasing $\Omega_{\star}$. However, the strong shear persists only near the boundaries. We show the latitudinal dependence of $\overline{U}_x$ for the two sets \textbf{C06} and \textbf{C46} in panels (D) and (F) of \Fig{fig:muco}. The amplitude of $\overline{U}_x$ decreases as a function of latitude. The amplitudes of $\overline{U}_x$ and $\overline{U}_y$ are comparable everywhere apart from the equator, and the negative gradient of $\overline{U}_x$ for $\Omega_{\star}<0.1$ persists at all latitudes. For completeness, we show the vertical component of the normalized mean flow, $\widetilde{U}_z$, in the bottom row of \Fig{fig:uxy27}. All runs show a similar pattern of high--frequency oscillations for $\widetilde{U}_z$ irrespective of latitude and $\Omega_{\star}$, with amplitudes of the order of $10^{-4}u_{\rm rms}$. These oscillations are identified as longitudinal sound waves, as expected for a compressible system in a confined cavity. \subsection{Reynolds stresses} \label{res:RS} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{pa_Qij.pdf} \end{center}\caption[]{ Left column: Time--averaged off--diagonal Reynolds stresses versus the vertical direction at five selected $\Omega_{\star}$ indicated by the legends. Right column: The stresses shown in the left panels, further spatially averaged over $-0.5 \le z k_1 \le 0.5$, at different latitudes.
The rows from top to bottom show $\widetilde{Q}_{yz}$, $\widetilde{Q}_{xy}$ and $\widetilde{Q}_{xz}$, respectively. }\label{fig:allq}\end{figure} For zero rotation, it is expected that $\widetilde{Q}_{xy} = 0$; see \Eq{eq:qtp}. However, we find that $\widetilde{Q}_{xy}$ always has a small but non--zero value, which persists also in the longest time series of our data. We find that such a contribution is present, and its magnitude remains unchanged, also at higher resolutions ($288^3$ and $576^3$ grids). Hence, this does not seem to be a numerical convergence issue, but we have been unable to identify whether the cause is some as yet unidentified physical effect, for example due to compressibility, effects due to the forcing, inhomogeneities in the system, or a combination thereof. Given that this contribution is systematically present, we made a non--rotating run (\textbf{C0}) and subtracted its $\widetilde{Q}_{xy}$ from the results of the runs with rotation. We show representative results of the off--diagonal stresses at five selected $\Omega_{\star}$ at $15^\circ$ (left column) and as functions of latitude (right column) in \Fig{fig:allq}. The data for all sets are available as online electronic material. The vertical Reynolds stress shows profiles at all latitudes similar to those at $15^\circ$; see panel (A). The stress is nearly constant in the interior of the domain and tends to zero at the boundaries. $\widetilde{Q}_{yz}$ is always negative independent of $\Omega_{\star}$ and latitude, as shown in panel (B). Thus, the vertical angular momentum transport is inward, in agreement with previous studies \citep[e.g.][]{PTBNS93,Chan2001,KKT04,LKGR05,KB08,Kap19}. Independent of $\Omega_{\star}$, the vertical stress vanishes at the pole and has its minimum and maximum amplitude at the equator and $15^\circ$, respectively, after which it decreases gradually towards the pole. For a given $\Omega_{\star}$, its amplitude is about twice as large at $30^\circ$ latitude as at $60^\circ$. The latitudinal dependence of $Q_{yz}$ is different from the previous studies of \cite{PTBNS93} and \cite{KKT04} at $\Omega_{\star}\approx 1$, in which $Q_{yz}$ was measured from local box convection simulations. In \cite{PTBNS93} it is almost constant up to 60$^\circ$ and decreases toward higher latitudes. In \cite{KKT04}, $Q_{yz}$ has a v--shaped profile in latitude with the minimum at $45^\circ$. The major ingredient missing in our forced turbulence simulations in comparison with theirs is density stratification. Moreover, \cite{KKT04} included the overshooting layer below the CZ. Therefore, it is difficult to pinpoint what makes our results different from theirs. The middle panels (C) and (D) in \Fig{fig:allq} show the horizontal stress $Q_{xy}$. The signature of turbulent fluctuations at the forcing scale is seen more clearly in this component, and the measurement is quite noisy. The values of $Q_{xy}$ are close to zero up to $\Omega_{\star}=0.46$, above which it slowly starts to attain positive (negative) values in the middle (close to the boundaries). This is the same behavior as seen for $A_{\rm H}$ in \Fig{fig:AvAh}. At a given $\Omega_{\star}$, the profile of $Q_{xy}$ is similar at all latitudes. Its amplitude is maximal at $30^\circ$ and decreases gradually towards the equator and the pole, as shown in panel (D).
This result is in agreement with the observational measurements of $Q_{xy}$ using sunspot proper motions \citep{FW65,GH84,PT98}, but not with the one measured using supergranulation motions; see Figure 10 in \cite{HGS16}. The horizontal stress always has positive values independent of $\Omega_{\star}$ and latitude, in agreement with previous studies in the slow rotation regime \citep[e.g.][]{LKGR05,KB08,Kap19}. The latitudinal profile of $Q_{xy}$ measured by \cite{PTBNS93} is very similar to our results, albeit with negative values, as their box is located in the southern hemisphere; see their Figure 6. The meridional stress is shown in the last row of \Fig{fig:allq}. In contrast to the other stresses, $\widetilde{Q}_{xz}$ shows a complicated profile, in particular close to the boundaries. Moreover, it has positive or negative values depending on both $\Omega_{\star}$ and $\theta$, as shown in panel (E). The latitudinal dependence of the meridional stress is shown in panel (F). At $\Omega_{\star}<0.1$, $\widetilde{Q}_{xz}$ is positive at low latitudes and $\widetilde{Q}_{xz} \rightarrow 0$ above $45^\circ$. With increasing $\Omega_{\star}$, $\widetilde{Q}_{xz}$ moves toward negative values and its absolute value increases. For $\Omega_{\star}>0.24$, it has its maximum amplitude at about $45^\circ$, and it decreases toward the pole and the equator, similar to $\widetilde{Q}_{xy}$. The meridional stress in \cite{PTBNS93} also shows a sign change, in agreement with ours, albeit at mid--latitudes. However, their sign change occurs at $\Omega_{\star}\approx 1$, while ours shows only negative values at that $\Omega_{\star}$. Comparing the absolute amplitudes of the stresses in the right column of \Fig{fig:allq}, we see that $\widetilde{Q}_{yz}$ is always larger than $\widetilde{Q}_{xz}$ and $\widetilde{Q}_{xy}$. For example, at $\Omega_{\star}=0.64$, $\widetilde{Q}_{yz}$ is about two to ten times larger than $\widetilde{Q}_{xz}$ and five to twenty times larger than $\widetilde{Q}_{xy}$, depending on latitude. Comparing also the absolute amplitudes of $\widetilde{Q}_{xy}$ and $\widetilde{Q}_{xz}$, we see that $\widetilde{Q}_{xz}>\widetilde{Q}_{xy}$ for all $\Omega_{\star}$. These results show that, in spite of the fact that $Q_{xy}$ increases as a function of $\Omega_{\star}$, its values are still much smaller than those of the vertical stress, which is in agreement with the assumption of KR05 regarding the NSSL. Although our model is quite simple in comparison to GDNS, it is of interest to compare the Reynolds stresses with simulations such as those in \cite{KMGB11}. These authors modelled turbulent convection in a spherical wedge for a variety of rotation rates. Considering the runs of \cite{KMGB11} with $\Omega_{\star}<1$, we find good agreement for the horizontal stress $Q_{xy}$, which is small and positive for small $\Omega_{\star}$, and which has appreciable values only for $\Omega_{\star}>0.5$. However, we find maximal values at $30^\circ$ instead of at $10\ldots15^\circ$ as in \cite{KMGB11}. We also observe a similar trend for $Q_{xz}$, such that it is positive for small $\Omega_{\star}$ on the northern hemisphere with a sign change after a certain $\Omega_{\star}$. However, this trend depends on latitude in their case; see their Figure~8. The profile of $Q_{yz}$ in the convection simulations is quite different from ours, such that it has a strong latitudinal dependence and attains both positive and negative values depending on $\Omega_{\star}$ and latitude.
This is consistent with earlier studies \citep[e.g.][]{Kap19}, where a sign change of $Q_{yz}$ occurs at higher $\Omega_{\star}$ than those considered in the present simulations. \subsection{The role of Reynolds stresses in the generation of the mean flows} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{pa_bal.pdf} \end{center}\caption[]{The mean velocities $\overline{U}_y$ and $\overline{U}_x$ and their corresponding balancing terms in \Eqs{eq:bmuy}{eq:bmux} at $30^\circ$ latitude (upper panel) and \Eq{eq:bbmuy} at the equator (lower panel) versus the vertical direction for set \textbf{C46}. In the upper panel, the orange and blue lines show $\overline{U}_x$ and $\overline{U}_y$, respectively. The red and black lines show the RHS of \Eqs{eq:bmux}{eq:bmuy}, respectively. In the lower panel, the solid and dotted lines show the LHS and RHS of \Eq{eq:bbmuy}, respectively. }\label{fig:balance}\end{figure} As the Reynolds stresses appear in the MF momentum equation, we start by writing the MF equations for $\overline{U}_x$ and $\overline{U}_y$ using \Eq{momentum}. In deriving these equations, we first took into account that our setup is fully compressible and that the forcing is not solenoidal, which may produce density fluctuations that cannot a priori be ignored in the MF equations. These considerations lead to three additional terms besides the Reynolds stresses in the momentum equation \citep[e.g.][]{KRBK20}. We compared all of them with the Reynolds stresses, and it turns out that they, and their gradients, are considerably smaller than the Reynolds stresses. Therefore, we can ignore the density fluctuations, and the set of equations reads \begin{equation} \dot{\overline{U}}_x=-\overline{U}_z\partial_z\overline{U}_x-\partial_z Q_{xz}+\nu\partial_z^2\overline{U}_x-2(\Omega_y\overline{U}_z-\Omega_z\overline{U}_y), \label{eq:muxt} \end{equation} \begin{equation} \dot{\overline{U}}_y=-\overline{U}_z\partial_z\overline{U}_y-\partial_z Q_{yz}+\nu\partial_z^2\overline{U}_y-2(\Omega_z\overline{U}_x-\Omega_x\overline{U}_z). \label{eq:muyt} \end{equation} Omitting the terms proportional to the small quantities $\nu$ and $\overline{U}_z$, and using $\Omega_y=0$, yields the final form of the equations: \begin{eqnarray} \dot{\overline{U}}_x&=&-\partial_zQ_{xz}+2\Omega_z\overline{U}_y, \label{eq:muxtf}\\ \dot{\overline{U}}_y&=&-\partial_zQ_{yz}-2\Omega_z\overline{U}_x. \label{eq:muytf} \end{eqnarray} We double--checked the validity of the MF equations by considering the steady--state solution, which reads \begin{eqnarray} \overline{U}_y&=&(2\Omega_z)^{-1}\partial_z Q_{xz}, \label{eq:bmuy}\\ \overline{U}_x&=&-(2\Omega_z)^{-1}\partial_z Q_{yz}.\label{eq:bmux} \end{eqnarray} We show the horizontal mean velocities in comparison with the RHS of Eqs.~(\ref{eq:bmuy}) and (\ref{eq:bmux}) at 30$^\circ$ in set \textbf{C46} in the upper panel of \Fig{fig:balance}.
These results are representative of all non--equatorial cases. Although there are fluctuations in the gradients of the Reynolds stresses, the match is satisfactory. \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{pa_psfbc.pdf} \end{center}\caption[]{ Panel (A): time--averaged normalized mean velocities versus the vertical direction for the periodic boundary condition (PBC) run of set \textbf{C46} at 30$^{\circ}$ latitude. The black and red lines show $\overline{U}_y$ and $\overline{U}_x$, respectively. Panel (B): comparison of the time--averaged normalized stresses obtained with PBC and with the stress--free boundary condition (SFBC) in the same run. The solid and dashed lines show the measured $\widetilde{Q}_{xy}$ (red), $\widetilde{Q}_{xz}$ (blue) and $\widetilde{Q}_{yz}$ (black) from the SFBC and PBC runs, respectively. }\label{fig:pbc}\end{figure} The equator is a special case, and \Eq{eq:bmuy} cannot be used because $Q_{xz}$ and $\Omega_z$ are both zero there. Therefore, we need to use the third component of the MF momentum equation. Applying a similar elimination of terms as for \Eqs{eq:muxt}{eq:muyt}, we have \begin{equation} \dot{\overline{U}}_z=-c_s^2\partial_z \ln \overline{\rho}-\partial_z Q_{zz}-2\Omega_x\overline{U}_y.\label{eq:muzt} \end{equation} The pressure gradient appears in this equation due to the horizontal averaging. In the steady state, the zonal flow can be written as \begin{equation} \overline{U}_y=-(2\Omega_x)^{-1}(\partial_zQ_{zz}+c_s^2\partial_z\ln \overline{\rho}). \label{eq:bbmuy} \end{equation} We show both sides of \Eq{eq:bbmuy} in the lower panel of \Fig{fig:balance}. The good correspondence indicates that these equations can be used to investigate the role of the stresses in the generation of the mean flows. We emphasise that although in the steady state, for example at the equator, the terms on the RHS of \Eq{eq:bbmuy} balance the flow, they are not the generators of the mean flow. They do, however, determine its final amplitude. Instead, the mean flows are generated by the gradient of the vertical stress $Q_{yz}$ at the vertical boundaries, as can be seen from \Eq{eq:muytf}. This flow then slowly penetrates into the middle of the domain. Such behavior can also be clearly seen in the first panel of \Fig{fig:uxy27}, where we show the time evolution of $\overline{U}_y$. The generation of mean flows is straightforward at the equator, because the meridional stress, and hence the meridional flow, vanishes there. At other latitudes the meridional stress and flow have to be included, but it is clear that the Reynolds stresses are the main driver of the mean flows in the current setups.
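As a concrete illustration of this diagnostic, the sketch below evaluates the steady--state relations \Eqs{eq:bmuy}{eq:bmux} from time--averaged profiles and compares them with the measured mean flows. It is a minimal example under stated assumptions: the input file name, its column layout, and the parameter values are hypothetical placeholders, and a simple \texttt{numpy} finite difference stands in for whatever derivative scheme one prefers.
\begin{verbatim}
import numpy as np

# Hypothetical file with columns: z*k1, Qxz, Qyz, Ux, Uy (time-averaged,
# normalized as in the text).
z, Qxz, Qyz, Ux, Uy = np.loadtxt("profiles_C46_lat30.txt", unpack=True)

Omega0 = 0.1                       # placeholder rotation rate in code units
theta  = np.deg2rad(30.0)          # latitude of the box
Omega_z = Omega0 * np.sin(theta)   # vertical component of the rotation vector

Uy_pred = np.gradient(Qxz, z) / (2.0 * Omega_z)    # RHS of Eq. (bmuy)
Ux_pred = -np.gradient(Qyz, z) / (2.0 * Omega_z)   # RHS of Eq. (bmux)

# A satisfactory balance means the predicted flows track the measured ones
# away from the boundaries (|z k1| <= 2 in the text):
interior = np.abs(z) <= 2.0
print("rms mismatch in Uy:", np.sqrt(np.mean((Uy_pred - Uy)[interior] ** 2)))
print("rms mismatch in Ux:", np.sqrt(np.mean((Ux_pred - Ux)[interior] ** 2)))
\end{verbatim}
At the equator, where $\Omega_z=0$, the same comparison would instead use \Eq{eq:bbmuy} with $\partial_z Q_{zz}$ and the mean density gradient.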
\section{Parameterization of Reynolds stresses in terms of mean--field hydrodynamics} \label{sec:Lambda} Based on the $\Lambda$--effect theory explained in \Sec{sec:theory}, the vertical and horizontal Reynolds stresses given in \Eqs{eq:qrp}{eq:qtp}, respectively, can be written in the simulation domain as \begin{equation} Q_{yz}\!=\!Q_{yz}^{(\nu)}\!+\!Q_{yz}^{(\Lambda)}\!=\!-\nu_{\parallel} \frac{\partial \overline{U}_y}{\partial z}\!+\!\nu_{\parallel}V\sin \theta\, \Omega,\label{eq:qqyz} \end{equation} \vspace{-.5cm} \begin{equation} Q_{xy}\!=\!Q_{xy}^{(\nu)}\!+\!Q_{xy}^{(\Lambda)}\!=\!\nu_{\perp}\Omega^2\sin\theta\cos \theta \frac{\partial \overline{U}_y}{\partial z}\!+\!\nu_{\parallel}H\cos \theta\,\Omega.\label{eq:qqxy} \end{equation} Measuring the $\Lambda$--effect coefficients $V$ and $H$ from a single experiment is not possible, because the turbulent viscosities $\nu_{\parallel}$ and $\nu_{\perp}$ are also unknown. Our strategy around this is to run another set of otherwise identical simulations in which the horizontal mean flows are artificially suppressed at each time step. Then the first terms in \Eqs{eq:qqyz}{eq:qqxy} vanish, and from these simulations we can directly measure $Q^{(\Lambda)}$. However, we need to validate this approach, because the velocities can be affected through the non--linearity of the Navier--Stokes equations. Therefore, we perform yet another set of otherwise identical simulations, but use periodic boundary conditions (PBC) in all directions instead of the stress--free boundary condition (SFBC) in the vertical direction. Then we compare the two sets of stresses obtained with these boundary conditions. Such a comparison of varying boundary conditions is important also with respect to interpreting the $\Omega_{\star}$ dependence as a depth dependence; this approach is somewhat artificial, as we in practice enforce unrealistic boundary conditions within the convection zone. As an example, we show the horizontal mean velocities for the PBC version of \textbf{C46} at $30^\circ$ latitude in panel (A) of \Fig{fig:pbc}. Clearly, no notable mean flow is generated in this run. Therefore, the first terms in both \Eqs{eq:qqyz}{eq:qqxy} go to zero, similar to the cases where the mean flows are suppressed. In panel (B) of \Fig{fig:pbc}, we show the comparison of the Reynolds stresses between the PBC and SFBC cases. The difference caused by varying the boundary conditions is confined to a very narrow layer near the boundary. These results suggest that our method for separating the different effects, and enforcing an artificial SFBC at different depths, is valid. Considering \Eq{eq:qqyz}, subtracting the Reynolds stresses obtained from these simulations from the total ones gives \begin{equation} Q_{yz}-Q_{yz}^{(\Lambda)}=-\nu_{\parallel} \frac{\partial \overline{U}_y}{\partial z}. \label{eq:qnu} \end{equation} Measuring the vertical gradient of $\overline{U}_y$, the value of $\nu_{\parallel}$ can then be determined by performing an error--weighted linear least--squares fit to \Eq{eq:qnu}. Putting the measured values of $\nu_{\parallel}$ back into $Q^{(\Lambda)}$ in \Eqs{eq:qqyz}{eq:qqxy}, we can measure $V$ and $H$, provided that $\nu_{\perp}\ll\nu_{\parallel}$. \subsection{Properties of the diffusive and non--diffusive parts of Reynolds stresses} \begin{figure}[t!]
\begin{center} \includegraphics[width=\columnwidth]{pa_allq_ln.pdf} \end{center} \caption[]{Panels (A), (C) and (E): time--averaged diffusive and non--diffusive parts of the Reynolds stresses versus the vertical direction. The black and blue lines in panel (A) show the normalized vertical stresses at the equator and at $30^\circ$ latitude for set \textbf{C24}, respectively. In panel (C) the horizontal stresses are shown at $30^\circ$ latitude for set \textbf{C64}. The blue and black lines in panel (E) show the meridional stresses for sets \textbf{C06} and \textbf{C64} at $30^\circ$ latitude, respectively. The vertical lines denote the $z$ range used for the volume averages. Solid, dotted and dash--dotted lines show $\widetilde{Q}_{ij}$, $\widetilde{Q}_{ij}^{(\Lambda)}$, and $\widetilde{Q}_{ij}^{(\nu)}$, respectively. Panels (B), (D), and (F): volume averages, over $-2 \le z k_1 \le 2$, of $\widetilde{Q}_{ij}^{(\Lambda)}$ versus $\Omega_{\star}$ at different latitudes, as indicated by the legend. }\label{fig:Qln}\end{figure} Similar to \Sec{res:RS}, we first measure $Q_{ij}$ from the non--rotating run and then subtract its mean value from the corresponding stress in the other sets. We show the different contributions to the Reynolds stresses in \Fig{fig:Qln}. In the left column we show stresses from one or two simulation sets, and in the right column we show the dependence of the volume averages of $Q_{ij}^{(\Lambda)}$ on both latitude and $\Omega_{\star}$. In panel (A), we show the vertical stresses for set \textbf{C24} at the equator and at $30^\circ$ latitude. With these results we can explain the minimum of $Q_{yz}$ at the equator: the diffusive and non--diffusive parts of the stresses are comparable but of opposite signs, leading to a small negative value for the total. From \Eq{eq:qqyz} we see that $\nu_{\parallel}>0$, which in combination with $\partial_z\overline{U}_y<0$ gives $Q_{yz}^{(\nu)}>0$. Moreover, the final negative value of $Q_{yz}$ also shows that $Q_{yz}^{(\Lambda)}$ is responsible for the generation of the zonal flow. The profile of $Q_{yz}^{(\nu)}$ at all other latitudes is similar to the one at $30^\circ$, and shows that the major contribution from the diffusive part occurs close to the boundaries, at $|zk_1|\gtrsim 2$, with positive values. Furthermore, the amplitude of $Q_{yz}^{(\nu)}$ decreases towards higher latitudes (not shown). In the middle of the domain it has negative values, which fits well with the $\partial_z\overline{U}_y>0$ that can be seen in \Fig{fig:muco}. The non--diffusive part of the vertical stress is always negative, $Q_{yz}^{(\Lambda)}<0$. Its absolute value increases from the pole towards the equator and increases with $\Omega_{\star}$. We also find that $Q_{yz}^{(\Lambda)}$ depends linearly on $\Omega_{\star}$ in the slow rotation regime $\Omega_{\star}\ll 1$, in agreement with previous numerical results \citep{PK19}. \begin{figure*}[t!] \begin{center} \includegraphics[width=1.3\columnwidth]{coefficients.pdf} \end{center} \caption[]{ Normalized turbulent viscosity $\widetilde{\nu}_{\parallel}$ and $\Lambda$--effect coefficients as functions of $\Omega_{\star}$ and latitude. Panels (A), (C) and (D) show $\widetilde{\nu}_{\parallel}$, $V$ and $H$ as functions of $\Omega_{\star}$ from the equator to $75^\circ$ latitude, respectively. Panel (B) shows $\widetilde{\nu}_{\parallel}$ as a function of latitude for five selected $\Omega_{\star}$.
}\label{fig:coef}\end{figure*} We show corresponding results for $Q_{xy}$ in panel (C) of \Fig{fig:Qln} for $\Omega_{\star}=0.64$. $Q_{xy}^{(\Lambda)}$ has positive values in the whole domain, while $Q_{xy}^{(\nu)}$ is almost zero in the middle of the domain and its contribution to $Q_{xy}$ is confined to the boundaries at $|zk_1|\gtrsim 2$. This also shows that $Q_{xy}^{(\nu)}$ is the main contributor to the negative values of $Q_{xy}$ close to the boundaries shown in \Fig{fig:allq}\,(C). The volume--averaged values of $Q_{xy}^{(\Lambda)}$, excluding the boundaries, are shown in \Fig{fig:Qln}\,(D) as functions of both $\Omega_{\star}$ and latitude. Its value is almost zero both at the equator and at the pole. It is significantly non--zero for $\Omega_{\star}> 0.24$ and increases with increasing $\Omega_{\star}$ independent of latitude. Independent of $\Omega_{\star}$, it has its maximum value at $30^\circ$ latitude. We note here that the amplitude of $Q_{xy}^{(\Lambda)}$ is also significantly smaller than that of $Q_{yz}^{(\Lambda)}$. The measured profile of $Q_{xy}^{(\Lambda)}$ is almost identical to the one obtained by \cite{Kap19}. Our results for $Q_{xz}$ are shown in \Fig{fig:Qln}. At low $\Omega_{\star}$, there is almost no contribution of $\widetilde{Q}_{xz}^{(\Lambda)}$ to the total stress. For $\Omega_{\star}>0.15$, the contribution of $\widetilde{Q}_{xz}^{(\nu)}$ disappears in the middle of the domain but maintains its positive value close to the boundaries. This can be seen in panel (E), where we show $\widetilde{Q}_{xz}$ for low and high $\Omega_{\star}$ for comparison. In panel (F), we show the volume averages of $\widetilde{Q}_{xz}^{(\Lambda)}$ at all $\Omega_{\star}$ and latitudes. The value of $\widetilde{Q}_{xz}^{(\Lambda)}$ is almost zero at the equator and at the pole. At other latitudes, its absolute value increases with increasing $\Omega_{\star}$. It always has negative values, independent of both $\Omega_{\star}$ and latitude. These results are in agreement with those of \cite{Kap19} in the slow rotation regime. \subsection{Measuring turbulent viscosity} \label{sec:nu} The diagonal turbulent viscosity $\nu_{\parallel}$, normalized by $\ell u_{\rm rms}$, and its dependence on both $\Omega_{\star}$ and latitude are shown in panels (A) and (B) of \Fig{fig:coef}, respectively. Apart from the highest latitudes, where the measurements are unreliable, the turbulent viscosity decreases monotonically as a function of $\Omega_{\star}$, such that for the largest $\Omega_{\star}$, corresponding to the bottom of the NSSL, its value has decreased by roughly a factor of two. The method used here to measure the turbulent viscosity relies on the presence of mean flows. As these diminish toward high latitudes, it is very difficult to obtain reliable estimates of $\nu_{\parallel}$ near the pole. We note that the measurements of $\widetilde{\nu}_{\parallel}$ also suffer from numerical noise at $\Omega_{\star}<0.1$ at low latitudes. In particular, we think that the latitudinal dependence of $\widetilde{\nu}_{\parallel}$ for $\theta\lesssim60\degr$, shown in panel (B), is not reliable. According to the results at lower latitudes, we conclude that the latitude dependence is weak in comparison to the rotational dependence. Hence, we consider the equatorial profile of $\nu_{\parallel}$, which is measured with high confidence, applicable to the other latitudes, and use it for measuring $V$ and $H$ there. The ratio of turbulent to kinematic viscosity is $\nu_{\parallel}/\nu\sim 10$--$20$, as expected for the fluid Reynolds numbers in the current simulations.
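A minimal sketch of such an error--weighted fit is given below, assuming the time--averaged profiles and their error estimates have been extracted to a (hypothetical) text file. Since \Eq{eq:qnu} states that the diffusive residual $Q_{yz}-Q_{yz}^{(\Lambda)}$ is proportional to $-\partial_z\overline{U}_y$, $\nu_{\parallel}$ follows from a one--parameter weighted least--squares fit through the origin, obtained by minimizing $\sum_i w_i\,(y_i-\nu_{\parallel}x_i)^2$ with $w_i=1/\sigma_i^2$.
\begin{verbatim}
import numpy as np

# Hypothetical file with columns: z*k1, mean Uy, total Qyz,
# Qyz_lambda (from the mean-flow-suppressed run), and the error on Qyz.
z, Uy, Qyz, Qyz_lam, sigma = np.loadtxt("qyz_fit_input.txt", unpack=True)

x = -np.gradient(Uy, z)          # regressor: -dUy/dz, cf. Eq. (qnu)
y = Qyz - Qyz_lam                # diffusive part of the vertical stress
w = 1.0 / sigma**2               # weights from the measurement errors

nu_par = np.sum(w * x * y) / np.sum(w * x * x)   # weighted slope = nu_par
nu_err = np.sqrt(1.0 / np.sum(w * x * x))        # formal 1-sigma error
print(f"nu_par = {nu_par:.3e} +/- {nu_err:.3e} (code units)")
\end{verbatim}
The fitted $\nu_{\parallel}$ can then be inserted back into \Eqs{eq:qqyz}{eq:qqxy} to extract $V$ and $H$, as described above.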
\cite{KRBK20} measured the turbulent viscosity by imposing a weak sinusoidal shear flow on a non--rotating, isotropically forced turbulent medium in Cartesian geometry and measuring the response of the system. The response is an off--diagonal Reynolds stress that is assumed to be proportional to the imposed shear flow according to the Boussinesq ansatz. They defined a shear number Sh as \begin{equation} {\rm Sh}=\frac{U_0k_U}{u_{\rm rms} k_f}, \end{equation} where $U_0$ is the amplitude of the flow and $k_U$ is the wavenumber of the imposed sinusoidal shear. To obtain Sh for the shear flows generated at the equator in the present simulations, we set $k_U=k_1/2$ and $U_0/u_{\rm rms}={\rm max}(\widetilde{\UU}_y)$. We consider only the slow rotation regime $\Omega_{\star}<0.1$, corresponding to sets~\textbf{C02}, \textbf{C04} and \textbf{C06}, for which ${\rm Sh}=0.002$, $0.004$ and $0.006$, respectively; these values are within the range of {\rm Sh} used in \cite{KRBK20}. We normalized our $\nu_{\parallel}$ with the same normalization factor as in \cite{KRBK20}, that is $\nu_{t0}=u_{\rm rms}/(3k_f)$; see their Section~2.2. This differs from the currently used normalization by a factor of $6\pi$, such that the current normalized values, for example in Fig.~\ref{fig:coef}, are smaller than theirs by this factor. Using their normalization, we obtain values of $\nu_{\rm t}/\nu_{t0}\approx 3.5\ldots 3.8$, which are roughly twice as large as those in \cite{KRBK20}. The difference is likely caused by the strong anisotropy of the turbulence in our simulations, due to the forcing and the rotation, both of which were absent in the study of \cite{KRBK20}. We also compare the profile of $\nu_{\parallel}$ with an analytical expression for the rotation dependence of the viscosity obtained under SOCA by \citet[][hereafter KPR94]{KPR94}. We consider the first term in Equation (34) of their work, which is the one relevant to our simulations, namely $\nu_{\parallel}=\nu_0 \phi_1(\Omega_{\star})$, where $\nu_0=(4/15)\,\ell u_{\rm rms}$ is the turbulent viscosity obtained for the isotropic non--rotating case and $\phi_1$ is a function of $\Omega_{\star}$ given in the Appendix of KPR94. We scale the analytical result by a factor of $\kappa=0.68$ to make it comparable with our numerical result. The comparison, shown in \Fig{fig:phi1}, demonstrates that, apart from the factor $\kappa$, the rotation dependence predicted by the theory is in fair agreement with the numerical simulations. As for the off--diagonal turbulent viscosity $\nu_{\perp}$, we were unable to measure it, as both terms constituting it, $\qxy^{(\nu)}$ and $\Omega^2\sin\theta\cos \theta\, \partial \overline{\mbox{\boldmath $U$}}{}_y/\partial z$, are too small, and the measurement error in the former is large. \subsection{Measurements of the vertical $\Lambda$--effect coefficient} We measure the vertical $\Lambda$--effect coefficient by substituting the volume averages of $Q_{yz}^{(\Lambda)}$, shown in panel (B) of \Fig{fig:Qln}, and the equatorial $\nu_{\parallel}$ into \begin{equation} V=\frac{Q_{yz}^{(\Lambda)}}{\nu_{\parallel} \sin\theta \Omega_0}. \end{equation} Our results are shown in panel (C) of \Fig{fig:coef}. The absolute value of $V$ is about $0.75$ and gradually increases to $\approx 0.95$ for latitudes $\leq 45^\circ$. However, the values of $V$ at the lowest $\Omega_{\star}$ are smaller at all latitudes, albeit with large error bars. In contrast to low latitudes, the absolute values of $V$ at $60^\circ$ and $75^\circ$ decrease for $\Omega_{\star}>0.3$.
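The extraction of the $\Lambda$--effect coefficients from the measured stresses amounts to simple pointwise divisions; the following minimal sketch uses placeholder numbers and anticipates the horizontal coefficient $H$ defined in the next subsection. We interpret $\theta$ in these formulas as colatitude, an assumption made so that $\sin\theta\neq 0$ at the equator.
\begin{verbatim}
import numpy as np

# Placeholder stress values; in practice these are the volume-averaged
# non-diffusive stresses and the equatorial turbulent viscosity.
Omega0, nu_par = 1.0, 0.05
lat = np.array([15.0, 30.0, 45.0, 60.0])        # latitude in degrees
theta = np.deg2rad(90.0 - lat)                  # colatitude (assumption)
Q_yz = np.array([-1.0e-2, -1.8e-2, -2.0e-2, -1.5e-2])
Q_xy = np.array([ 2.0e-3,  1.5e-3,  1.0e-3,  0.5e-3])

V = Q_yz / (nu_par * np.sin(theta) * Omega0)    # vertical coefficient
H = Q_xy / (nu_par * np.cos(theta) * Omega0)    # horizontal coefficient
for la, v, h in zip(lat, V, H):
    print(f"{la:4.0f} deg:  V = {v:+.2f}   H = {h:+.2f}")
\end{verbatim}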
Considering the large errors in the measurements at low $\Omega_{\star}$, we may consider $V$ to be roughly constant for $\Omega_{\star}\leq 0.15$, independent of latitude, whereas it shows a strong latitudinal and rotational dependence for $\Omega_{\star} > 0.15$. This means that retaining only the first term $V^{(0)}$ in \Eq{eq:v} under NSSL conditions, as assumed in the theoretical model of KR05 explained in \Sec{sec:theory}, is not sufficient. Moreover, the increase of $V$ towards higher $\Omega_{\star}$ at low latitudes is in contrast with the decrease predicted by the KR05 model. The same applies to the results of \cite{Kap19}, who did not take into account that $\nu_{\rm t}=\nu_{\rm t}(\Omega_\star)$. \subsection{Measurements of the horizontal $\Lambda$--effect coefficient} We measure the horizontal $\Lambda$--effect coefficient similarly to the vertical one, using \begin{equation} H=\frac{Q_{xy}^{(\Lambda)}}{\nu_{\parallel} \cos\theta \Omega_0}. \end{equation} The results are shown in panel (D) of \Fig{fig:coef}. The values of $H$ are always positive, independent of $\Omega_{\star}$ and latitude. They are one order of magnitude smaller than those of $V$ up to $\Omega_{\star}=0.6$, above which $H$ begins to increase at latitudes $<45^\circ$. We also note that its value is zero at the equator and at the pole. $H$ is largest at $15^\circ$ latitude and decreases gradually towards higher latitudes. These results show that $H$ does not play any role in transporting angular momentum close to the surface, which validates the corresponding assumption in the NSSL model of KR05. \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{phi1_comp.pdf} \end{center}\caption{ Comparison of the measured turbulent viscosity with the analytical result of KPR94. The solid and dashed lines show the normalized turbulent viscosity and the rescaled analytical expression $\kappa\phi_1$, respectively. }\label{fig:phi1}\end{figure} \section{Conclusions} We applied an alternative approach to MF theory and GDNS, namely direct numerical simulations of forced turbulence in local boxes, primarily to find out whether the assumptions and approximations applied in MF theory to explain the formation of the NSSL are valid. In contrast to GDNS, we could isolate and study the role and contribution of the Reynolds stresses in the rotational regime relevant for the NSSL. Additionally, we were able to measure the turbulent viscosity. Our results show that the three conditions, explained in \Sec{sec:theory}, that are necessary to generate the NSSL in the KR05 model are insufficient. In particular, the meridional component of the Reynolds stress cannot be ignored. However, our results are in accordance with $Q_{xy}\rightarrow 0$ in the upper part of the NSSL, whereas $Q_{xy}$ attains small but non--zero values close to the bottom of the NSSL, in agreement with the theoretical predictions. Regarding the vertical Reynolds stress, its role in transporting angular momentum radially inward is in agreement with theory. However, its profile differs from that predicted by theory. In particular, it was assumed in \cite{LK13} and \cite{LK16} that only the term $V^{(0)}$ survives in the expansion of $V$ in the NSSL. However, our results indicate that higher order terms in the expansion of $V$ need to be considered. Moreover, it is also expected from theory that the vertical transport of angular momentum decreases with increasing $\Omega_{\star}$ independently of latitude, but our results show that this expectation is fulfilled only at high latitudes.
We also note here that the rotational quenching of the turbulent viscosity $\nu_{\parallel}$ adds another degree of complexity to the problem, which was not considered previously in models of the NSSL. This quenching was, however, predicted by MF theory, and the prediction is in good qualitative agreement with our results \citep{KPR94}. Although these local box simulations have a moderate value of ${\rm Re}\approx13$, and there is no connection between different latitudes, our results are largely consistent with the stresses and mean flows obtained in GDNS. On the other hand, the theoretical works used SOCA, which should be valid for Reynolds or Strouhal numbers up to unity; this is in the vicinity of the parameter regime of the current models. Hence, it is not surprising that we find a relatively good match between the measured turbulent viscosity and the one predicted by SOCA. Since $Q_{xz}$ cannot be disregarded in the NSSL, its role should be further investigated in a more realistic setup using spherical geometry, where the artifact of discontinuity between latitudes is removed. We also note that in this work we considered only a single, modest Reynolds number and one forcing scale, the effects of which need to be explored in wider parameter studies. Other important physics that needs to be investigated includes the effects of stratification, compressibility, and magnetic fields, with comparisons to previous studies of these effects in turbulent convection, namely \cite{PTBNS93}, \cite{Chan2001}, and \cite{KKT04}. It is worthwhile to note here that a set of companion laboratory experiments is being proposed to test several aspects of our model. In these experiments, a rotating water--filled apparatus will be used to simulate regions of finite latitudinal extent, including $\beta$--plane effects, and forcing will be introduced by pump--driven nozzles at the boundaries \citep{EH19}. Relative variation of the system rotation rate and the nozzle exit velocity will allow both the $\Omega_\star > 1 $ and $\Omega_\star < 1$ regimes to be explored. The forcing scale length and isotropy will be changed by opening/closing nozzles and by altering the nozzle shapes and orientations. Time resolved measurements of the components of the flow velocity will allow the mean flows and stresses to be computed and compared with numerical results and theoretical models. Although the details of the forcing and the fluid boundary conditions in the experiment will differ from those in the present simulations, meaningful results are expected as the rotation rate of the system is varied and the experimental data are analyzed for signatures of the $\Lambda$--effect. \begin{acknowledgements} The simulations have been carried out on supercomputers at GWDG and on the Max Planck supercomputer at RZG in Garching. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project "UniSDyn", grant agreement n:o 818665). A. Barekat acknowledges funding by the Max-Planck/Princeton Center for Plasma Physics. PJK acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG) Heisenberg programme (grant No.\ KA 4825/2-1). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Detailed studies of biomolecular electrostatics provide a tool for the rational design and optimization in diverse fields such as biocatalysis, antibody and nanobody engineering, drug composition and delivery, molecular virology, and nanotechnology \cite{Protein_electrostatics_review_2020}. A commonly accepted and widely used approach is based on solving the nonlinear Poisson-Boltzmann equation (PBE), which provides a mean field description of the electrostatic potential in a system of biological macromolecules immersed in an aqueous solution, such as water. In this model, the solvated molecule is represented at an atomic level of detail in a molecule-shaped cavity with a low dielectric permittivity and point partial charges at the atomic positions, whereas the water molecules and ions in the solvent are treated implicitly and accounted for by an isotropic dielectric continuum with a high dielectric permittivity \cite{Fogolari_Brigo_Molinari_2002}. A further simplification would be to linearize the ionic contributions. However, in that case the resulting description is not accurate enough when the biomolecules are highly charged, as is the case for DNA, RNA and phospholipid membranes such as polylysine \cite{Honig_Nicholls_95}. For these reasons, there is considerable interest in the nonlinear PBE in the scientific computing and biophysics communities. Since it is an equation combining a strong nonlinearity with measure data, the existence and uniqueness of solutions for it, while mostly within the scope of known PDE techniques, are not quite trivial. However, in the applications literature these results are often assumed as standard folklore or are insufficiently justified, sometimes on the basis of weak formulations that are not rigorously appropriate for equations with measure data. A common approach is to consider the natural energy associated to the problem, which is convex, and to apply variational arguments. However, this should also be treated with care, since the nonlinearity of the PBE is strong enough to prevent the energy functional from being differentiable in Sobolev spaces, so the Euler-Lagrange equation does not necessarily hold. \subsection*{Contributions} In this work, we provide a complete treatment for the existence and uniqueness of weak solutions of the nonlinear Poisson-Boltzmann equation when applied to electrostatic models for biological macromolecules, and without assuming that the solvent is necessarily charge neutral. Our methods are centered on the spaces where elliptic PDE with measure right hand side can be rigorously formulated, which are not directly compatible with a variational approach, and on which we prove uniqueness of solutions. For existence, out of the possible approaches to produce particular solutions, we focus on the additive solution-splitting techniques, which are the most explicit and the most commonly used in the applications literature. These still involve an energy functional which is not differentiable and is in fact discontinuous at every point of its domain (see Section \ref{Sections_remarks_on_J}). However, by proving an a priori boundedness estimate for minimizers, we are able to return to standard $H^1$ weak formulations which can be directly approximated numerically. The insistence on the molecular setting is not gratuitous: our analysis specifically uses the particularities of this setting, in which the nonlinearity and right hand side are active on disjoint parts of the domain.
This separation plays a role in the main existence result in Theorem \ref{Theorem_Existence_for_full_potential_GPBE} by allowing us to decompose the full potential into a regular component that satisfies an elliptic equation with a more regular right hand side in $H^{-1}$, and another term representing the contribution of the point charges through the Newtonian potential. The main assumption to be able to perform this decomposition is that the dielectric permittivity is constant in a neighborhood of the point charges. A notable feature of our approach is that we treat weak formulations for the complete nonlinear PBE in the framework of weak solutions for elliptic equations with measure right hand side as defined by Boccardo and Gallou\"et in \cite{Boccardo_Gallouet_1989, Boccardo_Gallouet_Nonlinear_equations_with_RHS_measures_1992} and maintain this unified framework throughout. The other rigorous works that we are aware of treating the biomolecular situation for the PBE use this type of decomposition as a fixed ansatz, whereas we use it as a way to obtain particular solutions of the general formulation, for which we prove a uniqueness result that will cover any such approach. A prominent such work is \cite{BoLi_paper_2009}, which takes a variational perspective and requires charge neutrality (see in particular \cite{BoLiErratum2011}). Since the decomposition is fixed a priori, it uses ad-hoc spaces which can be roughly described as ``$W^{1,1}$ around the point charges, but $H^1$ elsewhere'', whereas we work in the sharp spaces for problems with measure data, or ``in dimension $d$, $W^{1,\frac{d}{d-1}-\varepsilon}$ for all $\varepsilon >0$'', and we assume the interface to be just $C^1$ instead of $C^2$. The assumption that the interface between the molecular and solvent regions is $C^1$ plays no role in the existence results, but is required for our uniqueness result in Theorem \ref{Theorem_uniqueness_for_GPBE}. We will justify below that this assumption is often satisfied for a very common interpretation of the molecular geometries, which cannot be expected to be $C^2$. Further, for the regular component we treat weak formulations with Sobolev test spaces instead of just minimizers or distributional solutions, and adopt decomposition schemes commonly used in the physical and numerical literature, proving their equivalence through our uniqueness result. The fact that the regular component of a solution obtained by such a decomposition satisfies a weak formulation involving $H^1$ spaces means that this component can be numerically approximated by means of well-studied methods, such as standard conforming finite elements. It also means that the duality approach for error estimation is applicable, yielding both a priori near-best approximation results and guaranteed a posteriori error bounds, as done in \cite{Kraus_Nakov_Repin_PBE1_2018, Kraus_Nakov_Repin_PBE2_2020}. Having a $C^1$ interface also has practical implications, since such an interface is easier to represent exactly with curved elements or isogeometric analysis. Moreover, a boundedness estimate for the regular component of the potential, as proved in Theorem \ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem}, has physical implications in its own right.
A growing body of literature, starting with \cite{Borukhov_Andelman_Orland_1997}, treats modifications of the Poisson-Boltzmann model with more tame nonlinearities (reflecting finite size ions) on the grounds that the original PBE model may produce unphysically high ion concentrations and potentials. A boundedness estimate for the original implicit-solvent PBE places some theoretical limits on these concerns, at the level of potentials outside the molecule. \subsection*{Organization of this paper} After introducing the general PBE and its linearized version in their physical context, in Section \ref{Section_Setting} we introduce the notion of weak solutions we will work with and provide an existence and uniqueness result for them in the linearized setting. In Section \ref{Section_Splitting} we review two natural linear splittings of solutions, either of which can be used to decouple the contributions of the nonlinearity and of the measure data. Section \ref{Section_Existence_and_uniqueness_of_solution} contains the main results: in Section \ref{Section_Existence_GPBE} we prove existence of weak solutions through a variational argument and a boundedness estimate, and Section \ref{Section_uniqueness_for_GPBE} treats uniqueness. \subsection{Physical formulation}\label{Section_Physics} We study an interface problem modelling a biological system consisting of a (macro)molecule embedded in an aqueous solution, e.g., saline water. Both are contained in a bounded computational domain $\Omega\subset \mathbb R^d$ with $d\in \{2, 3\}$. The part of $\Omega$ containing the molecule is denoted by $\Omega_m\Subset \Omega\subset\mathbb R^d$ (see Figure \ref{All_regions_2D_and_3D}) and the one containing the solution with the moving ions is denoted by $\Omega_s$ and defined by $\Omega_s=\Omega\setminus\overline{\Omega_m}$. The interface between $\Omega_m$ and $\Omega_s$ is denoted by $\Gamma=\overline {\Omega_m}\cap\overline{\Omega_s}=\partial\Omega_m$, and the outward (with respect to $\Omega_m$) unit normal vector on $\partial \Omega_m$ by $\bm n_\Gamma$. Usually, the molecular region $\Omega_m$ is prescribed a low dielectric coefficient $\epsilon_m\approx 2$, whereas the solvent region $\Omega_s$ is prescribed a high dielectric coefficient $\epsilon_s\approx 80$. We will assume that the function $\epsilon$, describing the dielectric coefficient in $\Omega$, is constant in the molecule region $\Omega_m$ and Lipschitz continuous in the solvent region $\overline{\Omega_s}$, with a possible jump discontinuity across the interface $\Gamma$, i.e., \begin{equation} \label{definition_of_piecewise_constant_epsilon_LPBE} \epsilon(x)=\left\{ \begin{aligned} &\epsilon_m, &x&\in \Omega_m,\\ &\epsilon_s(x), &x&\in\Omega_s. \end{aligned} \right. \end{equation} We note that in the presence of moving ions, more refined models include the so-called ion exclusion layer (IEL). This is a region which no ions can penetrate and which surrounds the biomolecules. It is denoted by $\Omega_{IEL}$, and the part of $\Omega_s$ accessible to ions is denoted by $\Omega_{ions}=\Omega_s\setminus\overline{\Omega_{IEL}}$. With this notation, we have $\Omega_s=\big(\overline{\Omega_{IEL}}\setminus\Gamma\big)\cup\Omega_{ions}$ (see Figure \ref{All_regions_2D_and_3D}). We remark that considering this region is optional (indeed, many works do not use it), in which case one may think of only two regions $\Omega_m$ and $\Omega_{ions}=\Omega_s$, and the notation in some of our results below would be simpler.
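To fix the geometric picture, the following minimal Python sketch classifies points into the three regions for a toy two-atom molecule. The crude van der Waals-type classification used here stands in for the sharper surface constructions given in Section \ref{Section_Molecular_Surface}; all coordinates and radii are illustrative only.
\begin{verbatim}
import numpy as np

# Toy two-atom "molecule": centers and van der Waals radii.
atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
r_vdw = np.array([1.7, 1.2])
r_ion = 2.0   # probe ion radius defining the exclusion layer

def region(x):
    # Classify a point by its distance to the atom centers.
    d = np.linalg.norm(atoms - x, axis=1)
    if np.any(d <= r_vdw):
        return "Omega_m"
    if np.any(d <= r_vdw + r_ion):
        return "Omega_IEL"
    return "Omega_ions"

# Piecewise dielectric coefficient: low inside, high in the solvent.
eps = {"Omega_m": 2.0, "Omega_IEL": 80.0, "Omega_ions": 80.0}
for x in ([0.2, 0, 0], [2.5, 0, 0], [6.0, 0, 0]):
    r = region(np.asarray(x, dtype=float))
    print(x, "->", r, " eps =", eps[r])
\end{verbatim}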
We give precise mathematical definitions for these sets in Section \ref{Section_Molecular_Surface}. \begin{figure}[!ht] \centering \includegraphics[width=0.47\linewidth, valign=m]{insulin.pdf} \caption{Computational domain $\Omega$ with molecular domain $\Omega_m$ in blue, ion exclusion layer $\Omega_{IEL}$ in yellow and see-through, and ionic domain $\Omega_{ions}$. These domains were constructed from an insulin protein with the procedures described in Section \ref{Section_Molecular_Surface}. \label{All_regions_2D_and_3D}} \end{figure} The electrostatic potential $\hat{\phi}$ is governed by the Poisson equation, which is derived from Gauss's law of electrostatics. In CGS (centimeter-gram-second) units, the Poisson equation reads \begin{subequations}\label{general_form_PBE} \begin{equation} \label{general_PBE_1_classical_form} -\nabla\cdot\big(\epsilon\nabla \hat{\phi}\big)=4\pi \rho \quad\text{ in } \Omega_m \cup \Omega_s. \end{equation} Here, $\rho:=\chi_{\Omega_m}\rho_m+\left(\chi_{\Omega_{IEL}}\rho_{IEL}+\chi_{\Omega_{ions}}\rho_{ions}\right)$ denotes the charge density in $\Omega$, where $\rho_m$, $\rho_{IEL}$, and $\rho_{ions}$ are the charge densities\footnote{Since $\Omega_s=\left(\overline{\Omega_{IEL}}\setminus\Gamma\right)\cup \Omega_{ions}$, the sum $\chi_{\Omega_{IEL}}\rho_{IEL}+\chi_{\Omega_{ions}}\rho_{ions}$ gives the charge density in $\Omega_s$.} in $\Omega_m$, $\Omega_{IEL}$, and $\Omega_{ions}$, respectively, and $\chi_U$ denotes the characteristic function of the set $U$, defined by $\chi_U(x)=1$ if $x\in U$ and $\chi_U(x)=0$ elsewhere. In the molecular region $\Omega_m$, there are only fixed partial charges, so the charge density is \[ \rho_m=\sum_{i=1}^{N_m}{z_ie_0\delta_{x_i}}, \] where $N_m$ is the number of fixed partial charges, $z_i$ is the valency of the $i$-th partial charge, $x_i \in \Omega_m$ is its position, $e_0$ is the elementary charge, and $\delta_{x_i}$ denotes the delta function centered at $x_i$. In the region $\Omega_{IEL}$ there are neither fixed partial charges nor moving ions, and therefore the charge density there is $\rho_{IEL}=0$. In the region $\Omega_{ions}$, there are moving ions whose charge density is assumed to follow a Boltzmann distribution and is given by \[ \rho_{ions}=\sum_{j=1}^{N_{ions}}{M_j\xi_j e_0\mathrm{e}^{-\frac{\xi_je_0\widehat{\phi}}{k_BT}}}, \] where $N_{ions}$ is the number of different ion species in the solvent, $\xi_j$ is the valency of the $j$-th ion species, $M_j$ is its average concentration in $\Omega_{ions}$ measured in ${\# \textrm{ions}}/{\textrm{cm}^3}$, $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature. For more information on the physical constants used in the text see Table~\ref{Table_physical_constants_in_CGS}. \begin{table}[ht] \centering \begin{tabular}{ |c|c|c| } \hline abbreviation & name & value in CGS derived units\\ \hline \hline $N_A$ & Avogadro's number &$6.022140857 \times 10^{23}$ \\ $e_0$ & elementary charge &$4.8032424\times 10^{-10}$ esu \\ $k_B$ & Boltzmann's constant &$1.38064852\times 10^{-16}$erg $\text{K}^{-1}$\\ \hline \end{tabular} \captionof{table}{Physical constants used in this section. Here K denotes Kelvin as a unit of temperature, esu is the statcoulomb unit of electric charge, and erg the unit of energy which equals $10^{-7}$ joules.}\label{Table_physical_constants_in_CGS} \end{table} The physical problem requires that the potential $\hat{\phi}$ and the normal component of the displacement field $\epsilon\nabla \hat{\phi}$ are continuous across the interface $\Gamma$.
Thus, the equation \eqref{general_PBE_1_classical_form} is supplemented with the following continuity conditions \begin{align} \big[\hat{\phi}\big]_\Gamma&=0,\label{general_PBE_2_classical_form}\\ \big[\epsilon\nabla \hat{\phi}\cdot \bm n_\Gamma\big]_\Gamma&=0,\label{general_PBE_3_classical_form} \end{align} where $\left[\cdot\right]_\Gamma$ denotes the jump across the interface $\Gamma$ of the enclosed quantity. Finally, the system \eqref{general_PBE_1_classical_form}, \eqref{general_PBE_2_classical_form}, \eqref{general_PBE_3_classical_form} is complemented with the boundary condition \begin{equation} \label{general_PBE_4_classical_form} \hat{\phi}=\hat{g}_\Omega \quad\text{ on } \partial\Omega. \end{equation} We notice that in fact the physical problem prescribes a vanishing potential at infinity, that is, $\hat\phi(x)\to 0$ as $\abs{x}\to\infty$. However, in most practical situations one uses a bounded computational domain and imposes the Dirichlet boundary condition \eqref{general_PBE_4_classical_form} instead. In this case, the function $\hat{g}_\Omega$ can be prescribed using the exact solution of a simpler problem in the full $\mathbb{R}^d$ for the linearized equation with constant solvent permittivity, which can be expressed explicitly through Green functions (see Eq. (5) of \cite{Bond_2009} or \cite{Zhou_Payne_Vasquez_Kuhn_Levitt_1996}, for example). \end{subequations} By introducing the new functions $\phi=(e_0\hat{\phi})/(k_B T)$ and $g_\Omega=(e_0 \hat{g}_\Omega)/(k_B T)$, equations \eqref{general_PBE_1_classical_form}--\eqref{general_PBE_4_classical_form} can be written in the distributional sense in terms of the dimensionless potential $\phi$: \begin{equation} \label{GPBE_dimensionless} \begin{aligned} -\nabla\cdot\left(\epsilon\nabla \phi\right)+b(x,\phi)&=\pazocal{F} \quad \text{ in } \Omega,\\ \phi&=g_\Omega\quad\text{ on } \partial\Omega, \end{aligned} \end{equation} where $b(x,t):\Omega\times \mathbb R\to\mathbb R$ is defined by \begin{equation} \label{definition_b_GPBE} b(x,t):=-\frac{4\pi e_0^2}{k_BT} \sum_{j=1}^{N_{ions}}{\overline M_j(x)\xi_j \mathrm{e}^{-\xi_j t}} \text{ for all } x\in\Omega,\,t\in\mathbb R \end{equation} with $\overline M_j(x):=\chi_{\Omega_{ions}}M_j$ and \begin{equation}\label{definition_F} \pazocal{F}:=\frac{4\pi e_0^2}{k_BT}\sum_{i=1}^{N_m}{z_i\delta_{x_i}}. \end{equation} Observe that if the condition \begin{equation} \label{charge_neutrality_condition} \sum_{j=1}^{N_{ions}}{M_j\xi_j}=0 \end{equation} holds, then the solvent is electroneutral, and we refer to this as the {\it charge neutrality condition}. Obviously, in this case we have that $b(x,0)=0$ for all $x\in\Omega$. This is a quite standard assumption, which we do \emph{not} enforce for our analytical results. In nearly all biophysics models involving the PBE the solvent is charge neutral, but there are also some exceptions. One such exception is the so-called cell model (see Eq. (17) in \cite{Sharp_Honig_1990}), in which the macromolecule possesses a net charge, and exactly enough counterions of only one species are present to keep the volume to which all the charges are confined globally electrically neutral. We notice as well that $b(x_i,t)=0$ for all $t$ at the positions $x_i$ of the point charges in the molecular region. More precisely, $b(x,t)=0$ for a.e. $x\in \Omega_m\cup\Omega_{IEL}$ and all $t$. This observation will be crucial to our analysis below.
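For concreteness, the structure of the nonlinearity \eqref{definition_b_GPBE} is easy to explore numerically. In the following minimal sketch the physical prefactor $4\pi e_0^2/(k_BT)$ is collapsed into a single constant \texttt{C} set to $1$ (an arbitrary normalization); the sketch checks that $b(\cdot,0)=0$ under charge neutrality and that for a $1$:$1$ electrolyte $b$ reduces to the $\sinh$ form derived next.
\begin{verbatim}
import numpy as np

C = 1.0  # stands in for 4*pi*e0^2/(kB*T); arbitrary normalization

def b(t, xi, M):
    # b(., t) = -C * sum_j M_j xi_j exp(-xi_j t), evaluated inside
    # Omega_ions (elsewhere b vanishes since M_j bar = 0 there).
    t = np.asarray(t, dtype=float)
    return -C * sum(Mj * xj * np.exp(-xj * t) for xj, Mj in zip(xi, M))

# A charge-neutral 1:1 electrolyte: b(., 0) = 0, and b reduces to
# kbar^2 * sinh(t) with kbar^2 = 2*C*M.
xi, M = [-1.0, 1.0], [0.1, 0.1]
t = np.linspace(-3.0, 3.0, 7)
assert abs(b(0.0, xi, M)) < 1e-14
assert np.allclose(b(t, xi, M), 2.0 * C * M[0] * np.sinh(t))
\end{verbatim}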
Under the assumption that there are only two ion species in the solution with the same concentration $M_1=M_2=M$, which are univalent but with opposite charges, i.e., $\xi_j=(-1)^j,\,j=1,2$, we obtain the equation \begin{equation} \label{PBE_dimensionless} \begin{aligned} -\nabla\cdot\left(\epsilon \nabla \phi\right)+\overline{k}^2 \sinh\left(\phi \right)&=\pazocal{F} \quad\text{ in } \Omega,\\ \phi&=g_\Omega \quad\text{ on } \partial\Omega . \end{aligned} \end{equation} The coefficient $\overline{k}$ is defined by \begin{equation} \label{definition_of_piecewise_constant_k} \overline{k}^2(x)=\left\{ \begin{aligned} &0, &x&\in \Omega_m\cup\Omega_{IEL},\\ &\overline{k}_{ions}^2=\frac{8\pi N_A e_0^2I_s}{1000k_BT}, &x&\in \Omega_{ions}, \end{aligned} \right. \end{equation} where $N_A$ is Avogadro's number and the ionic strength $I_s$, measured in moles per liter (molar), is given by \[ I_s=\frac{1}{2}\sum_{j=1}^{2}{c_j \xi_j^2}=\frac{1000M}{N_A} \] with $c_1=c_2=\frac{1000M}{N_A}$ the average molar concentration of each ion (see \cite{Niedermeier_Schulten_1992, Bashford_2004}). Equation \eqref{PBE_dimensionless} is often referred to as the Poisson-Boltzmann equation \cite{Sharp_Honig_1990, Oberoi_Allewell_1993,Holst2012}. On the other hand, we will refer to \eqref{GPBE_dimensionless} as the General Poisson-Boltzmann equation (GPBE). The GPBE \eqref{GPBE_dimensionless} can be linearized by expanding $b(x,\cdot)$ in a Maclaurin series and keeping only the terms up to first order. We obtain the linearized GPBE (LGPBE) for the dimensionless potential $\phi$: \begin{equation} \label{LGPBE_dimensionless} \begin{aligned} -\nabla\cdot\left(\epsilon\nabla \phi\right)+\overline{m}^2\phi &=\pazocal{F} + \ell \quad\text{ in } \Omega,\\ \phi&=g_\Omega \quad\text{ on } \partial\Omega , \end{aligned} \end{equation} where \begin{equation} \label{definition_overline_m_and_l} \overline m^2(x):=\frac{4\pi e_0^2}{k_BT}\sum_{j=1}^{N_{ions}}{\overline M_j(x)\xi_j^2} \quad\text{ and }\quad \ell(x):=\frac{4\pi e_0^2}{k_BT}\sum_{j=1}^{N_{ions}}{\overline M_j(x) \xi_j}. \end{equation} \subsection{The molecular surface}\label{Section_Molecular_Surface} As we are working with a mesoscopic model, the subdomains appearing have to be precisely defined, and there is not necessarily only one way to do so; in fact, even the representation with sharp cutoffs is a modelling assumption. A common starting point is to consider the molecule as occupying the van der Waals set $\pazocal{V}$ (with $\partial \pazocal{V}$ the corresponding surface), which is given as a union of balls centered at the atomic positions, each with the van der Waals radius of the corresponding element. Since we are interested in the interaction with an ionic solution, our molecular region should be even larger than the van der Waals set, and account for the regions that cannot be accessed by the solvent. This leads to the solvent excluded surface (SES), which is formed by rolling a solvent probe modelled as a sphere on the van der Waals surface (see \cite{Lee_Richards_1971, Richards_SES_1977, Greer_Bush_1978}). A popular precise definition is that of Connolly \cite{Connolly_Analytic_Description_1983}, in which the SES is taken to be the surface traced by the boundary of the solvent sphere, and which we now describe mathematically.
\begin{figure}[!ht] \centering \includegraphics[width=0.38\linewidth, valign=m]{connolly_iel.pdf} \caption{Construction of the Connolly surface $\partial \pazocal{C}$ from the van der Waals set $\pazocal{V}$ (a union of balls centered at the positions of the charges $x_i$) by rolling a spherical probe of radius $r_p$, and construction of $\Omega_{IEL}$ by enlarging $\pazocal{V}$ by $r_I$. \label{Molecular_Surfaces}} \end{figure} Given an open set $\Sigma \subset \mathbb{R}^d$ and a radius $r>0$, we can define the sets generated by ``rolling a ball'' of radius $r$ inside it or outside it as \[[\Sigma]_r := \bigcup_{B(x,r) \subset \Sigma} B(x,r)\ \text{ and }\ [\Sigma]^r := \mathrm{int}\left( \mathbb{R}^d \setminus \big[\mathbb{R}^d \setminus \Sigma\big]_r\right),\] where $\mathrm{int}$ denotes the interior, so that both remain open. Moreover, whenever $[\Sigma]_r = \Sigma$ or $[\Sigma]^r = \Sigma$ we say that a ball of radius $r$ can roll freely inside it or outside it, respectively. With this, given a van der Waals set $\pazocal{V}$ and the van der Waals radius $r_p$ of a probe solvent molecule, we can define the Connolly set $\pazocal{C} := [\pazocal{V}]^{r_p}$, with $\partial \pazocal{C}$ the Connolly surface (see Figure \ref{Molecular_Surfaces}). Now, although $\pazocal{V}$ is a union of balls, so that clearly a ball of some small enough radius $r_{\pazocal{V}}$ can roll freely inside it and $[\pazocal{V}]_{r_{\pazocal{V}}}=\pazocal{V}$, the same is not necessarily true for $\pazocal{C}$ (see Section 3 in \cite{Walther_1999} for a counterexample). However, if additionally $[\pazocal{C}]_{r_0} = \pazocal{C}$ for some $r_0$, then we have that \[[\pazocal{C}]_{\min(r_0, r_p)} = [\pazocal{C}]^{\min(r_0, r_p)}=\pazocal{C}\] and in this situation we can apply Theorem 1 of \cite{Walther_1999} to conclude that $\partial \pazocal{C} \in C^{1,1}$. Intuitively, the condition $[\pazocal{C}]_{r_0} = \pazocal{C}$ will be satisfied when neither $\pazocal{V}$ nor $\mathbb{R}^d \setminus \pazocal{V}$ contains passages which are thinner than $2r_0$. Under these assumptions, if one chooses $\Omega_m = \pazocal{C}$ then $\partial \Omega_m$ is in particular $C^1$, a condition which will be needed for all our uniqueness results below (cf. the proofs of Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} and Theorem~\ref{Theorem_uniqueness_for_GPBE}), but for none of the existence ones. We also remark that the Connolly surface we have just described is never $C^2$: since it is a union of pieces of spheres, the curvature of $\partial \pazocal{C}$ jumps along their intersections. Above we have introduced the ion exclusion layer around the molecular region, which no ions can penetrate. A commonly used definition is to enlarge every ball of the van der Waals set $\pazocal{V}$ by the van der Waals radius $r_I$ of a probe ion molecule. If $r_I>r_p$, we can write (see Figure~\ref{All_regions_2D_and_3D}) \begin{equation} \Omega_{IEL} = \left\{ x \in \Omega \, :\, \dist(x\, , \, \pazocal{V}) < r_I\right\}\setminus \overline{\Omega_m}.\end{equation} The regularity of the outer boundary $\partial\Omega_{IEL}\setminus\Gamma$ of $\Omega_{IEL}$ plays no role in our analysis below.
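We note in passing that the rolling-ball operations $[\Sigma]_r$ and $[\Sigma]^r$ are exactly the morphological opening and closing of $\Sigma$ with a ball structuring element, so on a voxelized geometry the Connolly set $\pazocal{C}=[\pazocal{V}]^{r_p}$ can be approximated with standard image-processing tools. The following is a minimal Python sketch for a toy two-atom geometry; the grid spacing, coordinates, and radii are illustrative only.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def ball(r, h):
    # Voxelized ball of radius r on a grid with spacing h.
    n = int(np.ceil(r / h))
    ax = np.arange(-n, n + 1) * h
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    return X**2 + Y**2 + Z**2 <= r**2

# Voxelized van der Waals set V: a union of balls at atom centers.
h = 0.25
ax = np.arange(0.0, 16.0, h)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
centers = [(7.0, 8.0, 8.0), (9.0, 8.0, 8.0)]  # toy "molecule"
radii = [1.7, 1.5]                            # toy vdW radii
V = np.zeros(X.shape, dtype=bool)
for (cx, cy, cz), r in zip(centers, radii):
    V |= (X - cx)**2 + (Y - cy)**2 + (Z - cz)**2 <= r**2

# [Sigma]_r is a morphological opening and [Sigma]^r a closing, so
# the Connolly set C = [V]^{r_p} is a binary closing of V:
r_p = 1.4  # probe radius
C = ndimage.binary_closing(V, structure=ball(r_p, h))
print("voxels:", V.sum(), "->", C.sum())  # C contains V
\end{verbatim}
\section{Functional analytic setting}\label{Section_Setting} Our first goal is to give a meaningful notion of a weak solution to the problems GPBE \eqref{GPBE_dimensionless} and the LGPBE \eqref{LGPBE_dimensionless}, one which will ultimately ensure uniqueness.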
The semilinear elliptic equation \eqref{GPBE_dimensionless} combines several features that significantly complicate its treatment: a discontinuous dielectric coefficient $\epsilon$, a measure right hand side $\pazocal{F}$ defined in \eqref{definition_F}, and an unbounded nonlinearity $b(x,\cdot)$ defined in \eqref{definition_b_GPBE}. Before we get into the solution theory of \eqref{LGPBE_dimensionless} and \eqref{GPBE_dimensionless}, we introduce some notation concerning the function spaces that will be used. \begin{assumption}[Domain and permittivity]\label{Assumption_Domain_permittivity} The domain $\Omega$ is assumed to be open, bounded, and with Lipschitz boundary $\partial \Omega$, whose outward unit normal vector exists almost everywhere (with respect to the area measure) and is denoted by $\bm n_{\partial\Omega}$. The molecule subdomain $\Omega_m$ lies strictly inside $\Omega$, i.e., $\overline \Omega_m \subset \Omega$, and the interface $\Gamma=\partial \Omega_m$ is assumed to be $C^1$. We assume that the boundary data $g_\Omega$ is globally Lipschitz on the boundary, that is, $g_\Omega\in C^{0,1}(\partial\Omega)$. Moreover, the dielectric permittivity $\epsilon$ is assumed\footnote{Treating anisotropic permittivities (when $\epsilon$ is a symmetric matrix) is straightforward with the same methods used here, as long as all these assumptions are satisfied.} to be constant in $\Omega_m$, equal to $\epsilon_m$, and variable in $\Omega_s$ with $\epsilon_s\in C^{0,1}(\overline{\Omega_s})$; it is allowed to have a jump discontinuity across the interface $\Gamma$. \end{assumption} For $1\leq p < \infty$, the standard Sobolev space $W^{1,p}(\Omega)$ consists of functions which together with their first order weak partial derivatives lie in the Lebesgue space $L^p(\Omega)$. The subspace $W_0^{1,p}(\Omega)$ for $1\leq p<\infty$ denotes the closure of the set of smooth functions with compact support in $\Omega$, $C_c^\infty(\Omega)$, with respect to the strong topology of $W^{1,p}(\Omega)$. Given $g \in W^{1-1/p, p}(\partial \Omega)$ and recalling (see Theorem 18.34 in \cite{Leoni_2017}) that the trace operator of $W^{1,p}(\Omega)$, denoted by $\gamma_p$, is surjective onto this space, we define the set \[W_g^{1,p}(\Omega):=\{v\in W^{1,p}(\Omega) \,:\, \gamma_p(v)=g\}.\] By $\langle \cdot , \cdot \rangle$ we denote the duality pairing in $W^{-1,p'}(\Omega)\times W_0^{1,p}(\Omega)$ for some $1\leq p<\infty$, where $W^{-1,p'}(\Omega)$ denotes the dual space of $W_0^{1,p}(\Omega)$. In particular, we will also use this notation for the action of measures considered as elements of $W^{-1,p'}(\Omega)$ for $p>d$, taking into account the Sobolev embedding into continuous functions. Whenever $p=2$ we also use the standard notation $H^1(\Omega)=W^{1,2}(\Omega)$, and analogously for $H^1_0(\Omega)$, the trace space $H^{1/2}(\partial \Omega)$ and its dual $H^{-1/2}(\partial \Omega)$. For a vector valued function $\bm\psi\in \left[C^\infty\big(\overline\Omega\big)\right]^d$, the evaluation of the normal trace operator $\gamma_n$ is defined almost everywhere on $\partial\Omega$ as the restriction of $\bm\psi\cdot \bm n_{\partial\Omega}$ to $\partial\Omega$. It is well known that the mapping $\gamma_n$ can be extended by continuity to a continuous linear operator from $H(\div;\Omega)$ onto $H^{-1/2}(\partial\Omega)$, which we still denote by $\gamma_n$ (see, e.g., Theorem 2 in Section 1.3 of \cite{Dautray_Lions_Volume_6}). We would like to handle elliptic equations with measures as right hand side.
In view of the Riesz representation theorem (see Theorem B.111 in \cite{Leoni_2017}), which tells us that the Banach space of bounded signed Radon measures is given as \begin{equation} \label{definition_C_0_overline_Omega} \pazocal{M}(\Omega):=\left(C_0(\overline\Omega)\right)^*\ \text{ with }\ C_0(\overline \Omega):=\{v\in C(\overline \Omega)\,:\, v=0 \text{ on } \partial\Omega\}, \end{equation} we should test weak formulations for such equations only with continuous functions. The Morrey-Sobolev inequality then suggests that it is natural to introduce \begin{equation}\label{Definition_M_and_N} \mathfrak{M}:=\bigcap_{p<\frac{d}{d-1}}{W^{1,p}(\Omega)} \quad\text{ and }\quad \mathfrak{N}:=\bigcup_{q>d}{W_0^{1,q}(\Omega)}, \end{equation} where we note that not only $\mathfrak{M}$ but also $\mathfrak{N}$ is a linear space, since $\Omega$ is bounded and the spaces $W_0^{1,q}(\Omega)$ are nested. The following lemma is easy to check (see Exercises 9.12 and 11.50 in \cite{Leoni_2017}). \begin{lemma}\label{Lemma_extension_pf_Lipschitz} Let $g\in C^{0,1}(\partial\Omega)$. Then there exists an extension $u_g\in C^{0,1}(\Omega)$ such that $\restr{u_g}{\partial\Omega}=g$. Moreover, $u_g\in W^{1,\infty}(\Omega)$. \end{lemma} Therefore, for a given Lipschitz function $g\in C^{0,1}(\partial\Omega)$, we have that $g$ lies in all trace spaces $W^{1-1/p, p}(\partial \Omega)$, and we denote \begin{equation}\label{Definition_Mg} \mathfrak{M}_{g}:=\bigcap_{p<\frac{d}{d-1}}{W_{g}^{1,p}(\Omega)}. \end{equation} To understand the interplay between the differential operator $\phi\mapsto -\div(\epsilon(x)\nabla \phi)$, the measure $\pazocal{F}$, and the nonlinearity $b(x,\cdot)$ in \eqref{GPBE_dimensionless}, we start by discussing the linearized problem \eqref{LGPBE_dimensionless}. \subsection{Linear elliptic equations with measure right hand side} First we notice that \eqref{LGPBE_dimensionless} falls into the more general class of linear elliptic problems of the form \begin{equation} \label{general_form_linear_problem_with_measure_RHS} \begin{aligned} -\div\left(\bm A \nabla \phi\right)+c\phi&=\mu \quad\text{ in }\Omega,\\ \phi&=g \quad\text{ on } \partial\Omega, \end{aligned} \end{equation} where $\bm A$ is a symmetric matrix with entries in $L^\infty(\Omega)$ which satisfies the usual uniform ellipticity condition \begin{equation} \label{uniform_ellipticity_of_A} \underline \alpha\abs{\xi}^2\leq \bm A(x)\xi\cdot \xi\quad\text{ for some }\underline{\alpha}>0,\text{ all } \xi=(\xi_1,\ldots,\xi_d)\in\mathbb R^d,\text{ and a.e. } x\in \Omega, \end{equation} $c\in L^\infty(\Omega)$, and $\mu \in \pazocal{M}(\Omega)$. There are different notions of solution to \eqref{general_form_linear_problem_with_measure_RHS}. Here we mention two approaches in the case $g=0$. The first one is due to Stampacchia \cite{Stampacchia_1965}, who introduced a notion of solution to \eqref{general_form_linear_problem_with_measure_RHS} defined by duality, using the adjoint of the complete second order operator.
The second one is due to Boccardo and Gallou\"et, first appearing in \cite{Boccardo_Gallouet_1989}, where weak solutions of \eqref{general_form_linear_problem_with_measure_RHS} are defined to be those satisfying \begin{equation} \label{Distributional_solution_defined_by_approximation_Boccardo_Gallouet} \begin{aligned} &\phi\in \mathfrak{M}_0 \,\text{ and }\,\int_{\Omega}{\bm A\nabla \phi\cdot\nabla v \dd x} + \int_{\Omega}{c\phi v \dd x}=\int_{\Omega}{v \dd\mu}\quad \text{ for all } v\in \mathfrak{N}, \end{aligned} \end{equation} and whose existence is proved by passing to the limit in the solutions for more regular data $\mu$. The solution $\phi$ defined by duality in the framework of Stampacchia is unique and can be shown to satisfy the weak formulation \eqref{Distributional_solution_defined_by_approximation_Boccardo_Gallouet} above as well. However, applying this approach is not always possible, and for the nonlinear GPBE \eqref{GPBE_dimensionless} it is not clear how to do so, since we have discontinuous space-dependent coefficients in the principal part. On the other hand, the approximation approach of Boccardo and Gallou\"et can be extended to relatively general nonlinear elliptic problems (see Theorems 1 and 3 in \cite{Boccardo_Gallouet_1989} and Theorem 1 in \cite{Boccardo_Gallouet_Nonlinear_equations_with_RHS_measures_1992}). Since the weak formulation for the latter notion of solution only involves integrating by parts once, it can be immediately posed for \eqref{GPBE_dimensionless}, as in \eqref{weak_formulation_General_PBE_W1p_spaces} below, so in this work we focus on this type of weak solutions. Some works further discussing the relations between these notions of solution are \cite{Prignet_1995, Meyer_Panizzi_Schiela_2011}. A difficulty in adopting this notion is that in dimension $d\ge 3$, and for a general diffusion coefficient matrix $\bm A$ which is in $L^\infty(\Omega)$ and satisfies the uniform ellipticity condition \eqref{uniform_ellipticity_of_A}, the weak formulation \eqref{Distributional_solution_defined_by_approximation_Boccardo_Gallouet} could nevertheless exhibit nonuniqueness, as shown by a counterexample due to Serrin \cite{Serrin_Pathological_Solutions_Elliptic_PDEs,Prignet_1995}. However, under some assumptions on the regularity of the coefficient matrix $\bm A$, one can still show the uniqueness of a weak solution to \eqref{general_form_linear_problem_with_measure_RHS} in the sense of \eqref{Distributional_solution_defined_by_approximation_Boccardo_Gallouet} by employing an adjoint problem with a more regular right-hand side. One such applicable result is a classical one due to Meyers (see \cite{Meyers_Lp_estimates_1964}, Theorem 4.1 and Theorem 4.2 in \cite{Bensoussan_Lions_Papanicolaou_book_1978}, or \cite{Gallouet_Monier_On_the_regularity_of_solutions_Meyers_Theorem_1999} for Lipschitz domains), covering general $\bm A$ but only $d=2$, as used for uniqueness in Theorem 2 of \cite{Gallouet_Herbin_Existence_of_a_solution_to_coupled_elliptic_systems_1994}. For higher dimensions, restrictions on $\bm A$ are necessary; for example, the case $\bm A = \bm I$ and $d=3$ is treated in Theorem 2.1 of \cite{Droniou_Gallouet_Herbin_2003} by using a regularity result of Grisvard \cite{Grisvard_Elliptic_Problems_in_Nonsmooth_Domains}.
Since these results do not apply to our case, in Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} below we instead apply a more recent optimal regularity result for elliptic interface problems proved in \cite{Optimal_regularity_for_elliptic_transmission_problems}, which still requires the interface $\partial\Omega_m$ to be quite smooth. \begin{theorem}\label{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} Assume that $\Omega\subset \mathbb R^d$ with $d\in \{2, 3\}$ is a bounded Lipschitz domain and let $\Omega_m\subset\Omega$ be another domain with a $C^1$ boundary and $\partial \Omega_m \cap \partial \Omega = \emptyset$. Let $\bm A$ be a function on $\Omega$ with values in the set of real, symmetric $d\times d$ matrices which is uniformly continuous on both $\Omega_m$ and $\Omega\setminus \overline{\Omega_m}$. Additionally, $\bm A$ is supposed to satisfy the ellipticity condition \eqref{uniform_ellipticity_of_A}. Further, let $g\in H^{1/2}(\partial\Omega)$, let $\mu\in\pazocal{M}(\Omega)$ be a bounded Radon measure, and let $c\in L^\infty(\Omega)$ be such that $c(x)\ge 0$ a.e. in $\Omega$. Then the problem \begin{equation}\label{General_linear_problem_with_measure_RHS} \begin{aligned} &\text{Find }\varphi\in \mathfrak{M}_g \text{ such that }\\ &\int_{\Omega}{{\bm A}\nabla \varphi\cdot\nabla v \dd x}+\int_{\Omega}{c\,\varphi v \dd x}=\int_{\Omega}{v \dd\mu}\quad \text{ for all } v\in \mathfrak{N} \end{aligned} \end{equation} has a unique solution. \end{theorem} \begin{proof} \textbf{Existence:} The existence of a solution $\varphi$ of problem \eqref{General_linear_problem_with_measure_RHS} in the case of a homogeneous Dirichlet boundary condition, i.e., $g=0$ on $\partial\Omega$, follows from Theorem~3 in \cite{Boccardo_Gallouet_1989}, where a solution is obtained as the limit of solutions to problems with regular right-hand sides. In the case where $g$ is not identically zero on $\partial\Omega$, one can find a solution of \eqref{General_linear_problem_with_measure_RHS} using linearity. We split $\varphi$ into two components $\varphi_D$ and $\varphi_0$, such that $\varphi_D$ satisfies a linear problem with nonhomogeneous Dirichlet boundary condition and zero right-hand side, i.e., \begin{equation} \label{linear_problem_for_uD_W1p_spaces} \begin{aligned} &\varphi_D\in \mathfrak{M}_g \text{ such that }\\ &\int_{\Omega}{{\bm A}\nabla \varphi_D\cdot\nabla v \dd x}+\int_{\Omega}{c\,\varphi_Dv \dd x}=0\quad \text{ for all } v\in \mathfrak{N}, \end{aligned} \end{equation} and $\varphi_0$ satisfies a linear problem with homogeneous Dirichlet boundary condition and measure right-hand side: \begin{equation} \label{linear_problem_for_u0_W1p_spaces} \begin{aligned} &\varphi_0\in \mathfrak{M}_0 \text{ such that }\\ &\int_{\Omega}{{\bm A}\nabla \varphi_0\cdot\nabla v \dd x}+\int_{\Omega}{c\,\varphi_0v \dd x}=\int_{\Omega}{v \dd\mu}\quad \text{ for all } v\in \mathfrak{N}. \end{aligned} \end{equation} Clearly, if we replace the solution space in \eqref{linear_problem_for_uD_W1p_spaces} with $H_g^1(\Omega)$ and the test space with $H_0^1(\Omega)$, there exists a unique solution $\varphi_D\in H_g^1(\Omega)$ (by the Lax-Milgram theorem). Since $H_g^1(\Omega)\subset \bigcap_{p<\frac{d}{d-1}}{W_g^{1,p}(\Omega)}$ and $\bigcup_{q>d}{W_0^{1,q}(\Omega)}\subset H_0^1(\Omega)$, it is clear that this $\varphi_D$ also solves \eqref{linear_problem_for_uD_W1p_spaces}.
By Theorem~3 in \cite{Boccardo_Gallouet_1989}, problem \eqref{linear_problem_for_u0_W1p_spaces} also possesses a solution $\varphi_0$, obtained by approximation as the weak (even strong) limit in $W_0^{1,q}(\Omega)$, for every fixed $q<\frac{d}{d-1}$, of a sequence of solutions $\{\varphi_{0,n}\}_{n \in \mathbb{N}}$ of $H^1$ weak formulations with regularized right-hand sides. Now it is clear that $\varphi=\varphi_D+\varphi_0$ solves \eqref{General_linear_problem_with_measure_RHS}, and in fact the functions $\varphi_D+\varphi_{0,n}$ provide the same kind of approximation, since they satisfy $H^1$ weak formulations of linear problems with the same regularized right-hand sides and nonhomogeneous boundary condition given by $g$ on $\partial\Omega$. \textbf{Uniqueness:} It is enough to show that if $\varphi$ satisfies the homogeneous problem \eqref{General_linear_problem_with_measure_RHS} with $\mu=0$ and $g=0$, then $\varphi=0$. For a fixed $\theta\in L^\infty(\Omega)$, we consider the auxiliary problem \begin{equation} \label{Auxiliary_problem_for_uniqueness_of_General_linear_elliptic_problem_with_measure_RHS} \begin{aligned} &\text{Find }z\in H_0^1(\Omega) \text{ such that }\\ &\int_{\Omega}{{\bm A}\nabla z\cdot\nabla v \dd x}+\int_{\Omega}{c\,zv \dd x}=\int_{\Omega}{\theta v \dd x}\quad\text{ for all } v\in H_0^1(\Omega). \end{aligned} \end{equation} By the Lax-Milgram theorem, this problem has a unique solution $z\in H_0^1(\Omega)$. In view of the Sobolev embedding theorem, for $d=3$ we have $H^1(\Omega)\hookrightarrow L^6(\Omega)$, and for $d=2$, $H^1(\Omega)\hookrightarrow L^r(\Omega)$ for all $1\leq r<\infty$. Therefore, in both cases $z\in L^6(\Omega)$ and consequently $(-cz+\theta)\in L^6(\Omega)$. Since $v \mapsto \int_{\Omega}{\left(-cz+\theta\right) v \dd x}$ defines a bounded linear functional in $W^{-1,p'}(\Omega)$ for all $p'\in (d,6]$ (equivalently, for all $\frac{6}{5}\leq p<\frac{d}{d-1}$), and since $\Gamma \in C^1$, by applying Theorem~\ref{Thm_Optimal_Regularity_for_Elliptic_Interface_Problems} it follows that $z\in W_0^{1,q_0}(\Omega)$ for some $q_0\in(d,6]$. By a density argument we see that \eqref{Auxiliary_problem_for_uniqueness_of_General_linear_elliptic_problem_with_measure_RHS} holds for all test functions $v\in W_0^{1,q_0'}(\Omega)$ with $1/q_0+1/q_0'=1$. Thus, we can use $z$ as a test function in \eqref{General_linear_problem_with_measure_RHS} (with $\mu=0$, $g=0$) and $\varphi$ as a test function in \eqref{Auxiliary_problem_for_uniqueness_of_General_linear_elliptic_problem_with_measure_RHS}. In this way, we obtain \begin{equation} 0=\int_{\Omega}{{\bm A}\nabla z\cdot\nabla \varphi \dd x}+\int_{\Omega}{cz \varphi \dd x}=\int_{\Omega}{\theta \varphi \dd x}. \end{equation} Since $\theta$ was an arbitrary function in $L^\infty(\Omega)$, it follows that $\varphi=0$ a.e. in $\Omega$. \end{proof} In light of this existence and uniqueness result, this notion of weak solution is indeed applicable to the LGPBE: \begin{definition}\label{definition_weak_solution_LGPBE_W1p_spaces} A measurable function $\phi$ is called a weak solution of \eqref{LGPBE_dimensionless} if it satisfies \begin{equation} \label{LGPBE_dimensionless_Weak_formulation_W1p_spaces} \begin{aligned} \phi \in \mathfrak{M}_{g_\Omega}\ \text{ and }\ \int_{\Omega}{\epsilon \nabla \phi\cdot \nabla v \dd x}+\int_{\Omega}{\overline{m}^2\phi v\dd x}=\langle \pazocal{F},v\rangle + \int_{\Omega}{\ell v\dd x} \text{ for all } v\in \mathfrak{N}.
\end{aligned}\tag{wLGPBE} \end{equation} \end{definition} \subsection{Semilinear elliptic equations with measure right hand side} A natural way to extend the weak formulation \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces} to the semilinear case of the GPBE is as follows: \begin{definition}\label{definition_weak_formulation_full_GPBE_dimensionless} We call $\phi$ a weak solution of problem \eqref{GPBE_dimensionless} if \begin{equation}\label{weak_formulation_General_PBE_W1p_spaces} \begin{aligned} &\phi \in \mathfrak{M}_{g_\Omega}, \ \ {b(x,\phi)v\in L^1(\Omega)}\text{ for all }v\in \mathfrak{N}, \text{ and}\\ &\int_{\Omega}{\epsilon\nabla \phi\cdot\nabla v \dd x}+\int_{\Omega}{b(x,\phi)v \dd x}=\langle \pazocal{F}, v\rangle \, \text{ for all } \,v\in \mathfrak{N}. \end{aligned}\tag{wGPBE} \end{equation} \end{definition} The approximation schemes used for the existence of this type of solutions in Theorems 2 and 3 of \cite{Boccardo_Gallouet_1989} treat either $L^1$ data with no growth condition on the semilinear term, or measure data with growth conditions on it. In our case, however, the nonlinearity $b$ does not map boundedly into any $L^q$ space, so it is not quite clear how to implement such an approximation scheme. We instead treat existence in Section \ref{Section_Existence_GPBE} by a variational approach that exploits the particular structure of the biomolecular geometry introduced in the previous section. Since the right hand side is supported in $\Omega_m$ while $b$ vanishes there, the solution can be split additively, reflecting these different contributions, as done in Section \ref{Section_Splitting} below. The energy formally associated to \eqref{definition_weak_formulation_full_GPBE_dimensionless} with $\pazocal{F}=0$ is convex, and no growth bounds on $b$ are needed to apply the direct method of the calculus of variations; however, their absence means that this energy functional is not differentiable, so to return to the weak formulation we will prove an a priori $L^\infty$ estimate for the minimizers. The question of existence and uniqueness for more general linear and nonlinear elliptic problems involving measure data is studied in many further works, some notable ones being \cite{Droniou_Gallouet_Herbin_2003, Boccardo_Murat_property_of_nonlinear_elliptic_PDEs_with_measure_1994, Bartolucci_Leoni_Orsina_Ponce_exp_nonlinearity_and_measure_2005, Orsina_Prignet_strong_stability_results_for_elliptic_PDEs_with_measure_2002, Ponce_book_2016, Brezis_Marcus_Ponce_Nonlinear_Elliptic_With_Measures_Revisited, Brezis_Nonlinear_Elliptic_Equations_Involving_Measures, Benilan_Brezis_Thomas_Fermi_Equation_2003}. There are many nontrivial cases; for example, in \cite{Brezis_Nonlinear_Elliptic_Equations_Involving_Measures} it is shown that even the simple equation $-\Delta u+\abs{u}^{p-1}u=\delta_a$ with $u=0$ on $\partial\Omega$ and $a\in\Omega$ does not have a solution in $L_{loc}^p(\Omega)$ for any $p\ge \frac{d}{d-2}$ when $d\ge 3$. \section{Electrostatics of point charges and solution splittings}\label{Section_Splitting} As we have already mentioned, we aim to use the particular geometry and coefficients of the biomolecular setting to show existence of solutions for \eqref{GPBE_dimensionless} in the weak sense of \eqref{weak_formulation_General_PBE_W1p_spaces}. This is done by an additive splitting of the solutions based on the Green function for the Poisson equation on the full space, or in more physical terms, the Coulomb potential for electrostatics in a uniform dielectric.
This procedure is common in the applications literature, so we orient ourselves towards the same kind of decompositions used there, which we explain in this section. To this end, define the function $G: \Omega \to \overline{\mathbb{R}}$ by \begin{equation} \label{expression_for_G_2d_and_3d} \begin{aligned} G(x)&=\sum_{i=1}^{N_m}{G_i(x)}=-\frac{2 e_0^2}{\epsilon_m k_BT}\sum_{i=1}^{N_m}{z_i \ln{|x-x_i|}}\quad\text{ if }d=2,\\ G(x)&=\sum_{i=1}^{N_m}{G_i(x)}=\frac{e_0^2}{\epsilon_m k_BT}\sum_{i=1}^{N_m}{\frac{z_i}{|x-x_i|}}\quad\text{ if }d=3. \end{aligned} \end{equation} This function describes the singular or Coulomb part of the potential due to the point charges $\{z_ie_0\}_{i=1}^{N_m}$ in a uniform dielectric medium with dielectric constant $\epsilon_m$. It satisfies \begin{equation} \label{PDE_for_G} -\nabla\cdot(\epsilon_m \nabla G)=\pazocal{F}\quad\text{in }\mathbb R^d,\, d\in\{2,3\}, \end{equation} in the sense of distributions, that is, \begin{equation} \label{Distributional_Laplacian_for_G} -\int_{\mathbb R^d}{\epsilon_m G \Delta v \dd x}=\langle \pazocal{F},v\rangle\quad\text{for all }v\in C_c^\infty(\mathbb R^d). \end{equation} In particular, \eqref{Distributional_Laplacian_for_G} is valid for all $v\in C_c^\infty(\Omega)$. Note that $G$ and $\nabla G$ are in $L^p(\Omega)$ for all $p<\frac{d}{d-1}$, and thus $G\in \mathfrak{M}_G=\bigcap_{p<\frac{d}{d-1}}{W_G^{1,p}(\Omega)}$. This means that we can integrate by parts in \eqref{Distributional_Laplacian_for_G} to obtain \begin{equation}\label{Weak_formulation_for_G_smooth_test_functions} \int_{\Omega}{\epsilon_m \nabla G \cdot \nabla v \dd x}=\langle \pazocal{F},v\rangle\quad\text{for all }v\in C_c^\infty(\Omega). \end{equation} For a fixed $q>d$, owing to the Sobolev embedding $W_0^{1,q}(\Omega)\hookrightarrow C_0(\overline\Omega)$ (see Theorem 4.12 in \cite{Adams_Fournier_Sobolev_Spaces}), $\pazocal{F}$ is bounded on $W_0^{1,q}(\Omega)$, i.e., \begin{equation*} \abs{\langle \pazocal{F},v\rangle}=\Bigg|\frac{4\pi e_0^2}{k_BT}\sum_{i=1}^{N_m}{z_i v(x_i)}\Bigg|\leq \frac{4\pi e_0^2}{k_BT}\sum_{i=1}^{N_m}{\abs{z_i}}\|v\|_{L^\infty(\Omega)}\leq C_E \frac{4\pi e_0^2}{k_BT}\sum_{i=1}^{N_m}{\abs{z_i}}\|v\|_{W^{1,q}(\Omega)}, \end{equation*} where $C_E$ is the constant in the inequality $\|v\|_{L^\infty(\Omega)}\leq C_E\|v\|_{W^{1,q}(\Omega)}$. Since $C_c^\infty(\Omega)$ is dense in $W_0^{1,q}(\Omega)$, we see that \eqref{Weak_formulation_for_G_smooth_test_functions} is valid for all $v\in W_0^{1,q}(\Omega)$ and, consequently, for all $v\in \mathfrak{N}=\bigcup_{q>d}{W_0^{1,q}(\Omega)}$. Hence, the electrostatic potential $G$ generated by the charges $\{e_0z_i\}_{i=1}^{N_m}$ in a uniform dielectric with the dielectric coefficient $\epsilon_m$ belongs to $\mathfrak{M}_{G}$ and satisfies the integral relation \begin{equation}\label{Weak_formulation_for_G} \int_{\Omega}{\epsilon_m \nabla G \cdot \nabla v \dd x}=\langle \pazocal{F},v\rangle\quad\text{for all }v\in \mathfrak{N}, \end{equation} indicating that subtracting $G$ from a weak solution $\phi$ satisfying either \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces} or \eqref{weak_formulation_General_PBE_W1p_spaces} allows us to remain within the same weak solution framework. In fact, \eqref{Weak_formulation_for_G} can be seen as motivation for this notion of solution, since general measure data cannot be more singular than a point charge. This observation leads to the definition of linear 2-term and 3-term splittings of~$\phi$ based on $G$, which we describe below.
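Numerically, the singular component is cheap to evaluate pointwise. The following minimal Python sketch implements \eqref{expression_for_G_2d_and_3d}, with the physical prefactor $e_0^2/(\epsilon_m k_BT)$ collapsed into a constant \texttt{c0} set to $1$ (an arbitrary normalization).
\begin{verbatim}
import numpy as np

c0 = 1.0  # stands in for e0^2/(eps_m*kB*T); arbitrary normalization

def coulomb_G(x, charges, positions, d=3):
    # Evaluate G at points x (shape (n, d)) for valencies z_i at
    # positions x_i, following the 2d and 3d formulas above.
    x = np.atleast_2d(np.asarray(x, dtype=float))
    G = np.zeros(len(x))
    for z, xi in zip(charges, positions):
        r = np.linalg.norm(x - np.asarray(xi), axis=1)
        G += c0 * z / r if d == 3 else -2.0 * c0 * z * np.log(r)
    return G

# Toy example: two unit charges of opposite sign in 3d.
pts = np.array([[0.5, 0.0, 0.0], [0.0, 0.5, 0.5]])
print(coulomb_G(pts, charges=[1, -1],
                positions=[[0, 0, 0], [1, 0, 0]]))
\end{verbatim}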
In Section \ref{Section_Existence_GPBE} we will use these splittings to obtain the existence of a solution to \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces} and \eqref{weak_formulation_General_PBE_W1p_spaces} without having to deal with the measure data $\pazocal{F}$ directly. Moreover, if $\phi$ is unique, as proved under mild assumptions in Section \ref{Section_uniqueness_for_GPBE}, then there is no difference between the particular solutions found by 2-term and 3-term splitting, providing justification for these commonly used strategies. \subsection{2-term splitting}\label{Subsection_2-term_splitting_GPBE} As anticipated above and also commonly used in practice (see, e.g., \cite{Chen2006b,Zhou_Payne_Vasquez_Kuhn_Levitt_1996,Niedermeier_Schulten_1992}) we can split the full potential of \eqref{weak_formulation_General_PBE_W1p_spaces} as $\phi=u+G$, where $u$ is a well-behaved regular component and $G$ is defined by \eqref{expression_for_G_2d_and_3d}. In this case $u$ is usually called the reaction field potential, which accounts for the forces acting on a biomolecule due to the presence of the solvent (see \cite{Niedermeier_Schulten_1992,Rogers_Sternberg_1984}). Taking into account \eqref{Weak_formulation_for_G}, we obtain the following integral identity for $u$: \begin{equation}\label{Weak_formulation_for_uregular_General_PBE_W1p_spaces} \begin{aligned} &\text{Find }u\in \mathfrak{M}_{g_\Omega-G} \text{ such that } b(x,u+G)v \in L^1(\Omega)\,\text{ for all }v\in \mathfrak{N}\,\text{ and }\\ &\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{b(x,u+G)v \dd x}=\int_{\Omega_s}{(\epsilon_m-\epsilon_s)\nabla G\cdot\nabla v \dd x}=:\langle \pazocal{G}_2,v\rangle \, \text{ for all } \,v\in \mathfrak{N}. \end{aligned} \end{equation} The advantage of this formulation is that in contrast to the situation in \eqref{weak_formulation_General_PBE_W1p_spaces} the right hand side $\pazocal{G}_2$ belongs to $H^{-1}(\Omega)$ and is supported on $\Gamma$ if $\epsilon_s$ is constant (see Remark~\ref{Remark_RHS_2-term_splitting_rewritten} below). Noticing that $H_{g_\Omega-G}^1(\Omega)\subset \mathfrak{M}_{g_\Omega-G}$ we can consider the weak formulation with $H^1$ trial space \begin{equation}\label{3_weak_formulations_2_term_splitting_uregular} \begin{aligned} &\text{Find } u\in H_{g_\Omega-G}^1(\Omega)\text{ such that } b(x,u+G)v\in L^1(\Omega) \text{ for all } v\in \pazocal{W} \,\text{ and }\\ &\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{b(x,u+G)v\dd x}=\langle \pazocal{G}_2,v\rangle \,\text{ for all }v\in \pazocal{W}. \end{aligned} \end{equation} In \eqref{3_weak_formulations_2_term_splitting_uregular} we do not fix the testing space $\pazocal{W}$ yet, since proving existence of such a $u$ will be nontrivial. In any case we remark that we can go back to a solution $\phi$ of \eqref{weak_formulation_General_PBE_W1p_spaces} as soon as $\mathfrak{N} \subset \pazocal{W}$. For example, using $\pazocal{W}=H_0^1(\Omega)$ or $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ (for which finding $u$ is clearly easier) would be enough, since functions in $\mathfrak{N}$ are bounded by the Sobolev embedding $W^{1,q}(\Omega)\subset L^\infty(\Omega)$ for $q>d$.
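For the reader's convenience we record the one-line computation behind \eqref{Weak_formulation_for_uregular_General_PBE_W1p_spaces}; here we only use that the dielectric coefficient equals $\epsilon_m$ on $\Omega_m$ and $\epsilon_s$ on $\Omega_s$. Writing \eqref{weak_formulation_General_PBE_W1p_spaces} for $\phi=u+G$ and subtracting \eqref{Weak_formulation_for_G}, we get for all $v\in\mathfrak{N}$
\begin{equation*}
\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{b(x,u+G)v \dd x}=\int_{\Omega}{(\epsilon_m-\epsilon)\nabla G\cdot\nabla v \dd x}=\int_{\Omega_s}{(\epsilon_m-\epsilon_s)\nabla G\cdot\nabla v \dd x},
\end{equation*}
where the last equality holds because the integrand vanishes on $\Omega_m$, where $\epsilon=\epsilon_m$.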
\begin{remark}\label{Remark_RHS_2-term_splitting_rewritten} Recalling that $\epsilon_s\in C^{0,1}(\overline \Omega_s)$ (and therefore $\epsilon_s\in W^{1,\infty}(\Omega_s)$) and that $G$ is harmonic in a neighborhood of $\Omega_s$, we see that $\bm \psi:=(\epsilon_m-\epsilon_s)\nabla G\in \left[H^1(\Omega_s)\cap L^\infty(\Omega_s)\right]^d$ and that its weak divergence is given by $\div(\bm \psi)=\nabla(-\epsilon_s)\cdot\nabla G+(\epsilon_m-\epsilon_s) \Delta G$ (see Proposition 9.4 in \cite{Brezis_FA}). Thus, we can rewrite the term $\langle\pazocal{G}_2,v\rangle$ on the right-hand side of \eqref{3_weak_formulations_2_term_splitting_uregular} by applying the integration by parts formula: \begin{equation}\label{RHS_2-term_splitting_rewritten} \begin{aligned} \int_{\Omega_s}{(\epsilon_m-\epsilon_s)\nabla G\cdot\nabla v \dd x}=&-\int_{\Gamma}{(\epsilon_m-\epsilon_s)\nabla G\cdot \bm n_\Gamma\, v \dd s} + \int_{\partial\Omega}{(\epsilon_m-\epsilon_s)\nabla G\cdot \bm n_{\partial\Omega}\, v \dd s}\\ &\qquad -\int_{\Omega_s}{\left(\nabla(-\epsilon_s)\cdot\nabla G+(\epsilon_m-\epsilon_s) \Delta G\right)v \dd x}\\ =&-\int_{\Gamma}{(\epsilon_m-\epsilon_s)\nabla G\cdot \bm n_\Gamma\, v \dd s}+\int_{\Omega_s}{\nabla \epsilon_s \cdot\nabla G\, v \dd x}, \end{aligned} \end{equation} where the appearances of $v$ should be interpreted as traces if necessary; in the last equality the integral over $\partial\Omega$ vanishes because $v$ has zero trace there, and the term containing $\Delta G$ drops out since $G$ is harmonic in a neighborhood of $\Omega_s$. It is now seen that \eqref{3_weak_formulations_2_term_splitting_uregular} is the weak formulation of a nonlinear elliptic interface problem with a jump condition on the normal flux, i.e., $\left[\epsilon\nabla u\cdot \bm n_\Gamma\right]_\Gamma=-(\epsilon_m-\epsilon_s)\nabla G\cdot \bm n_\Gamma=-\left[\epsilon\nabla G\cdot \bm n_\Gamma\right]_\Gamma$. Moreover, using the equality \eqref{RHS_2-term_splitting_rewritten}, which is valid for $v \in \mathfrak{N}$, we can go back to \eqref{Weak_formulation_for_uregular_General_PBE_W1p_spaces} and obtain a weak formulation of the reaction field potential analogous to the one for the full potential in \eqref{weak_formulation_General_PBE_W1p_spaces} but with right hand side the measure \[\pazocal{F}_\Gamma:=\left((\epsilon_m-\epsilon_s)\nabla G\cdot \bm n_\Gamma\right) \pazocal{H}^{d-1} \mres\Gamma+\left(\nabla \epsilon_s \cdot\nabla G\right)\,\pazocal{L}^d \mres \Omega_s \, \in \pazocal{M}(\Omega),\] where $\pazocal{H}^{d-1} \mres \Gamma$ is the $(d-1)$-dimensional Hausdorff measure restricted to $\Gamma$ and $\pazocal{L}^d \mres \Omega_s$ is the Lebesgue measure restricted to $\Omega_s$. That is, formally we have \begin{equation}\label{Elliptic_equation_for_2-term_u} \begin{aligned} -\nabla\cdot\left(\epsilon\nabla u\right)+b(x,u+G)&=\pazocal{F}_\Gamma \quad &\text{ in } \Omega,\\ u&=g_\Omega-G\quad &\text{ on } \partial\Omega. \end{aligned} \end{equation} By doing this manipulation through the potential $G$ obtained from the Newtonian kernel we have replaced the measure $\pazocal{F}$, which seen as a distribution does not belong to $H^{-1}(\Omega)$, with another irregular distribution $\pazocal{F}_\Gamma$, which this time is in fact in $H^{-1}(\Omega)$ (acting through the trace on $\Gamma$, which is $C^1$), making it suitable for $H^1$ weak formulations. \end{remark} The above considerations imply that in numerical computations, which are one of the main motivations to introduce this splitting, the full potential $\phi$ can be obtained without needing to approximate the singularities that arise at the positions $x_i$ of the fixed partial charges.
Some other problems appear, however, motivating the introduction of a further splitting in three terms that we discuss in Section \ref{Subsection_3-term_splitting_GPBE} below. One such problem arises when $u$ has almost the same magnitude as $G$ but opposite sign and $\abs{\phi}=\abs{G+u}\ll \abs{u}$ (see e.g.~\cite{Holst2012}). This typically happens in the solvent region $\Omega_s$ and under the conditions that the ratio $\epsilon_m / \epsilon_s$ is much smaller than $1$ and the ionic strength $I_s$ is nonzero. In this case a small relative error in $u$ generates a relative error in $\phi = G+u$ that is amplified by the large factor $\abs{u}/\abs{\phi}$. However, the 2-term splitting remains useful in practice, since it allows one to compute directly the electrostatic contribution to the solvation free energy through the reaction field potential as $\frac{1}{2}\sum_{i=1}^{N_m}{z_ie_0u(x_i)}.$ \subsection{3-term splitting} \label{Subsection_3-term_splitting_GPBE} Although the 2-term splitting we just introduced would suffice to obtain existence of solutions, we now describe another commonly used splitting with the aim of providing some justification for it through our uniqueness results. In it one considers three components $\phi=G+u^H+u$, where $\phi=u$ in $\Omega_s$ and $u^H$ is such that $u^H=-G$ in $\Omega_s$ (see \cite{Chern_Liu_Wang_Electrostatics_for_Macromolecules_2003,Holst2012}). By substituting this expression for $\phi$ into \eqref{weak_formulation_General_PBE_W1p_spaces}, using \eqref{Weak_formulation_for_G}, the fact that $u^H=-G$ in $\Omega_s$, and assuming that $u^H\in \mathfrak{M}$, we obtain the following weak formulation\footnote{Notice that $\phi,u^H\in \mathfrak{M}$ implies $u\in\mathfrak{M}$. In particular, the integral $\int_{\Omega_m}{\epsilon_m\nabla u^H\cdot\nabla v \dd x}$ is well defined.} for $u$: \begin{equation} \label{Weak_formulation_for_uregular_3-term_splitting_General_PBE_W1p_spaces} \begin{aligned} &\text{Find }u\in \mathfrak{M}_{g_\Omega} \text{ such that } b(x,u)v \in L^1(\Omega)\,\text{ for all }v\in \mathfrak{N}\,\text{ and }\\ &\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{b(x,u)v \dd x}\\ &\qquad\qquad=-\int_{\Omega_m}{\epsilon_m\nabla u^H\cdot\nabla v \dd x}+\int_{\Omega_s}{\epsilon_m\nabla G\cdot\nabla v \dd x}=:\langle \pazocal{G}_3,v\rangle \,\text{ for all } \, v\in \mathfrak{N}. \end{aligned} \end{equation} When defining $u^H$ in $\Omega_m = \Omega \setminus \overline{\Omega_s}$ we must preserve the condition $u^H\in \mathfrak{M}$, which holds in particular if $u^H\in H^1(\Omega)$. Again if $\mathfrak{N}\subset \pazocal{W}$ and since $H_{g_\Omega}^1(\Omega)\subset \mathfrak{M}_{g_\Omega}$, we can find a particular solution $u$ of \eqref{Weak_formulation_for_uregular_3-term_splitting_General_PBE_W1p_spaces} by considering yet another $H^1$ weak formulation: \begin{equation}\label{Weak_formulation_for_uregular_3-term_splitting_PBE_H1} \begin{aligned} &\text{Find }u\in H_{g_\Omega}^1(\Omega) \text{ such that } b(x,u)v\in L^1(\Omega)\, \text{ for all } v\in \pazocal{W}\, \text{ and } \\%H_0^1(\Omega)\cap L^\infty(\Omega)\text{ and } \\ &\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{b(x,u)v \dd x}=\langle \pazocal{G}_3,v\rangle \,\text{ for all } v\in \pazocal{W}, \end{aligned} \end{equation} where again $\pazocal{G}_3 \in H^{-1}(\Omega)$ since we have chosen $u^H$ in $H^1(\Omega)$.
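The computation leading to \eqref{Weak_formulation_for_uregular_3-term_splitting_General_PBE_W1p_spaces} is analogous to the one for the 2-term splitting: substituting $\phi=G+u^H+u$ into \eqref{weak_formulation_General_PBE_W1p_spaces} and subtracting \eqref{Weak_formulation_for_G} gives
\begin{equation*}
\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{b(x,\phi)v \dd x}=\int_{\Omega}{(\epsilon_m-\epsilon)\nabla G\cdot\nabla v \dd x}-\int_{\Omega}{\epsilon\nabla u^H\cdot\nabla v \dd x}.
\end{equation*}
On $\Omega_m$ the first integrand on the right vanishes, leaving $-\int_{\Omega_m}{\epsilon_m\nabla u^H\cdot\nabla v \dd x}$; on $\Omega_s$ we use $\nabla u^H=-\nabla G$ to combine $(\epsilon_m-\epsilon_s)\nabla G+\epsilon_s\nabla G=\epsilon_m\nabla G$, which together yield $\langle \pazocal{G}_3,v\rangle$. Finally, $b(x,\phi)=b(x,u)$ because $b$ vanishes outside $\Omega_{ions}\subset\Omega_s$, where $\phi=u$.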
Testing \eqref{Weak_formulation_for_uregular_3-term_splitting_PBE_H1} with functions $v$ supported in $\Omega_m$ such that $v\in H_0^1(\Omega_m)$, we obtain\footnote{Note that one can immediately test \eqref{Weak_formulation_for_uregular_3-term_splitting_PBE_H1} with $v\in H_0^1(\Omega_m)\cap L^\infty(\Omega_m)$. Now, since in this case we obtain a linear problem in $\Omega_m$, by a standard density argument one sees that this linear problem can always be tested with $v\in H_0^1(\Omega_m)$.} \[ \int_{\Omega_m}{\epsilon_m \nabla u\cdot \nabla v \dd x}=-\int_{\Omega_m}{\epsilon_m\nabla u^H\cdot\nabla v \dd x}\, \text{ for all }v\in H_0^1(\Omega_m). \] It is particularly convenient (for example in a posteriori error analysis, see \cite{Kraus_Nakov_Repin_PBE2_2020}) to impose that $u^H$ is weakly harmonic in $\Omega_m$, that is \begin{equation} \label{definition_of_uH_3-term_splitting} \begin{aligned} &u^H\in H_{-G}^1(\Omega_m)\quad\text{ and }\quad\int_{\Omega_m}{\nabla u^H\cdot\nabla v \dd x}=0 \text{ for all } v\in H_0^1(\Omega_m), \end{aligned} \end{equation} where the Dirichlet boundary condition $u^H=-G$ on $\Gamma=\partial\Omega_m$ ensures that $u^H$ has the same trace on $\Gamma$ from both sides and therefore $u^H\in H^1(\Omega)$. \begin{remark}\label{Remark_right_hand_side_functional_3-term_splitting} In this case the right-hand side of equation \eqref{Weak_formulation_for_uregular_3-term_splitting_PBE_H1} depends on the solution of \eqref{definition_of_uH_3-term_splitting}, meaning in particular that for numerical approximations two concatenated elliptic problems have to be solved. Moreover, by the divergence theorem (see, e.g., Theorem 2 in Section 1.3 in \cite{Dautray_Lions_Volume_6}, Theorem 3.24 in \cite{Peter_Monk}) we can compute \begin{equation}\label{RHS_3-term_splitting_rewritten} \begin{aligned} \left\langle \pazocal{G}_3,v\right\rangle=&-\int_{\Omega_m}{\epsilon_m\nabla u^H\cdot\nabla v \dd x}+\int_{\Omega_s}{\epsilon_m\nabla G\cdot\nabla v \dd x}\\ =&-\big\langle \gamma_{\bm n_\Gamma,\Omega_m}\big(\epsilon_m\nabla u^H\big),\gamma_{2,\Gamma}(v)\big\rangle_{H^{-1/2}(\Gamma)\times H^{1/2}(\Gamma)}\\ &\qquad+\langle \gamma_{\bm n_\Gamma,\Omega_s}\left(\epsilon_m\nabla G\right),\gamma_{2,\Gamma}(v)\rangle_{H^{-1/2}(\Gamma)\times H^{1/2}(\Gamma)}, \end{aligned} \end{equation} where we used that $\epsilon_m$ is constant, $\nabla u^H\in H(\div;\Omega_m)$ (see \eqref{definition_of_uH_3-term_splitting}), and that $G$ is harmonic in a neighborhood of $\Omega_s$. In \eqref{RHS_3-term_splitting_rewritten}, $\gamma_{\bm n_\Gamma,\Omega_m}$ and $\gamma_{\bm n_\Gamma,\Omega_s}$ are the normal trace operators in $H(\div;\Omega_m)$ and $H(\div;\Omega_s)$, respectively, and $\gamma_{2,\Gamma}$ is the trace of $v$ on $\Gamma$. This computation tells us that $\pazocal{G}_3 \in H^{-1}(\Omega)$ but, in contrast to the situation in Remark \ref{Remark_RHS_2-term_splitting_rewritten}, it does not allow us to immediately conclude that the action of $\pazocal{G}_3$ can be interpreted as a measure. This is only possible when $\nabla u^H \cdot \bm n_\Gamma$ is regular enough to be defined pointwise, as is the case when $\Gamma$ is $C^3$ so that we can use regularity estimates up to the boundary (see for example Theorem 9.25 in \cite{Brezis_FA}) that would provide $u^H \in H^3(\Omega_m)$.
In this smoother situation \eqref{RHS_3-term_splitting_rewritten} can be reformulated as \begin{equation} \label{RHS_3-term_splitting_rewritten_nabla_u^H_with_L2_trace_on_Gamma} \langle \pazocal{G}_3,v\rangle=\int_{\Gamma}{-\epsilon_m\nabla\big( u^H+G\big)\cdot \bm n_\Gamma\, \gamma_{2,\Gamma}(v) \dd s}\,\text{ for all } v\in H_0^1(\Omega), \end{equation} and in this case $\pazocal{G}_3$ can be thought of as a measure for a problem analogous to \eqref{Elliptic_equation_for_2-term_u} and again represents a jump condition on the normal component of $\epsilon\nabla u$. That is, if the function $u$ is smooth in $\Omega_m$ and $\Omega_s$ it should satisfy the jump condition $\left[\epsilon\nabla u\cdot \bm n_\Gamma\right]_\Gamma=-\epsilon_m \nabla\left(u^H+G\right)\cdot \bm n_\Gamma$. \end{remark} Within our context we can obtain some milder regularity of $u^H$ without additional assumptions on $\Gamma$: \begin{proposition}\label{proposition_u^H_is_in_W1s_s_ge_d} If $u^H$ is defined as in Section~\ref{Subsection_3-term_splitting_GPBE}, i.e., $u^H\in H^1(\Omega)$ with $u^H=-G$ in $\Omega_s$ and satisfies \eqref{definition_of_uH_3-term_splitting}, then $u^H\in W^{1,\bar q}(\Omega)$ for some $\bar q>d$. \end{proposition} \begin{proof} Since $\Gamma=\partial\Omega_m$ is Lipschitz (it is even $C^1$ by assumption), we can apply Theorem~\ref{Thm_Optimal_Regularity_for_Elliptic_Interface_Problems} on $\Omega_m$ to the homogenized version of \eqref{definition_of_uH_3-term_splitting}: \begin{equation} \label{definition_of_uH_3-term_splitting_homogenized} \begin{aligned} &\text{Find } u_0^H\in H_0^1(\Omega_m)\text{ such that }\\ &\int_{\Omega_m}{\nabla u_0^H\cdot\nabla v \dd x}=-\int_{\Omega_m}{\nabla u_{-G}^H\cdot\nabla v \dd x} \text{ for all } v\in H_0^1(\Omega_m), \end{aligned} \end{equation} where $u_{-G}^H\in H^1(\Omega_m)$ and $\gamma_2\left(u_{-G}^H\right)=-G$ on $\Gamma$. We can choose $u_{-G}^H$ to be in the space $W^{1,\infty}(\Omega_m)$ by noting that $G\in C^{0,1}(\Gamma)$ and using a Lipschitz extension (see Lemma~\ref{Lemma_extension_pf_Lipschitz}). We can even choose $u_{-G}^H$ to be smooth in $\Omega_m$. To see this, let $r>0$ be so small that all balls $B(x_i,r)$ centered at $x_i,\,i=1,\ldots, N_m$ (the locations of the point charges, as defined after \eqref{general_PBE_1_classical_form}) and with radius $r$ are strictly contained in $\Omega_m$. Then, we define the function $u_{-G}^H:=-\restr{(\psi G)}{\overline\Omega_m}\in C^\infty(\overline{\Omega_m})$, where $\psi\in C_c^\infty(\mathbb R^d)$ is such that it is equal to $1$ in a neighborhood of $\Gamma$ and with support in $\mathbb R^d\setminus \bigcup_{i=1}^{N_m}{B(x_i,r)}$. It follows that the right-hand side of \eqref{definition_of_uH_3-term_splitting_homogenized} defines a bounded linear functional over $W_0^{1,p}(\Omega_m)$ for all $1\leq p<\infty$, and by Theorem~\ref{Thm_Optimal_Regularity_for_Elliptic_Interface_Problems} we conclude that $u_0^H\in W^{1,\overline{q}}(\Omega_m)$ for some $\overline{q}>d$. Now, $u^H=u_{-G}^H+u_0^H\in W^{1,\overline{q}}(\Omega_m)$; since $u^H=-G$ is smooth up to the boundary on $\overline{\Omega_s}$, combining the two regions gives $u^H\in W^{1,\overline{q}}(\Omega)$. \end{proof} \subsection{Splitting for the linearized GPBE}\label{Section_The_LGPBE} The 2- and 3-term splittings can also be applied in the case of the linearized GPBE, as done routinely in numerical works \cite{Niedermeier_Schulten_1992, Zhou_Payne_Vasquez_Kuhn_Levitt_1996, Bond_2009}.
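Before turning to the linearized equation, here is a toy numerical illustration of the first of the two concatenated elliptic problems mentioned in Remark~\ref{Remark_right_hand_side_functional_3-term_splitting}, namely the harmonic extension \eqref{definition_of_uH_3-term_splitting}. The geometry is deliberately artificial (a unit square standing in for $\Omega_m$, a single unit charge, all physical prefactors set to $1$, $d=2$), and a plain Jacobi iteration stands in for a real elliptic solver; the sketch only shows the structure of the computation.
\begin{verbatim}
import numpy as np

n = 65                                    # grid points per direction
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')
xq, yq = 0.3, 0.4                         # made-up charge position inside the square
G = -np.log(np.hypot(X - xq, Y - yq))     # 2-d Coulomb part with prefactor 1

uH = np.zeros((n, n))
# Dirichlet data uH = -G on the interface (here: the boundary of the square)
uH[0, :], uH[-1, :], uH[:, 0], uH[:, -1] = -G[0, :], -G[-1, :], -G[:, 0], -G[:, -1]

for _ in range(5000):                     # Jacobi sweeps: slow but dependency-free
    uH[1:-1, 1:-1] = 0.25*(uH[2:, 1:-1] + uH[:-2, 1:-1]
                           + uH[1:-1, 2:] + uH[1:-1, :-2])
# uH now approximates the weakly harmonic extension on the square; the regular
# component u would be obtained from a second solve that takes uH as data.
\end{verbatim}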
After substituting the expressions $\phi=G+u$ and $\phi=G+u^H+u$ into \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces} we obtain the respective weak formulations which the regular component $u$ has to satisfy in each case. Those formulations can be written in one common form: \begin{equation} \label{common_form_weak_formulation_u_W1p_spaces_LGPBE} \begin{aligned} &\text{Find }u\in \mathfrak{M}_{\overline g} \text{ such that }\\ &\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{\overline{m}^2uv \dd x}=\int_{\Omega}{\bm f\cdot\nabla v \dd x} + \int_{\Omega}{f_0v \dd x}\,\text{ for all } v\in \mathfrak{N}, \end{aligned} \end{equation} where in the case of the 2-term splitting we have \begin{equation} \label{f0_barg_f_expressions_2-term_splitting} f_0=-\overline m^2G+\ell, \quad {\bm f}={\bm f}_{\pazocal{G}_2}:=\chi_{\Omega_s}(\epsilon_m-\epsilon_s)\nabla G,\quad \text{ and }\quad\overline{g}=g_\Omega-G \text{ on }\partial\Omega, \end{equation} whereas in the case of the 3-term splitting we have \begin{equation} \label{f0_barg_f_expressions_3-term_splitting} f_0=\ell, \quad {\bm f}={\bm f}_{\pazocal{G}_3}:=-\chi_{\Omega_m}\epsilon_m\nabla u^H+\chi_{\Omega_s}\epsilon_m\nabla G,\quad \text{ and }\quad \overline{g}=g_\Omega \text{ on }\partial\Omega. \end{equation} We recall that $\ell$ and $\overline m^2$ as defined in \eqref{definition_overline_m_and_l} are zero in $\Omega_m\cup\Omega_{IEL}$ and constants in $\Omega_{ions}$, $G$ is harmonic in a neighborhood of $\Omega_s$, and $u^H\in H^1(\Omega)\subset \mathfrak{M}$. Therefore, all integrals in \eqref{common_form_weak_formulation_u_W1p_spaces_LGPBE} are well defined. By observing that $\mathfrak{N}\subset H^1(\Omega)$ and $H_{\overline g}^1(\Omega)\subset \mathfrak{M}_{\overline g}$ we can find a particular solution $u$ of problem \eqref{common_form_weak_formulation_u_W1p_spaces_LGPBE} by posing a standard $H^1$ weak formulation for $u$: the trial space in \eqref{common_form_weak_formulation_u_W1p_spaces_LGPBE} is replaced by $H_{\overline g}^1(\Omega)$ and the test space by $H_0^1(\Omega)$. An application of Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs}, through the approximation strategy of \cite{Boccardo_Gallouet_1989}, provides us with existence of a solution $\phi$ to \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces}; this solution is also unique because $\Gamma\in C^1$ by Assumption~\ref{Assumption_Domain_permittivity}. \begin{theorem}\label{Theorem_well_posedness_for_full_potential_LPBE} The unique weak solution $\phi$ of equation \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces} can be given either in the form $\phi=G+u$ or in the form $\phi=G+u^H+u$, where $u\in H_{\overline g}^1(\Omega)$ is the unique solution of the problem \begin{equation} \label{common_form_weak_formulation_u_H1_spaces_LGPBE} \begin{aligned} &\text{Find }u\in H_{\overline g}^1(\Omega) \text{ such that }\\ &\int_{\Omega}{\epsilon \nabla u\cdot \nabla v \dd x}+\int_{\Omega}{\overline{m}^2uv \dd x}=\int_{\Omega}{\bm f\cdot\nabla v \dd x} + \int_{\Omega}{f_0v \dd x}\,\text{ for all } v\in H_0^1(\Omega) \end{aligned} \end{equation} with $f_0$, $\bm f$, $\overline g$ defined by either \eqref{f0_barg_f_expressions_2-term_splitting} or \eqref{f0_barg_f_expressions_3-term_splitting} for the 2- or 3-term splittings, respectively.
\end{theorem} \begin{proof} We will only show the existence of a solution $\phi$ of \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces} by using the 2-term splitting, where $f_0,\,\bm f,\,\overline g$ are defined by \eqref{f0_barg_f_expressions_2-term_splitting}; the case of the 3-term splitting is similar. By using an extension of $\overline g$ (see Lemma~\ref{Lemma_extension_pf_Lipschitz}) and linearity we can reduce to homogeneous boundary conditions and use the Lax-Milgram Theorem to obtain a unique solution $u\in H_{\overline g}^1(\Omega) = H_{g_\Omega-G}^1(\Omega)$ of \eqref{common_form_weak_formulation_u_H1_spaces_LGPBE}. It is clear that $u$ is also in $\mathfrak{M}_{g_\Omega-G}$, since the exponents involved satisfy $p<\frac{d}{d-1}\leq2$ and $W^{1,2}(\Omega)\equiv H^1(\Omega)$. Therefore, $G+u\in \mathfrak{M}_{g_\Omega}$. Moreover, $H_0^1(\Omega)\supset \mathfrak{N}$, and therefore \eqref{common_form_weak_formulation_u_H1_spaces_LGPBE} is valid for all test functions $v\in \mathfrak{N}$. By adding together \eqref{Weak_formulation_for_G} and \eqref{common_form_weak_formulation_u_H1_spaces_LGPBE} we conclude that $\phi=G+u$ satisfies the weak formulation \eqref{LGPBE_dimensionless_Weak_formulation_W1p_spaces}. \end{proof} Let us note that even without the regularity assumption $\Gamma \in C^1$ one would still get particular solutions $\phi$ given by the two splittings above. However, it would not be clear if these are equal. \section{Existence and uniqueness for the nonlinear GPBE}\label{Section_Existence_and_uniqueness_of_solution} \label{Section_The_GPBE} For existence, our strategy is to consider either the 2-term or 3-term splitting to separate the effect of the singular right hand side. For the regular components of these, since $H^1 \subset \mathfrak{M}$ and $\mathfrak{N}\subset H_0^1$, it is enough to consider an $H^1$ formulation, of which we give a complete treatment. This treatment still requires some care. Since the nonlinearity $b$ of \eqref{weak_formulation_General_PBE_W1p_spaces} has exponential growth, the functional in the minimization problem corresponding to this $H^1$ formulation is not differentiable, so its minimizers do not automatically satisfy the formulation with $H^1_0$ test functions. To conclude that they do, we need an a priori $L^\infty$ estimate for them, which we prove in a slightly more general situation than the one of the GPBE. For uniqueness, we work directly on the original formulation \eqref{weak_formulation_General_PBE_W1p_spaces}, which is the best possible scenario.
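To fix ideas before the analysis, the following sketch shows the kind of problem the regular component solves after either splitting, in a deliberately simplified setting: one dimension, $\epsilon\equiv 1$, zero right hand side, made-up boundary data, and a symmetric 1:1 electrolyte, for which the PBE nonlinearity is proportional to $\sinh$. A damped Newton iteration on a finite-difference grid then solves the discrete analogue of the $H^1$ formulations above; this is only an illustration of the structure, not the method of any of the cited works.
\begin{verbatim}
import numpy as np

# Toy 1-d analogue of the regular-component problem:
#   -u'' + kbar2*sinh(u) = 0 on (0,1),  u(0) = 4,  u(1) = 0,
# with made-up coefficient kbar2 and boundary data.
n, kbar2 = 200, 25.0
h = 1.0/n
u = 4.0*np.linspace(1.0, 0.0, n + 1)      # initial guess matching the boundary data

for _ in range(200):
    # residual at the interior nodes
    F = (-u[2:] + 2.0*u[1:-1] - u[:-2])/h**2 + kbar2*np.sinh(u[1:-1])
    # tridiagonal Jacobian: second differences plus kbar2*cosh(u) on the diagonal
    J = (np.diag(2.0/h**2 + kbar2*np.cosh(u[1:-1]))
         + np.diag(-np.ones(n - 2)/h**2, 1)
         + np.diag(-np.ones(n - 2)/h**2, -1))
    du = np.linalg.solve(J, -F)
    u[1:-1] += du/max(1.0, np.abs(du).max())   # crude damping against overshoot
    if np.abs(du).max() < 1e-12:
        break
\end{verbatim}
The monotonicity of $t\mapsto\sinh t$ is what makes the Jacobian uniformly positive definite here, mirroring the role played by the monotonicity property \eqref{monotonicity_of_b} in the analysis below.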
\subsection{Existence of a full potential \texorpdfstring{$\phi$}{phi}}\label{Section_Existence_GPBE} Equations \eqref{Weak_formulation_for_uregular_General_PBE_W1p_spaces} and \eqref{Weak_formulation_for_uregular_3-term_splitting_General_PBE_W1p_spaces} for the regular component $u$ can be written in one common form: \begin{equation} \label{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_W1p_spaces} \begin{aligned} &\text{Find }u\in \mathfrak{M}_{\overline g} \text{ such that } b(x,u+w)v\in L^1(\Omega) \,\text{ for all } v\in \mathfrak{N} \text{ and }\\ &a(u,v)+\int_{\Omega}{b(x,u+w)v \dd x}=\int_{\Omega}{{\bm f}\cdot\nabla v \dd x}\, \text{ for all } v\in \mathfrak{N}, \end{aligned} \end{equation} where $\mathfrak{M}_{\overline g},\, \mathfrak{N}$ are as defined in \eqref{Definition_Mg} and \eqref{Definition_M_and_N}, denoting $a(u,v):=\int_{\Omega}{\epsilon\nabla u\cdot\nabla v \dd x}$, $w\in L^\infty(\Omega_{ions})$, ${\bm f}=(f_1,f_2,\ldots, f_d)\in \left[L^s(\Omega)\right]^d$ with\footnote{For the 2-term splitting, ${\bm f}$ is obviously in $\left[L^s(\Omega)\right]^d$ for some $s>d$ since $G$ is smooth in $\overline \Omega_s$ and $\epsilon_s\in C^{0,1}(\overline \Omega_s)$. In the case of the 3-term splitting, from Proposition~\ref{proposition_u^H_is_in_W1s_s_ge_d} it follows that $\nabla u^H\in \left[L^s(\Omega_m)\right]^d$ for some $s>d$ since $\Gamma \in C^1$.} $s>d$, and $\overline g$ specifies a Dirichlet boundary condition for $u$ on $\partial\Omega$. In the case of the 2-term splitting we have \begin{equation} \label{w_barg_f_expressions_2-term_splitting} w=G, \quad {\bm f}={\bm f}_{\pazocal{G}_2}:=\chi_{\Omega_s}(\epsilon_m-\epsilon_s)\nabla G,\quad \text{ and }\quad\overline{g}=g_\Omega-G \text{ on }\partial\Omega, \end{equation} whereas in the case of the 3-term splitting we have \begin{equation} \label{w_barg_f_expressions_3-term_splitting} w=0, \quad {\bm f}={\bm f}_{\pazocal{G}_3}:=-\chi_{\Omega_m}\epsilon_m\nabla u^H+\chi_{\Omega_s}\epsilon_m\nabla G,\quad \text{ and }\quad \overline{g}=g_\Omega \text{ on }\partial\Omega. \end{equation} Similarly, equations \eqref{3_weak_formulations_2_term_splitting_uregular} and \eqref{Weak_formulation_for_uregular_3-term_splitting_PBE_H1}, which determine \emph{particular} representatives for the regular component $u$, can also be written in one common form: \begin{equation}\label{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} \begin{aligned} &\text{Find }u\in H_{\overline g}^1(\Omega) \text{ such that } b(x,u+w)v\in L^1(\Omega) \,\text{ for all } v\in \pazocal{W} \text{ and }\\ &a(u,v)+\int_{\Omega}{b(x,u+w)v \dd x}=\int_{\Omega}{{\bm f}\cdot\nabla v \dd x}\, \text{ for all } v\in \pazocal{W}, \end{aligned}\tag{RCH1} \end{equation} where we will consider the three test spaces \begin{equation}\label{what_is_Wb} \pazocal{W}=H_0^1(\Omega), \ \ \pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)\ \text{ and }\ \pazocal{W}=C_c^\infty(\Omega). \end{equation} Of course, the larger the test space $\pazocal{W}$ the harder it will be to prove existence, and the other way around for uniqueness. 
For the first two, we have the inclusion $\mathfrak{N}\subset \pazocal{W}$, which combined with $H_{\overline g}^1(\Omega)\subset \mathfrak{M}_{\overline g}$ makes it clear that if $u\in H_{\overline g}^1(\Omega)$ solves \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with either $\pazocal{W}=H_0^1(\Omega)$ or $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$, then $u$ also solves \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_W1p_spaces}. Consequently, we obtain a particular solution $\phi$ of \eqref{weak_formulation_General_PBE_W1p_spaces} through the formula $\phi=G+u$ in the case where $w, {\bm f}, \overline g$ are given by \eqref{w_barg_f_expressions_2-term_splitting} and through the formula $\phi=G+u^H+u$ in the case where $w, {\bm f}, \overline g$ are given by \eqref{w_barg_f_expressions_3-term_splitting}. Thus, our goal in this section is to show existence and uniqueness of a solution $u$ to the weak formulation \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}. We will mainly work with the first two spaces in \eqref{what_is_Wb}, while the third corresponds to distributional solutions, for which we will see that uniqueness can still be obtained. \begin{remark}\label{Remark_exponents_of_H01_functions_in_d_2_and_3} If $d=2$ then by the Moser-Trudinger inequality $\mathrm{e}^{u_0}\in L^2(\Omega)$ for all $u_0\in H_0^1(\Omega)$ (see \cite{Trudinger_1967,Best_constants_in_some_exponential_Sobolev_inequalities}) and, therefore, $\mathrm{e}^{u_{\overline g}+u_0}\in L^2(\Omega)$ with $u_{\overline g}$ an extension of $\overline g$ as in Lemma~\ref{Lemma_extension_pf_Lipschitz}. Consequently, for $d=2$, $b(x,u+w)\in L^2(\Omega)$ for all $u\in H_{\overline g}^1(\Omega)$ and the weak formulations \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ and with $\pazocal{W}=H_0^1(\Omega)$ are equivalent by a density argument. For $d\ge 3$ the situation is more complicated: consider for example $u=\ln{\frac{1}{|x|^d}}\in H_0^1(B(0,1))$ on the unit ball $B(0,1) \subset \mathbb R^d$, for which $\mathrm{e}^u=\abs{x}^{-d}\notin L^1(B(0,1))$: indeed, $\int_{B(0,1)}{\abs{x}^{-d}\dd x}=\abs{\mathbb{S}^{d-1}}\int_0^1{r^{-1}\dd r}=+\infty$, while $\abs{\nabla u}=d/\abs{x}$ is square integrable near the origin precisely because $d\ge 3$. This also means that the condition $b(x,u+w)v \in L^1(\Omega)$ for all $v\in \pazocal{W}$ used in \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} is not superfluous. \end{remark} We prove existence by considering the natural associated convex energy, whose minimizers directly provide solutions for \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$. To pass to the larger test space $\pazocal{W}=H_0^1(\Omega)$ we will prove boundedness of these minimizers in Section \ref{Section_A_priori_Linfty_estimate}. Let us consider some basic properties of the nonlinearity $b$. Since $\frac{d}{dt}b(x,t)\ge0$ for every $x\in\Omega$, it follows that $b(x,\cdot)$ is monotone increasing. This in particular implies that \begin{equation} \label{monotonicity_of_b} \big(b(x,t_1)-b(x,t_2)\big)\left(t_1-t_2\right)\ge 0\,\text{ for all } t_1,t_2\in\mathbb R \text{ and } x\in\Omega. \end{equation} For semilinear equations, in addition to monotonicity a sign condition (ensuring that the nonlinearity always has the same sign as the solution) is often assumed in the literature.
Since $b(x,0)=-\frac{4\pi e_0^2}{k_BT} \sum_{j=1}^{N_{ions}}{\overline M_j(x)\xi_j}$, when the charge neutrality condition \eqref{charge_neutrality_condition} is satisfied it follows that $b(x,0)=0$ for all $x\in\Omega$. If additionally the 3-term splitting is used, so that $w=0$, then such a sign condition holds for \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}. However, as mentioned in previous sections, we do not impose charge neutrality from the outset and would like to treat both splitting schemes simultaneously, so the sign condition may fail. This poses some difficulties for the boundedness estimates, the context of which is discussed at the start of Section \ref{Section_A_priori_Linfty_estimate}. An important remark is that truncation methods as used in \cite{Webb_1980} (an easy calculation shows that the assumptions G1 and G2 postulated there are satisfied for the nonlinearity $b$) would also provide existence of solutions for \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} directly and consequently for \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_W1p_spaces}. Our main focus is therefore on the fact that we may obtain bounded solutions, which on the one hand makes the uniqueness results in Section \ref{Section_uniqueness_for_GPBE} applicable, and on the other leads to weak formulations tested with $\pazocal{W} = H^1_0(\Omega)$. These are important in practical applications such as the reliable numerical solution of this equation through duality methods. \subsubsection{Uniqueness of solutions of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} for all test spaces}\label{Section_Uniqueness_in_H1} First, we prove uniqueness of a solution to \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} for all three choices of the test space $\pazocal{W}$ in \eqref{what_is_Wb}. Suppose that $u_1$ and $u_2$ are two solutions of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}. Then, we have \begin{equation} \label{weak_form_2_and_3_term_splitting_difference_of_two_solutions_test_space_V} a(u_1-u_2,v)+\int_{\Omega}{\left(b(x,u_1+w)-b(x,u_2+w)\right) v \dd x}=0\,\text{ for all } v\in \pazocal{W}. \end{equation} In the case of $\pazocal{W}=H_0^1(\Omega)$, $u_1-u_2\in \pazocal{W}$ and thus we can test \eqref{weak_form_2_and_3_term_splitting_difference_of_two_solutions_test_space_V} with $v:=u_1-u_2$ to obtain \begin{equation*} a(u_1-u_2,u_1-u_2)+\int_{\Omega}{\left(b(x,u_1+w)-b(x,u_2+w)\right)(u_1-u_2) \dd x}=0. \end{equation*} Since $a(\cdot,\cdot)$ is coercive and $b(x,\cdot)$ is monotone increasing, we obtain $u_1-u_2=0$. In the case when $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ we can test with the (truncated) test functions $T_k(u_1-u_2)\in H_0^1(\Omega)\cap L^\infty(\Omega),\,k\ge 0$, where $T_k(s):=\max\{-k,\min\{k,s\}\}$, and use the monotonicity of $b(x, \cdot)$ and the coercivity of $a( \cdot,\cdot)$ to obtain $u_1-u_2=0$. This method and the method that we mentioned for the case $\pazocal{W}=H_0^1(\Omega)$ do not work when $\pazocal{W}=C_c^\infty(\Omega)$ because neither the difference $u_1-u_2$ of two weak solutions nor its truncations $T_k(u_1-u_2)$ are necessarily in $C_c^\infty(\Omega)$. We overcome this difficulty by applying Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978}.
For this we consider two solutions $u_1$ and $u_2$, so that \begin{equation} \label{weak_form_2_and_3_term_splitting_difference_of_two_solutions_test_space_C0_infty} a(u_1-u_2,v)+\int_{\Omega}{\left(b(x,u_1+w)-b(x,u_2+w)\right) v \dd x}=0\ \text{ for all } v\in C_c^\infty(\Omega). \end{equation} Since $a(u_1-u_2,\cdot)$ defines a bounded linear functional over $H_0^1(\Omega)$, the functional $T_b$ defined by the formula $\langle T_b,v\rangle:=\int_{\Omega}{\left(b(x,u_1+w)-b(x,u_2+w)\right)v \dd x}$ for all $v\in C_c^\infty(\Omega)$ satisfies the condition $T_b\in H^{-1}(\Omega)\cap L_{loc}^1(\Omega)$ in Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978}. By using the monotonicity of $b(x,\cdot)$ we see that $\left(b(x,u_1+w)-b(x,u_2+w)\right)(u_1-u_2)\ge 0$, so that $f:=0\in L^1(\Omega)$ is an admissible minorant in Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978}. Therefore by Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978} (see also Remark \ref{remark_on_the_theorem_of_Brezis_and_Browder_for_the_extension_of_bdd_linear_functional_1978}) it follows that \[\left(b(x,u_1+w)-b(x,u_2+w)\right)(u_1-u_2)\in L^1(\Omega)\] and the duality product ${\langle T_b, u_1-u_2\rangle_{H^{-1}(\Omega)\times H_0^1(\Omega)}}$ coincides with \[ \int_{\Omega}{\left(b(x,u_1+w)-b(x,u_2+w)\right) (u_1-u_2) \dd x}. \] This means that \begin{equation} \label{weak_form_2_and_3_term_splitting_difference_of_two_solutions_tested_with_u1_minus_u2} a(u_1-u_2,u_1-u_2)+\int_{\Omega}{\left(b(x,u_1+w)-b(x,u_2+w)\right) (u_1-u_2) \dd x}=0, \end{equation} which implies $u_1-u_2=0$. Of course, this approach can also be applied to show uniqueness when $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ instead of using the truncations $T_k(u_1-u_2)$. The uniqueness of a solution to \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with all three choices of the test space $\pazocal{W}$ is now clear. \subsubsection{Existence of a solution of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with the test spaces $H^1_0 \cap L^\infty$ and $C^\infty_c$}\label{Section_Existence} We consider the variational problem: \begin{equation} \label{Variational_problem_for_J} \text{Find } u_{\rm{min}}\in H_{\overline{g}}^1(\Omega)\text{ such that } J(u_{\rm{min}})=\min_{v\in H_{\overline{g}}^1(\Omega)}{J(v)}, \end{equation} where the functional $J:H_{\overline g}^1(\Omega)\to\mathbb R\cup \{+\infty\}$ is defined by \begin{equation} \label{definition_of_J_general_form_General_PBE} J(v):=\left\{ \begin{aligned} &\frac{1}{2}a(v,v)+\int_{\Omega}{B(x,v+w)\dd x}-\int_{\Omega}{{\bm f}\cdot \nabla v \dd x},\,\text{ if } B(x,v+w)\in L^1(\Omega),\\ &+\infty, \text{ if } B(x,v+w) \notin L^1(\Omega) \end{aligned} \right. \end{equation} with $B(x,\cdot)$ denoting an antiderivative of the monotone nonlinearity $b(x,\cdot)$ of the GPBE defined in \eqref{definition_b_GPBE}, given by \begin{equation} \label{nonlinearity_B} B(x,t):=\frac{4\pi e_0^2}{k_BT} \sum_{j=1}^{N_{ions}}{\overline M_j(x) \mathrm{e}^{-\xi_jt}}\ge 0\,\text{ for all } x\in\Omega \text{ and } t\in\mathbb R, \end{equation} which is clearly convex in $t$, and with $w$, ${\bm f}$, $\overline g$ defined either in \eqref{w_barg_f_expressions_2-term_splitting} or in \eqref{w_barg_f_expressions_3-term_splitting}.
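For orientation, in the classical case of a symmetric 1:1 electrolyte, that is, $N_{ions}=2$, $\xi_1=-\xi_2=1$, and $\overline M_1=\overline M_2=\overline M$, the expressions above reduce (with the convention of \eqref{nonlinearity_B}) to
\begin{equation*}
B(x,t)=\frac{8\pi e_0^2}{k_BT}\,\overline M(x)\cosh{t},\qquad b(x,t)=\partial_t B(x,t)=\frac{8\pi e_0^2}{k_BT}\,\overline M(x)\sinh{t},
\end{equation*}
recovering, up to the normalization of the coefficient $\overline k^2$, the $\cosh$ energy density used for the PBE in Section~\ref{Sections_remarks_on_J}.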
We have seen in Remark \ref{Remark_exponents_of_H01_functions_in_d_2_and_3} that if $d\leq 2$ then $\mathrm{e}^v\in L^2(\Omega)$ for all $v\in H_0^1(\Omega)$ and therefore (for $\overline g=0$) $\dom(J)=\left\{v\in H_0^1(\Omega)\text{ such that } J(v)<\infty\right\} = H_0^1(\Omega)$. However, in dimension $d=3$, $\dom(J)$ is only a convex set and not a linear space (see Section~\ref{Sections_remarks_on_J}). In fact, ${\rm dom}(J)$ is also not closed, since it contains $C_c^\infty(\Omega)$, which is dense in $H_0^1(\Omega)$. If ${\rm dom}(J)$ were closed, it would coincide with $H_0^1(\Omega)$, and we know by Remark \ref{Remark_exponents_of_H01_functions_in_d_2_and_3} that this is not true in dimension $d\ge 3$. \begin{theorem}\label{Thm_existence_and_uniqueness_of_minimizer_of_J_General_PBE} Problem \eqref{Variational_problem_for_J} has a unique solution $u_{\rm{min}}\in H_{\overline{g}}^1(\Omega)$. \end{theorem} \begin{proof} Since $\dom(J)$ is convex and $J$ is also convex over $\dom(J)$, it follows that $J$ is convex over $H_{\overline g}^1(\Omega)$. To show existence of a minimizer of $J$ over the set $H_{\overline g}^1(\Omega)$ it is enough to verify the following assertions: \begin{itemize} \item[(1)] $H_{\overline g}^1(\Omega)$ is a closed convex set in $H^1(\Omega)$; \item[(2)] $J$ is proper, i.e., $J$ is not identically equal to $+\infty$ and does not take the value $-\infty$; \item[(3)] $J$ is sequentially weakly lower semicontinuous (s.w.l.s.c.), i.e., if $\{v_n\}_{n=1}^{\infty}\subset H_{\overline g}^1(\Omega)$ and $v_n\rightharpoonup v$ (weakly in $H_{\overline g}^1(\Omega)$) then $J(v)\leq \liminf_{n\to\infty}{J(v_n)}$; \item [(4)] $J$ is coercive, i.e., $\lim_{n\to\infty}{J(v_n)}=+\infty$ whenever $\|v_n\|_{H^1(\Omega)}\to \infty$. \end{itemize} That $H_{\overline{g}}^1(\Omega)$ is norm closed in $H^1(\Omega)$ and convex follows easily from the linearity and boundedness of the trace operator $\gamma_2$. Assertion (2) is obvious since $\int_{\Omega}{B(x,v+w)\dd x}\ge 0$ and $J(u_{\overline g})$ is finite for a bounded extension $u_{\overline g}\in H^1(\Omega)\cap L^\infty(\Omega)$ of $\overline g$. To see that (3) is fulfilled, notice that $J$ is the sum of the functionals $v\mapsto A(v):=\frac{1}{2}a(v,v)$, $v\mapsto \int_{\Omega}{B(x,v+w) \dd x}$, and $v\mapsto -\int_{\Omega}{\bm f\cdot \nabla v \dd x}$. The first one is convex and Gateaux differentiable, and therefore s.w.l.s.c. (for the proof of this implication, see, e.g., Corollary VII.2.4 in \cite{Showalter}). However, for $d=3$, the functional $v\mapsto \int_{\Omega}{B(x,v+w) \dd x}$ is not Gateaux differentiable or even continuous (see Section~\ref{Sections_remarks_on_J}). Nevertheless, one can show that this functional is s.w.l.s.c. using Fatou's lemma and the compact embedding of $H^1(\Omega)$ into $L^2(\Omega)$ as follows. Let $\{v_n\}_{n=1}^{\infty}\subset H^1(\Omega)$ be a sequence which converges weakly in $H^1(\Omega)$ to an element $v\in H^1(\Omega)$, i.e., $v_n\rightharpoonup v$. Since the embedding $H^1(\Omega)\hookrightarrow L^2(\Omega)$ is compact, it follows that $v_n\to v$ (strongly) in $L^2(\Omega)$, and therefore we can extract a pointwise almost everywhere convergent subsequence $v_{n_m}(x)\to v(x)$ (see Theorem 4.9 in \cite{Brezis_FA}). Since $B(x,\cdot)$ is a continuous function for any $x\in\Omega$ and $x\mapsto B(x,t)$ is measurable for any $t\in\mathbb R$, $B$ is a Carath\'eodory function, and as a consequence the function $x\mapsto B(x,v_{n_m}(x)+w(x))$ is measurable for all $m\in\mathbb N$ (see Proposition 3.7 in \cite{Dacorogna}).
By noting that $B(x,z(x)+w(x))\ge 0$ for all $z\in H^1(\Omega)$ and using the fact that $B(x,\cdot)$ is a continuous function for any $x\in\Omega$, from Fatou's lemma we obtain \begin{equation}\label{Applying_Fatous_Lemma} \begin{aligned} \liminf_{m\to \infty}{\int_{\Omega}{B(x,v_{n_m}(x)+w(x))}\dd x}&\ge \int_{\Omega}{\liminf_{m\to \infty}{B(x,v_{n_m}(x)+w(x))}\dd x}\\ &=\int_{\Omega}{B(x,v(x)+w(x))\dd x}. \end{aligned} \end{equation} Now it is clear that if $\{v_{n_m}\}_{m=1}^\infty$ is an arbitrary subsequence of $\{v_n\}_{n=1}^\infty$, then there exists a further subsequence $\{v_{n_{m_s}}\}_{s=1}^{\infty}$ for which \eqref{Applying_Fatous_Lemma} is satisfied. This means that in fact \eqref{Applying_Fatous_Lemma} is also satisfied for the whole sequence $\{v_n\}_{n=1}^\infty$, and hence $v\mapsto \int_{\Omega}{B(x,v+w) \dd x}$ is s.w.l.s.c. It remains to see that $J$ is coercive over $H_{\overline{g}}^1(\Omega)$. Let $u_{\overline{g}}\in H^1(\Omega)$ be such that $\gamma_2(u_{\overline{g}})=\overline{g}$ on $\partial\Omega$. For any $v\in H_{\overline{g}}^1(\Omega)$, we have $\gamma_2(v-u_{\overline{g}})=0$. Since $\Omega$ is a bounded Lipschitz domain, it follows that $v-u_{\overline{g}}\in H_0^1(\Omega)$. By applying Poincar{\'e}'s inequality we obtain \begin{equation} \label{triangle_inequalities_Hg1_space} \begin{aligned} \abs{\|v\|_{H^1(\Omega)}-\|u_{\overline{g}}\|_{H^1(\Omega)}}&\leq \|v-u_{\overline{g}}\|_{H^1(\Omega)}\leq \sqrt{1+C_P^2}\|\nabla (v-u_{\overline{g}})\|_{L^2(\Omega)}\\ &\leq \sqrt{1+C_P^2}\left(\|\nabla v\|_{L^2(\Omega)}+\|\nabla u_{\overline{g}}\|_{L^2(\Omega)}\right). \end{aligned} \end{equation} After squaring both sides of \eqref{triangle_inequalities_Hg1_space} and using the inequality $2ab\leq a^2+b^2$ for $a,b\in\mathbb R$ we obtain the estimate \begin{equation}\label{lower_bound_H1_seminorm_in_Hg1_space} \|v\|_{H^1(\Omega)}^2-2\|v\|_{H^1(\Omega)}\|u_{\overline{g}}\|_{H^1(\Omega)}+\|u_{\overline{g}}\|_{H^1(\Omega)}^2\leq 2\left(1+C_P^2\right)\left(\|\nabla v\|_{L^2(\Omega)}^2+\|\nabla u_{\overline{g}}\|_{L^2(\Omega)}^2\right). \end{equation} Now, coercivity of $J$ follows by recalling that $B(x,t)\ge 0$ for all $x\in\Omega$ and $t\in\mathbb R$ and using \eqref{lower_bound_H1_seminorm_in_Hg1_space}: \begin{equation}\label{coercivity_of_J_General_PBE} \begin{aligned} J(v)&=\frac{1}{2}a(v,v)+\int_{\Omega}{B(x,v+w)\dd x}-\int_{\Omega}{{\bm f}\cdot \nabla v \dd x}\ge\frac{\epsilon_{\rm{min}}}{2}\|\nabla v\|_{L^2(\Omega)}^2-\|{\bm f}\|_{L^2(\Omega)}\|\nabla v\|_{L^2(\Omega)}\\ &\ge \frac{\epsilon_{\min}}{4\left(1+C_P^2\right)}\left(\|v\|_{H^1(\Omega)}^2-2\|v\|_{H^1(\Omega)}\|u_{\overline{g}}\|_{H^1(\Omega)}+\|u_{\overline{g}}\|_{H^1(\Omega)}^2\right)\\ &\qquad-\frac{\epsilon_{\min}}{2}\|\nabla u_{\overline{g}}\|_{L^2(\Omega)}^2-\|{\bm f}\|_{L^2(\Omega)}\|v\|_{H^1(\Omega)}\to +\infty \text{ whenever } \|v\|_{H^1(\Omega)}\to \infty, \end{aligned} \end{equation} where $\epsilon_{\min} = \inf_{x \in \Omega} \epsilon(x) > 0$. We have proved the existence of a minimizer $u_{\rm{min}}$ of $J$ over the set $H_{\overline g}^1(\Omega)$. Moreover, since $v\mapsto a(v,v)$ is strictly convex on $H_{\overline g}^1(\Omega)$ (by Poincar\'e's inequality applied to differences, which lie in $H_0^1(\Omega)$), $J$ is also strictly convex, and therefore this minimizer is unique.
\end{proof} Now we show that the minimizer $u_{\rm{min}}$ is a solution to the weak formulation \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}, which is not immediate since $J$ is not Gateaux differentiable at any element of $H_{\overline g}^1(\Omega) \cap L^\infty(\Omega)$ (see Section~\ref{Sections_remarks_on_J} below). \begin{proposition} The unique minimizer $u_{\rm{min}}$ of $J$ over $H_{\overline g}^1(\Omega)$ satisfies \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} for $\pazocal{W}=C_c^\infty(\Omega)$ and ${\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)}$.\end{proposition} \begin{proof}We will use the Lebesgue dominated convergence theorem and the fact that at $u_{\rm{min}}$ it holds that $B(x,u_{\rm{min}}+w)\in L^1(\Omega)$. We have that $J(u_{\rm{min}}+\lambda v)-J(u_{\rm{min}})\ge 0$ for all $v\in H_0^1(\Omega)$ and all $\lambda \ge 0$, i.e., \begin{equation*} \begin{aligned} &\frac{1}{2}a\left(u_{\rm{min}}+\lambda v,u_{\rm{min}}+\lambda v\right)+\int_{\Omega}{B\left(x,u_{\rm{min}}+\lambda v+w\right)\dd x}-\int_{\Omega}{\bm f\cdot \nabla (u_{\rm{min}}+\lambda v)\dd x}\\ &\quad-\frac{1}{2}a\left(u_{\rm{min}},u_{\rm{min}}\right)-\int_{\Omega}{B\left(x,u_{\rm{min}}+w\right)\dd x}+\int_{\Omega}{\bm f\cdot \nabla u_{\rm{min}} \dd x}\ge 0, \end{aligned} \end{equation*} which, by using the symmetry of $a(\cdot,\cdot)$, is equivalent to \begin{equation}\label{varying_J_01} \begin{aligned} \lambda a\left(u_{\rm{min}},v\right)+\frac{\lambda^2}{2}a(v,v)+\int_{\Omega}{\left(B\left(x,u_{\rm{min}}+\lambda v+w\right)-B\left(x,u_{\rm{min}}+w\right)\right) \dd x}-\lambda \int_{\Omega}{\bm f\cdot\nabla v \dd x}\ge 0. \end{aligned} \end{equation} Dividing both sides of the above inequality by $\lambda>0$ and letting $\lambda\to 0^+$ we obtain \begin{equation} \label{varying_J_02} a(u_{\rm{min}},v)+\lim_{\lambda \to 0^+}{\frac{1}{\lambda}\int_{\Omega}{B(x,u_{\rm{min}}+\lambda v+w)-B(x,u_{\rm{min}}+w) \dd x}}-\int_{\Omega}{\bm f\cdot\nabla v \dd x}\ge 0. \end{equation} To compute the limit in the second term of \eqref{varying_J_02}, we will apply the Lebesgue dominated convergence theorem.
We have \begin{equation} \label{pointwise_convergence_of_the_variation_of_the_nonlinear_term} \begin{aligned} f_\lambda(x)&:=\frac{1}{\lambda}\Big(B\big(x,u_{\rm{min}}(x)+w(x)+\lambda v(x)\big)-B\big(x,u_{\rm{min}}(x)+w(x)\big)\Big)\\ &\xrightarrow{\lambda\to 0^+}b\big(x,u_{\rm{min}}(x)+w(x)\big)v(x)\quad\text{for a.e.}\quad x\in\Omega. \end{aligned} \end{equation} By the mean value theorem we have \begin{equation*} f_\lambda(x)=b\big(x,u_{\rm{min}}(x)+w(x)+\Xi(x)\lambda v(x)\big)\,v(x),\ \text{ where } \Xi(x)\in (0,1)\ \text{ for all } x\in\Omega, \end{equation*} and hence, if $v\in L^\infty(\Omega)$, we can obtain the following bound on $f_\lambda$ whenever $\lambda \le 1$: \begin{equation}\label{Summable_majorant_for_the_nonlinearity_in_General_PBE} \begin{aligned} \abs{f_\lambda(x)}&=\Bigg|-\frac{4\pi e_0^2}{k_BT}v(x)\sum_{j=1}^{N_{ions}}{\overline{M}_j(x)\xi_j\mathrm{e}^{-\xi_j\big(u_{\rm{min}}(x)+w(x)+\Xi(x)\lambda v(x)\big)}}\Bigg|\\ &\leq \max_{j}{\abs{\xi_j}}\|v\|_{L^\infty(\Omega)}\frac{4\pi e_0^2}{k_BT}\sum_{j=1}^{N_{ions}}{\overline M_j(x)\mathrm{e}^{-\xi_j\big(u_{\rm{min}}(x)+w(x)\big)-\xi_j\Xi(x)\lambda v(x)}}\\ &\leq \max_{j}{\abs{\xi_j}}\max_{j}{\mathrm{e}^{\abs{\xi_j} \|v\|_{L^\infty(\Omega)}} }\|v\|_{L^\infty(\Omega)}\frac{4\pi e_0^2}{k_BT}\sum_{j=1}^{N_{ions}}{\overline M_j(x)\mathrm{e}^{-\xi_j\big(u_{\rm{min}}(x)+w(x)\big)}}\\ &=\max_{j}{\abs{\xi_j}}\max_{j}{\mathrm{e}^{\abs{\xi_j} \|v\|_{L^\infty(\Omega)}} }\|v\|_{L^\infty(\Omega)}B\big(x,u_{\rm{min}}(x)+w(x)\big)\in L^1(\Omega). \end{aligned} \end{equation} From the Lebesgue dominated convergence theorem, by using \eqref{pointwise_convergence_of_the_variation_of_the_nonlinear_term} and \eqref{Summable_majorant_for_the_nonlinearity_in_General_PBE}, it follows that the limit in \eqref{varying_J_02} is equal to $\int_{\Omega}{b(x,u_{\rm{min}}+w)v \dd x}$, and therefore we obtain \begin{equation} \label{Final_ineuqality_after_varying_the_functional_J} a(u_{\rm{min}},v)+\int_{\Omega}{b(x,u_{\rm{min}}+w)v \dd x}-\int_{\Omega}{\bm f\cdot\nabla v \dd x}\ge 0 \, \text{ for all }\, v\in H_0^1(\Omega)\cap L^\infty(\Omega). \end{equation} Since \eqref{Final_ineuqality_after_varying_the_functional_J} also holds with $v$ replaced by $-v$, the inequality is in fact an equality. This means that $u=u_{\rm{min}}$ is the unique solution to the weak formulation \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} for $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ and $\pazocal{W}=C_c^\infty(\Omega)$. \end{proof} In fact $u=u_{\rm{min}}$ is also a solution to \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with $\pazocal{W}=H_0^1(\Omega)$, which we prove in Section \ref{Section_A_priori_Linfty_estimate} with the help of the a priori $L^\infty$ bound obtained there. \subsubsection{Some remarks on the functional \texorpdfstring{$J$}{J}}\label{Sections_remarks_on_J} It is worth noting that for $\overline g=0$, the domain $\dom(J)$ of $J$ as defined in \eqref{definition_of_J_general_form_General_PBE} is a linear subspace of $H_0^1(\Omega)$ for $d\leq 2$ and not a linear subspace of $H_0^1(\Omega)$ if $d\ge 3$. In dimension $d\leq 2$, from the Moser-Trudinger inequality \cite{Trudinger_1967,Best_constants_in_some_exponential_Sobolev_inequalities} we know that $\mathrm{e}^v\in L^2(\Omega)$ for any $v\in H_0^1(\Omega)$ and thus $\mathrm{e}^{\lambda v_1+\mu v_2}\in L^2(\Omega)$ for any $\lambda,\mu\in\mathbb R$ and any $v_1,v_2\in H_0^1(\Omega)$.
On the other hand, if $d\ge 3$, first observe that \[\dom(J)=\big\{v\in H_0^1(\Omega)\,:\, B(x,v+w)\in L^1(\Omega)\big\}.\] For simplicity we consider the case of the PBE, i.e., $B(x,v+w)=\overline{k}^2\cosh(v+w)$. Let us consider an example situation in which $B(0,r) \subset \Omega_{ions} \subset \Omega = B(0,1)$, where $B(0,r)$ denotes the ball in $\mathbb R^d,\,d\ge 3$ with radius $r$ and center at~$0$. We consider the function $v=\ln{\frac{1}{\abs{x}}}\in H_0^1(B(0,1))$. Since $\mathrm{e}^v=\frac{1}{\abs{x}}\in L^1(\Omega_{ions})$ and $\mathrm{e}^{\lambda v}=\frac{1}{\abs{x}^\lambda}\notin L^1(\Omega_{ions})$ for any $\lambda\ge d$, we obtain \begin{equation*} \begin{aligned} \int_{\Omega}{\overline{k}^2\cosh(v+w)\dd x}&= \int_{\Omega_{ions}}{\overline{k}^2_{ions}\frac{\left(\mathrm{e}^{v+w}+\mathrm{e}^{-v-w}\right)}{2} \dd x}\\ &\leq\frac{1}{2}\overline{k}_{ions}^2\mathrm{e}^{\|w\|_{L^\infty(\Omega_{ions})}}\int_{\Omega_{ions}}{\left(\mathrm{e}^v+\mathrm{e}^{-v}\right)\dd x}\\ &\leq \frac{1}{2}\overline{k}_{ions}^2\mathrm{e}^{\|w\|_{L^\infty(\Omega_{ions})}}\Big(\int_{\Omega_{ions}}{\mathrm{e}^v\dd x}+\abs{\Omega_{ions}}\Big)<+\infty \end{aligned} \end{equation*} but for any $\lambda \ge d$ we have \begin{equation*} \int_{\Omega}{\overline{k}^2\cosh(\lambda v+w)\dd x}\ge\frac{1}{2}\int_{\Omega_{ions}}{\overline{k}^2_{ions}\mathrm{e}^{\lambda v+w}\dd x}\ge \frac{1}{2}\overline{k}^2_{ions}\mathrm{e}^{-\|w\|_{L^\infty(\Omega_{ions})}}\int_{\Omega_{ions}}{\mathrm{e}^{\lambda v}\dd x}=+\infty. \end{equation*} This means that $v\in \dom(J)$, but $\lambda v\notin \dom(J)$ for any $\lambda\ge d$. Therefore $\dom(J)$ is not a linear space. However, $\dom(J)\subset H_0^1(\Omega)$ is a convex set. To see this, let $v_1,v_2\in \dom(J)$, i.e., $B(x,v_1+w),\,B(x,v_2+w)\in L^1(\Omega)$. Since $B(x,\cdot)$ is convex it follows that for almost every $x\in\Omega$ and every $\lambda\in [0,1]$ we have \[B(x,\lambda v_1(x)+(1-\lambda)v_2(x)+w(x))\leq \lambda B(x,v_1(x)+w(x))+(1-\lambda)B(x,v_2(x)+w(x)).\] By integrating the above inequality over $\Omega$, since both terms of the right hand side are finite, we get $\lambda v_1+(1-\lambda) v_2\in \dom(J)$ for all $\lambda\in[0,1]$. Analogously, in dimension $d=3$ the functional $\int_{\Omega}{B(x,v+w)\dd x}$ is not Gateaux differentiable at any $u\in H_0^1(\Omega)\cap L^\infty(\Omega)$. In fact, $\int_{\Omega}{B(x,v+w)\dd x}$ is discontinuous at every $u\in H_0^1(\Omega)\cap L^\infty(\Omega)$. To see this, consider any $\Omega_{ions}$ and $\Omega$ such that $B(0,r) \subset \Omega_{ions} \subset \Omega$, and again for simplicity $B(x,v+w)=\overline{k}^2\cosh(v+w)$. We define $z=\psi \abs{x}^{-1/3}$, where $\psi \in C_c^\infty(\Omega)$ is equal to 1 in $B(0,r)$. Then $z\in H_0^1(\Omega)$, but $\mathrm{e}^{\lambda z}\notin L^1(\Omega_{ions})$ for any $\lambda >0$. To see this, notice that for any fixed $\lambda >0$, in a neighborhood of the origin (of size depending on $\lambda$) we have $\mathrm{e}^{\lambda z}> \abs{x}^{-3}$. In this case, for any $u\in H_0^1(\Omega)\cap L^\infty(\Omega)$ and any $\lambda >0$ we have \begin{equation*} \begin{aligned} \int_{\Omega}{\overline{k}^2\cosh(u+\lambda z+w) \dd x}&\ge \frac{1}{2}\int_{\Omega_{ions}}{\overline{k}_{ions}^2\mathrm{e}^{u+\lambda z+w}\dd x}\\&\ge \frac{\overline{k}_{ions}^2\mathrm{e}^{-\|u+w\|_{L^\infty(\Omega_{ions})}}}{2}\int_{\Omega_{ions}}{\mathrm{e}^{\lambda z}\dd x}=+\infty.
\end{aligned} \end{equation*} \subsubsection{A priori \texorpdfstring{$L^\infty$}{Linfty} estimate for the solution of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} and existence with test space $H^1_0$}\label{Section_A_priori_Linfty_estimate} In this section we prove a boundedness result for semilinear problems resembling \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} but under slightly more general assumptions on the nonlinearity, which is not necessarily assumed to be monotone. While such a boundedness result is not necessary to obtain existence of a solution $u$ to \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_W1p_spaces}, it is important for the PBE for two reasons. The first is that our uniqueness analysis for \eqref{weak_formulation_General_PBE_W1p_spaces} requires $x \mapsto b(x,\phi(x))\in L^1(\Omega)$, which holds for $u \in L^\infty(\Omega)$. The second is that the 2-term and 3-term splittings are often used numerically in practice, where having standard $H^1$ formulations is advantageous. Results on a priori $L^\infty$ estimates for linear elliptic equations of second order appear for example in \cite{Stampacchia_1965, Kinderlehrer_Stampacchia} and for nonlinear elliptic equations in \cite{Boccardo_Murat_Puel_1992, Boccardo_Segura_de_Leon_Trombetti_2001, Trombetti_2003, Boccardo_Brezis_2003, Boccardo_Dall_Aglio_Orsina_1998}. Vital techniques in the analysis of these papers are different adaptations of the $L^\infty$ regularity procedure introduced by Stampacchia; these make use of families of `nonlinear' test functions $G_k(u)$ derived from the solution $u$. We can write \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} in the generic semilinear form \begin{equation} \label{generic_semilinear_problem} A(u)+H(x,u,\nabla u)=0\quad \text{ in } \Omega,\,u=\overline{g}\quad\text{on }\partial\Omega. \end{equation} Now, to obtain information from such a testing procedure on \eqref{generic_semilinear_problem} one typically requires either bounds of the form $H(x,t,\xi)\leq C(x) + h(|t|)|\xi|^p$ with well-behaved $h$ that ensure the effect of the nonlinearity can be dominated by the second order term (see \cite{Boccardo_Murat_Puel_1992} for the case $h, C$ constant and $A$ a Leray-Lions differential operator, or \cite{Boccardo_Segura_de_Leon_Trombetti_2001} with $h \in L^1(\mathbb R)$ and $C=0$), a sign condition of the form $H(x,t,\xi)t \geq 0$ (see \cite{Brezis_Strauss_1973} for some cases without gradient terms), or both \cite{Bensoussan_Boccardo_Murat_1988}. Most works in the literature seem to be centered on these assumptions, but in our situation neither is available: the nonlinearity $b(x,\cdot)$ has exponential growth, and when the ionic solution is not charge neutral it also does not follow the sign of its second argument. However, the full strength of the sign condition is rarely needed; for example, in \cite[Sec.~3]{Brezis_Strauss_1973} it is introduced as a simplification of more detailed conditions involving the actual test functions to be used. Starting from the observation that the sign condition is clearly satisfied if $b(x,t)$ is nondecreasing in $t$ and $b(x,w)=0$, we relax this by bounds of the type $c_1(x,t)\leq b(x,t)\leq c_2(x,t)$ with $c_1, c_2$ nondecreasing in their second argument and with adequate integrability on the functions $c_1(x,w)$, $c_2(x,w)$.
These conditions are applicable to the general form of $b$ in the PBE \eqref{GPBE_dimensionless} and ensure that the effect of the nonlinear term when testing with $G_k(u)$ is bounded below by a fixed $L^1$ function, which is just enough to finish the proof. Since the dependence on $\nabla u$ in \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} consists of a linear term with high summability, it can be taken care of in an ad hoc manner and does not pose major problems. In the result presented below, we assume a linear operator $\bm A$ and a nonlinearity $b(x,t)$ which does not depend on the gradient of the solution and which is not assumed to be nondecreasing in the second argument. We allow for a linear gradient term and a nonhomogeneous Dirichlet boundary condition on $\partial\Omega$ given by $g$, covering the case of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}. With the assumptions we make on $b$, we prove that every weak solution $u\in H_{g}^1(\Omega)$ must be in $L^\infty(\Omega)$ with $\|u\|_{L^\infty(\Omega)}\leq \gamma$, where $\gamma$ depends only on the data of the problem. As in \cite{Boccardo_Murat_Puel_1992}, our $L^\infty$ result seems to be optimal in the sense that when $b(x,\cdot)$ is a linear term, $u\in L^\infty(\Omega)$ for $s>d$ and $r>\frac{d}{2}$, which coincides with the classical (optimal) results of Stampacchia, De Giorgi, and Moser in the linear case (see, e.g., the references in \cite{Boccardo_Murat_Puel_1992}). In \cite{Boccardo_Segura_de_Leon_Trombetti_2001, Trombetti_2003}, the authors prove $L^\infty$ estimates on the solution of very general nonlinear elliptic equations, but with a nonlinear zeroth order term subject to a growth condition which does not seem to cover exponential nonlinearities in $u$, such as that of the general PBE, and with homogeneous Dirichlet boundary conditions. In \cite{Boccardo_Dall_Aglio_Orsina_1998, Boccardo_Brezis_2003}, $L^\infty$ estimates are proved for nonlinear elliptic equations with homogeneous Dirichlet boundary conditions and with degenerate coercivity but without a nonlinear zeroth order term. \begin{definition}[see, e.g. Definition 3.5 in \cite{Dacorogna}]\label{Definition_Caratheodory_function} Let $\Omega\subset \mathbb R^n$ be an open set and let $f:\Omega\times \mathbb R\to\mathbb R\cup \{+\infty\}$. Then $f$ is said to be a Carath\'eodory function if \begin{itemize} \item [(i)] $t\mapsto f(x,t)$ is continuous for almost every $x\in\Omega$, \item [(ii)] $x\mapsto f(x,t)$ is measurable for every $t\in\mathbb R$. \end{itemize} \end{definition} If $f$ is a Carath\'eodory function and $u:\Omega\to\mathbb R$ is measurable, then it follows that the function $g:\Omega\to \mathbb R\cup \{+\infty\}$ defined by $g(x)=f(x,u(x))$ is measurable (see, e.g., Proposition 3.7 in \cite{Dacorogna}). \begin{theorem}[A priori $L^\infty$ estimate]\label{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem} Let $\Omega\subset\mathbb R^d,\,d\ge 2$ be a bounded Lipschitz domain and let $b(x,t):\Omega\times\mathbb R\to\mathbb R$ be a Carath\'eodory function, not necessarily nondecreasing in its second argument, such that \begin{equation} \label{bounds_on_b} c_1(x,t)\leq b(x,t)\leq c_2(x,t)\quad\text{for a.e.\ } x\in\Omega \text{ and all } t\in\mathbb R, \end{equation} where $c_1, c_2:\Omega\times \mathbb R\to \mathbb R$ are Carath\'eodory functions which are nondecreasing in the second argument for a.e.\ $x\in\Omega$.
Let $a(u,v)=\int_{\Omega}{\bm A\nabla u\cdot\nabla v \dd x}$, where ${\bm A=(a_{ij})}$, $a_{ij}(x)\in L^\infty(\Omega)$, and ${\bm A}$ satisfies the uniform ellipticity condition \eqref{uniform_ellipticity_of_A} for some positive constant $\underline \alpha$. Finally, let \begin{equation} \label{L_infty_estimate_more_general_nonlinear_problem} \begin{aligned} &u\in H_0^1(\Omega)\text{ be such that }b(x,u+\omega)v \in L^1(\Omega)\,\text{ for all } v\in \pazocal{W} \text{ and } \\ &a(u,v)+\int_{\Omega}{b(x,u+\omega)v \dd x}=\int_{\Omega}{\left(f_0v+\bm f\cdot\nabla v \right)\dd x}\,\text{ for all } v\in \pazocal{W}, \end{aligned} \end{equation} where $\bm f=(f_1,\ldots,f_d)$ and the test space $\pazocal{W}$ can be either $C_c^\infty(\Omega)$, $H_0^1(\Omega)\cap L^\infty(\Omega)$, or~$H_0^1(\Omega)$. Provided that $\omega \in L^\infty(\Omega)$, $\bm f \in \left[L^s(\Omega)\right]^d$ with $s>d$ and $f_0$, $c_1(x,\omega)$, $c_2(x,\omega)\in L^r(\Omega)$ with $r>d/2$, then $\|u\|_{L^\infty(\Omega)}\leq\gamma$, where $\gamma$ depends only on the data, i.e., $\underline \alpha$, $|\Omega|$, $\|a_{ij}\|_{L^\infty(\Omega)}$, $\|\omega\|_{L^\infty(\Omega)}$, $\|\bm f\|_{L^s(\Omega)}$, $\|f_0\|_{L^r(\Omega)}$, $\|c_1(x,\omega)\|_{L^r(\Omega)}$, $\|c_2(x,\omega)\|_{L^r(\Omega)}$. \end{theorem} \begin{remark} We point out that in Theorem~\ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem} the bilinear form $a(\cdot,\cdot)$, the nonlinearity $b$ and the functions $f_0$ and $\bm f$ are more general than those for the GPBE in \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}. Moreover, we use the notation $\omega$ to distinguish it from $w$ appearing in \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}, since they do not refer to the same object. Namely, below we apply Theorem~\ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem} to the particular problem \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with $\omega=\chi_{\Omega_{ions}} w= \chi_{\Omega_{ions}} G$ in the case of the 2-term splitting and with $\omega=0$ in the case of the 3-term splitting. \end{remark} \begin{remark} Since $c_1(x,\cdot)$ and $c_2(x,\cdot)$ are nondecreasing it follows that \begin{equation} c_i\left(x,-\|\omega\|_{L^\infty(\Omega)}\right)\leq c_i(x,\omega)\leq c_i\left(x,\|\omega\|_{L^\infty(\Omega)}\right)\,\text{ for }\, i=1,2. \end{equation} Then the condition that $c_1(x,\omega),\,c_2(x,\omega)\in L^r(\Omega)$ where $r>d/2$ can be achieved if $c_1(x,t)$ and $c_2(x,t)$ define functions in $L^r(\Omega)$ for every $t\in\mathbb R$. For example, this condition will be fulfilled if $c_1(x,t)=k_1(x)a_1(t)$ and $c_2(x,t)=k_2(x)a_2(t)$ where $k_1,k_2\ge 0$, $k_1,k_2\in L^r(\Omega)$ and $a_1,a_2:\mathbb R\to\mathbb R$ are nondecreasing and continuous functions. If $b(x,\cdot)$ is nondecreasing for almost every $x\in\Omega$ and if $b(x,\omega)\in L^r(\Omega)$ with $r>d/2$, then $c_1(x,\cdot)$ and $c_2(x,\cdot)$ can be taken equal to $b(x,\cdot)$. Notice also that there is neither a sign condition nor a growth condition on the nonlinearity $b(x,\cdot)$.
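As a concrete illustration (schematic, and not the general form of $b$ treated in \eqref{GPBE_dimensionless}), consider the classical nonlinearity of a symmetric 1:1 electrolyte,
\begin{equation*}
b(x,t)=\chi_{\Omega_{ions}}(x)\,\overline k^2(x)\sinh(t)\quad\text{with }0\leq \overline k^2\in L^\infty(\Omega).
\end{equation*}
Here $b(x,\cdot)$ is nondecreasing, so one may take $c_1=c_2=b$ in \eqref{bounds_on_b}, and for $\omega\in L^\infty(\Omega)$ we have $\abs{b(x,\omega)}\leq \|\overline k^2\|_{L^\infty(\Omega)}\sinh\left(\|\omega\|_{L^\infty(\Omega)}\right)$, so that $c_1(x,\omega), c_2(x,\omega)\in L^\infty(\Omega)\subset L^r(\Omega)$ for every $r>d/2$.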
\end{remark} \begin{remark}[Nonhomogeneous Dirichlet boundary condition]\label{nonhomogeneous_Dirichlet_BC} Let $g$ be in the trace space $W^{1-1/s,s}(\partial\Omega)$ for some $s>d$ (in particular, this is true if $g\in C^{0,1}(\partial\Omega)$; see Lemma~\ref{Lemma_extension_pf_Lipschitz}) and let $u_g\in W^{1,s}(\Omega)\subset L^\infty(\Omega)$ be such that $\gamma_s(u_g)=g =\gamma_2(u_g)$. Suppose that $u$ satisfies \begin{equation} \label{L_infty_estimate_more_general_nonlinear_problem_nonhomogeneous_Dirichlet_BC} \begin{aligned} &u\in H_g^1(\Omega)\text{ such that }b(x,u+\omega)v \in L^1(\Omega)\,\text{ for all } v\in \pazocal{W} \text{ and } \\ &a(u,v)+\int_{\Omega}{b(x,u+\omega)v \dd x}=\int_{\Omega}{\left(f_0v+\bm f\cdot\nabla v \right)\dd x}\,\text{ for all } v\in \pazocal{W}. \end{aligned} \end{equation} Then we can apply Theorem~\ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem} to the homogenized version of problem \eqref{L_infty_estimate_more_general_nonlinear_problem_nonhomogeneous_Dirichlet_BC}, that is \begin{equation} \label{L_infty_estimate_more_general_nonlinear_problem_nonhomogeneous_Dirichlet_BC_homogenized} \begin{aligned} &\text{Find }u_0\in H_0^1(\Omega)\text{ such that }b(x,u_0+u_g+\omega)v \in L^1(\Omega)\,\text{ for all } v\in \pazocal{W} \text{ and } \\ &a(u_0,v)+\int_{\Omega}{b(x,u_0+u_g+\omega)v \dd x}=\int_{\Omega}{\left(f_0v+\left(\bm f-\bm A\nabla u_g\right)\cdot\nabla v \right)\dd x}\,\text{ for all } v\in \pazocal{W} \end{aligned} \end{equation} with $\omega\rightarrow u_g+\omega\in L^\infty(\Omega)$, $\bm f\rightarrow \bm f-\bm A\nabla u_g\in \left[L^s(\Omega)\right]^d$. \end{remark} \begin{remark}\label{additional_regularity_of_general_semilinear_problem} From Theorem~\ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem} and Remark~\ref{nonhomogeneous_Dirichlet_BC} it follows that the nonlinearity evaluated at the solution $u$ of \eqref{L_infty_estimate_more_general_nonlinear_problem_nonhomogeneous_Dirichlet_BC} (if it exists) is in $L^\infty(\Omega)$, i.e., $b(x,u_0+u_g+\omega)\in L^\infty(\Omega)$. Therefore, if a solution exists, then the classical regularity results of De Giorgi-Nash-Moser (see, e.g., Theorem 2.12 in \cite{Bei_Hu_Blow_up_theories_for_semilinear_parabolic_equations_2011}, p. 65, Theorem 3.5 in \cite{YZ_Chen_LC_Wu_Second_order_elliptic_equations_and_systems_1998}) for linear elliptic equations can be applied to the unique solution~$z_0$ (by the Lax-Milgram Theorem) of the linear equation arising from \eqref{L_infty_estimate_more_general_nonlinear_problem_nonhomogeneous_Dirichlet_BC_homogenized} \begin{equation} \label{solution_of_nonlinear_problem_satisfies_a_linear_one} a(z_0,v)=\int_{\Omega}{\left[\left(-b(x,u_0+u_g+\omega)+f_0\right)v+\left(\bm f-\bm A\nabla u_g\right)\cdot\nabla v \right]\dd x}\, \text{ for all }\, v\in H_0^1(\Omega) \end{equation} and conclude that $z_0\equiv u_0$ is H\"older continuous and so is $u=u_g+u_0$, since $u_g\in W^{1,s}(\Omega)\subset C^{0,\lambda}(\overline\Omega)$ for $0<\lambda\leq 1-d/s$.
In addition, if we assume that $\bm A$ satisfies the assumptions in Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} with $\Omega_m$, $\Gamma=\partial\Omega_m$, and $\Omega$ as defined there, then we can apply Theorem~\ref{Thm_Optimal_Regularity_for_Elliptic_Interface_Problems} and obtain a $p>3$ such that $-\nabla\cdot \bm A \nabla$ is a topological isomorphism between $W_0^{1,q}(\Omega)$ and $W^{-1,q}(\Omega)$ for all $q\in (d,p)$ (since $d\in\{2,3\}$). By the assumptions on $f_0$ and $\bm f$ and taking into account the regularity of $u_g$, the right-hand side of \eqref{solution_of_nonlinear_problem_satisfies_a_linear_one} belongs to $W^{-1,q}(\Omega)$ for some $q>d$ depending on $s$ and $r$. We conclude that $z_0\in W_0^{1,\bar q}(\Omega)$ for some $\bar q>d$ (which also implies H\"older continuity of $z_0$ and consequently of $u$). \end{remark} \begin{proof}[Proof of Theorem~\ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem}] The proof is based on techniques introduced by Stampacchia; see, e.g., the proof of Theorem B.2 in \cite{Kinderlehrer_Stampacchia}. There the $L^\infty$ estimate is proved for a linear elliptic problem tested with the space $V=H_0^1(\Omega)$. Similarly to \cite{Kinderlehrer_Stampacchia}, we construct the following test functions \begin{equation} \label{the_functions_G_k} u_k:=G_k(u)=\left\{ \begin{array}{lll} u-k,\,& \text{ a.e on } \{u(x)>k\},\\ 0,\,& \text{ a.e on } \{\abs{u(x)}\leq k\},\\ u+k,\,& \text{ a.e on } \{u(x)<-k\}, \end{array} \right. \end{equation} \normalsize for any $k\ge 0$, where $u_0:= u$. Since $G_k(t)=\text{sign}(t)(\abs{t}-k)^+$ is Lipschitz continuous with $G_k(0)=0$ and $u\in H_0^1(\Omega)$, by Stampacchia's theorem (e.g., see \cite{Kinderlehrer_Stampacchia,Gilbarg_Trudinger_book_2001}) it follows that $G_k(u)\in H_0^1(\Omega)\,\text{ for all }\, k\ge 0$. Moreover, the weak partial derivatives are given by \begin{equation} \label{weak_partial_derivative_of_G_k} \frac{\partial u_k}{\partial x_i}=\left\{ \begin{array}{lll} \frac{\partial u}{\partial x_i},\,& \text{ a.e on } \{u(x)>k\},\\ 0,\,& \text{ a.e on } \{\abs{u(x)}\leq k\},\\ \frac{\partial u}{\partial x_i},\,& \text{ a.e on } \{u(x)<-k\}. \end{array} \right. \end{equation} We have the Sobolev embedding $H^1(\Omega)\hookrightarrow L^q(\Omega)$, where $q<\infty$ for $d=2$ and $q=\frac{2d}{d-2}$ for $d>2$. With $q'$ we will denote the H{\"o}lder conjugate to $q$. Thus $q'=\frac{q}{q-1}>1$ for $d=2$, and $q'=\frac{2d}{d+2}$ for $d>2$. With $C_E$ we denote the embedding constant in the inequality $\|u\|_{L^q(\Omega)}\leq C_E\|u\|_{H^1(\Omega)}$, which depends only on the domain $\Omega$, $d$, and $q$. \textbf{Testing with $u_k$:} By applying Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978}, we will show that we can test equation \eqref{L_infty_estimate_more_general_nonlinear_problem} with $u_k$ for any $k>0$, as well as with $u$, which is not obvious because $u_k$ and $u$ need not be in the test space $\pazocal{W}$.
For this, observe that \begin{equation} \label{the_extension_of_Tb_is_equal_to_the_RHS} \int_{\Omega}{b(x,u+\omega)v \dd x}=-a(u,v)+\int_{\Omega}{\left(f_0v+\bm f\cdot\nabla v\right)\dd x}\,\text{ for all }\, v\in \pazocal{W} \end{equation} and that the right-hand side of \eqref{the_extension_of_Tb_is_equal_to_the_RHS} defines a bounded linear functional over $H_0^1(\Omega)$: \begin{equation} \label{boundedness_of_a} \abs{a(u,v)}\leq \Bigg(\sum_{i,j=1}^{d}{\|a_{ij}\|_{L^\infty(\Omega)}}\Bigg)\|u\|_{H^1(\Omega)}\|v\|_{H^1(\Omega)}\,\text{ for all }\, v\in H_0^1(\Omega) \end{equation} and \begin{equation} \label{boundedness_of_right_hand_side} \begin{aligned} &\Big|\int_{\Omega}{\left(f_0v+\bm f\cdot \nabla v\right)\dd x}\Big|\leq \|f_0\|_{L^{q'}(\Omega)}\|v\|_{L^q(\Omega)}+\|\bm f\|_{L^2(\Omega)}\|\nabla v\|_{L^2(\Omega)}\\ \leq& C_E\|f_0\|_{L^{q'}(\Omega)}\|v\|_{H^1(\Omega)}+\|\bm f\|_{L^2(\Omega)}\|v\|_{H^1(\Omega)}\,\text{ for all }\, v\in H^1(\Omega). \end{aligned} \end{equation} From \eqref{the_extension_of_Tb_is_equal_to_the_RHS}, \eqref{boundedness_of_a}, and \eqref{boundedness_of_right_hand_side}, it is clear that the linear functional $T_b$ defined by the formula $\langle T_b,v\rangle=\int_{\Omega}{b(x,u+\omega)v \dd x}\,\text{ for all }\, v\in \pazocal{W}$ is bounded in the norm of $H^1(\Omega)$ over the dense subspace\footnote{$\pazocal{W}$ is a dense subspace of $H_0^1(\Omega)$ when $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ or $\pazocal{W}=C_c^\infty(\Omega)$.} $\pazocal{W}$ and therefore it can be uniquely extended by continuity to a functional $\overline T_b\in H^{-1}(\Omega)$ over the whole space $H_0^1(\Omega)$. Moreover, the fact that $\int_{\Omega}{b(x,u+\omega)v \dd x}$ is finite for all $v \in C_c^\infty(\Omega)$ implies that $b(x,u+\omega) \in L^1_{loc}(\Omega)$. Therefore, if we show that $b(x,u+\omega)u_k\ge f_k(x)$ for some function $f_k\in L^1(\Omega)$, then, taking into account Remark \ref{remark_on_the_theorem_of_Brezis_and_Browder_for_the_extension_of_bdd_linear_functional_1978}, we may apply Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978} and conclude that $b(x,u+\omega)u_k\in L^1(\Omega)$ and that $\langle \overline T_b, u_k\rangle=\int_{\Omega}{b(x,u+\omega)u_k\dd x}$. Since by density the extension $\overline T_b$ is also equal to the right-hand side of \eqref{the_extension_of_Tb_is_equal_to_the_RHS}, we will also have that \begin{equation} \label{the_weak_formulation_tested_with_the_special_test_function} \begin{aligned} a(u,u_k)=&-\int_{\Omega}{b(x,u+\omega)u_k \dd x}+\int_{\Omega}{\left(f_0u_k +\bm f\cdot\nabla u_k\right) \dd x} \,\text{ for all }\, k\ge 0. \end{aligned} \end{equation} By using the definition \eqref{the_functions_G_k} of $u_k$ we can write \begin{equation*} b(x,u+\omega)u_k= \left\{ \begin{array}{lll} b(x,u+\omega)(u-k),\,& \text{ a.e on } \{u(x)>k\},\\ 0,\,& \text{ a.e on } \{\abs{u(x)}\leq k\},\\ b(x,u+\omega)(u+k),\,& \text{ a.e on } \{u(x)<-k\}. \end{array} \right. \end{equation*} Therefore, on the set $\{u(x)>k\}$ we obtain the estimate \begin{equation} \label{inequality_for_b_on_the_set_where_u_g_greater_than_k} b(x,u+\omega)(u-k)\ge c_1(x,u+\omega)(u-k)\ge c_1(x,\omega)(u-k), \end{equation} and on the set $\{u(x)<-k\}$ the estimate \begin{equation} \label{inequality_for_b_on_the_set_where_u_g_less_than_k} b(x,u+\omega)(u+k)\ge c_2(x,u+\omega)(u+k)\ge c_2(x,\omega)(u+k).
\end{equation} If we define the function $f_k(x)$ through the equality \begin{equation} f_k(x):= \left\{ \begin{array}{lll} c_1(x,\omega(x))(u(x)-k),\,& \text{ a.e on } \{u(x)>k\},\\ 0,\,& \text{ a.e on } \{\abs{u(x)}\leq k\},\\ c_2(x,\omega(x))(u(x)+k),\,& \text{ a.e on } \{u(x)<-k\}, \end{array} \right. \end{equation} then $f_k$ will be in $L^1(\Omega)$ if $c_1(x,\omega)(u-k)$ and $c_2(x,\omega)(u+k)\in L^1(\Omega)$, because \begin{equation*} \abs{f_k(x)}\leq \abs{c_1(x,\omega(x))(u(x)-k)}+\abs{c_2(x,\omega(x))(u(x)+k)} \text{ a.e }x\in\Omega. \end{equation*} To ensure that $c_1(x,\omega)(u-k)\in L^1(\Omega)$ and $c_2(x,\omega)(u+k)\in L^1(\Omega)$, it is enough to require that $c_1(x,\omega)$, $c_2(x,\omega)\in L^{q'}(\Omega)$, which is true by assumption since $r>q'$ (for $d\ge 3$ one has $r>d/2>q'$, while for $d=2$ one may choose $q$ large enough that $q'<r$). In this case, it follows that $b(x,u+\omega)u_k\in L^1(\Omega)$ for each $k\ge 0$ and by Theorem~\ref{Thm_Brezis_Browder_for_the_extension_of_bdd_linear_functional_1978} \eqref{the_weak_formulation_tested_with_the_special_test_function} holds. \textbf{Estimation of the terms in \eqref{the_weak_formulation_tested_with_the_special_test_function}:} Now, the goal is to show that the measure of the set $A(k)$ becomes zero for all $k\ge k_1>0$, where for $k\ge 0$ the set $A(k)$ is defined by \begin{equation*} A(k):=\{x\in\Omega\,:\,|u(x)|>k\}. \end{equation*} This would mean that $\abs{u}\leq k_1$ for almost every $x\in\Omega$. The idea to show this is to obtain an inequality of the form \eqref{Lemma_B1_main_inequality} in Lemma~\ref{Lemma_B1_Kinderlehrer_Stampacchia} for the nonnegative and nonincreasing function $\Theta(k):=\abs{A(k)}$. To obtain such an inequality we estimate from below the term on the left-hand side of \eqref{the_weak_formulation_tested_with_the_special_test_function} and from above the terms on the right-hand side of \eqref{the_weak_formulation_tested_with_the_special_test_function}. First, by using \eqref{inequality_for_b_on_the_set_where_u_g_greater_than_k} and \eqref{inequality_for_b_on_the_set_where_u_g_less_than_k} we observe that for all $k\ge 0$ it holds \begin{equation} \label{intermediate_inequality_01_boundedness_for_General_Semilinear_Problem} \begin{aligned} &\int_{\Omega}{b(x,u+\omega)u_k \dd x}=\int_{A(k)}{b(x,u+\omega)u_k \dd x}\\ =&\int_{\{u>k\}}{b(x,u+\omega)(u-k) \dd x}+\int_{\{u<-k\}}{b(x,u+\omega)(u+k) \dd x}\\ \ge & \int_{\{u>k\}}{c_1(x,\omega)(u-k)\dd x}+\int_{\{u<-k\}}{c_2(x,\omega)(u+k)\dd x}=\int_{A(k)}{c(x,\omega)u_k\dd x}, \end{aligned} \end{equation} where the function $c:\Omega\times \mathbb R\to \mathbb R\cup \{+\infty\}$ is defined by \begin{equation*} c(x,\omega(x)):= \left\{ \begin{array}{lll} c_1(x,\omega(x)),\,& \text{ a.e on } \{u(x)\ge 0\},\\ c_2(x,\omega(x)),\,& \text{ a.e on } \{u(x)<0\}. \end{array} \right. \end{equation*} Now, we estimate the left-hand side of \eqref{the_weak_formulation_tested_with_the_special_test_function} from below.
First by using the expression \eqref{weak_partial_derivative_of_G_k} for the weak partial derivatives of $u_k$, then the coercivity of $a(\cdot,\cdot)$, and finally Poincar\'e's inequality, we obtain \begin{equation} \label{intermediate_inequality_02_boundedness_for_General_Semilinear_Problem} \begin{aligned} &a(u,u_k)=\int_{\Omega}{\bm A\nabla u\cdot \nabla u_k \dd x}=a(u_k,u_k)\ge \underline\alpha\|\nabla u_k\|_{L^2(\Omega)}^2\ge \frac{\underline\alpha}{C_P^2+1} \|u_k\|_{H^1(\Omega)}^2 \end{aligned} \end{equation} By combining \eqref{the_weak_formulation_tested_with_the_special_test_function} with the estimates \eqref{intermediate_inequality_01_boundedness_for_General_Semilinear_Problem} and \eqref{intermediate_inequality_02_boundedness_for_General_Semilinear_Problem} we obtain the intermediate estimate \begin{equation} \label{LHS_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \begin{aligned} \frac{\underline\alpha}{C_P^2+1} \|u_k\|_{H^1(\Omega)}^2&\leq \absb{\int_{A(k)}c(x,\omega)u_k \dd x}+\absb{\int_{A(k)}f_0u_k \dd x}+\absb{\int_{A(k)}\bm f\cdot\nabla u_k \dd x}. \end{aligned} \end{equation} We continue by estimating from above all terms on the right-hand side of \eqref{LHS_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}. By applying H\"older's and Poincar\'e's inequalities we obtain \begin{equation} \label{third_term1_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \begin{aligned} \absb{\int_{A(k)}f_0u_k \dd x}&\leq \|f_0\|_{L^{q'}(A(k))}\|u_k\|_{L^{q}(\Omega)}\leq C_E \|f_0\|_{L^{q'}(A(k))}\|u_k\|_{H^1(\Omega)}. \end{aligned} \end{equation} Thus if $f_0\in L^r(\Omega)$ with $r>q'$, again by using H\"older's inequality we obtain \begin{equation*} \|f_0\|_{L^{q'}(A(k))}^{q'}\leq\|f_0\|_{L^r(A(k))}^{q'}\abs{A(k)}^{\frac{r-q'}{r}}. \end{equation*} By combining the last estimate with \eqref{third_term1_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}, we obtain \begin{equation} \label{third_term2_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \absb{\int_{A(k)}{f_0u_k \dd x}}\leq C_E\|f_0\|_{L^r(\Omega)}\abs{A(k)}^{\frac{r-q'}{rq'}} \|u_k\|_{H^1(\Omega)}. \end{equation} Similarly, we estimate ($r>q'$) \begin{equation} \label{first_term_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \begin{aligned} \absb{\int_{A(k)}c(x,\omega)u_k \dd x}&\leq \|c(x,\omega)\|_{L^{q'}(A(k))}\|u_k\|_{L^{q}(\Omega)}\\&\leq C_E\abs{A(k)}^{\frac{r-q'}{rq'}}\|c(x,\omega)\|_{L^r(\Omega)}\|u_k\|_{H^1(\Omega)}. 
\end{aligned} \end{equation} We continue with the estimation of the third term on the right-hand side of \eqref{LHS_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}: \begin{equation} \label{fourth_term1_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \absb{\int_{A(k)}\bm f\cdot \nabla u_k \dd x}\leq \|\bm{f}\|_{L^2(A(k))}\|u_k\|_{H^1(\Omega)}. \end{equation} If $\bm f\in \left[L^s(\Omega)\right]^d$ with $s>2$, by using H\"older's inequality we obtain \begin{equation*} \|\bm{f}\|_{L^2(A(k))}^2=\int_{A(k)}{\underbrace{\abs{\bm{f}}^2}_{\in L^{\frac{s}{2}}(\Omega)}1\dd x}\leq \Big(\int_{A(k)}{\abs{\bm{f}}^s \dd x}\Big)^{\frac{2}{s}}\Big(\int_{A(k)}{1 \dd x}\Big)^{\frac{s-2}{s}}=\|\bm{f}\|_{L^s(A(k))}^2|A(k)|^{\frac{s-2}{s}}, \end{equation*} and hence by combining with \eqref{fourth_term1_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}, we arrive at the estimate \begin{equation} \label{fourth_term2_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \absb{\int_{A(k)}{\bm f\cdot \nabla u_k \dd x}}\leq \|\bm{f}\|_{L^s(\Omega)}\abs{A(k)}^{\frac{s-2}{2s}}\|u_k\|_{H^1(\Omega)}. \end{equation} Combining \eqref{LHS_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} with the estimates \eqref{third_term2_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}, \eqref{first_term_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}, \eqref{fourth_term2_in_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} for the right-hand side terms in \eqref{LHS_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem}, and then dividing by $\|u_k\|_{H^1(\Omega)}$, we obtain \begin{equation} \label{LHS_RHS2_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \begin{split} &\frac{\underline\alpha}{C_P^2+1} \|u_k\|_{H^1(\Omega)}\\ &\qquad \leq C_E\|c(x,\omega)\|_{L^r(\Omega)}\abs{A(k)}^{\frac{r-q'}{rq'}}+ C_E\|f_0\|_{L^r(\Omega)}\abs{A(k)}^{\frac{r-q'}{rq'}}+\|\bm{f}\|_{L^s(\Omega)}\abs{A(k)}^{\frac{s-2}{2s}}. \end{split} \end{equation} It remains to estimate the left-hand side of \eqref{LHS_RHS2_in_the_L_infty_estimate_of_the_general_nonlinear_problem} from below in terms of the measure of the set $A(h)$ for $h>k$. We use again the Sobolev embedding theorem and the fact that $A(k)\supset A(h)$ for all $h>k\ge 0$: \begin{equation} \label{intermediate_inequality_03_boundedness_for_General_Semilinear_Problem} \begin{aligned} &\|u_k\|_{H^1(\Omega)}\ge \frac{1}{C_E}\|u_k\|_{L^{q}(\Omega)}=\frac{1}{C_E}\Big(\int_{\Omega}{\abs{u_k}^{q}\dd x}\Big)^{\frac{1}{q}}=\frac{1}{C_E}\Big(\int_{A(k)}{|\underbrace{\abs{u}-k}_{>0}|^q \dd x}\Big)^{\frac{1}{q}}\\ &= \frac{1}{C_E}\Big(\int_{A(k)\setminus A(h)}{\left(\abs{u}-k\right)^q \dd x}+\int_{A(h)}{\left(\abs{u}-k\right)^q \dd x}\Big)^{\frac{1}{q}}\\ &\ge \frac{1}{C_E}\Big(\int_{A(h)}{(h-k)^q \dd x}\Big)^{\frac{1}{q}}=\frac{1}{C_E}(h-k)\abs{A(h)}^{\frac{1}{q}}.
\end{aligned} \end{equation} From \eqref{LHS_RHS2_in_the_L_infty_estimate_of_the_general_nonlinear_problem} and \eqref{intermediate_inequality_03_boundedness_for_General_Semilinear_Problem} it follows that \begin{equation} \label{LHS_RHS3_in_the_L_infty_estimate_of_the_general_nonlinear_problem} \begin{aligned} &(h-k)\abs{A(h)}^{\frac{1}{q}}\\ &\quad\leq \frac{C_E(C_P^2+1)}{\underline\alpha} \Big[C_E\|c(x,\omega)\|_{L^r(\Omega)}\abs{A(k)}^{\frac{r-q'}{rq'}}+C_E\|f_0\|_{L^r(\Omega)}\abs{A(k)}^{\frac{r-q'}{rq'}}+\|\bm{f}\|_{L^s(\Omega)}\abs{A(k)}^{\frac{s-2}{2s}}\Big]\\ &\quad\leq C_M\Big(\abs{A(k)}^{\frac{s-2}{2s}}+\abs{A(k)}^{\frac{r-q'}{rq'}}\Big), \end{aligned} \end{equation} where \small \begin{equation*} C_M:=\frac{C_E(C_P^2+1)}{\underline\alpha}\max{\left\{C_E\left(\|c(x,\omega)\|_{L^r(\Omega)}+\|f_0\|_{L^r(\Omega)}\right),\|\bm{f}\|_{L^s(\Omega)}\right\}}. \end{equation*} \normalsize We have obtained the following inequality for the measure of $A(k)$: \begin{equation} \label{final_inequaliy1_for_Dk} (h-k)\abs{A(h)}^{\frac{1}{q}}\leq C_M\Big(\abs{A(k)}^{\frac{s-2}{2s}}+\abs{A(k)}^{\frac{r-q'}{rq'}}\Big)\,\text{ for all }\, h>k\ge 0. \end{equation} Since $u$ is summable, it follows that $\abs{A(k)}=\text{meas}\left(\{x\in\Omega: \abs{u(x)}>k\}\right)$ decreases monotonically to $0$ as $k\to \infty$. For this reason, there exists a $k_0>0$ such that ${\abs{A(k)}\leq 1\,\text{ for all }\, k\ge k_0}$ (if $\abs{\Omega}\leq 1$, this is satisfied for all $k\ge 0$). Therefore, \eqref{final_inequaliy1_for_Dk} takes the form \begin{equation*} (h-k)\abs{A(h)}^{\frac{1}{q}}\leq 2 C_M\abs{A(k)}^{\min{\{\frac{s-2}{2s},\frac{r-q'}{rq'}\}}}\,\text{ for all } \, h>k\ge k_0, \end{equation*} which is equivalent to the inequality \begin{equation} \label{final_inequaliy2_for_Dk_d_ge3} \abs{A(h)}\leq (2 C_M)^q \frac{\abs{A(k)}^{\min{\{\frac{s-2}{2s},\frac{r-q'}{rq'}\}}q}}{(h-k)^q}\,\text{ for all }\, h>k\ge k_0. \end{equation} However, we want to find a $k_0$ which depends only on the data of the problem. For this, observe that from \eqref{LHS_RHS_in_the_L_infty_estimate_of_the_general_nonlinear_problem} for $k=0$, using H{\"o}lder's inequality and the embedding $H^1(\Omega)\hookrightarrow L^q(\Omega)$, we have \begin{equation} \label{upper_estimate_of_the_squared_H1_norm_of_u_g} \frac{\underline\alpha}{C_P^2+1} \|u\|_{H^1(\Omega)}^2\leq C_E\|c(x,\omega)\|_{L^{q'}(\Omega)}\|u\|_{H^1(\Omega)}+ C_E\|f_0\|_{L^{q'}(\Omega)}\|u\|_{H^1(\Omega)}+\|\bm{f}\|_{L^2(\Omega)}\|u\|_{H^1(\Omega)}. \end{equation} By dividing both sides of \eqref{upper_estimate_of_the_squared_H1_norm_of_u_g} by $\|u\|_{H^1(\Omega)}$, and then using Chebyshev's inequality together with $\|u\|_{L^2(\Omega)}\leq\|u\|_{H^1(\Omega)}$, we obtain for arbitrary $k\ge 0$ \begin{equation} \label{upper_estimate_for_the_square_root_of_the_measure_of_A_k} k\abs{A(k)}^{\frac{1}{2}}\leq\Big(\int_{\Omega}{\abs{u}^2 \dd x}\Big)^{\frac{1}{2}} \leq \frac{C_P^2+1}{\underline\alpha}\left(C_E\|c(x,\omega)\|_{L^{q'}(\Omega)}+C_E\|f_0\|_{L^{q'}(\Omega)}+\|\bm{f}\|_{L^2(\Omega)}\right). \end{equation} If we denote by $C_D$ the constant on the right-hand side of inequality \eqref{upper_estimate_for_the_square_root_of_the_measure_of_A_k}, which depends only on the data of the problem \eqref{L_infty_estimate_more_general_nonlinear_problem}, then a sufficient condition for $\abs{A(k)}\leq 1$ will be \begin{equation*} \frac{C_D^2}{k^2}\leq 1, \end{equation*} which is equivalent to $k\ge C_D=:k_0$. Here we recall that for $d=2$, $q'$ can be any number greater than 1 and for $d>2$ we have $q=\frac{2d}{d-2}$.
Since we have required $r>q'$, the constant $C_D$ is well defined. In order to apply Lemma \ref{Lemma_B1_Kinderlehrer_Stampacchia} to the nonnegative and nonincreasing function $\Theta(k)=\abs{A(k)}$, we need to ensure that \begin{equation*} \min{\left\{\frac{s-2}{2s},\frac{r-q'}{rq'}\right\}}>\frac{1}{q}, \end{equation*} which is equivalent to \begin{equation} \label{the_two_inequalities_for_the_min_in_L_infty_general_theorem_1} \frac{s-2}{2s}>\frac{1}{q}\quad\text{ and }\quad \frac{r-q'}{rq'}>\frac{1}{q}. \end{equation} The first inequality in \eqref{the_two_inequalities_for_the_min_in_L_infty_general_theorem_1} is equivalent to $s>\frac{2q}{q-2}$ and the second to $r>\frac{q}{q-2}$. We also recall that in the course of the proof we have required that $s>2$. \begin{itemize} \item For $d=2$, we have $H^1(\Omega)\hookrightarrow L^q(\Omega)$ for any $q<\infty$. In this case, since $\frac{2q}{q-2}\to 2$ and $\frac{q}{q-2}\to 1$ as $q\to\infty$, the requirements on $s$ and $r$ become $s>2,\, r>1$. \item For $d\ge3$, we have $H^1(\Omega)\hookrightarrow L^q(\Omega)$ where $q=\frac{2d}{d-2}$ and $q'=\frac{2d}{d+2}$. In this case $\frac{2q}{q-2}=d$ and $\frac{q}{q-2}=\frac{d}{2}$, so the requirements on $s$ and $r$ are $s>d,\,r>\frac{d}{2}$. \end{itemize} We can summarize the conditions on $s$ and $r$ for $d\ge 2$ as $s>d$ and $r>\frac{d}{2}$. Now, if we denote $\beta:=\min{\{\frac{s-2}{2s},\frac{r-q'}{rq'}\}}q$ (note that $\beta>1$ by \eqref{the_two_inequalities_for_the_min_in_L_infty_general_theorem_1}), then from Lemma~\ref{Lemma_B1_Kinderlehrer_Stampacchia} it follows that there exists a constant~$e$, defined by $e^q:=(2C_M)^q \abs{A(k_0)}^{\beta-1}2^{\frac{q\beta}{\beta-1}}$, such that $\abs{A(k_0+e)}=0$. Since $\abs{A(k_0)}\leq \abs{\Omega}$, we can write $\abs{A(k_1)}=0$, where $k_1:=k_0+\big((2C_M)^q \abs{\Omega}^{\beta-1}2^{\frac{q\beta}{\beta-1}}\big)^{\frac{1}{q}}=C_D+(2C_M) \abs{\Omega}^{\frac{\beta-1}{q}}2^{\frac{\beta}{\beta-1}}$. Thus, we have proved that $\|u\|_{L^\infty(\Omega)}\leq k_1$. \end{proof} \begin{theorem}\label{Thm_existence_and_uniqueness_and_Boundedness_of_minimizer_of_J_General_PBE} The unique minimizer $u_{\rm{min}}\in H_{\overline{g}}^1(\Omega)$ of the variational problem \eqref{Variational_problem_for_J} provided by Theorem~\ref{Thm_existence_and_uniqueness_of_minimizer_of_J_General_PBE} coincides with the unique solution of problem \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} for the test space $H_0^1(\Omega)$. \end{theorem} \begin{proof} We already showed that $u_{\rm{min}}$ equals the unique solution $u$ of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$. If we show that $u\in L^\infty(\Omega)$, it will follow that $b(x,u+w)\in L^\infty(\Omega)$, and therefore by a standard density argument $u$ is also the unique solution of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with $\pazocal{W}=H_0^1(\Omega)$.
We now apply the $L^\infty$ estimate of Theorem~\ref{Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem}, through the modification for nonhomogeneous boundary conditions given in Remark~\ref{nonhomogeneous_Dirichlet_BC}, to the weak formulation \begin{equation}\label{copy_RCH1} \begin{aligned} &\text{Find }u\in H_{\overline g}^1(\Omega) \text{ such that } b(x,u+w)v\in L^1(\Omega) \,\text{ for all } v\in \pazocal{W} \text{ and }\\ &\int_{\Omega}{\epsilon\nabla u\cdot\nabla v \dd x}+\int_{\Omega}{b(x,u+w)v \dd x}=\int_{\Omega}{{\bm f}\cdot\nabla v \dd x}\, \text{ for all } v\in \pazocal{W}, \end{aligned}\tag{\ref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}} \end{equation} \noeqref{copy_RCH1} with the choices of $w$, ${\bm f}$, and $\overline{g}$ corresponding to the 2-term or 3-term splitting, that is, respectively \begin{alignat}{12} w&=G, \quad {\bm f}&=&{\bm f}_{\pazocal{G}_2}:=\chi_{\Omega_s}(\epsilon_m-\epsilon_s)\nabla G,\quad \quad &\text{ and }\quad &\overline{g}&=&g_\Omega-G &\text{ on }&\partial\Omega, \label{copy_4_2} \tag{\ref{w_barg_f_expressions_2-term_splitting}}\\ w&=0, \quad {\bm f}&=&{\bm f}_{\pazocal{G}_3}:=-\chi_{\Omega_m}\epsilon_m\nabla u^H+\chi_{\Omega_s}\epsilon_m\nabla G,\quad &\text{ and }\quad &\overline{g}&=&g_\Omega &\text{ on }&\partial\Omega.\label{copy_4_3} \tag{\ref{w_barg_f_expressions_3-term_splitting}} \end{alignat}\noeqref{copy_4_2}\noeqref{copy_4_3} For this, notice that on the one hand ${\bm f}_{\pazocal{G}_2} \in \left[L^s(\Omega)\right]^d$ for all $s>d$ since $\chi_{\Omega_s}\nabla G \in \left[L^\infty(\Omega)\right]^d$, $\epsilon_s \in C^{0,1}(\overline{\Omega_s})$, and $\epsilon_m$ is constant. Moreover, in this case $\omega = \chi_{\Omega_{ions}} G \in L^\infty(\Omega)$. On the other hand, we also have ${\bm f}_{\pazocal{G}_3} \in \left[L^s(\Omega)\right]^d $ for some $s>d$, since $\nabla u^H$ belongs to this space by Proposition~\ref{proposition_u^H_is_in_W1s_s_ge_d} taking into account that\footnote{Note that the assumption $\Gamma\in C^1$ is not needed to show the $L^\infty$ estimate on the regular component $u$ for the 2-term splitting.} $\Gamma\in C^1$. Moreover, since in both cases $\overline{g} \in C^{0,1}(\partial \Omega)$ we have that its extension $u_{\overline{g}}$ from Lemma~\ref{Lemma_extension_pf_Lipschitz} belongs to $W^{1, \infty}(\Omega)$, in particular $u_{\overline{g}} \in L^\infty(\Omega)$ and $\epsilon \nabla u_{\overline{g}} \in \left[L^s(\Omega)\right]^d$ for all $s>d$. \end{proof} As a consequence of the previous theorem and the discussion after \eqref{what_is_Wb} we have obtained the following existence theorem for the General Poisson-Boltzmann equation \eqref{GPBE_dimensionless}. \begin{theorem}\label{Theorem_Existence_for_full_potential_GPBE} There exists a weak solution $\phi$ of equation \eqref{GPBE_dimensionless} satisfying \eqref{weak_formulation_General_PBE_W1p_spaces}.
A~particular $\phi$ satisfying \eqref{weak_formulation_General_PBE_W1p_spaces} can be given either in the form $\phi=G+u$ or in the form $\phi=G+u^H+u$, where $u\in H_{\overline g}^1(\Omega)\cap L^\infty(\Omega)$ is the unique solution of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces} with either $\pazocal{W}=H_0^1(\Omega)$, $\pazocal{W}=H_0^1(\Omega)\cap L^\infty(\Omega)$ or $\pazocal{W}=C_c^\infty(\Omega)$, and $\overline g, w, {\bm f}$ are defined by \eqref{w_barg_f_expressions_2-term_splitting} and \eqref{w_barg_f_expressions_3-term_splitting} for the 2- and 3-term splitting, respectively. \end{theorem} \begin{remark}\label{Remark_after_Thm_Existence_for_full_potential_GPBE} Even if $u$ is the unique solution of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_H1_spaces}, it might not be the unique solution of \eqref{general_form_of_the_weak_formulation_for_u_2_and_3_term_splitting_W1p_spaces} where the space of test functions, also used in \eqref{weak_formulation_General_PBE_W1p_spaces}, is smaller. To try to close this gap, in Theorem~\ref{Theorem_uniqueness_for_GPBE} we show that if the interface $\Gamma$ is $C^1$ then solutions $\phi$ of \eqref{weak_formulation_General_PBE_W1p_spaces} such that $b(x,\phi)$ is integrable\footnote{Notice that the condition $b(x,\phi)\in L^1(\Omega)$ is slightly more restrictive than the condition $b(x,\phi)v\in L^1(\Omega)$ for all test functions in $\mathfrak{N}$.} are unique. Notice that the solutions provided by Theorem \ref{Theorem_Existence_for_full_potential_GPBE} satisfy this condition, since $b(x,\cdot)$ vanishes for all $x \in \Omega_m$, the Coulomb potential $G$ is by definition bounded in $\Omega \setminus \Omega_m$ and we just proved in Theorem \ref {Thm_Boundedness_of_solution_to_general_semilinear_elliptic_problem} that $u \in L^\infty(\Omega)$ as well. In case the 3-term splitting is used, $u^H$ is also bounded by Proposition \ref{proposition_u^H_is_in_W1s_s_ge_d}. \end{remark} \subsection{Uniqueness of the full potential \texorpdfstring{$\phi$}{phi}}\label{Section_uniqueness_for_GPBE} The proof of uniqueness is based on the following two well-known results for the duality solution framework, which we are able to adapt to weak solutions in the sense of \eqref{weak_formulation_General_PBE_W1p_spaces} with just minor modifications. \begin{lemma}[analogous to Lemma B.1 from \cite{Brezis_Marcus_Ponce_Nonlinear_Elliptic_With_Measures_Revisited}]\label{Lemma_B1} Let $\Omega$, $\Omega_m$ and $\bm A$ be as in Theorem \ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs}. Let $p:\mathbb R\to\mathbb R$, $p(0)=0$, be a nondecreasing, bounded and Lipschitz continuous function. Given $f\in L^1(\Omega)$, let $\varphi\in \mathfrak{M}_0 = \bigcap_{p<\frac{d}{d-1}}{W_0^{1,p}(\Omega)}$ be the unique solution provided by Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} of \begin{equation} \label{linear_problem_with_L1_rhs} \int_{\Omega}{\bm A\nabla \varphi\cdot\nabla v \dd x}=\int_{\Omega}{fv \dd x}\quad\text{ for all } v\in \mathfrak{N} = \bigcup_{q>d} W_0^{1,q}(\Omega). \end{equation} Then \begin{equation} \label{int_fp(u)_nonnegative} \int_{\Omega}{fp(\varphi)\dd x}\ge 0. 
\end{equation} \end{lemma} \begin{proof} Let $\{f_n\}\subset L^\infty(\Omega)$ be a sequence such that $f_n\to f$ in $L^1(\Omega)$ with $\|f_n\|_{L^1(\Omega)}\leq \|f\|_{L^1(\Omega)}$ for all $n\ge 1$ ($f_n$ can be chosen in $C_c^\infty(\Omega)$ by mollification; see, e.g., Corollary 4.23 in \cite{Brezis_FA}). Then, we know that there is a unique $\varphi_n\in H_0^1(\Omega)$ which satisfies the problem\footnote{Note that from Theorem~\ref{Thm_Optimal_Regularity_for_Elliptic_Interface_Problems} it follows that there is some $\bar q>d$ such that $\varphi_n\in W_0^{1,\bar q}(\Omega)$ for all $n\ge 1$. Therefore, \eqref{linear_problem_with_Linfty_rhs} also holds for all $v\in W_0^{1,\bar q'}(\Omega)$ with $\bar q'=\bar q/(\bar q-1)<d/(d-1)$.} \begin{equation} \label{linear_problem_with_Linfty_rhs} \int_{\Omega}{\bm A\nabla \varphi_n\cdot\nabla v \dd x}=\int_{\Omega}{f_nv \dd x}\quad\text{ for all } v\in H_0^1(\Omega). \end{equation} Since $p\in C^{0,1}(\mathbb R)$, $p(0)=0$, and $\varphi_n\in H_0^1(\Omega)$, by Stampacchia's superposition theorem it follows that $p(\varphi_n)\in H_0^1(\Omega)$ and we can test \eqref{linear_problem_with_Linfty_rhs} with it. Thus, \begin{equation} \label{int_fp(un)_nonnegative} \int_{\Omega}{f_n p(\varphi_n) \dd x}=\int_{\Omega}{p'(\varphi_n) \bm A \nabla \varphi_n\cdot \nabla \varphi_n \dd x}\ge 0. \end{equation} Now, our goal is to pass to the limit in \eqref{int_fp(un)_nonnegative}. From Theorem 4.9 in \cite{Brezis_FA} it follows that there exists some $h\in L^1(\Omega)$ and a subsequence (not renamed) for which $f_n(x)\to f(x)$ a.e. and $\abs{f_n(x)}\leq h(x)$ a.e. Also, from the proof of Theorem 1 in \cite{Boccardo_Gallouet_1989} (in particular equation (20) there) we know that $\varphi_n\rightharpoonup \varphi$ weakly in $W^{1,p}(\Omega)$ for every $p<d/(d-1)$. Thus, up to another subsequence (again not relabeled) one has $\varphi_n\to \varphi$ strongly in $L^p(\Omega)$ and hence also pointwise almost everywhere in~$\Omega$. With this in mind we obtain \begin{equation} \label{passing_to_the_limit_fnp(un)} \begin{aligned} \absb{\int_{\Omega}{f_np(\varphi_n) \dd x}-\int_{\Omega}{fp(\varphi) \dd x}}&\leq \int_{\Omega}{\abs{f_n}\abs{p(\varphi_n)-p(\varphi)}\dd x} + \int_{\Omega}{\abs{f_n-f}\abs{p(\varphi)} \dd x}. \end{aligned} \end{equation} The first term in \eqref{passing_to_the_limit_fnp(un)} converges to zero by the Lebesgue dominated convergence theorem since we have pointwise convergence of the integrand and also $\abs{f_n}\abs{p(\varphi_n)-p(\varphi)}\leq 2h M\in L^1(\Omega)$, where $M:=\sup_{t\in\mathbb R}\abs{p(t)}$. The second term in \eqref{passing_to_the_limit_fnp(un)} converges to zero because $p(\varphi)\in L^\infty(\Omega)$ and $f_n\to f$ in $L^1(\Omega)$. \end{proof} We define the function ${\rm sgn}:\mathbb R\to\mathbb R$ by ${\rm sgn}(t)=1$ if $t>0$, ${\rm sgn}(t)=-1$ if $t<0$ and ${\rm sgn}(t)=0$ if $t=0$. By $\mu^+$, $\mu^-\in \pazocal{M}(\Omega)$ we denote the positive and negative parts of $\mu$, obtained by the Jordan decomposition (see Theorem B.71 in \cite{Leoni_2017}), and such that $\mu=\mu^+-\mu^-$. \begin{proposition}[analogous to Proposition B.3 from \cite{Brezis_Marcus_Ponce_Nonlinear_Elliptic_With_Measures_Revisited}]\label{Proposition_B3} Let $\Omega$, $\Omega_m$ and $\bm A$ be as in Theorem \ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} and let $f\in L^1(\Omega)$, $\mu\in \pazocal{M}(\Omega)$.
Let $z\in \bigcap_{p<\frac{d}{d-1}}W_0^{1,p}(\Omega)$ be the unique solution of \begin{equation} \label{linear_problem_with_L1_term_and_measure_rhs} \int_{\Omega}{\bm A\nabla z\cdot\nabla v \dd x} +\int_{\Omega}{fv\dd x}=\int_{\Omega}{v \dd\mu}\quad \text{ for all } v\in \bigcup_{q>d}{W_0^{1,q}(\Omega)}. \end{equation} Then, \begin{equation} \label{PropB3_auxiliary_inequalities} \int_{\left[z>0\right]}{f \dd x}\leq \|\mu^+\|_{\pazocal{M}(\Omega)}\quad \text{ and }\quad -\int_{\left[z<0\right]}{f \dd x}\leq \|\mu^-\|_{\pazocal{M}(\Omega)}, \end{equation} and therefore \begin{equation} \label{PropB3_main_result} \int_{\Omega}{f\,{\rm sgn}(z) \dd x}\leq \|\mu\|_{\pazocal{M}(\Omega)}. \end{equation} \end{proposition} \begin{proof} Since problem \eqref{linear_problem_with_L1_term_and_measure_rhs} is linear, it suffices to prove only the first inequality in \eqref{PropB3_auxiliary_inequalities}. Let $\left\{\mu_n^+\right\}$, $\left\{\mu_n^-\right\}$ be sequences in $L^\infty(\Omega)$ such that $\mu_n^+(x),\,\mu_n^-(x)\ge 0$ a.e., $\mu_n^+\stackrel{\ast}{\rightharpoonup}\mu^+$, $\mu_n^-\stackrel{\ast}{\rightharpoonup}\mu^-$, and $\|\mu_n^+\|_{L^1(\Omega)}\leq \|\mu^+\|_{\pazocal{M}(\Omega)}$, $\|\mu_n^-\|_{L^1(\Omega)}\leq \|\mu^-\|_{\pazocal{M}(\Omega)}$ ($\mu_n^+$ and $\mu_n^-$ can even be chosen in $C_c^\infty(\Omega)$, see e.g. Problem 24 in \cite{Brezis_FA}). Let $z_n$ denote the solution, unique by Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs}, of \eqref{linear_problem_with_L1_term_and_measure_rhs} with $\mu$ replaced by $\mu_n:=\mu_n^+-\mu_n^-$, i.e., $z_n\in \bigcap_{p<\frac{d}{d-1}}W_0^{1,p}(\Omega)$ satisfies \begin{equation} \label{linear_problem_with_L1_term_and_regularized_rhs} \int_{\Omega}{\bm A\nabla z_n\cdot\nabla v \dd x} +\int_{\Omega}{fv\dd x}=\int_{\Omega}{\mu_n v\dd x}\quad \text{ for all } v\in \bigcup_{q>d}{W_0^{1,q}(\Omega)}. \end{equation} If $p:\mathbb R\to\mathbb R$ is a nondecreasing bounded Lipschitz continuous function satisfying $p(0)=0$, then by Lemma~\ref{Lemma_B1} we have \begin{equation} \int_{\Omega}{(\mu_n-f)p(z_n)\dd x}\ge 0. \end{equation} If we further assume that $0\leq p(t)\leq 1$ for all $t\in\mathbb R$, then by using the facts that $\mu_n^+(x),\,\mu_n^-(x)\ge 0$ a.e. and $\|\mu_n^+\|_{L^1(\Omega)}\leq \|\mu^+\|_{\pazocal{M}(\Omega)}$ we obtain \begin{equation} \label{intermediate_result_01_in_PropB3} \begin{aligned} \int_{\Omega}{fp(z_n)\dd x}&\leq\int_{\Omega}{\mu_np(z_n)\dd x}=\int_{\Omega}{\mu_n^+p(z_n)\dd x}-\int_{\Omega}{\mu_n^-p(z_n)\dd x}\\ &\leq \int_{\Omega}{\mu_n^+p(z_n)\dd x}\leq \|\mu^+\|_{\pazocal{M}(\Omega)}. \end{aligned} \end{equation} Our goal now is to pass to the limit in \eqref{intermediate_result_01_in_PropB3}. First, observe that for all $ v\in C_0(\overline\Omega)$ we have \begin{equation*} \int_{\Omega}{\big(\mu_n^+-\mu_n^-\big)\,v\dd x}=\int_{\Omega}{\mu_n^+v\dd x}-\int_{\Omega}{\mu_n^-v\dd x}\to \langle\mu^+,v\rangle-\langle \mu^-,v\rangle=\langle\mu,v\rangle \end{equation*} and \begin{equation*} \|\mu_n\|_{L^1(\Omega)}\leq \|\mu_n^+\|_{L^1(\Omega)}+\|\mu_n^-\|_{L^1(\Omega)}\leq \|\mu^+\|_{\pazocal{M}(\Omega)}+\|\mu^-\|_{\pazocal{M}(\Omega)}=\|\mu\|_{\pazocal{M}(\Omega)}. \end{equation*} Therefore, as in the proof of the previous lemma and up to another subsequence again denoted $\{z_n\}$, we have $z_n\to z$ pointwise almost everywhere in $\Omega$. 
Since we also have $\abs{fp(z_n)}\leq \abs{f}\in L^1(\Omega)$, by dominated convergence, from \eqref{intermediate_result_01_in_PropB3} we obtain \begin{equation} \label{intermediate_result_02_in_PropB3} \int_{\Omega}{fp(z)\dd x}\leq \|\mu^+\|_{\pazocal{M}(\Omega)}. \end{equation} Now, we apply \eqref{intermediate_result_02_in_PropB3} to a sequence of nondecreasing Lipschitz continuous functions $\{p_n\}$ such that $p_n(s)=0$ for $s\leq 0$ and $p_n(s)=1$ for $s\ge\frac{1}{n}$. As $n\to\infty$, again by dominated convergence we obtain the first inequality in \eqref{PropB3_auxiliary_inequalities}. By replacing $f$ with $-f$ and $\mu$ with $-\mu$ in \eqref{linear_problem_with_L1_term_and_measure_rhs} and then applying the first inequality in \eqref{PropB3_auxiliary_inequalities} we easily obtain the second one. Finally, summing up both of these inequalities gives \eqref{PropB3_main_result}. \end{proof} In the next theorem we show that if we additionally impose the condition $b(x,\phi)\in L^1(\Omega)$ in Definition~\ref{definition_weak_formulation_full_GPBE_dimensionless}, then there is only one such $\phi$ that satisfies~\eqref{weak_formulation_General_PBE_W1p_spaces}. \begin{theorem}[Uniqueness of the weak solution of the GPBE]\label{Theorem_uniqueness_for_GPBE} Under Assumption~\ref{Assumption_Domain_permittivity}, there can only be one solution $\phi$ to problem \eqref{weak_formulation_General_PBE_W1p_spaces} (where $b$ is defined in \eqref{definition_b_GPBE}) such that $b(x,\phi)\in L^1(\Omega)$. \end{theorem} \begin{proof} Let $\phi_1,\phi_2\in \mathfrak{M}_{g_\Omega}$ be two solutions of \eqref{weak_formulation_General_PBE_W1p_spaces} such that $b(x,\phi_1),\,b(x,\phi_2)\in L^1(\Omega)$. Subtracting the corresponding weak formulations for $\phi_1$ and $\phi_2$ we get $\phi_1-\phi_2\in \mathfrak{M}_0$ and \begin{equation} \label{difference_weak_formulations_General_PBE_W1p_spaces} \int_{\Omega}{\epsilon\nabla (\phi_1-\phi_2)\cdot\nabla v \dd x}+\int_{\Omega}{\left(b(x,\phi_1)-b(x,\phi_2)\right)v \dd x}=0 \, \text{ for all } \,v\in \mathfrak{N}. \end{equation} By applying Proposition~\ref{Proposition_B3} with $f=b(x,\phi_1)-b(x,\phi_2)\in L^1(\Omega)$ and $\mu=0$ we obtain \begin{equation} \label{application_of_PropB3} \int_{\Omega}{\left(b(x,\phi_1)-b(x,\phi_2)\right)\,{\rm sgn}(\phi_1-\phi_2)\dd x}\leq 0. \end{equation} Recalling the definition of $b(x,\cdot)$ in \eqref{definition_b_GPBE}, we have $b(x,t)=0$ for all $x\in \Omega \setminus \Omega_{ions}$, and $b(x,\cdot)$ strictly increasing whenever $x\in \Omega_{ions}$, with $\Omega = \Omega_m \cup \Gamma \cup \Omega_{IEL} \cup \Omega_{ions}$, where the union is pairwise disjoint and $\Gamma = \partial \Omega_m$. Taking this into account, \eqref{application_of_PropB3} implies \begin{equation} \label{intermediate_result_01_Theorem_uniqueness_GPBE} \left(b(x,\phi_1)-b(x,\phi_2)\right)\,{\rm sgn}(\phi_1-\phi_2)=0 \text{ a.e. } x\in\Omega, \end{equation} which in turn gives $\phi_1(x)=\phi_2(x)$ for a.e. $x\in\Omega_{ions}$, but provides no information on $\Omega \setminus \Omega_{ions}$. To see that $\phi_1=\phi_2$ a.e. in the whole domain $\Omega$, note that the second integral on the left-hand side of \eqref{difference_weak_formulations_General_PBE_W1p_spaces} is zero. This allows us to apply Theorem~\ref{Theorem_uniqueness_of_linear_elliptic_problems_with_diffusion_and_measure_rhs} to the resulting linear problem on the whole domain $\Omega$ and conclude that it has a unique solution.
Moreover, it clearly admits the trivial solution as well, so $\phi_1 - \phi_2=0$. \end{proof} We notice that in the proof of this theorem, only two features of the nonlinearity have been used: for $x \in \Omega \setminus \overline{\Omega_{ions}}$ we have that $b(x, \cdot)\equiv 0$, and for $x \in \Omega_{ions}$ we have that $b(x, s)$ is strictly monotone in $s$. This kind of behaviour allows us to infer uniqueness in the semilinear problem from the linear one, also for more general coefficient matrices $\bm A\in \left[L^\infty(\Omega)\right]^{d\times d}$. In particular, we get the following: \begin{corollary}Let $\Omega\subset \mathbb R^d$ with $d\in \{2, 3\}$ be a bounded domain with Lipschitz boundary, and $\Omega_0\subset\Omega$ a subdomain with $C^1$ boundary and $\dist(\Omega_0, \, \partial \Omega)>0$. Let $\bm A$ be a $d\times d$ symmetric matrix-valued function on $\Omega$ satisfying the uniform ellipticity condition \eqref{uniform_ellipticity_of_A} and which is uniformly continuous on both $\Omega_0$ and $\Omega\setminus \overline{\Omega_0}$. Assume further that $\Omega_1$ is a measurable subset of $\Omega$ and that $b:\Omega\times\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ is a Carath\'eodory function such that $t \mapsto b(x,t)$ is strictly monotone for almost every $x\in\Omega_1$ and vanishes identically for almost every $x\in \Omega\setminus\Omega_1$. Then for $g \in W^{1-1/p, p}(\partial \Omega)$ with $p=\frac{d}{d-1}$ and $\mu \in \pazocal{M}(\Omega)$ the problem \begin{equation}\label{weak_formulation_general_semilinear_measure_RHS} \begin{aligned} &\text{Find }z \in \bigcap_{p<\frac{d}{d-1}}{W_{g}^{1,p}(\Omega)}\,\text{ with }\, {b(x,z)\in L^1(\Omega)}\, \text{ such that }\\ &\int_{\Omega}{\bm A \nabla z\cdot\nabla v \dd x}+\int_{\Omega}{b(x,z)v \dd x}=\int_{\Omega}{v \dd\mu} \, \text{ for all } \,v\in \bigcup_{q>d}{W_0^{1,q}(\Omega)} \end{aligned} \end{equation} has at most one solution. \end{corollary} To conclude, we reiterate that having such a uniqueness result for the weak formulation \eqref{weak_formulation_General_PBE_W1p_spaces} ensures that both the 2-term and 3-term splittings lead to the same full potential $\phi$, as would any other decomposition compatible with this natural notion of weak solution for PDEs with measure data. \section*{Acknowledgments} The second author is grateful for the financial support received from the Austrian Science Fund (FWF) through the Doctorate College program ``Nano-Analytics of Cellular Systems (NanoCell)'' with grant number W1250 and project P 33154-B, and from Johannes Kepler University in conjunction with the State of Upper Austria through projects LIT-2017-4-SEE-004 and LIT-2019-8-SEE-120. The first author is partially supported by the State of Upper Austria. We would like to thank Hannes Meinlschmidt for helpful comments on a preliminary version of this manuscript.
\section{Introduction} Semantic segmentation enables pixel-wise scene understanding, which is crucial to many real-world applications such as autonomous driving. The recent surge of deep learning methods has significantly accelerated the progress in semantic segmentation \cite{chen2017deeplab,long2015fully} but at the price of large-scale per-pixel labelled datasets \cite{cordts2016cityscapes}, which are often prohibitively costly to collect in terms of time and money. Recent progress in computer graphics engines provides a possible way to circumvent this problem by synthesizing photo-realistic images together with pixel-wise labels automatically \cite{richter2016playing,ros2016synthia}. However, deep neural networks trained with such synthetic image data tend to perform poorly on real image data due to clear domain gaps \cite{vu2019advent}. \begin{figure}[!t] \centering \includegraphics[width=.98\linewidth]{figures/figure_1.pdf} \caption{Our proposed multi-level adversarial network (MLAN) improves domain adaptive semantic segmentation: MLAN adapts features at multiple levels for joint global image-level and local region-level alignment between source and target domains as illustrated in the bottom part. Arrows and boxes in blue and yellow denote source and target data-flows, while dashed and solid arrows in gray represent alignment processes. The segmentation outputs with and without our proposed technique are illustrated in the top part (white boxes and green arrows highlight local image regions). Best viewed in color. } \label{fig:intro} \end{figure} Unsupervised domain adaptation (UDA) has been introduced to address the domain bias/shift issue, and most existing unsupervised domain adaptation techniques \cite{tsai2018learning,vu2019advent,luo2019taking,wang2020differential} employ adversarial learning to align the data representations of source and target domains via a discriminator \cite{tzeng2017adversarial,tsai2018learning,luo2019taking,tsai2019domain,vu2019advent}. In these approaches, the adversarial loss is essentially a binary cross-entropy that is evaluated according to whether the generated representation is from the source or target domain. However, such a global image-level domain classification signal is usually too broad and weak for the pixel-level semantic segmentation task, as it does not have sufficient capacity to capture local/regional features and characteristics ($e.g.$, consistency of region-level context-relations), as illustrated in Figure \ref{fig:intro}. In \cite{huang2020contextual}, we investigated domain adaptive semantic segmentation that addresses the limitation of global image-level alignment by exploiting local region-level consistency across domains. Specifically, \cite{huang2020contextual} first learns prototypical region-level context-relations explicitly in source domains (with synthetic images) and then transfers the learnt region-level context-relations to target domains (with real images) via adversarial learning. Beyond region-level adversarial learning in \cite{huang2020contextual} (to be described in Section 3.2), this work further investigates how to exploit consistency and co-regularization of local and global information for optimal domain adaptive segmentation. Specifically, we design a multi-level adversarial network (MLAN) with co-regularized adversarial learning (CR-AL) that achieves both global image-level and local region-level alignment optimally, as illustrated in Figure \ref{fig:intro} (to be described in Section 3.3).
MLAN employs a mutual regularizer that uses local and global consistencies to coordinate global image-level adversarial learning (IL-AL) and local region-level adversarial learning (RL-AL). In particular, local consistencies regularize IL-AL by enforcing region-level constraints during image-level alignment, e.g., they increase the IL-AL loss for regions with large region-level inconsistency. Similarly, global consistencies regularize RL-AL by enforcing image-level constraints during region-level alignment, e.g., they increase the RL-AL loss for hard images with large domain gaps. The multi-level and mutually regularized adversarial learning adds little overhead: it requires just four one-layer classifiers during the training stage and nothing extra during inference. In addition, we introduce a multi-level consistency map (MLCM) that guides domain adaptive segmentation via image translation in the input space and self-training in the output space (to be described in Section 3.4). For input-space adaptation, MLCM learns adaptive image-to-image translation that can attend to specific image regions with large domain gaps. For output-space adaptation, MLCM introduces domain gap information and encourages self-training to select samples/pixels that have larger domain gaps in pseudo labeling, which effectively helps prevent over-fitting to easy samples/pixels with smaller domain gaps. Extensive experiments demonstrate the effectiveness of the two proposed new designs, and MLAN consistently surpasses the state-of-the-art, including our prior work \cite{huang2020contextual}, by large margins. The organization of this paper is as follows: We start by reviewing related work on unsupervised domain adaptive semantic segmentation in Section 2. Then, Section 3 details our proposed MLAN for domain adaptive semantic segmentation. We provide both qualitative and quantitative evaluations in Section 4, and conclude this paper in Section 5. \section{Related work} Unsupervised Domain Adaptation (UDA)~\cite{deng2018active,liang2019exploring} aims to learn a model from source supervision only that can perform well in target domains. It has been extensively explored for the tasks of classification~\cite{li2018adaptive,zuo2020challenging,rahman2020correlation} and detection~\cite{chen2018domain,xu2020exploring}. In recent years, UDA for semantic segmentation \cite{hoffman2016fcns} has drawn increasing attention as training a high-quality segmentation model requires large-scale per-pixel annotated datasets, which are often prohibitively expensive and labor-intensive to collect. Most existing unsupervised domain adaptive semantic segmentation methods can be broadly classified into three categories: adversarial learning based \cite{tsai2018learning,vu2019advent,luo2019taking,guan2020scale}, image translation based \cite{hoffman2018cycada,yang2020fda}, and self-training based \cite{zou2018unsupervised,zou2019confidence}. \textbf{Adversarial learning based segmentation} minimizes domain divergences by matching the marginal distributions between source and target domains. Hoffman \textit{et al.} \cite{hoffman2016fcns} first proposed a domain adaptive segmentation method by employing a discriminator to globally align source and target features and introducing a constraint from category-level statistics to guarantee semantic similarity. Chen \textit{et al.} \cite{chen2017no} implemented class-wise adversarial learning and grid-wise soft label transferring to achieve global and class-wise alignment.
Tsai \textit{et al.} \cite{tsai2018learning} demonstrated the spatial similarities across domains and applied adversarial learning in the output space. Vu \textit{et al.} \cite{vu2019advent} proposed an adversarial entropy minimization method to achieve structure adaptation in the entropy space. Luo \textit{et al.} \cite{luo2019taking} designed a category-level adversarial learning technique to enforce semantic consistency in marginal distribution alignment from the source domain to the target domain. Guan \textit{et al.} \cite{guan2020scale} proposed a scale-invariant constraint to regularize the adversarial learning loss for preserving semantic information. \textbf{Image translation based segmentation} reduces the domain gap in the input space by transferring the style of source-domain images to the target domain \cite{hoffman2018cycada,huang2021fsdr,li2019bidirectional,yang2020fda}. Hoffman \textit{et al.} \cite{hoffman2018cycada} employed generative adversarial networks to produce target-like source images for direct input adaptation. Li \textit{et al.} \cite{li2019bidirectional} alternately trained the image translation network and the segmentation model in a bidirectional learning manner. Yang \textit{et al.} \cite{yang2020fda} proposed a simple but effective image translation method for domain adaptation by swapping the low-frequency spectrum of source images with that of target images. \textbf{Self-training based segmentation} retrains source-domain models iteratively by including target-domain samples that have high-confidence pseudo-label predictions (produced by the newly trained source-domain models) \cite{zou2018unsupervised,huang2021cross,zou2019confidence,wang2020differential}. Zou \textit{et al.} \cite{zou2018unsupervised} proposed a class-balanced self-training approach to optimize pseudo-label generation via decreasing the influence of dominant categories. Zou \textit{et al.} \cite{zou2019confidence} refined pseudo-label generation and model training by a label entropy regularizer and an output smoothing regularizer, respectively. Wang \textit{et al.} \cite{wang2020differential} optimized self-training by minimizing the distance between background and foreground features across domains. This paper presents a multi-level adversarial network (MLAN) that introduces local region-level adversarial learning (RL-AL) and co-regularized adversarial learning (CR-AL) for optimal domain adaptation. In addition, MLAN computes a multi-level consistency map (MLCM) to guide the domain adaptation in both input and output spaces. To our knowledge, this is the first method that effectively addresses domain adaptive semantic segmentation by mutual regularization of adversarial learning at multiple levels. \section{Proposed methods} In this section, we present the proposed multi-level adversarial network (MLAN) that aims to achieve both local and global consistencies optimally. MLAN consists of three key components: local region-level adversarial learning (RL-AL), co-regularized adversarial learning (CR-AL), and consistency-guided adaptation in the input and output spaces, which will be described in detail in the ensuing Subsections 3.2, 3.3, and 3.4, respectively. \subsection{Problem definition} We focus on the problem of unsupervised domain adaptation (UDA) in semantic segmentation.
Given the source data $X_{s} \subset \mathbb{R}^{H \times W \times 3}$ with $C$-class pixel-level segmentation labels $Y_{s} \subset \{1, \dots, C\}^{H \times W}$ ($e.g.$, synthetic scenes from computer graphics engines) and the target data $X_{t} \subset \mathbb{R}^{H \times W \times 3}$ without labels ($i.e.$, real scenes), our objective is to learn a segmentation network $F$ that performs optimally on the target dataset $X_{t}$. Existing adversarial learning based domain adaptive segmentation networks rely heavily on a discriminator to align the source and target distributions through two learning objectives: a supervised segmentation loss on source images and an adversarial loss for target-to-source alignment. Specifically, for the source domain, these approaches learn a segmentation model $F$ with the supervised segmentation loss. Then, for the target domain, they train $F$ to extract domain-invariant features through a mini-max game between the segmentation model $F$ and a discriminator $D$. They therefore formulate the UDA task as:
\begin{equation}
\mathcal{L}(X_{s}, X_{t}) = \mathcal{L}_{seg}(F) + \mathcal{L}_{adv}(F, D)
\end{equation}
However, current adversarial learning based UDA networks have two crucial limitations. First, they focus on global image-level alignment but neglect local region-level alignment. Second, the global image-level alignment could impair local consistencies even if it is achieved perfectly. The reason is that the discriminator in global image-level alignment takes a whole map as input but outputs only a binary domain label. Consequently, certain local regions that have been well aligned across domains might be disrupted by the global image-level adversarial loss from the discriminator. Such global alignment thus lacks the local region-level consistency that is essential to semantic segmentation with its pixel-level predictions.
\begin{figure}[!t]
\centering
\includegraphics[width=.98\linewidth]{figures/figure_2.pdf}
\caption{Illustration of the establishment of region-level context-relation labels: we first sample local image regions from pixel-level segmentation maps (in the provided ground-truth annotations) in the source domain and then apply DBSCAN clustering (based on the histogram of gradients) to assign each local region an indexed/labelled context-relation. The bottom part visualizes the local region-level alignment that enforces target predictions to have source-like context-relations via adversarial learning.}
\label{fig:local}
\end{figure}
\subsection{Region-level adversarial learning}
This subsection introduces our local region-level adversarial learning (RL-AL) \cite{huang2020contextual}, which consists of region-level context-relation discovery and transfer, as shown in Figure \ref{fig:local}.
\textbf{Region-level context-relation label establishment.} To implement local region-level adversarial learning (RL-AL), we first explicitly discover and model the region-level context-relations in the labelled source domain. Specifically, we first sample patches on the pixel-level ground truth of the source data and then employ density-based spatial clustering of applications with noise (DBSCAN) to cluster them into subgroups based on the histogram of gradients, assigning each patch an index label ($i.e.$, relation $1, 2, \dots, N$). We apply this label establishment process with three patch sizes for explicit local context-relation discovery, as sketched below.
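The following is a minimal sketch of this clustering step (not our released implementation); the exact histogram-of-gradients feature, the sampling stride, and the DBSCAN hyper-parameters \texttt{eps} and \texttt{min\_samples} are illustrative assumptions rather than values prescribed in this paper.
\begin{verbatim}
# Sketch: establishing region-level context-relation labels.
# Patches are sampled from source ground-truth label maps, described by a
# simple histogram-of-gradients feature, and clustered with DBSCAN; each
# cluster index serves as one context-relation label (-1 marks noise).
import numpy as np
from sklearn.cluster import DBSCAN

def patch_feature(label_patch, bins=32):
    """Histogram-of-gradients feature of one ground-truth label patch."""
    gy, gx = np.gradient(label_patch.astype(np.float32))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-6))
    return hist / (hist.sum() + 1e-6)

def establish_region_labels(label_maps, patch=32, stride=32):
    """Cluster sampled patches into context-relation subgroups."""
    feats, coords = [], []
    for m_idx, m in enumerate(label_maps):
        H, W = m.shape
        for i in range(0, H - patch + 1, stride):
            for j in range(0, W - patch + 1, stride):
                feats.append(patch_feature(m[i:i + patch, j:j + patch]))
                coords.append((m_idx, i, j))
    labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(np.stack(feats))
    return coords, labels
\end{verbatim}
In practice, this procedure is repeated once per patch size, yielding one set of context-relation labels per scale.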
These region-level context-relation labels enable our model to conduct alignment at the region level.
\textbf{Adaptive entropy max-minimization for region-level adversarial learning.} As shown in Figure \ref{fig:framework}, $C_{region}$ learns prototypical region-level contextual-relations ($e.g.$, sidewalk-road, building-sky, car-road, etc.) via supervised learning in the labelled source domain ($i.e.$, with the established local region-level context-relation labels) and transfers the learnt prototypical contextual-relations to unlabelled target data via adaptive entropy max-minimization based adversarial learning.
\textbf{Source flow.} In our region-level adversarial learning (RL-AL), the source data contributes to $L_{seg}$ and $L_{region}$. Given a source image $x_{s} \in X_{s}$, its corresponding segmentation label $y_{s} \in Y_{s}$ and contextual-relation pseudo-label $y_{s\_region} \in Y_{s\_region}$, $p_{s}^{(h, w, c)} = C_{seg}(E(x_{s}))$ is the predicted pixel-level probability map for semantic segmentation, and $p_{s\_region}^{(i, j, n)} = C_{region}(E(x_{s}))$ is the predicted region-level probability map for local contextual-relation classification. The two supervised learning objectives are to minimize $L_{seg}$ and $L_{region}$, respectively, which are formulated as:
\begin{equation}
\mathcal{L}_{seg}(E, C_{seg}) = \sum_{h, w} \sum_{c} -y_{s}^{(h, w, c)} \log p_{s}^{(h, w, c)}
\end{equation}
\begin{equation}
\mathcal{L}_{region}(E, C_{region}) = \sum_{i, j} \sum_{n} -y_{s\_region}^{(i, j, n)} \log p_{s\_region}^{(i, j, n)}
\end{equation}
\textbf{Target flow.} As the target data is not annotated, we propose an adaptive entropy max-minimization based adversarial training between the feature extractor $E$ and the classifier $C_{region}$ to enforce the target segmentation to have source-like context-relations. Given a target image $x_{t} \in X_{t}$, $p_{t\_region}^{(i, j, n)} = C_{region}(E(x_{t}))$ is the predicted region-level probability map for local contextual-relation classification. The entropy based loss $L_{ent\_region}$ is formulated as:
\begin{equation}
\mathcal{L}_{ent\_region}(E, C_{region}) = - \frac{1}{N}\sum_{i, j} \sum_{n} \max\{p_{t\_region}^{(i, j, n)} \log p_{t\_region}^{(i, j, n)} - \mathcal{R}(p_{t\_region}^{(i, j, n)}), 0\},
\label{equ:local}
\end{equation}
where the adaptive optimization is achieved by $\mathcal{R}(p)=\text{average}\{ p\log p\} \times \lambda _{R}$, and $\lambda_{R} = (1 - \frac{iter}{max\_iter})^{power}$ with ${power} = 0.9$ is a weight factor that decreases with the training iteration. We employ the gradient reverse layer~\cite{ganin2015unsupervised} for local region-level adversarial learning, and the training objective is formulated as:
\begin{equation}
\begin{split}
& \min_{\theta_{E}} \mathcal{L}_{seg} + \lambda_{region}\mathcal{L}_{region} + \lambda_{ent} \mathcal{L}_{ent\_region}, \\
& \min_{\theta_{C_{seg}}} \mathcal{L}_{seg}, \\
& \min_{\theta_{C_{region}}} \lambda_{region}\mathcal{L}_{region} - \lambda_{ent} \mathcal{L}_{ent\_region}, \\
\end{split}
\end{equation}
where $\lambda_{ent}$ is a weight factor that balances the target unsupervised domain adaptation loss against the source supervised loss, and $\lambda_{region}$ is a weight factor that balances the local region-level and pixel-level supervised learning on the source domain.
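For clarity, the following is a minimal PyTorch-style sketch of the loss in Equation \eqref{equ:local}; the tensor layout of $p_{t\_region}$ and the numerical clamping constant are our assumptions, not part of the formulation above.
\begin{verbatim}
# Sketch of the adaptive entropy max-min loss of Eq. (4).
# Assumed layout: p is the softmax output of C_region with shape (N, I, J),
# i.e. N context-relation channels over an I x J grid of regions.
import torch

def adaptive_entropy_loss(p, iteration, max_iter, power=0.9, eps=1e-8):
    lam_r = (1.0 - iteration / max_iter) ** power  # decaying weight lambda_R
    ent = p * torch.log(p.clamp_min(eps))          # p log p, element-wise
    r = ent.mean() * lam_r                         # R(p) = average{p log p} * lambda_R
    # L = -(1/N) * sum_{i,j} sum_n max{p log p - R(p), 0}
    return -torch.clamp(ent - r, min=0.0).sum() / p.shape[0]
\end{verbatim}
In training, the gradient reverse layer placed between $E$ and $C_{region}$ realizes the opposite signs of $\mathcal{L}_{ent\_region}$ in the min-max objective above within a single backward pass.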
\begin{figure}[!t]
\centering
\includegraphics[width=.98\linewidth]{figures/figure_3.pdf}
\caption{Overview of our proposed multi-level adversarial network (MLAN): Given input images, the feature extractor $E$ extracts features and feeds them to the region-level context-relation classifiers $C_{region}$ for classification at three region levels. In the source flow (highlighted by arrows in blue), $\mathcal{L}_{seg}$ is a supervised segmentation loss on the labelled source domain; $\mathcal{L}_{region}$ is a supervised region-level learning loss on the source domain with the region-level context-relation labels established in Figure \ref{fig:local}. In the target flow (highlighted by arrows in yellow), $\mathcal{L}^{r}_{ent\_region}$ is an entropy max-min based unsupervised region-level alignment loss that enforces target predictions to have source-like region-level context-relations. $C_{image}$ is the domain classifier that aligns source and target distributions at the image level via the loss $\mathcal{L}_{image}$/$\mathcal{L}^{r}_{image}$. The co-regularized adversarial learning (CR-AL) is achieved by mutually rectifying the region-level and image-level alignments. In addition, the local ($i.e.$, region-level) and global ($i.e.$, image-level) consistency maps (CMs) are integrated into a multi-level CM to guide the input-space and output-space adaptation.}
\label{fig:framework}
\end{figure}
\subsection{Co-regularized adversarial learning}
This subsection introduces our co-regularized adversarial learning (CR-AL), which integrates local region-level AL with global image-level AL and conducts mutual regularization, as shown in the top part of Figure \ref{fig:framework}.
\textbf{Image-level adversarial learning (IL-AL).} For IL-AL, we employ a domain classifier $C_{image}$ that takes the source and target feature maps as inputs and predicts domain labels for them ($i.e.$, 0/1 for source/target domain). The global image-level alignment is conducted through the mini-max game between this domain classifier $C_{image}$ and the feature extractor $E$. The global image-level adversarial learning loss is defined as:
\begin{equation}
\mathcal{L}_{image}(E, C_{image}) = \sum_{u, v} \mathbb{E}[\log C_{image}(E(x_{s}))] + \mathbb{E}[1-\log C_{image}(E(x_{t}))],
\end{equation}
where $C_{image}(E(x_{s})) \in \mathbb{R}^{U \times V}$ is a domain label prediction map of size $3 \times 3$ ($i.e.$, $U=V=3$).
\textbf{Consistency map (CM) calculation.} With both region-level and image-level ALs, we calculate the region-level consistency map (RLCM) and the image-level consistency map (ILCM) for mutual regularization between local region-level and global image-level adversarial learning:
\begin{equation}
\begin{split}
&\mathcal{M}_{region} = - \frac{1}{N} \sum_{n} \max\{p_{t\_region}^{(i, j, n)} \log p_{t\_region}^{(i, j, n)} - \mathcal{R}(p_{t\_region}^{(i, j, n)}), 0\},\\
&\mathcal{M}_{image} = \mathbb{E}[1 - \log C_{image}(E(x_{t}))],\\
\end{split}
\label{local+global}
\end{equation}
where $p_{t\_region}^{(i, j, n)}$ and $\mathcal{R}(\cdot)$ are described in Equation \ref{equ:local}.
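The following is a minimal sketch of how the two maps in Equation \eqref{local+global} could be computed; the tensor shapes and the per-location (rather than fully averaged) form of $\mathcal{M}_{image}$ are our assumptions.
\begin{verbatim}
# Sketch of the consistency maps of Eq. (7).
# Assumptions: p_region has shape (N, I, J) (softmax output of C_region for
# one target image); d_out has shape (U, V) (output of C_image, here 3 x 3).
import torch

def consistency_maps(p_region, d_out, lam_r, eps=1e-8):
    ent = p_region * torch.log(p_region.clamp_min(eps))
    r = ent.mean() * lam_r                       # R(p), as in Eq. (4)
    # RLCM: one value per region location (i, j).
    m_region = -torch.clamp(ent - r, min=0.0).sum(dim=0) / p_region.shape[0]
    # ILCM: per-location map, following Eq. (7) as written.
    m_image = 1.0 - torch.log(d_out.clamp_min(eps))
    return m_region, m_image
\end{verbatim}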
\textbf{Multi-level mutual regularization.} The region-level AL regularized by global consistencies and the image-level AL regularized by local consistencies are then formulated as:
\begin{equation}
\begin{split}
\mathcal{L}_{ent\_region}^{r}(E, C_{region}) = & - \frac{1}{N}\sum_{i, j} \sum_{n}\\
&\max\{p_{t\_region}^{(i, j, n)} \log p_{t\_region}^{(i, j, n)} - \mathcal{R}(p_{t\_region}^{(i, j, n)}), 0\} \times \mathcal{M}_{image},
\end{split}
\label{Rlocal}
\end{equation}
where $p_{t\_region}^{(i, j, n)}$ and $\mathcal{R}(\cdot)$ are described in Equation \ref{equ:local}, and $\mathcal{M}_{image}$ is up-sampled to match the size of $\max\{p_{t\_region}^{(i, j, n)} \log p_{t\_region}^{(i, j, n)} - \mathcal{R}(p_{t\_region}^{(i, j, n)}), 0\} \in \mathbb{R}^{I \times J}$ during calculation;
\begin{equation}
\begin{split}
\mathcal{L}_{image}^{r}(E, C_{image}) =& \sum_{u, v} \mathbb{E}[\log C_{image}(E(x_{s}))]\\
&+ \sum_{i, j} \mathbb{E}[1-\log C_{image}(E(x_{t}))] \times \mathcal{M}_{region},\\
\end{split}
\label{Rglobal}
\end{equation}
where $\mathbb{E}[1-\log C_{image}(E(x_{t}))]$ is up-sampled to match the size of $\mathcal{M}_{region} \in \mathbb{R}^{I \times J}$ during calculation.
\textbf{Learning objective.} We integrate Equations \ref{Rlocal} and \ref{Rglobal} to form the co-regularized adversarial learning, whose training objective is formulated as:
\begin{equation}
\begin{split}
& \min_{\theta_{E}} \mathcal{L}_{seg} + \lambda_{region}\mathcal{L}_{region} + \lambda_{ent}\mathcal{L}^{r}_{ent\_region} + \lambda_{image} \mathcal{L}^{r}_{image} \\
& \min_{\theta_{C_{seg}}} \mathcal{L}_{seg}\\
& \min_{\theta_{C_{region}}} \lambda_{region}\mathcal{L}_{region} - \lambda_{ent} \mathcal{L}^{r}_{ent\_region}\\
& \max_{\theta_{C_{image}}} \lambda_{image} \mathcal{L}^{r}_{image}\\
\end{split}
\end{equation}
where $\lambda_{region}$, $\lambda_{image}$ and $\lambda_{ent}$ are weight factors that balance the different losses in the objective.
\subsection{Multi-level consistency regularized input/output adaptation}
This subsection introduces the multi-level consistency map (MLCM) that guides and rectifies domain adaptation in the input space ($e.g.$, image translation) and the output space ($e.g.$, self-training) effectively, as shown in the bottom part of Figure \ref{fig:framework}. The multi-level consistency map (MLCM) combines the local region-level consistency map (RLCM) and the global image-level consistency map (ILCM), and is formulated as:
\begin{equation}
\mathcal{M}_{multi} = \mathcal{M}_{region} \times \mathcal{M}_{image},
\end{equation}
where $\mathcal{M}_{region}$ and $\mathcal{M}_{image}$ are defined in Equation \ref{local+global}, and $\mathcal{M}_{image}$ is up-sampled to match the size of $\mathcal{M}_{region} \in \mathbb{R}^{I \times J}$ during calculation. We then utilize the calculated MLCM to regularize image translation for input-space domain adaptation, as shown in the bottom-left part of Figure \ref{fig:framework}. MLCM enables adaptive image translation with adaptive attention on different regions/images ($i.e.$, focusing more on regions/images with large domain gaps).
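To make the composition concrete, here is a minimal sketch of assembling $\mathcal{M}_{multi}$; the tensor shapes and the bilinear interpolation mode are our assumptions.
\begin{verbatim}
# Sketch of assembling the multi-level consistency map of Eq. (11):
# M_image (U x V) is up-sampled to the size of M_region (I x J) and the
# two maps are multiplied element-wise.
import torch
import torch.nn.functional as F

def multi_level_cm(m_region, m_image):
    m_image_up = F.interpolate(m_image[None, None],     # to (1, 1, U, V)
                               size=tuple(m_region.shape),
                               mode='bilinear',
                               align_corners=False)[0, 0]
    return m_region * m_image_up                        # M_multi, (I, J)
\end{verbatim}
The resulting map is then used to weight the image translation and self-training losses defined next.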
The training objective of MLCM regularized image translation (MLCMR-IT) is defined as:
\begin{equation}
\min_{\theta_{G}} \max_{\theta_{D}} \mathcal{L}^{r}_{IT}(G, D) =\mathbb{E}[\log D(X_{s})] + \mathbb{E}[1-\log D(G(X_{t}))] \times \mathcal{M}_{multi},
\end{equation}
where $G$ is the generator and $D$ is the discriminator; the original image translation loss $\mathcal{L}_{IT}(G, D)$ is the same as the regularized loss $\mathcal{L}^{r}_{IT}(G, D)$ but without the multi-level regularization ($i.e.$, $\mathcal{M}_{multi}$). Similarly, MLCM is also used to regularize self-training for output-space domain adaptation, as shown in the bottom-right part of Figure \ref{fig:framework}. MLCM introduces domain gap information and encourages self-training to select samples/pixels with larger domain gaps for pseudo labeling, which prevents over-fitting to easy samples/pixels ($i.e.$, samples with smaller domain gaps). The MLCM regularized pseudo-label generation is defined as:
\begin{equation}
\hat{y_{t}}^{(h, w)} = \arg\max_{c \in C} \text{\usefont{U}{bbm}{m}{n}1}_{[(p_{t}^{(c)} \times \mathcal{M}_{multi}) > \exp(-k_{c})]}(p_{t}^{(h, w, c)})
\label{psedu}
\end{equation}
where $p_{t}^{(h, w, c)} = C_{seg}(E(x_{t}))$ refers to the segmentation probability map; $\text{\usefont{U}{bbm}{m}{n}1}$ is an indicator function that returns the input if the condition is true and an empty output otherwise; and $k_{c}$ are the class-balanced weights~\cite{zou2018unsupervised}. We then fine-tune the segmentation model $(E+C_{seg})$ with the target-domain images $x_{t} \in X_{t}$ and the generated pseudo labels $\hat{y}_{t} \in \hat{Y}_{t}$. The loss function and training objective of MLCM regularized self-training (MLCMR-ST) are defined as:
\begin{equation}
\min_{\theta_{E}} \min_{\theta_{C_{seg}}} \mathcal{L}^{r}_{ST}(E, C_{seg}) = \sum_{h, w} \sum_{c} -\hat{y}_{t}^{(h, w, c)} \log p_{t}^{(h, w, c)}
\end{equation}
where $\hat{y}_{t}^{(h, w, c)} \in \{0, 1\}^{H \times W \times C}$ is the one-hot representation transformed from $\hat{y}_{t}^{(h, w)} \in \{1, \dots, C\}^{H \times W}$; the original (non-regularized) self-training loss $\mathcal{L}_{ST}(E, C_{seg})$ is the same as $\mathcal{L}^{r}_{ST}(E, C_{seg})$ except that there is no $\mathcal{M}_{multi}$ regularization in Equation \ref{psedu}.
\section{Experiment}
\subsection{Datasets}
Our experiments are conducted over two synthetic datasets (i.e., GTA5 \cite{richter2016playing} and SYNTHIA \cite{ros2016synthia}) and one real dataset (i.e., Cityscapes \cite{cordts2016cityscapes}). GTA5 consists of $24,966$ realistic virtual images with automatically generated pixel-level semantic labels. SYNTHIA contains $9,400$ synthetic images with segmentation labels obtained from a photo-realistic virtual world. Cityscapes is a real-world dataset for semantic segmentation. It contains high-resolution images ($2,975/500$ in the training/validation set) with human-annotated dense labels. In our experiments, domain adaptive semantic segmentation networks are trained with an annotated synthetic dataset (GTA5 or SYNTHIA) and the unannotated real dataset (Cityscapes), as in \cite{vu2019advent,guan2020scale,yang2020fda}. The evaluation is performed over the validation set of Cityscapes with the standard mean-Intersection-over-Union (mIoU) metric.
\subsection{Implementation details}
We implement our method with the PyTorch toolbox. All experiments are conducted on a single Tesla V100 GPU with a maximum memory usage of 12 GB.
Following \cite{tsai2018learning,vu2019advent,luo2019taking,huang2020contextual,guan2020scale}, we adopt the Deeplab-V2 architecture \cite{chen2017deeplab} with ResNet-101 pre-trained on ImageNet \cite{deng2009imagenet} as our semantic segmentation backbone $(E + C_{seg})$. For multi-level learning, we duplicate and modify $C_{seg}$ to create $C_{region}$ with $N$ output channels, as illustrated in Figure \ref{fig:framework}. For a fair comparison with previous works that use the VGG backbone, we also apply our network to VGG-16 \cite{simonyan2014very}. Following \cite{ganin2015unsupervised}, we use the gradient reverse layer to reverse the entropy based loss between $E$ and $C_{region}$ during region-level adversarial learning. For the domain classifier $C_{image}$, we use a structure similar to \cite{tsai2018learning,vu2019advent}, which includes five 2D-convolution layers with kernel $4 \times 4$ and channel numbers $\{64,128,256,512,1\}$. We use SGD \cite{bottou2010large} to optimize the segmentation and local alignment modules ($i.e.$, $E$, $C_{seg}$ and $C_{region}$) with a momentum of $0.9$ and a weight decay of $1e-4$. We optimize the domain classifier $C_{image}$ via Adam with $\beta_{1} = 0.9$, $\beta_{2} = 0.99$. The initial learning rate is set to $2.5e-4$ and decayed with a polynomial policy with a power of $0.9$, as in \cite{chen2017deeplab}. For all experiments, the hyper-parameters $\lambda_{region}$, $\lambda_{ent}$, $\lambda_{image}$ and $N$ are set to $5e-3$, $1e-3$, $1e-3$ and $100$, respectively.
\subsection{Ablation studies}
The proposed multi-level adversarial network (MLAN) consists of two key components: co-regularized adversarial learning (CR-AL) and regularized adaptation in the input and output spaces. We conducted extensive ablation studies to demonstrate the contribution of the two modules to the overall network. Tables \ref{tab:abla1} and \ref{tab:abla2} show the experimental results.
\textbf{Ablation studies of CR-AL:} We trained 7 segmentation models for the adaptation task GTA5-to-Cityscapes, as shown in Table~\ref{tab:abla1}. The 7 network models include 1) \textbf{Baseline} that is trained with the \textit{source supervised segmentation loss} $\mathcal{L}_{seg}$ only ($i.e.$, no adaptation), 2) \textbf{RLL}~\cite{huang2020contextual} that is trained using the \textit{source supervised region-level context-relation classification loss} $\mathcal{L}_{region}$ and $\mathcal{L}_{seg}$ only, 3) \textbf{RL-AL}~\cite{huang2020contextual} that is trained using the \textit{unsupervised region-level context-relation adaptation loss} $\mathcal{L}_{ent\_region}$, $\mathcal{L}_{region}$ and $\mathcal{L}_{seg}$ only, 4) \textbf{Regularized RL-AL} that is trained using the \textit{regularized unsupervised region-level context-relation adaptation loss} $\mathcal{L}^{r}_{ent\_region}$, $\mathcal{L}_{region}$ and $\mathcal{L}_{seg}$ only, 5) \textbf{IL-AL} that is trained using the \textit{image-level alignment loss} $\mathcal{L}_{image}$ and $\mathcal{L}_{seg}$ only, 6) \textbf{Regularized IL-AL} that is trained using the \textit{regularized image-level alignment loss} $\mathcal{L}^{r}_{image}$ and $\mathcal{L}_{seg}$ only, and 7) \textbf{CR-AL} that incorporates and co-regularizes RL-AL and IL-AL by combining \textbf{Regularized RL-AL} and \textbf{Regularized IL-AL}.
\renewcommand\arraystretch{1.2}
\begin{table*}[t]
\caption{Ablation study of co-regularized adversarial learning (CR-AL) on the adaptation from the GTA5 dataset to the Cityscapes dataset with ResNet-101.
RLL denotes the supervised region-level learning on the source domain.}
\centering
\begin{scriptsize}
\begin{tabular}{p{3cm}|p{0.5cm}p{0.5cm}p{0.5cm}p{0.5cm}|p{0.5cm}p{0.5cm}|p{1.6cm}}
\hline \hline
& \multicolumn{4}{c|}{Local Adapt.} & \multicolumn{2}{c|}{Global Adapt.} & \multicolumn{1}{c}{mIoU} \\\hline
Method & \multicolumn{1}{c}{$\mathcal{L}_{seg}$} & \multicolumn{1}{c}{$\mathcal{L}_{region}$} & \multicolumn{1}{c}{$\mathcal{L}_{ent\_region}$} & \multicolumn{1}{c|}{$\mathcal{L}^{r}_{ent\_region}$} & \multicolumn{1}{c}{$\mathcal{L}_{image}$} & \multicolumn{1}{c|}{$\mathcal{L}^{r}_{image}$} \\\hline
Baseline &\multicolumn{1}{c}{\checkmark} & & & & & &\multicolumn{1}{c}{36.6}\\
RLL~\cite{huang2020contextual} &\multicolumn{1}{c}{\checkmark} &\multicolumn{1}{c}{\checkmark} & & & & &\multicolumn{1}{c}{39.3}\\
RL-AL~\cite{huang2020contextual} &\multicolumn{1}{c}{\checkmark} &\multicolumn{1}{c}{\checkmark} &\multicolumn{1}{c}{\checkmark} & & & &\multicolumn{1}{c}{43.7}\\
Regularized RL-AL &\multicolumn{1}{c}{\checkmark} &\multicolumn{1}{c}{\checkmark} & &\multicolumn{1}{c|}{\checkmark} & & &\multicolumn{1}{c}{46.1}\\
IL-AL &\multicolumn{1}{c}{\checkmark} & & & &\multicolumn{1}{c}{\checkmark} & &\multicolumn{1}{c}{43.3}\\
Regularized IL-AL &\multicolumn{1}{c}{\checkmark} & & & & &\multicolumn{1}{c|}{\checkmark} &\multicolumn{1}{c}{46.9}\\
\textbf{CR-AL} &\multicolumn{1}{c}{\checkmark} &\multicolumn{1}{c}{\checkmark} & &\multicolumn{1}{c|}{\checkmark} & &\multicolumn{1}{c|}{\checkmark} &\multicolumn{1}{c}{\textbf{49.1}}\\\hline
\end{tabular}
\end{scriptsize}
\label{tab:abla1}
\end{table*}
We evaluated the 7 models with mIoU; Table~\ref{tab:abla1} shows the experimental results. The \textbf{Baseline} performs poorly because of the domain discrepancy between the synthetic GTA5 and real Cityscapes datasets. \textbf{RLL} outperforms \textbf{Baseline} by $2.7\%$, indicating that learning local context-relations on the source domain, even without any adaptation, can slightly enhance the adaptation ability of deep segmentation networks. In addition, \textbf{RL-AL} outperforms \textbf{RLL} clearly, which demonstrates the effectiveness of transferring the learnt local context-relations from the source domain to the target domain ($i.e.$, enforcing the segmentation of unlabelled target images to have source-like context-relations via adversarial learning). Further, \textbf{Regularized RL-AL} and \textbf{Regularized IL-AL} outperform \textbf{RL-AL} and \textbf{IL-AL} by large margins, respectively, demonstrating the importance of mutual regularization in domain adaptive segmentation. Finally, \textbf{CR-AL} performs clearly the best, showing that region-level adversarial learning (RL-AL) and image-level adversarial learning (IL-AL) are complementary to each other.
\textbf{Ablation studies of input and output adaptation:} We trained 6 segmentation networks for the adaptation task GTA5-to-Cityscapes, as shown in Table~\ref{tab:abla2}.
The 6 models include: 1) \textbf{CR-AL}, 2) \textbf{+IT} that augments target-style source images during CR-AL training based on the \textit{image translation loss} $\mathcal{L}_{IT}$, 3) \textbf{+RIT} that augments target-style source images during CR-AL training based on the \textit{regularized image translation loss} $\mathcal{L}^{r}_{IT}$, 4) \textbf{+ST} that fine-tunes the trained CR-AL segmentation network with the \textit{self-training loss} $\mathcal{L}_{ST}$, 5) \textbf{+RST} that fine-tunes the trained CR-AL segmentation network with the \textit{regularized self-training loss} $\mathcal{L}^{r}_{ST}$, and 6) \textbf{MLAN} that incorporates both regularized input- and output-space domain adaptation.
We evaluated the 6 models with mIoU; Table~\ref{tab:abla2} shows the experimental results. It is clear that both \textbf{+IT} and \textbf{+ST} outperform the original CR-AL model, which indicates that CR-AL is complementary to domain adaptation in the input and output spaces. Moreover, \textbf{+RIT} and \textbf{+RST} outperform \textbf{+IT} and \textbf{+ST} by large margins, respectively, demonstrating the importance of introducing multi-level consistencies into both input-space ($e.g.$, image translation) and output-space ($e.g.$, self-training) domain adaptation. Further, \textbf{MLAN} performs clearly the best, which shows that the regularized image translation (RIT) and regularized self-training (RST) are complementary to each other.
\renewcommand\arraystretch{1.2}
\begin{table*}[t]
\caption{Ablation study of the multi-level adversarial network (MLAN) that includes both co-regularized adversarial learning (CR-AL) and regularized input- and output-space domain adaptation (with ResNet-101 as backbone). Experiments are conducted on the task GTA5-to-Cityscapes. IT stands for image translation while RIT stands for regularized IT; ST stands for self-training while RST stands for regularized ST.}
\centering
\begin{scriptsize}
\begin{tabular}{p{3cm}|p{1.4cm}p{1.4cm}|p{1.4cm}p{1.4cm}|p{1.6cm}}
\hline \hline
& \multicolumn{2}{c|}{Input Space Adapt.} & \multicolumn{2}{c|}{Output Space Adapt.} & \multicolumn{1}{c}{mIoU} \\\hline
Method & \multicolumn{1}{c}{$\mathcal{L}_{IT}$} & \multicolumn{1}{c|}{$\mathcal{L}^{r}_{IT}$} & \multicolumn{1}{c}{$\mathcal{L}_{ST}$} & \multicolumn{1}{c|}{$\mathcal{L}^{r}_{ST}$} \\\hline
CR-AL & & & & &\multicolumn{1}{c}{49.1}\\
+IT &\multicolumn{1}{c}{\checkmark} & & & &\multicolumn{1}{c}{50.2}\\
+RIT & &\multicolumn{1}{c|}{\checkmark} & & &\multicolumn{1}{c}{51.4}\\
+ST & & &\multicolumn{1}{c}{\checkmark} & &\multicolumn{1}{c}{50.8}\\
+RST & & & &\multicolumn{1}{c|}{\checkmark} &\multicolumn{1}{c}{52.1}\\
\textbf{MLAN}(+RIT+RST) & &\multicolumn{1}{c|}{\checkmark} & &\multicolumn{1}{c|}{\checkmark} &\multicolumn{1}{c}{\textbf{53.6}}\\\hline
\end{tabular}
\end{scriptsize}
\label{tab:abla2}
\end{table*}
\subsection{Comparisons with the State-of-the-Art}
We validate our method over two challenging synthetic-to-real domain adaptive semantic segmentation tasks ($i.e.$, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes) with two widely adopted network backbones, ResNet-101 and VGG-16. We also compare our network with various state-of-the-art approaches. Tables \ref{tab:bench1} and \ref{tab:bench2} show the experimental results.
\renewcommand\arraystretch{1.1}
\begin{table*}[t]
\caption{Comparisons of the proposed approach with state-of-the-art works over the adaptation from GTA5 \cite{richter2016playing} to Cityscapes \cite{cordts2016cityscapes}.
We highlight the best result in each column in \textbf{bold}. \textit{'V'} and \textit{'R'} denote the VGG-16 and ResNet-101 backbones, respectively.} \centering \begin{tiny} \begin{tabular}{p{1.6cm}|p{0.1cm}|*{19}{p{0.14cm}}p{0.4cm}} \hline \hline \multicolumn{22}{c}{\textbf{GTA5-to-Cityscapes}} \\[0.05cm] \hline \hspace{0.5pt} Networks &\rot{backbone} &\rot{road} &\rot{sidewalk} &\rot{building} &\rot{wall} &\rot{fence} &\rot{pole} &\rot{light} &\rot{sign} &\rot{veg} &\rot{terrain} &\rot{sky} &\rot{person} &\rot{rider} &\rot{car} &\rot{truck} &\rot{bus} &\rot{train} &\rot{motor} &\rot{bike} &mIoU\\ \hline AdaptSeg~\cite{tsai2018learning} &V &87.3 &29.8 &78.6 &21.1 &18.2 &22.5 &21.5 &11.0 &79.7 &29.6 &71.3 &46.8 &6.5 &80.1 &23.0 &26.9 &0.0 &10.6 &0.3 &35.0\\ AdvEnt~\cite{vu2019advent} &V &86.9 &28.7 &78.7 &28.5 &\textbf{25.2} &17.1 &20.3 &10.9 &80.0 &26.4 &70.2 &47.1 &8.4 &81.5 &26.0 &17.2 &{18.9} &11.7 &1.6 &36.1\\ CLAN~\cite{luo2019taking} &V &{88.0} &30.6 &79.2 &23.4 &20.5 &26.1 &23.0 &14.8 &{81.6} &{34.5} &72.0 &45.8 &7.9 &80.5 &{\textbf{26.6}} &{29.9} &0.0 &10.7 &0.0 &36.6\\ CrCDA~\cite{huang2020contextual} &V &86.8 &37.5 &{80.4} &30.7 &18.1 &26.8 &25.3 &15.1 &81.5 &30.9 &{72.1} &{52.8} &{19.0} &{82.1} &25.4 &29.2 &10.1 &15.8 &3.7 &{39.1} \\ BDL~\cite{li2019bidirectional} &V &89.2 &40.9 &81.2 &29.1 &19.2 &14.2 &29.0 &19.6 &83.7 &35.9 &80.7 &54.7 &{\textbf{23.3}} &82.7 &25.8 &28.0 &2.3 &{\textbf{25.7}} &{19.9} &41.3\\ FDA~\cite{yang2020fda} &V &86.1 &35.1 &80.6 &{30.8} &20.4 &27.5 &30.0 &26.0 &82.1 &30.3 &73.6 &52.5 &21.7 &81.7 &24.0 &30.5 &{\textbf{29.9}} &14.6 &{\textbf{24.0}} &42.2\\ SIM~\cite{wang2020differential} &V &88.1 &35.8 &\textbf{83.1} &25.8 &23.9 &29.2 &28.8 &28.6 &83.0 &\textbf{{36.7}} &\textbf{{82.3}} &53.7 &22.8 &82.3 &26.4 &{\textbf{38.6}} &0.0 &19.6 &17.1 &42.4\\ SVMin~\cite{guan2020scale} &V &{89.7} &{42.1} &82.6 &{29.3} &22.5 &\textbf{32.3} &\textbf{35.5} &\textbf{32.2} &{84.6} &35.4 &77.2 &{\textbf{61.6}} &21.9 &{\textbf{86.2}} &26.1 &36.7 &7.7 &16.9 &19.4 &{44.2}\\ \textbf{Ours} &V &\textbf{90.8} &\textbf{43.9} &83.0 &\textbf{32.5} &24.5 &31.4 &34.2 &31.6 &\textbf{85.4} &36.1 &81.0 &58.9 &22.8 &86.1 &26.4 &37.8 &24.2 &21.8 &21.3 &{\textbf{46.0}}\\ \hline AdaptSeg~\cite{tsai2018learning} &R &86.5 &36.0 &79.9 &23.4 &23.3 &23.9 &35.2 &14.8 &83.4 &33.3 &75.6 &58.5 &27.6 &73.7 &32.5 &35.4 &3.9 &30.1 &28.1 &42.4\\ CLAN~\cite{luo2019taking} &R &87.0 &27.1 &79.6 &27.3 &23.3 &28.3 &35.5 &24.2 &83.6 &27.4 &74.2 &58.6 &28.0 &76.2 &33.1 &36.7 &{6.7} &{31.9} &31.4 &43.2\\ AdvEnt~\cite{vu2019advent} &R &89.4 &33.1 &81.0 &26.6 &26.8 &27.2 &33.5 &24.7 &{83.9} &{36.7} &78.8 &58.7 &30.5 &{84.8} &38.5 &44.5 &1.7 &31.6 &32.4 &45.5\\ IDA~\cite{pan2020unsupervised} &R &90.6 &37.1 &82.6 &30.1 &19.1 &29.5 &32.4 &20.6 &85.7 &40.5 &79.7 &58.7 &31.1 &86.3 &31.5 &48.3 &0.0 &30.2 &35.8 &46.3\\ PatAlign~\cite{tsai2019domain} &R &92.3 &51.9 &82.1 &29.2 &25.1 &24.5 &33.8 &33.0 &82.4 &32.8 &82.2 &58.6 &27.2 &84.3 &33.4 &{46.3} &2.2 &29.5 &32.3 &46.5\\ CRST~\cite{zou2019confidence} &R &91.0 &{55.4} &80.0 &{33.7} &21.4 &{37.3} &32.9 &24.5 &85.0 &34.1 &80.8 &57.7 &24.6 &84.1 &27.8 &30.1 &{26.9} &26.0 &42.3 &47.1\\ BDL~\cite{li2019bidirectional} &R &91.0 &44.7 &84.2 &{\textbf{34.6}} &27.6 &30.2 &36.0 &36.0 &85.0 &{\textbf{43.6}} &83.0 &58.6 &31.6 &83.3 &35.3 &{49.7} &3.3 &28.8 &35.6 &48.5\\ CrCDA~\cite{huang2020contextual} &R &{92.4} &55.3 &{82.3} &31.2 &{29.1} &32.5 &33.2 &{35.6} &83.5 &34.8 &{84.2} &58.9 &{32.2} &84.7 &{40.6} &46.1 &2.1 &31.1 &32.7 &{48.6} \\ SIM~\cite{wang2020differential} &R 
&90.6 &44.7 &84.8 &34.3 &28.7 &31.6 &35.0 &37.6 &84.7 &43.3 &85.3 &57.0 &31.5 &83.8 &{\textbf{42.6}} &48.5 &1.9 &30.4 &39.0 &49.2\\ CAG~\cite{zhang2019category} &R &90.4 &51.6 &83.8 &34.2 &27.8 &38.4 &25.3 &48.4 &85.4 &38.2 &78.1 &58.6 &{\textbf{34.6}} &84.7 &21.9 &42.7 &\textbf{41.1} &29.3 &37.2 &50.2\\ TIR~\cite{kim2020learning} &R &{92.9} &55.0 &{\textbf{85.3}} &34.2 &{\textbf{31.1}} &34.9 &40.7 &34.0 &85.2 &40.1 &{\textbf{87.1}} &61.0 &31.1 &82.5 &32.3 &42.9 &0.3 &{\textbf{36.4}} &{46.1} &50.2\\ FDA~\cite{yang2020fda} &R &92.5 &53.3 &82.4 &26.5 &27.6 &36.4 &40.6 &38.9 &82.3 &39.8 &78.0 &62.6 &34.4 &84.9 &34.1 &{\textbf{53.1}} &16.9 &27.7 &{\textbf{46.4}} &50.5\\ SVMin~\cite{guan2020scale} &R &{92.9} &{56.2} &84.3 &34.0 &22.0 &{\textbf{43.1}} &{\textbf{50.9}} &{\textbf{48.6}} &{85.8} &42.0 &78.9 &{\textbf{66.6}} &26.9 &{\textbf{88.4}} &35.2 &46.0 &10.9 &25.4 &39.6 &{51.5}\\ \textbf{Ours} &R &\textbf{93.6} &\textbf{58.4} &84.9 &34.3 &30.6 &42.5 &48.5 &44.6 &\textbf{86.9} &41.7 &83.4 &61.7 &33.4 &88.0 &41.4 &49.2 &16.2 &31.3 &40.4 &{\textbf{53.2}} \\ \hline \end{tabular} \end{tiny} \label{tab:bench1} \end{table*} For the GTA5-to-Cityscapes adaptation, we evaluate the mIoU of 19 classes shared between the source domain and target domain as in \cite{tsai2018learning,vu2019advent,huang2020contextual,guan2020scale}. For a fair comparison in SYNTHIA-to-Cityscapes adaptation, we report both mIoU of 13 classes (mIoU*) and mIoU of 16 classes (mIoU) shared between the source domain and target domain, respectively, as in \cite{tsai2018learning,vu2019advent,huang2020contextual,guan2020scale}. \renewcommand\arraystretch{1.1} \begin{table*}[!t] \caption{Comparisons of the proposed approach with state-of-the-art works over the adaptation task SYNTHIA-to-Cityscapes. We highlight the best result in each column in \textbf{bold}. \textit{'V'} and \textit{'R'} denote the VGG-16 and ResNet-101 backbones, respectively. 
\textit{mIoU} and \textit{mIoU*} stand for the mean intersection-over-union values calculated over sixteen and thirteen categories, respectively.} \hspace{0.2pt} \centering \begin{tiny} \begin{tabular}{p{1.6cm}|p{0.1cm}|*{16}{p{0.18cm}}p{0.3cm}p{0.4cm}} \hline \hline \multicolumn{20}{c}{\textbf{SYNTHIA-to-Cityscapes}} \\ \hline \hspace{0.5pt} Networks &\rot{backbone} &\rot{road} &\rot{sidewalk} &\rot{building} &\rot{wall} &\rot{fence} &\rot{pole} &\rot{light} &\rot{sign} &\rot{veg} &\rot{sky} &\rot{person} &\rot{rider} &\rot{car} &\rot{bus} &\rot{motor} &\rot{bike} &mIoU &mIoU*\\ \hline AdaptSeg~\cite{tsai2018learning} &V &78.9 &29.2 &75.5 &- &- &- &0.1 &4.8 &72.6 &76.7 &43.4 &8.8 &71.1 &16.0 &3.6 &8.4 &- &37.6\\ AdvEnt~\cite{vu2019advent} &V &67.9 &29.4 &71.9 &6.3 &0.3 &19.9 &0.6 &2.6 &74.9 &74.9 &35.4 &9.6 &67.8 &21.4 &4.1 &15.5 &31.4 &36.6\\ CLAN~\cite{luo2019taking} &V &{80.4} &{30.7} &74.7 &- &- &- &1.4 &8.0 &77.1 &79.0 &46.5 &8.9 &{73.8} &18.2 &2.2 &9.9 &- &39.3\\ CrCDA~\cite{huang2020contextual} &V &74.5 &30.5 &{78.6} &6.6 &{\textbf{0.7}} &21.2 &2.3 &8.4 &77.4 &79.1 &45.9 &{16.5} &73.1 &{24.1} &{9.6} &14.2 &35.2 &{41.1} \\ BDL~\cite{li2019bidirectional} &V &72.0 &30.3 &74.5 &0.1 &0.3 &24.6 &10.2 &25.2 &{80.5} &80.0 &54.7 &{\textbf{23.2}} &72.7 &{24.0} &7.5 &44.9 &39.0 &46.1\\ FDA~\cite{yang2020fda} &V &{84.2} &{\textbf{35.1}} &{78.0} &6.1 &0.4 &{\textbf{27.0}} &8.5 &22.1 &77.2 &79.6 &55.5 &19.9 &74.8 &{24.9} &{\textbf{14.3}} &40.7 &40.5 &47.3\\ SVMin~\cite{guan2020scale} &V &{82.5} &{31.5} &{77.6} &{\textbf{7.6}} &{\textbf{0.7}} &{26.0} &{\textbf{12.3}} &{\textbf{28.4}} &79.4 &{\textbf{82.1}} &{58.9} &21.5 &{82.1} &22.1 &{9.6} &{\textbf{49.2}} &{41.9} &{49.0}\\ \textbf{Ours} &V &\textbf{84.7} &33.9 &\textbf{78.7} &7.2 &0.6 &26.5 &11.4 &27.2 &\textbf{81.4} &\textbf{83.4} &\textbf{60.2} &22.7 &81.3 &\textbf{25.8} &12.6 &47.8 &{\textbf{42.8}} &{\textbf{50.1}}\\ \hline PatAlign~\cite{tsai2019domain} &R &82.4 &38.0 &78.6 &8.7 &0.6 &26.0 &3.9 &11.1 &75.5 &84.6 &53.5 &21.6 &71.4 &32.6 &19.3 &31.7 &40.0 &46.5\\ AdaptSeg~\cite{tsai2018learning} &R &84.3 &42.7 &77.5 &- &- &- &4.7 &7.0 &77.9 &82.5 &54.3 &21.0 &72.3 &32.2 &18.9 &32.3 &- &46.7\\ CLAN~\cite{luo2019taking} &R &81.3 &37.0 &{80.1} &- &- &- &{16.1} &{13.7} &78.2 &81.5 &53.4 &21.2 &73.0 &32.9 &{22.6} &30.7 &- &47.8\\ AdvEnt~\cite{vu2019advent} &R &85.6 &42.2 &79.7 &{8.7} &0.4 &25.9 &5.4 &8.1 &{80.4} &84.1 &{57.9} &23.8 &73.3 &36.4 &14.2 &{33.0} &41.2 &48.0\\ IDA~\cite{pan2020unsupervised} &R &84.3 &37.7 &79.5 &5.3 &0.4 &24.9 &9.2 &8.4 &80.0 &84.1 &57.2 &23.0 &78.0 &38.1 &20.3 &36.5 &41.7 &48.9\\ TIR~\cite{kim2020learning} &R &{\textbf{92.6}} &{53.2} &79.2 &- &- &- &1.6 &7.5 &78.6 &84.4 &52.6 &20.0 &82.1 &34.8 &14.6 &39.4 &- &49.3\\ CrCDA~\cite{huang2020contextual} &R &{86.2} &{44.9} &79.5 &8.3 &{0.7} &{27.8} &9.4 &11.8 &78.6 &{86.5} &57.2 &{26.1} &{76.8} &{39.9} &21.5 &32.1 &{42.9} &{50.0}\\ CRST~\cite{zou2019confidence} &R &67.7 &32.2 &73.9 &10.7 &{\textbf{1.6}} &{\textbf{37.4}} &22.2 &{\textbf{31.2}} &80.8 &80.5 &60.8 &29.1 &82.8 &25.0 &19.4 &45.3 &43.8 &50.1\\ BDL~\cite{li2019bidirectional} &R &86.0 &46.7 &80.3&-&-&- &14.1 &11.6 &79.2 &81.3 &54.1 &27.9 &73.7 &{\textbf{42.2}} &25.7 &45.3 &- &51.4\\ SIM~\cite{wang2020differential} &R &83.0 &44.0 &80.3 &- &- &- &17.1 &15.8 &80.5 &81.8 &59.9 &{\textbf{33.1}} &70.2 &37.3 &28.5 &45.8 &- &52.1\\ FDA~\cite{yang2020fda} &R &79.3 &35.0 &73.2 &- &- &- &19.9 &24.0 &61.7 &82.6 &61.4 &31.1 &83.9 &40.8 &{\textbf{38.4}} &{\textbf{51.1}} &- &52.5\\ CAG~\cite{zhang2019category} &R &84.7 &40.8 &81.7 
&7.8 &0.0 &35.1 &13.3 &22.7 &84.5 &77.6 &64.2 &27.8 &80.9 &19.7 &22.7 &48.3 &44.5 &52.6\\
SVMin~\cite{guan2020scale} &R &89.8 &47.7 &{82.3} &{\textbf{14.4}} &0.2 &37.1 &{\textbf{35.4}} &22.1 &{85.1} &{84.9} &{\textbf{65.8}} &25.6 &{\textbf{86.0}} &30.5 &{31.0} &{50.7} &{49.3} &{56.7}\\
\textbf{Ours} &R &90.6 &\textbf{49.3} &\textbf{82.6} &11.6 &1.1 &36.4 &32.9 &23.5 &\textbf{85.8} &\textbf{86.7} &63.4 &31.9 &85.1 &40.6 &34.2 &48.9 &{\textbf{50.3}} &{\textbf{58.1}}\\
\hline
\end{tabular}
\end{tiny}
\label{tab:bench2}
\end{table*}
As shown in Tables \ref{tab:bench1} and \ref{tab:bench2}, the proposed method outperforms all state-of-the-art methods under all four settings, $i.e.$, the two adaptation tasks with the two backbones. The superior performance of our segmentation model is largely attributed to the introduced multi-level consistency regularization, which enables concurrent region-level and image-level feature alignment as well as domain adaptation in both the input and output spaces. Without the multi-level consistency regularization, region-/image-level feature alignment and input/output adaptation tend to lack image-/region-/multi-level consistency information and to over-minimize their own objectives individually, ultimately leading to sub-optimal segmentation performance in the target domain.
\begin{figure*}[!t]
\centering
\begin{minipage}[h]{0.192\linewidth} \centering\footnotesize {(a) Image} \end{minipage}
\vspace{2pt}
\begin{minipage}[h]{0.192\linewidth} \centering\footnotesize {(b) GT} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\footnotesize {(c) CrCDA~\cite{huang2020contextual}} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\footnotesize {(d) SVMin~\cite{guan2020scale}} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\footnotesize{\textbf{(e) Ours}} \end{minipage}
\vspace{6pt}
\centering
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000044_000019_leftImg8bit.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000044_000019_gtFine_color.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000044_000019_leftImg8bit_color_crcda.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000044_000019_leftImg8bit_color_svm.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000044_000019_leftImg8bit_color.png} \end{minipage}
\vspace{6pt}
\centering
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000057_000019_leftImg8bit.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000057_000019_gtFine_color.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000057_000019_leftImg8bit_color_crcda.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000057_000019_leftImg8bit_color_svm.png} \end{minipage}
\begin{minipage}[h]{0.192\linewidth}
\centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000057_000019_leftImg8bit_color.png} \end{minipage} \vspace{6pt} \centering \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000058_000019_leftImg8bit.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000058_000019_gtFine_color.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000058_000019_leftImg8bit_color_crcda.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000058_000019_leftImg8bit_color_svm.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000058_000019_leftImg8bit_color.png} \end{minipage} \vspace{6pt} \centering \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000098_000019_leftImg8bit.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000098_000019_gtFine_color.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000098_000019_leftImg8bit_color_crcda.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000098_000019_leftImg8bit_color_svm.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000098_000019_leftImg8bit_color.png} \end{minipage} \vspace{6pt} \centering \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000122_000019_leftImg8bit.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000122_000019_gtFine_color.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000122_000019_leftImg8bit_color_crcda.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000122_000019_leftImg8bit_color_svm.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000122_000019_leftImg8bit_color.png} \end{minipage} \vspace{6pt} \centering \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000089_000019_leftImg8bit.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000089_000019_gtFine_color.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000089_000019_leftImg8bit_color_crcda.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} \centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000089_000019_leftImg8bit_color_svm.png} \end{minipage} \begin{minipage}[h]{0.192\linewidth} 
\centering\includegraphics[width=1.0\linewidth]{figures/qualitative_results/munster_000089_000019_leftImg8bit_color.png} \end{minipage}
\caption{Domain adaptive segmentation for the task GTA5-to-Cityscapes: Columns (a) and (b) show a few sample images from the Cityscapes dataset and the ground truth of their pixel-level segmentation, respectively. Columns (c), (d) and (e) present the qualitative results produced by the state-of-the-art methods CrCDA~\cite{huang2020contextual} and SVMin~\cite{guan2020scale} and by the proposed MLAN, respectively. Best viewed in color; zoom in for details.}
\label{fig:results_supple}
\end{figure*}
Figure \ref{fig:results_supple} presents qualitative segmentation results on the adaptation task GTA5-to-Cityscapes. It is clear that the proposed method performs the best, especially in local regions with class transitions ($i.e.$, regions that involve local context-relations among two or more classes). For example, as shown in the last row of Figure \ref{fig:results_supple}, CrCDA and SVMin falsely predict the trunk of the 'tree' as 'pole', while the proposed MLAN segments the same area correctly. We attribute the improvement largely to the proposed multi-level consistency regularization, which rectifies region-level context-relation alignment ($e.g.$, a 'pole' is not supposed to be on top of 'vegetation') as well as image-level feature alignment.
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{figures/figure_4.pdf}
\caption{Parameter study of the region-level adversarial learning (RL-AL) weight factor $\lambda_{ent}$ (with ResNet-101): For the domain adaptive segmentation task GTA5-to-Cityscapes, segmentation with the original RL-AL is sensitive to the weight factor, showing that its adversarial loss is an unstable objective. As a comparison, the proposed regularized RL-AL is much more stable and robust across different weight factors, as the proposed regularization incorporates global image-level information.}
\label{fig:weight1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{figures/figure_5.pdf}
\caption{Parameter study of the global image-level adversarial learning (IL-AL) weight factor $\lambda_{image}$ (with ResNet-101): For the domain adaptive semantic segmentation task GTA5-to-Cityscapes, segmentation with the original IL-AL is sensitive to the weight factor, showing that its adversarial loss is an unstable objective. As a comparison, the proposed regularized IL-AL is more stable and robust across different weight factors, as the proposed regularization incorporates local region-level information.}
\label{fig:weight2}
\end{figure}
\subsection{Parameter studies}
We analyze the influence of the weight factors that balance the adaptation/alignment objectives ($i.e.$, $\lambda_{ent}$ for region-level adversarial learning (RL-AL) and $\lambda_{image}$ for image-level adversarial learning (IL-AL)) against the other losses, such as the source supervised segmentation loss and the region-level learning loss. As shown in Figures \ref{fig:weight1} and \ref{fig:weight2}, the segmentation performance of the proposed regularized RL-AL/IL-AL is insensitive and robust to a wide range of weight factor values. Specifically, for the original RL-AL, the segmentation performance varies from $40.0\%$ to $43.1\%$ as its weight factor ranges from $0.2 \times 10^{-3}$ to $1.8 \times 10^{-3}$.
In contrast, the segmentation performance of the proposed regularized RL-AL varies only from $45.1\%$ to $46.1\%$ as its weight factor changes from $0.2 \times 10^{-3}$ to $1.8 \times 10^{-3}$. The IL-AL and the regularized IL-AL show a similar phenomenon over the same weight factor range, where IL-AL varies from $40.9\%$ to $43.3\%$ and the regularized IL-AL varies from $45.9\%$ to $46.9\%$. These observations indicate that the original adversarial losses ($i.e.$, in RL-AL and IL-AL) are unstable training objectives, and that the proposed regularization can effectively improve them and make training more robust and stable.
\section{Conclusion}
In this work, we propose an innovative multi-level adversarial network (MLAN) for domain adaptive semantic segmentation, which aims to achieve optimal region-level and image-level cross-domain consistency. MLAN introduces two novel designs, namely region-level adversarial learning (RL-AL) and co-regularized adversarial learning (CR-AL). Specifically, RL-AL explicitly learns prototypical region-level context-relations in the feature space of a labelled source domain and transfers them to an unlabelled target domain via adversarial learning. CR-AL integrates local region-level AL with global image-level AL and conducts mutual regularization. Furthermore, MLAN calculates a multi-level consistency map (MLCM) to guide domain adaptation in the input and output spaces. We experimentally evaluate the superiority of the proposed method on the adaptation from a source domain of synthesized images to a target domain of real images; the proposed method surpasses the state-of-the-art by large margins. In addition, we perform extensive ablation studies that illustrate the contribution of each design to the overall performance. We also conduct parameter studies and find that general adversarial learning is an unstable objective for semantic segmentation, which the proposed method can well rectify. In future work, we will explore more segmentation properties to make the adaptation process/loss more relevant to the main task ($i.e.$, segmentation).
\section*{Acknowledgment}
This research was conducted at Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), which is a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU) that is funded by the Singapore Government through the Industry Alignment Fund ‐ Industry Collaboration Projects Grant.
\bibliographystyle{plain}
\small
\setlength{\headheight}{15pt}
\newtheorem{theorem}{Theorem}[section]
\newtheorem*{theorem*}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\theoremstyle{plain}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{remark}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\begin{document}
\title{Complex structures on stratified Lie algebras}
\author{Junze Zhang%
\thanks{Electronic address: \texttt{[email protected]}}}
\affil{School of Mathematics and Statistics, University of New South Wales, Sydney}
\date{\today}
\maketitle
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyfoot[LE,RO]{\thepage}
\fancyhead[L]{\slshape\MakeUppercase{Complex structures on stratified Lie algebras}}
\newcommand{\underuparrow}[1]{\ensuremath{\underset{\uparrow}{#1}}}
\begin{abstract}
This paper investigates some properties of complex structures on Lie algebras. In particular, we focus on $\textit{nilpotent}$ $\textit{complex structures}$, which are characterized by a suitable $J$-invariant ascending or descending central series, $\mathfrak{d}^j$ and $\mathfrak{d}_j$ respectively. We introduce a new descending series $\mathfrak{p}_j$ and use it to prove a new characterization of nilpotent complex structures. We also examine whether nilpotent complex structures on stratified Lie algebras preserve the strata, and find that there exists a $J$-invariant stratification on a step $2$ nilpotent Lie algebra with a complex structure.
\end{abstract}
\section{Introduction}
\label{a}
In recent years, complex structures on nilpotent Lie algebras have been shown to be very useful for understanding some geometric and algebraic properties of nilmanifolds. In $\cite{MR1665327}$ and $\cite{MR1899353}$, Cordero, Fern{\'a}ndez, Gray and Ugarte introduced $\textit{nilpotent complex}$ $\textit{structures}$, studied $6$-dimensional nilpotent Lie algebras with nilpotent complex structures, and provided a classification. Since the ascending central series is not necessarily $J$-invariant, they introduced a $J$-invariant ascending central series to characterize nilpotent complex structures. More recently, Latorre, Ugarte and Villacampa defined the space of nilpotent complex structures on nilpotent Lie algebras and further studied complex structures on nilpotent Lie algebras with one-dimensional center $\cite{MR4009385}$, $\cite{latorre2020complex}$. They also provided a theorem describing the ascending central series of $8$-dimensional nilpotent Lie algebras with complex structures. In $\cite{gao2020maximal}$, Gao, Zhao and Zheng studied the relation between the step of a nilpotent Lie algebra and the smallest integer $j_0$ at which the $J$-invariant ascending central series stops. Furthermore, they introduced a $J$-invariant descending central series, which is another tool for characterizing nilpotent complex structures. These papers use the language of differential forms to characterize nilpotent complex structures; our proofs in this paper are purely Lie-algebraic.
Let $G$ be a Lie group and $\mathfrak{g} \cong T_eG$ be its Lie algebra, which we always assume to be real, unless otherwise stated.
A linear isomorphism $J:TG \rightarrow TG$ is an $\textit{almost complex structure}$ if $J^2 = -I.$ By the Newlander--Nirenberg Theorem $\cite{MR88770},$ an almost complex structure $J$ corresponds to a left invariant complex structure on $G$ if and only if
\begin{equation}
[J_eX,J_eY] -[X,Y] - J_e([J_eX,Y]+[X,J_eY]) = 0,
\label{eq:Niv}
\end{equation}
for all $X,Y \in \mathfrak{g}.$ Since we are interested only in Lie algebras in this paper, from now on we will write $J$ for $J_e.$ We will refer to $\eqref{eq:Niv}$ as the $\textit{Newlander--Nirenberg condition}.$
\section{Complex structures on nilpotent Lie algebras}
\label{b}
In this section, we consider some properties of the central series of nilpotent Lie algebras with a complex structure $J$ and define $J$-invariant central series. We define nilpotent complex structures and relate their properties to the dimension of the center $\mathfrak{z}$ of a nilpotent Lie algebra.
\begin{definition}
\label{acde}
(See, e.g., $\cite{MR1920389}$.) Let $\mathfrak{g}$ be a Lie algebra. The $\textit{descending}$ $\textit{central series}$ and $\textit{ascending}$ $\textit{central series}$ of $\mathfrak{g}$ are denoted $\mathfrak{c}_j(\mathfrak{g})$ and $\mathfrak{c}^j(\mathfrak{g})$ respectively, for all $j \geq 0,$ and defined inductively by
\begin{eqnarray}
& \mathfrak{c}_0(\mathfrak{g}) = \mathfrak{g}, \text{ } \mathfrak{c}_j(\mathfrak{g}) = [\mathfrak{g},\mathfrak{c}_{j-1}(\mathfrak{g})] \label{eq:de} ; \\
& \mathfrak{c}^0(\mathfrak{g}) = \{0\}, \text{ } \text{ } \mathfrak{c}^j(\mathfrak{g}) = \{X \in \mathfrak{g}:[X,\mathfrak{g}] \subseteq \mathfrak{c}^{j-1}(\mathfrak{g})\}. \label{eq:ac}
\end{eqnarray}
\end{definition}
\begin{remark}
\label{racde}
(i) Notice that $\mathfrak{c}^1(\mathfrak{g})=\mathfrak{Z}(\mathfrak{g})$, $\mathfrak{c}_1(\mathfrak{g}) = [\mathfrak{g},\mathfrak{g}],$ and
\begin{align*}
\mathfrak{c}^{j}(\mathfrak{g})/\mathfrak{c}^{j-1}(\mathfrak{g}) = \mathfrak{Z}\left(\mathfrak{g}/ \mathfrak{c}^{j-1}(\mathfrak{g})\right) \text{ } \text{for all } j \geq 1,
\end{align*}
where $\mathfrak{Z}(\cdot)$ denotes the center of a Lie algebra. Furthermore, $\mathfrak{c}_j(\mathfrak{g})/\mathfrak{c}_{j+1}(\mathfrak{g}) \subseteq \mathfrak{Z}\left(\mathfrak{g}/\mathfrak{c}_{j+1}(\mathfrak{g})\right)$ for all $j \geq 0.$ It is clear that $\mathfrak{c}^j(\mathfrak{g})$ and $\mathfrak{c}_j(\mathfrak{g})$ are ideals of $\mathfrak{g}$ for all $j \geq 0$.
(ii) A Lie algebra $\mathfrak{g}$ is called $\textit{nilpotent of step }k$, for some $k \in \mathbb{N},$ if $\mathfrak{c}_k(\mathfrak{g}) = \{0\}$ and $\mathfrak{c}_{k-1}(\mathfrak{g}) \neq \{0\}.$ We will denote nilpotent Lie algebras by $\mathfrak{n}$ in this paper. See, e.g., $\cite[\text{Section }5.2]{MR3025417}$ or $\cite{MR1920389}.$
\end{remark}
\subsection{$J$-invariant central series and nilpotent complex structures}
\label{b4}
Following $\cite[\text{Definition }1]{MR1665327}$, we define the $\textit{J-invariant}$ $\textit{ascending}$ $\textit{central series}$ $\mathfrak{d}^j$ for nilpotent Lie algebras and introduce $\textit{nilpotent}$ $\textit{complex}$ $\textit{structures}$ on nilpotent Lie algebras. Furthermore, we recall the definition of the $\textit{J-invariant}$ $\textit{descending}$ $\textit{central series}$ $\mathfrak{d}_j$
from $\cite[\text{Definition 2.7}]{gao2020maximal}$.
\begin{definition}
\label{7}
Let $\mathfrak{n}$ be a Lie algebra with a complex structure $J.$ Define a sequence of $J$-invariant ideals of $\mathfrak{n}$ by $\mathfrak{d}^0 = \{0\}$ and
\begin{equation}
\mathfrak{d}^j = \{ X \in \mathfrak{n} : [X,\mathfrak{n}] \subseteq \mathfrak{d}^{j-1} , [JX,\mathfrak{n}] \subseteq \mathfrak{d}^{j-1} \}
\label{eq:1}
\end{equation}
for all $j \geq 1.$ We call the sequence $\mathfrak{d}^j$ the $\textit{ascending J-invariant}$ $\textit{central series}.$ The complex structure $J$ is called $\textit{nilpotent of step }j_0$ if there exists $j_0 \in \mathbb{N}$ such that $\mathfrak{d}^{j_0} = \mathfrak{n}$ and $\mathfrak{d}^{j_0-1} \subset \mathfrak{n}$. We define inductively the $J$-$\textit{invariant}$ $ \textit{descending}$ $ \textit{central}$ $\textit{series}$ by
\begin{equation}
\mathfrak{d}_0 = \mathfrak{n}, \text{ } \text{ } \mathfrak{d}_j = [\mathfrak{d}_{j-1},\mathfrak{n}] + J[\mathfrak{d}_{j-1},\mathfrak{n}] \text{ } \text{ }
\label{eq:00}
\end{equation}
for all $j \geq 1.$
\end{definition}
\begin{remark}
\label{r7}
(i) For the ascending $J$-invariant central series $\mathfrak{d}^j$,
\begin{align*}
\mathfrak{d}^j / \mathfrak{d}^{j-1} = \mathfrak{Z}(\mathfrak{n}/\mathfrak{d}^{j-1}) \cap J\mathfrak{Z}(\mathfrak{n}/\mathfrak{d}^{j-1}) \qquad \text{ for all } j \geq 1 .
\end{align*}
In particular, $\mathfrak{d}^1 = \mathfrak{z} \cap J\mathfrak{z},$ which is the largest $J$-invariant subspace of $\mathfrak{z}$, and, if $J$ is nilpotent, then $\mathfrak{d}^1 \neq \{0\}$. The nilpotency of $J$ implies that the ascending $J$-invariant central series $\mathfrak{d}^j$ of $\mathfrak{n}$ is strictly increasing until $\mathfrak{d}^{j_0} = \mathfrak{n}$. Furthermore, if $\mathfrak{n}$ is a step $k$ nilpotent Lie algebra with a nilpotent complex structure $J$ of step $j_0$, then $k \leq j_0 \leq \frac{1}{2}\dim \mathfrak{n}.$ See, e.g., $\cite{MR1665327}$ and $\cite{gao2020maximal}.$
(ii) By definition, if $\mathfrak{n}$ admits a nilpotent complex structure, then $\mathfrak{n}$ is nilpotent.
(iii) For all $j \geq 0,$ it is clear that $\mathfrak{c}_j(\mathfrak{n}) + J \mathfrak{c}_j(\mathfrak{n}) \subseteq \mathfrak{d}_j$. Furthermore, $\mathfrak{d}_j \unlhd \mathfrak{n}$ and $\mathfrak{d}^j \unlhd \mathfrak{n}$, where $\unlhd$ denotes an ideal.
(iv) Let $\mathfrak{n}$ be a Lie algebra with a complex structure $J.$ Then $J$ preserves all terms of $\mathfrak{c}^j(\mathfrak{n})$ if and only if $\mathfrak{d}^j = \mathfrak{c}^j(\mathfrak{n})$ for all $j \geq 0$ $\cite[\text{Corollary }5]{MR1665327}$. Similarly, $J$ preserves all terms of $\mathfrak{c}_j(\mathfrak{n})$ if and only if $\mathfrak{d}_j = \mathfrak{c}_j(\mathfrak{n})$ for all $j.$
\end{remark}
The following lemma provides a connection between the $J$-invariant ascending and descending central series.
\begin{lemma}
\label{9}
Let $\mathfrak{n}$ be a Lie algebra with a complex structure $J$, and suppose that $J$ is nilpotent of step $j_0.$
(i) Then $\mathfrak{n}/\mathfrak{d}^{j_0-1}$ is Abelian. Conversely, if there exists $j_0\in \mathbb{N}$ such that $\mathfrak{n}/\mathfrak{d}^{j_0-1}$ is Abelian, then $J$ is nilpotent of step at most $j_0.$
(ii) Then $\mathfrak{d}_j \subseteq \mathfrak{d}^{j_0 - j}$ for all $j \geq 0$.
Conversely, if there exists $j_0 \in \mathbb{N}$ such that $\mathfrak{d}_j \subseteq \mathfrak{d}^{j_0 - j} $ for all $j \geq 0,$ then $J$ is nilpotent of step at most $j_0.$ \end{lemma} \begin{proof} For part (i), suppose that $J$ is nilpotent of step $j_0$. By definition, $\mathfrak{d}^{j_0} = \mathfrak{n}$ and $\mathfrak{d}^{j_0-1} \subset \mathfrak{n}$. Then \begin{align*} \mathfrak{Z}(\mathfrak{n}/\mathfrak{d}^{j_0-1}) \cap J\mathfrak{Z}(\mathfrak{n}/ \mathfrak{d}^{j_0-1}) = \mathfrak{n}/\mathfrak{d}^{j_0-1}. \end{align*} It is obvious that $ \mathfrak{Z}(\mathfrak{n}/\mathfrak{d}^{j_0-1}) = \mathfrak{n}/\mathfrak{d}^{j_0-1}.$ Hence $\mathfrak{n}/\mathfrak{d}^{j_0-1}$ is Abelian. Conversely, suppose that there exists $j_0 \in \mathbb{N}$ such that $ \mathfrak{n}/ \mathfrak{d}^{j_0-1}$ is Abelian. Then $\mathfrak{c}_1(\mathfrak{n}) \subseteq \mathfrak{d}^{j_0-1}$, so for all $X \in \mathfrak{n},$ \begin{align*} [X,\mathfrak{n}] \subseteq \mathfrak{d}^{j_0 - 1} \text{ and } [JX,\mathfrak{n}] \subseteq \mathfrak{d}^{j_0 -1}. \end{align*} We deduce that $ \mathfrak{n} = \mathfrak{d}^{j_0}$ and therefore $J$ is nilpotent of step at most $j_0$. For part (ii), assume that $J$ is nilpotent of step $j_0.$ By definition, $\mathfrak{d}_0 = \mathfrak{n} = \mathfrak{d}^{j_0}.$ Next, assume that $\mathfrak{d}_{s-1} \subseteq \mathfrak{d}^{j_0-s+1}$ for some $s \in \mathbb{N}.$ Then \begin{align*} \mathfrak{d}_s & = [\mathfrak{d}_{s-1},\mathfrak{n}] + J[\mathfrak{d}_{s-1},\mathfrak{n}] \\ & \subseteq [\mathfrak{d}^{j_0 -s + 1},\mathfrak{n}] + J [\mathfrak{d}^{j_0 -s + 1},\mathfrak{n}] \\ & \subseteq \mathfrak{d}^{j_0-s} + J\mathfrak{d}^{j_0-s} = \mathfrak{d}^{j_0-s}. \end{align*} Hence by induction, $\mathfrak{d}_j \subseteq \mathfrak{d}^{j_0 - j} $ for all $j \geq 0.$ Conversely, suppose that there exists $j_0 \in \mathbb{N}$ such that $\mathfrak{d}_j \subseteq \mathfrak{d}^{j_0 - j} $ for all $j \geq 0$. In particular, $ \mathfrak{d}_1 \subseteq \mathfrak{d}^{j_0 -1}.$ By definition, $\mathfrak{c}_1(\mathfrak{n}) \subseteq \mathfrak{d}_1.$ It follows that $$ [\mathfrak{n},\mathfrak{n}] + \mathfrak{d}^{j_0-1} = \mathfrak{c}_1(\mathfrak{n}) + \mathfrak{d}^{j_0-1} \subseteq \mathfrak{d}_1 + \mathfrak{d}^{j_0-1} \subseteq \mathfrak{d}^{j_0-1},$$ and thus $\mathfrak{n}/\mathfrak{d}^{j_0 -1}$ is Abelian. By part (i), $J$ is nilpotent of step at most $j_0.$ \end{proof} \begin{remark} \label{r41} Under the conditions of Lemma $\ref{9},$ if $J$ is nilpotent of step $j_0,$ then $\mathfrak{d}_{j_0-1} \subseteq \mathfrak{d}^1 \subseteq \mathfrak{z};$ in particular, $\mathfrak{d}_{j_0-1}$ is Abelian. Furthermore, there exists $j_0 \in \mathbb{N}$ such that $\mathfrak{n}/\mathfrak{d}^{j_0-1}$ is Abelian if and only if $\mathfrak{d}_j \subseteq \mathfrak{d}^{j_0-j}$ for all $j \geq 0.$ This is proved by induction as in the proof of $\text{Lemma }\ref{9}$. \end{remark} \begin{corollary} Let $\mathfrak{n}$ be a step $k$ nilpotent Lie algebra with a complex structure $J.$ Then $J$ is nilpotent of step $k$ if and only if $\mathfrak{d}_j \subseteq \mathfrak{d}^{k-j}$ for all $j \geq 0$.
\end{corollary} \begin{proof} Suppose that $J$ is nilpotent of step $k.$ By $\text{Lemma }\ref{9},$ $\mathfrak{d}_j \subseteq \mathfrak{d}^{k-j}$ for all $j \geq 0.$ Conversely, assume that $\mathfrak{d}_j \subseteq \mathfrak{d}^{k-j}$ for all $j.$ Again by Lemma $\ref{9}$, $J$ is nilpotent of step at most $k.$ Furthermore, by Remark $\ref{r7}$ (iii), $ \{0\} \neq \mathfrak{c}_{k-1}(\mathfrak{n}) \subseteq \mathfrak{d}_{k-1}.$ Therefore $\mathfrak{d}_{k-1} \neq \{0\} $ and $J$ is nilpotent of step $k.$ \end{proof} \begin{remark} From Remark $\ref{r41},$ $J$ is nilpotent of step $k$ if and only if $\mathfrak{n}/\mathfrak{d}^{k-1}$ is Abelian. \end{remark} We introduce a new descending central series whose descending `rate' is slower than that of $\mathfrak{c}_j(\mathfrak{n})$ but faster than that of $\mathfrak{d}_j.$ \begin{definition} \label{p3} Let $J$ be a complex structure on a Lie algebra $\mathfrak{n}.$ We define the sequence $\mathfrak{p}_j$ inductively by \begin{equation} \mathfrak{p}_0 = \mathfrak{n} \text{ and } \mathfrak{p}_j = [\mathfrak{p}_{j-1},\mathfrak{n}] + [J\mathfrak{p}_{j-1},\mathfrak{n}] \text{ for all } j \geq 1 . \label{eq:p} \end{equation} \end{definition} \begin{remark} It is clear that $\mathfrak{p}_{j+1} \subseteq \mathfrak{p}_j$ for all $j \geq 0.$ Furthermore, $\mathfrak{p}_j \unlhd \mathfrak{n}$ since $[\mathfrak{p}_j,\mathfrak{n}] \subseteq \mathfrak{p}_{j+1} \subseteq \mathfrak{p}_j$ for all $j \geq 0$. \end{remark} \begin{lemma} \label{p} Let $\mathfrak{n}$ be a Lie algebra with a complex structure $J $. Then $ \mathfrak{c}_j(\mathfrak{n}) \subseteq \mathfrak{p}_j$ for all $j \geq 0$. Furthermore, $\mathfrak{p}_j \subseteq \mathfrak{d}_j $ and $ J \mathfrak{p}_j \subseteq \mathfrak{d}_j $ for all $j \geq 0$. \end{lemma} \begin{proof} By definition, $\mathfrak{c}_0(\mathfrak{n}) = \mathfrak{n} = \mathfrak{p}_0.$ It follows, by induction, that $ \mathfrak{c}_j(\mathfrak{n}) \subseteq \mathfrak{p}_j$ for all $j \geq 0$. Using $\eqref{eq:00},$ $[\mathfrak{d}_{j-1},\mathfrak{n}] \subseteq \mathfrak{d}_j $. By definition, $ \mathfrak{p}_0 = \mathfrak{n} = \mathfrak{d}_0$ and $J\mathfrak{p}_0 = J\mathfrak{n} = \mathfrak{n} = \mathfrak{d}_0$. Next, suppose that $\mathfrak{p}_s \subseteq \mathfrak{d}_s$ and $J\mathfrak{p}_s \subseteq \mathfrak{d}_s$ for some $s \in \mathbb{N}.$ Then by $\eqref{eq:p},$ \begin{align*} \mathfrak{p}_{s+1} = [\mathfrak{p}_s,\mathfrak{n}] + [J\mathfrak{p}_s,\mathfrak{n}] \subseteq [\mathfrak{d}_s,\mathfrak{n}] \subseteq \mathfrak{d}_{s+1} \text{ and } J\mathfrak{p}_{s+1} \subseteq J[\mathfrak{d}_s,\mathfrak{n}] \subseteq \mathfrak{d}_{s+1}. \end{align*} By induction, $ \mathfrak{p}_j \subseteq \mathfrak{d}_j $ and $J\mathfrak{p}_j \subseteq \mathfrak{d}_j $ for all $j \geq 0.$ \end{proof} \begin{remark} \label{rp} (i) Notice that $\mathfrak{p}_j/\mathfrak{p}_{j+1} \subseteq \mathfrak{Z}\left(\mathfrak{n}/\mathfrak{p}_{j+1}\right)$ for all $j \geq 0.$ Indeed, for all $P \in \mathfrak{p}_j$ and $Y \in \mathfrak{n},$ since $[P,Y] \in \mathfrak{p}_{j+1},$ we deduce that \begin{equation*} [P+ \mathfrak{p}_{j+1},Y + \mathfrak{p}_{j+1}] =[P,Y] + \mathfrak{p}_{j+1} \subseteq \mathfrak{p}_{j+1}. \end{equation*} Hence $\mathfrak{p}_j/\mathfrak{p}_{j+1} \subseteq \mathfrak{Z}\left(\mathfrak{n}/\mathfrak{p}_{j+1}\right)$. (ii) By $\text{Lemma }\ref{p},$ $ \mathfrak{p}_j + J \mathfrak{p}_j \subseteq \mathfrak{d}_j$ for all $j \geq 0$.
We show that $\mathfrak{p}_j + J \mathfrak{p}_j \unlhd \mathfrak{n}$ for all $j \geq 0.$ Indeed, for all $P,P' \in \mathfrak{p}_j,$ $$ \underbrace{[P+JP',\mathfrak{n}]}_\text{$\subseteq [\mathfrak{p}_j + J \mathfrak{p}_j,\mathfrak{n}]$} \subseteq \underbrace{[P,\mathfrak{n}]}_\text{$ \subseteq \mathfrak{p}_{j+1}$} + \underbrace{[JP',\mathfrak{n}]}_\text{$ \subseteq \mathfrak{p}_{j+1}$} \subseteq \mathfrak{p}_{j+1} \subseteq \mathfrak{p}_{j+1} + J\mathfrak{p}_{j+1}\subseteq \mathfrak{p}_j + J\mathfrak{p}_j.$$ Hence $\mathfrak{p}_j + J \mathfrak{p}_j \unlhd \mathfrak{n}$. (iii) From part (ii), the sequence $\mathfrak{p}_j + J\mathfrak{p}_j$ is a $J$-invariant descending central series. Indeed, for all $T = P + JP' \in \mathfrak{p}_j + J \mathfrak{p}_j$ and $Y \in \mathfrak{n},$ \begin{align*} [T+ \mathfrak{p}_{j+1} + J\mathfrak{p}_{j+1},Y + \mathfrak{p}_{j+1} +J\mathfrak{p}_{j+1}] \subseteq [T,Y] + \mathfrak{p}_{j+1} +J\mathfrak{p}_{j+1}\subseteq \mathfrak{p}_{j+1} +J\mathfrak{p}_{j+1}. \end{align*} \end{remark} \begin{theorem} \label{14} Let $\mathfrak{n}$ be a Lie algebra with a complex structure $J.$ The following are equivalent: (i) $J$ is nilpotent of step $j_0$; (ii) $ \mathfrak{p}_{j_0} = \{0\}$ and $ \mathfrak{p}_{j_0-1} \neq \{0\};$ (iii) $ \mathfrak{d}_{j_0} = \{0\}$ and $\mathfrak{d}_{j_0 - 1} \neq \{0\}$. \end{theorem} \begin{proof} We first show that (i) and (ii) are equivalent. Assume that $J$ is nilpotent of step $j_0.$ From $\text{Lemma }\ref{9}$ part (ii), $\mathfrak{d}_{j_0-1} \subseteq \mathfrak{d}^1.$ Hence by $\text{Lemma }\ref{p},$ $$ \mathfrak{p}_{j_0}\subseteq [\mathfrak{d}_{j_0-1},\mathfrak{n}] \subseteq [\mathfrak{d}^1,\mathfrak{n}]= \{0\}.$$ Thus $ \mathfrak{p}_{j_0}= \{0\} $. Assume, by contradiction, that $ \mathfrak{p}_{j_0-1} = \{0\}.$ We show that $\mathfrak{p}_{j_0-j-1} + J\mathfrak{p}_{j_0-j-1} \subseteq \mathfrak{d}^j $ for all $j \geq 0 $ by induction. By definition, $ \mathfrak{p}_{j_0-1} + J \mathfrak{p}_{j_0-1} = \{0\} = \mathfrak{d}^0.$ Next, suppose that $ \mathfrak{p}_{j_0-s-1} + J \mathfrak{p}_{j_0-s-1}\subseteq\mathfrak{d}^s $ for some $s \in \mathbb{N}.$ Then from $\text{Remark }\ref{rp}$ part (ii), \begin{equation*} [\mathfrak{p}_{j_0-s-2} + J \mathfrak{p}_{j_0-s-2},\mathfrak{n}] \subseteq \mathfrak{p}_{j_0-s-1} + J\mathfrak{p}_{j_0-s-1} \subseteq \mathfrak{d}^s. \end{equation*} This implies, using $\eqref{eq:1},$ $ \mathfrak{p}_{j_0-s-2} +J \mathfrak{p}_{j_0-s-2} \subseteq \mathfrak{d}^{s+1}.$ By induction, $\mathfrak{p}_{j_0-j-1} + J\mathfrak{p}_{j_0-j-1}$ $ \subseteq \mathfrak{d}^j $ for all $j \geq 0.$ In particular, let $j = j_0-1.$ Then $\mathfrak{n} \subseteq \mathfrak{d}^{j_0-1}$, which implies that $J$ is nilpotent of step at most $j_0-1$. This contradicts the assumption that $J$ is nilpotent of step $j_0.$ Therefore $ \mathfrak{p}_{j_0-1} \neq \{0\}.$ Conversely, suppose that $ \mathfrak{p}_{j_0} = \{0\}$ and $ \mathfrak{p}_{j_0-1} \neq \{0\}$. We show that $J$ is nilpotent of step $j_0.$ By definition, $ \mathfrak{p}_{j_0} + J \mathfrak{p}_{j_0} = \{0\} = \mathfrak{d}^0 $. It follows, by induction, that $\mathfrak{p}_{j_0-j} + J\mathfrak{p}_{j_0-j} \subseteq \mathfrak{d}^{j} $ for all $j \geq 0.$ Hence $\mathfrak{p}_{j_0-j} \subseteq \mathfrak{d}^j$. In particular, let $j =j_0-1.$ Then $$ \mathfrak{p}_1 = [\mathfrak{n},\mathfrak{n}] \subseteq \mathfrak{d}^{j_0-1},$$ so $\mathfrak{n}/\mathfrak{d}^{j_0-1}$ is Abelian. By $\text{Lemma }\ref{9},$ $J$ is nilpotent of step at most $j_0.$ We next show that $\mathfrak{d}^{j_0-1} \neq \mathfrak{n}$ by contradiction.
Assume, by contradiction, that $ \mathfrak{n}= \mathfrak{d}^{j_0-1}.$ We show that $\mathfrak{p}_{j-1} \subseteq \mathfrak{d}^{j_0-j}$ for all $j \geq 1$ by induction. By definition, $\mathfrak{p}_0 = \mathfrak{n}= \mathfrak{d}^{j_0-1}.$ Next, suppose that $\mathfrak{p}_{s-1} \subseteq \mathfrak{d}^{j_0-s}$ for some $s \in \mathbb{N}.$ Then \begin{align*} \mathfrak{p}_s & = [\mathfrak{p}_{s-1},\mathfrak{n}] +[J\mathfrak{p}_{s-1},\mathfrak{n}] \\ & \subseteq [\mathfrak{d}^{j_0-s},\mathfrak{n}] +[J\mathfrak{d}^{j_0-s},\mathfrak{n}] \\ & \subseteq \mathfrak{d}^{j_0-s-1}. \end{align*} By induction, $\mathfrak{p}_{j-1} \subseteq \mathfrak{d}^{j_0-j}$ for all $j \geq 1.$ In particular, let $j = j_0.$ We deduce that $ \mathfrak{p}_{j_0-1} \subseteq \mathfrak{d}^0 = \{0\}.$ This implies that $ \mathfrak{p}_{j_0-1} = \{0\},$ which is a contradiction. Hence $\mathfrak{d}^{j_0-1} \neq \mathfrak{n}$ and, by definition, $J$ is nilpotent of step $j_0.$ We now show that (i) and (iii) are equivalent. Suppose first that $J$ is nilpotent of step $j_0.$ It follows, from $\text{Lemma } \ref{9}$ part (ii), that $\mathfrak{d}_j \subseteq \mathfrak{d}^{j_0 - j}$ for all $j \geq 0.$ In particular, taking $j = j_0,$ we get $\mathfrak{d}_{j_0} \subseteq \mathfrak{d}^0 = \{0\}.$ We show that $\mathfrak{d}_{j_0-1} \neq \{0\}.$ Since (i) implies (ii), $\mathfrak{p}_{j_0-1} \neq \{0\},$ and by Lemma $\ref{p}$, $\{0\} \neq \mathfrak{p}_{j_0-1} + J \mathfrak{p}_{j_0-1} \subseteq \mathfrak{d}_{j_0-1}.$ Hence $\mathfrak{d}_{j_0-1} \neq \{0\}.$ Conversely, assume that $\mathfrak{d}_{j_0} = \{0\}$ and $\mathfrak{d}_{j_0-1} \neq \{0\}$. By definition, $[\mathfrak{d}_{j_0-1},\mathfrak{n}] \subseteq \mathfrak{d}_{j_0} = \{0\}.$ Hence, $\{0\} \neq \mathfrak{d}_{j_0-1} \subseteq \mathfrak{d}^1.$ Next, assume that $\mathfrak{d}_{j_0-s} \subseteq \mathfrak{d}^s$ for some $s \in \mathbb{N}.$ Then by definition, \begin{align*} [\mathfrak{d}_{j_0-s-1},\mathfrak{n}] \subseteq \mathfrak{d}_{j_0-s} \subseteq \mathfrak{d}^s. \end{align*} By $\eqref{eq:1},$ $\mathfrak{d}_{j_0-s-1} \subseteq \mathfrak{d}^{s+1}.$ By induction, $\mathfrak{d}_{j_0-j} \subseteq \mathfrak{d}^j$ for all $j \geq 0.$ Let $j= j_0.$ We find that $\mathfrak{d}_0 = \mathfrak{n} \subseteq \mathfrak{d}^{j_0}.$ Therefore $\mathfrak{d}^{j_0} = \mathfrak{n}$ and $J$ is nilpotent of step at most $j_0.$ We next show that $\mathfrak{d}^{j_0-1} \neq \mathfrak{n}.$ Suppose not, that is, $ \mathfrak{n} =\mathfrak{d}^{j_0-1}.$ By definition, $\mathfrak{d}_0 = \mathfrak{n} = \mathfrak{d}^{j_0-1}.$ It follows, by induction, that $\mathfrak{d}_{j-1} \subseteq \mathfrak{d}^{j_0-j}$ for all $j \geq 1$. Let $j = j_0.$ We find that $ \mathfrak{d}_{j_0-1} \subseteq \{0\}.$ This is a contradiction. Hence $\mathfrak{d}^{j_0-1} \neq \mathfrak{n} $ and $J$ is nilpotent of step $j_0.$ Finally, since (i) is equivalent to both (ii) and (iii), we conclude that (ii) and (iii) are equivalent. \end{proof} \begin{remark} \label{r35} Suppose that a Lie algebra $\mathfrak{n}$ admits a nilpotent complex structure $J$ of step $j_0.$ Then \begin{equation} \mathfrak{c}_j(\mathfrak{n}) + J\mathfrak{c}_j(\mathfrak{n}) \subseteq \mathfrak{p}_j + J \mathfrak{p}_j \subseteq \mathfrak{d}_{j} \subseteq \mathfrak{d}^{j_0-j} \label{eq:ew} \end{equation} for all $ j \geq 0$. \end{remark} It is shown in $\cite[\text{Corollary }7]{MR1665327}$ that if $\mathfrak{c}_j(\mathfrak{n})$ is $J$-invariant for all $j \geq 0,$ then $J$ is nilpotent. We will provide a different approach to this.
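Before doing so, we illustrate Definition $\ref{7}$ and Theorem $\ref{14}$ on a standard low-dimensional example. \begin{remark} Let $\mathfrak{n} = \mathrm{span}\{X,Y,Z,W\}$ with the only nontrivial bracket $[X,Y] = Z,$ so that $\mathfrak{n} \cong \mathfrak{h}_3 \oplus \mathbb{R}$ is nilpotent of step $2,$ and define $J$ by $JX = Y,$ $JY = -X,$ $JZ = W,$ $JW = -Z.$ A direct check on basis elements shows that $J$ satisfies the Newlander--Nirenberg condition $\eqref{eq:Niv}.$ Here $\mathfrak{z} = \mathrm{span}\{Z,W\}$ is $J$-invariant, so $\mathfrak{d}^1 = \mathfrak{z} \cap J\mathfrak{z} = \mathfrak{z},$ and since every bracket lies in $\mathrm{span}\{Z\} \subseteq \mathfrak{d}^1,$ we obtain $\mathfrak{d}^2 = \mathfrak{n}.$ Hence $J$ is nilpotent of step $2.$ Consistently with Theorem $\ref{14},$ we find $\mathfrak{p}_1 = \mathrm{span}\{Z\} \neq \{0\}$ and $\mathfrak{p}_2 = \{0\},$ as well as $\mathfrak{d}_1 = \mathrm{span}\{Z,W\} \neq \{0\}$ and $\mathfrak{d}_2 = \{0\}.$ \end{remark}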
\begin{corollary} \label{43.9} Let $\mathfrak{n}$ be a step $k$ nilpotent Lie algebra with a complex structure $J.$ Suppose that all $\mathfrak{c}_j(\mathfrak{n})$ are $J$-invariant. Then $\mathfrak{p}_j = \mathfrak{c}_j(\mathfrak{n})$ for all $j \geq 0.$ Furthermore, $J$ is nilpotent of step $k$. \end{corollary} \begin{proof} Since all $\mathfrak{c}_j(\mathfrak{n})$ are $J$-invariant, by definition, $\mathfrak{p}_0 = \mathfrak{n} = \mathfrak{c}_0(\mathfrak{n}).$ We have, by induction, that $ \mathfrak{p}_j = \mathfrak{c}_j(\mathfrak{n}) $ for all $j \geq 0.$ Therefore $\mathfrak{p}_k = \mathfrak{c}_k(\mathfrak{n}) = \{0\}$ and $\mathfrak{p}_{k-1} = \mathfrak{c}_{k-1}(\mathfrak{n}) \neq \{0\}$. By $\text{Theorem } \ref{14},$ $J$ is nilpotent of step $k.$ \end{proof} \begin{corollary} \label{uus} Let $\mathfrak{n}$ be a step $k$ nilpotent Lie algebra with a nilpotent complex structure $J$ of step $k$. Suppose that $\mathfrak{c}_{k-1}(\mathfrak{n}) = \mathfrak{z}$. Then $\mathfrak{z}$ is $J$-invariant. \end{corollary} \begin{proof} Since $J$ is nilpotent of step $k,$ by $\eqref{eq:ew}$, \begin{equation*} \mathfrak{z} + J \mathfrak{z} \subseteq \mathfrak{d}_{k-1} \subseteq \mathfrak{d}^1 \subseteq \mathfrak{z}. \end{equation*} Hence $J\mathfrak{z} \subseteq \mathfrak{z}$ and, since $J$ is invertible, $J\mathfrak{z} = \mathfrak{z}.$ \end{proof} \begin{corollary} \label{42.5} Let $\mathfrak{n}$ be a Lie algebra with a nilpotent complex structure $J$ of step $j_0$. Then for all $j \geq 1,$ $\mathfrak{d}_{j_0-j}$ is not contained in $\mathfrak{d}^{j-1}$. \end{corollary} \begin{proof} Since $J$ is nilpotent of step $j_0,$ by $\text{Theorem }\ref{14},$ $\mathfrak{d}_{j_0-1} \neq \{0\} = \mathfrak{d}^0$. Hence $\mathfrak{d}_{j_0-1}$ is not contained in $\mathfrak{d}^0 .$ Next, suppose that $\mathfrak{d}_{j_0-s+1}$ is not contained in $\mathfrak{d}^{s-2}$ for some integer $s \geq 2.$ We show that $\mathfrak{d}_{j_0-s}$ is not contained in $\mathfrak{d}^{s-1}.$ Suppose not, that is, $\mathfrak{d}_{j_0-s} \subseteq \mathfrak{d}^{s-1}.$ Then \begin{align*} \mathfrak{d}_{j_0-s+1} & = [\mathfrak{d}_{j_0-s},\mathfrak{n}] + J[\mathfrak{d}_{j_0-s},\mathfrak{n}] \\ & \subseteq [\mathfrak{d}^{s-1},\mathfrak{n}]+J[\mathfrak{d}^{s-1},\mathfrak{n}] \subseteq \mathfrak{d}^{s-2}. \end{align*} It follows that $\mathfrak{d}_{j_0-s+1} \subseteq \mathfrak{d}^{s-2}.$ This is a contradiction. Hence $\mathfrak{d}_{j_0-s}$ is not contained in $ \mathfrak{d}^{s-1}.$ By induction, for all $j \geq 1,$ $\mathfrak{d}_{j_0-j}$ is not contained in $\mathfrak{d}^{j-1}$. \end{proof} We investigate the possible range of $\dim \mathfrak{z}$ for a Lie algebra $\mathfrak{n}$ with a nilpotent complex structure $J.$ \begin{proposition} \label{16} Let $\mathfrak{n}$ be a non-Abelian Lie algebra of dimension $2n$ with a nilpotent complex structure $J.$ Then $ 2 \leq \dim \mathfrak{z} \leq 2n-2.$ \end{proposition} \begin{proof} Recall that $ \mathfrak{d}^1 = \mathfrak{z} \cap J \mathfrak{z}$, which is the largest $J$-invariant subspace of $\mathfrak{z}$.
Since $J$ is nilpotent, it is clear that $ \mathfrak{d}^1 \neq \{0\}.$ Furthermore, since $ \mathfrak{d}^1$ is $J$-invariant, it follows that $ 2 \leq \dim \mathfrak{d}^1 \leq \dim \mathfrak{z}.$ Hence the lower bound on $\dim \mathfrak{z}$ is $2.$ Next, we show that the upper bound on $\dim\mathfrak{z}$ is $2n-2.$ Since $\mathfrak{n}$ is non-Abelian, it is possible to find $X,Y \in \mathfrak{n}$ such that $0 \neq [X,Y] \in \mathfrak{c}_1(\mathfrak{n}).$ Then $ \mathrm{span} \{X,Y\}$ is $2$-dimensional and $ \mathrm{span} \{X,Y\} \cap \mathfrak{z} = \{0\}$: indeed, if $aX + bY \in \mathfrak{z},$ then $0 = [aX+bY,Y] = a[X,Y]$ and $0 = [aX+bY,X] = -b[X,Y],$ so $a = b = 0.$ Hence $\dim \mathfrak{z} \leq 2n -2.$ In conclusion, $2 \leq \dim \mathfrak{z} \leq 2n - 2.$ \end{proof} \begin{remark} From Proposition $ \ref{16},$ we can further conclude that if $\dim \mathfrak{z} = 1,$ then the complex structure $J$ on $\mathfrak{n}$ is non-nilpotent. In particular, the Lie algebra of $n \times n$ strictly upper triangular matrices, whose center is one-dimensional, does not admit a nilpotent complex structure. \end{remark} \section{Stratified Lie algebras with complex structures} In this section we consider a special type of nilpotent Lie algebras: $\textit{stratified Lie}$ $\textit{algebras}$. Recent results on nilpotent Lie algebras with a stratification can be found in $\cite{MR4127910},\cite{MR3521656}, \cite{MR3742567}.$ We start with the definition of stratified Lie algebras. \begin{definition} A nilpotent Lie algebra $\mathfrak{n}$ is said to admit a $\textit{step k stratification}$ if it has a vector space decomposition of the form $\mathfrak{n}_1 \oplus \mathfrak{n}_2 \oplus \ldots \oplus \mathfrak{n}_k,$ where $\mathfrak{n}_k \neq \{0\}$ and $[\mathfrak{n}_1,\mathfrak{n}_k] = \{0\},$ satisfying the bracket generating property \begin{align*} [\mathfrak{n}_1 ,\mathfrak{n}_{j-1}] = \mathfrak{n}_j \qquad \text{ for all } j \in \{2,\ldots,k\} . \end{align*} A Lie algebra $\mathfrak{n}$ that admits a stratification is called a $\textit{stratified Lie algebra}.$ A complex structure $J$ on a stratified Lie algebra $\mathfrak{n}$ is said to be $\textit{strata-preserving}$ if it preserves each layer of the stratification. \end{definition} \begin{remark} \label{rs} Let $\mathfrak{n}$ be a step $k$ stratified Lie algebra. By induction, \begin{equation} \mathfrak{c}_j(\mathfrak{n}) =\bigoplus_{ j+1\leq l \leq k} \mathfrak{n}_l \qquad \text{ for all } j \geq 0. \label{eq:e} \end{equation} \end{remark} \begin{proposition} \label{abc} Let $\mathfrak{n}$ be a $2n$-dimensional step $n$ nilpotent Lie algebra for some $n \in \mathbb{N}$. Suppose that $\dim \mathfrak{c}_j(\mathfrak{n}) = 2n-2j$ for all $1 \leq j \leq n.$ Then $\mathfrak{n}$ does not admit a stratification. \end{proposition} \begin{proof} Assume, by contradiction, that $\mathfrak{n}$ admits a stratification. Since, by $\eqref{eq:e}$, $ \mathfrak{c}_j(\mathfrak{n})=\bigoplus_{j+1 \leq l \leq n} \mathfrak{n}_l$ and $\dim\mathfrak{c}_1(\mathfrak{n}) = 2n-2$, we have $\dim \mathfrak{n}_1 =2.$ Since $\mathfrak{n}$ is a stratified Lie algebra, $\mathfrak{n}_2 = [\mathfrak{n}_1,\mathfrak{n}_1]$ is spanned by a single bracket, so $\dim \mathfrak{n}_2 \leq 1$ and $\dim\mathfrak{c}_2(\mathfrak{n}) \geq 2n-3 > 2n-4.$ This is a contradiction. \end{proof} \begin{proposition} \label{3.4} Let $\mathfrak{n}$ be a step $k$ stratified Lie algebra with a complex structure $J$ and $k \geq 2$. Suppose that $\dim \mathfrak{n}_1 =2.$ Then $J$ is not strata-preserving.
\end{proposition} \begin{proof} Suppose, by contradiction, that there exists a strata-preserving complex structure $J.$ Then $\dim \mathfrak{n}_j \in 2\mathbb{N}$ for all $j \geq 1.$ However, $\dim \mathfrak{n}_1= 2 $ implies that $\dim \mathfrak{n}_2 =1$ (as $\mathfrak{n}_2 = [\mathfrak{n}_1,\mathfrak{n}_1]$ is spanned by a single bracket and is nonzero since $k \geq 2$), which contradicts the fact that $\dim \mathfrak{n}_2 \in 2 \mathbb{N}.$ Hence $\mathfrak{n}$ does not have a strata-preserving complex structure. \end{proof} \begin{remark} Let $\mathfrak{n}$ be a step $3$ stratified Lie algebra with a strata-preserving complex structure. Arguing in a similar way as in Proposition $\ref{3.4}$, we conclude that $\dim \mathfrak{n} \notin \{4,6\}.$ \end{remark} We now show that a step $2$ nilpotent Lie algebra whose commutator ideal is $J$-invariant always admits a $J$-invariant stratification. \begin{theorem} \label{5} Let $\mathfrak{n}$ be a step $2$ nilpotent Lie algebra with a complex structure $J.$ Suppose that $\mathfrak{c}_1(\mathfrak{n})$ is $J$-invariant. Then $\mathfrak{n}$ admits a $J$-invariant stratification. \end{theorem} \begin{proof} Define a $J$-invariant inner product $\psi$ by \begin{align*} \psi (X,Y) = \phi(X,Y) + \phi(JX ,JY) , \text{ for all } X,Y \in \mathfrak{n}, \end{align*} where $\phi$ is an inner product on $\mathfrak{n}.$ We show that there exists a stratification on $\mathfrak{n}$ such that $\mathfrak{n}_1$ and $\mathfrak{n}_2$ are $J$-invariant. Define $\mathfrak{n}_2 = [\mathfrak{n},\mathfrak{n}]$ and $ \mathfrak{n}_1 = \mathfrak{n}_2^\perp$, the orthogonal complement of $ \mathfrak{n}_2$ with respect to $\psi$. Then $\mathfrak{n}_2 = \mathfrak{c}_1(\mathfrak{n})$ is $J$-invariant and by definition $\mathfrak{n}= \mathfrak{n}_1 \oplus \mathfrak{n}_2$. Since $\psi$ is $J$-invariant and $\mathfrak{n}_2$ is $J$-invariant, the orthogonal complement $\mathfrak{n}_1$ is $J$-invariant as well. Also note that $$ \mathfrak{n}_2 = [\mathfrak{n}_1\oplus \mathfrak{n}_2,\mathfrak{n}_1\oplus\mathfrak{n}_2] = [\mathfrak{n}_1,\mathfrak{n}_1], $$ since $\mathfrak{c}_1(\mathfrak{n})$ is central in a step $2$ algebra. This implies that $\mathfrak{n}_1$ generates $\mathfrak{n}.$ Thus $J$ is a complex structure that preserves both $\mathfrak{n}_1$ and $\mathfrak{n}_2$. \end{proof} \begin{remark} (i) Let $\mathfrak{g}$ be an arbitrary Lie algebra. A complex structure $J$ on $\mathfrak{g}$ is called $\textit{bi-invariant}$ if $J[X,Y] = [JX,Y]$ for all $X,Y \in \mathfrak{g},$ that is, $J \circ \operatorname{ad}(X) = \operatorname{ad}(X) \circ J$ for all $X \in \mathfrak{g}.$ A complex structure $J$ is called $\textit{Abelian}$ if $[X,Y] = [JX,JY]$ for all $X,Y \in \mathfrak{g}.$ See, e.g., $\cite{MR2040168}$, $\cite{MR2070596}$. Notice that $J$ preserves all terms of $\mathfrak{c}_j(\mathfrak{n})$ and $\mathfrak{c}^j(\mathfrak{n})$ if $J$ is bi-invariant, while if $J$ is Abelian, $J$ preserves all terms of $\mathfrak{c}^j(\mathfrak{n})$ only. (ii) Suppose that $\mathfrak{n}$ is a step $k$ stratified Lie algebra with a bi-invariant complex structure $J.$ From $\eqref{eq:e}$, $ \mathfrak{c}_j(\mathfrak{n}) = \bigoplus_{j+1 \leq l \leq k} \mathfrak{n}_l$, it is clear that $\dim \mathfrak{n}_j \in 2 \mathbb{N}$ for all $ j \in \{1,\ldots,k\}.$ \end{remark} \begin{proposition} \label{44} Let $\mathfrak{n}$ be a step $k$ stratified Lie algebra with a strata-preserving complex structure $J.$ Then $J\mathfrak{c}_j(\mathfrak{n}) = \mathfrak{c}_j(\mathfrak{n})$ for all $j \geq 0 $ and $J$ is nilpotent of step $k$.
\end{proposition} \begin{proof} We first show that $J\mathfrak{c}_j(\mathfrak{n}) = \mathfrak{c}_j(\mathfrak{n})$ for all $j \geq 0.$ Recall, from $\eqref{eq:e},$ that $\mathfrak{c}_j(\mathfrak{n}) = \bigoplus_{j+1 \leq l \leq k} \mathfrak{n}_l$ and hence, since $J$ preserves each layer, $ J\mathfrak{c}_j(\mathfrak{n}) =\mathfrak{c}_j(\mathfrak{n}) $ for all $j \geq 0.$ By $\text{Corollary }\ref{43.9}$, $J$ is nilpotent of step $k.$ \end{proof} It is known that every step $2$ nilpotent Lie algebra may be stratified (see, e.g., $\cite{MR3742567}$). In Theorem $\ref{47}$ we provide another proof that every complex structure on a step $2$ nilpotent Lie algebra is nilpotent, of step $2$ or $3$. See, e.g., $ \cite[\text{Theorem } 1.3]{gao2020maximal}$ and $\cite[\text{Proposition }3.3]{MR2533671}.$ In what follows, we denote by $\mathfrak{k} = \mathfrak{n}_2 \cap J\mathfrak{n}_2$ the largest $J$-invariant subspace contained in $\mathfrak{n}_2,$ and we also remind the reader that $\mathfrak{d}^1 = \mathfrak{z} \cap J\mathfrak{z} $ is the largest $J$-invariant subspace contained in $\mathfrak{z}.$ \begin{theorem} \label{47} Let $ \mathfrak{n} = \mathfrak{n}_1 \oplus \mathfrak{n}_2$ be a step $2$ nilpotent Lie algebra with a complex structure $J$ and a $J$-invariant inner product $\psi$. (i) Suppose that $\mathfrak{k} = \{0\}$. Then $ \mathfrak{d}_1$ is Abelian and $J$ is nilpotent of step $2.$ (ii) Suppose that $ \{0\} \neq \mathfrak{k} \subset \mathfrak{n}_2$. Then $J$ is nilpotent of step $3.$ (iii) Suppose that $\mathfrak{n}_2 = \mathfrak{k} .$ Then $J$ is strata-preserving and nilpotent of step $2.$ In conclusion, $J$ is nilpotent of either step $2$ or step $3$. \end{theorem} \begin{proof} We start with parts (i) and (ii) together. Suppose that $J\mathfrak{n}_2 \neq \mathfrak{n}_2.$ Then $\mathfrak{p}_2 = [J\mathfrak{n}_2,\mathfrak{n}] \subseteq \mathfrak{n}_2,$ since $\mathfrak{n}_2$ is central. For all $Z_2 \in \mathfrak{n}_2$ and $X \in \mathfrak{n},$ by the Newlander--Nirenberg condition, \begin{equation} [J\mathfrak{n}_2,\mathfrak{n}] \ni [JZ_2,JX] = J[JZ_2,X] \in J[J\mathfrak{n}_2,\mathfrak{n}]. \label{eq:iu} \end{equation} This implies that $\mathfrak{p}_2$ is a $J$-invariant subspace of $\mathfrak{n}_2,$ so $\mathfrak{p}_2 \subseteq \mathfrak{k}.$ We now consider the following two possibilities. (i) If $\mathfrak{k} = \{0\},$ then from $\eqref{eq:iu},$ we get that $\mathfrak{p}_2 = \{0\}.$ By Theorem $\ref{14},$ $J$ is nilpotent of step $2.$ Moreover, $\mathfrak{d}_2 = \{0\},$ so $[\mathfrak{d}_1,\mathfrak{d}_1] \subseteq [\mathfrak{d}_1,\mathfrak{n}] \subseteq \mathfrak{d}_2 = \{0\}$ and $\mathfrak{d}_1$ is Abelian. (ii) If $\{0\} \neq \mathfrak{k} \subset \mathfrak{n}_2,$ then since $\{0\} \neq \mathfrak{p}_2 \subseteq \mathfrak{k}$ and $J\mathfrak{p}_2 \subset \mathfrak{n}_2,$ by definition $\mathfrak{p}_3 = \{0\}.$ By Theorem $\ref{14},$ $J$ is nilpotent of step $3.$ Finally, for part (iii), suppose that $\mathfrak{n}_2 = \mathfrak{k} .$ We find that $J$ preserves $\mathfrak{n}_2.$ By Theorem $\ref{5},$ $J$ is strata-preserving. From $\text{Corollary }\ref{44},$ $J$ is nilpotent of step $2.$ In conclusion, $J$ is nilpotent of either step $2$ or step $3.$ \end{proof} \begin{remark} \label{r47} (i) If $J$ is nilpotent of step $3,$ then there does not necessarily exist a $J$-invariant stratification.
(ii) We recall, from $\cite[\text{Theorem }1.3]{gao2020maximal},$ that if $\mathfrak{z}$ is not $J$-invariant, then $J$ is nilpotent of step $3.$ From Theorem $\ref{47},$ we have the following table: \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \hline $J$ & Strata-preserving & Non-strata-preserving \\ \hline $J\mathfrak{z} =\mathfrak{z}$ & $J$ nilpotent of step $2$ & $J$ nilpotent of step $2$ \\ \hline $J\mathfrak{z} \neq \mathfrak{z}$ & $J$ nilpotent of step $2$ & $J$ nilpotent of step $3$ \\ \hline \end{tabular} \caption{\label{2.1}Nilpotency of $J$} \end{table} From $\text{Table } \ref{2.1},$ if $J$ is nilpotent of step $2$, then $J$ is either strata-preserving or center-preserving. More precisely, we conclude that either $\mathfrak{k} =\mathfrak{n}_2 \cap J\mathfrak{n}_2 =\{0\}$ or $J\mathfrak{n}_2 = \mathfrak{n}_2.$ Indeed, if $\mathfrak{n}$ is a step $2$ nilpotent Lie algebra with a nilpotent complex structure $J$ of step $2,$ then $J$ need not be strata-preserving. \end{remark} Notice that an even-dimensional nilpotent Lie algebra with $\dim \mathfrak{c}_1(\mathfrak{n}) = 1$ has step $2$. In this case, there does not exist a $J$-invariant stratification, for dimensional reasons. Suppose that $\dim \mathfrak{c}_1(\mathfrak{n}) \geq 2.$ We have the following theorem. \begin{theorem} \label{49} Let $\mathfrak{n}$ be a step $2$ stratified Lie algebra with a complex structure $J$. (i) Suppose that $\dim \mathfrak{n}_2 =2$. Then (a) $J$ is nilpotent of step $2;$ (b) if $\dim \mathfrak{d}^1 = 2,$ then $J\mathfrak{n}_2 = \mathfrak{n}_2$. (ii) Suppose that $\dim \mathfrak{n}_2 = 2l$ for some integer $l \geq 2$. Furthermore, assume that $\dim \mathfrak{d}^1 \leq 4l-2$ and $J\mathfrak{n}_2 \neq \mathfrak{n}_2$. Then $J$ is nilpotent of step $3.$ \end{theorem} \begin{proof} By Theorem $\ref{47},$ $J$ is nilpotent of either step $2$ or step $3.$ Start with part (i). Assume that $\dim \mathfrak{n}_2 =2.$ For part (a), notice that $J$ could be either strata-preserving or not. If $J$ is strata-preserving, by Theorem $ \ref{47}$ part (iii), $J$ is nilpotent of step $2$. Otherwise, $J$ is not strata-preserving. Since $\dim \mathfrak{n}_2 =2 $, it follows that $\mathfrak{k} = \{0\}.$ Then by Theorem $ \ref{47}$ part (i), $J$ is nilpotent of step $2.$ Next, for part (b), recall that $\mathfrak{d}^1 = \mathfrak{z} \cap J\mathfrak{z}$ is the largest $J$-invariant subspace of $\mathfrak{z} $. Suppose that $\mathfrak{n}_2$ is not $J$-invariant. Then $\mathfrak{k} = \{0\}.$ From part (a), $J$ is nilpotent of step $2.$ It follows, from $\text{Theorem }\ref{14}$ and $\text{Lemma }\ref{9},$ that $\mathfrak{d}_2 = \{0\}$ and $\mathfrak{d}_1 \subseteq \mathfrak{d}^1.$ However, $\dim \mathfrak{d}_1 = \dim (\mathfrak{n}_2 \oplus J\mathfrak{n}_2) = 4 > 2 = \dim\mathfrak{d}^1.$ This is a contradiction. Hence $J\mathfrak{n}_2 = \mathfrak{n}_2$. We now show part (ii). Notice that $l \neq 1$: otherwise $\dim \mathfrak{n}_2 = \dim \mathfrak{d}^1 = 2,$ which, by part (i)(b), implies that $J\mathfrak{n}_2 = \mathfrak{n}_2.$ Suppose, by contradiction, that $J$ is not nilpotent of step $3.$ Hence $J$ is nilpotent of step $2.$ Then, from Remark $\ref{r47}$ part (ii), $\mathfrak{k} = \{0\} $ and, by definition, $\mathfrak{d}_1 = \mathfrak{n}_2 \oplus J\mathfrak{n}_2,$ which is contained in $\mathfrak{d}^1$ by Lemma $\ref{9}.$ However, $\dim \mathfrak{d}_1 = 4l > 4l-2 \geq \dim \mathfrak{d}^1.$ This is a contradiction.
Hence $\mathfrak{k} \neq \{0\}.$ By Theorem $\ref{47}$ part (ii), $J$ is nilpotent of step $3.$ \end{proof} \begin{remark} \label{33} We can extend the statement of part (i) to higher step stratifications as follows: Let $\mathfrak{n}$ be a step $k$ stratified Lie algebra with a nilpotent complex structure $J$ of step $k$. Suppose that $\dim \mathfrak{n}_k = 2$ and $\dim \mathfrak{d}^1 =2 $. Then $J\mathfrak{n}_k = \mathfrak{n}_k.$ \end{remark} \begin{corollary} \label{36} Let $\mathfrak{n} = \mathfrak{n}_1 \oplus \mathfrak{n}_2$ be a step $2$ stratified Lie algebra with a complex structure $J$ such that $\dim \mathfrak{n}_2 = 2.$ Then $J$ is center-preserving or strata-preserving or both. Furthermore, suppose that either $2 \leq \dim \mathfrak{z} \leq 3,$ or $\dim \mathfrak{z} = 4$ and $J\mathfrak{z} \neq \mathfrak{z}.$ Then there exists a $J$-invariant stratification. \end{corollary} \begin{proof} By $\text{Theorem }\ref{49},$ $J$ is nilpotent of step $2$. Then by $\text{Table }\ref{2.1}$, $J\mathfrak{n}_2 = \mathfrak{n}_2$ or $J\mathfrak{z} = \mathfrak{z}$ (or both, if $\mathfrak{n}_2 = \mathfrak{z}$). Furthermore, $\dim \mathfrak{d}^1 = 2,$ since either $2 \leq \dim \mathfrak{z} \leq 3,$ or $\dim\mathfrak{z} = 4$ and $J\mathfrak{z} \neq \mathfrak{z}.$ By part (i)(b) of $\text{Theorem }\ref{49},$ $J\mathfrak{n}_2 = \mathfrak{n}_2$. Therefore, by $\text{Theorem }\ref{5},$ there exists a $J$-invariant stratification. \end{proof} Suppose that $\mathfrak{n}$ is a $6$-dimensional step $2$ nilpotent Lie algebra with a complex structure. In $\cite[\text{Table } 1]{MR1899353},$ there is a complete classification of complex structures on these algebras. However, no information is provided on whether or not $J$ preserves the strata. \begin{corollary}[$\cite{MR2763953,MR1899353}$] \label{6nil} Let $\mathfrak{n}$ be a $6$-dimensional step $2$ nilpotent Lie algebra with a complex structure $J $ such that $\dim \mathfrak{c}_1(\mathfrak{n}) =2 .$ Then $\mathfrak{n}$ admits a $J$-invariant stratification. \end{corollary} \begin{proof} By Theorem $\ref{47}$ and Proposition $\ref{16}$, $J$ is nilpotent and $2 \leq \dim \mathfrak{z} \leq 4$. If $\dim \mathfrak{z} = 4,$ then $\dim(\mathfrak{n}/\mathfrak{z}) = 2,$ so $\mathfrak{c}_1(\mathfrak{n})$ is spanned by a single bracket and $\dim \mathfrak{c}_1(\mathfrak{n}) = 1,$ contradicting our assumption; hence this case does not occur. Thus $\dim \mathfrak{z} \leq 3,$ and the claim is a direct consequence of Corollary $\ref{36}.$ \end{proof} In what follows, we focus on higher step stratified Lie algebras with complex structures. \begin{proposition} Let $\mathfrak{n}$ be a step $3$ stratified Lie algebra with a complex structure $J$. Suppose that $J\mathfrak{n}_3 = \mathfrak{n}_3$. Then $J$ is nilpotent of step $3.$ \end{proposition} \begin{proof} By the definition of the descending central series $\mathfrak{p}_j$ in $\eqref{eq:p},$ $\{0\} \neq \mathfrak{p}_2 = \mathfrak{n}_3 + [J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}].$ On the one hand, suppose that $[J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}] = \{0\}.$ We deduce that $\mathfrak{p}_2 = \mathfrak{n}_3$ and hence $\mathfrak{p}_3 = \{0\}$ by definition.
Using Theorem $\ref{14},$ $J$ is nilpotent of step $3.$ On the other hand, suppose that $ [J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}] \neq \{0\}.$ Then by the Newlander--Nirenberg condition, for all $U \in \mathfrak{c}_1(\mathfrak{n})$ and $X \in \mathfrak{n},$ \begin{align*} \underbrace{[JU,JX] - J[JU,X]}_\text{$\in [J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}] + J[J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}] $} = \underbrace{[U,X]+J[U,JX]}_\text{$\in \mathfrak{n}_3$}. \end{align*} Hence $[J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}] \subseteq \mathfrak{n}_3.$ This implies that $\mathfrak{p}_2 \subseteq \mathfrak{n}_3$ and therefore $J\mathfrak{p}_2 \subseteq \mathfrak{n}_3.$ Then $\mathfrak{p}_3 = [\mathfrak{p}_2,\mathfrak{n}] + [J\mathfrak{p}_2,\mathfrak{n}] = \{0\}.$ Again by Theorem \ref{14}, $J$ is nilpotent of step $3.$ \end{proof} \begin{proposition} Let $\mathfrak{n}$ be an $8$-dimensional step $3$ stratified Lie algebra with a complex structure $J$ such that $2\dim \mathfrak{n}_3 = \dim \mathfrak{c}_1(\mathfrak{n}) = 4$. Suppose that $J\mathfrak{n}_3 \neq \mathfrak{n}_3$ and $\dim \mathfrak{z} \leq 3.$ Then $J$ is nilpotent of step $4.$ Furthermore, $\mathfrak{d}_2 = \mathfrak{n}_3 \oplus J\mathfrak{n}_3$. \end{proposition} \begin{proof} Since $\mathfrak{n}_3 \subseteq \mathfrak{z},$ $\dim \mathfrak{z} \geq 2.$ By $\cite[\text{Corollary }3.12]{MR4009385},$ $J$ is nilpotent. Then, using Remark $\ref{r7}$ (i), $3 \leq j_0 \leq 4$, where $j_0$ is the nilpotency step of $J.$ Suppose, by contradiction, that $J$ is nilpotent of step $3.$ It follows, from equation $\eqref{eq:ew}$, that $\mathfrak{n}_3 + J\mathfrak{n}_3 \subseteq \mathfrak{d}_2 \subseteq \mathfrak{d}^1 \subseteq \mathfrak{z}.$ On the one hand, since $\dim \mathfrak{z} \leq 3,$ $\dim \mathfrak{d}^1 =2.$ On the other hand, since $J\mathfrak{n}_3 \neq \mathfrak{n}_3$ and $\dim \mathfrak{n}_3 = 2,$ $\mathfrak{n}_3 \cap J\mathfrak{n}_3 = \{0\}$ and therefore $ \dim (\mathfrak{n}_3 \oplus J\mathfrak{n}_3) = 4 > \dim \mathfrak{d}^1.$ This is a contradiction. So $J$ is nilpotent of step $4.$ We now show that $\mathfrak{d}_2 = \mathfrak{n}_3 \oplus J\mathfrak{n}_3$. It is sufficient to show that $\mathfrak{d}_2 \subseteq \mathfrak{n}_3 \oplus J\mathfrak{n}_3.$ By definition, \begin{align*} \mathfrak{d}_2 & = [\mathfrak{d}_1,\mathfrak{n}] +J[\mathfrak{d}_1,\mathfrak{n}] \\ & = \mathrm{span}\left\{ [T,X] + J[T',X'] : T,T' \in \mathfrak{d}_1, \ X,X' \in \mathfrak{n} \right\}. \end{align*} For all $ T,T' \in \mathfrak{d}_1,$ we may write $T = U + JV$ and $T' = U' + JV'$ where $U,V,U',V' \in \mathfrak{c}_1(\mathfrak{n}).$ Then \begin{equation} [T,X] + J[T',X']= \underbrace{[U,X]+J[U',X']}_\text{$ \in \mathfrak{n}_3 \oplus J\mathfrak{n}_3 $} +[JV,X] + J[JV',X']. \label{eq:90} \end{equation} By the Newlander--Nirenberg condition, \begin{align*} \underbrace{[JV,X] + J[JV,JX]}_\text{$ \in [J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}] + J[J\mathfrak{c}_1(\mathfrak{n}),\mathfrak{n}]$} = J[V,X]-[V,JX] \in \mathfrak{n}_3 \oplus J\mathfrak{n}_3.
\end{align*} Hence $ [JV,X] + J[JV', X'] \in \mathfrak{n}_3 \oplus J\mathfrak{n}_3.$ From $\eqref{eq:90},$ $[T,X] + J[T',X'] \in \mathfrak{n}_3 \oplus J\mathfrak{n}_3.$ Hence $\mathfrak{d}_2 \subseteq \mathfrak{n}_3 \oplus J\mathfrak{n}_3.$ In conclusion, $\mathfrak{d}_2 = \mathfrak{n}_3 \oplus J\mathfrak{n}_3.$ \end{proof} \section*{Acknowledgement} The results of this paper are contained in the author's Master of Science thesis at the University of New South Wales, prepared under the supervision of Michael G. Cowling and Alessandro Ottazzi. I would like to thank both of them deeply for their guidance and for sharing their views on mathematics. They also provided very useful comments and suggestions on this project. \bibliographystyle{plain}
\section{Introduction} In this paper, we address a very crucial problem in fashion e-commerce, namely, automated \textit{color variants identification}, i.e., identifying fashion products that match exactly in their design (or style), but differ only in their color (Figure \ref{CV_illustration}). Our motivation to pick the use-case of color variants identification for fashion products comes from the following reasons: i) Fashion products top online retail sales across all categories \cite{jagadeesh2014large}, ii) Users often hesitate to buy a fashion product solely due to its color despite liking all other aspects of it. Providing more color options increases the add-to-cart ratio, thereby generating more revenue, along with improved customer experience. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{CV_illustration.pdf} \caption{Illustration of the \textit{color variants identification} problem. The images belong to \url{www.myntra.com}.} \label{CV_illustration} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{CV_pipeline.png} \caption{Proposed framework to address color variants identification in fashion e-commerce.} \label{CV_pipeline} \end{figure} At Myntra (\url{www.myntra.com}), a leading fashion e-commerce platform, we address this problem by leveraging deep visual Representation/Embedding Learning. Our proposed framework, as shown in Figure \ref{CV_pipeline}, is generic in nature, consisting of an Embedding Learning model (Step 2), which is trained to obtain embeddings (i.e., representations of visual data in a vector space) in such a way that embeddings of color variant images are grouped together. Having obtained the embeddings, we can perform clustering (Step 3) on them to obtain the color variants (embeddings falling into the same cluster would correspond to images that are color variants). Also, as a typical Product Display Page (PDP) image in a fashion e-commerce platform usually contains multiple fashion products, we apply an object detector to localise our \textit{primary} fashion product of interest, as a preprocessing step (Step 1). It should be noted that the \textit{main component in the pipeline is the Representation/Embedding Learning model} in Step 2, and this is the main focus of our paper. We discuss different strategies for training the Embedding Learning component. One way is to obtain manual annotations (class labels) indicating whether two product images are color variants of each other. These class labels are used to obtain triplet-based constraints (details to be introduced later), consisting of an anchor, a positive and a negative (Figure \ref{CV_illustration}). Although this approach performs well in practice, a key challenge in the real-world setting of fashion e-commerce is the large scale of data to be annotated, which often makes annotation infeasible and tedious. \textit{Could we somehow do away with this annotation step?} This is what we asked ourselves. Interestingly, we noted that color variants of fashion products are in essence manifestations of the same fashion \textit{style/design} after applying color jittering, which is a widely popular image augmentation technique applied in visual Self-Supervised Learning (SSL). SSL finds image embeddings without requiring manual annotations! Thus, we go one step further, and try to answer the question of whether SSL could be employed in our use-case to achieve performance comparable to that of a supervised model using manually labeled examples.
For this, we evaluated a number of State-Of-The-Art (SOTA) SSL techniques from the recent literature, but found certain limitations in them for our use-case. To address this, we make certain crafty modifications, and propose a novel SSL method as well. Following are the \textbf{major contributions of the paper}: \begin{enumerate}[noitemsep,nolistsep] \item A generic visual Representation Learning based framework to identify color variants among fashion products (to the best of our knowledge, studied as a research paper for the first time). \item A systematic study of a supervised method (with manual annotations), as well as existing SOTA SSL methods, to train the embedding learning model. \item A novel contrastive loss based SSL method that focuses on parts of an object to identify color variants. \end{enumerate} \textbf{Related Work:} The problem of visual embedding/metric learning refers to obtaining vector representations/embeddings of images in such a way that the embeddings of similar images are grouped together while dissimilar ones are pushed away. Several supervised \cite{circle_CVPR20,class_collapse_2020,gu2020symmetrical} and unsupervised \cite{dutta2020unsupervised,cao2019unsupervised,li2020unsupervised,SUML_AAAI20} approaches have been proposed in the recent literature. Contrastive learning \cite{simclr_20,moco_cvpr20,byol_20,simsiam_21}, a paradigm of visual SSL \cite{jing2020self}, groups together embeddings obtained from augmentations of the same image, without making use of the manual annotations needed in supervised approaches. \section{Proposed Framework} Our proposed framework to identify color variants, as already introduced in Figure \ref{CV_pipeline}, consists of the stages: i) Object Detection, ii) Embedding Learning and iii) Clustering. As the original input image usually consists of a human model wearing secondary fashion products as well, we perform object detection (using any off-the-shelf object detector) to localise the primary fashion product of interest. The core component of our framework, i.e., the Embedding Learning model, can either be trained using a supervised method (when manual annotations are available), or a SSL method. The latter is useful when obtaining manual annotations is infeasible at large scale. We first discuss the supervised method, where we require a set of image triplets obtained from manually annotated examples (consisting of an anchor, positive, and negative, as shown in Figure \ref{CV_illustration}). The positive (p) is usually an image that contains a product that is a color variant of the product contained in the anchor (a) image. The negative (n) is an image that contains a product that is not a color variant of the products contained in the anchor and positive images. Let the embeddings for the triplet of images be denoted by $(\Vec{x}_a, \Vec{x}_p, \Vec{x}_n)$. These triplets are used to train a supervised triplet loss based deep neural network model \cite{schroff2015facenet,veit2017conditional}, the objective of which is to bring the embeddings $\Vec{x}_a$ and $\Vec{x}_p$ closer, while moving $\Vec{x}_n$ away.
This is achieved by minimizing the following: \begin{equation} \label{equation:tripletmarginrankingloss} \mathcal{L}_{triplet} = max(0, \lambda + \delta^2(\Vec{x}_a,\Vec{x}_p) - \delta^2(\Vec{x}_a,\Vec{x}_n) ), \end{equation} where $\delta^2(\Vec{x}_i,\Vec{x}_j)=\left \| \Vec{x}_i -\Vec{x}_j \right \|_2^2 $ denotes the squared Euclidean distance between the pair of examples $\Vec{x}_i$ and $\Vec{x}_j$, with $\left \| \Vec{x}_i \right \|_2^2$ being the squared $l_2$ norm of $\Vec{x}_i$, and $\lambda>0$ being a margin. An appropriate clustering algorithm could then be applied on the embeddings, such that an obtained cluster contains embeddings of images of products that are color variants of each other. However, the supervised model needs manual annotations, which may be infeasible to obtain for large real-world datasets (such as those present in our platform). Thus, we explore SSL to identify color variants. \subsection{SSL based Embedding Learning model} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{SSL_illus.png} \caption{Contrastive SSL based Embedding Model.} \label{SSL_illus} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{slice_motivation.pdf} \caption{Drawbacks of using random crop to form contrastive pairs in our use-case.} \label{slice_motivation} \end{figure} SSL seeks to learn visual embeddings without requiring manual annotations. Figure \ref{SSL_illus} illustrates the contrastive loss based paradigm of SSL, using the SOTA approach called MOCOv2 \cite{moco_cvpr20}. Given an image (Im 1), two augmentations (Im 1-Aug 1 and Im 1-Aug 2) are obtained, which are passed through the Embedding Learning model that is maintained as two branches: the Query Encoder and the Key Encoder, with parameters $\theta_q$ and $\theta_k$ respectively. It should be noted that the Key Encoder is simply a copy of the Query Encoder, and is maintained as a \textit{momentum encoder} obtained using an Exponential Moving Average (EMA) of the latter. The respective embeddings obtained for Im 1-Aug 1 and Im 1-Aug 2 are treated as an anchor-positive pair. But, as we do not have manual annotations, to obtain the negative embedding (for avoiding a model collapse if just the anchor and positive are pulled together), one simply passes an augmentation (Im 2-Aug 2) of another distinct image Im 2 (from within a mini-batch) through the key encoder. Additionally, to facilitate comparison with a large pool of negatives, another practice is to store the mini-batch embeddings obtained from the key encoder in a separate memory module maintained as a queue. The final embeddings of inference images are obtained by passing them through the query encoder, which is the only branch updated via backpropagation (the key branch is not). Other recent SOTA methods are built in a similar fashion as MOCOv2, with minor modifications. For instance, the BYOL \cite{byol_20} method does not make use of negatives (and hence no memory queue), but uses the EMA based update/momentum encoder. The SimSiam method \cite{simsiam_21} uses neither negatives nor a momentum encoder. All these SSL methods (MOCOv2, BYOL and SimSiam) employ random cropping in the data augmentation pipeline. However, when we already have object detection in our pipeline, the standard random crop step used in existing SSL methods may actually miss out on important regions of a fashion product, which might be crucial in identifying color variants (Figure \ref{slice_motivation}).
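For concreteness, the EMA update of the key encoder and the maintenance of the circular memory queue described above can be sketched in a few lines of PyTorch (a minimal illustrative sketch with our own helper names, not the exact MOCOv2 implementation): \begin{verbatim}
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # EMA update: the key encoder slowly tracks the query encoder.
    for q_p, k_p in zip(query_encoder.parameters(),
                        key_encoder.parameters()):
        k_p.data.mul_(m).add_(q_p.data, alpha=1.0 - m)

@torch.no_grad()
def enqueue(queue, ptr, keys):
    # queue: (dim, K) tensor of past key embeddings; keys: (B, dim).
    B, K = keys.shape[0], queue.shape[1]
    queue[:, ptr:ptr + B] = keys.t()  # assumes K is a multiple of B
    return (ptr + B) % K              # advance the circular pointer
\end{verbatim}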
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{PBCNet_illus.pdf} \caption{Illustration of our slicing based approach.} \label{PBCNet_illus} \end{figure} For this reason, we rather choose to propose a novel SSL variation that considers multiple slices/patches (left, right, top and bottom) of the primary fashion object (after object detection), and simultaneously obtains embeddings for each of them. The final sum-pooled embedding\footnote{Rather than concatenating or averaging, we simply add them together to maintain simplicity of the model.} is then used to optimize a SSL based contrastive loss (Figure \ref{PBCNet_illus}). We make use of negative pairs in our method because we found the performance of methods that do not make use of negative pairs (e.g., SimSiam \cite{simsiam_21}, BYOL \cite{byol_20}) to be sub-optimal in our use-case. We also found merits of the momentum encoder and memory queue in our use-case, and thus include them in our method. Following is the Normalized Temperature-scaled cross entropy (NT-Xent) loss \cite{simclr_20} based objective of our method: \begin{equation} \label{NTXent} \mathcal{L}_{\Vec{q}} = - \textrm{log} \frac{ \textrm{exp}( {\Vec{q} \cdot \Vec{k}_q} / \tau ) }{ \sum_{i=0}^K \textrm{exp}( {\Vec{q} \cdot \Vec{k}_i} / \tau ) } \end{equation} Here, $\Vec{q}=\sum_v \Vec{q}^{(v)}$ and $\Vec{k}_i=\sum_v \Vec{k}_i^{(v)}$ for all $i$. In (\ref{NTXent}), $\Vec{q}$ and $\Vec{k}_i$ respectively denote the \textit{final} embeddings obtained for a query and a key, which are obtained by adding the embeddings across all the views, as denoted by the superscript $v$ in $\Vec{q}^{(v)}$ and $\Vec{k}_i^{(v)}$. Also, $\Vec{k}_q$ represents the positive key corresponding to a query $\Vec{q}$, $\tau$ denotes the temperature parameter, and $\textrm{exp}()$ and $\textrm{log}()$ respectively denote the exponential and logarithmic functions. $K$ in (\ref{NTXent}) denotes the size of the memory queue. We call our method \textbf{Patch-Based Contrastive Net (PBCNet)}. We conjecture that considering multiple patches i) does not leave things to chance (as with random crops), and provides a deterministic approach to obtain embeddings, and ii) enables us to borrow more information (from the other patches) to make a better decision on grouping a pair of similar embeddings. Our conjecture is supported not only by evidence in the literature of improved discriminative performance from considering multiple fine patches of images \cite{wang2018learning}, but also by our experimental results. \section{Experiments} We evaluated the discussed methods on a large, challenging internal collection of images on our platform (\url{www.myntra.com}). Here, we report our results on a collection of Kurtas images. We used the exact same set to train the supervised (with labeled training data, obtained by our in-house team) and self-supervised methods (without labeled training data) for a fair comparison. For inference, we use 6 dataset splits (based on brand, gender, MRP), referred to as Data 1-6. \textbf{Performance metrics used:} To evaluate the methods, we made use of the following performance metrics: i) Color Group Accuracy (\textbf{CGacc}), an internal metric with business relevance, which reflects the \textit{precision}, ii) Adjusted Rand Index (\textbf{ARI}), iii) Fowlkes-Mallows Score (\textbf{FMS}), and iv) Clustering Score (\textbf{CScore}), computed as: $CScore = \frac{2 \cdot ARI \cdot FMS}{ARI + FMS}$.
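For instance, given ground-truth and predicted cluster labels, CScore can be computed with scikit-learn as follows (an illustrative helper; the function name is ours): \begin{verbatim}
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score

def clustering_score(labels_true, labels_pred):
    # CScore: harmonic mean of ARI and FMS, as defined above.
    ari = adjusted_rand_score(labels_true, labels_pred)
    fms = fowlkes_mallows_score(labels_true, labels_pred)
    return 0.0 if (ari + fms) == 0 else 2 * ari * fms / (ari + fms)
\end{verbatim}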
While we use CGacc to compare the methods for all the datasets (\textit{Data 1-6}), the remaining metrics are reported only for \textit{Data 4-6}, where we obtain the ground-truth labels for evaluation. All the performance metrics take values in the range $[0,1]$, where a higher value indicates a better performance. \textbf{Methods Compared}: The focus of our experiments is to evaluate the overall performance by using different embedding learning methods (while fixing the object detector and the clustering algorithm). We compare the supervised triplet loss based model, and our proposed PBCNet method, against the following SOTA SSL methods: \textbf{SimSiam} \cite{simsiam_21}, \textbf{BYOL} \cite{byol_20}, and \textbf{MOCOv2} \cite{moco_cvpr20}. We implemented all the methods in PyTorch. For all the compared methods, we fix a ResNet34 \cite{ResNet} as the base encoder with $224 \times 224$ image resizing, and train all the models for a fixed number of 30 epochs, for a fair comparison. The number of epochs was fixed based on observations on the supervised model, to avoid overfitting. For the purpose of object detection, we made use of YOLOv2 \cite{yolov2}, and for the task of clustering the embeddings, we made use of the Agglomerative Clustering algorithm with Ward's method for merging. In all cases, the 512-dimensional embeddings used for clustering are obtained using the avgpool layer of the trained encoder. A margin of 0.2 has been used in the triplet loss for training the supervised model. \subsection{Systematic Study of SSL for our use-case} \begin{table}[!t] \centering \resizebox{0.9\columnwidth}{!}{% \begin{tabular}{|c|c|cc|cc|} \hline Dataset & Metric & \begin{tabular}[c]{@{}c@{}}SimSiam\_v0\\ (w/o \\ norm)\end{tabular} & SimSiam\_v0 & \begin{tabular}[c]{@{}c@{}}SimSiam\_v1\\ (w/o \\ norm)\end{tabular} & SimSiam\_v1 \\ \hline Data 1 & CGacc & 0.5 & 0.5 & \textbf{1} & 0.67 \\ \hline Data 2 & CGacc & 0.5 & \textbf{0.67} & 0 & \textbf{0.25} \\ \hline Data 3 & CGacc & 0 & \textbf{0.4} & 0 & \textbf{0.33} \\ \hline \end{tabular}% } \caption{Effect of Embedding Normalization.} \label{simsiam_normalization} \end{table} We now perform a systematic study of the typical aspects associated with SSL, especially for our particular task of color variants identification. For this purpose, we make use of a single Table \ref{results_all}, where we provide the comparison of various SSL methods, including ours. \textbf{Effect of Data augmentation on our task:} Firstly, to illustrate the effect of different data augmentations, we consider two variants of SimSiam (chosen for its simplicity and strength): i) SimSiam\_v0: a version of SimSiam where we used the entire original image as the query, and a color-jittered image as the positive, with a batch size of 12, and ii) SimSiam\_v1: SimSiam with standard SSL \cite{simclr_20} augmentations (\texttt{ColorJitter}, \texttt{RandomGrayscale}, \texttt{RandomHorizontalFlip}, \texttt{GaussianBlur} and \texttt{RandomResizedCrop}), and a batch size of 12. In all cases, the following architecture has been used for SimSiam: \texttt{Encoder}\{ ResNet34 $\rightarrow$ (avgpool) $\rightarrow$ \texttt{ProjectorMLP}(512$\rightarrow$4096$\rightarrow$123) \}$\rightarrow$\texttt{PredictorMLP}(123$\rightarrow$4096$\rightarrow$123). From Table \ref{results_all}, we observed that considering standard augmentations with random crops leads to a better performance than that of SimSiam\_v0.
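As a reference, the standard augmentation pipeline of SimSiam\_v1 can be assembled in torchvision roughly as follows (a sketch with illustrative parameter values, which may differ from the exact ones used in our experiments): \begin{verbatim}
from torchvision import transforms

ssl_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomApply(
        [transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])
\end{verbatim}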
Thus, for all other self-supervised baselines, i.e., BYOL and MOCOv2, we make use of standard augmentations with random crops. However, we later show that our proposed way of considering multiple patches in PBCNet leads to a better performance. \textbf{Effect of $l_2$ normalization for our task:} Table \ref{simsiam_normalization} shows the effect of performing $l_2$ normalization on the embeddings obtained using SimSiam. We found that without using any normalization, in some cases (e.g., Data 2-3) there are no true color variant groups among the detected clusters (i.e., zero precision), and hence the performance metrics become zero. Thus, for all our later experiments, we make use of $l_2$ normalization on the embeddings as a de facto standard for all the methods. \begin{table*}[t] \centering \resizebox{0.7\linewidth}{!}{% \begin{tabular}{|c|c|c|ccccc|c|} \hline \multicolumn{2}{|c|}{} & {\color[HTML]{3531FF} \textbf{Supervised}} & \multicolumn{6}{c|}{Self-Supervised} \\ \hline Dataset & Metric & {\color[HTML]{3531FF} \textbf{Triplet Net}} & SimSiam\_v0 & SimSiam\_v1 & SimSiam\_v2 & BYOL & MOCOv2 & \textbf{PBCNet (Ours)} \\ \hline Data 1 & CGacc & {\color[HTML]{3531FF} \textbf{0.67}} & 0.5 & 0.67 & 1 & 0.5 & 1 & \textbf{1} \\ \hline Data 2 & CGacc & {\color[HTML]{3531FF} \textbf{1}} & 0.67 & 0.25 & 1 & 0.75 & \textbf{0.8} & 0.75 \\ \hline Data 3 & CGacc & {\color[HTML]{3531FF} \textbf{0.75}} & 0.4 & 0.33 & 0 & 0.5 & 0.5 & \textbf{0.6} \\ \hline & CGacc & {\color[HTML]{3531FF} \textbf{0.67}} & 0.4 & 0.5 & 0.5 & 0.5 & \textbf{1} & 0.85 \\ & ARI & {\color[HTML]{3531FF} \textbf{0.69}} & 0.09 & 0.15 & 0.12 & 0.27 & 0.66 & \textbf{0.75} \\ & FMS & {\color[HTML]{3531FF} \textbf{0.71}} & 0.15 & 0.22 & 0.20 & 0.30 & 0.71 & \textbf{0.76} \\ \multirow{-4}{*}{Data 4} & CScore & {\color[HTML]{3531FF} \textbf{0.700}} & 0.110 & 0.182 & 0.152 & 0.281 & 0.680 & \textbf{0.756} \\ \hline & CGacc & {\color[HTML]{3531FF} \textbf{1}} & 0 & 0.5 & 0.33 & 0.5 & 1 & \textbf{1} \\ & ARI & {\color[HTML]{3531FF} \textbf{1}} & 0 & 0.09 & 0.28 & 0.64 & 1 & \textbf{1} \\ & FMS & {\color[HTML]{3531FF} \textbf{1}} & 0.22 & 0.30 & 0.45 & 0.71 & 1 & \textbf{1} \\ \multirow{-4}{*}{Data 5} & CScore & {\color[HTML]{3531FF} \textbf{1}} & 0 & 0.135 & 0.341 & 0.674 & 1 & \textbf{1} \\ \hline & CGacc & {\color[HTML]{3531FF} \textbf{0.83}} & 0.5 & 0.5 & 0.8 & 0.6 & 1 & \textbf{1} \\ & ARI & {\color[HTML]{3531FF} \textbf{0.44}} & 0.07 & 0.06 & 0.04 & 0.20 & 0.58 & \textbf{0.79} \\ & FMS & {\color[HTML]{3531FF} \textbf{0.49}} & 0.12 & 0.17 & 0.13 & 0.24 & 0.64 & \textbf{0.80} \\ \multirow{-4}{*}{Data 6} & CScore & {\color[HTML]{3531FF} \textbf{0.466}} & 0.089 & 0.090 & 0.063 & 0.214 & 0.610 & \textbf{0.796} \\ \hline \end{tabular}% } \caption{Comparison of our proposed method against the supervised and state-of-the-art SSL baselines, across all the datasets.} \label{results_all} \end{table*} \textbf{Effect of Batch Size and Momentum Encoding in SSL for our task:} For studying the effect of batch size in SSL for our task, we introduce a third variant of SimSiam, i.e., SimSiam\_v2: this is essentially SimSiam\_v1 with a batch size of 128. We then consider SimSiam\_v1, SimSiam\_v2 and BYOL, where the first makes use of a batch size of 12, while the others make use of batch sizes of 128. We observed that a larger batch size usually leads to a better performance. This is observed from Table \ref{results_all}, by the higher values of the performance metrics in the columns for SimSiam\_v2 and BYOL (vs SimSiam\_v1).
Additionally, we noted that the momentum encoder used in BYOL provides a further boost in performance, as observed from its superior performance compared to SimSiam\_v2, which uses the same batch size. It should be noted that, except for the momentum encoder, the rest of the architecture and the augmentations used in BYOL are exactly the same as in SimSiam. We also observed that increasing the batch size in SimSiam does not drastically or consistently improve its performance, something its authors noticed as well \cite{simsiam_21}. \textbf{Effect of Memory Queue in SSL for our task:} We also inspect the effect of an extra memory module/queue used to facilitate comparisons with a large number of negative examples. In particular, we make use of the MOCOv2 method with the following settings: i) a queue size of 5k, ii) a temperature parameter of 0.05, iii) an MLP (512$\rightarrow$4096$\rightarrow$relu$\rightarrow$123) added after the avgpool layer of the ResNet34, iv) SGD for updating the query encoder, with a learning rate of 0.001, momentum of 0.9, and weight decay of $10^{-6}$, and v) a value of 0.999 for $m$ in the momentum update. Table \ref{results_all} (MOCOv2 and BYOL columns) shows that the former performs better. As BYOL does not use a memory module but MOCOv2 does, we conclude that a separate memory module significantly boosts the performance of SSL in our task. Motivated by our observations so far, we employ both momentum encoding and a memory module in our proposed PBCNet method. \subsection{Comparison of PBCNet against the state-of-the-art} In Table \ref{results_all}, we compare our proposed SSL method \textbf{PBCNet} against the SOTA SSL baselines and the supervised baseline across all the datasets. It should be noted that in Table \ref{results_all}, any performance gain for a specific method is due to the intrinsic nature of that method, and not a particular hyperparameter setting. This is because we report the \textit{best performance for each method} after adequate tuning of the clustering distance threshold and other parameters, rather than relying on default hyperparameters. The following are the configurations used in our \textbf{PBCNet} method: i) a memory module size of 5k, ii) a temperature parameter of 0.05, iii) the FC layer after the avgpool layer of the ResNet34 removed, iv) SGD for updating the query encoder, with a learning rate of 0.001, momentum of 0.9, and weight decay of $10^{-6}$, and v) a value of 0.999 for $m$ in the momentum update. We use a batch size of $32 (=128/4)$, as we have to store tensors for each of the 4 slices simultaneously for each mini-batch (we used a batch size of 128 for the other methods). For data augmentation, we first apply a color distortion in the following order: i) \texttt{ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)} with s=1, p=0.8, ii) \texttt{RandomGrayscale} with p=0.2, iii) \texttt{GaussianBlur((3, 3), (1.0, 2.0))} with p=0.5. After the color distortion, we apply our slicing technique. For the second image (positive/negative), we apply the same series of transformations.
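The following is a minimal sketch of this pipeline. The color-distortion order and the hyperparameters are taken from the text; the exact slice geometry (we assume the top/bottom and left/right halves discussed later) and the helper names are our own illustrative assumptions, not the exact training code.
\begin{verbatim}
# Sketch of the PBCNet color distortion, 4-way slicing, and
# MoCo-style momentum update (helper names are ours; the slice
# geometry is an assumption).
import torch
import torchvision.transforms as T

s = 1.0
color_distort = T.Compose([
    T.RandomApply([T.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)],
                  p=0.8),
    T.RandomGrayscale(p=0.2),
    T.RandomApply([T.GaussianBlur((3, 3), (1.0, 2.0))], p=0.5),
])

def four_slices(img):
    # Deterministic slicing of a CHW tensor into top/bottom and
    # left/right halves; a batch of 32 images yields 4 x 32 slices.
    _, h, w = img.shape
    return [img[:, : h // 2, :], img[:, h // 2 :, :],
            img[:, :, : w // 2], img[:, :, w // 2 :]]

@torch.no_grad()
def momentum_update(query_enc, key_enc, m=0.999):
    # EMA update of the key (momentum) encoder, as in MoCo.
    for q, k in zip(query_enc.parameters(), key_enc.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1.0 - m)
\end{verbatim}
Note that with a batch size of 32, the four slices per image yield 128 slice tensors per mini-batch, matching the effective batch size of 128 used for the other methods.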
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Data4_quali.pdf} \caption{Qualitative comparison of color variants groups obtained using our PBCNet method (left column), MOCOv2 (middle column) and the supervised baseline (right column), on \textit{Data 4}.} \label{Data4_quali} \end{figure} From Table \ref{results_all}, it is clear that SimSiam, despite its strong claim of requiring neither negative pairs, nor a momentum encoder, nor large batches, performs poorly compared to the supervised baseline (shown in bold blue). The BYOL method, which also does not make use of negative pairs, performs better than SimSiam in our use case by virtue of its momentum encoder. Among all the compared SSL baselines, MOCOv2 performs the best, owing to its memory queue, which facilitates comparisons with a large number of negative examples. This shows that the importance of considering negative pairs still holds, especially for challenging use-cases like the one considered in this paper. However, our proposed self-supervised method \textbf{PBCNet} clearly outperforms all the baselines. The fact that it outperforms MOCOv2 can be attributed to the patch-based slicing, which is the only component in which our method differs from MOCOv2 (the latter uses random crops). Another interesting observation is that our method outperforms the baselines despite using a much smaller batch size of 32. In a way, we are able to extract and leverage more information by virtue of the slicing (by borrowing information from the other patches simultaneously), even with smaller batches. We also noticed that the supervised baseline performs quite well in our task, even without a data augmentation pipeline of the kind used in the SSL methods. However, by virtue of the large memory queue and data augmentations like color jitter, which are highly relevant to the task of color variants identification, stronger SSL methods like MOCOv2 and PBCNet are in fact capable of achieving performance comparable to the supervised baseline. Having said that, if we do not have adequate labeled data in the first place, we cannot use supervised learning at all. Hence, we did not focus on adding the data augmentations and slicing strategy to the supervised model, because the motivation for our approach is to address the lack of labeled data, not to improve the performance of supervised learning (which in any case depends on labels). \textbf{Effect of Clustering:} In Table \ref{effect_cluster}, we report the performances obtained by varying the clustering algorithm used to group the embeddings obtained by the different SSL methods, on \textit{Data 4-6}. We picked the Agglomerative, DBSCAN and Affinity Propagation clustering techniques, which do not require the number of clusters as an input parameter (a quantity that is difficult to obtain in our use-case). In general, we observed that the Agglomerative clustering technique leads to better performance in our use-case.
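For reference, this clustering stage and the clustering metrics can be reproduced with scikit-learn as sketched below, operating on the $l_2$-normalized 512-dimensional embeddings. The file names, the ground-truth labels, and the thresholds shown are hypothetical placeholders to be tuned per method, as noted above.
\begin{verbatim}
# Sketch of the clustering stage on l2-normalized embeddings
# ("embeddings.npy", "labels.npy", and the thresholds are
# hypothetical placeholders).
import numpy as np
from sklearn.cluster import (AgglomerativeClustering, DBSCAN,
                             AffinityPropagation)
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score
from sklearn.preprocessing import normalize

emb = normalize(np.load("embeddings.npy"))   # (n_objects, 512)
true_labels = np.load("labels.npy")          # available for Data 4-6 only

# None of these require the number of clusters up front.
agglo = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0,
                                linkage="ward").fit_predict(emb)
dbscan = DBSCAN(eps=0.5, min_samples=2).fit_predict(emb)
affinity = AffinityPropagation().fit_predict(emb)

print("ARI:", adjusted_rand_score(true_labels, agglo))
print("FMS:", fowlkes_mallows_score(true_labels, agglo))
\end{verbatim}
In practice, the distance threshold (or \texttt{eps} for DBSCAN) is tuned separately for each embedding method, which is how the best-per-method numbers reported above were obtained.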
Also, for a fixed clustering approach, using the embeddings obtained by our PBCNet method usually leads to better performance. \begin{table}[t] \centering \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{|c|c|cc|cc|cc|} \hline \multicolumn{2}{|c|}{Dataset} & \multicolumn{2}{c|}{\textbf{Data 4}} & \multicolumn{2}{c|}{\textbf{Data 5}} & \multicolumn{2}{c|}{\textbf{Data 6}} \\ \hline Method & Clustering & \textbf{ARI} & \textbf{FMS} & \textbf{ARI} & \textbf{FMS} & \textbf{ARI} & \textbf{FMS} \\ \hline \textbf{PBCNet} & Agglo & \textbf{0.75} & \textbf{0.76} & \textbf{1.00} & \textbf{1.00} & \textbf{0.79} & \textbf{0.80} \\ & DBSCAN & 0.66 & 0.71 & 1.00 & 1.00 & 0.66 & 0.71 \\ & Affinity & 0.30 & 0.42 & 0.22 & 0.41 & 0.24 & 0.37 \\ \hline \textbf{MOCOv2} & Agglo & \textbf{0.66} & \textbf{0.71} & \textbf{1.00} & \textbf{1.00} & \textbf{0.58} & \textbf{0.64} \\ & DBSCAN & 0.66 & 0.71 & 1.00 & 1.00 & 0.37 & 0.40 \\ & Affinity & 0.20 & 0.32 & 0.04 & 0.26 & 0.25 & 0.41 \\ \hline \textbf{BYOL} & Agglo & \textbf{0.27} & \textbf{0.30} & \textbf{0.64} & \textbf{0.71} & \textbf{0.20} & \textbf{0.24} \\ & DBSCAN & 0.17 & 0.28 & 0.64 & 0.71 & 0.02 & 0.24 \\ & Affinity & 0.03 & 0.14 & 0.28 & 0.45 & 0.01 & 0.11 \\ \hline \end{tabular}% } \caption{Effect of the clustering technique used.} \label{effect_cluster} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{PBCNet_vs_MOCOv2_quali.pdf} \caption{A few groups obtained on \textit{Data 2 \& 3} using MOCOv2 have false positives (shown in red boxes), while our PBCNet method does not yield such groups.} \label{PBCNet_vs_MOCOv2_quali} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{slicing_tradeoff.pdf} \caption{Trade-off between vertical and horizontal slicing.} \label{slicing_tradeoff} \end{figure} \begin{table}[t] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|ccc|ccc|ccc|} \hline Dataset & Data 1 & Data 2 & Data 3 & \multicolumn{3}{c|}{Data 4} & \multicolumn{3}{c|}{Data 5} & \multicolumn{3}{c|}{Data 6} \\ \hline Method & CGacc & CGacc & CGacc & CGacc & ARI & FMS & CGacc & ARI & FMS & CGacc & ARI & FMS \\ \hline PBCNet-horiz & 1 & \textbf{1} & 0.5 & 0.66 & 0.65 & 0.67 & 1 & 1 & 1 & 1 & \textbf{0.81} & \textbf{0.82} \\ PBCNet-vert & 1 & 0.6 & 0.6 & \textbf{1} & \textbf{0.88} & \textbf{0.89} & 1 & 1 & 1 & 0.83 & 0.48 & 0.50 \\ \hline \textbf{PBCNet} & 1 & 0.75 & \textbf{0.6} & 0.85 & 0.75 & 0.76 & 1 & 1 & 1 & 1 & 0.79 & 0.80 \\ \hline \end{tabular}% } \caption{Effect of Slicing on PBCNet} \label{effect_slicing} \end{table} \textbf{Qualitative results:} Sample qualitative comparisons of the color variants groups obtained on \textit{Data 4} using our PBCNet method, MOCOv2 and the supervised baseline are provided in Figure \ref{Data4_quali}. Each row in the column corresponding to a method represents a detected color variants cluster for that method. A row is marked with a red box if the entire cluster contains images that are not color variants of each other. A single image is marked with a red box if it is the only incorrect image, while the rest of the images are color variants. We observed that our method not only detects clusters with higher precision (as MOCOv2 does as well), but also has a higher recall, comparable to that of the supervised method. We also use a blue box to show a color group detected by our method that contains images which are indeed color variants, but are difficult to identify at first glance, even for humans.
Additionally, Figure \ref{PBCNet_vs_MOCOv2_quali} shows a few color groups identified in the datasets \textit{Data 2 \& 3} using MOCOv2 and our PBCNet. We observed that MOCOv2 detected groups with false positives, while our PBCNet method did not. This could happen because a random crop obtained by MOCOv2 need not necessarily come from a \textit{distinctive} region of an apparel that helps to identify its color variant (e.g., in Figure \ref{PBCNet_vs_MOCOv2_quali}, the bent line-like pattern separating the colored and black regions of the apparels in Data 2, and the diamond-like shape in the apparels of Data 3). A random crop might cover such a \textit{distinctive} region if the crop size is made larger, but that still leaves things to random chance. On the other hand, our slicing technique, being deterministic in nature, \textit{guarantees} that all the regions of an object \textit{would} be captured. We would also like to mention that our slicing approach is agnostic to the fashion apparel type, i.e., it is equally applicable to any fashion article type (Tops, Shirts, Shoes, Trousers, Track pants, Sweatshirts, etc.). In fact, this is how a human identifies color variants as well, by looking at the article along both horizontal and vertical directions to identify distinctive patterns. Even humans cannot identify an object if vision is restricted to only a particular small crop. \textbf{Effect of the slicing}: We also study two variants of our PBCNet method: i) PBCNet-horiz (computing an embedding using only the top and bottom slices), and ii) PBCNet-vert (computing an embedding using only the left and right slices). The results are shown in Table \ref{effect_slicing}. On \textit{Data 4}, PBCNet-vert performs better than PBCNet-horiz, while on \textit{Data 6}, PBCNet-horiz performs better than PBCNet-vert (significantly). The performance of the two versions is also illustrated in Figure \ref{slicing_tradeoff}. We observed that a single slicing direction does not work in all scenarios, especially for apparels. Although the horizontal slicing is quite competitive, it may be beneficial to consider the vertical slices as well. This is observed from the drop in performance of PBCNet-horiz on \textit{Data 3-4} (vs PBCNet). This is because some garments may contain distinguishing patterns that are better interpreted only by viewing vertically, for example, printed text (say, \textit{adidas} written vertically), floral patterns, etc. In such cases, simply considering horizontal slices may actually split/disrupt the vertical information. It may also happen that mixing the two slicing directions introduces some form of redundancy, as observed from the occasional drop in the performance of PBCNet compared to PBCNet-horiz (on \textit{Data 6}) and PBCNet-vert (on \textit{Data 4}). However, on average, PBCNet delivers consistent and competitive performance, while avoiding drastically fluctuating improvements or failures. We suggest considering both directions of slicing, so that they can collectively represent all the necessary and distinguishing patterns; if one slicing direction misses some important information, the other can compensate for it. \section{Conclusions} In this paper, a generic visual Representation Learning based framework to identify color variants in fashion e-commerce has been studied (to the best of our knowledge, for the first time).
A systematic study of a supervised method (with manual annotations), as well as existing SOTA SSL methods, to train the embedding learning component of our framework has been conducted. A novel contrastive loss based SSL method that focuses on parts of an object to identify color variants has also been proposed. \section{Acknowledgement} We would like to thank Dr Ravindra Babu Tallamraju for his support and feedback, and for being a source of inspiration and encouragement throughout the project.
\section{Introduction} Solar prominences or filaments are cool and dense plasmas embedded in the million-Kelvin corona \citep{mac10}. The plasmas originate from the direct injection of chromospheric materials into a preexisting filament channel, levitation of chromospheric mass into the corona, or condensation of hot plasmas from the chromospheric evaporation due to the thermal instability \citep{xia11,xia12,rony14,zhou14}. Prominences are generally believed to be supported by the magnetic tension force of the dips in sheared arcades \citep{guo10b,ter15} or twisted magnetic flux ropes \citep[MFRs;][]{su12,sun12a,zhang12a,cx12,cx14a,xia14a,xia14b}. They can remain stable for several weeks or even months, but may become unstable after being disturbed. Large-amplitude and long-term filament oscillations before eruption have been observed by spaceborne telescopes \citep{chen08,li12,zhang12b,bi14,shen14} and reproduced by numerical simulations \citep{zhang13}, which makes filament oscillation another precursor for coronal mass ejections \citep[CMEs;][]{chen11} and the accompanying flares. When the twist of a flux rope supporting a filament exceeds the threshold value (2.5$\pi$$-$3.5$\pi$), it will also become unstable and erupt due to the ideal kink instability \citep[KI;][]{hood81,kli04,tor04,tor10,fan05,sri10,asch11,kur12}. However, whether the eruption of the kink-unstable flux rope becomes failed or ejective depends on how fast the overlying magnetic field declines with height \citep{tor05,liu08a,kur10}. When the decay rate of the background field exceeds a critical value, the flux rope will lose equilibrium and erupt via the so-called torus instability \citep[TI;][]{kli06,jiang14,ama14}. On the other hand, if the confinement from the background field is strong enough, the filament will decelerate, reach a maximum height, and fall back to the solar surface, which means the eruption is failed \citep{ji03,liu09,guo10a,kur11,song14,jos13,jos14}. In addition to the successful and failed eruptions, there are partial filament eruptions \citep{gil07,liu07}. After examining 54 H$\alpha$ prominence activities, \citet{gil00} found that a majority of the eruptive prominences show separation of escaping material from the bulk of the prominence; the latter initially lifted away from and then fell back to the solar surface. To explain the partial filament eruptions, the authors proposed a cartoon model in which magnetic reconnection occurs inside an inverse-polarity flux rope, leading to the separation of the escaping portion of the prominence and the formation of a second X-type neutral line in the upper portion of the prominence. The inner splitting and subsequent partial prominence eruption was also observed by \citet{shen12}. \citet{gil01} interpreted an active prominence with the process of vertical reconnection between an inverse-polarity flux rope and an underlying magnetic arcade. \citet{liu08b} reported a partial filament eruption characterized by a quasi-static, slow phase and a rapid kinking phase showing a bifurcation of the filament. The separation of the filament, the extreme-ultraviolet (EUV) brightening at the separation location, and the surviving sigmoidal structure provide convincing evidence that magnetic reconnection occurs within the body of the filament \citep{tri13}. \citet{gib06a,gib06b} carried out three-dimensional (3D) numerical simulations to model the partial expulsion of an MFR.
After multiple reconnections at current sheets that form during the eruption, the rope breaks into an upper, escaping rope and a lower, surviving rope. The ``partially-expelled flux rope'' (PEFR) model has been justified observationally \citep{tri09}. \citet{tri06} observed a distinct coronal downflow following a curved path at a speed of $<$150 km s$^{-1}$ during a CME-associated prominence eruption. Their observation provides support for the pinching off of the field lines drawn out by the erupting prominences and the contraction of the arcade formed by the reconnection. A similar multithermal downflow, at a speed of $\sim$380 km s$^{-1}$, starting at the cusp-shaped structures where magnetic reconnection occurred inside the erupting flux rope and led to its bifurcation, was reported by \citet{tri07}. \citet{liu12} studied a flare-associated partial eruption of a double-decker filament. \citet{cx14b} found that a stable double-decker MFR system existed for hours prior to the eruption on 2012 July 12. After entering the domain of instability, the high-lying MFR impulsively erupted to generate a fast CME and a \textit{GOES} X1.4 class flare, while the low-lying MFR remained behind and continuously maintained the sigmoidicity of the active region (AR). From the previous literature, we can conclude that magnetic reconnection and the release of free energy are involved in most partial filament eruptions. However, the exact mechanism of partial eruptions, which is of great importance to understanding the origin of solar eruptions and forecasting space weather, remains unclear and controversial. In this paper, we report multiwavelength observations of a partial filament eruption and the associated CME and M6.7 flare in NOAA AR 11283 on 2011 September 8. The AR emerged from the eastern solar limb on 2011 August 30 and lasted for 14 days. Owing to its extreme complexity, it produced several major flares and CMEs during its lifetime \citep{feng13,dai13,jiang14,liu14,li14,ruan14}. In Section~\ref{s-data}, we describe the data analysis using observations from the Big Bear Solar Observatory (BBSO), \textit{SOHO}, \textit{Solar Dynamics Observatory} (\textit{SDO}), \textit{Solar Terrestrial Relation Observatory} \citep[\textit{STEREO};][]{kai05}, \textit{GOES}, \textit{Reuven Ramaty High-Energy Solar Spectroscopic Imager} \citep[\textit{RHESSI};][]{lin02}, and \textit{WIND}. Results and discussions are presented in Section~\ref{s-result} and Section~\ref{s-disc}. Finally, we draw our conclusion in Section~\ref{s-sum}. \section{Instruments and data analysis} \label{s-data} \subsection{BBSO and \textit{SOHO} observations} \label{s-ha} On September 8, the dark filament residing in the AR was most clearly observed at the H$\alpha$ line center ($\sim$6563 {\AA}) by the ground-based telescope in BBSO. During 15:30$-$16:30 UT, the filament rose and split into two parts. The major part lifted away and returned to the solar surface, while the runaway part separated from and escaped the major part, resulting in a very faint CME recorded in the \textit{SOHO} Large Angle Spectroscopic Coronagraph \citep[LASCO;][]{bru95} CME catalog\footnote{http://cdaw.gsfc.nasa.gov/CME\_list/}. The white light (WL) images observed by LASCO/C2 with a field-of-view (FOV) of 2$-$6 solar radii ($R_{\sun}$) were calibrated using \textit{c2\_calibrate.pro} in the \textit{Solar Software} (\textit{SSW}).
\subsection{\textit{SDO} observations} \label{s-euv} The partial filament eruption was clearly observed by the Atmospheric Imaging Assembly \citep[AIA;][]{lem12} aboard \textit{SDO} at high cadence and high spatial resolution. There are seven EUV filters (94, 131, 171, 193, 211, 304, and 335 {\AA}) and two UV filters (1600 {\AA} and 1700 {\AA}) aboard AIA to achieve a wide temperature coverage ($4.5\le \log T \le7.5$). The AIA level\_1 FITS data were calibrated using the standard program \textit{aia\_prep.pro}. The images observed in different wavelengths were carefully coaligned using the cross-correlation method. To investigate the 3D magnetic configurations before and after the eruption, we employed the line-of-sight (LOS) and vector magnetograms from the Helioseismic and Magnetic Imager \citep[HMI;][]{sch12} aboard \textit{SDO}. The 180$^{\circ}$ ambiguity of the transverse field was removed by assuming that the field changes smoothly at the photosphere \citep{guo13}. We also performed magnetic potential field and non-linear force free field (NLFFF) extrapolations using the optimization method as proposed by \citet{wht00} and as implemented by \citet{wig04}. The FOV for extrapolation was 558$\farcs$5$\times$466$\farcs$2 to cover the whole AR and ensure that the magnetic flux was balanced, and the data were binned by 2$\times$2 so that the resolution became 2$\arcsec$. \subsection{\textit{STEREO} and \textit{WIND} observations} The eruption was also captured from different perspectives by the Extreme-Ultraviolet Imager (EUVI) and the COR1\footnote{http://cor1.gsfc.nasa.gov/catalog/cme/2011/} coronagraph of the Sun Earth Connection Coronal and Heliospheric Investigation \citep[SECCHI;][]{how08} instrument aboard the ahead satellite (\textit{STA} hereafter) and behind satellite (\textit{STB} hereafter) of \textit{STEREO}. COR1 has a smaller FOV of 1.3$-$4.0 $R_{\sun}$ compared with LASCO/C2, which is favorable for detecting the early propagation of CMEs. On September 8, the twin satellites (\textit{STA} and \textit{STB}) had separation angles of 103$^{\circ}$ and 95$^{\circ}$ with the Earth. The presence of open magnetic field lines within the AR was confirmed indirectly by the evidence of a type \Rmnum{3} burst in the radio dynamic spectra. The spectra were obtained by S/WAVES \citep{bou08} on board \textit{STEREO} and the WAVES instrument \citep{bou95} on board the \textit{WIND} spacecraft. The frequency of S/WAVES ranges from 2.5 kHz to 16.025 MHz. WAVES has two radio detectors: RAD1 (0.02$-$1.04 MHz) and RAD2 (1.075$-$13.825 MHz). \subsection{\textit{GOES} and \textit{RHESSI} observations} \label{s-xray} The accompanying M6.7 flare was clearly identified in the \textit{GOES} soft X-ray (SXR) light curves in 0.5$-$4.0 {\AA} and 1$-$8 {\AA}. To determine where the accelerated nonthermal particles precipitated, we also made hard X-ray (HXR) images and light curves at different energy bands (3$-$6, 6$-$12, 12$-$25, 25$-$50, and 50$-$100 keV) using the observations of \textit{RHESSI}. The HXR images were generated using the CLEAN method with an integration time of 10 s. The observing parameters are summarized in Table~\ref{tbl-1}. \section{Results} \label{s-result} Figure~\ref{fig1} shows eight snapshots of the H$\alpha$ images to illustrate the whole evolution of the filament (see also the online movie Animation1.mpg). Figure~\ref{fig1}(a) displays the H$\alpha$ image at 15:30:54 UT before the eruption.
It is overlaid with the contours of the LOS magnetic field, where green (blue) lines stand for positive (negative) polarities. The dark filament, which is $\sim$39 Mm long, resides along the polarity inversion line (PIL). The top panels of Figure~\ref{fig2} show the top view of the 3D magnetic configuration above the AR at the beginning of and after the eruption, with the LOS magnetograms located at the bottom boundary. Using the same method described in \citet{zhang12c}, we found a magnetic null point and the corresponding spine and separatrix surface. The normal magnetic field lines are denoted by green lines. The magnetic field lines around the outer/inner spine and the separatrix surface (or arcade) are represented by red/blue lines. Beneath the null point, the sheared arcades supporting the filament are represented by orange lines. The spine is rooted in the positive polarity (P1) that is surrounded by the negative polarities (N1 and PB). It extends in the northeast direction and connects the null point with a remote place on the solar surface. Such a magnetic configuration is quite similar to those reported by \citet{sun12b}, \citet{jiang13}, and \citet{man14}. As time went on, the filament rose and expanded slowly (Figure~\ref{fig1}(b)). The initiation process is clearly revealed by the AIA 304 {\AA} observation (see the online movie Animation2.mpg). Figure~\ref{fig3} shows eight snapshots of the 304 {\AA} images. Initial brightenings (IB1, IB2, and IB3) appeared near the ends and center of the sigmoidal filament, implying that magnetic reconnection took place and the filament became unstable (Figure~\ref{fig3}(b)-(d)). Such initial brightenings were evident in all the EUV wavelengths. With the intensities of the brightenings increasing, the dark filament rose and expanded slowly, squeezing the overlying arcade field lines. Null-point magnetic reconnection might have been triggered when the filament reached the initial height of the null point ($\sim$15 Mm), leading to impulsive brightenings in the H$\alpha$ (Figure~\ref{fig1}(c)-(d)) and EUV (Figure~\ref{fig3}(e)-(h)) wavelengths and increases in the SXR and HXR fluxes (Figure~\ref{fig4}). The M6.7 flare entered the impulsive phase. The bright and compact flare kernel pointed to by the white arrow in Figure~\ref{fig1}(c) extended first westward and then northward, forming a quasi-circular ribbon at $\sim$15:42 UT (Figure~\ref{fig1}(d)), with the intensity contours of the HXR emissions at 12$-$25 keV superposed. There was only one HXR source associated with the flare, and the source was located along the flare ribbon with the strongest H$\alpha$ emission, which is compatible with the fact that the footpoint HXR emissions come from the nonthermal bremsstrahlung of the accelerated high-energy electrons after they penetrate into the chromosphere. The flare emission appeared not only around the filament but also at the point-like brightening (PB hereafter) and the V-shape ribbon to the left of the quasi-circular ribbon. Since the separatrix surface intersects with the photosphere at PB to the north and the outer spine intersects with the photosphere to the east (Figure~\ref{fig2}(a)), it is believed that nonthermal electrons accelerated by the null-point magnetic reconnection penetrated into the lower atmosphere not only at the quasi-circular ribbon, but also at PB and the V-shape ribbon. Figure~\ref{fig4} shows the SXR (black solid and dashed lines) and HXR (colored solid lines) light curves of the flare.
The SXR fluxes started to rise rapidly at $\sim$15:32 UT and peaked at 15:45:53 UT for 1$-$8 {\AA} and 15:44:21 UT for 0.5$-$4.0 {\AA}. The HXR fluxes below 25 keV varied smoothly like the SXR fluxes, except for earlier peak times at $\sim$15:43:10 UT. The HXR fluxes above 25 keV, however, experienced two small peaks, which imply precursor release of magnetic energy and particle acceleration, at $\sim$15:38:36 UT and $\sim$15:41:24 UT, and a major peak at $\sim$15:43:10 UT. The time delay between the SXR and HXR peak times implies a possible Neupert effect for this event \citep{ning10}. The main phase of the flare lasted until $\sim$17:00 UT, indicating that the flare was a long-duration event. During the flare, the filament continued to rise and split into two branches at the eastern leg around 15:46 UT (Figure~\ref{fig1}(e)), the right of which is thicker and darker than the left one. Such a process is most clearly revealed by the AIA 335 {\AA} observation (see the online movie Animation3.mpg). Figure~\ref{fig5} displays eight snapshots of the 335 {\AA} images. It is seen that the dark filament broadened from $\sim$15:42:30 UT and completely split into two branches around 15:45:51 UT. We define the left and right branches as the runaway part and the major part of the filament, respectively. The two interwinding parts also underwent rotation (panels (d)-(h)). Meanwhile, the plasma of the runaway part moved in the northwest direction and escaped. To illustrate the rotation, we derived the time-slice diagrams of the two slices (S4 and S5 in panel (f)) that are plotted in Figure~\ref{fig6}. The upper (lower) panels represent the diagrams of S4 (S5), and the left (right) panels represent the diagrams for 211 {\AA} (335 {\AA}). $s=0$ in the diagrams stands for the southwest endpoints of the slices. The filament began to split into two parts around 15:42:30 UT, with the runaway part rotating around the eastern leg of the major part for $\sim$1 turn until $\sim$15:55 UT. During the eruption, the runaway branch of the filament disappeared (Figure~\ref{fig1}(f)). The major part, however, fell back to the solar surface after reaching its maximum height around 15:51 UT, suggesting that the eruption of the major part of the filament failed. The remaining filament after the flare was evident in the H$\alpha$ image (Figure~\ref{fig1}(h)). NLFFF modelling shows that the magnetic topology remained analogous to that before the flare, with the height of the null point slightly increased by 0.4 Mm (Figure~\ref{fig2}(b)). Figure~\ref{fig7} shows six snapshots of the 171 {\AA} images. The rising and expanding filament triggered the M-class flare and the kink-mode oscillation of the adjacent large-scale coronal loops within the same AR (see the online movie Animation4.mpg). With the filament increasing in height, part of its material was ejected in the northwest direction along the path represented by ``S1'' in panel (c). After reaching the maximum height at $\sim$15:51:12 UT, the major part of the filament returned to the solar surface. The bright cusp-like post-flare loops (PFLs) in the main phase of the flare are clearly observed in all the EUV filters (see also Figure~\ref{fig7}(f)). To illustrate the eruption and loop oscillation more clearly, we extracted four slices. The first slice, S0 in Figure~\ref{fig1}(f) and Figure~\ref{fig7}(d), is 170 Mm in length. It starts from the flare site and passes through the apex of the major part of the filament. The time-slice diagram of S0 in H$\alpha$ is displayed in Figure~\ref{fig8}(a).
The filament started to rise rapidly at $\sim$15:34:30 UT with a constant speed of $\sim$117 km s$^{-1}$. After reaching the peak height ($z_{max}$) of $\sim$115 Mm at $\sim$15:51 UT, it fell back to the solar surface in a bumpy way until $\sim$16:30 UT. Using a linear fitting, we derived the average falling speed ($\sim$22 km s$^{-1}$) of the filament in the H$\alpha$ wavelength. The time-slice diagrams of S0 in the UV and EUV passbands are presented in Figure~\ref{fig9}. We selected two relatively hot filters (335 {\AA} and 211 {\AA} in the top panels), two warm filters (171 {\AA} and 304 {\AA} in the middle panels), and two cool filters (1600 {\AA} and 1700 {\AA} in the bottom panels). Similar to the time-slice diagram in H$\alpha$ (Figure~\ref{fig8}(a)), the filament rose at apparent speeds of 92$-$151 km s$^{-1}$ before falling back in an oscillatory way at speeds of 34$-$46 km s$^{-1}$ during 15:51$-$16:10 UT and $\sim$71 km s$^{-1}$ during 16:18$-$16:30 UT (Figure~\ref{fig9}(a)-(d)). The falling speeds in the UV wavelengths are $\sim$78 km s$^{-1}$ during 15:51$-$16:10 UT (Figure~\ref{fig9}(e)-(f)). The times when the major part of the filament reached its maximum height in the UV and EUV passbands, $\sim$15:51 UT, are consistent with that in H$\alpha$. The later falling phase during 16:18$-$16:30 UT is most obvious in the warm filters. A downflow of a surviving filament in an oscillatory way was also observed during a sympathetic filament eruption \citep{shen12}. Owing to the lower time cadence of BBSO compared with AIA, the escaping process of the runaway part of the filament in Figure~\ref{fig1}(e)-(f) was traced using AIA. We extracted another slice, S1, which is 177 Mm in length along the direction of ejection (Figure~\ref{fig7}(c)). $s=0$ Mm and $s=177$ Mm represent the southeast and northwest endpoints of the slice. The time-slice diagram of S1 in 171 {\AA} is displayed in Figure~\ref{fig8}(b). Contrary to the major part, the runaway part of the filament escaped successfully from the corona at speeds of 125$-$255 km s$^{-1}$ without returning to the solar surface. The intermittent runaway process during 15:45$-$16:05 UT was clearly observed in most of the EUV filters. We extracted another slice, S2, which also starts from the flare site and passes through both parts of the filament (Figure~\ref{fig7}(d)). The time-slice diagram of S2 in 171 {\AA} is drawn in Figure~\ref{fig8}(c). As expected, the diagram features the bifurcation of the filament, as indicated by the white arrow, i.e., the runaway part escaped forever, while the major part moved on after the bifurcation and finally fell back. The eruption of the filament triggered a transverse kink oscillation of the adjacent coronal loops (OL in Figure~\ref{fig7}(a)). The direction of oscillation is perpendicular to the initial loop plane (see the online movie Animation4.mpg). We extracted another slice, S3, which is 80 Mm in length across the oscillating loops (Figure~\ref{fig7}(b)). $s=0$ Mm and $s=80$ Mm represent the northwest and southeast endpoints of the slice. The time-slice diagram of S3 in 171 {\AA} is shown in Figure~\ref{fig8}(d), where the oscillation pattern during 15:38$-$15:47 UT is clearly demonstrated. The OL moved away from the flare site during 15:38$-$15:41 UT before returning to the initial position and oscillating back and forth for $\sim$2 cycles.
By fitting the pattern with a sinusoidal function, as marked by the white dashed line, the resulting amplitude and period of the kink oscillation were $\sim$1.6 Mm and $\sim$225 s. We also extracted several slices across the OL and derived the time-slice diagrams, finding that the coronal loops oscillated in phase and the mode was fundamental. The initial velocity amplitude of the oscillation was $\sim$44.7 km s$^{-1}$. The propagation speed of the kink mode is $C_K=2L/P=\sqrt{2/(1+\rho_o/\rho_i)}v_A$, where $L$ is the loop length, $P$ is the period, $v_A$ is the Alfv\'{e}n wave speed, and $\rho_i$ and $\rho_o$ are the plasma densities inside and outside the loop \citep{nak99,nak01,whi12}. In Figure~\ref{fig7}(a), we denote the footpoints of the OL with black crosses, which are separated by 106.1 Mm. Assuming a semi-circular shape, the length of the loop is $L=166.7$ Mm, giving $C_{K}=1482$ km s$^{-1}$. Using the same value of $\rho_o/\rho_{i}=0.1$, we derived $v_{A}=1100$ km s$^{-1}$. In addition, we estimated the electron number density of the OL to be $\sim$2.5$\times10^{10}$ cm$^{-3}$ based on the results of the NLFFF extrapolation in Figure~\ref{fig2}(a). The kink-mode oscillation of the loops was best observed in 171 {\AA}, indicating that the temperature of the loops was $\sim$0.8 MK. The escaping part of the filament was also clearly observed by \textit{STA}/EUVI. Figure~\ref{fig10} shows six snapshots of the 304 {\AA} images, where the white arrows point to the escaping filament. During 15:46$-$16:30 UT, the material moved outwards in the northeast direction without returning to the solar surface. The bright M6.7 flare, indicated by the black arrows, is also quite clear. The runaway part of the filament resulted in a very faint CME observed by the WL coronagraphs. Figure~\ref{fig11}(a)-(d) show the running-difference images of \textit{STA}/COR1 during 16:00$-$16:15 UT. As pointed out by the arrows, the CME first appeared in the FOV of \textit{STA}/COR1 at $\sim$16:00 UT and propagated outwards at a nearly constant speed, with the contrast between the CME and the background decreasing with time. The propagation direction of the CME is consistent with that of the runaway filament in Figure~\ref{fig10}. Figure~\ref{fig11}(e)-(f) show the running-difference images of LASCO/C2 during 16:36$-$16:48 UT. The faint blob-like CME first appeared in the FOV of C2 at $\sim$16:36 UT and propagated in the same direction as the escaping filament observed by AIA in Figure~\ref{fig7}(c). The central position angle and angular width of the CME observed by C2 are 311$^{\circ}$ and 37$^{\circ}$. The linear velocity of the CME is $\sim$214 km s$^{-1}$. The time-height profiles of the runaway filament observed by \textit{STA}/EUVI (\textit{boxes}) and the corresponding CME observed by \textit{STA}/COR1 (\textit{diamonds}) and LASCO/C2 (\textit{stars}) are displayed in Figure~\ref{fig12}. The apparent propagation velocities represented by the slopes of the lines are 60, 358, and 214 km s$^{-1}$, respectively. Taking the projection effect into account, the start times of the filament eruption and of the CME observed by LASCO/C2 and \textit{STA}/COR1 from the lower corona ($\approx1.0 R_{\sun}$) are approximately coincident with each other. In the CDAW catalog, the preceding and succeeding CMEs occurred at 06:12 UT and 18:36 UT on September 8. In the COR1 CME catalog, the preceding and succeeding CMEs occurred slightly earlier, at 05:45 UT and 18:05 UT on the same day, owing to the smaller FOV of COR1 compared with LASCO/C2.
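Returning to the loop-oscillation seismology above, the quoted numbers can be verified explicitly; the following consistency check is added for the reader's convenience and uses only the values given in the text:
\[
C_K=\frac{2L}{P}=\frac{2\times166.7\ \mathrm{Mm}}{225\ \mathrm{s}}\approx1482\ \mathrm{km\ s^{-1}},\qquad
v_A=C_K\,\sqrt{\frac{1+\rho_o/\rho_i}{2}}=1482\times\sqrt{\frac{1.1}{2}}\ \mathrm{km\ s^{-1}}\approx1100\ \mathrm{km\ s^{-1}}.
\]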
These timing coincidences indicate that the runaway part of the filament was uniquely associated with the CME during 16:00$-$18:00 UT. A question then arises: how could the runaway part of the filament successfully escape from the corona and give rise to a CME? We speculate that open magnetic field lines provided a channel. To justify this speculation, we turn to the large-scale magnetic field calculated with the potential field source surface \citep[PFSS;][]{sch69,sch03} modelling and to the radio dynamic spectra from the S/WAVES and WAVES instruments. In Figure~\ref{fig13}, we show the magnetic field lines whose footpoints are located in AR 11283 at 12:04 UT, before the onset of the flare/CME event. The open and closed field lines are represented by the purple and white lines. It is clear that open field lines do exist in the AR, and their configuration accords with the directions of the escaping part of the filament observed by AIA and of the CME observed by C2. The radio dynamic spectra from S/WAVES and WAVES are displayed in panels (a)$-$(b) and (c)$-$(d) of Figure~\ref{fig14}, respectively. There are clear signatures of a type \Rmnum{3} radio burst in the spectra. For \textit{STA}, the burst started at $\sim$15:38:30 UT and ended at $\sim$16:00 UT, during which the frequency drifted rapidly from 16 MHz to $\sim$0.3 MHz. For \textit{STB}, which was $\sim$0.07 AU farther from the Sun than \textit{STA}, the burst started $\sim$2 minutes later, with the frequency drifting from $\sim$4.1 MHz to $\sim$0.3 MHz, since the early propagation of the filament was blocked by the Sun. For WAVES, the burst started at $\sim$15:39:30 UT and ended at $\sim$16:00 UT, with the frequency drifting from 13.8 MHz to $\sim$0.03 MHz. The starting times of the radio burst were consistent with the HXR peak times of the flare. Since type \Rmnum{3} radio emissions result from the cyclotron maser instability of the nonthermal electron beams that are accelerated and ejected into the interplanetary space along open magnetic field lines during the flare \citep{tang13}, the type \Rmnum{3} radio burst observed by \textit{STEREO} and \textit{WIND} provides indirect and supplementary evidence that open magnetic field lines exist near the flare site. \section{Discussions} \label{s-disc} \subsection{How is the energy accumulated?} \label{s-eng} It is widely accepted that solar eruptions result from the release of magnetic free energy. For this event, we studied how the energy was accumulated by investigating the magnetic evolution of the AR using the HMI LOS magnetograms (see the online movie Animation5.mpg). Figure~\ref{fig15} displays four snapshots of the magnetograms, where the AR is dominated by the negative polarity (N1). A preexisting positive polarity (P1) is located in the northeast direction. From the movie, we found continuous shearing motion along the highly fragmented and complex PIL between N1 and P1. For example, the small negative region N2 at the boundary of the sunspot was dragged westward and became elongated (Figure~\ref{fig15}(b)-(d)). To better illustrate the motion, we derived the transverse velocity field ($v_x$, $v_y$) at the photosphere using the differential affine velocity estimator (DAVE) method \citep{sch05}. The cadence of the HMI LOS magnetograms was lowered from 45 s to 180 s. Figure~\ref{fig16} displays six snapshots of the magnetograms overlaid with the transverse velocity field represented by the white arrows. The velocity field is clearly characterized by the shearing motions along the PIL.
The regions within the green and blue elliptical lines are dominated by eastward and westward motions at speeds of $\sim$1.5 km s$^{-1}$. From the online movie (Animation6.mpg), we can see that the continuous shearing motions were evident before the flare, implying that magnetic free energy and helicity were accumulated and stored before the impulsive release. \subsection{How is the eruption triggered?} \label{s-tri} Once the free energy of the AR accumulates to a critical value, the filament constrained by the overlying magnetic field lines is likely to erupt. Several types of triggering mechanism have been proposed. One type, in which magnetic reconnection is involved, includes the flux emergence model \citep{chen00}, the catastrophe model \citep{lin00}, the tether-cutting model \citep{moo01,chen14}, and the breakout model \citep{ant99}, to name a few. Another type comprises ideal magnetohydrodynamic (MHD) processes resulting from the KI \citep{kli04} and/or the TI \citep{kli06}. From Figure~\ref{fig15} and the movie (Animation5.mpg), we can see that before the flare there was continuous magnetic flux emergence (P2, P3, and P4) and subsequent magnetic cancellation along the fragmented PIL. We extracted a large region within the white dashed box of Figure~\ref{fig15}(d) and calculated the total positive ($\Phi_{P}$) and negative ($\Phi_{N}$) magnetic fluxes within the box. In Figure~\ref{fig17}, the temporal evolutions of the fluxes during 11:00$-$16:30 UT are plotted, with the evolution of $\Phi_{P}$ divided into five phases (I$-$V) separated by the dotted lines. The first four phases, before the onset of the flare at 15:32 UT, are characterized by quasi-periodic, small-amplitude magnetic flux emergence and cancellation, implying that the large-scale magnetic field was undergoing rearrangement before the flare. The intensity contours of the 304 {\AA} images in Figure~\ref{fig3}(b) and (d) are overlaid on the magnetograms in Figure~\ref{fig15}(b) and (c), respectively. It is clear that the initial brightenings IB1 and IB2 are very close to the small positive polarities P4 and P3. There is no significant magnetic flux emergence around IB3. In the emerging-flux-induced-eruption model \citep{chen00}, when a reconnection-favorable magnetic bipole emerges from beneath the photosphere into the filament channel, it reconnects with the preexisting magnetic field lines that compress the inverse-polarity MFR. The small-scale magnetic reconnection and flux cancellation serve as the precursor for the upcoming filament eruption and flare. During the flare, when magnetic reconnection occurred between 15:32 UT and 16:10 UT, both the positive and negative magnetic fields experienced impulsive and irreversible changes. Although flux emergence plausibly explains the triggering mechanism, there is another possibility. In the tether-cutting model \citep{moo01}, a pair of $J$-shaped sheared arcades that comprise a sigmoid reconnect when the two elbows come into contact, forming a short loop and a long MFR. Whether the MFR experiences a failed or ejective eruption depends on the strength of the compression from the large-scale background field. The initial brightenings (IB1, IB2, and IB3) around the sigmoidal filament might be precursor brightenings resulting from internal tether-cutting reconnection due to the continuous shearing motion along the PIL. After the onset, the whole flux system erupted and produced the M-class flare.
Considering that the magnetic configuration could not be modelled during the flare, we are not sure whether a coherent MFR was formed after the initiation \citep{chen14}. Compared with flux emergence, internal tether-cutting reconnection offers a more convincing interpretation of how the filament eruption was triggered, for the following reasons. Firstly, the filament was supported by sheared arcades. Secondly, there were continuous shearing motions along the PIL, and their directions were favorable for tether-cutting reconnection. Finally, the initial brightenings (IB1, IB2, and IB3) around the filament in Figure~\ref{fig3} fairly match the internal tether-cutting reconnection, with multiple bright patches of flare emission appearing in the chromosphere at the feet of the reconnected field lines, whereas there was no flux emergence around IB3. NLFFF modelling shows that the twist number ($\sim$1) of the sheared arcades supporting the filament is less than the threshold value ($\sim$1.5), implying that the filament eruption was probably not triggered by the ideal KI. The photospheric magnetic field of the AR features a bipole (P1 and N1) and a couple of mini-polarities (e.g., P2, P3, P4, and N2). Therefore, the filament eruption could not be explained by the breakout model, which requires a quadrupolar magnetic field, although null-point magnetic reconnection took place above the filament during the eruption. After the onset of the eruption, the filament split into two parts, as described in Section~\ref{s-result}. How the filament split is still unclear. In the previous literature, magnetic reconnection is involved in the splitting in most cases \citep{gil01,gib06a,liu08b}. In this study, the splitting occurred during the impulsive phase of the flare at the eastern leg, which was closer to the flare site than the western one, implying that the splitting was associated with the release of magnetic energy. The subsequent rotation or unwinding motion implies the release of magnetic helicity stored in the filament before the flare, presumably due to the shearing motion in the photosphere. Nevertheless, it remains elusive whether the filament existed as a whole or was composed of two interwinding parts before splitting. The way of splitting seems difficult to explain with any of the previous models and requires in-depth investigation. Though the runaway part escaped from the corona, the major part did not; it returned to the solar surface after reaching its apex. Such failed eruptions have been frequently observed and explained by the strapping effect of the overlying arcade \citep{ji03,guo10a,song14,jos14} or the asymmetry of the background magnetic fields with respect to the location of the filament \citep{liu09}. To figure out the cause of the failed eruption of the major part, we turn to the large-scale magnetic configurations displayed in the bottom panels of Figure~\ref{fig2}. It is revealed that the overlying magnetic arcades above AR 11283 are asymmetric to a great extent, i.e., the magnetic field to the west of the AR is much stronger than that to the east, which is similar to the case of \citet{liu09}. According to the analysis of \citet{liu09}, the confinement of the large-scale arcade acting on the filament is strong enough to prevent it from escaping. We also performed a magnetic potential-field extrapolation using the same boundary and derived the distributions of $|\mathbf{B}|$ above the PIL.
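From such a $|\mathbf{B}|(z)$ profile, the decay index $n=-d\ln|\mathbf{B}|/d\ln z$ entering the torus-instability criterion discussed below can be evaluated numerically. The following is a minimal sketch with a hypothetical field profile; the scale height and field values are placeholders, not the output of our extrapolation:
\begin{verbatim}
# Decay index n(z) = -d ln|B| / d ln z of the overlying field.
# The scale height H and the |B|(z) profile are hypothetical
# placeholders, not the actual extrapolation output.
import numpy as np

H = 60e3                                  # hypothetical scale height [km]
z = np.linspace(5e3, 200e3, 400)          # height above the PIL [km]
B = 100.0 * (1.0 + z / H) ** -3           # hypothetical |B|(z) [G]

n = -np.gradient(np.log(B), np.log(z))    # decay index
z_crit = z[np.argmax(n >= 1.5)]           # first height where n >= 1.5
print("critical height: %.0f Mm" % (z_crit / 1e3))
\end{verbatim}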
It is found that the maximum height of the major part considerably exceeds the critical height ($\sim$80$\arcsec$) for the TI, where the decay index ($-d\ln|\mathbf{B}|/d\ln z$) of the background potential field reaches $\sim$1.5. The major part would have escaped from the corona successfully after entering the instability domain if the TI had worked. Therefore, the asymmetry of the overlying arcades with respect to the filament location, rather than the TI, seems a reasonable and convincing interpretation of why the major part of the filament underwent a failed eruption. In this study, both successful and failed eruptions occurred in a partially eruptive event, which provides more constraints on the theoretical models of solar eruptions. \subsection{How is the coronal loop oscillation triggered?} \label{s-loop} Since the first discovery of coronal loop oscillations during flares \citep{asch99,nak99}, such oscillations have been found to be ubiquitous and useful for the diagnostics of the coronal magnetic field \citep{guo15}. Owing to the complex interconnections of the magnetic field lines, a blast wave and/or an EUV wave induced by a filament eruption may disturb the adjacent coronal loops in the same AR or remote loops in another AR, resulting in transverse kink-mode oscillations. \citet{nis13} observed decaying and decayless transverse oscillations of a coronal loop on 2012 May 30. The loops experienced small-amplitude decayless oscillations, driven by an external non-resonant harmonic driver, before and after the flare \citep{mur14}. The flare, as an impulsive driver, triggered large-amplitude decaying loop oscillations. In our study, a decayless loop oscillation with moderate amplitude ($\sim$1.6 Mm) occurred during the flare and lasted for only two cycles, which makes it quite difficult to measure the decay timescale precisely, if the oscillation is indeed decaying. The loop may cool down and become invisible in 171 {\AA} while oscillating. Considering that the distance between the flare and the OL is $\sim$50 Mm and the time delay between the flare onset and the loop oscillation is $\sim$6 minutes, the speed of propagation of the disturbances from the flare to the OL is estimated to be $\sim$140 km s$^{-1}$, which is close to the local sound speed of plasmas with a temperature of $\sim$0.8 MK. Hence, we suppose that the coronal loop oscillation was triggered by the external disturbances resulting from the rising and expanding motions of the filament. \subsection{Significance for space weather prediction} \label{s-swp} Flares and CMEs play a very important role in the generation of space weather. Accurate prediction of space weather is of great significance. Successful eruptions have been extensively observed and deeply investigated. Partial filament eruptions that produce flares and CMEs, however, are rarely detected and poorly explored. For the type of partial eruption in this study, i.e., where one part undergoes a failed eruption and the other part escapes from the corona, it would be misleading and confusing to assess and predict the space weather effects based only on information from the solar surface, since the escaping part may carry or produce solar energetic particles that have potential geoeffectiveness. Complete observations are necessary for accurate predictions. \section{Summary} \label{s-sum} Using the multiwavelength observations from both spaceborne and ground-based telescopes, we studied in detail a partial filament eruption event in AR 11283 on 2011 September 8.
The main results are summarized as follows: \begin{enumerate} \item{A magnetic null point was found above the preexisting positive polarity surrounded by negative polarities in the AR. A spine passed through the null and intersected with the photosphere to the left. A weakly twisted sheared arcade supporting the filament was located beneath the null point, whose height increased slightly by $\sim$0.4 Mm after the eruption.} \item{The filament rose and expanded, which was probably triggered by the internal tether-cutting reconnection or by the continuous magnetic flux emergence and cancellation along the highly complex and fragmented PIL, the former of which seems more convincing. During its eruption, it triggered the null-point magnetic reconnection and the M6.7 flare with a single HXR source at different energy bands. The flare produced a quasi-circular ribbon and a V-shape ribbon where the outer spine intersects with the photosphere.} \item{During the expansion, the filament split into two parts at the eastern leg, which is closer to the flare site. The major part of the filament rose at speeds of 90$-$150 km s$^{-1}$ before reaching the maximum apparent height of $\sim$115 Mm. Afterwards, it fell back to the solar surface in a bumpy way at speeds of 20$-$80 km s$^{-1}$. The rising and falling motions of the filament were clearly observed in the UV, EUV, and H$\alpha$ wavelengths. The failed eruption of the major part was most probably caused by the asymmetry of the overlying magnetic arcades with respect to the filament location.} \item{The runaway part, however, separated from and rotated around the major part for $\sim$1 turn before escaping outward from the corona at speeds of 125$-$255 km s$^{-1}$, probably along the large-scale open magnetic field lines, as evidenced by the PFSS modelling and the type \Rmnum{3} radio burst. The ejected part of the filament led to a faint CME. The angular width and apparent speed of the CME in the FOV of C2 are 37$^{\circ}$ and 214 km s$^{-1}$. The propagation directions of the escaping filament observed by SDO/AIA and \textit{STA}/EUVI are consistent with those of the CME observed by LASCO/C2 and \textit{STA}/COR1, respectively.} \item{The partial filament eruption also triggered a transverse oscillation of the neighbouring coronal loops in the same AR. The amplitude and period of the kink-mode oscillation were 1.6 Mm and 225 s. We also performed diagnostics of the plasma density and temperature of the oscillating loops.} \end{enumerate} \acknowledgements The authors thank the referee for valuable suggestions and comments to improve the quality of this article. We gratefully acknowledge Y. N. Su, P. F. Chen, J. Zhang, B. Kliem, R. Liu, S. Gibson, H. Gilbert, M. D. Ding, and H. N. Wang for inspiring and constructive discussions. \textit{SDO} is a mission of NASA\rq{}s Living With a Star Program. AIA and HMI data are courtesy of the NASA/\textit{SDO} science teams. \textit{STEREO}/SECCHI data are provided by a consortium of US, UK, Germany, Belgium, and France. QMZ is supported by the Youth Fund of JiangSu BK20141043, by the 973 program under grant 2011CB811402, and by NSFC 11303101, 11333009, 11173062, 11473071, and 11221063. H. Ji is supported by the Strategic Priority Research Program$-$The Emergence of Cosmological Structures of the Chinese Academy of Sciences, Grant No. XDB09000000. YG is supported by NSFC 11203014. Li Feng is supported by the NSFC grants 11473070 and 11233008 and by grant BK2012889.
Li Feng also thanks the Youth Innovation Promotion Association, CAS, for the financial support.
\section{Introduction} Online harassment is pervasive in regions around the world. Users post hate speech that demeans and degrades people based on their gender, race, sexual identity, or position in society \cite{blackwell2017classification,lenhart2016online}; users post insults and spread rumors, disproportionately harming those with fewer resources in society to cope with or respond to the attacks \cite{marwick2021morally, lenhart2016online, ybarra2008risky}; and users share private, sensitive content, like home addresses or sexual images, without the consent of those whose information is being shared \cite{goldberg2019nobody}. These behaviors introduce multiple types of harm with varied levels of severity, ranging from minor nuisances to psychological harm to economic precarity to life threats \cite{jiang2021understanding, schoenebeck2020reimagining, sambasivan2019they}. \textcolor{black}{Gaining a global understanding of online harassment} is important for designing online experiences that meet the needs of diverse communities around the world. Social media platforms have struggled to govern online harassment, relying on human and algorithmic moderation systems that cannot easily adjudicate content that is as varied as the human population that creates it \cite{goldman2021content, roberts2019behind}. Platforms maintain community guidelines that dictate what type of content is allowed or not allowed and then use a combination of human and automated pipelines to identify and address violations \cite{roberts2019behind, gillespie2018custodians}. However, identifying and categorizing what type of content is harmful is difficult for both humans and algorithms to do effectively and consistently. These challenges are magnified in multilingual environments, where people may be trying to assess content in languages or cultural contexts different from those they are familiar with, while algorithms are inadequately developed to work across these languages and contexts \cite{york2021silicon, gupta2022adima}. \textcolor{black}{Investigations} of harms associated with online harassment have been given disproportionate attention in U.S. contexts. Most prominent technology companies are centered in the U.S., employing U.S. workers in executive positions and centering U.S. laws, norms, corporations, and people \cite{york2021silicon, wef2022ceo}. Scholars have called attention to this problem, pointing out how experiences differ for people and communities globally (e.g. \cite{york2021silicon, sambasivan2019they, sultana2021unmochon}). For example, a study of 199 South Asian women shows that they refrain from reporting abuse because platforms rarely have the contextual knowledge to understand local experiences \cite{sambasivan2019they}. Across countries, social media users have expressed distrust in platforms' ability to govern behavior effectively, especially through systems that are vague, complicated, and U.S.- and European-centric \cite{crawford2016flag, sambasivan2019they, blackwell2017classification}. Governing social media across the majority of the world requires understanding how to design platforms with policies and values that are aligned with the communities who use them. Towards that goal, this article examines perceptions of harm and preferences for remedies associated with online harassment via a survey conducted in 14 countries\footnote{\textcolor{black}{Data was collected from 13 countries plus a collection of Caribbean countries.
We use the term ``country'' throughout for readability.}} around the world, selected for their diversity in location, culture, and economies. Results from this study shed light on similarities and differences in attitudes about harms and remedies in countries around the world. This work also demonstrates the complexities of measuring and making sense of these differences, which cannot be explained by a single factor and should not be assumed to be stable over time. This article advances scholarship on online harassment in majority contexts, and seeks to expand understandings about how to design platforms that meet the needs of the communities that use them. \section{Impacts of Online Harassment} Online harassment is an umbrella term that encompasses myriad types of online behaviors including insults, hate speech, slurs, threats, doxxing, and non-consensual image sharing, among others. A rich body of literature has described characteristics of online harassment including what it is, who experiences it, and how platforms try to address it (e.g. \cite{schoenebeck2020reimagining, jhaver2019did, matias2019preventing, douek2020governing, chandrasekharan2017bag, thomas2021sok}). Microsoft's Digital Civility surveys and Google's state of abuse, hate, and harassment surveys indicate how harassment is experienced globally \cite{thomas2021sok, msft2022dci}. Harassment can be especially severe when it is networked and coordinated, where groups of people threaten one or many other people's safety and wellbeing \cite{marwick2021morally}. Other types of harassment are especially pernicious in-the-moment, such as reporting ``crimes'' so that law enforcement agencies investigate a home \cite{bernstein2016investigating} or sharing a person's home address online with the intent of encouraging mobs of people to threaten that person at their home. Across types of harm, marginalized groups experience disproportionate harm associated with harassment online, including racial minorities, religious minorities, caste minorities, sexual and gender minorities, and people who have been incarcerated \cite{walker2017systematic, pewonline, powell2014blurred,maddocks2018non, englander2015coerced, poole2015fighting}. Sometimes users post malicious content intended to bypass community guidelines, which is difficult to detect algorithmically \cite{dinakar2012common,vitak2017identifying}. It is relatively easy to deceive automatic detection models by subtly modifying an otherwise highly toxic phrase so that the detection model assigns it a significantly lower toxicity score \cite{hosseini2017deceiving}. In addition, due to limited training on non-normative behavior, these automatic detection and classification tools can exacerbate existing structural inequities \cite{hosseini2017deceiving}. For instance, Facebook’s removal of a photograph of two men kissing after flagging it as ``graphic sexual content'' highlighted the lack of inclusivity of non-dominant behavior in their automatic detection tools \cite{fbcencershipprob}. This valorization of certain viewpoints highlights that power resides among those who create these labels by embedding their own values and worldviews (mostly U.S.-centric) to classify particular behaviors as appropriate or inappropriate \cite{blackwell2017classification, hosseini2017deceiving}. The effects of harassment vary across individuals and experiences but can include anxiety, stress, fear, humiliation, self-blame, anger, and illness.
There is not yet a standard framework for measuring harms associated with online harassment, which can include physical harm, sexual harm, psychological harm, financial harm, reproductive harm, and relational harm \cite{unwomen}. These can manifest in myriad ways: online harassment can cause changes to technology use or privacy behaviors, increased safety and privacy concerns, and disruptions of work, sleep, and personal responsibilities \cite{pittaro2007cyber,griffiths2002occupational,duggan2017online}. Other consequences can include public shame and humiliation, an inability to find new romantic partners, job loss or problems securing new employment, offline harassment and stalking, and numerous mental health issues, such as post-traumatic stress disorder (PTSD), depression, anxiety, self-blame, self-harm, trust issues, low self-esteem and confidence, and loss of control \cite{walker2017systematic, powell2014blurred, bates2017revenge,eckert2020doxxing, barrense2020non, ryan2018european, pampati2020having}. These effects can be experienced for long periods of time due in part to the persistence and searchability of content \cite{goldberg2019nobody}. Targets often choose to temporarily or permanently abstain from social media sites, despite the resulting isolation from information resources and support networks \cite{goldberg2019nobody, lenhart2016online}. Microsoft's Digital Civility Index, a yearly survey of participants in over 20 countries, indicates that men are more confident than women in managing online risks \cite{msft2022dci}. Sexual images of women and girls are disproportionately created, sent, and redistributed without consent, which can severely impact women's lives \cite{burkett2015sex, dobson2016sext, bates2017revenge,eckert2020doxxing, barrense2020non}. In studies of unsolicited \textcolor{black}{nude} images and their effect on user engagement \cite{hayes2018unsolicited, shaw2016bitch}, victims reported being bombarded with unwelcomed explicit imagery and faced further insults when they attempted to reduce interaction. A survey by Maple et al. with 353 participants from the United Kingdom (68\% of respondents were women) found that damage to their reputation was the primary fear of victims of cyberharassment \cite{maple2011cyberstalking}. The consequences of gendered and reputational harm can be devastating. In South Korea, celebrities Hara Goo and Sulli (Jin-ri Choi) died by suicide, which many attributed to the large-scale cyberbullying, sexual harassment, and gender violence they experienced online \cite{goo2019}. A Pakistani social media celebrity was murdered by her brother, who perceived her social media presence as a blemish on the family's honor \cite{QandeelBaloch}. Two girls and their mother were allegedly gunned down by a stepson and his friends over the non-consensual filming and sharing of a video of the girls enjoying rain among family \cite{twogirls}. Many of these harms are ignited and fueled by victim-blaming, where society places the responsibility solely on women and other marginalized groups to avoid being assaulted \cite{walker2017systematic, powell2014blurred, chisala2016gender}. This blaming is also perpetuated digitally; for instance, a review of qualitative studies on non-consensual sharing highlighted that women are perceived as responsible if their images are shared because they voluntarily posed for and sent these images in the first place \cite{walker2017systematic}.
\section{Challenges in Governing Online Harassment} Most social media sites have reporting systems aimed at flagging inappropriate content or behavior online \cite{crawford2016flag}. Though platform policies do not explicitly define what constitutes online harassment \cite{pater2016characterizations}, platforms have highlighted several activities and behaviors in their community guidelines including abuse, bullying, defaming, impersonation, stalking, and threats \cite{pater2016characterizations, jiang2020characterizing}. Content that is reported goes into a processing pipeline where human workers evaluate the content and determine whether it violates community guidelines or not \cite{roberts2019behind}. If it does, they may take it down and sanction the user who posted it, with sanctions ranging in severity from warnings to suspensions to permanent bans \cite{schoenebeck2021drawing, goldman2021content}. Platforms use machine learning to automatically classify and filter out malicious content, abusive language, and offensive behaviors \cite{chandrasekharan2017bag,wulczyn2017ex,yin2009detection}. These approaches range from adding contextual and semantic features in detection tools, to generating computational models using preexisting data from online communities, to using these machine learning models to assign ``toxicity scores'' \cite{wulczyn2017ex, chandrasekharan2017bag}. Though harassment detection approaches have improved dramatically, fundamental limitations remain \cite{blackwell2017classification}, including false positives and false negatives, where content is taken down that should have stayed up and vice versa \cite{haimson2021disproportionate, schoenebeck2020reimagining}. Many of these problems are deeply embedded in algorithmic systems, which can reinforce Western tropes, such as associating the word ``Muslim'' with terrorists \cite{abid2021persistent}. Algorithms to detect problematic content also perform substantially worse in non-English languages, perpetuating inequalities rather than remediating them \cite{debre2021facebook}. Dominant voices can also overrule automatically flagged content through situated judgments \cite{crawford2016flag}. For instance, a widely distributed video of Neda, an Iranian woman caught up in street protests and shot by military police in 2009, was heavily flagged as violating YouTube's community guidelines for graphic violence, but YouTube justified leaving it up because the video was newsworthy \cite{Neda}. Platform policies are written in complex terms that are inaccessible to many social media users, which makes it difficult for them to seek validation of their online harassment experiences \cite{fiesler2016reality}. Further, platform operators do not specify which prohibited activities are associated with which responses \cite{pater2016characterizations}. When combined with the punitive nature of sanctions, online governance systems may be confusing and ineffective at remediating user behavior, while overlooking the harms faced by victims of the behavior \cite{schoenebeck2021drawing}. One alternative that has been proposed more recently is a focus on rehabilitation and reparation in the form of apologies, restitution, mediation, or validation of experiences \cite{blackwell2017classification, schoenebeck2021drawing, xiao2022sensemaking}.
Implementing responses to online harassment requires that users trust platforms' ability to select and implement that response \cite{wilkinson2022many}; however, public trust in technology companies has decreased in recent years, and there is also distrust of social media platforms' ability to effectively govern online behavior \cite{schoenebeck2021youth, americanstrust2020, blackwell2017classification, musgrave2022experiences}. In the U.S., 84\% of social media users believe that it is the platform's responsibility to protect them from social media harassment \cite{Americans}, yet Lenhart et al.'s survey suggests that only 27\% of victims reported harassing activities on these platforms \cite{onlineharassmentAmerica}. A different survey by Wolak et al. with 1631 victims of sextortion found that 79\% of victims did not report their situation to social media sites because they did not think it would be helpful to report \cite{wolak2016sextortion}. Their participants indicated that platform reporting might be helpful only when victims are connected to perpetrators exclusively online, which might be addressable through in-app reporting \cite{wolak2016sextortion}. Sambasivan et al.'s study with 199 South Asian women revealed that participants refrain from reporting through platforms due to platforms' limited contextual understanding of victims' regional issues, which is further slowed by the platforms' requirements to fill out lengthy forms providing detailed contexts \cite{sambasivan2019they}. Musgrave et al. find that U.S. Black women and femmes do not report gendered and racist harassment because they do not believe reporting will help them \cite{musgrave2022experiences}. Wolak et al. also found that only 16\% of victims of sextortion reported their incidents to the police \cite{wolak2016sextortion}. Many of those who reported to police described having a negative reporting experience, which deterred them from pursuing criminal charges against offenders \cite{wolak2016sextortion}. Such experiences include police arguing for the inadequacy of proof to file complaints, claiming that sextortion is a non-offensive act, claiming a lack of jurisdiction to take action, and being generally rude, insensitive, and mocking \cite{wolak2016sextortion}. Sambasivan et al. also reported that only a few of their nearly 200 participants reported abusive behaviors to police because they perceived law enforcement officers to have low technical literacy, to be likely to shame women, or to be abusers themselves \cite{sambasivan2019they}. When abusers are persistent, even reporting typically does not address the ongoing harassment \cite{marwick2021morally, goldberg2019nobody}. Sara Ahmed introduces the concept ``strategic inefficiency'' to explain how institutions slow down complaint procedures in ways that deter complaints from constituents \cite{ahmed2021complaint}. The lack of formal reporting channels leads users to be largely self-reliant for mitigating and avoiding abuse.
Techniques they use include limiting content, modifying privacy settings, self-censorship, using anonymous and gender-neutral identities, using humor, avoiding communication with others, ignoring abuse, confronting abusers, avoiding location sharing, deleting accounts, using blocklists, changing contact information, changing passwords, using multiple email accounts for different purposes, creating a new social media profile under a different name, blocking or unfriending someone, and untagging themselves from photos \cite{onlineharassmentAmerica, wolak2016sextortion, vitak2017identifying, fox2017women, mayer2021now, such2017photo, corple2016beyond, dimond2013hollaback, vitis2017dick, jhaver2018online}. Whether reporting to companies or \textcolor{black}{police}, these approaches all put the burden of addressing harassment on the victims. If we want to better govern online behavior globally, we need to better understand what harms users experience and how platforms and policies can systematically better support them after those harms. \section{Study Design} We conducted a cross-country online survey in 14 countries (13 countries plus multiple Caribbean countries). \textcolor{black}{We aimed for a minimum of 250 respondents in each country, a target that reflected our desire for age variance and gender representation among men and women without the higher sample size needed for representative samples or subgroup analyses}. The survey focused on online harassment harms and remedies and included questions about demographics, personal values, societal issues, social media habits, and online harassment. This paper complements a prior paper from the same project that focused on gender \cite{im2022women}; this paper focuses on country-level differences, though it also engages with gender as part of the narrative. We iteratively designed the survey as a research team, discussing and revising questions over multiple months. When we had a stable draft of the survey, members of our research team translated surveys manually and compared those versions to translations via paid human translation services for robustness. We pilot tested translations with 2-4 people for each language and revised the survey further. \textcolor{black}{Our goal was to have similar wording across languages; though this resulted in some overlapping terms in the prompts (e.g. malicious), participants seemed to comprehend each prompt in our pilots.} We deployed the survey in a dominant local language for each country (see Table \ref{table:participant-demographics}). The survey contained four parts: harassment scenarios, harm measures, possible remedies, and demographics and values. Below, we describe each stage in detail: \textit{Harassment scenarios}. We selected four online harassment scenarios to capture participants' perceptions about a range of harassment experiences without making the survey too long, which leads to participant fatigue. We selected the four harassment scenarios by reviewing prior scholarly literature, reports, and \textcolor{black}{news articles} and prioritizing diversity in types of harm and severity of harm. We prioritized harassment types that would be globally relevant and legible among participants and could be described succinctly. Participants were presented with one scenario along with the harm and remedy questions (described below), and completed this sequence four times, once for each harassment scenario.
The harassment scenario prompt asked participants to ``Imagine a person has:'' and then presented each of the experiences below. \begin{itemize} \item spread malicious rumors about you on social media \item taken sexual photos of you without your permission and shared them on social media \item insulted or disrespected you on social media \item created fake accounts and sent you malicious comments through direct messages on social media \end{itemize} \textit{Harm measures}. We developed four measures of harm to ask about with each harassment scenario. We again prioritized types of harmful experiences that would be relevant to participants globally. Drawing on our literature review on harms in other disciplines (e.g. medicine) and more nascent discussions of technological harms (e.g. privacy harms \cite{citron2021privacy}), we chose to prioritize three prominent categories of harm used in scholarly literature and by the World Health Organization--psychological, physical, and sexual harm. We then added a fourth category---reputational harm---because harm to family reputation is a prominent concern in many cultures and these concerns may be exacerbated on social media. We prioritized question wording that could be translated and understood across languages. For example, our testing revealed that the concept of ``physical harm'' was confusing to participants when translated, so we iterated on wording until we landed on ``personal safety.'' The final wording we used was: \begin{itemize} \item Would you be concerned for your psychological wellbeing? \item Would you be concerned for your personal safety? \item Would you be concerned for your family reputation? \item Would you consider this sexual harassment against you? \end{itemize} Perceived harm options were presented on 5-point scales of ``Not at all concerned'' (1) to ``Extremely concerned'' (5) for the first three questions and ``Definitely not'' (1) to ``Definitely'' (5) for the last question. We chose these response stems to avoid Agree/Disagree options, which may promote acquiescence bias \cite{saris2010comparing}, and because these could be translated consistently across languages. \textit{Harassment remedies}. Current harassment remedies prioritize content removal and user bans after a policy violation. However, scholars are increasingly arguing that a wider range of remedies is needed for addressing widespread harms. Goldman proposes that expanded remedies can improve the efficacy of content moderation, promote free expression, promote competition among Internet services, and improve Internet services’ community-building functions \cite{goldman2021content}. Goldman's taxonomy of remedies is categorized by content regulation, account regulation, visibility reductions, monetary remedies, and ``other.'' Schoenebeck et al. \cite{schoenebeck2020drawing} have also proposed that expanding remedies can create more appropriate and contextualized justice systems online. They see content removal and user bans as a form of criminal legal moderation, where harmful behavior is removed from the community, and propose adding complementary justice frameworks. For example, restorative justice suggests alternative remedies like apologies, education, or mediation.
Building on this work, we developed a set of proposed remedies and, for each harassment scenario, asked participants, ``How desirable would you find the following responses?'' with response options on a 5-point scale of ``Not at all desirable for me (1)'' to ``Extremely desirable for me (5).'' The seven remedies we displayed were chosen to reflect a diversity of types of remedies while keeping the total number relatively low to reduce participant fatigue. We also asked one free response question: ``What do you think should be done to address the problem of harassment on social media?'' \begin{itemize} \item removing the content from the site. \item labeling the content as a violation of the site’s rules. \item banning the person from the site. \item paying you money. \item requiring a public apology from the person. \item revealing the person’s real name and photograph publicly on the site. \item giving a negative rating to the person. \end{itemize} \textit{Demographics}. The final section contained social media use, values, and demographic questions. The values and demographic questions were derived from the World Values Survey (WVS) \cite{inglehart2014world}, a long-standing cross-country survey of values. This paper focuses on six measures from the WVS. \begin{itemize} \item Generally speaking, would you say that most people can be trusted or that you need to be very careful in dealing with people? \item How much confidence do you have in police? \item How much confidence do you have in the courts? \item How secure do you feel these days in your neighborhood? \item What is your gender? \item Have you had any children? \end{itemize} The response options ranged from ``None at all'' (1) to ``A great deal'' (4) for police and courts and from ``Not at all secure'' (1) to ``Very secure'' (4) for neighborhood. We omitted the police and courts questions in Saudi Arabia. For trust, options were ``Most people can be trusted'' (1) and ``Need to be very careful'' (2). For gender, the response options were ``Male'', ``Female'', ``Prefer not to disclose'', and ``Prefer to self-describe.'' We chose not to include non-binary or transgender questions because participants in some countries cannot safely answer those questions, though participants could choose to write them in. We recruited participants from 14 countries (see Table \ref{table:participant-demographics}): 13 countries plus the Caribbean countries (Antigua and Barbuda, Barbados, Dominica, Grenada, Jamaica, Montserrat, St. Kitts and Nevis, St. Lucia, and St. Vincent). \textcolor{black}{We decided to analyze the Caribbean countries together because of the small sample sizes and their relative similarities, while recognizing that each country has its own economy, culture, and politics.} This study was exempted from review by our institution’s Institutional Review Board. Participants completed a consent form in the language of the survey. Participants were recruited via the survey company Cint in most countries, Prolific in the U.S., and manually via the research team in the Caribbean countries and Mongolia. Participants were compensated based on exchange rates and pilot tests of time taken in each country.
\begin{table} \caption{Participant demographics} \label{table:participant-demographics} \begin{tabular}{ l c c } \toprule Country & Language & Num Participants\\ \midrule Austria & German & 251 \\ Cameroon & English & 263 \\ Caribbean & English & 254 \\ China & Mandarin & 283 \\ Colombia & Spanish (Colombian) & 296 \\ India & Hindi/English & 277 \\ South Korea & Korean & 252 \\ Malaysia & Malay & 298 \\ Mexico & Spanish (Mexican) & 306 \\ Mongolia & Mongolian & 367 \\ Pakistan & Urdu & 302 \\ Russia & Russian & 282 \\ Saudi Arabia & Arabic & 258 \\ USA & English & 304 \\ \textbf{Total} & & \textbf{3993} \\ \bottomrule \end{tabular} \end{table} \subsection{Participant Demographics} The gender ratio between men and women participants was similar across countries \textcolor{black}{(ranging from 50\% women and 50\% men in China to 43\% women and 57\% men in India)}, except for the Caribbean countries \textcolor{black}{(women: 69\%, men: 27\%) and Mongolia (women: 59\%, men: 41\%)} (see details about gender in \cite{im2022women}). The median age was typically in the 30s; Mongolia was lowest at 21 while South Korea and the United States were 41.5 and 44, respectively. Participants skew young but roughly reflect each country's population, e.g. Mongolia’s median age is 28.2 years while the South Korea and U.S. medians are 43.7 and 38.3, respectively, according to United Nations estimates \cite{united20192019}. Participants’ self-reported income also varied across countries, with participants in Austria reporting higher incomes and participants in Caribbean countries reporting lower incomes. In eight countries (Cameroon, China, Colombia, India, Malaysia, Russia, Saudi Arabia, United States), more than half of the participants had education equivalent to a Bachelor's degree; in the other countries, they did not. Participants placed their political views as more ``left'' than ``right.'' \subsection{Data analysis} We discarded low-quality responses based on duration (completed too quickly) and data quality (too many skipped questions). Table \ref{table:participant-demographics} shows the final number of participants per country after data cleaning. For the qualitative analysis, we separately discarded responses that were low quality (empty fields, meaningless text); the number of participants was slightly higher overall (N=4127) since some participants completed that section but did not finish the subsequent quantitative portions of the survey. We analyzed data using R software. We used group means to describe perceived harms and preferred remedies. Levene's tests were significant for both the harm and remedy analyses, indicating that the homogeneity-of-variance assumption was violated. Thus, we used Welch one-way tests, which do not assume equal variances, and posthoc pairwise t-tests, which we deemed appropriate given our sufficiently large sample size \cite{fagerland2012t}. We used the Benjamini–Hochberg (BH) procedure to correct for multiple comparisons \cite{bretz2016multiple}. We also ran linear regressions with harassment - harm and harassment - remedy pairings as the dependent variables and demographics and country as the independent variables (4 harassment scenarios x 4 harm types = 16 harm models; 4 harassment scenarios x 7 remedies = 28 remedy models). We used adjusted R-squared to identify demographic variables that were more likely to explain model variance. Welch test and posthoc tests for harm (16 harassment-harm pairings) and remedy (28 harassment - remedy pairings) comparisons are available in the Appendix.
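To make this pipeline concrete, below is a minimal sketch of the Welch test, the BH-corrected posthoc comparisons, and one of the per-pairing regressions. Our analyses were run in R; this illustration uses Python (pandas, scipy, statsmodels), and the file name and column names (\texttt{harm}, \texttt{harassment\_type}, \texttt{country}, etc.) are hypothetical placeholders rather than our actual variable names.
\begin{verbatim}
# A rough sketch (not the authors' code) of the quantitative pipeline:
# Welch's one-way test, pairwise Welch t-tests with Benjamini-Hochberg
# correction, and an OLS model summarized by adjusted R-squared.
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.oneway import anova_oneway
from statsmodels.stats.multitest import multipletests
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # one row per rating (hypothetical file)

# Welch one-way test across the four harassment scenarios; unlike a
# standard ANOVA, it does not assume equal group variances.
groups = [g["harm"].to_numpy() for _, g in df.groupby("harassment_type")]
welch = anova_oneway(groups, use_var="unequal", welch_correction=True)
print(f"Welch F = {welch.statistic:.1f}, p = {welch.pvalue:.3g}")

# Posthoc pairwise Welch t-tests, corrected with the BH procedure.
pairs, pvals = [], []
for (name_a, a), (name_b, b) in combinations(df.groupby("harassment_type"), 2):
    pairs.append((name_a, name_b))
    pvals.append(stats.ttest_ind(a["harm"], b["harm"], equal_var=False).pvalue)
reject, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for pair, p, sig in zip(pairs, p_adj, reject):
    print(pair, f"adjusted p = {p:.3g}", "sig" if sig else "n.s.")

# One OLS model per harassment scenario x harm type pairing, with the
# U.S. and men as reference categories; rsquared_adj summarizes how much
# variance country and demographics explain.
model = smf.ols(
    "harm ~ C(country, Treatment(reference='United States'))"
    " + C(gender, Treatment(reference='Male'))"
    " + secure + children + trust + courts + police",
    data=df[df["harassment_type"] == "sexual_photos"],
).fit()
print(model.rsquared_adj)
\end{verbatim}
Coefficients from models of this form, fit across all 16 harm and 28 remedy pairings, are what the heatmaps in the Results visualize.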
Regression outputs and confidence intervals for demographic predictors are also available in the Appendix. We analyzed the qualitative responses to the free-response question using an iterative, inductive process. Our approach was to familiarize ourselves with the data, develop a codebook, iteratively refine the codebook, code the data, then revisit the data to make sense of themes from the coding process. To do this, four members of the research team first read through a sample of responses across countries and then co-developed a draft codebook. Three members of the team then coded a sample of responses and calculated interrater reliability (IRR) for each code using Cohen's Kappa. Across the 26 codes tested, Kappa values ranged from -0.1 to 1 with a median of .35. We used the IRR values as well as our manual review of differences to refine the codebook. We removed codes that coders did not interpret consistently, generally those with agreement below about .4, as well as those with low prevalence in the data. We revised the remaining codes, especially those that had lower agreement, and discussed them again. The final codebook contained 21 codes (see Appendix) that focused on moderation practices, user responsibility, government involvement, and other areas of interest. \subsection{Limitations and Mitigation} Cross-country surveys are known to have a range of challenges that are difficult to overcome completely, but they remain useful, even indispensable, if designed and interpreted thoughtfully and cautiously~\cite{kaminska2017survey,kish1994multipopulation,smith2011opportunities}. In our case, the key issues have to do with language, sampling methodologies, and response biases that might have differed across our participants. Language differences were addressed as described above, through a process of careful translation, validation through back-translation, and survey piloting, but topics like non-consensual image sharing are inevitably shaped by the language they are discussed in, and there may be differences in interpretation we did not capture. Sampling methodologies within countries were as consistent as we could make them, but a number of known differences should be mentioned: First, we used three different mechanisms for recruiting -- a market research firm (Cint) for 11 countries; a research survey firm (Prolific) for the United States; and our own outreach for the Caribbean and Mongolia. These mechanisms differ in the size of their pool of participants, as well as their baseline ability to draw a representative sample. Some differences were built into the recruitment process; for example, we explicitly requested a diverse age range of participants with Cint and Prolific, which should have yielded more older adults. In contrast, our researcher recruitment method for the Caribbean and Mongolia simply sought a range of participants through word of mouth, but did not specifically recruit or screen for older adults. Second, while we sought representative samples of the national/regional population in all cases, we know that we came up short. For example, while online surveys are increasingly able to achieve good representation in better-educated countries with high internet penetration, they are known to be skewed toward affluent groups in lower-income, less-connected contexts~\cite{mohorko2013internet,tijdens2016web}.
\textcolor{black}{Oversampling from groups who are active online may be more tolerable for a study of online harassment, but it still overlooks important experiences from those who may be online but less likely to participate in a survey.} Third, differences in local culture and current events are known to cause a range of response biases across countries. Subjective questions about perception of harm, for example, might depend on a country's average stoicism; questions about ``trust in courts'' might be affected by the temporary effects of high-profile scandals. The issues above are common to cross-country survey research, and our mitigation strategies are consistent with the survey methodology literature ~\cite{kaminska2017survey,kish1994multipopulation}. To provide some assurance of our data's validity, we benchmarked against the World Values Survey, on which some of our demographic and social-issues questions were based. We compared responses from our participants to responses from the WVS for countries that had WVS data (China, Colombia, partial India, South Korea, Malaysia, Mexico, Pakistan, Russia, United States). We used the more recent Wave 7 (2017-20) where data was available, with Wave 6 (2010-14) as a back-up. We expected that our responses should correlate somewhat with the WVS, even though there were substantial differences, such as the fact that our sample was recruited via online panels with questions optimized for mobile devices, whereas the WVS sample was recruited door-to-door with oral questions and answer choices. Sample means for our data and the WVS for similar questions are presented in plots in the Appendix. In countries where corresponding data is available, we find that the means in our data about trust -- in police, or in courts -- align with WVS results. We also find the anticipated biases with respect to online surveys and socio-economic status. In particular, our participants reported better health and more appreciation for gender equality than WVS participants. Still, because of the above, we present our results with some caution; specific pair-wise comparisons between countries, in particular, should be interpreted with substantial care. We include specific comparisons primarily in the Appendix for transparency; we focus on patterns in the Results, which we expect to be more reliable, especially patterns within countries and holistic trends across the entire dataset. In the following sections, we strive to be explicit about how our findings can be interpreted. \section{Results} Results are organized into two sections: perceptions of harm associated with online harassment and preferences for remedies associated with online harassment. Each section follows the same structure: first we look at which harassment types are perceived as most harmful and which remedy types are most preferred, respectively, then we examine demographic predictors of perceptions of harm and preferences for remedies, respectively. \subsection{Perceptions of Harm Associated with Online Harassment} First, we differentiate between the four types of harassment. Figure \ref{fig:harm_plot} shows perceptions of overall harm by harassment type.
One-way Welch tests showed that means of perceptions of harm were significantly different, $F$(3, 35313) = 3186.4, $p$ < 0.001, with sexual photos being the highest in harm (M=4.20, SD=1.15), followed by spreading rumors (M=3.42; SD=1.30), malicious messages (M=3.20; SD=1.35), and insults or disrespect (M=2.93; SD=1.36) (see Figure \ref{fig:harm_plot}). Plots and posthoc tests for comparisons by type of harassment by country are available in the Appendix. To display an overall measure of perceived harms associated with online harassment by country, we aggregated each of the four harm measures together -- sexual harassment, psychological harm, physical safety, and family reputation -- for a combined measure of overall harm. Results suggest that participants in Colombia, India, and Malaysia rated perceived harm highest, on average, while participants in the United States, Russia, and Austria perceived it the lowest. Means are presented here and shown visually in Figure \ref{fig:harm_region}: Colombia (M=3.98; SD=1.18); India (M=3.86; SD=1.31); Malaysia (M=3.79; SD=1.21); Korea (M=3.67; SD=1.22); China (M=3.59; SD=1.19); Mongolia (M=3.55; SD=1.29); Cameroon (M=3.50; SD=1.38); Caribbean (M=3.44; SD=1.43); Mexico (M=3.38; SD=1.38); Pakistan (M=3.36; SD=1.36); Saudi Arabia (M=3.34; SD=1.35); Austria (M=2.99; SD=1.42); Russia (M=2.80; SD=1.43); United States (M=2.79; SD=1.45). \begin{figure} \centering \includegraphics[width=.7\linewidth]{harmplot4.jpg} \caption{Perceptions of harm by harassment type} \label{fig:harm_plot} \Description{Plot with error bars from lowest harm to highest: insulted or disrespected, malicious messages, spreading rumors, sexual photos.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{harmcountry3.png} \caption{Perceptions of harm by country} \label{fig:harm_region} \Description{Plot with error bars from lowest harm to highest: United States, Russia, Austria, Saudi Arabia, Pakistan, Mexico, Caribbean, Cameroon, Mongolia, China, Korea, Malaysia, India, Colombia.} \end{figure} Most of the ratings by country were significantly different from each other (one-way Welch tests and posthoc tests are reported in the Appendix), though we remind readers that these differences should be interpreted with caution. In general, wealthier countries per capita perceive lower harm, but beyond that the key takeaway is that there is substantial variance which is unlikely to be explained by one or even a few differences across any of those countries. \subsubsection{Predictors of Perceptions of Harm} Here we home in on more granular differences across harassment types and harm types and how they vary by country and other demographic data. Note that responses from Saudi Arabia participants are excluded from regressions because they did not complete questions about confidence in courts or police. The distribution of R-squared values for the 16 harassment - harm pairings is shown in Figure \ref{fig:harm_ridgeline} (ranging from close to 0 to 18\% variance). Country was the most predictive of perception of harm, though with variance across harassment and harm pairings, as indicated by the multiple peaks in Figure \ref{fig:harm_ridgeline}. Gender was next most predictive, followed by security in neighborhood, number of children, trust of people, trust in courts, and trust in police. We also ran exploratory factor analyses to look for underlying constructs across measured variables; a minimal sketch of this kind of analysis follows.
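As with the pipeline sketch above, the following is a rough, hypothetical illustration of such an exploratory factor analysis, again in Python rather than the R we used; all column names are placeholders, not our actual variables.
\begin{verbatim}
# A rough sketch (not the authors' code) of an exploratory factor
# analysis over the survey items; column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("responses_wide.csv")  # one row per participant
items = ["harm_psych", "harm_safety", "harm_reputation", "harm_sexual",
         "remedy_remove", "remedy_ban", "remedy_apology",
         "trust", "courts", "police", "secure", "children", "age"]

# Standardize the items, then fit a small number of rotated factors.
X = StandardScaler().fit_transform(df[items])
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)

# Loadings (items x factors): large absolute values show which items
# cluster into a shared construct, e.g. the harm items loading together.
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=[f"factor_{i}" for i in range(3)])
print(loadings.round(2))
\end{verbatim}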
When all variables we measured were in the analysis, perceptions of harm and preferred remedies loaded into constructs, as expected, but demographic and value variables did not. Analyses with only the demographic and values variables suggest some trends, but they were not substantial predictors of variance (e.g. trust and courts loaded together; marriage and age inversely loaded together). We show some factor analyses results in the Appendix but do not focus on them further. \begin{figure}[ht] \centering \includegraphics[width=1.05\linewidth]{harm_ridgeline2.png} \caption{Adjusted R squared of demographic variables for predicting harm across 16 harassment scenario x harm types.} \label{fig:harm_ridgeline} \Description{Ridgeline plot (looks like a wave with one or a few peaks) showing which variables predict harm from highest to lowest: country, gender, secure, children, trust, courts, police.} \end{figure} We ran regression analyses for the 16 harassment type - harm pairings using country, gender, security in neighborhood, number of children, trust in other people, trust in courts, and trust in police as independent variables. We used the U.S. as the reference category for country and men as the reference for gender. Complete results with confidence intervals are available in the Appendix. To communicate patterns across models, we present a heatmap (see Figure \ref{figure:harm_heatmap}) of regression coefficients with harassment type - harm pairings on the x-axis and the predictors in Figure \ref{fig:harm_ridgeline} on the y-axis. We also plotted participant responses to the courts, police, security, and trust questions with WVS ratings to benchmark that our participants' attitudes reflect those of a broader population; those plots are in the Appendix. \begin{figure*}[ht] \centering \includegraphics[width=0.98\linewidth]{harm_heatmap_8.jpg} \caption{Heatmap of regression coefficients of harassment types and harm pairings by country and demographics. Darker blue is positive coefficient (i.e. higher harm); darker gold is negative coefficient.} \Description{Heatmap showing darker blue shades for countries, especially for the insult scenario. One exception is the photos scenario and sexual harassment which has gold (i.e. negative) coefficients.} \label{figure:harm_heatmap} \end{figure*} Results for the seven predictors in the regression models are summarized here: \textit{Country}: Participants in most countries perceive higher harm for most pairings than participants in the U.S., with the exception of \textcolor{black}{the sexual photos and sexual harm pairing}, where some countries perceive lower harm than the U.S. \textit{Gender}: Women perceive greater harm than men for all 16 harassment - harm pairings. \textit{Secure}: Participants who were more likely to give low ratings to the question ``How secure do you feel these days in your neighborhood?'' were more likely to perceive higher harm associated with online harassment for 8 of the 16 harassment - harm pairings; however, security in neighborhood is negatively correlated with ratings for the sexual photos - sexual harassment pairing. \textit{Children}: Having more children is a predictor of greater perceptions of harm for 9 of the 16 pairings; the insulted or disrespected - sexual harassment pairing is instead negatively correlated. \textit{Trust}: Participants who were more likely to be low in trust of other people were more likely to perceive higher harm associated with online harassment for 11 of the 16 harassment - harm pairings.
The relationship was stronger for the sexual photos and spreading rumors scenarios, whereas there were no relationships for the malicious messages scenario. \textit{Courts}: Higher trust in courts is correlated with increases in perceptions of harm for 14 of the 16 pairings. The two exceptions are the spreading rumors - sexual harassment and sexual photos - sexual harassment pairings. \textit{Police}: Trust in police is correlated with increases in perceptions of harm for 4 of the 16 pairings. We return to these results in the Discussion. \subsection{Preferences for Remedies Associated with Online Harassment} The prior section presented perceptions of harm; this section presents preferences for remedies. Specifically, we report respondents' perceived desirability of the remedies to address harassment-related harms. First, we differentiate between the remedies themselves. One-way Welch tests showed that means of preferences for remedies were significantly different, $F$(6, 49593) = 1130.9, $p$ < 0.001 (see Figure \ref{fig:remedy_plot}). Removing content and banning offenders are rated highest, followed by labeling, then apologies and rating. Revealing identities and payment are \textcolor{black}{rated} lowest. Posthoc comparisons showed that all pairings were significantly different from each other except for apology and rating: removing (M=4.18; SD=1.12); banning (M=4.07; SD=1.17); labeling (M=4.00; SD=1.19); apology (M=3.72; SD=1.34); rating (M=3.72; SD=1.32); revealing (M=3.56; SD=1.39); paying (M=3.16; SD=1.46). To display overall preferences for remedies associated with online harassment by country, we aggregate the seven remedy types together for a combined measure of overall remedies. Results show that Colombia, Russia, and Saudi Arabia were highest overall in support for remedies while Pakistan, Mongolia, and Cameroon were lowest.
Means are again presented here and shown visually in Figure \ref{fig:remedy_region}: Colombia (M=4.07; SD=1.13); Russia (M=4.03; SD=1.25); Saudi Arabia (M=3.97; SD=1.27); Mexico (M=3.93; SD=1.25); Malaysia (M=3.89; SD=1.20); China (M=3.89; SD=1.07); Caribbean (M=3.86; SD=1.38); Austria (M=3.86; SD=1.36); Korea (M=3.72; SD=1.29); India (M=3.70; SD=1.39); United States (M=3.60; SD=1.50); Cameroon (M=3.57; SD=1.40); Mongolia (M=3.45; SD=1.40); Pakistan (M=3.40; SD=1.41). As with harms, most of the ratings by country were significantly different from each other (one-way Welch tests and posthoc tests are reported in the Appendix), though we again note that the differences should be treated with caution. Regression outputs and confidence intervals for demographic predictors are also available in the Appendix. \begin{figure} \centering \includegraphics[width=.9\linewidth]{remedyplot3.png} \caption{Preferences for remedies by \textcolor{black}{remedy} type} \label{fig:remedy_plot} \Description{Plot with error bars from lowest remedy preference to highest: paying, revealing, rating, apology, labeling, banning, removing.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{remedycountry3.png} \caption{Preferences for remedies by country} \label{fig:remedy_region} \Description{Plot with error bars from lowest remedy preference to highest: Pakistan, Mongolia, Cameroon, United States, India, Korea, Austria, Caribbean, China, Malaysia, Mexico, Saudi Arabia, Russia, Colombia.} \end{figure} \subsubsection{Predictors of Preferences for Remedies} We plotted R-squared values with the same variables used in the harm regression models (see Figure \ref{fig:remedy_ridgeline}). Results are broadly similar to the harm ridgeline plot, though there is less overall variance explained in the remedy plots (0-15\%). Country is most predictive of preference for remedy, followed by number of children, gender, security in neighborhood, trust in police, trust in courts, and trust in other people. We ran regression analyses for the 28 harassment type - remedy pairings (4 harassment types and 7 remedies). \begin{figure}[ht] \centering \includegraphics[width=1.05\linewidth]{remedy_ridgeline2.png} \caption{Adjusted R squared of demographic variables for remedy preferences across 28 harassment scenario x remedy types.} \Description{Ridgeline plot (looks like a wave with one or a few peaks) showing which variables predict remedy preferences from highest to lowest: country, children, gender, secure, police, courts, trust.} \label{fig:remedy_ridgeline} \end{figure} We visually show model results in a heatmap (see Figure \ref{fig:remedy_heatmap}). Results are summarized here: \textit{Country:} Compared to the U.S., most countries tend to prefer payment, apologies, revealing users, and rating users, but are less favorable towards removing content, labeling content, or banning users. These patterns are observed for three of the four harassment types, with the exception of insults or disrespect, where countries tend to prefer all remedies compared to the U.S. \textit{Gender:} Women tend to prefer most remedies more than men do, except for payment, which they are less favorable towards for all four harassment types. \textit{Children:} Having more children is associated with higher preferences for most remedies. \textit{Secure:} Security in neighborhood is negatively associated with preference for remedies for 8 of the 28 pairings, primarily for removing content and labeling content.
\textit{Trust:} Trust in other people is negatively associated with preferences for remedies for 19 of the 28 pairings. \textit{Courts:} Confidence in courts is associated with preference for the payment remedy for all harassment types but few other remedies. \textit{Police:} Confidence in police is not correlated with remedy preferences. \begin{figure*}[ht] \centering \includegraphics[width=0.98\linewidth]{remedy_heatmap.jpg} \caption{Heatmap of regression coefficients of harassment types and remedy pairings by country and demographics. Darker blue is positive coefficient (i.e. higher preference for remedy); darker gold is negative coefficient.} \Description{Heatmap showing preferences for remedies. The heatmap has small squares whose color reflects coefficient value for that variable. An overall visual pattern is that apology, revealing, and rating tend to be dark blue meaning that most countries prefer them to the U.S. The other remedies are gold or more mixed.} \label{fig:remedy_heatmap} \end{figure*} \subsection{Qualitative responses} We asked participants one free response question about how harassment on social media should be addressed. The most prevalent code related to the site being responsible for addressing the problem; nearly 50\% of responses referred to site responsibility, ranging from about 20\% to 60\% across countries. This was most prevalent in Malaysia, Cameroon, and the United States and least prevalent in Mongolia. Responses included content about setting policies, enforcing policy, supporting users, and protecting users. For example, a participant in Korea said: ``The social media company is primarily responsible for it, though the person who harassed others also has responsibility quite a lot.'' As part of this site responsibility, many participants described what they thought the site should do, such as: ``The social media site should give the offender a negative rating and ban them for a specific time period. This time period being months or years or indefinitely; as well as disallowing them from creating further accounts'' (Caribbean). Some participants described why they thought social media sites were responsible, such as this one from Pakistan who said: ``This problem should be solved only by the social media website as they have all the data of the user through which they can take action against it.'' The second most prevalent code, found in nearly 25\% of responses, referred to government involvement, including regulation, police, courts, arrest, prison, criminal behavior, and juries. References to government involvement were highest in China and Pakistan and lowest in Russia, Mexico, and Cameroon. Many participants who mentioned government responsibility for online harassment indicated it should be in collaboration with law enforcement. For example, one participant from Malaysia said: ``The responsible party (social media workers) need to take this issue seriously and take swift action to `ban' the perpetrator and bring this matter to court and the police so that the perpetrator gets a fair return. This is to avoid the trauma suffered by the victim and also reduce mental illness.'' Participants varied in their indication of whether the user or the platform should report the behavior to the government.
One in India said ``First of all the person who has been harassed he should simply go to police station to report this incident then he should report the account and telling his friends to report it then he should mail to instagram some screenshots of that person's chat.'' Some participants focused only on government responsibility, such as one in Colombia saying: ``Having permission for the police to see all your material on the network and electronic devices.'' References to managing content (e.g. removing or filtering content) and account sanctions (e.g. warnings, suspensions, bans) showed up in about 20\% of responses. These were highest in the Caribbean and the United States and lowest in China and Mongolia. Sometimes responses recommended only one step, like content removal, but more often they mentioned a multi-step process for accountability. A participant in China recommended a user rating approach: ``in serious cases, he should be banned and blacklisted, and the stains of his behavior should be recorded on his personal file.'' Many participants proposed policies that required real identities linked to the account to deter harassment and allow punishment. One person in Austria said: ``Login to the networks only with the name known to the operator. Strict consequences, first mark the content, then also delete the profile and transfer the data to the public prosecutor's office.'' About 13\% of responses referred to user responsibility, in which the person who may be experiencing or targeted by harassment should handle it themselves. These responses suggested that people should ignore harassment, stop complaining or whining about it, deal with it, and understand people will disagree. These responses were highest in Colombia and Malaysia and lowest in Austria, Cameroon, and the Caribbean. For example, a participant in Colombia indicated that users should take steps to protect themselves: ``1. Being on social media is everyone's responsibility. 2. In social networks, you should limit yourself in admitting requests from strangers. 3. Remove and block malicious invitations.'' One in Malaysia indicated that people should work out the problems themselves: ``All parties must cast off feelings of hatred and envy towards others. To deal with this problem all parties need to be kind to each other and help each other.'' Another Malaysia participant was more explicit, saying: ``The responsible party is myself, if it happens to me, I will definitely block the profile of the person who is harassing me. The self is also responsible for not harassing others despite personal problems. It is best to complete it face to face.'' Some responses, about 7\%, referred to public awareness or public shaming, which could be through formal media coverage or offender lists or through informal user behaviors. These were highest in the Caribbean and China. One participant in Mongolia said: ``This role belongs to the private organization that runs the first social platform and to the police of the country. Disclosure to the public of the crimes committed by the perpetrators and the number of convictions related to this issue, and the damage caused by the crime.'' About 6\% of responses directly addressed the offender's role in the harassment, indicating that they are responsible and should address the problem and change their behavior. This was most prevalent in Korea and Malaysia.
About 6\% referred to restitution in some way, which could involve the offender paying a fine to the site or the victim, or the victim receiving compensation from the site or offender. About 6\% referred to blocking other users as a remedy. Other codes showed up in around 5\% of responses, including verifying accounts (checking for bots, fake accounts), educating users about appropriate behaviors, and offender apologies for behavior. Apologies, when specified, were often supposed to be public rather than private, such as a participant from Cameroon's response: ``They should demand a public apology from the person, and if the person does not give the apology, the account should be banned.'' A person in China associated the public apology with reputational damage: ``Infringement of my right of reputation requires a public apology to compensate for the loss.'' In terms of account verification, some participants talked about real names or use of IPs. One from Colombia said: ``Social networks should block people's IPs and allow them to have a maximum of 2 accounts per IP, since most of the people who harass do not do it from their main accounts but rather hide inside other personalities that are not them, by Doing this would greatly reduce this type of bullying.'' \section{Discussion} Our findings coalesce into three broad themes about global perceptions of social media harassment harms and remedies: (1) Location has a large influence on perceptions. (2) The causes are complex -- no single factor, nor even a straightforward subset of factors, emerges as a dominant predictor of perceptions of harm. (3) One-size-fits-all approaches to governance will overlook substantial differences in experiences across countries. \subsection{Key Role of Local Cultural Context} Our results suggest that local cultural context plays the greatest role in determining people's perceptions of online harassment among the factors we measured. In our analysis, country emerged as the most predictive of perceptions of harm across harm types and also with respect to remedies. This is striking, especially when considering that country explained more of the variation in perceptions than gender. As is widely understood, women and girls bear a greatly disproportionate brunt of harassment in general~\cite{burkett2015sex, dobson2016sext, bates2017revenge,eckert2020doxxing, barrense2020non}, and though women in each country consistently perceived greater harm than the men in the same country, women's perception of harm globally depended even more on their country. Thus, for example, our data indicates that women in the United States perceive less harm from social media harassment (M=2.99, SD=1.44) than men in China (M=3.47, SD=1.2) or India (M=3.18, SD=1.33). Those are just three data points, and we do not claim that this particular set of comparisons is necessarily reliable, but it illustrates a broader point that we believe is robust from our data: \emph{some} countries' women, on average, perceive less harm under some social media harassment cases than \emph{other} countries' men, on average.
It is unclear what it is about local cultures that has this impact (our findings suggest that there is unlikely to be a simple set of causes), and we also wish to avoid an unresolvable discussion about what exactly constitutes ``culture.'' Yet, it seems safe to conclude that a range of complex social factors that have some coherence at the national/regional level has a profound effect on how citizens of those countries and regions perceive social media harms and remedies. These are also inevitably shaped by policies and regulations in those countries. For example, some of our Malaysia participants said that online harassment should be the responsibility of the ``MCMC.'' The Malaysian Communications and Multimedia Commission is responsible for monitoring online speech, including social media, though it has little power to remove content on platforms hosted outside of Malaysia. Though all countries we studied have some laws governing the extent of critique users can express towards their own governments, these laws vary in severity. For example, in 2015, the Malaysian government asked Facebook and YouTube to take down posts by blogger Alvin Tan which insulted Muslims \cite{bbc2015muslims}. More recently, the Indian government not only sanctioned individual users who critiqued Modi, but it sought to sanction Twitter for not taking down those posts -- Twitter has recently launched a lawsuit against the Indian government in response \cite{cnn2022twitter}. Though insults and disrespect are rated lower in harm than other types of harassment, they are rated higher in some countries in our study, and insults are the most prevalent type of harassment among participants in Google's 22-country survey, suggesting that they may have cumulative harmful effects for users \cite{thomas2021sok}. At the same time, it is important to remember that experiences and concerns within countries inevitably vary and span across boundaries. Our data indicates that reputational harm is rated lower in the U.S. and Austria, and this may be true for the majority in our sample from those countries, but reputational harm can persist within and across boundaries. For instance, Arab women living in the U.S. may deal both with Arab and Western patriarchal structures and orientalism, thereby experiencing a form of intersectional discrimination that requires specific support measures and remedies \cite{al2016influence}. Similarly, refugees and undocumented migrants may be less likely to report online harassment for fear of repercussions to their status in the country \cite{guberek2018keeping}. Though a focus on country-level governance is important, additional work is required to protect and support people within countries who may experience marginalization, despite, or because of, local governance.

\subsection{No Simple Causal Factors for Harm Perception}
The second broad conclusion of our study is that perceptions of harm about online harassment are complex; no simple mechanism, nor any small set of variables, easily explains relative perceptions among countries. Harm perceptions might, for example, reasonably be expected to correlate with how much people trust others, how safe they feel in their own neighborhoods, or how much they trust institutions like the police and the courts.
Yet, our results find no such easy explanations: sense of neighborhood security correlated positively with greater perceptions of harm for some forms of harassment, but negatively for the pairing of non-consensual sharing of sexual photos with sexual harassment; number of children predicted greater harm for half of the harassment--harm pairings, but not the other half. Some correlations did emerge in our data, but it is not straightforward to interpret them. For example, trust in courts was associated with perceptions of harm in a majority of our countries. This pattern is surprising, and could indicate a desire to see online harassment recognized as harmful in order to enable greater judicial oversight over those harms. Interestingly, trust in courts is mostly not correlated with the remedies we measured, \textit{except} for payment, which is negatively correlated. It may be that lower trust in courts to procure compensation corresponds to a higher reliance on platforms, but we would need additional data to confirm this interpretation. Somewhat easier to explain is that trust in other people was correlated with lower perception of harm in most cases. It may be that people who are low in trust in others assume online harassment will be severe and persistent. There was substantial variance in trust levels between countries, with the Caribbean being lowest and China being highest. This suggests that harms associated with online harassment may reflect offline relationships and communities. Our results show that there is little or no relationship between confidence in police and harm or remedies, which may indicate that people do not see online harassment as a problem that police can or should address. This interpretation also aligns with previous research which has highlighted how police are often an inadequate organization to deal with concerns around harassment and online safety, and can sometimes cause more harm \cite{sambasivan2019they}. Instead, experts have called for investments in human rights and civil society groups who are specifically trained to support people in the communities who experience harassment \cite{york2021silicon}. Such experts could also mediate between affected people and other institutions such as the police and legal institutions. An exploration of factors we did not consider may find simpler or more coherent causal explanations for perceptions of harm and remedies, but we conjecture that the complexity is systemic. Online harassment, though relatively easy to discuss as a single type of phenomenon, touches on many social, cultural, political, and institutional factors, and the interplay among them is correspondingly complex. A highly patriarchal honor culture that leads women to fear even the least sensitive of public exposures might be partially countered by effective law enforcement that prioritizes those women's rights; deep concerns about one's children might be offset by a high level of societal trust; close-knit communities might on the one hand provide victims with healthy support, but they might also judge and impose harsh social sanctions.

\subsection{One Size Does Not Fit All in Online Governance}
The four types of harassment we studied all differed from each other in perceived harm, both in type of harm and severity of that harm. Non-consensual sharing of sexual photos was highest in harm, consistent with work on sexual harms that has focused on non-consensual sharing of sexual images \cite{citron2021privacy, goldberg2019nobody,dad2020most}.
This work has advocated for legal protection and recourse for people who are victims of non-consensual image sharing and has brought attention to the devastating consequences it can have on victims' lives. Much of this transformative work in U.S. contexts focuses on sexual content like nude photos, the non-consensual sharing of which is now prohibited in some states in the U.S. (though there is no federal law) \cite{citron2019evaluating}. However, in many parts of the world there are consequences for sharing photos of women even if they do not contain nude content. Our findings show substantial variance in perceptions of reputational harm as well as physical harm between countries. India (medians of 4.09 and 4.01, respectively) and Colombia (4.02, 4.24) are highest in both of those categories, whereas the U.S. is lowest (2.73, 2.69). Our results corroborate Microsoft's Digital Civility Index, which found high rates of incivility in Colombia, India, and Mexico (with the U.S. being relatively low), though Russia was also high, which deviates from our results. Google's survey similarly shows Colombia, India, and also Mexico as highest in prevalence of hate, abuse, and harassment \cite{thomas2021sok}. While shame associated with reputation persists globally, it may be a particularly salient factor where cultures of honor are strong \cite{rodriguez2008attack}. In qualitative studies conducted in South Asian countries, including India, Pakistan, and Bangladesh, participants linked reputational harm with personal content leakage and impersonation, including non-consensual creation and sharing of sexually explicit photos \cite{sambasivan2019they}. Because women in conservative countries like India are expected to represent part of what the family considers its ``honor,'' reputational harm impacts not just the individual's personal reputation but also their family and community's reputation. As one South Asian activist described technology-facilitated sexual violence (quoted from \cite{maddocks2018non}): \textit{``A lot of times, there's an over-emphasis on sexually explicit photos. But in [this country], just the fact that somebody is photographed with another boy can lead to many problems, and we've seen honor killings emerging from that.''} In these cases, women are expected to represent part of what the family considers its ``honor'' \cite{sambasivan2019they}, and protecting this honor becomes the role of the family, and especially men in the family, who seek to regulate behavior to preserve that honor. Unfortunately, when a person becomes a victim of online abuse, it becomes irrelevant whether she is guilty or not; what matters is other people's perception of her guilt. At an extreme, families will engage in honor killings of women to preserve the honor of the family \cite{QandeelBaloch, jinsook2021resurgence, goo2019}. When women experience any kind of abuse, they may need to bring men with them to file reports, and then they may be mocked by officials who further shame and punish them for the abuse they experienced \cite{sambasivan2019they}. In Malaysia, legal scholars raise concerns about the inadequacy of the law in addressing cyberstalking in both the National Cyber Security Policy and the Sustainable Development Goals \cite{rosli2021non}. Sexual harassment, sexual harm, and reputation are strongly linked, and the threat of reputational damage empowers abusers. Many European countries have taken proactive stances against online harassment, but the efficacy of their policies is not yet known.
Unfortunately, any efforts to regulate content also risk threats to free expression, such as TikTok and WeChat's suppression of LGBTQ+ topics \cite{walker2020more}. Concerns about human rights and civil rights may be especially pronounced in countries where there is not sufficient mass media interest to publicize them, such as the rape of a girl in India by a high-profile politician that did not garner attention because it occurred outside of major cities \cite{guha2021hear}. In Latin American contexts, there is similar evidence that societies that place a premium on family reputation are likely to be afflicted by higher rates of interpersonal harm \cite{osterman2011culture, dietrich2013culture}. For example, constitutional laws against domestic violence in Colombia decree that family relations are based on the equality of rights and duties for all members, and that violations are subject to imprisonment. Yet recent amendments have called for retribution against domestic violence to be levied \textit{only} when charges with more severe punishment do not apply. Human rights activists from the World Organisation Against Torture have claimed that such negligent regulations send the message that domestic violence, including harassment, is not as serious as other types \cite{omct2004violencecolombia, randall2015criminalizing}. Even with the existence of laws on domestic violence in countries like Colombia and Mexico, prevailing attitudes view harassment as a ``private'' matter, perhaps because of traditional norms that value family cohesion over personal autonomy. One speculation is that fear of familial backlash for reporting harassment may explain why survey respondents from these countries do not find exposing their abusers online satisfying.

\subsection{Recommendations for Global Platform Design and Regulation}
Our recommendations for global platform design and regulation \textcolor{black}{build on work done by myriad civil society groups and follow from our own findings. In short, harms associated with online harassment are greater in non-U.S. countries, and platform governance should be more actively co-shaped by community leaders in those countries.} Above all, we discourage any idea that a \emph{single} set of platform standards, features, and regulations can apply across the entire world. While a default set of standards might be necessary, the ideal would be for platforms and regulations to be further customized to local context. \textcolor{black}{A reasonable start is for platforms to regulate at the country level, though governance should be sensitive to the blurriness of geopolitical and cultural boundaries.} Digital technology is highly customizable, and it would be possible to have platform settings differ by country. Similarly, regulation of social media, as well as policy for harm caused through online interaction, should also be set locally. To a great extent the latter already happens, as applicable policy tends to be set at a national level. It should also be the case that technology companies engage with local policymakers, without assuming that one-size-fits-all approaches are sufficient. According to the findings discussed above, local cultural context can play an important role in helping platforms define harassment and prioritize the online speech and behavior that will likely have the most impact in a given local context.
For example, posting non-consensual images, whether sexual or not, can have a more severe impact in countries where women's visibility and autonomy are contentious issues. Customizing definitions of harms would also align with the task of determining the effectiveness of a remedy. If certain behaviors are criminalized offline, that would likely have an impact on how seriously platforms should take online manifestations of such harassment, and how easy it would be for users in that locality to seek help from police or courts. Lastly, due to the great variance in how local laws are shaped and implemented, platforms can play a key role in determining the effectiveness of rules as applied to them and their users. The resulting observations about what laws are effective on the ground can help platforms both customize their own policies and engage with stakeholders more productively. Platform features, settings, and regulation ought to be determined by multistakeholder discussions with representation from local government, local civil society, researchers, and platform creators. Input from entities familiar with the local laws, customs, and values is essential, \textcolor{black}{as others have recommended (e.g.~\cite{cammaerts2020digital,york2021silicon})}. As our study also finds, the specifics of how users respond to online harassment are localized and not amenable to easily generalized explanation. Of course, such discussions must be designed well. For example, we recommend that platform creators -- who have international scope yet often tend toward Western, educated, industrial, rich, and democratic (WEIRD) sensibilities~\cite{henrich2010weirdest,linxen2021weird} -- take a back seat and turn to local community leaders to lead these discussions. Platform creators have the power to determine final features anyway; additional exertion of power in such discussions will suppress local voices. Tech companies must also be willing to adopt the resulting recommendations~\cite{powell2013argument}. Beyond platform and regulatory customization within countries, there should be transnational bodies that consider things at a global level, and which might also serve to mediate issues that bring countries into contention. Technology companies already sponsor such bodies -- for instance, Meta has a Stakeholder Engagement Team that includes policymakers, NGOs, academics, and outside experts who support the company in developing Facebook community standards and Instagram community guidelines \cite{meta2022stakeholder}. Even better would be for such bodies to have more independence, set up for autonomous governance via external organizations. We recognize that customization by country raises new challenges, such as the question of whose policy should take precedence when cross-country interaction occurs on a platform; or how platforms should handle users who travel across countries (or claim to do so); or the substantial problem, though not the focus of this paper, of how to address authoritarian regimes that are not aligned with human rights \cite{york2021silicon}. It will take work, and diplomacy, to resolve these issues, but if the aim is to prevent or mitigate harassment's harms in a locally appropriate way, the effort cannot be avoided. As to what kinds of customization such bodies might suggest, our study gestures toward features and regulations that might differ from place to place; a simple illustration of what country-level defaults could look like in practice follows below.
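To make the idea of per-country customization concrete, here is a minimal, purely hypothetical sketch (our illustration only; the field names, country code, and thresholds are invented, and no existing platform API is implied) of how defaults such as tagging-consent requirements and content-takedown thresholds might be encoded per country:
\begin{verbatim}
# Hypothetical sketch of per-country platform defaults (all names and values
# are invented for illustration; no real platform API is implied).
from dataclasses import dataclass

@dataclass
class CountryPolicy:
    requires_tag_consent: bool      # must tagged/recognized parties consent first?
    takedown_report_threshold: int  # reports needed to trigger review; 1 = act on any

POLICIES = {
    "default": CountryPolicy(requires_tag_consent=False, takedown_report_threshold=5),
    "XX": CountryPolicy(requires_tag_consent=True, takedown_report_threshold=1),
}

def policy_for(country_code: str) -> CountryPolicy:
    """Fall back to the global default when no local policy has been negotiated."""
    return POLICIES.get(country_code, POLICIES["default"])
\end{verbatim}
The point of such a structure is not the code itself but the governance choice it encodes: the policy table would be populated through the multistakeholder discussions described above, not by platform engineers alone.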
For example, there appears to be wide variation across countries in terms of what is considered invasive disclosure. Russians generally care much less than Pakistanis whether photographs of an unmarried/unrelated man and woman are posted publicly. Thus, in some contexts, the default setting might require the explicit consent of all tagged, commented, or (automatically) recognized parties for a photo or comment to be posted. Another possibility is to adjust the ease with which a request to take down content is granted. The possibilities span a range from (A) automatically taking down any content as requested by \emph{anyone} to (Z) refusing to take down any content regardless of the volume or validity of requests. In between, there is a rich range of possibilities that could vary based on type of content and on country. With respect to how platforms manage content-removal requests, they might establish teams drawn from each geographic context, so that decision-makers address requests from the cultures they are most familiar with (and based on standards recommended by the aforementioned local bodies).

\section{Conclusion}
We studied perceptions of harm and preferences for remedies associated with online harassment in 14 countries around the world. Results show that all countries perceive greater harm from online harassment than the U.S., and that non-consensual sharing of sexual photos is highest in harm, while insults and disrespect are lowest. In terms of remedies, participants prefer removing content and banning users over revealing identities and payment, though they are more positive than not about all remedies we studied. Country is the biggest predictor of ratings, with people in non-U.S. and lower-income countries perceiving higher harm associated with online harassment in most cases. Most countries prefer payment, apologies, revealing identities, and rating users more than the U.S. does, but are less favorable towards removing content, banning users, and labeling content. One exception to these trends is non-consensual sharing of sexual photos, which the U.S. rates more highly as sexual harassment than other countries do. We discuss the importance of local contexts in governing online harassment, and emphasize that experiences cannot be easily disentangled or explained by a single factor.

\begin{acks} This material is based upon work supported by the National Science Foundation under Grants \#1763297 and \#1552503 and by a gift from Instagram. We thank members of the Social Media Research Lab for their feedback at various stages of the project. We thank Anandita Aggarwal, Ting-Wei Chang, Chao-Yuan Cheng, Yoojin Choi, Banesa Hernandez, Kseniya Husak, Jessica Jamaica, Wafa Khan, and Nurfarihah Mirza Mustaheren for their contributions to this project. We thank Michaelanne Thomas, David Nemer, and Katy Pearce for early conversations about these ideas. \end{acks}

\bibliographystyle{ACM-Reference-Format}
\section{Introduction} \setcounter{footnote}{0} The Calogero Model (CM) \cite{Calogero:1969xj} - \cite{Olshanetsky:1981dk} is a well-known exactly solvable many-body system, both at the classical and quantum levels. It describes $N$ particles (considered indistinguishable at the quantum level) on the line, which interact through an inverse-square two-body interaction. Its quantum Hamiltonian is
\begin{equation} \label{Hcalogero}
H = - \frac{1}{2 m} \sum_{i=1}^{N} \frac{{\partial}^{2}}{\partial {x_{i}}^{2}} + \frac{\lambda (\lambda - 1)}{2 m} \sum_{i \neq j }^{N} \frac{1}{{(x_{i} - x_{j})}^{2}}\,,
\end{equation}
where $m$ is the particles' mass, and the dimensionless coupling constant $\lambda$ parametrizes the inverse-square interaction between pairs of particles.\footnote{Note that we did not include in (\ref{Hcalogero}) a confining potential. This is not really a problem, as we can always add a very shallow confining potential to regulate the problem (in the case of purely repulsive interactions), or else, consider the particles confined to a very large circle (i.e., consider (\ref{Hcalogero}) as the large radius limit of the Calogero-Sutherland model \cite{Sutherland:1971ep}). We shall henceforth tacitly assume that the system is thus properly regularized at large distances.} The CM and its various descendants continue to draw considerable interest due to their many diverse physical applications. A partial list of these applications can be found, for example, in the introductory section of \cite{BFM}. For recent reviews on the Calogero- and related models see, e.g., \cite{Polychronakos:1999sx, Polychronakos:2006}. In addition, for a recent review on the collective-field and other continuum approaches to the spin-Calogero-Sutherland model, see \cite{Aniceto-Jevicki}. In the present paper we concentrate on the thermodynamic limit of the CM. In this limit the system is amenable to a large-$N$ collective-field formulation \cite{sakita, Jevicki:1979mb, JFnmm}. As is well-known, the collective theory offers a continuum field-theoretic framework for studying interesting aspects of many-particle systems. Clearly, a description of the particle system in terms of continuous fields becomes an effectively good one in the high-density limit. In this limit the mean interparticle distance is much smaller than any relevant physical length-scale, and the $\delta$-function spikes in the density field (\ref{collective}) below can be smoothed-out into a well-behaved continuum field. All this is in direct analogy to the hydrodynamical effective description of fluids, which replaces the microscopic atomistic formulation. Of course, the large density limit means that we have taken the large-$N$ limit, as was mentioned above. The collective-field Hamiltonian for the CM (\ref{Hcalogero}) is given by \cite{AJL}
\begin{equation}\label{Hcollective}
H_{coll} = \frac{1}{2 m} \int dx\, \pa_x\pi(x)\, \rho(x)\, \pa_x\pi(x) + \frac{1}{2 m} \int dx\, \rho(x) {\left( \frac{\lambda - 1}{2} \frac{\partial_{x} \rho}{\rho} + \lambda \pv \int \frac{ dy \rho(y)}{x - y} \right)}^{2} + H_{sing}\,,
\end{equation}
where $ \; H_{sing} \; $ denotes a singular contribution \cite{Andric:1994su}
\begin{equation}\label{Hsing}
H_{sing} = - \frac{\lambda}{2 m}\,\int dx\, \rho(x)\,\partial_{x}\left. \frac{P}{x - y} \right|_{y = x} - \frac{\lambda - 1}{4 m}\,\int dx\, {\partial_{x}}^{2} \left. \delta(x - y) \right|_{y = x}\,,
\end{equation}
and $ \; P \;$ is the principal part symbol.
Here,
\begin{equation}\label{collective}
\rho(x) = \sum_{i = 1}^{N} \delta( x - x_{i})
\end{equation}
is the collective -- or density -- field, and
\begin{equation}\label{momenta}
\pi(x) = - i \frac{\delta}{\delta \rho(x)}
\end{equation}
is its canonically conjugate momentum. It follows trivially from (\ref{collective}) that the collective field is a positive operator
\begin{equation}\label{positiverho}
\rho(x) \geq 0\,,
\end{equation}
and that it obeys the normalization condition
\begin{equation}\label{conservation}
\int\limits_{-\infty}^\infty\,dx \,\rho (x) = N\,.
\end{equation}
The latter constraint is implemented by adding to (\ref{Hcollective}) a term $\mu\left(\int\limits_{-\infty}^\infty\,dx \,\rho (x) - N\right)$, where $\mu$ is a Lagrange multiplier (the chemical potential). The first term in (\ref{Hsing}) is proportional to $\rho(x)$. Therefore, its singular coefficient $-{\lambda\over 2m}\partial_{x}\left. \frac{P}{x - y} \right|_{y = x} $ amounts to a shift of the chemical potential $\mu$ by an infinite constant. The last term in (\ref{Hsing}) is, of course, a field-independent constant - an infinite shift of energy. In order for this paper to be self-contained, we have briefly summarized the derivation of the collective-field Hamiltonian (\ref{Hcollective}) in Appendix C. It is worth mentioning at this point that the Calogero model enjoys a strong-weak-coupling duality symmetry \cite{Minahan:1994ce, halzirn}. At the level of the collective Hamiltonian (\ref{Hcollective}), these duality transformations read
\begin{equation}\label{duality}
\tilde\lambda = {1\over\lambda}\,,\quad \tilde m = -{m\over\lambda}\,,\quad \tilde \mu = -{\mu\over\lambda}\,;\quad \tilde\rho(x) = -\lambda\rho(x)\,,\quad {\rm and}\quad \tilde\pi(x) = -{\pi(x)\over\lambda}\,,
\end{equation}
and it is straightforward to see that these transformations leave (\ref{Hcollective}) (including the chemical potential term) invariant. The minus signs which occur in (\ref{duality}) are all important: We interpret all negative values of the parameters and densities as those pertaining to holes, or antiparticles. Thus, the duality transformations (\ref{duality}) exchange particles and antiparticles. (For more details see e.g. Section 3 of \cite{BFM}, and references therein.) It is well-known \cite{Jevicki:1979mb} that to leading order in the ${1\over N}$ expansion, the collective dynamics of our system is determined by the classical equations of motion resulting from (\ref{Hcollective}). The simplest solution of these equations is the constant condensate $\rho (x) = \rho_0$ (and $\pi (x) = 0$) corresponding to the ground state. More interesting solutions of these equations include various types of periodic density waves and soliton configurations \cite{Polychronakos:1994xg, Andric:1994nc, Sen:1997qt}. As we explain in Section 2 below, these periodic density waves can be thought of as a crystal made of the localized soliton solution. Recently, density wave configurations of this type were studied, among other things, in \cite{ajj2}, where a certain regulator, first introduced in \cite{Andric:1994nc}, was used to tame the effective collective potential. The BPS-equations associated with the regulated potential were then converted into a Riccati equation, which was solved explicitly. Such static periodic density waves are the focus of the present paper as well.
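Since the duality transformations (\ref{duality}) will be used repeatedly in what follows, we note in passing (an illustrative aside of ours, not part of the original discussion) that they square to the identity, i.e., the duality is an involution, consistent with its interpretation as a particle-antiparticle exchange. A minimal symbolic sketch:
\begin{verbatim}
# Illustrative check that the duality map (label: duality) squares to the
# identity, i.e., applying it twice restores all parameters and fields.
import sympy as sp

lam, m, mu, rho, pi = sp.symbols('lambda m mu rho pi')

def dual(lam, m, mu, rho, pi):
    return (1/lam, -m/lam, -mu/lam, -lam*rho, -pi/lam)

twice = dual(*dual(lam, m, mu, rho, pi))
print([sp.simplify(a - b) for a, b in zip(twice, (lam, m, mu, rho, pi))])
# -> [0, 0, 0, 0, 0]
\end{verbatim}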
As in \cite{ajj2}, we convert the BPS-equations associated with the equations of motion of (\ref{Hcollective}) into an explicitly solvable Riccati equation. However, unlike \cite{ajj2}, we avoid introducing any unconventional regulators in (\ref{Hcollective}). In addition, we also construct non-BPS solutions of the equations of motion, which are simple shifts of BPS-solutions by a constant, and compute their energy densities. That constant is fixed by the equation of motion and turns out to be either the maximum or the minimum value of the corresponding BPS-solution. Thus, these non-BPS density profiles vanish periodically and coincide with the large-amplitude waves reported in \cite{Sen:1997qt}, albeit without too many details of their construction. We believe that the constructive way in which we derive our static periodic BPS and non-BPS configurations complements the discussion in \cite{Polychronakos:1994xg, Sen:1997qt, ajj2}. Since these non-BPS solutions vanish periodically, we can also refer to them as vortex crystals, as they constitute a periodic generalization of the vortex solution of \cite{Andric:1994nc}. In the present paper we also show how these known solitary and periodic wave-solutions appear in the collective field theory of the two-family generalization of the CM, under very special conditions on the coupling constants. The two-family Calogero model is a generalization of (\ref{Hcalogero}) to two species of identical particles. The Hamiltonian of this model reads \cite{Meljanacstojic}
\begin{eqnarray}\label{h1}
H = &-& \frac{1}{2 m_{1}} \sum_{i=1}^{N_{1}} \frac{{\partial}^{2}}{\partial {x_{i}}^{2}} + \frac{\lambda_{1} (\lambda_{1} - 1)}{2 m_{1}} \sum_{i \neq j }^{N_{1}} \frac{1}{{(x_{i} - x_{j})}^{2}}\nonumber\\{}\nonumber\\
&-& \frac{1}{2 m_{2}} \sum_{\alpha = 1}^{N_{2}} \frac{{\partial}^{2}}{\partial {x_{\alpha}}^{2}} + \frac{\lambda_{2} (\lambda_{2} - 1)}{2 m_{2}} \sum_{\alpha \neq \beta }^{N_{2}} \frac{1}{{(x_{\alpha} - x_{\beta})}^{2}}\nonumber\\{}\nonumber\\
&+& \frac{1}{2} \left( \frac{1}{ m_1} + \frac{1} { m_2 } \right) \lambda_{12}(\lambda_{12} -1) \sum_{i = 1}^{N_{1}}\sum_{\alpha = 1 }^{N_{2}} \frac{1}{(x_{i}-x_{\alpha})^{2}}\,.
\end{eqnarray}
Here, the first family contains $ \; N_{1} \; $ particles of mass $ \; m_{1} \; $ at positions $ \; x_{i}, \; i = 1,2,...,N_{1}, \; $ and the second one contains $ \; N_{2} \; $ particles of mass $ \; m_{2} \; $ at positions $ \; x_{\alpha}, \; \alpha = 1,2,...,N_{2}. $ All particles interact via two-body inverse-square potentials. The interaction strengths within each family are parametrized by the coupling constants $ \; \lambda_{1} \; $ and $ \; \lambda_{2}, \; $ respectively. The interaction strength between particles of the first and the second family is parametrized by $ \; \lambda_{12}.$ In (\ref{h1}) we imposed the restriction that there be no three-body interactions, which requires \cite{Meljanacstojic}-\cite{Meljsams}
\begin{equation} \label{threebody}
\frac{\lambda_{1}}{{m_{1}}^{2}} = \frac{\lambda_{2}}{{m_{2}}^{2}} = \frac{\lambda_{12}}{m_{1} m_{2}}.
\end{equation}
It follows from (\ref{threebody}) that
\begin{equation}\label{lambda12}
\lambda_{12}^2 = \lambda_1\lambda_2\,.
\end{equation}
(Indeed, (\ref{threebody}) gives $\lambda_{12} = \lambda_1 m_2/m_1$ as well as $\lambda_{12} = \lambda_2 m_1/m_2$, and multiplying these two relations yields (\ref{lambda12}).) We assume that (\ref{threebody}) and (\ref{lambda12}) hold throughout this paper wherever we discuss the two-family CM. The Hamiltonian (\ref{h1}) describes the simplest multi-species Calogero model for particles on the line, interacting only with two-body potentials.
In \cite{BFM} we studied the collective field theory of the two-family CM. The corresponding collective Hamiltonian is
\begin{eqnarray}\label{Hcollective2F}
H_{coll} &=& \frac{1}{2 m_{1}} \int dx\, \pa_x\pi_1(x)\, \rho_{1}(x)\, \pa_x\pi_1(x)\nonumber\\{}\nonumber\\
&+& \frac{1}{2 m_{1}} \int dx \rho_{1}(x) {\left( \frac{\lambda_{1} - 1}{2} \frac{\partial_{x} \rho_{1}}{\rho_{1}} + \lambda_{1} \pv \int \frac{ dy \rho_{1}(y)}{x - y} + \lambda_{12} \pv \int \frac{dy \rho_{2}(y)}{x - y} \right)}^{2}\nonumber\\{}\nonumber\\
&+& \frac{1}{2 m_{2}} \int dx \,\pa_x\pi_2(x)\,\rho_{2}(x) \,\pa_x\pi_2(x) \nonumber\\{}\nonumber\\
&+& \frac{1}{2 m_{2}} \int dx \rho_{2}(x) {\left( \frac{\lambda_{2} - 1}{2} \frac{\partial_{x} \rho_{2}}{\rho_{2}} + \lambda_{2} \pv \int \frac{ dy \rho_{2}(y)}{x - y} + \lambda_{12} \pv \int \frac{dy \rho_{1}(y)}{x - y} \right)}^{2}\nonumber\\{}\nonumber\\
&+& H_{sing}\,,
\end{eqnarray}
which is a straightforward generalization of (\ref{Hcollective}). Here $\rho_a (x)$ are the collective density fields of the $a$th family ($a=1,2$), and $\pi_a(x)$ are their conjugate momenta. As in (\ref{Hcollective}), the term $ \; H_{sing} \; $ denotes a singular contribution which is a straightforward generalization of the one-family expression (\ref{Hsing}). Given that there are $N_a$ particles in the $a$th family, the densities $\rho_a(x)$ must be normalized according to
\begin{equation}\label{conservation2F}
\int\limits_{-\infty}^\infty\, dx\, \rho_{1}(x) = N_{1}\,,\quad\quad \int\limits_{-\infty}^\infty\, dx \rho_{2}(x) = N_{2}\,.
\end{equation}
As in the one-family case, these normalization conditions are implemented by adding to (\ref{Hcollective2F}) the chemical-potential terms $\sum_{a=1,2}\mu_a\left(\int\limits_{-\infty}^\infty\,dx \,\rho_a (x) - N_a\right)$\,. As was discussed in \cite{BFM}, the collective Hamiltonian (\ref{Hcollective2F}) is invariant under an Abelian group of strong-weak-coupling dualities, which is a generalization of the single-family case (\ref{duality}). A remarkable consequence of these duality symmetries (see Section 3.1 of \cite{BFM} for more details) is that when one sets
\begin{equation}\label{SOHI}
\lambda_1\lambda_2 =1\,,\quad\quad \lambda_{12} = -1
\end{equation}
in (\ref{threebody}), the two-family CM (\ref{Hcollective2F}) becomes similar, in some sense, at the level of collective field theory, to the original single-family CM, with a collective Hamiltonian effectively given by (\ref{Hcollective}), for a single effective density. More precisely, this similarity manifests itself in the fact that at the special point (\ref{SOHI}), the original Hamiltonian (\ref{Hcollective2F}) can be mapped by these duality symmetries onto a two-family collective Hamiltonian in which the two families are still {\em distinct}, but have common mass and two-body interaction couplings, and therefore, the two densities can be combined into a certain effective one-family density $\rho_{eff}$. In fact, at these special points, the classical densities (i.e., the static solutions $\rho_1(x)$ and $\rho_2(x)$ of the equations of motion associated with (\ref{Hcollective2F})) turn out to be proportional to each other, and of opposite signs.
Thus, for example, for $m_2 = -{m_1\over\lambda_1} < 0$, the common parameters mentioned above are $\lambda=\lambda_1$ and $m=m_1$, leading to an effective one-family density
\begin{equation}\label{rhoeff}
\rho_{eff} = \rho_1 -{\rho_2\over\lambda_1}\,,
\end{equation}
which satisfies the static equation of motion of the single-family model (\ref{Hcollective}) with these common parameters, whereas for $m_1 = -{m_2\over\lambda_2} < 0$, one obtains similar relations, but with the two families interchanged. (Negative masses and densities in these formulas are interpreted as quantities corresponding to holes rather than particles, as was mentioned above.) To conclude this introduction, it is proper to mention that the Heisenberg equations of motion of the collective field $\rho (x)$ and its conjugate momentum $\pi (x)$ may be interpreted as the isentropic hydrodynamic flow equations of an Eulerian fluid \cite{hydro} (see also \cite{JFnmm}), and the latter may be associated with the completely integrable and soliton-bearing Benjamin-Ono equation, both at the classical level \cite{Jevicki:1991yi, abanov1} and the quantum level \cite{abanov}. This paper is organized as follows: In Section 2 we solve the static BPS equation associated with the one-family collective Hamiltonian (\ref{Hcollective}) by converting it into a Riccati equation which we then solve explicitly. The solution is a static periodic density wave - the finite amplitude wave solution of \cite{Polychronakos:1994xg}. Conversion of the BPS equation into a Riccati equation is achieved by considering the complex-valued resolvent $\Phi(z)$ associated with the positions of the $N$ particles on the line (see Eq.(\ref{Phi})), whose boundary value, as the complex variable $z$ approaches the real axis, is a linear combination of the density field $\rho (x)$ and its Hilbert-transform $\rho^H(x)$ (see Eq. (\ref{Phipm})) \cite{ajj2, abanov}. That the latter combination satisfies the Riccati equation then follows from the BPS-equation and its Hilbert-transform. We then study various limits of the periodic solution. We conclude Section 2 by showing that the coupled BPS-equations associated with (\ref{Hcollective2F}) at the special point (\ref{SOHI}) in parameter space do indeed collapse into a single-family BPS equation. In Section 3 we consider the static limit of the equation of motion associated with (\ref{Hcollective}) - namely, the full variational equation. Every solution of the BPS-equation is, of course, a solution of the full variational equation. It is more challenging to find non-BPS solutions of the latter. We seek such solutions in the form of BPS configurations shifted by a constant, as was mentioned above. For each of the cases $\lambda >1 $ and $0 < \lambda <1$ we find two types of solutions, namely, a positive periodic density wave (a vortex crystal) and a negative one (an anti-vortex crystal). We discuss how these solutions map onto each other under the duality transformations (\ref{duality}). Then, we discuss the energy density of these non-BPS solutions, averaged over a period. We end Section 3 by showing that the coupled variational equations associated with (\ref{Hcollective2F}) at the special point (\ref{SOHI}) in parameter space collapse into a single-family variational equation. For the sake of completeness, and also for future use, we provide and prove in Appendix A a compendium of useful identities involving Hilbert-transforms.
In Appendix B we note and also resolve a mathematical paradox associated with the variational equation. It has to do with the trilocal term in the density fields obtained by expanding the square in (\ref{Hcollective}). In many papers on the collective approach to the Calogero model, that trilocal term is converted into a local $\rho^3 (x)$ term by employing a certain identity among distributions, Eq.(\ref{PPdelta}). However, strictly speaking, that identity is valid only for distributions acting on test functions which are integrable along the whole real line. The periodic density profiles discussed in this paper are certainly not of this type. Nevertheless, they arise correctly as solutions of the variational equation associated with the alternative form of the collective potential containing the $\rho^3 (x)$ term, given in Eq.(\ref{Vcoll1}), as they do, for example, in the pioneering work \cite{Polychronakos:1994xg}, where these periodic density waves were discovered. The resolution of this paradox lies in a proper readjustment of the chemical potential enforcing the constraint (\ref{conservation}). Finally, in order for this paper to be self-contained, we briefly summarize in Appendix C the derivation of the collective-field Hamiltonian (\ref{Hcollective}) from (\ref{Hcalogero}).
\section{Periodic BPS Density Waves: Soliton Crystals}
The Hamiltonian (\ref{Hcollective}) is essentially the sum of two positive terms\footnote{\label{fn5}Recall the constraint (\ref{conservation}) and our comment concerning $H_{sing}$ following (\ref{conservation}). In addition, as was mentioned above, the external confining potential was set to zero. Thus, the first two terms in (\ref{Hcollective}) comprise the BPS limit of the model.}. Its zero-energy classical solutions are zero-momentum, and therefore time-independent, configurations of the collective field (\ref{collective}), which are also solutions of the BPS equation
\begin{equation}\label{BPS}
B[\rho] \equiv \frac{\lambda - 1}{2} \frac{\partial_{x} \rho}{\rho} + \lambda \pv \int \frac{ dy \rho(y)}{x - y} = 0\,.
\end{equation}
It is easy to check that the duality transformation (\ref{duality}) maps a solution $\rho(x)$ of (\ref{BPS}) with coupling $\lambda$ onto another solution $\tilde\rho(x) = -\lambda\rho(x)$ of that equation with coupling $\tilde\lambda = {1\over\lambda}$. As we shall see below in Eq. (\ref{rhorhoH}), all solutions of (\ref{BPS}) are of definite sign, and never vanish along the real axis. Thus, such a positive solution of (\ref{BPS}) is mapped by (\ref{duality}) onto a negative solution, and vice-versa. The BPS equation (\ref{BPS}) may be written alternatively as
\begin{equation}\label{BPS1}
(\lambda -1)\,\partial_x\rho = 2\pi\lambda\rho\rho^H\,,
\end{equation}
where $\rho^H$ is the Hilbert-transform (\ref{Hilbert}) of $\rho$. (With the conventions of Appendix A, $\pv \int \frac{dy\,\rho(y)}{x - y} = -\pi\rho^H(x)$, which takes (\ref{BPS}) into (\ref{BPS1}).) Note that for $\lambda = 1$, where the CM describes non-interacting fermions,\footnote{The constant solution is also the sole solution of (\ref{BPS1}) when $\lambda =0$, corresponding to non-interacting bosons.} the only solution of (\ref{BPS1}) is $\rho = \rho_0 = {\rm const.}$ Henceforth, we shall assume $\lambda\neq 1$.
The proper way to solve this nonlinear integro-differential equation is to consider it together with its Hilbert-transform \cite{ajj2, abanov}
\begin{equation}\label{BPS-H}
(\lambda -1)\,\partial_x\rho^H = \pi\lambda ((\rho^H)^2 - \rho^2 + \rho_0^2)\,,
\end{equation}
where on the RHS we used the identity (\ref{ffHilbert}) (and the fact that $\partial_x\rho^H = (\partial_x\rho)^H$ on the LHS). Here $\rho_0$ is a real parameter such that
\begin{equation}\label{subtraction}
\int\limits_{-\infty}^\infty\, dx\, (\rho(x) - \rho_0) = 0\,.
\end{equation}
It arises from the fact that we seek a solution $\rho (x)$ which need not necessarily decay at spatial infinity. (See (\ref{rhobar}).) Note that (\ref{BPS-H}) is even in $\rho_0$. By definition, the sign of $\rho_0$ coincides with that of $\rho (x)$, the solution of (\ref{BPS-H}). A positive solution $\rho (x)\geq 0$ corresponds to a BPS configuration of particles, and a negative one - to a configuration of antiparticles, as was mentioned following (\ref{duality}). To arrive at the Riccati equation mentioned in the introduction, we proceed as follows. Given the density $\rho(x)$, consider the resolvent
\begin{equation}\label{Phi}
\Phi(z) = -{1\over\pi}\int\limits_{-\infty}^\infty \,dy\,{ \rho(y)\over z-y}
\end{equation}
associated with it, in which $z$ is a complex variable. It is easy to see that it is related to the resolvent $G(z)$ of the {\em subtracted} density $\bar\rho(x) = \rho (x) - \rho_0$, defined in (\ref{G}), by
\begin{equation}\label{PhiG}
\Phi (z) = -{1\over\pi}\, G(z) + i\rho_0\,{\rm sign}\, (\Im z)\,.
\end{equation}
The resolvent $\Phi(z)$ is evidently analytic in the complex plane, save for a cut along the support of $\rho (x)$ on the real axis. From the identity (\ref{PS}) we obtain
\begin{equation}\label{Phipm}
\Phi_\pm (x) \equiv \Phi(x\pm i0) = \rho^H(x) \pm i\rho(x) \,,
\end{equation}
consistent with (\ref{PhiG}) and (\ref{G1}). Thus, if $\Phi (z)$ is known, $\rho (x)$ can be determined from the discontinuity of $\Phi (z)$ across the real axis. An important property of $\Phi(z)$, which follows directly from the definition (\ref{Phi}), is
\begin{equation}\label{herglotz0}
\Im\, \Phi(z) = {\Im\,z\over\pi}\,\int\limits_{-\infty}^\infty \, {\rho(y)\,dy\over |z-y|^2}\,.
\end{equation}
Thus, if $\rho(x)$ does not flip its sign throughout its support, we have
\begin{equation}\label{herglotz}
{\rm sign}\,\,\left(\Im\, \Phi(z)\right) = {\rm sign}\,\,\left(\Im\,z \right){\rm sign}\,\,\left(\rho (x)\right)\,.
\end{equation}
We shall use this property to impose certain further conditions on the solution of (\ref{Riccati1}) below. It follows from (\ref{Phipm}) that (\ref{BPS1}) and (\ref{BPS-H}) are, respectively, the imaginary and real parts of the Riccati equation
\begin{equation}\label{Riccati}
(\lambda -1) \partial_x \Phi_\pm (x) = \pi\lambda (\Phi_\pm^2(x) + \rho_0^2)
\end{equation}
obeyed by both complex functions $\Phi_\pm (x)\,.$ Let $\Phi_\pm (z)$ be the analytic continuations of $\Phi_\pm (x)$ into the $z-$upper and lower half-planes, respectively.
These functions are evidently the two solutions of
\begin{equation}\label{Riccati1}
(\lambda -1)\partial_z \Phi (z) = \pi\lambda (\Phi (z)^2 + \rho_0^2)\,,
\end{equation}
subjected to the boundary conditions $\Phi^*_+ (x+i0) = \Phi_- (x-i0)$ and ${\rm sign}\,\left(\Im \Phi_+ (x + i0)\right) = {\rm sign}\,\left(\rho(x)\right) = {\rm sign}\,\rho_0,$ from (\ref{Phipm}). The resolvent (\ref{Phi}) is then obtained by patching together $\Phi_+(z)$ in the upper half-plane and $\Phi_-(z)$ in the lower half-plane. The standard way to solve (\ref{Riccati1}) is to write it as
\begin{equation}\label{Riccati2}
\left({1\over \Phi(z) - i\rho_0} - {1\over \Phi(z) + i\rho_0} \right)\,\partial_z \Phi (z) = i k \,,
\end{equation}
where
\begin{equation}\label{k}
k = {2\pi\lambda\rho_0\over\lambda -1}
\end{equation}
is a real parameter. Straightforward integration of (\ref{Riccati2}) then yields the solutions
\begin{equation}\label{PhiSol}
\Phi_\pm (z) = i\rho_0\,{1 + e^{ikz -u_\pm}\over 1 - e^{ikz -u_\pm}}\,,
\end{equation}
where $u_\pm$ are integration constants. The boundary condition $\Phi_+^* (x+i0) = \Phi_- (x-i0)$ then tells us that $u_- = -u_+^*$. Clearly, $\Im u_+$ can be absorbed by a shift in $x$. Therefore, with no loss of generality we set $\Im u_+ = 0$. The second boundary condition ${\rm sign}\,\left(\Im \Phi_+ (x+i0)\right) = {\rm sign}\,\rho_0 $ then tells us that $u \equiv \Re u_+ >0\,.$ Thus, $\Phi_\pm (z)$ are completely determined and we obtain (\ref{Phi}) as
\begin{equation}\label{PhiSolFinal}
\Phi (z) = i\rho_0\,{1 + e^{ikz -u\,{\rm sign}\,(\Im z)}\over 1 - e^{ikz -u\,{\rm sign}\,(\Im z)}}\,.
\end{equation}
As can be seen in (\ref{rhorhoH}) below, the density $\rho(x)$ associated with (\ref{PhiSolFinal}) is indeed of definite sign, namely, ${\rm sign}\,\rho_0$. The asymptotic behavior of (\ref{PhiSolFinal}) is such that
\begin{equation}\label{asymptotics}
\Phi (\pm i\infty ) = \pm i\rho_0\,{\rm sign}\, k\,.
\end{equation}
This must be consistent with (\ref{herglotz}), which implies (together with the fact that ${\rm sign}\,\left(\rho(x)\right) = {\rm sign}\,\rho_0$) that $k$ must be {\em positive}. In other words, as can be seen from (\ref{k}), positive (space-dependent) BPS density configurations ($\rho_0 > 0 $) exist only for $\lambda >1$, and negative (space-dependent) BPS densities ($\rho_0 < 0 $) arise only for $0<\lambda<1$\footnote{Constant solutions $\rho = \rho_0$ of (\ref{BPS}) are of course not subjected to this correlation between ${\rm sign}\, \rho_0$ and the range of $\lambda$.}. The duality symmetry (\ref{duality}), which interchanges the domains $0< \lambda <1$ and $\lambda >1$, maps these two types of BPS configurations onto each other. Now that we have determined $\Phi (z)$, let us extract from it the BPS density $\rho (x)$ and its Hilbert transform $\rho^H(x)$. From (\ref{PhiSolFinal}) we find that
\begin{equation}\label{PhiSolplus}
\Phi_+(x) = \Phi(x+i0) = \rho_0\, {-\sin\, kx + i\sinh u\over \cosh u -\cos kx}\,,
\end{equation}
from which we immediately read off the solution of the BPS-equation (\ref{BPS}) as
\begin{eqnarray}\label{rhorhoH}
\rho (x) &=& \,\,\,\,\rho_0\, {\sinh u\over \cosh u -\cos kx}\nonumber\\{}\nonumber\\
\rho^H(x) &=& - \rho_0\, {\sin kx \over \cosh u -\cos kx}\,,
\end{eqnarray}
where both $k > 0$ and $u>0$, and the sign of $\rho(x)$ coincides with that of $\rho_0$.
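(An illustrative cross-check of ours, not part of the original derivation.) One can confirm symbolically that the pair (\ref{rhorhoH}) indeed satisfies the BPS relation (\ref{BPS1}), with $k$ fixed by (\ref{k}); a minimal sympy sketch:
\begin{verbatim}
# Symbolic check that rho and rho^H of Eq. (rhorhoH) satisfy
# (lambda - 1) d_x rho = 2 pi lambda rho rho^H, with k given by Eq. (k).
import sympy as sp

x, u, lam, rho0 = sp.symbols('x u lambda rho_0', positive=True)
k = 2*sp.pi*lam*rho0/(lam - 1)
D = sp.cosh(u) - sp.cos(k*x)
rho, rhoH = rho0*sp.sinh(u)/D, -rho0*sp.sin(k*x)/D

print(sp.simplify((lam - 1)*sp.diff(rho, x) - 2*sp.pi*lam*rho*rhoH))  # -> 0
\end{verbatim}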
That $\rho^H$ in (\ref{rhorhoH}) is indeed the Hilbert-transform of $\rho$ can be verified by explicit calculation. The static BPS density-wave, given by $\rho(x)$ in (\ref{rhorhoH}), is nothing but the finite-amplitude solution of \cite{Polychronakos:1994xg}. It comprises a two-parameter family of spatially periodic solutions, all of which have zero energy density, by construction. The period is
\begin{equation}\label{period}
T = {2\pi\over k} = {\lambda -1\over\lambda\rho_0}\,.
\end{equation}
It can be checked by explicit calculation\footnote{The best way to do this computation is to change variables to $t = e^{ikx}$ and transform the integral into a contour integral around the unit circle.} that
\begin{equation}\label{period-av}
{1\over T}\int\limits_{\rm period}\, \rho (x)\, dx = \rho_0\,,
\end{equation}
and therefore that $\int\limits_{-\infty}^\infty\, (\rho (x) - \rho_0) \, dx = 0\,,$ as required by the definition of $\rho_0$. Thus, the parameter $\rho_0$ determines both the period of the solution $\rho (x)$, as well as its period-average, and the other (positive) parameter $u$ determines the amplitude of oscillations about the average value. Note also from (\ref{period-av}), that the number of particles per period is
\begin{equation}\label{ppp}
T\rho_0 = {\lambda-1\over \lambda}\,.
\end{equation}
A couple of limiting cases of (\ref{rhorhoH}) are worth mentioning. Thus, if we let $u\rightarrow 0\,,$ we obtain a comb of Dirac $\delta-$functions
\begin{equation}\label{comb}
\rho (x) = {\lambda-1\over \lambda}\, \sum\limits_{n\in Z\!\!\!Z}\, \delta (x - nT)\,.
\end{equation}
If, in addition to $u\rightarrow 0$, we also let $k$ tend to zero (or equivalently, let the period $T$ diverge), such that $b = {u\over k}$ remains finite, we obtain the BPS soliton solution \cite{Polychronakos:1994xg, Andric:1994nc}
\begin{equation}\label{lump}
\rho (x) = {\lambda-1\over \lambda}\,{1\over\pi}\, {b\over b^2 + x^2}\,.
\end{equation}
In fact, the original construction of the periodic soliton (\ref{rhorhoH}) in \cite{Polychronakos:1994xg} was done by juxtaposing infinitely many solitons like (\ref{lump}) in a periodic array. For this reason we may refer to the finite-amplitude BPS density wave in (\ref{rhorhoH}) also as the {\em soliton crystal}. Note that the relation (\ref{ppp}) is preserved in both limiting cases, since the RHS of (\ref{ppp}) depends neither on $u$ nor on $k$.
\subsection{BPS Solutions of the Two-Family Model at the Special Point (\ref{SOHI})}
The BPS-equations of the two-family collective field Hamiltonian (\ref{Hcollective2F}) are
\begin{eqnarray}\label{BPS2F}
B_1[\rho_1,\rho_2] &\equiv& \frac{\lambda_{1} - 1}{2} \frac{\partial_{x} \rho_{1}}{\rho_{1}} + \lambda_{1} \pv \int \frac{ dy\rho_{1}(y)}{x - y} + \lambda_{12} \pv \int \frac{dy\rho_{2}(y)}{x - y} = 0\nonumber\\{}\nonumber\\
B_2[\rho_1,\rho_2] &\equiv& \frac{\lambda_{2} - 1}{2} \frac{\partial_{x} \rho_{2}}{\rho_{2}} + \lambda_{2} \pv \int \frac{ dy\rho_{2}(y)}{x - y} + \lambda_{12} \pv \int \frac{dy\rho_{1}(y)}{x - y} = 0\,.
\end{eqnarray}
Solutions of these coupled equations yield the time-independent zero-energy and zero-momentum configurations of the collective fields $\rho_1$ and $\rho_2$. Finding the general solution of these coupled equations for arbitrary couplings and masses (subjected to (\ref{threebody})) is still an open problem, which we do not address in the present paper.
However, at the special point (\ref{SOHI}), where the two-family model becomes similar to a single-family model, the two equations (\ref{BPS2F}) simplify drastically, becoming linearly dependent. For example, for
\begin{equation}\label{SOHI1}
\lambda = \lambda_1 = {1\over\lambda_2}\,,\quad \lambda_{12} = -1\,,\quad\quad {\rm and}\quad\quad m=m_1 = -\lambda m_2\,,
\end{equation}
it is easy to see that
\begin{equation}\label{lindep}
B_1 + \lambda B_2 = {\lambda -1\over 2}\, \partial_x\,\log\,\left({\rho_1\over\rho_2}\right)\,.
\end{equation}
Since at the same time, from (\ref{BPS2F}), $B_1=B_2 = 0$, (\ref{lindep}) implies that the two densities must be proportional
\begin{equation}\label{proportionality}
\rho_2(x) = -\kappa \rho_1(x)\,.
\end{equation}
(From the discussion in \cite{BFM} we know that the constant $\kappa >0$, and the negative density is interpreted as a density of holes, as was mentioned in the Introduction.) Upon substitution of (\ref{proportionality}) back in (\ref{BPS2F}) we see that
\begin{equation}\label{B1}
B_1 = \frac{\lambda - 1}{2} \frac{\partial_{x} \rho_{eff}}{\rho_{eff}} -\lambda\pi \rho_{eff}^H
\end{equation}
coincides with the corresponding one-family expression $B$ in (\ref{BPS}) with an effective density $\rho_{eff}$ given by (\ref{rhoeff}). Thus, at this special point, $\rho_{eff} (x)$ is given by (\ref{rhorhoH}), from which $\rho_1$ and $\rho_2$, being proportional to $\rho_{eff}$, can be deduced as well. An analogous solution of (\ref{BPS2F}) exists for the case in which the roles of the two families in (\ref{SOHI1}) are interchanged.
\section{Non-BPS Solutions of the Equation of Motion}
The uniform-density ground state, as well as the periodic space-dependent BPS-configurations discussed in the previous section, all correspond to zero-energy and zero-momentum configurations of the collective field Hamiltonian (\ref{Hcollective}). Static density configurations with {\em positive} energy density are found by extremizing the collective potential
\begin{eqnarray}\label{Vcoll}
V_{coll} &=& \frac{1}{2 m} \int dx\, \rho(x) {\left( \frac{\lambda - 1}{2} \frac{\partial_{x} \rho}{\rho} + \lambda \pv \int \frac{ dy \rho(y)}{x - y} \right)}^{2} + \mu\left(N - \int\,dx \,\rho (x) \right)\nonumber\\{}\nonumber\\
&=& \frac{1}{2 m} \int dx\, \rho(x) B[\rho]^2 + \mu\left(N - \int\,dx \,\rho (x) \right)
\end{eqnarray}
part of (\ref{Hcollective}). Computation of the variation of (\ref{Vcoll}) with respect to $\rho$ is most easily carried out with the help of (\ref{Bvar}) just below. Thus, using the elementary relation $\int\,dx\,F(x)\,G^H(x) = - \int\,dx\,F^H(x)\,G(x)$ it is easy to obtain the variational identity
\begin{equation}\label{Bvar}
\int\,dx\,\rho(x)\,F(x)\,\delta B[\rho] = \int\,dx\,\left[-{\lambda -1\over 2\rho}\,\partial_x (\rho F) + \pi\lambda\,(\rho F)^H \right]\delta\rho(x)\,,
\end{equation}
where the infinitesimal variation of $B[\rho]$
\begin{equation}\label{deltaB}
\delta B[\rho] = {\lambda -1\over 2}\,\partial_x \left({\delta\rho\over\rho}\right) - \pi\lambda\,\delta\rho^H
\end{equation}
was computed from (\ref{BPS}).
Using (\ref{Bvar}), it is straightforward to obtain the desired variational equation as
\begin{equation}\label{variationaleq}
2m\,{\delta V_{coll}\over\delta \rho(x)} = B[\rho]^2 - {\lambda -1\over\rho}\,\partial_x (\rho B[\rho]) + 2\pi\lambda\,(\rho B[\rho])^H - 2m\mu = 0\,.
\end{equation}
The collective potential (\ref{Vcoll}) is invariant under the duality transformation (\ref{duality}). Thus, (\ref{variationaleq}) must transform covariantly under (\ref{duality}). Indeed, it is straightforward to see that under (\ref{duality}), the variational equation (\ref{variationaleq}) transforms into ${1\over \lambda^2}$ times itself. In this way, a solution $\rho(x)$ of (\ref{variationaleq}) with parameters $\lambda, m, \mu$ is transformed into a solution $\tilde\rho (x)$ of (\ref{variationaleq}) with parameters $\tilde\lambda, \tilde m, \tilde\mu$. We shall make use of this fact later on. Evidently, any solution of the BPS equation $B[\rho] = 0$ (Eq.(\ref{BPS})) is also a solution of the variational equation (\ref{variationaleq}) (with $\mu=0$), reflecting the fact that (\ref{Vcoll}) is quadratic and homogeneous in $B[\rho]$. Unfortunately, we do not know how to find the most general solution of this equation. Therefore, we shall content ourselves with finding a particular family of solutions to (\ref{variationaleq}) of the simple shifted form
\begin{equation}\label{ansatz}
\rho (x) = \rho_s(x) + c\,,
\end{equation}
where
\begin{equation}\label{rhos}
\rho_s (x) = \rho_0\, {\sinh u\over \cosh u -\cos kx}
\end{equation}
is the BPS profile in (\ref{rhorhoH}) and $c$ is an unknown constant, to be determined from (\ref{variationaleq}). (Clearly, (\ref{ansatz}) with $c=0$ must be a solution of (\ref{variationaleq}).) Let us proceed in a few steps. First, note that
\begin{equation}\label{rhoH}
\rho^H(x) = \rho_s^H(x) = - \rho_0\, {\sin kx \over \cosh u -\cos kx}\,,
\end{equation}
from (\ref{rhorhoH}). Then, compute
\begin{equation}\label{Brhos}
B[\rho] = \frac{\lambda - 1}{2} \frac{\partial_{x} \rho}{\rho} - \lambda\pi\rho^H = {\lambda\pi c\rho_0\sin\,kx\over c(\cosh\,u - \cos\,kx) + \rho_0\sinh u}\,,
\end{equation}
from which we obtain the remarkably simple relation
\begin{equation}\label{rhoBrho}
\rho\,B[\rho] = -\lambda\pi c \rho^H\,.
\end{equation}
Therefore,
\begin{equation}\label{rhoBrhoH}
(\rho\,B[\rho])^H = \lambda\pi c (\rho_s - \rho_0)\,,
\end{equation}
where we used the identity (\ref{HilbertSq}). (Note that (\ref{rhoBrhoH}) is consistent with the identity $\int_{-\infty}^\infty\,F^H(x)\,dx = 0$.) Substituting the ansatz (\ref{ansatz}) and the auxiliary results (\ref{Brhos})-(\ref{rhoBrhoH}) in (\ref{variationaleq}), we obtain the LHS of the latter as a rational function of polynomials in $\cos\,kx$. The numerator of that function is a cubic polynomial, which we then expand into a finite cosine Fourier series, all coefficients of which must vanish. Thus, the coefficient of $\cos 3kx$ determines the chemical potential in terms of the remaining parameters as
\begin{equation}\label{mu}
\mu = -{(\lambda\pi)^2\over 2m}\,\rho_0(\rho_0 + 2c)\,,
\end{equation}
which we then feed into the coefficients of the remaining three terms. The coefficient of $\cos 2kx$ is then found as the cubic
\begin{equation}\label{cubic}
-(\lambda\pi)^2\rho_0\,c (c^2\sinh\,u + 2c\rho_0\cosh\,u + \rho_0^2\sinh\,u)\,,
\end{equation}
where we used (\ref{k}) on the way.
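(Another illustrative aside of ours, not part of the original text.) The key intermediate identity (\ref{rhoBrho}) can be confirmed symbolically, using the fact that $\rho^H = \rho_s^H$ for the shifted ansatz (\ref{ansatz}), since the Hilbert-transform of a constant vanishes; a minimal sympy sketch:
\begin{verbatim}
# Symbolic check of Eq. (rhoBrho): rho*B[rho] = -lambda*pi*c*rho^H for the
# shifted ansatz rho = rho_s + c, with rho_s the BPS profile of Eq. (rhos)
# and k fixed by Eq. (k) so that rho_s is BPS.
import sympy as sp

x, u, lam, rho0, c = sp.symbols('x u lambda rho_0 c', positive=True)
k = 2*sp.pi*lam*rho0/(lam - 1)
D = sp.cosh(u) - sp.cos(k*x)
rhos, rhosH = rho0*sp.sinh(u)/D, -rho0*sp.sin(k*x)/D
rho = rhos + c                     # rho^H = rhos^H, as a constant has no H-transform

B = (lam - 1)/2*sp.diff(rho, x)/rho - lam*sp.pi*rhosH   # first form of Eq. (Brhos)
print(sp.simplify(rho*B + lam*sp.pi*c*rhosH))           # -> 0
\end{verbatim}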
The coefficient (\ref{cubic}) must vanish, yielding a cubic equation for $c$. The remaining Fourier coefficients vanish identically upon substitution of the roots of this cubic equation for $c$. As we have anticipated following (\ref{ansatz}), one root of this cubic equation is obviously $c_0=0$, which corresponds to $\rho = \rho_s$. Factoring out the overall factor of $c$ from (\ref{cubic}) leaves the quadratic $c^2\sinh\,u + 2c\rho_0\cosh\,u + \rho_0^2\sinh\,u$, whose roots, $c = \rho_0\,(\pm 1 - \cosh u)/\sinh u$, become, by the half-angle identities, \begin{equation}} \def\eeq{\end{equation}\label{c12} c_1 = -\rho_0\,\tanh\,{u\over 2}\quad\quad {\rm and}\quad\quad c_2 = -\rho_0\,\coth\,{u\over 2}\,. \eeq Note that neither of these roots, and therefore neither of the shifted solutions (\ref{ansatz}), depends on $m$ or on $\mu$. Once the parameters $m$ and $\mu$ are related according to (\ref{mu}), they drop out of any further consideration. \subsection{Large Amplitude Density Waves: Vortex Crystal Solutions} From this point onward we shall discuss the cases $\lambda > 1$ and $0 < \lambda <1$ separately. \subsubsection{The case $\lambda >1$} In this case $\rho_0 >0$, as we saw following (\ref{asymptotics}). For positive $\rho_0$, the first root $c_1$ in (\ref{c12}) amounts in (\ref{ansatz}) to shifting the BPS solution $\rho_s(x)$ by its {\em minimum}. The resulting solution \begin{equation}} \def\eeq{\end{equation}\label{LAW} \rho_p (x) = \rho_0\, \left({\sinh u\over \cosh u -\cos kx} - \tanh\,{u\over 2}\right)\eeq is a positive function which vanishes periodically. We shall refer to it as the {\em vortex crystal} solution, as it is a periodic generalization of the single vortex solution of \cite{Andric:1994nc}. Since $\rho_p (x) >0$, it is a density of particles (rather than holes). Therefore it corresponds to having a positive mass parameter $m>0$ in (\ref{Vcoll}). The vortex crystal (\ref{LAW}) coincides with the so-called {\em large amplitude} wave solution of \cite{Sen:1997qt} for the case $\lambda > 1$ and zero velocity. The second root $c_2$ in (\ref{c12}) amounts to shifting the BPS solution $\rho_s(x)$ by its {\em maximum}. The resulting solution \begin{equation}} \def\eeq{\end{equation}\label{negLAW} \rho_n (x) = \rho_0\, \left({\sinh u\over \cosh u -\cos kx} - \coth\,{u\over 2}\right)\eeq is thus a negative function which vanishes periodically: an anti-vortex crystal. Being a negative solution of (\ref{variationaleq}), Eq.(\ref{negLAW}) should be interpreted as the density of holes rather than particles. Therefore it corresponds to having a negative mass $m<0$ in (\ref{Vcoll}). \subsubsection{The case $0 < \lambda < 1$} In this case $\rho_0 < 0$, as we saw following (\ref{asymptotics}). Therefore $c_1$ and $c_2$ in (\ref{c12}) switch roles: For negative $\rho_0$, $c_1$ amounts in (\ref{ansatz}) to shifting the BPS solution $\rho_s(x)$ by its {\em maximum}. The resulting solution \begin{equation}} \def\eeq{\end{equation}\label{tnLAW} \tilde\rho_n (x) = \rho_0\, \left({\sinh u\over \cosh u -\cos kx} - \tanh\,{u\over 2}\right)\eeq is a negative function which vanishes periodically: an anti-vortex crystal. It is therefore a density of holes corresponding to having a negative mass $m<0$ in (\ref{Vcoll}). The second root $c_2$ amounts in this case to shifting the BPS solution $\rho_s(x)$ by its {\em minimum}. The resulting solution \begin{equation}} \def\eeq{\end{equation}\label{tLAW} \tilde\rho_p (x) = \rho_0\, \left({\sinh u\over \cosh u -\cos kx} - \coth\,{u\over 2}\right) = |\rho_0|\, \left(\coth\,{u\over 2} - {\sinh u\over \cosh u -\cos kx}\right)\eeq is thus a positive function which vanishes periodically: a vortex crystal.
It corresponds to having $m>0$ in (\ref{Vcoll}), in a similar manner to $\rho_p(x)$ in (\ref{LAW}). $\tilde\rho_p (x)$ coincides with the large amplitude wave solution of \cite{Sen:1997qt} for the case $0<\lambda < 1$ and zero velocity. Note that $\tilde\rho_p (x)$ has appeared also in \cite{ajj2}. The duality transformations (\ref{duality}) leave the wave-number $k$ in (\ref{k}) invariant. By definition, the positive parameter $u$, defined in (\ref{PhiSolFinal}), is invariant under (\ref{duality}) as well. Thus, evidently, the duality transformations (\ref{duality}) map $\rho_p (x)$ in (\ref{LAW}) and $\tilde\rho_n(x)$ in (\ref{tnLAW}) onto each other. (Of course, the $\rho_0$ parameters appearing in the latter two equations are different from each other, and related by the fourth relation in (\ref{duality}).) Similarly, the duality transformations (\ref{duality}) map $\rho_n (x)$ in (\ref{negLAW}) and $\tilde\rho_p(x)$ in (\ref{tLAW}) onto each other. \subsubsection{Average Energy Densities per Period} Our new solutions (\ref{LAW})--(\ref{tLAW}) of the variational equation (\ref{variationaleq}) are periodic functions, with the same period as that of the BPS solution $\rho_s$. Since these are non-BPS configurations, they must carry positive energy density\footnote{Since negative densities correspond to holes, whose mass should be taken negative, we have ${\rho(x)\over m} > 0$ in these cases as well. This renders $V_{coll}$ in (\ref{Vcoll}) positive for such densities. Thus, the negative solutions $\rho_n$ and $\tilde\rho_n$ carry positive energy density, as their positive counterparts obviously do.}. We shall now proceed to calculate the mean energy densities per period of these configurations, to which end we must determine the combination $\rho B[\rho]^2$ appearing in (\ref{Vcoll}). From the general expressions (\ref{Brhos}) and (\ref{rhoBrho}) we obtain \begin{equation}} \def\eeq{\end{equation}\label{rhoBrhosq} \rho B[\rho]^2 = {\lambda\pi c\rho_0\sin\,kx\over \cosh\,u - \cos\,kx} \,{\lambda\pi c\rho_0\sin\,kx\over c(\cosh\,u - \cos\,kx) + \rho_0\sinh u}\,.\eeq The desired period-averaged energy density is then given by \begin{equation}} \def\eeq{\end{equation}\label{av-energy} {\cal E} = {1\over T}\,\int\limits_{\rm period}\, dx\, {\rho B[\rho]^2\over 2m}\,,\eeq where $T={2\pi\over k}$ (Eq. (\ref{period})). We shall content ourselves with computing the energy density only of the positive densities $\rho_p$ and $\tilde\rho_p$. In order to compute ${\cal E}$ of (\ref{LAW}), corresponding to $\lambda >1$ and $\rho_0 >0$, we substitute $c = c_1$ in (\ref{rhoBrhosq}). After some algebra, we find that in this case \begin{equation}} \def\eeq{\end{equation}\label{rhoBrhosqp} \rho_p B[\rho_p]^2 = (\lambda\pi\rho_0)^2\,\tanh{u\over 2}\,\left[\rho_0 (1-\tanh{u\over 2}) - (\rho_s(x) - \rho_0) \tanh{u\over 2}\right]\,.\eeq In view of (\ref{period-av}), and by definition of $\rho_0$, the period-average of $\rho_s(x) - \rho_0$ vanishes. Thus, from (\ref{Vcoll}), we obtain the period-average energy density of (\ref{LAW}) as \begin{equation}} \def\eeq{\end{equation}\label{energyp} {\cal E}[\rho_p] = {(\lambda\pi\rho_0)^2\over 2m}\,\rho_0\,\tanh{u\over 2}(1-\tanh{u\over 2})\,,\eeq which is a manifestly positive quantity. It depends continuously on the two parameters $\rho_0$ and $u$, comprising an unbounded continuum of positive energies, which is not gapped from the zero energy density of the BPS solitons.
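For completeness, the statement that the period-average of $\rho_s(x) - \rho_0$ vanishes can also be checked directly from the standard integral $\int_0^{2\pi}\,d\theta/(a - \cos\theta) = 2\pi/\sqrt{a^2-1}$, valid for $a>1$:
\begin{equation*}
{1\over T}\int\limits_0^{T}\,\rho_s(x)\,dx = {\rho_0\sinh u\over 2\pi}\,\int\limits_0^{2\pi} {d\theta\over \cosh u - \cos\theta} = {\rho_0\sinh u\over \sqrt{\cosh^2 u - 1}} = \rho_0\,.
\end{equation*}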
Similarly, in order to compute ${\cal E}$ of (\ref{tLAW}), corresponding to $0 < \lambda < 1$ and $\rho_0 < 0$, we substitute $c = c_2$ in (\ref{rhoBrhosq}). After some algebra, we find that in this case \begin{equation}} \def\eeq{\end{equation}\label{rhoBrhosqn} \tilde\rho_p B[\tilde\rho_p]^2 = -(\lambda\pi\rho_0)^2\,\coth{u\over 2}\,\left[\rho_0 (\coth{u\over 2} -1 ) + (\rho_s(x) - \rho_0) \coth{u\over 2}\right]\,,\eeq which leads to the positive period-average energy density of (\ref{tLAW}) given by \begin{equation}} \def\eeq{\end{equation}\label{energyn} {\cal E}[\tilde\rho_p] = -~ {(\lambda\pi\rho_0)^2 \over 2m}\,\rho_0\,\coth{u\over 2}(\coth{u\over 2}-1)\,.\eeq \subsubsection{Energy Densities at Fixed Average Particle Density} It is particularly useful to consider the energy densities (\ref{energyp}) and (\ref{energyn}) at a fixed average particle density per period. The latter is, of course, the subtraction constant as defined in (\ref{subtraction}), which is given by \begin{equation}} \def\eeq{\end{equation}\label{trho0}\tilde\rho_0 = \rho_0 + c \eeq for the shifted solutions (\ref{ansatz}). Both (\ref{energyp}) and (\ref{energyn}) depend on the two parameters $\rho_0$ and $u$. Holding $\tilde\rho_0$ fixed can thus be used to eliminate one of these parameters, which we shall take to be $u$. Let us concentrate first on $\rho_p$ in (\ref{LAW}), for which $c= c_1 = -\rho_0\,\tanh\,{u\over 2}$ (and of course, $\lambda >1$). Thus, \begin{equation}} \def\eeq{\end{equation}\label{postilderho0} \tilde\rho_0 = \rho_0\,(1 - \tanh\,{u\over 2})\,,\eeq which is positive, since $\rho_0 > 0$ in (\ref{LAW}). Moreover, $\rho_0\geq\tilde\rho_0$ in this case, since $u>0$. In terms of this fixed $\tilde\rho_0$, we obtain ${\cal E}[\rho_p]$ in (\ref{energyp}) as \begin{equation}} \def\eeq{\end{equation}\label{energyp1} {\cal E}[\rho_p] = {(\lambda\pi)^2\over 2m}\,\tilde\rho_0\rho_0\,(\rho_0 - \tilde\rho_0)\,,\quad \rho_0\geq \tilde\rho_0 = {\rm fixed}\,.\eeq This energy density vanishes at the minimal possible value of $\rho_0 = \tilde\rho_0$, corresponding to $u=0$, and therefore to the BPS density configuration (\ref{comb}). As $\rho_0$ increases from its minimal value, the period-average energy density ${\cal E}[\rho_p]$ increases monotonically from zero to infinity. Increasing $\rho_0$ really means increasing the wave number $k = {2\pi\lambda\rho_0\over\lambda -1}$, i.e., making the density modulation wave-length shorter. It is interesting to note that in terms of $k$ and $\tilde\rho_0$ we can write \begin{equation}} \def\eeq{\end{equation}\label{energyp2} {\cal E}[\rho_p] = {(1-\lambda)\tilde\rho_0\over 4}\,\left({1-\lambda \over 2m}\,k^2 + {\lambda\pi\tilde\rho_0\over m}k\right)\,,\quad k = {2\pi\lambda\over \lambda -1}\rho_0 \geq {2\pi\lambda\over \lambda -1}\tilde\rho_0 = {\rm fixed}\,,\eeq where the expression within the brackets is nothing but the dispersion relation for fluctuations around the constant background $\tilde\rho_0$ \cite{Andric:1994su}. We can analyze the periodic vortices $\tilde\rho_p$ in (\ref{tLAW}) in a similar manner. For these solutions $c= c_2 = -\rho_0\,\coth\,{u\over 2}$ (and of course $0<\lambda<1$). Thus, \begin{equation}} \def\eeq{\end{equation}\label{negtrho0} \tilde\rho_0 = \rho_0\,(1 - \coth\,{u\over 2}) = |\rho_0|\,(\coth\,{u\over 2} - 1)\,,\eeq which is again positive, since the allowed range of $\rho_0$ in (\ref{tLAW}) is $\rho_0 \leq 0$. 
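To invert (\ref{negtrho0}) explicitly, note the elementary identity
\begin{equation*}
\coth\,{u\over 2} - 1 = {2\over e^u - 1}\,,
\end{equation*}
so that $|\rho_0| = {1\over 2}\,(e^u - 1)\,\tilde\rho_0$.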
For a given value of $\tilde\rho_0$, $\rho_0 = -\frac{1}{2} (e^u-1)\tilde\rho_0$ ranges throughout the negative real axis as $u$ ranges throughout the positive one. In terms of this fixed $\tilde\rho_0$, we obtain an expression for ${\cal E}[\tilde\rho_p]$ in (\ref{energyn}) which coincides with the RHS of (\ref{energyp1}), but where now $\rho_0 \leq 0$, of course: ${\cal E}[\tilde\rho_p] = {(\lambda\pi)^2\over 2m}\,\tilde\rho_0|\rho_0|\,(|\rho_0| + \tilde\rho_0)\,.$ This energy density vanishes at the maximal possible value of $\rho_0 = 0$, corresponding to $u=0$, and therefore to the BPS density configuration (\ref{lump}). As $\rho_0$ becomes increasingly negative, the period-average energy density ${\cal E}[\tilde\rho_p]$ increases monotonically from zero to infinity. In terms of the wave number $k$ and $\tilde\rho_0$, we obtain that ${\cal E}[\tilde\rho_p] = {(1-\lambda)\tilde\rho_0\over 4}\,\left({1-\lambda \over 2m}\,k^2 + {\lambda\pi\tilde\rho_0\over m}k\right)\,,$ which coincides with (\ref{energyp2}), but where now $k\geq 0$ for any value of $\tilde\rho_0>0$. \subsection{The Two-Family Model at the Special Point (\ref{SOHI})} The variational equations associated with the two-family collective potential part of the two-family collective Hamiltonian (\ref{Hcollective2F}) are \begin{eqnarray}} \def\eeqra{\end{eqnarray}\label{variational2F} B_1^2 - {\lambda_1-1\over\rho_1}\partial_x\,(\rho_1 B_1) + 2\lambda_1\pi\,(\rho_1 B_1)^H + 2{m_1\over m_2}\,\lambda_{12}\pi\,(\rho_2 B_2)^H - 2m_1\mu_1 &=& 0\nonumber\\{}\nonumber\\ B_2^2 - {\lambda_2-1\over\rho_2}\partial_x\,(\rho_2 B_2) + 2\lambda_2\pi\,(\rho_2 B_2)^H + 2{m_2\over m_1}\,\lambda_{12}\pi\,(\rho_1 B_1)^H - 2m_2\mu_2 &=& 0\,,\nonumber\\{} \eeqra in straightforward analogy with (\ref{variationaleq}), where the BPS combinations $B_1$ and $B_2$ were defined in (\ref{BPS2F}). As in the case of the BPS equations (\ref{BPS2F}), the general solution of these coupled equations for arbitrary couplings and masses (subject to (\ref{threebody})) is still an open problem, which we do not address in the present paper. However, at the special point (\ref{SOHI}), where the two-family model becomes similar to a single-family model, the two equations (\ref{variational2F}) simplify drastically, becoming linearly dependent, in much the same way as the BPS equations (\ref{BPS2F}) did. Consider, for example, the case (\ref{SOHI1}). In this case (\ref{lindep}) still holds, of course, but now neither $B_1$ nor $B_2$ need vanish. Thus, we cannot conclude that $\rho_1$ and $\rho_2$ must be proportional. Instead, we shall now show that under the condition (\ref{SOHI1}), there is a non-BPS solution of the coupled equations (\ref{variational2F}) in which the two densities are proportional to each other, as in (\ref{proportionality}). In this case it follows from (\ref{lindep}) that $B_1 + \lambda B_2 = 0\,.$ Substituting this relation as well as (\ref{SOHI1}) in (\ref{variational2F}), we see that the two equations coincide, provided $\mu_1 + \lambda\mu_2 =0$, and that their common form is nothing but the variational equation (\ref{variationaleq}) of the single-family model, for an effective density (\ref{rhoeff}). Thus, at this special point, $\rho_{eff} (x)$ is given by (\ref{rhos}), (\ref{LAW}) or (\ref{negLAW}), from which $\rho_1$ and $\rho_2$, being proportional to $\rho_{eff}$, can be deduced as well. An analogous solution of (\ref{variational2F}) exists for the case in which the roles of the two families in (\ref{SOHI1}) are interchanged.
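Before moving on to the appendices, we remark that the period-averaged energy density (\ref{energyp}) lends itself to a simple numerical sanity check. The following minimal numpy sketch (an illustration only, with arbitrarily chosen parameter values) averages $\rho_p B[\rho_p]^2/2m$ over one period on a midpoint grid and compares the result with (\ref{energyp}):
\begin{verbatim}
import numpy as np

lam, m, rho0, u = 2.5, 1.0, 0.8, 1.2      # sample values, lambda > 1
k = 2*np.pi*lam*rho0/(lam - 1)            # wave number k
T = 2*np.pi/k                             # period

x = (np.arange(4000) + 0.5)*T/4000        # midpoint grid over one period
den = np.cosh(u) - np.cos(k*x)
rho_s = rho0*np.sinh(u)/den               # BPS profile rho_s
rhoH = -rho0*np.sin(k*x)/den              # its Hilbert transform
drho = -rho0*np.sinh(u)*k*np.sin(k*x)/den**2
rho_p = rho_s - rho0*np.tanh(u/2)         # vortex crystal rho_p

B = (lam - 1)/2*drho/rho_p - lam*np.pi*rhoH
E_num = np.mean(rho_p*B**2)/(2*m)         # period-averaged energy density

t = np.tanh(u/2)
E_exact = (lam*np.pi*rho0)**2/(2*m)*rho0*t*(1 - t)
print(E_num, E_exact)                     # the two values agree closely
\end{verbatim}
Since the integrand is smooth and periodic, the midpoint rule converges rapidly, and the two printed numbers coincide to high accuracy.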
\bigskip \bigskip \pagebreak \newpage \setcounter{equation}{0} \setcounter{section}{0} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thesection}{Appendix A:} \section{A Compendium of Useful Hilbert-Transform Identities} \vskip 5mm \setcounter{section}{0} \renewcommand{\thesection}{A} For the sake of completeness, and also for future reference, in this Appendix we list and prove some well-known and useful identities involving Hilbert transforms. Consider the class of (possibly complex) functions $\rho (x)$ on the whole real line $-\infty < x < \infty$, whose Hilbert transforms \begin{equation}} \def\eeq{\end{equation}\label{Hilbert} \rho^H(x) = {1\over \pi} \pv\int_{-\infty}^\infty \,dy\, {\rho (y) \over y - x} \eeq exist, and which can be made integrable by subtracting a constant $\rho_0$. Let us denote \begin{equation}} \def\eeq{\end{equation}\label{rhobar} \bar\rho (x) = \rho (x) - \rho_0\,. \eeq (If $\rho (x)$ is already integrable, then $\rho_0 = 0$, of course.) Thus, for example, if $\rho (x)$ is periodic with period $T$, with a Fourier zero-mode $\rho_0$, then $\int\limits_{-\infty}^\infty dx\,\bar\rho (x) = \int\limits_{-\infty}^\infty dx\,(\rho (x) - \rho_0) = 0$\,. Given $\bar\rho(x)$, consider the resolvent \begin{equation}} \def\eeq{\end{equation}\label{G} G(z) = \int\limits_{-\infty}^\infty \,dy\,{ \bar\rho(y)\over z-y} \eeq associated with it, in which $z$ is a complex variable. The resolvent $G(z)$ is evidently analytic in the complex plane, save for a cut along the support of $\bar\rho (x)$ on the real axis. From the identity \begin{equation}} \def\eeq{\end{equation}\label{PS} {1\over x\mp i0} = {P\over x} \pm i\pi\delta (x)\,, \eeq where $P$ denotes the Cauchy principal value, we then obtain the well-known formula \begin{equation}} \def\eeq{\end{equation}\label{G1} G(x\mp i0) = -\pi \rho^H(x) \pm i\pi\bar\rho(x) \,, \eeq where we used the fact that $\bar\rho^H(x) = \rho^H(x)$. Thus, if $G(z)$ is known, $\bar\rho (x)$ can be determined from the discontinuity of $G(z)$ across the real axis. As a nontrivial example, consider $\rho (x) = \bar\rho (x) = e^{ix}$. For this function $\rho^H(x) = i e^{ix} = i\rho (x)$ and $G(z) = -2\pi i \,\theta (\Im z) \,e^{iz}$. Consequently $G(x-i0) = 0$ and $G(x+i0) = -2\pi i \, e^{ix}$, in accordance with (\ref{G1}). As yet another example, consider the Cauchy probability distribution $\rho (x) = \bar\rho (x) = {\gamma\over\pi} {1\over x^2 + \gamma^2}$. For this function $\rho^H(x) = -{1\over\pi} {x\over x^2 + \gamma^2}$ and $G(z) = {1\over z + i\gamma {\rm sign}\, (\Im z)}$. Consequently $G(x\mp i0) = {x\pm i\gamma\over x^2 + \gamma^2}$, in accordance with (\ref{G1}). For all functions in this class, as $z\rightarrow\infty$, $G(z)$ tends asymptotically to zero no slower than ${1\over z}$, that is \begin{equation}} \def\eeq{\end{equation}\label{Gasympt} G(z) \mathop{\sim}\limits_{z\rightarrow\infty} {\cal O}\left({1\over z}\right)\,.\eeq If, in addition, all moments $M_n = \int\limits_{-\infty}^\infty \, dx\, x^n \,\bar\rho (x)\,,\quad (n\geq 0)$ of $\bar\rho (x)$ exist, then $G(z)$ is the moment generating function of $\bar\rho (x)$, namely, it has the large-$z$ expansion \begin{eqnarray*}} \def\eeqast{\end{eqnarray*} G(z) = \sum_{n=0}^\infty {M_n\over z^{n+1}}\,. \eeqast The analyticity properties of $G(z)$ and the bounds on its asymptotic behavior at infinity are at the heart of our derivation of the Hilbert-transform identities to follow.
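The Cauchy example also lends itself to a direct numerical test of the definition (\ref{Hilbert}). The sketch below (an illustration only) evaluates the principal-value integral with scipy's Cauchy-weight quadrature, which computes $P\int f(y)\,dy/(y-x)$ over a finite interval, and compares the result with the closed form quoted above; the truncated tails fall off like $1/y^3$ and are negligible:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gamma = 1.3
rho = lambda y: (gamma/np.pi)/(y**2 + gamma**2)   # Cauchy distribution

def rhoH(x, L=500.0):
    # (1/pi) P.V. integral of rho(y)/(y - x) over [-L, L]
    val, _ = quad(rho, -L, L, weight='cauchy', wvar=x)
    return val/np.pi

x = 0.7
print(rhoH(x), -(x/np.pi)/(x**2 + gamma**2))      # the two values agree
\end{verbatim}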
{\em From this point on, we shall take all functions $\rho (x)$ to be real.} For real $\rho (x)$, we deduce from (\ref{G1}) that \begin{equation}} \def\eeq{\end{equation}\label{G2} \rho^H(x) = -{1\over \pi} \Re G(x\mp i0) \quad {\rm and}\quad \bar\rho (x) = \pm {1\over\pi} \Im G(x\mp i0)\,. \eeq As a warm-up exercise, let us prove the well-known fact that \begin{equation}} \def\eeq{\end{equation}\label{HilbertSq} (\rho^H (x))^H = -\bar\rho (x) = \rho_0 - \rho (x) \,. \eeq Thus, consider \begin{eqnarray*}} \def\eeqast{\end{eqnarray*} (\rho^H (x))^H &=& (\bar\rho^H (x))^H = {1\over\pi}\int\limits_{-\infty}^\infty {P\over y-x}\, \bar\rho^H(y)\, dy \\{}\\ &=& {1\over\pi}\int\limits_{-\infty}^\infty \, \Re\left[\left({P\over y-x} -i\pi\delta (y-x)\right) \left(\bar\rho^H(y) + i\bar\rho (y)\right)\right]\, dy - \bar\rho (x) \\{}\\ &=& -{1\over \pi^2} \Re\, \int\limits_{-\infty}^\infty {G(y+i0)\over y-x +i0}\, dy - \bar\rho (x) \,,\eeqast where in the last step we used (\ref{PS}) and (\ref{G2}). Let us now prove that the last integral vanishes, from which (\ref{HilbertSq}) follows. To this end, complete the contour of integration in the last integral (namely, the line running parallel to the real axis just above it) by the infinite semi-circle in the upper half-plane $\Im z>0$, traversed in the positive sense. Let us denote the closed contour thus formed by $\gamma$. Due to the asymptotic behavior (\ref{Gasympt}) of $G(z)$ we can establish the first equality in $$ \int\limits_{-\infty}^\infty {G(y+i0)\over y-x +i0}\, dy = \oint_\gamma\,dz\, {G(z)\over z-x}= 0\,,$$ whereas the second equality follows since the contour $\gamma$ encompasses no singularity. We shall now prove the important identity \begin{equation}} \def\eeq{\end{equation}\label{fgHilbert} (\rho_1\rho_2^H + \rho_1^H\rho_2)^H = \rho_1^H\rho_2^H - \rho_1\rho_2 + \rho_{10}\rho_{20} \eeq obeyed by any two functions $\rho_1(x)$ and $\rho_2(x)$ in the class of functions considered. Our first step in proving (\ref{fgHilbert}) is to observe that it may be written equivalently as \begin{equation}} \def\eeq{\end{equation}\label{fgHilbert1} (\bar\rho_1\bar\rho_2^H + \bar\rho_1^H\bar\rho_2)^H = \bar\rho_1^H\bar\rho_2^H - \bar\rho_1\bar\rho_2\,. \eeq Consider now the contour integral \begin{equation}} \def\eeq{\end{equation}\label{fg-contour} I = \oint_{\cal C_\infty} {G_1(z) G_2(z)\over z-x}\,{dz\over 2\pi i}\,,\eeq where $G_k(z)$ is the resolvent corresponding to $\bar\rho_k(x)\,\, (k=1,2)\,,$ $x\in I\!\! R\,,$ and where ${\cal C_\infty}$ is the circle of infinite radius, centered at the origin. Due to the asymptotic behavior (\ref{Gasympt}) of the two resolvents, evidently \begin{equation}} \def\eeq{\end{equation}\label{fg-contour-null} I = 0\,.\eeq Since $G_{1,2}(z)$ are analytic off the real axis, we can deform ${\cal C_\infty}$ into the positively oriented boundary $\Gamma$ of an infinitesimal strip around the real axis (namely, the union of a line parallel to the real axis just below it, traversed from $-\infty$ to $\infty$, with a line parallel to the real axis just above it and traversed in the opposite direction). The contour integral around $\Gamma$ essentially picks up the imaginary part of the integrand evaluated just above the real axis. Thus, we have \begin{equation}} \def\eeq{\end{equation}\label{fg-contour-Im} 0 = I = \oint_{\Gamma} {G_1(z) G_2(z)\over z-x}\,{dz\over 2\pi i} = -{1\over\pi}\,\Im\, \int\limits_{-\infty}^\infty {G_1(y+i0)G_2(y+i0)\over y-x +i0}\, dy\,.
\eeq The last integrand may be written as $$\pi^2\left({P\over y-x} -i\pi\delta(y-x)\right)\prod_{k=1,2}\left(\bar\rho_k^H(y) + i\bar\rho_k(y)\right)\,,$$ by virtue of (\ref{PS}) and (\ref{G2}). Upon substituting the last expression in (\ref{fg-contour-Im}) and taking the imaginary part, we obtain the desired result (\ref{fgHilbert1}). Note that for $\rho_1=\rho_2 = \rho$, (\ref{fgHilbert}) simplifies into \begin{equation}} \def\eeq{\end{equation}\label{ffHilbert} 2(\rho\rho^H)^H = (\rho^H)^2 - \rho^2 + \rho_0^2\,. \eeq Finally, we shall prove an identity involving three functions $\rho_k(x)\,\, (k=1,2,3)$ and their Hilbert transforms. Our proof follows essentially the one given in \cite{jev-sak-PRD22, onofri} for the case $\rho_1=\rho_2=\rho_3$, which is reproduced also in the textbook \cite{sakita}. Let $G_k(z)$ be the resolvent corresponding to $\bar\rho_k (x)$. Consider now the contour integral \begin{equation}} \def\eeq{\end{equation}\label{fgh-contour} J = \oint_{\cal C_\infty} \,{dz\over 2\pi i}\,G_1(z) G_2(z) G_3(z)\,.\eeq As in the previous proof, due to the asymptotic behavior (\ref{Gasympt}) of the resolvents, evidently \begin{equation}} \def\eeq{\end{equation}\label{fgh-contour-null} J = 0\,.\eeq Since the resolvents are analytic off the real axis, we can deform ${\cal C_\infty}$ into the contour $\Gamma$, as in the previous proof, which picks up the imaginary part of the integrand evaluated just above the real axis. Thus, we have \begin{equation}} \def\eeq{\end{equation}\label{fgh-contour-Im} 0 = J = -{1\over\pi}\,\Im\, \int\limits_{-\infty}^\infty G_1(y+i0)G_2(y+i0)G_3(y+i0)\, dy\,. \eeq The last integrand may be written as $$-\pi^3\prod_{k=1}^3\left(\bar\rho_k^H(y) + i\bar\rho_k(y)\right)\,,$$ by virtue of (\ref{G2}). Upon substituting the last expression in (\ref{fgh-contour-Im}) and taking the imaginary part, we obtain the desired result \begin{equation}} \def\eeq{\end{equation}\label{cubic-id} \int\limits_{-\infty}^\infty\,\left(\bar\rho^H_1\bar\rho^H_2\bar\rho_3 + \bar\rho_1^H\bar\rho_2\bar\rho^H_3 + \bar\rho_1\bar\rho^H_2\bar\rho^H_3 \right)\, dx = \int\limits_{-\infty}^\infty\,\bar\rho_1\bar\rho_2\bar\rho_3\,dx\,.\eeq Note that for $\bar\rho_1=\bar\rho_2 = \bar\rho_3 = \bar\rho$, (\ref{cubic-id}) simplifies into \begin{equation}} \def\eeq{\end{equation}\label{fff-cuibic} 3 \int\limits_{-\infty}^\infty\,\bar\rho (\bar\rho^H)^2\, dx = \int\limits_{-\infty}^\infty\,(\bar\rho)^3\, dx\,,\eeq which is the identity proved in \cite{sakita, jev-sak-PRD22, onofri}. Since (\ref{cubic-id}) holds for any triplet of functions $\bar\rho_k$ in the class of functions under consideration, we can write it formally as an identity among distributions acting upon these test functions, namely, the well-known \cite{jackiw} identity \begin{equation}} \def\eeq{\end{equation}\label{PPdelta} {P\over x-y}{P\over x-z} + {P\over y-x}{P\over y-z} + {P\over z-x}{P\over z-y} = \pi^2\delta(x-y)\delta(x-z)\,.\eeq In \cite{jackiw}, the identity (\ref{PPdelta}) was proved using Fourier transforms. For alternative proofs of the identities discussed in this Appendix, and for more information about Hilbert-transform techniques, see Appendix A of \cite{abanov1}. As should be clear from our proof, (\ref{PPdelta}) holds only when the distributions on both its sides act upon functions which are integrable on the whole real line. However, this identity is frequently used in the literature on collective field theory beyond its formal domain of validity.
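Within its domain of validity, the identity is easy to test numerically. The following sketch (an illustration only) verifies (\ref{fff-cuibic}) for the integrable Cauchy function of the example at the beginning of this Appendix, for which both sides equal $3/(8\pi^2\gamma^2)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gamma = 0.9
rho = lambda x: (gamma/np.pi)/(x**2 + gamma**2)   # integrable, rho_0 = 0
rhoH = lambda x: -(x/np.pi)/(x**2 + gamma**2)     # its Hilbert transform

lhs, _ = quad(lambda x: 3*rho(x)*rhoH(x)**2, -np.inf, np.inf)
rhs, _ = quad(lambda x: rho(x)**3, -np.inf, np.inf)
print(lhs, rhs)     # both equal 3/(8*pi**2*gamma**2)
\end{verbatim}
For densities which are not integrable on the whole real line, however, no such agreement is guaranteed.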
For further discussion of this problem see Appendix B, where we show that this transgression is benign, and can be compensated for by readjusting the chemical potential which governs the normalization condition (\ref{conservation}). \pagebreak \newpage \setcounter{equation}{0} \setcounter{section}{0} \renewcommand{\theequation}{B.\arabic{equation}} \renewcommand{\thesection}{Appendix B:} \section{A Paradox and its Resolution} \vskip 5mm \setcounter{section}{0} \renewcommand{\thesection}{B} The expression for the collective potential in (\ref{Vcoll}) contains bilocal as well as trilocal terms in the density. It is customary in the literature to avoid the trilocal terms by applying a standard procedure as follows: The principal value distribution, acting on functions integrable along the whole real line, satisfies the identity (\ref{PPdelta}), which we rewrite here for convenience \begin{equation} \frac{P}{x-y} \frac{P}{x-z} + \frac{P}{y-z} \frac{P}{y-x} + \frac{P}{z-x} \frac{P}{z-y} = {\pi}^{2} \delta(x-y) \delta(x-z)\,. \label{principal} \end{equation} Making use of (\ref{principal}) in (\ref{Vcoll}), we obtain \begin{eqnarray}} \def\eeqra{\end{eqnarray}\label{Vcoll1} \tilde V_{coll} &=& \frac{(\lambda\pi)^{2}}{6 m} \int\, dx\, \rho^3 + \frac{{(\lambda - 1)}^{2}}{8 m} \int\, dx\, \frac{{(\partial_{x} \rho)}^{2}}{\rho} + \frac{ \lambda (\lambda - 1)}{2 m} \int\, dx \,\partial_{x} \rho \; \pv \int dy \frac{\rho(y)}{x-y}\nonumber\\{}\nonumber\\ &+& \tilde\mu \left(N - \int\, dx\, \rho(x) \right) \eeqra This expression for $\tilde V_{coll}$ is evidently devoid of any trilocal terms. (Note that the chemical potential $\tilde \mu$ in (\ref{Vcoll1}) need not coincide with the one in (\ref{Vcoll}), as our notation implies.) The classical equation of motion which results from varying (\ref{Vcoll1}) is \begin{equation}} \def\eeq{\end{equation}\label{variationaleq1} \frac{(\lambda\pi)^{2}}{2 m} \rho^2 - \frac{{(\lambda - 1)}^{2}}{8 m} { ( \frac{\partial_{x} \rho} {\rho})}^{2} - \frac{{(\lambda - 1)}^{2}}{4 m}\partial_{x} ( \frac{\partial_{x} \rho}{\rho}) - \frac{\lambda (\lambda - 1)}{m} \pv \int dy \frac{\partial_{y} \rho(y)}{x-y} = \tilde\mu\,. \eeq It was this form of the equation of motion (rather than (\ref{variationaleq})) from which the solitons and density waves were derived in the pioneering work \cite{Polychronakos:1994xg}. It can be checked that $\rho_s$ in (\ref{rhos}), $\rho_p$ in (\ref{LAW}) and $\rho_n$ in (\ref{negLAW}), the solutions of the variational equation (\ref{variationaleq}) of the first form (\ref{Vcoll}) of the collective potential, are also solutions of (\ref{variationaleq1}) (albeit with values of $\tilde\mu$ different from those of (\ref{mu})). That this is true may look surprising, and even paradoxical to some readers, since neither of these solutions is integrable along the whole real line, which is a necessary condition for (\ref{principal}) to hold. This should be clear from the proof of (\ref{PPdelta}) in Appendix A, but it can also be demonstrated by a simple counterexample: just apply both sides of (\ref{principal}) to three constant functions and integrate over all coordinates. The LHS would vanish, while the RHS would diverge. In fact, the latter counterexample is precisely relevant to determining the ground state of the collective Hamiltonian (\ref{Hcollective}). The uniform ground state density $\rho=\varrho_0$ is a solution of the BPS equation (\ref{BPS}), and of course, also of the variational equation (\ref{variationaleq}) with $\mu=0$.
The energy density stored in it is, of course, zero. It is also a solution of the alternative variational equation (\ref{variationaleq1}) with $\tilde\mu = {(\lambda\pi\varrho_0)^2\over 2m}$ and energy density (with respect to (\ref{Vcoll1})) ${(\lambda\pi)^2\varrho_0^3\over 6m}$. Thus, it seems that using (\ref{principal}) beyond its formal domain of validity is a mild transgression, which is compensated for by appropriately readjusting the chemical potential. This is indeed true, as we shall now prove, thus resolving the paradox of why (\ref{variationaleq}) and (\ref{variationaleq1}) always lead to the same solutions. To this end we shall consider all $\rho$ configurations which are simultaneous solutions of (\ref{variationaleq}) and (\ref{variationaleq1}). Such functions are evidently extrema of $\Delta V = V_{coll} - \tilde V_{coll}$. From (\ref{Vcoll}) and (\ref{Vcoll1}) we obtain \begin{eqnarray}} \def\eeqra{\end{eqnarray}\label{deltaV} \Delta V &=& {(\lambda\pi)^2\over 6m} \,\Im\,\int\,dx\,(\rho^H + i\rho)^3 - (\mu-\tilde\mu)\,\int\, dx\,\rho\nonumber\\{}\nonumber\\ &=& {(\lambda\pi)^2\over 6m} \,\Im\,\int\,dx\,\Phi^3(x +i0) -(\mu-\tilde\mu)\,\Im\,\int\,dx\,\Phi(x +i0)\,.\eeqra (Note that we have omitted from this expression the constant term $(\mu-\tilde\mu)N\,.$) Due to the analytic structure of $\Phi (z)$, and as explained in Appendix A, the latter integral can be written as the contour integral \begin{equation}} \def\eeq{\end{equation}\label{deltaV1} \Delta V = -{\lambda^2\pi^3\over 6m} \,\oint_{{\cal C}_\infty}\,{dz \over 2\pi i}\, \Phi^3(z) + \pi(\mu-\tilde\mu)\,\oint_{{\cal C}_\infty}\,{dz \over 2\pi i}\, \Phi(z) \,,\eeq where ${\cal C}_\infty$ is a circle of infinite radius, centered at the origin. Note that $\Phi (z)$ need not decay as $z\rightarrow\infty$, since $\int\, dx\, \rho$ may diverge. Thus, in general $\Delta V\neq 0$. We shall now determine solutions of \begin{equation}} \def\eeq{\end{equation}\label{deltadeltaV0} {\delta \Delta V\over \delta \rho(x)} = 0 \,.\eeq To this end, let us compute \begin{equation}} \def\eeq{\end{equation}\label{deltaPhi} {\delta \Phi (z)\over \delta \rho(x)} = -{1\over\pi}{\delta\over\delta \rho(x)}\,\int\limits_{-\infty}^\infty\,{\rho(u)\,du\over z-u} = -{1\over\pi}\, {1\over z-x}\,.\eeq From this we infer that \begin{equation}} \def\eeq{\end{equation}\label{deltadeltaV} {\delta \Delta V\over \delta \rho(x)} = {(\lambda\pi)^2\over 2m}\,\oint_{{\cal C}_\infty}\,{dz \over 2\pi i}\, {\Phi^2(z)\over z-x} - (\mu-\tilde\mu)\,.\eeq The contour ${\cal C}_\infty$ in the last integral can be deformed to the contour $\Gamma$, defined in Appendix A, which essentially picks up the imaginary part of the integrand evaluated just above the real axis.
Thus, in a manner similar to the discussion in Appendix A, from (\ref{fgHilbert}) to (\ref{ffHilbert}), we obtain \begin{equation}} \def\eeq{\end{equation}\label{deltadeltaV1} {\delta \Delta V\over \delta \rho(x)} = {(\lambda\pi)^2\over 2m}\,\left[ (\rho^H)^2 - \rho^2 - (2\rho\rho^H)^H\right] - (\mu-\tilde\mu)\,.\eeq But from the identity (\ref{ffHilbert}) we see that the latter equation boils down to \begin{equation}} \def\eeq{\end{equation}\label{deltadeltaV2} {\delta \Delta V\over \delta \rho(x)} = \tilde\mu -\mu - {(\lambda\pi\rho_0)^2\over 2m}\,.\eeq In other words, the condition (\ref{deltadeltaV0}) simply relates the two chemical potentials \begin{equation}} \def\eeq{\end{equation}\label{deltamu} \tilde\mu = \mu + {(\lambda\pi\rho_0)^2\over 2m}\,,\eeq setting no further conditions on $\rho (x)$, where $\rho_0$ is the subtraction constant associated with the $\rho$ in question, and should not be confused with the one appearing in (\ref{rhos}). To summarize: any solution of (\ref{variationaleq}) with chemical potential $\mu$ is simultaneously a solution of (\ref{variationaleq1}) with chemical potential $\tilde\mu$ given by (\ref{deltamu}). \pagebreak \newpage \setcounter{equation}{0} \setcounter{section}{0} \renewcommand{\theequation}{C.\arabic{equation}} \renewcommand{\thesection}{Appendix C:} \section{A Brief Summary of the Collective Field Formulation of the Calogero Model} \vskip 5mm \setcounter{section}{0} \renewcommand{\thesection}{C} In order for this paper to be self-contained, we briefly summarize in this appendix the derivation of the collective-field Hamiltonian (\ref{Hcollective}) from (\ref{Hcalogero}). The singularities of the Calogero-model Hamiltonian (\ref{Hcalogero}), namely, \begin{equation} \label{h1C} H = - \frac{1}{2 m} \sum_{i=1}^{N} \frac{{\partial}^{2}}{\partial {x_{i}}^{2}} + \frac{\lambda (\lambda - 1)}{2 m} \sum_{i \neq j }^{N} \frac{1}{{(x_{i} - x_{j})}^{2}}\,, \end{equation} at points where particles coincide, imply that the many-body eigenfunctions contain a Jastrow-type prefactor \begin{equation}} \def\eeq{\end{equation}\label{jastrowC} \Pi = \prod_{i < j}^{N} { (x_{i} - x_{j})}^{\lambda}\,. \eeq This Jastrow factor vanishes (for positive $\lambda$) at particle coincidence points, and multiplies that part of the wave-function which is totally symmetric under any permutation of particles\footnote{Note, in particular, that for $ \; \lambda=0 \; $ and $ \;\lambda =1, \; $ the model describes noninteracting bosons and fermions, respectively.}. It is precisely these symmetric wave-functions on which the collective field operators act, as explained below. Let us recall at this point some of the basic ideas of the collective-field method \cite{sakita,Jevicki:1979mb,JFnmm}, adapted specifically to the Calogero model \cite{AJL,Andric:1994su}: Instead of solving the Schr\"odinger equation associated with (\ref{h1C}) for the many-body eigenfunctions, subject to the appropriate particle statistics (bosonic, fermionic, or fractional), we restrict ourselves to functions which are totally symmetric under any permutation of identical particles.
This we achieve by stripping off the Jastrow factor (\ref{jastrowC}) from the eigenfunctions, which means performing on (\ref{h1C}) the similarity transformation \begin{equation} \label{similaritytrC} H \rightarrow \tilde H = \Pi^{-1} H \Pi \,, \end{equation} where the transformed Hamiltonian is \begin{equation} \label{h1trC} \tilde H = - \frac{1}{2 m} \sum_{i=1}^N \frac{{\partial}^{2}} {\partial {x_{i}}^{2}} - \frac{\lambda}{m} \sum_{i \neq j}^N \frac{1}{x_{i} - x_{j}}\,\frac{\partial}{\partial x_i} \,. \end{equation} Note that $\tilde H$ does not contain the singular two-body interactions. By construction, this Hamiltonian is hermitian with respect to the measure $$d\mu (x_i) = \Pi^2\, d^N x\,,$$ (as opposed to the original Hamiltonian $H$ in (\ref{h1C}), which is hermitian with respect to the flat Cartesian measure). We can think of the symmetric many-body wave-functions acted upon by $\tilde H$ as functions depending on all possible symmetric combinations of particle coordinates. These combinations form an overcomplete set of variables. However, as explained below, in the {\em continuum} limit, redundancy of these symmetric variables has a negligible effect. The set of these symmetric variables can be generated, for example, by products of moments of the collective, or density, field \begin{equation}} \def\eeq{\end{equation}\label{collectiveC} \rho (x) = \sum_{i = 1}^N \delta( x - x_{i})\,. \eeq The collective-field theory for the Calogero model is obtained by changing variables from the particle coordinates $ \; x_{i} \; $ to the density field $ \; \rho (x) \; $. This transformation replaces the finitely many variables $ \; x_{i} \; $ by a continuous field, which is just another manifestation of overcompleteness of the collective variables. Clearly, the description of the particle system in terms of a continuous field becomes a good effective description in the high-density limit. Of course, the high-density limit means that we have taken the large-$N$ limit. Changing variables from particle coordinates $x_i$ to the collective fields (\ref{collectiveC}) implies that we should express all partial derivatives in the Hamiltonian $\tilde H$ in (\ref{h1trC}) as \begin{equation}} \def\eeq{\end{equation}\label{derivativesC} \frac{\partial}{\partial x_{i}} = \int dx \frac{\partial \rho(x)} {\partial x_{i}} \frac{\delta}{\delta \rho(x)}\,,\end{equation} where we applied the chain rule. In the large-$N$ limit, the Hamiltonian $\tilde H$ can be expressed entirely in terms of the collective field $ \; \rho (x)$ and its canonical conjugate momentum \begin{equation}} \def\eeq{\end{equation}\label{momentaC} \pi(x) = - i \frac{\delta}{\delta \rho(x)}\,, \end{equation} as we show below. It follows from (\ref{derivativesC}) and (\ref{momentaC}) that the particle momentum operators (acting on symmetric wave-functions) may be expressed in terms of the collective-field momenta at particular points on the line as \begin{equation}} \def\eeq{\end{equation}\label{particlemomentaC} p_i = -\pi'(x_i) \eeq (where $\pi'(x) = \pa_x \pi (x)$). Finally, note from (\ref{collectiveC}) that the collective field obeys the normalization condition \begin{equation}} \def\eeq{\end{equation}\label{conservationC} \int dx \rho(x) = N\,. \eeq The density field $ \; \rho\; $ and its conjugate momentum $ \; \pi\; $ satisfy the equal-time canonical commutation relations\footnote{According to (\ref{conservationC}), the zero-momentum modes of the density fields are constrained, i.e., non-dynamical.
This affects the commutation relation (\ref{canonicalC}), whose precise form is $[\rho(x), \pi (y)] = i (\delta(x - y) - (1/l))$, where $l$ is the size of the large one-dimensional box in which the system is quantized, which is much larger than the macroscopic size $L$ of the particle condensate in the system. In what follows, we can safely ignore this $1/l$ correction in the commutation relations.} \begin{equation}} \def\eeq{\end{equation}\label{canonicalC} [\rho (x), \pi (y)] = i \delta(x - y)\,, \eeq (and, of course, $[\rho (x), \rho (y)] = [\pi (x), \pi (y)] = 0$). By substituting (\ref{collectiveC})-(\ref{particlemomentaC}) in (\ref{h1trC}), we obtain the continuum-limit expression for $\tilde H$ as \begin{equation}\label{htildeC} \tilde H = \frac{1}{2 m} \int dx \rho(x) { ( \partial_{x} \pi(x))}^{2} - \frac{i}{m} \int dx \rho(x) \left( \frac{\lambda - 1}{2} \frac{\partial_{x} \rho}{\rho} + \lambda \pv \int \frac{ dy \rho(y)}{x - y} \right) \partial_{x} \pi(x) \end{equation} where $ \; \pv \int \; $ denotes Cauchy's principal value. It can be shown \cite{sakita} that (\ref{htildeC}) is hermitian with respect to the functional measure\footnote{By definition (recall (\ref{collectiveC})), this measure is defined only over positive values of $\rho$\,.} \begin{equation}} \def\eeq{\end{equation}\label{functionalmeasureC} {\cal D} \mu [\rho] = J[\rho] \prod_x d\rho(x)\,, \eeq where $ J[\rho] $ is the Jacobian of the transformation from the $ \; \{ x_{i} \} \; $ to the collective field $ \; \{ \rho(x) \}$\,. In the large-$N$ limit it is given by \cite{Bardek:2005yx} \begin{equation} \ln J = (1 - {\lambda}) \int dx \rho(x) \ln \rho(x) - {\lambda} \int dx dy \rho(x) \ln |x - y | \rho(y)\,. \end{equation} It is more convenient to work with a Hamiltonian which, unlike (\ref{htildeC}), is hermitian with respect to the flat functional Cartesian measure $\prod_x d\rho(x)\,.$ This we achieve by means of the similarity transformation $\psi \rightarrow J^{\frac{1}{2}} \psi\,, \tilde H \rightarrow H_{coll} = J^{\frac{1}{2}} \tilde H J^{- \frac{1}{2}}\,,$ where the continuum {\em collective} Hamiltonian is \begin{equation} \label{colhamC} H_{coll} = \frac{1}{2 m} \int dx\, \pi'(x)\, \rho(x)\, \pi'(x) + \frac{1}{2 m} \int dx \rho(x) {\left( \frac{\lambda - 1}{2} \frac{\partial_{x} \rho}{\rho} + \lambda \pv \int \frac{ dy \rho(y)}{x - y} \right)}^{2} + H_{sing}\,, \end{equation} namely, the Hamiltonian given by (\ref{Hcollective}) and (\ref{Hsing}). The collective-field Hamiltonian (\ref{Hcollective2F}) of the two-family Calogero model can be derived from (\ref{h1}) in a similar manner. \pagebreak {\bf Acknowledgement}\\ This work was supported in part by the Ministry of Science and Technology of the Republic of Croatia under contract No. 098-0000000-2865 and by the US National Science Foundation under Grant No. PHY05-51164.
\section{Introduction} The Schwarzschild solution plays a key role in teaching about general relativity: It describes the simplest version of a black hole. By Birkhoff's theorem, it more generally describes the gravitational field around any spherical mass distribution, such as the Sun in our own Solar system. As one of two particularly simple, yet physically relevant examples of a non-trivial metric (the other being the FLRW spacetime of an expanding universe), it is particularly well-suited for teaching about general techniques of ``reading'' and interpreting a spacetime metric. Consider undergraduate courses where students are introduced to selected concepts and results from general relativity without exposing them to the full mathematical formalism. Such courses have the advantage of introducing students to one of the two great fundamental theories of 20th century physics early on (the other being quantum mechanics); they also profit from subject matter that meets with considerable interest from students.\cite{Hartle2006} Using the terminology of Christensen and Moore,\cite{Christensen2012} in the ``calculus only'' approach pioneered by Taylor and Wheeler,\cite{Taylor2001,Taylor2018} spacetime metrics are not derived, but taken as given, and the focus is on learning how to interpret a given spacetime metric. Similar presentations can be found in the first part of the ``physics first'' approach exemplified by Hartle's text book,\cite{Hartle2003} where the concepts of the metric and of geodesics are introduced early on, and their physical consequences explored, while the mathematics necessary for the Einstein equations is only introduced at a later stage. Whenever the approach involves an exploration of simple metrics such as the Schwarzschild solution, but stops short of the formalism required for the full tensorial form of Einstein's equations, access to a simple derivation of the Schwarzschild solution that does not make use of the advanced formalism can be a considerable advantage. Simplified derivations of the Schwarzschild solution have a long tradition within general relativity education,\cite{Schiff1960,Harwit1973} although specific simplifications have met with criticism.\cite{Rindler1968} This article presents a derivation which requires no deeper knowledge of the formalism of differential geometry beyond an understanding of how to interpret a given spacetime metric $\mathrm{d} s^2$. The derivation avoids the criticism levelled at attempts to derive the Schwarzschild solution from the Einstein equivalence principle in combination with a Newtonian limit,\cite{Gruber1988} relying as it does on a simplified version of the vacuum Einstein equation. More specifically, I combine the restrictions imposed by the symmetry with the simple form of Einstein's equations formulated by Baez and Bunn.\cite{BaezBunn2005} That same strategy was followed by Kassner in 2017,\cite{Kassner2017} but in this text, I use the ``infalling coordinates'' that are commonly associated with the Gullstrand-Painlev\'e form of the Schwarzschild metric,\cite{Martel2001,Visser2005,HamiltonLisle2008} not the more common Schwarzschild coordinates. That choice simplifies the argument even further. In the end, what is required is no more than the solution of an ordinary differential equation for a single function, which yields to standard methods and produces the desired result.
\section{Coordinates adapted to spherical symmetry and staticity} \label{SymmetriesCoordinates} Assume that the spacetime we are interested in is spherically symmetric and static. In general relativity, a symmetry amounts to the possibility of being able to choose coordinates that are adapted to the symmetry, at least within a restricted sub-region of the spacetime in question. That the spacetime is static is taken to mean that we can introduce a (non-unique) time coordinate ${t}$ so that our description of spacetime geometry does not depend explicitly on ${t}$, and that space and time are completely separate --- in the coordinates adapted to the symmetry, there are no ``mixed terms'' involving $\mathrm{d} {t}$ times the differential of a space coordinate in the metric. If we use ${t}$ to slice our spacetime into three-dimensional hypersurfaces, each corresponding to ``space at time ${t}$,'' then each of those 3-spaces has the same spatial geometry. A mixed term would indicate that those slices of space would need to be shifted relative to one another in order to identify corresponding points. The mixed term's absence indicates that in adapted coordinates, there is no need for such an extra shift. In those coordinates, we can talk about the 3-spaces as just ``space,'' without the need for specifying which of the slices we are referring to. In the case of spherical symmetry, we can introduce spherical coordinates that are adapted to the symmetry: a radial coordinate $r$ and the usual angular coordinates $\vartheta,\varphi$, so that the spherical shell at constant $r$ has the total area $4\pi r^2$. In consequence, the part of our metric involving $\mathrm{d}\vartheta$ and $\mathrm{d}\varphi$ will have the standard form \begin{equation} r^2(\mathrm{d}\vartheta^2+\sin^2\vartheta\,\mathrm{d}\varphi^2) \equiv r^2\mathrm{d}\Omega^2, \end{equation} where the right-hand side defines $\mathrm{d}\Omega^2$, the infinitesimal solid angle corresponding to each particular combination of $\mathrm{d}\vartheta$ and $\mathrm{d}\varphi$. The radial coordinate slices space into spherical shells, each corresponding to a particular value $r=const.$ The rotations around the origin, which are the symmetry transformations of spherical symmetry, map each of those spherical shells onto itself, and they leave all physical quantities that do not explicitly depend on $\vartheta$ or $\varphi$ invariant. In what follows, we will use the basic structures introduced in this way --- the slices of simultaneous ${t}$, the radial directions within each slice, the angular coordinates spanning the symmetry--adapted spherical shells of area $4\pi r^2$ --- as auxiliary structures for introducing spacetime coordinates. For now, let us write down the shape that our metric has by simple virtue of the spherical symmetry, the requirement that the spacetime be static, and the adapted coordinates, namely \begin{equation} \mathrm{d} s^2 = -c^2F(r) \mathrm{d} {t}^2 + G(r) \mathrm{d} r^2 + r^2\:\mathrm{d}\Omega^2. \label{StaticForm} \end{equation} Students familiar with ``reading'' a spacetime metric will immediately recognize the sign difference between the parts describing space and describing time that is characteristic for spacetime, and the speed of light $c$ that gives us the correct physical dimensions. That there is no explicit dependence on $\varphi$ and $\vartheta$ in the remaining functions $F$ and $G$ is a direct consequence of spherical symmetry.
That the factor in front of $\mathrm{d}\Omega^2$ is $r^2$ is a consequence of our coordinate choice, with spherical angular coordinates so that the area of a spherical surface of constant radius $r$ is $4\pi r^2$. That there is no explicit dependence on ${t}$ is one consequence of the spacetime being static; the absence of the mixed term $\mathrm{d} {t}\cdot \mathrm{d} r$ is another. We are left with two unknown functions $F(r)$ and $G(r)$. In the following, let us call ${t}$ and $r$ the {\em static coordinates}. Note that, since $G(r)$ is as yet undefined, we have not yet chosen a specific physical meaning for the length measurements associated with our $r$ coordinate. But because of the $\mathrm{d}\Omega^2$ part, it is clear that whatever choice we make, the locally orthogonal lengths $r\cdot\mathrm{d}\vartheta$ and $r\cdot\sin\vartheta\cdot\mathrm{d}\varphi$ will have the same physical interpretation as for the length measurement corresponding to $\mathrm{d} r$. \section{Infalling observer coordinates} \label{Sec:InfallingObservers} Now that we know what the radial directions are, at each moment of time ${t}$, we follow Visser\cite{Visser2005} as well as Hamilton and Lisle\cite{HamiltonLisle2008} in defining a family of radially infalling observers. Observers in that family are in free fall along the radial direction, starting out at rest at infinity: In mapping each observer's radial progression in terms of the static coordinate time ${t}$, we adjust the initial conditions, specifically the choice of initial speed at some fixed time ${t}$, in just the right way that the radial coordinate speed goes to zero for each observer in the same way as $r\to\infty.$ It is true that talking about ``infalling'' observers already reflects our expectation that our solution should describe the spacetime of a spherically symmetric mass. As we know from the Newtonian limit, such a mass attracts test particles in its vicinity. It should be noted, though, that all our calculations would also be compatible with the limit of no mass being present. In that case, ``infalling'' would be a misnomer, as our family of observers would merely hover in empty space at unchanging positions in $r$. We can imagine infinitesimal local coordinate systems associated with our observers --- think of the observer mapping out space and time by defining three orthogonal axes, and by measuring time with a co-moving clock. We assume all such little coordinate systems to be non-rotating --- otherwise, we would break spherical symmetry, since rotation would locally pick out a plane of rotation that is distinguishable from the other planes. The radial direction is a natural choice for the first space axis of those little free-falling systems. The other directions, we take to point to observers falling side by side with our coordinate-defining observer --- and to remain pointed at a specific such other observer, once the choice of direction is made. We assume our infalling observers' clocks to be synchronised at some fixed radius value $r$. By spherical symmetry, those clocks should then be synchronised at {\em all} values of $r$. Anything else would indicate direction-dependent differences for the infalling observers and their clocks, after all. Hence, at any given static time ${t}$, all the infalling observers who are at radius value $r$ show the same proper time $T$ on the ideal clocks travelling along with them.
Once our definition is complete, our static, spherically symmetric spacetime is filled with infalling observers from that family: Whenever we consider an event $\cal E$, there will be an observer from that family passing by at that time, at that location. Now, consider the coordinate speed of those infalling observers. If we position ourselves at some constant radius value $r$ and watch the falling observers fly by, then we can express both their proper time rate and their coordinate speed in the $r$ direction in terms of $r$ and ${t}$. We can combine the two pieces of information to obtain the rate of change in radial position $r$ with proper time $T$ for those infalling observers. But since the initial conditions for those observers are the same, and since our spacetime is, by assumption, static, the resulting function can only depend on $r$, and not explicitly on ${t}$. Let us rescale that function with the speed of light to make it dimensionless, give it an overall minus sign to make it positive for infalling particles, and call it $\beta(r)$, \begin{equation} \beta(r)\equiv -\frac{1}{c}\frac{\mathrm{d} r}{\mathrm{d} T}(r). \label{betaDefinition} \end{equation} Recall from section \ref{SymmetriesCoordinates} that we also still have the freedom to decide on the physical meaning of $r$. We choose to make $\mathrm{d} r$ the physical length measured by one of our infalling observers at the relevant location in spacetime, at constant time $T$. Via our angular coordinates, that implies that length measurements orthogonal to the radial direction, $r\cdot\mathrm{d}\vartheta$ and $r\cdot\sin\vartheta\:\mathrm{d}\varphi$, inherit the same physical interpretation. As a next step, we transform our metric (\ref{StaticForm}) from the static form into the form appropriate for our coordinate choice $r$ and $T$. We do so by writing the static time coordinate as a function ${t}(T,r)$ in terms of infalling observer time and radius value. In consequence, \begin{equation} \mathrm{d} {t} = \frac{\partial{t}}{\partial T}\cdot\mathrm{d} T+ \frac{\partial {t}}{\partial r}\cdot\mathrm{d} r, \end{equation} and our new metric now has the form \begin{align} \mathrm{d} s^2 = {} & -c^2 F(r)\left(\frac{\partial t}{\partial T}\right)^2\mathrm{d} T^2 \nonumber \\[0.2em] & -2c^2F(r)\left(\frac{\partial t}{\partial T}\right)\left(\frac{\partial t}{\partial r}\right)\mathrm{d} T\:\mathrm{d} r \nonumber \\[0.2em] & +\left[G(r)-c^2F(r)\left(\frac{\partial t}{\partial r}\right)^2\right]\mathrm{d} r^2+r^2\:\mathrm{d}\Omega^2. \end{align} At face value, this looks like we are moving the wrong way, away from simplification, since we now have more functions, and they depend on two variables instead of one. But in fact, this new formulation paves the way for an even simpler form of the metric. Consider a specific event, which happens at given radius value $r$. In a small region around that event, we will introduce a new coordinate $\bar{r}$ to parametrize the radial direction. We want this coordinate to be co-moving with our infalling observers at $r$; each such observer then has a position $\bar{r}=const.$ that does not change over time. Key to our next step is that we {\em know} the metric for the local length and time measurements made by any one of our free-falling observers. By Einstein's equivalence principle, the metric is that of special relativity.
Locally, namely whenever tidal effects can be neglected, spacetime geometry for any non-rotating observer in free fall is indistinguishable from Minkowski spacetime as described by a local inertial system. Since we have chosen both the time coordinate $T$ and the physical meaning of the radial coordinate $r$ so as to conform with the measurements of the local infalling observer, the transformation between $\bar{r}$ and $r$ is particularly simple: It has the form of a Galilei transformation \begin{equation} \mathrm{d}\bar{r}= \mathrm{d} r + \beta(r)c\:\mathrm{d} T. \label{barRshift} \end{equation} In that way, as it should be by definition, radial coordinate differences at constant $T$ are the same in both systems, while for an observer at constant $\bar{r},$ with $\mathrm{d} \bar{r}=0$, the relation between $\mathrm{d} r$ and $\mathrm{d} T$ is consistent with the definition of the function $\beta(r)$ in (\ref{betaDefinition}). Are you surprised that this is not a Lorentz transformation, as one might expect from special relativity? Don't be. We are not transforming from one local inertial coordinate system to another. The $T$ is already the time coordinate of the infalling observers, so both coordinate systems have the same definition of simultaneity, and time dilation plays no role in this particular transformation. Also, we have chosen $r$ intervals to correspond to length measurements of the infalling observers, so there is no Lorentz contraction, either. It is the consequence of these special choices that gives the relation (\ref{barRshift}) its simple form. Last but not least, when we analyse specifically an infinitesimal neighbourhood of the point $r,\vartheta,\varphi$, let us choose $\bar{r}$ so that it coincides with $r$ directly at our point of interest. Since, so far, we had only fixed the differential $\mathrm{d} \bar{r}$, we still have the freedom of choosing a constant offset for $\bar{r}$ that yields the desired result. By Einstein's equivalence principle, the metric in terms of the locally co-moving coordinates $T,\bar{r},\vartheta,\varphi$ is the spherical-coordinate version of the Minkowski metric, \begin{equation} \mathrm{d} s^2 = -c^2\mathrm{d} T^2 + \mathrm{d}\bar{r}^2 + \bar{r}^2\,\mathrm{d}\Omega^2. \end{equation} This version can, of course, be obtained by taking the more familiar Cartesian-coordinate version \begin{equation} \mathrm{d} s^2=-c^2\mathrm{d} T^2 + \mathrm{d} X^2 + \mathrm{d} Y^2 + \mathrm{d} Z^2, \label{CartesianMinkowski} \end{equation} applying the definition of Cartesian coordinates $X,Y,Z$ in terms of spherical coordinates $\bar{r},\vartheta,\varphi$ \begin{equation} X= \bar{r}\:\sin\vartheta\:\cos\varphi, \;\; Y= \bar{r}\:\sin\vartheta\:\sin\varphi, \;\; Z= \bar{r}\:\cos\vartheta, \end{equation} to express $\mathrm{d} X, \mathrm{d} Y, \mathrm{d} Z$ in terms of $\mathrm{d} \bar{r}, \mathrm{d}\vartheta, \mathrm{d}\varphi$, and substitute the result into (\ref{CartesianMinkowski}). By noting that we have chosen $\bar{r}$ so that, at the specific spacetime event where we are evaluating the metric, $\bar{r}=r$, while, for small radial coordinate shifts around that location, we have the relation (\ref{barRshift}), we can now write down the same metric in the coordinates $T, r, \vartheta,\varphi$, namely as \begin{equation} \mathrm{d} s^2 = -c^2\left[ 1-\beta(r)^2 \right] \mathrm{d} T^2+2c\beta(r)\mathrm{d} r\:\mathrm{d} T +\mathrm{d} r^2+r^2\mathrm{d}\Omega^2.
\label{preMetric} \end{equation} Since we can repeat that local procedure at any event in our spacetime, this result is our general form of the metric, for all values of $r$. This, then, is the promised simplification: By exploiting the symmetries of our solutions as well as the properties of infalling observers, we have reduced our metric to a simple form with no more than one unknown function of one variable, namely $\beta(r)$. So far, what I have presented is no more than a long-form version of the initial steps of Visser's heuristic derivation of the Schwarzschild metric.\cite{Visser2005} In the next section, we will deviate from Visser's derivation. \section{$\beta(r)$ from tidal deformations} \label{TidalSection} In the previous section, we exploited symmetries and Einstein's equivalence principle. In order to determine $\beta(r)$, we need to bring in additional information, namely the Einstein equations, which link the matter content with the geometry of spacetime. For our solution, we only aim to describe the spacetime metric outside whatever spherically-symmetric matter distribution resides in (or around) the center of our spherical symmetry. That amounts to applying the {\em vacuum Einstein equations}. More specifically, we use a particularly simple and intuitive form of the vacuum Einstein equations, which can be found in a seminal article by Baez and Bunn:\cite{BaezBunn2005} Consider a locally flat free-fall system around a specific event $\cal E$, with a time coordinate $\tau$, local proper time, where the event we are studying corresponds to $\tau=0$. In that system, describe a small sphere of freely floating test particles, which we shall call a {\em test ball}. The particles need to be at rest relative to each other at $\tau=0$. Let the volume of the test ball be $V(\tau)$. Then the vacuum version of Einstein's equations states that \begin{equation} \left.\frac{\mathrm{d}^2 V}{\mathrm{d}\tau^2}\right|_{\tau=0} = 0. \label{EinsteinVacuum} \end{equation} In words: If there is no matter or energy inside, the volume of such a test ball remains constant to first order (those were our initial conditions) and to second order (by eq.~[\ref{EinsteinVacuum}]). If you are familiar with Wheeler's brief summary of Einstein's equations, ``spacetime grips mass, telling it how to move'' and ``mass grips spacetime, telling it how to curve'',\cite{Wheeler1990} you will immediately recognise that this is a specific way for the structure of spacetime to tell the test ball particles how to move. The calculation later in this section provides the second part: It will amount to using (\ref{EinsteinVacuum}) to determine the structure of spacetime, namely the still missing function $\beta(r)$, and that is the way for mass (in this case, for the absence of mass) to tell spacetime how to curve. Note that equation (\ref{EinsteinVacuum}) also holds true in Newtonian gravity. So in a way, this version of Einstein's equation can be seen as a second-order extension of the usual Einstein equivalence principle: Ordinarily, the equivalence principle is a statement about physics in the absence of tidal forces. Equation (\ref{EinsteinVacuum}) adds to this that the lowest-order correction for tidal forces in a freely falling reference frame is that specified by Newtonian gravity. This makes sense, since by going into a free-fall frame, and restricting our attention to a small spacetime region, we have automatically created a weak-gravity situation.
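Indeed, the Newtonian statement can be checked directly with a short computer-algebra script (an aside we add for the reader, not part of Baez and Bunn's argument; the symbol names are ours). In Newtonian gravity, the relative acceleration of nearby test particles is governed by the Hessian of the potential, and the fractional volume acceleration of a small test ball is minus its trace, which vanishes in vacuum:

\begin{verbatim}
import sympy as sp

G, M = sp.symbols('G M', positive=True)
x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

Phi = -G*M/r   # Newtonian potential outside a spherical mass
# Tidal tensor: the relative acceleration of nearby test particles
# is minus the Hessian of Phi applied to their separation vector.
coords = (x, y, z)
hessian = sp.Matrix(3, 3,
                    lambda i, j: sp.diff(Phi, coords[i], coords[j]))
print(sp.simplify(hessian.trace()))   # 0: Laplace's equation in vacuum
\end{verbatim}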
In such a weak-gravity situation, tidal corrections are approximately the same as those described by Newton. This argument can serve as a heuristic justification of (\ref{EinsteinVacuum}). In 2017, Kassner made use of the Baez-Bunn form of Einstein's vacuum equation to derive the Schwarzschild solution, starting from what we have encountered as the static form of the metric (\ref{StaticForm}).\cite{Kassner2017} We follow the same general recipe, but using the infalling coordinates introduced in section \ref{Sec:InfallingObservers}, which makes our derivation even simpler. Consider five test particles in a small region of space. Let the motion of each be the same as for the local representative from our coordinate-defining family of infalling observers. We take the central particle $C$ to be at radial coordinate value $r=R$ at the time of the snapshot shown in Fig.~\ref{TestParticlesOutside}. The other four are offset relative to the central particle: As described in the local inertial system that is co-moving with the central particle, one of the particles is shifted by $\Delta l$ upwards in the radial direction, another downward, while two of the particles are offset orthogonally by the same distance. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth]{01-free-fall-particles.pdf} \caption{Five test particles in our spherically-symmetric spacetime} \label{TestParticlesOutside} \end{center} \end{figure} The $\Delta l$ is meant to be infinitesimally small, so while Fig.~\ref{TestParticlesOutside} is of course showing a rather large $\Delta l$ so as to display the geometry of the situation more clearly, we will in the following only keep terms linear in $\Delta l$. Consider a generic particle, which moves as if it were part of our coordinate-defining family of infalling observers, and which at the time $T_0$ is at $r=r_0$. By a Taylor expansion, that particle's subsequent movement is given by \begin{equation} r(T) = r_0 + \frac{\mathrm{d} r}{\mathrm{d} T}(T_0) \cdot \Delta T +\frac12 \frac{\mathrm{d}^2 r}{\mathrm{d} T^2}(T_0) \cdot \Delta T^2, \label{TaylorREvo} \end{equation} where $\Delta T\equiv T-T_0$. We know from (\ref{betaDefinition}) that the derivative in the linear term can be expressed in terms of $\beta(r)$; by the same token, \begin{equation} \frac{\mathrm{d}^2 r}{\mathrm{d} T^2} = -c\frac{\mathrm{d}\beta}{\mathrm{d} T}=-c\beta' \frac{\mathrm{d} r}{\mathrm{d} T} = c^2\beta\cdot\beta', \end{equation} where the prime denotes differentiation of $\beta$ with respect to its argument. Since, in the following, the product of $\beta$ and its first derivative will occur quite often, let us introduce the abbreviation \begin{equation} B(r) \equiv \beta(r)\cdot\beta'(r). \label{BigBDefinition} \end{equation} With these results, we can rewrite the Taylor expansion (\ref{TaylorREvo}) as \begin{equation} r(T) = r_0 -c\beta(r_0)\cdot\Delta T + \frac12 c^2B(r_0)\cdot\Delta T^2. \label{RadialOrbitTime} \end{equation} In order to find $r_C(T)$ for our central particle, we simply insert $r_0=R$ into that expression. If, on the other hand, we want to write down the time evolution for particles $U$ and $D$, let us denote it by $r_{U,D}(T)$, we need to evaluate the expression (\ref{RadialOrbitTime}) at the initial location $r_0=R\pm\Delta l$. Since $\Delta l$ is small, we can make a Taylor expansion of $\beta(r)$ and its derivative around $r=R$, and neglect everything beyond the terms linear in $\Delta l$.
The result is \begin{multline} r_{U,D}(T)=R \pm\Delta l-c\left[ \beta(R)\pm\beta'(R)\Delta l \right]\Delta T \\[0.2em] +\frac{c^2}{2}\big[ B(R)\pm B'(R)\Delta l \big]\Delta T^2. \end{multline} In consequence, the distance between the upper and lower particle, $d_{\parallel}(T)\equiv r_U(T)-r_D(T),$ changes over time as \begin{equation} d_{\parallel}(T) = 2\Delta l\left[ 1-c\beta'(R)\Delta T+\frac12c^2 B'(R)\Delta T^2 \right]. \label{dParallel} \end{equation} Next, let us look at how the distance between the particles $L$ and $R$ changes over time. The initial radial coordinate value for each of the particles is \begin{equation} r(T_0) = \sqrt{R^2+\Delta l^2}=R\left[1+\frac12\left(\frac{\Delta l}{R}\right)^2\right]\approx R, \end{equation} that is, equal to $R,$ as long as we neglect any terms that are higher than linear in $\Delta l$. In consequence, $r_{L,R}(T)$ is the same function as for our central particle, given by eq.~(\ref{RadialOrbitTime}) with $r_0=R$. The transversal (in Fig.~\ref{TestParticlesOutside}: horizontal) distance $d_{\perp}(T)$ between the particles $L$ and $R$ changes in proportion to the radius value, \begin{align} d_{\perp}(T) &= 2\Delta l\cdot\frac{r_{L}(T)}{R} \nonumber \\ &=2\Delta l\left[1-\frac{c\beta(R)}{R}\Delta T+\frac{c^2}{2}\frac{B(R)}{R}\Delta T^2\right]. \label{dPerp} \end{align} With these preparations, consider the vacuum Einstein equation (\ref{EinsteinVacuum}) for the volume of a test ball. Initially, our particles $C, U, D, L, R$ define a circle, which is deformed into an ellipse. By demanding rotational symmetry around the radial direction, we can construct the associated ellipsoid, which is initially a spherical surface. That ellipsoid has one axis in the radial direction, whose length is $d_{\parallel}(T)$, and two axes that are transversal and each have the length $d_{\perp}(T)$. But that ellipsoid is not quite yet the test ball we need. After all, the particles of the test ball need to be at rest initially, at time $T_0$, in the co-moving system defined by the central particle $C$. Our defining particles are not, as the terms linear in $\Delta T$ in both (\ref{dParallel}) and (\ref{dPerp}) show; the coefficients of $\Delta T$ correspond to the particles' initial velocities. In order to define our test ball, we need to consider particles at the same location, undergoing the same acceleration, but which are initially at rest relative to the central particle $C$. We could go back to the drawing board, back to Fig.~\ref{TestParticlesOutside}, make a more general Ansatz that includes initial velocities which measure the divergence of the motion of our test ball particles from that of the infalling-observer particles, and repeat our calculation while including those additional velocity terms. But there is a short-cut. The only consequence of those additional velocity terms will be to change the terms linear in $\Delta T$ in equations (\ref{dParallel}) and (\ref{dPerp}). And we already know the end result: We will choose the additional terms so as to cancel the terms linear in $\Delta T$ in the current versions of (\ref{dParallel}) and (\ref{dPerp}). But by that reasoning, we can skip the explicit steps in between, and write down the final result right away. The time evolution of the radial-direction diameter of our test ball, let us call it $L_{\parallel}(T)$, must be the same as $d_{\parallel}(T)$, but without the term linear in $\Delta T$ (readers who want to avoid the Taylor-expansion bookkeeping can check expansions such as (\ref{dParallel}) symbolically, as sketched below).
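The following computer-algebra fragment (our own aside; the symbol names are illustrative) reproduces the expansion (\ref{dParallel}) from (\ref{RadialOrbitTime}):

\begin{verbatim}
import sympy as sp

c, R, dl, dT = sp.symbols('c R Deltal DeltaT', positive=True)
beta = sp.Function('beta')   # beta(r)
B = sp.Function('B')         # B(r) = beta(r)*beta'(r)

def orbit(r0):
    # eq. (RadialOrbitTime): r(T) = r0 - c beta(r0) dT + (c^2/2) B(r0) dT^2
    return r0 - c*beta(r0)*dT + sp.Rational(1, 2)*c**2*B(r0)*dT**2

# expand the orbits of the upper and lower particles to first order in dl
rU = orbit(R + dl).series(dl, 0, 2).removeO()
rD = orbit(R - dl).series(dl, 0, 2).removeO()

# rU - rD = 2 dl (1 - c beta'(R) dT + (c^2/2) B'(R) dT^2), as in (dParallel)
print(sp.expand(rU - rD))
\end{verbatim}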
Likewise, the time evolution $L_{\perp}(T)$ of the two transversal diameters must be equal to $d_{\perp}(T)$, but again without the term linear in $\Delta T$. The result is \begin{align} L_{\parallel}(T) &= 2\Delta l \left[1+\frac12c^2B'(R)\Delta T^2\right] \\ L_{\perp}(T) &= 2\Delta l \left[1+\frac{c^2}{2}\frac{B(R)}{R}\Delta T^2\right]. \end{align} Thus, our test ball volume is \begin{align} V(T) &= \frac{\pi}{6}L_{\parallel}(T) L_{\perp}^2(T) \\ &= \left.\frac{4\pi}{3}\Delta l^3\left[1+{c^2}\left( \frac{B(r)}{r} + \frac{B'(r)}{2}\right)\Delta T^2\right]\right|_{r=R} \end{align} For the second time derivative of $V(T)$ to vanish at the time $T=T_0$, we must have \begin{equation} \frac{B(r)}{r} + \frac{B'(r)}{2}= 0 \label{VolumeConditionR} \end{equation} for all values of $r$. This is readily solved by the standard method of separation of variables: We can rewrite (\ref{VolumeConditionR}) as \begin{equation} \frac{\mathrm{d} B}{B} = -2\frac{\mathrm{d} r}{r}, \end{equation} which is readily integrated to give \begin{equation} \ln(B) = -\ln(r^{2}) + const. \;\; \Rightarrow \;\; \ln(Br^2) = C', \end{equation} with a constant $C'$, which upon taking the exponential gives us \begin{equation} Br^2= C, \label{BSolution} \end{equation} with a constant $C$. Note that the constant $C$ can be negative --- there is no reason the constant $C'$ needs to be real; only our eventual function $B(r)$ needs to be that, and it is clear that (\ref{BSolution}) satisfies the differential equation (\ref{VolumeConditionR}) for any constant $C$, positive, zero, or negative. By (\ref{BigBDefinition}), the solution (\ref{BSolution}) corresponds to the differential equation \begin{equation} \beta(r)\beta'(r) = \frac{C}{r^2} \end{equation} for our function $\beta$; with another separation of variables, we can re-write this as \begin{equation} \beta\cdot\mathrm{d}\beta=C\frac{\mathrm{d} r}{r^2}. \end{equation} Both sides are readily integrated up; we can solve the result for $\beta(r)$ and obtain \begin{equation} \beta(r) = \sqrt{ -\frac{2C}{r} +2D }, \end{equation} where $D$ is the second integration constant, and where we have chosen the proper sign, since we know that $\beta(r)>0$. That brings us to the last step: The requirement that, for large values of $r$, the description provided by our solution should correspond to the results from Newtonian gravity. First of all, we note that our initial condition for the infalling observers, which had those observers start out at zero speed at infinity, means that we must choose $D=0$. Then, as we would expect, $\beta(r)$ for large values of $r$ becomes very small, corresponding to small speeds. But at slow speeds, time and length intervals as measured by the infalling observer will become arbitrarily close to time and length intervals as measured by an observer at rest in our static coordinate system at constant $r$, using the static time coordinate ${t}$. As is usual, we identify these coordinates with those of an approximately Newtonian description. In that description, the radial velocity is \begin{equation} v(r) = \sqrt{\frac{2GM}{r}}, \end{equation} which follows directly from energy conservation for the sum of each observer's kinetic and Newtonian-gravitational potential energy. This fixes the remaining integration constant as \begin{equation} C = -\frac{GM}{c^2}, \end{equation} and the final form of our function $\beta(r)$ becomes \begin{equation} \beta(r) = \sqrt{\frac{2GM}{rc^2}}. 
\end{equation} Inserting this result in (\ref{preMetric}), we obtain the metric \begin{equation} \mathrm{d} s^2 = -c^2\left[ 1-\frac{2GM}{rc^2} \right]\mathrm{d} T^2+2\sqrt{\frac{2GM}{r}}\mathrm{d} r\:\mathrm{d} T+\mathrm{d} r^2+r^2\mathrm{d}\Omega^2. \label{GPMetric} \end{equation} This is known as the Gullstrand-Painlev\'e version of the Schwarzschild metric.\cite{Martel2001,Visser2005,HamiltonLisle2008} A last transformation step brings us back to the traditional Schwarzschild form. Recall our discussion in sec.~\ref{SymmetriesCoordinates}, which led up to the explicitly static form (\ref{StaticForm}) of the metric. The main difference between our current form and the static version is the mixed term containing $\mathrm{d} r\:\mathrm{d} T$ in (\ref{GPMetric}). Everything else already has the required shape. Inserting the Ansatz \begin{equation} \mathrm{d} T = \mathrm{d} t + \xi(r) \mathrm{d} r \end{equation} into the metric (\ref{GPMetric}), it is straightforward to see that the mixed term vanishes if and only if our transformation is \begin{equation} \mathrm{d} T = \mathrm{d} t +\frac{\sqrt{2GM/r}}{c^2\left(1-\frac{2GM}{rc^2}\right)}\mathrm{d} r. \label{TtTrafo} \end{equation} Substitute this into (\ref{GPMetric}), and the result is the familiar form of the Schwarzschild metric in Schwarzschild's original coordinates $t,r,\vartheta,\varphi$, \begin{equation} \mathrm{d} s^2 = -c^2\left(1-\frac{2GM}{c^2 r} \right)\mathrm{d} t^2 + \frac{\mathrm{d} r^2}{\left(1-\frac{2GM}{c^2 r} \right)} + r^2\mathrm{d}\Omega^2. \end{equation} \section{Conclusion} Using coordinates adapted to the symmetries, we were able to write down the spherically symmetric, static spacetime metric. On this basis, and using the family of infalling observers that is characteristic for the Gullstrand-Painlev\'e solution, we wrote down the metric in the form (\ref{preMetric}), with a single unknown function $\beta(r)$. From the simplified form (\ref{EinsteinVacuum}) of the vacuum Einstein equations, as applied to a test ball in free fall alongside one of our family of observers, we were able to determine $\beta(r)$, up to two integration constants. By using the Einstein equation, we escape the restrictions imposed on simplified derivations by Gruber et al.\cite{Gruber1988} From the initial condition for our infalling observers, as well as from the Newtonian limit at large distances from our center of symmetry, we were able to fix the values of the two integration constants. Our derivation does not require knowledge of advanced mathematical concepts beyond the ability to properly interpret a given metric line element $\mathrm{d} s^2$. Even our analysis of tidal effects proceeds via a simple second-order Taylor expansion, leading to differential equations for $\beta(r)$ that are readily solved using two applications of the method of separation of variables. What is new about the derivation presented here is the combination of the Baez-Bunn equations with the infalling coordinates typical for the Gullstrand-Painlev\'e form of the metric --- this combination is what, in the end, makes our derivation particularly simple. In turn, this simplicity is what should make the derivation particularly useful in the context of teaching general relativity in an undergraduate setting. The derivation proceeds close to the physics, and gives ample opportunity to discuss interesting properties of Einstein's theory of gravity.
Students who are presented with this derivation, either as a demonstration or as a (guided) exercise, will come to understand how symmetries determine the form of a metric, which deductions can be made from Einstein's equivalence principle, and, last but not least, why we need to go beyond the equivalence principle and consider tidal forces to completely define our solution. \section*{Acknowledgements} I would like to thank Thomas M\"uller for helpful comments on an earlier version of this text.
\section{Introduction} \label{sec:intro} \input{sections/intro} \section{Technical Overview} \label{sec:approarch} \input{sections/approach} \section{EarthQube} \label{sec:system} \input{sections/system} \section{Demonstration} \label{sec:demo} \input{sections/demo} \begin{acks} \small{This work is funded by the European Research Council through the ERC-2017-STG BigEarth Project (Grant 759764) and the German Ministry for Education \& Research as BIFOLD - Berlin Institute for the Foundations of Learning \& Data (ref. 01IS18025A and 01IS18037A).} \end{acks} \bibliographystyle{ACM-Reference-Format} \subsection{The BigEarthNet Archive} \label{sec:bigearthnet} The BigEarthNet archive\footnote{\url{https://bigearth.net/}} is a large-scale benchmark archive consisting of 590,326 pairs of Sentinel-1 and Sentinel-2 satellite images acquired from $10$ European countries (i.e., Austria, Belgium, Finland, Ireland, Kosovo, Lithuania, Luxembourg, Portugal, Serbia, Switzerland) between June 2017 and May 2018~\cite{sumbul_bigearthnet-mm_2021}. The Sentinel-2 satellite constellation acquires multispectral images with 13 spectral bands and varying spatial resolutions. BigEarthNet excludes the 10th band because it does not embody surface information, thus keeping 12 bands per image. Each BigEarthNet Sentinel-2 image is a patch of: (i) $120\times120$ pixels for 10m bands; (ii) $60\times60$ pixels for 20m bands; and (iii) $20\times20$ pixels for 60m bands. The Sentinel-1 satellite constellation acquires synthetic-aperture radar data. BigEarthNet Sentinel-1 images contain dual-polarized information channels (VV and VH) with a spatial resolution of 10m and are based on the interferometric wide swath mode, which is the main acquisition mode over land. Each pair of images in BigEarthNet is annotated with multi-labels provided by the CLC map of 2018 based on its thematically most detailed Level-3 class nomenclature. For details about BigEarthNet, we refer the reader to~\cite{sumbul_bigearthnet-mm_2021}. \subsection{The MiLaN Approach} \label{sec:cbir} \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{figures/cbir.png} \caption{Content-based Image Retrieval in EarthQube.} \label{fig:cbir-layout} \end{figure} Given a query image, we aim to retrieve the most similar satellite images to the query from huge data archives in a highly time-efficient manner. For example, given an image that depicts a beach, we want to find images of similar beaches in different locations, as shown in Figure~\ref{fig:cbir-layout}. To enable image indexing and scalable search, we apply deep hashing to BigEarthNet images using our recent metric learning-based deep hashing network (MiLaN)~\citep{MiLAN}. MiLaN simultaneously learns: (i) a semantic-based metric space for effective feature representation; and (ii) compact binary hash codes for scalable search. To train MiLaN, we use three loss functions: (i) the triplet loss function to learn a metric space where semantically similar images are close to each other and dissimilar ones are separated; (ii) the bit balance loss function that forces the hash codes to have a balanced number of binary values (i.e., each bit has a $50\%$ chance to be activated) and makes the different bits independent of each other; and (iii) the quantization loss function that mitigates the performance degradation of the generated hash codes through binarization on the deep neural network outputs.
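As an illustration of how these three objectives fit together, consider the following PyTorch-style sketch (our own simplified rendering with hypothetical names, not the authors' released implementation):

\begin{verbatim}
import torch
import torch.nn.functional as F

def milan_losses(h_anchor, h_pos, h_neg, margin=1.0):
    # h_*: real-valued network outputs in [-1, 1] that are later
    # binarized into hash codes; shape (batch, n_bits)
    # (i) triplet loss: similar images close, dissimilar ones separated
    triplet = F.triplet_margin_loss(h_anchor, h_pos, h_neg, margin=margin)
    # (ii) bit balance loss: each bit active ~50% of the time, i.e.
    # per-bit batch means near zero (a simplified stand-in)
    bit_balance = h_anchor.mean(dim=0).pow(2).sum()
    # (iii) quantization loss: push outputs toward the binary values +-1
    quantization = (h_anchor.abs() - 1.0).pow(2).mean()
    return triplet + bit_balance + quantization
\end{verbatim}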
As proven in~\citep{MiLAN}, the learned hash codes based on the above loss functions can efficiently characterize the complex semantics in satellite images. After obtaining the binary hash codes of the archive images, we generate a hash table that stores all images with the same hash code in the same hash bucket. Then, we perform image retrieval through hash lookups, i.e., we retrieve all images in the hash buckets that are within a small Hamming radius of the query image. In this demonstration, we integrate MiLaN into EarthQube, thus allowing users to perform fast image-based similarity search on the BigEarthNet data archive. \subsection{User Interface} \label{sec:interface} The visual interface of EarthQube is composed of a map rendering component (see Figure~\ref{fig:overall-layout-bigearthnet-portal}). EarthQube overlays various menus and panels on the map for easy configuration and hides them when not needed for better map navigation. Users perform operations on the map (e.g., zoom in/out) through mouse interactions. \vspace{0.1cm} \myparagraph{Query Panel.} Users can issue queries through the main search menu (left side of the map in Figure~\ref{fig:overall-layout-bigearthnet-portal}-1). Specifically, the \emph{coordinates} subsection allows users to define a geospatial area by choosing a shape (i.e., rectangle or circle) and manually typing the area coordinates. Alternatively, users can draw an arbitrary rectangle, circle, or polygon directly on the map. In addition, users can filter the data based on the acquisition date range, satellites, seasons, and labels (land cover classes). Users can control the labels using a switch button, which is initially turned on (i.e.,~no label-based filtering applies). Turning the button off provides complete control over the label filtering criteria, as shown in Figure~\ref{fig:overall-layout-bigearthnet-portal}-2. EarthQube groups the labels in a three-level hierarchy following the structure of the CLC land cover classes nomenclature. Furthermore, it supports three filtering operators: \emph{Some}, \emph{Exactly}, and \emph{At least \& more}. The \emph{Some} operator retrieves all relevant images that have at least one of the selected labels. For example, to retrieve images with forests, the user can select the Level-2 class \emph{Forest} that comprises three types of Level-3 forest labels (i.e., \emph{Broad-leaved}, \emph{Coniferous}, and \emph{Mixed}). The \emph{Exactly} operator returns images with the exact same labels as the selected ones. This can be useful when a user is looking for very specific information, e.g., finding all airports in a provided area. The \emph{At least \& more} operator retrieves images that have all the selected labels and potentially some additional ones. For example, if a user is looking for sea or ocean beaches located near coniferous forests, then she is mainly interested in the labels \emph{Coniferous forest}, \emph{Beaches, dunes, sands}, and \emph{Sea and ocean}. However, images with some additional labels, such as \emph{Bare rock} or \emph{Coastal lagoons}, could also be relevant. Overall, thanks to its expressive operators and the easy-to-follow hierarchical layout of the labels, EarthQube provides a powerful tool for querying EO data based on land cover classes. Finally, the last subsection of the query panel allows users to upload a BigEarthNet image and search for similar images in the archive using our deep hashing based index.
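To make the hash-lookup retrieval behind this similarity search (Section~\ref{sec:cbir}) concrete, the following sketch (hypothetical helpers; codes stored as Python integers) enumerates all buckets within a small Hamming radius of the query code and unions their contents:

\begin{verbatim}
from itertools import combinations

def hamming_ball(code, n_bits, radius):
    # yield every code within the given Hamming radius of `code`
    yield code
    for r in range(1, radius + 1):
        for bits in combinations(range(n_bits), r):
            flipped = code
            for b in bits:
                flipped ^= 1 << b
            yield flipped

def retrieve(hash_table, query_code, n_bits=128, radius=2):
    # union of all buckets whose code is within `radius` of the query
    results = []
    for c in hamming_ball(query_code, n_bits, radius):
        results.extend(hash_table.get(c, []))
    return results
\end{verbatim}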
\myparagraph{Map View.} The map displays the locations of the retrieved images as markers (zoomed-in view) and marker cluster groups (zoomed-out view). Markers have several features, such as hovering animations, tooltips, pop-ups, and pinpointers. Specifically, hovering over a marker changes its color and shows its labels in a tooltip, while clicking on the marker opens a pop-up that contains metadata. The pop-up also exposes a button that locates the image in the result panel that we describe next. Furthermore, the user can choose a set of markers to pinpoint on the map. Lastly, the bottom right of the screen shows a minimap (see Fig.~\ref{fig:cbir-layout}), which can be toggled on or off and allows users to keep an overall perspective even when they are zoomed into a particular area. As a next step in the visual exploration, we allow users to render RGB images directly on the map, as shown in Figure~\ref{fig:overall-layout-bigearthnet-portal}-3. \vspace{0.1cm} \myparagraph{Result Panel.} The result panel (right side of the map in Figure~\ref{fig:overall-layout-bigearthnet-portal}) presents metadata, additional features, and label statistics regarding the latest retrieval. It consists of two views: \emph{Image patches} and \emph{Label statistics}. The top of the panel in the \emph{Image patches} view shows the total number of image patches that match the query criteria. Furthermore, it allows users to enable image rendering on the map (up to 1000 images), download the names of the retrieved images as a plain text file, and add the current page range of images (up to 50) to the download cart. The cart allows users to combine images from different searches and download them together as a single collection. The window below displays the full list of images. Each image has a brief description and five buttons that allow the user to: (i) retrieve similar images, (ii) navigate to the image on the map, (iii) pinpoint the image, (iv) download the image as a zip, and (v) add the image to the download cart. The view \emph{Label statistics} summarizes the occurrence of land cover labels in the retrieved images, which is a unique feature of EarthQube. Specifically, as shown in Figure~\ref{fig:overall-layout-bigearthnet-portal}-4, it consists of a bar chart that shows the number of occurrences of each label present in the retrieval. To facilitate the identification of dominant land types in a given area, we map each label to a predefined color that is representative of the land cover type. To display the results of a similarity search, EarthQube opens two new tabs for the image patches and label statistics, respectively. These views are the same as described above, the only difference being that the image patches view displays the query image at the top in addition to the retrieved similar images (see Fig.~\ref{fig:cbir-layout}). \balance \subsection{System Architecture} EarthQube follows a three-tier architecture consisting of a data tier, a back-end server, and a user interface. As we discussed the user interface in Section~\ref{sec:interface}, here we focus on the remaining two tiers. \myparagraph{Data Tier.} EarthQube uses MongoDB as a database server to store four data collections: (i) metadata, (ii) image data, (iii) rendered images, and (iv) user feedback. The \emph{metadata} collection is central to EarthQube as it enables efficient search and retrieval of images based on their geospatial coordinates and other attributes.
Specifically, metadata documents have a \emph{location} attribute that represents the bounding rectangle of an image and a \emph{properties} attribute that encompasses other queryable image features, such as the image name, labels, season, and acquisition date. To improve query performance, we index the \emph{location} attribute using MongoDB's built-in 2D geohashing index. Furthermore, to improve the performance of label-based filtering, we map each (potentially multi-word) CLC label to an ASCII character, thereby avoiding the manipulation of long strings. The \emph{image data} collection stores the actual binary representations of the 12 bands of the BigEarthNet images. Each document has an \emph{image patch name} attribute that serves as the primary key and is automatically indexed by MongoDB. The \emph{rendered images} collection contains the binary representations of the rendered displayable images. We acquire those images by combining the RGB bands. Finally, the collection \emph{feedback} stores anonymous user-provided text feedback, such as public reactions and comments. \myparagraph{Back-end Server.} The back-end server provides the means to submit geospatial queries, filter the images based on different search criteria, and perform CBIR. To this end, EarthQube invokes different services that validate and process the user query. \subsection{Integrating with MiLaN} To provide the CBIR functionality, we infer a 128-bit binary hash code for each image in the BigEarthNet archive using MiLaN (see Section~\ref{sec:cbir}). EarthQube supports both querying by an existing archive image and by an external one. To perform a similarity search based on an archive image, we maintain an in-memory hash table that maps each image patch name to the corresponding binary code. For queries based on an external image, the deep learning model produces a binary code for the query on the fly. Given the binary code of the query image, EarthQube retrieves all images with binary codes within a small Hamming radius. Finally, the back-end server further processes the retrieved images before displaying them on the user interface.
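As a compact illustration of the data tier described in this section, a metadata document and a combined area-and-label query could look roughly as follows with pymongo (an illustrative sketch; the field values are hypothetical, the location is simplified to a point, and the real schema may differ):

\begin{verbatim}
from pymongo import MongoClient, GEO2D

meta = MongoClient()["earthqube"]["metadata"]
meta.create_index([("location", GEO2D)])   # MongoDB's 2d geohashing index

# hypothetical document for one image patch
meta.insert_one({
    "location": [13.40, 52.52],
    "properties": {
        "name": "S2B_MSIL2A_20170813_patch_10_54",  # image patch name
        "labels": ["A", "F"],            # CLC labels mapped to ASCII chars
        "season": "summer",
        "acquisition_date": "2017-08-13",
    },
})

# area query combined with 'Some' label filtering (at least one label)
hits = meta.find({
    "location": {"$geoWithin": {"$box": [[13.0, 52.0], [14.0, 53.0]]}},
    "properties.labels": {"$in": ["A", "F"]},
})
\end{verbatim}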
\section{Introduction} \label{intro} Perturbative QCD, based on small-coupling calculations, shows that the gluon density in the nuclear wave function rises quickly at small $x$ (large energy) and eventually saturates to preserve unitarity. The saturation of the gluon distribution is governed by a single scale, the so-called saturation scale $Q_s\gg \Lambda_{QCD}$, which sets the hard scale for perturbative calculations. This idea is formalized in the recently developed theory of the color glass condensate (CGC)\cite{McLerV,JalilKLW,JalilKMW1,IancuLM,Balit1,Kovch3}. \\ In the framework of the CGC, the problem of gluon production reduces to solving the classical Yang-Mills equations with a classical current describing the fast-moving sources. To calculate observables, one has to average over all possible configurations of the sources with a given statistical distribution. To probe the saturation regime, proton(deuteron)-nucleus collisions have been studied at RHIC, since they represent a clean probe of the nuclear wave function at high energy. The gluon production cross-section has been calculated analytically \cite{KovchM3,KovnW,KovchT1,DumitM1,BlaizGV1,GM}. In this case, the proton is a dilute object, whereas the nucleus is saturated. To do so, one linearizes the Yang-Mills equations with respect to the weak proton field, resumming all high-density effects in the nucleus. $k_t$-factorization holds and allows one to express the cross-section as a convolution in transverse momentum space of the proton and nucleus gluon distributions. Also, it has been shown that final state interactions are absent, and therefore no medium is produced. In nucleus-nucleus collisions, where both projectiles are in the saturation regime, numerical studies of gluon production at high energy, based on the framework of \cite{JKMW1,JKMW2} (see also \cite{GV}), have been performed \cite{KrasnV,Lappi1}, but no exact solutions have been found so far. Kovchegov was the first to consider the problem analytically; however, the formula he proposes is derived by conjecturing the absence of final state interactions, in addition to other assumptions \cite{KovAA}. Later, I. Balitsky calculated the first correction to the proton-nucleus result by expanding the gauge field symmetrically in powers of commutators of Wilson lines \cite{Balit2}.\\ In this work \cite{BlaizM}, we propose an analytic method, in the CGC framework, to calculate the cross-section for gluon production in nucleus-nucleus collisions. \section{The classical gauge field in nucleus-nucleus collisions} The Yang-Mills equations read \begin{equation}\label{Yang-Mills} D_\mu F^{\mu\nu}=J^{\nu}, \end{equation} where $J^\nu$ is a conserved current describing the fast-moving nuclei, A and B, moving respectively in the $+z$ and $-z$ directions. Their source densities $\rho_{_A}(x^+,{\boldsymbol x})\sim \delta(x^+)\rho_{_A}({\boldsymbol x})$ and $\rho_{_B}(x^-,{\boldsymbol x})\sim\delta(x^-)\rho_{_B}({\boldsymbol x})$ are confined near the light-cone. $x^+=(t+z)/\sqrt{2}$ and $x^-=(t-z)/\sqrt{2}$ are the light-cone variables, and ${\boldsymbol x}$ represents the transverse coordinate.
As we shall see, the relevant degrees of freedom are the Wilson lines: \begin{eqnarray} U({\boldsymbol x})&=&{\mathcal P} \exp\left(ig\int dz^+ \frac{1}{\partial^2_\perp}\rho_{_A}(z^+,{\boldsymbol x})\cdot T\right),\\ V({\boldsymbol x})&=&{\mathcal P} \exp\left(ig\int dz^- \frac{1}{\partial^2_\perp}\rho_{_B}(z^-,{\boldsymbol x})\cdot T \right), \end{eqnarray} for nucleus A and nucleus B, respectively. The case where one of the sources is weak (corresponding to proton-nucleus collisions) has been solved analytically; however, the scattering of two dense systems is still an unsolved problem. In our work, we present an iterative method to build the solution of the Yang-Mills equations for this problem, with the aim of capturing the essence of the physics in the first iteration, which, as we shall see, can be easily computed. For this purpose, the light-cone gauge $A^+=0$ is very convenient and leads to many simplifications. In this gauge the current can be constructed exactly, and the field immediately after the collision follows easily from the Yang-Mills equations. \\ The fields of the nuclei are not determined uniquely, because of the additional gauge degree of freedom in axial gauges. One simple configuration is $A^-_{_A}=-\frac{1}{\partial^2_\perp}\rho_{_A}(x^+,{\boldsymbol x})$, $A^i_{_A}=A^+_{_A}=0$, for nucleus A, and $A^{i}_{_B}=-\int dy^-V^\dag(y^-,{\boldsymbol x})\;\frac{\partial^i}{\partial^2_\perp}\rho_{_B}(y^-,{\boldsymbol x})$ for nucleus B. The field just after the collision is determined exactly for this initial condition: $A^i=UA^i_{_B}$ and $\partial^+A^-=\left(\partial^iU\right) A_{_B}^i$. The transverse pure gauge field of nucleus B is present all the way to $t=\infty$; this would lead to technical complications when computing the field in the forward light-cone. To avoid this, one can simplify further by removing this pure gauge field via a gauge rotation involving the gauge link $V$, i.e., ${\tilde A}^\mu\cdot T=V^\dag (A^\mu\cdot T)V-\frac{1}{ig}V^\dag \partial^\mu V$. The produced gauge field near the light-cone thus gets rotated, leading to \begin{equation} {\tilde A}^i= V(U-1)A^i_{_B}\equiv{\tilde \alpha}_0^i, \;\;\;\partial^+{\tilde A}^-=V(\partial^j U)A^j_{_B}\equiv{\tilde \beta}_0.\label{beta0VU} \end{equation} Having the exact field produced immediately after the collision of two heavy nuclei, one can think of solving the equations of motion iteratively in powers of this initial field. Let us define an expansion in powers of the initial fields ${\tilde \alpha}_{_0}$ and ${\tilde \beta}_{_0}$: \begin{equation} {\tilde A}^\mu=\sum_{n=0}^{\infty}{\tilde A}^\mu_{(n)}\;.\label{expan} \end{equation} In this case the zeroth order is simply obtained by gauge-rotating the fields $A^-_{_A}$ and $ A^{i}_{_B}$ of the nuclei, \begin{equation} {\tilde A}^\mu_{(0)}=-\delta^{\mu-}V\frac{1}{\partial^2_\perp}\rho_{_A}-\delta^{\mu+}\frac{1}{\partial^2_\perp}\rho_{_B}\;.\label{Ain} \end{equation} Note that we have generated a $+$ component of the gauge field. Strictly speaking, we switch to the ${\tilde A}^+=-\frac{1}{\partial^2_\perp}\rho_{_A}$ gauge, which reduces to ${\tilde A}^+=0$ in the forward light-cone since the source has its support only on the light-cone, $x^+=0$.
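As a numerical aside (ours, not part of the original calculation), path-ordered exponentials such as those defining $U$ and $V$ can be approximated by an ordered product of matrix exponentials over small longitudinal steps; the following sketch uses a toy SU(2) example and one common ordering convention:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def wilson_line(alpha_slices, g=1.0, dz=0.1):
    # alpha_slices: Hermitian color matrices standing in for
    # (1/partial_perp^2) rho(z_k, x) . T at successive z_k
    U = np.eye(alpha_slices[0].shape[0], dtype=complex)
    for a in alpha_slices:        # ordered along the light-cone direction
        U = expm(1j*g*dz*a) @ U   # later slices multiply from the left
    return U

# toy SU(2) example with T^a = sigma^a / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U = wilson_line([0.3*sx/2, 0.7*sz/2, -0.2*sx/2])
print(np.allclose(U.conj().T @ U, np.eye(2)))   # unitarity check: True
\end{verbatim}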
The equations of motion at first order read \begin{eqnarray} &&\partial^+(\partial_\mu{\tilde A}^\mu_{(1)})=0,\label{YMtlin+}\\ && \square {\tilde A}^-_{(1)}-\partial^-\partial_\mu{\tilde A}^\mu_{(1)}=0,\label{YMtlin-}\\ &&\square {\tilde A}^i_{(1)}-\partial^i\partial_\mu{\tilde A}^\mu_{(1)}=0,\label{YMtlint} \end{eqnarray} which are solved in Fourier space by \begin{eqnarray} &&-q^2{\tilde A}^i_{(1)}(q)=-2 \left(\delta^{ij}-\frac{q^iq^j}{2q^+q^-}\right){\tilde \alpha}^j_{_0}({\boldsymbol q})-2i\frac{q^i}{2q^+q^-}{\tilde \beta}_{_0}({\boldsymbol q}),\nonumber\\ &&-q^2{\tilde A}^-_{(1)}(q)=-\frac{2i}{q^+}{\tilde \beta}_{_0}({\boldsymbol q}).\label{res2} \end{eqnarray} \section{Gluon production} The spectrum of produced gluons is defined as follows: \begin{equation} (2\pi)^32E\frac{dN}{d^3{\bf q}}=\sum\limits_\lambda \langle |{\mathcal M}_\lambda|^2\rangle, \end{equation} where $\lambda$ is the gluon polarization. The symbol $\langle...\rangle$ stands for the average over the color sources $\rho_{_A}$ and $\rho_{_B}$\cite{GLV}. The amplitude ${\mathcal M}_\lambda$ is related to the classical gauge field by the reduction formula \begin{equation} {\mathcal M}_\lambda=\lim_{q^2\rightarrow 0} q^2{\tilde A}^i(q)\epsilon^i_\lambda(q) ,\label{amp} \end{equation} where $ \epsilon^i_\lambda(q)$ is the polarization vector of the gluon and $q$ its four-momentum. In the axial gauge $A^+=0$, only the transverse components of the field contribute, and the sum over polarization states is done with the help of the relation $\sum\limits_\lambda \epsilon^i_\lambda(q)\epsilon^{\ast j}_\lambda(q)=\delta^{ij}$. Note that, for on-shell gluons ($2q^+q^--{\boldsymbol q}^2=0$), the condition of transversality $q_\mu M^\mu(q)=0$ is fulfilled, where $ M^\mu\equiv q^2{\tilde A}^\mu(q)$ (an explicit check is sketched at the end of this section). By inserting in (\ref{amp}) the explicit expression of ${\tilde A}^i_{(1)}(q)$ given in Eq.~(\ref{res2}), we get \begin{equation} {\mathcal M}_\lambda=-2 \left(\epsilon_\lambda^j-\frac{\epsilon_\lambda\cdot{\boldsymbol q}}{{\boldsymbol q}^2}q^j\right){\tilde \alpha}^j_{_0}({\boldsymbol q})-2i\frac{\epsilon_\lambda\cdot{\boldsymbol q}}{{\boldsymbol q}^2}{\tilde \beta}_{_0}({\boldsymbol q}).\label{res3} \end{equation} This allows us to write the gluon spectrum in the following compact form \cite{BlaizM}: \begin{equation} 4\pi^3E\frac{dN}{d^3{\bf q}}=\frac{1}{\;{\boldsymbol q}^2}\langle |{\boldsymbol q}\times \tilde {\boldsymbol \alpha}_{_0}({\boldsymbol q})|^2 +|{\tilde \beta}_{_0}({\boldsymbol q})|^2\rangle.\label{resN} \end{equation} Because of the color structure of the fields, it is not possible to write Eq.~(\ref{resN}) in a $k_t$-factorized form in the case where the two nuclei are in the saturation regime. In the case where one of the nuclei is dilute (as would be the case for a proton), one can expand Eq.~(\ref{resN}) to first order in the weak source, and we recover the well-known $k_t$-factorization formula for proton-nucleus collisions \cite{BlaizGV1}.
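As announced above, the transversality condition can be verified explicitly from (\ref{res2}) with a short computer-algebra script (our own notation; we assume the mostly-plus light-cone convention $q_\mu M^\mu = -q^+ M^- - q^- M^+ + q^i M^i$, and the overall sign of $M^\mu$ is irrelevant for the check):

\begin{verbatim}
import sympy as sp

qp, qm = sp.symbols('qplus qminus', positive=True)   # q^+, q^-
q1, q2 = sp.symbols('q1 q2', real=True)
a1, a2, b = sp.symbols('alpha1 alpha2 beta')  # tilde alpha^j, tilde beta

q = [q1, q2]; a = [a1, a2]
qa = q1*a1 + q2*a2

# components of M^mu read off from eq. (res2)
Mi = [-2*(a[i] - q[i]*qa/(2*qp*qm)) - 2*sp.I*q[i]*b/(2*qp*qm)
      for i in range(2)]
Mminus = -2*sp.I*b/qp
Mplus = 0   # axial gauge A^+ = 0

expr = -qp*Mminus - qm*Mplus + q1*Mi[0] + q2*Mi[1]
# impose the on-shell condition 2 q^+ q^- = q_perp^2
print(sp.simplify(expr.subs(qm, (q1**2 + q2**2)/(2*qp))))   # 0
\end{verbatim}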
\section{Introduction} \label{sec:intro} Humans excel at inattentively predicting others' actions and movements. This is key to effectively engaging in interactions with other people, driving a car, or walking across a crowd. Replicating this ability is imperative in many applications like assistive robots, virtual avatars, or autonomous cars~\cite{andrist2015look, rudenko2020human}. Many prior works conceive Human Motion Prediction (HMP) from a deterministic point of view, forecasting a single sequence of body poses, or \textit{motion}, given past poses, usually represented with skeleton joints~\cite{lyu20223d}. However, humans are spontaneous and unpredictable creatures by nature, and this deterministic interpretation does not fit contexts where anticipating all possible outcomes is crucial. Accordingly, recent works have attempted to predict the whole distribution of possible future motions (i.e., a \textit{multimodal} distribution) given a short observed motion sequence. We refer to this reformulation as stochastic HMP. Most prior stochastic works focus on predicting a highly \textit{diverse} distribution of motions. Such diversity has been traditionally defined and evaluated in the coordinate space~\cite{yuan2020dlow, dang2022diverse, mao2021gsps, salzmann2022motron, ma2022multiobjective}. This definition biases research toward models that generate fast and motion-divergent motions (see~\autoref{fig:intro}). Although there are scenarios where predicting low-speed diverse motion is important, this is discouraged by prior techniques. For example, in assistive robotics, anticipating \textit{behaviors} (i.e., actions) like whether the interlocutor is about to shake your hand or scratch their head might be crucial for preparing the robot's actuators on time~\cite{Barquero2022survey, palmero2022chalearn}. In a surveillance scenario, a foreseen noxious behavior might not differ much from a well-meaning one when considering only the poses along the motion sequence. We argue that this behavioral perspective is paramount to building next-generation stochastic HMP models. Moreover, results from prior diversity-centric works often suffer from a trade-off that has been persistently overlooked: predicted motion looks unnatural when observed following the motion of the immediate past. The strong diversity regularization techniques employed often produce abrupt speed changes or direction discontinuities. We argue that consistency with the immediate past is a requirement for prediction plausibility. To tackle these issues, we present \mbox{BeLFusion}{}\footnote{Code and pretrained models are publicly available at \url{https://barquerogerman.github.io/BeLFusion/}.} (\autoref{fig:intro}). By building a latent space where behavior is disentangled from poses and motion, diversity is detached from the traditional coordinate-based perspective and promoted from a behavioral viewpoint. The \textit{behavior coupler} ensures the predicted behavior is decoded into a smooth and realistic continuation of the ongoing motion. Thus, our predicted motions look more realistic than alternatives, which we assess through quantitative and qualitative analyses. \mbox{BeLFusion}{} is the first approach that exploits conditional latent diffusion models (LDM)~\cite{vahdat2021score, rombach2022high} for stochastic HMP, achieving state-of-the-art performance.
Specifically, \mbox{BeLFusion}{} combines the unique capabilities of LDMs to model conditional distributions with the convenient inductive biases recurrent neural networks (RNNs) have for HMP. To summarize, our main contributions are: (1) We propose \mbox{BeLFusion}{}, a method that generates predictions that are significantly more realistic and coherent with respect to the near past than prior works, while achieving state-of-the-art accuracy on the Human 3.6M~\cite{ionescu2013h36m} and AMASS~\cite{mahmood2019amass} datasets. (2) \mbox{BeLFusion}{} promotes diversity in a behavioral latent space. As a result, both low- (e.g., hand-waving, smoking) and long-range motions (e.g., standing up, sitting down) are equally encouraged. We show that this boosts the capacity to adapt the predictions' diversity to the determinacy of the motion context. (3) We improve and extend the usual evaluation pipeline for stochastic HMP. For the first time in this task, a \textit{cross-dataset evaluation} is conducted to assess the robustness against domain shifts, where the superior generalization capabilities of our method are clearly depicted. This setup, built with the AMASS~\cite{mahmood2019amass} dataset, showcases a broad range of actions performed by more than 400 subjects. (4) We propose two new metrics that provide complementary insights on the statistical similarities between (a) the predicted and the dataset-averaged absolute motion, and (b) the predicted and the intrinsic dataset diversity. We show that they are fairly correlated with our definition of realism. \section{Related work} \label{sec:relatedwork} \subsection{Human motion prediction} \textbf{Deterministic scenario.} Prior works on HMP define the problem as regressing a single future sequence of skeleton joints matching the immediate past, or \textit{observed} motion. This regression is usually modeled with autoregressive RNNs~\cite{fragkiadaki2015recurrent, jain2016structural, martinez2017human, gui2018adversarial, pavllo2018quaternet} or Transformers~\cite{aksan2021spatio, cai2020learning, martinez2021pose}. Graph Convolutional Networks are typically included as intermediate layers to model the dependencies among joints~\cite{li2020dynamic, mao2019learning, dang2021msr, li2021skeleton}. Some methods leverage Temporal Convolutional Networks~\cite{li2018convolutional, medjaouri2022hr} or a simple Multi-Layer Perceptron~\cite{guo2022mlp} to predict fixed-size sequences, achieving high performance. Recently, some works claimed the benefits of modeling sequences in the frequency space \cite{cai2020learning, mao2019learning, mao2020history}. However, none of these solutions can model multimodal distributions of future motions. \begin{figure*}[t!] \centering \includegraphics[width=15.5cm]{figures/arch_v3.pdf} \vspace{-0.2cm} \caption{\mbox{BeLFusion}{}'s architecture. A latent diffusion model conditioned on an encoding $c$ of the observation, $\mathbf{X}$, progressively denoises a sample from a zero-mean unit-variance multivariate normal distribution into a behavior code. Then, the behavior coupler $\mathcal{B}_{\bvaeDecParams}$ decodes the prediction by transferring the sampled behavior to the target motion, $\mathbf{x}_{m}$.
In our implementation, $f_{\Phi}$ is a conditional U-Net with cross-attention, and $h_{\vaeObsEncParams}$, $g_{\bvaeXmotionEncParams}$, and $\mathcal{B}_{\bvaeDecParams}$ are one-layer recurrent neural networks.} \label{fig:main_arch} \vspace{-0.2cm} \end{figure*} \textbf{Stochastic scenario.} To fill this gap, other methods that predict multiple futures for each observed sequence were proposed. Most of them use a generative approach to model the distribution of possible futures. The most popular generative models for HMP are generative adversarial networks (GANs) \cite{barsoum2018hpgan, kundu2019bihmpgan} and variational autoencoders (VAEs) \cite{walker2017theposeknows, yan2018mtvae, cai2021unified, mao2021gsps}. These methods often include diversity-promoting losses in order to predict a high variety of motions \cite{mao2021gsps}, or incorporate explicit techniques for diverse sampling~\cite{yuan2020dlow, dang2022diverse}. This diversity is computed with the raw coordinates of the predicted poses. We argue that, as a result, the race for diversity has promoted motions drifting toward extremely varied poses very early in the prediction. Most of these predictions are neither realistic nor plausible within the context of the observed motion. Moreover, prior works neglect situations where a diversity of behaviors, which can sometimes be subtle, is important. We address this by implicitly enforcing such diversity in a behavioral latent space. \textbf{Semantic human motion prediction.} Few works have attempted to leverage semantically meaningful latent spaces for stochastic HMP~\cite{yan2018mtvae, liu2021aggregated, gu2022learning}. For example, \cite{gu2022learning} exploits disentangled motion representations for each part of the body to control the HMP. \cite{yan2018mtvae} proposes to add a sampled latent code to the observed encoding to transform it into a prediction encoding. This inductive bias helps the network disentangle a motion code from the observed poses. However, the strong assumption that a simple arithmetic operation can map both sequences limits the expressiveness of the model. Although not specifically focused on HMP, \cite{blattmann2021behavior} proposes an adversarial framework to disentangle a behavioral encoding from a sequence of poses. The extracted behavior can then be transferred to any initial pose. In this paper, we propose a generalization of such a framework that transfers behavior to ongoing movements. Our method exploits this disentanglement to promote behavioral diversity in HMP. \subsection{Diffusion models} Denoising diffusion probabilistic models aim at learning to reverse a Markov chain of $M$ diffusion steps (usually $M>100$) that slowly adds random noise to the target data samples~\cite{sohl2015deep, ho2020denoising}. For conditional generation, the most common strategy consists in applying cross-attention to the conditioning signal at each denoising timestep~\cite{dhariwal2021diffusionbeatsgans}. Diffusion models have achieved impressive results in fields such as video generation, inpainting, and anomaly detection~\cite{yang2022diffusionsurvey}. In a more similar context, \cite{rasul2021timegrad, tashiro2021csdi} use diffusion models for time series forecasting and imputation. \cite{gu2022stochastic} recently presented a diffusion model for trajectory prediction that controls the uncertainty of the prediction by shortening the denoising chain.
All diffusion models come with an expensive trade-off: immensely slow inference due to the large number of denoising steps required. Latent diffusion models (LDM) accelerate the sampling by applying diffusion to a much lower-resolution latent space learned by a VAE~\cite{vahdat2021score, rombach2022high}. Thanks to the Kullback–Leibler (KL) regularization, the learned latent space is built close to a normal distribution. As a result, the length of the Markov chain that diffuses the latent codes can be greatly reduced, and reversed much faster. In this work, we present the first approach that leverages LDM for stochastic HMP, achieving state-of-the-art performance in terms of accuracy and realism. \section{Methodology} \label{sec:methodology} In this section, we describe the methodology of \mbox{BeLFusion}{} (see \autoref{fig:main_arch}). First, we characterize the HMP problem (\autoref{subsec:definition}). Then, we adapt the definitions of LDMs to our scenario (\autoref{subsec:molfusion}). Finally, we describe the construction of our behavioral latent space and the derivation of the training losses (\autoref{subsec:behavioral_ld}). \subsection{Problem definition} \label{subsec:definition} The goal in HMP consists in, given an observed sequence of $B$ poses (\textit{observation window}), predicting the following $T$ poses (\textit{prediction window}). In stochastic HMP, $N$ different prediction windows are predicted for each observation window. Accordingly, we define the set of poses in the observation and prediction windows as $\mathbf{X}=\{p_{t-B}, ..., p_{t-2}, p_{t-1}\}$ and $\mathbf{Y}^{i}=\{p_{t}^{i}, p_{t+1}^{i}, ..., p_{t+T-1}^{i}\}$, respectively, where $i\in \{1, \dots, N\}$\footnote{A sampled prediction $\mathbf{Y}^{i}$ is hereafter referred to as $\mathbf{Y}$ for intelligibility.}, and $p_t^{(i)} \in \mathbb{R}^d$ are the Cartesian coordinates of the human joints at time step $t$. \subsection{Motion latent diffusion} \label{subsec:molfusion} Now, we define a direct adaptation of LDM to HMP. First, a VAE is trained so that an encoder $\mathcal{E}$ transforms fixed-length target sequences of $T$ poses, $\mathbf{Y}$, into a low-dimensional latent space $V \subset \mathbb{R}^{v}$. Samples $z\in V$ can be drawn and mapped back to the coordinate space with a decoder $\mathcal{D}$. Then, an LDM conditioned on $\mathbf{X}$ is trained to predict the corresponding latent vector $z = \mathcal{E}(\mathbf{Y}) \in V$\footnote{For simplicity, we make an abuse of notation by using $\mathcal{E}(\mathbf{Y})$ to refer to the mean of the distribution $\mathcal{E}(z | \mathbf{Y})$.}. The generative HMP problem is formulated as follows: \vspace{-0.2cm} \begin{equation} \label{eq:ldm_generative} P(\mathbf{Y}|\mathbf{X})=P(\mathbf{Y}, z|\mathbf{X})=P(\mathbf{Y}|z, \mathbf{X})P(z|\mathbf{X}). \vspace{-0.2cm} \end{equation} The first equality holds because $\mathbf{Y}$ is a deterministic mapping from the latent code $z$. Then, sampling from the true conditional distribution $P(\mathbf{Y}|\mathbf{X})$ is equivalent to sampling $z$ from $P(z|\mathbf{X})$ and decoding $\mathbf{Y}$ with $\mathcal{D}$. LDMs are typically trained to predict the perturbation $\epsilon_t=f_{\Phi}(\latcode_{t}, t, \mathbf{X})$ of the diffused latent code $\latcode_{t}$ at each time step $t$, where $\mathbf{X}$ is the conditioning observation. Once trained, the network $f_{\Phi}$ can reverse the diffusion Markov chain of length $M$ and infer $z$ from a random sample $z_M \sim \mathcal{N}(0, I)$.
Instead, we choose to use a more convenient parameterization so that $z_0=f_{\Phi}(\latcode_{t}, t, \mathbf{X})$ \cite{xiao2021trilemma, luo2022understandingDM}. With this, an approximation $\latcode_{0}$ of $z$ is predicted at every denoising step and used to sample the input of the next denoising step, $\latcode_{t-1}$, by diffusing it $t-1$ times. We use $q(z_{t-1} | z_0)$ to refer to this diffusion process. With this parameterization, the LDM objective loss (or \textit{latent} loss) becomes: \vspace{-0.2cm} \begin{equation} \label{eq:lat_loss} \mathcal{L}_{lat}(\mathbf{X}, \mathbf{Y})=\sum^M_{t=1} \underset{q( \latcode_{t} | \latcode_{0} )}{\mathbb{E}} \|f_{\Phi}(\latcode_{t}, t, \mathbf{X}) - \underbrace{\mathcal{E}(\mathbf{Y})}_{z}\|_{1}. \vspace{-0.2cm} \end{equation} Having an approximate prediction at any denoising step allows us to 1) apply regularization in the coordinate space (\autoref{subsec:behavioral_ld}), and 2) stop the inference at any step and still have a meaningful prediction (\autoref{subsec:results}). \subsection{Behavioral latent diffusion} \label{subsec:behavioral_ld} In HMP, small discontinuities between the last observed pose and the first predicted pose can look unrealistic. Thus, the LDM must be highly accurate in matching the coordinates of the first predicted pose to the last observed pose. An alternative consists in autoencoding the offsets between poses in consecutive frames. Although this strategy minimizes the risk of discontinuities in the first frame, motion speed or direction discontinuities are still bothersome. Our proposed architecture, \textbf{Be}havioral \textbf{L}atent dif\textbf{Fusion}, or \mbox{BeLFusion}{}, solves both problems. It reduces the latent space complexity by relegating the adaptation of the motion speed and direction to the decoder. It does so by learning a representation of posture-independent human dynamics: a \textit{behavioral representation}. In this framework, the decoder learns to transfer any behavior to an ongoing motion by building a coherent and smooth transition. Here, we first describe how the behavioral latent space is learned, and then detail the \mbox{BeLFusion}{} pipeline for behavior-driven HMP. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/bvae_v2.pdf} \vspace{-0.7cm} \caption{Framework for behavioral disentanglement. By adversarially training the auxiliary generator, $r_{\bvaeAuxDecParams}$, against the behavior coupler, $\mathcal{B}_{\bvaeDecParams}$, the behavior encoder, $p_{\bvaeEncParams}$, learns to generate a disentangled latent space of behaviors, $p_{\bvaeEncParams}(z | \pred_{e})$. At inference, $\mathcal{B}_{\bvaeDecParams}$ decodes a sequence of poses that smoothly transitions from \textit{any} target motion $\mathbf{x}_{m}$ to performing the behavior extracted from $\mathbf{Y}$.} \label{fig:bvae} \vspace{-0.4cm} \end{figure} \textbf{Behavioral Latent Space (BLS).} The behavioral representation learning is inspired by \cite{blattmann2021behavior}, which presents a framework to disentangle behavior from motion. Once disentangled, such behavior can be transferred to any static initial pose. We propose an extension of their work to a general and challenging scenario: behavioral transference to ongoing motions. The architecture proposed is shown in \autoref{fig:bvae}. First, we define the last $C$ observed poses as the \textit{target motion}, $\mathbf{x}_{m}=\{p_{t-C}, ..., p_{t-2}, p_{t-1}\}\subset \mathbf{X}$, and $\pred_{e}=\mathbf{x}_{m} \cup \mathbf{Y}$.
$\mathbf{x}_{m}$ informs us about the motion speed and direction of the last poses of $\mathbf{X}$, which should be coherent with $\mathbf{Y}$. The goal is to disentangle the behavior from the motion and poses in $\mathbf{Y}$. To do so, we adversarially train two generators, the behavior coupler $\mathcal{B}_{\bvaeDecParams}$, and the auxiliary decoder $r_{\bvaeAuxDecParams}$, such that a behavior encoder $p_{\bvaeEncParams}$ learns to generate a disentangled latent space of behaviors $p_{\bvaeEncParams} (z|\pred_{e})$. Both $\mathcal{B}_{\bvaeDecParams}$ and $r_{\bvaeAuxDecParams}$ have access to such latent space, but $\mathcal{B}_{\bvaeDecParams}$ is additionally fed with an encoding of the target motion, $g_{\bvaeXmotionEncParams}( \mathbf{x}_{m} )$. During adversarial training, $r_{\bvaeAuxDecParams}$ aims at preventing $p_{\bvaeEncParams}$ from encoding pose and motion information by trying to reconstruct poses of $\pred_{e}$ directly from $p_{\bvaeEncParams} (z|\pred_{e})$. This training allows $\mathcal{B}_{\bvaeDecParams}$ to decode a sequence of poses that smoothly transitions from $\mathbf{x}_{m}$ to performing the behavior extracted from $\pred_{e}$. At inference time, $r_{\bvaeAuxDecParams}$ is discarded. More concretely, the disentanglement is learned by alternating two objectives at each training iteration. The first objective, which optimizes the parameters $\omega$ of the auxiliary generator, forces it to predict $\pred_{e}$ given the latent code $z$: \vspace{-0.2cm} \begin{equation} \label{eq:bvae_aux_loss} \underset{\omega}{\max}\; \mathcal{L}_{\text{aux}} = \underset{\omega}{\max}\; \mathbb{E}_{p_{\bvaeEncParams}(z|\pred_{e})} (\log r_{\bvaeAuxDecParams}(\pred_{e} |z)). \vspace{-0.2cm} \end{equation} The second objective acts on the parameters of the target motion encoder, $\alpha$, the behavior encoder, $\theta$, and the behavior coupler, $\phi$. It makes $\mathcal{B}_{\bvaeDecParams}$ learn an accurate $\pred_{e}$ reconstruction through the construction of a normally distributed intermediate latent space: \vspace{-0.2cm} \begin{multline} \label{eq:bvae_main_loss} \underset{\alpha, \theta, \phi}{\max}\; \mathcal{L}_{\text{main}} = \underset{\alpha, \theta, \phi}{\max}\; \mathbb{E}_{p_{\bvaeEncParams}(z|\pred_{e})} [\log \mathcal{B}_{\bvaeDecParams}(\pred_{e}|z, g_{\bvaeXmotionEncParams}(\mathbf{x}_{m}))] \\ - D_{\text{KL}}(p_{\bvaeEncParams}(z | \pred_{e}) || p(z)) - \mathcal{L}_{\text{aux}}. \vspace{-0.2cm} \end{multline} Note that the parameters $\omega$ are frozen when training with \autoref{eq:bvae_main_loss}, and $\alpha, \theta, \phi$ are frozen when training with \autoref{eq:bvae_aux_loss}. The prior $p(z)$ is a multi-variate $\mathcal{N}(0, I)$. The inclusion of $-\mathcal{L}_{\text{aux}}$ in \autoref{eq:bvae_main_loss} penalizes any accurate reconstruction of $\pred_{e}$ by the auxiliary generator, thus encouraging the erasure of any postural information from $z$. Since the main decoder $\mathcal{B}_{\bvaeDecParams}$ has access to the target posture and motion provided by $\mathbf{x}_{m}$, it only needs $p_{\bvaeEncParams}$ to encode the behavioral dynamics. One could argue that a valid and simpler solution for $p_{\bvaeEncParams}$ would be to disentangle only the postures while still encoding the motion. However, motion dynamics could still be used to easily extract a good approximation of the posture.
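For concreteness, one alternating training iteration could look as follows. This is a minimal, hypothetical PyTorch sketch under our own assumptions (module and optimizer names are placeholders, the Gaussian log-likelihoods of Eqs.~\ref{eq:bvae_aux_loss} and \ref{eq:bvae_main_loss} are realized as MSE terms, and the default weights mirror the H36M hyperparameters reported in the \supp{}); it is not the actual implementation:
\begin{verbatim}
import torch

def bls_train_step(Y_e, x_m, p_enc, coupler, aux_dec, g_enc,
                   opt_aux, opt_main, beta_kl=1e-4, w_aux=1.05):
    def sample_z():
        mu, logvar = p_enc(Y_e)                 # p_theta(z | Y_e)
        eps = torch.randn_like(mu)
        return mu + eps * (0.5 * logvar).exp(), mu, logvar

    # Objective 1 (Eq. above): update omega so that r_omega reconstructs
    # Y_e from z alone; theta is frozen here (detach blocks its grads).
    z, _, _ = sample_z()
    loss_aux = (aux_dec(z.detach()) - Y_e).pow(2).mean()
    opt_aux.zero_grad(); loss_aux.backward(); opt_aux.step()

    # Objective 2: update alpha, theta, phi. The -L_aux term rewards the
    # encoder when the auxiliary decoder fails; omega never steps here,
    # so it is effectively frozen.
    z, mu, logvar = sample_z()
    loss_rec = (coupler(z, g_enc(x_m)) - Y_e).pow(2).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss_adv = (aux_dec(z) - Y_e).pow(2).mean()
    loss_main = loss_rec + beta_kl * kl - w_aux * loss_adv
    opt_main.zero_grad(); opt_aux.zero_grad()   # discard grads on omega
    loss_main.backward(); opt_main.step()
\end{verbatim}
Note how the adversarial term enters the main objective with a negative sign: the behavior encoder is rewarded when the auxiliary decoder can no longer recover postures from $z$.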
Further details and visual examples of behavioral transference to several motions $\mathbf{x}_{m}$ are included in the \supp{} \ref{sec:supp_behavior_transference}. \textbf{Behavior-driven HMP.} \mbox{BeLFusion}{}'s goal is to sample the appropriate behavior code given the observation $\mathbf{X}$ (see \autoref{fig:main_arch}). To that end, an LDM conditioned on $c=h_{\vaeObsEncParams}(\mathbf{X})$ is trained to optimize $\mathcal{L}_{lat}(\mathbf{X},\pred_{e})$ (\autoref{eq:lat_loss}), with $\mathcal{E} = p_{\bvaeEncParams}$, so that it learns to predict the behavioral encoding of $\pred_{e}$: the expected value of $p_{\bvaeEncParams} (z|\pred_{e})$. Then, the behavior coupler, $\mathcal{B}_{\bvaeDecParams}$, transfers the predicted behavior to the target motion, $\mathbf{x}_{m}$, to reconstruct the poses of the prediction. However, the reconstruction of $\mathcal{B}_{\bvaeDecParams}$ is also conditioned on $\mathbf{x}_{m}$. Such dependency cannot be modeled by the $\mathcal{L}_{lat}$ objective alone. Thanks to our parameterization (\autoref{subsec:molfusion}), we can also use the traditional MSE loss in the reconstruction space: \vspace{-0.3cm} \begin{multline} \label{eq:rec_loss} \mathcal{L}_{rec}(\mathbf{X}, \pred_{e})= \sum^M_{t=1} \underset{q(\latcode_{t}|\latcode_{0})}{\mathbb{E}} \|\mathcal{B}_{\bvaeDecParams}(f_{\Phi}(\latcode_{t}, t, \mathbf{X}), g_{\bvaeXmotionEncParams}(\mathbf{x}_{m})) \\ - \mathcal{B}_{\bvaeDecParams}(\mathcal{E}(\pred_{e}), g_{\bvaeXmotionEncParams}(\mathbf{x}_{m}))\|_{2}. \vspace{-0.3cm} \end{multline} The second term of \autoref{eq:rec_loss} leverages the autoencoded $\pred_{e}$: we optimize the objective within the solution space upper-bounded by the autoencoder's capabilities, which helps stabilize training. Note that only the future poses $\mathbf{Y} \subset \pred_{e}$ form the prediction. The encoder $h_{\vaeObsEncParams}$ is pretrained in an autoencoder framework that reconstructs $\mathbf{X}$. We found experimentally that $h_{\vaeObsEncParams}$ does not benefit from further training, so its parameters $\lambda$ are frozen during \mbox{BeLFusion}{}'s training. The target motion encoder, $g_{\bvaeXmotionEncParams}$, and the behavior coupler, $\mathcal{B}_{\bvaeDecParams}$, are also pretrained as described before and kept frozen. $f_{\Phi}$ is conditioned on $c$ with cross-attention. \textbf{Implicit diversity loss.} Although training \mbox{BeLFusion}{} with Eqs.~\ref{eq:lat_loss} and \ref{eq:rec_loss} leads to accurate predictions, their diversity is poor. We argue that this is caused by the strong regularization of both losses. We propose to relax them by sampling $k$ predictions at each training iteration and only backpropagating the gradients through the two predictions that minimize the latent and reconstruction losses, respectively: \vspace{-0.1cm} \begin{equation} \label{eq:final_loss} \underset{k}{\min}\; \mathcal{L}_{lat}(\mathbf{X}, \pred_{e}^k) + \lambda \; \underset{k}{\min}\; \mathcal{L}_{rec}(\mathbf{X}, \pred_{e}^k), \vspace{-0.2cm} \end{equation} where $\lambda$ controls the trade-off between the latent and the reconstruction errors.
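For illustration, the relaxed objective of \autoref{eq:final_loss} can be sketched as follows (hypothetical PyTorch pseudocode: \texttt{ldm\_sample} stands for one full stochastic denoising pass conditioned on $\mathbf{X}$, \texttt{lam} is the trade-off weight $\lambda$, and the defaults $k{=}50$ and $\lambda{=}5$ mirror the H36M setup):
\begin{verbatim}
import torch

def implicit_diversity_loss(X, Y_e, x_m, ldm_sample, encoder,
                            coupler, g_enc, k=50, lam=5.0):
    z_target = encoder(Y_e)          # E(Y_e): mean of p_theta(z | Y_e)
    c_motion = g_enc(x_m)            # target motion encoding
    rec_ref = coupler(z_target, c_motion)  # autoencoded reference
    lat, rec = [], []
    for _ in range(k):
        z_pred = ldm_sample(X)       # one stochastic denoised estimate
        lat.append((z_pred - z_target).abs().mean())       # L1 latent
        rec.append((coupler(z_pred, c_motion)
                    - rec_ref).pow(2).mean())              # L2 rec.
    # Only the two per-term minima receive gradients (the loss above).
    return torch.stack(lat).min() + lam * torch.stack(rec).min()
\end{verbatim}
Since the $\min$ operators propagate gradients only to the best latent and best reconstructed samples, the remaining $k-1$ predictions are left unconstrained, which implicitly promotes diversity.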
Regularization relaxation usually leads to out-of-distribution predictions. This is often solved by employing additional complex techniques like pose priors, or bone-length losses that regularize the other predictions~\cite{mao2021gsps,bie2022hitdvae}. \mbox{BeLFusion}{} can dispense with such techniques for two main reasons: 1) denoising diffusion models are capable of faithfully capturing a greater breadth of the training distribution than GANs or VAEs~\cite{dhariwal2021diffusionbeatsgans}; and 2) the variational training of the behavior coupler makes it more robust to errors in the predicted behavior code.% \section{Experimental evaluation} \label{sec:experimental_setup} \setlength{\tabcolsep}{3pt} \begin{table*}[t!]\renewcommand{\arraystretch}{0.9} \footnotesize \centering \begin{tabular}{l@{\hskip 3mm}cccccccc@{\hskip 2mm}|@{\hskip 2mm}ccccccc} \toprule & \multicolumn{8}{c}{Human3.6M \cite{ionescu2013h36m}} & \multicolumn{7}{c}{AMASS \cite{mahmood2019amass}} \\ \midrule & APD & APDE & ADE & FDE & MMADE & MMFDE & CMD & FID* & APD & APDE & ADE & FDE & MMADE & MMFDE & CMD \\ \midrule Zero-Velocity & 0.000 & 8.079 & 0.597 & 0.884 & 0.683 & 0.909 & 21.801 & 0.606 & 0.000 & 9.292 & 0.755 & 0.992 & 0.814 & 1.015 & 39.262\\ BeGAN k=1 & 0.675 & 7.411 & 0.494 & 0.729 & 0.605 & 0.769 & 11.117 & 0.542 & 0.717 & 8.595 & 0.643 & 0.834 & 0.688 & 0.843 & 24.483 \\ BeGAN k=5 & 2.759 & 5.335 & 0.495 & 0.697 & 0.584 & 0.718 & 12.946 & 0.578 & 5.643 & 4.043 & 0.631 & 0.788 & 0.667 & 0.787 & 24.034 \\ BeGAN k=50 & 6.230 & 2.200 & 0.470 & 0.637 & 0.561 & 0.661 & 7.461 & 0.569 & 7.234 & 2.548 & 0.613 & 0.717 & 0.650 & 0.720 & 22.625\\ \midrule HP-GAN~\cite{barsoum2018hpgan} & 7.214 & - & 0.858 & 0.867 & 0.847 & 0.858 & - & - & - & - & - & - & - & - & - \\ DSF~\cite{yuan2019dsf} & 9.330 & - & 0.493 & 0.592 & 0.550 & 0.599 & - & - & - & - & - & - & - & - & - \\ DeLiGAN~\cite{gurumurthy2017deligan} & 6.509 & - & 0.483 & 0.534 & 0.520 & 0.545 & - & - & - & - & - & - & - & - & - \\ GMVAE~\cite{dilokthanakul2016gmvae} & 6.769 & - & 0.461 & 0.555 & 0.524 & 0.566 & - & - & - & - & - & - & - & - & - \\ TPK~\cite{walker2017theposeknows} & 6.723 & \underline{1.906} & 0.461 & 0.560 & 0.522 & 0.569 & 5.580 & 0.538 & 9.283 & \underline{2.265} & 0.656 & 0.675 & 0.658 & 0.674 & 17.127\\ MT-VAE~\cite{yan2018mtvae} & 0.403 & - & 0.457 & 0.595 & 0.716 & 0.883 & - & - & - & - & - & - & - & - & - \\ BoM~\cite{bhattacharyya2018bom} & 6.265 & - & 0.448 & 0.533 & 0.514 & 0.544 & - & - & - & - & - & - & - & - & - \\ DLow~\cite{yuan2020dlow} & 11.741 & 3.781 & 0.425 & 0.518 & 0.495 & 0.531 & \textbf{4.872} & 1.255 & \underline{13.170} & 4.243 & 0.590 & 0.612 & 0.618 & 0.617 & \textbf{15.185}\\ MultiObj~\cite{ma2022multiobjective} & 14.240 & - & 0.414 & 0.516 & - & - & - & - & - & - & - & - & - & - & -\\ GSPS~\cite{mao2021gsps} & \underline{14.757} & 6.749 & 0.389 & 0.496 & 0.476 & 0.525 & 10.897 & 2.103 & 12.465 & 4.678 & 0.563 & 0.613 & 0.609 & 0.633 & 18.404\\ Motron~\cite{salzmann2022motron} & 7.168 & 2.583 & 0.375 & 0.488 & 0.509 & 0.539 & 41.760 & 13.743 & - & - & - & - & - & - & -\\ DivSamp~\cite{dang2022diverse} & \textbf{15.310} & 7.479 & \underline{0.370} & 0.485 & 0.475 & 0.516 & 12.017 & 2.083 & \textbf{24.724} & 15.837 & 0.564 & 0.647 & 0.623 & 0.667 & 50.239\\ \midrule BeLFusion\_D & 5.777 & 2.571 & \textbf{0.367} & \textbf{0.472} & \textbf{0.469} & \textbf{0.506} & 7.532 & \underline{0.255} & 7.458 & 2.663 & \textbf{0.508} & \underline{0.567} & \textbf{0.564} & \underline{0.591} & 19.497\\ BeLFusion & 7.602 & \textbf{1.662} & 0.372 & \underline{0.474} & \underline{0.473} & \underline{0.507} & \underline{5.303} & \textbf{0.209} & 9.376 & \textbf{1.977} & \underline{0.513} & \textbf{0.560} & \underline{0.569} & \textbf{0.585} & \underline{16.995}\\ \bottomrule \end{tabular}
\vspace{-0.3cm} \caption{Comparison of \mbox{BeLFusion}{}\_D (single denoising step) and \mbox{BeLFusion}{} (all denoising steps) with state-of-the-art methods for stochastic human motion prediction on the Human3.6M and AMASS datasets. Bold and underlined results correspond to the best and second-best results, respectively. Lower is better for all metrics except APD. *Only shown for Human3.6M due to the lack of class labels in AMASS.} \vspace{-0.3cm} \label{tab:sota_comparison} \end{table*} \setlength{\tabcolsep}{6pt} Our experimental evaluation is tailored toward two objectives. First, we aim to prove \mbox{BeLFusion}{}'s generalization capabilities in both seen and unseen scenarios. For the latter, we propose a challenging cross-dataset evaluation setup. Second, we want to demonstrate the superiority of our model with regard to the realism of its predictions compared to state-of-the-art approaches. To this end, we propose two metrics and perform a qualitative study. \subsection{Evaluation setup} \label{subsec:evaluation_setup} \textbf{Datasets.} We evaluate our proposed methodology on Human3.6M~\cite{ionescu2013h36m} (H36M), and AMASS~\cite{mahmood2019amass}. H36M consists of clips where 11 subjects perform 15 actions, totaling 3.6M frames recorded at 50 Hz, with action class labels available. We use the splits proposed by \cite{yuan2020dlow} and adopted by most subsequent works~\cite{mao2021gsps, salzmann2022motron, ma2022multiobjective, dang2022diverse}. Accordingly, 0.5s (25 frames) are used to predict the following 2s (100 frames). % AMASS is a large-scale dataset that, as of today, unifies 24 extremely varied datasets with a common joint configuration, totaling 9M frames when downsampled to 60Hz. Whereas the latest deterministic HMP approaches already include a within-dataset AMASS configuration in their evaluation protocols~\cite{mao2020history, aksan2021spatio, medjaouri2022hr}, the dataset remains unexplored in the stochastic context. To determine whether state-of-the-art methods can generalize their learned motion predictive capabilities to other contexts (i.e., other datasets), we propose a new cross-dataset evaluation protocol with AMASS. The training, validation, and test sets include 11, 4, and 7 datasets, and 406, 33, and 54 subjects, respectively. We set the observation and prediction windows to 0.5s and 2s (30 and 120 frames after downsampling), respectively. AMASS does not provide action class labels. See the \supp{} \ref{sec:supp_dataset_details} for more details. % \textbf{Baselines.} We include the zero-velocity baseline, which has been proven very competitive in HMP~\cite{martinez2017human, Barquero2022}, and BeGAN, a version of our model that replaces the LDM with a GAN. We train three versions, with $k=1, 5, 50$. We also compare against state-of-the-art methods for stochastic HMP (referenced in~\autoref{tab:sota_comparison}). For H36M, we took all evaluation values from their respective works. For AMASS, we retrained the state-of-the-art methods with publicly available code that had shown competitive performance for H36M. \textbf{Implementation details.} We trained \mbox{BeLFusion}{} with $M=10$, $k=50$, a U-Net with cross-attention~\cite{dhariwal2021diffusionbeatsgans} as $f_{\Phi}$, and one-layer RNNs as $h_{\vaeObsEncParams}$, $g_{\bvaeXmotionEncParams}$, and $\mathcal{B}_{\bvaeDecParams}$. For H36M, $\lambda=5$, and for AMASS, $\lambda=1$.
The model used for inference was an exponential moving average of the trained model, with decays of 0.995 and 0.999 for H36M and AMASS, respectively. Sampling was conducted with a DDIM sampler~\cite{song2021ddim}. As explained in \autoref{subsec:molfusion}, our implementation of LDM can be early-stopped at any step of the chain of length $M$ and still have access to an approximation of the behavioral latent code. Thus, we also include \mbox{BeLFusion}{}'s results when inference is early-stopped right after the first denoising diffusion step (i.e., $10\times$ faster): \mbox{BeLFusion}{}\_D. Further implementation details are included in the \supp{} \ref{sec:supp_implementation_details}. \subsection{Evaluation metrics} \label{subsec:evaluation_metrics} To compare \mbox{BeLFusion}{} with prior works, we follow the well-established evaluation pipeline proposed in \cite{yuan2020dlow}. The Average and Final Displacement Error metrics (ADE and FDE, respectively) quantify the error of the prediction most similar to the ground truth. While ADE averages the error over all timesteps, FDE only considers the last predicted frame. Their multimodal versions for stochastic HMP, MMADE and MMFDE, compare all predicted futures with the multimodal ground truth of the observation. To obtain the latter, each observation window $\mathbf{X}$ is grouped with other observations $\mathbf{X}_i$ with a similar last observed pose in terms of L2 distance. The corresponding prediction windows $\mathbf{Y}_i$ form the \textit{multimodal ground truth} of $\mathbf{X}$. The Average Pairwise Distance (APD) quantifies the diversity by computing the L2 distance among all pairs of predicted poses at each timestep. Following \cite{guo2020action2motion, petrovich2021action, dang2022diverse, bie2022hitdvae}, we also include the Fréchet Inception Distance (FID), which leverages the output of the last layer of a pretrained action classifier to quantify the similarity between the distributions of predicted and ground truth motions. \textbf{Area of the Cumulative Motion Distribution (CMD).} The plausibility and realism of human motion are difficult to assess quantitatively. However, some metrics can provide an intuition of when a set of predicted motions is not plausible. For example, consistently predicting high-speed movements given a context where the person was standing still might be plausible, but does not represent a statistically coherent distribution of possible futures. We argue that prior works have persistently ignored this. We propose a simple complementary metric: the area under the cumulative motion distribution. First, we compute the average of the L2 distance between the joint coordinates in two consecutive frames (displacement) across the whole test set, $\bar{M}$. Then, for each frame $t$ of all predicted motions, we compute the average displacement $M_{t}$ and define: \vspace{-0.25cm} \begin{equation} \text{CMD} = \int^{99}_{1}P(X(t) \leq t) \,dt, \;\; X(t) = \| M_{t} - \bar{M} \|_{1}. \vspace{-0.25cm} \end{equation} Note that $X$ is treated as a random variable and $P$ refers to its cumulative distribution function. The choice of $P(X(t) \leq t)$ is motivated by the fact that early motion irregularities in the predictions impact the quality of the remaining sequence. Intuitively, this metric gives an idea of how much the predicted average displacement per frame deviates from the expected one. However, the expected average displacement could arguably differ among actions and datasets.
To account for this, we compute the total CMD as the average of the per-action (H36M) or per-dataset (AMASS) CMDs, weighted by the action or dataset relative frequency. \textbf{Average Pairwise Distance Error (APDE).} There are many elements that condition the distribution of future movements and, therefore, the appropriate motion diversity levels. To analyze to what extent the diversity is properly modeled, we introduce the average pairwise distance error. We define it as the absolute error between the APD of the multimodal ground truth and the APD of the predicted samples. Samples without any multimodal ground truth are discarded. See \supp{} \ref{subsec:supp_wise_results} for additional details.
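For reference, the two proposed metrics can be sketched as follows. This is illustrative NumPy code under our own assumptions about array shapes; the CMD sketch follows the integral above literally and omits the per-action/per-dataset weighting just described:
\begin{verbatim}
import numpy as np

def cmd(pred, m_bar):
    """pred: (S, N, T, J, 3) N predicted samples for S observations;
    m_bar: dataset-average per-frame displacement (a scalar)."""
    disp = np.linalg.norm(np.diff(pred, axis=2), axis=-1)  # (S,N,T-1,J)
    m_t = disp.mean(axis=(0, 1, 3))            # avg. displacement/frame
    err = np.abs(m_t - m_bar)                  # X(t) in the text
    thresholds = np.arange(1, 100)             # literal bounds [1, 99]
    cdf = np.array([(err <= u).mean() for u in thresholds])
    return np.trapz(cdf, thresholds)           # area under the CDF

def apd(samples):
    """samples: (N, T, J, 3) predictions for a single observation."""
    flat = samples.reshape(samples.shape[0], samples.shape[1], -1)
    dist = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1)
    i, j = np.triu_indices(flat.shape[0], k=1)
    return dist[i, j].mean()                   # mean over pairs, frames

def apde(pred_samples, mm_gt):
    """Absolute error between the APD of the predictions and the APD
    of the multimodal ground truth of the same observation."""
    return abs(apd(pred_samples) - apd(mm_gt))
\end{verbatim}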
\subsection{Results} \label{subsec:results} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/motion_metric_h36m.pdf} \vspace{-0.7cm} \caption{Left. Average predicted motion of state-of-the-art methods in H36M. Right. Cumulative distribution function (CDF) of the weighted absolute errors on the left with respect to the ground truth. CMD is the area under this curve.} \label{fig:motion_metric_h36m} \vspace{-0.4cm} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{figures/k_analysis.pdf} \vspace{-0.6cm} \caption{Evolution of evaluation metrics (y-axis) along denoising steps (x-axis) at inference time, for different values of $k$. Early stopping can be applied at any time, between the first ($\bullet$) and the last step ($\star$). Accuracy saturates at $k=50$, with gains for all metrics when increasing $k$, especially for diversity (APD). Qualitative metrics (CMD, FID) decrease after each denoising step across all $k$ values.} \label{fig:k_analysis} % \end{figure*} \textbf{Comparison with the state of the art.} As shown in \autoref{tab:sota_comparison}, \mbox{BeLFusion}{} achieves state-of-the-art performance in all accuracy metrics for both datasets. The improvements are especially important in the cross-dataset AMASS configuration, proving its superior robustness against domain shifts. We hypothesize that such good generalization capabilities are due to 1) the exhaustive coverage of behaviors modeled in the disentangled latent space, and 2) the potential of LDMs to model the conditional distribution of future behaviors. In fact, after a single denoising step, our model already achieves the best accuracy results (\mbox{BeLFusion}{}\_D). Our method also excels at realism-related metrics like CMD and FID, which benefit from going through all denoising steps. By contrast, \autoref{fig:motion_metric_h36m} shows that predictions from GSPS and DivSamp consistently accelerate at the beginning, presumably toward divergent poses that promote high diversity values. As a result, they yield high CMD values, especially for H36M. The predictions from methods that leverage transformations in the frequency space freeze at the very long-term horizon. The high CMD value of Motron reveals considerable jitter in its predictions. \mbox{BeLFusion}{} shows low APDE, highlighting its good ability to adjust to the observed context. This is achieved thanks to 1) the pretrained encoding of the whole observation window, and 2) the behavior coupling to the \textit{target motion}. In contrast, the higher APDE values of GSPS and DivSamp are caused by their tendency toward predicting movements more diverse than those present in the dataset. % Action- (H36M) and dataset-wise (AMASS) results are included in \supp{} \ref{subsec:supp_wise_results}. \autoref{fig:qualitative_examples} shows the evolution of 10 superimposed predictions over time in three actions from H36M (sitting down, eating, and giving directions), and three datasets from AMASS (DanceDB\footnote{Dance Motion Capture DB, \url{http://dancedb.cs.ucy.ac.cy}.}, HUMAN4D~\cite{chatzitofis2020human4d}, and GRAB~\cite{taheri2020grab}). It visually confirms what the CMD and APDE metrics already suggested. First, the acceleration of GSPS and DivSamp at the beginning of the prediction leads very quickly to extreme poses, transitioning abruptly from the observed motion. Second, it shows the capacity of \mbox{BeLFusion}{} to adapt the predicted diversity to the context. For example, the diversity of motion predicted while eating focuses on the arms, and does not include extreme full-body poses. Interestingly, when just sitting, the predictions include a wider range of full-body movements like lying down or bending over. A similar context fitting is observed in the AMASS cross-dataset scenario. For instance, \mbox{BeLFusion}{} correctly identifies that the diversity must target the upper body in the GRAB dataset, or the arms while doing a dance step. \begin{table*}[t!] \footnotesize\renewcommand{\arraystretch}{0.9} \centering \begin{tabular}{ccc@{\hskip 8mm}cccccc@{\hskip 8mm}ccccc} \toprule & & & \multicolumn{6}{c}{Human3.6M \cite{ionescu2013h36m}} & \multicolumn{5}{c}{AMASS \cite{mahmood2019amass}} \\ \toprule BLS & $\mathcal{L}_{lat}$ & $\mathcal{L}_{rec}$ & APD & APDE & ADE & FDE & CMD & FID & APD & APDE & ADE & FDE & CMD \\ \midrule & & \checkmark & \textbf{7.622} & \textbf{1.276} & 0.510 & 0.795 & 4.813 & 2.530 & \textbf{10.788} & 3.032 & 0.697 & 0.881 & \textbf{16.628}\\ \checkmark & & \checkmark & 6.169 & 2.240 & 0.386 & 0.505 & 7.446 & 0.475 & \underline{9.555} & 2.216 & 0.593 & 0.685 & 17.036\\ & \checkmark & & 7.475 & 1.773 & 0.388 & 0.490 & \textbf{4.010} & \textbf{0.177} & 8.688 & 2.079 & 0.528 & 0.572 & 18.429\\ \checkmark & \checkmark & & 6.760 & 1.974 & \underline{0.377} & 0.485 & 5.775 & 0.233 & 8.885 & \underline{2.009} & \underline{0.516} & \underline{0.565} & 17.576 \\ & \checkmark & \checkmark & 7.301 & 2.012 & 0.380 & \underline{0.484} & \underline{4.159} & \underline{0.195} & 8.832 & 2.034 & 0.519 & 0.568 & 17.618 \\ \checkmark & \checkmark & \checkmark & \underline{7.604} & \underline{1.658} & \textbf{0.371} & \textbf{0.471} & 5.303 & 0.205 & 9.376 & \textbf{1.977} & \textbf{0.513} & \textbf{0.560} & \underline{16.995} \\ \bottomrule \end{tabular} \vspace{-0.2cm} \caption{Results from the ablation analysis of \mbox{BeLFusion}{}. We assess the contribution of the latent ($\mathcal{L}_{lat}$) and reconstruction ($\mathcal{L}_{rec}$) losses, as well as the benefits of applying latent diffusion to a disentangled behavioral latent space (BLS).} \label{tab:ablation} \vspace{-0.3cm} % \end{table*} \textbf{Ablation study.} Here, we quantitatively analyze the effect of each of our contributions to the final model. This includes the contributions of $\mathcal{L}_{lat}$ and $\mathcal{L}_{rec}$, and the benefits of disentangling behavior from motion in the latent space construction. Results are summarized in \autoref{tab:ablation}. Although training is stable and losses decrease similarly in all cases, solely considering the loss in the coordinate space ($\mathcal{L}_{rec}$) leads to poor generalization capabilities.
This is especially noticeable in the cross-dataset scenario, where models with both latent space constructions are the least accurate among all loss configurations. We observe that the latent loss ($\mathcal{L}_{lat}$) boosts the metrics in both datasets, and that it can be further enhanced when combined with the reconstruction loss. % Overall, the BLS construction benefits all loss configurations in terms of accuracy on both datasets, proving it to be a very promising strategy to be further explored in HMP. \textbf{Implicit diversity.} As explained in \autoref{subsec:behavioral_ld}, the parameter $k$ regulates the \textit{relaxation} of the training loss (\autoref{eq:final_loss}) in \mbox{BeLFusion}{}. \autoref{fig:k_analysis} shows how metrics behave when 1) tuning $k$, and 2) moving forward in the reverse diffusion chain (i.e., progressively applying denoising steps). In general, increasing $k$ enhances the samples' diversity, accuracy, and realism. For $k \leq 5$, going through the whole chain of denoising steps boosts accuracy. However, for $k > 5$, further denoising only boosts the diversity- and realism-wise metrics (APD, CMD, FID), while the fast single-step inference already becomes extremely accurate. With large enough $k$ values, the LDM learns to cover the conditional space of future behaviors to a great extent and can therefore make a fast and reliable first prediction. The successive denoising steps stochastically refine such approximations at the expense of a longer inference time. Thus, each denoising step 1) promotes diversity within the latent space, and 2) brings the predicted latent code closer to the true behavioral distribution. Both effects can be observed in the latent APD and FID plots in \autoref{fig:k_analysis}. The latent APD is the equivalent of the APD in the latent space of predictions and is computed likewise. Note that these effects are not exclusive to either the loss choice or the BLS (see \supp{} \ref{subsec:supp_extended_k}). Concurrent research has observed a similar effect in image generation~\cite{bansal2022colddiff}. \setlength{\tabcolsep}{3pt} \begin{table}[t!]\renewcommand{\arraystretch}{0.9} \footnotesize \centering \begin{tabular}{l@{\hskip 4mm}cccc} \toprule & \multicolumn{2}{c}{Human3.6M\cite{ionescu2013h36m}} & \multicolumn{2}{c}{AMASS\cite{mahmood2019amass}} \\ \midrule & Avg. rank & Ranked 1st & Avg. rank & Ranked 1st \\ \midrule GSPS & 2.246 $\pm$ 0.358 & 17.9\% & 2.003 $\pm$ 0.505 & 30.5\% \\ DivSamp & 2.339 $\pm$ 0.393 & 13.4\% & 2.432 $\pm$ 0.408 & 14.0\% \\ \mbox{BeLFusion}{} & \textbf{1.415 $\pm$ 0.217} & \textbf{68.7\%} & \textbf{1.565 $\pm$ 0.332} & \textbf{55.5\%} \\ \bottomrule \end{tabular} \vspace{-0.3cm} \caption{Qualitative study. 126{} participants ranked sets of samples from GSPS, DivSamp, and \mbox{BeLFusion}{} by their realism. Lower average rank ($\pm$ std. dev.) is better.} \label{tab:mos_results}\vspace{-0.6cm} \end{table} \setlength{\tabcolsep}{6pt} \textbf{Qualitative assessment.} We performed a qualitative study to assess the realism of \mbox{BeLFusion}{}'s predictions compared to those of the most accurate methods: DivSamp and GSPS. For each method, we sampled six predictions for 24 randomly sampled observation segments from each dataset (48 in total). We then generated a \textit{gif} showing the observed sequence and the six predicted sequences at the same time. Each participant was asked to order the three sets according to the average realism of the samples.
Each participant was asked four questions from either H36M or AMASS (more details in the \supp{} \ref{sec:supp_mos}). % A total of 126{} people participated in the study. The statistical significance of the results was assessed with the Friedman and Nemenyi tests. Results are shown in \autoref{tab:mos_results}. \mbox{BeLFusion}{}'s predictions are significantly more realistic than both competitors' in both datasets (p${<}0.01$). GSPS could only be proven significantly more realistic than DivSamp for AMASS (p${<}0.01$). Interestingly, the participant-wise average realism ranks of each method are highly correlated with each method's CMD ($r$=0.730 and $r$=0.601) and APDE ($r$=0.732 and $r$=0.612) for both datasets (H36M and AMASS, respectively), in terms of Pearson correlation (p${<}0.001$). \section{Conclusion} \label{sec:conclusions} \vspace{-0.2cm} We presented \mbox{BeLFusion}{}, a latent diffusion model that exploits a behavioral latent space to make more realistic, accurate, and context-adaptive human motion predictions. \mbox{BeLFusion}{} takes a major step forward in the cross-dataset AMASS configuration. This suggests that future work should pay closer attention to domain shifts, which are present in any in-the-wild scenario and must therefore be handled on our way toward highly capable predictive systems. \textbf{Limitations and future work.} Although sampling with \mbox{BeLFusion}{} only takes 10 denoising steps, this is still slower than sampling from GANs or VAEs, which might limit its applicability in real-life scenarios. Future work includes exploring our method's capabilities to exploit a longer observation time span, and to be applied auto-regressively to predict longer-term sequences. \section{Behavioral latent space} \label{sec:supp_behavior_transference} In this section, we present 1) a t-SNE plot for visualizing the behavioral latent space of the H36M test segments, and 2) visual examples of transferring behavior to ongoing motions. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/tsne_figure.pdf} \caption{\textbf{Behavioral latent space. }2D projection of the behavioral encodings of all H36M test sequences generated with t-SNE.} \vspace{-0.3cm} \label{fig:supp_tsne_behaviors} \end{figure} \textbf{2D projection. }\autoref{fig:supp_tsne_behaviors} shows a 2-dimensional t-SNE projection of all behavioral encodings of the H36M test sequences~\cite{van2008tsne}. Note that, despite its class label, a sequence may show actions of another class. For example, \textit{Waiting} sequences include sub-sequences where the person walks or sits down. Interestingly, we can observe that most walking-related sequences (\textit{WalkDog}, \textit{WalkTogether}, \textit{Walking}) are clustered together in the top-right and bottom-left corners. Such entanglement within those clusters suggests that the task of choosing how to keep walking might be relegated to the behavior coupler, which has information on how the action is being performed. Farther into those corners, we can also find very isolated clusters of \textit{Phoning} and \textit{Smoking}, whose proximity to the walking behaviors suggests that such sequences may involve a subject making a call or smoking while walking. However, without fine-grained annotations at the sequence level, we cannot draw any strong conclusion. \textbf{Transference of behaviors. }
We include several videos\footnote{Videos referenced in the \supp{} are available at \url{https://barquerogerman.github.io/BeLFusion/}.} showing the capabilities of the behavior coupler to transfer a behavioral latent code to any ongoing motion. The motion tagged as \textit{behavior} shows the target behavior to be encoded and transferred. All the other columns show the ongoing motions that the behavior will be transferred to. They are shown with blue and orange skeletons. Once the behavior is transferred, the color of the skeletons switches to green and pink. In `H1' (H36M), the walking behavior is transferred to the target ongoing motions. For ongoing motions where the person is standing, they start walking in the direction they are facing (\#1, \#2, \#4, \#5). Such a transition is smooth and coherent with the observation. For example, the person making a phone call in \#7 keeps the arm next to the ear while starting to walk. When sitting or bending down, the legs either move very little (\#3 and \#6) or in a very limited way (\#8). `H2' and `H3' show the transference of subtle and long-range behaviors, respectively. For AMASS, such behavioral encoding faces a large domain shift. However, we still observe good results in this task. For example, `A1' shows how a \textit{stretching} movement is successfully transferred to very distinct ongoing motions by generating smooth and realistic transitions. Similarly, `A2' and `A3' are examples of transferring subtle and aggressive behaviors, respectively. Even though the dancing behavior in `A3' was not seen at training time, it is transferred and adapted to the ongoing motion fairly realistically. \section{Implementation details} \label{sec:supp_implementation_details} \defB{B} \defT{T} \def\mathbf{x}_{m}{\mathbf{x}_{m}} \def\mathbf{X}{\mathbf{X}} \def\mathbf{Y}{\mathbf{Y}} \def\mathcal{E}{\mathcal{E}} \def\mathcal{D}{\mathcal{D}} \deff_{\Phi}{f_{\Phi}} \defz{z} \def\latcode_{t}{z_{t}} \def\latcode_{t-1}{z_{t-1}} \def\latcode_{0}{z_{0}} \def\pred_{e}{\mathbf{Y}_{e}} \def\theta{\theta} \def\phi{\phi} \def\omega{\omega} \def\alpha{\alpha} \defg_{\bvaeXmotionEncParams}{g_{\alpha}} \def\lambda{\lambda} \defh_{\vaeObsEncParams}{h_{\lambda}} \def\mathcal{B}_{\bvaeDecParams}{\mathcal{B}_{\phi}} \defp_{\bvaeEncParams}{p_{\theta}} \defr_{\bvaeAuxDecParams}{r_{\omega}} \def\mathcal{L}_{rec}{\mathcal{L}_{rec}} \def\mathcal{L}_{lat}{\mathcal{L}_{lat}} To ensure reproducibility, we include in this section all the details regarding \mbox{BeLFusion}{}'s architecture and training procedure (\autoref{subsec:supp_implementation_ours}). We also cover the details on the implementation of the state-of-the-art models retrained with AMASS (\autoref{subsec:supp_implementation_sota}). We follow the terminology used in Figs. 2 and 3 of the main paper. Note that we only report the hyperparameter values of the best models. For their selection, we conducted grid searches that included the learning rate, loss weights, and the most relevant network parameters. Data augmentation for all models consisted of random rotations between 0 and 360 degrees around the Z axis, and random mirroring of the body skeleton with respect to the XZ- and YZ-planes; see the sketch below. The axis and mirroring planes were selected to preserve the floor position and orientation.
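As an illustration, this augmentation can be sketched as follows (a minimal NumPy version under our own assumptions about the array layout; the left/right joint-index swapping that mirroring a labeled skeleton additionally requires is omitted):
\begin{verbatim}
import numpy as np

def augment(seq, rng):
    """seq: (T, J, 3) joint coordinates with Z as the vertical axis."""
    a = rng.uniform(0.0, 2.0 * np.pi)     # random rotation around Z
    rot_z = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
    out = seq @ rot_z.T                   # preserves the floor plane
    if rng.random() < 0.5:                # mirror w.r.t. the XZ-plane
        out = out * np.array([1.0, -1.0, 1.0])
    if rng.random() < 0.5:                # mirror w.r.t. the YZ-plane
        out = out * np.array([-1.0, 1.0, 1.0])
    return out

# e.g., augmented = augment(seq, np.random.default_rng(0))
\end{verbatim}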
All models were trained with the ADAM optimizer with AMSGrad~\cite{reddi2019convergence}, with PyTorch 1.9.1 \cite{paszke2019pytorch} and CUDA 11.1 on a single NVIDIA GeForce RTX 3090. The whole \mbox{BeLFusion}{} training pipeline took 12h for H36M, and 24h for AMASS. \subsection{\mbox{BeLFusion}{}} \label{subsec:supp_implementation_ours} \textbf{Behavioral latent space.} The behavioral VAE consists of four modules. The behavior encoder $p_{\bvaeEncParams}{}$, which receives the flattened coordinates of all the joints, is composed of a single Gated Recurrent Unit (GRU) cell (hidden state of size 128), followed by two sets of a 2D convolutional layer (kernel size of 1, stride of 1, padding of 0) with L2 weight normalization and learned scaling parameters: one set maps the GRU state to the mean of the latent distribution, and the other to its variance. The behavior coupler $\mathcal{B}_{\bvaeDecParams}{}$ consists of a GRU (input shape of 256, hidden state of size 128) followed by a linear layer that maps, at each timestep, its hidden state to the offsets of each joint's coordinates with respect to its last observed position. The context encoder $g_{\bvaeXmotionEncParams}{}$ is a single-cell GRU (hidden state of 128) that is fed with the flattened joints coordinates. Finally, the auxiliary decoder $r_{\bvaeAuxDecParams}{}$ is a clone of $\mathcal{B}_{\bvaeDecParams}{}$ with a narrower input shape (128), as it is only fed the latent code. For H36M, the behavioral VAE was trained with learning rates of 0.005 and 0.0005 for $\mathcal{L}_{main}$ and $\mathcal{L}_{aux}$, respectively. For AMASS, they were set to 0.001 and 0.005. They were all decayed with a ratio of 0.9 every 50 epochs. The batch size was set to 64. Each epoch consisted of 5000 and 10000 iterations for H36M and AMASS, respectively. The weight of the $-\mathcal{L}_{aux}$ term in $\mathcal{L}_{main}$ was set to 1.05 for H36M and to 1.00 for AMASS. The KL term was assigned a weight of 0.0001 in both datasets. Once trained, the behavioral VAE was further fine-tuned for 500 epochs with the behavior encoder $p_{\bvaeEncParams}{}$ frozen, to enhance the reconstruction capabilities without modifying the disentangled behavioral latent space. Note that for the ablation study, the non-behavioral latent space was built likewise by disabling the adversarial training framework, and optimizing the model only with the log-likelihood and KL terms of $\mathcal{L}_{main}$ (main paper, Eq. 4), as in a traditional VAE framework. \textbf{Observation encoding. }The observation encoder $h_{\vaeObsEncParams}{}$ was pretrained as an autoencoder with an L2 reconstruction loss. It consists of a single-cell GRU layer (hidden state of 64) fed with the flattened joints coordinates. The hidden state of the GRU layer is fed to three MLP layers (output sizes of 300, 200, and 64), and then set as the hidden state of the GRU decoder unit (hidden state of size 64). The sequence is reconstructed by predicting the offsets with respect to the last observed joint coordinates. \textbf{Latent diffusion model.} \mbox{BeLFusion}{}'s LDM borrowed its U-Net from \cite{dhariwal2021diffusionbeatsgans}. To leverage it, the target latent codes were reshaped to a rectangular shape (16x8), as prior work proposed~\cite{bautista2022gaudi}. In particular, our U-Net has 2 attention layers (resolutions of 8 and 4), 16 channels per attention head, a FiLM-like conditioning mechanism~\cite{perez2018film}, residual blocks for up and downsampling, and a single residual block. Both the observation and target behavioral encodings were normalized between \text{-1} and 1. The LDM was trained with the \textit{sqrt} noise schedule proposed in \cite{li2022diffusionLM}, which sets $\bar{\alpha}_t = 1 - \sqrt{t/M + s}$ with $s=0.0001$, and also provided important improvements in our scenario compared to the classic \textit{linear} or \textit{cosine} schedules (see~\autoref{fig:supp_diff_schedules}). With this schedule, the diffusion process starts with a higher noise level, which increases rapidly in the middle of the chain.
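For clarity, the noise schedule and the mapping from the flat behavioral latent space to the U-Net input can be sketched as follows (hypothetical PyTorch code; the 128-dimensional code and the single channel are implied by the 16x8 shape, and the per-dimension bounds \texttt{lo}/\texttt{hi} are assumed to be estimated on the training set):
\begin{verbatim}
import torch

def sqrt_alpha_bar(t, M=10, s=1e-4):
    # sqrt schedule (Li et al., 2022): alpha_bar_t = 1 - sqrt(t/M + s)
    return 1.0 - torch.sqrt(torch.as_tensor(t / M + s))

def to_unet_input(z, lo, hi):
    """z: (batch, 128) behavioral codes -> (batch, 1, 16, 8) 'images'."""
    z = 2.0 * (z - lo) / (hi - lo) - 1.0   # normalize to [-1, 1]
    return z.view(-1, 1, 16, 8)

def from_unet_output(x, lo, hi):
    """Inverse mapping back to the flat, unnormalized latent space."""
    z = x.reshape(x.size(0), -1)
    return (z + 1.0) * 0.5 * (hi - lo) + lo
\end{verbatim}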
The length of the Markov diffusion chain was set to 10, the batch size to 64, the learning rate to 0.0005, and the learning rate decay to a rate of 0.9 every 100 epochs. Each epoch included 10000 samples in both H36M and AMASS training scenarios. Early stopping with a patience of 100 epochs was applied in both cases, and the epoch at which it was triggered determined the number of epochs used for the final training on the training and validation sets together. Thus, \mbox{BeLFusion}{} was trained for 217 epochs for H36M and 1262 for AMASS. The LDM was trained with an exponential moving average (EMA) with decays of 0.995 and 0.999 for H36M and AMASS, respectively, updated every 10 batch iterations and starting after 1000 initial iterations. The EMA helped reduce the overfitting in the last denoising steps. Predictions were inferred with DDIM sampling~\cite{song2021ddim}. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figures/diffusion_schedules.pdf} \vspace{-0.2cm} \caption{\textbf{Diffusion schedules. } Schedules explored for diffusing the target latent codes.} \vspace{-0.3cm} \label{fig:supp_diff_schedules} \end{figure} \subsection{State-of-the-art models} \label{subsec:supp_implementation_sota} The publicly available code of TPK, DLow, GSPS, and DivSamp was adapted to be trained and evaluated under the AMASS cross-dataset protocol. The best values for their most important hyperparameters were found with grid search. The number of iterations per epoch for all of them was set to 10000. TPK's loss weights were set to 1000 and 0.1 for the transition and KL losses, respectively. The learning rate was set to 0.001. DLow was trained on top of the TPK model with a learning rate of 0.0001. Its reconstruction and diversity loss weights were set to 2 and 25. For GSPS, the upper- and lower-body joint indices were adapted to the AMASS skeleton configuration. The multimodal ground truth was generated with an upper L2 distance threshold of 0.1, and a lower APD threshold of 0.3. The body angle limits were re-computed with the AMASS statistics. The GSPS learning rate was set to 0.0005, and the weights of the upper- and lower-body diversity losses were set to 5 and 10, respectively. For DivSamp, we used the multimodal ground truth from GSPS, as their original H36M setup borrowed this information from GSPS. For the first training stage (VAE), the learning rate was set to 0.001, and the KL weight to 1. For the second training stage (sampling model), the learning rate was set to 0.0001, the reconstruction loss weight was set to 40, and the diversity loss weight to 20. For all of them, unspecified parameters were set to the values reported in their original H36M implementations.
\section{Datasets details} \label{sec:supp_dataset_details} In this section, we give more details to ensure the reproducibility of the cross-dataset AMASS evaluation protocol. \textbf{Training splits. } The training, validation, and test splits are based on the official AMASS splits from the original publication~\cite{mahmood2019amass}. However, we also include the datasets added to AMASS since then. Accordingly, the training set contains the ACCAD, BMLhandball, BMLmovi, BMLrub, CMU, EKUT, EyesJapanDataset, KIT, PosePrior, TCDHands, and TotalCapture datasets, and the validation set contains the HumanEva, HDM05, SFU, and MoSh datasets. The remaining datasets are all part of the test set: DFaust, DanceDB, GRAB, HUMAN4D, SOMA, SSM, and Transitions. AMASS datasets showcase a wide range of behaviors at both intra- and inter-dataset levels. For example, DanceDB, GRAB, and BMLhandball contain sequences of dancing, grabbing objects, and sports actions, respectively. Other datasets like HUMAN4D offer a wide intra-dataset variability of behaviors by themselves. As a result, this evaluation protocol represents a very complete and challenging benchmark for HMP. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figures/datasets_distribution_v.pdf} \caption{\textbf{Test set sequences. }We show the number of test sequences evaluated for each class/dataset in H36M/AMASS.} \vspace{-0.3cm} \label{fig:supp_datasets_distribution} \end{figure} \textbf{Test sequences. }For each dataset clip (previously downsampled to 60Hz), we selected all sequences starting from frame 180 (3s), with a stride of 120 (2s). This was done to ensure that for any segment to predict (prediction window), up to 3s of preceding motion was available. As a result, future work will be able to explore models exploiting longer observation windows while still using the same prediction windows and, therefore, be compared to our results. A total of 12728 segments were selected, around 2.5 times the number of H36M test sequences. Note that clips with no framerate available in the AMASS metadata were ignored. \autoref{fig:supp_datasets_distribution} shows the number of segments extracted from each test dataset. 94.1\% of all test samples belong to either DanceDB, GRAB, or HUMAN4D. Most SSM clips had to be discarded due to lengths shorter than 300 frames (5s). The list of sequence indices is made available along with the project code to ease reproducibility. \textbf{Multimodal ground truth. }The L2 distance threshold used for the generation of the multimodal ground truth was set to 0.4 so that the average number of resulting multimodal ground truths for each sequence was similar to that of H36M with a threshold of 0.5~\cite{yuan2020dlow}. % \section{Further experimental results} \label{sec:supp_exp_results} In this section, we present a class- and dataset-wise comparison to the state of the art for H36M and AMASS, respectively (\autoref{subsec:supp_wise_results}). We also include the distributions of predicted displacement for each class/dataset, which are used for the CMD calculation. Finally, we present an extended analysis of the effect of $k$, which controls the loss \textit{relaxation} level (\autoref{subsec:supp_extended_k}). \subsection{Class- and dataset-wise results} \label{subsec:supp_wise_results} \autoref{tab:supp_sota_comparison_h36m} shows that \mbox{BeLFusion}{} achieves state-of-the-art results in most metrics in all H36M classes. We stress that our model is especially good at predicting the future in contexts where the observation strongly determines the following action. For example, when the person is \textit{Smoking} or \textit{Phoning}, a model should predict a coherent future that also involves holding a cigar or a phone.
\mbox{BeLFusion}{} succeeds at this, showing FDE improvements of 9.1\%, 6.3\%, and 3.7\% with respect to other methods for \textit{Eating}, \textit{Phoning}, and \textit{Smoking}, respectively. Our model also excels in classes where the determinacy of each part of the body needs to be assessed. For example, for \textit{Directions}, and \textit{Photo}, which often involve a static lower body and diverse upper-body movements, \mbox{BeLFusion}{} improves FDE by 8.9\% and 8.0\%, respectively. We also highlight the adaptive APD that our model shows, in contrast to the constant variety of motions predicted by the state-of-the-art methods. This effect is best observed in \autoref{fig:supp_apde}, where \mbox{BeLFusion}{} is the method that best replicates the intrinsic multimodal diversity of each class (i.e., the APD of the multimodal ground truth, see \autoref{subsec:evaluation_metrics}). The variety of motions present in each AMASS dataset impedes such a detailed analysis. However, we also observe that the improvements with respect to the other methods are consistent across datasets (\autoref{tab:supp_sota_comparison_amass}). The only dataset where \mbox{BeLFusion}{} is beaten in an accuracy metric (FDE) is Transitions, where the sequences consist of transitions among different actions, without any behavioral cue that allows the model to anticipate them. We also observe that our model yields a higher variability of APD across datasets, adapting to the sequence context, as clearly depicted in \autoref{fig:supp_apde} as well. Regarding the CMD, Tabs.~\ref{tab:supp_sota_comparison_h36m} and \ref{tab:supp_sota_comparison_amass} show how methods that promote highly diverse predictions are biased toward forecasting faster movements than the ones present in the dataset. \autoref{fig:supp_supp_cmd} shows a clearer picture of this bias by plotting the average predicted displacement at all predicted frames. We observe how in all H36M classes, GSPS and DivSamp accelerate very early and eventually stop by the end of the prediction. We argue that such early divergent motion favors high diversity values, at the expense of realistic transitions from the ongoing to the predicted motion. By contrast, \mbox{BeLFusion}{} produces movements that resemble those present in the dataset. While DivSamp follows a similar trend in AMASS as in H36M, GSPS does not. Although DLow is far from state-of-the-art accuracy, it achieves the best performance with regard to this metric in both datasets. Interestingly, \mbox{BeLFusion}{} decelerates slightly in the first frames and then achieves the motion closest to that of the dataset shortly after. We hypothesize that this effect is an artifact of the behavioral coupling step, where the ongoing motion smoothly transitions to the predicted behavior. \subsection{Ablation study: implicit diversity} \label{subsec:supp_extended_k} As described in \autoref{subsec:behavioral_ld} and \ref{subsec:results} of the main paper, by relaxing the loss regularization (i.e., increasing the number of predictions sampled at each training iteration, $k$), we can increase the diversity of \mbox{BeLFusion}{}'s predictions. We already showed that by increasing $k$, the diversity (APD), accuracy (ADE, FDE), and realism (FID) improve. In fact, for large $k$ ($>5$), a single denoising step becomes enough to achieve state-of-the-art accuracy.
Still, going through the whole reverse Markov diffusion chain helps the predicted behavior code move closer to the latent space manifold, thus generating more realistic predictions. In \autoref{fig:supp_ext_k_analysis}, we include the same analysis for all the models in the ablation study of the main paper. The results prove that the implicit diversity effect is not exclusive to either \mbox{BeLFusion}{}'s loss or its behavioral latent space. % \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{figures/apde.pdf} \caption{\textbf{Class- and dataset-wise APD. }GT corresponds to the APD of the multimodal ground truth. \mbox{BeLFusion}{} is the only method that adjusts the diversity of its predictions to model the intrinsic diversity of each class and dataset. As a result, the APD distributions of \mbox{BeLFusion}{} and GT are very similar.} \vspace{-0.3cm} \label{fig:supp_apde} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/supp_motion_per_class_both.pdf} \caption{\textbf{Predicted motion analysis. } For each timestep in the future (predicted frame), the plots above show the predicted displacement averaged across all test sequences. For H36M, GSPS and DivSamp predictions accelerate in the beginning, leading to unrealistic transitions. For AMASS, DivSamp shows a similar behavior, and DLow beats all methods except in GRAB, where \mbox{BeLFusion}{} matches the average dataset motion very well.} \vspace{-0.3cm} \label{fig:supp_supp_cmd} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/k_supp_analysis_both.pdf} \caption{\textbf{Implicit diversity. }By increasing the value of $k$, diversity is implicitly promoted in both the latent and reconstructed spaces (Latent APD, and APD). We observe that this effect is not particular to the loss choice ($\mathcal{L}_{lat}$, $\mathcal{L}_{rec}$, or both) or the latent space construction (behavioral or not). Using the LDM to reverse the whole Markov chain of 10 steps (x-axis) helps improve diversity (APD), accuracy (ADE), and realism (FID) in general. Note that for $k>5$, only the diversity and the realism are further improved, and a single denoising step becomes enough to generate the most accurate predictions.} \vspace{-0.3cm} \label{fig:supp_ext_k_analysis} \end{figure*} \section{Examples \textit{in motion}} \label{sec:supp_visual_examples} For each dataset, we include several videos where 10 predictions of \mbox{BeLFusion}{} are compared to those of methods showing competitive performance for H36M: TPK~\cite{walker2017theposeknows}, DLow~\cite{yuan2020dlow}, GSPS~\cite{mao2021gsps}, and DivSamp~\cite{dang2022diverse}. Videos are identified as `[dataset]\_[sample\_id]\_[class/subdataset]'. For example, `A\_6674\_GRAB' is sample 6674, which is part of the GRAB~\cite{taheri2020grab} dataset within AMASS (prefix `A\_'), and `H\_1246\_Sitting' is sample 1246, which is part of a `Sitting' sequence of H36M (prefix `H\_'). The \textit{Context} column shows the observed sequence and freezes at the last observed pose. The \textit{GT} column shows the ground truth motion. % In this section, we discuss the visual results by highlighting the main advantages provided by \mbox{BeLFusion}{} and showing some failure examples. \textbf{Realistic transitioning. } By means of the behavior coupler, \mbox{BeLFusion}{} is able to transfer predicted behaviors to any ongoing motion with high realism.
This is supported quantitatively by the FID and CMD metrics, and perceptually by our qualitative assessment (\autoref{subsec:results}). Here, we assess it by visually inspecting several examples. For example, when the observation shows an ongoing fast motion (`H\_608\_Walking', `H\_1928\_Eating' or `H\_2103\_Photo'), \mbox{BeLFusion}{} is the only model that consistently generates a coherent transition between the observation and the predicted behavior. Other methods mostly predict a sudden stop of the previous action. This is also apparent in the cross-dataset evaluation. For example, although the observation window of `A\_103\_Transitions' clearly showcases a fast rotational dancing step, none of the state-of-the-art methods are able to generate a plausible continuation of the observed motion, and all of their predictions abruptly stop rotating. \mbox{BeLFusion}{} is the only method that generates predictions that slowly decrease the rotational momentum before starting to perform a different action. A similar effect is observed in `A\_2545\_DanceDB', and `A\_10929\_HUMAN4D'. \textbf{Context-driven prediction. } \mbox{BeLFusion}{}'s state-of-the-art APDE and CMD metrics show its superior ability to adjust both the \textit{motion speed} and \textit{motion determinacy} to the observed context. This results in sets of predictions that are, overall, more coherent with respect to the observed context. For example, whereas for `H\_4\_Sitting' \mbox{BeLFusion}{}'s predicted motions showcase a high variety of arm-related actions, its predictions for sequences where the arms are used in an ongoing action (`H\_402\_Smoking', `H\_446\_Smoking', and `H\_541\_Phoning') have a more limited variety of arm motion. In contrast, predictions from state-of-the-art methods do not have such behavioral consistency with respect to the observed motion. This is more evident in diversity-promoting methods like DLow, GSPS, and DivSamp, where the predicted motion is usually implausible for a person that is smoking or making a phone call. Similarly, in `H\_962\_WalkTogether', our method predicts motions that are compatible with the ongoing action of walking next to someone, whereas other methods ignore such a possibility. In AMASS, \mbox{BeLFusion}{}'s capability to adapt to the context is clearly depicted in sequences with low-range motion, or where motion is focused on particular parts of the body. For example, \mbox{BeLFusion}{} adapts the diversity of predictions to the `grabbing' action present in the GRAB dataset. While other methods produce predictions that are diverse in the coordinate space but inaccurate, our model encourages diversity within the narrow spectrum of plausible behaviors that may follow (see `A\_7667\_GRAB', `A\_7750\_GRAB', or `A\_9274\_GRAB'). In fact, in `A\_11074\_HUMAN4D' and `A\_12321\_SOMA', our model is the only one able to anticipate the intention of lying down by detecting subtle cues in the observation window (samples \#6 and \#8). In general, \mbox{BeLFusion}{} provides good coverage of all plausible futures given the contextual setting. For example, in `H\_910\_SittingDown' and `H\_861\_SittingDown', our model's predictions contain as many different actions as those of all other methods, without the realism trade-off seen for GSPS or DivSamp. \textbf{Generalization to unseen contexts. } As a result of the two properties above (realistic transitioning and context-driven prediction), \mbox{BeLFusion}{} shows superior generalization to unseen situations.
This is quantitatively supported by the large step forward in the results of the cross-dataset evaluation. Such generalization capabilities are especially perceptible in the DanceDB\footnote{Dance Motion Capture DB, \url{http://dancedb.cs.ucy.ac.cy}.} sequences, which include dance moves unseen at training time. For instance, `A\_2054\_DanceDB' shows how \mbox{BeLFusion}{} can predict, to some extent, the correct continuation of a dance move, while other methods either almost freeze or simply predict an out-of-context movement. Similarly, `A\_2284\_DanceDB' and `A\_1899\_DanceDB' show how \mbox{BeLFusion}{} is able to detect that the dance moves involve keeping the arms raised while moving or rotating. In comparison, DLow, GSPS, and DivSamp simply predict other unrelated movements. TPK is only able to predict a few samples with fairly good continuations of the dance step. Also, in `A\_12391\_SOMA', \mbox{BeLFusion}{} is the only method able to infer how a very challenging repetitive stretching movement will continue. We also include some examples where our model fails to generate a coherent and plausible set of predictions. This mostly happens under aggressive domain shifts. For example, in `A\_1402\_DanceDB', the previously unseen handstand behavior in the observation leads \mbox{BeLFusion}{} to generate several wrong movement continuations. Similarly to the other state-of-the-art methods, \mbox{BeLFusion}{} also struggles with modeling high frequencies. For example, in `A\_1087\_DanceDB', the fast leg motion during the observation is not reflected in any prediction, although \mbox{BeLFusion}{} slightly hints at it in samples \#4 and \#7. Although less clearly, this is also observed in H36M. For example, in `H\_148\_WalkDog', none of the models is able to model the high-speed walking movement from the ground truth. Robustness against large domain shifts and the modeling of high frequencies are challenging limitations that need to be addressed in future work. \section{Qualitative assessment} \label{sec:supp_mos} \textbf{Selection criteria. } In order to ensure the assessment of a wide range of scenarios, we randomly sampled from three sampling pools per dataset. To generate them, we first ordered all test sequences according to the average joint displacement $D_i$ in the last 100 ms of observation. Then, we selected the pools by taking sequences with $D_i$ within 1) the top 10\% (high-speed transition), 2) 40-60\% (medium-speed transition), and 3) the bottom 10\% (low-speed transition). Then, 8 sequences were randomly sampled for each group. A total of 24 samples for each dataset were selected. These were randomly distributed in groups of 4 and used to generate 6 tests per dataset. Since each dataset has different joint configurations, we did not mix samples from both datasets in the same test to avoid confusion. A sketch of this selection procedure is included at the end of this section. \begin{figure*} \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/mos_instructions.png} \label{fig:supp_mos_example} \end{minipage}% \hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/mos_example.png} \label{fig:supp_mos_instructions} \end{minipage} \caption{\textbf{Questionnaire example. }On the left, instructions shown to the participant at the beginning. On the right, the interface for ranking the skeleton motions. All skeletons correspond to \textit{gif} images that repeatedly show the observation and prediction motion sequences.
} \label{fig:supp_mos} \vspace{-0.3cm} \end{figure*} \setlength{\tabcolsep}{4pt} \begin{table*}[t!]\renewcommand{\arraystretch}{0.9} \footnotesize \centering \begin{tabular}{l@{\hskip 8mm}cccc@{\hskip 8mm}cccc} \toprule & \multicolumn{4}{c}{Human3.6M\cite{ionescu2013h36m}} & \multicolumn{4}{c}{AMASS\cite{mahmood2019amass}} \\ \midrule & Avg. rank & Ranked 1st & Ranked 2nd & Ranked 3rd & Avg. rank & Ranked 1st & Ranked 2nd & Ranked 3rd \\ \midrule \multicolumn{9}{l}{Low-speed transition} \\ \midrule GSPS & 2.238 $\pm$ 0.305 & 18.0\% & \textbf{40.4\%} & 41.6\% & 2.156 $\pm$ 0.595 & 22.6\% & \textbf{38.1\%} & 39.3\%\\ DivSamp & 2.276 $\pm$ 0.459 & 15.7\% & 39.3\% & \textbf{44.9\%} & 2.210 $\pm$ 0.373 & 23.8\% & 31.0\% & \textbf{45.2\%}\\ \mbox{BeLFusion}{} & \textbf{1.486 $\pm$ 0.225} & \textbf{66.3\%} & 20.2\% & 13.5\% & \textbf{1.634 $\pm$ 0.294} & \textbf{53.6\%} & 31.0\% & 15.5\%\\ \midrule \multicolumn{9}{l}{Medium-speed transition} \\ \midrule GSPS & 2.305 $\pm$ 0.466 & 13.8\% & \textbf{48.3\%} & 37.9\% & 2.025 $\pm$ 0.449 & 24.3\% & \textbf{50.0\%} & 25.7\%\\ DivSamp & 2.396 $\pm$ 0.451 & 9.2\% & 36.8\% & \textbf{54.0\%} & 2.497 $\pm$ 0.390 & 10.8\% & 28.4\% & \textbf{60.8\%}\\ \mbox{BeLFusion}{} & \textbf{1.299 $\pm$ 0.243} & \textbf{77.0\%} & 14.9\% & 8.0\% & \textbf{1.478 $\pm$ 0.424} & \textbf{64.9\%} & 21.6\% & 13.5\%\\ \midrule \multicolumn{9}{l}{High-speed transition} \\ \midrule GSPS & 2.194 $\pm$ 0.320 & 21.7\% & \textbf{40.2\%} & 38.0\% & 1.828 $\pm$ 0.468 & 44.9\% & 35.9\% & 19.2\%\\ DivSamp & 2.345 $\pm$ 0.292 & 15.2\% & 32.6\% & \textbf{52.2\%} & 2.589 $\pm$ 0.409 & 6.4\% & 26.9\% & \textbf{66.7\%}\\ \mbox{BeLFusion}{} & \textbf{1.461 $\pm$ 0.149} & \textbf{63.0\%} & 27.2\% & 9.8\% & \textbf{1.583 $\pm$ 0.287} & \textbf{48.7\%} & \textbf{37.2\%} & 14.1\%\\ \midrule \multicolumn{9}{l}{All} \\ \midrule GSPS & 2.246 $\pm$ 0.358 & 17.9\% & \textbf{42.9\%} & 39.2\% & 2.003 $\pm$ 0.505 & 30.5\% & \textbf{41.1\%} & 28.4\% \\ DivSamp & 2.339 $\pm$ 0.393 & 13.4\% & 36.2\% & \textbf{50.4\%} & 2.432 $\pm$ 0.408 & 14.0\% & 28.8\% & \textbf{57.2\%} \\ \mbox{BeLFusion}{} & \textbf{1.415 $\pm$ 0.217} & \textbf{68.7\%} & 20.9\% & 10.4\% & \textbf{1.565 $\pm$ 0.332} & \textbf{55.5\%} & 30.1\% & 14.4\% \\ \bottomrule \end{tabular} \vspace{-0.2cm} \caption{\textbf{Qualitative assessment.} 126{} participants ranked sets of samples from GSPS, DivSamp, and \mbox{BeLFusion}{} by their realism. Lower average rank ($\pm$ std. dev.) is better.} \label{tab:supp_mos_results}\vspace{-0.3cm} \end{table*} \setlength{\tabcolsep}{6pt} \textbf{Assessment details. } The tests were built with the \textit{JotForm}\footnote{\url{https://www.jotform.com/}} platform. Users accessed it through a link generated with \textit{NimbleLinks}\footnote{\url{https://www.nimblelinks.com/}}, which randomly redirected them to one of the tests. \autoref{fig:supp_mos} shows an example of the instructions and definition of realism shown to the user before starting the test (left), and an example of the interface that allowed the user to order the methods according to the realism showcased (right). Note that the instructions showed either AMASS or H36M ground truth samples, as both skeletons have a different number of joints. A total of 126{} people answered the test, with 67 participating in the H36M study, and 59 participating in the AMASS one. \textbf{Extended results.} Extended results for the qualitative study are shown in \autoref{tab:supp_mos_results}. 
We also show the results for each sampling pool, i.e., grouping sequences by the speed of the transition. The average rank was computed as the average of all samples' mean ranks, and the 1st/2nd/3rd position percentages as the number of times a sample was placed at the 1st/2nd/3rd position over the total number of samples available. We observe that the superior realism of \mbox{BeLFusion}{} is particularly notable in the sequences with medium-speed transitions (77.0\% and 64.9\% ranked first in H36M and AMASS, respectively). We argue that this is partly due to the ability of the behavior coupler to adapt the prediction to the observed movement speed and direction. This is also seen in the high-speed set (ranked third in only 9.8\% and 14.1\% of the cases), despite GSPS showing competitive performance on it.
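For concreteness, the aggregation just described can be sketched in a few lines of Python (a minimal illustration with a hypothetical data layout; the toy rankings below are not the actual study data):
\begin{verbatim}
import numpy as np

# Hypothetical layout: rankings[m][s] is the list of ranks (1 = best,
# 3 = worst) that participants assigned to method m on sequence s.
rankings = {
    "GSPS":      [[2, 3, 2], [3, 2, 3]],
    "DivSamp":   [[3, 2, 3], [2, 3, 2]],
    "BeLFusion": [[1, 1, 1], [1, 1, 2]],
}

for method, per_seq in rankings.items():
    # Average rank: mean (and std. dev.) over per-sequence mean ranks.
    seq_means = [np.mean(r) for r in per_seq]
    avg, std = np.mean(seq_means), np.std(seq_means)
    # Placement percentages: share of individual votes per position.
    votes = np.concatenate(per_seq)
    p1, p2, p3 = [(votes == k).mean() * 100 for k in (1, 2, 3)]
    print(f"{method}: {avg:.3f} +- {std:.3f} | "
          f"{p1:.1f}% / {p2:.1f}% / {p3:.1f}%")
\end{verbatim}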
\section{\label{sec:intro}Introduction} A number of popular extensions of the Standard Model (SM) of particle physics predict the existence of doubly charged scalar particles $X^{\pm\pm}$. These include the Type-II seesaw \cite{Schechter:1980gr, Magg:1980ut,Cheng:1980qt,Lazarides:1980nt,Mohapatra:1980yp,Lindner:2016bgg} and the Zee-Babu \cite{Zee:1980ai,Babu:1988ki} models of neutrino masses, the Left–Right model \cite{Pati:1974yy,Mohapatra:1974hk,Senjanovic:1975rk}, the Georgi–Machacek model \cite{Georgi:1985nv,Chanowitz:1985ug,Gunion:1989ci, Gunion:1990dt,Ismail:2020zoz}, the 3-3-1 model \cite{CiezaMontalvo:2006zt, Alves:2011kc} and the little Higgs model \cite{ArkaniHamed:2002qx}. Doubly charged scalars appear also in simplified models, in which one merely adds such scalars in a gauge invariant way in various representations of the SM gauge group $SU(2)_L$ to the particle content of the SM. The Lagrangian of the model is then complemented by gauge-invariant interaction terms involving these new fields \cite{Delgado:2011iz,Alloul:2013raa}. Doubly charged scalars may be long-lived or even stable \cite{Alloul:2013raa,Alimena:2019zri,Acharya:2020uwc,Hirsch:2021wge}. As the simplest example, one can add to the SM an uncolored $SU(2)_L$-singlet scalar field $X$ with hypercharge $Y=2$ \cite{Alloul:2013raa}. The corresponding doubly charged particles will couple to the neutral gauge bosons $\gamma$ and $Z^0$ and may also interact with the SM Higgs boson $H$ through the $(H^\dag H)(X^\dag X)$ term in the Higgs potential. Gauge invariance allows, in addition, the Yukawa coupling of $X$ to right-handed charged leptons, $h_X l_R l_R X+h.c.$ This is the only coupling that makes the $X$-particles unstable in this model; they will be long-lived if the Yukawa coupling constants $h_X$ are small. The Yukawa coupling of $X$ may be forbidden by e.g.\ the $Z_2$ symmetry $X\to -X$, in which case the $X$-scalars will be stable. Doubly charged scalar particles are being actively searched for experimentally, but up to now have not been discovered. For discussions of current experimental constraints on doubly charged particles and of the sensitivity of future experiments to them, see \cite{Alloul:2013raa,Alimena:2019zri,Fuks:2019clu,Padhan:2019jlc, Acharya:2020uwc,Hirsch:2021wge,Dev:2021axj} and references therein. In addition to interesting particle-physics phenomenology, doubly charged scalars may have important implications for cosmology. In this paper we will, however, consider another aspect of their possible existence. As we shall demonstrate, doubly charged particles can catalyze fusion of light nuclei, with potentially important applications for energy production. The negatively charged $X^{--}$ (which we will hereafter simply refer to as $X$) can form atomic bound systems with the nuclei of light elements, such as deuterium, tritium or helium. One example is the antihelium-like $(ddX)$ atom with the $X$-particle as the ``nucleus'' and two deuterons in the 1$s$ atomic state instead of two positrons. (Here and below we use the brackets to denote states bound by the Coulomb force). As $X$ is expected to be very heavy, the size of such an atomic system will in fact be determined by the deuteron mass $m_d$ and will be of the order of the Bohr radius of the $(dX)$ ion, $a_d\simeq 7.2$ fm. Similar small-size atomic systems $(N\!N'X)$ can exist for other light nuclei $N$ and $N'$ with charges $Z\le 2$.
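The quoted size of the $(dX)$ ion can be checked by a simple scaling of the hydrogen Bohr radius $a_0$ (a rough estimate assuming point-like charges and $m_X\gg m_d$, so that the reduced mass of the system is $\simeq m_d$):
\begin{equation*}
a_d \simeq \frac{\hbar^2}{Z_X m_d e^2} = \frac{m_e}{Z_X m_d}\,a_0 \simeq \frac{5.29\times 10^{4}~{\rm fm}}{2\times 3670} \simeq 7.2~{\rm fm},
\end{equation*}
where $Z_X=2$ is the charge of $X$ and $m_d/m_e\simeq 3670$.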
Atomic binding of two nuclei to an $X$-particle brings them so close together that this essentially eliminates the necessity for them to overcome the Coulomb barrier in order to undergo fusion. The exothermic fusion reactions can then occur unhindered and do not require high temperature or pressure. The $X$-particle is not consumed in this process and can then facilitate further nuclear fusion reactions, acting thus as a catalyst. This $X$-catalyzed fusion mechanism is to some extent similar to muon catalyzed fusion ($\mu$CF) (\cite{Frank,Sakharov,Zeldovich1,Alvarez:1957un, Jackson:1957zza,Zeldovich2,Zeldovich3,bogd}, see \cite{Zeldovich4,GerPom, bracci,Breunlich:1989vg,Ponomarev:1990pn,Zeldovich:1991pbl,bogd2} for reviews), in which the role of the catalyst is played by singly negatively charged muons. $\mu$CF of hydrogen isotopes was once considered a prospective candidate for cold fusion. However, it became clear rather early in its study that $\mu$CF suffers from a serious shortcoming that may prevent it from being a viable mechanism of energy production. In the fusion processes, isotopes of helium are produced, and there is a chance that they will capture the negative muons present in the final state of the fusion reactions onto their atomic orbits. Once this happens, muonic ions ($^3$He$\mu)$ or ($^4$He$\mu)$ are formed, which, being positively charged, cannot catalyze further fusion reactions. This effect is cumulative; the sticking to helium nuclei thus eventually knocks the muons out from the catalytic process, i.e.\ catalytic poisoning occurs. Out of all $\mu$CF reactions, the $d-t$ fusion has the smallest muon sticking probability, $\omega_s\simeq 10^{-2}$. This means that a single muon will catalyze $\sim$100 fusion reactions before it gets removed from the catalytic process. The corresponding total produced energy is $\sim$1.7 GeV, which is at least a factor of five smaller than the energy needed to produce and handle one muon \cite{Jackson:1957zza}. In addition, the muon's short lifetime makes it impractical to try to dissolve the produced ($^3$He$\mu)$ or ($^4$He$\mu)$ bound states by irradiating them with particle beams in order to reuse the released muons. These considerations have essentially killed the idea of using $\mu$CF for energy production. There were discussions in the literature of the possibility of energy generation through the catalysis of nuclear fusion by hypothetical heavy long-lived or stable singly charged \cite{Zeldovich3,Rafelski:1989pz, Ioffe:1979tv,Hamaguchi:2006vp} or fractionally charged \cite{Zweig:1978sb} particles. However, it has been shown in \cite{Zeldovich3,Ioffe:1979tv,Hamaguchi:2006vp} that these processes suffer from the same problem of catalytic poisoning as $\mu$CF, and therefore they cannot be useful sources of energy. In particular, in ref.~\cite{Ioffe:1979tv} it was demonstrated that reactivation of the catalyst particles by irradiating their bound states with helium nuclei using neutron beams, as suggested in \cite{Zweig:1978sb}, would require beam intensities about nine orders of magnitude higher than those currently produced by the most powerful nuclear reactors. In this paper we consider the fusion of light nuclei catalyzed by doubly negatively charged $X$-particles and demonstrate that, unlike $\mu$CF, this process may be a viable source of energy.
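The $\mu$CF energy balance quoted above follows from simple bookkeeping (a rough estimate, using the standard $d$-$t$ energy release $Q_{dt}\simeq 17.6$~MeV): a single muon catalyzes about $\omega_s^{-1}$ fusions before sticking, so the total energy it produces is
\begin{equation*}
E_\mu \simeq \omega_s^{-1}\, Q_{dt} \simeq 10^{2}\times 17.6~{\rm MeV} \simeq 1.8~{\rm GeV},
\end{equation*}
consistent at this level of accuracy with the $\sim$1.7~GeV figure cited above.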
We analyze $X$-catalyzed fusion ($X$CF) in deuterium environments and show that in this case catalytic poisoning can only occur through the sticking of $X$-particles to $^6$Li nuclei, which are produced in the third-stage fusion reactions. The corresponding sticking probability is shown to be very low, and, before getting bound to $^6$Li, each $X$-particle can catalyze $\sim 3.5\cdot 10^{9}$ fusion cycles, producing $\sim 7\cdot 10^{4}$ TeV of energy. To the best of the present author's knowledge, nuclear fusion catalyzed by doubly charged particles has never been considered before. \section{\label{sec:xcg}$X$-catalyzed fusion in deuterium} We will be assuming that $X$-particles interact only electromagnetically, which in any case should be a very good approximation at low energies relevant to nuclear fusion. Let $X$-particles be injected in pressurized D$_2$ gas or liquid deuterium. Being very heavy and negatively charged, the $X$-particles can easily penetrate D$_2$ molecules and D atoms, dissociating the former and ionizing the latter and losing energy on the way. Once the velocity of an $X$-particle becomes comparable to atomic velocities ($v\simeq 2e^2/\hbar\sim 10^{-2}c$), it captures a deuteron on a highly excited atomic level of the ($dX$) system, which then very quickly de-excites to its ground state, mostly through electric dipole radiation and inelastic scattering on the neighboring deuterium atoms. As the ($dX$) ion is negatively charged, it swiftly picks up another deuteron to form the ($ddX$) atom. The characteristic time of this atomic phase of the $X$CF process is dominated by the $X$ moderation time and is $\sim 10^{-10}$\,s at liquid hydrogen density $N_0=4.25\times 10^{22}$ nuclei/cm$^3$ and $T\simeq 20$K and about $10^{-7}$\,s in deuterium gas at $0^\circ$C and pressure of one bar (see Appendix~\ref{sec:Xatom}). After the $(ddX)$ atom has been formed, the deuterons undergo nuclear fusion through several channels, see below. Simple estimates show that the fusion rates are many orders of magnitude faster than the rates of the atomic formation processes. That is, once ($ddX$) [or similar ($N\!N'X$)] atoms are formed, fusion occurs practically instantaneously. The time scale of $X$CF is therefore determined by the atomic formation times. The rates of the fusion reactions, however, determine the branching ratios of the various fusion channels, which are important for the kinetics of the catalytic cycle. At the first stage of $X$CF in deuterium two deuterons fuse to produce $^3$He, $^3$H or $^4$He. In each case there is at least one channel in which the final-state $X$ forms an atomic bound state with one of the produced nuclei. Stage I fusion reactions are \begin{align} &(ddX)\to {^3\rm He}+n+X &(Q=2.98~{\rm MeV},~29.1\%) \tag{1a} \label{eq:r1a}\\ &(ddX)\to ({\rm ^3He}X)+n &(Q=3.89~{\rm MeV}, ~19.4\%) \tag{1b} \label{eq:r1b} \end{align} \vglue-8mm \begin{align} &(ddX)\to {\rm ^3H}+p+X &&(Q=3.74~{\rm MeV},~34.4\%) \tag{2a} \label{eq:r2a}\\ &(ddX)\to ({\rm ^3H}X)+p &&(Q=4.01~{\rm MeV}, ~6.2\%) \tag{2b} \label{eq:r2b}\\ &(ddX)\to {\rm ^3H}+(pX) &&(Q=3.84~{\rm MeV}, ~0.5\%) \tag{2c} \label{eq:r2c} \end{align} \vglue-8mm \begin{align} &(ddX)\to {\rm ^4He}+\gamma+X &&(Q=23.6~{\rm MeV}, ~4\!\cdot\!10^{-9}) \tag{3a} \label{eq:r3a}\\ &(ddX)\to ({\rm ^4He}X)+\gamma &&(Q=24.7~{\rm MeV}, ~3\!\cdot\!
10^{-8}) \tag{3b} \label{eq:r3b}\\ &(ddX)\to {\rm ^4He}+X &&(Q=23.6~{\rm MeV}, ~10.4\%) \tag{3c} \label{eq:r3c} \end{align} The parentheses show the $Q$-values and the branching ratios of the reactions. In evaluating the $Q$-values we have taken into account that the atomic binding of the two deuterons to $X$ in the initial state reduces $Q$, whereas the binding to $X$ of one of the final-state nuclei increases it. As the Bohr radii of most of the $X$-atomic states we consider are either comparable to or smaller than the nuclear radii, in calculating the Coulomb binding energies one has to allow for the finite nuclear sizes. We do that by making use of a variational approach, as described in Appendix~\ref{sec:Bind}. The rates of reactions (\ref{eq:r1b}), (\ref{eq:r2b}), (\ref{eq:r2c}) and (\ref{eq:r3b}) with bound $X$-particles in the final states are proportional to the corresponding $X$-particle sticking probabilities, $\omega_{s}$. The existence of such channels obviously affects the branching ratios of the analogous reactions with free $X$ in the final states. Radiative reactions (\ref{eq:r3a}) and (\ref{eq:r3b}) have tiny branching ratios, which is related to their electromagnetic nature and to the fact that for their $X$-less version, $d+d\to{\rm ^4He}+\gamma$, transitions of E1 type are strictly forbidden. This comes about because the two fusing nuclei are identical, which, in particular, means that they have the same charge-to-mass ratio. This reaction therefore proceeds mainly through E2 transitions \cite{bogd}. When the deuterons are bound to $X$, the strict prohibition of E1 transitions is lifted due to possible transitions through intermediate excited atomic states.% \footnote{\label{fn:1}The author is grateful to M.~Pospelov for raising this issue and suggesting an example of a route through which E1 transitions could proceed in reaction (\ref{eq:r3b}).} However, as shown in Appendix~\ref{sec:lift}, the resulting E1 transitions are in this case heavily hindered and their rates actually fall below the rates of the E2 transitions. Reaction (\ref{eq:r3c}) is an internal conversion process. Note that, unlike for reactions (\ref{eq:r1a}) - (\ref{eq:r3b}), the $X$-less version of (\ref{eq:r3c}) does not exist: the process $d+d\to{\rm ^4He}$ is forbidden by kinematics. For the details of the calculation of the rate of reaction (\ref{eq:r3c}), as well as of the rates of the other reactions discussed in this paper, see Appendix~\ref{sec:Sfactor}. The relevant $Q$-values of the reactions and sticking probabilities are evaluated in Appendices~\ref{sec:Bind} and \ref{sec:sticking}, respectively. The final states of reactions (\ref{eq:r1a}), (\ref{eq:r2a}), (\ref{eq:r3a}) and (\ref{eq:r3c}) contain free $X$-particles which are practically at rest and can immediately capture deuterons of the medium, forming again the ($ddX$) atoms. Thus, they can again catalyze $d-d$ fusion through stage I reactions (\ref{eq:r1a})-(\ref{eq:r3c}). The same is also true for the $X$-particles in the final state of reaction (\ref{eq:r2c}), which emerge bound to protons. Collisions of ($pX$) with deuterons of the medium lead to fast replacement of the protons by deuterons through the exothermic charge exchange reaction $(pX)+d\to (dX)+p$ with the energy release $\sim$90~keV (see Appendix~\ref{sec:charge}). The produced $(dX)$ ion then picks up a deuteron to form the $(ddX)$ atom, which can again participate in stage I reactions (\ref{eq:r1a})-(\ref{eq:r3c}).
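The $\sim$90~keV release in this charge-exchange reaction can be understood from the point-Coulomb binding energies $E_B(NX)\simeq (Z_N Z_X)^2\,(m_N/m_e)\,{\rm Ry}$ (a rough estimate that neglects the finite nuclear sizes, which somewhat reduce the actual binding energies):
\begin{equation*}
E_B(dX)-E_B(pX) \simeq 4\,{\rm Ry}\,\frac{m_d-m_p}{m_e} \simeq 4\times 13.6~{\rm eV}\times 1834 \simeq 100~{\rm keV}.
\end{equation*}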
The situation is different for the $X$-particles in the final states of reactions (\ref{eq:r1b}) and (\ref{eq:r2b}), which form bound states with $^3$He and $^3$H, respectively. They can no longer directly participate in stage I $d-d$ fusion reactions. However, they are not lost for the fusion process: the produced (${\rm ^3He}X$) and (${\rm ^3H}X$) can still pick up deuterons of the medium to form the atomic bound states (${\rm ^3He}dX$) and (${\rm ^3H}dX$), which can give rise to stage II fusion reactions, which we will consider next. Before we proceed, a comment is in order. While (${\rm ^3H}X$) is a singly negatively charged ion which can obviously pick up a positively charged deuteron to form an (${\rm ^3H}dX$) atom, (${\rm ^3He}X$) is a neutral $X$-atom. It is not immediately obvious whether it can form a stable bound state with $d$, which, if it exists, would be a positive ion. In the case of the usual atomic systems, analogous (though negatively charged) states do exist -- a well-known example is the negative ion of hydrogen H$^-$. However, the stability of (${\rm ^3He}\,dX$) cannot be directly deduced from the stability of H$^-$: in the latter case the two particles orbiting the nucleus are identical electrons, whereas for (${\rm ^3He}dX$) these are different entities -- nuclei with differing masses and charges. Nevertheless, from the results of a general analysis of three-body Coulomb systems carried out in \cite{Martin:1998zc,krikeb,armour} it follows that the state (${\rm ^3He}dX$) (as well as the bound state (${\rm ^4He}dX$) which we will discuss later on) should exist and be stable. For additional information see Appendix~\ref{sec:posIons}. Once (${\rm ^3He}X$) and (${\rm ^3H}X$), produced in reactions (\ref{eq:r1b}) and (\ref{eq:r2b}), have picked up deuterons from the medium and formed the atomic bound states (${\rm ^3He}dX$) and (${\rm ^3H}dX$), the following stage II fusion reactions occur: \begin{align} \!\!\!\!\!&(^3{\rm He}dX)\to {\rm ^4He}+p+X\!\!\! &&(Q=17.4~{\rm MeV},\,94\%) \!\!\!\tag{4a} \label{eq:r4a}\\ \!\!\!\!\!&({\rm ^3He}dX)\to ({\rm ^4He}X)+p\!\!\! &&(Q=18.6~{\rm MeV},\,6\%) \nonumber \!\!\!\tag{4b} \label{eq:r4b}\\ \!\!\!\!\!&({\rm ^3He}dX)\to {\rm ^4He}+(pX)\!\!\! &&(Q=17.5~{\rm MeV},\,3\!\cdot\!10^{-4}) \!\!\! \tag{4c} \label{eq:r4c} \end{align} \vglue-8.0mm \begin{align} \!\!\!\!\!\!&(^3{\rm H} d X) \to {\rm ^4He}+n+X \!\!\!&&(Q=17.3~{\rm MeV},~96\%) \tag{5a} \label{eq:r5a}\\ \!\!\!\!\!\!&(^3{\rm H} d X) \to ({\rm ^4He}X)+n \!\!\! &&(Q=18.4~{\rm MeV}, ~4\%) \tag{5b} \label{eq:r5b} \end{align} In these reactions the vast majority of the $X$-particles bound to $^3$He and $^3$H are liberated; the freed $X$-particles can again form $(ddX)$ states and catalyze stage I fusion reactions (\ref{eq:r1a})-(\ref{eq:r3c}). The same applies to the final-state $X$-particles bound to protons, as was discussed above. The remaining relatively small fraction of $X$-particles come out of stage II reactions in the form of $({\rm ^4He}X)$ atoms. Together with a very small amount of $({\rm ^4He}X)$ produced in reaction (\ref{eq:r3b}), they pick up deuterons from the medium and form $({\rm ^4He}dX)$ states, which undergo stage III $X$CF reactions: \begin{align} &(^4{\rm He} d X)\to {\rm ^6Li}+\gamma+X \!\!\! \!\!\! &&(Q=0.32~{\rm MeV}, ~10^{-13}) \tag{6a} \label{eq:r6a}\\ &({\rm ^4He} d X)\to ({\rm ^6Li}X)+\gamma \!\!\! \!\!\! &&(Q=2.4~{\rm MeV}, ~\,2\!\cdot\! 10^{-8}) \tag{6b} \label{eq:r6b}\\ &({\rm ^4He} d X)\to {\rm ^6Li}+X \!\!\!
\!\!\!\!\!&&(Q=0.32~{\rm MeV}, \,\simeq100\%) \tag{6c} \label{eq:r6c} \end{align} In these reactions, almost all previously bound $X$-particles are liberated and are free to catalyze again nuclear fusion through $X$CF reactions of stages I and II. The remaining tiny fraction of $X$-particles end up being bound to the produced ${\rm ^6Li}$ nuclei through reaction (\ref{eq:r6b}). However, as small as it is, this fraction is very important for the kinetics of $X$CF. The bound states $({\rm ^6Li}X)$ are ions of charge +1; they cannot form bound states with positively charged nuclei and thus cannot participate in further $X$CF reactions. That is, with their formation catalytic poisoning occurs and the catalytic process stops. {}From the branching ratios of stage I, II, and III $X$CF reactions one finds that the fraction of the initially injected $X$-particles which end up in the $({\rm ^6Li}X)$ bound state is $\sim 2.8\times 10^{-10}$. This means that each initial $X$-particle, before getting stuck to a $^6$Li nucleus, can catalyze $\sim 3.5\times 10^{9}$ fusion cycles. Direct inspection shows that, independently of which sub-channels are involved, the net effect of stage I, II and III $X$CF reactions is the conversion of four deuterons to a $^6$Li nucleus, a proton and a neutron: \begin{equation} 4d\to {\rm ^6Li}+p+n+23.1\,{\rm MeV}\,. \tag{7} \label{eq:7} \end{equation} Therefore, each initial $X$-particle will produce about $7\times 10^4$ TeV of energy before it gets knocked out of the catalytic process. It should be stressed that this assumes that the $X$-particles are sufficiently long-lived to survive $3.5\times 10^9$ fusion cycles. From our analysis it follows that the slowest processes in the $X$CF cycle are the formation of the positive ions $({\rm ^3He}dX)$ and $({\rm ^4He}dX)$. The corresponding formation times are estimated to be of the order of $10^{-8}$\,s (see Appendix~\ref{sec:posIons}). Therefore, for the $X$-particles to survive $3.5\times 10^9$ fusion cycles and produce $\sim 7\times 10^4$ TeV of energy, their lifetime $\tau_X$ should exceed $\sim 10^2$\,s. For shorter lifetimes the energy produced by a single $X$-particle before it gets stuck to a $^6$Li nucleus is reduced accordingly. \section{\label{sec:acquis} Acquisition and reactivation of $X$-particles} The amount of energy produced by a single $X$-particle has to be compared with the energy expenditures related to its production. $X$-particles can be produced in pairs in accelerator experiments, either in $l^+l^-$ annihilation at lepton colliders or through the Drell-Yan processes at hadronic machines. Although the energy $E\sim 7\times 10^4$ TeV produced by one $X$-particle before it gets knocked out of the catalytic process is quite large on a microscopic scale, it is only about 10\,mJ. This means that $\gtrsim 10^{8}$ $X$-particles are needed to generate 1\,MJ of energy. While colliders are better suited for the discovery of new particles, for the production of large numbers of $X$-particles fixed-target accelerator experiments are more appropriate. For such experiments the beam energy must exceed the mass of the $X$-particle significantly. Currently, plans for building such machines are being discussed \cite{Benedikt:2020ejr}. The problem is, however, that the $X$-particle production cross section is very small.
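The energy bookkeeping of the catalytic cycle can be checked with a few lines of arithmetic (a sketch using only the numbers quoted in the text; it reproduces the quoted figures at the order-of-magnitude level):
\begin{verbatim}
# Back-of-the-envelope check of the XCF energy bookkeeping.
J_PER_MEV = 1.602e-13

stick_fraction = 2.8e-10        # fraction of X ending up in (6Li X)
cycles = 1.0 / stick_fraction   # ~3.5e9 catalytic cycles per X
q_mev = 23.1                    # 4d -> 6Li + p + n energy release

energy_tev = cycles * q_mev / 1e6
energy_j = cycles * q_mev * J_PER_MEV
print(f"cycles per X      : {cycles:.1e}")           # ~3.6e9
print(f"energy per X      : {energy_tev:.1e} TeV"
      f" = {energy_j*1e3:.0f} mJ")                   # ~8e4 TeV ~ 13 mJ
print(f"X needed for 1 MJ : {1e6 / energy_j:.1e}")   # ~1e8

# Required lifetime: cycles times the slowest step (~1e-8 s per cycle),
# of the order of 1e2 s, as stated in the text.
print(f"required lifetime : {cycles * 1e-8:.0f} s")
\end{verbatim}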
This comes about because of their expected large mass ($m_X\gtrsim 1$\,TeV/$c^2$) and the fact that, for the efficient moderation needed to make the formation of $(dX)$ atoms possible, $X$-particles should be produced with relatively low velocities. The cross section $\sigma_p$ of production of $X$-particles with mass $m_X\simeq 1$\,TeV/$c^2$ and $\beta=v/c\simeq 0.3$ is only $\sim 1$ fb (note that for scalar $X$-particles $\sigma_p\propto \beta^3$). As a result, the energy spent on the production of an $X^{++}X^{--}$ pair will be far larger than the energy that can be generated by one $X^{--}$ before it gets bound to a $^6$Li nucleus. This means that reactivating and reusing the bound $X$-particles multiple times would be mandatory in this case. This, in turn, implies that only very long-lived $X$-particles with $\tau_X\gtrsim 3\times 10^{4}$\,yr will be suitable for energy production. Reactivation of $X$-particles bound to $^6$Li requires dissociation of the $({\rm ^6Li}X)$ ions. This could be achieved by irradiating them with particle beams, similarly to what was suggested for the reactivation of lower-charge catalyst particles in ref.~\cite{Zweig:1978sb}. However, it would be much more efficient to use instead the $({\rm ^6Li}X)$ ions as projectiles and irradiate a target with their beam.% \footnote{We thank M.~Pospelov for this suggestion.} The Coulomb binding energy of $X$ to ${\rm ^6Li}$ is about 2 MeV; to strip the $X$-particles off by scattering on target nuclei with the average atomic number $A\simeq 40$, one would have to accelerate the $({\rm ^6Li}X)$ ions to velocities $\beta\simeq 0.01$ which, for $m_X\simeq 1$\,TeV/$c^2$, corresponds to a beam energy of $\sim 0.05$ GeV. At these energies the cross section of the stripping reaction is $\gtrsim 0.1$\,b, and $X$-particles can be liberated with high efficiency in relatively small targets. The energy spent on the reactivation of one $X$-particle will then be only about $10^{-9}$ of the energy it can produce before sticking to a $^6$Li nucleus. If $X$-particles are stable or practically stable, i.e.\ their lifetime $\tau_X$ is comparable to the age of the Universe, there may exist a terrestrial population of relic $X$-particles bound to nuclei or (in the case of $X^{++}$) to electrons, thus forming exotic nuclei or atoms. The possibility of the existence of exotic bound states containing charged massive particles was suggested in ref.~\cite{Cahn:1980ss} (see also \cite{DeRujula:1989fe}) and has been studied by many authors. The concentration of such exotic atoms on the Earth may be very low if reheating after inflation occurs at sufficiently low temperatures. Note that reheating temperatures as low as a few MeV are consistent with observations \cite{Hannestad:2004px}. A number of searches for such superheavy exotic isotopes have been carried out using a variety of experimental techniques, and upper limits on their concentrations were established, see \cite{Burdin:2014xma} for a review. Exotic helium atoms $(X^{++}ee)$ were searched for in the Earth's atmosphere using a laser spectroscopy technique, and a limit on their concentration of $10^{-12}-10^{-17}$ per atom was established over the mass range 20 $-$ $10^4$\,GeV/$c^2$~\cite{Mueller:2003ji}. In the case of doubly negatively charged $X$, their Coulomb binding to nuclei of charge $Z$ would produce superheavy exotic isotopes with the nuclear properties of the original nuclei but the chemical properties of atoms with nuclear charge $Z-2$. Such isotopes could have accumulated in continental crust and marine sediments.
Singly positively charged ions ($^6$Li$X$) and ($^7$Li$X$) chemically behave as superheavy protons; they can capture electrons and form anomalously heavy hydrogen atoms. Experimental searches for anomalous hydrogen in normal water have put upper limits on its concentration at the level of $\sim 10^{-28} - 10^{-29}$ for the mass range 12 to 1200 GeV/$c^2$ \cite{smith1} and $\sim 6\times 10^{-15}$ for the masses between 10 and $10^5$ TeV/$c^2$ \cite{verkerk}. If superheavy isotopes containing relic $X$-particles of cosmological origin exist, they can be extracted from minerals e.g.\ by making use of mass spectrometry techniques, and their $X$-particles can then be stripped off. To estimate the required energy, we conservatively assume that it is twice the energy needed to vaporize the matter sample. As an example, it takes about 10 kJ to vaporize 1\,g of granite \cite{Woskov}; denoting the concentration of $X$-particles in granite (number of $X$ per molecule) by $c_X$, we find that the energy necessary to extract one $X$-particle is $\sim 2.3\times 10^{-18}\,{\rm J}/c_X$. Requiring that it does not exceed the energy one $X$-particle can produce before getting stuck to a $^6$Li nucleus leads to the constraint $c_X\gtrsim 2.3\times 10^{-16}$. If it is satisfied, extracting $X$-particles from granite would allow $X$CF to produce more energy than it consumes, even without reactivation and recycling of the $X$-particles. Another advantage of the extraction of relic $X$-particles from minerals compared with their production at accelerators is that it could work even for $X$-particles with mass $m_X\gg 1$\,TeV/$c^2$. In assessing the viability of $X$CF as a mechanism of energy generation, in addition to pure energy considerations one should obviously address many technical issues related to its practical implementation, such as collection and moderation of the produced $X$-particles and prevention of their binding to the surrounding nuclei (or their liberation if such binding occurs), etc. However, the corresponding technical difficulties seem to be surmountable~\cite{Goity:1993ih}. \section{\label{sec:disc} Discussion} There are several obvious ways in which our analysis of $X$CF can be generalized. Although we only considered nuclear fusion catalyzed by scalar $X$-particles, doubly charged particles of non-zero spin can do the job as well. While we studied $X$CF in deuterium, fusion processes with participation of other hydrogen isotopes can also be catalyzed by $X$-particles. We considered $X$CF taking place in $X$-atomic states. The catalyzed fusion can also proceed through in-flight reactions occurring e.g.\ in $d+(dX)$ collisions. However, because even at the highest attainable densities the average distance $\bar{r}$ between deuterons is much larger than it is in $(ddX)$ atoms, the rates of in-flight reactions are suppressed by a factor of the order of $(\bar{r}/a_d)^3\gtrsim 10^{9}$ compared with those of reactions occurring in $X$-atoms. Our results depend sensitively on the properties of positive ions $({\rm ^3He} d X)$ and $({\rm ^4He} d X)$, for which we obtained only crude estimates. More accurate calculations of these properties and of the formation times of these positive ions would be highly desirable. The existence of long-lived doubly charged particles may have important cosmological consequences. In particular, they may form exotic atoms, which have been discussed in connection with the dark matter problem \cite{Fargion:2005ep,Belotsky:2006pp,Cudell:2015xiw}. 
They may also affect primordial nucleosynthesis in an important way. In ref.~\cite{Pospelov:2006sc} it was suggested that singly negatively charged heavy metastable particles may catalyze nuclear fusion reactions at the nucleosynthesis era, possibly solving the cosmological lithium problem. The issue has been subsequently studied by many authors, see refs.~\cite{Pospelov:2010hj,Kusakabe:2017brd} for reviews. Doubly charged scalars $X$ may also catalyze nuclear fusion reactions in the early Universe and thus may have a significant impact on primordial nucleosynthesis. On the other hand, cosmology may provide important constraints on the $X$CF mechanism discussed here. Therefore, a comprehensive study of the cosmological implications of the existence of $X^{\pm\pm}$ particles would be of great interest. To conclude, we have demonstrated that long-lived or stable doubly negatively charged scalar particles $X$, if they exist, can catalyze nuclear fusion and provide a viable source of energy. Our study gives a strong additional motivation for continuing and extending the experimental searches for such particles. {\it Note added.} Recently, the ATLAS Collaboration has reported a 3.6$\sigma$ (3.3$\sigma$) local (global) excess of events with large specific ionization energy loss $|dE/dx|$ in their search for long-lived charged particles at the LHC \cite{ATLAS:2022pib}. In the complete LHC Run 2 dataset, seven events were found for which the values of $|dE/dx|$ were in tension with the time-of-flight velocity measurements, assuming that the corresponding particles were of unit charge. It has been shown in \cite{Giudice:2022bpq} that this excess could be explained as being due to relatively long-lived doubly charged particles. It would be very interesting to see whether the reported excess survives the increased statistics of the forthcoming LHC Run 3. \section{Acknowledgments} The author is grateful to Manfred Lindner, Alexei Smirnov and Andreas Trautner for useful discussions. Special thanks are due to Maxim Pospelov for numerous helpful discussions of various aspects of $X$-catalyzed fusion and constructive criticism.
\section{Introduction} \label{sec:introduction} Hierarchical clustering for graphs plays an important role in the structural analysis of a given data set. Understanding hierarchical structures at multiple levels of granularity is fundamental in various disciplines including artificial intelligence, physics, biology, sociology, etc.~\cite{brown1992class,eisen1998cluster,gorban2008principal,culotta2007author}. Hierarchical clustering yields a cluster tree that represents a recursive partitioning of a graph into smaller clusters as the tree nodes get deeper. In this cluster tree, each leaf represents a graph node, while each non-leaf node represents a cluster containing its descendant leaves. The root is the largest cluster, containing all leaves. During the last two decades, flat clustering has attracted great attention, breeding plenty of algorithms, such as $k$-means \cite{hartingan1979kmeans}, DBSCAN \cite{ester1996density}, spectral clustering \cite{alpert1995spectral}, and so on. From the combinatorial perspective, we have cost functions, e.g., modularity and centrality measures, to evaluate the quality of partition-based clustering. Therefore, community detection is usually formulated as an optimization problem over these objectives. By contrast, no comparable cost function with a clear and reasonable combinatorial explanation was developed for cluster trees until Dasgupta \cite{dasgupta2016cost} introduced one. In this definition, similarity or dissimilarity between data points is represented by weighted edges. Taking the similarity scenario as an example, a cluster is a set of nodes with relatively denser intra-links compared with its inter-links, and in a good cluster tree, heavier edges tend to connect leaves whose lowest common ancestor is as deep as possible. This intuition leads to Dasgupta's cost function, which is a weighted linear combination of the sizes of the lowest common ancestors over all edges. Motivated by Dasgupta's cost function, Cohen-Addad et al. \cite{cohen2019hierarchical} proposed the concept of an admissible cost function. In their definition, the size of each lowest common ancestor in Dasgupta's cost function is generalized to a function of the sizes of its left and right children. For all similarity graphs generated from a minimal ultrametric, a cluster tree achieves the minimum cost if and only if it is a generating tree, which is a ``natural'' ground-truth tree in an axiomatic sense therein. A necessary condition for the admissibility of an objective function is that it achieves the same value on every cluster tree for a uniformly weighted clique, which has no structure in the common sense. However, any slight deviation of edge weights would generally separate the two end-points of a light edge at a high level of the optimal (similarity-based) cluster tree. Thus, it would seem that admissible objective functions, of which Dasgupta's cost function is a specific form, ought to be an unchallenged criterion for evaluating cluster trees, since they are formulated by an axiomatic approach. Nevertheless, admissible cost functions seem imperfect in practice. The arbitrariness of the optimal cluster trees for cliques indicates that the division of each internal node of an optimal cluster tree totally neglects the \emph{balance} of its two children. Edge weight is the only factor that determines the structure of optimal trees. But a balanced tree is commonly considered a more ideal candidate in hierarchical clustering than an unbalanced one.
So even when clustering cliques, a balanced partition should be preferable for each internal node. At the least, an optimal cluster tree of height logarithmic in the graph size $n$ is intuitively more reasonable than a caterpillar-shaped cluster tree of height $n-1$. Moreover, a simple proof shows that the optimal cluster tree for any connected graph is binary. This property is not always desirable in practice, since a real system usually has its inherent number of hierarchies and a natural partition for each internal cluster. For instance, the natural levels of administrative division in a country are usually intrinsic, and it is not suitable to differentiate hierarchies for parallel cities in the same state. This structure cannot be obtained by simply minimizing admissible cost functions. In this paper, we investigate hierarchical clustering from the perspective of information theory. Our study is based on Li and Pan's structural information theory \cite{li2016structural}, whose core concept, named structural entropy, measures the complexity of hierarchical networks. We formulate a new objective function from this point of view, which builds a bridge between the combinatorial and information-theoretic perspectives on hierarchical clustering. For this cost function, the balance of cluster trees is naturally involved as a factor, just as in the design of optimal codes, where the balance of probability over objects is fundamental in constructing an efficient coding tree. We also define cluster trees with a specific height, which is consistent with our intuition of natural clustering. For practical use, we develop a novel algorithm for natural hierarchical clustering in which the number of hierarchies can be determined automatically. The idea of our algorithm is essentially different from the popular recursive division or agglomeration frameworks. We formulate two basic operations on cluster trees, called \emph{stretch} and \emph{compress}, to search for the sparsest level iteratively. Our algorithm HCSE terminates when a specific criterion that intuitively coincides with the natural hierarchies is met. Our extensive experiments on both synthetic and real datasets demonstrate that HCSE outperforms the popular heuristic algorithms LOUVAIN \cite{blondel2008fast} and HLP \cite{rossi2020fast}. The latter two algorithms proceed simply by recursively invoking flat clustering algorithms based on modularity and label propagation, respectively. For both of them, the number of hierarchies is determined solely by the number of rounds executed before termination, which has quite poor interpretability. Our experimental results on synthetic datasets show that HCSE has a great advantage in finding the intrinsic number of hierarchies, and the results on real datasets show that HCSE achieves competitive costs compared with LOUVAIN and HLP. We organize this paper as follows. Structural information theory and its relationship with combinatorial cost functions are introduced in Section \ref{sec:cost_functions}, and our algorithm is given in Section \ref{sec:algorithm}. The experiments and their results are presented in Section \ref{sec:experiments}. We conclude the paper in Section \ref{sec:conclusions}.
\section{A cost function from information-theoretic perspective} \label{sec:cost_functions} In this section, we introduce Li and Pan's structural information theory \cite{li2016structural} and the combinatorial cost functions of Dasgupta \cite{dasgupta2016cost} and Cohen-Addad et al. \cite{cohen2019hierarchical}. Then we propose a new cost function developed from structural information theory and establish the relationship between the information-theoretic and combinatorial perspectives. \subsection{Notations} \label{subsec:notations} Let $G=(V,E,w)$ be an undirected weighted graph with a set of vertices $V$, a set of edges $E$ and a weight function $w:E\rightarrow\mathbb{R}^+$, where $\mathbb{R}^+$ denotes the set of all positive real numbers. An unweighted graph can be viewed as a weighted one whose weights are unit. For each vertex $u\in V$, denote by $d_u=\sum_{(u,v)\in E} w(u,v)$ the weighted degree of $u$.\footnote{From now on, whenever we say the degree of a vertex, we always refer to the weighted degree.} For a subset of vertices $S\subseteq V$, define the volume of $S$ to be the sum of the degrees of the vertices in $S$. We denote it by $\textrm{vol}(S)=\sum_{u\in S} d_u$. A cluster tree $T$ for graph $G$ is a rooted tree with $|V|$ leaves, each of which is labeled by a distinct vertex $v\in V$. Each non-leaf node on $T$ is labeled by the subset $S$ of $V$ consisting of all the leaves that have $S$ as an ancestor. For each node $\alpha$ on $T$, denote by $\alpha^-$ the parent of $\alpha$. For each pair of leaves $u$ and $v$, denote by $u\vee v$ their least common ancestor (LCA) on $T$. \subsection{Structural information and structural entropy} \label{subsec:structural_information} The idea of structural information is to encode a random walk under a certain rule by using a high-dimensional encoding system for a graph $G$. It is well known that a random walk, for which a neighbor is randomly chosen with probability proportional to edge weights, has a stationary distribution on vertices that is proportional to vertex degrees.\footnote{For connected graphs, this stationary distribution is unique, but not for disconnected ones. Here, we consider this one for all graphs.} So to position a random walk under its stationary distribution, the amount of information needed is the Shannon entropy, denoted by $$\mathcal{H}^{(1)}(G)=-\sum_{v\in V} \frac{d_v}{\textrm{vol}(V)} \log \frac{d_v}{\textrm{vol}(V)} \footnote{In this paper, the omitted base of logarithm is always $2$.}.$$ By Shannon's noiseless coding theorem, $\mathcal{H}^{(1)}(G)$ is the limiting average code length generated from the \emph{memoryless} source given by one step of the random walk. However, the dependence between successive locations may shorten the code length. For each level of a cluster tree, the uncertainty of the location is measured by the entropy of the stationary distribution on the clusters of this level. Consider an encoding for every cluster, including the leaves. Each non-root node $\alpha$ is labeled by its order among the children of its parent $\alpha^-$. So the self-information of $\alpha$ within this local parent-children substructure is $-\log(\textrm{vol}(\alpha)/\textrm{vol}(\alpha^-))$, which is also roughly the length of the Shannon code for $\alpha$ among its siblings. The codeword of $\alpha$ consists of the sequential labels of the nodes along the unique path from the root (excluded) to itself (included). The key idea is as follows.
For one step of the random walk from $u$ to $v$ in $G$, to indicate $v$, we omit from $v$'s codeword the longest common prefix of $u$ and $v$, which is exactly the codeword of $u\vee v$. This means that the random walk takes this step within the cluster $u\vee v$ (and also within $u\vee v$'s ancestors), so the uncertainty at this level need not be involved. Therefore, intuitively, a good similarity-based cluster tree would trap the random walk with high frequency in the deep clusters that are far from the root, so that the long codewords of $u\vee v$ are omitted. This shortens the average code length of the random walk. Note that we ignore the uniqueness of decoding since a practical design of codewords is not our purpose. We utilize this scheme to evaluate and differentiate hierarchical structures. We now formalize the above scheme and measure the average code length as follows. Given a weighted graph $G=(V,E,w)$ and a cluster tree $T$ for $G$, note that under the stationary distribution, the random walk takes one step out of a cluster $\alpha$ on $T$ with probability $g_\alpha/\textrm{vol}(V)$, where $g_\alpha$ is the sum of the weights of the edges with exactly one end-point in $\alpha$. Therefore, the aforementioned uncertainty measured by the average code length is \[\mathcal{H}^T (G)=-\sum_{\alpha\in T} \frac{g_\alpha}{\textrm{vol}(V)} \log \frac{\textrm{vol}(\alpha)}{\textrm{vol}(\alpha^-)}.\footnote{For notational convenience, for the root $\lambda$ of $T$, set $\lambda^-=\lambda$. So the term for $\lambda$ in the summation is $0$.}\] We call $\mathcal{H}^T(G)$ the \emph{structural entropy of $G$ on $T$}. We define the \emph{structural entropy of $G$} to be the minimum one among all cluster trees, denoted by $\mathcal{H}(G)=\min_{T}\{\mathcal{H}^T (G)\}.$ Note that the structural entropy of $G$ on the trivial $1$-level cluster tree, which has no non-trivial clusters, is consistent with the previously defined $\mathcal{H}^{(1)}(G)$. \subsection{Combinatorial explanation of structural entropy} The cost function of a cluster tree $T$ for graph $G=(V,E)$ introduced by Dasgupta \cite{dasgupta2016cost} is defined to be $c^T(G)=\sum_{(u,v)\in E} w(u,v) |u\vee v|$, where $|u\vee v|$ denotes the size of the cluster $u\vee v$. The admissible cost function introduced by Cohen-Addad et al. \cite{cohen2019hierarchical} generalizes the term $|u\vee v|$ in the definition of $c^T(G)$ to a general function $g(|u|,|v|)$, for which Dasgupta defined $g(x,y)=x+y$. For both definitions, the optimal hierarchical clustering of $G$ corresponds to a cluster tree of minimum cost, in the combinatorial sense that heavy edges are cut as far down the tree as possible. The following theorem establishes the relationship between structural entropy and this kind of combinatorial cost function. \begin{theorem} \label{thm:equvi_cost_func} For a weighted graph $G=(V,E,w)$, to minimize $H^T(G)$ (over $T$) is equivalent to minimize the cost function \begin{equation} \label{eqn:SE_cost_form} \textrm{cost}^T(G)=\sum_{(u,v)\in E} w(u,v) \log\textrm{vol}(u\vee v).
\end{equation} \end{theorem} \begin{proof} Note that \begin{eqnarray*} H^T(G) &=& -\sum_{\alpha\in T} \frac{g_\alpha}{\textrm{vol}(V)}\log\frac{\textrm{vol}(\alpha)}{\textrm{vol}(\alpha^-)}\\ &=& -\sum_{\alpha\in T} \sum_{(u,v)\in g_\alpha} \frac{w(u,v)}{\textrm{vol}(V)}\log\frac{\textrm{vol}(\alpha)}{\textrm{vol}(\alpha^-)}\\ &=& -\sum_{(u,v)\in E} \left(\frac{w(u,v)}{\textrm{vol}(V)} \sum_{\alpha:(u,v)\in g_\alpha} \log\frac{\textrm{vol}(\alpha)}{\textrm{vol}(\alpha^-)}\right). \end{eqnarray*} For a single edge $(u,v)\in E$, all the terms $\log(\textrm{vol}(\alpha)/\textrm{vol}(\alpha^-))$ for leaf $u$ satisfying $(u,v)\in g_\alpha$ sum (over $\alpha$) up to $\log (d_u/\textrm{vol}(u\vee v))$ along the unique path from $u$ to $u\vee v$. It is symmetric for $v$. Therefore, considering ordered pairs $(u,v)\in E$, \begin{eqnarray*} H^T(G) &=& -\sum_{\text{ordered }(u,v)\in E} \frac{w(u,v)}{\textrm{vol}(V)}\log\frac{d_u}{\textrm{vol}(u\vee v)}\\ &=& \frac{1}{\textrm{vol}(V)} \left( -\sum_{u\in V} d_u \log d_u + \sum_{\text{ordered }(u,v)\in E} w(u,v) \log \textrm{vol}(u\vee v) \right)\\ &=& \frac{1}{\textrm{vol}(V)} \left( -\sum_{u\in V} d_u \log d_u + 2\cdot\sum_{(u,v)\in E} w(u,v) \log \textrm{vol}(u\vee v) \right). \end{eqnarray*} The second equality follows from the fact $\sum_{u\in V}d_u=\sum_{\text{ordered }(u,v)\in E} w(u,v)=\textrm{vol}(V)$ and the last equality from the symmetry of $(u,v)$. Since the first summation is independent of $T$, minimizing $H^T(G)$ is equivalent to minimizing $\sum_{\{u,v\}\in E} w(u,v)\log \textrm{vol}(u\vee v)$. \end{proof} Theorem \ref{thm:equvi_cost_func} indicates that when we view $g$ as a function of vertices rather than of numbers and define $g(u,v)=\log \textrm{vol}(u\vee v)$, the ``admissible'' function becomes equivalent to structural entropy in evaluating cluster trees, although it is no longer admissible. So what is the difference between these two cost functions? As stated by Cohen-Addad et al. \cite{cohen2019hierarchical}, an important axiomatic hypothesis for admissible functions, and thus also for Dasgupta's cost function, is that the cost of every binary cluster tree of an unweighted clique is identical. This means that any binary tree for clustering cliques is reasonable, which coincides with the common sense that structureless datasets can be organized hierarchically in an arbitrary way. However, for structural entropy, the following theorem indicates that balanced organization is of importance even for structureless datasets. \begin{theorem} \label{thm:SE_for_cliques} For any positive integer $n$, let $K_n$ be the clique of $n$ vertices with identical weight on every edge. Then a cluster tree $T$ of $K_n$ achieves minimum structural entropy if and only if $T$ is a balanced binary tree, that is, the two children clusters of each sub-tree of $T$ differ in size by at most $1$. \end{theorem} The proof of Theorem \ref{thm:SE_for_cliques} is a bit technical, and we defer it to Appendix \ref{sec:proof_thm_2.2}. The intuition behind Theorem \ref{thm:SE_for_cliques} is that balanced codes are the most efficient encoding scheme for unrelated data. So the codewords of the random walk that jumps freely among the clusters on each level of a cluster tree have the minimum average length if all the clusters on this level are in balance. This is guaranteed precisely by a balanced cluster tree.
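To make the definition concrete, the following minimal sketch (our own illustration, not part of the original algorithmics; encoding the cluster tree as nested lists is an assumption of the sketch) computes $\mathcal{H}^T(G)$ directly from the formula above, and can be used to verify Theorem \ref{thm:SE_for_cliques} on small cliques:
\begin{verbatim}
import math
from itertools import combinations

def structural_entropy(edges, tree):
    """H^T(G): edges maps frozenset({u, v}) -> weight; tree is given
    as nested lists, where a leaf is a vertex label."""
    deg = {}
    for e, w in edges.items():
        for x in e:                     # add w to both endpoints
            deg[x] = deg.get(x, 0.0) + w
    vol_V = sum(deg.values())

    def leaves(node):
        if not isinstance(node, list):
            return {node}
        return set().union(*(leaves(c) for c in node))

    def cut(S):  # weight of edges with exactly one endpoint in S
        return sum(w for e, w in edges.items() if len(e & S) == 1)

    def walk(node, parent_vol):
        S = leaves(node)
        vol_S = sum(deg[x] for x in S)
        h = -(cut(S) / vol_V) * math.log2(vol_S / parent_vol)
        if isinstance(node, list):
            h += sum(walk(c, vol_S) for c in node)
        return h

    # The root term vanishes, so start from the root's children.
    return sum(walk(c, vol_V) for c in tree)

# Unweighted K_4: the balanced tree beats the caterpillar.
K4 = {frozenset(e): 1.0 for e in combinations(range(4), 2)}
print(structural_entropy(K4, [[0, 1], [2, 3]]))   # ~1.667
print(structural_entropy(K4, [[[0, 1], 2], 3]))   # ~1.695
\end{verbatim}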
In the cost function (\ref{eqn:SE_cost_form}), which we call cost(SE) from now on, $\log\textrm{vol}(u\vee v)$ is a concave function of the volume of $u\vee v$. In the context of regular graphs (e.g. cliques), replacing $\textrm{vol}(u\vee v)$ by $|u\vee v|$ is equivalent for optimization. Dasgupta \cite{dasgupta2016cost} claimed that for the clique $K_4$ of four vertices, a balanced tree is preferable when replacing $|u\vee v|$ by $g(|u\vee v|)$ for any strictly increasing concave function $g$ with $g(0)=0$. However, it is interesting to note that this generalization does not hold for all such concave functions. For example, it is easy to check that for $g(x)=1-e^{-x}$, the cluster tree of $K_6$ that achieves minimum cost partitions $K_6$ into $K_2$ and $K_4$ on the first level, rather than into $K_3$ and $K_3$ (verified numerically in the sketch below). Theorem \ref{thm:SE_for_cliques} shows that for all cliques, balanced trees are preferable when $g$ is a logarithmic function. It is worth noting that the admissible function introduced by Cohen-Addad et al. \cite{cohen2019hierarchical} is defined from the viewpoint that a generating tree $T$ of a similarity-based graph $G$ that is generated from a minimal ultrametric achieves the minimum cost. In this definition, the monotonicity of edge weights between clusters on each level from bottom to top on $T$, which is given by Cohen-Addad et al. \cite{cohen2019hierarchical} as a property of a ``natural'' ground-truth hierarchical clustering, is the only factor in evaluating $T$. However, Theorem \ref{thm:SE_for_cliques} implies that for cost(SE), besides cluster weights, the balance of cluster trees is implicitly involved as another factor. Moreover, for cliques, the minimum cost should be achieved on every subtree, which makes an optimal cluster tree balanced everywhere. This optimal clustering for cliques is also robust in the sense that a slight perturbation of the minimal ultrametric, which can be considered as slight variations of the weights of a batch of edges, will not change the optimal cluster tree structure wildly, due to the restoring force of balance. \section{Our hierarchical clustering algorithm} \label{sec:algorithm} In this section, we develop an algorithm to optimize cost(SE) (Eq. (\ref{eqn:SE_cost_form}), equivalent to optimizing structural entropy) and yield the associated cluster tree. At present, all existing algorithms for hierarchical clustering can be categorized into two frameworks: top-down division and bottom-up agglomeration \cite{cohen2019hierarchical}. The top-down division approach usually yields a binary tree by recursively dividing a cluster into two parts according to a cut-related criterion. But a binary clustering tree is far from a practical one, as discussed in Section \ref{sec:introduction}. For practical use, bottom-up agglomeration, also known as hierarchical agglomerative clustering (HAC), is commonly preferred. It constructs a cluster tree from the leaves to the root recursively, during each round of which the newly generated clusters shrink into single vertices. Our algorithm jumps out of these two frameworks. We establish a new one that stratifies the \emph{sparsest} level of a cluster tree recursively rather than in a sequential order. In general, guided by cost(SE), we construct a $(k+1)$-level cluster tree from the previous $k$-level one, during which the level whose stratification makes the average local cost, measured within a locally reduced subgraph, decrease most is differentiated into two levels.
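As a numerical check of the $K_6$ example above (our own sketch; the nested-list tree encoding and the restriction to the two relevant tree shapes are assumptions of the sketch), one can evaluate the generalized Dasgupta cost with $g(x)=1-e^{-x}$:
\begin{verbatim}
import math

def gen_cost(tree, g):
    """Generalized Dasgupta cost of `tree` (nested lists) over an
    unweighted clique on its leaves: each edge (u, v) contributes
    g(number of leaves under LCA(u, v))."""
    def leaves(node):
        if not isinstance(node, list):
            return [node]
        return [x for c in node for x in leaves(c)]
    def rec(node):
        if not isinstance(node, list):
            return 0.0
        n = len(leaves(node))
        sizes = [len(leaves(c)) for c in node]
        # pairs split between different children meet their LCA here
        cross = n * (n - 1) // 2 - sum(s * (s - 1) // 2 for s in sizes)
        return cross * g(n) + sum(rec(c) for c in node)
    return rec(tree)

g = lambda x: 1.0 - math.exp(-x)
split_33 = [[0, [1, 2]], [3, [4, 5]]]    # balanced top-level split
split_24 = [[0, 1], [[2, 3], [4, 5]]]    # K_2 / K_4 top-level split
print(gen_cost(split_33, g))  # ~14.508
print(gen_cost(split_24, g))  # ~14.501 -- the unbalanced split wins
\end{verbatim}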
The process of stratification consists of two basic operations, called \emph{stretch} and \emph{compress}, respectively. In stretch steps, given an internal node of a cluster tree, a local binary subtree is constructed by an agglomerative approach, while in compress steps, the over-length paths from the root to the leaves of this binary subtree are compressed by shrinking the tree edges whose contraction increases the cost least. This framework can be combined with any cost function. We now define the operations ``stretch'' and ``compress'' formally. Given a cluster tree $T$ for graph $G=(V,E)$, let $u$ be an internal node on $T$ and $v_1,v_2,\ldots,v_\ell$ be its children. We call this local parent-children structure a \emph{$u$-triangle} of $T$, denoted by $T_u$. These two operations are defined on $u$-triangles. Note that each child $v_i$ of $u$ is a cluster in $G$. We reduce $G$ by shrinking each $v_i$ into a single vertex $v_i'$ while keeping each inter-cluster link and ignoring each internal edge of $v_i$. This reduction captures the connections of the clusters at this level within the parent cluster $u$. The stretch operation proceeds in the HAC manner on the $u$-triangle. That is, initially, view each $v_i'$ as a cluster, and recursively combine the two clusters whose combination makes cost(SE) drop most. The sequence of combinations yields a binary subtree $T_u'$ rooted at $u$ which has $v_1,v_2,\ldots,v_\ell$ as leaves. Then the compress operation is applied to reduce the height of $T_u'$ to $2$. Let $\hat{E}(T')$ be the set of edges of $T'$ each of which appears on a path of length more than $2$ from the root of $T'$ to some leaf. Denote by $\Delta(e)$ the amount by which the structural entropy increases when edge $e$ is shrunk. We pick from $\hat{E}(T_u')$ the edge $e$ with the least $\Delta(e)$. Note that since the compression of a tree edge makes the grandchildren of some internal node its children, it must amplify the cost. The compress operation picks the least amplification. The process of stretch and compress is illustrated in Figure \ref{fig:stretch_compress} and stated in Algorithms \ref{alg:stretch} and \ref{alg:compress}, respectively. \begin{figure}[ht] \centering \includegraphics[scale=0.5]{stretch_compress.jpg} \caption{Illustration of stretch and compress for a $u$-triangle. A binary tree is constructed first by stretch, and then edge $e$ is compressed.} \label{fig:stretch_compress} \end{figure} \begin{algorithm} \label{alg:stretch} \caption{Stretch} \KwIn{a $u$-triangle $T_u$} \KwOut{a binary tree rooted at $u$} Let $\{v_1,v_2,\ldots,v_\ell\}$ be the set of leaves of $T_u$\; Compute $\eta(a,b)$, the structural entropy reduced by merging siblings $a,b$ into a single cluster\; \For{$t\in [\ell-1]$}{ $(\alpha,\beta) \gets \arg\max_{(a,b) \text{ are siblings}} \{\eta(a,b)\}$\; Add a new node $\gamma$\; $\gamma.parent \gets \alpha.parent$\; $\alpha.parent \gets \gamma$\; $\beta.parent \gets \gamma$\; } return $T_u$ \end{algorithm} \begin{algorithm} \label{alg:compress} \caption{Compress} \KwIn{a binary tree $T$} \While{$T$'s height is more than $2$}{ $e \gets \arg\min_{e'\in\hat{E}(T)} \{\Delta(e')\}$\; Denote $e=(u,v)$, where $u$ is the parent of $v$\; \For{$w\in v.children$}{ $w.parent \gets u$\; } } \end{algorithm} Then we define the sparsest level of a cluster tree $T$. Let $U_j$ be the set of $j$-level nodes on $T$, that is, $U_j$ is the set of nodes each of which has distance $j$ from $T$'s root.
We now define the sparsest level of a cluster tree $T$. Let $U_j$ be the set of $j$-level nodes of $T$, that is, the set of nodes at distance $j$ from $T$'s root. Suppose that the height of $T$ is $k$; then $U_0,U_1,\ldots,U_{k-1}$ form a partition of all internal nodes of $T$. For each internal node $u$, define $\mathcal{H}(u)=-\sum_{v:v^-=u} \frac{g_v}{\textrm{vol}(V)} \log \frac{\textrm{vol}(v)}{\textrm{vol}(u)}$. Note that $\mathcal{H}(u)$ is the partial sum contributed by $u$ in $\mathcal{H}^T(G)$. After a ``stretch-and-compress'' round on the $u$-triangle, denote by $\Delta\mathcal{H}(u)$ the amount by which the new cluster tree reduces the structural entropy. Since the reconstruction of the $u$-triangle stratifies cluster $u$, $\Delta\mathcal{H}(u)$ is always non-negative. Define the sparsity of $u$ to be $\text{Spar}(u)=\frac{\Delta\mathcal{H}(u)}{\mathcal{H}(u)}$, the relative variation of structural entropy in cluster $u$. From the information-theoretic perspective, this means that the uncertainty of the random walk can be measured locally in any internal cluster, which reflects the quality of the clustering of this local area. Finally, we define the \emph{sparsest level} of $T$ to be the $j$-th level at which the average sparsity of the triangles rooted at nodes in $U_j$ is maximum, that is, $\arg\max_j \{\overline{\text{Spar}}_j(T)\}$, where $\overline{\text{Spar}}_j(T)=\sum_{u\in U_j}\text{Spar}(u)/|U_j|$. The operation of stratification then stretches and compresses on the sparsest level of $T$. This is illustrated in Figure \ref{fig:stratify}.

\begin{figure}[htbp] \begin{center} \subfigure[]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=6cm]{up.jpg} \label{fig:up} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=6cm]{below.jpg} \label{fig:below} \end{minipage}% } \caption{Illustration of stratification for a $2$-level cluster tree. The preference between (a) and (b) depends on the average sparsity of the triangles at each level.} \label{fig:stratify} \end{center} \end{figure}

For a given positive integer $k$, to construct a cluster tree of height $k$ for a graph $G$, we start from the trivial $1$-level cluster tree that has all vertices of $G$ as leaves. We then recursively stratify the sparsest level until a $k$-level cluster tree is obtained. This process is described in Algorithm \ref{alg:k-HCSE}.

\begin{algorithm} \label{alg:k-HCSE} \caption{$k$-Hierarchical clustering based on structural entropy ($k$-HCSE)} \KwIn{a graph $G = (V,E)$, $k\in\mathbb{Z}^+$} \KwOut{a $k$-level cluster tree $T$} Initialize $T$ to be the $1$-level cluster tree\; $h \gets \text{height}(T)$\; \While{$h<k$}{ $j' \gets \arg\max_{j} \{\overline{\text{Spar}}_{j}(T)\}$; \quad // Find the sparsest level of $T$ (breaking ties arbitrarily)\; \If{$\overline{\text{Spar}}_{j'}(T)=0$}{ break; \quad // No cost will be saved by any further clustering\; } \For{$u\in U_{j'}$}{ $T_u \gets$ Stretch($u$-triangle $T_u$)\; Compress($T_u$)\; } $h \gets h+1$\; \For{$j\in [j'+1,h]$}{ Update $U_j$\; } } return $T$ \end{algorithm}
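The driver loop of Algorithm \ref{alg:k-HCSE} can be summarized by the following Python sketch. Here \texttt{levels}, \texttt{spar} and \texttt{stratify} are placeholders: \texttt{levels(T)} returns the internal nodes $U_0,\ldots,U_{h-1}$ grouped by depth, \texttt{spar(u)} returns $\text{Spar}(u)$ obtained from a tentative stretch-and-compress of the $u$-triangle, and \texttt{stratify(u)} performs the actual stretch and compress.

\begin{verbatim}
# Sketch of the k-HCSE driver loop; `levels`, `spar` and `stratify`
# are placeholders described in the text.

def sparsest_level(T, levels, spar):
    avg = [sum(spar(u) for u in U) / len(U) for U in levels(T)]
    j = max(range(len(avg)), key=avg.__getitem__)   # break ties arbitrarily
    return j, avg[j]

def k_hcse(T, k, levels, spar, stratify):
    """Stratify the sparsest level until T has k levels, or stop early
    when no stratification can reduce the cost any further."""
    h = 1                        # T starts as the trivial 1-level tree
    while h < k:
        j, best = sparsest_level(T, levels, spar)
        if best == 0:            # no cost can be saved by further clustering
            break
        for u in levels(T)[j]:
            stratify(u)          # stretch, then compress, the u-triangle
        h += 1
    return T
\end{verbatim}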
To determine the height of the cluster tree automatically, we derive the natural clustering from the variation of sparsity at each level. Intuitively, a natural hierarchical cluster tree $T$ should have not only sparse boundaries between clusters, but also low sparsity for the triangles of $T$, which means that further stratification within the reduced subgraphs corresponding to the triangles on the sparsest level makes little sense. For this reason, we consider the inflection points of the sequence $\{\delta_t(\mathcal{H})\}_{t=1,2,\ldots}$, where $\delta_t(\mathcal{H})$ is the amount of structural entropy reduced by the $t$-th round of stratification. Formally, denote $\Delta_t\mathcal{H}=\delta_t(\mathcal{H})-\delta_{t-1}(\mathcal{H})$ for each $t\geq 2$. We say that $\Delta_t\mathcal{H}$ is an inflection point if both $\Delta_t\mathcal{H} \geq \Delta_{t-1}\mathcal{H}$ and $\Delta_t\mathcal{H} \geq \Delta_{t+1}\mathcal{H}$ hold. Our algorithm finds the least $t$ such that $\Delta_t\mathcal{H}$ is an inflection point and fixes the height of the cluster tree to be $t$ (note that after $t-1$ rounds of stratification, the number of levels is $t$). This process is described as Algorithm \ref{alg:HCSE}.

\begin{algorithm} \label{alg:HCSE} \caption{Hierarchical clustering based on structural entropy (HCSE)} \KwIn{a graph $G = (V,E)$} \KwOut{a cluster tree $T$} $t \gets 2$\; \While{$\Delta_t\mathcal{H} < \Delta_{t-1}\mathcal{H}$ or $\Delta_t\mathcal{H} < \Delta_{t+1}\mathcal{H}$}{ \If{$\max_{j} \{\overline{\text{Spar}}_{j}(T)\}=0$}{ break\; } $t \gets t+1$\; } return $t$-HCSE$(G)$ \end{algorithm}

\section{Experiments} \label{sec:experiments}

Our experiments are conducted both on synthetic networks generated from the Hierarchical Stochastic Block Model (HSBM) and on real datasets. We compare our algorithm HCSE with the popular practical algorithms LOUVAIN \cite{blondel2008fast} and HLP \cite{rossi2020fast}. Both algorithms construct a non-binary cluster tree within the same framework, that is, the hierarchies are formed from bottom to top one level at a time. In each round, they invoke different flat clustering algorithms, Modularity and Label Propagation, respectively. To avoid over-fitting at higher levels, which may result in under-fitting at lower levels, LOUVAIN admits a sequential input of vertices. Usually, to avert the worst-case trap, the order in which the vertices arrive is random, and so the resulting cluster tree depends on this order. HLP invokes the common LP algorithm recursively, and so it cannot be guaranteed to avoid under-fitting in each round. This can be seen in our experiments on synthetic datasets, where these two algorithms usually miss a ground-truth level. We also conduct comparative experiments on real networks. Some of them have (possibly hierarchical) ground truth, e.g., Amazon, while most do not. We evaluate the resulting cluster trees by the Jaccard index for the Amazon network, and by both cost(SE) and Dasgupta's cost function cost(Das) for the others without ground truth.

\subsection{HSBM}

The stochastic block model (SBM) is a generative model based on the idea of stochastic equivalence. An SBM for flat graphs specifies the number of clusters, the number of vertices in each cluster, the probability of generating an edge for each pair of vertices within a cluster, and the probability of generating an edge for each pair of vertices from different clusters. For HSBM, a ground truth of hierarchies is presumed. Suppose that there are $m$ clusters at the bottom level. Then the probability of generating edges is determined by a symmetric $m\times m$ matrix, in which the $(i,j)$-th entry is the probability of connecting each pair of vertices from clusters $i$ and $j$, respectively. Two clusters whose LCA is higher on the ground-truth tree have a lower connecting probability. Our experiments utilize a $4$-level HSBM. For simplicity, let $\vec{p}=(p_0,p_1,p_2,p_3)$ be the probability vector in which $p_i$ is the probability of generating an edge for a vertex pair whose LCA on the ground-truth cluster tree has depth $i$. Note that the $0$-depth node is the root.
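For illustration, the HSBM edge generation can be sketched as follows. This is a simplified sketch rather than our exact generation script; \texttt{lca\_depth} is a placeholder returning the depth of the LCA of two vertices on the presumed ground-truth tree.

\begin{verbatim}
# Minimal sketch of HSBM edge generation.  `lca_depth(i, j)` returns
# the depth of the LCA of vertices i and j on the ground-truth tree;
# p = (p_0, ..., p_3) is the probability vector defined above.

import random

def sample_hsbm_edges(n, p, lca_depth, seed=0):
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # a deeper LCA (a closer pair) yields a larger edge probability
            if rng.random() < p[lca_depth(i, j)]:
                edges.append((i, j))
    return edges
\end{verbatim}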
We compare the Normalized Mutual Information (NMI) at each level of the ground-truth cluster tree with those of the three algorithms. Since the randomness in LOUVAIN, as well as the tie-breaking rule and convergence behavior of HLP, leads to varying results, we choose the most effective strategy and report the best results over five runs for both of them. Compared to their uncertainty, our algorithm HCSE yields stable results. Table \ref{tab:HSBM_NMI} demonstrates the results for three groups of probabilities, in which the clarity of the hierarchical structure increases from one group to the next. Our algorithm HCSE is always able to find the right number of levels, while LOUVAIN always misses the top level, and HLP misses the top level in two groups. The inflection points that select the intrinsic number of levels $t=4$ are demonstrated in Figure \ref{fig:inflection_points}.

\makeatletter \newcommand\figcaption{\def\@captype{figure}\caption} \newcommand\tabcaption{\def\@captype{table}\caption} \makeatother \begin{figure}[tb] \centering \begin{minipage}{0.42\textwidth} \centering \begin{tabular}{ccccc} \toprule & $\vec{p}$ & HCSE & HLP & LOU \\ \midrule $p_2$ & 4.5E(-2) & 0.89 & 0.79 & \textbf{0.92}\\ $p_1$ & 1.5E(-3) & \textbf{0.93} & 0.75 & 0.92 \\ $p_0$ & 6E(-6) & \textbf{0.62} & 0.58 & \verb|--|\\ \midrule $p_2$ & 5.5E(-2) & 0.87 & \textbf{0.89} & 0.89\\ $p_1$ & 1.5E(-3) & \textbf{0.95} & 0.87 & 0.87 \\ $p_0$ & 4E(-6) & \textbf{0.72} & \verb|--| & \verb|--| \\ \midrule $p_2$ & 6.5E(-2) & 0.96 & 0.95 & \textbf{0.99}\\ $p_1$ & 4.5E(-3) & 0.94 & 0.81 & \textbf{0.99} \\ $p_0$ & 2.5E(-6) & \textbf{0.80} & \verb|--| & \verb|--| \\ \bottomrule \end{tabular} \tabcaption{\footnotesize NMI for the three algorithms. Each dataset has $2,500$ vertices, and the cluster numbers at the three levels are $5$, $25$ and $250$, respectively, where the sizes of the clusters are generated at random accordingly. $p_3=0.9$ for each graph. ``$--$'' means that the algorithm did not find this level.} \label{tab:HSBM_NMI} \end{minipage} \hspace{0.5in} \begin{minipage}[h]{0.42\linewidth} \centering \includegraphics[scale=0.4]{delta.png} \figcaption{$\delta_t(\mathcal{H})$ variations for HCSE. It can easily be observed that the inflection points for all three datasets appear at $t=4$, which is also the ground-truth number of hierarchies.} \label{fig:inflection_points} \end{minipage} \end{figure}

\subsection{Real datasets}

First, we conduct experiments on the Amazon network \footnote{http://snap.stanford.edu/data/}, for which the set of ground-truth clusters is given. For two sets $A,B$, the \emph{Jaccard index} is defined as $J(A,B)=|A\cap B|/|A\cup B|$. We pick the largest cluster, a subgraph with $58,283$ vertices and $133,178$ edges, and run the HCSE algorithm on it. For each ground-truth cluster $c$ that appears in this subgraph, we find in the resulting cluster tree an internal node that has the maximum Jaccard index with $c$. Then we calculate the average Jaccard index $\overline{J}$ over all such $c$. We also calculate cost(SE) and cost(Das). The results are demonstrated in Table \ref{tab:Amazon}. HCSE performs best for $\overline{J}$ and cost(SE), while LOUVAIN performs best for cost(Das). Because of the imbalance between the over-fitting and under-fitting traps, HLP outperforms neither of the other two algorithms on any criterion.
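This evaluation can be sketched in a few lines of Python; \texttt{tree\_clusters} is assumed to be the list of leaf sets of the internal nodes of the resulting cluster tree.

\begin{verbatim}
# Sketch of the Jaccard-based evaluation: match each ground-truth
# cluster to its best internal node and average the scores.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def average_best_jaccard(ground_truth, tree_clusters):
    """Both arguments are lists of vertex sets."""
    scores = [max(jaccard(c, t) for t in tree_clusters)
              for c in ground_truth]
    return sum(scores) / len(scores)
\end{verbatim}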
\begin{table}[htbp] \centering \begin{tabular}{cccc} \toprule index & \hspace{0.3em}HCSE & HLP & LOUVAIN \\ \midrule $\overline{J}$ & \textbf{0.20} & 0.16 & 0.17\\ cost(SE) & \textbf{1.85E6} & 2.05E6 & 1.89E6 \\ cost(Das) & 5.57E8 & 3.99E8 & \textbf{3.08E8} \\ \bottomrule \end{tabular} \smallskip \caption{\footnotesize Comparisons of the average Jaccard index ($\overline{J}$), the cost function based on structural entropy (cost(SE)) and Dasgupta's cost function (cost(Das)).} \label{tab:Amazon} \end{table}

Second, we conduct experiments on a series of real networks \footnote{http://networkrepository.com/index.php} without ground truth, comparing cost(SE) and cost(Das), respectively. Since the different numbers of levels found by the three algorithms seriously influence the costs, that is, lower costs may be obtained merely due to greater heights, we only list in Table \ref{tab:real_datasets} the networks for which the three algorithms yield similar numbers of levels, differing by at most $1$ or $2$. It can be observed that HLP does not achieve the optimum for any network, while HCSE performs best w.r.t. cost(Das) for all networks but does not outperform LOUVAIN w.r.t. cost(SE) for most networks. This is mainly because LOUVAIN always finds at least as many levels as HCSE, and its better cost probably benefits from the greater depth.

\begin{table} \centering \begin{tabular}{cccc} \toprule Networks & HCSE & HLP & LOUVAIN \\ \midrule CSphd & 1.30E4 / \textbf{5.19E4} / 5 & 1.54E4 / 5.58E4 / 4 & \textbf{1.28E4} / 7.61E4 / 5\\ \midrule fb-pages-government & 2.48E6 / \textbf{1.18E8} / 4 & 2.53E6 / 1.76E8 / 3 & \textbf{2.43E6} / 1.33E8 / 4\\ \midrule email-univ & 1.16E5 / \textbf{2.20E6} / 3 & 1.46E5 / 6.14E6 / 3 & \textbf{1.14E5} / 2.20E6 / 4\\ \midrule fb-messages & 1.58E5 / \textbf{4.50E6} / 4 & 1.76E5 / 8.12E6 / 3 & \textbf{1.52E5} / 4.96E6 / 4\\ \midrule G22 & \textbf{5.56E5} / \textbf{2.68E7} / 4 & 6.11E5 / 4.00E7 / 3 & 5.63E5 / 2.80E7 / 5\\ \midrule As20000102 & 2.64E5 / \textbf{2.36E7} / 4 & 3.62E5 / 7.63E7 / 3 & \textbf{2.42E5} / 2.42E7 / 5\\ \midrule bibd-13-6 & \textbf{7.41E5} / \textbf{2.56E7} / 3 & 8.05E5 / 4.41E7 / 2 & 7.50E5 / 2.75E7 / 4\\ \midrule delaunay-n10 & 4.65E4 / \textbf{3.39E5} / 4 & 4.87E4 / 3.55E5 / 4 & \textbf{4.24E4} / 4.25E5 / 5\\ \midrule p2p-Gnutella05 & 9.00E5 / \textbf{1.48E8} / 3 & 1.01E6 / 2.78E8 / 3 & \textbf{8.05E5} / 1.49E8 / 5\\ \midrule p2p-Gnutella08 & 5.59E5 / \textbf{5.51E7} / 4 & 6.36E5 / 1.28E8 / 4 & \textbf{4.88E5} / 6.03E7 / 5\\ \bottomrule \end{tabular} \smallskip \caption{``cost(SE) / cost(Das) / $k$'' for the three algorithms, where $k$ is the number of levels that the algorithm finds.} \label{tab:real_datasets} \end{table}

\section{Conclusions and future discussions} \label{sec:conclusions}

In this paper, we investigate the hierarchical clustering problem from an information-theoretic perspective and propose a new objective function that is related to the combinatorial cost functions raised by Dasgupta \cite{dasgupta2016cost} and Cohen-Addad et al. \cite{cohen2019hierarchical}. We define the optimal $k$-level cluster tree for practical use and devise a hierarchical clustering algorithm that recursively stratifies the sparsest level of the cluster tree. This is a general framework that can be combined with any cost function. We also propose an interpretable strategy to find the intrinsic number of levels without any hyper-parameter.
The experimental results on $k$-level HSBM demonstrate that our algorithm HCSE has a great advantage in finding $k$ compared with the popular but strongly heuristic algorithms LOUVAIN and HLP. Our results on real datasets show that HCSE also achieves competitive costs compared with these two algorithms. There are several directions worth further study. The first problem concerns the relationship between the concavity of $g$ in the cost function and the balance of the optimal cluster tree. We have seen that for cliques, concavity is not a sufficient condition for total balance; is it then a necessary condition? Moreover, is there any explicit necessary and sufficient condition for total balance of the optimal cluster tree for cliques? The second problem concerns approximation algorithms for both structural entropy and cost(SE). Due to the non-linear and volume-related function $g$, the proof techniques for the approximation algorithms in \cite{cohen2019hierarchical} become inapplicable. The third problem concerns more precise characterizations of ``natural'' hierarchical clusterings whose depth is limited. Since any reasonable choice of $g$ makes the cost function achieve its optimum on some binary tree, a blind pursuit of cost minimization does not seem to be a rational approach. More criteria for this scenario need to be studied. \bibliographystyle{plain}
\section{Introduction}\label{sec:intro} Real-world data are generated from convoluted interactive processes whose underlying physical principles are often unknown. Such a nature violates the common hypothesis of standard representation learning paradigms, which assume that data are IID sampled. The challenge, however, is that in the absence of prior knowledge about the ground-truth data generation, it can be practically prohibitive to build a feasible methodology for uncovering data dependencies, despite their acknowledged significance. To address this issue, prior works, e.g., \cite{pointcloud-19,LDS-icml19,jiang2019glcn,Bayesstruct-aaai19}, consider encoding the potential interactions between instance pairs, but this requires sufficient degrees of freedom that significantly increase the learning difficulty under limited labels~\citep{fatemi2021slaps} and hinder scalability to large systems~\citep{IDGL-neurips20}. Turning to a simpler problem setting where putative instance relations are instantiated as an observed graph, remarkable progress has been made in designing expressive architectures such as graph neural networks (GNNs)~\citep{scarselli2008gnnearly,GCN-vallina,GAT,SGC-icml19,gcnii-icml20,RWLS-icml21} for harnessing inter-connections between instances as a geometric prior~\citep{geometriclearning-2017}. However, the observed relations can be incomplete or noisy, due to error-prone data collection, or generated by an artificial construction independent of downstream targets. The potential inconsistency between the observations and the underlying data geometry would presumably elicit a systematic bias between the structured representations of graph-based learning and the true data dependencies. While a plausible remedy is to learn more useful structures from the data, this unfortunately brings the previously mentioned obstacles to the fore.

To resolve the dilemma, we propose a novel general-purpose encoder framework that uncovers data dependencies from observations (a dataset of partially labeled instances), proceeding via a two-fold inspiration from physics, as illustrated in Fig.~\ref{fig:model}. Our model is defined through feed-forward continuous dynamics (i.e., a PDE) involving all the instances of a dataset as locations on Riemannian manifolds with \emph{latent} structures, upon which the features of instances act as heat flowing over the underlying geometry~\citep{hamzi2021learning}. Such a diffusion model serves as an important \emph{inductive bias} for leveraging global information from other instances to obtain more informative representations. Its major advantage lies in the flexibility of the \emph{diffusivity} function, i.e., a measure of the rate at which information spreads~\citep{rosenberg1997laplacian}: we allow for feature propagation between arbitrary instance pairs at each layer, and adaptively navigate this process by pairwise connectivity weights. Moreover, to guide the instance representations towards some ideal constraints of internal consistency, we introduce a principled energy function that enforces layer-wise \emph{regularization} on the evolutionary directions. The energy function provides another view (from a macroscopic standpoint) of the desired low-energy instance representations, i.e., soliciting a steady state that gives rise to informed predictions on unlabeled data.

\begin{figure}[tb!]
\centering \hspace{5pt} \includegraphics[width=0.85\textwidth]{figure/model.pdf} \vspace{-5pt} \caption{\looseness=-1 An illustration of the general idea behind \textsc{DIFFormer}\xspace, which takes a whole dataset (or a batch) of instances as input and encodes them into hidden states through a diffusion process aimed at minimizing a regularized energy. This design allows feature propagation among arbitrary instance pairs at each layer, with optimal inter-connecting structures for informed prediction on each instance.} \label{fig:model} \vspace{-20pt} \end{figure}

As a justification for the tractability of the above general methodology, our theory reveals the underlying equivalence between finite-difference iterations of the diffusion process and unfolding the minimization dynamics of an associated regularized energy. This result further suggests a closed-form optimal solution for the diffusivity function that updates instance representations by those of all the other instances, giving a rigorous decrease of the global energy. Based on this, we also show that the energy-constrained diffusion model can serve as a principled perspective for unifying popular models like MLP, GCN and GAT, which can be viewed as special cases of our framework. On top of the theory, we propose a new class of neural encoders, Diffusion-based Transformers (\textsc{DIFFormer}\xspace), and its two practical instantiations: one is a simple version with $\mathcal O(N)$ complexity ($N$ being the number of instances) for computing all-pair interactions among instances; the other is a more expressive version that can learn complex latent structures. We empirically demonstrate the success of \textsc{DIFFormer}\xspace on a diverse set of tasks. It outperforms SOTA approaches on semi-supervised node classification benchmarks and performs competitively on large-scale graphs. It also shows promising power for image/text classification with low label rates and for predicting spatial-temporal dynamics.

\section{Related Work}\label{sec:related} \vspace{-5pt} \textbf{Graph-based Semi-supervised Learning.} Graph-based SSL~\citep{GCN-vallina} aims to learn from partially labeled data, where instances are treated as nodes and their relations are given by a graph. The observed structure can be leveraged as regularization for learning representations~\citep{belkin2006manireg,weston2012semiemb,Panetoid-icml19} or as an inductive bias of modern GNN architectures~\citep{scarselli2008gnnearly}. However, there frequently exist situations where the observed structure is unavailable or unreliable~\citep{LDS-icml19,jiang2019glcn,Bayesstruct-aaai19,IDGL-neurips20,fatemi2021slaps}, in which case the challenge remains how to uncover the underlying relations. This paper explores a new encoder architecture for discovering the data geometry to promote learning through the inter-dependence among instances (either labeled or unlabeled).

\textbf{Neural Diffusion Models.} There are several recent efforts on diffusion-based learning in which continuous dynamics serve as an inductive bias for representation learning~\citep{hamzi2021learning}. One category directly solves a continuous process of differential equations~\citep{lagaris1998artificial,chen2018neuralode}; e.g.,~\cite{grand} and its follow-ups~\citep{beltrami,GRAND++} reveal the analogy between the discretization of the diffusion process and GNNs' feed-forward rules, and devise new (continuous) models on graphs whose training requires PDE-solving tools.
In contrast, another category is PDE-inspired learning, which uses the diffusion perspective as motivation on top of which new (discrete) graph neural models are designed~\citep{atwood2016diffusion,NetLSD,klicpera2019diffusion,xu2020heat,wang2021dissecting}. Our work leans on the latter, and its key originality lies in two aspects. First, we introduce a novel diffusion model whose dynamics are implicitly defined by optimizing a regularized energy. Second, our theory establishes an equivalence between the numerical iterations of the diffusion process and unfolding the optimization of the energy, based on which we develop a new class of neural encoders for uncovering latent structures among a large number of instances.

\textbf{Instance/Node-Level vs.~Graph-Level Prediction.} To avoid potential mis-interpretations of our work, we remark upfront that our goal is to learn a single latent interaction graph among instances, which can generally be viewed as an embodiment of \textit{node-level} prediction (NP) tasks widely studied in the graph learning community. This is distinct from \textit{graph-level} prediction (GP), in which each graph generally has a single label to predict and a dataset contains many graph instances. These two problems are typically tackled separately in the literature~\citep{ogb-nips20} with disparate technical considerations. This is because input instances are inter-dependent in NP (due to the instance interactions involved in the data-generating process), while in GP tasks the instances can be treated as IID samples. Although some recent models consider all-pair feature propagation among the nodes of each graph instance~\citep{pointcloud-19,graphtransformer-2020}, such prior work largely focuses on GP with relatively small graphs. It therefore remains under-explored how to build an efficient and expressive model for learning node-pair interactions in NP tasks involving latent graphs that can be prohibitively large.

\vspace{-2pt} \section{Energy Constrained Geometric Diffusion Transformers}\label{sec:model} \vspace{-2pt} Consider a set of partially labeled instances $\{\mathbf x_i\}_{i=1}^N$, whose labeled portion is $\{(\mathbf x_j, y_j)\}_{j=1}^M$ (often $M \ll N$). In some cases there exist relational structures connecting the instances as a graph $\mathcal G = (\mathcal V, \mathcal E)$, where the node set $\mathcal V$ contains all the instances and the edge set $\mathcal E=\{e_{ij}\}$ consists of observed relations. Without loss of generality, the main body of this section does \emph{not} assume graph structures as input, but we will later discuss how to trivially incorporate them if/when available.

\vspace{-2pt} \subsection{Geometric Diffusion Model} \vspace{-2pt} The starting point of our model is a diffusion process that treats a dataset of instances as a whole and produces instance representations through information flows characterized by an anisotropic diffusion process, inspired by an analogy with heat diffusion on a Riemannian manifold~\citep{rosenberg1997laplacian}. We use a vector-valued function $\mathbf z_i(t): [0, \infty) \rightarrow \mathbb R^d$ to define the state of the instance at location $i$ and time $t$.
The anisotropic diffusion process describes the evolution of instance states (i.e., representations) via a PDE with boundary conditions~\citep{freidlin1993diffusion,medvedev2014nonlinear}: \begin{equation}\label{eqn-diffuse} \frac{\partial \mathbf Z(t)}{\partial t} = \nabla^*\left(\mathbf S(\mathbf{Z}(t), t) \odot \nabla \mathbf Z(t)\right), ~~~ \mbox{s. t.} ~~ \mathbf Z(0) = [\mathbf x_i]_{i=1}^N, ~~ t\geq 0, \end{equation} where $\mathbf Z(t) = [\mathbf z_i(t)]_{i=1}^N \in \mathbb R^{N\times d}$, $\odot$ denotes the Hadamard product, and the function $\mathbf S(\mathbf Z(t), t): \mathbb R^{N\times d} \times [0, \infty) \rightarrow [0, 1]^{N\times N}$ defines the \emph{diffusivity} coefficient controlling the diffusion strength between any pair of instances at time $t$. The diffusivity is specified to be dependent on the instances' states. The gradient operator $\nabla$ measures the difference between source and target states, i.e., $(\nabla \mathbf Z(t))_{ij} = \mathbf z_j(t) - \mathbf z_i(t)$, and the divergence operator $\nabla^*$ sums up the information flows at a point, i.e., $(\nabla^*)_i = \sum_{j=1}^N \mathbf S_{ij}(\mathbf Z(t), t) \left(\nabla \mathbf Z(t)\right)_{ij}$. Note that both operators are defined over a discrete space consisting of $N$ locations. The physical implication of Eq.~\ref{eqn-diffuse} is that the temporal change of heat at location $i$ equals the heat flux that spatially enters the point. Eq.~\ref{eqn-diffuse} can be written explicitly as \begin{equation}\label{eqn-diffuse2} \frac{\partial \mathbf z_i(t)}{\partial t} = \sum_{j=1}^N \mathbf S_{ij}(\mathbf Z(t), t) (\mathbf z_j(t) - \mathbf z_i(t)). \end{equation} Such a diffusion process serves as an inductive bias that guides the model to use other instances' information at every layer for learning informative instance representations. We can use numerical methods to solve the continuous dynamics of Eq.~\ref{eqn-diffuse2}, e.g., the explicit Euler scheme involving finite differences with step size $\tau$, which after some re-arranging gives: \begin{equation}\label{eqn-diffuse-iter} \mathbf z_i^{(k+1)} = \left (1 - \tau \sum_{j=1}^N \mathbf S_{ij}^{(k)} \right ) \mathbf z_i^{(k)} + \tau \sum_{j=1}^N \mathbf S_{ij}^{(k)} \mathbf z_j^{(k)}. \end{equation} The numerical iteration converges stably for $\tau\in(0, 1)$. We can adopt the state after a finite number $K$ of propagation steps and use it for the final predictions, i.e., $\hat y_i = \mbox{MLP}(\mathbf z_i^{(K)})$.

\looseness=-1\textbf{\emph{Remark.}} The diffusivity coefficient in Eq.~\ref{eqn-diffuse} is a measure of the rate at which heat can spread over the space~\citep{rosenberg1997laplacian}. In particular, in Eq.~\ref{eqn-diffuse2}, $\mathbf S(\mathbf Z(t), t)$ determines how information flows over instances and the evolutionary direction of the instance states. Much flexibility remains in its specification. For example, a basic choice is to fix $\mathbf S(\mathbf Z(t), t)$ as an identity matrix, which constrains the feature propagation to self-loops, and the model degrades to an MLP that treats all the instances independently. One could also specify $\mathbf S(\mathbf Z(t), t)$ as the observed graph structure, if available in some scenarios. In such a case, however, the information flows are restricted to neighboring nodes in the graph. An ideal case would be to allow $\mathbf S(\mathbf Z(t), t)$ to have non-zero values for arbitrary $(i, j)$ and to evolve with time, i.e., the instance states at each layer can efficiently and adaptively propagate to all the others.
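For concreteness, the finite-difference update of Eq.~\ref{eqn-diffuse-iter} can be sketched in a few lines of Python; the diffusivity matrix $\mathbf S$ is assumed given here (its principled specification is derived in the following subsections).

\begin{verbatim}
# Minimal NumPy sketch of the explicit Euler step above: each state keeps
# a (1 - tau * sum_j S_ij) share of itself and absorbs tau-weighted
# information from all other instances.

import numpy as np

def euler_step(Z, S, tau=0.5):
    """Z: (N, d) instance states; S: (N, N) diffusivity, entries in [0, 1]."""
    row_sum = S.sum(axis=1, keepdims=True)      # sum_j S_ij for each i
    return (1.0 - tau * row_sum) * Z + tau * (S @ Z)

# Example: N = 5 instances, d = 3 features, uniform row-stochastic diffusivity.
Z = np.random.randn(5, 3)
S = np.full((5, 5), 1.0 / 5)
Z_next = euler_step(Z, S, tau=0.5)
\end{verbatim}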
\subsection{Diffusion Constrained by a Layer-wise Energy}

As mentioned previously, the crux is how to define a proper diffusivity function that induces a desired diffusion process, one that maximizes the information utility and accords with some inherent consistency. Since we have no prior knowledge of the explicit form or the inner structure of $\mathbf S^{(k)}$, we consider the diffusivity as a time-dependent latent variable and introduce an \emph{energy function} that measures the presumed quality of the instance states at a given step $k$: \begin{equation}\label{eqn-energy} E(\mathbf Z, k; \delta) = \|\mathbf Z - \mathbf Z^{(k)}\|_{\mathcal F}^2 + \lambda\sum_{i,j} \delta(\|\mathbf z_i - \mathbf z_j\|_2^2), \end{equation} where $\delta:\mathbb R^+ \rightarrow \mathbb R$ is a function that is \emph{non-decreasing} and \emph{concave} on a particular interval of interest, and promotes robustness against large differences~\citep{RWLS-icml21} among pairs of instances. Eq.~\ref{eqn-energy} assigns each state in $\mathbb R^d$ an energy scalar that can be leveraged to regularize the updated states (lower energy is desired). The weight $\lambda$ trades off two effects: 1) for each instance $i$, states not far from the current one $\mathbf z_i^{(k)}$ have low energy; 2) across all instances, the smaller the differences between their states, the lower the energy.

\textbf{\emph{Remark.}} Eq.~\ref{eqn-energy} can essentially be seen as a robust version of the energy introduced by \cite{globallocal-2003}, inheriting the spirit of regularizing the global and local consistency of representations. ``Robust'' here particularly implies that $\delta$ adds uncertainty to each pair of instances and can \emph{implicitly} filter out the information of noisy links (potentially reflected by proximity in the latent space).

\textbf{Energy Constrained Diffusion.} The diffusion process describes the \emph{microscopic} behavior of instance states through evolution, while the energy function provides a \emph{macroscopic} view for quantifying consistency. In general, we expect the final states to yield a low energy, which suggests that the physical system has arrived at a steady point wherein the yielded instance representations have absorbed enough global information under a certain guiding principle. Thereby, we unify the two schools of thought into a new diffusive system in which instance states evolve towards producing lower energy, e.g., by finding a valid diffusivity function. Formally, we aim to find a series of $\mathbf S^{(k)}$'s whose dynamics and constraints are given by \begin{equation}\label{eqn-diffuse-appx} \begin{split} &\mathbf z_i^{(k+1)} = \left (1 - \tau \sum_{j=1}^N \mathbf S_{ij}^{(k)} \right ) \mathbf z_i^{(k)} + \tau \sum_{j=1}^N \mathbf S_{ij}^{(k)} \mathbf z_j^{(k)} \\ &\mbox{s. t.}~~\mathbf z_i^{(0)} = \mathbf x_i, \quad E(\mathbf Z^{(k+1)}, k; \delta) \leq E(\mathbf Z^{(k)}, k-1; \delta), \quad k\geq 1. \end{split} \end{equation} The formulation induces a new class of geometric flows on latent manifolds whose dynamics are \emph{implicitly} defined by optimizing a time-varying energy function (see Fig.~\ref{fig:model} for an illustration).
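To make the objective concrete, the regularized energy of Eq.~\ref{eqn-energy} can be evaluated as in the sketch below; the choice $\delta(s)=\log(1+s)$ is only an illustrative non-decreasing concave function, not the one our theory prescribes.

\begin{verbatim}
# Sketch of the regularized energy above.  delta is assumed to be a
# non-decreasing concave function; log(1 + s) is an illustrative choice.

import numpy as np

def energy(Z, Z_k, lam=1.0, delta=np.log1p):
    """Z, Z_k: (N, d) candidate and previous states."""
    fidelity = np.sum((Z - Z_k) ** 2)      # ||Z - Z^{(k)}||_F^2
    sq_dist = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return fidelity + lam * np.sum(delta(sq_dist))
\end{verbatim}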
\subsection{Tractability of Solving the Diffusion Process with Energy Minimization}

Unfortunately, Eq.~\ref{eqn-diffuse-appx} is hard to solve, since we need to infer the values of a series of coupled $\mathbf S^{(k)}$'s that must satisfy the $K$ inequalities imposed by the energy minimization constraint. The key result of this paper is the following theorem, which reveals the underlying connection between the geometric diffusion model and iterative minimization of the energy, and further suggests an explicit closed-form solution for $\mathbf S^{(k)}$, based on the current states $\mathbf Z^{(k)}$, that yields a rigorous decrease of the energy.

\begin{theorem}\label{thm-main} For any regularized energy defined by Eq.~\ref{eqn-energy} with a given $\lambda$, there exists $0<\tau<1$ such that the diffusion process of Eq.~\ref{eqn-diffuse-iter} with the diffusivity between pair $(i,j)$ at the $k$-th step given by \begin{equation}\label{eqn-optimal-diffuse} \mathbf{\hat S}_{ij}^{(k)} = \frac{\omega_{ij}^{(k)}}{\sum_{l=1}^N \omega_{il}^{(k)}}, \quad \omega_{ij}^{(k)} = \left. \frac{\partial \delta(z^2)}{\partial z^2} \right |_{z^2 = \|\mathbf z_i^{(k)} - \mathbf z_j^{(k)}\|_2^2}, \end{equation} yields a descent step on the energy, i.e., $E(\mathbf Z^{(k+1)}, k; \delta) \leq E(\mathbf Z^{(k)}, k-1; \delta)$ for any $k\geq 1$. \end{theorem}

Theorem~\ref{thm-main} suggests that the optimal diffusivity exists in the form of a function of the $l_2$ distance between states at the current step, i.e., $\|\mathbf z_i^{(k)} - \mathbf z_j^{(k)}\|_2$. The result enables us to unfold the implicit process and compute $\mathbf S^{(k)}$ in a feed-forward way from the initial states. We thus arrive at a new family of neural model architectures with the layer-wise computation specified by: {\small \vspace{-5pt} \begin{center} \fcolorbox{black}{gray!10}{\parbox{0.97\linewidth}{ \vspace{-5pt} \begin{equation}\label{eqn-model} \begin{split} & \mbox{Diffusivity Inference:} \quad \mathbf {\hat S}_{ij}^{(k)} = \frac{f(\|\mathbf z_i^{(k)} - \mathbf z_j^{(k)}\|_2^2)}{\sum_{l=1}^N f(\|\mathbf z_i^{(k)} - \mathbf z_l^{(k)}\|_2^2)}, \quad 1 \leq i, j \leq N,\\ & \mbox{State Updating:} \quad \mathbf z_i^{(k+1)} = \underbrace{\left (1 - \tau \sum_{j=1}^N \mathbf {\hat S}_{ij}^{(k)} \right ) \mathbf z_i^{(k)} }_{\mbox{state conservation}} + \underbrace{\tau \sum_{j=1}^N \mathbf {\hat S}_{ij}^{(k)} \mathbf z_j^{(k)} }_{\mbox{state propagation}}, \quad 1 \leq i \leq N. \end{split} \end{equation} \vspace{-10pt} } } \end{center} } \vspace{-5pt}

\textbf{\emph{Remark.}} The choice of the function $f$ in the above formulation is not arbitrary: it needs to be a non-negative and decreasing function of $z^2$, so that the associated $\delta$ in Eq.~\ref{eqn-energy} is guaranteed to be non-decreasing and concave w.r.t. $z^2$. Critically, though, there remains much room to properly design the specific $f$, so as to provide adequate capacity and scalability. Also, in the model presented in Eq.~\ref{eqn-model} we only have one hyper-parameter $\tau$ in practice, noting that the weight $\lambda$ in the regularized energy is implicitly determined through $\tau$ by Theorem \ref{thm-main}, which reduces the cost of hyper-parameter searching.
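Before specializing $f$, one feed-forward layer of Eq.~\ref{eqn-model} can be sketched as follows for a generic non-negative decreasing $f$. This is the quadratic-cost $\mathcal O(N^2)$ form; $f(s)=2-s/2$ is merely an illustrative default, and the layer normalization that keeps $z^2$ in the proper range is omitted.

\begin{verbatim}
# Sketch of one layer of the boxed update above with a generic
# non-negative, decreasing f; O(N^2) form, weight matrices omitted.

import numpy as np

def difformer_layer(Z, f=lambda s: 2.0 - 0.5 * s, tau=0.5):
    """Z: (N, d) states; returns the updated states."""
    sq_dist = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    W = f(sq_dist)                            # unnormalized diffusivity
    S_hat = W / W.sum(axis=1, keepdims=True)  # diffusivity inference
    # rows of S_hat sum to 1, so the update is a convex combination of
    # state conservation and state propagation
    return (1.0 - tau) * Z + tau * S_hat @ Z
\end{verbatim}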
\vspace{-2pt} \section{Instantiations of \textsc{DIFFormer}\xspace}\label{sec:inst} \vspace{-2pt} \subsection{Model Implementation} \vspace{-2pt}

We next present model instantiations based on our theory. We introduce two specific $f$'s as practical versions of our model, whose detailed implementations are given in Appendix~\ref{appx-alg} with PyTorch-style pseudo code. First, since $\|\mathbf z_i - \mathbf z_j\|_2^2 = \|\mathbf z_i\|_2^2 + \|\mathbf z_j\|_2^2 - 2\mathbf z_i^\top \mathbf z_j$, we can convert $f(\|\mathbf z_i - \mathbf z_j\|_2^2)$ into the form $g(\mathbf z_i^\top \mathbf z_j)$ by a change of variables, on the condition that $\|\mathbf z_i\|_2$ remains constant; we add layer normalization to each layer to loosely enforce this property in practice.

\textbf{Simple Diffusivity Model.} A straightforward design is to adopt the linear function $g(x) = 1+x$: \begin{equation}\label{eqn-S1} \omega_{ij}^{(k)} = f(\|\tilde{\mathbf z}_i^{(k)} - \tilde{\mathbf z}_j^{(k)}\|_2^2; \mathbf W_Q^{(k)}, \mathbf W_K^{(k)}) = 1 + (\mathbf W_Q^{(k)} \mathbf z_i^{(k)} )^\top (\mathbf W_K^{(k)} \mathbf z_j^{(k)} ), \end{equation} where $\mathbf W_Q^{(k)}$, $\mathbf W_K^{(k)}$ are learnable weight matrices of the $k$-th layer. Writing $\tilde{\mathbf z}_i^{(k)} = \mathbf W_Q^{(k)} \mathbf z_i^{(k)}$, $\tilde{\mathbf z}_j^{(k)} = \mathbf W_K^{(k)} \mathbf z_j^{(k)}$ and $z = \|\tilde{\mathbf z}_i^{(k)} - \tilde{\mathbf z}_j^{(k)}\|_2$, Eq.~\ref{eqn-S1} can be written as $f(z^2) = 2 - \frac{1}{2}z^2$, which yields a non-negative result and is decreasing on the interval $[0, 2]$ in which $z^2$ lies after layer normalization. One scalability concern with the model of Eq.~\ref{eqn-model} arises from the need to compute the pairwise diffusivity and propagation for each instance, inducing $\mathcal O(N^2)$ complexity. Remarkably, the simple diffusivity model allows a significant acceleration by noting that the state propagation can be re-arranged as \begin{equation} \sum_{j=1}^N \mathbf S_{ij}^{(k)} \mathbf z_j^{(k)} = \sum_{j=1}^N \frac{1 + (\tilde{\mathbf z}_i^{(k)})^\top \tilde{\mathbf z}_j^{(k)}}{\sum_{l=1}^N\left (1 + (\tilde{\mathbf z}_i^{(k)})^\top \tilde{\mathbf z}_l^{(k)} \right ) } \mathbf z_j^{(k)} = \frac{\sum_{j=1}^N\mathbf z_j^{(k)} + \left (\sum_{j=1}^N \tilde{\mathbf z}_j^{(k)}\cdot (\mathbf z_j^{(k)})^\top \right ) \cdot \tilde{\mathbf z}_i^{(k)} }{N + (\tilde{\mathbf z}_i^{(k)})^\top \sum_{l=1}^N \tilde{\mathbf z}_l^{(k)}}. \end{equation} Note that the two summation terms above can be computed once and shared across all instances $i$, reducing the complexity of each iteration to $\mathcal O(N)$. We call this implementation \textsc{DIFFormer}\xspace-s.

\textbf{Advanced Diffusivity Model.} The simple model facilitates efficiency and scalability, yet may sacrifice capacity for complex latent geometry. We thus propose an advanced version with $g(x) = \frac{1}{1+\exp(-x)}$: \begin{equation}\label{eqn-S2} \omega_{ij}^{(k)} = f(\|\tilde{\mathbf z}_i^{(k)} - \tilde{\mathbf z}_j^{(k)}\|_2^2; \mathbf W_Q^{(k)}, \mathbf W_K^{(k)}) = \frac{1}{1+\exp{\left(- (\mathbf W_Q^{(k)} \mathbf z_i^{(k)} )^\top (\mathbf W_K^{(k)} \mathbf z_j^{(k)} ) \right ) }}, \end{equation} which corresponds to $f(z^2) = \frac{1}{1+e^{z^2/2-1}}$, guaranteeing monotonic decrease and non-negativity. We dub this version \textsc{DIFFormer}\xspace-a. Appendix~\ref{appx-inst} further compares the two models (i.e., different $f$'s and $\delta$'s) through synthetic results. Real-world empirical comparisons are in Section~\ref{sec:exp}.
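The linear-complexity propagation of \textsc{DIFFormer}\xspace-s can be sketched in PyTorch-style code as follows; \texttt{Z\_q} and \texttt{Z\_k} stand for the transformed states $\mathbf W_Q\mathbf z_i$ and $\mathbf W_K\mathbf z_j$, and the sketch assumes layer normalization keeps all attention weights positive.

\begin{verbatim}
# Sketch of the O(N) propagation of DIFFormer-s: the two summations
# over j in the re-arranged equation above are computed once and
# shared by every instance i.

import torch

def difformer_s_propagate(Z, Z_q, Z_k):
    """Z, Z_q, Z_k: (N, d) tensors; returns sum_j S_ij z_j for all i."""
    N = Z.shape[0]
    sum_z = Z.sum(dim=0, keepdim=True)          # (1, d): sum_j z_j
    kv = Z_k.t() @ Z                            # (d, d): sum_j k_j z_j^T
    numerator = sum_z + Z_q @ kv                # (N, d)
    denominator = N + Z_q @ Z_k.sum(dim=0)      # (N,)
    return numerator / denominator.unsqueeze(-1)

def difformer_s_layer(Z, Z_q, Z_k, tau=0.5):
    # rows of S sum to 1, so the update is a convex combination
    return (1.0 - tau) * Z + tau * difformer_s_propagate(Z, Z_q, Z_k)
\end{verbatim}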
\subsection{Model Extensions and Further Discussion}

\textbf{Incorporating Layer-wise Transformations.} Eq.~\ref{eqn-model} does not use feature transformations in each layer. To further improve the representation capacity, we can add such transformations after the update, i.e., $\mathbf z_i^{(k)} \leftarrow h^{(k)}(\mathbf z_i^{(k)})$, where $h^{(k)}$ can be a fully-connected layer (see Appendix~\ref{appx-alg} for details). In this way, each iteration of the diffusion yields a descent on a particular energy $E(\mathbf Z, k;\delta, h^{(k)}) = \|\mathbf Z - h^{(k)}(\mathbf Z^{(k)})\|_2^2 + \sum_{i,j} \delta(\|\mathbf z_i - \mathbf z_j\|_2^2)$ dependent on $k$. The trainable transformation $h^{(k)}$ can be optimized w.r.t. the supervised loss to map the instance representations into a proper latent space. Our experiments find that the layer-wise transformation is not necessary for small datasets, but has positive effects on datasets of larger sizes. Furthermore, one can consider non-linear activations in the layer-wise transformation $h^{(k)}$, though we empirically found that a linear model already performs well. We also note that Theorem~\ref{thm-main} can be extended to hold even when such a non-linearity is incorporated in each layer (see Appendix~\ref{appx-non} for detailed discussions).

\textbf{Incorporating Input Graphs.} For the model presented so far, we do \emph{not} assume an input graph in the formulation. In situations where observed structures are available as input, we have $\mathcal G = (\mathcal V, \mathcal E)$ that can be leveraged as a geometric prior. We can thus modify the updating rule as: \begin{equation}\label{eqn-diffuse-graph} \mathbf z_i^{(k+1)} = \left (1 - \frac{\tau}{2} \sum_{j=1}^N \left(\mathbf {\hat S}_{ij}^{(k)} + \tilde{\mathbf A}_{ij} \right ) \right ) \mathbf z_i^{(k)} + \frac{\tau}{2} \sum_{j=1}^N \left (\mathbf {\hat S}_{ij}^{(k)} + \tilde{\mathbf A}_{ij} \right )\mathbf z_j^{(k)}, \end{equation} where $\tilde{\mathbf A}_{ij} = \frac{1}{\sqrt{d_id_j}}$ if $(i,j)\in \mathcal E$ and $0$ otherwise, and $d_i$ is the degree of instance $i$ in $\mathcal G$. The diffusion iteration of Eq.~\ref{eqn-diffuse-graph} is equivalent to minimizing a new energy that additionally incorporates a graph-based penalty~\citep{graphkernel}, i.e., $\sum_{(i,j)\in \mathcal{E}} \|\mathbf z_i^{(k)} - \mathbf z_j^{(k)}\|_2^2$ (see Appendix~\ref{appx-inst} for details).

\textbf{Scaling to Large Datasets.} Another advantage of \textsc{DIFFormer}\xspace is its flexibility for mini-batch training. For datasets whose prohibitive numbers of instances make full-batch training on a single GPU difficult, we can naturally partition the dataset into mini-batches and feed one mini-batch per feed-forward and backward computation. Similar efforts can be made for parallel acceleration.

\begin{table}[t!] \centering \caption{A unified view of MLP, GCN and GAT from our energy-driven geometric diffusion framework, regarding energy function forms, diffusivity specifications and algorithmic complexity. \label{tbl-existing}} \vspace{-5pt} \small \resizebox{0.99\textwidth}{!}{ \begin{tabular}{@{}c|c|c|c@{}} \toprule Models & Energy Function $E(\mathbf Z, k; \delta)$ & Diffusivity $\mathbf S^{(k)}$ & Complexity \\ \midrule MLP & $\|\mathbf Z - \mathbf Z^{(k)}\|_2^2$ & $\mathbf S^{(k)}_{ij} = \left\{ \begin{aligned} &1, \quad \mbox{if} \; i = j \\ &0, \quad \mbox{otherwise} \end{aligned} \right. $ & $\mathcal O(NKd^2)$ \\ \hline GCN & $\sum_{(i,j)\in \mathcal E} \|\mathbf z_i - \mathbf z_j\|_2^2$ & $\mathbf S^{(k)}_{ij} = \left\{ \begin{aligned} &\frac{1}{\sqrt{d_id_j}}, \quad \mbox{if} \; (i,j) \in \mathcal E \\ &0, \quad \mbox{otherwise} \end{aligned} \right. $ & $\mathcal O(|\mathcal E|Kd^2)$ \\ \hline GAT & $\sum_{(i,j)\in \mathcal E} \delta(\|\mathbf z_i - \mathbf z_j\|_2^2)$ & $\mathbf S^{(k)}_{ij} = \left\{ \begin{aligned} &\frac{f(\|\mathbf z_i^{(k)} - \mathbf z_j^{(k)}\|_2^2)}{\sum_{l: (i,l)\in \mathcal E} f(\|\mathbf z_i^{(k)} - \mathbf z_l^{(k)}\|_2^2)}, \quad \mbox{if} \; (i,j) \in \mathcal E \\ &0, \quad \mbox{otherwise} \end{aligned} \right. $ & $\mathcal O(|\mathcal E|Kd^2)$ \\ \hline \textsc{DIFFormer}\xspace & $\|\mathbf Z - \mathbf Z^{(k)}\|_2^2 + \lambda \sum_{i,j} \delta(\|\mathbf z_i - \mathbf z_j\|_2^2)$ & $\mathbf S^{(k)}_{ij} = \frac{f(\|\mathbf z_i^{(k)} - \mathbf z_j^{(k)}\|_2^2)}{\sum_{l=1}^N f(\|\mathbf z_i^{(k)} - \mathbf z_l^{(k)}\|_2^2)}, \quad 1\leq i,j \leq N$ & $\begin{aligned} &\mbox{\textsc{DIFFormer}\xspace-s}:\quad \mathcal O(NKd^2) \\ &\mbox{\textsc{DIFFormer}\xspace-a}:\quad \mathcal O(N^2Kd^2) \end{aligned}$ \\ \bottomrule \end{tabular}} \vspace{-15pt} \end{table}

\textbf{Connection with Existing Models.} The proposed methodology in fact serves as a general diffusion framework that provides a unifying view in which some existing models can be seen as special cases of ours. As a non-exhaustive summary and high-level comparison, Table~\ref{tbl-existing} presents the relationships with MLP, GCN and GAT, which can be expressed via simplified versions of our principled energy and diffusion models (see more elaboration in Appendix~\ref{appx-connection}).

\section{Experiments}\label{sec:exp}

We apply \textsc{DIFFormer}\xspace to various tasks for evaluation: 1) graph-based node classification, where an input graph is given as an observation; 2) image and text classification without input graphs; and 3) dynamics prediction, which requires handling new unseen instances from test data, i.e., inductive learning. In each case, we compare with a different (partially overlapping) set of competing models that are closely related to \textsc{DIFFormer}\xspace and specifically designed for the particular task. Unless otherwise stated, for datasets where input graphs are available, we incorporate them for feature propagation as defined by Eq.~\ref{eqn-diffuse-graph}. Due to space limits, we defer the details of the datasets to Appendix~\ref{appx-dataset} and the implementations to Appendix~\ref{appx-implementation}. We also provide additional supporting empirical results in Appendix~\ref{appx-results}.

\begin{table}[t] \centering
\caption{Mean and standard deviation of testing accuracy on node classification (with five different random initializations). The models are split into groups, with a comparison of non-linearity (whether the model requires activations in layer-wise transformations), PDE-solver (whether the model requires a PDE solver for back-propagation) and rewiring (whether the model modifies the structure).\label{tbl-bench}} \vspace{-5pt} \small \resizebox{\textwidth}{!}{ \begin{tabular}{@{}c|c|cccccc@{}} \toprule \textbf{Type} & \textbf{Model} & \textbf{Non-linearity} & \textbf{PDE-solver} & \textbf{Rewiring} & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\ \midrule \multirow{3}{*}{Basic models} & MLP & \cmark & - & - & 56.1 $\pm$ 1.6 & 56.7 $\pm$ 1.7 & 69.8 $\pm$ 1.5 \\ & LP & - & - & - & 68.2 & 42.8 & 65.8 \\ & ManiReg & \cmark & - & - & 60.4 $\pm$ 0.8 & 67.2 $\pm$ 1.6 & 71.3 $\pm$ 1.4 \\ \midrule \multirow{8}{*}{Standard GNNs} & GCN & \cmark & - & - & 81.5 $\pm$ 1.3 & 71.9 $\pm$ 1.9 & 77.8 $\pm$ 2.9 \\ & GAT & \cmark & - & - & 83.0 $\pm$ 0.7 & 72.5 $\pm$ 0.7 & 79.0 $\pm$ 0.3 \\ & SGC & - & - & - & 81.0 $\pm$ 0.0 & 71.9 $\pm$ 0.1 & 78.9 $\pm$ 0.0 \\ & GCN-$k$NN & \cmark & - & \cmark & 72.2 $\pm$ 1.8 & 56.8 $\pm$ 3.2 & 74.5 $\pm$ 3.2 \\ & GAT-$k$NN & \cmark & - & \cmark & 73.8 $\pm$ 1.7 & 56.4 $\pm$ 3.8 & 75.4 $\pm$ 1.3 \\ & Dense GAT & \cmark & - & \cmark & 78.5 $\pm$ 2.5 & 66.4 $\pm$ 1.5 & 66.4 $\pm$ 1.5 \\ & LDS & \cmark & - & \cmark & \color{brown}\textbf{83.9 $\pm$ 0.6} & \color{brown}\textbf{74.8 $\pm$ 0.3} & out-of-memory \\ & GLCN & \cmark & - & \cmark & 83.1 $\pm$ 0.5 & 72.5 $\pm$ 0.9 & 78.4 $\pm$ 1.5 \\ \midrule \multirow{6}{*}{Diffusion-based models} & GRAND-l & - & \cmark & - & 83.6 $\pm$ 1.0 & 73.4 $\pm$ 0.5 & 78.8 $\pm$ 1.7 \\ & GRAND & \cmark & \cmark & \cmark & 83.3 $\pm$ 1.3 & 74.1 $\pm$ 1.7 & 78.1 $\pm$ 2.1 \\ & GRAND++ & \cmark & \cmark & \cmark & 82.2 $\pm$ 1.1 & 73.3 $\pm$ 0.9 & 78.1 $\pm$ 0.9 \\ & GDC & \cmark & - & - & 83.6 $\pm$ 0.2 & 73.4 $\pm$ 0.3 & 78.7 $\pm$ 0.4 \\ & GraphHeat & \cmark & - & - & 83.7 & 72.5 & \color{brown}\textbf{80.5} \\ & DGC-Euler & - & - & - & 83.3 $\pm$ 0.0 & 73.3 $\pm$ 0.1 & 80.3 $\pm$ 0.1 \\ \midrule \multirow{2}{*}{Ours} & {\textsc{DIFFormer}\xspace}-s & - & - & \cmark & \color{purple}\textbf{84.7 $\pm$ 0.1} & 73.5 $\pm$ 0.3 & \color{purple}\textbf{81.8 $\pm$ 0.3} \\ & {\textsc{DIFFormer}\xspace}-a & - & - & \cmark & 83.7 $\pm$ 0.6 & \color{purple}\textbf{75.7 $\pm$ 0.3} & \color{brown}\textbf{80.5 $\pm$ 1.2} \\ \bottomrule \end{tabular}} \vspace{-5pt} \end{table}

\subsection{Semi-supervised Node Classification Benchmarks}

We test \textsc{DIFFormer}\xspace on three citation networks: \texttt{Cora}, \texttt{Citeseer} and \texttt{Pubmed}. Table~\ref{tbl-bench} reports the testing accuracy. We compare with several sets of baselines related to our model in different respects. 1) Basic models: \emph{MLP} and two classical graph-based SSL models, Label Propagation (\emph{LP})~\citep{lp-icml2003} and \emph{ManiReg}~\citep{belkin2006manireg}. 2) GNN models: \emph{SGC}~\citep{SGC-icml19}, \emph{GCN}~\citep{GCN-vallina}, \emph{GAT}~\citep{GAT}, their variants \emph{GCN-$k$NN} and \emph{GAT-$k$NN} (operating on $k$NN graphs constructed from input features), \emph{Dense GAT} (with a densely connected graph replacing the input one), and two strong structure learning models, LDS~\citep{LDS-icml19} and GLCN~\citep{glconv-cvpr19}. 3) PDE graph models: the SOTA models \emph{GRAND}~\citep{grand} (with its linear variant GRAND-l) and \emph{GRAND++}~\citep{GRAND++}.
4) Diffusion-inspired GNN models: \emph{GDC}~\citep{klicpera2019diffusion}, \emph{GraphHeat}~\citep{xu2020heat} and the recent \emph{DGC-Euler}~\citep{wang2021dissecting}.

Table~\ref{tbl-bench} shows that \textsc{DIFFormer}\xspace achieves the best results on all three datasets, with significant improvements. We also notice that the simple diffusivity model \textsc{DIFFormer}\xspace-s significantly exceeds the counterparts without non-linearity (SGC, GRAND-l and DGC-Euler) and even ranks first on \texttt{Cora} and \texttt{Pubmed}. These results suggest that \textsc{DIFFormer}\xspace can serve as a very competitive encoder backbone for node-level prediction that learns inter-instance interactions for generating informative representations and boosting downstream performance.

\subsection{Large-Scale Node Classification Graphs}

We also consider two large-scale graph datasets: \texttt{ogbn-Proteins}, a multi-task protein-protein interaction network, and \texttt{Pokec}, a social network. Table~\ref{tbl-largebench} presents the results. Due to the dataset sizes (0.13M/1.63M nodes for the two graphs) and the scalability issues that many of the competitors in Table~\ref{tbl-bench}, as well as \textsc{DIFFormer}\xspace-a, would potentially experience, we only compare \textsc{DIFFormer}\xspace-s with standard GNNs. In particular, we found that even GCN/GAT/\textsc{DIFFormer}\xspace-s are hard to train full-graph on a single V100 GPU with 16GB memory. We thus adopt mini-batch training with batch sizes 10K/100K for \texttt{Proteins}/\texttt{Pokec}. We found that \textsc{DIFFormer}\xspace outperforms the common GNNs by a large margin, which suggests its desired efficacy on large datasets. As mentioned previously, we prioritize the efficacy of \textsc{DIFFormer}\xspace as a general encoder backbone for solving node-level prediction tasks on large graphs. While there are quite a few practical tricks shown to be effective for training GNNs for this purpose, e.g., hop-wise attention~\citep{adadiffusion-2022} or various label re-use strategies, these efforts are largely orthogonal to our contribution here and can be applied to almost any model to further boost performance. For further investigation, we supplement more results using different mini-batch sizes for training and study their impact on testing performance in Appendix~\ref{appx-batch}. Furthermore, we compare the training time and memory costs in Appendix~\ref{appx-time}, where we found that \textsc{DIFFormer}\xspace-s is about 6 times faster than GAT and 39 times faster than DenseGAT on \texttt{Pokec}, which suggests the superior scalability and efficiency of \textsc{DIFFormer}\xspace-s on large graphs.

\subsection{Image and Text Classification with Low Label Rates}

We next conduct experiments on the \texttt{CIFAR-10}, \texttt{STL-10} and \texttt{20News-Group} datasets to test \textsc{DIFFormer}\xspace on standard classification tasks with limited label rates. For \texttt{20News}, provided by \cite{pedregosa2011scikit}, we take 10 topics and use words with TF-IDF greater than 5 as features. For \texttt{CIFAR} and \texttt{STL}, two public image datasets, we first use the self-supervised approach SimCLR~\citep{chen2020simple} (which does not use labels for training) to train a ResNet-18 for extracting the feature maps used as input features of instances. These datasets contain no graph structure, so we use $k$NN to construct a graph over the input features for the GNN competitors and do \emph{not} use input graphs for \textsc{DIFFormer}\xspace.
Table~\ref{tab:image classification} reports the testing accuracy of \textsc{DIFFormer}\xspace and of competitors including MLP, ManiReg, GCN-$k$NN, GAT-$k$NN, DenseGAT and GLCN. The two \textsc{DIFFormer}\xspace models perform much better than MLP in nearly all cases, suggesting the effectiveness of learning the inter-dependence among instances. Besides, \textsc{DIFFormer}\xspace yields large improvements over GCN and GAT, which are in some sense limited by the handcrafted graphs that lead to sub-optimal propagation. Moreover, \textsc{DIFFormer}\xspace significantly outperforms GLCN, a strong baseline that learns new (static) graph structures, which demonstrates the superiority of our evolving diffusivity that can adapt to different layers.

\begin{figure}[t] \begin{minipage}{0.33\linewidth} \centering \captionof{table}{Testing ROC-AUC for \texttt{Proteins} and accuracy for \texttt{Pokec} on large-scale node classification datasets. $*$ denotes mini-batch training on a Tesla V100 with 16GB memory. \label{tbl-largebench}} \vspace{-6pt} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{@{}c|cc@{}} \toprule \textbf{Models} & \textbf{Proteins} & \textbf{Pokec} \\ \midrule MLP & 72.41 $\pm$ 0.10 & 60.15 $\pm$ 0.03 \\ LP & 74.73 & 52.73\\ SGC & 49.03 $\pm$ 0.93 & 52.03 $\pm$ 0.84 \\ GCN & 74.22 $\pm$ 0.49$^*$ & 62.31 $\pm$ 1.13$^*$ \\ GAT & \color{brown}\textbf{75.11 $\pm$ 1.45}$^*$ & \color{brown}\textbf{65.57 $\pm$ 0.34}$^*$ \\ \midrule {\textsc{DIFFormer}\xspace}-s & \color{purple}\textbf{79.49 $\pm$ 0.44}$^*$ & \color{purple}\textbf{69.24 $\pm$ 0.76}$^*$ \\ \bottomrule \end{tabular}} \end{minipage} \hspace{5pt} \begin{minipage}{0.66\linewidth} \subfigure[]{ \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figure/cifar-ablation.pdf} \label{fig-abl} \vspace{-10pt} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figure/cora_dis_depth.pdf} \label{fig-hyper} \vspace{-10pt} \end{minipage} } \vspace{-10pt} \caption{(a) Ablation studies w.r.t. different diffusivity function forms on \texttt{CIFAR}. (b) Impact of $K$ and $\tau$ on \texttt{Cora}.} \end{minipage} \vspace{-10pt} \end{figure}

\begin{table}[tb!]
\centering \small \caption{Testing accuracy for image (\texttt{CIFAR} and \texttt{STL}) and text (\texttt{20News}) classification.} \vspace{-5pt} \label{tab:image classification} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{c|c|ccccccc|cc} \toprule \multicolumn{2}{c|}{\textbf{Dataset}} & MLP & LP & ManiReg & GCN-$k$NN & GAT-$k$NN & DenseGAT & GLCN & \textsc{DIFFormer}\xspace-s & \textsc{DIFFormer}\xspace-a \\ \midrule \multirow{3}{*}{\textbf{CIFAR}} & 100 labels &65.9 ± 1.3&66.2&67.0 ± 1.9&66.7 ± 1.5&66.0 ± 2.1 & out-of-memory &66.6 ± 1.4&\color{brown}\textbf{69.1 ± 1.1}&\color{purple}\textbf{69.3 ± 1.4} \\ & 500 labels &73.2 ± 0.4&70.6&72.6 ± 1.2&72.9 ± 0.4&72.4 ± 0.5 & out-of-memory &72.8 ± 0.5&\color{purple}\textbf{74.8 ± 0.5}&\color{brown}\textbf{74.0 ± 0.6} \\ & 1000 labels &75.4 ± 0.6&71.9&74.3 ± 0.4&74.7 ± 0.5&74.1 ± 0.5 & out-of-memory &74.7 ± 0.3&\color{purple}\textbf{76.6 ± 0.3}&\color{brown}\textbf{75.9 ± 0.3} \\ \midrule \multirow{3}{*}{\textbf{STL}} & 100 labels &66.2 ± 1.4&65.2&66.5 ± 1.9&\color{brown}\textbf{66.9 ± 0.5}&66.5 ± 0.8 & out-of-memory &66.4 ± 0.8&\color{purple}\textbf{67.8 ± 1.1}&66.8 ± 1.1 \\ & 500 labels &\color{brown}\textbf{73.0 ± 0.8}&71.8&72.5 ± 0.5&72.1 ± 0.8&72.0 ± 0.8 & out-of-memory &72.4 ± 1.3&\color{purple}\textbf{73.7 ± 0.6}&72.9 ± 0.7 \\ & 1000 labels &75.0 ± 0.8&72.7&74.2 ± 0.5&73.7 ± 0.4&73.9 ± 0.6 & out-of-memory &74.3 ± 0.7&\color{purple}\textbf{76.4 ± 0.5}&\color{brown}\textbf{75.3 ± 0.6} \\ \midrule \multirow{3}{*}{\textbf{20News}} & 1000 labels &54.1 ± 0.9&55.9&56.3 ± 1.2&56.1 ± 0.6&55.2 ± 0.8 & 54.6 ± 0.2 &56.2 ± 0.8&\color{brown}\textbf{57.7 ± 0.3}&\color{purple}\textbf{57.9 ± 0.7} \\ & 2000 labels &57.8 ± 0.9&57.6&60.0 ± 0.8&60.6 ± 1.3&59.1 ± 2.2 & 59.3 ± 1.4 &60.2 ± 0.7&\color{brown}\textbf{61.2 ± 0.6}&\color{purple}\textbf{61.3 ± 1.0} \\ & 4000 labels &62.4 ± 0.6&59.5&63.6 ± 0.7&64.3 ± 1.0&62.9 ± 0.7 & 62.4 ± 1.0 &64.1 ± 0.8&\color{purple}\textbf{65.9 ± 0.8}&\color{brown}\textbf{64.8 ± 1.0} \\ \bottomrule \end{tabular} } \vspace{-15pt} \end{table}

\subsection{Spatial-Temporal Dynamics Prediction}

We consider three spatial-temporal datasets, with details in Appendix~\ref{appx-dataset}. Each dataset consists of a series of graph snapshots, where nodes are treated as instances and each node has an integer label (e.g., reported cases for \texttt{Chickenpox} or \texttt{Covid}). The task is to predict the labels of one snapshot based on the previous ones. Table~\ref{tab:spatial_temporal_results} compares the testing MSE of four \textsc{DIFFormer}\xspace variants (where \textsc{DIFFormer}\xspace-s w/o g denotes \textsc{DIFFormer}\xspace-s without using input graphs) against the baselines. We can see that the two \textsc{DIFFormer}\xspace variants without input graphs even outperform their counterparts using input structures in four out of six cases. This implies that our diffusivity estimation module can learn useful structures for informed prediction, and that the input structure does not always contribute positively. In fact, for temporal dynamics, the underlying relations that truly influence the trajectory evolution can be much more complex, and the observed relations can be unreliable, with missing or noisy links, in which case GNN models relying on input graphs may perform undesirably. Compared with the competitors, our models rank first with significant improvements.

\begin{table}[t!]
\centering \caption{Mean and standard deviation of MSE on spatial-temporal prediction datasets.} \vspace{-5pt} \label{tab:spatial_temporal_results} \small \resizebox{1.0\textwidth}{!}{ \begin{tabular}{c|cccccc|cccc} \toprule \textbf{Dataset}& MLP&GCN& GAT&DenseGAT&GAT-$k$NN&GCN-$k$NN&{\textsc{DIFFormer}\xspace}-s& {\textsc{DIFFormer}\xspace}-a& {\textsc{DIFFormer}\xspace}-s w/o g& {\textsc{DIFFormer}\xspace}-a w/o g \\ \midrule \multirow{2}{*}{\textbf{Chickenpox}}& 0.924&\color{brown}\textbf{0.923}&0.924&0.935&0.926&0.936&0.926&0.926&\color{purple}\textbf{0.920}&\color{purple}\textbf{0.920}\\& ($\pm$0.001)&\color{brown}\textbf{($\pm$0.001)}&($\pm$0.002)&($\pm$0.005)&($\pm$0.004)&($\pm$0.004)&($\pm$0.002)&($\pm$0.008)&\color{purple}\textbf{($\pm$0.001)}&\color{purple}\textbf{($\pm$0.002)}\\\midrule \multirow{2}{*}{\textbf{Covid}}& 0.956&1.080&1.052&1.524&0.861&1.475&\color{brown}\textbf{0.792}&\color{brown}\textbf{0.792}&\color{purple}\textbf{0.791}&0.935\\ & ($\pm$0.198)&($\pm$0.162)&($\pm$0.336)&($\pm$0.319)&($\pm$0.123)&($\pm$0.560)&\color{brown}\textbf{($\pm$0.086)}&\color{brown}\textbf{($\pm$0.076)}&\color{purple}\textbf{($\pm$0.090)}&($\pm$0.143)\\\midrule \multirow{2}{*}{\textbf{WikiMath}}& 1.073&1.292&1.339&0.826&0.882&1.023&0.922&\color{brown}\textbf{0.738}&0.993&\color{purple}\textbf{0.720}\\& ($\pm$0.042)&($\pm$0.125)&($\pm$0.073)&($\pm$0.070)&($\pm$0.015)&($\pm$0.058)&($\pm$0.015)&\color{brown}\textbf{($\pm$0.031)}&($\pm$0.042)&\color{purple}\textbf{($\pm$0.036)}\\ \bottomrule \end{tabular}} \vspace{-15pt} \end{table} \subsection{Further Results and Discussions} \textbf{How do different diffusivity functions perform?} Figure~\ref{fig-abl} compares \textsc{DIFFormer}\xspace with four variants using other diffusivity functions that have no essential connection with energy minimization: 1) \emph{Identity} sets $\mathbf S^{(k)}$ as a fixed identity matrix; 2) \emph{Constant} fixes $\mathbf S^{(k)}$ as an all-one constant matrix; 3) \emph{Full Attn} parameterizes $\mathbf S^{(k)}$ by attention networks~\citep{transformer}; 4) \emph{Kernel} adopts a Gaussian kernel for computing $\mathbf S^{(k)}$. More results on other datasets are in Appendix~\ref{appx-abl}, and they consistently show that our adopted diffusivity forms produce superior performance, which verifies the effectiveness of our diffusivity designs derived from the minimization of a principled energy. \textbf{How do model depth and step size impact the performance?} We discuss the influence of model depth $K$ and step size $\tau$ on \texttt{Cora} in Fig.~\ref{fig-hyper}. More results on \texttt{Citeseer} and \texttt{Pubmed} are generally consistent with Fig.~\ref{fig-hyper} and are deferred to Appendix~\ref{appx-hyper}. The curves indicate that GCN and GAT exhibit significant performance degradation with deeper layers, while \textsc{DIFFormer}\xspace maintains its superiority and performs stably with large $K$. Furthermore, when $K$ is not large enough (less than 32), there is a clear performance improvement of \textsc{DIFFormer}\xspace as $K$ increases, and larger $\tau$ contributes to a steeper increase. When $K$ increases further (beyond 32), the model performance still goes up with small $\tau$ (0.1 and 0.25) yet exhibits a slight drop with large $\tau$ (0.5 and 0.9).
A possible reason is that a larger (smaller) $\tau$ places more (less) weight on global information from other instances in each iteration, which yields greater (smaller) benefits as the number of propagation layers increases, yet can cause instability when the step size is too large. \textbf{What is learned by the representation?} We next shed some light on how our diffusion models learn effective representations that facilitate downstream prediction by visualizing node representations and layer-wise diffusivity strengths in Appendix~\ref{appx-vis}. We found that the diffusivity strength estimates tend to increase the connectivity among nodes with different classes, so that the updated node embeddings can absorb information from different communities for informative prediction. On the other hand, the produced representations exhibit smaller intra-class distances and larger inter-class distances, making them easier to distinguish for classification. Comparing \textsc{DIFFormer}\xspace-s and \textsc{DIFFormer}\xspace-a on the temporal datasets, we found that \textsc{DIFFormer}\xspace-s produces more concentrated large weights, while \textsc{DIFFormer}\xspace-a tends to spread large diffusivity values more broadly and learns more complex structures. See Appendix~\ref{appx-vis} for more discussion. \section{Conclusions} \looseness=-1 This paper proposes an energy-driven geometric diffusion model with a latent diffusivity function for data representations. The model jointly encodes all instances into evolving states that minimize a principled energy, which acts as implicit regularization. We further design two practical implementations with sufficient scalability and capacity for learning complex interactions over the underlying data geometry. Extensive experiments demonstrate the effectiveness and superiority of the model. \bibliographystyle{iclr2023_conference}
\section{Introduction} In settings where comparable studies are available, it is critical to simultaneously consider and systematically integrate information across multiple studies when training prediction models. Multi-study prediction is motivated by applications in biomedical research, where exponential advances in technology and the facilitation of systematic data-sharing have increased access to multiple studies (\cite{kannan2016public, manzoni2018genome}). When training and test studies come from different distributions, prediction models trained on a single study generally perform worse on out-of-study samples due to heterogeneity in study design, data collection methods, and sample characteristics (\cite{castaldi2011empirical, bernau2014cross, trippa2015bayesian}). Training prediction models on multiple studies can address these challenges and improve the cross-study replicability of predictions. Recent work in multi-study prediction investigated two approaches for training cross-study replicable models: 1) merging all studies and training a single model, and 2) multi-study ensembling, which involves training a separate model on each study and combining the resulting predictions. When studies are relatively homogeneous, \cite{patil2018training} showed that merging can lead to improved replicability over ensembling due to the increase in sample size; as between-study heterogeneity increases, multi-study ensembling demonstrated preferable performance. While the trade-off between these approaches has been explored in detail for random forests (\cite{ramchandran2020tree}) and linear regression (\cite{guan2019merging}), no prior work has examined it for boosting, one of the most successful and popular supervised learning algorithms. Boosting combines a powerful machine learning approach with classical statistical modeling. Its flexible choice of base learners and loss functions makes it highly customizable to many data-driven tasks, including binary classification (\cite{freund1997decision}), regression (\cite{friedman2001greedy}) and survival analysis (\cite{wang2010buckley}). To the best of our knowledge, this work is the first to study boosting algorithms in a setting with multiple and potentially heterogeneous training and test studies. Existing findings on boosting are largely rooted in theory based on a single training study, and extensions of the algorithm to a multi-study setting often assume that a subset of the training data shares the same distribution as the test study. \cite{buehlmann2006boosting} and \cite{tutz2007boosting} studied boosting with linear base learners and characterized an exponential bias-variance trade-off under the assumption that the training and test studies have the same predictor distribution. \cite{habrard2013boosting} proposed a boosting algorithm for domain adaptation with a single training study. \cite{dai2007boosting} proposed a transfer learning framework for boosting that uses a small amount of labeled data from the test study in addition to the training data to make classifications on the test study. This approach was extended to handle data from multiple training studies (\cite{yao2010boosting, bellot2019boosting}) and modified for regression (\cite{pardoe2010boosting}) and survival analysis (\cite{bellot2019boosting}). In this paper, we study boosting algorithms in a regression setting and compare the cross-study replicability of merging versus multi-study ensembling.
We assume a flexible mixed effects model with potential heterogeneity in predictor-outcome relationships across studies and provide theoretical guidelines to determine whether merging is more beneficial than ensembling for a given collection of training datasets. In particular, we characterize an analytical transition point beyond which ensembling exhibits lower mean squared prediction error than merging for boosting with linear learners. Conditional on the selection path, we characterize a bias-variance decomposition for the estimation error of boosting with component-wise linear learners. We verify the theoretical transition point results via simulations, and illustrate how they may guide practitioners' choice regarding merging vs. ensembling in a breast cancer application. \section{Methods} \subsection{Multi-study Setup} We consider $K$ training studies and $V$ test studies that measure the same outcome and the same $p$ predictors. Each study has size $n_k$, with a combined size of $N = \sum_{k=1}^K n_k$ for the training studies and $N^{\text{Test}}=\sum_{k=K+1}^{K+V} n_k$ for the test studies. Let $Y_k \in \mathbb{R}^{n_k}$ and $X_k \in \mathbb{R}^{n_k \times p}$ denote the outcome vector and predictor matrix for study $k$, respectively. The linear mixed effects data-generating model is of the form \begin{equation} \label{eqn:lme} Y_k = X_k\beta + Z_k \gamma_k + \epsilon_k, \quad k = 1, \ldots, K + V, \end{equation} where $\beta \in \mathbb{R}^p$ are the fixed effects and $\gamma_k \in \mathbb{R}^q$ the random effects with $E\left[\gamma_k\right] = 0$ and $Cov(\gamma_k) = \text{diag}(\sigma^2_1, \ldots, \sigma^2_q) \eqqcolon G$. If $\sigma^2_j > 0,$ then the effect of the $j$th predictor varies across studies; if $\sigma^2_j = 0$, then the predictor has the same effect in each study. The matrix $Z_k \in \mathbb{R}^{n_k \times q}$ is the subset of $X_k$ that corresponds to the random effects, and $\epsilon_k$ are the residual errors with $E[\epsilon_k] = 0, Cov(\epsilon_k) = \sigma^2_{\epsilon} I$, and $Cov(\gamma_k, \epsilon_k) = 0.$ We consider an extension of (\ref{eqn:lme}) and assume the study data are generated under the mixed effects model \begin{equation} \label{eqn:gme} Y_k = f(X_k) + Z_k \gamma_k + \epsilon_k, \quad k = 1, \ldots, K+V, \end{equation} where $f(\cdot)$ is a real-valued function. Compared to (\ref{eqn:lme}), the model in (\ref{eqn:gme}) provides more flexibility in fitting the mean function $E(Y_k)$. For any study $k$, we assume $Y_k$ is centered to have zero mean and $X_k$ standardized to have zero mean and unit $\ell_2$ norm, i.e., $\norm{X_{jk}}_2 = 1$ for $j = 1, \ldots, p,$ where $X_{jk} \in \mathbb{R}^{n_k}$ denotes the $j$th predictor in study $k$. Unless otherwise stated, we use $i \in \{1, \ldots, N\}$ to index the observations, $j \in \{1, \ldots, p\}$ the predictors, and $k \in \{1, \ldots, K + V\}$ the studies. For example, $X_{ijk} \in \mathbb{R}$ is the value of the $j$th predictor for observation $i$ in study $k.$ We formally introduce boosting on the merged study $(Y, X)$ in the next section; the formulation is the same for the $k$th study if one replaces $(Y, X)$ with $(Y_k, X_k)$. In particular, we focus on boosting with linear learners due to its analytical tractability. We denote a linear learner as an operator $H: \mathbb{R}^N \rightarrow \mathbb{R}^N$ that maps the responses $Y$ to fitted values $\hat{Y}$.
Examples of linear learners include ridge regression and more general projectors onto a class of basis functions such as regression or smoothing splines. We denote the basis-expanded predictor matrix by $\tilde{X} \in \mathbb{R}^{N \times P}$ and the subset of predictors with random effects by $\tilde{Z} \in \mathbb{R}^{N \times Q}$. We define the basis-expanded predictor matrix as $$\tilde{X} = \left[h(X_1) \quad \cdots \quad h(X_N)\right]^T \in \mathbb{R}^{N \times P},$$ where $$h(X_i) = \left(h_{11}(X_{i1}),\ldots, h_{U_1 1}(X_{i1}), \ldots, h_{1p}(X_{ip}), \ldots, h_{U_p p}(X_{ip})\right) \in \mathbb{R}^P, \quad i = 1,\ldots, N$$ is the vector of $P = \sum_{j=1}^{p}U_j$ one-dimensional basis functions evaluated at the predictors $X_i \in \mathbb{R}^p$. As an example, suppose we have $p=2$ covariates, $X_{i1}, X_{i2}$, and we want to model $X_{i1}$ linearly and $X_{i2}$ with a cubic spline with knots at $\xi_1 = 0$ and $\xi_2 = 1.5$. The basis-expanded predictor matrix $\tilde{X}$ contains the following vector of $P = 6$ basis functions: $$h(X_i) = \left(h_{11}(X_{i1}), h_{12}(X_{i2}), h_{22}(X_{i2}), h_{32}(X_{i2}), h_{42}(X_{i2}), h_{52}(X_{i2})\right), \quad i = 1, \ldots, N$$ where \begin{center} $ \begin{aligned}[c] h_{11}(X_{i1}) &= X_{i1}\\ h_{12}(X_{i2}) &= X_{i2}\\ h_{22}(X_{i2}) &= X_{i2}^2\\ \end{aligned} \qquad \qquad \begin{aligned}[c] h_{32}(X_{i2}) &= X_{i2}^3\\ h_{42}(X_{i2}) &= (X_{i2} - 0)^3_{+}\\ h_{52}(X_{i2}) &= (X_{i2} - 1.5)^3_+ \end{aligned} $ \end{center} and $(X_{i2} - \xi)^3_+ = \max\left\{(X_{i2}-\xi)^3, 0\right\}.$ For $\lambda \geq 0$, our goal is to minimize the objective $$||Y - \tilde{X}\beta||^2_2 + \lambda \beta^T\beta$$ with respect to the parameters $\beta \in \mathbb{R}^P$. We denote the vector of coefficient estimates and fitted values by $\hat{\beta} \coloneqq BY$ and $\hat{Y} \coloneqq HY$, respectively, where $$B\coloneqq (\tilde{X}^T\tilde{X} + \lambda I )^{-1}\tilde{X}^T \in \mathbb{R}^{P \times N}$$ and $$H\coloneqq \tilde{X}(\tilde{X}^T\tilde{X} + \lambda I )^{-1}\tilde{X}^T = \tilde{X}B \in \mathbb{R}^{N \times N}.$$ \subsection{Boosting with linear learners} Given the basis-expanded predictor matrix $\tilde{X} \in \mathbb{R}^{N \times P}$, the goal of boosting is to obtain an estimate $\hat{F}(\tilde{X})$ of the function $F(\tilde{X})$ that minimizes the expected loss $E\left[\ell(Y, F(\tilde{X}))\right]$ for a given loss function $\ell(\cdot, \cdot): \mathbb{R}^{N} \times \mathbb{R}^{N} \rightarrow \mathbb{R}^N_+$. Here, the outcome $Y \in \mathbb{R}^N$ may be continuous (regression problem) or discrete (classification problem). Examples of $\ell(Y, F)$ include the exponential loss $\exp(-YF)$ for AdaBoost (\cite{freund1995boosting}) and the $\ell_2$ (squared error) loss $(Y - F)^2/2$ for $\ell_2$ boosting (\cite{buhlmann2003boosting}). In finite samples, estimation of $F(\cdot)$ proceeds by minimizing the empirical risk via functional gradient descent, where the base learner $g(\tilde{X}; \hat{\theta}_m)$ is repeatedly fit to the negative gradient vector $$r_{(m)} =\left. \frac{-\partial \ell(Y, F)}{\partial F}\right|_{F = \hat{F}_{(m-1)}(\tilde{X})}$$ and the fit is updated as $\hat{F}_{(m)}(\tilde{X}) = \hat{F}_{(m-1)}(\tilde{X}) + \eta g(\tilde{X}; \hat{\theta}_m)$ for $m = 1, \ldots, M$ iterations. Here, $\eta \in (0, 1]$ denotes the learning rate, and $\hat{\theta}_m$ denotes the estimated finite- or infinite-dimensional parameter that characterizes $g$ (e.g., if $g$ is a regression tree, then $\theta$ includes the tree depth, the minimum number of observations in a leaf, etc.).
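For concreteness, the ridge learner and the iterative residual-fitting scheme just described can be sketched in a few lines of \texttt{numpy}. This is a minimal illustration anticipating \textbf{Algorithm 1} below, assuming the basis expansion has already been applied; the names \texttt{ridge\_learner} and \texttt{l2\_boost} are our own, not part of any package.
\begin{verbatim}
import numpy as np

def ridge_learner(X, lam):
    """Return (B, H) for a ridge linear learner: B maps responses to
    coefficient estimates, and H = X @ B maps responses to fitted values."""
    P = X.shape[1]
    B = np.linalg.solve(X.T @ X + lam * np.eye(P), X.T)  # (P, N)
    H = X @ B                                            # (N, N)
    return B, H

def l2_boost(X, Y, lam=1.0, eta=0.5, M=50):
    """l2 boosting with a ridge base learner: repeatedly fit the
    learner to the current residuals and take a step of size eta."""
    B, H = ridge_learner(X, lam)
    beta = np.zeros(X.shape[1])
    fitted = np.zeros(len(Y))
    for _ in range(M):
        r = Y - fitted            # current residuals r_(m)
        beta += eta * (B @ r)     # coefficient update
        fitted += eta * (H @ r)   # fitted-value update
    return beta
\end{verbatim}
Unrolling the loop reproduces the closed form $\hat{\beta}_{(M)} = \sum_{m=1}^M \eta B(I - \eta H)^{m-1}Y$ discussed next, since the residuals after $m-1$ steps equal $(I - \eta H)^{m-1}Y$.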
Under $\ell_2$ loss, the negative gradient at iteration $m$ is equivalent to the residuals $Y - \hat{F}_{(m-1)}(\tilde{X})$. Therefore, $\ell_2$ boosting yields a stage-wise procedure that iteratively fits the current residuals (\cite{buhlmann2003boosting, friedman2001greedy}). Let $\hat{\beta}_{(m)} \in \mathbb{R}^P$ and $\hat{Y}_{(m)} \in \mathbb{R}^N$ denote the coefficient estimates and fitted values at the $m$th boosting iteration, respectively. We describe $\ell_2$ boosting with linear learners in \textbf{Algorithm 1}. \begin{algorithm} \caption{$\ell_2$ boosting with linear learners.} \begin{algorithmic}[1] \State Initialization: $$\hat{\beta}_{(0)} = 0, \quad \hat{Y}_{(0)} = 0$$ \State Iteration: For $m = 1, 2, \ldots, M,$ fit a linear learner to the residuals $r_{(m)} = Y - \hat{Y}_{(m-1)}$ and obtain the estimated coefficients $$\hat{\beta}_{(m)}^{\text{current}} = Br_{(m)}$$ and fitted values $$\hat{Y}_{(m)}^{\text{current}} = Hr_{(m)}.$$ The new coefficient estimates are given by $$\hat{\beta}_{(m)} = \hat{\beta}_{(m - 1)} + \eta \hat{\beta}_{(m)}^{\text{current}},$$ and the new fitted values are given by $$\hat{Y}_{(m)} = \hat{Y}_{(m - 1)} +\eta \hat{Y}_{(m)}^{\text{current}},$$ where $\eta \in (0, 1]$ is the learning rate. \end{algorithmic} \end{algorithm} By Proposition 1 in \cite{buhlmann2003boosting}, the $\ell_2$ boosting coefficient estimates at iteration $M$ can be written as \begin{equation} \label{eqn:l2boost} \hat{\beta}^{\text{Merge}}_{(M)} = \sum_{m=1}^M \eta B(I - \eta H)^{m-1}Y. \end{equation} Equation (\ref{eqn:l2boost}) represents $\hat{\beta}^{\text{Merge}}_{(M)}$ as the sum of coefficient estimates obtained from repeatedly fitting a linear learner $H$ to the residuals $r_{(m)} = (I - \eta H)^{m-1}Y$ at iterations $m = 1, \ldots, M.$ The ensemble estimator, based on pre-specified weights $w_k$ such that $\sum_{k=1}^K w_k = 1,$ is \begin{equation} \hat{\beta}^{\text{Ens}}_{(M)} = \sum_{k=1}^K w_k \hat{\beta}_{(M)k} = \sum_{k=1}^K w_k \left[\sum_{m=1}^M \eta B_k(I - \eta H_k)^{m-1}Y_k\right], \end{equation} where $B_k$ and $H_k \hspace{0.3em} (k = 1, \ldots, K)$ are study-specific analogs of $B$ and $H,$ respectively. \subsection{Boosting with component-wise linear learners} Boosting with component-wise linear learners (\cite{buhlmann2007boosting, buhlmann2003boosting}), also known as LS-Boost (\cite{friedman2001greedy}) or least squares boosting (\cite{freund2017new}), determines the predictor $\tilde{X}_{\hat{j}_{(m)}} \in \mathbb{R}^N$ that yields the maximal decrease in the univariate least squares fit to the current residuals $r_{(m)}$. The algorithm then updates the $\hat{j}_{(m)}$th coefficient and leaves the rest unchanged. Let $\hat{\beta}_{(m)j} \in \mathbb{R}$ denote the $j$th coefficient estimate at the $m$th iteration and $\hat{\beta}_{\hat{j}_{(m)}} \in \mathbb{R}$ the estimated coefficient of the selected covariate at iteration $m$. \textbf{Algorithm 2} describes boosting with component-wise linear learners.
\begin{algorithm} \caption{$\ell_2$ boosting with component-wise linear learners.} \begin{algorithmic}[1] \State Initialization: $$\hat{ \beta}_{(0)} = 0, \quad \hat{ Y}_{(0)} = 0$$ \State Iteration: For $m = 1, 2, \ldots, M,$ compute the residuals $$ r_{(m)} = Y - \hat{ Y}_{(m-1)}.$$ Determine the covariate $\tilde{X}_{\hat{j}_{(m)}}$ that results in the best univariate least squares fit to $r_{(m)}:$ $$\hat{j}_{(m)} = \argmin_{1 \leq j \leq P} \sum_{i=1}^N \left( r_{(m)i} - \tilde{X}_{ij}\hat{\beta}_{(m)j}\right)^2.$$ Calculate the corresponding coefficient estimate: $$ \hat{\beta}_{\hat{j}_{(m)}} = \left(\tilde{X}_{\hat{j}_{(m)}}^T \tilde{X}_{\hat{j}_{(m)}}\right)^{-1} \tilde{X}_{\hat{j}_{(m)}}^T r_{(m)}.$$ Update the fitted values and the coefficient estimate for the $\hat{j}_{(m)}$th covariate \begin{align*} \hat{Y}_{(m)} &= \hat{ Y}_{(m - 1)} + \eta \tilde{X}_{\hat{j}_{(m)}}\hat{\beta}_{\hat{j}_{(m)}}\\ \hat{\beta}_{(m)\hat{j}_{(m)}} &= \hat{\beta}_{(m-1)\hat{j}_{(m)}} + \eta \hat{\beta}_{\hat{j}_{(m)}} \end{align*} where $\eta \in (0, 1]$ is a learning rate. \end{algorithmic} \end{algorithm} \begin{proposition} \label{prop:1} Let $e_{\hat{j}_{(m)}} \in \mathbb{R}^P$ denote a unit vector with a 1 in the $\hat{j}_{(m)}$-th position, $$B_{(m)} = e_{\hat{j}_{(m)}} \left(\tilde{X}_{\hat{j}_{(m)}}^T \tilde{X}_{\hat{j}_{(m)}}\right)^{-1}\tilde{X}^T_{\hat{j}_{(m)}},$$ and $$H_{(m)} = \tilde{X}_{\hat{j}_{(m)}}\left(\tilde{X}_{\hat{j}_{(m)}}^T \tilde{X}_{\hat{j}_{(m)}}\right)^{-1}\tilde{X}^T_{\hat{j}_{(m)}}.$$ The coefficient estimates for $\ell_2$ boosting with component-wise linear learners at iteration $M$ can be written as: \begin{equation} \hat{\beta}^{\text{Merge, CW}}_{(M)} = \sum_{m=1}^M \eta B_{(m)}\left(\prod_{\ell = 0}^{m-1} \left(I - \eta H_{(m-\ell-1)}\right)\right)Y. \end{equation} \end{proposition} A proof is provided in the appendix. Proposition \ref{prop:1} represents $\hat{\beta}^{\text{Merge,CW}}_{(M)}$ as the sum across coefficient estimates obtained from repeatedly fitting a univariate linear learner $H_{(m)}$ to the current residuals $r_{(m)} =(\prod_{\ell = 0}^{m-1} (I - \eta H_{(m-\ell-1)}))Y$ at iteration $m$. As $M \rightarrow \infty,$ $\hat{\beta}^{\text{Merge,CW}}_{(M)}$ converges to a least squares solution that is unique if the predictor matrix has full rank (\cite{buhlmann2007boosting}). The ensemble estimator, based on pre-specified weights $w_k$, is \begin{equation} \hat{\beta}^{\text{Ens, CW}}_{(M)} = \sum_{k=1}^K w_k \hat{\beta}^{\text{CW}}_{(M)k} = \sum_{k=1}^K w_k \left[\sum_{m=1}^M \eta B_{(m)k} \left(\prod_{\ell = 0}^{m-1} \left( I - \eta H_{(m-\ell-1)k}\right)\right) Y_k\right] \end{equation} where $B_{(m)k}$ and $H_{(m)k}$ are study-specific analogs of $B_{(m)}$ and $H_{(m)},$ respectively. \subsection{Performance comparison}\label{performance} We compare merging and ensembling based on mean squared prediction error (MSPE) of $V$ unseen test studies $\tilde{X}_0 \in \mathbb{R}^{N^{\text{Test}} \times P}$ with unknown outcome vector $Y_0 \in \mathbb{R}^{N^{\text{Test}}}$, $$E[||Y_0 - \tilde{X}_0\hat{\beta}_{(M)}||^2_2]$$ where $\norm{\cdot}_2$ denotes the $\ell_2$ norm. To properly characterize the performance of boosting with component-wise linear learners (\textbf{Algorithm 2}), we account for the algorithm's adaptive nature by conditioning on its selection path. 
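Before turning to the analysis, we note that \textbf{Algorithm 2} and the closed form in Proposition~\ref{prop:1} are easy to check numerically. The following sketch (a standalone illustration with hypothetical helper names, using only \texttt{numpy}) runs component-wise $\ell_2$ boosting, records the selection path, and then reconstructs identical coefficients from the $B_{(m)}$ and $H_{(m)}$ operators along that path.
\begin{verbatim}
import numpy as np

def cw_l2_boost(X, Y, eta=0.5, M=30):
    """Algorithm 2: update only the column with the best univariate
    least-squares fit to the current residuals."""
    N, P = X.shape
    beta, fitted, path = np.zeros(P), np.zeros(N), []
    norms2 = np.sum(X**2, axis=0)                # squared column norms
    for _ in range(M):
        r = Y - fitted
        coefs = X.T @ r / norms2                 # univariate LS coefficients
        rss = np.sum(r**2) - coefs**2 * norms2   # residual sum of squares
        j = int(np.argmin(rss))
        beta[j] += eta * coefs[j]
        fitted += eta * coefs[j] * X[:, j]
        path.append(j)
    return beta, path

def cw_closed_form(X, Y, path, eta=0.5):
    """Proposition 1: sum of B_(m) applied to the accumulated
    products of (I - eta * H_(m)) along a given selection path."""
    N, P = X.shape
    beta, resid_op = np.zeros(P), np.eye(N)      # resid_op = Upsilon_(m)
    for j in path:
        x = X[:, j]
        r = resid_op @ Y                         # r_(m)
        beta[j] += eta * (x @ r) / (x @ x)
        H = np.outer(x, x) / (x @ x)             # univariate hat matrix H_(m)
        resid_op = (np.eye(N) - eta * H) @ resid_op
    return beta

# sanity check: iterative and closed-form estimates agree
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10)); X -= X.mean(0); X /= np.linalg.norm(X, axis=0)
Y = X @ rng.normal(size=10) + rng.normal(size=80); Y -= Y.mean()
b_iter, path = cw_l2_boost(X, Y)
assert np.allclose(b_iter, cw_closed_form(X, Y, path))
\end{verbatim}
Initializing \texttt{resid\_op} at the identity corresponds to the convention $H_{(0)} \coloneqq 0$ used in the proof of Proposition~\ref{prop:1}.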
To make progress analytically, we assume $Y$ is normally distributed with mean $\mu \coloneqq f(\tilde{X})$ and covariance $\Sigma \coloneqq \text{blkdiag}(\{Z_kGZ_k^T+\sigma^2_{\epsilon}I\}^K_{k=1})$. Note that at iteration $m$, the covariate $\tilde{X}_{\hat{j}_{(m)}}$ results in the best univariate least squares fit to $r_{(m)}$ if and only if $$ \norm{(I - H_{(\hat{j}_{(m)})}) r_{(m)}}_2^2 \leq \norm{( I - H_{(j)}) r_{(m)}}_2^2,$$ which is equivalent to \begin{equation} \label{eqn:selection} \left(sgn_{(m)} \tilde{X}_{\hat{j}_{(m)}}^T/\norm{ \tilde{X}_{\hat{j}_{(m)}}}_2 \pm \tilde{X}_j^T/\norm{ \tilde{X}_{j}}_2\right) r_{(m)} \geq 0 \end{equation} $\forall j \neq \hat{j}_{(m)}$, where $sgn_{(m)} = \text{sign}(\tilde{X}_{\hat{j}_{(m)}}^T r_{(m)})$ and $$r_{(m)} = \prod_{\ell=0}^{m-1}( I - \eta H_{(m-\ell-1)}) Y \eqqcolon \Upsilon_{(m)} Y.$$ With $\tilde{X}$ fixed, the inequalities in (\ref{eqn:selection}) can be compactly represented as the polyhedral representation $\Gamma Y \geq 0$ for a matrix $\Gamma \in \mathbb{R}^{2M(P-1) \times N},$ with the $(\tilde{m} + 2(j - \omega(j)) - 1)$th and $(\tilde{m} + 2(j - \omega(j)))$th rows given by $$\left(sgn_{(m)} \tilde{X}_{\hat{j}_{(m)}}^T/\norm{ \tilde{X}_{\hat{j}_{(m)}}}_2 \pm \tilde{X}_j^T/\norm{ \tilde{X}_j}_2\right) \Upsilon_{(m)}$$ $\forall j \neq \hat{j}_{(m)}$, with $\tilde{m} = 2(P-1)(m-1)$ and $\omega(j) = \mathbb{1}\{j > \hat{j}_{(m)}\}$ (\cite{rugamer2020inference}). The $j$th regression coefficient in \textbf{Algorithm 2} can be written as $$\hat{\beta}^{\text{Merge, CW}}_{(M)j} = v_j^TY,$$ where $$v_j = \left(\sum_{m=1}^M \eta B_{(m)} \prod_{\ell = 0}^{m-1}\left(I - \eta H_{(m-\ell - 1)}\right)\right)^Te_j$$ and $e_j \in \mathbb{R}^P$ is a unit vector. The distribution of $\hat{\beta}^{\text{Merge, CW}}_{(M)j}$ conditional on the selection path is given by the polyhedral lemma in \cite{lee2016exact}. \begin{lemma}[Polyhedral lemma from \cite{lee2016exact}] \label{lemma:lee} Given the selection path $$\mathcal{P}\coloneqq \{Y:\Gamma Y \geq 0, z_j = z\},$$ where $z_j \coloneqq (I - c_jv_j^T)Y$ and $c_j \coloneqq \Sigma v_j(v_j^T\Sigma v_j)^{-1}$, we have $$\hat{\beta}^{\text{Merge, CW}}_{(M)j}|\mathcal{P} \sim \text{TruncatedNormal}\left(v_j^T\mu, v_j^T\Sigma v_j, a_j, b_j\right),$$ where \begin{align*} a_j &= \max_{\ell:(\Gamma c_j)_{\ell} > 0} \frac{0-(\Gamma z_j)_{\ell}}{(\Gamma c_j)_{\ell}},\\ b_j &= \min_{\ell: (\Gamma c_j)_{\ell} < 0} \frac{0-(\Gamma z_j)_{\ell}}{(\Gamma c_j)_{\ell}}. \end{align*} \end{lemma} A proof is provided in the appendix. The conditioning is important because it properly accounts for the adaptive nature of \textbf{Algorithm 2}. Conceptually, it measures the magnitude of $\hat{\beta}^{\text{Merge, CW}}_{(M)j}$ among random vectors $Y$ that would result in the selection path $\Gamma Y \geq 0$ for a fixed value of $z_j$. When $\Sigma = \sigma^2I,$ $z_j = (I - v_j(v_j^Tv_j)^{-1}v_j^T)Y$ is the projection onto the orthocomplement of $v_j.$ Accordingly, the polyhedron $\Gamma Y \geq 0$ holds if and only if $v_j^TY$ does not deviate too far from $z_j,$ hence trapping it between the bounds $a_j$ and $b_j$ (\cite{tibshirani2016exact}). Moreover, because $a_j$ and $b_j$ are functions of $z_j$ alone, they are independent of $v_j^TY$ under normality. The result in Lemma \ref{lemma:lee} allows us to analytically characterize the mean squared error of the estimators $\hat{\beta}^{\text{Merge, CW}}_{(M)j}$ and $\hat{\beta}^{\text{Ens, CW}}_{(M_{\text{Ens}})j}$ conditional on their respective selection paths.
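In practice, Lemma~\ref{lemma:lee} reduces to a few vectorized operations once $\Gamma$ has been assembled from the selection path. A minimal sketch (the helper name is ours; inputs are assumed precomputed as defined above):
\begin{verbatim}
import numpy as np

def truncation_limits(Gamma, Sigma, v, Y):
    """Lemma 1: bounds [a_j, b_j] that trap v^T Y once we condition
    on the selection path {Gamma @ Y >= 0} and on z_j."""
    c = Sigma @ v / float(v @ Sigma @ v)   # c_j
    z = Y - c * float(v @ Y)               # z_j, independent of v^T Y
    rho, num = Gamma @ c, -(Gamma @ z)
    pos, neg = rho > 0, rho < 0
    a = np.max(num[pos] / rho[pos]) if pos.any() else -np.inf
    b = np.min(num[neg] / rho[neg]) if neg.any() else np.inf
    return a, b
\end{verbatim}
The $\pm\infty$ defaults handle the (rare) case in which no row of $\Gamma c_j$ has the required sign, so that the corresponding bound is vacuous.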
\subsection{Implicit regularization and early stopping} \label{subsection:earlystop} In \textbf{Algorithm 1} and \textbf{Algorithm 2}, the learning rate $\eta$ and stopping iteration $M$ together control the amount of shrinkage and the training error. A smaller learning rate $\eta$ leads to slower overfitting but requires a larger $M$ to reduce the training error to zero. With a small $\eta$, it is possible to explore a larger class of models, which often leads to models with better predictive performance (\cite{friedman2001greedy}). Although boosting algorithms with small values of $\eta$ are known to overfit slowly, it is still necessary to implement early stopping strategies to avoid overfitting (\cite{schapire1998boosting}). The boosting fit for \textbf{Algorithm 1} at iteration $m$ (assuming $\eta = 1)$ is $$\mathcal{B}_{(m)}Y \coloneqq (I-(I-H)^{m+1})Y,$$ where $\mathcal{B}_{(m)}: \mathbb{R}^N \rightarrow \mathbb{R}^N$ is the boosting operator. For a base learner that satisfies $\norm{I - H} \leq 1$ in a suitable norm, we have $\mathcal{B}_{(m)}Y \rightarrow Y$ as $m \rightarrow \infty$. That is, if left to run forever, the boosting algorithm converges to the fully saturated model $Y$ (\cite{buhlmann2007boosting}). A similar argument can be made for \textbf{Algorithm 2}, where $$\mathcal{B}^{\text{CW}}_{(m)} = I-(I - H_{(\hat{j}_{(m)})})(I - H_{(\hat{j}_{(m-1)})}) \cdots (I - H_{(\hat{j}_{(1)})})$$ is the component-wise boosting operator. We define the degrees of freedom at iteration $m$ as $tr(\mathcal{B}_{(m)})$ and use the corrected AIC criterion ($AIC_c$) (\cite{buehlmann2006boosting}) to choose the stopping iteration $M.$ Compared to cross-validation (CV), $AIC_c$-tuning is computationally efficient, as it does not require running the boosting algorithm multiple times. For \textbf{Algorithm 1}, the $AIC_c$ at iteration $m$ is given by \begin{equation} \label{eqn:AICC} AIC_c(m) = \log(\hat{\sigma}^2_{(m)}) + \frac{1 + tr(\mathcal{B}_{(m)})/N}{1 - (tr(\mathcal{B}_{(m)}) + 2)/N}, \end{equation} where $\hat{\sigma}^2_{(m)} = \frac{1}{N}\sum_{i=1}^N (Y_i - (\mathcal{B}_{(m)}Y)_i )^2$. The stopping iteration is $$M = \argmin_{1 \leq m \leq m_{upp}} AIC_c(m),$$ where $m_{upp}$ is a large upper bound on the candidate number of boosting iterations (\cite{buehlmann2006boosting}). For \textbf{Algorithm 2}, the $AIC_c$ is computed by replacing $\mathcal{B}_{(m)}$ with $\mathcal{B}^{\text{CW}}_{(m)}.$ We allow the stopping iterations to differ between the merged and ensemble learners. In our results, we denote them by $M$ and $M_{\text{Ens}} = \{M_k\}_{k=1}^K,$ respectively. \section{Results} We summarize the degree of heterogeneity in predictor-outcome relationships across studies by the sum of the variances of the random effects divided by the number of fixed effects: $\overline{\sigma}^2 \coloneqq tr(G)/P$, where $G \in \mathbb{R}^{Q \times Q}$. For boosting with linear learners, let $\tilde{R} = \sum_{m=1}^{M} \eta B(I-\eta H)^{m-1}$ and $\tilde{R}_k = \sum_{m=1}^{M_k} \eta B_k(I-\eta H_k)^{m-1}$. Let $b^{\text{Merge}} = Bias(\tilde{X}_0\hat{\beta}^{\text{Merge}}_{(M)}) = \tilde{X}_0\tilde{R}f(\tilde{X}) - f(\tilde{X}_0)$ denote the bias of the merged predictions and $b^{\text{Ens}} = Bias(\tilde{X}_0\hat{\beta}^{\text{Ens}}_{(M_{\text{Ens}})}) = \sum_{k=1}^K w_k \tilde{X}_0\tilde{R}_kf(\tilde{X}_k) - f(\tilde{X}_0)$ the bias of the ensemble predictions.
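Before presenting the main results, we note that the $AIC_c$ rule of Section~\ref{subsection:earlystop} is inexpensive to implement because the boosting operator obeys the recursion $\mathcal{B}_{(m)} = \mathcal{B}_{(m-1)} + \eta H(I - \mathcal{B}_{(m-1)})$. A minimal sketch for \textbf{Algorithm 1} with a fixed linear learner $H$ (the helper name and iteration indexing are conventions of this sketch, not of the cited references):
\begin{verbatim}
import numpy as np

def aicc_stopping(H, Y, eta=0.5, m_upp=500):
    """Track the boosting operator B_(m) and return the iteration
    minimizing AIC_c(m) as in eq. (8)."""
    N = len(Y)
    Bm = np.zeros((N, N))                 # boosting operator at m = 0
    best_aicc, best_m = np.inf, 1
    for m in range(1, m_upp + 1):
        # fitted_(m) = fitted_(m-1) + eta * H (Y - fitted_(m-1))
        Bm = Bm + eta * H @ (np.eye(N) - Bm)
        df = np.trace(Bm)                 # degrees of freedom tr(B_(m))
        if df + 2 >= N:                   # AIC_c undefined beyond this
            break
        sigma2 = np.mean((Y - Bm @ Y) ** 2)
        aicc = np.log(sigma2) + (1 + df / N) / (1 - (df + 2) / N)
        if aicc < best_aicc:
            best_aicc, best_m = aicc, m
    return best_m
\end{verbatim}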
Let $Z' = \text{blkdiag}(\{Z_k\}_{k=1}^K)$ and $G' = \text{blkdiag}(\{G_k\}_{k=1}^K)$, where $G_k = G$ for $k = 1, \ldots, K.$ \subsection{Boosting with linear learners} \begin{customthm}{1}\label{thm:1} Suppose \begin{equation} \label{eq:thm1cond} \text{tr}( Z'^T \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R} Z') - \sum_{k=1}^K w_k^2 \text{tr}( Z_k^T \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k Z_k) > 0 \end{equation} and define \begin{equation} \label{eqn:tau} \scriptsize \tau =\frac{Q}{P} \times \frac{\sigma^2_{\epsilon}\left(\sum_{k=1}^Kw_k^2 \text{tr}( \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k)-\text{tr}( \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R})\right)+ ( b^{\text{Ens}})^T b^{\text{Ens}} - ( b^{\text{Merge}})^T b^{\text{Merge}}}{\text{tr}( Z'^T \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R} Z') - \sum_{k=1}^K w_k^2 \text{tr}( Z_k^T \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k Z_k)}. \end{equation} Then $E[\norm{Y_0 - \tilde{X}_0\hat{\beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2] \leq E[\norm{Y_0 - \tilde{X}_0\hat{\beta}^{\text{Merge}}_{(M)}}^2_2]$ if and only if $\overline{\sigma}^2 \geq \tau.$ \end{customthm} A proof is provided in the appendix. Under the equal variances assumption, Theorem \ref{thm:1} characterizes a transition point $\tau$ beyond which ensembling outperforms merging for \textbf{Algorithm 1}. The numerator of $\tau$ captures differences between merging and ensembling in within-study variability and bias, and the denominator captures differences in between-study variability. The condition in (\ref{eq:thm1cond}), which ensures that $\tau$ is well defined, holds when the between-study variability of $\hat{\beta}^{\text{Merge}}_{(M)}$ is greater than that of $\hat{\beta}^{\text{Ens}}_{(M_{\text{Ens}})}$. This is generally true because merging does not account for between-study heterogeneity. $\tau$ depends on the population mean function $f$ through the bias terms. Therefore, an estimate of $f$ is required to estimate the transition point unless the bias is equal to zero. One example of an unbiased estimator is ordinary least squares, which can be obtained by setting $H = \tilde{X}(\tilde{X}^T\tilde{X})^{-1}\tilde{X}^T$ and $M = \eta = 1$. In general, for any linear learner $H: \mathbb{R}^N \rightarrow \mathbb{R}^N$, the transition point in \cite{guan2019merging} (cf. Theorem 1) is a special case of (\ref{eqn:tau}) when $M = \eta = 1$. \begin{corollary} \label{cor:1} Suppose $\text{tr}( Z'^T \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R} Z') \neq 0.$ As $\overline{\sigma}^2 \rightarrow \infty,$ $$\frac{E[\norm{Y_0 - \tilde{X}_0\hat{\beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2]}{ E[\norm{Y_0 - \tilde{X}_0\hat{\beta}^{\text{Merge}}_{(M)}}^2_2]} \longrightarrow \frac{\sum_{k=1}^K w_k^2 \text{tr}( Z_k^T \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k Z_k)}{ \text{tr}( Z'^T \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R} Z')}.$$ \end{corollary} This result follows immediately from Theorem \ref{thm:1}. According to Corollary \ref{cor:1}, the asymptote of the MSPE ratio comparing ensembling to merging equals the ratio of their between-study variability. Because the merged estimator does not account for between-study variability, the asymptote is less than one.
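The transition point in (\ref{eqn:tau}) is a ratio of traces and is simple to evaluate numerically once the operators $\tilde{R}$ and $\tilde{R}_k$ and the bias vectors have been formed. A sketch (argument names are ours; all inputs are assumed precomputed as defined above, with $\tilde{R}$ of dimension $P \times N$ and $\tilde{R}_k$ of dimension $P \times n_k$):
\begin{verbatim}
import numpy as np

def transition_point(R, R_list, X0, Zp, Z_list, w,
                     b_merge, b_ens, sigma2_eps, P, Q):
    """Evaluate tau in eq. (6); R and R_list[k] are the merged and
    study-specific boosting operators, Zp the block-diagonal Z'."""
    A = X0.T @ X0
    var_term = sigma2_eps * (
        sum(wk**2 * np.trace(Rk.T @ A @ Rk)
            for wk, Rk in zip(w, R_list))
        - np.trace(R.T @ A @ R))
    bias_term = b_ens @ b_ens - b_merge @ b_merge
    denom = (np.trace(Zp.T @ R.T @ A @ R @ Zp)
             - sum(wk**2 * np.trace(Zk.T @ Rk.T @ A @ Rk @ Zk)
                   for wk, Rk, Zk in zip(w, R_list, Z_list)))
    return (Q / P) * (var_term + bias_term) / denom
\end{verbatim}
The interval endpoints $\tau_1$ and $\tau_2$ of Theorem~\ref{thm:2} below can be computed analogously, replacing the denominator with the appropriate per-variance-group maximum or minimum.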
Let $\sigma^2_{(1)}, \ldots, \sigma^2_{(D)}$ denote the distinct values of variances of the random effects where $D \leq Q$, and let $J_d$ denote the number of random effects with variance $\sigma^2_{(d)}.$ \begin{theorem} \label{thm:2} Suppose $$\max_d \sum_{i: \sigma^2_i = \sigma^2_{(d)}} \left[ \sum_{k=1}^K \left(Z'^T\tilde{R}^T\tilde{X}_0^T\tilde{X}_0 \tilde{R}Z'\right)_{i + Q \times (k - 1), i + Q \times (k - 1)} - w_k^2\left(Z_k^T\tilde{R}_k^T\tilde{X}^T_0\tilde{X}_0\tilde{R}_k Z_k\right)_{i,i}\right] > 0$$ and define \begin{equation} \label{eqn:tau1} \scriptsize \tau_1 = \frac{ \sigma^2_{\epsilon}(\sum_{k=1}^Kw_k^2 \text{\text{tr}}(\tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k)-\text{tr}( \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}))+( b^{\text{Ens}})^T b^{\text{Ens}} - ( b^{\text{Merge}})^T b^{\text{Merge}}}{P \max_{d} \frac{1}{J_d}\sum_{i: \sigma^2_i = \sigma^2_{(d)}} [ \sum_{k=1}^K (Z'^T\tilde{R}^T\tilde{X}_0^T\tilde{X}_0 \tilde{R}Z')_{i + Q \times (k - 1), i + Q \times (k - 1)} - w_k^2(Z_k^T\tilde{R}_k^T\tilde{X}^T_0\tilde{X}_0\tilde{R}_k Z_k)_{i,i}]}. \end{equation} Then $E[||Y_0 - \tilde{X}_0 \hat{\beta}^{\text{Ens}}_{(M_{\text{Ens}})}||_2^2] \geq E[||Y_0 - \tilde{X}_0 \hat{\beta}^{\text{Merge}}_{(M)}||_2^2]$ when $\overline{\sigma}^2 \leq \tau_1.$ Suppose $$\min_d \sum_{i: \sigma^2_i = \sigma^2_{(d)}} \left[ \sum_{k=1}^K \left(Z'^T\tilde{R}^T\tilde{X}_0^T\tilde{X}_0 \tilde{R}Z'\right)_{i + Q \times (k - 1), i + Q \times (k - 1)} - w_k^2\left(Z_k^T\tilde{R}_k^T\tilde{X}^T_0\tilde{X}_0\tilde{R}_k Z_k\right)_{i,i}\right] > 0$$ and define \begin{equation} \label{eqn:tau2} \scriptsize \tau_2 = \frac{ \sigma^2_{\epsilon}(\sum_{k=1}^Kw_k^2 \text{\text{tr}}( \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k)-\text{tr}( \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}))+( b^{\text{Ens}})^T b^{\text{Ens}} - ( b^{\text{Merge}})^T b^{\text{Merge}}}{P \min_{d} \frac{1}{J_d}\sum_{i: \sigma^2_i = \sigma^2_{(d)}} [ \sum_{k=1}^K (Z'^T\tilde{R}^T\tilde{X}_0^T\tilde{X}_0 \tilde{R}Z')_{i + Q \times (k - 1), i + Q \times (k - 1)} - w_k^2(Z_k^T\tilde{R}_k^T\tilde{X}^T_0\tilde{X}_0\tilde{R}_k Z_k)_{i,i}]}. \end{equation} Then $E[||Y_0 - \tilde{X}_0 \hat{\beta}^{\text{Ens}}_{(M_{\text{Ens}})}||_2^2] \leq E[||Y_0 - \tilde{X}_0 \hat{\beta}^{\text{Merge}}_{(M)}||_2^2]$ when $\overline{\sigma}^2 \geq \tau_2.$ \end{theorem} A proof is provided in the appendix. Theorem \ref{thm:2} generalizes Theorem \ref{thm:1} to account for unequal variances along the diagonal of $G$. It characterizes a transition interval $[\tau_1, \tau_2]$ where merging outperforms ensembling when $\overline{\sigma}^2 \leq \tau_1$ and vice versa when $\overline{\sigma}^2 \geq \tau_2.$ The transition interval provided by \cite{guan2019merging} (cf. Theorem 2) is a special case of (\ref{eqn:tau1}, \ref{eqn:tau2}) when $M = \eta = 1$. \subsection{Boosting with component-wise linear learners} \label{sec:cwboost} To properly characterize the performance of the boosting estimator in \textbf{Algorithm 2}, we condition on its selection path. To this end, we provide the conditional MSE of the merged and ensemble estimators in Proposition \ref{prop:2}. Assuming $Y \sim MVN(\mu, \Sigma)$, it follows that $Y_k$ is normal with mean $\mu_k \coloneqq f(\tilde{X}_k)$ and covariance $\Sigma_k \coloneqq Z_kGZ_k^T + \sigma^2_{\epsilon}I$ for $k = 1,\ldots, K$. 
Let $$\mathcal{P} = \{Y: \Gamma Y \geq 0, z_j = z\}$$ and $$\mathcal{P}^{\text{Ens}} = \{\mathcal{P}_1, \ldots, \mathcal{P}_K\}$$ denote the conditioning events for the merged and ensemble estimators, respectively, where $$\mathcal{P}_k \coloneqq \{Y_k: \Gamma_k Y_k \geq 0, z_{jk} = z_k\}$$ summarizes the boosting path from fitting \textbf{Algorithm 2} to the data in study $k$. Let $\bar{\mu}_j = v_j^T\mu$ and $\vartheta^2_j = v_j^T\Sigma v_j$ denote the mean and variance of $\hat{\beta}^{\text{Merge, CW}}_{(M)j}=v^T_jY$, respectively, and let $\alpha_j = \frac{a_j - \bar{\mu}_j}{\vartheta_j}$ and $\xi_j = \frac{b_j - \bar{\mu}_j}{\vartheta_j}$ denote the standardized lower and upper truncation limits. We denote the study-specific versions of $\bar{\mu}_j, \vartheta_j, \alpha_j,$ and $\xi_j$ by $\bar{\mu}_{jk}, \vartheta_{jk}, \alpha_{jk},$ and $\xi_{jk},$ respectively. \begin{proposition} \label{prop:2} Let $\phi(\cdot)$ and $\Phi(\cdot)$ denote the probability density and cumulative distribution functions of a standard normal variable, respectively. The conditional mean squared error (MSE) of the merged estimator is {\scriptsize \begin{align*} E\left[\left.\left(\hat{\beta}^{\text{Merge, CW}}_{(M)j} - \beta_j\right)^2\right|\mathcal{P}\right] &= \left( \bar{\mu}_j - \vartheta_j\left(\frac{\phi(\xi_j)-\phi(\alpha_j)}{\Phi(\xi_j) - \Phi(\alpha_j)}\right) - \beta_j\right)^2 \\ &+ \vartheta^2_j \left(1 - \frac{\xi_j\phi(\xi_j) - \alpha_j\phi(\alpha_j)}{\Phi(\xi_j) - \Phi(\alpha_j)} - \left(\frac{\phi(\xi_j) - \phi(\alpha_j)}{\Phi(\xi_j) - \Phi(\alpha_j)}\right)^2\right). \end{align*} } The conditional MSE of the ensemble estimator is {\scriptsize \begin{align*} E\left[\left.\left(\hat{\beta}^{\text{Ens, CW}}_{(M_{\text{Ens}})j} - \beta_j\right)^2\right|\mathcal{P}^{\text{Ens}}\right] &= \left(\sum_{k=1}^K w_k \left(\bar{\mu}_{jk} - \vartheta_{jk}\left(\frac{\phi(\xi_{jk})-\phi(\alpha_{jk})}{\Phi(\xi_{jk}) - \Phi(\alpha_{jk})}\right)\right) - \beta_j\right)^2\\ &+ \sum_{k=1}^K w_k^2 \vartheta^2_{jk} \left(1 - \frac{\xi_{jk}\phi(\xi_{jk}) - \alpha_{jk}\phi(\alpha_{jk})}{\Phi(\xi_{jk}) - \Phi(\alpha_{jk})} - \left(\frac{\phi(\xi_{jk}) - \phi(\alpha_{jk})}{\Phi(\xi_{jk}) - \Phi(\alpha_{jk})}\right)^2\right).\\ \end{align*}} \end{proposition} A proof is provided in the appendix. Proposition \ref{prop:2} characterizes the conditional MSE of the boosting estimators via a bias-variance decomposition. By the polyhedral lemma (\cite{lee2016exact}), conditioning on the selection path $\Gamma Y \geq 0$ is equivalent to truncating $\hat{\beta}^{\text{Merge, CW}}_{(M)j} = v_j^TY$ to an interval $[a_j, b_j]$ determined by $z_j$. When there is no between-study heterogeneity, $z_j = (I-v_j(v_j^Tv_j)^{-1}v_j^T)Y$ is the residual from projecting $Y$ onto $v_j$. Loosely speaking, the selection path is equivalent to $v_j^TY$ not deviating too far from $z_j$. As shown in Section \ref{performance}, we can rewrite the selection path as a system of $2M(P-1)$ inequalities in the variable $v_j^TY$: \begin{equation} \label{eqn:linearSys} \{\Gamma Y \geq 0\} = \{\Gamma c_j (v_j^TY) \geq - \Gamma z_j\}. \end{equation} For fixed $P$, as the number of boosting iterations $M$ increases, the number of linear inequalities (or constraints) in (\ref{eqn:linearSys}) also increases; as a result, the polyhedron $\{\Gamma Y \geq 0\}$ shrinks. A smaller polyhedron generally leads to a narrower truncation interval $[a_j, b_j]$ around $v_j^TY$. Intuitively, a tighter truncation interval leads to reduced variance.
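The conditional MSE in Proposition~\ref{prop:2} involves only the first two moments of a truncated normal and can be evaluated directly; a sketch for the merged case (the helper name is ours, and \texttt{scipy.stats.norm} supplies $\phi$ and $\Phi$):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def conditional_mse(mu_bar, theta, a, b, beta_j):
    """Bias^2 + variance of a normal with mean mu_bar and s.d. theta
    truncated to [a, b] (merged case of Proposition 2)."""
    alpha, xi = (a - mu_bar) / theta, (b - mu_bar) / theta
    Z = norm.cdf(xi) - norm.cdf(alpha)          # truncation mass
    ratio = (norm.pdf(xi) - norm.pdf(alpha)) / Z
    mean = mu_bar - theta * ratio               # truncated mean
    var = theta**2 * (1 - (xi * norm.pdf(xi) - alpha * norm.pdf(alpha)) / Z
                      - ratio**2)               # truncated variance
    return (mean - beta_j)**2 + var
\end{verbatim}
The ensemble expression follows by weighting the study-specific truncated means and variances by $w_k$ and $w_k^2$, respectively.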
When between-study heterogeneity is low, at a fixed learning rate $\eta$, the merged model generally requires a later stopping iteration than the study-specific models due to the increase in sample size. Therefore, $\hat{\beta}^{\text{Merge, CW}}_{(M)j}$ tends to have a tighter truncation region and, as a result, smaller variance than $\hat{\beta}^{\text{Ens, CW}}_{(M_{\text{Ens}})j}$. As between-study heterogeneity increases, the merged model often has an earlier stopping iteration to avoid overfitting, so $Var(\hat{\beta}^{\text{Merge, CW}}_{(M)j}) > Var(\hat{\beta}^{\text{Ens, CW}}_{(M_{\text{Ens}})j})$. In practice, the variance component in Proposition \ref{prop:2} can be computed given estimates of $\Sigma$ and $f$. \section{Simulations} We conducted simulations to evaluate the performance of boosting with four base learners: ridge, component-wise least squares (CW-LS), component-wise cubic smoothing splines (CW-CS) and regression trees. We sampled predictors from the \texttt{curatedOvarianData} R package (\cite{ganzfried2013curatedovariandata}) to reflect realistic and potentially heterogeneous predictor distributions. The true data-generating model contains $p=10$ predictors, of which $5$ have random effects. The outcome for individual $i$ in study $k$ is \begin{equation} Y_{ik} = f(X_{ik}) + Z_{ik} \gamma_k + \epsilon_{ik}, \end{equation} where $\gamma_k \sim MVN(0, G)$ with $G = \text{diag}(\sigma^2_1, \ldots, \sigma^2_5)$, $Z_{ik} = (X_{3ik}, X_{4ik}, X_{5ik}, X_{6ik}, X_{7ik})$, and $\epsilon_{ik} \sim N(0, \sigma^2_{\epsilon})$ with $\sigma^2_{\epsilon} = 1$ for $i = 1, \ldots, n_k, k = 1, \ldots, K.$ The mean function $f$ has the form \begin{small} \begin{align} \label{eq:simfun} f(X_{ik}) &= -0.28 h_{11}(X_{1ik}) -0.12 h_{21}(X_{1ik}) -0.78 h_{31}(X_{1ik}) +0.035 h_{41}(X_{1ik}) -0.23X_{2ik} \nonumber\\ &+1.56 X_{3ik} -0.0056 X_{4ik} + 0.13 X_{5ik} +0.0013 X_{6ik} - 0.00071 X_{7ik} - 0.0023 X_{8ik} \nonumber\\ &-0.69 X_{9ik} + 0.016 X_{10ik} \end{align} \end{small} where $h_{11}, \ldots, h_{41}$ are cubic basis splines with a knot at 0, and the coefficients were generated from $N(0, 0.5)$.
The coefficients for $X_{2ik}, X_{3ik}, X_{5ik}$ and $X_{9ik}$ were generated from $N(0, 1)$, and those for $X_{4ik}, X_{6ik}, X_{7ik}, X_{8ik},$ and $X_{10ik}$ were generated from $N(0, 0.01).$ We generated $K = 4$ training and $V = 4$ test studies of size $100$. For each simulation replicate $s = 1, \ldots, 500$, we generated outcomes for varying levels of $\overline{\sigma}^2$, trained merged and multi-study ensemble boosting models, and evaluated them on the test studies. The outcome was centered to have zero mean, and the predictors were standardized to have zero mean and unit $\ell_2$ norm. The regularization parameter $\lambda$ for ridge boosting and the stopping iteration $M$ for tree boosting were chosen using 3-fold cross-validation. The stopping iteration for the linear base learners (ridge, CW-LS, and CW-CS) was chosen based on the $AIC_c$-tuning procedure described in Section \ref{subsection:earlystop}. All hyperparameters were tuned on a held-out data set of size 400 with $\overline{\sigma}^2$ set to zero. For tree boosting, we set the maximum tree depth to two. A learning rate of $\eta = 0.5$ was used for all boosting models. For the ensemble estimator, equal weight was assigned to each study. We considered two cases for the structure of $G$: 1) equal variance and 2) unequal variance. In the first case, Figure \ref{fig:1} shows the relative predictive performance comparing multi-study ensembling to merging for varying levels of $\overline{\sigma}^2$. When $\overline{\sigma}^2$ was small, the merged learner outperformed the ensemble learner. As $\overline{\sigma}^2$ increased, there was a transition point beyond which ensembling outperformed merging. The empirical transition point based on the simulation results confirmed the theoretical transition point (\ref{eqn:tau}) for boosting with linear learners. As $\overline{\sigma}^2$ tended to infinity, the log relative performance ratio tended to $-0.81$, consistent with Corollary \ref{cor:1}. Figure \ref{fig:2} shows the relative predictive performance in the unequal variance case. For boosting with linear learners, there was a transition interval $[\tau_1, \tau_2]$ such that merging outperformed ensembling when $\overline{\sigma}^2 \leq \tau_1$ and vice versa when $\overline{\sigma}^2 \geq \tau_2.$ Compared to boosting with linear or tree learners, boosting with component-wise learners had an earlier transition point. For boosting with component-wise linear learners, we compared the performance of merging and multi-study ensembling based on the results in Proposition \ref{prop:2}. In each simulation replicate, we generated outcomes based on (\ref{eq:simfun}) and estimated $\hat{\beta}^{\text{Merge, CW}}_{(M)}$ and $\hat{\beta}^{\text{Ens, CW}}_{(M)}$ with $M$ set to 30. We assumed equal variance along the diagonal of $G$. At each boosting iteration $m= 1,\ldots,M$, we evaluated the MSE of both estimators with respect to $\beta_6= 1.72$ conditional on the boosting path up to iteration $m$. We chose to evaluate the coefficient associated with $X_6$ because the true data-generating coefficient $\beta_6$ had the largest magnitude, and as a result, the component-wise boosting algorithm was more likely to select $X_6$. Figure \ref{fig:3} shows the MSE associated with the merged and ensemble estimators at $\overline{\sigma}^2= 0.01$ and 0.05. We chose these values because the empirical transition point for boosting with component-wise linear learners in Figure \ref{fig:1} lies between 0.01 and 0.05. When $\overline{\sigma}^2 = 0.01$, merging outperformed ensembling.
As the number of boosting iterations increased, both performed similarly. At $\overline{\sigma}^2 = 0.05,$ merging outperformed ensembling up until $M = 20$, beyond which ensembling began to show preferable performance. \section{Breast Cancer Application} Using data from the \texttt{curatedBreastData} R package (\cite{curatedBreastData}), we illustrate how the transition point theory can guide decisions on merging vs. ensembling. This R package contains 34 high-quality gene expression microarray studies from over 16 clinical trials on individuals with breast cancer. The studies were normalized and post-processed using the \texttt{processExpressionSetList()} function. In practice, a key determinant of breast cancer prognosis and staging is tumor size (\cite{fleming1997ajcc}). Clinicians use the TNM (tumor, node, metastasis) system to describe how extensive the breast cancer is. Under this system, ``T'' plus a letter or number (0 to 4) is used to describe the size, in centimeters (cm), and location of the tumor. While the best way to measure the tumor is after it has been removed from the breast, information on tumor size can help clinicians develop effective treatment strategies. Common treatment options for breast cancer include surgery (e.g., mastectomy or lumpectomy), drug therapy (e.g., chemotherapy or immunotherapy), or a combination of both (\cite{gradishar2021nccn}). In our data illustration, the goal was to predict tumor size (cm) before treatment and surgery. We trained boosting models on $K = 5$ training studies with a combined size of $N = 643$: ID 1379 ($n = 60$), ID 2034 ($n = 281$), ID 9893 ($n = 155$), ID 19615 ($n = 115$) and ID 21974 ($n = 32$), and evaluated them on $V = 4$ test studies with a combined size of $N^{\text{Test}} = 366$: ID 21997 ($n = 94$), ID 22226 ($n = 144$), ID 22358 ($n = 122$), and ID 33658 ($n = 10$). We selected as predictors the top $p = 40$ gene markers that were most highly correlated with tumor size in the training studies, and we randomly selected $q=8$ of them to have random effects with unequal variances. To calculate the transition interval from Theorem \ref{thm:2}, we trained boosting models with ridge learners using two strategies: merging and ensembling. We also estimated the variances of the random effects ($\sigma^2_1, \ldots, \sigma^2_8)$ and of the residual error ($\sigma^2_{\epsilon}$) by fitting a linear mixed effects model using restricted maximum likelihood. The estimates of $\overline{\sigma}^2$ and $\sigma^2_{\epsilon}$ were $4.32\times 10^{-2}$ and 1.053, respectively, and the transition interval was $[0.020, 0.026]$. In addition to ridge regression, we trained boosting models with three other base learners: CW-LS, CW-CS and regression trees. Results comparing the predictive performance of ensembling vs. merging are shown in Figure \ref{fig:4}. By Theorem \ref{thm:2}, merging would be preferred over ensembling for boosting with ridge learners because the estimate of $\overline{\sigma}^2$ was smaller than the lower bound of the transition interval. This result was corroborated by the boxplot of performance ratios in Figure \ref{fig:4}. Among the boosting algorithms that perform variable selection, ensembling outperformed merging when boosting with regression trees, and both performed similarly when boosting with component-wise learners. Table \ref{tab:1} summarizes the top three genes selected by each algorithm.
Genes were ordered by decreasing variable importance, defined as the reduction in training error attributable to selecting a particular gene. In the merged study, both boosting with CW-CS and boosting with trees selected the same three genes: \textit{S100P}, \textit{MMP11}, and \textit{E2F8}, whereas boosting with CW-LS selected \textit{S100P}, \textit{ASPN}, and \textit{SYT1}. This may be attributed to the fact that, compared to CW-LS, CW-CS and trees are more flexible and can capture non-linear trends in the data. Overall, there was some overlap in the genes selected by the three base learners across studies. In study ID 1379, all three base learners selected \textit{S100P}, and all but the tree learner selected \textit{AEBP1}. In studies ID 9893, 19615 and 21974, all three learners selected \textit{PPP1R3C}, \textit{CD9}, and \textit{CD69}, respectively. Tree boosting selected a single gene in studies ID 1379, 9893, and 21974 because the optimal number of boosting iterations determined by 3-fold CV was one. In general, CV-tuning leads to earlier stopping iterations than $AIC_c$-tuning, as CV approximates the test error on a smaller sample. \section{Discussion} In this paper, we studied boosting algorithms in a regression setting and compared merging and multi-study ensembling for improving the cross-study replicability of predictions. We assumed a flexible mixed effects model with potential heterogeneity in predictor-outcome relationships across studies and provided theoretical guidelines for determining whether it is more beneficial to merge or to ensemble. In particular, we extended the transition point theory from \cite{guan2019merging} to boosting with linear learners. For boosting with component-wise linear learners, we characterized a bias-variance decomposition of the estimation error conditional on the selection path. Boosting under $\ell_2$ loss is computationally simple and analytically attractive. In general, the performance of the algorithm is inextricably linked with the choice of learning rate $\eta$ and stopping iteration $M.$ Common tuning procedures include $AIC_c$ tuning, cross-validation, and restricting the total step size (\cite{zhang2005boosting}). When both $\eta$ and $M$ are set to one, the transition point results on boosting coincide with those on ordinary least squares and ridge regression from \cite{guan2019merging}. A smaller $\eta$ corresponds to increased shrinkage of the effect estimates and decreased complexity of the boosting fit. For fixed $M$, decreasing $\eta$ results in a smaller transition point $\tau$, suggesting that multi-study ensembling would be preferred over merging at a lower threshold of heterogeneity. This can be attributed to the fact that, for a fixed $M$, merging would require a larger $\eta$ due to the increase in sample size. Because of the interplay between $\eta$ and $M$, for a fixed $\eta$, decreasing $M$ also leads to a smaller $\tau.$ \cite{buehlmann2006boosting} noted that a smaller $\eta$ results in a weaker learner with reduced variance, which was empirically shown to be more successful than a strong learner. We focused on $\ell_2$ boosting with linear learners for the opportunity to pursue closed-form solutions. With an appropriate choice of basis functions, these learners can in theory approximate any sufficiently smooth function to any level of precision (\cite{stone1948generalized}).
In our simulations, the empirical transition points of boosting with ridge learners and boosting with regression trees were similar, suggesting that in certain scenarios it may be reasonable to use the transition point theory in Theorems \ref{thm:1} and \ref{thm:2} as a proxy when comparing merging and ensembling for boosted trees. It is important to note, however, that such an approximation may not be warranted in settings where the choice of hyperparameters differs from that of our simulations. Although this paper focuses on boosting algorithms, we acknowledge important connections with other machine learning methods. A close relative of boosting with component-wise linear learners is the incremental forward stagewise algorithm (FS), which selects the covariate most correlated (in absolute value) with the residuals $r_{(m)}$ (\cite{efron2004least}). Because the covariates are standardized, both algorithms lead to the same variable selection for a given $r_{(m)}$. A potential limitation of Theorems \ref{thm:1} and \ref{thm:2} is that the tuning parameters (e.g., $\eta$ and $M$) are treated as fixed. These quantities are typically chosen by tuning procedures that introduce additional variability. Although we assumed the same $\eta$ for merging and ensembling in the simulations, the transition point $\tau$ can be estimated with different values of $\eta$, which may be more realistic in practice. For the ensembling approach, we assigned equal weight to each study, which is equivalent to averaging the predictions. The equal-weighting strategy is a special case of stacking (\cite{breiman1996stacked, ren2020cross}) and is preferred in settings where studies have similar sample sizes. Many areas of biomedical research face a replication crisis in which scientific studies are difficult or impossible to replicate (\cite{ioannidis2005most}). An equally important but less commonly examined issue is the replicability of prediction models. To improve the cross-study replicability of predictions, our work provides a theoretical rationale for choosing multi-study ensembling over merging when between-study heterogeneity exceeds a well-defined threshold. As many areas of science become data-rich, it is critical to simultaneously consider and systematically integrate multiple studies to improve the cross-study replicability of predictions. \clearpage \begin{section}{Tables and Figures} \begin{figure}[b] \includegraphics[scale=0.37]{fig1.png} \caption{Log relative mean squared prediction error (MSPE) of multi-study ensembling vs. merging for boosting with different base learners under the equal variance assumption. The red vertical dashed line indicates the transition point $\tau$. The solid circles represent the average performance ratios comparing multi-study ensembling to merging, and the vertical bars the 95\% bootstrapped intervals.} \label{fig:1} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.37]{fig2.png} \caption{Log relative mean squared prediction error (MSPE) of multi-study ensembling vs. merging for boosting with different base learners under the unequal variance assumption. The red vertical dashed lines indicate the transition interval $[\tau_1, \tau_2]$. The solid circles represent the average performance ratios comparing multi-study ensembling to merging, and the vertical bars the 95\% bootstrapped intervals.} \label{fig:2} \end{figure} \clearpage \begin{figure}[h!]
\centering \includegraphics[scale=0.35]{fig3.png} \caption{Mean squared error associated with merging and ensembling at different levels of $\overline{\sigma}^2$. Blue and red lines correspond to the merged and ensemble estimators at $\overline{\sigma}^2 = 0.01$, respectively. Purple and green lines correspond to the merged and ensemble estimators at $\overline{\sigma}^2 = 0.05$, respectively.} \label{fig:3} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.37]{fig4.png} \caption{Log relative mean squared prediction error (MSPE) of multi-study ensembling vs. merging for boosting with different base learners in the breast cancer application. Ridge = ridge regression; CW-LS = component-wise least squares; CW-CS = component-wise cubic smoothing splines; tree = regression tree.} \label{fig:4} \end{figure} \begin{table}[ht] \resizebox{\columnwidth}{!}{ \begin{tabular}{clllllll} \hline Learner & ID 1379 & ID 2034 & ID 9893 & ID 19615 & ID 21974 & Merged \\ \hline CW-LS & S100P (0.135) & MMP11 (0.0455) & PPP1R3C (0.0421) & CENPN (0.111) & CD69 (0.193) & S100P (0.0215) \\ & AEBP1 (0.129) & CENPA (0.0241) & IGF1 (0.0208) & CD9 (0.0767) & MMP11 (0.108) & ASPN (0.0184) \\ &CENPA (0.0652) & CAMP (0.0204) & SYT1 (0.0183) & ASPN (0.0733) & ESR1 (0.0358) & SYT1 (0.0133) \\ \hline CW-CS & AEBP1 (0.133) & TNFSF4 (0.0477) & PPP1R3C (0.0463) & CENPN (0.103) & MMP11 (0.183) & S100P (0.021) \\ & C10orf116 (0.115) & S100A9 (0.0405) & GRP (0.0342) & CD9 (0.0865) & CD69 (0.182) & MMP11 (0.0195) \\ & S100P (0.100) & CLU (0.0321) & POSTN (0.0256) & COL1A1 (0.0848) & S100P (0.0889) & E2F8 (0.0185) \\ \hline Tree & S100P (0.111) & S100A9 (0.0699) & PPP1R3C (0.0438) & COL1A1 (0.131) & CD69 (0.147) & MMP11 (0.0286) \\ & N/A & MMP11 (0.0588) & N/A & CD9 (0.108) & N/A & S100P (0.0266) \\ & N/A & N/A & N/A & ADRA2A (0.0732) & N/A & E2F8 (0.0249) \\ \hline \end{tabular} } \caption{Selected genes ordered by decreasing variable importance across different training studies. Each entry consists of the gene name followed, in parentheses, by the reduction in training error attributed to selecting that gene. An entry is N/A if fewer than three genes were selected. CW-LS = component-wise least squares and CW-CS = component-wise cubic smoothing splines.} \label{tab:1} \end{table} \end{section} \clearpage \begin{section}{Appendix} \begin{proof}[Proof of Proposition \ref{prop:1}] We show $r_{(m)} = \prod_{\ell=0}^{m-1}\left(I - \eta H_{(m-\ell-1)}\right) Y$ by induction. We set $\eta = 1$ to ease notation; the general case is identical. At iteration 1, the residual vector is \begin{align*} r_{(1)} &= Y - \hat{Y}_{(0)}\\ &= \left( I - H_{(0)}\right) Y, \end{align*} where we adopt the convention $H_{(0)} \coloneqq 0$, so that $r_{(1)} = Y$. At iteration $m-1$, we assume the induction hypothesis \begin{align} r_{(m-1)} &= \prod_{\ell=0}^{m-2}\left( I - H_{(m-\ell-2)}\right) Y. \label{eqstar} \end{align} At iteration $m,$ the residual vector is \begin{align*} r_{(m)} &= Y - \hat{Y}_{(m-1)}\\ &= Y - \left(\hat{Y}_{(m-2)} + H_{(m-1)} r_{(m-1)}\right)\\ &= r_{(m-1)} - H_{(m-1)} r_{(m-1)}\\ &= \left( I - H_{(m-1)}\right) r_{(m-1)}\\ &\overset{(\ref{eqstar})}{=} \left( I - H_{(m-1)}\right)\left( I - H_{(m-2)}\right)\cdots \left( I - H_{(1)}\right)\left( I - H_{(0)}\right) Y\\ &= \prod_{\ell=0}^{m-1}\left( I - H_{(m-\ell-1)}\right) Y. \end{align*} It follows that $\left(\tilde{X}_{\hat{j}_{(m)}}^T \tilde{X}_{\hat{j}_{(m)}} \right)^{-1}\tilde{X}^T_{\hat{j}_{(m)}}r_{(m)} \in \mathbb{R}$ is the coefficient estimate of $\tilde{X}_{\hat{j}_{(m)}}$.
Multiplying the coefficient estimate by $e_{\hat{j}_{(m)}} \in \mathbb{R}^P$ results in a $P$-dimensional vector with $\left(\tilde{X}_{\hat{j}_{(m)}}^T \tilde{X}_{\hat{j}_{(m)}} \right)^{-1}\tilde{X}^T_{\hat{j}_{(m)}}r_{(m)}$ in the $\hat{j}_{(m)}$-th position and 0 everywhere else. The final coefficient estimates are given by the sum across the iteration-specific vectors $e_{\hat{j}_{(m)}}\left(\tilde{X}_{\hat{j}_{(m)}}^T \tilde{X}_{\hat{j}_{(m)}}\right)^{-1}\tilde{X}^T_{\hat{j}_{(m)}}r_{(m)}$ for $m = 1, \ldots, M.$ \end{proof} \clearpage \begin{proof}[Proof of Lemma \ref{lemma:lee}] We decompose $Y$ into $$Y = c_j(v_j^TY) + z_j$$ and rewrite the polyhedron as \begin{align*} \{\Gamma Y \geq 0\} &= \left\{ \Gamma \left( c_j v_j^T Y + z_j\right) \geq 0\right\}\\ &= \left\{ \Gamma c_j (v_j^T Y) \geq 0 - \Gamma z_j\right\}\\ & = \left\{\left( \Gamma c_j\right)_{\ell} \left( v_j^T Y\right) \geq 0- ( \Gamma z_j)_{\ell} \quad \text{for all }\ell = 1, \ldots, 2M(P-1) \right\}\\ &= \begin{Bmatrix} v_j^T Y \geq \frac{ 0 - ( \Gamma z_j)_{\ell}}{(\Gamma c_j)_{\ell}}, & \text{for } \ell:( \Gamma c_j)_{\ell} > 0 \\ v_j^T Y \leq \frac{0 - ( \Gamma z_j)_{ \ell}}{( \Gamma c_j)_{\ell}}, & \text{for } \ell:( \Gamma c_j)_{ \ell} < 0 \\ 0 \geq 0 - ( \Gamma z_j)_{ \ell}, & \text{for } \ell:( \Gamma c_j)_{\ell} = 0 \end{Bmatrix}\\ &= \begin{Bmatrix} v_j^T Y \geq \max\limits_{ \ell:( \Gamma c_j)_{ \ell} > 0} \frac{ 0 - ( \Gamma z_j)_{ \ell}}{( \Gamma c_j)_{\ell}}\\ v_j^T Y \leq \min\limits_{ \ell:( \Gamma c_j)_{ \ell} < 0}\frac{0 - ( \Gamma z_j)_{ \ell}}{( \Gamma c_j)_{ \ell}}\\ 0 \geq \max\limits_{\ell:( \Gamma c_j)_{ \ell} = 0} 0 - ( \Gamma z_j)_{ \ell} \end{Bmatrix} \end{align*} where in the last step, we have divided the components into three categories depending on whether $(\Gamma c_j)_{\ell} \lesseqgtr 0$, since this affects the direction of the inequality (or whether we can divide at all). Since $v_j^TY$ is the same quantity for all $\ell$, it must be at least the maximum of the lower bounds, which is $a_j$, and no more than the minimum of the upper bounds, which is $b_j.$ Since $a_j, b_j,$ and $c_j$ are independent of $v_j^TY,$ $v_j^TY$ is conditionally a normal random variable, truncated to lie between $a_j$ and $b_j.$ By conditioning on the value of $z_j,$ $$v_j^TY|\{\Gamma Y \geq 0, z_j = z\}$$ is a truncated normal.
\end{proof} \clearpage \begin{proof}[Proof of Theorems \ref{thm:1} and \ref{thm:2}] \begin{flalign*} Bias\left(\tilde{X}_0\hat{ \beta}_{(M)}^{\text{Merge}}\right) &= E\left(\tilde{X}_0\sum_{m=1}^M \eta B \left( I - \eta H\right)^{m-1} Y\right) - f(\tilde{X}_0)\\ &= \tilde{X}_0\tilde{R}f(\tilde{X}) - f(\tilde{X}_0)\\ Bias\left(\tilde{X}_0\hat{ \beta}_{(M)}^{\text{Ens}}\right) &= E\left(\tilde{X}_0\sum_{k=1}^K w_k \left[\sum_{m=1}^M \eta B_{k} \left( I - \eta H_{k}\right)^{m-1} Y_k \right]\right) - f(\tilde{X}_0)&\\ &= \sum_{k=1}^K w_k \tilde{X}_0 \tilde{R}_k f(X_k) - f(\tilde{X}_0)\\ Cov\left(\tilde{X}_0\hat{ \beta}^{\text{Merge}}_{(M)}\right) &= Cov\left(\tilde{X}_0\sum_{m=1}^M \eta B \left( I - \eta H\right)^{m-1} Y\right)\\ &= \tilde{X}_0\tilde{R} Cov( Y) \tilde{R}^T \tilde{X}^T_0\\ &= \tilde{X}_0\tilde{R} \text{blkdiag}\left(\left\{Cov\left( Y_k\right)\right\}_{k=1}^K\right) \tilde{R}^T\tilde{X}^T_0\\ &= \tilde{X}_0\tilde{R} \text{blkdiag}\left(\left\{ Z_k G Z_k^T + \sigma^2_{\epsilon} I\right\}_{k=1}^K\right) \tilde{R}^T \tilde{X}^T_0\\ Cov\left(\tilde{X}_0\hat{ \beta}^{\text{Ens}}_{(M)}\right) &= Cov\left(\tilde{X}_0\sum_{k=1}^K w_k \left[\sum_{m=1}^M \eta B_{k} \left( I - \eta H_{k}\right)^{m-1} Y_k \right]\right)\\ &= Cov\left(\tilde{X}_0\sum_{k=1}^K w_k \tilde{R}_k Y_k\right)&\\ &= \sum_{k=1}^K w_k^2 \tilde{X}_0\tilde{R}_k\left( Z_k G Z_k^T + \sigma^2_{\epsilon} I\right) \tilde{R}_k^T\tilde{X}^T_0\\ &= \sum_{k=1}^K w_k^2 \tilde{X}_0 \tilde{R}_k Z_k G Z_k^T \tilde{R}_k^T\tilde{X}^T_0 + \sigma^2_{\epsilon}\sum_{k=1}^Kw_k^2 \tilde{X}_0\tilde{R}_k \tilde{R}_k^T\tilde{X}^T_0\\ \end{flalign*} Let $b^{\text{Merge}} = Bias\left(\tilde{X}_0 \hat{ \beta}_{(M)}^{\text{Merge}}\right)$. The MSPE of $\hat{\beta}^{\text{Merge}}_{(M)}$ is {\scriptsize \begin{align*} E\left[\norm{Y_0 - \tilde{X}_0\hat{ \beta}^{\text{Merge}}_{(M)}}^2_2\right] &= \text{tr}\left(Cov\left(\tilde{X}_0\hat{ \beta}_{(M)}^{\text{Merge}}\right)\right) + \left( b^{\text{Merge}}\right)^T b^{\text{Merge}} + E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \text{tr}\left(\tilde{X}_0\tilde{R} \text{blkdiag}\left(\left\{Cov( Y_k)\right\}_{k=1}^K\right) \tilde{R}^T\tilde{X}^T_0\right) + \left( b^{\text{Merge}}\right)^T b^{\text{Merge}} + E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \text{tr}\left( \text{blkdiag}\left(\left\{ Z_k G Z_k^T + \sigma^2_{\epsilon} I\right\}_{k=1}^K\right) \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right) + \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \text{tr}\left(\text{blkdiag}\left(\{ Z_k G Z_k^T\}_{k=1}^K\right)\tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right) + \sigma^2_{\epsilon}\text{tr}\left(\tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right)+ \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \text{tr}\left( Z' G' Z'^T \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right) + \sigma^2_{\epsilon}\text{tr}\left( \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right)+ \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \text{tr}\left( G' Z'^T \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R} Z'\right) + \sigma^2_{\epsilon}\text{tr}\left( \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right)+ \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \sum_{d=1}^D \sigma_{(d)}^2 \left\{\sum_{i: \sigma^2_{i} = \sigma^2_{(d)}} \left[\sum_{k=1}^K \left( Z'^T \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 R 
Z'\right)_{i + Q \times (k - 1), i + Q \times (k - 1)}\right] \right\} + \sigma^2_{\epsilon}\text{tr}\left( \tilde{R}^T\tilde{X}^T_0\tilde{X}_0 \tilde{R}\right) \\ &+ \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right] \end{align*} } Let $ b^{\text{Ens}} = Bias\left(\tilde{X}_0 \hat{\beta}_{(M_{\text{Ens}})}^{\text{Ens}}\right)$. The MSPE of $\hat{ \beta}^{Ens}_{(M_{\text{Ens}})}$ is {\scriptsize \begin{align*} E\left[\norm{ Y_0 - \tilde{X}_0\hat{ \beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2\right] &= \text{tr}\left(Cov\left(\tilde{X}_0\hat{ \beta}_{(M_{\text{Ens}})}^{\text{Ens}}\right)\right) + \left( b^{\text{Ens}}\right)^T b^{\text{Ens}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \text{tr}\left(\tilde{X}_0Cov\left(\sum_{k=1}^K w_k \tilde{R}_k Y_k\right)\tilde{X}_0^T\right) + \left( b^{\text{Ens}}\right)^T b^{\text{Ens}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \sum_{k=1}^K w_k^2 \text{tr}\left( Z_k G Z_k^T \tilde{R}_k^T\tilde{X}_0^T\tilde{X}_0 \tilde{R}_k\right) +\sigma^2_{\epsilon} \sum_{k=1}^Kw_k^2 \text{\text{tr}}\left( \tilde{R}_k^T \tilde{X}_0^T\tilde{X}_0 \tilde{R}_k\right) + \left( b^{\text{Ens}}\right)^T b^{\text{Ens}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &= \sum_{k=1}^K w_k^2 \text{tr}\left( G Z_k^T \tilde{R}_k^T \tilde{X}_0^T\tilde{X}_0\tilde{R}_k Z_k\right) +\sigma^2_{\epsilon} \sum_{k=1}^Kw_k^2 \text{\text{tr}}\left( \tilde{R}_k^T \tilde{X}_0^T\tilde{X}_0 \tilde{R}_k\right) + \left( b^{\text{Ens}}\right)^T b^{\text{Ens}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right]\\ &=\sum_{d=1}^D \sigma^2_{(d)} \left\{ \sum_{i:\sigma^2_i = \sigma^2_{(d)}}\left[\sum_{k=1}^K w_k^2 \left( Z_k^T \tilde{R}_k^T \tilde{X}_0^T\tilde{X}_0 \tilde{R}_k Z_k\right)_{i,i}\right]\right\}\\ &+\sigma^2_{\epsilon} \sum_{k=1}^Kw_k^2 \text{\text{tr}}\left( \tilde{R}_k^T \tilde{X}_0^T\tilde{X}_0\tilde{R}_k\right) + \left( b^{\text{Ens}}\right)^T b^{\text{Ens}}+ E\left[\norm{Y_0 - f(\tilde{X}_0)}^2_2\right] \end{align*} } If $\sigma^2_1 =\sigma^2_2 = \ldots = \sigma^2_J$ (Theorem 1), then {\scriptsize \begin{align*} \overline{\sigma}^2 & \geq \frac{Q}{P} \times \frac{\sigma^2_{\epsilon}\left(\sum_{k=1}^Kw_k^2 \text{\text{tr}}\left( \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k\right)-\text{tr}\left( \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}\right)\right)+ \left( b^{\text{Ens}}\right)^T b^{\text{Ens}} - \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}}{\text{tr}\left( Z'^T \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R} Z'\right) - \sum_{k=1}^K w_k^2 \text{tr}\left( Z_k^T \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k Z_k\right)}\\ &\Rightarrow \sigma^2 \left(\text{tr}\left( Z'^T \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R} Z'\right) - \sum_{k=1}^K w_k^2 \text{tr}\left( Z_k \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k Z_k\right)\right) \\ & \geq \sigma^2_{\epsilon}\left(\sum_{k=1}^Kw_k^2 \text{\text{tr}}\left( \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k\right)-\text{tr}\left(\tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}\right)\right) + \left( b^{\text{Ens}}\right)^T b^{\text{Ens}} - \left( b^{\text{Merge}}\right)^T b^{\text{Merge}} \\ &\Leftrightarrow E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Merge}}_{(M)}}^2_2\right] \geq E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2\right]. 
\end{align*} } If $\sigma^2_j \neq\sigma^2_{j'}$ for at least one $j \neq j'$ (Theorem 2), then let $$a_d = \sum_{i: \sigma^2_i = \sigma^2_{(d)}} \left[ \sum_{k=1}^K \left(Z'^T\tilde{R}^T\tilde{X}_0^T\tilde{X}_0 \tilde{R}Z'\right)_{i + Q \times (k - 1), i + Q \times (k - 1)} - w_k^2\left(Z_k^T\tilde{R}_k^T\tilde{X}^T_0\tilde{X}_0\tilde{R}_k Z_k\right)_{i,i}\right]$$ and $$c = \sigma^2_{\epsilon}\left(\sum_{k=1}^Kw_k^2 \text{tr}\left( \tilde{R}_k^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}_k\right)-\text{tr}\left( \tilde{R}^T \tilde{X}_0^T \tilde{X}_0 \tilde{R}\right)\right)+\left( b^{\text{Ens}}\right)^T b^{\text{Ens}} - \left( b^{\text{Merge}}\right)^T b^{\text{Merge}}.$$ Since $$E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Merge}}_{(M)}}^2_2\right] \geq E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2\right] \Longleftrightarrow \sum_{d=1}^D \sigma^2_{(d)} a_d \geq c$$ and $$\left(\min_d \frac{a_d}{J_d}\right) \sum_{d=1}^D \sigma^2_{(d)} J_d \leq \sum_{d=1}^D \sigma^2_{(d)} a_d \leq \left(\max_d \frac{a_d}{J_d}\right) \sum_{d=1}^D \sigma^2_{(d)} J_d,$$ assuming $a_d > 0$ for all $d$, then \begin{align*} \overline{\sigma}^2 &= \frac{\sum_{d=1}^D \sigma^2_{(d)} J_d}{P} \leq \frac{c}{P \max_{d} \frac{a_d}{J_d}} = \tau_1 \\ &\Rightarrow \sum_{d=1}^D \sigma^2_{(d)} a_d \leq \max_{d} \frac{a_d}{J_d} \sum_{d=1}^D \sigma^2_{(d)} J_d \leq c\\ &\Longleftrightarrow E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Merge}}_{(M)}}^2_2\right] \leq E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2\right]. \end{align*} and \begin{align*} \overline{\sigma}^2 &= \frac{\sum_{d=1}^D \sigma^2_{(d)} J_d}{P} \geq \frac{c}{P \min_{d} \frac{a_d}{J_d}} = \tau_2 \\ &\Rightarrow \sum_{d=1}^D \sigma^2_{(d)} a_d \geq \min_{d} \frac{a_d}{J_d} \sum_{d=1}^D \sigma^2_{(d)} J_d \geq c\\ &\Longleftrightarrow E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Merge}}_{(M)}}^2_2\right] \geq E\left[\norm{ Y_0 - \tilde{X}_0 \hat{ \beta}^{\text{Ens}}_{(M_{\text{Ens}})}}^2_2\right]. \end{align*} \end{proof} \clearpage \begin{proof}[Proof of Proposition \ref{prop:2}] \begin{align*} Var\left(\left.\hat{\beta}^{\text{Merge, CW}}_{(M)j}\right|\mathcal{P}\right) &= \vartheta^2_j \left(1 - \frac{\xi_j\phi(\xi_j) - \alpha_j\phi(\alpha_j)}{\Phi(\xi_j) - \Phi(\alpha_j)} - \left(\frac{\phi(\xi_j) - \phi(\alpha_j)}{\Phi(\xi_j) - \Phi(\alpha_j)}\right)^2\right)\\ Bias^2\left(\left.\hat{\beta}^{\text{Merge, CW}}_{(M)j}\right|\mathcal{P}\right) &= \left( \bar{\mu}_j - \vartheta_j\left(\frac{\phi(\xi_j)-\phi(\alpha_j)}{\Phi(\xi_j) - \Phi(\alpha_j)}\right) - \beta_j\right)^2\\ Var\left(\left.\hat{\beta}^{\text{Ens, CW}}_{(M)j}\right|\mathcal{P}^{\text{Ens}}\right) &= Var\left(\left.\sum_{k=1}^K w_k \hat{\beta}^{\text{CW}}_{(M_k)jk}\right|\mathcal{P}^{\text{Ens}}\right)\\ &= \sum_{k=1}^K w_k^2Var\left(\left.
\hat{\beta}^{\text{CW}}_{(M_k)jk}\right|\mathcal{P}_k\right) \\ &= \sum_{k=1}^K w_k^2 \vartheta^2_{jk} \left(1 - \frac{\xi_{jk}\phi(\xi_{jk}) - \alpha_{jk}\phi(\alpha_{jk})}{\Phi(\xi_{jk}) - \Phi(\alpha_{jk})} - \left(\frac{\phi(\xi_{jk}) - \phi(\alpha_{jk})}{\Phi(\xi_{jk}) - \Phi(\alpha_{jk})}\right)^2\right)\\ Bias^2\left(\left.\hat{\beta}^{\text{Ens, CW}}_{(M)j}\right|\mathcal{P}^{\text{Ens}}\right) &= \left(\sum_{k=1}^K w_k E\left(\left.\hat{\beta}^{\text{CW}}_{(M_k)jk}\right|\mathcal{P}^\text{Ens}\right) - \beta_j\right)^2\\ &= \left(\sum_{k=1}^K w_k E\left(\left.\hat{\beta}^{\text{CW}}_{(M_k)jk}\right|\mathcal{P}_k\right) - \beta_j\right)^2 \\ &= \left(\sum_{k=1}^K w_k \left(\bar{\mu}_{jk} - \vartheta_{jk}\left(\frac{\phi(\xi_{jk})-\phi(\alpha_{jk})}{\Phi(\xi_{jk}) - \Phi(\alpha_{jk})}\right)\right) - \beta_j\right)^2 \end{align*} \end{proof} \newpage \begin{claim}[Truncation region for component-wise boosting coefficients] \label{claim:1} Let $Y \in \mathbb{R}^N$ denote the outcome vector, where $Y \sim N(\mu, \Sigma)$. The boosting coefficients can be written as \begin{align*} \hat{\beta}^{\text{CW, Merge}}_{(M)} &= V^TY\\ &\coloneqq \sum_{m=1}^M \eta B_{(m)}\left(\prod_{\ell = 0}^{m-1} (I - \eta H_{(m - \ell - 1)})\right)Y, \end{align*} where $V \in \mathbb{R}^{N \times P}$ depends on $Y$ through variable selection. We decompose $ Y$ into $$ Y = C( V^T Y) + Z^*,$$ where $$ C = \Sigma V\left( V^T \Sigma V\right)^{-1}$$ is an $N \times P$ matrix and $$ Z^* = \left( I - \Sigma V\left( V^T \Sigma V\right)^{-1} V^T\right) Y$$ is an $N$-dimensional vector. We claim the polyhedral set $\{ \Gamma Y \geq 0\}$ can be re-written as a truncation region where the coefficients $\hat{\beta}^{\text{CW, Merge}}_{(M)}$ have non-rectangular truncation limits. \end{claim} \begin{proof} We define the projection $\Pi_k(S)$ of a set $S \subset \mathbb{R}^n$ by letting $$\Pi_k(S) = \left\{(x_1, \ldots, x_k)\,|\, \exists x_{k+1}, \ldots, x_n \text{ s.t. } (x_1, \ldots, x_n) \in S\right\}.$$ Given a polyhedron $\mathcal{P}$ in terms of linear inequality constraints of the form $$ A x \geq b,$$ we state the Fourier--Motzkin elimination algorithm from \cite{bertsimas1997introduction}. \begin{algorithm} \caption{Elimination algorithm for a system of linear inequalities} \begin{algorithmic}[1] \State Rewrite each constraint $\sum_{j=1}^N a_{ij} x_j \geq b_i$ in the form $$a_{iN} x_N \geq -\sum_{j=1}^{N-1} a_{ij}x_j + b_i, \quad i = 1, \ldots, m,$$ and, if $a_{iN} \neq 0$, divide both sides by $a_{iN}.$ By letting $\bar{x} = (x_1, \ldots, x_{N-1}),$ we obtain an equivalent representation of $\mathcal{P}$ involving the following constraints \begin{align*} x_N \geq d_i + f'_i \bar{x}, \qquad &\text{if } a_{iN} > 0\\ d_j + f'_j\bar{x} \geq x_N, \qquad &\text{if } a_{jN} < 0\\ 0 \geq d_k + f'_k \bar{x}, \qquad &\text{if } a_{kN} = 0 \end{align*} Each $d_i, d_j, d_k$ is a scalar, and each $f_i, f_j, f_k$ is a vector in $\mathbb{R}^{N-1}$.
\State Let $\mathcal{Q}$ be the polyhedron in $\mathbb{R}^{N-1}$ defined by the constraints \begin{align*} d_j + f'_j\bar{x} \geq d_i + f'_i \bar{x} \qquad &\text{if } a_{iN} > 0 \text{ and } a_{jN} < 0\\ 0 \geq d_k +f'_k\bar{x}, \qquad &\text{if } a_{kN} = 0 \end{align*} \end{algorithmic} \end{algorithm} We note the following: \begin{enumerate} \item The projection $\Pi_k(\mathcal{P})$ can be generated by repeated application of the elimination algorithm (Theorem 2.10 in \cite{bertsimas1997introduction}) \item The elimination approach always produces a polyhedron (definition of the elimination algorithm in \cite{bertsimas1997introduction}). \end{enumerate} Therefore, it follows that a projection $\Pi_k(\mathcal{P})$ of a polyhedron is also a polyhedron. The polyhedral set $\mathcal{P} \coloneqq \{ Y: \Gamma Y \geq 0\}$ is a system of $\ell_P \coloneqq 2M(P-1)$ linear inequalities, with $P$ variables $ V^T Y_1, \ldots, V^T Y_P.$ Let $( A)_{ij}$ denote the $i,j$-th entry in matrix $ A$. We let $I_P = \{1, 2, \ldots, \ell_P\}$ denote the row index set for the system of inequalities with $P$ variables and partition it into subsets $I_P^+, I_P^-,$ and $I_P^0$, where $I_P^+ = \{i: ( \Gamma C)_{ip} > 0\}, I_P^- = \{i: ( \Gamma C)_{ip} < 0\},$ and $I_P^0 = \{i: ( \Gamma C)_{ip} = 0\}$. Then we have \begin{small} \begin{align*} \{ \Gamma Y \geq 0\} &= \left\{ \Gamma \left( C V^T Y + Z^*\right) \geq 0\right\}\\ &= \left\{ \underbrace{ \Gamma C}_{\ell_P \times P} \underbrace{ V^T Y}_{P \times 1} \geq \underbrace{ 0 - \Gamma Z^*}_{\ell_P \times 1}\right\}\\ &= \left\{\sum_{j=1}^P ( \Gamma C)_{ij}( V^T Y)_j \geq 0 - ( \Gamma Z^*)_i \quad i = 1, \ldots, \ell_P\right\}\\ &= \left\{( \Gamma C)_{ip}( V^T Y)_{p} \geq -\sum_{j=1}^{P-1} ( \Gamma C)_{ij}( V^T Y)_j - ( \Gamma Z^*)_i \quad i = 1,\ldots, \ell_P\right\}\\ &= \begin{Bmatrix} ( V^T Y)_P \geq \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{qj}( V^T Y)_j - ( \Gamma Z^*)_q}{( \Gamma C)_{qp}}, & \text{for } q \in I_P^+\\ ( V^T Y)_P \leq \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{rj}( V^T Y)_j - ( \Gamma Z^*)_r}{( \Gamma C)_{rp}}, & \text{for } r \in I_P^-\\ 0 \geq -\sum_{j=1}^{P-1}( \Gamma C)_{sj}( V^T Y)_j - ( \Gamma Z^*)_s & \text{for } s \in I_P^0\\ \end{Bmatrix}\\ &= \begin{Bmatrix}\max_{q \in I^+} \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{qj}( V^T Y)_j - ( \Gamma Z^*)_q}{( \Gamma C)_{qp}} \leq ( V^T Y)_{p} \leq \min_{r \in I^-} \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{rj}( V^T Y)_j - ( \Gamma Z^*)_r}{( \Gamma C)_{rp}}\\ 0 \geq -\sum_{j=1}^{P-1}( \Gamma C)_{sj}( V^T Y)_j - ( \Gamma Z^*)_s & \text{for } s \in I_P^0 \end{Bmatrix} \end{align*} \end{small} We reduce this to a system of inequalities with $P-1$ variables after eliminating $( V^T Y)_P$: \begin{equation}\label{eqn:1} \begin{Bmatrix} \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{qj}( V^T Y)_j - ( \Gamma Z^*)_q}{( \Gamma C)_{qp}} \leq \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{rj}( V^T Y)_j - ( \Gamma Z^*)_r}{( \Gamma C)_{rp}} \text{ for } q \in I_P^+, r \in I_P^-\\ 0 \geq -\sum_{j=1}^{P-1}( \Gamma C)_{sj}( V^T Y)_j - ( \Gamma Z^*)_s \text{ for } s \in I_P^0 \end{Bmatrix} \end{equation} The set in (\ref{eqn:1}) is a system of $\ell_{P-1} \coloneqq |I_P^+| \times |I_P^-| + |I_P^0|$ inequalities. 
It is a polyhedral set in $\mathbb{R}^{P-1}$, which can be seen by rewriting (\ref{eqn:1}) as follows: \newpage \begin{scriptsize} \begin{align*} & \begin{Bmatrix} \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{qj}( V^T Y)_j - ( \Gamma Z^*)_q}{( \Gamma C)_{qp}} \leq \frac{-\sum_{j=1}^{P-1} ( \Gamma C)_{rj}( V^T Y)_j - ( \Gamma Z^*)_r}{( \Gamma C)_{rp}} \text{ for } q \in I^+, r \in I^-\\ 0 \geq -\sum_{j=1}^{P-1}( \Gamma C)_{sj}( V^T Y)_j - ( \Gamma Z^*)_s \text{ for } s \in I^0 \end{Bmatrix} \\ =&\begin{Bmatrix} -\sum_{j=1}^{P-1} ( \Gamma C)_{rp}( \Gamma C)_{qj}( V^T Y)_j -( \Gamma C)_{rp} ( \Gamma Z^*)_q \geq -\sum_{j=1}^{P-1} ( \Gamma C)_{qp}( \Gamma C)_{rj}( V^T Y)_j - ( \Gamma C)_{qp}( \Gamma Z^*)_r \text{ for } q \in I^+, r \in I^-\\ \sum_{j=1}^{P-1}( \Gamma C)_{sj}( V^T Y)_j \geq - ( \Gamma Z^*)_s \text{ for } s \in I^0 \end{Bmatrix} \\ =&\begin{Bmatrix} \sum_{j=1}^{P-1} \left(( \Gamma C)_{qp}( \Gamma C)_{rj}- ( \Gamma C)_{rp}( \Gamma C)_{qj}\right)( V^T Y)_j \geq ( \Gamma C)_{rp} ( \Gamma Z^*)_q - ( \Gamma C)_{qp}( \Gamma Z^*)_r \text{ for } q \in I^+, r \in I^-\\ \sum_{j=1}^{P-1}( \Gamma C)_{sj}( V^T Y)_j \geq - ( \Gamma Z^*)_s \text{ for } s \in I^0 \end{Bmatrix}. \end{align*} \end{scriptsize} Let $ A_{p-k}$ denote a $\ell_{p-k} \times (p-k)$ matrix, $( V^T Y)_{1:p-k}$ a vector that contains the first $p-k$ coordinates of $( V^T Y)$, and $ b_{p-k}( Z^*)$ a $\ell_{p-k}$-dimensional vector, where $k \in \{0, \ldots, P-1\}$, and $\ell_{p-k}$ is the number of linear constraints in $\Pi_{p-k}(\mathcal{P})$, which is the projection of $\mathcal{P}$. Note that $ A_P= \Gamma C$ and $ b_P( Z^*) = 0 - \Gamma Z^*.$ We repeat the elimination process $P-1$ times to obtain $\Pi_1(\mathcal{P}):$ \begin{align*} \left\{ \Gamma Y \geq 0\right\} &= \{ A_{p}( V^T Y) \geq b_{p}( Z^*)\}\\ \Pi_{P-1}(\mathcal{P}) &= \{ A_{P-1}( V^T Y)_{1:P-1} \geq b_{P-1}( Z^*)\}\\ &\vdots\\ \Pi_{1}(\mathcal{P}) &= \{ A_{1}( V^T Y)_{1} \geq b_{1}( Z^*)\}. \end{align*} \underline{Induction base case for $\Pi_2(\mathcal{P})$:} Without loss of generality, we assume the variable in $\Pi_1(\mathcal{P})$ is $( V^T Y)_1$. We can obtain its lower and upper truncation limits, $\mathcal{V}_1^{\text{lo}}( Z^*)$ and $\mathcal{V}_1^{\text{up}}( Z^*)$, and $\mathcal{V}_1^{0}( Z^*)$ using the same argument as the one in \cite{lee2016exact}, where \begin{align*} \mathcal{V}_1^{\text{lo}}( Z^*) &= \max_{i:( A_1)_i >0} \frac{( b_1( Z^*))_i}{( A_1)_i}\\ \mathcal{V}_1^{\text{up}}( Z^*) &= \min_{i:( A_1)_i < 0} \frac{( b_1( Z^*))_i}{( A_1)_i}\\ \mathcal{V}_1^{0}( Z^*) &= \max_{i:( A_1)_i = 0} ( b_1( Z^*))_i. \end{align*} We conclude that $\Pi_1(\mathcal{P}) = \{(\mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*), \mathcal{V}_1^{0}( Z^*) \leq 0\}.$ By the definition of $\Pi_2(\mathcal{P})$, we have \begin{align*} \Pi_2(\mathcal{P}) &=\left\{ A_2( V^T Y)_{1:2} \geq b_2( Z^*)\right\}\\ &= \begin{Bmatrix} A_2 ( V^T Y)_{1:2} \geq b_2( Z^*)\\ \mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*)\\ \mathcal{V}_1^{0}( Z^*) \leq 0 \end{Bmatrix} \end{align*} because reducing the system from $\Pi_2(\mathcal{P})$ to $\Pi_1(\mathcal{P})$ does not change the range of $( V^T Y)_1$ that satisfy the linear constraints in $\Pi_2(\mathcal{P}).$ We can obtain the lower and upper truncation limits for $( V^T Y)_2$ as a function of $( V^T Y)_1$. 
\begin{align*} \Pi_2(\mathcal{P}) &= \begin{Bmatrix} A_2 ( V^T Y)_{1:2} \geq b_2( Z^*)\\ \end{Bmatrix}\\ &= \begin{Bmatrix} A_2 ( V^T Y)_{1:2} \geq b_2( Z^*)\\ \mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*)\\ \mathcal{V}_1^{0}( Z^*) \leq 0 \end{Bmatrix}\\ &= \begin{Bmatrix} \sum_{j=1}^2 ( A_2)_{ij}( V^T Y)_j \geq ( b_2( Z^*))_i \quad \text{ for } i = 1, \ldots, \ell_{2}\\ \mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*)\\ \mathcal{V}_1^{0}( Z^*) \leq 0 \end{Bmatrix}\\ &=\begin{Bmatrix} ( A_2)_{i2} ( V^T Y)_2 \geq -( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i \quad \text{ for } i = 1, \ldots, \ell_{2}\\ \mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*)\\ \mathcal{V}_1^{0}( Z^*) \leq 0 \end{Bmatrix}\\ &= \begin{Bmatrix} ( V^T Y)_2 \geq \frac{-( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i}{( A_2)_{i2}} \quad \text{ for } i: ( A_2)_{i2} > 0\\ ( V^T Y)_2 \leq \frac{-( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i}{( A_2)_{i2}} \quad \text{ for } i: ( A_2)_{i2} < 0\\ 0 \geq -( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i \quad \text{ for } i: ( A_2)_{i2} = 0\\ \mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*)\\ \mathcal{V}_1^{0}( Z^*) \leq 0 \end{Bmatrix}\\ &= \begin{Bmatrix} \max\limits_{i:( A_2)_{i2} > 0} \frac{-( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i}{( A_2)_{i2}}\leq ( V^T Y)_2 \leq \min\limits_{i:( A_1)_{i2} < 0}\frac{-( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i}{( A_2)_{i2}}\\ 0 \geq \max\limits_{i:( A_2)_{i2} = 0} -( A_2)_{i1}( V^T Y)_1 + ( b_2( Z^*))_i\\ \mathcal{V}_1^{\text{lo}}( Z^*) \leq ( V^T Y)_1 \leq \mathcal{V}_1^{\text{up}}( Z^*)\\ \mathcal{V}_1^{0}( Z^*) \leq 0 \end{Bmatrix}\\ &=\begin{Bmatrix} \mathcal{V}^{\text{lo}}_2( Z^*, ( V^T Y)_1) \leq ( V^T Y)_2 \leq \mathcal{V}^{\text{up}}_2( Z^*, ( V^T Y)_1)\\ \mathcal{V}^0_2( Z^*, ( V^T Y)_1) \leq 0\\ \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0 \end{Bmatrix} \end{align*} where \begin{align*} \mathcal{V}_2^{\text{lo}}( Z^*, ( V^T Y)_1) &= \max_{i: ( A_2)_{i2} > 0} \frac{-( A_2)_{i1}( V^T Y)_1( Z^*)_1 + ( b_2( Z^*))_i}{( A_2)_{i2}}\\ \mathcal{V}_2^{\text{up}}( Z^*, ( V^T Y)_1) &= \min_{i: ( A_2)_{i2} < 0} \frac{-( A_2)_{i1}( V^T Y)_1( Z^*)_1 + ( b_2( Z^*))_i}{( A_2)_{i2}}\\ \mathcal{V}_2^0( Z^*, ( V^T Y)_1) &= \max_{i: ( A_2)_{i2} = 0} -( A_2)_{i1}( V^T Y)_1+( b_2( Z^*))_i. 
\end{align*} \underline{Inductive step for $\Pi_{P-1}(\mathcal{P})$}: Under the induction hypothesis, we assume $$\Pi_{P-2}(\mathcal{P}) = \begin{Bmatrix} \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0\\ \mathcal{V}_2^{\text{lo}}(( V^T Y)_1, Z^*) \leq ( V^T Y)_2 \leq \mathcal{V}_2^{\text{up}}(( V^T Y)_1, Z^*)\\ \mathcal{V}_{2}^0(( V^T Y)_1, Z^*) \leq 0\\ \vdots\\ \mathcal{V}_{P-2}^{\text{lo}}(( V^T Y)_{1:P-3}, Z^*) \leq ( V^T Y)_{P-2} \leq \mathcal{V}_{P-2}^{\text{up}}(( V^T Y)_{1:P-3}, Z^* )\\ \mathcal{V}_{P-2}^0(( V^T Y)_{1:P-3}, Z^*) \leq 0 \end{Bmatrix}$$ Then we have \begin{align*} \Pi_{P-1}(\mathcal{P}) &= \begin{Bmatrix} A_{P-1} ( V^T Y)_{1:P-1} \geq b_{P-1}( Z^*)\\ \end{Bmatrix}\\ &= \begin{Bmatrix} A_{P-1} ( V^T Y)_{1:P-1} \geq b_{P-1}( Z^*)\\ \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0\\ \mathcal{V}_2^{\text{lo}}( Z^*, ( V^T Y)_1) \leq ( V^T Y)_2 \leq \mathcal{V}_2^{\text{up}}( Z^*, ( V^T Y)_1)\\ \mathcal{V}_{2}^0( Z^*, ( V^T Y)_1) \leq 0\\ \vdots\\ \mathcal{V}_{P-2}^{\text{lo}}(( V^T Y)_{1:P-3}, Z^*) \leq ( V^T Y)_{P-2} \leq \mathcal{V}_{P-2}^{\text{up}}(( V^T Y)_{1:P-3}, Z^* )\\ \mathcal{V}_{P-2}^0(( V^T Y)_{1:P-3}, Z^*) \leq 0 \end{Bmatrix}\\ &=\begin{Bmatrix} ( A_{P-1})_{i(P-1)} ( V^T Y)_{P-1} \geq -\sum_{j=1}^{P-2}( A_{P-1})_{ij}( V^T Y)_j+ ( b_{P-1}( Z^*))_i \quad \text{ for } i = 1, \ldots, \ell_{P-1}\\ \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0\\ \mathcal{V}_2^{\text{lo}}( Z^*, ( V^T Y)_1) \leq ( V^T Y)_2 \leq \mathcal{V}_2^{\text{up}}( Z^*, ( V^T Y)_1)\\ \mathcal{V}_{2}^0( Z^*, ( V^T Y)_1) \leq 0\\ \vdots\\ \mathcal{V}_{P-2}^{\text{lo}}(( V^T Y)_{1:P-3}, Z^*) \leq ( V^T Y)_{P-2} \leq \mathcal{V}_{P-2}^{\text{up}}(( V^T Y)_{1:P-3}, Z^* )\\ \mathcal{V}_{P-2}^0(( V^T Y)_{1:P-3}, Z^*) \leq 0 \end{Bmatrix}\\ &= \begin{Bmatrix} ( V^T Y)_{P-1} \geq \frac{-\sum_{j=1}^{P-2}( A_{P-1})_{ij}( V^T Y)_j+ ( b_{P-1}( Z^*))_i}{( A_{P-1})_{i(P-1)}} \quad \text{ for } i: ( A_{P-1})_{i(P-1)} > 0\\ ( V^T Y)_{P-1} \leq \frac{-\sum_{j=1}^{P-2}( A_{P-1})_{ij}( V^T Y)_j+ ( b_{P-1}( Z^*))_i}{( A_{P-1})_{i(P-1)}} \quad \text{ for } i: ( A_{P-1})_{i(P-1)} < 0\\ 0 \geq -\sum_{j=1}^{P-2}( A_{P-1})_{ij}( V^T Y)_j+ ( b_{P-1}( Z^*))_i \quad \text{ for } i: ( A_{P-1})_{i(P-1)} = 0\\ \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0\\ \mathcal{V}_2^{\text{lo}}( Z^*, ( V^T Y)_1) \leq ( V^T Y)_2 \leq \mathcal{V}_2^{\text{up}}( Z^*, ( V^T Y)_1)\\ \mathcal{V}_{2}^0( Z^*, ( V^T Y)_1) \leq 0\\ \vdots\\ \mathcal{V}_{P-2}^{\text{lo}}(( V^T Y)_{1:P-3}, Z^*) \leq ( V^T Y)_{P-2} \leq \mathcal{V}_{P-2}^{\text{up}}(( V^T Y)_{1:P-3}, Z^* )\\ \mathcal{V}_{P-2}^0(( V^T Y)_{1:P-3}, Z^*) \leq 0 \end{Bmatrix}.\\ &= \begin{Bmatrix} \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0\\ \mathcal{V}_2^{\text{lo}}(( V^T Y)_1, Z^*) \leq ( V^T Y)_2 \leq \mathcal{V}_2^{\text{up}}(( V^T Y)_1, Z^*)\\ \mathcal{V}_{2}^0(( V^T Y)_1, Z^*) \leq 0\\ \vdots\\ \mathcal{V}_{P-1}^{\text{lo}}(( V^T Y)_{1:P-2}, Z^*) \leq ( V^T Y)_{P-1} \leq \mathcal{V}_{P-1}^{\text{up}}(( V^T Y)_{1:P-2}, Z^* )\\ \mathcal{V}_{P-1}^0(( V^T Y)_{1:P-2}, Z^*) \leq 0 \end{Bmatrix} \end{align*} where \begin{align*} \mathcal{V}_{P-1}^{\text{lo}}\left(( V^T Y)_{1:P-2}, Z^*\right) &= \max_{i:( 
A_{P-1})_{i(P-1)} > 0} \frac{-\sum_{j=1}^{P-2}( A_{P-1})_{ij}( V^T Y)_j + ( b_{P-1}( Z^*))_i}{( A_{P-1})_{i(P-1)}} \\ \mathcal{V}_{P-1}^{\text{up}}\left(( V^T Y)_{1:P-2}, Z^*\right) &= \min_{i:( A_{P-1})_{i(P-1)} < 0} \frac{-\sum_{j=1}^{P-2}( A_{P-1})_{ij}( V^T Y)_j + ( b_{P-1}( Z^*))_i}{( A_{P-1})_{i(P-1)}} \\ \mathcal{V}_{P-1}^{\text{0}}\left(( V^T Y)_{1:P-2}, Z^*\right) &= \max_{i:( A_{P-1})_{i(P-1)} = 0} -\sum_{j=1}^{P-2} ( A_{P-1})_{ij}( V^T Y)_j + ( b_{P-1}( Z^*))_i. \end{align*} Therefore, we conclude that \begin{align*} \Pi_P(\mathcal{P}) &= \left\{ \Gamma Y \geq 0\right\}\\ &= \begin{Bmatrix} \mathcal{V}_{1}^{\text{lo}}( Z^*) \leq ( V^T Y)_{1} \leq \mathcal{V}_{1}^{\text{up}}( Z^*)\\ \mathcal{V}_{1}^0( Z^*) \leq 0\\ \mathcal{V}_2^{\text{lo}}(( V^T Y)_1, Z^*) \leq ( V^T Y)_2 \leq \mathcal{V}_2^{\text{up}}(( V^T Y)_1, Z^*)\\ \mathcal{V}_{2}^0(( V^T Y)_1, Z^*) \leq 0\\ \vdots\\ \mathcal{V}_{P-1}^{\text{lo}}(( V^T Y)_{1:P-2}, Z^*) \leq ( V^T Y)_{P-1} \leq \mathcal{V}_{P-1}^{\text{up}}(( V^T Y)_{1:P-2}, Z^* )\\ \mathcal{V}_{P-1}^0(( V^T Y)_{1:P-2}, Z^*) \leq 0\\ \mathcal{V}_{P}^{\text{lo}}(( V^T Y)_{1:P-1}, Z^*) \leq ( V^T Y)_{P} \leq \mathcal{V}_{P}^{\text{up}}(( V^T Y)_{1:P-1}, Z^*)\\ \mathcal{V}_{P}^0(( V^T Y)_{1:P-1}, Z^*) \leq 0\\ \end{Bmatrix} \end{align*} where \begin{align*} \mathcal{V}_P^{\text{lo}}\left(( V^T Y)_{1:P-1}, Z^*\right) &= \max_{i:( A_P)_{iP} > 0} \frac{-\sum_{j=1}^{P-1}( A_P)_{ij}( V^T Y)_j + ( b_P( Z^*))_i}{( A_P)_{iP}} \\ \mathcal{V}_P^{\text{up}}\left(( V^T Y)_{1:P-1}, Z^*\right) &= \min_{i:( A_P)_{iP} < 0} \frac{-\sum_{j=1}^{P-1}( A_P)_{ij}( V^T Y)_j + ( b_P( Z^*))_i}{( A_P)_{iP}}\\ \mathcal{V}_P^{\text{0}}\left(( V^T Y)_{1:P-1}, Z^*\right) &= \max_{i:( A_P)_{iP} = 0} -\sum_{j=1}^{P-1} ( A_P)_{ij}( V^T Y)_j + ( b_P( Z^*))_i. \end{align*} \end{proof} \end{section} \clearpage \begin{funding} This work was supported by the NIH grant 5T32CA009337-40 (Shyr), NSF grants DMS1810829 and DMS2113707 (Parmigiani and Patil), DMS2113426 (Sur) and a William F. Milton Fund (Sur). \end{funding} \begin{code} Code to reproduce results from the simulations and data application can be found at \texttt{https://github.com/wangcathy/multi-study-boosting}. \end{code} \clearpage \bibliographystyle{imsart-nameyear}
\section{Introduction} The goal of single image super-resolution (SISR) is to recover a high-resolution (HR) image from its low-resolution (LR) counterpart. SISR is a fundamental low-level vision and image processing problem with practical applications in satellite imaging, medical imaging, astronomy, microscopy, seismology, remote sensing, surveillance, biometrics, image compression, etc. Usually, SISR is described by a linear forward observation model with the following image degradation process: \begin{equation} {\bf y} = {\bf H} * \Tilde{{\bf x}} + \eta, \label{eq:degradation_model} \end{equation} where ${\bf y}$ is an observed LR image, ${\bf H}$ is a \emph{down-sampling operator} (usually bicubic) that convolves with an HR image $\Tilde{{\bf x}}$ and resizes it by a scaling factor $s$, and $\eta$ is additive white Gaussian noise with standard deviation $\sigma$. However, in real-world settings, $\eta$ also accounts for all possible errors during the image acquisition process, including inherent sensor noise, stochastic noise, compression artifacts, and possible mismatches between the forward observation model and the camera device. The operator ${\bf H}$ is usually ill-conditioned or singular in the presence of the unknown noise $\eta$, which makes SISR a highly ill-posed inverse problem. Since an ill-posed problem admits many possible solutions, regularization is required to select the most plausible ones. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{figs/tease_updated.pdf} \end{center} \vspace{-0.4cm} \caption{The super-resolution results at the $\times 4$ upscaling factor of the state-of-the-art ESRGAN and the proposed SRResCycGAN+ with respect to the ground-truth images. SRResCycGAN+ successfully removes the visible artifacts, while ESRGAN still exhibits artifacts due to the data bias between the training and testing images.} \vspace{-0.5cm} \label{fig:tease} \end{figure} Recently, numerous works have addressed the task of SISR using deep CNNs for their powerful feature representation capabilities, optimizing either PSNR values~\cite{kim2016vdsrcvpr,Lim2017edsrcvprw,kai2017ircnncvpr,kai2018srmdcvpr,yuan2018unsupervised,Li2019srfbncvpr,zhang2019deep} or visual quality~\cite{ledig2017srgan,wang2018esrgan}. These SR methods mostly rely on known degradation operators such as bicubic down-sampling (\emph{i.e. } noise-free), with paired LR and HR images from the same clean domain used for supervised training, while other methods do not follow the image observation (physical) model of Eq.~\eqref{eq:degradation_model}. In real-world settings, the input LR images suffer from different kinds of degradation, or the LR domain differs from the HR domain. Under such circumstances, these SR methods often fail to produce convincing SR results. In Figure~\ref{fig:tease}, we show the result of the state-of-the-art deep learning method ESRGAN on a noisy input image. The degraded ESRGAN result is due to the difference between the training and testing data domains. A detailed analysis of deep learning-based SR models on real-world data can be found in the recent literature~\cite{lugmayr2019unsupervised,fritsche2019dsgan}. In this work, we propose an SR learning method (SRResCycGAN) that overcomes the challenges of real image super-resolution. It is inspired by the CycleGAN~\cite{zhu2017unpairedcycgan} structure, which maintains the domain consistency between the LR and HR domains, and it also draws on powerful image regularization and large-scale optimization techniques that have been used to solve general inverse problems in the past.
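As a concrete illustration, the observation model of Eq.~\eqref{eq:degradation_model} with a bicubic ${\bf H}$ can be simulated in a few lines of PyTorch; the sketch below is illustrative only, and the scale factor and noise level are placeholder choices:
\begin{verbatim}
import torch
import torch.nn.functional as F

def degrade(hr, scale=4, sigma=8.0):
    # Bicubic down-sampling plus additive white Gaussian noise,
    # i.e. y = H * x + eta from Eq. (1); hr: (B, C, H, W) in [0, 255].
    lr = F.interpolate(hr, scale_factor=1.0 / scale,
                       mode='bicubic', align_corners=False)
    lr = lr + sigma * torch.randn_like(lr)   # eta ~ N(0, sigma^2)
    return lr.clamp(0.0, 255.0)

hr = torch.rand(1, 3, 128, 128) * 255.0      # stand-in HR image
lr = degrade(hr)                             # 32x32 noisy LR observation
\end{verbatim}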
The scheme of our proposed real image SR approach is shown in Fig.~\ref{fig:srrescycgan}. The $\mathbf{G}_{SR}$ network takes the input LR image and produces the SR output under the supervision of the SR discriminator network $\mathbf{D}_{{\bf x}}$. For the domain consistency between LR and HR, the $\mathbf{G}_{LR}$ network reconstructs the LR image from the SR output under the supervision of the LR discriminator network $\mathbf{D}_{{\bf y}}$. We evaluate our proposed SR method on multiple datasets with synthetic and natural image corruptions. We use the Real-World Super-resolution (RWSR) dataset~\cite{NTIRE2020RWSRchallenge} to show the effectiveness of our method through quantitative and qualitative experiments. Finally, we also participated in the AIM2020 Real Image Super-resolution Challenge~\cite{AIM2020_RSRchallenge} for Track-3 ($\times 4$ upscaling) associated with the ECCV 2020 workshops. Table~\ref{tab:track3} shows the final testset SR results for track-3 of our method (\textbf{MLP\_SR}) and others, and visual comparisons are given in Fig.~\ref{fig:4x_result_val} and Fig.~\ref{fig:4x_result_test}. \begin{figure}[t] \centering \includegraphics[scale=1.0]{figs/srrescycgan.pdf} \caption{Structure of the proposed SR approach. We train the network $\mathbf{G}_{SR}$ in a GAN framework, where our goal is to map images from the LR (${\bf y}$) to the HR (${\bf x}$) domain, while maintaining the domain consistency between the LR and HR images.} \label{fig:srrescycgan} \vspace{-0.5cm} \end{figure} \section{Related Work} \subsection{Image Super-Resolution methods} Recently, numerous works have addressed the task of SISR using deep CNNs for their powerful feature representation capabilities. A preliminary CNN-based method to solve SISR is the three-layer super-resolution convolutional network (SRCNN)~\cite{dong2014srcnneccv}. Kim~\emph{et al. }\cite{kim2016vdsrcvpr} proposed a very deep SR (VDSR) network with a residual learning approach. The efficient subpixel convolutional network (ESPCNN)~\cite{Shi2016pixelcnncvpr} takes the LR input directly and introduces an efficient subpixel convolution layer that upscales the LR feature maps to the HR image at the end of the network. Lim \emph{et al. }\cite{Lim2017edsrcvprw} proposed an enhanced deep SR (EDSR) network by taking advantage of residual learning. Zhang~\emph{et al. }\cite{kai2017ircnncvpr} proposed an iterative residual convolutional network (IRCNN) to solve the SISR problem within a plug-and-play framework. Zhang~\emph{et al. }\cite{kai2018srmdcvpr} proposed a deep CNN-based super-resolution method for multiple degradations (SRMD). Li \emph{et al. }\cite{Li2019srfbncvpr} proposed a feedback network (SRFBN) based on feedback connections and a recurrent neural network-like structure. Zhang \emph{et al. }\cite{zhang2019deep} proposed a deep plug-and-play super-resolution method for arbitrary blur kernels by following the multiple-degradation formulation. In \cite{srwdnet}, the authors proposed SRWDNet to solve the joint deblurring and super-resolution task by following a realistic degradation model. These methods mostly rely on PSNR-based metrics by optimizing the $\mathcal{L}_1$/$\mathcal{L}_2$ losses in a supervised way, which yields blurry results and does not preserve the visual quality with respect to human perception.
Moreover, the above-mentioned methods rely on deeper or wider CNN architectures to learn the non-linear mapping from LR to HR from a large number of training samples, while neglecting real-world settings. \subsection{Real Image Super-Resolution methods} For the perceptual SR task, a preliminary attempt was made by Ledig \emph{et al. }\cite{ledig2017srgan}, who proposed the SRGAN method to produce perceptually more pleasing results. To further enhance the performance of SRGAN, Wang \emph{et al. }\cite{wang2018esrgan} proposed the ESRGAN model, which achieves state-of-the-art perceptual performance. Despite their success, the previously mentioned methods are trained on HR/LR image pairs obtained by bicubic down-sampling (\emph{i.e. } noise-free) and thus have limited performance in real-world settings. More recently, Lugmayr \emph{et al. }\cite{lugmayr2019unsupervised} proposed a benchmark protocol for real-world image corruptions and introduced the real-world challenge series~\cite{AIM2019RWSRchallenge}, which described the effects of bicubic downsampling and separate degradation learning for super-resolution. Later on, Fritsche \emph{et al. }\cite{fritsche2019dsgan} proposed the DSGAN to learn the degradation by training the network in an unsupervised way, and modified the ESRGAN structure as ESRGAN-FS to further enhance performance in real-world settings. Recently, the authors of \cite{muhammad2020srrescgan} proposed SRResCGAN to solve the real-world SR problem, inspired by a physical image formation model. However, the above methods still suffer from unpleasant artifacts (see Fig.~\ref{fig:4x_result_div2k} and Table~\ref{tab:comp_sota}). Our approach takes real-world settings into account, which greatly increases its applicability in practical scenarios. \section{Proposed Method} \label{sec:proposed_method} \subsection{Problem Formulation} Referring to Eq.~\eqref{eq:degradation_model}, the recovery of ${\bf x}$ from ${\bf y}$ mostly relies on the variational approach for combining the observation and prior knowledge, and is given by the following objective function: \begin{equation} \hat{{\bf x}} = \underset{\mathbf{x}}{\arg \min }~\frac{1}{2}\|{\bf y} - {\bf H}*{\bf x}\|_2^{2}+\lambda \mathcal{R}({\bf x}), \label{eq:eq1} \end{equation} where $\frac{1}{2}\|{\bf y}-\mathbf{H}*{\bf x}\|_2^2$ is the data fidelity (also known as log-likelihood) term that measures the proximity of the solution to the observations, $\mathcal{R}({\bf x})$ is the regularization term associated with image priors, and $\lambda$ is the trade-off parameter that governs the compromise between the data fidelity and the regularizer term. Interestingly, the variational approach has a direct link to the Bayesian approach, and the derived solutions can be described either as penalized maximum likelihood or as maximum a posteriori (MAP) estimates~\cite{bertero1998map1,figueiredo2007map2}. Thanks to recent advances in deep learning, the regularizer $\mathcal{R}({\bf x})$ is realized by the SRResCGAN~\cite{muhammad2020srrescgan} generator structure, which provides powerful learned image priors.
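A classical way to attack Eq.~\eqref{eq:eq1} is proximal gradient descent, alternating a gradient step on the data fidelity term with a proximal step on $\lambda \mathcal{R}({\bf x})$. The sketch below shows one such fidelity step in PyTorch; treating bicubic up-sampling as the adjoint of bicubic down-sampling is a simplifying assumption (it is only exact up to a scaling):
\begin{verbatim}
import torch.nn.functional as F

def fidelity_grad_step(x, y, alpha=1.0, scale=4):
    # One gradient step on the data-fidelity term of Eq. (2):
    # x <- x - alpha * H^T (H x - y), with H = bicubic down-sampling
    # and H^T approximated here by bicubic up-sampling.
    Hx = F.interpolate(x, scale_factor=1.0 / scale,
                       mode='bicubic', align_corners=False)
    grad = F.interpolate(Hx - y, scale_factor=scale,
                         mode='bicubic', align_corners=False)
    return x - alpha * grad
# A proximal step for lambda * R(x), whose role is played by the
# learned prior inside the generator, would follow each such step.
\end{verbatim}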
\subsection{SR Learning Model} \label{sec:sr_learning} The proposed real image SR approach setup is shown in Fig.~\ref{fig:srrescycgan}. The SR generator network $\mathbf{G}_{SR}$, borrowed from the SRResCGAN~\cite{muhammad2020srrescgan}, is trained in a GAN~\cite{goodfellow2014gan} framework using the LR (${\bf y}$) images with their corresponding HR images, with pixel-wise supervision in the clean HR target domain (${\bf x}$), while maintaining the domain consistency between the LR and HR images. In Sections~\ref{sec:net_arch}, \ref{sec:net_losses}, and \ref{sec:net_training}, we present the details of the network architectures, network losses, and training procedure for the proposed SR setup. \subsection{Network Architectures} \label{sec:net_arch} \subsubsection{SR Generator ($\mathbf{G_{SR}}$):} We use the SR generator network $\mathbf{G_{SR}}$, which is basically an \emph{Encoder-Resnet-Decoder} like structure, as done in SRResCGAN~\cite{muhammad2020srrescgan}. In the $\mathbf{G_{SR}}$ network, both \emph{Encoder} and \emph{Decoder} layers have $64$ convolutional feature maps of $5\times5$ kernel size with $C \times H\times W$ tensors, where $C$ is the number of channels of the input image. Inside the \emph{Encoder}, the LR image is upsampled by the bicubic kernel with the \emph{Upsample} layer, where the choice of the upsampling kernel is arbitrary. The \emph{Resnet} consists of $5$ residual blocks, each with two pre-activation \emph{Conv} layers of $64$ feature maps with kernel support $3\times3$, where the pre-activation is the parametrized rectified linear unit (PReLU) with $64$ output feature channels. The trainable projection layer~\cite{Lefkimmiatis2018UDNet} inside the \emph{Decoder} computes the proximal map with the estimated noise standard deviation $\sigma$ and handles the data fidelity and prior terms. The noise realization is estimated in the intermediate \emph{Resnet} that is sandwiched between the \emph{Encoder} and \emph{Decoder}. The estimated residual image after the \emph{Decoder} is subtracted from the LR input image. Finally, the clipping layer incorporates our prior knowledge about the valid range of image intensities and enforces the pixel values of the reconstructed image to lie in the range $[0, 255]$. Reflection padding is also used before all the \emph{Conv} layers to ensure slowly varying changes at the boundaries of the input images. \subsubsection{SR Discriminator ($\mathbf{D}_{{\bf x}}$):} The SR discriminator network is trained to discriminate the real HR images from the fake HR images generated by the $\mathbf{G_{SR}}$. The raw discriminator network contains 10 convolutional layers with kernel support $3\times3$ and $4\times4$ of increasing feature maps from $64$ to $512$, followed by Batch Norm (BN) and leaky ReLU, as done in SRGAN~\cite{ledig2017srgan}. \subsubsection{LR Generator ($\mathbf{G_{LR}}$):} For down-sampling, we adopt an architecture similar to that of \cite{yuan2018unsupervised}, which is basically a \emph{Conv-Resnet-Conv} like structure. We use 6 residual blocks in the \emph{Resnet} with 3 convolutional layers at the head and tail \emph{Conv}, while the stride is set to 2 in the second and third head \emph{Conv} layers for the down-sampling purpose. \subsubsection{LR Discriminator ($\mathbf{D}_{{\bf y}}$):} The LR discriminator network consists of a three-layer convolutional network that operates on the patch level, as in PatchGAN~\cite{isola2017image,li2016precomputed}.
All the \emph{Conv} layers have $5\times5$ kernel support with feature maps from 64 to 256, and Batch Norm and Leaky ReLU (LReLU) activation are applied after each \emph{Conv} layer except the last one, which maps 256 features to 1. \subsection{Network Losses} \label{sec:net_losses} To learn the image super-resolution, we train the proposed SRResCycGAN network with the following loss function: \begin{equation} \mathcal{L}_{G_{SR}} = \mathcal{L}_{\mathrm{per}}+ \mathcal{L}_{\mathrm{GAN}} + \mathcal{L}_{tv} + 10\cdot \mathcal{L}_{\mathrm{1}} + 10\cdot \mathcal{L}_{\mathrm{cyc}} \label{eq:l_g} \end{equation} where the individual losses are defined as follows:\\ \textbf{Perceptual loss ($\mathcal{L}_{\mathrm{per}}$):} It focuses on the perceptual quality of the output image and is defined as: \begin{equation} \mathcal{L}_{\mathrm{per}}=\frac{1}{N} \sum_{i}^{N}\mathcal{L}_{\mathrm{VGG}}=\frac{1}{N} \sum_{i}^{N}\|\phi(\mathbf{G}_{SR}({\bf y}_i))-\phi({\bf x}_i)\|_{1} \end{equation} where $\phi$ denotes the features extracted from the pretrained VGG-19 network at the same depth as in ESRGAN~\cite{wang2018esrgan}.\\ \textbf{Texture loss ($\mathcal{L}_{\mathrm{GAN}}$):} It focuses on the high frequencies of the output image. The relativistic discriminator is defined as: \begin{equation} \mathbf{D}_{{\bf x}}({\bf x}, \hat{{\bf y}}) = \sigma(C({\bf x})-\mathbb{E}[C(\hat{{\bf y}})]) \end{equation} Here, $C$ is the raw discriminator output and $\sigma$ is the sigmoid function. By using the relativistic discriminator~\cite{wang2018esrgan}, we have: \begin{equation} \begin{split} \mathcal{L}_{\mathrm{GAN}} = \mathcal{L}_{\mathrm{RaGAN}} = &-\mathbb{E}_{{\bf x}}\left[\log \left(1-\mathbf{D}_{{\bf x}}({\bf x}, \mathbf{G}_{SR}({\bf y}))\right)\right] \\ &-\mathbb{E}_{\hat{{\bf y}}}\left[\log \left(\mathbf{D}_{{\bf x}}(\mathbf{G}_{SR}({\bf y}), {\bf x})\right)\right] \end{split} \end{equation} where $\mathbb{E}_{{\bf x}}$ and $\mathbb{E}_{\hat{{\bf y}}}$ represent the operations of taking the average over all real (${\bf x}$) and fake ($\hat{{\bf y}}$) data in the mini-batch, respectively. \\ \textbf{Content loss ($\mathcal{L}_{\mathrm{1}}$):} It is defined as: \begin{equation} \mathcal{L}_{1} = \frac{1}{N} \sum_{i}^{N} \|\mathbf{G}_{SR}({\bf y}_i)-{\bf x}_i\|_{1} \end{equation} where $N$ represents the size of the mini-batch.\\ \textbf{TV (total-variation) loss ($\mathcal{L}_{tv}$):} It minimizes the gradient discrepancy between the SR output and the ground truth and produces sharpness in the output SR image; it is defined as: \begin{equation} \mathcal{L}_{tv}=\frac{1}{N} \sum_{i}^{N}\left(\left\|\nabla_{h} \mathbf{G}_{SR}\left({\bf y}_{i}\right) - \nabla_{h} \left({\bf x}_{i}\right) \right\|_{1}+\left\|\nabla_{v} \mathbf{G}_{SR}\left({\bf y}_{i}\right) - \nabla_{v} \left({\bf x}_{i}\right) \right\|_{1}\right) \end{equation} Here, $\nabla_{h}$ and $\nabla_{v}$ denote the horizontal and vertical gradients of the images.\\ \textbf{Cyclic loss ($\mathcal{L}_{\mathrm{cyc}}$):} It enforces the cycle consistency between the LR and HR domains and is defined as: \begin{equation} \mathcal{L}_{cyc} = \frac{1}{N} \sum_{i}^{N} \|\mathbf{G}_{LR}(\mathbf{G}_{SR}({\bf y}_i))-{\bf y}_i\|_{1} \end{equation}
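To show how these terms combine, a compact sketch of the total generator objective of Eq.~\eqref{eq:l_g} in PyTorch is given below; the loss modules \texttt{perceptual\_loss}, \texttt{gan\_loss}, and \texttt{tv\_loss} are assumed to be implemented elsewhere according to the definitions above:
\begin{verbatim}
import torch

def generator_loss(y, x, G_SR, G_LR, D_x,
                   perceptual_loss, gan_loss, tv_loss):
    # Total loss of Eq. (3) for one mini-batch.
    sr = G_SR(y)                                  # super-resolved output
    l_per = perceptual_loss(sr, x)                # VGG feature distance
    l_gan = gan_loss(D_x, x, sr)                  # relativistic GAN term
    l_tv = tv_loss(sr, x)                         # gradient discrepancy
    l_1 = torch.mean(torch.abs(sr - x))           # content (L1) loss
    l_cyc = torch.mean(torch.abs(G_LR(sr) - y))   # cycle consistency
    return l_per + l_gan + l_tv + 10.0 * l_1 + 10.0 * l_cyc
\end{verbatim}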
\subsection{Training description} \label{sec:net_training} At the training phase, we set the input LR patch size to $32\times32$ with the corresponding HR patches. We train the network in an end-to-end manner for 51000 training iterations with a batch size of 16 using the Adam optimizer with parameters $\beta_1 =0.9$, $\beta_2=0.999$, and $\epsilon=10^{-8}$ without weight decay for the generators ($\mathbf{G_{SR}}$ \& $\mathbf{G_{LR}}$) and discriminators ($\mathbf{D}_{{\bf x}}$ \& $\mathbf{D}_{{\bf y}}$) to minimize the loss in Eq.~\eqref{eq:l_g}. The learning rate is initially set to $10^{-4}$ and then halved after 5K, 10K, 20K, and 30K iterations. The projection layer parameter $\sigma$ is estimated from the input LR image according to \cite{liu2013single}. \section{Experiments} \subsection{Training data} We use the source domain data ($\Tilde{{\bf y}}$: 2650 HR images), which are corrupted with two known degradations (e.g., sensor noise and compression artifacts) as well as unknown degradations, and the target domain data (${\bf x}$: 800 clean HR images from DIV2K~\cite{div2k}) provided in the NTIRE2020 Real-World Super-resolution (RWSR) Challenge~\cite{NTIRE2020RWSRchallenge} for track-1. We use the source and target domain data for training the $\mathbf{G_{SR}}$ network under the different degradation scenarios. The LR data (${\bf y}$), with corruptions similar to the source domain, is generated from the down-sampling GAN network (DSGAN)~\cite{fritsche2019dsgan} with the corresponding HR target domain (${\bf x}$) images. Furthermore, we use the training data (\emph{i.e. } ${\bf y}$: 19000 LR images, ${\bf x}$: 19000 HR images) provided in the AIM2020 Real Image SR Challenge~\cite{AIM2020_RSRchallenge} for track-3 ($\times 4$ upscaling) for training the SRResCycGAN (see Section~\ref{sec:aim2020_risr}). \begin{table}[t] \centering \vspace{-0.5cm} \caption{The $\times4$ SR quantitative results comparison of our method with others over the DIV2K validation-set (100 images). Top section: SR results with added sensor noise ($\sigma=8$) and compression artifacts ($quality=30$) in the validation-set. Middle section: SR results with the unknown corruptions (e.g., sensor noise, compression artifacts, etc.) in the validation-set provided in the RWSR challenge series~\cite{AIM2019RWSRchallenge,NTIRE2020RWSRchallenge}. Bottom section: SR results with the real image corruptions in the validation-set and testset provided in the AIM 2020 Real Image SR challenge~\cite{AIM2020_RSRchallenge} for track-3. The arrows indicate if high $\uparrow$ or low $\downarrow$ values are desired.
The best performance is shown in {\color{red} red} and the second best performance is shown in {\color{blue} blue}.} \vspace{0.1cm} \tabcolsep=0.01\linewidth \scriptsize \resizebox{1.0\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \multicolumn{2}{c}{ } & \multicolumn{3}{c}{sensor noise ($\sigma=8$)} & \multicolumn{3}{c}{compression artifacts ($q=30$)} \\ SR methods & \#Params & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\ \hline EDSR~\cite{Lim2017edsrcvprw} & $43M$ & 24.48 & 0.53 & 0.6800 & 23.75 & 0.62 & 0.5400 \\ ESRGAN~\cite{wang2018esrgan} & $16.7M$ & 17.39 & 0.19 & 0.9400 & 22.43 & 0.58 & 0.5300 \\ ESRGAN-FT~\cite{lugmayr2019unsupervised} & $16.7M$ & 22.42 & 0.55 & 0.3645 & 22.80 & 0.57 & {\color{red}0.3729} \\ ESRGAN-FS~\cite{fritsche2019dsgan} & $16.7M$ & 22.52 & 0.52 & {\color{red}0.3300} & 20.39 & 0.50 & {\color{blue}0.4200} \\ SRResCGAN~\cite{muhammad2020srrescgan} & $380K$ & 25.46 & 0.67 & {\color{blue}0.3604} & 23.34 & 0.59 & 0.4431 \\ SRResCycGAN (ours) & $380K$ & {\color{blue}25.98} & {\color{blue}0.70} & 0.4167 & {\color{blue}23.96} & {\color{blue}0.63} & 0.4841 \\ SRResCycGAN+ (ours) & $380K$ & {\color{red}26.27} & {\color{red}0.72} & 0.4542 & {\color{red}24.05} & {\color{red}0.64} & 0.5192 \\ \hline &\multicolumn{3}{c}{ } & \multicolumn{3}{c}{unknown corruptions~\cite{NTIRE2020RWSRchallenge}} &\\ \hline SRResCGAN~\cite{muhammad2020srrescgan} & $380K$ & 25.05 & 0.67 & {\color{red}0.3357} \\ SRResCycGAN (ours) & $380K$ & {\color{blue}26.13} & {\color{blue}0.71} & {\color{blue}0.3911} \\ SRResCycGAN+ (ours) & $380K$ & {\color{red}26.39} & {\color{red}0.73} & 0.4245 \\ \hline &\multicolumn{3}{c}{ } & \multicolumn{3}{c}{real image corruptions~\cite{AIM2020_RSRchallenge}} &\\ \hline SRResCycGAN (ours, valset) & $380K$ & 28.6239 & 0.8250 & - \\ SRResCycGAN (ours, testset) & $380K$ & 28.6185 & 0.8314 & - \\ \end{tabular}} \label{tab:comp_sota} \vspace{-0.3cm} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/res_val1.pdf} \vspace{-0.4cm} \caption{Visual comparison of our method with the other state-of-the-art methods on the DIV2K validation set at the $\times4$ super-resolution.} \label{fig:4x_result_div2k} \vspace{-0.5cm} \end{figure} \subsection{Technical details} We implemented our method in PyTorch. The experiments are performed under Windows 10 with an i7-8750H CPU with 16GB RAM and an NVIDIA RTX-2070 GPU with 8GB memory. It takes about 25 hours to train the network. The run time per image (on the GPU) is 4.54 seconds on the AIM2020 Real Image SR testset. In order to further enhance the fidelity, we use a self-ensemble strategy~\cite{timofte2016seven} (denoted as SRResCycGAN+) at test time, where the LR inputs are flipped/rotated and the SR results are aligned and averaged for an enhanced prediction. \subsection{Evaluation metrics} We evaluate the trained model under the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and LPIPS~\cite{zhang2018unreasonable} metrics. The PSNR and SSIM are distortion-based measures that correlate poorly with actual perceived similarity, while LPIPS correlates better with human perception than the distortion-based/handcrafted measures. Since LPIPS is based on the features of pretrained neural networks, we use it for the quantitative perceptual evaluation with the features of AlexNet~\cite{zhang2018unreasonable}. The quantitative SR results are evaluated in the $RGB$ color space.
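The self-ensemble strategy mentioned above admits a simple implementation; the following sketch (assuming a trained generator \texttt{G\_SR} and the usual $\times 8$ flip/rotation group) averages the aligned outputs:
\begin{verbatim}
import torch

def self_ensemble(G_SR, y):
    # SRResCycGAN+ style inference: run G_SR on the 8 flip/rotation
    # variants of the input and undo each transform before averaging.
    outs = []
    for k in range(4):                      # 0, 90, 180, 270 degrees
        for flip in (False, True):
            t = torch.rot90(y, k, dims=(-2, -1))
            if flip:
                t = torch.flip(t, dims=(-1,))
            sr = G_SR(t)
            if flip:                        # undo flip first, then rotation
                sr = torch.flip(sr, dims=(-1,))
            sr = torch.rot90(sr, -k, dims=(-2, -1))
            outs.append(sr)
    return torch.stack(outs).mean(dim=0)
\end{verbatim}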
\subsection{Comparison with the state-of-the-art methods} \label{sec:comp_sota} We compare our method with other state-of-the-art SR methods, including EDSR~\cite{Lim2017edsrcvprw}, ESRGAN~\cite{wang2018esrgan}, ESRGAN-FT~\cite{lugmayr2019unsupervised}, ESRGAN-FS~\cite{fritsche2019dsgan}, and SRResCGAN~\cite{muhammad2020srrescgan}, whose source codes are available online. The two degradation settings (\emph{i.e. } sensor noise, JPEG compression) have been considered under the same experimental conditions for all methods. We run all the original source codes and trained models with their default parameter settings for the comparison. The EDSR is trained without the perceptual loss (only $\mathcal{L}_{\mathrm{1}}$) by a deep SR residual network using the bicubic supervision. The ESRGAN is trained with the $\mathcal{L}_{\mathrm{perceptual}}$, $\mathcal{L}_{\mathrm{GAN}}$, and $\mathcal{L}_{\mathrm{1}}$ losses by a deep SR network using the bicubic supervision. The ESRGAN-FT and ESRGAN-FS apply the same SR architecture and perceptual losses as the ESRGAN using supervision from the two known degradations. The SRResCGAN is trained with a loss combination similar to the ESRGAN using supervision from the two known degradations. We train the proposed SRResCycGAN with a loss combination similar to the ESRGAN and SRResCGAN, with the additional cyclic loss, using the bicubic supervision. Table~\ref{tab:comp_sota} shows the quantitative comparison of our method over the DIV2K validation-set (100 images) with the two known degradations (\emph{i.e. } sensor noise, JPEG compression), the unknown degradation in the NTIRE2020 Real-World SR challenge series~\cite{NTIRE2020RWSRchallenge}, and the validation-set and testset of the AIM2020 Real Image SR Challenge~\cite{AIM2020_RSRchallenge}. Our method outperforms the others in terms of PSNR and SSIM, while for LPIPS we obtain results comparable with the others. In the case of the sensor noise ($\sigma=8$) and JPEG compression ($q=30$) in the top section of Table~\ref{tab:comp_sota}, the ESRGAN has the worst performance in terms of PSNR, SSIM, and LPIPS among all methods. This is also reflected in the visual quality in Fig.~\ref{fig:4x_result_val}. The EDSR is more robust to the noisy input, but it produces more blurry results. These effects are due to the domain distribution difference caused by the bicubic down-sampling during the training phase. The ESRGAN-FT and ESRGAN-FS perform much better by overcoming the domain distribution shift problem, but they still show visible artifacts. The SRResCGAN is more robust to the noisy input, but still attains lower PSNR and SSIM because it does not address the domain consistency. The proposed method successfully overcomes the challenge of the domain distribution shift in both degradation settings, as reflected in both the quantitative and qualitative results. In the middle section of Table~\ref{tab:comp_sota}, for the unknown degradation in the NTIRE2020 Real-World SR challenge~\cite{NTIRE2020RWSRchallenge}, the SRResCycGAN achieves a much larger PSNR/SSIM improvement, while its LPIPS remains comparable with the SRResCGAN. In the bottom section of Table~\ref{tab:comp_sota}, we also report the validation-set and testset SR results of the AIM2020 Real Image SR Challenge~\cite{AIM2020_RSRchallenge} for track-3.
Despite that, the proposed $\mathbf{G}_{SR}$ network has far fewer parameters, which makes it suitable for deployment on mobile/embedded devices where memory storage and CPU power are limited, while maintaining good image reconstruction quality. Regarding the visual quality, Fig.~\ref{fig:4x_result_val} shows the qualitative comparison of our method with other SR methods at the $\times 4$ upscaling factor on the validation-set~\cite{NTIRE2020RWSRchallenge}. In contrast to the existing state-of-the-art methods, our proposed method produces excellent SR results, which are reflected in the PSNR/SSIM values as well as in the visual quality of the reconstructed images, with almost no visible corruptions. \begin{table}[htbp!] \vspace{-0.5cm} \centering% \caption{Final testset results for the Real Image SR ($\times4$) challenge Track-3~\cite{AIM2020_RSRchallenge}. The table contains ours (\textbf{MLP\_SR}) together with the other methods ranked in the challenge. The participating methods are ranked according to their weighted score of the PSNR and SSIM given in the AIM 2020 Real Image SR Challenge~\cite{AIM2020_RSRchallenge}.} \vspace{0.1cm} \resizebox{0.7\textwidth}{!}{% \begin{tabular}{|l|c|c|c|} Team Name & PSNR$\uparrow$ & SSIM$\uparrow$ & Weighted\_score$\uparrow$ \\ \hline Baidu & $31.3960$ & $0.8751$ & $0.7099_{(1)}$\\ ALONG & $31.2369$ & $0.8742$ & $0.7076_{(2)}$\\ CETC-CSKT & $31.1226$ & $0.8744$ & $0.7066_{(3)}$\\ SR-IM & $31.1735$ & $0.8728$ & $0.7057$\\ DeepBlueAI & $30.9638$ & $0.8737$ & $0.7044$\\ JNSR & $30.9988$ & $0.8722$ & $0.7035$\\ OPPO\_CAMERA & $30.8603$ & $0.8736$ & $0.7033$\\ Kailos & $30.8659$ & $0.8734$ & $0.7031$\\ SR\_DL & $30.6045$ & $0.8660$ & $0.6944$\\ Noah\_TerminalVision & $30.5870$ & $0.8662$ & $0.6944$\\ Webbzhou & $30.4174$ & $0.8673$ & $0.6936$\\ TeamInception & $30.3465$ & $0.8681$ & $0.6935$\\ IyI & $30.3191$ & $0.8655$ & $0.6911$\\ MCML-Yonsei & $30.4201$ & $0.8637$ & $0.6906$\\ MoonCloud & $30.2827$ & $0.8644$ & $0.6898$\\ qwq & $29.5878$ & $0.8547$ & $0.6748$\\ SrDance & $29.5952$ & $0.8523$ & $0.6729$\\ \textbf{MLP\_SR} & $28.6185$ & $0.8314$ & $0.6457$\\ RRDN\_IITKGP & $27.9708$ & $0.8085$ & $0.6201$\\ congxiaofeng & $26.3915$ & $0.8258$ & $0.6187$\\ \end{tabular} } \label{tab:track3} \vspace{-0.5cm} \end{table} \begin{figure}[htbp!] \centering \begin{subfigure}[t]{0.85\textwidth} \centering \includegraphics[width=\linewidth]{figs/res_rwsr_val1.pdf} \end{subfigure}\\ \begin{subfigure}[t]{0.85\textwidth} \centering \includegraphics[width=\linewidth]{figs/res_rwsr_val2.pdf} \end{subfigure} \vspace{-0.4cm} \caption{Visual comparison of our method with the other state-of-the-art methods on the AIM 2020 Real Image SR (track-3) validation set at the $\times4$ super-resolution.} \label{fig:4x_result_val} \vspace{-0.5cm} \end{figure} \begin{figure}[htbp!]
\centering \begin{subfigure}[t]{0.85\textwidth} \centering \includegraphics[width=\linewidth]{figs/res_rwsr_test1.pdf} \end{subfigure}\\ \begin{subfigure}[t]{0.85\textwidth} \centering \includegraphics[width=\linewidth]{figs/res_rwsr_test2.pdf} \end{subfigure} \vspace{-0.4cm} \caption{Visual comparison of our method with the other state-of-the-art methods on the AIM 2020 Real Image SR (track-3) test set at the $\times4$ super-resolution.} \label{fig:4x_result_test} \vspace{-0.5cm} \end{figure} \subsection{The AIM 2020 Real Image SR Challenge ($\times4$)} \label{sec:aim2020_risr} We participated in the AIM2020 Real Image Super-Resolution Challenge~\cite{AIM2020_RSRchallenge} for track-3 ($\times 4$ upscaling), associated with the ECCV 2020 workshops. The goal of this challenge is to learn a generic model to super-resolve LR images captured in practical scenarios, with more complex degradation than bicubic down-sampling. In that regard, we apply the proposed SRResCycGAN to super-resolve the LR images under real-world settings. We use the pretrained model $\mathbf{G_{SR}}$ taken from the SRResCGAN~\cite{muhammad2020srrescgan} (which has excellent perceptual quality) and further fine-tune it on the training data provided in the AIM 2020 Real Image SR challenge with the proposed SR scheme, as shown in Fig.~\ref{fig:srrescycgan}, by using the following training losses: \begin{equation} \mathcal{L}_{G_{SR}} = \mathcal{L}_{\mathrm{GAN}} + \mathcal{L}_{tv} + 10\cdot \mathcal{L}_{\mathrm{1}} + \mathcal{L}_{ssim} + \mathcal{L}_{msssim} + 10\cdot \mathcal{L}_{\mathrm{cyc}} \label{eq:l_g_risr} \end{equation} Since the final ranking is based on the weighted score of PSNR and SSIM in this challenge, we adopt the above loss combination, where we neglect $\mathcal{L}_{\mathrm{per}}$ and instead use $\mathcal{L}_{ssim}$ and $\mathcal{L}_{msssim}$ (cf. Eq.~\eqref{eq:l_g}), which incorporate the structural similarity~\cite{Wang2004ImageQA} as well as variations of image resolution and viewing conditions for the output image. Table~\ref{tab:track3} reports the final $\times4$ SR testset results for track-3 of our method (\textbf{MLP\_SR}) together with the other participants. We also provide the visual comparison of our method with the state-of-the-art methods on the track-3 validation-set and testset in Fig.~\ref{fig:4x_result_val} and Fig.~\ref{fig:4x_result_test}. Our method produces sharp images without any visible corruptions and achieves visual results comparable to the other methods. \begin{table}[!ht] \centering \vspace{-0.5cm} \caption{This table reports the quantitative results of our method over the DIV2K validation set (100 images) with unknown degradation for our ablation study.
The arrows indicate if high $\uparrow$ or low $\downarrow$ values are desired.} \vspace{0.1cm} \tabcolsep=0.01\linewidth \scriptsize \resizebox{1.0\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} SR method & Cyclic Path & Network structure & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\ \hline SRResCycGAN & \texttimes & ${\bf y}\rightarrow\mathbf{G_{SR}}\rightarrow\hat{{\bf y}}$ & 25.05 & 0.67 & \textbf{0.3357} \\ SRResCycGAN & \checkmark & ${\bf y}\rightarrow\mathbf{G_{SR}}\rightarrow\hat{{\bf y}}\rightarrow\mathbf{G_{LR}\rightarrow{\bf y}^\prime}$ & 26.13 & 0.71 & 0.3911 \\ SRResCycGAN+ & \checkmark & ${\bf y}\rightarrow\mathbf{G_{SR}}\rightarrow\hat{{\bf y}}\rightarrow\mathbf{G_{LR}\rightarrow{\bf y}^\prime}$ & \textbf{26.39} & \textbf{0.73} & 0.4245 \\ \end{tabular}} \label{tab:ablation_study} \vspace{-0.5cm} \end{table} \subsection{Ablation Study} For our ablation study, we design two variants of the proposed network structure, with and without the cyclic path. The first network structure (\emph{i.e. } ${\bf y}\rightarrow\mathbf{G_{SR}}\rightarrow\hat{{\bf y}}$) feeds the LR input to $\mathbf{G_{SR}}$ and produces the SR output under the supervision of the SR discriminator network $\mathbf{D_{x}}$, without the cyclic path ($\mathbf{G_{LR}}$ \& $\mathbf{D_{y}}$), as shown in Fig.~\ref{fig:srrescycgan}. Correspondingly, we minimize the total loss in Eq.~\eqref{eq:l_g} without $\mathcal{L}_{\mathrm{cyc}}$. The second network structure (\emph{i.e. } ${\bf y}\rightarrow\mathbf{G_{SR}}\rightarrow\hat{{\bf y}}\rightarrow\mathbf{G_{LR}\rightarrow{\bf y}^\prime}$) also feeds the LR input to $\mathbf{G_{SR}}$ and produces the SR output under the supervision of the SR discriminator network $\mathbf{D_{x}}$. After that, the SR output is fed into $\mathbf{G_{LR}}$, which reconstructs the LR output under the supervision of the LR discriminator network $\mathbf{D_{y}}$ (see Fig.~\ref{fig:srrescycgan}). Accordingly, we minimize the total loss in Eq.~\eqref{eq:l_g}. Table~\ref{tab:ablation_study} shows the quantitative results of our method over the DIV2K validation-set~\cite{NTIRE2020RWSRchallenge} with the unknown degradation. We find that, in the presence of the cyclic path, we obtain a significant PSNR/SSIM improvement of $+1.34$/$+0.06$ over the first variant. This suggests that the cyclic structure helps to handle complex degradations such as noise, blurring, and compression artifacts, whereas the other structure lacks this ability due to the domain difference between LR and HR. \section{Conclusion} We proposed the deep SRResCycGAN method for the real image super-resolution problem, which handles the domain consistency between the LR and HR images with the CycleGAN. The proposed method solves the SR problem in a GAN framework by minimizing the loss function with the discriminative and residual learning approaches. Our method achieves excellent SR results in terms of the PSNR/SSIM values as well as visual quality compared to the existing state-of-the-art methods. The SR network is easy to deploy in mobile/embedded environments with limited memory storage and CPU power. \bibliographystyle{splncs04}
\section{Introduction} \label{s:intro} The structure and function of cells are carefully regulated by the signals they receive from their environment. Of particular interest is the transfer of mechanical forces and stresses, which in turn are known to trigger specific bio-chemical responses inside the cell that can significantly alter its behavior, inducing changes in shape, size, motility, reorganization of the cytoskeleton, and even cell proliferation and differentiation\cite{Janmey2007,Crowder2016}. This last example is probably the most striking, given the bio-medical applications it promises. Carefully engineered bio-materials should allow us to control stem cell fate decisions, i.e., whether or not they divide or differentiate, and which specific cell lineage is chosen\cite{Hiew2018}. However, before this is possible, we need to have a fundamental understanding of the interactions between the cells and the chosen bio-material. One of the preferred methods to probe the mechanical interaction of cells with their environment is to place them on an elastic substrate that is being periodically stretched along a given direction. Studying how the cells respond to this perturbation provides crucial information on their mechanosensing abilities. Following Iwadate et al.\cite{Iwadate2009,Iwadate2013,Okimura2016, Okimura2016a}, it is useful to distinguish between slow crawling cells, such as fibroblasts, endothelial, and smooth muscle cells, and fast crawling cells such as \textit{Dictyostelium} or neutrophil-like HL-60, where the typical migration velocities can differ by one to two orders of magnitude between the two types. For example, the average speed of fibroblasts is of the order of $10\mu \text{m}/\text{h}$\cite{Ebata2018}, whereas \textit{Dictyostelium} can move at speeds on the order of $10\mu \text{m}/\text{s}$\cite{Iwadate2009}. In addition, slow crawling cells typically possess stress fibers, whereas fast crawling cells do not. This is a crucial difference to understand their mechanosensitive response. Early experiments on fibroblasts\cite{Buck1980} and endothelial cells\cite{Dartsch1989,Iba1991} found that these cells preferred to align their stress fibers in a direction perpendicular to the stretching. This reorientation of the stress fibers has been linked to the depolymerization and disassembly of parallel fibers\cite{Hayakawa2001,Jungbauer2008}. Nevertheless, it is also possible for the stress fibers to align parallel to the direction of stretching, as demonstrated experimentally on endothelial cells with inhibited Rho-kinase activity (which would tend to lower the myosin activity and thus the base tension)\cite{Lee2010}. Finally, while the alignment of the stress fibers is correlated with the cell reorientation, it is by no means sufficient. This was shown by experiments on vascular smooth muscle cells, in which stress-activated cation channels were inhibited, resulting in cells that were randomly oriented, even though they contained oriented stress fibers\cite{Hayakawa2001}. The fact that cells without stress fibers also exhibit characteristic reorientation under cyclic stretching is clear evidence that stress fiber realignment cannot be the only mechanism responsible for the reorientation. Unfortunately, the fast crawling nature of these cells makes experimental observations much more difficult, since it requires that the cell motion be tracked.
Indeed, it was only recently that the group of Iwadate managed to perform such experiments\cite{Iwadate2009,Iwadate2013,Okimura2016a,Okimura2016}. They have found that \textit{Dictyostelium} cells prefer to migrate in the perpendicular direction. This occurs without any ordering of the dense actin-network inside the cell, but it is accompanied by the formation of dense myosin bundles at the lateral edges, preventing any pseudopod extension in those directions. Further experiments on other fast-crawling cells, such as HL-60 and Blebbistatin-treated (stress-fiber-free) keratocytes, also found similar perpendicular alignment\cite{Okimura2016a,Okimura2016}. This response has yet to be fully explained, and it is even less understood than the reorientation of slow-crawling cells, where the stress fibers seem to play a dominant role. The mechanism responsible for the realignment is evidently cell-specific, and likely to depend on the experimental conditions. However, it is clear that it should involve several general ingredients, namely, the focal adhesion dynamics, through which the cell is able to transfer forces to and from the substrate, and the actin network and myosin induced contractility, responsible for the migration of the cell, as well as the mechanical properties of the cytoskeleton. Alternative theories have been proposed that can explain the reorientation as arising from one of these elements, for example, as a consequence of the passively stored elastic energy\cite{Livne2014a} or the forces on the focal adhesions\cite{Zhong2011,Chen2015a}, with both theories capable of reproducing the same experimental data, even though they model different mechanisms under different assumptions. Furthermore, all such models seem to have been developed with slow-crawling cells in mind, where stress fibers are likely to play a crucial role, and where the motion of the cells can be decoupled from their reorientation. When considering fast crawling cells such as \textit{Dictyostelium} or HL-60, the motility of the cell can no longer be decoupled from its reorientation. Thus, we must consider the dynamic remodeling of the relevant sub-cellular elements (e.g., the actin-cytoskeleton and the focal adhesions) under the cyclic stretching and how this affects the motion of the cell. In such cases, theoretical approaches quickly become intractable, and we must resort to computational modeling. In this work, we extend an established phase-field model of crawling cells\cite{Lober2014} to describe the dynamics of fast-crawling cells over substrates undergoing large amplitude cyclic deformations. We then use this model to study the reorientation dynamics as a function of frequency. Based on recent studies, which report a strong frequency dependence for the stability of focal adhesions\cite{Kong2008,Zhong2011}, we assume that the coupling to the substrate is given by a frequency-dependent detachment rate. At low frequencies, the cell and its constituent elements are able to follow the deformation and no reorientation is observed. At moderate frequencies, but below the threshold value that triggers the instability of the focal adhesions, both parallel and perpendicular orientations are stable. Increasing the frequency beyond this threshold, a moderate frequency range is found over which only the perpendicular direction is stable (as seen experimentally).
Furthermore, by tuning the response of the cells to detach only during fast extension, this realignment effect can be strengthened, and it is entirely reversed to favor parallel alignment if the detachment occurs during fast compression. We thus find that an asymmetry in the cellular response during loading and unloading can have a dramatic effect on their reorientation dynamics. This could be tested experimentally by employing a non-symmetric stretching protocol (i.e., fast extension accompanied by slow compression, and vice versa). Finally, upon a further increase in the frequency, only the parallel orientation is stable. The observed realignment response depends on whether the frequency of stretching is probing the shape deformation, the actin-network, or the focal adhesion dynamics. While we have used a generic model for crawling cells, including only the response of the focal adhesion sites to the stretching, the framework we propose can easily be used with more elaborate phase-field models developed and parameterized for specific cell types. This will allow us to investigate the biomolecular and mechanical origins of the cell's mechanosensitive response in much more detail. \section{Model} \label{s:model} Ever since the pioneering studies of Cahn, Hilliard, and Allen, who introduced phase-field models to study the phase-separation of binary alloys\cite{Cahn1958,Cahn1961,Allen1972,Allen1973}, phase-field modeling has become one of the preferred methods for physicists and material scientists to describe microstructural dynamics in systems with non-homogeneous ``phases''. These phases can be used to represent any material property of interest, from a difference in density, orientational order, or chemical composition, to differences in electric or magnetic polarization, thus providing a universal framework with which to study a wide variety of phenomena. Recently, this approach has seen considerable success outside of physics, and is now actively used to address problems in biology and even medicine. Notable examples include, among others, studies on the morphodynamics of crawling cells\cite{Shao2010,Shao2012,Ziebert2012,Lober2014,Palmieri2015}, the immune response to invading pathogens\cite{Najem2014}, axonal extension of nerve cells\cite{Najem2013,Takaki2015}, and cartilage regeneration\cite{Yun2013}, as well as tumor growth\cite{Sciume2013,Lima2014}. In this work, we focus on the mechanosensitivity of crawling cells, and in particular on their ability to sense and respond to mechanical cues from a substrate undergoing cyclic stretching. A phase-field approach is ideally suited for this purpose, particularly for fast crawling cells, as it provides a cell-level description which can take into account the acto-myosin based propulsion mechanism, the force transmission to and from the substrate (mediated by the focal adhesion sites), as well as allowing for the large shape deformations caused by the externally applied strain. In addition, this type of modeling can easily scale upward to consider the collective dynamics of multi-cellular systems and confluent tissues\cite{Lober2015}. In this section, we will introduce the basic phase-field model of crawling cells that we have adopted, which was originally designed to describe the motion of keratocyte-like fragments over viscoelastic substrates without any global deformation.
Then, we define the periodic strain imposed on the substrate, and the extensions to the model that are required to consider crawling under such large-amplitude cyclic deformations. \subsection{Phase-Field Model of Cells on Viscoelastic Substrates} We adopt the 2D model originally developed by Ziebert and Aranson\cite{Ziebert2012}, which describes each cell using a non-conserved order parameter $\rho$, whose values lie between zero (outside the cell) and one (inside the cell). This allows for an implicit tracking of the boundary, avoiding many of the computational difficulties of related sharp interface methods. A free energy functional $F[\rho]$ is then associated to this order parameter and determines the driving force for its time evolution as \begin{align} \p_t\rho &= -\Gamma \frac{\delta F[\rho]}{\delta \rho}\label{e:modela} \end{align} with $\Gamma$ the mobility coefficient for $\rho$. This is the so-called ``Model A'' or time-dependent Ginzburg-Landau model\cite{Chaikin1995}. To lowest order, the free energy functional takes the form \begin{align} F[\rho] &= \int\barg[\big]{f(\rho) + D_\rho\parg{\Grad\rho}^2} \vdf{x}\label{e:Frho} \end{align} where $f(\rho)$ is the free energy density of the homogeneous system and the term proportional to $D_\rho$ penalizes the formation of sharp interfaces. The free-energy density is defined to have a double-well form, representing the local stability of the two phases $\rho=0$ and $\rho=1$, and is given by \begin{align} f(\rho) &= \int_0^\rho\parg{1-\rho^\prime}\parg{\delta[\rho] - \rho^\prime}\rho^\prime\df{\rho^\prime}\label{e:frho} \end{align} where the value of $\delta$ controls the relative stability of the two. The motility of the cell is modeled by introducing an additional polar order parameter $\bm{p}$, which gives the average orientational order of the actin filament network responsible for the motion. These filaments are continuously polymerizing at the leading edge and pushing against the membrane, allowing the cell to extend forward. This requires that the cell be able to transfer the forces to the substrate, something it is able to do because the actin-network is connected to the substrate through the focal adhesion bonds. This is modeled by introducing an additional scalar field $A$, representing the density of adhesion bonds. Finally, the coupled set of equations for $\rho$, $\bm{p}$, and $A$ is given by\cite{Lober2014} \begin{align} \partial_t \rho &= D_\rho \nabla^2 \rho - \parg[\big]{1-\rho}\parg[\big]{\delta[\rho] - \rho}\rho - \alpha(A)\bm{\nabla}\rho\cdot \bm{p} \label{e:rho0}\\ \partial_t \bm{p} &= D_{p}\nabla^2\bm{p} - \tau_1^{-1}\bm{p} - \tau_2^{-1}\parg[\big]{1-\rho^2}\bm{p} - \beta f\barg[\big]{\bm{\nabla}\rho} - \gamma(\bm{\nabla}\rho\cdot\bm{p})\bm{p} \label{e:p0}\\ \partial_t A &= D_A \nabla^2 A + \rho\parg[\big]{a_0\Norm{\bm{p}} + a_{\text{nl}} A^2} - \parg[\big]{d(u) + sA^2} A\label{e:A0} \end{align} where, without loss of generality, we have taken $\Gamma = 1$. For the dynamics of $\rho$ (Eq.\eqref{e:rho0}), the first two terms on the right-hand side result from taking the functional derivative of the energy functional of Eq.~\eqref{e:Frho}, while the last term, proportional to $\Grad\rho\cdot\bm{p}$, and akin to an advection term, represents the active contribution of the actin-network pushing the cell membrane. The strength with which the actin network can push on the membrane is given as a function of the local density of adhesion sites $\alpha(A) = \alpha \cdot A$.
The dynamics of $\bm{p}$ (Eq.\eqref{e:p0}) is given by a simple reaction-diffusion equation, with a source term to account for the polymerization at the interface ($\propto \beta f[\Grad\rho]$), and a decay term ($\propto \tau_1^{-1}$) to account for the corresponding depolymerization. The polymerization rate is chosen to be a function of the gradient of $\rho$ that ensures that the growth rate is bounded and limited to the interface, with \begin{align} \bm{f}[\bm{x}] &= \frac{\bm{x}}{\sqrt{1 + \epsilon\Norm{\bm{x}}^2}}\label{e:fx} \end{align} As such, the maximum growth rate is given by $\simeq \beta/\sqrt{\epsilon}$. Note that an additional decay term $\propto \tau_2^{-1}(1-\rho^2)$ is included for computational simplicity, to make sure that the actin field is non-zero only inside the cell. The last term in Eq.~\eqref{e:p0} accounts for the myosin induced bundling at the rear of the cells\cite{Ziebert2012}, helping to break the $\pm\bm{p}$ symmetry and favor polarization. A similar reaction-diffusion model is used for the concentration of adhesion sites $A$~(Eq.~\eqref{e:A0}). Naturally, the attachment to the substrate can only occur inside of the cell: there is a linear term proportional to $a_0 \Norm{\bm{p}}$, since the attachments require the presence of actin, and a non-linear term $a_{\text{nl}} A^2$ to model the maturation and growth of existing bonds. For the detachment, there is a linear term that couples the dynamics of $A$ with the substrate displacement $u$, and a non-linear term that saturates the total number of bonds. Finally, the $\delta$ function controlling the relative stability of the two phases is given by ($\avg{\cdot} = \int\cdot\,\vdf{r}$) \begin{align} \delta[\rho] &= \frac{1}{2} + \mu\parg[\big]{\avg{\rho} - \pi r_0^2} - \sigma\Norm{\bm{p}}^2\label{e:delta0} \end{align} where the second term on the right hand side acts as a global constraint on the cell volume (with $r_0$ the radius of the non-polarized static cell), and the third term accounts for the myosin-induced contraction. At first glance, the model can seem overwhelming, as it possesses over a dozen free parameters. Fortunately, a detailed analysis of this model and its variants has already been performed\cite{Aranson2016,Ziebert2016}, allowing us to focus on the few parameters relevant for a study on the mechanosensitivity of cells on cyclically stretched substrates. The activity of the cell can be controlled by the strength of the propulsion ($\alpha$) and the rate of polymerization ($\beta$). The shape of the cell can be controlled mainly by the strength of the contractility ($\sigma$), with low (high) values resulting in fan- (crescent-)like shapes. The motor-asymmetry ($\gamma$) has only a small effect on the shape or dynamics of the cell and can be considered constant without loss of generality. Of the remaining parameters appearing in the equations of motion for $\bm{p}$ and $A$, the most important is $a_0$, which sets the rate at which new adhesion sites can be formed with the substrate. For example, to consider patterned substrates, one would make this parameter position-dependent. Such a study has been presented in Ref.~\cite{Lober2014}, where a viscoelastic Kelvin-Voigt model is used to describe the displacement of the substrate due to the traction forces exerted by the cell.
By controlling just two parameters, the stiffness of the substrate and the rate of attachment ($a_0$), the authors report a wide variety of motility modes, such as steady gliding motion, stick-slip, bipedal, and wandering, which have also been observed experimentally\cite{Barnhart2010,Riaz2016}. \subsection{Substrate Deformation} We consider a substrate that is being cyclically stretched along one of its axes. In most cases, this will necessarily imply a compression along the perpendicular axes, with an amplitude that depends on the Poisson ratio $\nu$ of the material. To describe this deformation, it is convenient to introduce Lagrangian (material) coordinates $\bm{\xi}$ to label the substrate elements. The time-dependent (Eulerian) coordinates of a given element $\bm{\xi}$ are then given by $\bm{x} = \bm{x}(\bm{\xi},t)$, which, for the present case, is given explicitly by \begin{align} x^1 &= \parg[\big]{\xi^1 - L_x/2} \parg[\big]{1 + \eps(t)} \label{e:x1}\\ x^2 &= \parg[\big]{\xi^2 - L_y/2} \parg[\big]{1 + \eps(t)}^{-\nu} \label{e:x2} \end{align} where $\eps$ is the lateral strain (along which the substrate is being actively deformed), and $L_x$ and $L_y$ are the (undeformed) substrate dimensions. For simplicity, we assume a sinusoidal perturbation given by \begin{align} \eps(t) &= \frac{\eps_0}{2}\parg[\big]{1 - \cos{\parg{2\pi \omega t}}} \label{e:eps} \end{align} We thus have two equivalent representations for our system, in terms of the body ($\bm{\xi}$) or lab ($\bm{x}$) frame. Given the time-dependent deformation of the substrate, it is more convenient to solve the equations of motion in the body frame, which is by definition constant, than it is to solve them in the lab frame. This is a common strategy when solving flow or elasticity problems in the presence of time-dependent boundary conditions\cite{Luo2004,Venturi2009,Molina2016}. However, this requires careful consideration, particularly with regards to the definition of the time derivatives. Let $\bm{e}_i$ and $\bm{E}_I$ be the basis vectors in the lab and body frame, respectively, and $u^i$ and $u^I$ the corresponding (contravariant) components of a given vector $\bm{u} = u^i \bm{e}_i = u^I \bm{E}_I$. Throughout this work we will assume the Einstein summation convention, and reserve lower (upper) case indices for quantities in the lab (body) frame. The corresponding transformation rules are given by\cite{Schutz1980} \begin{align} \bm{e}_i &= \Lambda^{I}_{\phantom{I}i} \bm{E}_I & \bm{E}_I &= \Lambda^{i}_{\phantom{i}I}\bm{e}_i \label{e:Fei}\\ u^i &= \Lambda^{i}_{\phantom{i} I} u^I & u^I &= \Lambda^{I}_{\phantom{I} i} u^i\label{e:Fvi} \end{align} with $\Lambda^I_{\phantom{I}i} \equiv \p\xi^I/\p x^i$, $\Lambda^i_{\phantom{i}I} \equiv \p x^i/\p\xi^I$, and $\Lambda^{I}_{\phantom{I}i}\Lambda^{i}_{\phantom{i}J} = \delta^{I}_{\phantom{I}J}$. The inner or scalar product between two vectors is defined as $\bm{u}\cdot\bm{v} \equiv u_I v^I = u^I v_I = G_{IJ} u^I v^J = G^{IJ} u_I v_J$, with $G_{IJ}$ and $G^{IJ}$ the components of the metric tensor and its inverse ($G^{IJ} G_{JK} = \delta^{I}_{\phantom{I}K}$) \begin{align} G_{IJ} &= \Lambda^{i}_{\phantom{i}I}\Lambda^{j}_{\phantom{j}J} g_{ij} = \begin{pmatrix} \parg{1 + \eps(t)}^2 & 0 \\ 0 & \parg{1 + \eps(t)}^{-2\nu} \end{pmatrix}\label{e:GIJ} \end{align} where the metric tensor in the lab frame is the Euclidean metric tensor $g_{ij} = \delta_{ij}$.
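For concreteness, the deformation map of Eqs.~\eqref{e:x1}-\eqref{e:x2}, the strain protocol of Eq.~\eqref{e:eps}, and the resulting body-frame metric of Eq.~\eqref{e:GIJ} are straightforward to evaluate numerically. The following is a minimal Python sketch (with $\eps_0$ and $\nu$ taken from Table~\ref{t:params} and an illustrative frequency $\omega$), not part of our production code:
\begin{verbatim}
# Sketch: substrate deformation map and body-frame metric.
import numpy as np

eps0, nu, omega = 0.3, 0.3, 1.0e-2    # amplitude, Poisson ratio, frequency
Lx = Ly = 100.0                       # undeformed substrate dimensions

def eps(t):
    # lateral strain, Eq. (eps)
    return 0.5 * eps0 * (1.0 - np.cos(2.0 * np.pi * omega * t))

def x_of_xi(xi1, xi2, t):
    # Eulerian coordinates of substrate element xi, Eqs. (x1)-(x2)
    x1 = (xi1 - Lx / 2.0) * (1.0 + eps(t))
    x2 = (xi2 - Ly / 2.0) * (1.0 + eps(t)) ** (-nu)
    return x1, x2

def metric(t):
    # diagonal metric G_IJ in the body frame, Eq. (GIJ)
    return np.diag([(1.0 + eps(t)) ** 2, (1.0 + eps(t)) ** (-2.0 * nu)])
\end{verbatim}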
For what follows, we will also require the coordinate flow velocity $\bm{U}$, i.e., the velocity of the coordinates or the velocity of the moving substrate. In the body frame, this is defined as\cite{Venturi2009} \begin{align} \bm{U} &\equiv -\frac{\p\bm{\xi}}{\p t} = \begin{pmatrix} \phantom{-\nu}\epst \parg[\big]{\xi^1 - L_x/2} \\ -\nu \epst \parg[\big]{\xi^2 - L_y/2} \end{pmatrix} \label{e:U} \end{align} where $\epst = \frac{\dot{\eps}}{1 + \eps}$ and \begin{align} \dot{\eps}(t) &=\p_t \eps(t) = 2\pi\omega\frac{\eps_0}{2}\sin{\parg{2\pi \omega t}}\label{e:doteps} \end{align} \subsection{Crawling Cells on Cyclically Stretched Substrates} To consider the dynamics of the cell on the cyclically stretched substrate, we begin by writing down the equations of motion in contravariant form in the body (substrate) frame of reference, replacing the time-derivatives with intrinsic time derivatives (see Appendix~\ref{s:app_tensor}), to obtain \begin{align} \p_t\rho &= D_\rho \Delta \rho - \parg[\big]{1-\rho}\parg[\big]{\delta[\rho]- \rho}\rho - \alpha\parg{A} p^J \covd{\rho}{J} \label{e:rho}\\ \p_t p^I &= D_p \Delta p^I - \tau_1^{-1}p^I - \tau_2^{-1}\parg{1-\rho^2}p^I - \beta G^{IJ} \frac{\covd{\rho}{J}}{\sqrt{1 + \epsilon \cntd{\rho}{K} \covd{\rho}{K}}} - \gamma \parg[\big]{p^J \covd{\rho}{J}} p^I - p^J \grad_J U^I\label{e:p}\\ \p_t A &= D_A\Delta A -\tau_A^{-1}(1-\rho^2)A + \rho\parg[\big]{a_0 p^J p_J + a_{\text{nl}}A^2} - \parg[\big]{d(\cdots) + s A^2}A - A\grad_J U^J \label{e:A} \end{align} where $\Grad_J \rho = \p_{\xi^J} \rho = \p_J \rho$ and $\grad_J U^I = \p_{J} U^I + \Gamma^{I}_{KJ} U^K$ are the components of the covariant derivative of $\rho$ and $\bm{U}$, respectively ($\Gamma^{I}_{JK}$ the connection coefficients). In addition, the Laplacian operator $\grad^2$ is here replaced with the Laplace-Beltrami operator $\Delta$. In the current case, all connection coefficients are zero ($\Gamma^{I}_{JK} = 0$), considerably simplifying the calculations, since $\Delta \rho = G^{JK}\p_{\xi^J}\p_{\xi^K}\rho$ and $\Delta p^I = G^{JK}\p_{\xi^J}\p_{\xi^K} p^I$. The final set of equations (\ref{e:rho}-\ref{e:A}) is almost the same as the original formulation (\ref{e:rho0}-\ref{e:A0}), except for the last term on the right-hand side of the equations for $\bm{p}$ and $A$, which depends on the gradient of the coordinate flow velocity ($\Grad\bm{U}$) \begin{align} \Grad\bm{U} &\equiv \begin{pmatrix} \grad_1 U^1 & \grad_2 U^1 \\ \grad_1 U^2 & \grad_2 U^2 \end{pmatrix} = \begin{pmatrix} \epst(t) & 0 \\ 0 & -\nu\epst(t) \end{pmatrix}\label{e:DU} \end{align} and an additional decay term ($\tau_A^{-1}$) for the adhesion sites outside the cell. We found the latter to be necessary to avoid any spurious adhesion-mediated interactions between the cell and its periodic images, particularly when using small system sizes or low frequencies. The precise functional form for the detachment rate $d$ will be discussed in the next subsection. The additional term in the equation for $\bm{p}$ comes from the time-dependent nature of the basis vectors, whereas the term appearing in the equation for $A$ comes from the time-dependence of the volume element, and is required to ensure the total conservation of bonds under stretching. Of note is the fact that the equations of motion are translationally invariant, i.e., there is no explicit dependence on the coordinates $\bm{\xi}$. This allows us to assume periodic boundary conditions and employ efficient pseudo-spectral methods to solve the equations.
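Since all connection coefficients vanish, the Laplace-Beltrami operator reduces to $\Delta = G^{11}\p^2_{\xi^1} + G^{22}\p^2_{\xi^2}$ with time-dependent (but spatially uniform) coefficients, which is diagonal in Fourier space. The following minimal Python sketch illustrates this pseudo-spectral evaluation; it is only a schematic fragment, not the full scheme of Appendix~\ref{s:app_num}:
\begin{verbatim}
# Sketch: pseudo-spectral Laplace-Beltrami operator on the periodic body frame.
import numpy as np

N, L = 256, 100.0                              # grid points, domain size
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
k1, k2 = np.meshgrid(k, k, indexing='ij')

def laplace_beltrami(rho, G11_inv, G22_inv):
    # Delta rho = G^{11} d2(rho)/d(xi1)^2 + G^{22} d2(rho)/d(xi2)^2
    # (all connection coefficients vanish for this deformation)
    rho_hat = np.fft.fft2(rho)
    return np.real(np.fft.ifft2(-(G11_inv * k1**2 + G22_inv * k2**2) * rho_hat))

# Inverse metric components, from Eq. (GIJ):
#   G11_inv = (1 + eps(t))**(-2),  G22_inv = (1 + eps(t))**(2 * nu)
\end{verbatim}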
Details on the numerical implementation can be found in Appendix~\ref{s:app_num}. \subsection{Cell-Substrate Coupling} In this work, we are interested in studying the reorientation of fast-crawling cells such as \textit{Dictyostelium}, which possess no stress fibers, on cyclically stretched substrates. Recent experiments by Iwadate et al.\cite{Okimura2016a} have shown that cell reorientation occurs even though no significant orientational order is observed in the dense actin-network in the middle of the cell. Instead, the authors have reported that myosin II becomes concentrated on the stretched sides of the cell, but how this is related to the reorientation response, or which pathway the cell uses to sense the mechanical stimulation, is still not understood. However, they conclude their work by offering three possibilities for how the mechanical signals trigger the localization: (1) through the focal adhesion sites, (2) through some unidentified mechanosensitive channel, or (3) through the deformation of the actin filament network, among which they identify the latter as more likely. Here, we will consider the first option, given the obvious importance of the focal adhesions in the transmission of forces to and from the cell, and the actin-network in particular. Indeed, recent studies on slow-crawling, stress-fiber-containing cells have shown that the adhesion dynamics can help to explain the experimentally observed reorientation of such cells\cite{Zhong2011,Chen2015a}. Thus, for simplicity, we will ignore any effects coming from the viscoelastic properties of the actin-network, even though it surely has a role to play in determining the reorientation response, particularly at lower frequencies\cite{Kong2008,Zhong2011}. We therefore consider that the coupling between the cell dynamics and the substrate is due exclusively to the adhesion dynamics. Under cyclic stretching, adhesion bonds have been shown to lose stability if the frequency is high enough\cite{Kong2008,Zhong2011}. This is due to the high-speed changes in the substrate, which prevent the formation of any stable bonds. This frequency dependence for the stability of the adhesion bonds has been linked to the strong frequency dependence of the reorientation response seen experimentally. In particular, Liu et al.\cite{Liu2008} found that the alignment of arterial smooth muscle cells is maximized for a given value of the stretching frequency, and Jungbauer et al.\cite{Jungbauer2008} and Greiner et al.\cite{Greiner2013} both reported a lower threshold frequency below which no alignment is observed, although it should be noted that the former found the response time to decrease with increasing frequency (above the lower threshold), before plateauing at an upper frequency threshold, whereas the latter found no such change. Within the phenomenological framework we are considering, we incorporate this frequency-dependent response in the form of a strain-dependent detachment rate. Based on the experimental results showing a lower frequency threshold needed to observe any realignment\cite{Jungbauer2008,Greiner2013}, and the strong frequency dependence found for the stability of focal adhesions\cite{Kong2008}, we assume that the detachment rate is sensitive only to the rate at which the substrate is being stretched.
As an objective measure for this rate of stretching, we use the (Lagrangian) rate of deformation tensor $\tensor{D}$, defined as the time-derivative of the Green deformation tensor (or the right Cauchy-Green tensor), which in component form is given by\cite{Marsden1994} \begin{align} 2 D^{I}_{\phantom{I}J} &= G^{K I} g_{i k}\parg[\bigg]{ \Lambda^{i}_{\phantom{i}J} \grad_K U^k + \Lambda^{k}_{\phantom{k}K}\grad_J U^i }\label{e:Dij} \end{align} In Eulerian terms, it yields the symmetric part of the velocity gradient tensor, and, as its name suggests, it provides information on the rate at which an object is being deformed or stretched. We consider that the rate of detachment $d$ depends solely on the trace of this rate of deformation tensor, $D=\trace{(\tensor{D})}$, i.e., on how fast the substrate is being stretched or compressed. We assume a sharp sigmoidal response, such that $d=0$ ($d=d_0$) below (above) the critical frequency $\omega_c$. We introduce three basic response functions \begin{align} d^{(\pm)}(D) &= \frac{d_0}{2}\barg[\bigg]{1 + \tanh{\parg[\Big]{{b^2\parg[\big]{D^2 - D_c^2}}}}} \label{e:Dpm}\\ d^{(+)}(D) &= \frac{d_0}{2}\barg[\bigg]{1 + \tanh{\parg[\Big]{{b^2\parg[\big]{R^2(D) - D_c^2}}}}}\label{e:Dp}\\ d^{(-)}(D) &= \frac{d_0}{2}\barg[\bigg]{1 + \tanh{\parg[\Big]{{b^2\parg[\big]{R^2(-D) - D_c^2}}}}}\label{e:Dm} \end{align} with $d_0$ the maximum rate of detachment, $D_c$ the critical deformation rate, $R(x) = x H(x)$ the ramp function ($H$ the Heaviside step function), and $b$ a numerical parameter to control the stiffness. This will allow us to distinguish the response of the cells to extension ($d^{(+)}$), compression ($d^{(-)}$), or both ($d^{(\pm)}$). In all cases, when $d=d_0$, the attachments to the substrate break, so that the cell stops moving (since the propulsion term depends linearly on $A$) and tries to recover its circular shape. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{dpm} \includegraphics[width=0.48\textwidth]{chiw} \caption{\label{f:chi}(color online) (left) Detachment rate as a function of time for three different frequencies, with $D_c = 5\cdot 10^{-3}$, $b=10^3$, and $d_0=1$. (right) Average detachment rate $\chi$, as a function of frequency, for three different critical stretching rates $D_c$.} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.7\textwidth]{stretch_coupling_fix} \caption{\label{f:eDd}(color online) Schematic representation of the adhesion/substrate coupling. From top to bottom, the stretch ratio $\eps$, the trace of the rate-of-deformation tensor $D=\trace{\parg{\tensor{D}}}$, and the magnitude of the detachment rate $d(D)$, with $d=0$ ($d=1$) shown as light and dark blue, respectively, for three different response functions, expansion-contraction $d^{(\pm)}$, expansion $d^{(+)}$, and contraction $d^{(-)}$.} \end{figure} To estimate the critical frequency $\omega_c$, we assume that the detachment functions (Eqs.(\ref{e:Dpm}-\ref{e:Dm})) exhibit a step-like response, which is a good approximation if $b$ is large enough (see Figure~\ref{f:chi}). We then have $d=d_0$ for $D^2 - D_c^2 \ge 0$, which leads to the following quadratic equation for $y= 1 - \cos{\parg{2\pi\omega t}}$, from which we can directly compute $\omega_c$ as a function of $D_c$ \begin{align} B^2 = \frac{y\parg{2-y}}{\parg{1 + \frac{\eps_0}{2} y}^2} \end{align} with $B = \frac{D_c}{\pi \omega \eps_0 (1-\nu)}$.
The roots of this equation are given by \begin{align} \parg[\Big]{1 + \parg{B \eps_0/2}^2}\cos{\parg{2\pi\omega t}} &= B^2 \frac{\eps_0}{2}\parg[\bigg]{\frac{\eps_0}{2} + 1} \pm \sqrt{1 - B^2\parg{1 + \eps_0}} \end{align} and are real only if the term inside the square root is non-negative, from which we can derive the critical frequency \begin{align} \omega_c = \frac{D_c}{\eps_0 (1-\nu) \pi}\sqrt{1 + \eps_0}\label{e:wc} \end{align} Finally, to quantify the degree to which this detachment rate affects the dynamics, we define a function $\chi$ that measures the average detachment rate over a half-cycle \begin{align} d_0 \chi &= \frac{2}{T} \int_{t_0}^{t_0 + T/2} d (D(t)) \,\df{t} \end{align} where $d$ is one of $d^{(\pm)}$, $d^{(+)}$, or $d^{(-)}$. Alternatively, this also provides a measure of the relative time-interval during which the adhesions are broken. Fig.~\ref{f:chi} shows the detachment rate as a function of time, as well as the average detachment rate as a function of frequency, while Fig.~\ref{f:eDd} gives a schematic diagram of the three main quantities involved in determining the response of the cell: the time-dependent strain, the rate of deformation $D$, and the detachment rate $d(D)$. \section{Simulation and Analysis Method} \begin{table}[ht!] \begin{tabular}{lll} \hline Parameter & Value & Description\\ \hline $\alpha$ & $4$ & Propulsion rate \\ $\beta$ & $\alpha/2$ & Actin nucleation rate \\ $\gamma$ & $0.5$ & Motors' symmetry breaking \\ $\sigma$ & $1.3$ & Motors' contraction \\ $\mu$ & $0.1$ &Stiffness of volume conservation \\ $D_\rho$ & $1$ & Stiffness of the diffuse interface \\ $D_p$ & $0.2$ & Diffusion coefficient for $\boldsymbol{p}$ \\ $\tau_1^{-1}$ & $0.1$ & Degradation rate of actin \\ $\tau_2^{-1}$ & $0.4$ & Decay rate of $\boldsymbol{p}$ outside of cell \\ $\epsilon$ & $37.25$ & Regularization of actin creation\\ $D_A$ & $1$ & Diffusion of adhesion sites \\ $a_0$ & $0.01$ & Linear adhesion attachment rate\\ $a_{\textrm{nl}}$ & $1.5$ & Nonlinear adhesion attachment rate \\ $s$ & $1$ & Saturation of adhesion sites \\ $d_0$ & $1$ & (Maximum) Adhesion detachment rate\\ $\tau_A^{-1}$ & $\tau_2^{-1}$ & Decay rate of adhesion sites outside of cell \\ $\nu$ & $0.3$ & Poisson ratio \\ $\omega$ & $0-0.1$ & Substrate stretching frequency \\ $\eps_0$ & $0.3$ & Substrate deformation amplitude \\ $D_c$ & $10^{-3}-10^{-1}$ & Critical rate-of-deformation\\ $b$ & $10^3$ & Stiffness parameter for detachment rate response\\ $r_0$ & $15$ & Radius of circular initial condition \end{tabular} \caption{\label{t:params} Default simulation parameters adapted from Ref.\cite{Lober2014}.} \end{table} We consider a single cell on a cyclically stretched substrate, at various frequencies, and study the time-dependent orientation for the three different response functions introduced above: $d^{(\pm)}$, $d^{(+)}$, and $d^{(-)}$. As a reference, we have also considered the case when $d=0$, as it serves to identify to what degree the reorientation can be attributed to the passive deformation of the cell by the substrate. Since we are interested in studying the frequency dependence of the cell dynamics, we have fixed all parameters related to the cell and substrate. Unless otherwise stated, the default values are those listed in Table~\ref{t:params}, which were taken from a previous study on patterned substrates performed by Ziebert and Aranson\cite{Lober2014}. In the absence of stretching, a polarized cell with these parameters will settle into a steady gliding motion with a fan-like shape.
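As a numerical check, the response functions of Eqs.~(\ref{e:Dpm})-(\ref{e:Dm}) and the critical frequency of Eq.~\eqref{e:wc} can be evaluated directly. A short Python sketch, using the parameter values of Table~\ref{t:params}:
\begin{verbatim}
# Sketch: detachment-rate response functions and critical frequency.
import numpy as np

d0, b, Dc = 1.0, 1.0e3, 5.0e-3     # max detachment rate, stiffness, threshold
eps0, nu = 0.3, 0.3                # stretching amplitude, Poisson ratio

def ramp(x):
    # R(x) = x H(x), with H the Heaviside step function
    return x * (x > 0)

def d_pm(D):    # responds to both extension and compression, Eq. (Dpm)
    return 0.5 * d0 * (1.0 + np.tanh(b**2 * (D**2 - Dc**2)))

def d_plus(D):  # responds to extension only, Eq. (Dp)
    return 0.5 * d0 * (1.0 + np.tanh(b**2 * (ramp(D)**2 - Dc**2)))

def d_minus(D): # responds to compression only, Eq. (Dm)
    return 0.5 * d0 * (1.0 + np.tanh(b**2 * (ramp(-D)**2 - Dc**2)))

# Critical frequency, Eq. (wc):
wc = Dc * np.sqrt(1.0 + eps0) / (eps0 * (1.0 - nu) * np.pi)
print(wc)   # ~8.6e-3 for these parameters
\end{verbatim}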
Regarding the stretching protocol, we follow the experiments of Iwadate et al.\cite{Iwadate2009}, and set the Poisson ratio to $\nu=0.3$, with a fixed amplitude of $\eps_0 = 0.3$. For all simulations we considered a single cell of circular radius $r_0 = 15$ that was initially polarized at an angle $\theta_0$ with respect to the stretching direction ($\theta=0$). In order to quantify the reorientation response, we performed simulations for five different initial conditions, $\theta_0 = n \pi/12$ with $n=1,\ldots,5$, for each set of parameter values ($\omega$ and $d$). The initial values for the magnitude of the polarization field and the concentration of adhesion sites were set to $p = 0.5$ and $A=0.1$, respectively. The dimensions of the (unstretched) domain were $L_x = L_y = 100$ and we used $N=256$ grid points along each dimension to discretize the system. To track the orientation of the cells, we computed the center of mass as a function of time; from this, the (relative) center-of-mass velocity within the lab frame was obtained and used to define $\theta$. Specifically, let $\Delta \bm{r}(t_0, t_1) = \bm{r}(t_1) - \bm{r}(t_0)$ be the center of mass displacement, within the lab frame, in a time interval $\Delta t = t_1-t_0$. To compute the relative velocity of the cell $\bm{v}_{\text{eff}}$ with respect to the substrate, we must remove the displacement corresponding to the externally imposed strain. Consider a substrate element that at time $t=t_0$ coincides exactly with the position of the center of mass $\bm{r}(t_0)$. The Lagrangian coordinates for this element are $\bm{\xi}(\bm{r}(t_0), t_0)$. The spatial position of this element at any subsequent time $t_1$ is known exactly, since it follows the substrate deformation; this allows us to define the effective substrate velocity $\bm{u}(t_0,t_1)$ as \begin{align} \bm{u}(t_0,t_1) &= \frac{\bm{x}\parg{\bm{\xi}(\bm{r}(t_0),t_0), t_1} - \bm{r}(t_0)}{\Delta t} \end{align} Thus, the effective velocity of the cell, within the lab frame, is simply \begin{align} \bm{v}_{\text{eff}}(t_0, t_1) &= \frac{1}{\Delta t}\barg[\Big]{\Delta \bm{r}(t_0, t_1) - \Delta t \bm{u}(t_0, t_1)} \end{align} To obtain accurate measurements for $\bm{v}_{\text{eff}}$ we made sure that the sampling time $\Delta t$ was smaller than both the period of oscillation ($T=\omega^{-1}$) and the time $\tau = r_0/v_0$ required for the cell to move a distance equal to its radius in the static case ($\omega=0$), with $v_0$ the corresponding steady state velocity. The shape deformations of the cells can be tracked by computing the aspect ratio $h$, defined in terms of the following shape tensor \cite{Ziebert2012} \begin{align} I^{ij}&= \int (x^i - R^i)(x^j - R^j)\, \rho(\bm{x}) \df{x}\df{y} \end{align} where $\bm{R} = \avg{\bm{x} \rho(\bm{x})}$ is the center of mass position of the cell. The aspect ratio is then given as $h = \sqrt{\lambda_1/\lambda_2}$, where $\lambda_1$ and $\lambda_2$ are the eigenvalues of $I$ ($\lambda_1\ge \lambda_2$). A cell in the circular (static) state will have an aspect ratio of $h=1$, whereas the fan-like crawling cells in the absence of stretching will present an aspect ratio closer to $h=2$. \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{frequencies} \caption{\label{f:frequencies}Relevant time/frequency scales for the cell $\rho$, polarization $\bm{p}$, and adhesion $A$ dynamics for the default choice of parameter values given in Table~\ref{t:params}.
The stretching frequency range is chosen to probe the role of the adhesion dynamics on the cell response.} \end{figure} To understand the reorientation dynamics, we need to consider the interplay between the dynamics of the shape deformations, the actin dynamics, and the adhesion dynamics, as well as their characteristic time-scales, and how they compare to the time-scale over which the substrate is being deformed. For this, we first define the characteristic length scales in the system: the cell size $r_0 = 15$ and the interface thickness $\zeta = D_\rho^{1/2} = 1$. The characteristic time or frequency of the shape deformations is determined by the stiffness of the interface as $\omega_{D_\rho} = D_\rho r_0^{-2} \simeq 4\cdot 10^{-3}$, as well as by the rate governing the retraction/expansion of the two phases, $\omega_\delta = 1$. The frequency associated to the propulsion of the cell by the actin network is $\omega_\alpha = \alpha \zeta^{-3} \simeq 4$. The time-scales for the actin dynamics include the diffusion time-scale ($D_p$), the depolymerization rate ($\tau_1$), the polymerization rate ($\beta$), and the asymmetry driving term ($\gamma$). They in turn yield the following characteristic frequencies: $\omega_{D_p} = D_p r_0^{-2} \simeq 9\cdot 10^{-4}$, $\omega_{\tau_1} \simeq \tau_1^{-1} = 10^{-1}$, $\omega_\beta = \beta \epsilon^{-1/2} \simeq 3 \cdot 10^{-1}$, and $\omega_\gamma = \gamma \zeta^{-1} \simeq 5\cdot 10^{-1}$. Finally, the frequencies associated to the adhesion dynamics are $\omega_{D_A} = D_A r_0^{-2} \simeq 4\cdot 10^{-3}$, $\omega_{a_0} = a_0 \zeta^2 = 10^{-2}$, $\omega_{a_{\text{nl}}} = a_{\text{nl}} r_0^{-2} \simeq 7\cdot 10^{-3}$, $\omega_s = s r_0^{-4} \simeq 2\cdot 10^{-5}$, and $\omega_{d} = d_0 = 1$. It should be noted that the relevant length scale for the linear attachment rate for the adhesions ($a_0$) is the characteristic size of the region over which the $\bm{p}$ field is non-zero. Analysis of the simulations shows that this is strongly peaked near the leading edge, as the polymerization rate is proportional to $\grad\rho$. For simplicity we have assumed that this is given by the interface width $\zeta$, but this is just a lower bound; the real value should be slightly higher, $\simeq 2\sim 4\zeta$. In contrast, the relevant length-scale for the non-linear growth and saturation rates is the cell size $r_0$. To summarize, using the default parameter values given in Table~\ref{t:params}, we can identify the following frequency regimes governing the shape ($\rho$), actin ($\bm{p}$), and adhesion ($A$) dynamics \begin{align*} \omega_{D_\rho} &\ll \omega_\delta < \omega_\alpha\\ \omega_{D_p} \ll \omega_{\tau_1} &< \omega_\beta < \omega_\gamma \\ \omega_s \ll \omega_{D_A} &< \omega_{a_{\text{nl}}} < \omega_{a_0} \ll \omega_d \end{align*} An illustration of the different characteristic frequencies is given in Fig.~\ref{f:frequencies}. Here, since we are interested in studying how the adhesion dynamics affect the reorientation, we will focus on stretching frequencies within the range $10^{-4} < \omega < 10^{-1}$, which brackets the regime $\omega_{D_p} < \omega < \omega_{a_0}$. \section{Results} \label{s:res} \subsection{Passive alignment} \begin{figure}[ht!] \centering \includegraphics[width=0.35\textwidth]{orientation_adhesion_decay0.4_d0} \caption{\label{f:passive} (color online) Orientation $\theta$ as a function of time $t$ for different initial polarization directions $\theta_0$, and various frequencies, in the absence of any specific cell-substrate coupling ($d=0$).
Default parameters given in Table~\ref{t:params} were used. Note that $\theta=0$ ($\theta=\pi/2$) corresponds to parallel (perpendicular) alignment.} \end{figure} Let us start by considering the simple case of a cell that is being passively advected by the substrate, in the absence of any direct coupling, i.e., $d=0$. The results for this case are summarized in Fig.~\ref{f:passive}, which shows the orientation as a function of time, for various stretching frequencies $\omega$. First, in the absence of stretching, for $\omega=0$, the orientation of the cell is time-independent, as expected, since we have not included any source of stochasticity in the model. For non-zero frequencies, the orientation shows a clear time-variation, since the cell is being constantly deformed and rotated by the substrate. However, there are several distinct frequency regimes, depending on how the stretching frequency compares to the characteristic frequencies of the system, giving rise to qualitatively different realignment dynamics. At very low frequencies, $\omega \lesssim 10^{-4}$, the orientation oscillates around the initial value, but there is no stretch-induced alignment. In this case, the deformation is so slow that the cell (together with the actin network and adhesion sites) can completely follow the imposed strain. As the frequency is increased further, such that it becomes comparable to the frequency for the diffusion of orientational order $\omega_{D_p}$, we begin to see an alignment either parallel or perpendicular to the stretching direction. In this case, the actin network is not able to rearrange fast enough to adapt to the changing shape of the cell. However, this alignment is extremely slow, with a time-scale of the order of $t/\tau\simeq 10^{4}$. In addition, there seems to be no preference between the parallel and perpendicular directions, with the final orientation depending on the initial orientation: cells that were aligned closer to the parallel or perpendicular directions will favor those orientations. Upon increasing the frequency of oscillation to $\omega \simeq 5\cdot 10^{-3}$, the qualitative behavior remains unchanged, but the reorientation time-scale is reduced by roughly an order of magnitude. At these frequencies, the substrate is stretching faster than the cell can relax, since $\omega > \omega_{D_\rho}$, so that the shape starts to become perturbed by the imposed strain. If the frequency is increased still further, we observe a clear transition, at $\omega \simeq 2\cdot 10^{-2}$, above which all cells show a parallel alignment, regardless of the initial orientation. For such high frequencies, $\omega > \omega_{a_0} \gtrsim \omega_{a_{\text{nl}}}$, the distribution of adhesion bonds inside the cell can no longer be stabilized fast enough to keep track of the imposed deformations. As a complement to the previous analysis, we can also consider the time-dependence of the aspect ratio $h$ and the magnitude of the effective cell velocity $v_{\text{eff}}$. The time-variation of these quantities shows similar oscillations in response to the strain as does the orientation $\theta(t)$, but there is no systematic drift, with both quantities oscillating around their ``equilibrium'' ($\omega = 0$) values, corresponding to $h_0\simeq 1.9$ and $v_0\simeq 0.6$. Studying how the fluctuations in these quantities change as a function of frequency will help us to clarify the mechanosensitive response of the cells.
For this, we have plotted the maximum and minimum value of $h-1$ and $v_{\text{eff}}$, as well as the amplitude of the corresponding oscillations, for two different initial orientations ($\theta = \pi/6$ and $\pi/3$) in Figure~\ref{f:passive_hv}. As expected, at lower frequencies $\omega < \omega_{D_{\rho}}$ the fluctuations are negligible, as the cell is able to relax to its preferred shape faster than the substrate is being deformed. In addition, even though the cells reorient into either the parallel or perpendicular directions for $\omega \gtrsim \omega_{D_p}$, we see no difference in their shape or velocity. This means that for this frequency range the reorientation of the cell can be effectively decoupled from its translational motion. As the frequency becomes comparable to $\omega_{D_\rho}$ the shape of the cell begins to show oscillations, as the substrate is moving faster than it can relax. Since the velocity and motility are intimately linked, this is accompanied by a corresponding increase in the velocity fluctuations, but this effect is much less pronounced. It is at this point where we can start to see a difference between cells oriented perpendicular or parallel to the stretching. The cell that was initially polarized at $\theta = \pi/3$ will align in the perpendicular direction and experiences considerable shape deformation but relatively small velocity fluctuations. The cell that was polarized in the $\theta = \pi/6$ direction will align in the parallel direction and shows the opposite behavior, small shape deformations but large velocity fluctuations. These tendencies increase with increasing frequency, up until $\omega \simeq \omega_{a_0}$, where the only stable orientation is the parallel one. Here, the fluctuations of the aspect ratio reach a plateau, which tells us that the cell shape is now completely unable to respond to the imposed strain. Simultaneously, at this point the stretching starts to interfere with the adhesion dynamics and this greatly amplifies the fluctuations in the velocity. We see that the cell can slow down and speed up by up to $50\%$ with respect to its average value. This is due to the heterogeneous and unstable distribution of adhesion sites that characterizes the cell at these frequencies. These fluctuations reach a maximum at $\omega \simeq 5\omega_{a_0}$, after which their amplitude shows a sharp decrease. \begin{figure}[ht!] \centering \includegraphics[width=0.8\textwidth]{aspect_vel_none} \caption{\label{f:passive_hv} (color online) Maximum, minimum, and amplitude of the (steady) oscillations of the aspect ratio $h$ and effective velocity $v_{\text{eff}}$ as a function of frequency $\omega$. While there is no significant change in the average values of $h$ or $v_{\text{eff}}$, the fluctuations of these quantities depend strongly on frequency. Data was obtained from the trajectories of cells initially polarized in the $\theta=\pi/6$ (open symbols) and $\theta=\pi/3$ (filled symbols) orientations, once the orientation of the cell had stabilized. The amplitude was computed as the difference between the maximum and minimum values.} \end{figure} The question of why the cells choose one particular orientation over another, and why the only stable orientation at high frequencies is the parallel one, remains to be answered.
Existing theories\cite{De2007,De2008,Safran2009,Livne2014a,Xu2016}, which focus on slow crawling cells with stress fibers, and do not consider shape deformations or the cell motility, predict that both parallel $\theta=0$ and perpendicular $\theta = \pi/2$ orientations are solutions to the steady state equation ($\text{d}\theta/\text{d}t = 0$), together with an oblique orientation $\theta_f$, which is a function of the system parameters. While the oblique (nearly perpendicular) orientation $\theta_f$ is usually the stable solution, under certain conditions, such as when the mechanical forces due to the substrate dominate the cellular activity, or if the substrate is very soft, the parallel orientation becomes stable\cite{De2007,De2008,Xu2016}. A direct comparison with our results is not straightforward, but we also find $\theta=0$ and $\theta=\pi/2$ as steady state solutions, with the parallel orientation the only stable one at high frequencies (at least within the frequency range we have considered). In this high frequency regime, we have seen that the cell is unable to resist the shape deformations imposed by the substrate, and that the distribution of adhesions is unstable, leading to large velocity fluctuations, even though the average velocity remains unchanged. In this limit, the forces due to the externally imposed strain dominate any forces due to the intrinsic cell motility. Thus, our findings of a stable parallel orientation are consistent with the theoretical predictions\cite{De2007,De2008,Xu2016}. In the absence of any specific cell-substrate interaction, the strong coupling that exists between shape and motility yields a preferential alignment under cyclic stretching that is strongly dependent on the relative frequency. At very low frequencies the cells and the actin network have time to readjust to the deformation, and the average migration direction is not affected. When the stretching is faster than the actin network can respond, there is a very weak reorientation process, but no preference between perpendicular or parallel directions. If the stretching is faster than the rate at which the cell can accommodate its shape (as defined by the stiffness of the membrane), then the reorientation is significantly faster. At even higher frequencies, where the cell cannot form and stabilize the adhesion bonds fast enough to follow the deformation, we observe that the cells align parallel to the stretching direction, regardless of initial orientation. Thus, even without any direct coupling to the substrate, there is a clear preference in the direction of migration. \subsection{Active alignment} \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{phase_dia} \caption{\label{f:active}(color online) Phase diagram showing the final orientation of the cells as a function of frequency $\omega$ (average detachment rate $\chi$) and adhesion response function. Each point is specified as an ellipse, with orientation, aspect ratio, and color used to encode information on the average orientation and the spread of the orientations. Results from five simulations with distinct initial cell polarization directions ($\theta_0 = \pi/12, \pi/6, \pi/4, \pi/3, 5\pi/12$) are used for each point. The average ($\langle\theta\rangle$) and the standard deviation $\big(\sqrt{\langle\theta^2\rangle}\big)$ of the steady-state orientation are used to define the orientation of the long axis and the aspect ratio of the ellipse, respectively.
In this case, small (large) standard deviations result in elongated (circular) shapes. Finally, we also compute an orientational order parameter $\langle\cos(2\theta)\rangle$, which is $1$ ($-1$) if all cells align parallel (perpendicular) to the stretching, and use it to color-code each ellipse. The critical stretching rate was set by $D_c = 5\cdot 10^{-3}$; all other parameters are the same as in Fig.~\ref{f:passive}.} \end{figure}

We now consider the reorientation of cells whose internal propulsion mechanism actively responds to the strain received from the substrate through a rate-dependent detachment rate ($d\ne 0$). This is done to model the frequency-dependent stability of the focal adhesions\cite{Kong2008,Zhong2011}. As described above, Eqs.~(\ref{e:Dpm}-\ref{e:Dm}), we consider cells that respond to either compression $d^{(-)}$ or extension $d^{(+)}$, or both $d^{(\pm)}$. By setting the threshold value $D_c$ ($\omega_c$) at which this response is activated, we can control the interval during which the cell is able to form attachments, and thus crawl over the substrate. Taking into account the results presented above in the absence of any direct coupling ($d=0$), for which the cells show parallel reorientation when the frequency of oscillation is greater than the frequency associated with the attachment dynamics, we can expect that a rate-dependent detachment rate will significantly affect the reorientation dynamics. We set $D_c = 5\cdot 10^{-3}$ ($\omega_c \simeq 8\cdot 10^{-3} \lesssim \omega_{a_0}$) and the frequency to lie in the range $8\cdot 10^{-3} < \omega < 6\cdot10^{-2}$ ($\chi$ between $0$ and $1$). Within this range, the only relevant time scales are those corresponding to the attachment dynamics, $\omega_{a_\text{nl}}$ and $\omega_{a_0}$, as the time scales for the actin and shape deformations are both slower ($\omega_{D_p} < \omega_{D_\rho} < \omega$). For frequencies below this range ($\omega < \omega_c$), we would have $d=0$ and the dynamics would be the same as in the passive case (Fig.~\ref{f:passive}). We have summarized the results in the phase diagram presented in Fig.~\ref{f:active} (see ESI~1 for the full set of trajectory data). For comparison purposes, the corresponding results for $d=0$ have also been included. First, at high frequencies ($\omega\gtrsim 2\cdot 10^{-2}$), we see that all cell types show parallel alignment, regardless of the specific form of the response function. In such cases, the stretching is too fast for the cell to respond ($\omega > \omega_{a_0} > \omega_\rho > \omega_p$), so the exact details of the attachment/detachment dynamics become irrelevant. More interesting are the results at low and intermediate frequencies. At low frequencies, $8\cdot 10^{-3}\lesssim \omega \lesssim 1.1\cdot 10^{-2}$ ($0.1\le \chi \le 0.4$), cells with $d^{(+)}$, which ``resist'' extension, exhibit a perpendicular alignment. In contrast, cells with $d^{(-)}$, which ``resist'' compression, show a parallel alignment. Within this frequency range the cyclic detachments occur over time-scales comparable to the time it takes for the cell to form and grow new attachments. It is clear that this alignment is due to the type of detachment, since cells with $d=0$ show no preferential alignment, with $\theta = 0$ and $\theta=\pi/2$ equally likely. Furthermore, the realignment of the cells with a non-zero detachment rate occurs over time-scales that are considerably shorter than those with $d=0$.
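As a concrete reading aid for the encoding used in Fig.~\ref{f:active}, the following minimal sketch (in Python; the orientation values are hypothetical and serve only to illustrate the computation) evaluates the three quantities that define each ellipse: the mean orientation, its spread, and the orientational order parameter $\langle\cos(2\theta)\rangle$.

\begin{verbatim}
import numpy as np

# Hypothetical steady-state orientations (radians) from five runs,
# e.g. a frequency at which most cells end up nearly perpendicular.
theta = np.array([1.52, 1.49, 1.55, 1.50, 1.48])

mean   = theta.mean()               # orientation of the ellipse's long axis
spread = theta.std()                # sets the aspect ratio of the ellipse
order  = np.cos(2.0*theta).mean()   # +1: all parallel, -1: all perpendicular

print(f"<theta> = {mean:.2f} rad, spread = {spread:.2f}, "
      f"<cos 2theta> = {order:.2f}")   # -> order close to -1 (perpendicular)
\end{verbatim}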
Simulation snapshots for $\omega=8.8\cdot 10^{-3}$ ($\chi=0.1$), showing the cell shape, concentration of adhesion sites, and actin orientation, are given in Fig.~\ref{f:snapshots}. Compared to cells with $d=0$, cells showing an active response to the stretching ($d^{(+)}$ or $d^{(-)}$) exhibit more pronounced shape deformations. As can be seen from the figure, the $d^{(+)}$ cells completely detach as the substrate is extending and rotating them towards the parallel direction. However, they are able to recover their adhesions during the compression stage, when they are being rotated into the perpendicular orientation. Cells with $d^{(-)}$ show the opposite behavior. This asymmetry in the dynamics during the extension and compression stages is the cause of the reorientation. The corresponding movies are provided as Supplemental Material (ESI~2-4). For comparison purposes, movies obtained from simulations at high frequencies, $\omega = 2.8\cdot 10^{-2}$ ($\chi = 0.8$), for which all cells show parallel alignment, are also available (ESI~5-7).

\begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{d0.png} \includegraphics[width=\textwidth]{dplus.png} \includegraphics[width=\textwidth]{dminus.png} \caption{\label{f:snapshots}Simulation snapshots for cells exhibiting passive and active responses to a periodically stretching substrate. Data was taken over one cycle between $3\le t/T < 4$, for cells initially polarized in the $\theta = \pi/6$ direction. From top to bottom, $d=0$, $d^{(+)}$, and $d^{(-)}$, respectively. Solid lines show the contour of the phase-field for $\rho = 0.5$, the density map shows the concentration of adhesion sites $A$, and the arrows the actin orientation field $\bm{p}$. The rectangles in the top figure show the substrate deformation, scaled down by a factor of 20.} \end{figure}

For intermediate frequencies $\omega\simeq \omega_{a_0}$, with $\chi \simeq 0.5$, $d^{(+)}$ cells exhibit a transition between the low-frequency response (favoring perpendicular orientations) and the high-frequency response (favoring parallel orientations), resulting in a stable oblique orientation $\theta\simeq \pi/4$. Surprisingly, the $d^{(-)}$ cells also exhibit a non-monotonic behavior: even though the low- and high-frequency limits both show parallel orientations, at $\omega\simeq 1.2\cdot 10^{-2}$ ($\chi\simeq 0.5$) the cells align perpendicular to the stretching direction. The behavior of the $d^{(\pm)}$ cells seems more complicated, but it can roughly be understood as a competition between the opposing tendencies of $d^{(+)}$ and $d^{(-)}$ cells to orient perpendicular or parallel to the stretching at low frequencies, with the perpendicular response being dominant. This is consistent with the fact that their reorientation time-scale is much longer than that of the other cell types. We have performed simulations for two other critical stretching rates, $D_c=2\cdot 10^{-3}$ ($\omega_c \simeq 3\cdot 10^{-3}$) and $D_c=10^{-2}$ ($\omega_c \simeq 10^{-2}$), and found similar behavior, at least in the high-frequency range. However, for $\omega_c=3\cdot 10^{-3} \lesssim \omega_{D_\rho} = 4\cdot 10^{-3}$, the stretching is now able to probe the shape deformations. For the lowest frequency considered, $\omega = 3.4\cdot 10^{-3}$ ($\chi=0.1$), the $d^{(+)}$ ($d^{(-)}$) cells actually favor a parallel (perpendicular) alignment.
Increasing the frequency to $\omega =3.7\cdot 10^{-3} \simeq \omega_{D_\rho}$, the system reverts to being dominated by the adhesion dynamics; thus, if the frequency is not too high, $d^{(+)}$ ($d^{(-)}$) cells will tend to align perpendicular (parallel) to the direction of stretching. We note, however, that for $d^{(-)}$ cells the reorientation response is less pronounced, particularly at intermediate frequencies. Again, in the high-frequency range $\omega > \omega_{a_0}$ all cells show parallel alignment.

Experimentally, the fast-crawling cells that we are modeling, such as \textit{Dictyostelium} and HL-60\cite{Iwadate2009,Iwadate2013,Okimura2016a}, have been shown to align perpendicular to the stretching direction, just as our model cells do within the appropriate frequency range. In these experiments, the imposed strain was not sinusoidal in nature, but more saw-tooth-like: a quick expansion of the substrate was followed by a static interval and then a slow relaxation to the original shape (such that the duty ratio was fixed to $1:1$). Thus, there is a clear asymmetry in the rate of deformation imposed in experiments during the expansion and contraction phases. Assuming this is enough to cause a relative instability in the adhesions during expansion/contraction, it would correspond to our simulations for $d^{(+)}$. Note that the cells would not need to be able to distinguish between expansion and contraction (as we have assumed for our simulations), but only the rate of deformation, since this rate is different in the two intervals. Our simulations then provide evidence favoring the adhesion dynamics as the mechanism responsible for the reorientation. This could be tested by using a deformation protocol reciprocal to that of the original experiments\cite{Iwadate2009}, with a slow expansion followed by a fast contraction, for which our model ($d^{(-)}$) predicts a parallel orientation. Finally, while we do not claim that quantitative agreement is possible with the simple model we have used here, particularly because it has not been parametrized for any specific cell type, we predict that in the limit where the adhesion dynamics dominates the response of the cells, an asymmetry in the expansion/contraction periods of the stretching can be used to selectively drive the reorientation.

\section{Discussion} \label{s:conc}

The question of cellular realignment under a cyclically stretching substrate has attracted much attention recently due to its biological significance. Among the various possible factors or mechanisms determining this mechanosensitive response, two have been singled out: (1) the viscoelasticity of the actin filament networks and (2) the focal adhesion dynamics. This is understandable, as the former is largely responsible for the mechanical properties of the cell, and the latter provides the coupling between the cell and the substrate (through the actin network) needed for the transfer of forces. While there has been considerable success in developing theories that can predict the reorientation dynamics of cells under cyclic stretching of the substrate, several issues remain. First, the exact mechanism responsible for the reorientation remains elusive. For example, Livne et al.\cite{Livne2014a} attribute it to the passively stored elastic energy, while Chen et al. attribute it to the forces on the focal adhesions\cite{Chen2015a,Qian2013,Xu2016,Xu2018}.
Both theories are able to explain the same set of experimental observations equally well, and even result in the same theoretical prediction for the orientational dynamics, making it difficult to determine which of the two effects is the dominant one. Second, most theoretical and simulation work has so far focused on slowly crawling cells which contain stress fibers, such as fibroblasts. These types of cells usually align in such a way that their stress fibers are perpendicular to the stretching direction. Furthermore, since they move so slowly, their motion can be decoupled from their reorientation. Therefore, the question of how fast-crawling cells without stress fibers, such as \textit{Dictyostelium}, reorient under cyclic stretching has remained largely unanswered. Recent experiments by Iwadate et al.\cite{Iwadate2009,Iwadate2013,Okimura2016a,Okimura2016} have shown that they prefer to orient perpendicular to the stretching direction. This perpendicular reorientation is observed without any corresponding alignment of the dense actin network inside the cells.

To study how fast-crawling cells respond to large-amplitude cyclic deformations, we require a model that describes both the cell motion and its reorientation. For this, we need to take into account the internal machinery of the cell (e.g., the actin network and myosin contractility), its coupling to the substrate (through the focal adhesion sites), and the accompanying shape deformations. To this end, we have established a computational framework that allows us to study the dynamics of cells using any of the phase-field models that have been developed recently\cite{Shao2012,Ziebert2016,Moure2016,Najem2016}. This phenomenological approach allows us to easily model the complex coupling between the shape and motility of the cells, as well as their interactions with the substrate. In this work, we have adopted a generic model for fast-crawling cells, which is nevertheless able to reproduce a wide variety of motility modes seen experimentally\cite{Lober2014}. Following previous work, which found a frequency-dependent instability in the focal adhesions\cite{Kong2008,Zhong2011}, and the fact that no orientational order was observed in the actin network of \textit{Dictyostelium} under stretching, we have focused our study on the role of the adhesion dynamics in the reorientation. Given the strong frequency dependence found by Kong et al.\cite{Kong2008}, and the reports of a lower frequency threshold to observe reorientation (albeit in slow-crawling cells)\cite{Jungbauer2008,Liu2008}, we assumed a sigmoidal dependence of the adhesion dynamics on the rate of deformation, such that the adhesions detach ($d\ne 0$) if the rate at which they are being deformed exceeds a given threshold; a sketch of one such response function is given below. Furthermore, we can selectively tune this response so that the cells become sensitive only to compression ($d^{(-)}$) or extension ($d^{(+)}$), or both ($d^{(\pm)}$). Even using this simple coupling we are still able to obtain a non-trivial frequency-dependent reorientation for our model cells. Depending on whether the cells tend to detach and stop crawling under too large an extension or compression, or whether they are just being passively advected by the substrate, and on how the stretching frequency compares to the characteristic frequencies associated with the shape deformation and the actin and adhesion dynamics, we can observe either perpendicular or parallel alignment, as well as oblique orientations.
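To make this coupling concrete, the sketch below (in Python) implements one plausible sigmoidal response of the kind described above; the functional form, the width $w$, and the amplitude $d_0$ are illustrative assumptions on our part, not the parametrization used in the simulations. Only the threshold behavior at the critical stretching rate $D_c$ is taken from the text.

\begin{verbatim}
import numpy as np

def detach_rate(D, D_c=5e-3, w=5e-4, d0=1.0, mode="pm"):
    """Illustrative sigmoidal detachment response d(D).

    D    : local rate of deformation imposed by the substrate
    D_c  : critical stretching rate (threshold)
    w    : width of the sigmoid (assumed)
    d0   : saturation value of the detachment rate (assumed)
    mode : 'plus'  -> responds to extension only,   d^(+)
           'minus' -> responds to compression only, d^(-)
           'pm'    -> responds to |D|,              d^(+/-)
    """
    sigmoid = lambda u: 0.5*(1.0 + np.tanh(u/(2.0*w)))  # stable logistic form
    if mode == "plus":
        return d0*sigmoid(D - D_c)
    if mode == "minus":
        return d0*sigmoid(-D - D_c)
    return d0*sigmoid(np.abs(D) - D_c)

# A sinusoidal strain rate crosses the threshold twice per cycle:
t = np.linspace(0.0, 2.0*np.pi, 9)
print(detach_rate(1e-2*np.cos(t), mode="plus").round(3))
\end{verbatim}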
As a reference, we first considered the passive case ($d=0$). At very low frequencies, there is no reorientation, with the cell oscillating around its initial direction. As the frequency is increased, both the parallel ($\theta=0$) and perpendicular ($\theta=\pi/2$) directions become steady-state solutions, but there is no systematic reorientation (i.e., cells do not show any preference for either direction). This (slow) reorientation arises because the actin network can no longer follow the deformations of the substrate. At higher frequencies, past the characteristic frequency associated with the shape deformations (as given by the membrane stiffness), the reorientation time scale is considerably reduced, but there is still no preference between the parallel and perpendicular directions. Finally, upon a further increase in the stretching frequency, we reach the time-scales over which the adhesion attachments are formed. It is at this point that we observe complete reorientation in the parallel direction. This parallel alignment has been predicted to occur in cases where the cellular activity is negligible compared to the forces coming from the substrate\cite{De2007,De2008}, which is in line with our numerical predictions.

In the case of an active coupling with the substrate ($d\ne 0$), we observed complete realignment, either in the parallel or perpendicular direction, over most of the parameter range considered. Thus, our results provide further evidence that the stability of adhesion bonds can have a dramatic effect on the mechanosensitivity of crawling cells\cite{Kong2008,Zhong2011,Chen2015a}. For all three types of responses ($d^{(\pm)}, d^{(+)}, d^{(-)}$), we were able to observe complete perpendicular alignment at low to moderate frequencies, as has been reported experimentally for fast-crawling cells\cite{Iwadate2009,Iwadate2013,Okimura2016a}. This is particularly noticeable for the $d^{(+)}$ cells, for which the perpendicular direction is the stable orientation over a wide range of frequencies. In contrast, $d^{(-)}$ cells show a preference to align in the parallel direction. Thus, cells that resist extension (compression) will usually align perpendicular (parallel) to the direction of stretching. However, at high enough frequencies the cells always align parallel to the direction of stretching, just like the passively advected cells ($d=0$).

Our theory predicts that in the case where the adhesion dynamics dominates the response of the cell, any asymmetry during the loading/unloading phases of the stretching can be used to align the cells along directions parallel or perpendicular to the stretching. This asymmetry can be intrinsic to the cell, if it is able to respond differently to extension and compression, or it can be due to the stretching protocol itself. This is relevant with regard to the experiments reported by Iwadate et al.\cite{Iwadate2009,Iwadate2013,Okimura2016a}, since they did not use a sinusoidal signal, with symmetric loading and unloading, but a saw-tooth-like signal, with fast extension followed by slow compression. Even if the cell cannot distinguish between extension and compression, but senses only the magnitude of the rate of deformation, this would correspond to our cells with the $d^{(+)}$ response. Indeed, we have shown that for moderate frequencies these cells prefer a perpendicular orientation, as reported experimentally.
This could easily be tested by repeating the experiments with a complementary protocol of slow extension followed by fast relaxation. Such a case would correspond to our $d^{(-)}$ cells, and our theory predicts that the preferred orientation could then be switched to the parallel direction. Our approach will prove useful for studying the mechanosensitivity of fast-crawling cells, since it can incorporate the salient features: (1) the elastic response of the cell, (2) the forces on the focal adhesions, (3) the active forces generated by the cell, and (4) the complex coupling between cell shape and motility. In addition, the cell-level description we propose can be readily extended to multi-cellular systems to study the mechanosensitivity of tissues. Finally, we would like to point out that the generic model used here has not been parametrized to any particular cell type. Thus, more work is required to obtain precise quantitative comparisons with experiments. This will be the focus of future investigations, where we will consider a more detailed coupling between the cell and the substrate, as well as the effect of membrane tension and substrate elasticity, and how they affect the actin (de)polymerization rates\cite{Winkler2016}.

\acknowledgments{ JJM would like to acknowledge fruitful discussions with Natsuhiko Yoshinaga, Koichiro Sadakane, Kenichi Yoshikawa, Matthew Turner, Takashi Taniguchi, and Simon Schnyder during the preparation of this manuscript. This work was supported by the Japan Society for the Promotion of Science (JSPS) Wakate~B (17K17825) and KAKENHI (17H01083) grants, as well as the JSPS bilateral joint research projects. }
\section{Introduction} \label{sec:intro}

The numerical sign problem has prevented us from reaching a quantitative understanding of many important physical systems with first-principles calculations. Typical examples of such systems include finite-density QCD, strongly-correlated electron systems and frustrated spin systems, as well as the real-time dynamics of quantum systems. The main aim of this talk is to argue that the \emph{tempered Lefschetz thimble method} (TLTM) \cite{Fukuma:2017fjq} and its extension, the \emph{worldvolume tempered Lefschetz thimble method} (WV-TLTM) \cite{Fukuma:2020fez}, may be a reliable and versatile solution to the sign problem. The (WV-)TLTM has actually been confirmed to work for toy models of some of the systems listed above. In this talk, we pick up the Stephanov model, to which the WV-TLTM is applied. This matrix model has played a particularly important role in attempts to establish a first-principles calculation method for finite-density QCD, because it well approximates the qualitative behavior of finite-density QCD at large matrix sizes, and because it has a serious sign problem which had not been solved by methods other than the (WV-)TLTM. We also discuss the computational scaling of the WV-TLTM.

\section{Sign problem} \label{sec:sign_problem}

\subsection{What is the sign problem?} \label{sec:what_is_sign_problem}

Our aim is to numerically estimate the expectation value defined in a path-integral form: \begin{align} \langle \mathcal{O}(x) \rangle \equiv \frac{\int dx\,e^{-S(x)}\,\mathcal{O}(x)}{\int dx\,e^{-S(x)}}. \label{vev} \end{align} Here, $x=(x^i)\in\bbR^N$ is a dynamical variable of $N$ degrees of freedom (DOF), $S(x)$ the action, and $\mathcal{O}(x)$ a physical observable of interest. When $S(x)$ is real-valued, one can regard $p_{\rm eq}(x)\equiv e^{-S(x)}/\int dx\,e^{-S(x)}$ as a probability distribution, and can estimate $\langle \mathcal{O}(x) \rangle$ with a sample average as \begin{align} \langle\mathcal{O}(x)\rangle \approx \frac{1}{N_{\rm conf}}\,\sum_{k=1}^{N_{\rm conf}} \mathcal{O}(x^{(k)}). \end{align} Here, $\{x^{(k)}\}$ is a sample (a set of configurations) of size $N_{\rm conf}$, generated as a suitable Markov chain with the equilibrium distribution $p_{\rm eq}(x)$.

The above prescription is no longer applicable when the action has an imaginary part, $S(x)=S_R(x)+i S_I(x)\in\bbC$. A naive way to handle this is the so-called reweighting method, where we treat $e^{-S_R(x)}/\int dx\,e^{-S_R(x)}$ as a new weight and rewrite the expression \eqref{vev} as a ratio of reweighted averages: \begin{align} \langle \mathcal{O}(x) \rangle = \frac{\langle e^{-i S_I(x)}\,\mathcal{O}(x) \rangle_{\rm rewt}} {\langle e^{-i S_I(x)}\, \rangle_{\rm rewt}} \quad \Bigl( \langle f(x) \rangle_{\rm rewt} \equiv \frac{\int dx\,e^{-S_R(x)}\,f(x)}{\int dx\,e^{-S_R(x)}} \Bigr). \label{vev-rewt} \end{align} However, when the DOF, $N$, is very large, the reweighted averages can become vanishingly small, of order $e^{-O(N)}$, even though the observable itself is $O(1)$. This should not be a problem \emph{if} we could estimate both the numerator and the denominator precisely. However, in a numerical computation, they are estimated separately with statistical errors: \begin{align} \langle \mathcal{O}(x) \rangle \equiv \frac{\langle e^{-i S_I(x)} \mathcal{O}(x) \rangle_{\rm rewt}} {\langle e^{-i S_I(x)} \rangle_{\rm rewt}} \approx \frac{e^{-O(N)} \pm O(1/\sqrt{N_{\rm conf}})} {e^{-O(N)} \pm O(1/\sqrt{N_{\rm conf}})}.
\end{align} Thus, in order for the statistical errors to be smaller than the mean values, the sample size must be exponentially large with respect to the DOF, namely $N_{\rm conf} \gtrsim e^{O(N)}$. The need for this unrealistically large numerical cost is called the sign problem.

\subsection{Various approaches proposed so far} \label{sec:approaches}

We list some of the approaches proposed so far that are intended to solve the sign problem. \vspace{4pt}

\noindent \underline{\textbf{class 1}: no use of reweighting}

A typical algorithm in this class is the complex Langevin method \cite{Parisi:1984cs,Klauder:1983nn,Klauder:1983sp, Aarts:2011ax,Aarts:2013uxa,Nagata:2016vkn}, where the complex Boltzmann weight is rewritten as a positive probability distribution over a complex space $\bbC^N$. Although its numerical cost is low $[\sim O(N)]$, it often exhibits a wrong convergence (gives incorrect estimates with small statistical errors) at parameter values of physical importance. \vspace{4pt}

\noindent \underline{\textbf{class 2}: deforming the integration surface}

A typical algorithm is the Lefschetz thimble method \cite{Witten:2010cx,Cristoforetti:2012su,Fujii:2013sra, Alexandru:2015sua,Fukuma:2017fjq,Alexandru:2017oyw, Fukuma:2019wbv,Fukuma:2019uot,Fukuma:2020fez,Fukuma:2021aoo}, where the integration surface $\Sigma_0=\bbR^N$ is continuously deformed to a new surface $\Sigma_t \subset \bbC^N$.\footnote{This algorithm will be explained in detail in the next section. Another interesting algorithm is the path-optimization method \cite{Mori:2017pne,Alexandru:2018fqp}, where the integration surface is sought with machine-learning techniques such that the average phase factor is maximized.} The flow time $t$ is taken sufficiently large so that $\Sigma_t$ is close to a union of Lefschetz thimbles, $\bigcup_\sigma \mathcal{J}_\sigma$, on each of which $\ImS(z)$ $(z\in\mathcal{J}_\sigma)$ is constant. The generic Lefschetz thimble method has been shown to suffer from an ergodicity problem in physically important parameter regions of a model \cite{Fujii:2015bua}, where multiple thimbles become relevant that are separated by infinitely high potential barriers. This problem was resolved by tempering the system with the flow time \cite{Fukuma:2017fjq}.\footnote{A similar idea is proposed in Ref.~\cite{Alexandru:2017oyw}.} This \emph{tempered Lefschetz thimble method} (TLTM) solves both the sign problem (serious at small flow times) and the ergodicity problem (serious at large flow times) simultaneously. The disadvantage is its high numerical cost of $O(N^{3-4})$. Recently, this numerical cost has been substantially reduced [expected to be $O(N^{\sim 2.25})$] with a new method, the \emph{worldvolume tempered Lefschetz thimble method} (WV-TLTM), which is based on the idea of performing the hybrid Monte Carlo on a continuous accumulation of deformed integration surfaces (the worldvolume) \cite{Fukuma:2020fez}. The (WV-)TLTM algorithm is the main subject of this talk.\footnote{See Ref.~\cite{Alexandru:2020wrj} for a review from a different viewpoint.} \vspace{4pt}

\noindent \underline{\textbf{class 3}: no use of MC in the first place}

A typical algorithm in this class is the tensor network method (especially the tensor renormalization group method \cite{Levin:2006jai}).\footnote{See Ref.~\cite{Fukuma:2021cni} for a recent attempt to apply the tensor renormalization group method to Yang-Mills theory.
} This method is good at calculating the free energy in the thermodynamic limit, but is not so efficient at calculating correlation functions at large distances. We expect this method to play a complementary role to methods based on Markov chain Monte Carlo.

\section{Lefschetz thimble method} \label{sec:Lefschetz_thimble}

We complexify the dynamical variable $x=(x^i)\in\bbR^N$ to $z=(z^i=x^i+i y^i)\in\bbC^N$. We assume (as holds in most cases) that $e^{-S(z)}$ and $e^{-S(z)} \mathcal{O}(z)$ are entire functions over $\bbC^N$. Then, Cauchy's theorem ensures that the integrals do not change their values under a continuous deformation of the integration surface, $\Sigma_0=\bbR^N \to \Sigma\,(\subset \bbC^N)$, where the boundary at $|x|\to\infty$ is fixed so that the convergence of the integration holds under the deformation: \begin{align} \langle \mathcal{O}(x) \rangle = \frac{\int_{\Sigma_0} dx\,e^{-S(x)}\,\mathcal{O}(x)} {\int_{\Sigma_0} dx\,e^{-S(x)}} = \frac{\int_\Sigma dz\,e^{-S(z)}\,\mathcal{O}(z)} {\int_\Sigma dz\,e^{-S(z)}}. \label{vev2} \end{align} Thus, even when the sign problem is severe on the original surface $\Sigma_0$, it will be significantly reduced if $\ImS(z)$ is almost constant on the new surface $\Sigma$. The prescription for the deformation is given by the anti-holomorphic gradient flow: \begin{align} \dot{z}_t = \overline{\partial S(z_t)} \quad \mbox{with} \quad z_{t=0}=x. \label{flow} \end{align} The most important property of this flow equation is the following inequality: \begin{align} [S(z_t)]^{\cdot} = \partial S(z_t)\cdot \dot{z}_t = |\partial S(z_t)|^2 \geq 0, \end{align} from which we find that

\noindent (i)~ $\ReS(z_t)$ always increases along the flow except at critical points,\footnote{$\zeta$ is said to be a critical point when $\partial S(\zeta)=(\partial_i S(\zeta)) =0$.}

\noindent (ii) $\ImS(z_t)$ is constant along the flow.

The Lefschetz thimble $\mathcal{J}$ associated with a critical point $\zeta$ is defined as the set of flow orbits starting at $\zeta$. From this construction and property (ii), we easily see that $\ImS(z)$ is constant on $\mathcal{J}$ [i.e., $\ImS(z)=\ImS(\zeta)~(z\in\mathcal{J})$]. Denoting the solution of Eq.~\eqref{flow} by $z_t(x)$ and assuming that $\Sigma_t\equiv \{z_t(x)|\, x\in\bbR^N\}$ approaches a single Lefschetz thimble $\mathcal{J}$, we expect that the sign problem disappears on $\Sigma_t$ if we choose a sufficiently large $t$.

Let us see how the sign problem disappears as the flow time $t$ increases. The integrals on a deformed surface $\Sigma_t$ can be rewritten as \begin{align} \langle \mathcal{O}(x) \rangle = \frac{\langle e^{i \phi(z)} \mathcal{O}(z) \rangle_{\Sigma_t}} {\langle e^{i \phi(z)} \rangle_{\Sigma_t}}, \label{vev3} \end{align} where\footnote{Note that $\langle f(z) \rangle_{\Sigma_0} = \langle f(x) \rangle_{\rm rewt}$.} \begin{align} \langle f(z) \rangle_{\Sigma_t} \equiv \frac{\int_{\Sigma_t} |dz|\,e^{-\ReS(z)} f(z) } {\int_{\Sigma_t} |dz|\,e^{-\ReS(z)}}, \quad e^{i \phi(z)} \equiv e^{-i \ImS(z)}\,\frac{dz}{|dz|}. \end{align} As can easily be checked for a Gaussian case, the integrals take the form $O(e^{-e^{-\lambda t}O(N)})$, where $\lambda$ is a typical singular value of $\partial_i \partial_j S(\zeta)$.
Thus, the numerical estimate now becomes \begin{align} \langle \mathcal{O}(x) \rangle \approx \frac{O(e^{-e^{-\lambda t}O(N)}) \pm O(1/\sqrt{N_{\rm conf}})} {O(e^{-e^{-\lambda t}O(N)}) \pm O(1/\sqrt{N_{\rm conf}})}, \end{align} from which we see that the main parts become $O(1)$ when the flow time $t$ satisfies the relation $e^{-\lambda t}O(N)=O(1)$. We thus see that the sign problem disappears at flow times $t\gtrsim T=O(\ln N)$.

\section{Tempered Lefschetz thimble method (TLTM)} \label{sec:TLTM}

\subsection{Ergodicity problem in the original Lefschetz thimble method} \label{sec:ergodicity_problem}

So far, so good; when a single Lefschetz thimble is relevant to the estimation, one can resolve the sign problem simply by taking a sufficiently large flow time. However, this nice story no longer holds true when multiple thimbles are involved in the estimation, because another problem (the ergodicity problem) arises as the flow time increases. Figure \ref{fig:ergodicity_problem} illustrates the case $e^{-S(x)}=e^{-\beta x^2/2}\,(x-i)^\beta$ $(\beta\gg 1)$. In addition to the two critical points $\zeta_\pm=\pm \sqrt{3}/2 +(1/2)\,i$ and the associated Lefschetz thimbles $\mathcal{J}_\pm$, there is a zero of $e^{-S(z)}$ at $z=i$. We see that the integration surface $\Sigma_T$ is separated into two parts by an infinitely high potential barrier at the zero. It is thus very hard for two configurations on different parts to communicate in stochastic processes, which means that it takes a very long computation time for the system to reach equilibrium.

\begin{figure}[ht] \centering \includegraphics[width=70mm]{ergodicity_problem.eps} \caption{Ergodicity problem.} \label{fig:ergodicity_problem} \end{figure}

\subsection{Basic algorithm of TLTM} \label{sec:basic_algorithm_TLTM}

The tempered Lefschetz thimble method \cite{Fukuma:2017fjq} was invented to overcome this problem by applying the tempering algorithm \cite{Marinari:1992qd,Swendsen1986,Geyer1991,Hukushima1996} to the thimble method, with the flow time used as the tempering parameter (see Fig.~\ref{fig:tltm}). The basic algorithm goes as follows:\vspace{4pt}

\begin{figure}[ht] \centering \includegraphics[width=70mm]{tltm.eps} \caption{Tempered Lefschetz thimble method (TLTM).} \label{fig:tltm} \end{figure}

\noindent \underline{Step 0.}\hspace{2mm} We fix the target flow time $T$ so that the sign problem is not serious for a sample on $\Sigma_T$, apart from the ergodicity problem. This is judged by looking at the average phase factor $|\langle e^{i\phi(z)} \rangle_{\Sigma_T}|$.\vspace{2pt}

\noindent \underline{Step 1.}\hspace{2mm} We introduce replicas between the initial integration surface $\Sigma_0=\bbR^N$ and the target deformed surface $\Sigma_T$ as $\{\Sigma_{t_0=0},\Sigma_{t_1},\ldots,\Sigma_{t_A=T}\}$.\vspace{2pt}

\noindent \underline{Step 2.}\hspace{2mm} We set up a Markov chain for the extended configuration space $\{(x,t_\alpha) |\,x\in\bbR^N,\,\alpha=0,1,\ldots,A\}$.\vspace{2pt}

\noindent \underline{Step 3.}\hspace{2mm} After equilibration, we estimate observables with a sample on $\Sigma_T$. \vspace{4pt}

This tempering prompts the equilibration on $\Sigma_T$ because two configurations on different connected components can now communicate easily by taking a detour through replicas at smaller flow times. Thus, the TLTM solves both the sign and ergodicity problems simultaneously.
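Before proceeding, the statements of Sects.~\ref{sec:Lefschetz_thimble} and \ref{sec:TLTM} can be illustrated with a self-contained numerical sketch. The toy action below, $S(z)=z^2/2+i\lambda z$, and all parameter values are our own choices for illustration, not taken from the references. The script integrates the anti-holomorphic gradient flow \eqref{flow} from the real axis and evaluates the average phase factor $|\langle e^{i\phi(z)}\rangle_{\Sigma_t}|$ on the deformed contour by direct quadrature; one finds that it grows from $e^{-\lambda^2/2}$ at $t=0$ towards unity as $t$ increases, in line with the $O(e^{-e^{-\lambda t}O(N)})$ estimate above.

\begin{verbatim}
import numpy as np

lam = 4.0                      # e^{-lam^2/2} ~ 3e-4: severe phase cancellation

S  = lambda z: z**2/2 + 1j*lam*z     # toy action; critical point at z = -i*lam
dS = lambda z: z + 1j*lam            # its derivative

def flow(x, t, nsteps=400):
    """Integrate dz/dt = conj(dS(z)) from real points x for flow time t (RK4)."""
    z = x.astype(complex)
    h = t/nsteps
    for _ in range(nsteps):
        k1 = np.conj(dS(z))
        k2 = np.conj(dS(z + 0.5*h*k1))
        k3 = np.conj(dS(z + 0.5*h*k2))
        k4 = np.conj(dS(z + h*k3))
        z += h*(k1 + 2*k2 + 2*k3 + k4)/6
    return z

x  = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]
for t in [0.0, 0.5, 1.0, 2.0, 3.0]:
    z = flow(x, t)
    J = np.gradient(z, x)                           # tangent dz/dx on Sigma_t
    num = np.sum(np.exp(-S(z))*J)*dx                # \int dz e^{-S}
    den = np.sum(np.exp(-S(z).real)*np.abs(J))*dx   # \int |dz| e^{-Re S}
    print(f"t = {t:3.1f}   |<e^(i phi)>| = {abs(num/den):.4f}")
\end{verbatim}

One can also verify from the same data that ${\rm Im}\,S(z_t(x))$ stays equal to its initial value $\lambda x$ along each trajectory, i.e., property (ii) above: the improvement comes not from changing ${\rm Im}\,S$ but from the measure $e^{-{\rm Re}\,S}$ concentrating on an ever narrower range of $x$.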
\subsection{Comment on transitions between adjacent replicas} \label{sec:comment_on_transitions}

We comment here that one can expect a significant acceptance rate for transitions between adjacent replicas \cite{Fukuma:2019wbv}. To see this, let us use the initial configurations $x\in\bbR^N$ as common coordinates for the different replicas. When we employ the simulated tempering \cite{Marinari:1992qd} as the tempering method, as in the previous subsection, a configuration $(x,t_\alpha)$ moves to $(x,t_{\alpha\pm 1})$ (after it explores $\Sigma_{t_\alpha}$), keeping the $x$-coordinate values the same.\footnote{When the parallel tempering \cite{Swendsen1986,Geyer1991,Hukushima1996} is employed, as in Ref.~\cite{Fukuma:2017fjq}, two configurations on adjacent replicas, $(x,t_\alpha)$ and $(x',t_{\alpha+1})$, move as $(x,t_\alpha)\to (x,t_{\alpha+1})$ and $(x',t_{\alpha+1})\to (x',t_\alpha)$, again keeping the $x$-coordinate values the same.} Since the probability distribution on every replica has peaks at the same points $x_\sigma$, where $x_\sigma$ flows to a critical point $z_\sigma$, we can expect a significant overlap between the distributions on two adjacent replicas.

\subsection{Computational cost of the original TLTM} \label{sec:computational_cost_TLTM}

An obvious advantage of the original TLTM is its versatility; the method can be applied to any system once it is formulated in a path-integral form with continuous variables, resolving the sign and ergodicity problems simultaneously. A disadvantage is its high numerical cost. It is expected to be $O(N^{3-4})$ due to (a) the increase of the necessary number of replicas [probably as $O(N^{0-1})$] and (b) the need to compute the Jacobian matrix of the flow, $J(x)\equiv (\partial z^i_t(x)/\partial x^a)$, every time we move configurations between adjacent replicas [$O(N^3)$]. The worldvolume TLTM \cite{Fukuma:2020fez} was introduced to significantly reduce this computational cost.

\section{Worldvolume tempered Lefschetz thimble method (WV-TLTM)} \label{sec:WV-TLTM}

\subsection{Basic idea of the worldvolume TLTM} \label{sec:basic_idea_WV-TLTM}

Instead of introducing a finite set of replicas (a finite set of integration surfaces), in the WV-TLTM we consider an HMC algorithm on a continuous accumulation of deformed integration surfaces, \begin{align} \mathcal{R}\equiv \bigcup_{0\leq t\leq T}\Sigma_t = \bigl\{ z_t(x) \big|\,t \in [0,T],~x \in \bbR^N \bigr\}. \end{align} We call $\mathcal{R}$ the \emph{worldvolume} because it is the orbit of the integration surface in the ``target space'' $\bbC^N=\bbR^{2N}$ (see Fig.~\ref{fig:wv-tltm}).\footnote{We here borrow terminology from string theory, where the orbit of a particle is called a worldline, that of a string a worldsheet, and that of a membrane (surface) a worldvolume.}

\begin{figure}[ht] \centering \includegraphics[width=70mm]{wv-tltm.eps} \caption{Worldvolume $\mathcal{R}$ of the WV-TLTM.} \label{fig:wv-tltm} \end{figure}

Keeping the original virtues intact (solving the sign and ergodicity problems simultaneously), the new algorithm significantly reduces the computational cost. In fact, we no longer need to introduce replicas explicitly or to calculate the Jacobian matrix in every molecular dynamics step, and we can move configurations over large distances owing to the use of the HMC algorithm. The key idea behind the algorithm is again Cauchy's theorem.
We start from the expression \eqref{vev2}: \begin{align} \langle \mathcal{O}(x) \rangle = \frac{\int_{\Sigma_0} dx\,e^{-S(x)}\,\mathcal{O}(x)} {\int_{\Sigma_0} dx\,e^{-S(x)}} = \frac{\int_{\Sigma_t} dz_t\,e^{-S(z_t)}\,\mathcal{O}(z_t)} {\int_{\Sigma_t} dz_t\,e^{-S(z_t)}}. \label{vev2a} \end{align} Cauchy's theorem ensures that both the numerator and the denominator do not depend on $t$, so that we can average over $t$ with an arbitrary weight $e^{-W(t)}$, leading to an integration over $\mathcal{R}$:\footnote{The weight $e^{-W(t)}$ is determined such that the probability of appearing on $\Sigma_t$ is (almost) independent of $t$.} \begin{align} \langle \mathcal{O}(x) \rangle = \frac{\int_0^T dt\,e^{-W(t)} \int_{\Sigma_t} dz_t\,e^{-S(z_t)} \mathcal{O}(z_t)} {\int_0^T dt\,e^{-W(t)}\int_{\Sigma_t} dz_t\,e^{-S(z_t)}} = \frac{\int_\mathcal{R} dt\,dz_t\, e^{-W(t)-S(z_t)} \mathcal{O}(z_t)} {\int_\mathcal{R} dt\,dz_t\,e^{-W(t)-S(z_t)}}. \end{align}

\subsection{Algorithm} \label{sec:WV-TLTM_algorithm}

An explicit implementation can proceed in two ways, as described in the original paper \cite{Fukuma:2020fez}. One is the \emph{target-space picture}, in which the HMC is performed on the worldvolume $\mathcal{R}$, treated as a submanifold of the target space $\bbC^N$. The other is the \emph{parameter-space picture}, in which the HMC is performed on the parameter space $\{(x,t)\}$.\footnote{The latter picture was further studied in Ref.~\cite{Fujisawa:2021hxh}. In this picture, however, the Jacobian determinant $\det J(x)$ is treated as part of the observable, which is exponentially large and is not guaranteed to have a significant overlap with the weight $e^{-\ReS(z_t(x))}$. This is why we did not pursue the second option seriously in the original paper \cite{Fukuma:2020fez}.}

In the target-space picture, we first parametrize the induced metric on $\mathcal{R}$ with the ADM decomposition \cite{Arnowitt:1962hi}: \begin{align} ds^2 = \alpha^2\,dt^2 + \gamma_{ab}\,(dx^a + \beta^a\,dt)\,(dx^b + \beta^b\,dt). \end{align} Here, the functions $\alpha$ and $\beta^a$ are called the lapse and the shifts, respectively, and $\gamma_{ab}$ is the induced metric on $\Sigma_t$. The invariant volume element on $\mathcal{R}$ is then given by \begin{align} Dz = \alpha\,dt\,|dz_t(x)| = \alpha \,|\det J|\,dt\,dx \quad \bigl(|\det J|=\sqrt{\det \gamma}\bigr), \end{align} and the expectation value can be rewritten as a ratio of reweighted averages on $\mathcal{R}$: \begin{align} \langle \mathcal{O}(x) \rangle = \frac{\int_\mathcal{R} Dz\,e^{-V(z)}\,A(z)\,\mathcal{O}(z)} {\int_\mathcal{R} Dz\,e^{-V(z)}\,A(z)} = \frac{\langle A(z)\,\mathcal{O}(z) \rangle_\mathcal{R}} {\langle A(z) \rangle_\mathcal{R}}. \end{align} Here, the reweighted average of a function $f(z)$ is defined by \begin{align} \langle f(z) \rangle_\mathcal{R} \equiv \frac{\int_\mathcal{R} Dz\,e^{-V(z)}\,f(z)} {\int_\mathcal{R} Dz\,e^{-V(z)}} \end{align} with $V(z) \equiv \ReS(z)+W(t(z))$, and the associated reweighting factor takes the form \begin{align} A(z) \equiv \frac{dt\,dz_t}{Dz}\,e^{-i\ImS(z)} = \alpha^{-1}(z)\,\frac{\det J}{|\det J|}\,e^{-i\ImS(z)}. \end{align} The reweighted average can be estimated with the RATTLE algorithm \cite{Andersen:1983,Leimkuhler:1994}, in which the molecular dynamics is performed on $\mathcal{R}$ treated as a submanifold of $\bbC^N$ \cite{Fukuma:2020fez}.
The algorithm takes the following form (see Fig.~\ref{fig:rattle}):\footnote{RATTLE on a single Lefschetz thimble $\mathcal{J}=\Sigma_{t=\infty}$ was first introduced in Ref.~\cite{Fujii:2013sra}, and was extended to $\Sigma_t$ with finite $t$ in Ref.~\cite{Alexandru:2019} (see also Ref.~\cite{Fukuma:2019uot} for the combination of RATTLE with the tempering algorithm).} \begin{align} \pi_{1/2} &= \pi - \Delta s\,\bar\partial V(z) - \lambda^a F_a(z), \\ z' &= z + \Delta s\,\pi_{1/2}, \\ \pi' &= \pi_{1/2} -\Delta s\,\bar\partial V(z') - \lambda'^{a} F_a(z'). \end{align}

\begin{figure}[t] \centering \includegraphics[width=70mm]{rattle.eps} \caption{RATTLE on the worldvolume $\mathcal{R}$ \cite{Fukuma:2020fez}.} \label{fig:rattle} \end{figure}

Here, $F_a(z) \equiv i J_a(z)$ $(a=1,\ldots,N)$ with $J_a \equiv (J^i_a=\partial z^i_t(x)/\partial x^a)$ form a basis of the normal space $N_z\Sigma_t$ at $z\in\Sigma_t\,(\subset\mathcal{R})$. The Lagrange multipliers $\lambda^a$ and $\lambda'^{a}$ are determined using $E_0(z)\equiv\overline{\partial S(z)}$ such that \begin{align} \bullet&~~ z'\in\mathcal{R}~~~\mbox{and}~~~ \lambda^a F_a(z) \perp E_0(z), \\ \bullet&~~ \pi' \in T_{z'}\mathcal{R}~~~\mbox{and}~~~ \lambda'^a F_a(z') \perp E_0(z'). \end{align} The second condition in each line ensures that $\lambda^a F_a(z)$ actually belongs to $N_z\mathcal{R}\,(\subset N_z \Sigma_t)$. The statistical analysis method for the WV-TLTM (or, more generally, for WV-HMC, the HMC algorithm on a foliated manifold) is established in Ref.~\cite{Fukuma:2021aoo}.

\subsection{Various models to which the (WV-)TLTM has been applied} \label{sec:various_models}

The (WV-)TLTM has been successfully applied to various models, including \\ \noindent $\bullet$~ the $(0+1)$-dimensional massive Thirring model \cite{Fukuma:2017fjq}\\ \noindent $\bullet$~ the two-dimensional Hubbard model \cite{Fukuma:2019wbv,Fukuma:2019uot}\\ \noindent $\bullet$~ the Stephanov model (a chiral random matrix model as a toy model of finite-density QCD) \cite{Fukuma:2020fez}\\ \noindent $\bullet$~ the antiferromagnetic Ising model on the triangular lattice \cite{Fukuma:2020JPS}. \\ Correct results have always been obtained whenever they can be compared with analytic results, although the system sizes are still small. In the next section, we discuss the application of the WV-TLTM to the Stephanov model.

\section{Application to the Stephanov model} \label{sec:application_Stephanov_model}

\subsection{Stephanov model} \label{sec:Stephanov_model}

Finite-density QCD is described by the following partition function after the $N_f$ quark fields (assumed to have the same mass) are integrated out: \begin{align} Z_{\rm QCD} &= {\rm tr}\,e^{-\beta(H-\mu N)} \nonumber\\ &= \int [dA_\mu]\,e^{-(1/2g_0^2)\,\int d^4 x\,{\rm tr}\,F_{\mu\nu}^2}\, {\rm Det}\,{}^{N_f} \left(\begin{array}{cc} m & \sigma_\mu (\partial_\mu + A_\mu) + \mu \\ \sigma_\mu^\dag (\partial_\mu+A_\mu) + \mu & m \end{array}\right).
\end{align} The Stephanov model \cite{Stephanov:1996ki,Halasz:1998qr} takes the following form at temperature $T=0$: \begin{align} Z_{\rm Steph} = \int d^2 W\,e^{-n\,{\rm tr}\,W^\dag W}\, \det{}^{N_f} \left(\begin{array}{cc} m & i W + \mu \\ i W^\dag + \mu & m \end{array}\right), \end{align} where the $n\times n$ complex matrix $W=(W_{ij})=(X_{ij}+i Y_{ij})$ represents the quantum-field degrees of freedom (including space-time dependences).\footnote{The number of degrees of freedom is $N=2n^2$, which should be compared with that of the link variables, $4L^4 (N_c^2-1)$, where $L$ is the linear size of the four-dimensional square lattice and $N_c$ is the number of colors.} This model plays a particularly important role because (a) it well approximates the qualitative behavior of finite-density QCD at large matrix sizes and (b) it has a serious sign problem which can hardly be solved by the complex Langevin method due to a wrong convergence \cite{Bloch:2017sex}.

Figures \ref{fig:chiral_condensate} and \ref{fig:number_density} show the results for the chiral condensate $\langle \bar\psi \psi \rangle$ and the number density $\langle \psi^\dag \psi \rangle$ at $n=10$, $m=0.004$ and $N_f=1$ obtained with the WV-TLTM, where the sample size is $N_{\rm conf} = 4,000 - 17,000$ (depending on $\mu$). We see that they agree with the exact results within statistical errors. For comparison, we also plot the results obtained with the naive reweighting method (showing large deviations from the exact values due to the sign problem) and with the complex Langevin method (exhibiting a serious wrong convergence). The sample size is $N_{\rm conf} = 10^4$ for both the reweighting and the complex Langevin.

\begin{figure}[ht] \centering \includegraphics[width=140mm]{chiral_condensate.eps} \caption{Chiral condensate $\langle\bar\psi\psi\rangle \equiv (1/2n)(\partial/\partial m)\ln Z_{\rm Steph}$ \cite{Fukuma:2020fez}.} \label{fig:chiral_condensate} \end{figure}

\begin{figure}[ht] \centering \includegraphics[width=140mm]{number_density.eps} \caption{Number density $\langle\psi^\dag\psi\rangle \equiv (1/2n)(\partial/\partial \mu)\ln Z_{\rm Steph}$ \cite{Fukuma:2020fez}.} \label{fig:number_density} \end{figure}

\subsection{Computational scaling} \label{sec:computational_scaling}

In the RATTLE algorithm, we need to solve the linear problem $J v = b$ ($J$: the Jacobian matrix). The total numerical cost of the WV-TLTM depends on which solver is used. When a direct method (e.g., LU decomposition) is used, the computational cost is expected to be $O(N^3)$. In this case, the Jacobian matrix $J=(J_t(x))$ is explicitly computed by numerically integrating the differential equation $\dot{J_t}=\overline{\partial^2 S(z_t)\,J_t}$ together with Eq.~\eqref{flow}, whose cost is also $O(N^3)$. Figure \ref{fig:comp_cost} shows the actual computation time for generating a single configuration, performed on a supercomputer (Yukawa-21) at the Yukawa Institute, Kyoto University. We clearly see that it scales as expected, and is smaller than the $O(N^{3-4})$ expected for the original TLTM. We also see that GPU acceleration is quite effective.

\begin{figure}[ht] \centering \includegraphics[width=90mm]{comp_cost.eps} \caption{Computation time to generate a configuration with a direct method for the linear inversion.} \label{fig:comp_cost} \end{figure}

The computational cost can be further reduced if we adopt an iterative method (such as BiCGStab), as in Ref.~\cite{Alexandru:2017lqr}.
The numerical cost is then expected to be $O(N^2)$ if the Krylov subspace iteration converges quickly. This factor will be multiplied by $O(N^{1/4})$ if we reduce the step size of the molecular dynamics so that the acceptance rate of the final Metropolis test is independent of $N$.

\section{Summary and outlook} \label{sec:summary_outlook}

We have reported that the tempered Lefschetz thimble method and its worldvolume extension, the (WV-)TLTM, have the potential to be a reliable and versatile solution to the sign problem, because the algorithm solves the sign and ergodicity problems simultaneously and can in principle be applied to any system formulated in a path-integral form with continuous variables. The (WV-)TLTM has been successfully applied to various models (so far only toy models with small DOF), which include important toy models such as the Stephanov model (for finite-density QCD), the 1D/2D Hubbard model (for strongly correlated electron systems), and the antiferromagnetic Ising model on the triangular lattice (for frustrated classical/quantum spin systems). We are now porting the WV-TLTM code so that it can run on a large-scale supercomputer, which we expect to be completed soon. In parallel with this, it will be important to keep improving the algorithm itself so that the estimation can be made more efficient for large-scale systems. It would also be interesting to combine various algorithms that have been proposed as solutions to the sign problem. An interesting candidate we have in mind as a partner of the \mbox{(WV-)}TLTM is the tensor renormalization group method, which is actually complementary to Monte Carlo methods in many aspects. A particularly important subject in the near future will be to establish a Monte Carlo algorithm for the calculation of time-dependent systems. This will open a way to the quantitative understanding of nonequilibrium processes, such as those occurring in heavy-ion collision experiments and in the very early universe. A study along these lines is in progress, and we hope we can report some of the achievements at the next Corfu conference.

\acknowledgments The authors thank Issaku Kanamori, Yoshio Kikukawa and Jun Nishimura for useful discussions. M.F.\ thanks the organizers of Corfu 2021, especially George Zoupanos and Konstantinos Anagnostopoulos, for organizing this wonderful conference series. This work was partially supported by JSPS KAKENHI Grant Numbers JP20H01900 and JP21K03553. N.M.\ is supported by the Special Postdoctoral Researchers Program of RIKEN. Some of our numerical calculations were performed on Yukawa-21 at the Yukawa Institute for Theoretical Physics, Kyoto University.
\section{Introduction}

Brown dwarfs are free-floating substellar objects with masses below the hydrogen-burning limit of $\sim 0.075$M$_{\odot}$ and with radii of $\approx 1 R_{\rm Jup}$ for mature objects, although this can vary with cloud cover and metallicity. They are the low-mass extension of the sub-solar main sequence in the Hertzsprung-Russell diagram (HRD, Fig.~\ref{diet13_HRD}). Their total emitted flux, and hence their effective temperature T$_{\rm eff}$, is lower than that of M-dwarfs. Brown dwarfs become increasingly dimmer as they age because their mass is too small to sustain continuous hydrogen burning. Only in their youth do the heaviest brown dwarfs fuse some deuterium and maybe lithium. Hence, the class of brown dwarfs comprises members that are just a little cooler than M-dwarfs (L dwarfs) and members that can be as cold as planets (T and Y dwarfs). Several formation mechanisms have been suggested, including the classical star-forming scenario of a local gravitational collapse of an interstellar molecular cloud. The formation efficiency may have changed depending on time and location. The oldest brown dwarfs could be as old as the first generation of stars that formed in the universe. Their metallicity would be extremely low, leaving the spectrum almost featureless (Fig. 9 in \citealt{witte2009}). \cite{luh2012} reviews the formation and evolution of brown dwarfs, including the initial mass function and circumstellar disks. Observational evidence is building up that brown dwarfs and giant gas planets overlap in mass and in global temperature (see the review by \citealt{chab2014}). \cite{viki14} reviews the emergence of brown dwarfs as a research area that started with a theoretical prediction of their existence, and emphasizes the research progress in the formation and evolution of brown dwarfs.

\begin{figure} \centering \includegraphics[width=\textwidth]{Figures/HRD_Dieterichetal2013.pdf} \caption{The low-mass end of the main sequence in the HRD diagram showing the transition from the stellar M-dwarf to the substellar brown dwarf regime (\citealt{diet2013}; courtesy: S. Dieterich).} \label{diet13_HRD} \end{figure}

\cite{allard97} reviewed model atmospheres of very low mass stars and brown dwarfs, discussing model aspects like updated gas-phase opacities, convection modelled by mixing-length theory, and the T$_{\rm eff}$-scale for M-dwarfs. The present review summarizes the progress in brown dwarf observations, which now extend from the UV into the radio, revealing new atmospheric processes and indicating overlapping parameter ranges between brown dwarfs and planets (Sect.~\ref{s:obs}). Special emphasis is given to cloud modelling as part of the brown dwarf atmospheres, a field of increasing importance since the first review on brown dwarf atmospheres by \cite{allard97} (Sect.~\ref{s:theo}). Since \cite{allard97}, a considerably larger number of brown dwarfs has become known, which allows first statistical evaluations, one example being the search for a correlation between X-ray emission and rotational activity in brown dwarfs. Wavelength-dependent variability studies have gained momentum, and the idea of weather on brown dwarfs has been commonly accepted since the first variability search by \cite{tt99}. In the following, we summarize the observational achievements for field brown dwarfs. We ignore the specifics of young brown dwarfs, as these are reviewed in e.g. \cite{luh2012}, and concentrate on evolved brown dwarfs, not on individual star-forming regions or brown dwarfs with disks.
Section~\ref{s:bench} discusses the idea of model benchmarking as it has emerged in the brown dwarf community. Section~\ref{s:compl} gives an outlook on new challenges like multi-dimensional atmosphere modelling and kinetic gas-phase chemistry.

\include{SarahC2_afterReview}

\section{Theory of brown dwarf atmospheres}\label{s:theo}

Most, if not all, observational findings reported in the previous sections result from or are influenced by atmospheric processes. The following section will therefore summarize the physics as we expect it to occur in ultra-cool atmospheres of brown dwarfs, including planets and M-dwarfs (Sect.~\ref{s:theo1}). Special emphasis will be given to cloud formation processes and their modelling. Section~\ref{ss:diffclmo} contains a summary of different approaches to treating cloud formation as part of model atmospheres.

\subsection{The brown dwarf atmosphere problem}\label{s:theo1}

\begin{figure} \hspace*{-1cm}\includegraphics[width=1.2\textwidth]{Figures/TgasvsPgasTeff1.pdf} \caption{The local temperature-pressure (T$_{\rm gas}$-p$_{\rm gas}$) structure of brown dwarfs (log(g)=5.0) for effective temperatures T$_{\rm eff}=1000\,\ldots\,3000$K. Shown are results from {\sc Drift-Phoenix} model atmosphere simulations for solar element abundances (\citealt{witte2009,witte2011}) [courtesy: Isabel Rodriguez-Barrera].} \label{TgasPgas_DP} \end{figure}

The atmosphere of a brown dwarf is composed of a cold gas with temperatures $\approx 3000$K$\,\ldots\,<500$K, generally decreasing outwards since the upper atmosphere is an open boundary extending into space (Fig.~\ref{TgasPgas_DP}). Local temperature inversions could occur due to local heating processes: a locally increased opacity could lead to radiative heating (e.g. backwarming, see Fig.~\ref{TgasPgas_DP}) or cooling; Alfv{\'e}n wave propagation could cause the occurrence of chromospheric structures; and irradiation would provide an additional, external flux source. The description of the atmosphere of a brown dwarf requires modelling the local thermodynamic (T$_{\rm gas}$ [K], p$_{\rm gas}$ [dyn/cm$^2$]), hydrodynamic (v$_{\rm gas}$ [cm/s], $\rho_{\rm gas}$ [g/cm$^3$]) and chemical properties (n$_{\rm x}$ [cm$^{-3}$], ${\rm x}$ - chemical species (ions, atoms, molecules, cloud particles)) in order to predict observable quantities based on the radiative flux F$_{\lambda}$ [erg/s/cm$^2$/$\AA$]. The goal is to perform this task by using a minimum of global quantities that are observationally accessible, like the resulting total radiative flux F$_{\rm tot}=\int F_{\lambda}\,d\lambda$ through the atmosphere. The classical 1D model atmosphere problem (e.g. \citealt{mihalas82}) is determined by the effective temperature T$_{\rm eff}$ ($F_{\rm tot}=\sigma T_{\rm eff}^4$, with $\sigma$ [erg\,cm$^{-2}$s$^{-1}$K$^{-4}$] the Stefan-Boltzmann constant, the luminosity $L=4\pi R^2 \sigma T_{\rm eff}^4$ and the radius $R$ [cm]), the surface gravity log(g) ($\log (g) = \log (GM/R^2)$, with $G$ [cm$^3$g$^{-1}$ s$^{-2}$] the gravitational constant and M the object mass), and the element abundances of the object. Material quantities like the equation of state, and further chemistry and opacity data, close this system of equations.
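As a small numerical illustration of how these global quantities are related (the object values below are typical numbers chosen by us for illustration, not results from this review), one can convert between (M, R, L) and (log(g), T$_{\rm eff}$) directly:

\begin{verbatim}
import numpy as np

# cgs constants
G     = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
sigma = 5.670e-5      # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
Msun  = 1.989e33      # solar mass [g]
Lsun  = 3.828e33      # solar luminosity [erg/s]
RJup  = 7.149e9       # Jupiter radius [cm]

# Illustrative brown dwarf: M = 0.05 Msun, R = 1 RJup, L = 1e-5 Lsun
M, R, L = 0.05*Msun, 1.0*RJup, 1.0e-5*Lsun

logg = np.log10(G*M/R**2)                    # log(g) = log(G M / R^2)
Teff = (L/(4.0*np.pi*R**2*sigma))**0.25      # from L = 4 pi R^2 sigma Teff^4

print(f"log(g) = {logg:.2f} [cgs],  T_eff = {Teff:.0f} K")
# -> log(g) ~ 5.1 and T_eff ~ 1000 K, i.e. the regime shown in the figure above
\end{verbatim}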
Brown dwarf atmosphere models (\citealt{lunine1986, burrows1989, tsuji1996, tsuji2002, ackerman01, am2013, allard2001, burrows2002, gust2008, witte2009, witte2011}; for earlier references on M-dwarfs see \citealt{allard97}) that are widely applied to optical and IR observations (Sects.~\ref{ssOIR},~\ref{ss:obsvariab}) \begin{itemize} \item[--] calculate the local gas temperature by solving the radiative and convective energy transport through a stratified medium in local thermal equilibrium (LTE) with an open upper boundary, \item[--] assume flux conservation according to a total energy given by $T_{\rm eff}$\\ ($F_{\rm tot}=F_{\rm rad}+F_{\rm conv} = \sigma T_{\rm eff}^4$), \item[--] calculate the local gas-phase composition for a set of element abundances to determine the opacity of the local gas (mostly, chemical equilibrium is assumed; in some simulations, CO/CH$_4$/CO$_2$ and N$_2$/NH$_3$ are treated kinetically to better fit observed spectra), \item[--] calculate the local gas pressure assuming hydrostatic equilibrium. \end{itemize}

\noindent Brown dwarfs deviate from the classical model atmosphere (e.g. the textbook by \citealt{mihalas82}) in that clouds form inside their atmospheres. During the formation process, elements are consumed, resulting in depleted gas-phase opacity sources like TiO, SiO and others. The cloud particles are a considerable opacity source, which introduces a backwarming effect (see Fig.~\ref{TgasPgas_DP}). Convection plays an important role in transporting material into regions that are cool enough for the condensation processes to start or progress (see also Fig.~\ref{DustCircuit}). Convection is a mixing mechanism that can also drive the local gas phase out of chemical equilibrium if the transport is faster than the chemical reactions towards the local equilibrium state (\citealt{noll1997}).

So far, the focus of brown dwarf model atmospheres has been on one-dimensional simulations in the vertical ($z$) direction. This assumption implies that the brown dwarf atmosphere is homogeneous in the other two dimensions ($x, y$). Observations of irradiated brown dwarfs (\citealt{casewell12}) suggest what has been known for irradiated planets since \cite{knutson2009}, namely that global circulation may also occur on brown dwarfs. We note, however, that the driving mechanisms for global circulation are likely dominated by rotation in brown dwarfs, compared to irradiation alone in close-in planets (\citealt{zhang2014}). Variability observations on time-scales different from the rotational period (Sect.~\ref{ss:obsvariab}) have long been interpreted as a sign of an inhomogeneous cloud coverage of brown dwarfs.

{\it To summarize, the aim of building a model for an atmosphere is to understand the interaction between different processes and to calculate quantities that can be compared to experiments or observations. The \underline{input quantities} are the global properties T$_{\rm eff}$, L$_{\star}$, log(g), mass or radius, element abundances, plus material constants for gas chemistry, cloud formation and opacity calculations. The \underline{output quantities} are details of the atmosphere like the local gas temperature, T$_{\rm gas}$ [K], the local gas pressure, p$_{\rm gas}$ [bar], the local convective velocity, {\sc v}$_{\rm conv}$ [cm/s], the local number densities of ions, atoms and molecules, $n_{\rm x}$ [cm$^{-3}$], the local grain size, $a$ [cm], the local number of cloud particles, $n_{\rm d}$ [cm$^{-3}$], the local material composition of the grains, the cloud extension, and many more details.
Directly comparable to observations is the resulting spectral surface flux $F_{\lambda}$. } \subsection{The chemical repository of the atmosphere} The chemical repository of an atmosphere, including atoms, molecules and cloud particles, is determined by the element abundances available throughout the atmosphere. Different wavelengths with different optical depths probe different atmospheric layers with their specific chemical composition. {\it Primordial element abundances}, which should be characteristic of a young, hot brown dwarf, are determined by where and when a brown dwarf formed, as the interstellar element abundances increase in heavy elements over time (\citealt{yuan2011}) and may depend on the star formation history of the brown dwarf's birthplace (\citealt{henry1999,cheng2012}). The primordial abundances should be preserved in the brown dwarf's interior and below any atmospheric region that could be affected by cloud formation and down-mixing of processed element abundances. Given the long lifetimes of brown dwarfs, we expect element sedimentation inside the brown dwarf core similar to what we know from white dwarfs. The element abundances that determine the spectral appearance of a brown dwarf are {\it processed element abundances}, and they differ from the primordial values due to the effect of element depletion by cloud formation, element enrichment by cloud evaporation, and the convective mixing of such chemically altered element abundances. The primordial element abundances are almost always assumed to be the solar element abundances or a scaling thereof. These values are informed by helioseismological measurements and by 3D simulations fitted to high-resolution line profiles. The element abundance values determined for the Sun depend a little on the method and/or simulation applied (\citealt{pereira2013}, see also the discussion in \citealt{hell2008}). In principle, there is no reason why any star's element abundances should precisely scale with the solar element abundances (\citealt{berg2014}). \cite{allard97} summarized the chemical composition in brown dwarf atmospheres as the basis of every model atmosphere simulation. \begin{figure} {\ }\\*[-2.0cm] \centering \hspace*{-0cm}\includegraphics[width=\textwidth]{Figures/Tsubli_M.pdf}\\*[-4.6cm] \hspace*{-0cm}\includegraphics[width=\textwidth]{Figures/Tsubli_C.pdf}\\*[-2.5cm] \caption{Thermal stability of various materials in an oxygen-rich (C/O$<1$; top) and a carbon-rich (C/O$>1$; bottom) solar abundance gas. All curves show $(T_{\rm gas}, n_{<H>})$ where the supersaturation ratio $S_{\rm s}=1$ ($s$=TiO[s], Fe[s], graphite, $\ldots$; Eq.~\ref{eq:ss}). The materials will evaporate in the parameter space above each curve. In the case of graphite, the evaporation parameter space is above and to the right of the curve [courtesy: P. Woitke].} \label{Tsub} \end{figure} \subsection{Fundamental ideas on cloud formation} The following section provides a summary of basic ideas of how clouds form. The formation processes and underlying concepts are based on a microscopic approach which ultimately depends on our quantum-mechanical understanding of chemical reactions leading to more and more complex structures that describe the transition from the gas phase into the solid or liquid phase. Section~\ref{ss:diffclmo} will summarize different approaches to cloud formation modelling that are applied by different research groups to solve the brown dwarf atmosphere problem.
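To make the scaled-solar assumption concrete, the following Python sketch scales a set of solar abundances with a single metallicity value on the usual logarithmic scale $A(X)=\log_{10}(n_{\rm X}/n_{\rm H})+12$; the abundance numbers are illustrative round values, not those of any specific solar reference discussed above:
\begin{verbatim}
# Illustrative solar photospheric abundances, A(X) = log10(n_X/n_H) + 12.
SOLAR_A = {"H": 12.00, "O": 8.69, "C": 8.43, "Si": 7.51,
           "Fe": 7.50, "Ti": 4.95}

def scaled_abundances(m_over_h):
    """Scale all metals by one metallicity value [M/H]; H stays fixed."""
    return {x: (a if x == "H" else a + m_over_h)
            for x, a in SOLAR_A.items()}

def number_fraction(a_x):
    """n_X / n_H from the logarithmic abundance A(X)."""
    return 10.0**(a_x - 12.0)

sub = scaled_abundances(-1.0)   # a metal-poor (subdwarf-like) gas
print(f"Ti/H (solar)    = {number_fraction(SOLAR_A['Ti']):.2e}")
print(f"Ti/H ([M/H]=-1) = {number_fraction(sub['Ti']):.2e}")
print(f"C/O  (solar)    = "
      f"{number_fraction(SOLAR_A['C']) / number_fraction(SOLAR_A['O']):.2f}")
\end{verbatim}
Scaling all metals by one number is exactly the simplification questioned above; element-by-element deviations, and the processed abundances produced by cloud formation and evaporation, break this one-parameter picture.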
\begin{figure}[ht] \centering \hspace*{-0cm}\includegraphics[width=\textwidth]{Figures/TiO2J_Sat.pdf} \caption{A considerable supersaturation is required for the seed formation process to occur. The seed formation rate of TiO$_2$ (nucleation rate $J_*$, dashed red line) is highest far below the thermal stability curve for the solid material TiO$_2$[s] (solid red line). Thermal stability for Al$_2$O$_3$[s] and the olivines in a solar abundance gas are shown for comparison. [courtesy: P. Woitke]} \label{TsubJ*TiO2} \end{figure} \subsubsection{Thermal stability} The concept of thermal stability is used in all but one cloud model to determine if a cloud exists in an atmosphere. Figure~\ref{Tsub} shows the thermal stability curves usually used for this procedure, which are often called 'condensation curves' in the literature. A material on its thermal stability curve is in {\it phase equilibrium}, which is described by the supersaturation ratio $S_{\rm s}=\,1$ of a material $s$ ($s\,$= SiO[s], TiO$_2$[s], MgO[s], Fe[s], MgSiO$_3$[s], $\ldots$; [s] referring to 'solid'), with the supersaturation ratio being defined as \begin{equation} \label{eq:ss} S_{\rm s}=\frac{p_{\rm x}(T_{\rm gas}, p_{\rm gas})}{p_{\rm sat, s}(T_{\rm s})}. \end{equation} $p_{\rm x}(T_{\rm gas}, p_{\rm gas})$ is the partial pressure of the growing gas species $x$, and $p_{\rm sat, s}(T_{\rm s})$ is the saturation vapour pressure of the solid $s$. The application of the law of mass action to $p_{\rm x}$ shows that $S_{\rm s}$ is well-defined no matter whether the monomer of solid $s$ exists in the gas phase or not (\citealt{hewo2006}). Hence, the concept of thermal stability does not allow one to investigate whether and how a particular condensate actually forms. Figure~\ref{Tsub} demonstrates below which temperatures a material would be thermally stable in an oxygen-rich (top) and a carbon-rich (bottom) environment. \subsubsection{Cloud formation processes}\label{ss:clp} Cloud formation (Fig.~\ref{DustCircuit}) starts with the formation of condensation seeds in brown dwarfs and giant gas planets, where no tectonic processes can provide an influx of dust particles into the atmosphere.\footnote{The formation of weather clouds on Earth involves water condensation on pre-existing seed particles ({\it condensation nuclei}) which originate from volcanic eruptions, wood fires, ocean salt spray, sand storms, and also cosmic-ray-induced ion-ion cluster reactions (see the CERN CLOUD experiment). Noctilucent clouds in the upper Earth atmosphere, however, require the recondensation of meteoritic material to understand their existence (\citealt{saunders2007}).} This process is a sequence of chemical reactions through which larger molecules form, which then grow to clusters until, eventually, a small, solid particle emerges from the gas phase. Such reaction chains have been extensively studied in soot chemistry, pointing to the key role of PAHs (polycyclic aromatic hydrocarbons) in carbon-rich environments (\citealt{goeres1993}). The modelling of such chemical paths is greatly hampered by cluster data not always being available for a sensible number of reaction steps, and computational chemistry plays an important role in our progress on astrophysical cloud formation (e.g. \citealt{catlow2010}). Figure~\ref{TsubJ*TiO2} demonstrates that the seed formation process requires a considerable supersaturation of the respective seed-forming species: the seed formation rate, $J_*$, peaks at a far lower temperature than the thermal stability of the same material suggests.
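The role of Eq.~\ref{eq:ss} is easy to demonstrate numerically. The Python sketch below evaluates $S_{\rm s}$ for a fixed partial pressure as the gas cools; the saturation vapour pressure is a toy Arrhenius-type law with assumed constants, standing in for the tabulated $p_{\rm sat, s}(T)$ data used in real cloud models:
\begin{verbatim}
import math

def p_sat(T, A=1.0e13, B=6.0e4):
    """Toy saturation vapour pressure [dyn cm^-2], p_sat = A exp(-B/T).
    A and B are assumed illustrative constants, not fitted to any
    real condensate."""
    return A * math.exp(-B / T)

def supersaturation(p_x, T):
    """S_s = p_x / p_sat(T); S_s > 1 means supersaturated (Eq. above)."""
    return p_x / p_sat(T)

p_x = 1.0e-2   # fixed partial pressure of the growing species [dyn cm^-2]
for T in (2000.0, 1500.0, 1200.0, 1000.0):
    S = supersaturation(p_x, T)
    state = "supersaturated" if S > 1.0 else "undersaturated"
    print(f"T = {T:6.0f} K  ->  S_s = {S:9.3e}  ({state})")
\end{verbatim}
The steep increase of $S_{\rm s}$ with falling temperature is what Fig.~\ref{TsubJ*TiO2} expresses: a gas can become enormously supersaturated before seed formation actually sets in.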
This is very similar to Earth, where water vapour condensation on ions requires a supersaturation of 400\%. \cite{iraci2010}, for example, demonstrate that equilibrium water ice formation is impossible on Mars. \begin{figure}[ht] \centering \hspace*{-0cm}\includegraphics[width=\textwidth]{Figures/CircuitofDust.pdf} \caption{The circuit of dust that determines cloud formation: cloud particle formation (nucleation, growth) $\rightarrow$ gravitational settling (drift) $\rightarrow$ element depletion \& element replenishment by convective mixing (\citealt{woi2004}).} \label{DustCircuit} \end{figure} Once condensation seeds have formed, other materials are already thermally stable (Fig.~\ref{Tsub}) and highly supersaturated (Fig. 1 in \citealt{helling2008}). This causes the growth of a substantial mantle via dust-gas surface reactions. The cloud particles forming in an oxygen-rich gas will therefore be made of a mix of all available materials, as many materials become thermally stable in a rather narrow temperature interval (top panel, Fig.~\ref{Tsub}). Once the cloud particles have formed, inter-particle collision processes may alter the particle size distribution. Such collisions depend on the momentum transfer between particles and may result in further growth of the particles or in their destruction (\citealt{guettler2010, wada2013}). Collisions between charged grains may lead to an acceleration of the coagulation process in brown dwarf atmospheres (e.g. \citealt{konopka2005}). \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figures/DriftPhoenix200045solar_struc.pdf}\\*[-0.2cm] \caption{Cloud model results as part of a {\sc Drift-Phoenix} atmosphere model (T$_{\rm eff}=2000$K, $\log$(g)=4.5, solar metallicity). {\bf 1st panel:} local gas temperature $T_{\rm gas}$ [K] (solid), convective mixing time scale $\tau_{\rm mix}$ [s] (dashed); {\bf 2nd panel:} seed formation rate $J_*$ [cm$^{-3}$s$^{-1}$] (solid), cloud particle number density $n_{\rm d}$ [cm$^{-3}$] (dashed); {\bf 3rd panel:} grain growth velocity $\chi_{\rm s}$ [cm s$^{-1}$]; {\bf 4th panel:} material volume fraction $V_{\rm s}/V_{\rm tot}$ [\%]; {\bf 5th panel:} effective supersaturation ratio $S_{\rm eff}$ for each material $s$; {\bf 6th panel:} mean grain size $\langle a \rangle$ [$\mu$m] (solid), drift velocity v$_{\rm dr}$ [cm s$^{-1}$] (dashed). The same colour refers to the same solid in each of the panels. {\small The subscript $s$ refers to the different condensate materials: $s$=TiO$_2$[s] (blue), Mg$_2$SiO$_4$[s] (orange, long-dash), SiO[s] (brown, short-dash), SiO$_2$[s] (brown, dot - short-dash), Fe[s] (green, dot - long-dash), Al$_2$O$_3$[s] (cyan, dot), CaTiO$_3$[s] (magenta, dash), FeO[s] (green, dash), FeS[s] (green, dot), Fe$_2$O$_3$[s] (green, dot - short-dash), MgO[s] (dark orange, dot - short-dash), MgSiO$_3$[s] (dark orange, dot).} } \label{DP2004.5_solar_struc} \end{figure} \subsubsection{Some results on cloud {\it formation}}\label{ss:cloudresults} We use modelling results from Helling \& Woitke to demonstrate the origin of basic cloud properties and feedback mechanisms that determine the formation of clouds. Refined results of a {\sc Drift-Phoenix} atmosphere simulation (\citealt{witte2011}) are used for this purpose.
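The drift velocity shown in the 6th panel of Fig.~\ref{DP2004.5_solar_struc} follows from the force balance between gravity and frictional gas drag. Below is a hedged, order-of-magnitude Python sketch for the free molecular flow (Epstein) regime; the prefactor conventions differ between references, and the grain material density, surface gravity and gas properties are assumed typical values, not those of the model shown:
\begin{verbatim}
import math

K_B = 1.3807e-16   # Boltzmann constant [erg K^-1]
M_H = 1.6726e-24   # proton mass [g]

def drift_velocity(a, rho_gas, T, rho_d=3.0, g=1.0e5, mu=2.3):
    """Equilibrium settling speed [cm/s] of a grain of radius a [cm],
    v_dr ~ (sqrt(pi)/2) g rho_d a / (rho_gas c_T), free molecular flow,
    with thermal velocity c_T = sqrt(2 k T / (mu m_H)).  rho_d, g, mu
    are assumed values (compact grains, log(g)=5, H2-dominated gas)."""
    c_T = math.sqrt(2.0 * K_B * T / (mu * M_H))
    return 0.5 * math.sqrt(math.pi) * g * rho_d * a / (rho_gas * c_T)

# 0.1 micron vs 100 micron grains at an assumed mid-cloud gas density:
for a in (1.0e-5, 1.0e-2):
    v = drift_velocity(a, rho_gas=1.0e-7, T=1500.0)
    print(f"a = {a:7.1e} cm  ->  v_dr ~ {v:9.3e} cm/s")
\end{verbatim}
The linear scaling $v_{\rm dr}\propto a/\rho_{\rm gas}$ is the origin of the rain-out discussed next: once grains grow large, they fall faster than growth can replenish them.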
\begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{Figures/GrainSizeDist200045solar.pdf} \caption{Grain size distributions, $f(a)$ [cm$^{-3}$ m$^{-1}$], differ for different atmospheric layers due to different cloud formation processes contributing and/or dominating. The plot shows the evolution of the grain size distribution from the start of cloud formation by seed formation (narrow grayish peaks), through the continued production of seed particles which simultaneously grow (peaks growing in height and moving to the right towards larger grain sizes; blue colours), to the growth-dominated distributions (peaks have constant heights and keep moving to the right; purple colours). $f(a)$ is shown for the same atmosphere model as in Fig.~\ref{DP2004.5_solar_struc}.} \label{DP2004.5_solar_fvona} \end{figure} The upper boundary of the cloud is determined by the formation of condensation seeds. Figure~\ref{DP2004.5_solar_struc} demonstrates that substantial grain growth can only set in once condensation seeds have formed, despite extremely high supersaturations of potentially condensing materials (5th panel). The seed formation rate, $J_*$ [cm$^{-3}$s$^{-1}$] (2nd panel, solid), is calculated for TiO$_2$-nucleation, for which cluster data exist (see \citealt{helfom2013}). It furthermore determines the number density of cloud particles, $n_{\rm d}$ [cm$^{-3}$] (2nd panel, dashed), in the entire cloud. Below a shallow TiO$_2$[s] layer, all thermally stable materials grow almost simultaneously (3rd and 4th panel), producing core-mantle cloud particles with a mixed mantle composition. A closer inspection of the material volume fractions ($V_{\rm s}/V_{\rm tot}$) and the grain growth velocity ($\chi_{\rm s}$) reveals a changing material composition in the cloud with height. The reasons are element consumption and thermal instability: the condensates cannot grow further if element depletion causes a sub-saturation, or evaporation sets in if the local temperature is too high. The result is that the individual supersaturation ratios also approach one, but each at a different temperature (i.e. atmospheric height in 1D models). Element depletion also affects the seed formation (mainly through Ti). No new seed particles can form below a certain height, as the increasing temperature hampers the clusters' thermal stability. All cloud particles that exist below this height have rained in from above. These particles fall into a gas of increasing density and temperature. The increasing gas density causes an increasing mean grain size ($\langle a \rangle$) until the grains fall faster than they can grow. A part of the lower cloud therefore has an almost constant mean grain size. Below that, the temperature is too high and even high-temperature condensates like Fe[s] and Al$_2$O$_3$[s] evaporate. The thermal stability of the most stable materials determines the cloud's lower boundary. The lower edge of the cloud is made of large particles that consist of very heat-resistant materials like Al$_2$O$_3$[s] with inclusions of Fe[s] and TiO$_2$[s]. \subsubsection{Why do we need a cloud model?} Important input quantities for model atmosphere simulations, and for retrieval me\-thods, are opacity data for the gas phase and for the cloud particles. Molecular line lists have been a big issue for a long time (\citealt{allard97, hill2013}).
But the calculation of the gas-phase absorption coefficient also requires knowledge of the number density of the absorbing species, which is determined by the element abundances of the constituting elements. The element abundances are strongly influenced by how many cloud particles form, of which composition, and where in the atmosphere. A detailed cloud model is therefore needed to calculate how many cloud particles deplete the gas where and of which elements. \cite{helling2008} detail in their Fig. 7 the changing [Ti/H], [Si/H], [Fe/H], etc. abundances with atmospheric height and global parameters when cloud formation is considered. The cloud opacity is determined by the size distributions of the cloud particles and their material composition as well as the optical constants (refractive index). The cloud opacity changes with height because of the height-dependent grain size distribution, and because the material composition of the cloud particles also changes with height (Figs.~\ref{DP2004.5_solar_struc},~\ref{DP2004.5_solar_fvona}). Figure~\ref{DP2004.5_solar_fvona} depicts the number of cloud particles for each atmospheric layer considered in the underlying atmosphere model: each atmospheric layer is characterized by one curve. The distribution of the number of particles (denoted by $f(a)$) also changes with atmospheric height. The evolution of the cloud particle size distributions is determined by the cloud formation processes summarized in Sect.~\ref{ss:cloudresults}. The delta-like distributions (dark green curves) are characteristic of the top of the cloud, where the nucleation process dominates and surface growth is not yet efficient (compare also panels 2 \& 3 in Fig.~\ref{DP2004.5_solar_struc}). The distributions, $f(a)$, broaden when grain growth becomes efficient, and $f(a)$ increases in height when nucleation takes place simultaneously (wide purple curves). This is the case just below the cloud top. Once the size distributions 'move' in grain-size space towards the right (towards larger grain sizes) with a constant peak value, the nucleation has stopped. This is indicative of those cloud regions where the cloud particles rain into deeper atmospheric layers. The deeper cloud layers are characterized by narrow size distributions of large grains, as all cloud particles in that layer had time to grow. Eventually, the distribution functions move back into the small-grain region of Fig.~\ref{DP2004.5_solar_fvona} because the cloud particles evaporate. Figure~\ref{DustOpacity_Teff2000} shows wavelength-dependent cloud opacities for individual cloud layers. The silicate absorption features appear clearly in the low-temperature part of the cloud (compare the lowest panel of Fig.~\ref{DustOpacity_Teff2000}). Scattering dominates the cloud extinction shortward of $4\,\ldots\,9\mu$m, depending on the cloud particle sizes. Hence, for both the gas opacity and the cloud opacity, a rather detailed cloud model needs to be applied to determine the cloud particle sizes, their size distribution and their material composition depending on the local thermodynamic properties inside the atmosphere.
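The qualitative behaviour of the cloud opacity can be illustrated with the small-particle (Rayleigh) limit of Mie theory shown in the bottom panel of Fig.~\ref{DustOpacity_Teff2000}: for a size parameter $x=2\pi a/\lambda \ll 1$, absorption scales as $Q_{\rm abs}\propto x$ while scattering scales as $Q_{\rm sca}\propto x^4$, so scattering dies off much faster towards long wavelengths. The Python sketch below uses an assumed, wavelength-independent complex refractive index as a placeholder for the tabulated optical constants of real condensates:
\begin{verbatim}
import math

def q_abs_sca(a, lam, m=complex(1.7, 0.03)):
    """Rayleigh-limit efficiencies for grain radius a and wavelength lam
    (same units): Q_abs = 4x Im[(m^2-1)/(m^2+2)],
                  Q_sca = (8/3) x^4 |(m^2-1)/(m^2+2)|^2."""
    x = 2.0 * math.pi * a / lam
    pol = (m * m - 1.0) / (m * m + 2.0)
    return 4.0 * x * pol.imag, (8.0 / 3.0) * x**4 * abs(pol)**2

a = 0.1                        # grain radius [micron]
for lam in (1.0, 4.0, 10.0):   # wavelength [micron]
    qa, qs = q_abs_sca(a, lam)
    print(f"lambda = {lam:4.1f} um:  Q_abs = {qa:.2e}  Q_sca = {qs:.2e}")
\end{verbatim}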
\begin{figure} \centering \hspace*{-0.4cm}\includegraphics[width=0.75\textwidth]{Figures/t2000g45_abs.pdf}\\*[-1.3cm] \hspace*{-0.4cm}\includegraphics[width=0.75\textwidth]{Figures/abs+sca.pdf}\\*[-0.7cm] \includegraphics[width=0.61\textwidth]{Figures/Qext-Woitke2006.pdf} \caption{{\bf Top two:} Cloud opacity, $\kappa^{\rm dust}_{\rm abs}(\lambda)$ and $\kappa^{\rm dust}_{\rm ext}(\lambda)= \kappa^{\rm dust}_{\rm abs+scat}(\lambda)$ [cm$^2$ g$^{-1}$], as a function of wavelength, $\lambda$ [$\mu$m], for different layers of a warm brown dwarf (T$_{\rm eff}=2000$K, log(g)=4.5). Different colours indicate different layers in the {\sc Drift-Phoenix} model atmosphere, with each layer having different grain sizes and material compositions (Figs.~\ref{DP2004.5_solar_struc},\ref{DP2004.5_solar_fvona}) [courtesy: Diana Juncher]. {\bf Bottom:} Extinction efficiency as a function of wavelength in the small-particle limit of Mie theory (\citealt{woitke2006}).} \label{DustOpacity_Teff2000} \end{figure} \subsection{Different approaches to describe cloud formation in atmosphere simulations}\label{ss:diffclmo} Cloud models are an integral part of each brown dwarf (and planetary) model atmosphere simulation, as they determine the remaining element abundances that define the local gas-phase composition of the spectrum-forming atmosphere layers. In the following, we summarize the different cloud models that are to date applied and published in model atmosphere simulations of brown dwarfs (and planets and M dwarfs). This section is an update of \cite{hell2008}, and it includes all brown dwarf cloud models that are part of an atmosphere simulation. \cite{ackerman01} discuss some of the older cloud models (\citealt{lunine1986, rossow1978}). A more planet-focused review is provided by \cite{marley2013}. \cite{lunine1986}, and subsequently \cite{burrows1989}, were the first to introduce a cloud opacity (or 'particulate opacity sources') into their atmosphere model, which served as outer boundary for brown dwarf evolution models. \cite{tsuji1996} suggested that dust needs to be taken into account as an opacity source for atmosphere models with T$_{\rm eff}<$2800K. The following cloud models are very different from any cloud parametrization used in the classical retrieval methods. Retrieval methods only use a radiative transfer code that is iterated until an externally given set of parameters (local properties like molecular abundances and gas temperatures) fits a set of observed properties (\citealt{mad2009, ben2012, bar2013, lee2013}). Different approaches are used to assess how well the parameter set fits the observation. \cite{lee2013} and \cite{line2014} assume Gaussian errors for single best-fit solutions; \cite{ben2012} derive a full probability distribution and credibility regions for all atmospheric parameters. Both methods require a prior to start the best-fit procedure in a multi-parameter space to ensure that the global minimum can be found. \paragraph{i) Tsuji model:} \cite{tsuji2001} suggested that condensates in cool dwarf atmospheres are present in the form of layers with strict inner and outer boundaries. The inner boundary, associated with a certain temperature denoted by $T_{\rm cond}$, is related to the thermodynamical stability of the cloud particles in the surrounding gas.
The upper boundary, parametrized by $T_{\rm cr}$, is related to the assumption that the cloud particles must remain extremely small, because if they grew too large they would settle gravitationally. For $T_{\rm cond} > T > T_{\rm cr}$, the particles are assumed to be constantly forming and evaporating, thereby circumventing the problem of gravitational settling (\citealt{tsuji2001, tsuji2002, tsuji2004, tsuji2005}). \cite{yam2010, tsuji2011, sora2013} have applied their {\sc Unified Cloudy Model} to the unique AKARI data set for brown dwarfs. \cite{sorahana} interpreted the mismatch with these models as a signature of chromospheric activity on brown dwarfs (see Sect.~\ref{ss:halfa}). \paragraph{ii) Burrows et al. model:} \cite{cooper2003} (also \citealt{burrows2006}) assume chemical and phase equilibrium to determine whether cloud particles of a certain kind are thermodynamically stable in a solar composition gas. If $p_{\rm x}(T_{\rm gas}, p_{\rm gas})=p_{\rm sat, s}(T_{\rm gas})$, i.e. the material $s$ is thermally stable, the mean size of the particles of a certain homogeneous composition $s$ is deduced from local time-scale arguments (\citealt{rossow1978}), considering growth, coagulation (also named coalescence), precipitation and convective mixing. $p_{\rm x}(T_{\rm gas}, p_{\rm gas})=p_{\rm sat, s}(T_{\rm gas})$ also determines the altitude of cloud layers of composition $s$. The amount of dust is prescribed by a free parameter $S_{\rm max} \approx 1.01$ (maximum supersaturation), which is the same for all materials. Thereby, the supersaturation ratio of the gases is fixed throughout the atmosphere, and the mass of cloud particles present in the atmosphere scales with the saturation vapour pressure $p_{\rm sat, s}(T)$, which decreases exponentially with decreasing $T_{\rm gas}$. Consequently, the vertical cloud structure is a dust layer with a strict lower boundary and an exponentially decreasing dust-to-gas ratio above the cloud base. \cite{burrows2011} use this phase-equilibrium approach to search where $S_{\rm s}$=1.0 for individual materials $s$. A cloud density function is distributed around this local pressure, parametrizing the geometrical cloud extension. For each condensate $s$, the vertical particle distribution is approximated by a combination of a cloud shape function and exponential fall-offs at the high- and low-pressure ends. The model has been used, for example, by \cite{apai2013} to suggest the presence of an upper, warm thick cloud and a lower, cool thin cloud as the reason for the observed atmospheric variability of two early L/T-transition dwarfs (2M2139, SIMP0136; see Sect.~\ref{ss:obsvariab}). Different cloud shape functions are tested with a constant vertical distribution of particles above the cloud base (B-clouds, same as the DUSTY models of \citealt{allard2001}), and the modal grain sizes per material are adjusted to provide the best spectral fit. \paragraph{iii) Marley et al. model:} \cite{ackerman01} parametrize the efficiency of sedimentation of cloud particles relative to turbulent mixing through a scaling factor, $f_{\rm sed}$. Large values of $f_{\rm sed}$ describe rapid particle growth and large mean particle sizes. In this case, sedimentation is efficient, which leads to geometrically and optically thin clouds. When $f_{\rm sed}$ is small, particles are assumed to grow more slowly, the amount of condensed matter in the atmosphere is larger, and clouds are geometrically more extended. Marley et al.
solve a diffusion equation that aims to balance the advection and diffusion of each species' vapor, $p_{\rm x}(T_{\rm gas}, p_{\rm gas})$, and condensate, $p_{\rm sat, s}(T_{\rm gas})$, at each layer of the atmosphere. It balances the upward transport of vapor and condensate by turbulent mixing with the downward transport of condensate by sedimentation. The downward transport of each condensate is parametrized by $f_{\rm sed}$ and the turbulent mixing by an eddy diffusion coefficient, $K_{\rm zz}$ [cm$^2$ s$^{-1}$]. The partial pressure of each condensate species, $p_{\rm x}$, is compared with the condensate vapour pressure, $p_{\rm sat, s}$, the crossing of which defines the lower boundary of the cloud above which this particular condensate is thermally stable. The model, like all other aforementioned models, assumes that each material can form by homogeneous condensation. \cite{ackerman01} compute a single, broad log-normal particle size distribution for an assumed modal size that is intended to capture the likely existence of a double-peaked size distribution. \cite{fort2008} apply this cloud model to planetary atmosphere simulations and, for example, suggest two classes of irradiated planets. \cite{morley12} applied the Marley et al. model to suggest the presence of an additional cloud layer higher in the atmosphere, towards lower temperatures. \paragraph{iv) Allard et al. model:} Phase equilibrium between cloud particles and gas is also assumed in this model (\citealt{allard2001}). This assumption, i.e. $p_{\rm x}(T_{\rm gas}, p_{\rm gas})=p_{\rm sat, s}(T_{\rm gas})$, is used to determine the cloud base for each condensate individually. Using the time scales for condensation, sedimentation and coalescence (\citealt{rossow1978}) in comparison to a prescribed mixing time-scale allows the determination of a local mean grain size for a given grain size distribution (\citealt{allard2013}). The mixing time-scale describes the convective overshooting based on a mass exchange frequency guided by \cite{ludwig2002}. The {\sc BT-Settl} models were applied to learn about variability in brown dwarf atmospheres, for example on Luhman 16 by \cite{crossfield14}.
\begin{table} \label{tab:cloudmodels} {\small \hspace*{-0.5cm}
\begin{tabular}{l|l|l|l|ll}
 & grain & grain & gas & \multicolumn{2}{l}{fitted}\\
 & size & composition & saturation & \multicolumn{2}{l}{parameters}\\
\hline
\multicolumn{5}{l}{{\bf \underline{Model atmosphere simulations}}}\\[0.2cm]
Tsuji$(^1)$ & $a=10^{-2}\mu$m & homog. & $S=1$ & {\it UCM} & dust between\\
 & & & & & $T_{\rm cr}<T<T_{\rm cond}$ \\[0.1cm]
Allard \& Homeier $(^2)$ & $f(a)=a^{-3.5}$ & homog. & $S=1$ &{\it dusty} & full dusty model\\
 & & & &{\it cond} & dust cleared model \\
 & time scales dep. & homog. & $S=1.001$ &{\it settl} & time scales\\
 & & & & $K_{\rm zz}$ & mixing for non-\\
 & & & & & equilibrium molecules\\[0.1cm]
Cooper, Burrows et al.$(^3)$ & $f(a)\sim\big(\frac{a}{a_0}\big)^6$ & homog. &$S=1.001$ & & dust between \\
 & $\times\exp\big[-6\big(\frac{a}{a_0}\big)\big]$ & & & & $P^{\rm cloud}_{\rm upper}$, $P^{\rm cloud}_{\rm lower}$ \\[0.1cm]
Barman $(^4)$ & log-norm. \! $f(a, a_0)$ & homog. & $S=1$ & $P_{\rm min}$, $a_0$ \\
Ackerman \& Marley $(^5)$ & log-norm. \! $f(a,z)$ & homog.
 & $S=1$ & $f_{\rm sed}$ & sedimentation \\
 & & & & $K_{\rm zz}$ & mixing for non-\\
 & & & & & equilibrium molecules\\
Helling \& Woitke$(^6)$ & $f(a, z)$ & mixed & $S=S(z,s)$&&\\[0.3cm]
\multicolumn{5}{l}{{\bf \underline{Retrieval method (radiative transfer only + fit quality assessment)}}}\\[0.2cm]
Barstow, \!Irwin, Fletcher$(^7)$ & $a_1$ & homog. & -- & \multicolumn{2}{l}{n$_{\rm mix}$(H$_2$O, CO$_2$, CH$_4$)}\\
+ Benneke \& Seager$(^8)$ & log-norm. \!$f(a_2,z)$ & & & \multicolumn{2}{l}{$\tau^{\rm cloud}(a_1, a_2)$, R$_{\rm Pl}$(@ 10 bar)}\\[0.1cm]
Lee, Heng \& Irwin$(^9)$ & $a=$ const & homog. & -- & \multicolumn{2}{l}{$P^{\rm cloud}_{\rm up}$, $P^{\rm cloud}_{\rm down}$, $\tau^{\rm cloud}(Q_{\rm ext}(a))$}\\
+ Line, Fortney, & & & & \multicolumn{2}{l}{\small $\rightarrow$ more parameters possible}\\
Marley \& Sorahana $(^{10})$ & & & & \\
\end{tabular}
\caption{ $(^1)$: \cite{tsuji1996, tsuji2002, tsuji2004, tsuji2005}; $(^2)$: \cite{allard2001, bar2001, allard2013, rossow1978}; $(^3)$: \cite{cooper2003, burrows2006}; $(^4)$: \cite{barman11}; $(^5)$: \cite{ackerman01, morley12}; $(^6)$: \cite{woi2003, woi2004, hewo2006, helling2008, witte2009}; $(^7)$: \cite{bar2013, fle2009}; $(^8)$: \cite{ben2012}; $(^9)$: \cite{lee2013}; $(^{10})$: \cite{line2014}. }} \end{table} \paragraph{v) Barman model:} \cite{barman11} also assume phase equilibrium and find the lower boundary of the cloud (cloud base) where the atmospheric ($T_{\rm gas}$, $p_{\rm gas}$)-profile intersects the thermal stability curve of a condensate ($p_{\rm x}(T_{\rm gas})=p_{\rm sat, s}(T_{\rm gas})$). The particle sizes follow a log-normal distribution with a prescribed modal size, $a_0$. The adjustable parameter $a_0$ can vary between 1 and 100$\mu$m and is the same for each atmospheric height. Prescribing the particle sizes allows the determination of the number of cloud particles (equilibrium dust concentration). The cloud height and density above the cloud base are determined by a free parameter $P_{\rm min}$. The equilibrium dust concentration is assumed if $p_{\rm gas} \ge P_{\rm min}$, and decays exponentially for $p_{\rm gas}< P_{\rm min}$. If $P_{\rm min} > p_{\rm sat, s}(T_{\rm gas})$, then the maximum dust-to-gas ratio is lowered relative to the equilibrium concentration. \paragraph{vi) Woitke \& Helling model:} This model is different from all above models i)-v) as it kinetically describes seed formation and growth/evaporation coupled to gravitational settling, convective mixing and element depletion by conservation equations following Sect.~\ref{ss:clp}. These intrinsically time-dependent processes (\citealt{hell2001, woi2003}) are treated in a stationary approximation of the conservation equations (\citealt{woi2004, hewo2006, hell2008}) to allow the coupling with a model atmosphere code ({\sc Drift-Phoenix}, \citealt{hell2008b, witte2009, witte2011}). The convective mixing with overshooting is parametrized according to a mass exchange frequency (\citealt{ludwig2002}). {\sc Drift-Phoenix} model atmospheres were applied to study metal-deficient brown dwarfs of the early universe (\citealt{witte2009}), and they have recently been used to explore ionization and discharge processes in ultra-cool, cloud-forming atmospheres (\citealt{hell2011, hell2011b, hell2013, rim2013, stark2013}). \medskip \noindent {\bf Model approach summary:}\\ -- All phase-equilibrium models (i-v) assume that each condensate can form by homogeneous condensation; hence, it is assumed that the monomers exist in the gas phase.
\\ -- All phase-equilibrium models (i-v) adjust their element abundances according to how much the monomer partial pressure exceeds the vapour pressure ($p_{\rm x}>p_{\rm sat, s}$). A prescribed size distribution allows the calculation of the cloud particle sizes. Note that the two prescribed distribution functions (power law and log-normal) differ strongly in their dust mass distribution due to the relative contribution of the different sizes.\\ -- All phase-equilibrium models (i-v) are relatively easy to implement.\\ -- Table~\ref{tab:cloudmodels} provides a comparison of all cloud models, including the free parameters of each cloud model. \begin{figure}[ht] \includegraphics[width=\textwidth]{Figures/Patienceetal2012_Fig8.pdf} \caption{\cite{pat2012} demonstrate spectral fits for various brown dwarfs with synthetic spectra from different model families. Different model families were also used by \cite{dup2010} to provide a good error estimate on the derived global parameters. The figure shows that all model atmospheres found it challenging to fit brown dwarfs at the lowest T$_{\rm eff}$ depicted.} \label{pat2012Fig8} \end{figure} \section{Benchmarking atmosphere models and comparison to observations}\label{s:bench} A 'benchmark' is by definition a well-defined test case which is performed to understand differences between different approaches to the same problem. Astronomers and astrophysicists approach this differently. Observational astronomy, which aims at benchmarking brown dwarf models, follows the original idea that was inspired by stellar binary systems for which one companion's parameters are well known (\citealt{pin2006,burningham13}). 'Benchmark brown dwarfs' are objects for which the distance (and thus luminosity), the age and the metallicity can be determined from observations. Their radii and masses can be derived from evolutionary models if the age is well constrained. The effective temperature and the surface gravity can then be determined from the luminosity and radius, and from the mass and radius, respectively. With T$_{\rm eff}$, $\log$(g) and a known metallicity, hence known element abundances, a model atmosphere is well constrained and a synthetic spectrum can be produced. This model spectrum is then compared to the binary target. Obvious places to search for such benchmark objects are open clusters, as their age, composition and distance can be well defined through observations of much more luminous cluster members, and nearby kinematic moving groups with well-defined membership, age and composition. Theoretical astrophysics has two model systems to benchmark in our context of brown dwarfs: the atmosphere models and the evolutionary models. Both model systems are complex. The additional challenge is that both systems are not independent, because the model atmospheres serve as outer boundaries during the run of evolutionary models. Benchmarking both systems requires a substantial effort from different research groups world-wide, and it is unlikely to happen in the near future. Ongoing areas of research, however, are dedicated comparison studies that aim to demonstrate the difficulties in using model atmosphere and/or evolutionary grids as 'black boxes' (e.g. \citealt{sincl2010}), and to allow the observer community to use an error estimate for model atmospheres (\citealt{boz2014}) and evolutionary models (\citealt{southworth2009}).
A sensitive handling of such uncertainties is not only important for our immediate understanding of brown dwarfs, but also for detecting planets around brown dwarfs and M-dwarfs (e.g. \citealt{rojas2013,triaud2013}). Model atmosphere tests are conducted in various ways. \cite{rajp2012} test different model atmosphere families in finding the stellar parameters for the late M-dwarf LHS 1070. They also include the cloud-free MARCS models in their comparison with the cloud-modelling {\sc Phoenix} families ({\sc BT-Settl}, {\sc Drift}). \cite{dup2010} and \cite{pat2012} compare the quality of fits to observations by model atmospheres that describe the same physical problem, a cloud-forming atmosphere (Fig.~\ref{pat2012Fig8}). The authors point out the significant difference in the $H$-band spectrum for the different models. This effect decreases with increasing T$_{\rm eff}$ but still impacts the $J$-band for T$_{\rm eff}=1700$K. The primary difference is in the cloud modelling. The cloud models differ in details (grain sizes, material composition) that influence the local opacity of the cloud and also of the gas phase through element depletion. For example, a higher cloud opacity results in less flux in the $J$-band, and the decreased water abundance results in weakened water absorption bands. The approach presented in \cite{dup2010} and \cite{pat2012} allows an error estimate of the derived, global stellar parameters. A more detailed comparison of various brown dwarf model families is presented in \cite{hell2008}. The focus of this paper is comparing cloud model results for prescribed test cases. This also led to more understanding of why the synthetic spectra from different model families differ and of the role the cloud modelling plays. The Virtual Observatory pioneers the incorporation of grids of different model atmosphere families into its database. A comparison between different model families can provide estimates of the expected systematic errors, which is of interest for the outcome of space missions. \cite{sarro2013} performed the task of predicting how well T$_{\rm eff}$ can be determined from Gaia data for brown dwarfs based on the {\sc BT-Settl} model atmosphere grid. The Virtual Observatory is now capable of facilitating a multi-model family approach. \section{Increasing completeness and increasing model complexity}\label{s:compl} It may seem a big leap from a 1D atmosphere code to an atmosphere model that allows the study of time-variable cloud cover, the formation of photochemically driven hydro-carbonaceous macro-molecules, magnetic interactions, and irradiation. All these depend on global parameters like rotational period, magnetic field strength, cosmic ray flux etc. However, several aspects of this list start to emerge in the literature. \subsection{Multi-dimensional, dynamical atmosphere simulations} First steps towards a multi-dimensional approach to cloud-forming brown dwarf atmospheres were made by 2D hydrodynamical simulations that treat dust formation (nucleation, growth/evaporation, element depletion), turbulence and radiative heating/cooling (\citealt{hell2001, hell2004}), and by large-scale 2D radiative-convection simulations that included the dust growth/evaporation processes for a prescribed number of nucleation seeds (\citealt{frey2010}).
Both works suggest that clouds will not be present as a homogeneous, carpet-like layer, but that cloud particles form intermittently, depending on the local temperature and density field, resulting in patchy cloud structures. \cite{robinson14} came forward with a similar suggestion of local temperature variations to explain brown dwarf variability. \cite{show2013} use a 3D approach to simulate a globally circulating brown-dwarf atmosphere, but exclude cloud opacities, turbulence, and radiation. These authors suggest that a hydrodynamically induced horizontal temperature variation of $\Delta T=50$K can lead to flux variations of $\Delta F/F \approx 0.02-0.2$. Each of these models addressed a different aspect of a multi-dimensional, dynamical atmosphere simulation. The challenge faced by all simulations is illustrated by comparing the following time scales that are characteristic of interacting processes. The table below demonstrates that it is misleading to consider atmospheric processes as independent from each other, and that, for example, cloud formation processes could be re-ignited by transport processes like gravitational settling or gas mixing that provide new condensable material:
\begin{tabbing}
radiative cooling$^{\Diamond}$:\= \hspace*{0.8cm} \= $\tau_{\rm rad}(\rho_{\rm gas})$\,\hspace*{0.8cm}\=$=$\,\=$\big(\frac{4\pi \kappa}{c_{\rm V}}\frac{\partial B}{\partial T}\big)^{-1}$ \=\,\,\= =\,\, 0.5 $\ldots$ 100 days \\
\> \> \> \> \> \>{\small (with dust \quad without dust)}\\
gravitational settling$^{\dag}$:\> \> $\tau_{\rm sink}(a, \rho_{\rm gas})$ \,\hspace*{0.5cm}\>$=$\,\>$\frac{H_{\rm p}}{{\rm v}_{\rm drift}}$ \>\,\,\= =\,\, 15 min $\ldots$ 8 months\\
\> \> \> \> \> \>\,\, {\small ($a=100\mu$m$\ldots 0.1\mu$m)}\\
large-scale convection: \> \>$\tau_{\rm conv}(\nabla T_{\rm gas})$\>$=$\>$\frac{H_{\rm p}}{{\rm v}_{\rm conv}}$ \> \> =\,\, 20 min $\ldots$ 3.5h \\
diffusive eddy mixing$^{\ddag}$: \> \>$\tau_{\rm diff}$\>$=$\>$\frac{H^2_{\rm p}}{K_{\rm eddy}}$ \> \> =\,\, 3h$\,\ldots\,$ 3 yrs\\
grain growth: \> \> $\tau_{\rm gr}(T_{\rm gas}, \rho_{\rm gas})$\>$=$\>$\frac{a}{{\rm v}_{\rm gr}}$ \> \> =\,\, 0.1s $\ldots$ 1.5min\\
wave propagation: \> \> $\tau_{\rm wave}(T_{\rm gas}, \rho_{\rm gas})$\>$=$\>$\frac{H_{\rm p}}{u+c_{\rm s}}$ \>\,\,\= =\,\, 0.3s $\ldots$ 3s\\
seed formation: \> \> $\tau_{\rm nuc}(T_{\rm gas}, \rho_{\rm gas})$\>$=$\>$\frac{n_{\rm d}}{J_*}$\> \> $\approx$\,\, $10^{-3}$s\\[0.2cm]
$^{\Diamond}$ {\small Please refer to Table 1 in \cite{hell2011} for definitions and values of the absorption coefficient $\kappa$,}\\
{\small $c_{\rm V}$ the specific heat capacity at constant volume, and $B(T)$ the frequency-integrated Planck function.}\\
$^{\dag}$ {\small Please refer to \cite{woi2003, woi2004} for the definition of ${\rm v}_{\rm drift}$, ${\rm v}_{\rm gr}$, the mean}\\
{\small grain size $a$, the number of cloud particles $n_{\rm d}$, and the seed formation rate $J_*$. Typical}\\
{\small values for all other quantities are applied, $H_{\rm p}\approx 10^6$cm.}\\
$^{\ddag}$ {\small $K_{\rm eddy}$ [cm$^2$s$^{-1}$] (or $K_{\rm zz}$) is the eddy diffusion coefficient, ranging between $10^4\,\ldots\,10^8$cm$^2$s$^{-1}$,}\\
{\small see \cite{bilg2013}}
\end{tabbing}
Diffusion, gravitational settling and convection have the longest time scales in a brown dwarf atmosphere compared to the chemical timescales for cloud particle nucleation and growth.
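The following Python sketch evaluates this time-scale hierarchy for one set of assumed, order-of-magnitude input values consistent with the ranges quoted in the table; it is meant only to make the separation of scales explicit:
\begin{verbatim}
H_P = 1.0e6        # pressure scale height [cm], typical value as above

# Assumed order-of-magnitude inputs:
v_conv  = 1.0e3    # convective velocity [cm/s]
v_drift = 1.0e0    # settling speed of a small grain [cm/s]
K_eddy  = 1.0e6    # eddy diffusion coefficient [cm^2/s]
c_s     = 3.0e5    # sound speed [cm/s] (u << c_s assumed)
n_d, J_star = 1.0e3, 1.0e6   # grain density [cm^-3], nucleation rate

timescales = {
    "nucleation   n_d/J_*":      n_d / J_star,
    "wave prop.   H_p/c_s":      H_P / c_s,
    "convection   H_p/v_conv":   H_P / v_conv,
    "eddy mixing  H_p^2/K_eddy": H_P**2 / K_eddy,
    "settling     H_p/v_drift":  H_P / v_drift,
}
for name, tau in sorted(timescales.items(), key=lambda kv: kv[1]):
    print(f"{name:26s} tau = {tau:9.2e} s")
\end{verbatim}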
The wave propagation time scale can be used as a proxy for turbulence acting on the small scales where chemical processes take place. Wave propagation still takes $100\times$ longer, and hence, cloud particle formation would be given too much time if a numerical scheme used wave propagation to set the integration time-steps. This is problematic, as the cloud particle formation determines the remaining gas-phase abundances, which in turn determine the local gas opacity and with that the local gas temperature, and eventually the energy transport and the spectral flux. The radiative cooling time scale indicates the impact of the radiative energy transport on the local hydrodynamics through the energy equation. Depending on the local opacity, radiative cooling can be very efficient, even causing local gas volumes to implode (\citealt{hell2001}). \subsection{Gas-phase non-equilibrium effects} Deviations from local chemical gas-phase equilibrium in the upper atmosphere are suggested to be caused by a rapid convective and/or diffusive up-mixing of warm gases from deeper atmospheric layers combined with a slow relaxation into chemical equilibrium (\citealt{saum2000}). Other processes that drive the local gas-phase out of chemical equilibrium (and LTE) are photodissociation, or ion-neutral chemistry initiated by cosmic ray impact (\citealt{rim2013}). The local chemical composition of the atmosphere is derived from extensive gas-phase rate network calculations under the influence of vertical mixing and photodissociation. Most of these studies are performed for irradiated planets (e.g. \citealt{moses2011, venot2012})\footnote{\cite{venot2012}'s chemical network is publicly available at http://kida.obs.u-bordeaux1.fr/.}. A height in the atmosphere (the so-called quenching height) is derived above which the gas kinetic reactions are too slow to considerably change the molecular abundances. This idea of modelling time-dependent gas non-equilibrium effects has two shortcomings: first, the diffusive eddy mixing coefficient $K_{\rm eddy}$ [cm$^2$s$^{-1}$] (or $K_{\rm zz}$) becomes an additional parameter for fitting model atmospheres to observations (e.g. \citealt{cush2010}). Second, reaction rate coefficients differ in the literature, leading to differences in the destruction time scales of, for example, C$_2$H$_2$ and C$_2$H$_6$ (\citealt{bilg2013}). This is, however, a challenge faced by all gas-kinetic approaches, and considerable effort is ongoing to weed out the respective rates. \cite{rim2014} model cosmic ray transport through a brown dwarf atmosphere and demonstrate how galactic cosmic rays influence the abundance of hydrocarbon molecules through ion-neutral reactions. \section{Closing the loop} The first L4 dwarf (GD 165B; \citealt{becklin88}) was discovered $\sim 30$ years ago. Since then, the spectral classification of these very cool objects has led to the introduction of the new spectral classes L, T and Y, with the Y dwarfs having T$_{\rm eff}$ typical for planets. Such low temperatures immediately suggest that brown dwarf atmospheres must contain a chemically very rich gas from which clouds will form. The evolutionary transition from the L into the T dwarf spectral type is associated with atmospheric variability, which is attributed to variable cloud coverage. In parallel to the increasing number of observations, atmosphere modellers adapted stellar atmosphere codes to cooler gases by introducing cloud models and additional gas opacity sources (e.g. CO$_2$, CH$_4$, NH$_3$).
More complex processes like kinetic gas chemistry, turbulence and multi-dimensional hydrodynamic simulations were performed, but with far less consistency between the processes. The picture that has emerged for a brown dwarf atmosphere is that of a chemically very active gas that is exposed to phase changes, turbulence, and high- and low-energy radiation. It also provides a valuable path towards the understanding of brown dwarfs as planetary host stars (\citealt{triaud2013}) and of climate evolution on extrasolar planets. More and more similarities to planets arise: radio and X-ray observations suggest brown dwarf atmospheres to be ionized to a certain extent. Theoretical studies on ionization processes support this idea by demonstrating that clouds in brown dwarfs will be charged (\citealt{hell2011b, hell2011}), that clouds can discharge in the form of lightning (\citealt{bai2013}), that cosmic rays can ionize the upper atmosphere and the upper part of the cloud (\citealt{rim2013}), and that hydrodynamic winds can provide a source of gas ionization (\citealt{stark2013}). Figure~\ref{StratIon} shows that these ionization processes (boxes in the figure) appear with different efficiencies in different parts of the atmosphere, suggesting a brown dwarf atmosphere to be a stratified ionized medium rather than a cold, neutral gas. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{Figures/StratIonAtmos_fe_2.pdf} \caption{A brown dwarf atmosphere is not only stratified by its cloud structure (2D colour overlay). Different non-thermal ionization processes occur in brown dwarf atmospheres (boxes: cosmic rays, dust-dust collisions, strong winds) which produce free charges throughout the atmosphere with varying efficiency. The local degree of thermal ionization (brown solid and dashed lines) is shown in the background for illustration; a 2D simulation of turbulent dust formation indicates where the cloud is located in the atmosphere. (The 2D colour-coded plot contains contour lines of the local vorticity; see Fig. 8 in \cite{hell2004}.) } \label{StratIon} \end{figure} \bibliographystyle{spbasic} \section{Brown dwarf observations in different spectral energy ranges}\label{s:obs} In this section of the review we aim to bring the reader up to date with the observations of brown dwarfs. We describe the recent results focusing on cloud effects in low gravity objects, and their comparison with young extrasolar planets, as well as observations of low metallicity objects. We also discuss the recent reports of photometric and spectroscopic variability of brown dwarfs, which is linked to patchy clouds, temperature fluctuations within the atmosphere and weather effects, as well as high energy phenomena that are linked to emissions seen in the radio, X-ray and UV wavelength regimes. \subsection{Optical and IR spectral types}\label{ssOIR} Although the first brown dwarf to be discovered was the L4 dwarf GD165B \citep{becklin88}, it was not identified as such until the discovery of Gl229B \citep{nakajima95} and Teide 1 \citep{rebolo95} in 1995. Since then, astronomers have searched for a way of classifying brown dwarfs. These objects are very different from the M dwarfs known at the time, and the contrast between the dusty L dwarfs and the methane-rich T dwarfs is stark. Both these spectral types are discussed extensively in the literature and are not described here in great detail (see \citealt{burrows01, lodders06} for an overview).
The L dwarfs are similar to M dwarfs in photospheric chemical composition, containing alkali lines (K, Na), metal hydrides (FeH), metal oxides (TiO, VO) and water. As we progress through the spectral types from L0 to L9, the TiO and VO bands weaken, the alkali lines become weaker and more pressure-broadened, and the water bands and FeH strengthen in the optical. In the near-IR, CO strengthens towards the mid-L dwarfs, and then weakens again as methane begins to form. The change between the L and T dwarfs, often called the L-T transition region, is characterized by the near-infrared colours of the brown dwarfs changing while the effective temperature of the objects remains the same (Figure \ref{st}; see \citealt{kirkpatrick99} for a review). T dwarfs, sometimes called methane dwarfs, are characterized by the methane absorption seen in the near-IR that gets progressively stronger as one progresses through the subclasses, making the $J-H$ and $H-K$ colours bluer. In the optical, the spectrum is affected by collisionally induced molecular hydrogen absorption and FeH. The year 2011 marked the discovery of an additional, later spectral type, the Y dwarf. There are $\sim$20 Y dwarfs known to date \citep{cushing11, luhman11, kirkpatrick12, kirkpatrick13, cushing14}. The majority of these have spectral types ranging between Y0 and Y2 and were discovered using the \textit{Wide-field Infrared Survey Explorer} (WISE). WISE was designed to discover Y dwarfs: its shortest wavelength band at 3.4$\mu$m was selected to fall in the centre of the fundamental methane absorption band at 3.3$\mu$m, and the W2 filter at 4.6$\mu$m detects radiation from the deeper, hotter layers in the atmosphere. When combined, the W1$-$W2 colour is very large, allowing the detection of Y dwarfs \citep{kirkpatrick11}. All the Y dwarfs show deep H$_{2}$O and CH$_{4}$ absorption bands in their near-infrared spectra, similar to late-T dwarfs (Figure \ref{st}). Water clouds in these cool atmospheres were modelled in detail by \citet{morley14a}. The $J$ band peaks of the Y dwarfs are narrower than those of the latest type T dwarfs, and the ratio of the $J$ and $H$ band flux is close to 1, meaning that the $J-H$ trend towards the blue for T dwarfs turns back towards the red for Y0. This trend also occurs for the $Y-J$ colour. This colour reversal is thought to be caused by alkali atoms, which normally dominate the absorption at the shorter wavelengths, being bound in molecules, thus reducing the alkali atom opacity \citep{liu10}. While the $H$ band spectra of T dwarfs are shaped by CH$_{4}$ and H$_{2}$O, for Y dwarfs NH$_{3}$ absorption becomes important as the effective temperature decreases \citep{lodders02, burrows03}. \citet{cushing11} estimate T$_{\rm eff}=350\,\ldots\,500$K for the Y0 dwarfs, with their masses between $\sim$5 and 20 M$_{\rm Jup}$. However, using a luminosity measurement derived from a {\sc Spitzer}-based parallax, \citet{dupuy13} estimate their effective temperatures to be typically 60-90 K hotter. This difference in temperature is possibly caused by using near-IR spectra, a regime where only 5\% of the Y dwarf flux is emitted, and so models do not always accurately reproduce the observations \citep{dupuy13}. Using models containing sulphide and chloride clouds from \cite{morley12}, \citet{beichman14} obtain the lower effective temperatures, suggesting that previous results rely on a model-dependent bolometric correction.
There is one other Y dwarf not discovered by WISE: WD0806-661B, which was, until recently, the most likely candidate for the lowest mass and temperature Y dwarf at 6-10 M$_{\rm Jup}$ and 330-375 K, although there are as yet no spectra of this object. This has since been superseded by the discovery of WISEJ085510.83-071442.5, a high proper motion Y dwarf at 2 pc. This object is our fourth nearest neighbour and has an estimated effective temperature of 225-260 K and a mass of 3-10 M$_{\rm Jup}$ \citep{luhman14}. \begin{landscape} \begin{figure} \begin{center} {\ }\\*[-1cm] \hspace*{-0cm} \includegraphics[width=0.83\textwidth]{Figures/M_colour.pdf} \hspace*{-1.3cm} \includegraphics[width=0.83\textwidth]{Figures/Y_colour.pdf}\\*[-0.8cm] \caption{\label{st} The near-infrared spectral sequence from early-M dwarfs to early-Y brown dwarfs. All spectra are normalised to 1 at their peak and then a flux offset is applied \citep{kirk13}. } \end{center} \end{figure} \end{landscape} \subsubsection{Brown dwarf classification} Brown dwarf spectral types do not fit into the standard Morgan-Keenan stellar spectral type system because their age-mass-temperature degeneracy causes the classical stellar mass-temperature relationship to break down for such ultra-cool objects. An object of a specific spectral type (or effective temperature) may be higher mass and old, or lower mass and young. For instance, an L5 dwarf (T$_{\rm eff}$$\sim$ 1500 K) in the Pleiades cluster (125 Myr) has a mass of 25 M$_{\rm Jup}$, but a field dwarf of the same spectral type is much more massive at 70 M$_{\rm Jup}$ \citep{chabrier00}. There are in general three methods of identifying a brown dwarf's spectral type: the first is by comparison with spectral templates, as is usual for stars that fit the Morgan-Keenan classification scheme; the second is by using indices derived from spectral parameters; and the third is by comparing broadband photometry to spectral standards. The first two methods are described for the L dwarf classification in the optical wavelength range by \citet{kirkpatrick99} and \citet{martin99}, respectively. The most commonly used method is to compare spectra of objects to ``standard'' or ``template'' spectra and to use spectral indices as a secondary calibration tool, for instance to judge metallicity or gravity (see Sect.~\ref{grav} for the gravity classification scheme). The template scheme was extended into the near-IR by \citet{reid01}, and the indices by \citet{geballe02}. This index scheme is now more widely used than that of \citet{martin99} and extends down to T9. \citet{burgasser06a} combined the index scheme of \citet{geballe02} with their templates \citep{burgasser02} to create a unified way of spectral typing T dwarfs. When using these methods to classify L and T dwarfs, it should be noted that the assigned spectral types are limited to only the parts of the spectrum that are measured. For objects on the L-T transition, it is not unusual to have optical and near-IR derived spectral types that differ by $>$1 spectral type. \subsubsection{Low metallicity brown dwarfs} Low metallicity brown dwarfs (subdwarfs) provide an insight into the coolest, oldest brown dwarfs. The existence of low-metallicity brown dwarfs indicates that such low-mass objects also formed in a younger universe when the metallicity was lower than today. It is further of interest to compare their atmospheres to those of low-metallicity extrasolar planets which formed as a by-product of star formation.
Only a handful of ultracool subdwarfs are known to date \citep{burgasser03, burgasser04, cushing09, sivarani09, lodieu10, kirkpatrick10, lodieu12, mace13, kirkpatrick14, burningham14}. Of the $\sim$30 objects known to date, only 11 have spectral types later than L2 \citep{kirkpatrick14, burningham14, mace13}. The naming scheme for subdwarfs follows that for M dwarfs developed by \citet{gizis97} and upgraded by \citet{lepine}, moving from dM for metal-rich M dwarfs to subdwarfs (sdM), extreme subdwarfs (esdM) and ultra subdwarfs (usdM) in order of decreasing metallicity. \citet{burgasser08} noted that L subdwarfs are overluminous in $M_{J}$, but slightly underluminous in $M_{K}$. This change is suggested to be caused by a reduced cloud opacity causing strong TiO, FeH, Ca $\textsc{I}$ and Ti $\textsc{I}$ features, and enhanced collision-induced H$_{2}$ opacity in the $K$ band, as predicted by \citet{ackerman01, tsuji96}. These sources of opacity will be discussed in more detail in Section \ref{s:theo}. \cite{kirkpatrick14} suggests that an L subdwarf ``gap'' exists between the early L and late L subdwarfs. This could be explained if lower metallicity brown dwarfs were generally the older objects. For older brown dwarfs, an effective temperature gap is observed to occur between the hotter, deuterium-burning subdwarfs (blue symbols in Fig.~\ref{sd}) and the coolest, lowest mass members of similarly aged hydrogen-burning stars (M-dwarfs, red symbols in Fig.~\ref{sd}). This temperature gap is predicted to increase for older populations \citep{kirkpatrick14}. \begin{figure} \begin{center} \scalebox{0.25}{\includegraphics[angle=0]{Figures/fig29_sd_gap.pdf}} \caption{\label{sd} The effective temperature gap between brown dwarfs and M-dwarfs increases with lower metallicity (lower J-K$_{\rm s}$). Known late-M (SpecT $<$ sdL5 -- blue circles) and L subdwarfs (SpecT $>$ sdL5 -- red circles) and AllWISE proper motion stars (solid black dots) are shown (\citealt{kirkpatrick14}). \cite{kirkpatrick14} suggest that the wedge (green zone) covers an area in the diagram where L subdwarfs may rarely be found. } \end{center} \end{figure} Recently, a population of T subdwarfs has emerged \citep{burningham14, mace13}. T dwarf colours are very sensitive to small changes in metallicity: a shift of 0.3 dex can change the $H-$[4.5] colours of T8 dwarfs as much as a 100 K change in effective temperature \citep{burningham13}. The T subdwarfs exhibit enhanced $Y$ and depressed $K$ band flux, indicative of a high-gravity, and hence older, low-metallicity atmosphere \citep{burgasser02}. The increased $Y$ band flux is caused by the lower metallicity reducing the opacity in the wings of the alkali lines, resulting in a brighter and broader peak flux \citep{burgasser06}. The $K$ band flux depression is created as pressure-enhanced collision-induced absorption by molecular hydrogen becomes more important and molecular features are removed from the spectra \citep{saumon94}. \subsubsection{The surface gravity of brown dwarfs}\label{grav} Young, low gravity brown dwarfs have very similar properties to directly-imaged exo\-planets \citep{fah2013}, and it has been suggested that younger brown dwarfs (log($g$)$\approx 3$) may have thicker clouds in their atmospheres than those present in older objects (log($g$)$\approx 5$) of the same effective temperature \citep{barman11, currie11, madhusudhan11}.
\citet{hell2011} demonstrate that the geometric cloud extension\footnote{The geometrical cloud extension, or cloud height, can be defined in various ways. \cite{woi2004} used the degree of condensation for Ti (their Eq. 16) to define the cloud height. In \cite{hell2011}, the cloud height is determined by the distance between the gas pressure at the nucleation maximum and the gas pressure where all cloud particles have evaporated.} increases with decreasing surface gravity log($g$) in cloud-forming atmospheres, an effect likened to an increasing pressure scale height H$_{\rm P}\sim$1/$g$, and \citet{marley2012} suggest that these clouds persist for longer, at higher temperatures, than in older objects. Many brown dwarfs have been identified in young open star clusters (e.g. Sigma Ori: \citealt{bihain09, pena12}; Serpens: \citealt{spezzi12}; Pleiades: \citealt{casewell11, casewell07}; see \citealt{luhman12} for a review). However, there also exists a field population of low gravity, young brown dwarfs (e.g. \citealt{reid08, cruz09, kirkpatrick10}). A comprehensive scheme for defining the gravity of these objects was devised by \citet{kirkpatrick08} and \citet{cruz09} in the optical, and \citet{allers13a, allers13b} in the near-infrared. The classification introduces a suffix of $\alpha$, $\beta$, or $\gamma$ to the spectral type, indicating the gravity: $\alpha$ implies a normal gravity field dwarf, whereas $\beta$ is an intermediate-gravity object and $\gamma$ represents a low-gravity object. These suffixes can also be used as a proxy for age. In the optical, the suffixes are assigned based on measurements of the Na $\textsc{I}$ and K $\textsc{I}$ doublets, which are weaker and sharper in a low gravity object, the VO absorption bands, which are stronger, and the FeH absorption features, which are weaker. We note that weak hydride features are typical of low metallicity stars of higher mass, and hence higher temperature; in brown dwarfs, the abundance of such hydrogen-bearing molecules decreases because cloud formation removes the metal components available. In the near-infrared, the VO and FeH bands are considered simultaneously with the alkali lines to derive log($g$). The changing shape of the $H$-band is also taken into consideration: it becomes more triangular, caused by increasing water absorption, which is a sign of low gravity \citep{lucas01, rice11}. All of these features are altered because there is less pressure broadening in an object of low surface gravity, for which the pressure scale height is increased \citep{rice11, allers13a}. In general, alkali absorption features are weaker, and the overall colours of lower gravity objects are redder than those of their higher gravity counterparts \citep{faherty13}. The redder colour is due, in part, to the changes in the near-infrared broadband features, but also to more photospheric dust \citep{hell2011}. An additional feature is that while young M dwarfs (log($g$)$\approx 3$) are brighter than their older (log($g$)$\approx 4.5$) counterparts, young L dwarfs (log($g$)$\approx 3$) are underluminous in the near-infrared for their spectral type. This may be due to the additional dust in their photosphere \citep{faherty13a} or to a cooler spectral type/temperature relation being required. There are a handful of directly-imaged planetary mass companions that have estimated effective temperatures in the brown dwarf regime (e.g.
\citealt{bonnefoy13}), for example 2M1207b ($\sim$10 Myr, $\sim$1600 K, SpecT$\sim$mid L) and HR8799b ($\sim$30 Myr, $\sim$1600 K, SpecT$\sim$early T). These planets are underluminous and have unusually red near-infrared colours when compared to field brown dwarfs, as well as displaying the characteristic peaked $H$-band spectra. There are $\sim$30 brown dwarfs that have been kinematically linked to moving groups and associations with ages between 10 and 150 Myr that share these features, indicating that low gravity brown dwarfs may provide a clue to the atmospheric processes occurring on young exoplanets. \subsection{High energy processes in non-accreting brown dwarfs} Although brown dwarfs are brightest in the near-IR, this has not prevented searches for other types of emission, particularly those associated with the higher energy processes seen in early M dwarfs. In solar-type stars, the magnetic dynamo mechanism is used to explain magnetic field generation, and there is a direct correlation between rotation and magnetic activity as indicated by H$\alpha$, X-rays and radio emission \citep{noyes84, stewart88, james00, delfosse98, pizzolato03, browning10, reiners09, morin10}. The relationship between the radio (L$_{v,R}$) and X-ray (L$_{X}$) luminosities, L$_{X} \propto$ L$_{v,R}^{\alpha}$ with $\alpha$$\sim$0.73, holds for active F-M stars and is known as the G{\"u}del-Benz relation (Figure \ref{gb}; \citealt{gudel93, benz94}). As the dynamo operates at the transition layer between the radiative and convective zones (the tachocline), this mechanism cannot explain radio activity in fully convective dwarfs ($>$M3). Although H$\alpha$ and X-ray activity continues into the late M dwarf regime, the tight correlation between X-ray and radio luminosity breaks down, which suggests that a separate mechanism is likely to be responsible for the radio emission. \begin{figure} \begin{center} \scalebox{0.82}{\includegraphics[angle=0]{Figures/lxlr.pdf}} \caption{\label{gb} The G{\"u}del-Benz relationship between $L_{X}$ (0.2-2 keV) and $L_{v,R}$ \citep{williams13c}. Limits are shown as downward pointing triangles. Objects with spectral types of M6 or earlier (green), M6.5-M9.5 dwarfs (red) and spectral types of L0 or later (blue) are shown. Grey circles show the original G{\"u}del-Benz relation from \citet{benz94}. Dashed lines connect multiple measurements of the same source.} \end{center} \end{figure} \subsubsection{X-ray and UV observations}\label{X-ray} Many searches for X-ray emission in L dwarfs have been conducted \citep{stelzer03, berger05, stelzer06}, but only one detection is known to date. The L dwarf binary Kelu-1, composed of two old brown dwarfs, was detected with $Chandra$, with photons recorded at energies of 0.63, 0.86, 1.19 and 1.38 keV, resulting in an estimated 0.1-10 keV X-ray luminosity of $L_{X}=2.9\times10^{25}$ erg s$^{-1}$ \citep{audard07}. It has been suggested that this emission does not originate from flares, as there was no concurrent radio detection at a frequency of $\sim$8 GHz \citep{audard07}. \cite{audard07} suggested that the ratio of radio luminosity to bolometric luminosity (L/L$_{\rm bol}$) increases with decreasing effective temperature, whereas the corresponding ratios for H$\alpha$ and X-rays decrease. They concluded that the chromospheric magnetic activity (H$\alpha$ emission) and the activity in the hot coronal loops (X-ray emission) decrease with effective temperature, indicating that a different mechanism is responsible for the radio emission in ultracool dwarfs.
\citet{williams13c} suggest that ultracool dwarfs with strong axisymmetric magnetic fields tend to have $L_{v,R}/L_{X}$ consistent with the G\"{u}del-Benz relation (Fig.~\ref{gb}), while dwarfs with weak non-axisymmetric fields are radio luminous. More slowly rotating dwarfs have strong convective field dynamos and so also stay near the G\"{u}del-Benz relation, whereas some rapid rotators may violate it. \citet{williams13c} also note that dwarfs with weaker magnetic fields tend to have later spectral types and lower X-ray luminosities, which may be related to their cooler temperatures. They further note that, in general, radio-bright sources tend to be X-ray underluminous compared to radio-dim brown dwarf sources. There are as yet no UV detections of brown dwarfs where the emission is attributed to atmospheric processes. Only brown dwarfs with disks, such as the young TW Hydra member 2MASS1207334-393254, have been observed to show UV emission; H$_{2}$ fluorescence is detected and attributed to accretion \citep{gizis05}. \subsubsection{Optical H$\alpha$ emission}\label{ss:halfa} The H$\alpha$ luminosity (indicative of chromospheric activity) in late M and L dwarfs also decreases with lower mass and later spectral type. \citet{schmidt07} estimate that, of their sample of 152 objects, 95\% of M7 dwarfs show H$\alpha$ emission (consistent with \citealt{gizis00}). This fraction decreases with spectral type, with only 8\% of L2-L8 dwarfs showing H$\alpha$. For the L dwarfs in particular, the active fraction declines from 50\% at L0 to 20-40\% at L1, and to only 10\% at L2 and later spectral types. This decline is similar to the breakdown in the rotation-activity relationship seen for the X-ray activity (Section \ref{X-ray}) and has been attributed to the high electrical resistivities in the cool, neutral atmospheres of these dwarfs \citep{mohanty02}. \citet{sorahana} have recently suggested that molecules may be affected by chromospheric activity. The active chromosphere heats the upper atmosphere, causing the chemistry in that region to change and resulting in the weakening of the 2.7 $\mu$m water, 3.3 $\mu$m methane and 4.6 $\mu$m CO absorption bands, as seen in $\textit{AKARI}$ spectra of mid-L dwarfs. Despite the overall decline in H$\alpha$ emission, some objects of late spectral type do show H$\alpha$ emission, for example the L5 dwarfs 2MASSJ01443536-0716142 and 2MASS1315-26. In quiescence, these objects show H$\alpha$ fluxes similar to those of other dwarfs of the same spectral type. However, in outburst, 2MASS0144-0716 has shown an H$\alpha$ flux more than 10 times higher than the mean \citep{liebert03}, and for 2MASS1315-26 the H$\alpha$ emission is $\sim$100 times stronger than for L dwarfs of a similar spectral type \citep{hall02}.
These pulses are caused by the cyclotron maser instability (CMI), the emission mechanism that operates on Jupiter \citep{t06, nbc+12, mmk+12}. CMI emission requires a relatively tenuous population of energetic particles confined to a relatively strong magnetic field; in particular, the cyclotron frequency, $f_\text{ce}=e B / 2\pi m_e c$, must be much greater than the plasma frequency, $f_\text{pe}=(1/2\pi)\sqrt{4 \pi e^2 n_e/ m_e}$. Whenever detailed observations are available, the free energy in the plasma is seen to be provided by electrons moving along the magnetic field lines, which can originate in magnetospheric-ionospheric (M-I) shearing and possibly plasma instabilities. These observations suggest that BDs can self-generate stable, $\sim$kG-strength magnetic fields \citep{b06b}. The underlying assumption is, however, that enough free charges are present to form a plasma in these extremely cold atmospheres ($<$2000 K). Although this mechanism for emission is quite well characterised, and can account for the polarised flaring behaviour, two of these dwarfs also produce quiescent, moderately polarised emission. This indicates that a second mechanism, such as synchrotron or gyrosynchrotron emission \citep{berger02, osten06, ravi11}, may be operating, or that the CMI emission is becoming depolarised in some way as it crosses the dwarf's magnetosphere \citep{hallinan08}. It has been suggested that some of the variability seen in these sources may be due to variation in the local plasma conditions \citep{stark2013}, perhaps linked to magnetic reconnection events \citep{schmidt07} or to other sporadic charged events in the plasma \citep{hell2011, hell2011b, hell2013, bai2013}. It is still unknown what features distinguish radio "active" from radio "inactive" dwarfs. The relationship may depend on mass, effective temperature, activity, magnetic field strength and rotation rate. All the known radio "active" dwarfs have a high $v \sin i$ value, indicating short rotation periods ($\sim$3 hr) \citep{antonova13, williams13a}. This may indicate a link between rotation rate and emission, but could also indicate a dependence on the inclination angle, $i$, rather than the velocity, making the detection of radio emission dependent on the line of sight to the beamed radiation \citep{hallinan08}.
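As an illustrative consistency check (using an assumed, not measured, field of $B \approx 1.5$ kG), the standard numerical forms of the two frequencies show why kG-strength fields place CMI emission in the observed 4-8 GHz band:
\begin{displaymath}
f_\text{ce} \simeq 2.8\,\mathrm{MHz}\left(\frac{B}{1\,\mathrm{G}}\right) \approx 4.2\,\mathrm{GHz}, \qquad
f_\text{pe} \simeq 8.98\,\mathrm{kHz}\left(\frac{n_e}{1\,\mathrm{cm^{-3}}}\right)^{1/2},
\end{displaymath}
so the CMI condition $f_\text{ce} \gg f_\text{pe}$ then requires an electron density $n_e \ll (f_\text{ce}/8.98\,\mathrm{kHz})^2\,\mathrm{cm^{-3}} \approx 2\times10^{11}\,\mathrm{cm^{-3}}$, i.e. a comparatively tenuous magnetospheric plasma.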
\subsection{Observed variability in Brown Dwarfs}\label{ss:obsvariab} The number of campaigns searching for spectro-photometric variability has increased since such variability was first suggested by \citet{tinney99}. Variability would indicate non-uniform cloud cover (e.g. \citealt{ackerman01}) and weather-like features, such as those seen on Jupiter. The majority of the early searches concentrated on L dwarfs, suggesting that variability could be due to holes in the clouds \citep{gelino02, ackerman01}. Surveys suggest that between 40--70\% of L dwarfs are variable \citep{bailer01,gelino02,clarke02}, although most of these surveys involve small numbers of objects and the authors vary on what is considered a detection. The majority of these studies were performed in the $I$ band, where the amplitude of variability is at the 1--2\% level \citep{clarke02, clarke08} on timescales of tens of minutes to weeks, and is in general not periodic. There has now been a shift towards using the near-IR for variability studies \citep{enoch03,clarke08, khandrika13, girardin13, buenzli14, radigan14, wilson14}, where the frequency of variability is estimated to be $\sim$10--40\%. So far, however, high amplitude periodic variability ($>3$\%) has been limited to the L-T transition objects, while lower amplitude variability is detected in both early L and late T dwarfs. \citet{heinze13} reported sinusoidal variability of the L3 dwarf DENIS-P J1058.7-1548 in the $J$ and [3.6] micron bands, but no variability in the [4.5] micron band. They suggested that the variability may again be due to inhomogeneous cloud cover, where the thickest clouds have the lowest effective temperature, but there may also be an effect related to magnetic activity (starspots, chromospheric activity or aurorae), suggested by a weak H$\alpha$ detection. This result is similar to the findings on the radio loud M8.5 dwarf TVLM513-46546 \citep{littlefair08}. TVLM513 shows $i'$ and $g'$ band variability in antiphase, initially suggesting that the variability is due to patchy dust clouds coupled with the object's fast rotation. However, more recent results show that the $g'$ and $i'$ bands are no longer correlated: the $g'$ band variability has remained stable, but the $i'$ band lightcurve has evolved. The optical continuum variability is, however, in phase with the H$\alpha$ flux, again suggesting that some magnetic processes are also occurring \citep{metchev13}. The first T dwarf to be confirmed as variable was the T2.5 dwarf SIMP J013656.5+093347 \citep{artigau09}, which was determined to be variable at the 50 mmag level in the $J$ band, with a rotation period of 2.4 hours determined from the lightcurves. More interestingly, the lightcurve evolves from night to night (Fig. 2 in \citealt{artigau09}), perhaps due to evolving features such as storms. \citet{radigan12} studied a similar object, the T1.5 dwarf 2MASS2139, and again found evidence of an evolving light curve in the $J$ and $K_s$ wavebands, which after extensive analysis they attributed to heterogeneous clouds with regions of higher opacity. In general, these objects exhibit variable lightcurves, modulated as the object rotates, which evolve on timescales of hours \citep{apai13}, days (e.g. \citealt{artigau09, radigan12}) or even years \citep{metchev13}. While the majority of studied T dwarfs lie at the L-T dwarf transition region, some late T dwarfs ($>$T5) have also been determined to be variable \citep{clarke08}. Initially, these atmospheres were thought to be relatively cloud-free; however, recent work by \citet{morley12} suggests that sulphide clouds may exist there (see Sect.~\ref{ss:diffclmo}). \citet{buenzli12} studied one such object and determined a phase offset between different wavelengths, while sinusoidal variability was present on the rotation period of the object. This phase shift is directly linked to the pressure (and hence cloud constituents) probed at each of the observed wavelengths \citep{marley10, morley14b}. The lower pressure (higher altitude) regions show the largest phase lag with respect to the highest pressure (lowest altitude) layers, and the lag may be as large as half a rotation period. The authors attribute this lag to a change in the opacities (gas or cloud) without a change in the temperature profile, a change in the temperature-pressure profile without a change in the opacity, or a combination of the two resulting in a "stacked-cell" atmosphere as seen in Saturn (e.g. \citealt{fletcher11}). \citet{robinson14} interpret the phase lag in the frame of their 1D model as being due to thermal fluctuations.
However, their model is unable to reproduce the variability on the $\sim$hour timescales seen in the observations, and they suggest that a 3D model will be required to explore the dynamics fully. An alternative to ground based observations is to move into space, minimising differential refraction and atmospheric effects. One of the largest space based variability surveys to date was performed by \citet{buenzli14}, who studied 22 brown dwarfs ranging from L5 to T6 with $HST$. Six of these objects were determined to be variable, with another five marked as potentially variable. This survey was not sensitive to objects with long periods, but it still suggests that the majority of brown dwarfs have patchy atmospheres, and that there is no spectral type dependence in the fraction of dwarfs that are variable. Perhaps one of the best studied variable objects is Luhman 16B. Luhman 16AB, an L7.5 and T0.5 dwarf binary, is the third closest system to the Sun at only 2 pc \citep{luhman13}. \citet{gillon13} reported variability on a 4.87 hour period with strong night to night evolution, which was attributed to a fast evolving atmosphere on the cooler T0.5 dwarf. The L7.5 dwarf was not found to be variable. \citet{burgasser14} performed spectral monitoring of the system, modelling the T dwarf using a two-spot model and inferring a cold covering fraction of $\sim$30-55\%, varying by 15-30\% over a rotation period. This resulted in a difference of $\sim$200-400~K between the hot and cold regions. \citet{burgasser14} interpreted the variations in temperature as changes in the covering fraction of a high cloud deck, resulting in cloud holes which expose the deeper, hotter cloud layers. They also suggested that the rapidly evolving atmosphere may produce winds as high as 1-3 km s$^{-1}$, which is consistent with an advection timescale of 1-3 rotation periods. A new analysis of this system was produced by \citet{crossfield14}, who used Doppler imaging techniques to produce a global surface map of the T dwarf Luhman 16B, sensitive to a combination of CO equivalent width and surface brightness. The map shows a large, dark mid-latitude region, a brighter area on the opposite hemisphere located near the pole, and mottling at equatorial latitudes (Fig. 2 in \citealt{crossfield14}). The authors interpreted the map in one of two ways: either the darker areas represent thicker clouds, obscuring the hotter, inner regions of the atmosphere, and the bright regions correspond to cloud holes providing a view of this warmer interior, or the map shows a combination of surface brightness and chemical abundance variations. They predict that the high latitude bright spot could be similar to the polar vortices seen in solar system giant planets, in which case it should be seen in future mapping of this object. Another class of brown dwarfs that shows photometric variability is those in close (periods $<$10 hr) detached binary systems with white dwarfs (WD0137-349, WD+L6-L8, P=116 min: \citealt{maxted06, burleigh06}; GD1400, WD+L6, P=9.98 hr: \citealt{farihi, dobbie, burleigh11}; WD0837+185, WD+T8, P=4.2 hr: \citealt{casewell12}; NLTT5306, WD+L4-L7, P=101.88 min: \citealt{steele13}; CSS21055, WD+L, P=121.73 min: \citealt{beuermann13}). These systems are likely tidally locked, and as a result one side of the brown dwarf is continually heated by its much hotter white dwarf companion. Three of the five known systems show variability at optical wavelengths at the 1\% level.
One of these objects, WD0137-349, has been studied extensively in the near-infrared \citep{casewell13}. The white dwarf in this system is 10 times hotter than the brown dwarf, resulting in a difference in brightness temperature of $\sim$500 K between the day and night sides, likely causing vigorous motion and circulation in the atmosphere (e.g. \citealt{showman13}). While the substellar objects in these systems are brown dwarfs, and not extrasolar planets, their atmospheres behave in a similar way, both absorbing the (mainly) ultraviolet emission from their host and reflecting the incident light. There are brown dwarfs known in close orbits with main sequence stars that are also irradiated (e.g. WASP-30b: \citealt{anderson11}; Kelt-1b: \citealt{siverd12}), but as their host stars are much more luminous, and the brown dwarf atmosphere scale height is too small to allow transmission spectroscopy, these systems are much more challenging to observe.
\section{Abstract} In this paper, we detail the improvement of the Cascadic Multigrid algorithm with the addition of the Gauss-Seidel algorithm in order to compute the Fiedler vector of a graph Laplacian, which is the eigenvector corresponding to the second smallest eigenvalue. This vector has been found to have applications in graph partitioning, particularly in the spectral clustering algorithm. The algorithm is algebraic and employs heavy edge coarsening, which was developed for the first cascadic multigrid algorithm. We present numerical tests of the algorithm on a variety of matrices of different sizes and properties. We then run the algorithm on a range of square matrices with uniform properties in order to demonstrate its linear complexity. \section{Introduction} The Fiedler vector has seen numerous applications within computational mathematics, primarily within the fields of graph partitioning and graph drawing [1]. In particular, we require eigenvalues and eigenvectors for a successful run of the spectral clustering algorithm that partitions a network into clusters [2]. Although many languages have built-in eigenvalue methods, the spectral clustering method requires a specialized eigenvalue algorithm to account for massive network sizes. In fact, spectral clustering becomes infeasible for networks of size over 1000, as computing the eigenvalues through matrix inversion becomes inefficient. Therefore, we require specialized multigrid algorithms to find the eigenvectors and eigenvalues in less than $O(N^3)$ time. The Cascadic Multigrid Algorithm is an effective method for computing the second smallest eigenvalue and its eigenvector, where the eigenvector is called the Fiedler vector. The main methods for calculating eigenvalues and eigenvectors include the Lanczos method and the power method. However, these methods become infeasible for large matrices ($|V| > 1000$), and many networks have over $1000$ nodes, which corresponds to a matrix of dimension greater than $1000$. For this reason, we require the cascadic multigrid algorithm, as it solves the eigenvalue problem on coarser levels and projects the solution upwards until it reaches the original matrix [3]. When calculating the eigenvalues and eigenvectors of symmetric positive definite matrices more generally, the Jacobi and PCG methods provide good approximations. These can be extended to calculating the Fiedler vector and its eigenvalue [1]. In this paper, we improve upon the previously developed Cascadic Multigrid Algorithm by introducing a Gauss-Seidel smoother on each level. We employ the previously established heavy edge coarsening, which merges vertices across the edge with the heaviest weight. The refinement procedure, which previously used power iteration on a modified matrix, now uses Gauss-Seidel smoothing. This method does not require the inversion of matrices, unlike Rayleigh-quotient iteration, making it a more efficient method. As always, the eigenvectors calculated on coarse levels are projected to finer levels with interpolation matrices. The eigenvector calculated at each level and projected upwards serves as the initial guess for the Gauss-Seidel iteration on the next finer level. Finally, on the finest level, we calculate the Rayleigh quotient to obtain the eigenvalue.
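To fix ideas before the algorithm itself, the following is a minimal sketch (in Python with NumPy, using a dense eigensolver rather than the multigrid method of this paper) of what the Fiedler vector delivers for partitioning: the sign pattern of the eigenvector for the second smallest eigenvalue splits a graph across its weakest cut. The toy graph is an illustrative assumption.

\begin{verbatim}
import numpy as np

def graph_laplacian(W):
    """Laplacian L = D - W of a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

# Toy graph: two 3-cliques joined by a single weak edge
W = np.zeros((6, 6))
for block in [(0, 1, 2), (3, 4, 5)]:
    for i in block:
        for j in block:
            if i != j:
                W[i, j] = 1.0
W[2, 3] = W[3, 2] = 0.1   # weak bridge between the cliques

L = graph_laplacian(W)
vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = vecs[:, 1]             # eigenvector of the second smallest eigenvalue
print("algebraic connectivity:", vals[1])
print("sign cut:", fiedler > 0)  # recovers the two cliques
\end{verbatim}

The dense eigensolve above costs $O(N^3)$, which is exactly the cost the cascadic method avoids for large $N$.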
The paper is organized to provide a logical introduction to the algorithm. In section 3, we provide definitions and background knowledge required to understand multigrid methods. We also introduce heavy edge coarsening, power iteration, and the Gauss-Seidel method. This culminates in a presentation of the algorithm developed. In section 4, we present numerical tests of the algorithm. We compare the algorithm to the previous cascadic multigrid algorithm and various other multigrid algorithms designed to calculate the Fiedler vector. We also compare spectral clustering using the built-in eigenvalue function in MATLAB to spectral clustering employing our algorithm, to demonstrate its efficiency. In the final section, we conclude the paper and discuss future improvements to multigrid algorithms that calculate Fiedler vectors. \section{Modified Cascadic MG Method for Computing the Fiedler Vector} First we formally introduce the concepts of the graph Laplacian and the Fiedler vector. A weighted graph $G = (V,E,w)$ is undirected if the edges are unoriented. \textbf{Definition 2.1} Let $G = (V,E,w)$ be a weighted graph. The Laplacian of $G$, $L(G) \in \mathbb{R}^{n \times n}$, shortened to $L$, where $n = |V|$, is defined as follows: \begin{displaymath} L(G)_{(i,j)} = \left\{ \begin{array}{ll} d_{v_i} & \mbox{if } i = j\\ -w_{(i,j)} & \mbox{if } i \neq j\\ \end{array} \right. \end{displaymath} where $d_{v_i}$ is the degree of vertex $i$ and $w_{(i,j)}$ is the weight of the edge connecting $v_i$ and $v_j$. This Laplacian is positive semi-definite and diagonally dominant, and the sum of any row or column of $L$ is zero. This makes the smallest eigenvalue 0, with corresponding eigenvector $[1,1,...,1]^T$. We are particularly interested in the second smallest eigenvalue and its eigenvector. \textbf{Definition 2.2} The second smallest eigenvalue of the Laplacian of a graph $G$ is called the algebraic connectivity. This eigenvalue must be greater than or equal to 0. The corresponding eigenvector $\phi_2$ is called the Fiedler vector of $G$. The importance of the Fiedler vector is detailed in [4, 5]. It is important to note that the coarsest graph must be very small in size, at around $|V| < 25$. The eigenvalue problem is solved directly at this coarsest level to obtain an eigenvector. Afterwards, the eigenvector is projected upwards and then smoothed using Gauss-Seidel. We now introduce heavy edge coarsening for our cascadic algorithm. In our algorithm, $L^i \in \mathbb{R}^{n_{i} \times n_{i}}$. Heavy edge coarsening is iterated on the graph Laplacian in order to create multiple levels for solving. This algorithm makes up the setup phase. \begin{algorithm} \caption{Heavy Edge Coarsening} \begin{algorithmic}[1] \Procedure {HEC}{L} \State $c \leftarrow 0$ \Comment{number of coarse aggregates created so far} \State $p \leftarrow randperm(n_i)$ \Comment{visit vertices in random order} \State $q \leftarrow zeros(n_i,1)$ \Comment{aggregate index of each vertex; 0 means unmatched} \For {$k=1 \rightarrow n_i$} \If {$q(p(k)) = 0$} \State $m \leftarrow argmin(L(:,p(k)))$ \Comment{heaviest edge: most negative column entry} \If {$q(m) = 0$} \State $c \leftarrow c+1$ \State $q(m) = c$ \State $q(p(k)) = c$ \Else \State $q(p(k)) = q(m)$ \EndIf \EndIf \EndFor \State $I_{i}^{i+1} \leftarrow zeros(c,n_i)$ \For {$k=1 \rightarrow n_i$} \State $I_{i}^{i+1}(q(k),k) = 1$ \Comment{piecewise-constant interpolation matrix} \EndFor \EndProcedure \end{algorithmic} \end{algorithm} Heavy edge coarsening is further detailed in [3], and several properties of the algorithm are proved there as well.
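For readers who prefer executable notation, the following is a direct, 0-indexed NumPy transliteration of Algorithm 1 (a sketch under the assumption of a dense Laplacian with no isolated vertices; the original implementation may differ in detail).

\begin{verbatim}
import numpy as np

def hec(L, rng=np.random.default_rng(0)):
    """Heavy edge coarsening (Algorithm 1) as a NumPy sketch.

    Returns the (c x n) interpolation matrix P, so the coarse-level
    Laplacian is P @ L @ P.T.  Assumes every vertex has an edge.
    """
    n = L.shape[0]
    q = np.full(n, -1)                   # aggregate index; -1 = unmatched
    c = 0
    for v in rng.permutation(n):         # random visiting order
        if q[v] == -1:
            m = int(np.argmin(L[:, v]))  # heaviest edge: most negative entry
            if q[m] == -1:
                q[m] = q[v] = c
                c += 1
            else:
                q[v] = q[m]
    P = np.zeros((c, n))
    P[q, np.arange(n)] = 1.0             # P[q[k], k] = 1
    return P
\end{verbatim}

Each call reduces the number of vertices (typically by roughly half for matching-based coarsening); repeating while $n_i > 25$ produces the level hierarchy used in the setup phase.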
Next, we formally introduce the Gauss-Seidel method. This method takes a guess vector and iteratively solves a linear system starting from that guess. In our algorithm, we use the vector projected upwards from the coarser level as the guess, just as the projected vector served as the first guess for power iteration in the original method. The inputs $A$ and $b$ are the matrix and right-hand side of the linear system $Ax = b$, and $X0$ is our initial guess for the solution. $N$ denotes the maximum number of iterations allowed, while $tol$ represents the error tolerance. The algorithm outputs an approximate solution to $Ax = b$ within the specified tolerance. \begin{algorithm} \caption{Gauss-Seidel} \begin{algorithmic}[1] \Procedure {G-S}{A, b, X0, tol, N} \State $k \leftarrow 1$ \While {$k \leq N$} \For {$i = 1 \rightarrow n$} \State $x_i = 1/a_{ii}[- \sum\limits_{j = 1}^{i-1} (a_{ij}x_j) - \sum\limits_{j = i+1}^{n}(a_{ij}X0_j) + b_i]$ \Comment{sweep using the newest values} \EndFor \If {$\|x - X0\| < tol$} \State output $[x_1, x_2, ..., x_n]$ \EndIf \For {$j = 1 \rightarrow n$} \State $X0_j = x_j$ \Comment{the new iterate becomes the next guess} \EndFor \State $k = k+1$ \EndWhile \State Output $[x_1, x_2, ..., x_n]$ \EndProcedure \end{algorithmic} \end{algorithm} We discuss two theorems that confirm that the Gauss-Seidel method will converge to a solution in our multigrid algorithm. \textbf{Theorem 2.1}: The Gauss-Seidel method converges if $A$ is symmetric positive definite or if $A$ is strictly or irreducibly diagonally dominant. \textbf{Theorem 2.2}: Let $A$ be a symmetric positive definite matrix. Then the Gauss-Seidel method converges for any arbitrary choice of initial approximation $x$. A proof of these theorems can be found in [6]. Our graph Laplacians on all levels are symmetric positive semi-definite and diagonally dominant; restricted to the subspace orthogonal to the constant null vector, in which the Fiedler vector lies, they act as symmetric positive definite operators, so the Gauss-Seidel method converges on all levels. With our component algorithms defined and sufficiently detailed, we can now outline the procedure for our algorithm. We begin with a setup phase in which heavy edge coarsening sets up the levels on which we do computations. After this, we solve the eigenvalue problem on the coarsest level. We then project our eigenvector upwards, applying Gauss-Seidel on finer and finer levels until we reach the finest level, our original matrix. At this level, we use Gauss-Seidel one last time to yield the Fiedler vector and then calculate the Rayleigh quotient for the algebraic connectivity. We input the finest level graph Laplacian, and the algorithm outputs the Fiedler vector and the corresponding eigenvalue. \begin{algorithm} \caption{Gauss-Seidel Cascadic Multigrid} \begin{algorithmic}[1] \Procedure {Step 1: Setup Phase}{L} \State $i = 0$ \While {$n_i > 25$} \State $I_i^{i+1} \leftarrow HEC(L^i)$ \State $L^{i+1} = I_{i}^{i+1}L^{i}(I_{i}^{i+1})^T$ \State $i = i+1$ \EndWhile \State $J \leftarrow i$ \EndProcedure \Procedure {Step 2: Coarsest Level Solving Phase}{$L^J$} \State $y^{(J)} \leftarrow GS(L^J, rand(n_J))$ \EndProcedure \Procedure{Step 3: Cascadic Refinement Phase}{$y^{(J)}$, $L$} \For {$j = J-1 \rightarrow 0$} \State $y^{(j)} = (I_{j}^{j+1})^{T}y^{(j+1)}$ \State $y^{(j)} \leftarrow GS(L^j, y^{(j)})$ \EndFor \EndProcedure \end{algorithmic} \end{algorithm} Structurally, this algorithm is similar to other multigrid algorithms in that it begins with a setup phase and solves from the coarsest level upwards. It is nearly identical to the Cascadic Multigrid Algorithm, with the sole difference being that Gauss-Seidel replaces power iteration as the smoother.
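The sketch below assembles the three steps in NumPy, reusing the hec routine above. One detail is an interpretation on our part and should be read as an assumption: we realise $GS(L, y)$ as a few Gauss-Seidel sweeps on the singular system $Lx = 0$, with the iterate re-orthogonalised against the constant null vector and normalised after each sweep, which is one standard way to use a relaxation method as an eigenvector smoother.

\begin{verbatim}
import numpy as np

def gs_sweeps(L, x, sweeps=20):
    """Gauss-Seidel relaxation on L x = 0, used as an eigenvector smoother."""
    n = L.shape[0]
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (-(L[i, :i] @ x[:i]) - (L[i, i+1:] @ x[i+1:])) / L[i, i]
        x -= x.mean()               # project out the constant null vector
        x /= np.linalg.norm(x)      # keep the iterate normalised
    return x

def cascadic_fiedler(L, coarsen, min_size=25, rng=np.random.default_rng(0)):
    """Setup, coarsest solve and cascadic refinement (Algorithm 3 sketch)."""
    # Step 1: coarsen until the level is small enough
    levels, Ps = [L], []
    while levels[-1].shape[0] > min_size:
        P = coarsen(levels[-1], rng)
        Ps.append(P)
        levels.append(P @ levels[-1] @ P.T)
    # Step 2: solve on the coarsest level from a random guess
    y = gs_sweeps(levels[-1], rng.standard_normal(levels[-1].shape[0]))
    # Step 3: interpolate upwards and smooth on each finer level
    for P, Lj in zip(reversed(Ps), reversed(levels[:-1])):
        y = gs_sweeps(Lj, P.T @ y)
    return y @ L @ y, y             # Rayleigh quotient (y has unit norm)
\end{verbatim}

On a small Laplacian such as the two-clique example in the introduction, no coarsening occurs and the routine reduces to smoothing alone; for large sparse problems, the dense operations here would be replaced by sparse ones to retain the linear runtime.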
We use the stopping tolerance $(u^k,u^{k-1}) > 1 - 10^{-6}$; that is, we stop once successive normalized iterates are nearly parallel. \begin{tabular}{l||l|l|l} Matrix Name & Matrix Size & Matrix Edges & CGMG runtime (s) \\ \hline barth5 & 15606 & 45878 & 0.371467 \\ \hline bcsstk32 & 44609 & 985046 & 1.242307 \\ \hline bcsstk33 & 8738 & 291583 & 0.381135 \\ \hline brack2 & 62631 & 366559 & 1.307903 \\ \hline copter1 & 17222 & 96921 & 0.42307 \\ \hline ct2010 & 67578 & 168176 & 1.265944 \\ \hline halfb & 224617 & 6081602 & 6.694857 \\ \hline srb1 & 54924 & 1453614 & 1.582835 \\ \hline wing\_nodal & 10937 & 75488 & 0.40845 \end{tabular} Next, we show that the algorithm is $O(N)$. We run the algorithm on uniform square grids of various sizes and show that the runtime increases linearly with the matrix size. The number of nodes and edges grows linearly, so we can expect the runtime of the algorithm to grow linearly as well. Because multigrid algorithms run in linear time, it is important that the Gauss-Seidel smoother does not change this complexity; otherwise the method would be inferior to existing multigrid algorithms. The correlation coefficient $r$ of a linear fit is very close to 1, indicating that the algorithm does in fact have $O(N)$ complexity. \begin{tabular}{l|l} Matrix Nodes & Time (seconds) \\ \hline 106276 & 1.921614 \\ \hline 178929 & 3.220836 \\ \hline 232324 & 4.088426 \\ \hline 276676 & 5.344172 \\ \hline 303601 & 5.684314 \\ \hline 374544 & 7.178143 \\ \hline 425104 & 7.811554 \\ \hline 564001 & 10.565033 \\ \hline 657721 & 11.704087 \\ \hline 705600 & 12.936846 \\ \hline 736164 & 13.768696 \\ \hline 753424 & 13.843865 \\ \hline 762129 & 14.799933 \\ \hline 788544 & 14.613115 \\ \hline 795664 & 15.51262 \\ \hline 799236 & 16.808463 \\ \hline 848241 & 16.922279 \\ \hline 851929 & 16.233831 \\ \hline 915849 & 17.257426 \\ \hline 956484 & 19.349795 \end{tabular} \includegraphics[scale = 0.5] {linearity.png} \section{Conclusion} In this paper, we have presented an improvement on the existing Cascadic Multigrid Algorithm by introducing a Gauss-Seidel smoother in place of a power iteration smoother on each level. The algorithm is effective in calculating the algebraic connectivity and the Fiedler vector and is able to partition graphs quickly. Having shown that the Gauss-Seidel Cascadic Multigrid Algorithm runs in linear time, we can now discuss its benefits and pitfalls. If our initial graph Laplacian is not sparse, then Gauss-Seidel loses its appeal as a smoother, since the cost of each sweep is proportional to the number of nonzero entries. In this case, other multigrid algorithms would be preferable. However, the Gauss-Seidel smoother works well for most Laplacians, as most Laplacians are sparse. Furthermore, we showed that the algorithm is effective in calculating the Fiedler vector of a variety of different graphs. We anticipate future work modifying the smoother further. Future improvements could include replacing Gauss-Seidel with a Lanczos smoother. Krylov subspace methods are costly for calculating the eigenvalues and eigenvectors of large matrices but produce accurate results. Furthermore, future work could include a convergence analysis of the cascadic multigrid algorithm at a more general level, one which takes the Gauss-Seidel method into account. Of particular interest is our algorithm's convergence with respect to elliptic eigenvalue problems. \section{Acknowledgement} The research presented here was undertaken by Shivam Gandhi and directed by Dr. Xiaozhe Hu of the Tufts University Mathematics Department.
\section{Introduction} Our manuscript deals with three prominent topics in algebraic differential equations and their connections to each other, especially interpreted in the context of rational planar vector fields with constant coefficients. \subsection{Model theory} Strong minimality is an important notion emerging from stability theory, and in the context of differential equations, the notion has a concrete interpretation in terms of functional transcendence. The zero set of a differential equation, $X$, with coefficients in a differential field $K$ is \emph{strongly minimal} if and only if (1) the equation is irreducible over $K^{alg}$ and (2) given any solution $f$ of $X$ and any differential field extension $F$ of $K$, $$\text{trdeg}_F F\langle f \rangle = \text{trdeg}_K K \langle f \rangle \text{ or } 0.$$ Here $K \langle f \rangle $ denotes the differential field extension of $K$ generated by $f$. To the non-model theorist, it likely isn't obvious from the definition, but strong minimality has played a central role in the model theoretic approach to algebraic differential equations. Two factors seem to be important in explaining the centrality of the notion. First, once strong minimality of an equation is established, the trichotomy theorem, a model theoretic classification result, along with other model theoretic results can often be employed in powerful ways \cite{HPnfcp, nagloo2014algebraic}. Second, among nonlinear differential equations, the property seems to hold rather ubiquitously; in fact there are theorems to this effect in various settings \cite{devilbiss2021generic, jaoui2019generic}. Even for equations which are not themselves minimal, there is a well-known decomposition technique, \emph{semi-minimal analysis}\footnote{Definitions of model theoretic notions can be found in Section \ref{prelim}.} \cite{moosa2014model}, which often allows for the reduction of questions to the minimal case. Establishing the notion has been the key step in resolving a number of longstanding open conjectures \cite{casale2020ax, nagloo2014algebraic}. Despite these factors, there are few enough equations for which the property has been established that a comprehensive list of such equations appears in \cite{devilbiss2021generic}. In this manuscript, we generalize results of Poizat \cite{poizat1977rangs} and Brestovski \cite{brestovski1989algebraic} by showing that \begin{thmstar} The set of solutions of \begin{equation*} z''={z'}f(z),\;\;\;\;z'\neq 0 \end{equation*} where $f(z) \in \mathbb C(z)$ is strongly minimal \emph{if and only if} $f(z)$ is not the derivative of some $g(z) \in \mathbb C (z)$. \end{thmstar} In addition to giving a complete characterization for this class of equations, our proof gives a new technique for establishing strong minimality which relies on valuation theoretic arguments about the field of Puiseux series. In the strongly minimal case, we give a precise characterization of the algebraic relations between solutions (and their derivatives) of equations in our class (discussed in the third part of this introduction). \sloppy When the equation is not strongly minimal, we show that it must be \emph{nonorthogonal to the constants}. The solution set $X$ is nonorthogonal to the constants if, perhaps over some differential field extension $F$ of $K$, there is a solution $a$ of $X$ such that $F \langle a \rangle $ contains a constant which is not in $F^{alg}$.
Again, to non-model theorists, it likely isn't obvious that this condition should play such a central role as it does. With respect to the semi-minimal analysis of the generic type $p(z)$ of the equation, three possibilities are a priori possible in this case: \begin{enumerate} \item $p(z)$ is internal to the constants (this is a strengthening of nonorthogonality to the constants). \item $p(z)$ is 2-step analysable in the constants. \item For generic $c \in \mathbb C$, $z' = \int f(z) dz +c$ is orthogonal to the constants, and in the semi-minimal analysis of $p(z)$ there is one type nonorthogonal to the constants and one trivial type. \end{enumerate} In Section \ref{nonmin}, we show that each of the three possibilities can occur within the non-minimal equations in our family, providing concrete examples of each case; this analysis is similar to the results of \cite{jin2020internality}, who carried out such an analysis for a different class of order two equations. Our analysis involves work along the lines of the techniques of \cite{HrIt, notmin}, and a number of results of independent interest are developed in the course of it. \subsection{Special solutions and integrability} One of the fundamental problems of algebraic differential equations is to express the solutions of a differential equation or the first integral of a vector field by some specific \emph{known functions}\footnote{e.g. rational, algebraic, elementary, Liouvillian.} and arbitrary constants, or to \emph{show that this is impossible}. In this manuscript, we develop the connection between various such impossibility results for solutions and the notions coming from model theory described above. In particular, we establish results for equations of Li\'enard type: \begin{equation*} x'' (t) + f(x) x' (t) + g(x) =0, \end{equation*} for $f(x),g(x)$ rational functions. Notice that the equations of this type generalize the Brestovski-Poizat type equations described above. This family of equations has its origins in the work of Li\'enard \cite{Lie1, Lie2} and has been the subject of study from a variety of perspectives, in large part due to its important applications in numerous scientific areas. See \cite{harko2014class} and the references therein for numerous applications. The class of equations has been intensely studied with respect to finding explicit solutions and integrability, mainly from the point of view of Liouvillian functions. We give a review of the existing results in Section \ref{prevlie}. The connections between these model theoretic notions and the equation having certain special solutions are known to some experts, but there does not seem to be any account of these connections in the literature. Our approach makes use of model theoretic notions and, in particular, a recent specialization theorem of the second author \cite{jaoui2020corps}. \subsection{Algebraic relations between solutions} Though establishing the strong minimality of a differential equation is itself sometimes a motivational goal, in many cases it is just the first step in a strategy to classify the algebraic relations between solutions of the equation. See for instance \cite{jaoui2019generic}, where this strategy is employed for generic planar vector fields. In \cite{casale2020ax}, this strategy is used to prove the Ax-Lindemann-Weierstrass theorem for the automorphic functions associated with Fuchsian groups.
Sections \ref{formssection} and \ref{algrelsection} are devoted to classifying the algebraic relations between solutions of the strongly minimal equations of Brestovski-Poizat type. \begin{thmstar} Let $f_1(z),\ldots, f_n(z) \in \mathbb{C}(z)$ be rational functions such that each $f_i(z)$ is not the derivative of a rational function in $\mathbb{C}(z)$, and consider, for $i = 1,\ldots, n$, $y_i$ a solution of $$(E_i): y''/y' = f_i(y).$$ Then $\operatorname{trdeg}_\mathbb{C}(y_1,y'_1,\ldots, y_n,y'_n) = 2n$ unless for some $i \neq j$ and some $(a,b) \in \mathbb{C}^\ast \times \mathbb{C}$, $y_i = ay_j + b$. In that case, we also have $f_i(z) = f_j(az + b)$. \end{thmstar} Much of the analysis of Section \ref{formssection} is of independent interest. Indeed, in Section \ref{5.1} we set up the formalism of volume forms, vector fields, and Lie derivatives quite generally. In Section \ref{5.2} we give a proof of a result of Hrushovski and Itai \cite{HrIt} using our formalism. In Section \ref{5.3}, we develop and use formalism around the Lie algebra of volume forms to show that for equations in our class, characterizing algebraic relations between solutions and their derivatives reduces to characterizing \emph{polynomial relations} between solutions (with no derivatives). Following this, in Section \ref{algrelsection}, we give a precise characterization of the polynomial relations which can appear. In Section \ref{nonmin} we turn towards the nonminimal case, characterize the type of semi-minimal analysis which can appear for equations from the class, and make some remarks regarding the implications of this analysis for the dimension order property (DOP). \subsection{Organization of the paper} Section \ref{prelim} contains the basic definitions and notions from model theory and the model theory of differential fields that we use throughout the paper. The basic setup of other topics is mostly carried out in the respective sections throughout the paper. In Section \ref{strminsection} we characterize strong minimality for equations of a generalized Brestovski-Poizat form. In Section \ref{integrabilityandnot}, we give a brief introduction to integrability and various special classes of solutions, overview the extensive previous work on equations of Li\'enard type, and prove our results on the existence of Liouvillian solutions to Li\'enard equations. In Sections \ref{formssection} and \ref{algrelsection} we classify the algebraic relations between solutions of strongly minimal equations in the generalized Brestovski-Poizat class. In Section \ref{nonmin} we analyze the nonminimal equations of the class. \section{Preliminaries} \label{prelim} Throughout, $(\mathcal{U},\delta)$ will denote a saturated model of $DCF_0$, the theory of differentially closed fields of characteristic zero with a single derivation. So $\mathcal{U}$ will act as a ``universal'' differential field in the sense of Kolchin. We will also assume that its field of constants is $\mathbb C$. We will be using standard notations: given a differential field $K$, we denote by $K^{alg}$ its algebraic closure, and if $y$ is a tuple from $\mathcal{U}$, we use $K\gen{y}$ to denote the differential field generated by $y$ over $K$, i.e. $K\gen{y}=K(y,\delta (y),\delta^2(y),\ldots)$. We will sometimes write $y'$ for $\delta (y)$ and similarly $y^{(n)}$ for $\delta^n (y)$.
Recall that a Kolchin closed subset of $\mathcal{U}^n$ is the vanishing set of a finite system of differential polynomial equations, and by a definable set we mean a finite Boolean combination of Kolchin closed sets. In the language $L_{\delta}=(+,-,\times,0,1,\delta)$ of differential rings, these are precisely the sets defined by quantifier free $L_{\delta}$-formulas. Since $DCF_0$ has quantifier elimination, these are exactly {\em all} the definable sets. If a definable set $X$ in $\mathcal{U}^n$ is defined with parameters from a differential field $K$, then we will say that $X$ is defined over $K$. Given such an $X$, we define the {\em order} of $X$ to be $ord(X)=\sup\{\text{tr.deg.}_FF\langle y \rangle:y\in X\}$ where $F$ is any differential field over which $X$ is defined. We call an element $y\in X$ {\em generic over $K$} if $\text{tr.deg.}_KK\langle y\rangle = ord(X)$. As mentioned in the introduction, strong minimality is the first central notion studied in this paper: \begin{defn} A definable set $X$ is said to be {\em strongly minimal} if it is infinite and for every definable subset $Y$ of $X$, either $Y$ or $X\setminus Y$ is finite. \end{defn} It is not hard to see that $\mathbb{C}$, the field of constants, is strongly minimal. \begin{rem}\label{SMLienard}We will be mainly concerned with equations of Li\'enard type, and in that case we have a nice algebraic characterization of strong minimality: Let $\mathcal{C}\subset\mathbb{C}$ be a finitely generated subfield. Let $X$ be defined by an ODE of the form $y^{(n)}=f(y,y',\ldots,y^{(n-1)})$, where $f$ is rational over $\mathcal{C}$. Then $X$ (or the equation) is strongly minimal if and only if for any differential field extension $K$ of $\mathcal{C}$ and solution $y\in X$, we have that $\text{tr.deg.}_KK\gen{y}=0$ or $n$. If $X$ is given as a vector field on the affine plane and $X$ is strongly minimal, then there are no invariant algebraic curves of the vector field (if there were, the generic solution of the system of equations given by $X$ and the curve would violate the transcendence condition described in the previous paragraph). For instance, the equation $z''=z \cdot z' $ studied by Poizat \cite{poizat1977rangs} is not strongly minimal, but the definable set $z''=z \cdot z' , \, z'\neq 0$ is strongly minimal. So, strong minimality precludes the existence of invariant curves, \emph{but this is not sufficient.} For instance, the system \begin{eqnarray*} x' & = & 1 \\ y' & = & xy + \alpha \end{eqnarray*} is not strongly minimal, but when $\alpha \neq 0$ the system has no invariant curves.\footnote{Thanks to Maria Demina for this example.} It is easy to see that the system violates the transcendence criterion over the field $\mathbb C (t)$ with the solution $x=t$ and $y$ a generic solution to $y'=ty+ \alpha$. \end{rem} As already alluded to in the introduction (see also the discussion below), in $DCF_0$ strongly minimal sets determine, in a precise manner, the structure of all definable sets of finite order. Furthermore, establishing strong minimality of a definable set $X$ usually ensures that we have some control over the possible complexity of the structure of the set $X$. As an example, if $X$ is defined over $\mathbb{C}$, that is, the differential equations involved are autonomous, then the following holds (cf. \cite[Section 2]{nagloo2011algebraic} and \cite[Section 5]{casale2020ax}). \begin{fact}\label{autonomous} Assume that a strongly minimal set $X$ is defined over $\mathbb{C}$ and that $ord(X)>1$.
Then \begin{enumerate} \item $X$ is orthogonal to $\mathbb{C}$. \item $X$ is geometrically trivial: for any differential field $K$ over which $X$ is defined, and for any $y_{1},\ldots,y_{\ell}\in X$, denoting by $\widetilde{y}_i$ the tuple given by $y_i$ together with all its derivatives, if $(\widetilde{y}_1,\ldots,\widetilde{y}_{\ell})$ is algebraically dependent over $K$, then for some $i<j$, $\widetilde{y}_{i}, \widetilde{y}_{j}$ are algebraically dependent over $K$. \item If $Y$ is another strongly minimal set that is nonorthogonal to $X$, then it is non-weakly orthogonal to $X$. \end{enumerate} \end{fact} Recall that if $X_1$ and $X_2$ are strongly minimal sets, we say that $X_1$ and $X_2$ are \emph{nonorthogonal} if there is some infinite definable relation $R\subset X_1\times X_2$ such that ${\pi_1}_{|R}$ and ${\pi_2}_{|R}$ are finite-to-one functions. Here for $i=1,2$, we use $\pi_i:X_1\times X_2\rightarrow X_i$ to denote the projection maps. Generally, even if the sets $X_1$ and $X_2$ are defined over some differential field $K$, it need not be the case that the finite-to-finite relation $R$ witnessing nonorthogonality is defined over $K$ (instead it will be defined over a differential field extension of $K$). We say that $X_1$ is non-\emph{weakly orthogonal} to $X_2$ if they are nonorthogonal and the relation $R\subset X_1\times X_2$ is defined over $K^{alg}$. \begin{rem} Notice that in Fact \ref{autonomous}(2) we can replace ``$K$'' in the conclusion by ``$\mathbb{C}$'', that is, one can state the conclusion as ``then for some $i<j$, $\widetilde{y}_{i}, \widetilde{y}_{j}$ are algebraically dependent over $\mathbb C$''. This follows using the non-weak orthogonality statement given in Fact \ref{autonomous}(3) (taking $Y=X$). \end{rem} In the next section, we will show that strong minimality holds in some special cases of equations of Li\'enard type. Since these equations are autonomous of order 2, it will then follow that all three conclusions of Fact \ref{autonomous} hold in those cases. This will allow us to make a deeper analysis of the algebraic properties of the solution sets. It is worth mentioning that even if a strongly minimal set is not defined over $\mathbb C$, there is still a strong classification result, called the Zilber trichotomy theorem: \begin{fact}[\cite{HrushovskiSokolovic},\cite{PillayZiegler}]\label{trichotomy} Let $X$ be a strongly minimal set. Then exactly one of the following holds: \begin{enumerate} \item $X$ is nonorthogonal to $\mathbb{C}$, \item $X$ is nonorthogonal to the (unique) smallest Zariski-dense definable subgroup of a simple abelian variety $A$ which does not descend to $\mathbb{C}$, \item $X$ is geometrically trivial. \end{enumerate} \end{fact} Notice that nonorthogonality to the constants is simply a version of algebraic integrability after base change. We will now discuss several other variations of this notion, but first we need to say a few words about ``types'' and ``forking'' in $DCF_0$. Let $K$ be a differential field and ${y}$ a tuple of elements from $\mathcal{U}$; the type of ${y}$ over $K$, denoted ${\rm tp}({y}/K)$, is the set of all $L_{\delta}$-formulas with parameters from $K$ that ${y}$ satisfies. It is not hard to see that the set $I_{p}=\{f\in K\{\overline{X}\}: f(\overline{X})=0\in p\}=\{f\in K\{\overline{X}\}: f({y})=0\}$ is a differential prime ideal in $K\{\overline{X}\}=K[\overline{X},\overline{X}',\ldots]$, where $p={\rm tp}({y}/K)$.
Indeed, by quantifier elimination, the map $p\mapsto I_p$ is a bijection between the set of complete types over $K$ and the set of differential prime ideals in $K\{\overline{X}\}$. Therefore in what follows there is no harm in thinking of $p=tp({y}/K)$ as the ideal $I_{p}$. If $X$ is a definable set over $K$, then by the (generic) type of $X$ over $K$ we simply mean ${\rm tp}(y/K)$ for $y\in X$ generic over $K$. We say that a complete type\footnote{So $p=tp(y/K)$ for some tuple $y$ from $\mathcal{U}$.} $p$ over a differential field $K$ is of finite rank (or order) if it is the generic type of some definable set over $K$ of finite order. \begin{defn} Let $K$ be a differential field and ${y}$ a tuple of elements from $\mathcal{U}$. Let $F$ be a differential field extension of $K$. We say that ${\rm tp}(y/F)$ {\em is a nonforking extension} of ${\rm tp}(y/K)$ if $K\gen{y}$ is algebraically disjoint from $F$ over $K$, i.e., if $y_1,\ldots,y_k\in K\gen{y}$ are algebraically independent over $K$ then they are algebraically independent over $F$. Otherwise, we say that ${\rm tp}(y/F)$ is a forking extension of ${\rm tp}(y/K)$, or that ${\rm tp}(y/F)$ forks over $K$. \end{defn} It is not hard to see from the definition that ${\rm tp}(y/K^{alg})$ is always a nonforking extension of ${\rm tp}(y/K)$. A complete type $p={\rm tp}(y/K)$ over a differential field $K$ is said to be {\em stationary} if ${\rm tp}(y/K^{alg})$ is its unique nonforking extension, i.e., whenever $z$ is another realization of $p$ (so ${\rm tp}(y/K)={\rm tp}(z/K)$), then $z$ is also a realization of ${\rm tp}(y/K^{alg})$ (so ${\rm tp}(y/K^{alg})={\rm tp}(z/K^{alg})$). We say that it is {\em minimal} if it is not algebraic and all its forking extensions are algebraic, that is, if $q=tp(y/F)$ is a forking extension of $p$, where $F\supseteq K$, then $y\in F^{alg}$. If $X$ is strongly minimal and $p$ is its generic type, then it follows that $p$ is minimal. Using forking, one obtains a well-defined notion of independence as follows: Let $K\subseteq F$ be differential fields and ${y}$ a tuple of elements from $\mathcal{U}$. We say that $y$ is {\em independent} from $F$ over $K$, and write $y\mathop{\mathpalette\Ind{}}_{K}{F}$, if ${\rm tp}(y/F)$ is a nonforking extension of ${\rm tp}(y/K)$. We now give the first variation of nonorthogonality to the constants. \begin{defn} A complete type $p$ over a differential field $K$ is said to be {\em internal to $\mathbb C$} if there is some differential field extension $F\supseteq K$ such that for every realisation $y$ of $p$ there is a tuple $c_1,\ldots,c_k$ from $\mathbb C$ such that $y\in F(c_1,\ldots,c_k)$. \end{defn} \begin{fact}\cite[Lemma 10.1.3-4]{tent2012course}\label{internality} \begin{enumerate} \item A complete type $p$ over a differential field $K$ is internal to $\mathbb C$ if and only if there is some differential field extension $F\supseteq K$ and some realisation $y$ of $p$ such that $y\in F(\mathbb C)$ and $y\mathop{\mathpalette\Ind{}}_{K}{F}$. \item A definable set $X$ is internal to $\mathbb C$ if and only if there is a definable surjection from $\mathbb C^n$ (for some $n\in\mathbb{N}$) onto $X$. \end{enumerate} \end{fact} Using Fact \ref{internality}(2) it is not hard to see that homogeneous linear differential equations are internal to $\mathbb C$. Indeed, in this case the solution set is simply a $\mathbb C$-vector space $V$.
If $(v_1,\ldots, v_k)$ is a basis for $V$, then the map $f(x_1,\ldots, x_k)=\sum_{i=1}^kx_iv_i$ is a surjective map $\mathbb C^k\rightarrow V$ witnessing that $V$ is internal to $\mathbb C$. Clearly, Fact \ref{internality}(2) also shows that internality to the constants is closely related to the notion of algebraic integrability (i.e. enough independent first integrals). We also have a more general but closely related notion of analysability in the constants: \begin{defn} \sloppy Let $y$ be a tuple from $\mathcal{U}$ and $K$ a differential field. We say that ${\rm tp}(y/K)$ is {\em analysable in the constants} if there is a sequence $(y_{0},\ldots,y_{n})$ such that \begin{itemize} \item $y\in K\gen{y_{0},y_{1},\ldots,y_{n}}^{alg}$ and \item for each $i$, either $y_{i}\in K\gen{y_{0},\ldots,y_{i-1}}^{alg}$ or $tp(y_{i}/K\gen{y_{0},\ldots,y_{i-1}})$ is stationary and internal to $\mathbb C$. \end{itemize} \end{defn} \sloppy It follows that if ${\rm tp}(y/K)$ is analysable in the constants, then the sequence $(y_{0},\ldots,y_{n})$ in the definition above can be chosen to be from $K\gen{y}$. Furthermore, it follows that analysability of $p$ in the constants is equivalent to the condition that every extension of $p$ is nonorthogonal to $\mathbb C$. Differential equations that have Liouvillian solutions provide the most studied example of equations that are analysable in the constants. We will say quite a bit more in Section \ref{integrabilityandnot}. Let us now turn our attention to the semi-minimal analysis of complete types, a notion which has been mentioned a few times in the introduction. \begin{defn} \label{semiminimal} Let $p$ be a complete stationary type over a differential field $K$. Then $p$ is said to be {\em semiminimal} if there is some differential field extension $F\supseteq K$, some $z$ realising the nonforking extension of $p$ to $F$, and $z_1,\ldots,z_n$, each of whose type over $F$ is minimal, such that $z\in F\gen{z_1,\ldots,z_n}$. \end{defn} Semiminimal (and hence minimal) types are the building blocks of all finite rank types in $DCF_0$ via the following construction. \begin{defn} \label{semimin} Let $p={\rm tp}(y/K)$ be a complete type over a differential field $K$. A \emph{semiminimal analysis of $p$} is a sequence $(y_0, \ldots,y_n)$ such that \begin{itemize} \item $y\in K\gen{y_n}$, \item for each $i$, $y_i \in K\gen{y_{i+1}}$, \item for each $i$, ${\rm tp}(y_{i+1} /K\gen{y_i})$ is semiminimal. \end{itemize} \end{defn} The following is a fundamental result and is obtained by putting together Lemma 2.5.1 in \cite{GST} and Lemma 1.8 in \cite{BUECHLER2008135} (see also Propositions 5.9 and 5.12 in \cite{PillayNotes}). \begin{fact}\label{semifact} Every complete type of finite rank in $DCF_0$ has a semiminimal analysis. \end{fact} Finally, recall that for a field $K$, we denote by $K\left(\left(X\right)\right)$ the field of formal Laurent series in the variable $X$, while $K\gen{\gen{X}}$ denotes the field of formal Puiseux series, i.e., the field $\bigcup_{d\in{\mathbb{N}}}K\left(\left(X^{1/d}\right)\right)$. It is well known that if $K$ is an algebraically closed field of characteristic zero, then so is $K\gen{\gen{X}}$ (cf. \cite[Corollary 13.15]{Eisenbud}). Puiseux series traditionally appear in the study of algebraic solutions of differential equations; however, they have also been used by Nishioka (cf. \cite{nishioka1990painleve} and \cite{Nishioka2}) in his work proving transcendence results for solutions of some classical differential equations.
Inspired by those ideas, Nagloo \cite{nagloo2015geometric} and Casale, Freitag and Nagloo \cite{casale2020ax} have also used these techniques to study model theoretic and transcendence properties of solutions of well-known differential equations, generalizing the results of Nishioka. In a different direction, Le\'on-S\'anchez and Tressl \cite{Leonlarge} also used Puiseux series in their work on differentially large fields. We will make use of Puiseux series in our proof of strong minimality of special cases of equations of Li\'enard type. \section{Strong minimality} \label{strminsection} The set of solutions of the equation $$z z'' = z',\;\;\;\;z'\neq 0$$ in a differentially closed field of characteristic zero was shown by Poizat (see \cite{MMP} for an exposition) to be strongly minimal. Poizat's method of proof relies in an essential way on the specific form of the equation being extremely simple.\footnote{The proof is direct; taking an arbitrary differential polynomial $p(z)$ of order one, if the polynomial determines a subvariety, it must be that the vanishing of $p(z)$ implies the vanishing of $zz''-z'$. Considering $z\delta (p(z)) $ one can apply the relation $zz''=z'$ to obtain a new differential polynomial $q(z)$ of order one such that the vanishing of $p(z)$ implies the vanishing of $q(z)$. It follows that $p(z)$ must divide $q(z)$, and this fact can be used to show that $p(z)$ itself must be of a very restrictive form. One ultimately shows that $p(z) = z'.$} A similar but more complicated variant of the strategy of Poizat was employed in Kolchin's proof of the strong minimality of the first Painlev\'e equation (originally in an unpublished letter from Kolchin to Wood); an exposition appears in \cite{MMP}. In \cite[Chapter 9]{freitag2012model}, another elaboration of the above strategy was employed to show that the set defined by $$zz'''-z''=0 , \, \text{ and } \, \, z'' \neq 0$$ is strongly minimal. In \cite{brestovski1989algebraic}, Brestovski generalized Poizat's theorem to include equations of the form: $$z''= z' \left(\frac{B - f_z z' -g_z}{fA} \right),\;\;\;\;z'\neq 0$$ for polynomials $f,g,A,B$ over $\mathbb C$ satisfying very specific conditions.\footnote{When $f, g$ are constant and $B=1$, $A=z$, the theorem yields Poizat's result, and these choices satisfy Brestovski's assumptions. The assumptions in Brestovski's theorem are calibrated just so that the strategy of Poizat can be successfully carried out. A complete characterization of strong minimality via this method seems unlikely, due to the complexity of the calculations which appear in the course of the proof in \cite{brestovski1989algebraic}.} We are interested in the case where the derivatives of $z$ appear linearly in the equation (i.e. $f$ is a constant). Then Brestovski's family of equations becomes: \begin{equation}\tag{$\star$} \label{stareqn} z''={z'}f(z),\;\;\;\;z'\neq 0 \end{equation} where $f(z) \in \mathbb C(z)$. In this case, we give a definitive characterization of strong minimality: \begin{thm} \label{stminthm} The solution set of equation (\ref{stareqn}) is strongly minimal if and only if for all $g\in \mathbb C(z)$, we have that $f(z)\neq\frac{d g}{dz}$. \end{thm} \begin{proof} Clearly, if $f(z)=\frac{d g}{dz}$ for some $g\in \mathbb C(z)$, then any solution of $z'=g(z)+c$, $c\in \mathbb C$, is also a solution to $\frac{z''}{z'} = f(z)$. Hence the solution set of equation (\ref{stareqn}) is not strongly minimal, and indeed it has rank 2.
Now assume that $f(z)$ has partial fraction decomposition $$f(z)=\frac{d g}{dz}+\sum_{i=1}^n{\frac{c_i}{z-a_i}}$$ where the $a_i$'s are distinct and some $c_i\neq 0$. Without loss of generality assume $c_1\neq 0$. Then $f(z)$ has a nonzero residue at $a_1$. Considering the change of variable $z\mapsto z-a_1$, we may assume that $f(z)$ has a nonzero residue at $0$. Arguing by contradiction, let us assume that the solution set of equation (\ref{stareqn}) is not strongly minimal. Then for some $K$, a finitely generated differential field extending $\mathbb C$\footnote{Formally, we work with $\mathcal{C}\subset \mathbb C$ a subfield finitely generated over $\mathbb Q$ by the coefficients of the equation.} with derivation $\delta$, and $y$ a solution of (\ref{stareqn}), we have that $u = \delta(y) \in K(y)^{alg}.$ \sloppy We can think of $u$ as living in the field of Puiseux series $K^{alg}\gen{\gen{y}}$ with the usual valuation $v$ and the derivation $$\delta \left(\sum a_iy^i\right)=\sum \delta(a_i)y^i+\left(\sum ia_iy^{i-1}\right)\delta(y).$$ So $$u=\sum_{i=0}^{\infty} a_iy^{r+\frac{i}{m}},$$ where $v(u)=r$ and $m$ is the ramification exponent. Differentiating we get $$\delta(u)=\sum_{i=0}^{\infty} \delta(a_i)y^{r+\frac{i}{m}}+u\left(\sum_{i=0}^{\infty} (r+\frac{i}{m})a_iy^{r+\frac{i}{m}-1}\right).$$ Since $$v\left(\sum_{i=0}^{\infty} \delta(a_i)y^{r+\frac{i}{m}}\right)\geq r,$$ we have that $$\frac{\delta(u)}{u}=\alpha +\sum_{i=0}^{\infty} (r+\frac{i}{m})a_iy^{r+\frac{i}{m}-1},$$ where $v(\alpha)\geq 0$. The right hand side of this equation is equal to $f(y)$, and so it should have a nonzero residue, i.e. a nonzero coefficient of $y^{-1}$. But the coefficient of $y^{-1}$ on the right hand side is $0$: the only candidate term, the one with $r+\frac{i}{m}=0$, appears with coefficient $(r+\frac{i}{m})a_i=0$, and $\alpha$ contributes no negative powers since $v(\alpha)\geq 0$. This is a contradiction. \end{proof} Since Equation (\ref{stareqn}) has constant coefficients, it follows from Theorem \ref{stminthm} and Fact \ref{autonomous}(2) (see \cite[Proposition 5.8]{casale2020ax} for a proof) that: \begin{cor} \label{triviality} The solution set of equation (\ref{stareqn}), for $f(z)$ not the derivative of any rational function, is geometrically trivial. \end{cor} The previous corollary already gives strong restrictions on the possible algebraic relations between solutions of Equation (\ref{stareqn}), but Sections \ref{formssection} and \ref{algrelsection} are devoted to giving a complete classification. Following this, we turn to similar questions in the case that $f(z)$ is the derivative of a rational function. Before we do so, let us describe the connection between Theorem \ref{stminthm} and (non)integrability of equations of Li\'enard type. \section{Solutions and integrability} \label{integrabilityandnot} Equations of the form: \begin{equation} \label{Lie} x'' (t) + f(x) x' (t) + g(x) =0, \end{equation} for $f(x),g(x)$ rational functions, have their origins in the work of Li\'enard \cite{Lie1, Lie2} and have important applications in numerous scientific areas. For instance, the solutions can be used to model oscillating circuits; see page 2 of \cite{harko2014class} for numerous references. Numerous recent works are devoted to giving explicit solutions or first integrals of Equation \ref{Lie} in special cases, or to showing that none can be expressed in terms of special functions in some class (e.g. Liouvillian, elementary). In this section, we first point out some general connections between special classes of solutions, first integrals, and the model theoretic notions we study; as a brief aside, we include below a computational illustration of the residue criterion of Theorem \ref{stminthm}.
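The criterion of Theorem \ref{stminthm} is mechanical to check for a given $f(z)$: one asks whether the antiderivative of $f$ is again rational, equivalently whether all residues of $f$ vanish. The following is a small SymPy sketch of this check (our illustration, not part of the results above; the helper name is ours).
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')

def has_rational_antiderivative(f):
    # f in C(z) is the derivative of a rational function exactly when
    # its antiderivative has no logarithmic part, i.e. when all
    # residues of f vanish.
    return sp.integrate(f, z).is_rational_function(z)

# Poizat's f(z) = 1/z has residue 1 at z = 0, so z'' = z'/z, z' != 0
# is strongly minimal:
print(has_rational_antiderivative(1/z))          # False
# f(z) = z + 3/z**2 = d/dz(z**2/2 - 3/z) has only vanishing residues,
# so the corresponding equation is not strongly minimal:
print(has_rational_antiderivative(z + 3/z**2))   # True
\end{verbatim}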
Following this, we describe some existing results for Li\'enard equations, and we then give some results based on model theoretic ideas and our work in Section \ref{strminsection}. \subsection{Special classes of solutions} In this section, we give results connecting our model theoretic notions to several classically studied classes of solutions. \begin{defn} Let $(F, \Delta)$ be a differential field (generally we are interested in the case $F=\mathbb C(x,y)$ with the derivations $\frac{d}{dx}, \frac{d}{dy}$). We say that a finitely generated differential field extension $(K, \Delta)$ of $F$ is \emph{elementary} if there is a tower of differential field extensions $F=F_0 \subset F_1 \subset \cdots \subset F_n = K$ such that for all $i=1, \ldots, n$ we have that $F_i = F_{i-1} (\alpha ) $ where $\alpha$ is such that: \begin{enumerate} \item $\delta \alpha = \delta f /f$ for some $f \in F_{i-1}$ and for all $\delta \in \Delta$ \emph{or} \item $\delta \alpha /\alpha =\delta f$ for some $f \in F_{i-1}$ and for all $\delta \in \Delta$ \emph{or} \item $\alpha \in F_{i-1}^{alg}.$ \end{enumerate} \end{defn} The class of Liouvillian functions is more general than the class of elementary functions: \begin{defn} Let $(F, \Delta)$ be a differential field. We say that a finitely generated differential field extension $(K, \Delta)$ of $F$ is \emph{Liouvillian} if there is a tower of differential field extensions $F=F_0 \subset F_1 \subset \cdots \subset F_n = K$ such that for all $i=1, \ldots, n$ we have that $F_i = F_{i-1} (\alpha ) $ where $\alpha$ is such that: \begin{enumerate} \item $\delta \alpha \in F_{i-1}$ for all $\delta \in \Delta$ \emph{or} \item $\delta \alpha /\alpha \in F_{i-1}$ for all $\delta \in \Delta$ \emph{or} \item $\alpha \in F_{i-1}^{alg}.$ \end{enumerate} \end{defn} We next give several more special classes of functions generalizing Liouvillian and elementary functions. \begin{defn}\footnote{The notion of a Pfaffian function is most commonly defined for a real-valued function of a real variable, but we formulate the complex analog as well, which fits more naturally with the results of this paper. Both notions are closely connected to model theoretic notions from the theory of differentially closed fields. See \cite{freitag2021not}.} Let $f_1, \ldots , f_l $ be complex analytic functions on some domain $U \subseteq \mathbb C^n$. We will call $(f_1, \ldots , f_l)$ a \emph{$\mathbb C $-Pfaffian chain} if there are polynomials $p_{ij}(u_1, \ldots , u_n , v_1, \ldots, v_i )$ with coefficients in $\mathbb C$ such that $$\pd{f_i}{x_j}= p_{ij} \left( \overline x, f_1 ( \overline x), \ldots , f_i (\overline x ) \right)$$ for $1 \leq i \leq l$ and $1 \leq j \leq n.$ We call a function \emph{$\mathbb C$-Pfaffian} if it can be written as a polynomial (coefficients in $\mathbb C$) in the functions of some $\mathbb C$-Pfaffian chain. \end{defn} Finally, we come to the most general notion we consider, a condition that was developed by Nishioka \cite{nishioka1990painleve, nishiokaII}: \begin{defn} Let $a$ be differentially algebraic over a differential field $k$.
We say $a$ is \emph{$r$-reducible over $k$} if there exists a finite chain of $k$-finitely generated differential field extensions, $$k=R_0 \subset R_1 \subset \cdots \subset R_m$$ such that $a \in R_m$ and $\operatorname{trdeg}(R_i/R_{i-1}) \leq r.$ \end{defn} \begin{thm} If $X$ is a strongly minimal differential equation of order $n$ defined over a finitely generated differential field $K$, then any nonalgebraic solution $f$ of $X$ is not $d$-reducible for any $d<n.$ It also follows that $f$ is not Pfaffian, Liouvillian, or elementary. \end{thm} \begin{proof} Recall from Remark \ref{SMLienard} that the zero set of our differential equation $X$ with coefficients in a differential field $K$ is strongly minimal if and only if (1) the equation is irreducible over $K^{alg}$ (as a polynomial in several variables) and (2) given any solution $f$ of $X$ and \emph{any differential field extension $F$ of $K$}, $$\text{trdeg}_F F\langle f \rangle = \text{trdeg}_K K \langle f \rangle \text{ or } 0.$$ If $f$ were $d$-reducible for $d<n$, as witnessed by some chain $K=R_0 \subset R_1 \subset \cdots \subset R_m$, then for some $m_1 < m$ we have that $f$ is transcendental over $R_{m_1}$ and algebraic over $R_{m_1+1}$. But then the differential field $R_{m_1}$ has the property that $0 < \text{trdeg}_{R_{m_1}} \left( R_{m_1} \langle f \rangle \right) \leq d <n,$ contradicting strong minimality of $X$. Of course, functions in each of the classes Pfaffian, Liouvillian, and elementary are $1$-reducible, so $f$ cannot be in any of these classes either. \end{proof} Assuming a weaker model theoretic condition on $X$ allows one to rule out Liouvillian solutions, but not Pfaffian solutions: \begin{thm} \label{fact} Let $X$ be a differential equation of order $n$ defined over a finitely generated differential field $K$. Suppose the generic type of $X$ is not analysable in the constants; then the generic solution of $X$ is not Liouvillian. Suppose further that $X$ is orthogonal to the constants. Then any nonalgebraic solution $f$ of $X$ is not Liouvillian. \end{thm} \begin{proof} Recall from Fact \ref{semifact} that every finite rank type has a semiminimal analysis. The extensions appearing in the definition of $f$ being Liouvillian are either algebraic or generated by the generic solution of an order one linear differential equation. The type of the generator of this extension is \emph{internal to the constants}\footnote{See Fact \ref{trichotomy}.} over the previous field in the tower, so the type of $f$ over $K$ is analysable in the constants. If $X$ (as a definable set) is orthogonal to the constants, then any type $q$ in $X$ not algebraic over $K$ has the property that $q$ is orthogonal to the constants. This implies $q$ is not analysable in the constants, so any realization of $q$ is not Liouvillian. \end{proof} Non-analysability, or even orthogonality to the constants, does not rule out the more general Pfaffian or $d$-reducible solutions as above. The connection between integrability in Liouvillian or elementary terms and our model theoretic notions is more subtle than the connection to the existence of solutions, as we explain in the next subsection. \subsection{Notions of integrability} We will begin by describing some general notions around integrability.
Consider a system of autonomous differential equations \begin{equation} \overline x ' = P(\overline x ) \label{vecfield} \end{equation} where $P=(P_1,\dots,P_n)$ are polynomial or rational functions in the variables $\overline x = (x_1, \ldots , x_n) $ with coefficients in $\mathbb C$. A \emph{first integral} of the system is a nonconstant meromorphic function of $\overline x$ which is constant along solution curves of the system, i.e., $F: U \subset {\mathbb C}^n\rightarrow {\mathbb C}$ defined on some non-empty analytic open set $U$ of $\mathbb{C}^n$ with $$\sum_{i=1}^n P_i(\overline x){\partial F\over \partial x_i}=0.$$ Meromorphic (and even holomorphic) first integrals always exist in an analytic neighborhood of a non-singular point of the equation; furthermore, if $F$ is a first integral of the system on some open set $U$ then it is a first integral on any open set $V \supset U$ to which $F$ can be analytically continued. In particular, if $F$ is a rational function then the open set $U$ can be taken to be the Zariski-open set of $\mathbb{C}^n$ where $F$ is well-defined. Usually, one is interested in first integrals from various special classes of functions. For instance, a \emph{Darboux integral} \cite{MR3563433} of the system is one of the special form: $$f_1 ( \overline x ) ^{r_1} \ldots f_k ( \overline x ) ^{r_k} e^{h(\overline x) /g(\overline x)} $$ for polynomials $f_i,g,h$ and $r_j \in \mathbb C$. Associated with the polynomials $P(\overline x ) = (P_1 (\overline x ), \ldots , P_n (\overline x ))$ is the vector field $$\tau _P := P_1( \overline x ) \pd{}{x_1} + \ldots + P_n (\overline x ) \pd{}{x_n}.$$ A \emph{Darboux polynomial} of the system is $f ( \overline x ) \in \mathbb C[ \overline x ]$ such that $f$ divides $\tau _P (f)$. This condition is equivalent to the zero set of $f$ being an invariant algebraic hypersurface for the vector field $\tau_P$. The connection to integrability is given by results originally due to Darboux and Jouanolou, see \cite[Theorem 3]{MR2902728}. \begin{fact} Suppose that a polynomial vector field $\tau$ of degree at most $d$ has irreducible invariant hypersurfaces given by the zero sets of $f_i$ for $i=1, \ldots, k$ and suppose that the $f_i$ are relatively prime. Then: \begin{enumerate} \item If $k \geq \binom{n+d-1}{n}+1$ then $\tau $ has a Darboux integral. \item If $k \geq \binom{n+d-1}{n}+n$ then $\tau $ has a rational first integral. \end{enumerate} \end{fact} In model theoretic terms, even in the nonautonomous case, there is a close connection between co-order one differential subvarieties of a differential algebraic variety and nonorthogonality to the constants, see \cite{freitag2017finiteness}. Of course, the relation to the previous section is: strong minimality of a second order (or higher) system of differential equations implies that the system has no Darboux polynomials. In fact strong minimality and the other model theoretic notions we study go a good deal further, but as we will see, our model theoretic notions are more closely connected to the existence of solutions in various special classes rather than to integrability in those classes.
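Since the defining condition $\sum_i P_i \,\partial F/\partial x_i = 0$ is purely symbolic, a candidate first integral can be verified by computer algebra. The following SymPy sketch is our own illustration, using the harmonic oscillator as a stand-in example (it is not one of the systems studied in this paper).
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

def is_first_integral(P, F, variables):
    # Check the defining condition sum_i P_i * dF/dx_i = 0 symbolically.
    lie_derivative = sum(p * sp.diff(F, v) for p, v in zip(P, variables))
    return sp.simplify(lie_derivative) == 0

# For x' = y, y' = -x, the function F = x**2 + y**2 is constant along
# solution curves, while F = x + y is not:
print(is_first_integral([y, -x], x**2 + y**2, [x, y]))   # True
print(is_first_integral([y, -x], x + y, [x, y]))         # False
\end{verbatim}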
\begin{defn} We call a first integral $F$ \emph{elementary (Liouvillian)} if $F$ is an elementary (Liouvillian) function.\footnote{Any of the special classes of functions we mention in the previous subsection might be used to develop notions of integrability, but to our knowledge there is no development of integrability in terms of Pfaffian or $r$-reducible functions.} \end{defn} We first remark that one can reduce the study of algebraic integrals to the study of rational integrals. \begin{lem} Let $X$ be a vector field on some algebraic variety over $\mathbb{C}$. If $X$ has an algebraic first integral, then $X$ has a rational first integral. \end{lem} \begin{proof} We denote by $V$ the algebraic variety supporting $X$ and by $\delta$ the derivation induced by $X$ on $\mathbb{C}(V)$. First remark that since $\delta$ extends uniquely to a derivation $\overline{\delta}$ on $\mathbb{C}(V)^{alg}$, we have $$ \overline{\delta} \circ \sigma = \sigma \circ \overline{\delta} \text{ for all } \sigma \in Gal(\mathbb{C}(V)^{alg}/\mathbb{C}(V))$$ as $\sigma^{-1} \circ \overline{\delta} \circ \sigma$ is another derivation on $\mathbb{C}(V)^{alg}$ extending $\delta$. Assume now that $X$ has no rational first integrals and consider $f \in \mathbb{C}(V)^{alg}$ such that $\overline{\delta}(f) = 0$. Then by the remark above, we also have $\overline{\delta}(\sigma(f)) = 0$ for all $\sigma \in Gal(\mathbb{C}(V)^{alg}/\mathbb{C}(V))$. Hence the coefficients $a_1,\ldots, a_r \in \mathbb{C}(V)$ of the minimal polynomial of $f$ over $\mathbb{C}(V)$ satisfy $\delta(a_i) = \overline{\delta}(a_i) = 0$ and therefore, by assumption, $a_1,\ldots, a_r \in \mathbb{C}$. Since $\mathbb{C}$ is algebraically closed, we conclude that $f \in \mathbb{C}$ and hence that $X$ does not have any algebraic first integral either. \end{proof} \begin{thm} Let $X$ be a vector field on some algebraic variety over $\mathbb C$. If $X$ has an algebraic first integral, then $X$ is not orthogonal to the constants. \end{thm} \begin{proof} An \emph{algebraic} first integral $f$ gives a map from the solution set of $X$ to $\mathbb C$, as $f$ is constant on solutions. When $f$ is algebraic, this yields a definable map from $X$ to $\mathbb C$, implying $X$ is nonorthogonal to $\mathbb C$. \end{proof} For the remainder of the section we work with more general first integrals, but we will assume the differential equation we work with, $X$, is given by a planar vector field with coefficients in $\mathbb C$. \begin{thm} Let $X$ as above be an order two differential equation given by a rational planar vector field over $\mathbb C$. If $X$ has an elementary first integral, then $X$ has an integrating factor of the form: $$\prod_i (C_i)^{p_i}$$ for polynomials $C_i$ and integers $p_i$. If $X$ is strongly minimal, then all of the $C_i$ must be poles of the vector field. If $X$ is regular and strongly minimal, then $X$ has no elementary first integral. \end{thm} \begin{proof} If the system $X$ has an elementary first integral, results of \cite{prelle1983elementary} show that the integrating factor is of the form $$\prod_i (C_i)^{p_i}$$ for polynomials $C_i$ and integers $p_i$.\footnote{Technically, \cite{prelle1983elementary} works in the setting of regular vector fields, but an easy argument shows that the results apply to rational vector fields as well; see page 8 of \cite{duarte2009finding}.} It follows that if the $C_i$ are not poles of the vector field, then the system has nontrivial invariant algebraic curves (an explanation of this can be found in various places, e.g.
the second page of \cite{christopher1999liouvillian}, following the statement of the main theorem). Strongly minimal systems have no invariant curves, and regular systems have no poles. \end{proof} The connection between Liouvillian first integrals and strong minimality is more subtle, but we can say something about the form of the integrating factor: \begin{thm} \label{Liouv} If $X$ is a strongly minimal planar vector field with coefficients in $\mathbb C$, then $X$ has a Liouvillian first integral if and only if $X$ has an integrating factor of the form $\prod_i (C_i)^{p_i} e^{C/D}$ for polynomials $C_i, D$ which are poles of the vector field and $C$ a polynomial. If $X$ is a strongly minimal regular vector field, then if $X$ has a Liouvillian first integral, it has an integrating factor of the form $e^C$. \end{thm} \begin{proof} By results of Singer \cite{MR1062869} and Christopher \cite[Theorem 2]{christopher1999liouvillian}, if there is a Liouvillian first integral of $X$, then there is an integrating factor of the form: $$e^{C/D} \cdot \prod_i (C_i)^{p_i}$$ where $C,D, C_i$ are polynomial functions of the two variables of the system. Their proofs take place in the regular setting, but can be adapted to rational vector fields; see \cite{duarte2009finding}. The zero sets of the $C_i$ and the zero set of $D$ give invariant algebraic curves for the vector field as long as they are not poles of the vector field $X$, contradicting strong minimality. \end{proof} We now describe two examples. The first one shows that Liouvillian integrability does not in general imply the existence of invariant algebraic curves. \begin{exam} Consider the system \begin{equation} \label{demi} \begin{array}{r@{}l} x' &{} = 1 \\ y' &{} = xy + \alpha \end{array} \end{equation} where $\alpha \neq 0$. The system has integrating factor $e^{\frac{-x^2}{2}}$, so the system has a Liouvillian first integral, but no invariant algebraic curve. \end{exam} Notice that the system \ref{demi} is not strongly minimal, and more precisely the solutions of this system are all Liouvillian. On the other hand, Rosenlicht constructed examples of order two equations having a Liouvillian first integral but no nonconstant Liouvillian solution \cite[introduction]{rosenlicht1969explicit}, \cite[Proposition 3]{MR3563433}. Our second example shows that there exist order two equations having a Liouvillian first integral but \emph{no Pfaffian solution}. \begin{exam} Consider the vector field associated with the Poizat equation which originally motivated our work: \begin{equation} \label{poi1} \begin{array}{r@{}l} x' &{} = y \\ y' &{} = y/x \end{array} \end{equation} Note that the first integrals of the system are unaffected by multiplying both rational functions by $x$ to clear the denominator of the second equation. One then obtains the system: \begin{equation} \label{poi2} \begin{array}{r@{}l} x' &{} = xy \\ y' &{} = y \end{array} \end{equation} It is easy to check that the function $H(x,y) = \frac{e^y}{x}$ is a first integral of this second system (hence of the first one too), which has two invariant curves given by $x=0$ and $y=0$. It is also easy to see that system \ref{poi2} is neither strongly minimal nor orthogonal to the constants (its generic type is 2-step analysable in the constants and its generic solution is Liouvillian), while system \ref{poi1} is strongly minimal by the arguments of the previous section. So system \ref{poi1} is a system with a Liouvillian first integral but no Pfaffian solution.
\end{exam} Furthermore, this example illustrates the following observation of independent interest: transformations which scale both coordinates of the vector field by some polynomial \begin{itemize} \item preserve first integrals, \item do not preserve the model theoretic notions we study (e.g. strong minimality, orthogonality to the constants), \item do not preserve the property of the system having Liouvillian solutions. \end{itemize} The examples given above also show that Theorem \ref{Liouv} cannot be improved to give a direct connection between strong minimality and the existence of Liouvillian first integrals, at least not in complete generality. However, in the case that one can rule out an exponential integrating factor by some other argument, one can use strong minimality to show that no Liouvillian first integral exists. For instance, an argument ruling out exponential integrating factors in the case of certain Li\'enard equations is contained in \cite[Section 2]{MR3563433}. \subsection{Overview of previous results for Li\'enard equations} \label{prevlie} Equation \ref{Lie} is equivalently expressed by the vector field on $ \mathbb A ^2$: \begin{equation} \label{Lie1} \begin{array}{r@{}l} x' &{}= y \\ y' &{}= -f(x) y - g(x) \end{array} \end{equation} The study of algebraic solutions of Equation \ref{Lie1} seems to begin with Odani \cite{ODANI1995146}, who shows that Equation \ref{Lie1} has no invariant algebraic curves when $f,g \neq 0$, $ {\rm deg} (f) \geq {\rm deg} (g)$ and $g /f$ is nonconstant. Numerous authors have attempted to generalize Odani's results on invariant curves \cite{MR1433130, MR2430656}. Many recent works utilize the results of Odani and their generalizations to characterize Liouvillian first integrals of Li\'enard equations in various special cases \cite{10.2307/20764280, LlibreValls+2013+825+835, Cheze2021, MR3573730, MR3808495, MR4190110, demina2021method}. Many of the special cases considered make assumptions about the degrees of $f(x), g(x)$ in Equation \ref{Lie1}, while others make detailed assumptions not unlike the criteria employed by Brestovski \cite{brestovski1989algebraic}. Demina \cite{demina2021integrability} has recently completely classified the systems \ref{Lie1} which have Liouvillian first integrals for polynomial $f,g$. Explicit exact solutions (all Liouvillian) for Equation \ref{Lie1} in very special cases are the subject of many additional papers in the literature \cite{feng2001algebraic, feng2002explicit, feng2004exact, harko2014class, KONG1995301}. Our results in the next subsection show that in numerous wide-ranging cases Equation \ref{Lie1} has no Liouvillian solutions, so formulas for explicit exact solutions such as those of \cite{feng2001algebraic, feng2002explicit, feng2004exact, harko2014class, KONG1995301} do not exist. Numerous other order two systems of differential equations can be transformed analytically or algebraically to solutions of a system in the form of Equation \ref{Lie1}. In most cases, it is apparent that the transformations preserve the property of being Liouvillian. For instance, this applies to the transformations in Propositions 2 and 3 of \cite{gine2011weierstrass}.
There it is shown that the solutions of the system \begin{eqnarray*} x' & = & f_0(x) - f_1 (x) y , \\ y' & = & g_0 ( x) + g_1 (x) y + g_2 (x ) y^n \end{eqnarray*} can be transformed to solutions of the Li\'enard family \ref{Lie} by means of the transformation $$Y = (f_0(x) - f_1(x) y ) e^{ \int_ {0} ^ x \left( g_2 (\tau ) - f_1' (\tau )/ f_1(\tau ) \right) d \tau }.$$ It is easy to see that when the functions appearing in the system are Liouvillian, this analytic transformation preserves the property of solutions being Liouvillian. Similar, more complicated analytic transformations have been developed for various particular order two systems of higher degree (e.g. Proposition 3 of \cite{gine2011weierstrass}). There are numerous additional works showing that particular systems can be transformed into equations of Li\'enard form (see e.g. \cite{transformLien} or the references of \cite{gine2011weierstrass}). \subsection{Solutions of Li\'enard type equations} \label{nonintLie} \begin{thm} \label{nonlou} \cite[Theorem C]{jaoui2020corps} Let $k$ be a countable field of characteristic $0$, let $S$ be a smooth irreducible algebraic variety over $k$ and let $\pi: (\mathcal X,v) \rightarrow (S,0)$ be a smooth family of autonomous differential equations indexed by $S$ defined over $k$. Assume that all the fibres of $\pi$ are absolutely irreducible and that \begin{center} $(O):$ for some $s_0 \in S(k)$, the generic type of the fibre $(\mathcal X,v)_{s_0}:= \pi^{-1}(s_0)$ is orthogonal to the constants. \end{center} Then for some/any realization $s \in S(\mathbb{C})$ of the generic type of $S$ over $k$, the generic type of $(\mathcal X,v)_s$ is also orthogonal to the constants. \end{thm} By Theorem \ref{fact} and the conclusion of Theorem \ref{nonlou}, when condition $(O)$ holds and the system $(\mathcal X,v)_s$ is two-dimensional, the system $(\mathcal X,v)_s$ has only finitely many Liouvillian solutions. Note that because the theorem only says that the \emph{generic type} is orthogonal to the constants, there might be finitely many other types of order one coming from the finitely many algebraic invariant curves. We fix $k$ a countable field of characteristic $0$ (for example, $k = \mathbb{Q}$). Set $S = \mathbb{A}^p$, the affine space of dimension $p$. By a \emph{$k$-algebraic family of rational functions indexed by $S$}, we mean a rational function $g(s,z) \in k(S)(z)$. \begin{lem} Let $f(s,z) \in k(S)(z)$. There is a dense open set $S_0 \subset S$ such that $f(s,z) \in \mathbb{C}[S_0](z)$. \end{lem} \begin{proof} Write $$f(s,z) = \frac {g(s,z)}{h(s,z)} = \frac {\sum a_i(s) z^i}{\sum_{i \geq 1} b_i(s) z^i + 1}$$ where the $a_i$'s and the $b_i$'s are in $k(S)$. Denote by $Z$ the proper closed subset of $S$ obtained as the finite union of the poles of the $a_i$'s and the $b_i$'s, and set $S_0 = S \setminus Z$. \end{proof} \begin{cor} \label{orthcor} Let $k$ be a countable field of characteristic $0$, let $g(s,z) \in k(S)(z)$ be a $k$-algebraic family of rational functions indexed by $S = \mathbb{A}^p$ and let $f(z) \in k(z)$ be a rational function with at least one non-zero residue. Assume that \begin{center} for some $s_0 \in S(k)$, the rational function $g(s_0,z)$ is identically equal to $0$. \end{center} Then for every realization $s \in S(\mathbb{C})$ of the generic type of $S$ over $k$, the generic type of $$ y'' + y'f(y) + g(s, y) = 0$$ is orthogonal to the constants.
\end{cor} Notice that the conclusion is \emph{equivalent} to the following: the property \begin{center} $O(s)$: the generic type of $ y'' + y'f(y) + g(s, y) = 0$ is orthogonal to the constants \end{center} holds on a set of full Lebesgue measure of the parameter space $S(\mathbb{C})$. \begin{proof} Without loss of generality, we can replace $S$ by an open set $S_0$ such that $g \in \mathbb{C}[S_0](z)$: since $S$ is irreducible, so is $S_0$, and $s_0 \in S_0$. Denote by $(z,z')$ the standard coordinates on $\mathbb{A}^2$, by $P$ the (finite) set of poles of $f(z)$ and by $U \subset \mathbb{A}^2$ the Zariski open set defined by $$U = \mathbb{A}^2 \setminus (P \times \mathbb{A}^1).$$ Consider $\pi: \mathcal X = U \times S_0 \rightarrow S_0$, which is obviously smooth, and with the notation of the previous lemma consider the closed subset $Z$ of $\mathcal X$ defined by: $$ 1 + \sum_{i \geq 1} b_i(s)z^i = 0,$$ describing the set of poles of $g(s,z)$ when $s$ varies in $S_0$. Since the restriction of a smooth morphism is smooth, the restriction of $\pi$ to the open set $\mathcal X_0 = \mathcal X \setminus Z$ $$\pi_0: \mathcal X_0 \rightarrow S_0$$ is also smooth. Moreover, the fibres of $\pi_0$ are absolutely irreducible since the fibres of $\pi$ are absolutely irreducible and a dense open set of an absolutely irreducible variety is also absolutely irreducible. Consider the vector field on $\mathcal{X}_0$ given in the coordinates $(z,z',s)$ by $$ v(z,z',s) = z' \frac \partial {\partial z} + \Big(- z'f(z) - g(s,z)\Big) \frac \partial {\partial z'} + 0 \frac \partial {\partial s_1} + \ldots + 0 \frac \partial {\partial s_p} .$$ By definition, the vector field $v$ is tangent to the fibres of $\pi_0$ so that $$\pi_0: (\mathcal X_0,v) \rightarrow (S_0,0)$$ is a morphism of $D$-varieties and it satisfies the ``geometric'' assumptions of Theorem \ref{nonlou} by the discussion above. \begin{claim} \label{theclaim1} Let $s \in S_0(\mathbb{C})$ and denote by $(\mathcal X_0,v)_{s}:= \pi_0^{-1}(s)$. There is a $k(s)$-definable bijection between $(\mathcal X_0,v)^\delta_{s}$ and the solution set of $y'' + y'f(y) + g(s,y) = 0.$ In particular, the generic type of one is interdefinable over $k(s)$ with the generic type of the other. \end{claim} Indeed, this is the standard correspondence between $D$-varieties and differential equations: the definable bijection is given by: $$(z,z') \mapsto z.$$ For $s_0 \in S_0$, we have shown that the definable set $y'' + y'f(y) = 0$ has Morley rank $1$ (and Morley degree $2$). Hence the generic type of this equation --- the unique type over $k$ of maximal order living on the solution set of this equation --- is a strongly minimal type of order $2$, hence orthogonal to the constants. The claim above shows that $s = s_0$ satisfies the property $(O)$. By Theorem \ref{nonlou}, we conclude that for generic $s \in S_0(\mathbb{C})$ (equivalently, for generic $s \in S(\mathbb{C})$) the generic type of $(\mathcal X_0,v)_s$ is orthogonal to the constants. Hence, using the claim above in the other direction, we obtain that the generic type of $$y'' + y'f(y) + g(s,y) = 0$$ is orthogonal to the constants for generic values of $s \in S(\mathbb{C})$ over $k$. \end{proof} \begin{exam} Let $a_0,\ldots, a_n,b_0,\ldots, b_m \in \mathbb{C}$ be $\mathbb{Q}$-algebraically independent. Then the generic type of \begin{equation} \label{ex1} y'' + \frac{y'}{y} + \frac{a_ny^n + a_{n-1}y^{n-1} + \ldots + a_0}{b_my^m + b_{m-1}y^{m-1} + \ldots + b_0} = 0 \end{equation} is orthogonal to the constants.
By Fact \ref{fact}, the generic solutions of this equation are not Liouvillian and more precisely, this equation has at most finitely many nonconstant Liouvillian solutions which are all supported by algebraic invariant curves of the equation. \end{exam} \begin{exam} Let $a \notin \mathbb{Q}^{alg}$ be a transcendental number and $g(y) \in \mathbb{Q}(y)$ arbitrary. The generic type of \begin{equation} \label{ex2} y'' + \frac{y'}{y} + ag(y)= 0 \end{equation} is orthogonal to the constants. By Fact \ref{fact}, the generic solutions of this equation are not Liouvillian and more precisely, this equation has at most finitely many nonconstant Liouvillian solutions which are all supported by algebraic invariant curves of the equation. \end{exam} \begin{rem} Systems satisfying condition $(O)$ from Theorem \ref{nonlou} yield wide classes of examples generalizing Equations \ref{ex1} and \ref{ex2}. For instance, one can replace $\frac{1}{y}$, the coefficient of $y'$ in Equations \ref{ex1} or \ref{ex2}, by any rational function $h(y)$ which has no rational antiderivative, while drawing the same conclusions. By Corollary \ref{orthcor} and Claim \ref{theclaim1}, one can replace $ag(y)$ in Equation \ref{ex2} by $g(a,y)$ where $g(s,y)$ is a $k$-algebraic family of rational functions indexed by $\mathbb A^p$ and $a \in \mathbb C^p$ is a point such that for some $k$-specialization $a_0$ of $a$, $g(a_0,y)=0$. \end{rem} \section{Algebraic relations between solutions and orthogonality in the strongly minimal case} \label{formssection} Let $x_1, \ldots , x_n $ be solutions of Equation (\ref{stareqn}). Since Equation (\ref{stareqn}) is strongly minimal by Theorem \ref{stminthm} and has constant coefficients, by Fact 5.7 and Proposition 5.8 of \cite{casale2020ax}, if $x_1, \ldots , x_n$ are not independent over some differential field $k$ extending $\mathbb C$, then there is a {\em differential} polynomial $p$ in two variables (of order zero or one) with coefficients in $\mathbb C$ such that $p(x_i, x_j,x_j')= 0.$\footnote{Note here we are already using strong minimality and triviality to deduce that the relation witnessing non-independence involves the derivative of only one of the solutions.} In this section, we go further, showing that in our case $p$ can be taken to be a \emph{polynomial} relation between $x_i$ and $x_j$ not involving any derivative. Then in the following section, we give a precise characterization of what the possible polynomial relations between solutions are in terms of basic invariants of the rational function appearing in Equation (\ref{stareqn}) (e.g. singularities, residues). \subsection{Differential forms} \label{5.1} We give some background on differential forms as this will be used heavily in this section. A general reference on the subject is \cite[Chapter 5]{lang1999fundamentals} in the context of real differential geometry. Recall that throughout, $\mathcal{U}$ is a saturated model of $DCF_0$ with constants $\mathbb C$. Let $V$ be an irreducible (affine) variety over $\mathbb C$ and let $F=\mathbb C(V)$ be its function field.
We identify $\text{Der}(F/\mathbb C)$ with the vector space of rational vector fields of $V(\mathbb C)$; that is, a derivation $D\in \text{Der}(F/\mathbb C)$ corresponds to a rational map $$V(\mathbb C)\xrightarrow{X_D} TV(\mathbb C).$$ We let $\Omega^1_V=\Omega^1(F/\mathbb C)$ be the space of rational differential $1$-forms on $V(\mathbb C)$ endowed with the universal derivation $$d: F \rightarrow \Omega^1_V.$$ For every derivation $D \in Der(F/\mathbb{C})$, there exists a unique linear map $D^\ast: \Omega^1_V \rightarrow F$ such that $D^\ast \circ d = D$. In particular, the $F$-vector spaces $\text{Der}(F/\mathbb C)$ and $\Omega^1_V$ are dual to each other. It is well known (see \cite[Chapter 2, Section 8]{Hartshorne}) that any transcendence basis $\xi_1,\ldots,\xi_r$ of $F$ over $\mathbb{C}$ gives rise to an $F$-basis $d\xi_1,\ldots, d\xi_r$ of $\Omega^1_V$, so that $$dim(V) = ldim_F(\Omega^1_V).$$ In particular, notice that if $v=(v_1,\ldots,v_n)$ is a generic point of $V(\mathcal{U})$ then $F=\mathbb C(v)$ and $\{dv_1,\ldots,dv_n\}$ contains a basis for $\Omega^1_V$. For each $n\in\mathbb N$ we define $\Omega^n_V$, the space of rational differential $n$-forms, to be the exterior algebra $\bigwedge^n\Omega^1_V$. It is the $F$-vector space of all alternating $n$-multilinear maps $$\omega: \text{Der}(F/\mathbb C)^n \rightarrow F.$$ As usual, $\Omega^n_V = \{0\}$ for $n > dim(V)$ and otherwise $ldim_F(\Omega^n_V) = {{dim(V)} \choose {n}}$. In particular, $\Omega^{dim(V)}_V$ is an $F$-vector space of dimension one and an element $\omega \in \Omega^{dim(V)}_V$ will be called a (rational) \emph{volume form} on $V$. The finite dimensional $F$-vector space $$\Omega^{\bullet}_V = F \oplus \Omega^1_V \oplus \ldots \oplus \Omega^{dim(V)}_V$$ is endowed with the structure of an anticommutative graded $F$-algebra given by the wedge product characterized by the two properties: \begin{itemize} \item[(i)] $\wedge$ is $F$-bilinear. \item[(ii)] for all $1$-forms $\omega_1,\ldots, \omega_k \in \Omega^1_V$, $$ (\omega_1 \wedge \ldots \wedge \omega_k):(D_1,\ldots, D_k) \mapsto \det\big((\omega_i(D_j))_{i,j \leq k}\big). $$ \end{itemize} On top of that, the universal derivative $d: F \rightarrow \Omega^1_V$ extends uniquely into a complex (that is, $d \circ d = 0$) of $F$-vector spaces: $$0 \rightarrow F \xrightarrow{d} \Omega^1_V \xrightarrow{d} \Omega^2_V \xrightarrow{d} \ldots \xrightarrow{d} \Omega^{dim(V)}_V \rightarrow 0$$ characterized by the following compatibility condition with $\wedge$: for every $p$-form $\omega_1$ and $q$-form $\omega_2$, $$ d(\omega_1 \wedge \omega_2) = d\omega_1 \wedge \omega_2 + (-1)^p \omega_1 \wedge d\omega_2.$$ We refer to \cite{lang1999fundamentals} for more details on the construction outlined above. \begin{defn} Given a derivation $D \in Der(F/\mathbb{C})$, we describe two operations on $\Omega^\bullet_V$ naturally attached to $D$, initially considered by E. Cartan: \begin{itemize} \item[(1)] the \emph{interior product} $i_D: \Omega^n_V \rightarrow \Omega^{n-1}_V$ is the contraction by the derivation $D$: $$i_D\omega(D_1,\ldots,D_{n-1})=\omega(D,D_1,\ldots,D_{n-1}).$$ \item[(2)] The \emph{Lie derivative} $L_D: \Omega^n_V \rightarrow \Omega^n_V$ is defined using ``Cartan's magic formula'' $$L_D=i_D\circ d+d\circ i_D.$$ \end{itemize} \end{defn}
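As a concrete illustration of Cartan's formula (a routine verification, not needed later), consider a $1$-form $\omega = g\,dx$ for $g, x \in F$. Then $i_D(g\,dx) = g\,D(x)$ and $d(g\,dx) = dg \wedge dx$, so
$$L_D(g\,dx) = i_D(dg \wedge dx) + d(g\,D(x)) = \big(D(g)\,dx - D(x)\,dg\big) + \big(D(x)\,dg + g\,d(D(x))\big) = D(g)\,dx + g\,d(D(x)).$$
In particular $L_D(dx) = d(D(x))$, in accordance with the identity $L_D(f\omega)=D(f)\omega+fL_D(\omega)$ recorded in the Fact below.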
Notice that one can use a different approach to define the Lie derivative based on the Lie bracket of vector fields as described in the first section of \cite{jaoui2020foliations}. Moreover, the Lie derivative $L_D$ corresponds to the derivation $D$ defined on $\Omega^\bullet_V$ by Brestovski on page 12 of \cite{brestovski1982deviation}. \begin{fact} For $f\in F$, $\omega,\omega_1,\omega_2\in\Omega^n_V$ and $D\in \text{Der}(F/\mathbb C)$, we have the following well-known identities: \begin{eqnarray*} L_D(f\omega) &=& D(f)\omega+fL_D(\omega), \\ L_D(\omega_1 \wedge \omega_2) &=& L_D(\omega_1) \wedge \omega_2 + \omega_1 \wedge L_D(\omega_2), \\ L_D (d \omega)&=& d L_D(\omega) , \\ L_{fD}(\omega) &=& f L_D (\omega) + df \wedge i_D (\omega),\\ i_D(\omega_1 \wedge \omega_2) &=& i_D(\omega_1) \wedge \omega_2 + (-1)^n \omega_1 \wedge i_D(\omega_2). \end{eqnarray*} \end{fact} See \cite[Proposition 5.3, p. 142]{lang1999fundamentals} for a proof of these identities. The main definition of this section is the following: \begin{defn} Let $(E): y^{(n)} = f(y,y',\ldots, y^{(n-1)})$ be a complex autonomous equation of order $n$ where $f$ is a rational function of $n$ variables. If $V = \mathbb{C}^n$ with coordinates $x_0,\ldots, x_{n-1}$, the equation $(E)$ defines a derivation $D_f \in Der(\mathbb{C}(V)/\mathbb{C})$ given by: $$D_f(x_i) = x_{i +1} \text{ for } i < n - 1 \text{ and } D_f(x_{n-1}) = f(x_0,\ldots,x_{n-1}).$$ We say that a volume form $\omega \in \Omega^{n}_V$ is an \emph{invariant volume form} for the equation $(E)$ if $$L_{D_f}(\omega) = 0.$$ \end{defn} Before going into further detail, we first give an analytic interpretation explaining the terminology, although it will not be needed in our analysis. Consider $$(E): y^{(n)} = f(y,y',\ldots, y^{(n-1)})$$ a differential equation as above and $\omega$ a (rational) volume form on $V = \mathbb{C}^n$. Denote by $U$ the (dense) open set of $V$ obtained by throwing away the poles of $f$ and the poles of $\omega$. As described above, the derivation $D_f$ gives rise to a vector field $s_f$ on $U$, namely the section $s_f: U \rightarrow T(U) \simeq U \times \mathbb{C}^n$ given by $$s_f(x_0,\ldots,x_{n-1}) = (x_0,\ldots,x_{n-1};x_1,\ldots,x_{n-1}, f(x_0,\ldots,x_{n-1})).$$ By definition, every point $\overline{a} \in U$ is a nonsingular point of the vector field $s_f$. The classical analytic theorem of local existence and uniqueness for the integral curves of a vector field implies that there exists an analytic function $$ \phi: U_{\overline{a}} \times \mathbb{D} \subset U \times \mathbb{C} \rightarrow U,$$ where $U_{\overline{a}}$ is an analytic neighborhood of $\overline{a}$ and $\mathbb{D}$ a complex disk, such that for every $\overline{b} \in U_{\overline{a}}$, the function $t \mapsto \phi(\overline{b}, t)$ is the local analytic solution of the initial value problem $$ \frac{d\phi}{dt} = (\pi_2 \circ s_f)(\phi(t)) \text{ and } \phi(0) = \overline{b}. $$ We will call $\phi$ the local flow of the vector field $s_f$ around $\overline{a}$. The germ of $\phi$ at $(\overline{a},0)$ is determined by the vector field $s_f$. \begin{fact} With the notation above, the volume form $\omega$ is invariant for the equation $(E)$ if and only if for every $\overline{a} \in U$, the local flow $\phi$ around $\overline{a}$ preserves the volume form $\omega$: namely $$\text{ for every } t \in \mathbb{D}, \phi_t^\ast \omega_{\phi_t(\overline{a})} = \omega_{\overline{a}}$$ where $\phi_t: U_{\overline{a}} \rightarrow U$ is the function defined by $\phi_t(\overline{b}) = \phi(\overline{b},t)$ and $\omega_p$ denotes the germ of $\omega$ around $p$. \end{fact} This follows from the formula on
p. 140 of \cite{lang1999fundamentals}: for every $p$-form $\omega$, $$\mathcal L_v(\omega) = \frac{d}{dt}_{\mid t = 0} \phi_t^\ast \omega,$$ using the same proof as that of \cite[Proposition 3.2.1]{jaoui2020foliations}. We don't give more details here since this analytic interpretation will not be needed in the rest of the paper. Instead, in the following subsections, we will use invariant volume forms together with the following result, which follows from the work of Ax \cite{ax1971schanuel} and can be found explicitly in \cite[Proposition 4]{Rosenlitch2} or \cite[Lemma 6.10]{Marker96modeltheory}. \begin{fact}\label{AxLemma} Let $V$ be an irreducible affine variety over $\mathbb C$ and let $F=\mathbb C(V)$ be its function field. Let $u_1,\ldots,u_n,v\in F$ be such that all the $u_i$'s are nonzero. Suppose $c_1,\ldots,c_n\in\mathbb{C}^\ast$ are linearly independent over $\mathbb{Q}$ and let $$\omega=dv+\sum_{i=1}^nc_i\frac{du_i}{u_i}.$$ Then $\omega=0$ in $\Omega^1_V$ if and only if $du_1=\ldots=du_n=dv=0$, i.e. $u_1,\ldots,u_n,v\in\mathbb{C}$. \end{fact} \subsection{A warm-up case} \label{5.2} The techniques we will use in our setting already have (known) strong consequences for order one differential equations, where the arguments are often simpler. For instance, the method we use allows us to give a proof of a result of Hrushovski and Itai \cite[2.22]{HrIt}. \begin{lem} \label{twovolumeforms} Let $V$ be an irreducible (affine) algebraic variety of dimension $n$ and let $D \in Der(\mathbb{C}(V)/\mathbb{C})$ be a derivation. Assume that the constant field $\mathbb{C}(V)^D$ of $(\mathbb{C}(V),D)$ is equal to $\mathbb{C}$. The space of invariant volume forms $$ \Omega^{n}_{V,D} = \lbrace \omega \in \Omega^n_V \mid L_D(\omega) = 0\rbrace$$ is a complex vector space of dimension $\leq 1$. \end{lem} \begin{proof} Clearly, $\Omega^{n}_{V,D}$ is a complex vector space. It remains to show that any two non-zero invariant volume forms $\omega_1,\omega_2 \in \Omega^{n}_{V,D}$ are linearly dependent. Since $\Omega^n_V$ is a $\mathbb{C}(V)$-vector space of dimension one, there exists $f \in \mathbb{C}(V)$ such that $\omega_1 = f \omega_2$. Computing $L_D$ on both sides, we get: $$0 = L_D(\omega_1) = L_D(f\omega_2) = D(f)\omega_2 + f L_D(\omega_2) = D(f) \omega_2.$$ Since $\omega_2 \neq 0$, we get $D(f) = 0$ which implies $f \in \mathbb{C}$. \end{proof} In general, this vector space may very well be the trivial vector space, but when $V = \mathbb{P}^1$ (or more generally when $V$ is a curve), an easy computation shows that the Hrushovski-Itai $1$-form is always an invariant volume form, so that this vector space is always one-dimensional. \begin{lem} Consider two differential equations of order one of the form: $$(E_1): x' = f(x) \text{ and } (E_2):y' = g(y),$$ and denote by $c_1,\ldots,c_r$ the residues of $1/f(x)$ and by $d_1,\ldots,d_s$ the residues of $1/g(y)$. We assume that $1/f(x)$ and $1/g(y)$ have at least one nonzero residue and that $c_1,\ldots,c_r$ are $\mathbb{Q}$-linearly disjoint from $d_1,\ldots,d_s$. That is: $$ldim_\mathbb{Q}(c_1,\ldots,c_r) + ldim_\mathbb{Q}(d_1,\ldots,d_s) = ldim_\mathbb{Q}(c_1,\ldots,c_r,d_1,\ldots,d_s).$$ Then $(E_1)$ and $(E_2)$ are weakly orthogonal. \end{lem} \begin{proof} First notice that both the equations $(E_1)$ and $(E_2)$ admit invariant volume forms, namely the $1$-forms $$\omega_1 = \frac {dx}{f(x)} \text{ and } \omega_2 = \frac {dy}{g(y)} $$ associated by Hrushovski and Itai to the equations $(E_1)$ and $(E_2)$.
By Lemma \ref{twovolumeforms}, every invariant volume form will be a constant multiple of these forms. So $\omega_1$ and $\omega_2$ are the unique invariant volume forms of $(E_1)$ and $(E_2)$ normalized by $$\omega_i(s_i) = 1 \text{ for }i = 1,2,$$ where $s_1(x) = f(x) \frac d {dx}$ and $s_2(y) = g(y) \frac d {dy}$ are the vector fields associated with the derivation $D_f$ on $\mathbb{C}(x)$ and $D_g$ on $\mathbb{C}(y)$ respectively. For the sake of a contradiction, assume that these two equations are not weakly orthogonal: this means that there exists a closed generically finite-to-finite correspondence $Z \subset \mathbb{P}^1 \times \mathbb{P}^1$ which is invariant under the derivation $D_f \times D_g$ associated with the product vector field $s_1(x) \times s_2(y)$ on $\mathbb{P}^1 \times \mathbb{P}^1$. Without loss of generality, we can assume that $Z$ is irreducible. Consider the two pull-backs of the two $1$-forms $\omega_1$ and $\omega_2$ (by the respective projections) to $\mathbb{P}^1 \times \mathbb{P}^1$, which are still given by the formulas above in the coordinates $(x,y)$ on $\mathbb{P}^1 \times \mathbb{P}^1$. Since $$\mathcal L_{D_f \times D_g}(\omega_1) = \mathcal L_{D_f}(\omega_1) = 0$$ and similarly for $\omega_2$, both $\omega_1$ and $\omega_2$ are invariant $1$-forms for the derivation $D_f \times D_g$ on $\mathbb{P}^1 \times \mathbb{P}^1$. It follows that their restrictions $\omega_{1 \mid Z}$ and $\omega_{2 \mid Z}$ are two invariant volume forms on $Z$ endowed with the derivation induced by $D_f \times D_g$ on $\mathbb{C}(Z)$. By Lemma \ref{twovolumeforms}, we conclude that for some $c \in \mathbb{C}$, $$(\omega_1 - c \omega_2)_{\mid Z} = 0.$$ Noting the normalization in our case, we see $$1 = \omega_1(s_1 \times s_2) = c\omega_2(s_1 \times s_2) = c,$$ so that in fact $c = 1$ and the one-form $\omega_1 - \omega_2$ vanishes identically on $Z$. Write \begin{eqnarray*} \frac{1}{f(x)} = \frac{df_1}{dx} + \sum \frac{c_i}{x - a_i} \\ \frac{1}{g(y)} = \frac{dg_1}{dy} + \sum \frac{d_j}{y - b_j}. \end{eqnarray*} Using this notation, the vanishing of $\omega_1 - \omega_2$ gives an equality of $1$-forms on $Z$: \begin{eqnarray*} d(g_1 - f_1) &=& \sum c_i\frac{d(x-a_i)}{x - a_i} - \sum d_j\frac{d(y-b_j)}{y - b_j} \\ & =& \sum \alpha_i \frac{df_i} {f_i} + \sum \beta_j\frac{dg_j}{g_j}, \end{eqnarray*} where the $\alpha_i$ form a $\mathbb{Q}$-basis of the $\mathbb{Q}$-span of $c_1,\ldots,c_r$, the $\beta_j$ form a $\mathbb{Q}$-basis of the $\mathbb{Q}$-span of $d_1,\ldots, d_s$, and $f_i(x) \in \mathbb{C}(x), g_j(y) \in \mathbb{C}(y)$. Note that a linear combination of logarithmic derivatives can always be rewritten as a sum of logarithmic derivatives in which the coefficients are linearly independent over $\mathbb Q$; see the Remark on page 76 of \cite{Marker96modeltheory}. The assumption of linear disjointness means that the $\beta_j$ and the $\alpha_i$ together form a $\mathbb{Q}$-linearly independent set. By Fact \ref{AxLemma}, $f_i(x)$ is constant on $Z$ for all $i$ and $g_j(y)$ is constant on $Z$ for all $j$. Since $1/f(x)$ and $1/g(y)$ have at least one non-zero residue, we conclude that $Z$ cannot project dominantly on the solution sets of $(E_1)$ and $(E_2)$. Contradiction. \end{proof} \subsection{Our setting} \label{5.3} Let $f\in\mathbb C(z)$ be a rational function and consider the associated equation $(\star)$. Let $V=\mathbb C^2\setminus Z_f$ in coordinates $(x,y)$, where $Z_f$ is the union of the horizontal line $y = 0$ and, for each pole $a$ of $f$, the vertical line $x = a$.
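For instance (an illustration only), in the Poizat case $f(z) = \frac{1}{z}$ the only pole of $f$ is $z = 0$, so $$Z_f = \lbrace y = 0 \rbrace \cup \lbrace x = 0 \rbrace \quad \text{and} \quad V = \lbrace (x,y) \in \mathbb C^2 \mid xy \neq 0 \rbrace.$$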
Consider the section of the tangent bundle $s_f:V\rightarrow T(V)$ $$s_f(x,y)=(x,y,y,yf(x)).$$ Let $\pi_2 s_f (x,y) := (y,yf(x)).$ Then we showed, in Section 2, that if $f(z)$ is such that $f(z)\neq\frac{d h}{dz}$ for any $h\in \mathbb C(z)$, it follows that $(V,s_f)^\#=\{(x,y)\in V(\mathcal{U}): x^\prime=y\land y^\prime=yf(x)\}$ is a geometrically trivial strongly minimal set. This section $s_f$ gives rise to the derivation $D_{f}\in \text{Der}(\mathbb C(V)/\mathbb C)$ given by $$D_{f}(h)=y \frac{\partial h} {\partial x}+{yf(x)}\frac{\partial h} {\partial y}.$$ In particular, $D_{f}(x)=y$, $D_{f}(y)=yf(x)$ and $D_{f}(\frac{1} {y})={\frac{-f(x)} {y}}.$ \begin{lem}\label{Volume form} For any $f\in\mathbb C(z)$, the derivation (or vector field) $D_{f}$ preserves the volume form $$\omega={{dx\wedge dy}\over y}\in \Omega^{2}_V;$$ that is, $L_{D_f}(\omega)=0$. \end{lem} \begin{proof} We compute: \begin{eqnarray} L_{D_f}(\omega)&=&L_{D_f}({{dx\wedge dy}\over y})\notag\\ &=&D_f({1\over y})({dx\wedge dy})+{1\over y}L_{D_f}({dx\wedge dy})\notag\\ &=&{-f(x)\over y}({dx\wedge dy})+{1\over y}\left[{dy\wedge dy} +dx\wedge(f'(x)ydx+f(x)dy)\right]\notag\\ &=&{-f(x)\over y}({dx\wedge dy})+{1\over y}\left[f'(x)y\,dx\wedge dx+f(x)\,dx\wedge dy\right]\notag\\ &=&0\notag \end{eqnarray} \end{proof}
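Let us also record the following one-line computation (stated here for convenience; it reappears in the proof of Lemma \ref{constantlem} below): since $i_{D_f}(dx\wedge dy) = D_f(x)\,dy - D_f(y)\,dx$, we have $$i_{D_f}\left(\frac{dx\wedge dy}{y}\right) = \frac{y\,dy - yf(x)\,dx}{y} = dy - f(x)\,dx.$$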
Now assume that $f(z),g(z)\in \mathbb C(z)$ are such that neither $f(z)$ nor $g(z)$ can be written as $\frac{d h}{dz}$ for some $h\in \mathbb C(z)$. We do not exclude here the possibility that $f(z)=g(z)$. Let $V=\mathbb C^2\setminus Z_f$ and $W=\mathbb C^2\setminus Z_g$ with coordinates $(x_1,y_1)$ and $(x_2,y_2)$ respectively. Assume that the two strongly minimal definable sets $(V,s_f)^\#$ and $(W,s_g)^\#$ are nonorthogonal. Then since they are geometrically trivial, they are not weakly orthogonal. So there is a closed complex $D_f\times D_g$-invariant generically finite-to-finite correspondence $Z\subset V\times W$ witnessing nonorthogonality. We write $$\omega_1={{dx_1\wedge dy_1}\over y_1}\in \Omega^{2}_V\;\;\text{and}\;\;\omega_2={{dx_2\wedge dy_2}\over y_2}\in \Omega^{2}_W$$ for the corresponding $2$-forms. From Lemma \ref{Volume form}, we have that $L_{D_f}(\omega_1)=L_{D_g}(\omega_2)=0$. We will now view $\omega_1$ and $\omega_2$ as $2$-forms on $Z$, which are volume forms since $Z$ is a finite-to-finite correspondence ($\text{tr.deg.}_{\mathbb C}\mathbb C(Z)=2$). More precisely we let $\widetilde{\omega}_1$ be the $2$-form on $Z$ defined as the pullback of $\omega_1$ by the projection map $\pi_1:Z\rightarrow V$. The form $\widetilde{\omega}_2$ is defined similarly. By construction we have that $$L_{D_f\times D_g}(\widetilde{\omega}_1)=L_{D_f}(\omega_1)=0.$$ A similar expression holds for $\widetilde{\omega}_2$. \begin{lem}\label{innerproduct} Let $Z$ be as above. Then there exists $c\in\mathbb C^*$ such that $$i_{D_f\times D_g}\left({{dx_1\wedge dy_1}\over y_1}-c\cdot {{dx_2\wedge dy_2}\over y_2} \right)=0$$ where $i_{D_f\times D_g}$ is the interior product. \end{lem} \begin{proof} Since $Z$ is $2$-dimensional, the space of rational $2$-forms on $Z$ is a $\mathbb C(Z)$-vector space of dimension one. So there exists $h\in \mathbb C(Z)$ such that $$\widetilde{\omega}_1=h\widetilde{\omega}_2.$$ We hence have that \begin{eqnarray} 0&=&L_{D_f\times D_g}(\widetilde{\omega}_1)\notag\\ &=&L_{D_f\times D_g}(h\widetilde{\omega}_2)\notag\\ &=&(D_f\times D_g)(h)\widetilde{\omega}_2+hL_{D_f\times D_g}(\widetilde{\omega}_2)\notag\\ &=&(D_f\times D_g)(h)\widetilde{\omega}_2\notag \end{eqnarray} So $h$ is in the constant field of $D_f\times D_g$ in $\mathbb C(Z)$. Since the equations are orthogonal to $\mathbb C$, we have that $h\in\mathbb C$; moreover $h\neq 0$ since $\widetilde{\omega}_1\neq 0$. Writing $c$ for $h$, we have on $Z$ that the two form $\widetilde{\omega}_1-c\widetilde{\omega}_2={{dx_1\wedge dy_1}\over y_1}-c\cdot {{dx_2\wedge dy_2}\over y_2}$ is identically $0$. Furthermore, on $Z$, the $1$-form obtained by applying the interior product $i_{D_f\times D_g}$ is $0$ and the result follows. \end{proof} \begin{lem} \label{constantlem} Let $Z$ be as above, then there is $c\in\mathbb C^*$ such that on $Z$ $$dy_1-f(x_1)dx_1-c(dy_2-g(x_2)dx_2)=0\in\Omega_Z^1$$ \end{lem} \begin{proof} We will use the formula $$i_D(\upsilon_1\wedge\upsilon_2)=i_D(\upsilon_1)\wedge\upsilon_2-\upsilon_1\wedge i_D(\upsilon_2)$$ where $D$ is any derivation and $\upsilon_1,\upsilon_2$ are $1$-forms. Starting with Lemma \ref{innerproduct}, \begin{eqnarray} 0&=&i_{D_f\times D_g}\left({{dx_1\wedge dy_1}\over y_1}-c\cdot {{dx_2\wedge dy_2}\over y_2} \right)\notag\\ &=&i_{D_f}({{dx_1\wedge dy_1}\over y_1})-c\cdot i_{D_g}({{dx_2\wedge dy_2}\over y_2})\notag\\ &=&{i_{D_f}(dx_1)\wedge dy_1-dx_1\wedge i_{D_f}(dy_1)\over y_1}-c\cdot{i_{D_g}(dx_2)\wedge dy_2-dx_2\wedge i_{D_g}(dy_2)\over y_2}\notag\\ &=&{y_1dy_1-y_1f(x_1)dx_1\over y_1}-c\cdot{y_2dy_2-y_2g(x_2)dx_2\over y_2}\notag\\ &=& dy_1-f(x_1)dx_1-c(dy_2-g(x_2)dx_2)\notag \end{eqnarray} \end{proof} \begin{prop} Let $Z$ be as above. Then $Z$ is contained in a closed hypersurface of $V\times W$ of the form $Z(p)$ for some $p\in\mathbb C[x_1,x_2]$. \end{prop} \begin{proof} Recall that by assumption for some $f_1,g_1\in\mathbb C(z)$, we have that $$f(x_1)=\frac{d f_1}{dx_1}+\sum{\frac{c_i}{x_1-a_i}}$$ and $$g(x_2)=\frac{d g_1}{dx_2}+\sum{\frac{d_j}{x_2-b_j}}$$ where at least one of the $c_i$'s and one of the $d_j$'s is non-zero. Multiplying the above equations by $dx_1$ and $dx_2$ respectively and using $$dy_1-f(x_1)dx_1-c(dy_2-g(x_2)dx_2)=0$$ we get $$d(y_1-cy_2-f_1(x_1)+cg_1(x_2))=\sum{c_i\cdot\frac{d(x_1-a_i)}{x_1-a_i}}-\sum{cd_j\cdot\frac{d(x_2-b_j)}{x_2-b_j}}.$$ We use here that $d(x_1-a_i)=dx_1$ and $d(x_2-b_j)=dx_2$. Consider the $\mathbb Q$-linear span of $\{c_i,cd_j\}$ --- which is a non-trivial vector space since $f(z)$ (and likewise $g(z)$) has at least one simple pole with non-zero residue --- and extract a $\mathbb Q$-basis $\{e_1,\ldots,e_s\}$ (so $s\geq 1$). If we divide all the $e_k$'s by some $N\gg0$, we can assume that the $c_i$'s and $cd_j$'s are in the $\mathbb Z$-span and get that $$d(y_1-cy_2-f_1(x_1)+cg_1(x_2))=\sum{e_k{dh_k\over h_k}}$$ where $h_k\in\mathbb C(x_1,x_2)$ has the specific form $$h_k = \prod (x_1 - a_i)^{n(c_i,k)} \prod (x_2-b_j)^{-n(cd_j,k)}$$ and $n(c_i,k)$ (resp. $n(cd_j,k)$) denotes the coefficient of $c_i$ (resp. $cd_j$) relative to $e_k$ in the basis $e_1,\ldots,e_s$. But by Fact \ref{AxLemma}, it must be that $$d(y_1-cy_2-f_1(x_1)+cg_1(x_2))=0\;\;\text{and}\;\;{{dh_k}}=0.$$ Hence for $k=1$ as an example, we get that $$h_1(x_1,x_2)=\lambda$$ for some constant $\lambda\in \mathbb C$. Since $Z$ projects dominantly on $V$ and $W$, clearing denominators we get a non-trivial polynomial relation between $x_1$ and $x_2$ as required.
\end{proof} To summarize, in this subsection, we have shown \begin{prop}\label{StrongTriviality} Let $f(z),g(z)\in \mathbb C(z)$ be such that neither $f(z)$ nor $g(z)$ can be written as $\frac{d h}{dz}$ for some $h\in \mathbb C(z)$. Suppose that $x$ and $y$ are solutions to the strongly minimal equations $$\frac{z''}{z'} = f(z)\;\;\;\text{and}\;\;\;\frac{z''}{z'} = g(z)$$ respectively. Let $K$ be any differential extension of $\mathbb C$ such that $K\gen{y}^{alg}=K\gen{x}^{alg}$. Then $\mathbb C(x)^{alg}=\mathbb C(y)^{alg}$. \end{prop} In the next section we classify the algebraic relations between solutions in detail in the case that $\mathbb C(x)^{alg}=\mathbb C(y)^{alg}$, in particular showing that there are only finitely many, depending on basic invariants of the rational functions $f,g$. \section{Algebraic relations between solutions} \label{algrelsection} \begin{thm}\label{alg-solutions-with-derivatives} Let $f_1(z),\ldots, f_n(z) \in \mathbb{C}(z)$ be rational functions such that each $f_i(z)$ is not the derivative of a rational function in $\mathbb{C}(z)$ and consider for $i = 1,\ldots, n$, $y_i$ a solution of $$(E_i): y''/y' = f_i(y).$$ Then $trdeg_\mathbb{C}(y_1,y'_1,\ldots, y'_n,y_n) = 2n$ unless for some $i \neq j$ and some $(a,b) \in \mathbb{C}^\ast \times \mathbb{C}$, $y_i = ay_j + b$. In that case, we also have $f_i(z) = f_j(az + b)$. \end{thm} Notice that we do not exclude the case where some of the $f_i(z)$ are equal in this statement. \begin{proof} By Theorem \ref{stminthm} and Corollary \ref{triviality}, we already know that each of the equations $$(E_i): y''/y' = f_i(y)$$ is strongly minimal and geometrically trivial. It follows that if $y_1,\ldots, y_n$ are solutions of $(E_1),\ldots,(E_n)$ such that $trdeg_\mathbb{C}(y_1,y'_1,\ldots, y'_n,y_n) < 2n$, then for some $i \neq j$, $$trdeg_\mathbb{C}(y_i,y'_i,y_j,y'_j) < 4.$$ Since none of the equations $(E_i)$ admits a constant solution, $y_i$ and $y_j$ must realize the generic types of $(E_i)$ and $(E_j)$ respectively, and using strong minimality we can conclude that $$trdeg_\mathbb{C}(y_i,y'_i,y_j,y'_j) = 2 \text{ and } \mathbb{C}(y_i,y_i')^{alg} = \mathbb{C}(y_j,y_j')^{alg}.$$ Proposition \ref{StrongTriviality} now implies that in fact $\mathbb{C}(y_i)^{alg} = \mathbb{C}(y_j)^{alg}$. To simplify the notation, set $f(z) = f_i(z)$, $g(z) = f_j(z)$, $x = y_i$ and $y = y_j$, so that $x$ and $y$ are interalgebraic over $\mathbb C$. First note that the derivation on ${\mathbb C}(x)^{\rm alg}$ has image in the module ${\mathbb C}(x)^{\rm alg}x^\prime$: if $F(x,z)=0$ for some $z$, then we have that $z^\prime=-{F_x(x,z)\over F_z(x,z)}x^\prime$. Thus there are $\alpha,\beta\in {\mathbb C}(x)^{\rm alg}$ such that $y^\prime=\alpha x^\prime$ and $\alpha^\prime=\beta x^\prime$. Then $$y^{\prime\prime}=\beta (x^\prime)^2+\alpha (x^{\prime\prime})=\beta (x^\prime)^2+\alpha f(x)x^\prime$$ but also $$y^{\prime\prime}=g(y)y^\prime=\alpha g(y)x^\prime.$$ Since $x^\prime\ne 0$, $$\beta x^\prime=\alpha(g(y)-f(x)).$$ If $\beta\ne 0$, then $x'=\frac{\alpha(g(y)-f(x))}{\beta}\in{\mathbb C}(x)^{\rm alg}$, contradicting strong minimality. Hence $\beta=0$. Since $\alpha^\prime=\beta x^\prime=0$ and $y^\prime$ ($=\alpha x^\prime$) is not zero, we get that $\alpha\in {\mathbb C}^\times$. Using $y^\prime=\alpha x^\prime$ we also obtain that $y=\alpha x+ b$ for some $b\in\mathbb{C}$. Finally, $\beta=0$ also implies that $f(x)-g(y)=0$ and hence $f(x)=g(y)=g(\alpha x+b)$.
\end{proof} In the rest of this section, we will derive some consequences of Theorem \ref{alg-solutions-with-derivatives} on the structure of the solution sets of these equations. First let us recall the definitions of what it means for an equation to have no or little structure. \begin{defn} Suppose that $X$ is a geometrically trivial strongly minimal set defined over some differential field $K$. Then, $X$ is said to be {\em $\omega$-categorical} if for any $y\in X$, the set $X\cap K\gen{y}^{alg}$ is finite. Moreover, if $X\cap K\gen{y}^{alg}=\{y\}$, then we say that $X$ is {\em strictly disintegrated}. \end{defn} \begin{exam} In the Poizat example, that is when $f(z)=\frac{1}{z}$, the requirement $\frac{1}{az+b}=\frac{1}{z}$ gives that $a=1$ and $b=0$. Hence, the Poizat example is strictly disintegrated. \end{exam} \begin{exam} Consider now the case where $f(z)=\frac{1}{z-a}-\frac{1}{z-b}$, where $a,b\in\mathbb C$ are distinct. Then it is not hard to see that $f(-z+a+b)=f(z)$. Hence it follows that the strongly minimal equation $\frac{z''}{z'}=\frac{1}{z-a}-\frac{1}{z-b}$ is not strictly disintegrated. Moreover, we will show that it is $\omega$-categorical. \end{exam} We now focus on the case when $f(z)=g(z)$ and will further study the condition $f(ax+b) =f(x)$. Recall that $f(z)$ is such that $\frac{z''}{z'} = f(z)$ is strongly minimal. We write $f(z) = \frac{dh}{dz} + \sum _{i=1}^n \frac{c_i }{z- a _i}$ and so $f(ax+b) =f(x)$ gives $$\frac{dh}{dz} (\beta (x)) + \sum _{i=1}^n \frac{c_i }{\beta (x)- a _i} = f(x) $$ where $\beta(x)=ax+b$. Since $\beta$ has such a simple form, it is easy to see that $\beta$ must permute the set of $a_i$, the points at which $f$ has a nonzero residue, or else $f(ax+b)\neq f(x)$. So, bounding the size of the setwise stabilizer of the collection of $a_i$ will bound the number of nontrivial algebraic relations between solutions. In what follows, we let $A$ be the collection of $a_i$ at which $f(x)$ has a nontrivial residue and $G_1$ be the stabilizer of $A.$ We assume that the points of $A$ form a single orbit under $G_1$; otherwise, replace $A$ by one of the orbits. Our arguments below will only depend on the size of any particular set stabilized by the affine transformations which induce algebraic relations. For some $n$, $\beta ^n $ is in the \emph{pointwise} stabilizer of the collection of $a_i \in A$. If there is more than one $a_i,$ then $\beta ^n$ is the identity, since the pointwise stabilizer of two distinct points under the group of affine transformations is trivial (e.g. directly from the stabilizer condition, one gets two linearly independent equations for $a,b$, and of course $a=1, b=0$ is a solution of the system --- thus the unique solution). So, $\beta $ is torsion in the group of affine transformations. We will represent the group of affine transformations in the standard manner: $$\left\{ \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} \, \middle| \, a,b \in \mathbb C \right\}.$$ The natural action on $x \in \mathbb C$ is given by matrix multiplication on the vector $\begin{pmatrix} x \\ 1 \end{pmatrix}$. One can show that the elements of finite order in this group are precisely those in which $a$ is a root of unity of some order greater than one, together with the identity element. When $a$ is a primitive $k^{th}$ root of unity, the cyclic subgroup of the affine group generated by $\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}$ is of order $k$.
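For the reader's convenience, here is the verification of this last claim (a routine matrix computation): $$\begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix}^m = \begin{pmatrix} a^m & b(1 + a + \cdots + a^{m-1}) \\ 0 & 1 \end{pmatrix},$$ so if $a$ is a primitive $k^{th}$ root of unity with $k > 1$, then for $m = k$ we have $a^k = 1$ and $1 + a + \cdots + a^{k-1} = \frac{a^k - 1}{a - 1} = 0$, so the $k^{th}$ power is the identity, while $a^m \neq 1$ for $0 < m < k$. Hence the subgroup generated has order exactly $k$. If instead $a = 1$ and $b \neq 0$, the powers $\begin{pmatrix} 1 & mb \\ 0 & 1 \end{pmatrix}$ are pairwise distinct, so the element has infinite order.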
If $|A|=1$, then a simple argument shows that there are no nontrivial affine transformations which preserve $f(x)$. In the case that $|A|>1$, there is an upper bound on the number of affine transformations preserving $f$ in terms of $|A|$ (the same argument works with any set known to be stabilized by the action). \begin{claim} \label{2transcl} When $|A|=n,$ $|G_1|\leq n(n-1).$ \end{claim} \begin{proof} First, note that the action of the affine group on the affine line is \emph{sharply $2$-transitive}, meaning that for any two pairs of distinct elements $(c_1,c_2)$ and $(d_1,d_2)$ in $\mathbb C^2$, there is precisely one affine transformation which maps $(c_1,c_2)$ to $(d_1,d_2).$ Thus, the action of an element of the setwise stabilizer on the collection of $a_i$ is determined by the images of $a_1$ and $a_2.$ Since their images are in the collection $\{a_1, \ldots , a_n \}$, there are at most $n(n-1)$ choices for their images, of which at most $n(n-1)-1$ correspond to nontrivial affine transformations. Thus the setwise stabilizer is of size at most $n(n-1).$ \end{proof} \begin{cor} Let $f(z) \in \mathbb{C}(z)$ be a rational function which is not the derivative of a rational function. Then the solution set of the equation $(E): y''/y' = f(y)$ is $\omega$-categorical. \end{cor} \begin{proof} We already proved that the equation is strongly minimal and geometrically trivial. All we need to show is that if $y$ is a solution of $(E)$, then $\mathbb{C}\gen{y}^{alg}$ only contains finitely many solutions of $(E)$. By Theorem \ref{alg-solutions-with-derivatives} applied to $f_1(y) = f_2(y) = f(y)$, we see that if $y_1 \in \mathbb{C}\gen{y}^{alg}$ is a solution of $(E)$, then $y_1 = ay + b$ for some $(a,b) \in \mathbb{C}^\ast \times \mathbb{C}$ such that $z \mapsto az + b$ belongs to the stabilizer of $f(z)$ under the action of ${\rm Aff}_2(\mathbb{C})$ on $\mathbb{C}(z)$ by precomposition. It follows that if $k$ is the number of solutions of $(E)$ in $\mathbb{C}\gen{y}^{alg}$, then $$ k = |Stab(f(z))| \leq n(n-1) $$ where $n$ is the number of complex points at which $f(z)$ has a non-zero residue. \end{proof} Actually, the bound for $k$ obtained above is not sharp. Before improving the bound, we first give an example for which $k$ is maximal given the number of non-zero residues of $f(y)$.
\begin{exam} Let $n \geq 2$, let $c \in \mathbb{C}^\ast$, let $\xi$ be a primitive $n$-th root of unity and let $g(z) \in \mathbb{C}(z)$ be a rational function. Consider $$ f(z) = c\cdot\sum_{k = 0}^{n-1} \frac{\xi^k}{z - \xi^k} + g(z^n) \in \mathbb{C}(z).$$ We claim that $f(\xi z) = f(z)$. Indeed, obviously $g((\xi z)^n) = g(z^n)$ and moreover \begin{eqnarray*} \sum_{k = 0}^{n-1} \frac{\xi^k}{\xi z - \xi^k} & = & \sum_{k = 0}^{n-1} \frac{\xi^{k-1}}{z - \xi^{k-1}} \\ & = & \frac {\xi^{-1}}{z - \xi^{-1}} + \sum_{k = 1}^{n-1} \frac{\xi^{k-1}}{z - \xi^{k-1}} \\ & = & \frac {\xi^{n-1}}{z - \xi^{n-1}} + \sum_{k = 0}^{n-2} \frac{\xi^{k}}{z - \xi^{k}} \\ & = & \sum_{k = 0}^{n-1} \frac{\xi^{k}}{z - \xi^{k}}. \end{eqnarray*} It follows that $f(\xi^k z ) = f(z)$ for all $k \leq n-1$ and therefore that the stabilizer of $f(z)$ under the action of the affine group has cardinality $\geq n$. Consequently, there are at least $n$ polynomial relations between the solutions of the differential equation $\frac{y''}{y'} = f(y)$. \end{exam} The following lemma shows that in fact equality holds: \begin{lem} \label{stablemma} Let $f(z) \in \mathbb{C}(z)$ be a function with at least one non-zero residue. Denote by $G$ the stabilizer of $f(z)$ under the action of the affine group by precomposition and by $n \geq 1$ the number of complex points where $f(z)$ has a non-zero residue. Then $$|G| \leq n.$$ \end{lem} We already know by Claim \ref{2transcl} that $G$ is finite. \begin{claim} Any finite subgroup $G$ of ${\rm Aff}_2(\mathbb{C})$ is cyclic and conjugate to a finite subgroup of rotations (for the usual action of ${\rm Aff}_2(\mathbb{C})$ on the complex plane).
\end{claim} \begin{proof} Since the additive group $\mathbb{G}_a (\mathbb{C})$ has no non-trivial finite subgroup, using the exact sequence $$0 \rightarrow \mathbb{G}_a (\mathbb{C}) \rightarrow {\rm Aff}_2(\mathbb{C}) \rightarrow \mathbb{G}_m(\mathbb{C}) \rightarrow 1,$$ we see that $G$ is isomorphic to its image $\mu(G)$ in $\mathbb{G}_m(\mathbb{C})$ and therefore that $G$ is cyclic (every finite subgroup of $\mathbb{G}_m(\mathbb{C})$ is cyclic). Moreover, in the matrix representation of ${\rm Aff}_2(\mathbb{C})$, $G$ is generated by an element of the form $$ \Xi = \begin{pmatrix} \xi & b \\ 0 & 1 \end{pmatrix}$$ where $\xi$ is a root of unity. A direct computation shows that $$\begin{pmatrix} 1 & -c \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \xi & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & c \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \xi & (\xi - 1) c \\ 0 & 1 \end{pmatrix}.$$ Hence, if $\xi \neq 1$ (i.e. $G$ is not the trivial group) then taking $c = \frac b {\xi - 1}$ conjugates $G$ to a subgroup of $\mathbb{S}^1 = \lbrace z \in \mathbb{C} \mid |z| = 1 \rbrace \subset \mathbb{G}_m(\mathbb{C})$ (i.e. a subgroup of rotations of the complex plane). \end{proof}
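For instance (a sanity check on the computation above), take $\xi = -1$ and $c = \frac{b}{\xi - 1} = -\frac{b}{2}$: then $(\xi - 1)c = b$, so the affine map $z \mapsto -z + b$ is exhibited as a conjugate, by a translation, of the half-turn $z \mapsto -z$; indeed, $z \mapsto -z + b$ is the point reflection through $\frac{b}{2}$.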
\begin{claim} Let $f(z) \in \mathbb{C}(z)$ be a rational function stabilized by a non-trivial finite group $G$ of rotations of the complex plane. Then $f(z)$ has a trivial residue at $z = 0$. \end{claim} \begin{proof} The finite group $G$ is generated by a rotation $z \mapsto \xi z$ where $\xi \neq 1$ is a root of unity. Write $$f(z) = \frac a z + g(z)$$ where $0$ is not a simple pole of $g(z)$. Therefore, $0$ is not a simple pole of $g(\xi z)$ either and $$ f(\xi z) = \frac {a \xi^{-1}} z + g(\xi z).$$ Comparing the residues of $f(\xi z)$ and $f(z)$ at $0$, we get $a = a \xi^{-1}$ and therefore $a = 0$. Hence, $f(z)$ has a trivial residue at $z = 0$. \end{proof} \begin{proof}[Proof of Lemma \ref{stablemma}] Denote by $G$ the stabilizer of $f(z)$. We already know that $G$ is finite by Claim \ref{2transcl}, and we may assume that $G$ is non-trivial (otherwise $|G| = 1 \leq n$). Using the first claim above, up to replacing $f(z)$ by $f(z + c)$, we can assume that $G$ is a subgroup of the group of rotations of the complex plane, since this transformation does not affect the number of complex points where $f(z)$ has a non-zero residue. In particular, $G$ is a subgroup of $\mathbb{G}_m(\mathbb{C})$ acting on the complex plane by multiplication. Denote by $A = \lbrace a_1,\ldots, a_n \rbrace \neq \emptyset$ the set of complex points where $f(z)$ has a non-zero residue. The second claim ensures that $A \subset \mathbb{C}^\ast$. Since the action of $\mathbb{G}_m(\mathbb{C})$ on $\mathbb{C}^\ast$ is sharply $1$-transitive, the same argument as in Claim \ref{2transcl} gives $$|G| \leq n.$$ \end{proof} Since $G$ is a group of rotations, the proof gives in fact a bit more: if the upper bound is achieved ($|G|= n$) then all the complex numbers where $f(z)$ has a non-zero residue must lie on a common circle of the complex plane. Coming back to Example 3.6, if moreover $g(z^n)$ has no non-zero residues (for instance, if $g(z)$ is a polynomial), then $f(z)$ has a non-zero residue exactly at the $n^{th}$ roots of unity. It follows that the stabilizer of $f(z)$ is exactly the group of rotations of the complex plane with angles $2\pi k/n$ for $k = 0,\ldots, n-1$. \begin{lem} Let $f(z) \in \mathbb{C}(z)$ be a rational function with at most simple poles. Assume that $f(z)$ has a non-zero residue at $n \geq 2$ complex points and that equality occurs in the previous lemma: \begin{center} the stabilizer of $f(z)$ under the action of the affine group by precomposition has cardinality $n$. \end{center} Then $f(z)$ is conjugate to one of the examples of Example 3.6: there exist $a \in \mathbb{C}^\ast$ and $b \in \mathbb{C}$ such that $$f(az + b) = c\cdot\sum_{k = 0}^{n-1} \frac{\xi^k}{z - \xi^k} + g(z^n)$$ where $c \in \mathbb{C}^\ast$, $\xi$ is a primitive $n^{th}$ root of unity and $g(z) \in \mathbb{C}[z]$ is a polynomial. \end{lem} \begin{proof} As in the proof of the previous lemma, replacing $f(z)$ by $f(z+c)$, we can assume that the stabilizer $G$ of $f(z)$ is the subgroup of rotations with angles $2\pi k/n$ for $k = 0,\ldots, n-1$. As noticed after the proof of the previous lemma, after this translation, all the poles of $f(z)$ lie (in a single orbit, hence) on a circle centered at $0$ (say of radius $r$). Replacing $f(z)$ by $f(rz)$, we can assume that all the poles of $f(z)$ lie on the unit circle. Finally, replacing $f(z)$ again by $f(e^{i\theta}z)$, we can assume one of the poles of $f(z)$ is $z = 1$. After this combination of affine substitutions, the $n$ simple poles of $f(z)$ are located at the $n^{th}$ roots of unity $1,\xi,\ldots, \xi^{n-1}$. We claim that $$f(z) = c\cdot\sum_{k = 0}^{n-1} \frac{\xi^k}{z - \xi^k} + g(z^n).$$ Indeed, writing the partial fraction decomposition of $f(z)$ as $$ f(z) = P(z) + \sum_{i = 0}^{n-1} \frac {\alpha_i}{z - \xi ^i},$$ we get (by uniqueness of the partial fraction decomposition) that both terms are preserved under the action of $G$. \begin{itemize} \item Looking at the polynomial part: writing $P(z) = \sum_j p_j z^j$, the invariance $P(\xi z) = P(z)$ gives $\xi^j p_j = p_j$ for every $j$, so $p_j = 0$ unless $n$ divides $j$. That is, $P(z) = g(z^n)$ for some polynomial $g(z) \in \mathbb{C}[z]$. \item Looking at the simple poles part: we compute as in Example 3.6 \begin{eqnarray*} \sum_{i = 0}^{n-1} \frac {\alpha_i}{\xi z - \xi ^i} & = & \sum_{i = 0}^{n-1} \frac {\alpha_i {\xi}^{-1}}{z - \xi ^{i-1}} \\ & = & \frac {\alpha_0 {\xi}^{-1}}{z - \xi^{n-1}} + \sum_{i = 0}^{n-2} \frac {\alpha_{i+1} {\xi}^{-1}}{z - \xi ^{i}}, \end{eqnarray*} \end{itemize} which gives $\alpha_{i+1} = \alpha_i \xi$ for $i = 0,\ldots, n-2$ and $\alpha_{0} = \alpha_{n-1} \xi$. In particular, $\alpha_0 = c$ can be chosen freely and $\alpha_i = \xi^i c$ for $i \geq 1$. The last equality is automatically satisfied since $\xi$ is an $n^{th}$ root of unity. Putting everything together, we showed that after these substitutions, we obtain $$f(z) = g(z^n) + c\cdot \sum_{k = 0}^{n-1} \frac{\xi^k}{z - \xi^k}. $$ \end{proof} \begin{exam} Consider the functions given in Example 3.2 by $$ f(z) = \frac 1 {z- a} - \frac 1 { z- b},$$ and let $z \mapsto \frac {a - b}{2}z + \frac {a + b}{2}$ be the unique affine transformation sending $(1,-1)$ to $(a,b)$. Then $$ f\Big(\frac{a - b}{2}z + \frac {a + b}{2}\Big) = \frac {2} {b - a} \Big(\frac 1 {z + 1} - \frac 1 {z-1}\Big)$$ is of the form prescribed by the lemma (with $n = 2$ and $\xi = -1$). On the other hand, it is necessary to assume that $f(z)$ has only simple poles for the conclusion of the lemma to hold. For instance, $$ f(z) = \frac {-1} {z-1} + \frac 1 {z+1} + \frac 1 {(z-a)^2} + \frac 1 {(z+a)^2} +\frac 1 {(z+ b)^3} - \frac 1 {(z-b)^3} $$ is not of the form given by Example 3.6 and satisfies $f(z) = f(-z)$. \end{exam} \begin{cor} Let $f(z) \in \mathbb{C}(z)$. Denote by $a_1,\ldots, a_n$ the complex points at which $f(z)$ has a non-zero residue and assume $n \geq 1$. For a solution $y$ of $(E): y''/y' = f(y)$, denote by $acl(y)$ the set of solutions of $(E)$ which are algebraic over $\mathbb{C}\gen{y}$.
Then $| acl(y) | $ does not depend on the chosen solution $y$ and $$ 1 \leq | acl(y) | \leq n.$$ Moreover, \begin{itemize} \item[(i)] $| acl(y) | = 1$ if $n = 1$ or if $n \geq 3$ and the only affine transformation which preserves the set $\lbrace a_1,\ldots, a_n \rbrace$ is the identity. In that case, the equation is strictly disintegrated. \item[(ii)] Assume that $f(z)$ does not have higher-order poles. Then $| acl(y) | = n$ if and only if for some $(a,b) \in \mathbb{C}^\ast \times \mathbb{C}$, $$f(az + b) = c\cdot\sum_{k = 0}^{n-1} \frac{\xi^k}{z - \xi^k} + g(z^n)$$ where $g(z) \in \mathbb{C}[z]$ is a polynomial and $\xi$ is a primitive $n^{th}$ root of unity. \end{itemize} \end{cor} \section{Observations about the non-minimal case} \label{nonmin} Theorem \ref{stminthm} tells us that the solution set of $\frac{z''}{z'}=f(z)$ has rank $2$ precisely when we can write $f(z)$ as the derivative of a rational function $g(z)$. In that case, a family of order one subvarieties fibers our equation and is given by $z' = g(z)+c$ for $c \in \mathbb C$. A priori, three options might arise: \begin{enumerate} \item The equation $\frac{z''}{z'}=f(z)$ is internal to the constants. \item The fibers $z' = g(z)+c$ are internal to the constants, but the equation $\frac{z''}{z'}=f(z)$ is 2-step analyzable in the constants. \item For generic $c$, $z' = g(z)+c$ is orthogonal to the constants. \end{enumerate} The goal of this section is to show that all three possibilities can arise in our family of equations. \subsection{The generic fiber and nonorthogonality to the constants} The following slightly restated theorem of Rosenlicht gives conditions for a rational order one differential equation to be nonorthogonal to the constants, see \cite{notmin, mcgrail2000search}. \begin{thm} \label{rosorth} Let $K$ be a differential field with algebraically closed field of constants. Let $f(z) \in \mathcal C_K (z) $ and consider the differential equation $z' = f(z)$. Then $z' = f(z)$ is nonorthogonal to the constants if and only if $\frac{1}{f(z)}$ can be written as: $$c \frac{\pd{u}{z}}{u} \text{ or } c \pd{v}{z} $$ where $c \in \mathcal C_K$ and $u,v \in \mathcal C_K(z)$. \end{thm} \begin{lem} \label{notader} Suppose that $g(z) \in \mathbb C(z)$. Then for $c \in \mathbb C$ generic over the coefficients of $g(z)$, $\frac{1}{g(z)+c}$ cannot be written as $c_1 \pd{v}{z}$ for any $c_1 \in \mathbb C$ and $v \in \mathbb C(z)$. \end{lem} \begin{proof} We first establish the following claim: \begin{claim} For any $p(z), q(z) \in \mathbb C[z]$ nonzero, sharing no common roots, with at least one of $p,q$ nonconstant, and $c $ generic over the coefficients of $p,q$, the polynomial $p(z) - c q(z)$ has only simple roots. \end{claim} The failure of the claim is equivalent to: for some $b \in \mathbb C$, the polynomial $$h(z) = p(z-b) - c q(z-b)$$ has no constant or linear term (i.e., has at least a double root at zero). Suppose this is the case, so that $h(0)=h'(0)= 0.$ It follows that $p(-b)= c q(-b)$ and $p'(-b) = c q'(-b)$. Now, since $p,q$ share no common roots and $c$ is generic over the coefficients of both, we must have $q(-b) \neq 0$ and $p(-b) \neq 0.$ Now by a simple computation, it follows that $$\frac{d}{dz} \left( \frac{p(z)}{q(z)} \right) (-b) = 0.$$ Again, since $p, q$ are relatively prime, the function $\frac{p(z)}{q(z)}$ is nonconstant and so $b$ is algebraic over the coefficients of $p,q$. But now $\frac{p(-b) }{q(-b)} = c$, which is impossible as $c$ is generic. This proves the claim.
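For instance (an illustration of the claim, not needed for the argument): take $p(z) = z^3$ and $q(z) = 1$. For any $c \neq 0$, the polynomial $p(z) - cq(z) = z^3 - c$ has three simple roots, and the excluded value $c = 0$ is indeed algebraic over the coefficients of $p$ and $q$.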
From the claim it follows that for $g(z) \in \mathbb C(z)$ and $c $ generic over the coefficients of $g$, $\frac{1}{g(z) +c}$ cannot be written as $c_1 \pd{v}{z}$: indeed, writing $g = p/q$ with $p,q$ coprime, we have $\frac{1}{g(z)+c} = \frac{q(z)}{p(z)+cq(z)}$, so by the above claim $\frac{1}{g(z) +c}$ has only simple poles, while $c_1 \pd{v}{z}$ has poles of order $2$ or more. \end{proof} Now, combining Lemma \ref{notader} and Theorem \ref{rosorth}, we obtain: \begin{cor} \label{logdercon} If $\frac{z''}{z'}=f(z)$ has rank $2$ and the family of order one subvarieties is given by $z' = g(z)+c$, then the generic solution of $\frac{z''}{z'}=f(z)$ is analyzable in the constants if and only if for generic $c$, $\frac{1}{g(z)+c}$ can be written as $c_1 \frac{\pd{u}{z}}{u} $ for $c_1 \in \mathcal C_K$ and $u \in \mathcal C_K(z)$. \end{cor} \begin{rem} The condition that a rational function can be written as a constant times a single logarithmic derivative is known to be non-constructible in the coefficients of the rational function - see for instance Corollary 2.10 of \cite{mcgrail2000search}. \end{rem} \subsection{Internality to the constants} Consider the case $\frac{z''}{z'}=c$ where $c \in \mathbb C$. In this case, regarding $z$ as a function of $t$ and assuming $c\neq 0$, solutions of the equation can be seen, by an elementary calculation, to be given by $$a e^{ct} + b,$$ for some $a,b \in \mathbb C$. Then, the equation is internal to the constants (with the internality realized over a single solution). This fits into case 1) of the classification given at the beginning of this section. \begin{ques} Is there any nonconstant rational function $f(z)$ such that $\frac{z''}{z'}=f(z)$ is internal to the constants? \end{ques} \subsection{Analyzability to the constants} We first fix some notation for this subsection: $\frac{z''}{z'}=f(z)$ with $g(z)$ a rational antiderivative of $f(z)$ so that $z'=g(z)+c$ is a family of order one subvarieties of $\frac{z''}{z'}=f(z)$. \begin{lem} Suppose that $g(z)$ is a degree $2$ polynomial $a_2 z^2 +a_1 z + a_0$. Then for generic\footnote{or more specifically, as long as $c \neq \frac{a_1^2 }{4a_2} - a_0$} $c \in \mathbb C$, $z'=g(z)+c$ is nonorthogonal to the constants. \end{lem} \begin{proof} We have that $$\frac{1}{g(z)+c} = \frac{d}{(z-\alpha) (z-\beta)},$$ with $d = 1/a_2$ and $\alpha \neq \beta $. Then writing $A = \frac{d}{\alpha-\beta }$, $B = \frac{d}{\beta -\alpha}$, $$\frac{1}{g(z)+c} = \frac{A}{z-\alpha} + \frac{B}{z-\beta}.$$ If we take $u(z)=\frac{z-\alpha}{z-\beta}$, then $$\frac{1}{g(z)+c} = A \frac{u'}{u},$$ and so it follows by Theorem \ref{rosorth} that $z'=g(z)+c$ is nonorthogonal to the constants. \end{proof} We next show that the equation $\frac{z''}{z'} = z$ falls under case 2 of the classification mentioned at the beginning of this section: \begin{lem} The generic type of the equation \begin{equation} \label{deg2} \frac{z''}{z'}=z \end{equation} is 2-step analyzable in the constants and is not internal to the constants. \end{lem} \begin{proof} For a generic solution $z$ of equation \ref{deg2}, set $c(z) = z^2 - 2z' \in \mathbb{C}(z,z')$. Equation \ref{deg2} implies that $$ c(z)' = 0.$$ Set $$z_0 = \frac{z - \sqrt{c(z)}}{z + \sqrt{c(z)}} \in \mathbb{C}(z,z')^{alg}.$$ A direct computation shows that \begin{eqnarray*} \frac{z_0'}{z_0} &=& \frac{(z - \sqrt{c(z)})'}{z - \sqrt{c(z)}} - \frac{(z + \sqrt{c(z)})'}{z + \sqrt{c(z)}} \\ &=& \frac{z'}{z - \sqrt{c(z)}} - \frac{z '}{z + \sqrt{c(z)}} \\ &=& \frac{2z'\sqrt{c(z)}}{z^2 - c(z)} = \sqrt{c(z)} \end{eqnarray*} where we used the exact formula for $c(z)$ in the computation of the denominator for the last equality: $z^2 - c(z) = 2z'$.
Since $c(z)$ and hence $\sqrt{c(z)}$ are constants, it follows that \begin{equation} \label{deg22} \left( \frac{z_0'}{z_0}\right)' = 0. \end{equation} Since for $z$ generic, $\sqrt{c(z)} \notin \mathbb{C}$, $z_0$ realizes the generic type of equation \ref{deg22}, so that there is an algebraic correspondence between the equations \ref{deg2} and \ref{deg22}. The equation \ref{deg22} is known to be analyzable in exactly two steps in the constants by \cite{jin2020internality}, and therefore so is the equation \ref{deg2}. \end{proof}

A linear change of variables $z_1=\frac{z-b}{a}$ can be used to give a bijective correspondence between equation \ref{deg2} and any such equation whose right hand side is an arbitrary linear function $az+b$ of $z$ over $\mathbb C$ with $a \neq 0$: \begin{cor} For any $a,b \in \mathbb C$ with $a \neq 0$, the generic type of the equation $$ \frac{z''}{z'}=az+b $$ is 2-step analyzable in the constants and is not internal to the constants. \end{cor}

\subsection{Orthogonality to the constants} We remind the reader of our general notation: $\frac{z''}{z'}=f(z)$ with $g(z)$ an antiderivative of $f(z)$, so that $z'=g(z)+c$ is a family of order one subvarieties of $\frac{z''}{z'}=f(z)$. In this subsection, we consider the case that $g(z)$ is a degree three polynomial over $\mathbb C$. \begin{lem} There is no polynomial $P(z)$ of degree $3$ such that $$f_c(z) = \frac 1 {P(z) + c} $$ is a constant multiple of a logarithmic derivative in $\mathbb{C}(z)$ for generic values of $c$. \end{lem} \begin{proof} By contradiction, assume that such a polynomial $P(z)$ exists. Without loss of generality, we can assume that $P(z)$ is monic and the constant coefficient of $P(z)$ is $0$. So we write: $$ P(z) = z^3 + az^2 + bz.$$ Since the residues of a logarithmic derivative are integers, the quotients of the residues of $f_c(z)$ are rational; as these quotients are algebraic functions of $c$, and an algebraic function taking a rational value at some $c$ transcendental over $\mathbb{Q}(a,b)$ must be constant (otherwise $c$ would be algebraic over $\mathbb{Q}(a,b)$), the quotients of the residues do not depend on $c$. Therefore there exist \textit{fixed} $A_1,A_2,A_3 \in \mathbb{C}^\ast$ such that for infinitely many values of $c$, $$(\ast): \frac 1 {P(z) + c} = e\cdot\Big(\frac{A_1}{z - \alpha_1} + \frac{A_2}{z - \alpha_2} + \frac{A_3}{z - \alpha_3}\Big)$$ for some $e \neq 0$ and $\alpha_1,\alpha_2, \alpha_3$. So, choose $c$ such that $P(z) + c$ has simple roots (this holds, for instance, for any $c$ independent from $a,b$) and such that $A_1,A_2,A_3$ are the residues of $f_c(z)$. For $d$ close enough to $c$, $P(z) + d$ also has simple roots $\beta_1,\beta_2, \beta_3$, and if $B_1, B_2,B_3$ are the residues of $f_d(z)$ then $$B_2/B_1 = A_2/A_1 \text{ and } B_3/B_1 = A_3/A_1.$$ It follows that \begin{eqnarray*} f_d(z) &= & \frac { B_1}{z - \beta_1} + \frac {B_2}{z - \beta_2} + \frac { B_3}{z - \beta_3} \\ & = & \frac{B_1} {A_1}\Big(\frac { A_1}{z - \beta_1} + \frac {A_2}{z - \beta_2} + \frac {A_3}{z - \beta_3} \Big). \end{eqnarray*} Up to replacing each $A_i$ by $\lambda A_i$ and $e$ by $e/\lambda$ for a suitable $\lambda \in \mathbb{C}^\ast$, we can assume that $A_1,A_2$ and $A_3$ have been chosen such that: $$ (E_1): A_1A_2A_3 = 1.$$ With this normalization, we claim that: \begin{claim} $A_1,A_2$ and $A_3$ are the three cube roots of unity. In particular, $A_1/A_2 \notin \mathbb{Q}$. \end{claim} \begin{proof} It is enough to show that $$\begin{cases} A_1 + A_2 + A_3 = 0 \\ A_1A_2 + A_1A_3 + A_2A_3 = 0, \end{cases}$$ since together with $(E_1)$ this implies that $(z - A_1)(z - A_2)(z - A_3) = z^3 - 1$. Note that $\alpha_1,\alpha_2, \alpha_3$ must be the roots of $P(z) + c$, so we get the two equations: $$(S):\begin{cases} \alpha_1 + \alpha_2 + \alpha_3 = -a \\ \alpha_1 \alpha_2 + \alpha_2 \alpha_3 + \alpha_1 \alpha_3 = b. \end{cases}$$ On the other hand, developing $(\ast)$ gives: \begin{eqnarray*} \frac 1 {(z-\alpha_1)(z - \alpha_2)(z - \alpha_3)} &=& e\cdot\frac{A_1(z - \alpha_2)(z - \alpha_3) + A_2(z - \alpha_1)(z - \alpha_3) + A_3(z-\alpha_1)(z - \alpha_2)}{(z-\alpha_1)(z - \alpha_2)(z - \alpha_3)} \\ &=& e\cdot\frac{\Big(A_1 + A_2 + A_3\Big)z^2 - \Big(A_1(\alpha_2 + \alpha_3) + A_2(\alpha_1 + \alpha_3) + A_3(\alpha_1 + \alpha_2)\Big)z + \Big(A_1\alpha_2\alpha_3 + A_2 \alpha_1 \alpha_3 + A_3 \alpha_1 \alpha_2\Big)} {(z-\alpha_1)(z - \alpha_2)(z - \alpha_3)}. \end{eqnarray*} The coefficients of $z^2$ and $z$ on the right hand side must therefore be $0$, and the constant coefficient must be equal to $1$. The last equation defines $e$ implicitly in terms of the other parameters, so we won't be using it. The vanishing of the coefficient of $z^2$ gives $$(E_2): A_1 + A_2 + A_3 = 0,$$ that is, the sum of the residues is $0$. For the coefficient of $z$: \begin{eqnarray*} 0 &=& A_1(\alpha_2 + \alpha_3) + A_2(\alpha_1 + \alpha_3) + A_3(\alpha_1 + \alpha_2) \\ &=& \alpha_1 (A_2 + A_3) + \alpha_2 (A_1 + A_3) + \alpha_3(A_1 + A_2) \\ &=& - \alpha_1 A_1 - \alpha_2 A_2 - \alpha_3 A_3, \end{eqnarray*} where in the last equality we used $(E_2)$. Together with the system $(S)$, this yields that $\alpha_1,\alpha_2,\alpha_3$ are solutions of the system of polynomial equations: $$(\overline{S}):\begin{cases} X_1 + X_2 + X_3 = -a \\ X_1X_2 + X_2X_3 + X_1X_3 = b \\ A_1 X_1 + A_2 X_2 + A_3 X_3 = 0. \end{cases}$$ This is where we use our assumption: since this is true for infinitely many values of $c$, this system must have infinitely many solutions, so its set of solutions must have dimension $\geq 1$ (actually $= 1$). This will give us our last equation on $A_1,A_2,A_3$. Let $q = (q_1,q_2,q_3)$ be a solution of the system above (one exists, since the system has infinitely many solutions). The first and the last equations are equations of planes whose normal vectors $(1,1,1)$ and $(A_1,A_2,A_3)$ are not proportional (by $(E_2)$, the $A_i$ cannot all be equal), so the planes intersect in a line $L$ of the form $$L = \lbrace q + \lambda v, \lambda \in \mathbb{C} \rbrace,$$ where the vector $v$ is given by $$ v = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \wedge \begin{pmatrix} A_1 \\ A_2 \\ A_3 \end{pmatrix} = \begin{pmatrix}A_3 - A_2 \\ A_1 - A_3 \\ A_2 - A_1 \end{pmatrix}. $$ So in order for the system $(\overline{S})$ to have infinitely many solutions, this line $L$ must be contained in the quadric given by the second equation: $$ (q_1 + \lambda v_1)(q_2 + \lambda v_2) + (q_2 + \lambda v_2)(q_3 + \lambda v_3) + (q_1 + \lambda v_1)(q_3 + \lambda v_3) =b.$$ So the coefficient in $\lambda^2$ must vanish, which gives: \begin{eqnarray*} 0 &=& v_1v_2 + v_2v_3 + v_1v_3 \\ &=& (A_3 - A_2)(A_1 - A_3) + (A_1 - A_3)(A_2 - A_1) + (A_3 -A_2)(A_2 - A_1) \\ &=& - (A_1^2 + A_2^2 + A_3^2) + A_1A_3 + A_1A_2 + A_2A_3 \\ &=& -(A_1 + A_2 + A_3)^2 + 3(A_1A_3 + A_1A_2 + A_2A_3) \\ &=& 3(A_1A_3 + A_1A_2 + A_2A_3), \end{eqnarray*} where we used $(E_2)$ on the last line. We conclude that $$(E_3): A_1A_2 + A_2A_3 + A_1A_3 = 0.$$ \end{proof} To conclude the proof of the lemma, we use the following argument, explained in Example 2.20 of \cite{HrIt}: every nonconstant $f(z) \in \mathbb{C}(z)$ can be written, up to a nonzero constant multiple (which does not affect the logarithmic derivative), as: $$f(z) = \frac {(z - a_1) \ldots (z - a_n)} {(z- b_1)\ldots (z-b_m)}. $$ By direct calculation, one can see that: $$\frac{f'(z)}{f(z)} = \sum \frac{1}{z - a_i} - \sum \frac{1}{z - b_i} .$$ It follows that every logarithmic derivative has only simple poles, with integer residues. So if $g(z)$ is a constant multiple of a logarithmic derivative, then all of its poles are simple and the quotients of its residues are rational; but we have shown that this is impossible.
\end{proof} By combining the previous Lemma with Corollary \ref{logdercon}, we see that \begin{cor}\label{3const} For any polynomial $P(z)$ of degree $3$ and generic $c \in \mathbb C$ independent from the coefficients of $P$, $z'=P(z)+c$ is orthogonal to the constants. \end{cor}

\begin{prop} \label{genorth} Suppose $a,b,c,d$ are algebraically independent over $\mathbb{Q}$. Let $g(z)=z^3+az^2+bz$. The strongly minimal sets defined by $z^\prime=g(z)+c$ and $z^\prime=g(z)+d$ are orthogonal. \end{prop} \begin{proof} Let $\alpha_1,\alpha_2,\alpha_3$ be the zeros of $g(z)+c$. Then $\alpha_1,\alpha_2,\alpha_3$ are algebraically independent. We have $${1\over g(z)+c}=\sum_{i=1}^3 {A_i\over z-\alpha_i},$$ where $$A_i={1\over\prod_{j\ne i}(\alpha_i-\alpha_j)}.$$ We have the linear relation $A_1+A_2+A_3=0$. \begin{claim} \label{cl1} If $m_1,m_2,m_3 \in \mathbb{Q}$ and $\sum m_i A_i \in \mathbb{Q}(a,b)^{\rm alg}$, then $m_1=m_2=m_3$. \end{claim} Suppose $\sum m_i A_i=\beta\in \mathbb{Q}(a,b)^{\rm alg}$. Then \begin{eqnarray*}\beta \prod_{j<i}(\alpha_i-\alpha_j) &=& m_1(\alpha_2-\alpha_3)-m_2(\alpha_1-\alpha_3)+m_3(\alpha_1-\alpha_2) \\ &=& (m_3-m_2)\alpha_1+(m_1-m_3)\alpha_2+ (m_2-m_1)\alpha_3. \end{eqnarray*} If $m_1=m_2=m_3$, then we are left with the equation $\beta\prod_{j<i}(\alpha_j-\alpha_i)=0$. Since $\alpha_1,\alpha_2,\alpha_3$ are algebraically independent, we must also have $\beta=0$. Otherwise we have a degree 3 polynomial over $\mathbb{Q}(a,b)^{\rm alg}$ vanishing at $(\alpha_1,\alpha_2,\alpha_3)$; write its linear term as $\sum n_i z_i$, where $n_1 = m_3-m_2$, $n_2 = m_1-m_3$, $n_3 = m_2-m_1$. We now have the following system of equations over $\mathbb{Q}(a,b)^{\rm alg}$ satisfied by $\alpha_1,\alpha_2,\alpha_3$: \begin{eqnarray*} -a&=& z_1+z_2+z_3\\ b&=& z_1z_2+z_1z_3+z_2z_3\\ \beta \prod_{j<i}(z_i-z_j) &=&n_1z_1+n_2z_2+ n_3 z_3.\\ \end{eqnarray*} Let $H$ be the hyperplane $z_1+z_2+z_3=-a$, $V$ the surface $z_1z_2+z_1z_3+z_2z_3=b$, and $W$ the surface $\beta\prod_{i<j}(z_i-z_j) =n_1z_1+n_2z_2+ n_3 z_3$. We will show $H\cap V\cap W$ is finite. But then $\alpha_1,\alpha_2,\alpha_3\in \mathbb{Q}(a,b)^{\rm alg}$, a contradiction. We make the substitution $z_3=-z_1-z_2-a$ into the defining equation for $V$ to get $F(z_1,z_2)=0$ where $$F(z_1,z_2)=z_1^2+z_2^2+z_1z_2+az_1+az_2+b.$$ This is an irreducible polynomial. Making the same substitution into the defining equation for $W$ we get $G(z_1,z_2)=0$ where $$G(z_1,z_2)=\beta [2z_1^3-2z_2^3+3z_1^2z_2-3z_1z_2^2]+ \hbox{ lower degree terms.}$$ If $F(z_1,z_2)=G(z_1,z_2)=0$ has infinitely many solutions, then, since $F$ is irreducible, we must have $F|G$. But comparing the homogeneous parts of $F$ and $G$ of highest degree, we see that this is impossible, so we've established the claim.

Now suppose the strongly minimal sets $z^\prime=g(z)+c$ and $z^\prime=g(z)+d$ are non-orthogonal. Let $B_1,B_2,B_3$ be the residues of $\frac{1}{g(z)+d}$. By Subsection \ref{5.2} of this paper (or \cite[2.22]{HrIt}), $${\rm ldim}_{\mathbb{Q}} (A_1,A_2,A_3,B_1,B_2,B_3) < {\rm ldim}_{\mathbb{Q}} (A_1,A_2,A_3) +{\rm ldim}_{\mathbb{Q}} (B_1,B_2,B_3)=4.$$ Thus we have an equation $$\sum m_i A_i=\sum n_iB_i$$ where neither $m_1=m_2=m_3$ nor $n_1=n_2=n_3$.
Since $A_1,A_2,A_3$ are algebraic over $\mathbb{Q}(a,b,c)$ and $c \mathop{\mathpalette\Ind{}}_{\mathbb{Q}(a,b)^{\rm alg}} d$, ${\rm tp}(A_1,A_2,A_3/\mathbb{Q}(a,b)^{\rm alg},B_1,B_2,B_3)$ is finitely satisfiable in $\mathbb{Q}(a,b)^{\rm alg}$. Thus we have $\sum m_i A_i\in \mathbb{Q}(a,b)^{\rm alg}$, contradicting Claim \ref{cl1}. \end{proof}

We now derive the following model-theoretic consequence of Corollary \ref{3const} and Proposition \ref{genorth}. \begin{cor} Let $f(z) = z^2 + az + b$ be a complex polynomial of degree $2$. Then the theory of the solution set of $$(\star): z''/z' = f(z)$$ has the dimensional order property (DOP) and hence $2^\kappa$ isomorphism classes of models of cardinality $\kappa$ for every uncountable cardinal $\kappa$. \end{cor} Recall that a complete totally transcendental theory $T$ has the dimensional order property (DOP) if there are models $M_0 \subset M_1,M_2$ with $M_1 \mathop{\mathpalette\Ind{}}_{M_0} M_2$ and a regular type $q$ with parameters in the prime model over $M_1 \cup M_2$ such that $q$ is orthogonal to $M_1$ and $M_2$. It is well-known that if $T$ has the DOP then $T$ has $2^\kappa$ isomorphism classes of models of cardinality $\kappa$ for every $\kappa \geq \aleph_1 + | T |$. \begin{proof} First note that if $\alpha \neq 0$ then $y \mapsto \alpha y$ gives a definable bijection between the solution sets of $z''/z' = f(z)$ and $z''/z' = f(z/\alpha)$. Choosing $\alpha = 1/\sqrt{3}$, we can assume that $f(z)$ is of the form $$f(z) = 3z^2 + \sqrt{3}az + b.$$ Set $g(z) = z^3 + \frac{a\sqrt{3}}{2}z^2+ bz$ and let $c$ be a transcendental constant over $\mathbb{Q}(a,b)$. We claim that the generic type $q_c \in S(\mathbb{Q}(a,b,c))$ of $$ z' = g(z) + c$$ is orthogonal to $\mathbb{Q}(a,b)^{alg}$. Assume that $q_c$ is non-orthogonal to $\mathbb{Q}(a,b)^{alg}$. Since $q_c$ is strongly minimal and orthogonal to the constants by Corollary \ref{3const}, $q_c$ is one based. It follows that there exists a \emph{minimal} type $q_0 \in S(\mathbb{Q}(a,b)^{alg})$ non-orthogonal to $q_c$. Moreover, for every constant $d$ transcendental over $\mathbb{Q}(a,b)$, the corresponding type $q_{d}$ is also non-orthogonal to $q_0$. By transitivity of the non-orthogonality relation for minimal types, the types $q_c$ and $q_{d}$ are non-orthogonal whenever $c$ and $d$ are transcendental constants over $\mathbb{Q}(a,b)$. This contradicts Proposition \ref{genorth}, hence $q_c$ is orthogonal to $\mathbb{Q}(a,b)^{alg}$.

We conclude as in Chapter 3, Corollary 2.6 of \cite{MMP} that the theory of the solution set of $(\star)$ has the DOP: consider $M_0$ the prime model over $\mathbb{Q}(a,b)$, $c$ and $d$ independent transcendental constants over $\mathbb{Q}(a,b)$, and denote by $M_1$ (resp. $M_2$) the prime model over $\mathbb{Q}(a,b,c)$ (resp. $\mathbb{Q}(a,b,d)$). Set $e = c + d$ and $q = q_e$. We claim that $q_e$ is orthogonal to both $M_1$ and $M_2$: since $e$ is a transcendental constant over $M_1$, we have that $$e \mathop{\mathpalette\Ind{}}_{\mathbb{Q}(a,b)^{alg}} M_1 \text{ and } q_e \text{ orthogonal to } \mathbb{Q}(a,b)^{alg},$$ from which it follows that $q_e$ is orthogonal to $M_1$. Similarly, $q_e$ is orthogonal to $M_2$, hence the theory of the solution set of $(\star)$ has the DOP and hence the maximal number of isomorphism classes of models in any uncountable cardinal.
\end{proof} In particular, from our analysis of a specific autonomous second order equation, we recover Shelah's theorem in \cite{shelah1973differentially}, which asserts that the theory $\textbf{DCF}_0$ admits the maximal number of isomorphism classes of models in any given uncountable cardinal. While Shelah's proof uses differentially transcendental elements, it was already noticed by Poizat in \cite[p. 10]{poizat1980c} that the DOP is also witnessed by families of algebraic differential equations parametrized by constants, such as: $$ (x' = {cx\over 1+x}, c \in {\mathcal C}^\times).$$ \begin{rem} In the same vein, it is interesting to note that our results allow us to compute effectively the oldest model-theoretic invariant, the function $\kappa \mapsto I(\kappa)$ which counts the isomorphism classes of models of cardinality $\kappa$, for the solution sets of equations of the form $(\star)$. More precisely, if $T_f$ denotes the theory of the solution set of $y''/y' = f(y)$ and $I(\kappa,T_f)$ counts the number of isomorphism classes of models of $T_f$ of cardinality $\kappa$, then: \begin{itemize} \item[(1)] The rational function $f(z)$ is not a derivative in $\mathbb{C}(z)$ if and only if $I(\kappa,T_f) = 1$ for all infinite cardinals $\kappa$. \item[(2)] If $f(z)$ is constant or a linear polynomial then $I(\kappa,T_f) = 1$ for all uncountable cardinals $\kappa$ but $I(\aleph_0,T_f) = \aleph_0$. \item[(3)] If $f(z)$ is a polynomial of degree $2$ then $I(\kappa,T_f) = 2^\kappa$ for every uncountable cardinal $\kappa$. \end{itemize} From this perspective, it would be interesting to show that no other function $\kappa \mapsto I(\kappa)$ can occur in the family $(\star)$, or equivalently that the theory of the solution set of any differential equation of the form $(\star)$ which is neither analyzable in the constants nor strongly minimal admits the maximal number of isomorphism classes of models in every uncountable cardinal. \end{rem}
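To close this section, here is a small numerical illustration (a sketch, not a proof) of the residue identity $A_1+A_2+A_3=0$ used in the proof of Proposition \ref{genorth}; the cubic's coefficients below are arbitrary stand-ins for generic parameters:

\begin{verbatim}
import numpy as np

# Residues of 1/(g(z)+c) at the simple roots alpha_i of a monic cubic are
# A_i = 1/prod_{j != i}(alpha_i - alpha_j); their sum vanishes, which is
# the linear relation A_1 + A_2 + A_3 = 0 exploited above.
alphas = np.roots([1, 2, 3, 5])   # roots of z^3 + 2z^2 + 3z + 5
A = [1 / np.prod([ai - aj for aj in alphas if aj != ai]) for ai in alphas]
print(abs(sum(A)))                # ~1e-16, i.e. the residues sum to zero
\end{verbatim}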
\section{Conclusions} \label{sec:conclude} In this paper, we investigated the problem of contrastive representation learning for video sequences. Our main innovation is to generate and use synthetic noise, in the form of adversarial perturbations, for building the negative pairs, and then to produce our video representation in a novel contrastive pooling scheme. Assuming the video frames are encoded as CNN features, such perturbations are often seen to affect vulnerable parts of the features. Using such generated perturbations to our benefit, we propose a discriminative classifier, in a max-margin setup, via learning a set of hyperplanes as a subspace that could separate the data from its perturbed counterpart. As such hyperplanes need to fit to useful parts of the features for achieving good performance, it is reasonable to assume they capture data parts that are robust. We provided a non-linear objective for learning our subspace representation and explored efficient optimization schemes for computing it. Experiments on several datasets explored the effectiveness of each component in our scheme, demonstrating state-of-the-art performance on the benchmarks.

\section{End-to-End CNN Learning} \label{sec:end-to-end} As alluded to in the main paper, end-to-end CNN training through the discriminative subspace pooling (DSP) layer can be done using methods that are quite well-known. For a reader who might be unfamiliar with such methods, we provide a detailed exposition below.

\begin{figure*}[t] \centering \includegraphics[width=13cm,trim={0cm 8cm 0cm 0cm},clip]{figure/dsp_gradients.eps} \caption{Architecture of our end-to-end CNN with the discriminative subspace pooling (DSP) layer in between. We assume $X_{\ell}$ represents the feature map outputs from the $\ell$-th CNN layer (from all frames in the sequence) denoted as $f_{\ell}$, and $S_{\ell}$ represents its respective parameters. The final loss is shown as $\mathcal{L}$, $\sigma(\beta)$ is the softmax function, and $c$ is the action class label. The parameter $W$ is the subspace pooled output of the DSP layer, and $Z$ is the adversarial noise. Below the model, we provide the gradient that we are after for enabling back-propagation through the DSP layer.} \label{fig:dpl} \end{figure*}

To set the stage for our discussions, we first describe our CNN architecture with the DSP layer; this model is depicted in Figure~\ref{fig:dpl}. In the model, we assume the DSP layer takes as input the feature map $X_{L-1}$ from the previous layer (across all frames) and the adversarial noise $Z$, and produces as output the subspace descriptor $W^*$. This $W^*$ goes through another series of fully connected CNN layers before being used in a loss layer $\mathcal{L}$ (such as cross-entropy), trained against a ground truth video class $c$. Among the gradients of the parameters $S$ of the various blocks, the only non-trivial one is that for the block penultimate to the DSP layer: updating the parameters $S_{L-1}$ of this layer requires the gradient of the DSP block with respect to its inputs $X_{L-1}$ (this is the gradient depicted below our CNN model in Figure~\ref{fig:dpl}). The main difficulty is that this gradient is taken not with respect to the weights $W$, but with respect to the outcome $W^*$ of the DSP optimization, which is an argmin problem, that is: \begin{equation} W^* = \argmin_{W} \dsp(X_{L-1}, Z).
\end{equation} Given that the Riemannian objective might not be practically amenable to a CNN setup (due to components such as exponential maps that might be expensive in a CNN setting), we use a slightly different objective here, given below (a variant of Eq. (3) in the main paper). To simplify our notation, we avoid the ordering constraints in this formulation (however, we do use them in our experiments). \begin{equation} \min_{W} \dsp(X) := \Omega(W) + \sum_{i=1}^n \left[\max\left(0, 1-\max\left(y_iW^\top X^i\right)\right)\right]^2, \label{eq:2} \end{equation} where $\Omega(W)=\fnorm{W^TW-\eye{p}}^2$ is the subspace constraint specified as a regularization. Recall that $y_i$ is the binary label for frame $i$. With a slight abuse of notation, to avoid the proliferation of the CNN layer index $L$ in the derivations, we use $X$ to consist of both the data features and the adversarial noise features, as captured by their labels in $y$ ($y=-1$ for adversarial noise features and $1$ otherwise), and the pair $(X^i, y_i)$ denotes the $i$-th column of $X$ and its binary label respectively.

\subsection{Gradients for Argmin} In this section, we derive the gradient $\frac{\partial \dsp(X)}{\partial X}$. We use the following theorem for this derivation, which is well-known as the implicit function theorem~\cite{chiang1984fundamental}, \cite[Chapter 5]{faugeras1993three}, and recently reviewed in Gould et al.~\cite{gould2016differentiating}. \begin{theorem} Let $\dsp:\reals{d\times n}\to\reals{d\times p}$ be our discriminative subspace pooling operator on $n$ features each of dimension $d$ (defined as in~\eqref{eq:2}). Then, its gradient wrt $X^i$ is given by: \begin{equation} \nabla_{X^i} \dsp(W; X) = -\left.\left\{\nabla_{WW} \dsp(W;X)\right\}^{-1} \nabla_{X^iW} \dsp(W; X^i)\right|_{W=W^*} \end{equation} \label{thm:1} \end{theorem} The above theorem suggests that to get the required gradient, we only need to find the second derivatives of our objective. To simplify notation, let $P(t,q)$ denote a $d\times p$ matrix with all zeros, except the $q$-th column, which is $t$. Then, for all $i$ satisfying $\max(y_iW^TX^i)<1$, we have the second-order derivatives as follows: \begin{equation} \nabla_{WW} \dsp(W; X) = \Omega''(W) + 2\sum_{i} \vecp\left(P\left(\alpha_{j(i)},j(i)\right)\right)\vecp\left(P\left(\alpha_{j(i)}, j(i)\right)\right)^T, \end{equation} where $j(i)=\argmax_q\, (y_i W^TX^i)_q$ is the index of the largest entry of the $p\times 1$ vector $y_iW^TX^i$, and $\alpha_{j(i)}=y_iX^i$. Similarly, \begin{equation} \nabla_{X^iW} \dsp(W; X) = 2\vecp\left(P\left(\alpha_{j(i)},j(i)\right)\right)\vecp\left(P\left(\beta, j(i)\right)\right)^T, \end{equation} where $j$ and $\alpha_{j}$ are as defined above, while $\beta=P(y_iW, j(i))$. Note that $\nabla_{WW}$ is a $pd\times pd$ matrix, while $\nabla_{X^iW}$ is a $pd\times d$ matrix. While it may seem expensive to compute these large matrices, note that they require only the vectors $y_i X^i$ as their elements, which are cheap to compute, and the $\argmax$ takes only linear time in $p$, which is quite small (6 in our experiments). However, computing the matrix inverse of $\nabla_{WW}$ can still be costly. To avoid this, we use a diagonal approximation to it. Figures~\ref{subfig:1} and~\ref{subfig:2} show the convergence and the action classification error in the end-to-end learning setup on the HMDB-51 dataset split 1 using a ResNet-152 model.
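For intuition, the following minimal Python sketch (not part of our pipeline; the toy objective and all values are illustrative assumptions) shows the implicit-function-theorem recipe of Theorem~\ref{thm:1} on a problem whose argmin is available in closed form, so the implicit gradient can be checked against finite differences:

\begin{verbatim}
import numpy as np

# Toy objective J(w, x) = 0.5*(w - x)^2 + 0.5*lam*w^2 with minimizer
# w*(x) = x / (1 + lam).  The implicit function theorem gives
#   dw*/dx = -[d2J/dw2]^{-1} * d2J/(dx dw),
# mirroring the structure of the DSP gradient in the theorem above.
lam = 0.1

def w_star(x):
    return x / (1.0 + lam)

def implicit_grad(x):
    d2J_dw2 = 1.0 + lam      # Hessian in w
    d2J_dxdw = -1.0          # mixed second derivative
    return -d2J_dxdw / d2J_dw2

x, eps = 2.0, 1e-6
fd = (w_star(x + eps) - w_star(x - eps)) / (2 * eps)
print(implicit_grad(x), fd)  # both ~ 1/(1+lam) = 0.909...
\end{verbatim}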
\begin{figure}[h] \begin{center} \subfigure[]{\label{subfig:1}\includegraphics[width=0.3\linewidth]{figure/Error.eps}} \subfigure[]{\label{subfig:2}\includegraphics[width=0.3\linewidth]{figure/Loss.eps}} \end{center} \caption{Convergence of our end-to-end training setup on HMDB-51 split 1.} \end{figure}

\section{Experiments} \label{experiment} In this section, we demonstrate the utility of our discriminative subspace pooling (DSP) on several standard vision tasks (including action recognition, skeleton-based video classification, and dynamic video understanding), and on diverse CNN architectures such as ResNet-152, Temporal Convolutional Network (TCN), and Inception-ResNet-v2. We implement our pooling scheme using the ManOpt Matlab package~\cite{boumal2014manopt} and use the RCG optimizer with the Hestenes-Stiefel~\cite{hager2005new} update rule. We found that the optimization produces useful representations in about 50 iterations and takes about 5 milliseconds per frame on a single-core 2.6GHz CPU. We set the slack regularization constant $C=1$. As for the CNN features, we used public code for the respective architectures to extract the features. Generating the adversarial perturbation plays a key role in our algorithm, as it is used to generate our negative bag for learning the discriminative hyperplanes. We follow the experimental setting in~\cite{moosavi2017universal} to generate UAP noise for each model by solving the energy function as depicted in Alg.~\ref{alg:1}. Differently from~\cite{moosavi2017universal}, we generate the perturbation in the shape of the high-level CNN feature instead of an RGB image. We describe the datasets, their evaluation protocols, and the CNN feature extraction next.

\subsection{Datasets, CNN Architectures, and Feature Extraction} \noindent\textbf{HMDB-51~\cite{kuehne2011hmdb}:} is a popular video benchmark for human action recognition, consisting of 6766 Internet videos over 51 classes; each video has about 20 -- 1000 frames. The standard evaluation protocol reports average classification accuracy on three folds. To extract features, we train a two-stream ResNet-152 model (as in~\cite{simonyan2014two}) taking as input RGB frames (in the spatial stream) and a stack of optical flow frames (in the temporal stream). We use features from the pool5 layer of each stream as input to DSP; these are sequences of 2048D vectors.

\noindent\textbf{NTU-RGBD~\cite{shahroudy2016ntu}:} is by far the largest 3D skeleton-based video action recognition dataset. It has 56,880 video sequences across 60 classes, 40 subjects, and 80 views. The videos have on average 70 frames and consist of people performing various actions; each frame is annotated for 25 3D human skeletal keypoints (some videos have multiple people). Based on the choice of subjects and camera views, two evaluation protocols are used, namely cross-view and cross-subject evaluation~\cite{shahroudy2016ntu}. We use the scheme in Shahroudy et al.~\cite{shahroudy2016ntu} as our baseline, in which a temporal CNN (with residual units) is applied on the raw skeleton data. We use the 256D features from the bottleneck layer (before their global average pooling layer) as input to our scheme.

\noindent\textbf{YUP++ dataset~\cite{feichtenhofer2017temporal}:} is a recent dataset for dynamic video-texture understanding. It has 20 scene classes with 60 videos in each class. Importantly, half of the sequences in each class are collected by a static camera and the rest are recorded by a moving camera.
The latter is divided into two sub-datasets, YUP++ stationary and YUP++ moving. As described in~\cite{feichtenhofer2017temporal}, we apply the same 1/9 train-test ratio for evaluation. There are about 100-150 frames per sequence. To generate the features, we fine-tune an Inception-ResNet-v2 (pre-trained on the ImageNet dataset) on the respective training set, following the standard supervised training procedure for image-based tasks; we then extract frame-level features (1536D) from the second-last fully-connected layer.

\subsection{Parameter Analysis} \begin{figure}[t] \begin{center} \subfigure[]{\label{subfig:1}\includegraphics[width=0.35\linewidth]{figure/temporal2.eps}} \qquad \subfigure[]{\label{subfig:2}\includegraphics[width=0.35\linewidth]{figure/fooling.eps}} \end{center} \caption{Analysis of the hyper parameters used in our scheme. All experiments use ResNet-152 features on HMDB-51 split-1 with a fooling rate of 0.8 in (a) and 6 hyperplanes in (b). See text for details.} \end{figure}

\para{Evaluating the Choice of Noise:} As is clear by now, the noise patterns should be properly chosen in the contrastive learning setup, as they will affect how well the discriminative hyperplanes characterize useful video features. To investigate the quality of UAP features, we compare against the baseline of choosing noise from a Gaussian distribution with the data mean and standard deviation computed on the respective video dataset (as done in the work of Wang et al.~\cite{wang2018video}). We repeat this experiment 10 times on the HMDB-51 split-1 features. In Figure~\ref{subfig:1}, we plot the average classification accuracy after our pooling operation against an increasing number of hyperplanes in the subspaces. As is clear, using UAP significantly improves the performance against the alternative, substantiating our intuition. Further, we also find that using more hyperplanes is beneficial, suggesting that adding UAP to the features leads to a non-linear problem requiring more than a single discriminator to capture the informative content.

\para{Evaluating Temporal Constraints:} Next, we evaluate the merit of including temporal-ordering constraints in the DSP objective, viz.~\eqref{eq:5}. In Figure~\ref{subfig:1}, we plot the accuracy with and without such temporal order, using the same settings as in the above experiment. As is clear, embedding the temporal constraint helps the discriminative subspace capture representations related to the video dynamics, thereby showing better accuracy. In terms of the number of hyperplanes, the accuracy increases by about $3\%$ from one to six hyperplanes, and drops by around $0.5\%$ from 6 to 15 hyperplanes, suggesting that six hyperplanes are sufficient for representing most sequences.

\para{UAP Fooling Rate:} In Figure~\ref{subfig:2}, we analyze the fooling rate of UAP, which controls the quality of the adversary in confusing the trained classifier. The higher the fooling rate, the more the perturbation mixes the information of features across classes. As would be expected, we see that increasing the fooling rate from 0.1 to 0.9 increases the performance of our pooling scheme as well. Interestingly, our algorithm performs relatively well even without a very high value of the fooling rate.
From~\cite{moosavi2017universal}, a lower fooling rate reduces the amount of data needed for generating the adversarial noise, making the algorithm computationally cheaper. Further, comparing Figures~\ref{subfig:1} and~\ref{subfig:2}, we see that incorporating a UAP noise with a fooling rate of even 10\% shows substantial improvements in DSP performance against using Gaussian random noise (70.8\% in Figure~\ref{subfig:2} against 69.8\% in Figure~\ref{subfig:1}).

\paragraph*{\textbf{Experimental Settings:}} Going by our observations in the above analysis, for all the experiments in the sequel, we use six subspaces in our pooling scheme, use temporal ordering constraints in our objective, and use a fooling rate of 0.8 in UAP. Further, as mentioned earlier, we use an exponential projection metric kernel~\cite{cherian2018non} for the final classification of the subspace descriptors using a kernel SVM. Results using end-to-end learning are provided in the supplementary materials.

\begin{table}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{l|}{HMDB-51} & \multicolumn{2}{l|}{NTU-RGBD} & \multicolumn{2}{l|}{YUP++} \\ \hline & Spatial & Temporal & Two-stream & Cross-subject & Cross-view & Stationary & Moving \\ \hline AP & 46.7\%~\cite{feichtenhofer2016convolutional} & 60.0\%~\cite{feichtenhofer2016convolutional} & 63.8\%~\cite{feichtenhofer2016convolutional} & 74.3\%~\cite{soo2017interpretable} & 83.1\%~\cite{soo2017interpretable} & 85.1\% & 76.5\% \\ \hline MP & 45.1\% & 58.5\% & 60.6\% & 65.4\% & 78.5\% & 81.8\% & 72.4\% \\ \hline DSP & \textbf{58.5\%} & \textbf{67.0\%} & \textbf{72.5\%} & \textbf{81.6\%} & \textbf{88.7\%} & \textbf{95.1}\% & \textbf{88.3}\% \\ \hline \end{tabular} \caption{Accuracy comparison between our discriminative subspace pooling (DSP), standard average pooling (AP), and max pooling (MP).} \label{table:1} \end{table}

\subsection{Experimental Results} \paragraph*{\textbf{Comparison with standard pooling:}} In Table~\ref{table:1}, we show the performance of DSP on the three datasets and compare to standard pooling methods such as average pooling and max pooling. As is clear, we outperform the baseline results by a large margin.
Specifically, we achieve 9$\%$ improvement on the HMDB-51 dataset split-1 and $5\%-8\%$ improvement on the NTU-RGBD dataset. On these two datasets, we simply apply our pooling method on the CNN features extracted from the pre-trained model. We achieve a substantial boost (of up to $12\%$) after applying our scheme.

\begin{table}[t] \centering \scalebox{0.9}{ \begin{tabular}{lc}\hline \multicolumn{2}{c}{HMDB-51} \\\hline Method & Accuracy \\\hline Temporal Seg. n/w ~\cite{Wang2016} & 69.4\% \\ TS I3D ~\cite{carreira2017quo} & 80.9\% \\ ST-ResNet~\cite{feichtenhofer2016spatiotemporal} & 66.4\% \\ ST-ResNet+IDT ~\cite{feichtenhofer2016spatiotemporal} & 70.3\% \\ STM Network ~\cite{feichtenhofer2017spatiotemporal} & 68.9\% \\ STM Network+IDT ~\cite{feichtenhofer2017spatiotemporal} & 72.2\% \\ ShuttleNet+MIFS ~\cite{Shi_2017_ICCV} & 71.7\% \\ GRP ~\cite{grp} & 70.9\% \\ SVMP ~\cite{wang2018video} & 71.0\% \\ $L^2$STM~\cite{Sun_2017_ICCV} & 66.2\% \\\hline Ours(TS ResNet) & \textbf{72.4\%} \\ Ours(TS ResNet+IDT) & \textbf{74.3\%} \\ Ours(TS I3D) & \textbf{81.5\%} \\\hline \end{tabular}} \quad \scalebox{0.9}{ \begin{tabular}{lcc}\hline \multicolumn{3}{c}{NTU-RGBD} \\\hline Method & \multicolumn{1}{c}{Cross-Subject} & \multicolumn{1}{c}{Cross-View} \\\hline VA-LSTM~\cite{Zhang_2017_ICCV} & 79.4\% & 87.6\% \\ TS-LSTM ~\cite{Lee_2017_ICCV} & 74.6\% & 81.3\% \\ ST-LSTM+Trust Gate~\cite{liu2017skeleton} & 69.2\% & 77.7\% \\ SVMP~\cite{wang2018video} & 78.5\% & 86.4\% \\ GRP ~\cite{grp} & 76.0\% & 85.1\% \\ Res-TCN~\cite{soo2017interpretable} & 74.3\% & 83.1\% \\\hline Ours & \textbf{81.6\%} & \textbf{88.7\%} \\\hline \multicolumn{3}{c}{YUP++} \\\hline Method & \multicolumn{1}{c}{Stationary} & \multicolumn{1}{c}{Moving} \\\hline TRN~\cite{feichtenhofer2017temporal} & 92.4\% & 81.5\% \\ SVMP~\cite{wang2018video} & 92.5\% & 83.1\% \\ GRP~\cite{grp} & 92.9\% & 83.6\% \\\hline Ours & \textbf{95.1\%} & \textbf{88.3\%} \\\hline \end{tabular}} \caption{Comparisons to the state-of-the-art on each dataset following their respective official evaluation protocols. We used three splits for HMDB-51. `TS' refers to `Two-Stream'.} \label{table:2} \end{table}

\paragraph*{\textbf{Comparisons to the State of the Art:}} In Table~\ref{table:2}, we compare DSP to the state-of-the-art results on each dataset. On the HMDB-51 dataset, we also report accuracy when DSP is combined with hand-crafted features (computed using dense trajectories~\cite{wang2013dense} and summarized as Fisher vectors (IDT-FV)). As the results show, our scheme achieves significant improvements over the state of the art. For example, without IDT-FV, our scheme is 3\% better than the next best scheme~\cite{Wang2016} (69.4\% vs. 72.4\% ours). Incorporating IDT-FV improves this to 74.3\%, which is again better than other schemes. We note that the recently introduced I3D architecture~\cite{carreira2017quo} is pre-trained on the larger Kinetics dataset and, when fine-tuned on HMDB-51, leads to about 80.9\% accuracy. To understand the advantages of DSP on pooling I3D-generated features, we applied our scheme to their bottleneck features (extracted using the public code provided by the authors) from the fine-tuned model. We find that our scheme further improves I3D by about 0.6\%, showing that there is still room for improvement for this model.
On the other two datasets, NTU-RGBD and YUP++, we find that our scheme leads to about 5--7\% and 3--6\% improvements respectively, and outperforms prior schemes based on recurrent networks and temporal relation models, suggesting that our pooling scheme captures spatio-temporal cues much better than recurrent models.

\paragraph*{\textbf{Run Time Analysis:}} In Figure~\ref{fig:runtime}, we compare the run time of DSP with similar methods such as rank pooling, dynamic images, and GRP. We used the Matlab implementations of the other schemes and used the same hardware platform (2.6GHz Intel CPU, single core) for our comparisons. To be fair, we used a single hyperplane in DSP. As the plot shows, our scheme is similar in computational cost to rank pooling and GRP.

\begin{figure} \begin{floatrow} \ffigbox{% \includegraphics[width=0.8\linewidth,trim={0cm 0cm 0cm 0cm}, clip]{rebuttal/running_time.eps} }{% \caption{Run time analysis of DSP against GRP~\cite{grp}, RP~\cite{fernando2015modeling}, and Dynamic Images~\cite{bilen2016dynamic}. \label{fig:runtime}}% } \capbtabbox{% \begin{adjustbox}{width=1\linewidth} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \#frames & 1 & 80 & 100 & 140 & 160 & 180 & 260 \\ \hline \#classes & 51 & 49 & 34 & 27 & 23 & 21 & 12 \\ \hline AP~\cite{carreira2017quo} & \textbf{80.8} & 81.8 & 86.1 & 84.1 & 82.3 & 78.0 & \textbf{77.3} \\ \hline DSP (ours) & \textbf{81.6} & 82.8 & 88.5 & 88.0 & 86.1 & 83.3 & \textbf{82.6} \label{T2} \end{tabular} \end{adjustbox} }{% \caption{Comparison of I3D performance on sequences of increasing lengths in HMDB-51 split-1.}% } \end{floatrow} \end{figure}

\para{Analysis of Results on I3D Features:} To understand why the improvement of DSP on I3D (80.9\% against our 81.5\%) is less significant on HMDB-51 than our results on other datasets, we explored the reasons further. The I3D scheme uses chunks of 64 frames as input to generate one feature output. However, to obtain DSP representations, we need a sufficient number of features per video sequence to solve the underlying Riemannian optimization problem adequately, which may be unavailable for shorter video clips. To this end, we re-categorized HMDB-51 into subsets of sequences according to their lengths. In Table~\ref{T2}, we show the performance on these subsets and the number of action classes for sequences in these subsets. As our results show, while the difference between average pooling (AP) (as is done in~\cite{carreira2017quo}) and DSP is less significant when the sequences are shorter ($<$80 frames), it becomes significant ($>$5\%) when the videos are longer ($>$260 frames). This clearly shows that DSP on I3D is significantly better than AP on I3D.

\begin{figure}[!h] \begin{center} \includegraphics[width=1\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/hyperplanes_2.eps} \end{center} \caption{Visualizations of our DSP descriptor (when applied on raw RGB frames) on an HMDB-51 video sequence. The first column shows a sample frame from the video; the second-to-seventh columns show the six hyperplanes produced by DSP.
Interestingly, we find that each hyperplane captures different aspects of the sequence: the first two mostly capture spatial cues, while the rest capture the temporal dynamics at increasing granularities.} \label{fig:4} \end{figure}

\para{Qualitative Results:} In Figure~\ref{fig:4}, we visualize the hyperplanes that our scheme produces when applied to raw RGB frames from HMDB-51 videos -- i.e., instead of CNN features, we directly feed the raw RGB frames into our DSP, with adversarial noise generated as suggested in~\cite{moosavi2017universal}. We find that the subspaces capture spatial and temporal properties of the data separately; e.g., the first two hyperplanes seem to capture mostly the spatial cues in the video (such as the objects, background, etc.), while the rest capture mostly the temporal dynamics at greater granularities. Note that we do not provide any specific criteria to achieve this behavior; instead, the scheme automatically seems to learn such hyperplanes corresponding to various levels of discriminative information. In the supplementary materials, we provide comparisons of this visualization against those generated by PCA and generalized rank pooling~\cite{grp}.

\section{Introduction} \label{intro} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/outline.eps} \end{center} \caption{A graphical illustration of our discriminative subspace pooling with adversarial noise. For every video sequence (as CNN features), our scheme generates a positive bag (with these features) and a negative bag by adding adversarial perturbations to the features. Next, we learn discriminative temporally-ordered hyperplanes that separate the two bags. We use orthogonality constraints on these hyperplanes and use them as representations for the video. As such representations belong to a Stiefel manifold, we use a classifier on this manifold for video recognition. } \label{fig:1} \end{figure}

Deep learning has enabled significant advancements in several areas of computer vision; however, the sub-area of video-based recognition continues to be elusive. In comparison to image data, the volumetric nature of video data makes it significantly more difficult to design models that can remain within the limitations of existing hardware and the available training datasets. Typical ways to adapt image-based deep models to videos are to resort to recurrent deep architectures or use three-dimensional spatio-temporal convolutional filters~\cite{carreira2017quo,tran2015learning,simonyan2014two}. Due to hardware limitations, the 3D filters cannot be arbitrarily long. As a result, they usually have fixed temporal receptive fields (of a few frames)~\cite{tran2015learning}. While recurrent networks, such as LSTM and GRU, have shown promising results on video tasks~\cite{zhu2016co,liu2016spatio,ballas2015delving}, training them is often difficult, and so far their performance has been inferior to models that look at parts of the video followed by a late fusion~\cite{carreira2017quo,simonyan2013deep}. While better CNN architectures, such as the recent I3D framework~\cite{carreira2017quo}, are essential for pushing the state of the art on video tasks, it is also important to have efficient representation learning schemes that can capture the long-term temporal video dynamics from predictions generated by a temporally local model.
Recent efforts in this direction, such as rank pooling, temporal segment networks, and temporal relation networks~\cite{Wang2016,grp,fernando2016learning,feichtenhofer2017temporal,fernando2015modeling,bilen2016dynamic,soo2017interpretable}, aim to incorporate temporal dynamics over clip-level features. However, such models often ignore the noise in the videos and use representations that adhere to some plausible criterion. For example, in the rank pooling scheme~\cite{grp,fernando2016learning,fernando2015modeling,bilen2016dynamic,fernando2016discriminative}, the features from each frame are assumed to be temporally ordered, and a representation is learned that preserves this order, without however accounting for whether the learned representation fits to the data foreground or background.

In this paper, we present a novel pooling framework to contrastively summarize the temporally-ordered video features. Different from prior works, we assume that per-frame video features contain noisy parts that could confuse a classifier in a downstream task, such as, for example, action recognition. A robust representation, in this setting, will be one that could prevent the classifier from using these vulnerable features for making predictions. Learning such representations is similar in motivation to the idea of contrastive learning~\cite{hadsell2006dimensionality}, which has achieved promising results in many recent works for the task of unsupervised visual feature learning~\cite{bachman2019learning,henaff2019data,hjelm2018learning,oord2018representation,tian2019contrastive,wu2018unsupervised,zhuang2019local,he2019momentum,chen2020simple}. These works learn the visual representation by contrasting positive pairs against negative ones via a loss function, namely the contrastive loss. However, it is challenging to generate negative samples for video sequences, and maintaining a memory bank is impractical for video data. To this end, we resort to some intuitions made in a few recent works in the area of adversarial perturbations~\cite{lu2017safetynet,oh2017adversarial,moosavi2017universal,xie2017adversarial}. Such perturbations are noise-like patterns that, when added to data, can fail an otherwise well-trained, highly accurate classifier. Such perturbations are usually subtle and, in image recognition tasks, quasi-imperceptible to a human. It was shown in several recent works that such noise can be learned from data. Specifically, by gradient ascent on the learning objective that the classifier minimizes, one can produce perturbations that push the data points towards the class boundaries, thereby causing the classifier to mis-classify. Given that the strength (norm) of this noise is bounded, such noise is highly likely to find minimum-strength patterns that select the features most susceptible to mis-classification. To this end, we use the recent universal adversarial perturbation generation scheme~\cite{moosavi2017universal}. Once the perturbations are learned (and fixed) for the dataset, we use them to learn robust representations for the video. Specifically, from the features of every frame, we make two bags: one consisting of the original features, and the other consisting of features perturbed by the noise. Next, we learn a discriminative hyperplane that separates the bags in a max-margin framework.
Such a hyperplane, which in our case is produced by a primal support vector machine (SVM), finds decision boundaries that can well-separate the bags; the resulting hyperplane is a single vector and is a weighted combination of all the data points in the bags. Given that the data features are non-linear, and given that a kernelized SVM might not scale well with sequence lengths, we propose to instead use multiple hyperplanes for the classification task, by stacking several such hyperplanes into a column matrix. We propose to use this matrix as our data representation for the video sequence. However, there is a practical problem with this descriptor: each such descriptor is local to its respective sequence and thus may not be comparable between videos. To this end, we place additional restrictions on the hyperplanes, regularizing them to be orthogonal, resulting in our representation being subspaces. Such subspaces mathematically belong to the so-called Stiefel manifold~\cite{boothby1986introduction}. We formulate a novel objective on this manifold for learning such subspaces on video features. Further, as each feature is not independent of the previous ones, we impose additional temporal constraints. We provide efficient Riemannian optimization algorithms for solving our objective, specifically using the Riemannian conjugate gradient scheme that has been used in several other recent works~\cite{grp,harandi2014manifold,huang2015projection}. Our overall pipeline is graphically illustrated in Figure~\ref{fig:1}.

We present experiments on three video recognition tasks, namely (i) action recognition, (ii) dynamic texture recognition, and (iii) 3D skeleton-based action recognition. In all the experiments, we show that our scheme leads to state-of-the-art results, often improving the accuracy by 3--14\%. Before moving on, we summarize the main contributions of this work: \begin{itemize} \item We introduce adversarial perturbations into the video recognition setting for contrastively learning robust video representations. \item We formulate a binary classification problem to learn temporally-ordered discriminative subspaces that separate the data features from their perturbed counterparts. \item We provide efficient Riemannian optimization schemes for solving our objective on the Stiefel manifold. \item Our experiments on three datasets demonstrate state-of-the-art results. \end{itemize}

\section{Proposed Method} Let $X=\left<x_1, x_2, \cdots, x_n\right>$ be a sequence of video features, where $x_i\in\reals{d}$ represents the feature from the $i$-th frame.
We use `frame' in a loose sense; it could mean a single RGB frame or a sequence of a few RGB or optical flow frames (as in the two-stream~\cite{simonyan2014very} or the I3D architectures~\cite{carreira2017quo}) or a 3D skeleton. The feature representation $x_i$ could be the output from an intermediate layer of a CNN. As alluded to in the introduction, our key idea is the following. We seek an effective representation of $X$ that is (i) compact, (ii) preserves characteristics that are beneficial for the downstream task (such as video dynamics), and (iii) efficient to compute. Recent methods such as generalized rank pooling~\cite{grp} have similar motivations and propose a formulation that learns compact temporal descriptors that are closer to the original data in $\ell_2$ norm. However, such a reconstructive objective may also capture noise, thus leading to sub-optimal performance. Instead, we take a different approach, in the spirit of contrastive learning. Specifically, we assume to have access to some noise features $Z=\set{z_1, z_2,\cdots, z_m}$, each $z_i\in\reals{d}$. Let us call $X$ the positive bag, with label $y=+1$, and $Z$ the negative bag with label $y=-1$. Our main goal is to find a discriminative hyperplane that separates the two bags; these hyperplanes can then be used as the representation for the bags. An obvious question is how such a hyperplane can be a good data representation. To answer this, let us consider the following standard SVM formulation with a single discriminator $w\in\reals{d}$: \begin{equation} \min_{w, \xi\geq 0} \frac{1}{2}\enorm{w}^2 + \sum_{\theta\in X\cup Z} \left[\max(0, 1- y(\theta) w^\top \theta - \xi_{\theta}) + C \xi_{\theta}\right], \label{eq:1} \end{equation} where, with a slight abuse of notation, we assume $y(\theta) \in \set{+1, -1}$ is the label of $\theta$, $\xi$ are the slack variables, and $C$ is a regularization constant on the slacks. Given the positive and negative bags, the above objective learns a linear classification boundary that could separate the two bags with a classification accuracy of, say, $\gamma$. If the two bags are easily separable, then the number of support vectors used to construct the separating hyperplane might be few, and the hyperplane thus may not capture a weighted combination of a majority of the points in the bags; as a result, the learned hyperplane would not be representative of the bags. However, if the negative bag $Z$ is suitably selected and we demand a high $\gamma$, we may turn~\eqref{eq:1} into a difficult optimization problem and would demand that the solver overfit the decision boundary to the bags; this overfitting creates a significantly better summarized representation, as it may need to span a larger portion of the bags to satisfy the $\gamma$ accuracy.\footnote{Here the regularization parameter $C$ is mainly assumed to help avoid outliers.} This overfitting of the hyperplane is our key idea, which allows us to avoid using data features that are susceptible to perturbations, while summarizing the rest. There are two key challenges in developing such a representation, namely (i) an appropriate noise distribution for the negative bag, and (ii) a formulation to learn the separating hyperplanes. We explore and address these challenges below.
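To make the single-hyperplane idea concrete, here is a minimal Python sketch (assuming scikit-learn is available; the Gaussian perturbation is only a stand-in for the UAP noise introduced next, and all sizes are illustrative):

\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d = 50, 128                       # frames, feature dimension
X = rng.normal(size=(n, d))          # positive bag: per-frame features
eps = 0.5 * rng.normal(size=d)       # fixed perturbation (UAP stand-in)
Z = X + eps                          # negative bag: perturbed features

feats = np.vstack([X, Z])
labels = np.hstack([np.ones(n), -np.ones(n)])

# The learned SVM weight vector w plays the role of the hyperplane in
# Eq. (1); it is the single-vector summary of the video sequence.
svm = LinearSVC(C=1.0, max_iter=10000).fit(feats, labels)
w = svm.coef_.ravel()
print(w.shape)                       # (128,): one descriptor for the bag
\end{verbatim}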
\begin{algorithm} [t] \SetAlgoLined \KwIn{Feature points $x_{ij}$, network weights $W$, fooling rate $\psi$, cross-entropy loss with softmax function $f(.)$, normalization operator $N(.)$.} \KwOut{Adversarial noise vector $\epsilon$.} Initialization: $\epsilon\gets 0.$ \\ \Repeat{$Accuracy\leq 1-\psi$} { $\Delta \epsilon\gets \argmin_{r} \|r\|_2 - \sum_{ij} f(W^\top (x_{ij}), W^\top (x_{ij}+\epsilon+r))$\; $\epsilon \gets N(\epsilon + \Delta \epsilon)$\; } \KwRet{$\epsilon$} \caption{Optimization step for solving adversarial noise.} \label{alg:1} \end{algorithm}

\subsection{Finding Noise Patterns} As alluded to above, having good noise distributions that help us identify the vulnerable parts of the feature space is important for our scheme to perform well. Classic contrastive learning schemes either pair random noise~\cite{tian2019contrastive,wang2018video} or use in-batch samples~\cite{chen2020simple,he2019momentum} as the negatives; however, random noise may introduce uncertainty into the learned feature, and in-batch samples would take too much memory during training on video data. Instead, we resort to the recent idea of universal adversarial perturbations (UAP)~\cite{moosavi2017universal} to form the negatives, by adding this global noise pattern to the positives. This scheme is dataset-agnostic and provides a systematic and mathematically grounded formulation for generating adversarial noise that, when added to the original features, is highly likely to cause a pre-trained classifier to mis-classify. Further, this scheme is computationally efficient and requires less data for building relatively generalizable universal perturbations.

Precisely, suppose $\mathcal{X}$ denotes our dataset and let $h$ be a CNN trained on $\mathcal{X}$, such that $h(x)$ for $x\in\mathcal{X}$ is the class label predicted by $h$. Universal perturbations are noise vectors $\epsilon$ found by solving the following objective: \begin{equation} \min_{\epsilon} \norm{\epsilon} \text{ s.t. } h(x+\epsilon) \neq h(x), \forall x \in \mathcal{X}, \label{eq:uap} \end{equation} where $\norm{\epsilon}$ is a suitable normalization on $\epsilon$ such that its magnitude remains small and thus does not change $x$ significantly. In~\cite{moosavi2017universal}, it is argued that this norm-bound restricts the optimization problem in~\eqref{eq:uap} to look for the minimal perturbation $\epsilon$ that will move the data points towards the class boundaries; i.e., it selects the features that are most vulnerable -- which is precisely the type of noise we need in our representation learning framework. To this end, we extend the scheme described in~\cite{moosavi2017universal} to our contrastive setting. Differently from their work, we learn a UAP on high-level CNN features, as detailed in Alg.~\ref{alg:1} above, where $x_{ij}$ refers to the $i^{th}$ frame in the $j^{th}$ video. We use the classification accuracy before and after adding the noise as our optimization criterion, as captured by maximizing the cross-entropy loss.

\subsection{Discriminative Subspace Pooling} Once a ``challenging'' noise distribution is chosen, the next step is to find a summarization technique for the given video features. While one could use a simple discriminative classifier, such as the one described in~\eqref{eq:1}, to achieve this, such a linear classifier might not be sufficiently powerful to separate the potentially non-linear CNN features and their perturbed counterparts.
\subsection{Discriminative Subspace Pooling} Once a ``challenging'' noise distribution is chosen, the next step is to find a summarization technique for the given video features. While one could use a simple discriminative classifier, such as the one described in~\eqref{eq:1}, to achieve this, such a linear classifier might not be sufficiently powerful to separate the potentially non-linear CNN features from their perturbed counterparts. An alternative is to resort to non-linear decision boundaries using a kernelized SVM; however, that may make our approach less scalable and poses challenges for end-to-end learning. Thus, we seek a representation that lies within the span of the data features, while having more capacity for separating non-linear features. Our main idea is to use a subspace of discriminative directions (as against a single one as in~\eqref{eq:1}) for separating the two bags, such that every feature $x_i$ is classified by at least one of the hyperplanes to the correct class label. Such a scheme can be looked upon as approximating a non-linear decision boundary by a set of linear ones, each separating a portion of the data. Mathematically, suppose $W\in\reals{d\times p}$ is a matrix with the hyperplanes as its columns; then we seek to optimize: \begin{equation} \min_{W, \xi} \Omega(W) + \sum_{\theta \in X\cup Z} \left[\max\left(0, 1-\max\left(\bf{y}(\theta) \odot W^\top \theta\right) - \xi_{\theta}\right) + C\xi_{\theta}\right], \label{eq:2} \end{equation} where $\bf{y}$ is a vector with the label $y$ repeated $p$ times along its rows. The quantity $\Omega$ is a suitable regularization on $W$, one possibility being the orthonormality constraint $W^\top W = \eye{p}$, in which case $W$ spans a $p$-dimensional subspace of $\reals{d}$. Enforcing such subspace constraints (orthonormality) on the hyperplanes is often empirically seen to yield better performance, as also observed in~\cite{grp}. The operator $\odot$ is the element-wise multiplication, and the quantity $\max(\bf{y}(\theta) \odot W^\top \theta)$ captures the maximum value of the element-wise multiplication, signifying that if at least one hyperplane classifies $\theta$ correctly, then the hinge loss will be zero. Recall that we work with video data, and thus there are temporal characteristics of this data modality that may need to be captured by our representation. In fact, recent works show that such temporal ordering constraints indeed result in better performance, e.g., in action recognition~\cite{grp,fernando2015modeling,bilen2016dynamic,bilen2017action}. However, one well-known issue with such ordered pooling techniques is that they impose a global temporal order on all frames jointly. Such holistic ordering ignores the repetitive nature of human actions, for example in actions such as clapping or hand-waving. As a result, it may lead the pooled descriptor to overfit to non-repetitive features in the video data, which may correspond to noise or background. Usually a slack variable is introduced in the optimization to handle such repetitions; however, its effectiveness is questionable. To this end, we propose simple temporal-segment-based ordering constraints, where we first segment a video sequence into multiple non-overlapping temporal segments $\mathcal{T}_0, \mathcal{T}_1,\cdots \mathcal{T}_{\lfloor n/\delta\rfloor}$, and then enforce ordering constraints only within the segments. We find the segment length $\delta$ as the minimum number of consecutive frames that do not result in a repeat in the action features.
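Before assembling the complete objective, the following sketch spells out the hinge term of~\eqref{eq:2}: a feature incurs no loss as soon as at least one of the $p$ hyperplanes classifies it correctly. The slack variables are omitted and all names are illustrative.
\begin{verbatim}
# A sketch of the multi-hyperplane hinge term in Eq. (2).
import numpy as np

def multi_hyperplane_hinge(W, theta, y):
    """W: (d, p) hyperplanes; theta: (d,) feature; y: +1 or -1."""
    scores = y * (W.T @ theta)            # y(theta) * W^T theta, shape (p,)
    return max(0.0, 1.0 - scores.max())  # zero if one hyperplane is right

def bag_loss(W, X, Z):
    """Hinge terms summed over positive bag X and negative bag Z."""
    return (sum(multi_hyperplane_hinge(W, x, +1) for x in X) +
            sum(multi_hyperplane_hinge(W, z, -1) for z in Z))

# usage: W with orthonormal columns (Omega(W)) via a QR factorization
d, p = 64, 5
W, _ = np.linalg.qr(np.random.randn(d, p))
X, Z = np.random.randn(30, d), np.random.randn(30, d)
print(bag_loss(W, X, Z))
\end{verbatim}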
With the subspace constraints on $W$ and the temporal segment-based ordering constraints on the video features, our complete \textbf{order-constrained discriminative subspace pooling optimization} can be written as: \begin{align} \label{eq:5}\min_{\substack{W^\top W = \eye{p},\\\xi,\zeta\geq 0}}&\sum_{\theta \in X\cup Z}\!\!\!\!\!\left[\max\!\left(0, 1-\max\left(\bf{y}(\theta) \odot W^\top \theta\right) - \xi_{\theta}\right)\right]\!\!+\!C_1\!\!\!\!\!\sum_{\theta\in X \cup Z}\!\!\!\!\!\xi_{\theta} +\!C_2\!\!\sum_{i<j} \zeta_{ij},\\ \label{eq:6}&\enorm{W^\top x_i}^2 + 1 \leq \enorm{W^\top x_j}^2 +\zeta_{ij}, \quad i<j, \forall (i,j)\in \mathcal{T}_k, \text{where}\\ \mathcal{T}_k &= \set{k\delta+1, k\delta+2,\cdots, \min(n,(k+1)\delta)}, \forall k\in\set{0,1,\cdots, \lfloor n/\delta\rfloor} \\ \delta & = b^*-a^*, \text{ where } (a^*,b^*) = \argmin_{a,b>a} \enorm{x_a - x_b}, \label{eq:3} \end{align} where~\eqref{eq:6} captures the temporal order, while the last two equations define the temporal segments and compute the appropriate segment length $\delta$, respectively. Note that the temporal segmentation could be done offline, using all videos in the dataset and selecting $\delta$ as the mean. In the next section, we present a scheme for optimizing $W$ by solving the objective in~\eqref{eq:5} and~\eqref{eq:6}. Once each video sequence is encoded by a subspace descriptor, we use a classifier on the Stiefel manifold for recognition. Specifically, we use the standard exponential projection metric kernel~\cite{grp,harandi2014expanding} to capture the similarity between two such representations, which are then classified using a kernelized SVM. \subsection{Efficient Optimization} The orthogonality constraints on $W$ result in a non-convex optimization problem that may seem difficult to solve at first glance. However, such subspaces are well-studied objects in differential geometry. Specifically, they are elements of the Stiefel manifold $\mathcal{S}(d,p)$ (orthonormal $p$-frames in $\reals{d}$), a type of Riemannian manifold with positive curvature~\cite{boothby1986introduction}. There exist several well-known techniques for optimizing objectives defined on this manifold~\cite{absil2009optimization}; one efficient scheme is Riemannian conjugate gradient (RCG)~\cite{smith1994optimization}. This method is similar to the conjugate gradient scheme in Euclidean spaces, except that for curved-manifold-valued variables, the gradients should adhere to the geometry (curvature) of the manifold (such as the orthogonality of the columns in our case), which can be achieved via suitable projection operations (called exponential maps). However, such projections may be costly. Fortunately, there are well-known approximate projection methods, termed~\emph{retractions}, that achieve these projections efficiently without sacrificing accuracy. Thus, putting it all together, the only part we need to derive for using RCG on our problem is the Euclidean gradient of our objective with respect to $W$; a sketch of the resulting update step is given below.
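For concreteness, here is a minimal sketch of one such update, simplified to plain Riemannian gradient descent with a QR-based retraction instead of full conjugate gradient; \texttt{egrad} stands for the Euclidean gradient $\partial g/\partial W$ derived next, and the step size is illustrative.
\begin{verbatim}
# One Riemannian descent step on the Stiefel manifold (sketch).
import numpy as np

def stiefel_step(W, egrad, step=0.1):
    """W: (d, p) with W^T W = I; egrad: Euclidean gradient at W."""
    # project the Euclidean gradient onto the tangent space at W
    sym = 0.5 * (W.T @ egrad + egrad.T @ W)
    rgrad = egrad - W @ sym
    # retract the updated point back onto the manifold via QR
    Q, R = np.linalg.qr(W - step * rgrad)
    return Q * np.sign(np.diag(R))   # fix column signs of Q
\end{verbatim}
Riemannian optimization toolboxes automate exactly these projection and retraction steps; the sketch only makes explicit what RCG needs from us, namely the Euclidean gradient.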
To this end, rewriting~\eqref{eq:6} as a hinge loss on~\eqref{eq:5}, our objective on $W$ and its gradient are: \begin{align} &\min_{W\in\mathcal{S}(d,p)} g(W):= \sum_{\theta \in X\cup Z}\left[\max\left(0, 1-\max\left(\bf{y}(\theta) \odot W^\top \theta\right) - \xi_{\theta}\right)\right] \notag\\ &\qquad\qquad\qquad + \frac{1}{n(n-1)}\sum_{i<j}\max(0, 1+\enorm{W^\top x_i}^2-\enorm{W^\top x_j}^2-\zeta_{ij}), \end{align} \begin{align} &\frac{\partial{g}}{\partial W} = \sum_{\theta \in X\cup Z} A(W; \theta,y(\theta)) + \frac{1}{n(n-1)}\sum_{i<j}B(W;x_i,x_j), \text{where }\\ &A(W; \theta, y(\theta)) = \left\{ \begin{array}{ll} &0, \quad \text{if } \max(y(\theta)\odot W^\top\theta-\xi_{\theta}) \geq 1\\ &-\left[\mathbf{0}_{d\times r\!-\!1}\ y(\theta)\theta\ \mathbf{0}_{d\times p-r}\right],\ r=\argmax_q y(\theta)\!\odot\!W^\top_q\theta, \text{ else}\\ \end{array} \right.\\ &B(W; x_i, x_j) = \left\{ \begin{array}{ll} &0, \quad \text{if } \enorm{W^\top x_j}^2 \geq 1+\enorm{W^\top x_i}^2 -\zeta_{ij} \\ &2(x_ix_i^\top -x_jx_j^\top)W,\quad \text{else.}\\ \end{array} \right. \end{align} In the definition of $A(W)$, we use $W^\top_q$ to denote the $q$-th column of $W$. To reduce clutter in the derivations, we have avoided including the terms using $\mathcal{T}$. Assuming the matrices of the form $xx^\top$ can be computed offline, on careful scrutiny we see that the cost of the gradient computation for each data pair is only $O(d^2p)$ for $B(W)$ and $O(dp)$ for the discriminative part $A(W)$. If we include temporal segmentation with $k$ segments, the complexity for $B(W)$ is $O(d^2p/k)$. \noindent\paragraph*{\textbf{End-to-End Learning:}} The proposed scheme can be used in an end-to-end CNN learning setup, where the representations are learned jointly with the CNN weights. In this case, CNN back-propagation needs gradients with respect to the solutions of the argmin problem defined in~\eqref{eq:5}, which may seem difficult. However, there exist well-founded techniques~\cite{chiang1984fundamental},~\cite[Chapter 5]{faugeras1993three} to address such problems, specifically in the CNN setting~\cite{gould2016differentiating}, and these techniques can be directly applied to our setup. However, since gradient derivations using these techniques require reviewing some well-known theoretical results, which would be a digression from the course of this paper, we provide them in the supplementary materials. \section{Related work} \label{related_work} Traditional video learning methods use hand-crafted features (from a few frames) -- such as dense trajectories, HOG, HOF, etc.~\cite{wang2013action} -- to capture the appearance and dynamics of the video, and summarize them using a bag-of-words representation or, more elegantly, using Fisher vectors~\cite{sadanand2012action}. With the success of deep learning methods, feeding video data as RGB frames, optical flow subsequences, RGB differences, or 3D skeleton data directly into CNNs is now preferred. One successful such approach is the two-stream model (and its variants)~\cite{simonyan2014two,feichtenhofer2017temporal,feichtenhofer2017spatiotemporal,hayat2015deep}, which uses video segments (of a few frames) to train deep models; the predictions from the segments are fused via average pooling to generate a video-level prediction. There are also extensions of this approach that directly learn models in an end-to-end manner~\cite{feichtenhofer2017spatiotemporal}.
While such models are appealing for capturing video dynamics, they demand memory for storing the intermediate feature maps of the entire sequence, which may be impractical for long sequences. Recurrent models~\cite{baccouche2011sequential,donahue2015long,du2015hierarchical,li2016action,srivastava2015unsupervised,yue2015beyond} have been explored to address this issue; they can learn to filter useful information while the videos are streamed through them, but they are often found difficult to train~\cite{pascanu2013difficulty}, perhaps due to the need to back-propagate over time. Using 3D convolutional kernels~\cite{carreira2017quo,tran2015learning} is another idea that has proven promising, but it brings along more parameters. The above architectures are usually trained to improve classification accuracy; however, they do not consider the robustness of their internal representations -- accounting for which may improve their generalizability to unseen test data. One way to improve the data representation is via contrastive learning~\cite{hadsell2006dimensionality}, which learns representations by minimizing a contrastive loss between positive and negative pairs. This approach has been used in several recent works~\cite{bachman2019learning,henaff2019data,hjelm2018learning,oord2018representation,tian2019contrastive,wu2018unsupervised,zhuang2019local,he2019momentum,chen2020simple}, achieving promising results for unsupervised visual representation learning. Although their motivations differ, the core idea is to train an encoder without supervision by minimizing the contrastive loss, so that the encoded visual representation is closer to its positive data points and far away from its negatives. The difference from our formulation is that some works~\cite{bachman2019learning,henaff2019data,hjelm2018learning,oord2018representation,tian2019contrastive,zhuang2019local,chen2020simple} formulate the positive and negative pairs within a mini-batch, while others~\cite{he2019momentum,wu2018unsupervised} build a memory bank for generating the pairs. However, as video data is often voluminous, it is hard to apply the same strategy to learning video representations. Moreover, the size of the memory bank could be huge due to the potentially large spatio-temporal semantic complexity of video data. Instead, we show how to use adversarial perturbations~\cite{moosavi2017universal} to produce negative samples in a network-agnostic manner, which can then be used within a novel subspace-based contrastive learning framework. Specifically, different from the classic contrastive learning methods mentioned above, we formulate a binary classification problem that contrasts the video features against their perturbed counterparts, and we use the learned decision boundaries as the video representation. Our main inspiration comes from the recent work of Moosavi-Dezfooli et al.~\cite{moosavi2017universal}, which shows the existence of quasi-imperceptible image perturbations that can fool a well-trained CNN model. They provide a systematic procedure to learn such perturbations in an image-agnostic way. In Xie et al.~\cite{xie2017adversarial}, such perturbations are used to improve the robustness of an object detection system. Similar ideas have been explored in~\cite{lu2017safetynet,oh2017adversarial,zhang2018deep}. In Sun et al.~\cite{sun2014discover}, a latent model is used to explicitly localize discriminative video segments.
In Chang et al.~\cite{chang2017semantic}, a semantic pooling scheme is introduced for localizing events in untrimmed videos. While these schemes share a similar motivation to ours, the problem setup and formulations are entirely different. On the representation learning front of our contribution, a few prior pooling schemes are similar in the sense that they also use the parameters of an optimization objective as a representation. The most related works are rank pooling and its variants~\cite{fernando2016learning,fernando2015modeling,fernando2016discriminative,su2016hierarchical,bilen2017action,cherian2018non,wang2017ordered}, which use a rank-SVM to capture the temporal evolution of the video. Similar to ours, Cherian et al.~\cite{grp} propose to use a subspace to represent video sequences. However, none of these methods ensures that the temporal-ordering constraints capture useful video content rather than some temporally-varying noise. To overcome this issue, Wang et al.~\cite{wang2018video} propose a contrastive video representation learning scheme that uses the decision boundaries of a support vector machine contrasting the data features against independently sampled noise. In this paper, we revisit this problem in the setting of data-dependent noise generation via an adversarial noise design, and we learn a non-linear decision boundary using Riemannian optimization; our learned per-sequence representations are more expressive and lead to significant performance benefits. \section{Discriminative Subspace Pooling: Intuitions} In the following, we analyze the technicalities behind DSP in a very constrained and simplified setting, which we believe will help in understanding it better. A rigorous mathematical analysis of this topic is beyond the scope of this paper. Let us use the notation $X$ to denote a matrix of $n$ data features, let $Z$ be the noise bag, and let $\epsilon$ be the adversarial noise. Then $Z=X+\epsilon$, where $\epsilon$ is fixed for the entire dataset. Simplifying our notation from the main paper, let us further assume we are looking for a single direction $w$ that could separate the two bags $X$ and $Z$. In an ideal discriminative setting, this implies, for example, $w^\top X_i=1, \forall i=1,2,\cdots ,n$ and $w^\top Z_i = w^\top(X_i+\epsilon) = -1, \forall i=1,\cdots, n$. Substituting the former into the latter, we have $w^\top\epsilon = -2$. Combining, we have a set of $n+1$ equations\footnote{Practically, we should use inequalities to signify the half-spaces, but here we omit such technicalities.} as follows: \begin{align} \label{eq:hype_1} w^\top X_i &= 1,\ i=1,2,\cdots, n\\ w^\top\epsilon &= -2. \label{eq:hype_2} \end{align} Assuming $\epsilon$ is a direction in the feature space that could confuse a well-trained classifier, the above set of equations suggests that, when learning the discriminative hyperplane in a max-margin setup, our framework penalizes (at twice the rate) directions that could be confusing. Let us further simplify this setting. Suppose we are working in a 2D feature space, where $x=[x_1, x_2]$ is a data point. Hypothetically, let $x_1$ be a 1D CNN feature capturing the video background, while $x_2$ captures the dynamics. When learning the noise, suppose that the dimension $x_2$ is easier to use to make a classifier mis-classify (or vice versa).
Based on this, and assuming $\ell_1$ regularization on the noise patterns, the learned universal perturbation $\epsilon$ would be $[0, e]$ (the noise is added on the easily mis-classified dimension). In this case, our negative bag is given by $z=[x_1, x_2+e]$. Using the above equations~\eqref{eq:hype_1} and~\eqref{eq:hype_2}, and assuming $w=[w_1,w_2]$ is the discriminative direction, we can show that \begin{equation} w = [(e+2x_2)/(x_1e), -2/e], \end{equation} where the second dimension, which is vulnerable to noise, is mapped to a constant, while the other dimension undergoes a non-linear transformation with respect to the data dimensions $x_1$ and $x_2$ and the noise $e$ -- the entire setting can be seen as a dimensionality reduction combined with a non-linear transformation. \input{end2end} \section{Classifying DSP descriptors Using Neural Networks} Besides the above end-to-end setup, below we investigate an alternative setup that mixes frameworks -- that is, it uses a Riemannian framework to generate our DSP descriptors and a multi-layer perceptron (MLP) for video classification. Specifically, we explore the possibility of training an MLP on the discriminatively pooled subspace descriptors, as against the non-linear SVM (using the exponential projection metric kernel) suggested in the main paper, since an SVM-based classifier might not be scalable when working with very large video datasets. In Figure~\ref{fig:mlp}, we compare the performance of this experiment on HMDB-51 split-1. For the MLP, we use the following architecture: we first introduce a $1 \times p$ vector to linearly pool the $p$ columns produced by the DSP scheme. The resultant $d\times 1$ vector is then passed through a $d\times 51$ weight matrix learned against the data labels after a softmax function. We use ReLU units after the weight matrix, and use the cross-entropy loss; a sketch of this head is given below. \begin{figure} \begin{floatrow} \ffigbox{% \includegraphics[width=0.7\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/MLP.eps} }{% \caption{Accuracy comparisons when using a non-linear SVM and a multi-layer perceptron for classifying the DSP descriptors (on HMDB-51 split-1). \label{fig:mlp}}% } \capbtabbox{% \begin{adjustbox}{width=0.8\linewidth} \begin{tabular}{|l|l|l|} \hline &RGB & Flow\\ \hline DSP (SVM)& 58.5 & 67.0 \\ \hline DSP (end-to-end)& 56.2 & 65.0 \\ \hline \end{tabular} \end{adjustbox} }{% \caption{Comparison between end-to-end learning and SVM-based DSP classification on HMDB-51 split-1 with ResNet152.}% \label{e2e} } \end{floatrow} \end{figure} The results in Figure~\ref{fig:mlp} suggest that the non-linear kernel performs better than the MLP, especially when the number of hyperplanes $p$ is large. This might be because the $1 \times p$ pooling vector cannot capture as much information about each hyperplane as the exponential projection metric kernel does. Note that the subspaces are non-linear manifold objects, and thus the linear pooling operation could be sub-optimal. However, the MLP result is still better than the baseline result shown in Table 1 of the main paper. We also provide the result of the end-to-end learning setup in Table~\ref{e2e}, which is slightly lower than that of the SVM setup; we believe this is perhaps because of the diagonal approximations to the Hessians that we use in our back-propagation gradient formulations (see the end of Section~\ref{sec:end-to-end}).
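For completeness, the following is a hedged PyTorch sketch of the MLP head described above, with a learnable $1\times p$ pooling vector and a $d\times 51$ classification layer; the layer sizes, the initialization, and the training recipe are illustrative, not the exact configuration used in our experiments.
\begin{verbatim}
# A sketch of the MLP head: pool the p subspace columns with a
# learnable 1 x p vector, then classify with a d x 51 layer.
import torch
import torch.nn as nn

class DSPHead(nn.Module):
    def __init__(self, d=2048, p=5, num_classes=51):
        super().__init__()
        self.pool = nn.Parameter(torch.randn(p) / p)  # 1 x p pooling vector
        self.cls = nn.Linear(d, num_classes)          # d x 51 weight matrix

    def forward(self, U):       # U: (batch, d, p) DSP descriptors
        v = U @ self.pool                       # (batch, d) pooled vector
        return torch.relu(self.cls(v))          # ReLU after the weight matrix

head = DSPHead()
U = torch.randn(8, 2048, 5)     # a mini-batch of subspace descriptors
loss = nn.functional.cross_entropy(head(U), torch.randint(0, 51, (8,)))
loss.backward()
\end{verbatim}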
\section{Additional Qualitative Experiment Results} In the main paper, we compare a non-linear kernel on DSP against a linear kernel on AP and MP, which some may argue is unfair, as the former kernel is expected to be much richer than the latter (and thus it is unclear whether the performance of DSP comes from the use of the non-linear kernel or from the representation itself). To this end, we explore the impact of various kernel choices for the methods. Note that the output of the DSP representation is a matrix with orthogonal columns and is an element of the non-linear Stiefel manifold, which, to the best of our knowledge, cannot be embedded into a linear space of finite dimensions without distortion. Thus, using a linear SVM may be mathematically incorrect. That said, we evaluate the idea in Table~\ref{T1}(left): (i) using a linear classifier (SVM) on DSP, and (ii) using a non-linear classifier (RBF kernel + SVM) on the other features. As is clear, the linear SVM on DSP is 4-8\% inferior, and using a non-linear SVM on AP/MP did not improve over DSP -- demonstrating that it is not the classifier that helps, but rather the DSP representation itself. \begin{table}[ht] \begin{adjustbox}{width=1.0\linewidth} \centering \begin{tabular}{|l|l|l|l|l|} \hline & RGB(L) & Flow(L) & RGB(NL) & Flow(NL)\\ \hline AP & 46.7 & 60.0 & 44.2 & 57.8 \\ \hline MP & 45.1 & 58.5 & 40.6 & 56.1 \\ \hline DSP & 50.4 & 63.2 & 58.5 & 67.0 \\ \hline \end{tabular} \quad \begin{tabular}{|l|l|l|l||l|l||l|l|} \hline & RGB & Flow & R+F & {NTU-S} & {NTU-V} & {YUP-S} & {YUP-M} \\ \hline RP & 48.6 & 58.3 & 65.2 & 71.6 & 80.5 & 91.3 & 81.6 \\ \hline GRP & 53.3 & 63.4 & 70.9 & 76.0 & 85.1 & 92.9 & 83.6 \\ \hline DSP & 58.5 & 67.0 & 72.4 & 81.6 & 88.7 & 95.1 & 88.3 \\ \hline \end{tabular} \end{adjustbox} \caption{Left: Comparison between (L)inear and (N)on-(L)inear classifiers on HMDB-51 split-1 with ResNet152. Right: Comparison of DSP against other pooling schemes, especially rank pooling (RP) and generalized (bi-directional) rank pooling (GRP), on HMDB-51 (two-stream ResNet-152), NTU, and YUP datasets (following the official evaluation protocols).} \label{T1} \end{table} Apart from that, we also compare against the Generalized Rank Pooling (GRP) scheme~\cite{grp}, which is one of the most important baseline methods mentioned in the main paper. Table~\ref{T1}(right) shows the results on all three datasets. As is seen, DSP still outperforms these prior methods. For RP and GRP, we used the public code from the authors without any modifications, and we used 6 subspaces for GRP after cross-validation. \begin{figure}[!h] \begin{center} \subfigure[]{\label{subfig:4}\includegraphics[width=0.32\linewidth]{rebuttal/scale.eps}} \subfigure[]{\label{subfig:3}\includegraphics[width=0.32\linewidth]{rebuttal/dropout.eps}} \end{center} \caption{(a) SNR plot with Gaussian noise and (b) dropout to generate negative features.} \label{fig:all_plots} \end{figure} \subsection{More on Noise Selection} In this section, we explore other alternatives for noise selection, as against the adversarial noise (UAP) used in the main paper. First, we use different levels of Gaussian noise (varying the signal-to-noise ratio); the results are shown in Figure~\ref{subfig:4}. As the magnitude of the noise increases, the accuracy does increase, but it remains below that achieved when using UAP. A second alternative to UAP\footnote{We thank an ECCV reviewer for suggesting this alternative.} is to use dropout to build the noise bag. The result is provided in Figure~\ref{subfig:3}.
Specifically, instead of UAP, we use a negative bag containing features obtained by applying dropout to the original video features. We increase the dropout ratio in the plot, which does improve the accuracy; however, it remains below UAP. \begin{figure}[] \begin{center} \includegraphics[width=0.9\linewidth,trim={0cm 0cm 0cm 0cm},clip]{figure/comparison.eps} \end{center} \caption{Qualitative comparisons of DSP subspaces against those learned via PCA and GRP~\cite{grp}.} \label{comparison} \end{figure} \subsection{Qualitative Comparisons to PCA and GRP} In Figure~\ref{comparison}, we compare visualizations of the subspaces from DSP, PCA, and GRP. From these two sets of examples, we find that the type of discriminative subspaces that DSP learns is quite different from the other two. Interestingly, we find that DSP disentangles the appearance and the dynamics; this is not the case for PCA or GRP, even when GRP uses temporal-ordering constraints on top of PCA.
\section{Introduction} \label{sec:intro} A {\em spanning tree} on a finite graph $G=(V,E)$ is a connected subgraph of $G$ which is a tree and has vertex set $V$. A {\em uniform spanning tree} in $G$ is a random spanning tree chosen uniformly from the set of all spanning trees. Let $Q_n = [-n, n]^d \subset \bZ^d$, and write $\sU_{Q_n}$ for a uniform spanning tree on $Q_n$. Pemantle \cite {Pem} showed that the weak limit of $\sU_{Q_n}$ exists and is connected if and only if $d \leq 4$. (He also showed that the limit does not depend on the particular sequence of sets $Q_n$ chosen, and that `free' or `wired' boundary conditions give rise to the same limit.) We will be interested in the case $d = 2$, and will call the limit the uniform spanning tree (UST) on $\bZ^2$ and denote it by $\sU$. For further information on USTs, see for example \cite{BLPS, BKPS, Lyo}. The UST can also be obtained as a limit as $p, q \to 0$ of the random cluster model -- see \cite{Hag}. A loop erased random walk (LERW) on a graph is a process obtained by chronologically erasing the loops of a random walk on the graph. There is a close connection between the UST and the LERW. Pemantle \cite{Pem} showed that the unique path between any two vertices $v$ and $w$ in a UST on a finite graph $G$ has the same distribution as the loop-erasure of a simple random walk on $G$ from $v$ to $w$. Wilson \cite{Wil96} then proved that a UST could be generated by a sequence of LERWs by the following algorithm. Pick an arbitrary vertex $v \in G$ and let $T_0 = \{v\}$. Now suppose that we have generated the tree $T_k$ and that $T_k$ does not span. Pick any point $w \in G \setminus T_k$ and let $T_{k+1}$ be the union of $T_k$ and the loop-erasure of a random walk started at $w$ and run until it hits $T_k$. We continue this process until we generate a spanning tree $T_m$. Then $T_m$ has the distribution of the UST on $G$. We now fix our attention on $\bZ^2$. By letting the root $v$ in Wilson's algorithm go to infinity, one sees that one can obtain the UST $\sU$ on $\bZ^2$ by first running an infinite LERW from a point $x_0$ (see Section \ref{sect:LERW} for the precise definition) to create the first path in $\sU$, and then using Wilson's algorithm to generate the rest of $\sU$. This construction makes it clear that $\sU$ is a 1-sided tree: from each point $x$ there is a unique infinite (self-avoiding) path in $\sU$. Both the LERW and the UST on $\bZ^2$ have conformally invariant scaling limits. Lawler, Schramm and Werner \cite{LSW1} proved that the LERW in simply connected domains scales to $\operatorname{SLE}_2$ -- Schramm-Loewner evolution with parameter $2$. Using the relation between LERW and UST, this implies that the UST has a conformally invariant scaling limit in the sense of \cite{Sch} where the UST is regarded as a measure on the set of triples $(a,b,\gamma)$ where $a,b \in \bR^2 \cup \{\infty\}$ and $\gamma$ is a path between $a$ and $b$. In addition \cite{LSW1} proves that the UST Peano curve -- the interface between the UST and the dual UST -- has a conformally invariant scaling limit, which is $\operatorname{SLE}_8$. In this paper we will study properties of the UST $\sU$ on $\bZ^2$. We have two natural metrics on $\sU$; the intrinsic metric given by the shortest path in $\sU$ between two points, and the Euclidean metric. For $x,y \in \bZ^2$ let $\gam(x,y)$ be the unique path in $\sU$ between $x$ and $y$, and let $d(x,y)=|\gam(x,y)|$ be its length. 
If $U_0$ is a connected subset of $\sU$ then we write $\gam(x, U_0)$ for the unique path from $x$ to $U_0$. Write $\gam(x,\infty)$ for the path from $x$ to infinity. We define balls in the intrinsic metric by $$ B_d(x,r) =\{ y: d(x,y)\le r \}$$ and let $|B_d(x,r)|$ be the number of points in $B_d(x,r)$ (the {\em volume} of $B_d(x,r)$). We write $$ B(x,r) =\{ y \in \bZ^2: |x-y| \le r\}, $$ for balls in the Euclidean metric, and let $B_R = B(R)= B(0,R)$, $B_d(R)=B_d(0,R)$. Our goals in this paper are to study the volume of balls in the $d$ metric, to obtain estimates of the degree of `metric distortion' between the intrinsic and Euclidean metrics, and to study the behaviour of the simple random walk (SRW) on $\sU$. To state our results we need some further notation. Let $G(n)$ be the expected number of steps of an infinite LERW started at $0$ until it leaves $B(0,n)$. Clearly $G(n)$ is strictly increasing; extend $G$ to a continuous strictly increasing function from $[1,\infty)$ to $[1,\infty)$, with $G(1)=1$. Let $g(t)$ be the inverse of $G$, so that $G(g(t))=t=g(G(t))$ for all $t\in [1,\infty)$. By \cite{Ken, Mas09} we have \begin{equation} \label{eq:growthexp} \lim_{n \to \infty} \frac{ \log G(n)} {\log n} = \frac54. \end{equation} Our first result is on the relation between balls in the two metrics. \begin{theorem} \label{t:main1} (a) There exist constants $c, C > 0$ such that for all $r \ge 1$, ${\lambda} \ge 1$, \begin{equation}\label{ei:Bdtail} {\mathbb P}\big( B_d(0, {\lambda}^{-1} G(r) ) \not\subset B(0, r ) \big) \le C e^{-c {\lambda}^{2/3}}. \end{equation} (b) For all $\varepsilon > 0$, there exist $c(\eps), C(\eps) > 0$ and $\lambda_0(\varepsilon) \geq 1$ such that for all $r \ge 1$ and $\lambda \geq 1$, \begin{equation}\label{ei:Bd-lb} {\mathbb P}\big( B(0, r) \not\subset B_d(0, {\lambda} G(r)) \big) \leq C \lambda^{-4/15 + \varepsilon}, \end{equation} and for all $r \geq 1$ and all $\lambda \geq \lambda_0(\varepsilon)$, \begin{equation} \label{ei:Bd-lb2} {\mathbb P}\big( B(0, r) \not\subset B_d(0, {\lambda} G(r)) \big) \geq c \lambda^{-4/5 - \varepsilon}. \end{equation} \end{theorem} We do not expect any of these bounds to be optimal. In fact, we could improve the exponent in the bound \eqref{ei:Bdtail}, but to simplify our proofs we have not tried to find the best exponent that our arguments yield when we have exponential bounds. However, we will usually attempt to find the best exponent given by our arguments when we have polynomial bounds, as in \eqref{ei:Bd-lb} and \eqref{ei:Bd-lb2}. The reason we have a polynomial lower bound in \eqref{ei:Bd-lb2} is that if we have a point $w$ such that $\abs{w} = r$, then the probability that $\gamma(0,w)$ leaves the ball $B(0, \lambda r)$ is bounded below by a constant times $\lambda^{-1}$ (see Lemma \ref{l:Losubset}). This in turn implies that the probability that $w \notin B_d(0,{\lambda} G(r))$ is bounded from below by $c\lambda^{-4/5-\varepsilon}$ (Proposition \ref{p:ProMw<}). Theorem \ref{t:main1} leads immediately to bounds on the tails of $|B_d(0,R)|$. However, while \eqref{ei:Bdtail} gives a good bound on the upper tail, \eqref{ei:Bd-lb} only gives polynomial control on the lower tail. By working harder (see Theorem \ref{t:Vexplb}) we can obtain the following stronger bound. \begin{theorem} \label{t:main2} Let $R\ge 1$, ${\lambda} \ge 1$.
Then \begin{align} \label{e:vlubpii} \Pro{}{ |B_d(0,R)| \ge {\lambda} g(R)^2 } &\le C e^{ - c {\lambda}^{1/3}}, \\ \label{e:vlubpi} \Pro{}{ |B_d(0,R)| \le {\lambda}^{-1} g(R)^2 } &\le C e^{-c {\lambda}^{1/9}}. \end{align} So in particular there exists $C$ such that for all $R \geq 1$, \begin{equation}\label{e:Vdasymp2} C^{-1} g(R)^2 \leq \Exp{}{ |B_d(0,R)| } \leq C g(R)^2. \end{equation} \end{theorem} We now discuss the simple random walk on the UST $\sU$. To help distinguish between the various probability laws, we will use the following notation. For LERW and simple random walk in $\bZ^2$ we will write ${\mathbb P}^z$ for the law of the process started at $z$. The probability law of the UST will be denoted by ${\mathbb P}$, and the UST will be defined on a probability space $(\Omega, {\mathbb P})$; we let ${\omega}$ denote elements of $\Omega$. For the tree $\sU({\omega})$ write $x \sim y$ if $x$ and $y$ are connected by an edge in $\sU$, and for $x \in \bZ^2$ let $$ \mu_x=\mu_x ({\omega}) = | \{ y: x\sim y\} | $$ be the degree of the vertex $x$. The random walk on $\sU({\omega})$ is defined on a second space ${\cal D} = (\bZ^2)^{\bZ_+}$. Let $X_n$ be the coordinate maps on ${\cal D}$, and for each ${\omega} \in \Omega$ let $P^x_{\omega}$ be the probability on ${\cal D}$ which makes $X=(X_n, n \ge 0)$ a simple random walk on $\sU({\omega})$ started at $x$. Thus we have $P^x_{\omega}(X_0=x)=1$, and $$ P^x_{\omega}( X_{n+1}=y| X_n=x) =\frac{1}{\mu_x({\omega})} \quad \hbox{ if } y \sim x. $$ We remark that since the UST $\sU$ is a subgraph of $\bZ^2$ the SRW $X$ is recurrent. We define the heat kernel (transition density) with respect to $\mu$ by \begin{equation} \label{eq:hkdef} p^{\omega}_n(x,y) = \mu_y^{-1} P^x_{\omega}(X_n =y). \end{equation} Define the stopping times \begin{align}\label{e:tRdef} \tau_R &= \min \{ n \ge 0: d(0, X_n) > R\},\\ \label{e:wtrdef} \widetilde \tau_r &= \min \{ n \ge 0: |X_n| > r \}. \end{align} Given functions $f$ and $g$ we write $f \approx g$ to mean $$ \lim_{n \to \infty} \frac{ \log f(n)}{\log g(n)} = 1, $$ and $f \asymp g$ to mean that there exists $C\ge 1$ such that $$ C^{-1} f(n) \le g(n) \le C f(n), \quad n \ge 1. $$ The following summarizes our main results on the behaviour of $X$. Some more precise estimates, including heat kernel estimates, can be found in Theorems \ref{ptight} -- \ref{t:hk} in Section \ref{sect:RW}. \begin{theorem} \label{t:mainrw} We have for ${\mathbb P}$-a.a. ${\omega}$, $P^0_{\omega}$-a.s., \begin{align} \label{e:m1} p_{2n}(0,0) &\approx n^{-8/13}, \\ \label{e:m2} \tau_R &\approx R^{13/5}, \\ \label{e:m3} \widetilde \tau_r &\approx r^{13/4}, \\ \label{e:m4} \max_{0 \leq k \leq n} d(0, X_k) &\approx n^{5/13}. \end{align} \end{theorem} \smallskip We now explain why these exponents arise. If $G$ is a connected graph, with graph metric $d$, we can define the volume growth exponent (called by physicists the fractal dimension of $G$) by $$ d_f=d_f(G) = \lim_{R \to \infty} \frac { \log |B_d(0,R)|}{\log R}, $$ if this limit exists.
Using this notation, Theorem \ref{t:main2} and \eqref{eq:growthexp} imply that $$ d_f (\sU) = 8/5, \quad {\mathbb P}-\hbox{a.s.} $$ Following work by mathematical physicists in the early 1980s, random walks on graphs with fractal growth of this kind have been studied in the mathematical literature. (Much of the initial mathematical work was done on diffusions on fractal sets, but many of the same results carry over to the graph case). This work showed that the behaviour of SRW on a (sufficiently regular) graph $G$ can be summarized by two exponents. The first of these is the volume growth exponent $d_f$, while the second, denoted $d_w$, and called the walk dimension, can be defined by $$ d_w=d_w(G) = \lim_{R \to \infty} \frac { \log E^0 \tau_R }{\log R} \quad \hbox { (if this limit exists).} $$ Here $0$ is a base point in the graph, and $\tau_R$ is as defined in \eqref{e:tRdef}; it is easy to see that if $G$ is connected then the limit is independent of the base point. One finds that $d_f \ge 1$, $2\le d_w \le 1 + d_f$, and that all these values can arise -- see \cite{Brev}. Many of the early papers required quite precise knowledge of the structure of the graph in order to calculate $d_f$ and $d_w$. However, \cite{BCK} showed that in some cases it is sufficient to know two facts: the volume growth of balls, and the growth of effective resistance between points in the graph. Write $R_{\rm eff}(x,y)$ for the effective resistance between points $x$ and $y$ in a graph $G$ -- see Section \ref{sect:UST} for a precise definition. The results of \cite{BCK} imply that if $G$ has uniformly bounded vertex degree, and there exist $\alpha>0$, $\zeta>0$ such that \begin{align} \label{e:vgt} c_1 R^\alpha &\le \abs{B_d(x,R)} \le c_2 R^\alpha, \quad x \in G, \, R \ge 1, \\ \label{e:rgt} c_1 d(x,y)^\zeta &\le R_{\rm eff}(x,y) \le c_2 d(x,y)^\zeta, \quad x,y \in G, \end{align} then writing $\tau^x_R = \min\{n : d(x,X_n) > R\}$, \begin{align} \label{e:p2n} p_{2n}(x,x) &\asymp n^{-\alpha/(\alpha + \zeta)}, \quad x \in G, \, n \ge 1, \\ \label{e:tdw} E^x \tau^x_R &\asymp R^{\alpha + \zeta}, \quad x \in G, \, R \ge 1. \end{align} (They also obtained good estimates on the transition probabilities $P^x(X_n =y)$ -- see \cite[Theorem 1.3]{BCK}.) From \eqref{e:p2n} and \eqref{e:tdw} one sees that if $G$ satisfies \eqref{e:vgt} and \eqref{e:rgt} then $$ d_f = \alpha, \quad d_w = \alpha + \zeta. $$ The decay $n^{-d_f/d_w}$ for the transition probabilities in \eqref{e:p2n} can be explained as follows. If $R\ge 1$ and $2n = R^{d_w}$ then with high probability $X_{2n}$ will be in the ball $B(x, cR)$. This ball has $c R^{d_f} \approx c n^{d_f/d_w}$ points, and so the average value of $p_{2n}(x,y)$ on this ball will be of order $n^{-d_f/d_w}$. Given enough regularity on $G$, this average value will then be close to the actual value of $p_{2n}(x,x)$. In the physics literature a third exponent, called the spectral dimension, was introduced; this can be defined by \begin{equation}\label{ei:ds} d_s(G) = -2 \lim_{n \to \infty} \frac{ \log P^x_{\omega}( X_{2n} = x) }{\log 2n}, \quad \hbox { (if this limit exists).} \end{equation} This gives the rate of decay of the transition probabilities; one has $d_s(\bZ^d)=d$. The discussion above indicates that the three indices $d_f$, $d_w$ and $d_s$ are not independent, and that given enough regularity in the graph $G$ one expects that $$ d_s =\frac{ 2d_f}{d_w}. $$ For graphs satisfying \eqref{e:vgt} and \eqref{e:rgt} one has $d_s = 2\alpha/(\alpha + \zeta)$.
Note that if $G$ is a tree and satisfies \eqref{e:vgt} then $R_{\rm eff}(x,y) = d(x,y)$ and so \eqref{e:rgt} holds with $\zeta=1$. Thus \begin{equation} \label{e:exp-tree} d_f = \alpha, \quad d_w = \alpha +1, \quad d_s = \frac{ 2 \alpha}{\alpha +1 }. \end{equation} For random graphs arising from models in statistical physics, such as critical percolation clusters or the UST, random fluctuations will mean that one cannot expect \eqref{e:vgt} and \eqref{e:rgt} to hold uniformly. Nevertheless, provided similar estimates hold with high enough probability, it was shown in \cite{BJKS} and \cite{KM} that one can obtain enough control on the properties of the random walk $X$ to calculate $d_f, d_w$ and $d_s$. An additional contribution of \cite{BJKS} was to show that it is sufficient to estimate the volume and resistance growth for balls from one base point. In Section \ref{sect:RW}, we will use these methods to show that \eqref{e:exp-tree} holds for the UST, namely the following. \begin{theorem} \label{t:dims} We have for ${\mathbb P}$-a.a. ${\omega}$ \begin{align}\label{e:dims} d_f(\sU) = \frac{8}{5}, \quad d_w(\sU)= \frac{13}{5}, \quad d_s(\sU) = \frac{16}{13}. \end{align} \end{theorem} The methods of \cite{BJKS} and \cite{KM} were also used in \cite{BJKS} to study the incipient infinite cluster (IIC) for high dimensional oriented percolation, and in \cite{KN} to show the IIC for standard percolation in high dimensions has spectral dimension $4/3$. These critical percolation clusters are close to trees and have $d_f=2$ in their graph metric. Our results for the UST are the first time these exponents have been calculated for a two-dimensional model arising from the random cluster model. It is natural to ask about critical percolation in two dimensions, but in spite of what is known via SLE, the values of $d_w$ and $d_s$ appear at present to be out of reach. \smallskip The rest of this paper is laid out as follows. In Section \ref{sect:LERW}, we define the LERW on $\bZ^2$ and recall the results from \cite{Mas09, BM} which we will need. The paper \cite{BM} gives bounds on $M_D$, the length of the loop-erasure of a random walk run up to the first exit of a simply connected domain $D$. However, in addition to these bounds, we require estimates on $d(0,w)$, which, by Wilson's algorithm, is the length of the loop-erasure of a random walk started at $0$ and run up to the first time it hits $w$; we obtain these bounds in Proposition \ref{p:ProMw<}. In Section \ref{sect:UST}, we study the geometry of the two dimensional UST $\sU$, and prove Theorems \ref{t:main1} and \ref{t:main2}. In addition (see Proposition \ref{p:kmest}) we show that with high probability the electrical resistance in the network $\sU$ between $0$ and $B_d(0,R)^c$ is greater than $R/{\lambda}$. The proofs of all of these results involve constructing the UST $\sU$ in a particular way using Wilson's algorithm and then applying the bounds on the lengths of LERW paths from Section \ref{sect:LERW}. In Section \ref{sect:RW}, we use the techniques from \cite{BJKS,KM} and our results on the volume and effective resistance of $\sU$ from Section \ref{sect:UST} to prove Theorems \ref{t:mainrw} and \ref{t:dims}. Throughout the paper, we use $c, c'$, $C, C'$ to denote positive constants which may change between each appearance, but do not depend on any variable. If we wish to fix a constant, we will denote it with a subscript, e.g. $c_0$.
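As an aside for readers who wish to experiment, the following illustrative Python sketch (ours, not part of \cite{Pem} or \cite{Wil96}) implements the chronological loop-erasure defined in the next section, together with Wilson's algorithm on a finite box, from which quantities such as $d(0,w)$ can be sampled empirically; the finite box with a fixed root is only a stand-in for the infinite-volume construction described above.
\begin{verbatim}
# Chronological loop-erasure and Wilson's algorithm on a finite grid.
import random

def loop_erase(path):
    """Chronological loop-erasure L(path): keep path[i], then jump to
    one past the last visit of path[i]."""
    out, i = [], 0
    while i < len(path):
        j = max(k for k in range(i, len(path)) if path[k] == path[i])
        out.append(path[i])
        i = j + 1
    return out

def wilson_ust(n, seed=0):
    """Wilson's algorithm on [0, n]^2, rooted at (0, 0); returns
    parent pointers along gamma(v, 0) for every vertex v."""
    rng = random.Random(seed)
    in_tree, parent = {(0, 0)}, {}
    for v in [(x, y) for x in range(n + 1) for y in range(n + 1)]:
        if v in in_tree:
            continue
        path = [v]                 # random walk until it hits the tree
        while path[-1] not in in_tree:
            x, y = path[-1]
            nbrs = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx <= n and 0 <= y + dy <= n]
            path.append(rng.choice(nbrs))
        branch = loop_erase(path)  # add the loop-erased branch
        for a, b in zip(branch, branch[1:]):
            parent[a] = b
            in_tree.add(a)
    return parent

tree = wilson_ust(20)
w, steps = (20, 20), 0             # d(0, w): follow parents to the root
while w != (0, 0):
    w, steps = tree[w], steps + 1
print(steps)
\end{verbatim}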
\section{Loop erased random walks} \label{sect:LERW} In this section, we look at LERW on $\bZ^2$. We let $S$ be a simple random walk on $\bZ^2$, and given a set $D \subset \bZ^2$, let $$ \sigma_D = \min \{j \geq 1 : S_j \in \bZ^2 \setminus D \}$$ be the first exit time of the set $D$, and $$ \xi_D = \min \{j \geq 1 : S_j \in D \}$$ be the first hitting time of the set $D$. If $w \in \bZ^2$, we write $\xi_w$ for $\xi_{\{ w \}}$. We also let $\sigma_R = \sigma_{B(R)}$ and use a similar convention for $\xi_R$. The outer boundary of a set $D \subset \bZ^2$ is $$ \partial D = \{ x \in \bZ^2 \setminus D : \text{ there exists $y \in D$ such that $\abs{x-y}=1$} \},$$ and its inner boundary is $$ \partial_i D = \{ x \in D : \text{ there exists $y \in \bZ^2 \setminus D$ such that $\abs{x - y}=1$} \}.$$ Given a path $\lambda = [\lambda(0), \ldots, \lambda(m)]$ in $\bZ^2$, let $\operatorname{L}(\lambda)$ denote its chronological loop-erasure. More precisely, we let $$ s_0 = \max \{ j : \lambda(j) = \lambda(0) \},$$ and for $i > 0$, $$ s_i = \max \{ j: \lambda(j) = \lambda(s_{i-1} + 1) \}.$$ Let $$n = \min \{ i: s_i = m \}.$$ Then $$ \operatorname{L} (\lambda) = [\lambda(s_0), \lambda(s_1), \ldots, \lambda(s_n)].$$ We note that by Wilson's algorithm, $\operatorname{L}(S[0,\xi_w])$ has the same distribution as $\gamma(0,w)$ -- the unique path from $0$ to $w$ in the UST $\sU$. We will therefore use $\gamma(0,w)$ to denote $\operatorname{L}(S[0,\xi_w])$ even when we make no mention of the UST $\sU$. For positive integers $l$, let $\Omega_l$ be the set of paths $\omega = [0, \omega_1, \ldots, \omega_k] \subset \bZ^2$ such that $\omega_j \in B_l$, $j=1, \ldots, k-1$ and $\omega_k \in \partial B_l$. For $n \geq l$, define the measure $\mu_{l,n}$ on $\Omega_l$ to be the distribution on $\Omega_l$ obtained by restricting $\operatorname{L}(S[0,\sigma_n])$ to the part of the path from $0$ to the first exit of $B_l$. For a fixed $l$ and $\omega \in \Omega_l$, it was shown in \cite{Law91} that the sequence $\mu_{l,n}(\omega)$ is Cauchy. Therefore, there exists a limiting measure $\mu_l$ such that $$ \lim_{n \to \infty} \mu_{l,n}(\omega) = \mu_l(\omega).$$ The $\mu_l$ are consistent and therefore there exists a measure $\mu$ on infinite self-avoiding paths. We call the associated process the infinite LERW and denote it by $\widehat{S}$. We denote the exit time of a set $D$ for $\widehat{S}$ by $\widehat{\sigma}_D$. By Wilson's algorithm, $\widehat{S}[0,\infty)$ has the same distribution as $\gamma(0,\infty)$, the unique infinite path in $\sU$ starting at $0$. Depending on the context, either notation will be used. For a set $D$ containing $0$, we let $M_D$ be the number of steps of $\operatorname{L}(S[0,\sigma_D])$. Notice that if $D = \bZ^2 \setminus \{w\}$ and $S$ is a random walk started at $x$, then $M_D = d(x,w)$. In addition, if $D' \subset D$ then we let $M_{D',D}$ be the number of steps of $\operatorname{L}(S[0,\sigma_D])$ while it is in $D'$, or equivalently the number of points in $D'$ that are on the path $\operatorname{L}(S[0,\sigma_D])$. We let $\widehat{M}_n$ be the number of steps of $\widehat{S}[0,\widehat{\sigma}_n]$. As in the introduction, we set $G(n) = {\rm E}[\widehat{M}_n]$, extend $G$ to a continuous strictly increasing function from $[1,\infty)$ to $[1,\infty)$ with $G(1)=1$, and let $g$ be the inverse of $G$. It was shown in \cite{Ken, Mas09} that $G(n) \approx n^{5/4}$. In fact, the following is true. \begin{lemma} \label{l:vgrwth} Let $\varepsilon>0$.
Then there exist positive constants $c(\varepsilon)$ and $C(\varepsilon)$ such that if $r\ge 1$ and ${\lambda} \ge 1$, then \begin{align} \label{e:KMg1} c {\lambda}^{5/4 -\varepsilon} G(r) &\le G( {\lambda} r) \le C {\lambda}^{5/4+ \varepsilon} G(r) , \\ c {\lambda}^{4/5 -\varepsilon} g(r) &\le g( {\lambda} r) \le C {\lambda}^{4/5 + \varepsilon} g(r). \end{align} \end{lemma} \begin{proof} The first equation follows from \cite[Lemma 6.5]{BM}. Note that while the statement there holds only for all $r \ge R(\eps)$, by choosing different values of $c$ and $C$, one can easily extend it to all $r \ge 1$. The second statement follows from the first since $g = G^{-1}$ and $G$ is increasing. {\hfill $\square$ \bigskip} \end{proof} The following result from \cite{BM} gives bounds on the tails of $\widehat{M}_n$ and of $M_{D',D}$ for a broad class of sets $D$ and subsets $D' \subset D$. We call a subset of $\bZ^2$ {\em simply connected} if all connected components of its complement are infinite. \begin{thm} \cite[Theorems 5.8 and 6.7]{BM} \label{t:expLERW} There exist positive global constants $C$ and $c$, and given $\eps > 0$, there exist positive constants $C(\eps)$ and $c(\eps)$ such that for all $\lambda > 0$ and all $n$, the following holds. \begin{enumerate} \item Suppose that $D \subset \bZ^2$ contains $0$, and $D' \subset D$ is such that for all $z \in D'$, there exists a path in $D^c$ connecting $B(z,n+1)$ and $B(z,2n)^c$ (in particular this will hold if $D$ is simply connected and ${\rm dist}(z,D^c) \leq n$ for all $z \in D'$). Then \begin{equation} \label{eq:uptail} \Pro{}{M_{D',D} > \lambda G(n)} \leq 2 e^{-c \lambda}. \end{equation} \item For all $D \supset B_n$, \begin{equation} \Pro{}{M_D < \lambda^{-1} G(n)} \leq C(\eps) e^{-c(\eps) \lambda^{4/5 - \eps}}. \end{equation} \item \begin{equation} \label{eq:uptailinf} \Pro{}{\widehat{M}_n > \lambda G(n)} \leq C e^{-c \lambda}. \end{equation} \item \begin{equation} \label{eq:lowtailinf} \Pro{}{\widehat{M}_n < \lambda^{-1} G(n)} \leq C(\eps) e^{-c(\eps) \lambda^{4/5 - \eps}}. \end{equation} \end{enumerate} \end{thm} We would like to use \eqref{eq:uptail} in the case where $D = \bZ^2 \setminus \{w\}$ and $D' = B(0,n) \setminus \{w\}$. However these choices of $D$ and $D'$ do not satisfy the hypotheses in \eqref{eq:uptail}, so we cannot use Theorem \ref{t:expLERW} directly. The idea behind the proof of the following proposition is to get the distribution on $\gamma(0,w)$ using Wilson's algorithm by first running an infinite LERW $\gamma$ (whose complement is simply connected) and then running a LERW from $w$ to $\gamma$. \begin{proposition} \label{p:expbnd2} \label{p:expbnd} There exist positive constants $C$ and $c$ such that the following holds. Let $ n \ge 1$ and $w \in B(0,n)$. Let $Y_w = w$ if $\gamma(0,w) \subset B(0,n)$; otherwise let $Y_w$ be the first point on the path $\gamma(0,w)$ which lies outside $B(0,n)$. Then, \begin{equation} \label{e:point2} \Pro{}{ d(0,Y_w) > \lambda G(n) } \leq C e^{-c \lambda}. \end{equation} \end{proposition} \begin{proof} Let $\gamma$ be any infinite path starting from 0, and let $\widetilde D = \bZ^2 \setminus \gamma$. Then $\widetilde D$ is the union of disjoint simply connected subsets $D_i$ of $\bZ^2$; we can assume $w \in D_1$ and let $D_1 = D$. By \eqref{eq:uptail}, (taking $D'=B_n \cap D$) there exist $C < \infty$ and $c > 0$ such that \begin{equation} \label{eq:expbnd} \Pro{w}{M_{D',D} > \lambda G(n)} \leq C e^{-c \lambda}. 
\end{equation} Now suppose that $\gamma$ has the distribution of an infinite LERW started at 0. By Wilson's algorithm, if $S^w$ is an independent random walk started at $w$, then $\gamma(0,w)$ has the same distribution as the path from $0$ to $w$ in $\gamma \cup \operatorname{L}(S^w[0,\sigma_D])$. Therefore, $$ d(0,Y_w) = \abs{\gamma(0,Y_w)} \leq \widehat{M}_n + M_{D',D}, $$ and so, \begin{align*} \Pro{}{d(0,Y_w) > \lambda G(n)} \leq \Pro{}{\widehat{M}_n > (\lambda/2) G(n)} + \max_D \Pro{w}{M_{D',D} > (\lambda/2) G(n)}. \end{align*} The result then follows from \eqref{eq:uptailinf} and \eqref{eq:expbnd}. {\hfill $\square$ \bigskip} \end{proof} \begin{lemma} \label{condesc} There exists a positive constant $C$ such that for all $k \geq 2$, $n \geq 1$, and $K \subset \bZ^2 \setminus B_{4kn}$, the following holds. The probability that $\operatorname{L}(S[0,\xi_K])$ reenters $B_n$ after leaving $B_{kn}$ is less than $C k^{-1}$. This also holds for infinite LERWs, namely \begin{equation} \Pro{}{\widehat{S}[\widehat{\sigma}_{kn}, \infty) \cap B_{n} \neq \emptyset} \leq C k^{-1}. \end{equation} \end{lemma} \begin{proof} The result for infinite LERWs follows immediately by taking $K = \bZ^2 \setminus B_m$ and letting $m$ tend to $\infty$. We now prove the result for $\operatorname{L}(S[0,\xi_K])$. Let $\alpha$ be the part of the path $\operatorname{L}(S[0,\xi_K])$ from $0$ up to the first point $z$ where it exits $B_{kn}$. Then by the domain Markov property for LERW \cite{Law91}, conditioned on $\alpha$, the rest of $\operatorname{L}(S[0,\xi_K])$ has the same distribution as the loop-erasure of a random walk started at $z$, conditioned on the event $\{ \xi_K < \xi_\alpha\}$. Therefore, it is sufficient to show that for any path $\alpha$ from $0$ to $\partial B_{kn}$ and $z \in \partial B_{kn}$, \begin{eqnarray} \label{eqcondesc} \condPro{z}{\xi_n < \xi_K}{\xi_K < \xi_\alpha} = \frac{\Pro{z}{\xi_{n} < \xi_K; \xi_K < \xi_\alpha}}{\Pro{z}{\xi_K < \xi_\alpha}} \leq C k^{-1}. \end{eqnarray} On the one hand, \begin{align*} & \Pro{z}{\xi_n < \xi_K; \xi_{K} < \xi_\alpha} \\ &\leq \Pro{z}{\xi_{kn/2} < \xi_\alpha} \max_{x \in \partial_i B_{kn/2}} \Pro{x}{\xi_n < \xi_\alpha} \max_{w \in \partial B_{n}} \Pro{w}{\sigma_{2kn} < \xi_\alpha} \max_{y \in \partial B_{2kn}} \Pro{y}{\xi_K < \xi_\alpha}. \end{align*} However, by the discrete Beurling estimates (see \cite[Theorem 6.8.1]{LL08}), for any $x \in \partial_i B_{kn/2}$ and $w \in \partial B_{n}$, $$ \Pro{x}{\xi_n < \xi_\alpha} \leq C k^{-1/2};$$ $$ \Pro{w}{\sigma_{2kn} < \xi_\alpha} \leq C k^{-1/2}.$$ Therefore, $$ \Pro{z}{\xi_n < \xi_K; \xi_{K} < \xi_\alpha} \leq C k^{-1} \Pro{z}{\xi_{kn/2} < \xi_\alpha} \max_{y \in \partial B_{2kn}} \Pro{y}{\xi_K < \xi_\alpha}.$$ On the other hand, \begin{equation*} \Pro{z}{\xi_K < \xi_\alpha} \ge \Pro{z}{\sigma_{2kn} < \xi_\alpha} \min_{y \in \partial B_{2kn}} \Pro{y}{\xi_K < \xi_\alpha}. \end{equation*} By the discrete Harnack inequality, $$ \max_{y \in \partial B_{2kn}} \Pro{y}{\xi_K < \xi_\alpha} \leq C \min_{y \in \partial B_{2kn}} \Pro{y}{\xi_K < \xi_\alpha}.$$ Therefore, in order to prove \eqref{eqcondesc}, it suffices to show that $$ \Pro{z}{\sigma_{2kn} < \xi_\alpha} \geq c \Pro{z}{\xi_{kn/2} < \xi_\alpha}.$$ Let $B = B(z; kn/2)$. 
By \cite[Proposition 3.5]{Mas09}, there exists $c > 0$ such that $$ \condPro{z}{\abs{\arg(S(\sigma_B) - z)} \leq \pi/3}{\sigma_B < \xi_\alpha} > c.$$ Therefore, \begin{align*} \Pro{z}{\sigma_{2kn} < \xi_\alpha} &\ge \sum_{\substack{y \in \partial B \\ \abs{\arg(y - z)} \leq \pi/3}} \Pro{y}{\sigma_{2kn} < \xi_\alpha} \Pro{z}{\sigma_B < \xi_\alpha; S(\sigma_B) = y} \\ &\ge c \Pro{z}{\sigma_B < \xi_\alpha; \abs{\arg(S(\sigma_B) - z)} \leq \pi/3} \\ &\ge c \Pro{z}{\sigma_B < \xi_\alpha} \\ &\ge c \Pro{z}{\xi_{kn/2} < \xi_\alpha}. \end{align*} {\hfill $\square$ \bigskip} \end{proof} \begin{remark} \label{r:shatlb} {\rm One can also show that there exists $\delta > 0$ such that \begin{equation}\label{e:lhat-retn} \Pro{}{\widehat{S}[\widehat{\sigma}_{kn}, \infty) \cap B_{n} \neq \emptyset} \geq c k^{-\delta}. \end{equation} As we will not need this bound we only give a sketch of the proof. Since it will not be close to being optimal, we will not try to find the value of $\delta$ that the argument yields. First, we have \begin{align*} \Pro{}{\widehat{S}[\widehat{\sigma}_{kn}, \infty) \cap B_{n} \neq \emptyset} &\geq \Pro{}{\widehat{S}[\widehat{\sigma}_{kn}, \widehat{\sigma}_{4kn}) \cap B_{n} \neq \emptyset}. \end{align*} However, by \cite[Corollary 4.5]{Mas09}, the latter probability is comparable to the probability that $\operatorname{L}(S[0,\sigma_{16kn}])$ leaves $B_{kn}$ and then reenters $B_n$ before leaving $B_{4kn}$. Call the latter event $F$. Partition $\bZ^2$ into the three cones $A_1 = \{z \in \bZ^2 : 0 \leq \arg(z) < 2\pi/3 \}$, $A_2 = \{z \in \bZ^2 : 2\pi/3 \leq \arg(z) < 4\pi/3 \}$ and $A_3 = \{z \in \bZ^2 : 4\pi/3 \leq \arg(z) < 2\pi \}$. Then the event $F$ contains the event that a random walk started at $0$ \begin{shortitemize} \item[(1)] leaves $B_{2kn}$ before leaving $A_1 \cup B_{n/2}$, \item[(2)] then enters $A_2$ while staying in $B_{4kn} \setminus B_{kn}$, \item[(3)] then enters $B_n$ while staying in $A_2 \cap B_{4kn}$, \item[(4)] then enters $A_3$ while staying in $A_2 \cap B_n \setminus B_{n/2}$, \item[(5)] then leaves $B_{16kn}$ while staying in $A_3 \setminus B_{n/2}$. \end{shortitemize} One can bound the probabilities of the events in steps (1), (3) and (5) from below by $c k^{-\beta}$ for some $\beta>0$. The other steps contribute terms that can be bounded from below by a constant; combining these bounds gives \eqref{e:lhat-retn}. } \end{remark} \begin{lemma} \label{l:Losubset} There exists a positive constant $C$ such that for all $k \geq 1$ and $w \in \bZ^2$, \begin{equation} \frac{1}{8} k^{-1} \leq \Pro{}{\gamma(0,w) \not\subset B_{k\abs{w}}} \leq C k^{-1/3}. \end{equation} \end{lemma} \begin{proof} We first prove the upper bound. By adjusting the value of $C$ we may assume that $k \geq 4$. As in the proof of Proposition \ref{p:expbnd}, in order to obtain $\gamma(0,w)$, we first run an infinite LERW $\gamma$ started at $0$ and then run an independent random walk started at $w$ until it hits $\gamma$ and then erase its loops. By Wilson's algorithm, the resulting path from $0$ to $w$ has the same distribution as $\gamma(0,w)$. By Lemma \ref{condesc}, the probability that $\gamma$ reenters $B_{k^{2/3}\abs{w}}$ after leaving $B_{k\abs{w}}$ is less than $C k^{-1/3}$. 
Furthermore, by the discrete Beurling estimates \cite[Proposition 6.8.1]{LL08}, $$ \Pro{w}{\sigma_{k^{2/3}\abs{w}} < \xi_\gamma} \leq C (k^{2/3})^{-1/2} = C k^{-1/3}.$$ Therefore, $$ \Pro{}{\gamma(0,w) \not\subset B_{k\abs{w}}} \leq C k^{-1/3}.$$ To prove the lower bound, we follow the method of proof of \cite[Theorem 14.3]{BLPS}, where it was shown that if $v$ and $w$ are nearest neighbors then $$\Pro{}{{\rm diam} \ \gamma(v,w) \geq n} \geq \frac{1}{8n}.$$ If $w = (w_1,w_2)$, let $u = (w_1-w_2,w_1+w_2)$ and $v = (-w_2,w_1)$, so that $\{0,w,u,v\}$ form four vertices of a square of side length $\abs{w}$. Now consider the sets \begin{align*} Q_1 = \{j w : j=0,\ldots,2k\} &\quad Q_2 = \{2kw+j(u-w): j=0,\ldots,2k\} \\ Q_3 = \{j v : j=0,\ldots,2k\} &\quad Q_4 = \{2kv+j(u-v): j=0,\ldots,2k\} \end{align*} and let $Q = \bigcup_{i=1}^4 Q_i$. Then $Q$ consists of $8k$ lattice points on the perimeter of a square of side length $2k\abs{w}$. Let $x_1, \ldots, x_{8k}$ be the ordering of these points obtained by letting $x_1 = 0$ and then travelling along the perimeter of the square clockwise. Thus $\abs{x_{i+1} - x_{i}} = \abs{w}$. Now consider any spanning tree $U$ on $\bZ^2$. If for all $i$, $\gamma(x_i,x_{i+1})$ stayed in the ball $B(x_i,k\abs{w})$, then the concatenation of these paths would be a closed loop, which contradicts the fact that $U$ is a tree. Therefore, $$ 1 = \Pro{}{\exists i : \gamma(x_{i},x_{i+1}) \not\subset B(x_{i},k \abs{w})} \leq \sum_{i=1}^{8k} \Pro{}{\gamma(x_{i},x_{i+1}) \not\subset B(x_{i},k \abs{w})}.$$ Finally, using the fact that $\bZ^2$ is transitive and is invariant under rotations by $90$ degrees, all the probabilities on the right hand side are equal. This proves the lower bound. {\hfill $\square$ \bigskip} \end{proof} \begin{proposition} \label{p:ProMw<} For all $\varepsilon > 0$, there exist $c(\varepsilon), C(\varepsilon) > 0$ and ${\lambda}_0(\varepsilon) \geq 1$ such that for all $w \in \bZ^2$ and all $\lambda \geq 1$, \begin{equation} \Pro{}{d(0,w) > \lambda G(\abs{w})} \leq C(\varepsilon) \lambda^{-4/15 + \varepsilon}, \end{equation} and for all $w \in \bZ^2$ and all $\lambda \geq \lambda_0(\varepsilon)$, \begin{equation} \Pro{}{d(0,w) > \lambda G(\abs{w})} \geq c(\varepsilon) \lambda^{-4/5 - \varepsilon}. \end{equation} \end{proposition} \begin{proof} To prove the upper bound, let $k = \lambda^{4/5 - 3\varepsilon}$. Then by Lemma \ref{l:vgrwth}, there exists $C(\varepsilon) < \infty$ such that \begin{equation} \label{e:ProMw1} G(k\abs{w}) \leq C(\varepsilon) k^{5/4 + \varepsilon} G(\abs{w}) \leq C(\varepsilon) \lambda^{1 - \varepsilon} G(\abs{w}). \end{equation} Then, \begin{equation*} \Pro{}{d(0,w) > \lambda G(\abs{w})} \leq \Pro{}{\gamma(0,w) \not\subset B_{k \abs{w}}} + \Pro{}{d(0,w) > \lambda G(\abs{w}); \gamma(0,w) \subset B_{k \abs{w}}}. \end{equation*} However, by Lemma \ref{l:Losubset}, \begin{equation} \Pro{}{\gamma(0,w) \not\subset B_{k \abs{w}}} \leq C k^{-1/3} = C \lambda^{-4/15 + \varepsilon}, \end{equation} while by Proposition \ref{p:expbnd} and \eqref{e:ProMw1}, \begin{align*} \Pro{}{d(0,w) > \lambda G(\abs{w}); \gamma(0,w) \subset B_{k \abs{w}}} &\leq \Pro{}{d(0,w) > c(\varepsilon) \lambda^{\varepsilon} G(k\abs{w}); \gamma(0,w) \subset B_{k \abs{w}}} \\ &\leq C \exp(-c(\varepsilon) \lambda^{\varepsilon}). \end{align*} Therefore, \begin{equation} \Pro{}{d(0,w) > \lambda G(\abs{w})} \leq C \exp(-c(\varepsilon) \lambda^{\varepsilon}) + C \lambda^{-4/15 + \varepsilon} \leq C(\eps) \lambda^{-4/15 + \varepsilon}.
\end{equation} \medskip To prove the lower bound we fix $k = \lambda^{4/5 + \varepsilon}$ and assume $k \geq 2$ and $\varepsilon < 1/4$. Then by Lemma \ref{l:vgrwth}, there exists $C(\varepsilon) < \infty$ such that \begin{equation*} G((k-1)\abs{w}) \geq C(\varepsilon)^{-1} k^{5/4 - \varepsilon} G(\abs{w}) \geq C(\varepsilon)^{-1} \lambda^{1 + \varepsilon/3} G(\abs{w}). \end{equation*} Hence, $$ \Pro{}{d(0,w) > \lambda G(\abs{w})} \geq \Pro{}{d(0,w) > C(\varepsilon) \lambda^{-\varepsilon/3} G((k-1)\abs{w})}.$$ Now consider the UST on $\bZ^2$ and recall that $\gamma(0,\infty)$ and $\gamma(w,\infty)$ denote the infinite paths starting at $0$ and $w$. We write $Z_{0w}$ for the unique point where these meet: thus $\gamma(Z_{0w},\infty) = \gamma(0,\infty) \cap \gamma(w,\infty)$. Then $\gamma(0,w)$ is the concatenation of $\gamma(0,Z_{0w})$ and $\gamma(w, Z_{0w})$. By Lemma \ref{l:Losubset}, $$ \Pro{}{\gamma(0,w) \not\subset B_{k\abs{w}}} \geq \frac{1}{8k}.$$ Therefore, $$ \Pro{}{\gamma(0,Z_{0w}) \not\subset B_{k\abs{w}} \quad \text{or} \quad \gamma(w,Z_{0w}) \not\subset B_{k\abs{w}}} \geq \frac{1}{8k}.$$ By the transitivity of $\bZ^2$, the paths $\gam(0, Z_{0, -w})$ and $\gam(w, Z_{0w})-w$ have the same distribution, and therefore $$ \Pro{}{\gamma(0,Z_{0w}) \not\subset B_{(k-1)\abs{w}}} \geq \frac{1}{16k}.$$ Since $Z_{0w}$ is on the path $\gamma(0,\infty)$, by \eqref{eq:lowtailinf}, \begin{align*} & \Pro{}{d(0,w) > C(\varepsilon) \lambda^{-\varepsilon/3} G((k-1)\abs{w})} \\ & \quad \geq \Pro{}{d(0,Z_{0w}) > C(\varepsilon)\lambda^{-\varepsilon/3} G((k-1)\abs{w})} \\ & \quad \geq \Pro{}{\widehat{M}_{(k-1)\abs{w}} > C(\varepsilon) \lambda^{-\varepsilon/3} G((k-1)\abs{w}); \gamma(0,Z_{0w}) \not\subset B_{(k-1)\abs{w}}} \\ & \quad \geq \Pro{}{\gamma(0,Z_{0w}) \not\subset B_{(k-1)\abs{w}}} - \Pro{}{\widehat{M}_{(k-1)\abs{w}} < C(\varepsilon) \lambda^{-\varepsilon/3} G((k-1)\abs{w})} \\ &\quad \geq \frac{1}{16k} - C \exp\{-c \lambda^{\varepsilon/4}\}. \end{align*} Finally, since $k = {\lambda}^{4/5+\varepsilon}$, the previous quantity can be made greater than $c(\varepsilon) \lambda^{-4/5 - \varepsilon}$ for ${\lambda}$ sufficiently large. {\hfill $\square$ \bigskip} \end{proof} \section{Uniform spanning trees} \label{sect:UST} \medskip We recall that $\sU$ denotes the UST in $\bZ^2$, and we write $x \sim y$ if $x$ and $y$ are joined by an edge in $\sU$. Let $\sE$ be the quadratic form given by \begin{equation} \sE(f,g)=\fract12 \sum_{x \sim y} (f(x)-f(y))(g(x)-g(y)). \end{equation} If we regard $\sU$ as an electrical network with a unit resistor on each edge, then $\sE(f,f)$ is the energy dissipation when the vertices of $\bZ^2$ are at a potential $f$. Set $H^2=\{ f: \bZ^2 \to \bR: \sE(f,f)<\infty\}$. Let $A,B$ be disjoint subsets of $\bZ^2$. The effective resistance between $A$ and $B$ is defined by: \begin{equation} \label{3.3bk} R_{\rm eff}(A,B)^{-1}=\inf\{\sE(f,f): f\in H^2, f|_A=1, f|_B=0\}. \end{equation} Let $R_{\rm eff}(x,y)=R_{\rm eff}(\{x\},\{y\})$, and $R_{\rm eff}(x,x)=0$. For general facts on effective resistance and its connection with random walks see \cite{AF09,DS84,LP09}. \bigskip In this section, we establish the volume and effective resistance estimates for the UST $\sU$ that will be used in the next section to study random walks on $\sU$. \begin{thm} \label{t:Vub} There exist positive constants $C$ and $c$ such that for all $r \ge 1$ and ${\lambda} > 0$,\\ (a) \begin{equation}\label{e:Bdtail} \Pro{}{ B_d(0, {\lambda}^{-1} G(r) ) \not\subset B(0, r)} \le C e^{-c {\lambda}^{2/3}}.
\end{equation} (b) \begin{equation} \label{e:Bdtail2} \Pro{}{ R_{\rm eff}(0, B(0,r)^c)< {\lambda}^{-3} G(r) } \le C e^{-c {\lambda}^{2/3}}. \end{equation} \end{thm} \begin{proof} By adjusting the constants $c$ and $C$ we can assume ${\lambda} \ge 4$. For $k \ge 1$, let $\delta_k = {\lambda}^{-1} 2^{-k}$, and $\eta_k=(2k)^{-1}$. Let $k_0$ be the smallest integer such that $r \delta_{k_0} < 1$. Set $$ A_k = B(0, r) - B(0, (1 - \eta_k)r ), \qquad k \ge 1. $$ Let $D_k$ be a finite collection of points in $A_k$ such that $|D_k| \le C \delta_k^{-2}$ and \begin{align*} A_k &\subset \bigcup_{ z \in D_k} B(z, \delta_k r). \end{align*} Write $\sU_1, \sU_2, \dots$ for the random trees obtained by running Wilson's algorithm (with root $0$) with walks first starting at all points in $D_1$, then adding those points in $D_2$, and so on. So $\sU_k$ is a finite tree which contains $\bigcup_{i=1}^k D_i \cup \{ 0\}$, and the sequence $(\sU_k)$ is increasing. Since $r \delta_{k_0} < 1$ we have $\partial_i B(0,r) \subset A_{k_0} \subset \sU_{k_0} $. We then complete a UST $\sU$ on $\bZ^2$ by applying Wilson's algorithm to the remaining points in $\bZ^2$. For $z \in D_1$, let $N_z$ be the length of the path $\gam(0,z)$ until it first exits from $B(0,r/8)$. By first applying \cite[Proposition 4.4]{Mas09} and then \eqref{eq:lowtailinf}, $$ {\mathbb P}( N_z < {\lambda}^{-1} G(r)) \le C {\mathbb P}( \widehat{M}_{r/8} < {\lambda}^{-1} G(r)) \le C e^{-c {\lambda}^{2/3}}, $$ so if $$ \widetilde F_1 = \{ N_z < {\lambda}^{-1} G(r) \hbox{ for some $z \in D_1$ } \} = \bigcup_{z \in D_1} \{ N_z < {\lambda}^{-1} G(r) \}, $$ then \begin{equation} {\mathbb P}( \widetilde F_1 ) \le |D_1| C e^{-c{\lambda}^{2/3}} \le C \delta_1^{-2} e^{-c {\lambda}^{2/3}} \le C {\lambda}^2 e^{-c {\lambda}^{2/3}}. \end{equation} For $z \in A_{k+1}$, let $H_z$ be the event that the path $\gam(z,0)$ enters $B(0,(1 - \eta_k)r)$ before it hits $\sU_k$. For $k\ge 1$, let $$ F_{k+1} = \bigcup_{ z \in D_{k+1}} H_z. $$ Let $z \in D_{k+1}$ and let $S^z$ be a simple random walk started at $z$ and run until it hits $\sU_k$. Then by Wilson's algorithm, for the event $H_z$ to occur, $S^z$ must enter $B(0,(1 - \eta_k)r)$ before it hits $\sU_k$. Since each point in $A_k$ is within a distance $\delta_k r$ of $\sU_k$, $\sU_k$ is a connected set, and $z$ is a distance at least $(\eta_k- \eta_{k+1})r$ from $B(0,(1 - \eta_k)r)$, we have $$ \Pro{}{H_z} \le \exp( -c (\eta_k- \eta_{k+1})/\delta_k ). $$ Hence \begin{equation} {\mathbb P}( F_{k+1} ) \le C \delta_{k+1}^{-2} \exp( -c (\eta_k- \eta_{k+1})/\delta_k ) \le C {\lambda}^2 4^k \exp( -c {\lambda} 2^k k^{-2} ). \end{equation} Now define $G$ by $$ G^c = \widetilde F_1 \cup \bigcup_{k=2}^{k_0} F_k, $$ so that \begin{align}\label{e:PGub} \Pro{}{G^c} \le C {\lambda}^2 e^{-c {\lambda}^{2/3}} + \sum_{k=2}^{\infty} C {\lambda}^2 4^k \exp( -c {\lambda} 2^k k^{-2} ) \le C e^{-c {\lambda}^{2/3} }. \end{align} Now suppose that $\omega \in G$. Then we claim that: \begin{shortitemize} \item[(1)] For every $z \in D_1$ the part of the path $\gam(0,z)$ until its first exit from $B(0,r/2)$ is of length greater than ${\lambda}^{-1} G(r)$, \item[(2)] If $z \in D_k$ for any $k \ge 2$ then the path $\gam(z,0)$ hits $\sU_1$ before it enters $B(0,r/2)$.
\end{shortitemize} Of these, (1) is immediate since $\omega \not\in \widetilde F_1$, while (2) follows by induction on $k$ using the fact that $\omega \not\in F_k$ for any $k$. Hence, if ${\omega} \in G$, then $|\gam(0,z)| \ge {\lambda}^{-1} G(r)$ for every $z \in \partial_i B(0,r)$, which proves (a). \smallskip\noindent To prove (b) we use the Nash-Williams bound for resistance \cite{NW}. For $1\le k \le {\lambda}^{-1} G(r)$ let $\Gamma_k$ be the set of $z$ such that $d(0,z)=k$ and $z$ is connected to $B(0,r)^c$ by a path in $\{z\} \cup (\sU-\gam(0,z))$. Assume now that the event $G$ holds. Then the $\Gamma_k$ are disjoint sets disconnecting $0$ and $B(0,r)^c$, and so $$ R_{\rm eff}(0, B(0,r)^c) \ge \sum_{k=1}^{{\lambda}^{-1} G(r)} |\Gamma_k|^{-1}. $$ Furthermore, each $z \in \Gamma_k$ is on a path from $0$ to a point in $D_1$, and so $|\Gamma_k| \le |D_1| \le C \delta_1^{-2} \le C {\lambda}^2$. Hence on $G$ we have $ R_{\rm eff}(0, B(0,r)^c) \ge c {\lambda}^{-3} G(r)$, which proves (b). {\hfill $\square$ \bigskip} \end{proof} \medskip A similar argument will give a (much weaker) bound in the opposite direction. We begin with a result we will use to control the way the UST fills in a region once we have constructed some initial paths. \begin{proposition} \label{p:fillin} There exist positive constants $c$ and $C$ such that for each $\delta_0 \le 1$ the following holds. Let $r \geq 1$, and $U_0$ be a fixed tree in $\bZ^2$ connecting $0$ to $B(0,2r)^c$ with the property that ${\rm dist}(x, U_0) \le \delta_0 r$ for each $x \in B(0,r)$ (here ${\rm dist}$ refers to the Euclidean distance). Let $\sU$ be the random spanning tree in $\bZ^2$ obtained by running Wilson's algorithm with root $U_0$. Then there exists an event $G$ such that \begin{equation}\label{e:pfillub} {\mathbb P}(G^c) \le C e^{-c \delta_0^{-1/3}}, \end{equation} and on $G$ we have that for all $x \in B(0,r/2)$, \begin{align}\label{e:fillub} &d(x, U_0) \le G( \delta_0^{1/2} r); \\ \label{e:pathinc} &\gam(x, U_0) \subset B(0,r). \end{align} \end{proposition} \begin{proof} We follow a similar strategy to that in Theorem \ref{t:Vub}. Define sequences $(\delta_k)$ and $({\lambda}_k)$ by $\delta_k = 2^{-k} \delta_0$, ${\lambda}_k = 2^{k/2} {\lambda}_0$, where ${\lambda}_0 = 5^{-1} \delta_0^{-1/2}$. For $k \ge 0$, let $$A_k = B(0, \fract12 (1+ (1+k)^{-1})r ),$$ and let $D_k \subset A_k$ be such that for $k\ge 1$, \begin{align*} \abs{D_k} \leq C \delta_k^{-2} , \\ A_k \subset \bigcup_{z \in D_k} B(z, \delta_k r). \end{align*} Let $\sU_0 = U_0$ and as before let $\sU_1, \sU_2, \ldots$ be the random trees obtained by performing Wilson's algorithm with root $U_0$ and starting first at points in $D_1$, then in $D_2$ etc. Set \begin{align*} M_z &= d(z, \sU_{k-1}), \quad z \in D_k, \\ F_z &= \{ \gamma(z, \sU_{k-1}) \not\subset A_{k-1} \}, \quad z \in D_k, \\ M_k &= \max_{z \in D_k} M_z, \\ F_k &= \bigcup_{z \in D_k} F_z. \end{align*} For $z \in D_k$, \begin{align} \label{e:p41a} \Pro{}{ M_z > {\lambda}_k G(\delta_{k-1} r) } \leq \Pro{}{ F_z } + \Pro{}{ M_z > {\lambda}_k G(\delta_{k-1} r) ; F_z^c } .
\end{align} Since $z$ is a distance at least $\fract12 r ( k^{-1} - (k+1)^{-1})$ from $A_{k-1}^c$, and each point in $A_{k-1}$ is within a distance $\delta_{k-1} r$ of $\sU_{k-1}$, \begin{align} \label{e:p41b} \Pro{}{ F_z } \leq C\exp(-c \delta_{k-1}^{-1} ( k^{-1} - (k+1)^{-1})) \leq C\exp(-c \delta_{k-1}^{-1} k^{-2} ). \end{align} By \eqref{eq:uptail}, again using the fact that each point in $A_{k-1}$ is within distance $\delta_{k-1} r$ of $\sU_{k-1}$, \begin{align} \label{e:p41c} \Pro{}{ M_z > {\lambda}_k G(\delta_{k-1} r) ; F_z^c } \leq C\exp(- c {\lambda}_k^{2/3} ). \end{align} So, combining \eqref{e:p41a}--\eqref{e:p41c}, for $k \ge 1$, \begin{align} \label{e:mbnd} \Pro{}{M_k > {\lambda}_k G(\delta_{k-1} r) } + \Pro{}{F_k} &\leq C \abs{D_k} \left[ \exp(-c \delta_{k-1}^{-1} k^{-2} ) + \exp(- c {\lambda}_k^{2/3} ) \right]. \end{align} Now let \begin{equation}\label{e:Gdef} G = \bigcap_{k=1}^\infty F_k^c \cap \{ M_k \leq {\lambda}_k G(\delta_{k-1} r) \}. \end{equation} Summing the series given by \eqref{e:mbnd}, and using the bound $|D_k| \le c \delta_k^{-2}$, we have \begin{align*} \Pro{}{ G^c } &\le C \delta_0^{-2} \sum_{k} 2^{2k} \left[ \exp(-c \delta_{0}^{-1} 2^k k^{-2} ) + \exp(- c 2^{k/3} \delta_0^{-1/3} ) \right] \\ &\le C \delta_0^{-2} e^{-c \delta_0^{-1/3}} \\ &\le C e^{-c' \delta_0^{-1/3}}. \end{align*} Using Lemma \ref{l:vgrwth} with $\varepsilon = \fract14$ gives $$ {\lambda}_k G(\delta_{k-1} r ) \le {\lambda}_k \delta_0^{1/2} 2^{-(k-1)} G(\delta_0^{1/2} r) = 2 {\lambda}_0 \delta_0^{1/2} 2^{-k/2} G(\delta_0^{1/2} r). $$ So $$ \sum_{k=1}^\infty {\lambda}_k G(\delta_{k-1} r ) \le 5 {\lambda}_0 \delta_0^{1/2} G(\delta_0^{1/2} r) = G(\delta_0^{1/2} r). $$ Since $B(0,r/2) \subset \bigcap_k A_k$, we have $B(0,r/2) \subset \bigcup_k \sU_k$. Therefore on the event $G$, for any $x \in B(0,r/2)$, $d(x,U_0) \le G(\delta_0^{1/2} r)$. Further, on $G$, for each $z \in D_k$, we have $\gam(z, \sU_{k-1}) \subset A_{k-1}$. Therefore if $x \in B(0, r/2)$ the connected component of $\sU- U_0$ containing $x$ is contained in $B(0,r)$, which proves \eqref{e:pathinc}. {\hfill $\square$ \bigskip} \end{proof} \begin{theorem} \label{t:Vlb} For all $\varepsilon > 0$, there exist $c(\varepsilon), C(\varepsilon) > 0$ and $\lambda_0(\varepsilon) \geq 1$ such that for all $r \ge 1$ and $\lambda \geq 1$, \begin{equation}\label{e:Bd-lb} {\mathbb P} \big( B(0, r) \not\subset B_d(0, {\lambda} G(r)) \big) \leq C(\varepsilon) \lambda^{-4/15 + \varepsilon}, \end{equation} and for all $r \geq 1$ and all $\lambda \geq \lambda_0(\varepsilon)$, $$ {\mathbb P} \big( B(0, r) \not\subset B_d(0, {\lambda} G(r)) \big) \geq c(\varepsilon) \lambda^{-4/5 - \varepsilon}.$$ \end{theorem} \begin{proof} The lower bound follows immediately from the lower bound in Proposition \ref{p:ProMw<}. To prove the upper bound, let $E \subset B(0,4r)$ be such that $\abs{E} \leq C \lambda^{\varepsilon/2}$ and $$ B(0,4r) \subset \bigcup_{z \in E} B(z, \lambda^{-\varepsilon/4}r).$$ We now let $\sU_0$ be the random tree obtained by applying Wilson's algorithm with points in $E$ and root $0$. Therefore, by Proposition \ref{p:ProMw<}, for any $z \in E$, \begin{align*} {\mathbb P} \big(d(0,z) > {\lambda} G(r)/2 \big) &\leq {\mathbb P} \big( d(0,z) > c {\lambda} G(\abs{z})/2 \big) \\ &\leq C(\varepsilon) \lambda^{-4/15+\varepsilon/2}.
\end{align*} Let $$ F = \{ d(0,z) \le {\lambda} G(r)/2 \, \hbox{ for all } z \in E \};$$ then $$ {\mathbb P}(F^c) \le \abs{E} C(\varepsilon) {\lambda}^{-4/15 + \varepsilon/2} \leq C(\varepsilon) {\lambda}^{-4/15 + \varepsilon}. $$ We have now constructed a tree $\sU_0$ connecting $0$ to $B(0,4r)^c$ and by the definition of the set $E$, for all $z \in B(0,2r)$, ${\mathop {{\rm dist\, }}}(z,\sU_0) \leq {\lambda}^{-\varepsilon/4}r$. We now use Wilson's algorithm to produce the UST $\sU$ on $\bZ^2$ with root $\sU_0$. Let $G$ be the event given by applying Proposition \ref{p:fillin} (with $r$ replaced by $2r$), so that $$ \Pro{}{G^c} \le C e^{-c {\lambda}^{\varepsilon/12}}. $$ On the event $G$ we have $d(x, \sU_0) \le G({\lambda}^{-\varepsilon/2}r) \le {\lambda} G(r)/2$ for all $ x \in B(0,r)$. Therefore, on the event $F \cap G$ we have $d(x,0) \le {\lambda} G(r)$ for all $ x \in B(0,r)$. Thus, $$ \Pro{}{\max_{x \in B(0,r)} d(x,0) > {\lambda} G(r)} \leq C(\varepsilon) \lambda^{-4/15 + \varepsilon} + C e^{-c {\lambda}^{\varepsilon/12}} \leq C(\varepsilon) \lambda^{-4/15 + \varepsilon}.$$ {\hfill $\square$ \bigskip} \end{proof} \medskip Theorem \ref{t:main1} is now immediate from Theorem \ref{t:Vub} and Theorem \ref{t:Vlb}. \medskip While Theorem \ref{t:Vub} immediately gives the exponential bound \eqref{e:vlubpii} on the upper tail of $| B_d(0,r)|$ in Theorem \ref{t:main2}, it only gives a polynomial bound for the lower tail. The following theorem gives an exponential bound on the lower tail of $|B_d(0,r)|$ and consequently proves Theorem \ref{t:main2}. \begin{theorem} \label{t:Vexplb} There exist constants $c$ and $C$ such that if $R\ge 1$, ${\lambda} \geq 1$ then \begin{equation}\label{e:Vexplb} {\mathbb P}( |B_d(0, R)| \le {\lambda}^{-1} g(R)^2 ) \le C e^{-c {\lambda}^{1/9}}. \end{equation} \end{theorem} \begin{proof} Let $k \ge 1$ and let $r = g(R/k^{1/2})$, so that $R= k^{1/2} G(r)$. Fix a constant $\delta_0<1$ such that the right side of \eqref{e:pfillub} is less than $1/4$. Fix a further constant $\theta<1$, to be chosen later but which will depend only on $\delta_0$. We begin the construction of $\sU$ with an infinite LERW $\widehat S$ started at $0$ which gives the path $\gam_0=\sU_0=\gam(0, \infty)$. Let $z_i$, $i=1, \dots, k$ be points on $\widehat S[0,\widehat{\sigma}_r]$ chosen such that the balls $B_i=B(z_i, r/k)$ are disjoint. (We choose these according to some fixed algorithm so that they depend only on the path $\widehat S[0, \widehat \sigma_r ]$.) Let \begin{align} \label{e:F1def} F_1 &=\{\hbox{ $\widehat S[\widehat \sigma_{2r}, \infty)$ hits more than $k/2$ of $B_1, \dots, B_k$} \} , \\ F_2 &=\{ | \widehat S[0, \widehat \sigma_{2r} ]| \ge \fract12 k^{1/2} G(r) \}. \end{align} We have \begin{align} \label{e:F1} {\mathbb P}( F_1) \le C e ^{-c k^{1/3}}, \\ \label{e:F2} {\mathbb P}( F_2) \le C e ^{-c k^{1/2}}. \end{align} Of these, \eqref{e:F2} is immediate from \eqref{eq:uptailinf} while \eqref{e:F1} will be proved in Lemma \ref{lem:F1} below. If either $F_1$ or $F_2$ occurs, we terminate the algorithm with a `Type 1' or `Type 2' failure. Otherwise, we continue as follows to construct $\sU$ using Wilson's algorithm. We define $$ B_j' = B(z_j, \theta r/k), \qquad B_j''= B(z_j, \theta^2 r/k). $$ The algorithm operates at two `levels', which we call `ball steps' and `point steps'; its control flow is summarized in the schematic sketch below, and described in detail in the following paragraphs.
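\medskip\noindent The fragment below summarizes this control flow in Python. It is schematic only: the stubbed success probability $1/4$ and the count of newly bad balls are crude placeholders for the events and the quantity $N^B_w$ defined in the detailed description that follows, not simulations of them.
\begin{verbatim}
import math, random

def ball_step():
    # Placeholder for one ball step: in the proof it succeeds with
    # conditional probability at least 1/4, and on failure it may mark
    # a further (exponentially tailed) number of balls bad.
    if random.random() < 0.25:
        return "success", 0
    return "failure", random.getrandbits(2)   # crude stand-in for N^B_w

def construct(k):
    if random.random() < math.exp(-k ** (1 / 3)):   # event F_1
        return "Type 1 failure"
    if random.random() < math.exp(-k ** (1 / 2)):   # event F_2
        return "Type 2 failure"
    budget = max(1, math.isqrt(k))    # at most k^(1/2) ball steps
    good = k                          # stand-in for the list J_0
    for _ in range(budget):
        if good <= 0:
            break
        outcome, newly_bad = ball_step()
        if outcome == "success":
            return "success"          # the whole algorithm terminates
        if newly_bad + 1 > budget / 4:
            return "Type 3 failure"   # too many balls went bad at once
        good -= newly_bad + 1
    return "Type 4 failure"           # all ball steps failed

print(construct(100))
\end{verbatim}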
We begin with a list $J_0$ of good balls. These are the balls $B_j$ such that $B_j \cap \widehat S[\widehat \sigma_{2r}, \infty) = \emptyset$. The $n$th ball step starts by selecting a good ball $B_j$ from the list $J_{n-1}$ of remaining good balls. We then run Wilson's algorithm with paths starting in $B'_j$. The ball step will end either with success, in which case the whole algorithm terminates, or with one of three kinds of failure. In the event of failure the ball $B_j$, and possibly a number of other balls also, will be labelled `bad', and $J_{n}$ is defined to be the remaining set of good balls. If more than $k^{1/2}/4$ balls are labelled bad at any one ball step, we terminate the whole algorithm with a `Type 3 failure'. Otherwise we proceed; if we have tried $k^{1/2}$ ball steps without a success, we terminate the algorithm with a `Type 4 failure'. We write $\sU_n$ for the tree obtained after $n$ ball steps. After ball step $n$, any ball $B_j$ in $J_{n}$ will have the property that $B_j' \cap \sU_n = B'_j \cap \sU_0$. We now describe in detail the second level of the algorithm, which works with a fixed (initially good) ball $B_j$. We assume that this is the $n$th ball step (where $n \ge 1$), so that we have already built the tree $\sU_{n-1}$. Let $D' \subset B(0, \theta^2 r/k)$ satisfy $$ |D'| \le c \delta_0^{-2}, \qquad B(0, \theta^2 r/k) \subset \bigcup_{x \in D'} B(x, \delta_0 \theta^2 r/k ). $$ Let $D_j = z_j + D'$, so that $D_j \subset B''_j$. We now proceed to use Wilson's algorithm to build the paths $\gam(w, \sU_{n-1})$ for $w \in D_j$. For $w \in D_j$ let $S^w$ be a random walk started at $w$, and let $G_{w}$ be the event that $\gam(w, \sU_{n-1}) \subset B_j'$. If $F_{w}$ is the event that $S^{w}$ exits from $B'_j$ before it hits $\sU_0$, then \begin{equation}\label{e:Gwc} {\mathbb P}( G^c_w) \le {\mathbb P}(F_{w}) \le c \theta^{1/2}. \end{equation} Here the first inequality follows from Wilson's algorithm, while the second is by the discrete Beurling estimates (\cite[Proposition 6.8.1]{LL08}). Let $M_{w}= d(w, \sU_{n-1})$, and $T_{w}$ be the first time $S^{w}$ hits $\sU_{n-1}$. Then by Wilson's algorithm and \eqref{eq:uptail}, \begin{align} \label{e:MwGw} {\mathbb P}( M_{w} \ge \theta^{-1} G(\theta r/k); G_w ) = {\mathbb P}( M_{w} \ge \theta^{-1} G(\theta r/k); \operatorname{L}(S^{w}[0, T_{w}]) \subset B'_j ) \le c e^{ - c \theta^{-1}}. \end{align} We now define sets corresponding to three possible outcomes to this procedure: \begin{align*} H_{1,n} &= \bigcup_{w \in D_j} G^c_w, \\ H_{2,n} &= \left\{ \max_{w \in D_j} M_w \ge \theta^{-1} G(\theta r/k)\right\} \cap \bigcap_{w \in D_j} G_w, \\ H_{3,n} &= \left\{ \max_{w \in D_j} M_w < \theta^{-1} G(\theta r/k)\right\} \cap \bigcap_{w \in D_j} G_w. \end{align*} By \eqref{e:Gwc}, \begin{equation}\label{e:H1ub} {\mathbb P}(H_{1,n}) \le \sum_{w \in D_j} {\mathbb P}(G^c_w) \le c \delta_0^{-2} \theta^{1/2}, \end{equation} and by \eqref{e:MwGw}, \begin{equation}\label{e:H2ub} {\mathbb P}(H_{2,n}) \le \sum_{w \in D_j} {\mathbb P}( M_{w} \ge \theta^{-1} G(\theta r/k); G_w) \le c \delta_0^{-2} e^{ - c \theta^{-1}} .
\end{equation} We now choose the constant $\theta$ small enough so that ${\mathbb P}(H_{i,n}) \le \fract14$ for $i=1,2$, and therefore \begin{equation}\label{e:H3lb} {\mathbb P}(H_{3,n}) \ge \fract12. \end{equation} If $H_{3,n}$ occurs then we have constructed a tree $\sU_n'$ which contains $\sU_{n-1}$ and $D_j$. Further, we have that for each point $w \in D_j$, the path $\gam(w,0)$ hits $\sU_0$ before it leaves $B'_j$. Hence, $$ d(w,0) \le M_w + \max_{z \in \sU_0 \cap B_j} d(0,z) \le \fract12 k^{1/2} G(r) + \theta^{-1} G(\theta r/k). $$ We now use Wilson's algorithm to fill in the remainder of $B'_j$. Let $G_n$ be the event given by applying Proposition \ref{p:fillin} to the ball $B''_j$ with $U_0= \sU'_{n}$. Then $$ {\mathbb P}( G_{n}^c ) \le C e^{-c \delta_0^{-1/3} } \le \fract14 $$ by the choice of $\delta_0$, and therefore ${\mathbb P}(H_{3,n} \cap G_n) \ge \fract14$. If this event occurs, then all points in $B(z_j, \theta^2 r/2k)$ are within distance $G( \delta_0^{1/2} \theta^2 r/k)$ of $\sU'_n$ in the graph metric $d$; in this case we label ball step $n$ as successful, and we terminate the whole algorithm. Then for all $z \in B(z_j, \theta^2 r/2k)$, \begin{align*} d(0,z) &\le d(z, \sU_n') + \max_{w \in \sU_n'} d(w,0) \\ &\le G( \delta_0^{1/2} \theta^2 r/k) + \fract12 k^{1/2} G(r) + \theta^{-1} G(\theta r/k)\\ &\le k^{1/2} G(r), \end{align*} provided that $k$ is large enough. So there exists $k_0\ge 1$ such that, provided that $k\ge k_0$, if $H_{3,n} \cap G_n$ occurs then $B(z_j, \theta^2 r/2k ) \subset B_d (0, k^{1/2} G(r) )$. Since $R = k^{1/2} G(r) \le G(k^{1/2} r)$ we have $g(R) \le k^{1/2} r$, and therefore \begin{equation}\label{e:succ} |B_d(0, R)| \ge | B(z_j, \theta^2 r/2k)| \ge c k^{-2} r^2 \ge c g(R)^2/k^3. \end{equation} \smallskip If $H_{1,n} \cup H_{2,n} \cup (H_{3,n} \cap G_n^c)$ occurs then, as soon as we have a random walk $S^w$ that `misbehaves' (either by leaving $B_j'$ before hitting $\sU_0$, or by having $M_w$ too large), we terminate the ball step and mark the ball $B_j$ as `bad'. If ${\omega} \in H_{2,n}$ only the ball $B_j$ becomes bad, but if ${\omega} \in H_{1,n} \cup (H_{3,n}\cap G_n^c) $ then $S^w$ may hit several other balls $B'_i$ before it hits $\sU_{n-1}$. Let $N^B_w$ denote the number of such balls hit by $S^w$. By Beurling's estimate, the probability that $S^w$ enters a ball $B_i'$ and then exits $B_i$ without hitting $\sU_0$ is less than $c \theta^{1/2}$. Since the balls $B_i$ are disjoint, \begin{equation} \label{e:NBtail} {\mathbb P}( N^B_w \ge m ) \le (c \theta^{1/2})^m \le e^{-c' m}. \end{equation} A Type 3 failure occurs if $N^B_w\ge k^{1/2}/4$; using \eqref{e:NBtail} we see that the probability that a ball step ends with a Type 3 failure is bounded by $\exp(- c k^{1/2})$. If we write $F_3$ for the event that some ball step ends with a Type 3 failure, then since there are at most $k^{1/2}$ ball steps, \begin{equation}\label{e:F3} {\mathbb P}(F_3) \le k^{1/2} \exp(- c k^{1/2}) \le C \exp(- c' k^{1/2}). \end{equation} The final possibility is that $k^{1/2}$ ball steps all end in failure; write $F_4$ for this event.
Since each ball step has a probability at least $1/4$ of success (conditional on the previous steps of the algorithm), we have \begin{equation}\label{e:F4} {\mathbb P}(F_4) \le (3/4)^{k^{1/2}} \le e^{-c k^{1/2}}. \end{equation} Thus either the algorithm is successful, or it ends with one of four types of failure, corresponding to the events $F_i$, $i=1, \dots, 4$. By Lemma \ref{lem:F1} and \eqref{e:F2}, \eqref{e:F3}, \eqref{e:F4} we have ${\mathbb P}(F_i) \le C \exp(-c k^{1/3})$ for each $i$. Therefore, we have that provided $k \ge k_0$, \eqref{e:succ} holds except on an event of probability $C \exp(-c k^{1/3})$. Taking $k=c {\lambda}^{1/3}$ for a suitable constant $c$, and adjusting the constant $C$ so that \eqref{e:Vexplb} holds for all ${\lambda}$, completes the proof. {\hfill $\square$ \bigskip} \end{proof} \medskip The reason why we can only get a polynomial bound in Theorem \ref{t:Vlb} is that one cannot get exponential estimates for the probability that $\gamma(0,w)$ leaves $B(0,k\abs{w})$ (see Lemma \ref{l:Losubset}). However, if we let $U_r$ be the connected component of $0$ in $\sU \cap B(0,r)$, then the following proposition enables us to get exponential control on the length of $\gamma(0,w)$ for $w \in U_r$. This will allow us to obtain an exponential bound on the lower tail of $R_{\rm eff}(0, B_d(0,R)^c)$ in Proposition \ref{p:kmest}. \begin{proposition} \label{p:Ur} There exist positive constants $c$ and $C$ such that for all ${\lambda} \ge 1$ and $r \ge 1$, \begin{equation}\label{e:Uri} \Pro{}{ U_r \not\subset B_d(0,{\lambda} G(r)) } \le C e^{-c {\lambda}}. \end{equation} \end{proposition} \begin{proof} This proof is similar to that of Theorem \ref{t:Vlb}. Let $E \subset B(0,2r)$ be such that $\abs{E} \leq C \lambda^6$ and $$ B(0,2r) \subset \bigcup_{z \in E} B(z, \lambda^{-3} r),$$ and let $\sU_0$ be the random tree obtained by applying Wilson's algorithm with points in $E$ and root $0$. For each $z \in E$, let $Y_z$ be defined as in Proposition \ref{p:expbnd2}, so that $Y_z = z$ if $\gam(0,z) \subset B(0,2r)$, and otherwise $Y_z$ is the first point on $\gam(0,z)$ which is outside $B(0,2r)$. Let \begin{align*} G_1 &= \{ d(Y_z,0) \le \fract12 {\lambda} G(r) \, \hbox{ for all } z \in E \}. \end{align*} Then by Proposition \ref{p:expbnd2}, \begin{equation}\label{e:pg1} {\mathbb P}( G_1^c) \le \sum_{z \in E} {\mathbb P}( d(Y_z,0) > {\textstyle \frac12} {\lambda} G(r) ) \le \abs{E} C e^{-c {\lambda}} \le C {\lambda}^6 e^{-c{\lambda}}. \end{equation} We now complete the construction of $\sU$ by using Wilson's algorithm. Then Proposition \ref{p:fillin} with $\delta_0 = {\lambda}^{-3}$ implies that there exists an event $G_2$ with \begin{equation}\label{e:pg2} \Pro{}{G_2^c} \le e^{-c \delta_0^{-1/3}} = e^{-c {\lambda}}, \end{equation} and on $G_2$, \begin{align*} &\max_{x \in B(0,r)} d(x, \sU_0) \le G( {\lambda}^{-3/2} r). \end{align*} Suppose $G_1 \cap G_2$ occurs, and let $x \in U_r$. Write $Z_x$ for the point where $\gam(x,0)$ meets $\sU_0$. Since $x \in U_r$, we must have $Z_x \in B(0,r)$, and $\gam(Z_x, 0) \subset B(0,r)$. As $Z_x \in \sU_0$, there exists $z \in E$ such that $Z_x \in \gam(0,z)$. Since $G_1$ occurs, $d(0, Z_x) \le d(0, Y_z) \le {\textstyle \frac12} {\lambda} G(r)$, while since $G_2$ occurs $d(x,Z_x) \le G({\lambda}^{-3/2}r)$.
So, provided ${\lambda}$ is large enough, $$ d(0,x) \le d(0, Z_x) + d(Z_x,x) \le {\textstyle \frac12} {\lambda} G(r)+ G({\lambda}^{-3/2}r) \le {\lambda} G(r). $$ Using \eqref{e:pg1} and \eqref{e:pg2}, and adjusting the constant $C$ to handle the case of small ${\lambda}$, completes the proof. {\hfill $\square$ \bigskip} \end{proof} \begin{proposition} \label{p:kmest} There exist positive constants $c$ and $C$ such that for all $R\ge 1$ and ${\lambda} \ge 1$, \\ (a) \begin{equation}\label{e:Reffa} {\mathbb P}( R_{\rm eff}(0, B_d(0,R)^c) < {\lambda}^{-1} R ) \le C e^{-c {\lambda}^{2/11}}; \end{equation} (b) \begin{equation} \bE ( R_{\rm eff}(0, B_d(0,R)^c) |B_d(0,R)|) \le C R g(R)^2. \end{equation} \end{proposition} \begin{proof} (a) Recall the definition of $U_r$ given before Proposition \ref{p:Ur}, and note that for all $r \geq 1$, $R_{\rm eff}(0, B(0,r)^c) = R_{\rm eff}(0, U_r^c)$. Given $R$ and ${\lambda}$, let $r$ be such that $R= {\lambda}^{2/11} G(r)$. By monotonicity of resistance we have that if $U_r \subset B_d(0,R)$, then $$ R_{\rm eff}(0, B_d(0,R)^c) \ge R_{\rm eff}(0, U_r^c). $$ So, writing $B_d = B_d(0,R)$, \begin{align*} {\mathbb P}( R_{\rm eff}(0, B_d^c) < {\lambda}^{-1} R ) &= {\mathbb P}( R_{\rm eff}(0, B_d^c) < {\lambda}^{-1} R; U_r \not\subset B_d) + {\mathbb P}( R_{\rm eff}(0, B_d^c) < {\lambda}^{-1} R; U_r \subset B_d)\\ &\le {\mathbb P}( U_r \not\subset B_d(0, {\lambda}^{2/11} G(r) ) ) + {\mathbb P}( R_{\rm eff}(0, U_r^c) < {\lambda}^{-9/11} G(r) ). \end{align*} By Proposition \ref{p:Ur}, $$ {\mathbb P}( U_r \not\subset B_d(0, {\lambda}^{2/11} G(r) ) ) \leq C e^{-c{\lambda}^{2/11}},$$ while by \eqref{e:Bdtail2}, $$ {\mathbb P}( R_{\rm eff}(0, U_r^c) < {\lambda}^{-9/11} G(r) ) \leq C e^{-c {\lambda}^{2/11}}.$$ This proves (a). (b) Since $R_{\rm eff}(0, B_d(0,R)^c) \le R$, this is immediate from Theorem \ref{t:main2}. {\hfill $\square$ \bigskip} \end{proof} \medskip We conclude this section by proving the following technical lemma that was used in the proof of Theorem \ref{t:Vexplb}. \begin{lemma}\label{lem:F1} Let $F_1$ be the event defined by \eqref{e:F1def}. Then \begin{align} {\mathbb P}( F_1) \le C e ^{-c k^{1/3} }. \end{align} \end{lemma} \begin{proof} Let $b= e^{k^{1/3}}$. Then by Lemma \ref{condesc}, \begin{equation}\label{e:expret} \Pro{}{\widehat{S}[\widehat{\sigma}_{br}, \infty) \cap B_{r} \neq \emptyset} \le C b^{-1} \le C e^{-k^{1/3}}. \end{equation} If $\widehat S[\widehat \sigma_{2r}, \infty)$ hits more than $k/2$ balls then either $\widehat S$ hits $B_r$ after time $\widehat \sigma_{br}$, or $\widehat S[\widehat \sigma_{2r}, \widehat \sigma_{br}]$ hits more than $k/2$ balls. Given \eqref{e:expret}, it is therefore sufficient to prove that \begin{equation}\label{e:43b} {\mathbb P}( \widehat S[\widehat \sigma_{2r}, \widehat \sigma_{br}] \hbox{ hits more than $k/2$ balls} ) \le C e ^{-c k^{1/3} }. \end{equation} Let $S$ be a simple random walk started at $0$, and let $L' = \operatorname{L}(S[0, \sigma_{4br}])$.
Then by \cite[Corollary 4.5]{Mas09}, in order to prove \eqref{e:43b}, it is sufficient to prove that \begin{equation}\label{e:43c} {\mathbb P}( L' \hbox{ hits more than $k/2$ balls} ) \le C e ^{-c k^{1/3} }. \end{equation} Define stopping times for $S$ by letting $T_0=\sigma_{2r}$ and for $j \ge 1$, \begin{align*} R_j &= \min\{ n \ge T_{j-1}: S_n \in B(0, r) \}, \\ T_j &= \min\{ n \ge R_{j}: S_n \notin B(0, 2r) \}. \end{align*} Note that the balls $B_j$ can only be hit by $S$ in the intervals $[R_j, T_j]$ for $j \ge 1$. Let $M =\min\{j: R_j \ge \sigma_{4 br} \}$. Then $$ {\mathbb P}( M = j+1 | M>j) = \frac{ \log (2r) -\log (r)}{\log ( 4br) - \log r} = \frac{\log 2}{ \log (4b)} \ge c k^{-1/3}. $$ Hence $$ {\mathbb P}( M \ge k^{2/3}) \le C \exp( - c k^{1/3} ). $$ For each $j\ge 1$ let $L_j = \operatorname{L}(S[0, T_j])$, let $\alpha_j$ be the time of the first exit of $L_j$ from $B(0,2r)$, and let $\beta_j$ be the number of steps of $L_j$. If $L'$ hits more than $k/2$ balls then there must exist some $j \le M$ such that $L_j[\alpha_j, \beta_j]$ hits more than $k/2$ balls $B_i$. (We remark that since the balls $B_i$ are defined in terms of the loop-erased walk path, they will depend on $L_j[0, \alpha_j]$. However, they will be fixed in each of the intervals $[R_j, T_j]$.) Hence, if $M \le k^{2/3}$ and $L'$ hits more than $k/2$ balls then $S$ must hit more than $c k^{1/3}$ balls in one of the intervals $[R_j, T_j]$, without hitting the path $L_j[0, \alpha_j]$. However, by Beurling's estimate the probability of this event is less than $C \exp(-c k^{1/3})$. Combining these estimates concludes the proof. {\hfill $\square$ \bigskip} \end{proof} \section{Random walk estimates}\label{sect:RW} We recall the notation for random walks on the UST given in the introduction. In addition, define $P^*$ on $\Omega \times {\cal D}$ by setting $P^*( A \times B) = \bE [\mathbf{1}_A P^0_{\omega} (B)]$ and extending this to a probability measure. We write $\overline {\omega}$ for elements of ${\cal D}$. Finally, we recall the definitions of the stopping times $\tau_R$ and $\widetilde \tau_r$ from \eqref{e:tRdef} and \eqref{e:wtrdef}, and the transition densities $p^{\omega}_n(x,y)$ from \eqref{eq:hkdef}. To avoid difficulties due to $\sU$ being bipartite, we also define \begin{equation} \wt p^{\omega}_n(x,y)= p^{\omega}_n(x,y) + p^{\omega}_{n+1}(x,y). \end{equation} \smallskip Throughout this section, we will write $C({\lambda})$ to denote expressions of the form $C {\lambda}^p$ and $c({\lambda})$ to denote expressions of the form $c {\lambda}^{-p}$, where $c$, $C$ and $p$ are positive constants. \smallskip As in \cite{BJKS, KM} we define a (random) set $J(x,{\lambda})$: \begin{definition} \label{jdef} \emph{ Let $\sU$ be the UST. For $\lambda \ge 1$ and $x \in \bZ^2$, let $J(x,\lambda)$ be the set of those $R \in [1,\infty]$ such that the following all hold: \\ (1) $ |B_d(x,R)| \le {\lambda} g(R)^2 $,\\ (2) ${\lambda}^{-1} g(R)^2 \le |B_d(x,R)| $,\\ (3) $R_{\rm eff}(x, B_d(x,R)^c) \ge {\lambda}^{-1} R$.
} \end{definition} \begin{proposition}\label{p:KMsat} For $R\ge 1$, ${\lambda} \ge 1$ and $x \in \bZ^2$, \\ (a) \begin{equation} \label{e:kmest} {\mathbb P}( R \in J(x, {\lambda}) ) \ge 1 - C e^{-c {\lambda}^{1/9}}; \end{equation} (b) $$ \bE ( R_{\rm eff}(0, B_d(0,R)^c) |B_d(0,R)|) \le C R g(R)^2. $$ Therefore conditions (1), (2) and (4) of \cite[Assumption 1.2]{KM} hold with $v(R) = g(R)^2$ and $r(R) = R$. \end{proposition} \begin{proof} (a) is immediate from Theorem \ref{t:main2} and Proposition \ref{p:kmest}(a), while (b) is exactly Proposition \ref{p:kmest}(b). We note that since $r(R)=R$, the condition $R_{\rm eff}(x,y) \le {\lambda} r (d(x,y))$ in \cite[Definition 1.1]{KM} always holds for ${\lambda} \ge 1$, so that our definition of $J({\lambda})$ agrees with that in \cite{KM}. {\hfill $\square$ \bigskip} \end{proof} We will see that the time taken by the random walk $X$ to move a distance $R$ is of order $R g(R)^2$. We therefore define \begin{equation} F(R)= R g(R)^2, \end{equation} and let $f$ be the inverse of $F$. We will prove that the heat kernel $\wt p_T(x,y)$ is of order $g(f(T))^{-2}$ and so we let \begin{equation} k(t) = g(f(t))^2, \quad t \ge 1. \end{equation} Note that we have $f(t) k(t) = f(t) g(f(t))^2 = F( f(t))=t$, so \begin{equation} \frac{1}{k(t)} = \frac{1}{g(f(t))^2} = \frac{f(t)}{t}. \end{equation} Furthermore, since $G(R) \approx R^{5/4}$, we have \begin{align} G(R) \approx R^{5/4}, \quad g(R) &\approx R^{4/5}, \quad F(R) \approx R^{13/5}, \\ f(R) \approx R^{5/13}, \quad k(R) &\approx R^{8/13}, \quad R^2 G(R) \approx R^{13/4}. \end{align} We now state our results for the SRW $X$ on $\sU$, giving the asymptotic behaviour of $d(0,X_n)$, the transition densities $\wt p^{\omega}_{n}(x,y)$, and the exit times $\tau_R$ and $\widetilde \tau_r$. We begin with three theorems which follow directly from Proposition \ref{p:KMsat} and \cite{KM}. The first theorem gives tightness for some of these quantities, the second theorem gives expectations with respect to ${\mathbb P}$, and the third theorem gives `quenched' limits which hold ${\mathbb P}$-a.s. In various ways these results make precise the intuition that the time taken by $X$ to escape from a ball of radius $R$ is of order $F(R)$, that $X$ moves a distance of order $f(n)$ in time $n$, and that the probability of $X$ returning to its initial point after $2n$ steps is of the same order as $1/|B_d(0, f(n))|$, that is, $g(f(n))^{-2} = k(n)^{-1}$. \begin{theorem} \label{ptight} Uniformly with respect to $n\ge 1$, $R\ge 1$ and $r \ge 1$, \begin{align} \label{pt-a} {\mathbb P} \Big(\theta^{-1}\le \frac{ E^0_\omega \tau_R}{F(R) } \le \theta \Big) &\to 1 \quad \text{ as } \theta\to \infty, \\ \label{pt-a2} {\mathbb P} \Big(\theta^{-1}\le \frac{ E^0_\omega \widetilde \tau_r}{r^2 G(r) } \le \theta \Big) &\to 1 \quad \text{ as } \theta\to \infty, \\ \label{pt-b} {\mathbb P} (\theta^{-1}\le k(n) p_{2n}^\omega(0,0)\le \theta) &\to 1 \quad \text{ as } \theta\to \infty, \\ \label{pt-d} P^* \Big( \theta^{-1} < \frac{1+d(0,X_n)}{ f(n)} < \theta \Big) &\to 1 \quad \text{ as } \theta\to \infty.
\end{align} \end{theorem} \begin{theorem} \label{t:means} There exist positive constants $c$ and $C$ such that for all $n\ge 1$, $R \ge 1$, $r\ge 1$, \begin{align} \label{e:mmean} cF(R) &\le \bE (E^0_\omega\tau_R) \le C F(R) , \\ \label{e:mEmean} c r^2 G(r) &\le \bE (E^0_\omega \widetilde \tau_r) \le C r^2 G(r) , \\ \label{e:pmean} c k(n)^{-1} &\le \bE (p_{2n}^\omega(0,0))\le C k(n)^{-1} , \\ \label{e:dmean} c f(n) &\le \bE (E_\omega^0 d(0,X_n)). \end{align} \end{theorem} \begin{theorem} \label{thm-rwre} There exist $\alpha_i < \infty$, and a subset $\Omega_0$ with ${\mathbb P}(\Omega_0)=1$, such that the following statements hold.\\ (a) For each $\omega \in \Omega_0$ and $x \in \bZ^2$ there exists $N_x(\omega)< \infty$ such that \begin{align} \label{e:logpnlima} (\log\log n)^{-\alpha_1} k(n)^{-1} \le p^\omega_{2n}(x,x) \le (\log\log n)^{\alpha_1} k(n)^{-1}, \quad n\ge N_x(\omega). \end{align} In particular, $d_s(\sU) = 16/13$, ${\mathbb P}$-a.s. \\ (b) For each $\omega \in \Omega_0$ and $x \in \bZ^2$ there exists $R_x(\omega)< \infty$ such that \begin{align} \label{e:logtaulima} (\log\log R)^{-\alpha_2} F(R) &\le E^x_\omega \tau_R \le (\log\log R)^{\alpha_2} F(R), \quad R\ge R_x(\omega), \\ \label{e:logtaulimE} (\log\log r)^{-\alpha_3} r^2 G(r) &\le E^x_\omega \widetilde\tau_r \le (\log\log r)^{\alpha_3} r^2 G(r), \quad r \ge R_x(\omega). \end{align} Hence \begin{align} d_w(\sU)= \lim_{R \to \infty} \frac{\log E^x_\omega \tau_R}{\log R} = \frac{13}{5}, \quad \quad \lim_{r \to \infty} \frac{\log E^x_\omega \widetilde\tau_r}{\log r} = \frac{13}{4}. \end{align} (c) Let $Y_n= \max_{0\le k \le n} d(0,X_k)$. For each $\omega \in \Omega_0$ and $x \in \bZ^2$ there exist $\overline N_x(\overline \omega)$, $\overline R_x(\overline \omega)$ such that $P^x_\omega(\overline N_x <\infty)=P^x_\omega(\overline R_x <\infty)=1$, and such that \begin{align} \label{e:ynlim} (\log\log n)^{-\alpha_4} f(n) &\le Y_n(\overline \omega) \le (\log\log n)^{\alpha_4} f(n), \quad n \geq \overline N_x(\overline \omega), \\ \label{e:rnlim} (\log\log R)^{-\alpha_4} F(R) &\le \tau_R(\overline \omega) \le (\log\log R)^{\alpha_4} F(R), \quad\quad R \ge \overline R_x(\overline \omega), \\ \label{e:rnlimE} (\log\log r)^{-\alpha_4} r^2G(r) &\le \widetilde \tau_r(\overline \omega) \le (\log\log r)^{\alpha_4} r^2G(r), \quad r \ge \overline R_x(\overline \omega). \end{align} (d) Let $W_n = \{X_0,X_1,\ldots, X_n\}$ and let $|W_n|$ denote its cardinality. For each $\omega \in \Omega_0$ and $x \in \bZ^2$, \begin{equation} \label{e:snlim} \lim_{n \to \infty} \frac{\log |W_n|}{\log n} = \frac{8}{13}, \quad P^x_\omega \text{-a.s.}. \end{equation} \end{theorem} \smallskip The papers \cite{BJKS, KM} studied random graphs for which information on ball volumes and resistances was only available from one point. These conditions were not strong enough to bound $E^0_{\omega} d(0,X_n)$ or $\wt p^{\omega}_T(x,y)$ -- see \cite[Example 2.6]{BJKS}. Since the UST is stationary, we have the same estimates available from every point $x$, and this means that stronger conclusions are possible. \begin{thm} \label{t:dist} There exist a r.v. $N_0({\omega})$ with ${\mathbb P}( N_0 < \infty) =1$ and a constant $\alpha > 0$, and for each $q > 0$ a constant $C_q$, such that \begin{equation} \label{e:qdub} E^0_{\omega} d(0, X_n)^q \le C_q f(n)^q (\log n)^{\alpha q} \quad \hbox{ for } n \ge N_0({\omega}).
\end{equation} Further, for all $n \geq 1$, \begin{equation} \label{e:adub} \bE( E^0_{\omega} d(0, X_n)^q) \le C_q f(n)^q (\log n)^{\alpha q} . \end{equation} \end{thm} Write $\Phi(T,x,x) = 0$, and for $x \neq y$ let \begin{equation} \label{e:Phidef} \Phi(T,x,y)= \frac{d(x,y)}{G((T/ d(x,y))^{1/2})}. \end{equation} \begin{theorem} \label{t:hk} There exist a constant $\alpha>0$ and a r.v. $N_x({\omega})$ with \begin{equation}\label{e:Nxtail} {\mathbb P}( N_x \ge n ) \le C e^{ -c (\log n)^2 } \end{equation} such that, provided $ F(T) \vee |x-y| \ge N_x({\omega})$ and $T \ge d(x,y)$, and writing $A= A(x,y,T) =C (\log( |x-y| \vee F(T)))^\alpha$, we have \begin{align}\label{e:hkb2} \frac{1} {A k(T)} \exp\Big( - A \Phi(T,x,y) \Big) \le \wt p_T(x,y) &\le \frac{A}{k(T)} \exp\Big( - A^{-1} \Phi(T,x,y) \Big). \end{align} \end{theorem} \begin{remark} {\rm If we had $G(n) \asymp n^{5/4}$ then since $d_f = 8/5$ and $d_w=1+d_f$, we would have \begin{equation} \Phi(T,x,y) \asymp \Big( \frac{d(x,y)^{d_w}}{T} \Big)^{1/(d_w-1)}, \end{equation} so that, except for the logarithmic term $A$, the bounds in \eqref{e:hkb2} would be of the same form as those obtained in the diffusions on fractals literature. } \end{remark} \smallskip Before we prove Theorems \ref{ptight} -- \ref{t:hk}, we summarize some properties of the exit times $\tau_R$. \begin{proposition}\label{p:kmtau} Let ${\lambda} \ge 1$ and $x \in \bZ^2$. \\ (a) If $R, R/(4 {\lambda}) \in J(x,{\lambda})$ then \begin{equation}\label{e:fixtau} c_1({\lambda}) F(R) \le E^x_{\omega} \tau(x,R) \le C_2({\lambda}) F(R). \end{equation} (b) Let $0< \varepsilon \le c_3({\lambda})$. Suppose that $R, \varepsilon R, c_4({\lambda})\varepsilon R \in J(x, {\lambda})$. Then \begin{equation}\label{e:epstau} P^x_{\omega}( \tau(x,R) < c_5({\lambda}) F(\varepsilon R) ) \le C_6({\lambda}) \varepsilon. \end{equation} \end{proposition} {\medskip\noindent {\bf Proof. }} This follows directly from \cite[Proposition 2.1]{BJKS} and \cite[Proposition 3.2, 3.5]{KM}. {\hfill $\square$ \bigskip} \smallskip\noindent {\bf Proof of Theorems \ref{ptight}, \ref{t:means}, and \ref{thm-rwre}. } All these statements, except those relating to $\widetilde\tau_r$, follow immediately from Proposition \ref{p:KMsat} and Propositions 1.3 and 1.4 and Theorem 1.5 of \cite{KM}. Thus it remains to prove \eqref{pt-a2}, \eqref{e:mEmean}, \eqref{e:logtaulimE} and \eqref{e:rnlimE}. By the stationarity of $\sU$ it is enough to consider the case $x=0$. Recall that $U_r$ denotes the connected component of $0$ in $\sU \cap B(0,r)$, and therefore $$ \widetilde \tau_r = \min\{ n \ge 0: X_n \not\in U_r\}. $$ Let $$ H_1(r, {\lambda}) = \{ B_d(0, {\lambda}^{-1} G(r)) \subset U_r \subset B_d(0, {\lambda} G(r)) \}. $$ On $H_1(r, {\lambda})$ we have \begin{equation} \label{e:tauineq} \tau_{{\lambda}^{-1} G(r) } \le \widetilde \tau_r \le \tau_{{\lambda} G(r) }, \end{equation} while by Theorem \ref{t:Vub} and Proposition \ref{p:Ur} we have for $r \ge 1$, ${\lambda} \ge 1$, $$ {\mathbb P}( H_1(r, {\lambda}) ^c ) \le C e^{ - c {\lambda}^{2/3} }. $$ The upper bound in \eqref{pt-a2} will follow from \eqref{e:mEmean}. For the lower bound, on $H_1(r,{\lambda})$ we have, writing $R = {\lambda}^{-1} G(r)$, \begin{align} \frac{ E^0_{\omega} \widetilde \tau_r }{r^2 G(r) } \ge \frac{ E^0_{\omega} \tau_{R} }{ F(R) } \cdot \frac{ F(R)}{r^2 G(r) }, \end{align} while $F(R)/ r^2 G(r) \ge {\lambda}^{-3}$ by Lemma \ref{l:vgrwth}.
So \begin{align} {\mathbb P}\Big( \frac{ E^0_{\omega} \widetilde \tau_r }{r^2 G(r) } < {\lambda}^{-4} \Big) \le {\mathbb P} ( H_1(r,{\lambda})^c) + {\mathbb P}\Big( \frac{ E^0_{\omega} \tau_{R} }{ F(R) } < {\lambda}^{-1}\Big ), \end{align} and the bound on the lower tail in \eqref{pt-a2} follows from \eqref{pt-a}. \smallskip We now prove the remaining statements in Theorem \ref{thm-rwre}. Let $r_k=e^k$, and ${\lambda}_k = a( \log k)^{3/2}$, and choose $a$ large enough so that $$ \sum_k \exp( - c {\lambda}_k^{2/3}) < \infty. $$ Hence by Borel-Cantelli there exists a r.v. $K({\omega})$ with ${\mathbb P}(K< \infty)=1$ such that $H_1(r_k, {\lambda}_k)$ holds for all $k \ge K$. So if $k$ is sufficiently large, and $\alpha_2$ is as in \eqref{e:logtaulima}, \begin{align*} E^0_{\omega} \widetilde \tau_{r_k} \le E^0_{\omega} \tau_{{\lambda}_k G(r_k)} &\le [\log \log ({\lambda}_k G(r_k))]^{\alpha_2} {\lambda}_k G(r_k) g( {\lambda}_k G(r_k))^2 \\ &\le C (\log k)^{\alpha_3} r_k^2 G(r_k) \\ &= C (\log \log r_k)^{\alpha_3} r_k^2 G(r_k). \end{align*} Since $\widetilde \tau_r$ is monotone in $r$, the upper bound in \eqref{e:logtaulimE} follows. A very similar argument gives the lower bound, and also \eqref{e:rnlimE}. It remains to prove \eqref{e:mEmean}. A general result on random walks (see e.g. \cite{BJKS}, (2.21)) implies that $$ E^0_{\omega} \widetilde \tau_r \le R_{\rm eff}(0, U_r^c) \sum_{x \in U_r} \mu_x \leq C r^2 R_{\rm eff}(0, U_r^c) . $$ Let $z$ be the first point on the path $\gam(0,\infty)$ outside $B(0,r)$. Then $R_{\rm eff}(0, U_r^c) \le d(0, z)$, and since $\gam(0,\infty)$ has the law of an infinite LERW, $\bE d(0,z) \le \bE \widehat M_{r+1} \le C G(r)$. Hence $$ \bE (E^0_{\omega} \widetilde \tau_r) \le C r^2 G(r). $$ For the lower bound, let $$ H_2(r, {\lambda}) =\{ {\lambda}^{-1}G(r), (2 {\lambda})^{-2}G(r) \in J({\lambda}) \}. $$ Choose ${\lambda}_0$ large enough so that ${\mathbb P}( H_1(r, {\lambda}_0) \cap H_2(r, {\lambda}_0)) \ge \fract12$. If $H_2(r, {\lambda}_0)$ holds then by Proposition \ref{p:kmtau}, writing $R = {\lambda}_0^{-1} G(r)$, $$ E^0_{\omega} \tau_R \ge c({\lambda}_0) R g(R)^2. $$ So, since $R g(R)^2 \ge c({\lambda}_0) r^2 G(r)$, \begin{align*} \bE E^0_{\omega} \widetilde \tau_r &\ge \bE( E^0_{\omega} \widetilde \tau_r; H_1(r, {\lambda}_0) \cap H_2(r, {\lambda}_0) )\\ &\ge \bE( E^0_{\omega} \tau_{R} ; H_1(r, {\lambda}_0) \cap H_2(r, {\lambda}_0) )\\ &\ge \fract12 c({\lambda}_0) R g(R)^2 \\ &\ge c({\lambda}_0) r^2 G(r). \end{align*} {\hfill $\square$ \bigskip} \smallskip We now turn to the proofs of Theorems \ref{t:dist} and \ref{t:hk}, and begin with a slight simplification of Lemma 1.1 of \cite{BBSC}. \begin{lemma}\label{lem:bbi} There exists $c_0 > 0$ such that the following holds. Suppose we have nonnegative random variables $\xi_i$ which satisfy, for some $t_0>0$, $$ {\mathbb P}( \xi_i \le t_0 | \xi_1, \dots, \xi_{i-1} ) \le \fract12. $$ Then \begin{equation} {\mathbb P}( \sum_{i=1}^n \xi_i < T ) \le \exp( - c_0 n + T/t_0 ). \end{equation} \end{lemma} {\medskip\noindent {\bf Proof. }} Write $\sF_i = \sigma(\xi_1, \dots \xi_i)$. Let $\theta = 1/t_0$, and let $e^{-c_0} = \fract12 (1 + e^{-1})$.
Then \begin{align*} \bE (e^{-\theta \xi_i} | \sF_{i-1}) &\le {\mathbb P}( \xi_i < t_0 | \sF_{i-1} ) + e^{-\theta t_0} {\mathbb P}( \xi_i \ge t_0 | \sF_{i-1}) \\ &= {\mathbb P}( \xi_i < t_0 | \sF_{i-1} ) ( 1- e^{-\theta t_0} ) + e^{-\theta t_0} \\ &\le \fract12 ( 1 + e^{-\theta t_0} ) = e^{-c_0}. \end{align*} Hence \begin{align*} {\mathbb P}( \sum_{i=1}^n \xi_i < T ) = {\mathbb P}( e^{-\theta \sum_{i=1}^n \xi_i} > e^{-\theta T} ) \le e^{\theta T} \bE( e^{-\theta \sum_{i=1}^n \xi_i} ) \le e^{\theta T} e^{-n c_0}. \end{align*} {\hfill $\square$ \bigskip} We also require the following lemma, which is an immediate consequence of the definitions of the functions $F$ and $G$. \begin{lemma}\label{lem:mc} Let $R\ge 1$, $T \ge 1$, and \begin{equation}\label{e:msat5} b_0= \frac{R}{ G((T/R)^{1/2})}. \end{equation} Then, \begin{align} \label{e:m41} {R}/{b_0} &= G( (T/R)^{1/2}) = f(T/b_0), \\ \label{e:m42} b \le b_0 &\Leftrightarrow T/b \le F(R/b) \Leftrightarrow f(T/b) \le R/b. \end{align} Also, if $\theta<1$ and $\theta R \ge 1$, then \begin{align} \label{e:Fpower} c_7 \theta^3 F(R) \le F(\theta R) \le C_8 \theta^2 F(R), \\ \label{e:fpower} c_7 \theta^{1/2} f(R) \le f(\theta R) \le C_8 \theta^{1/3} f(R). \end{align} \end{lemma} \smallskip For $x \in \bZ^2$, let \begin{align*} A_x({\lambda}, n) =\{{\omega}: R' \in J(y, {\lambda}) \, \hbox { for all } y \in B(x, n^2), 1 \le R' \le n^2 \} \end{align*} and let $A({\lambda}, n) = A_0({\lambda},n)$. \begin{proposition} \label{p:tautail} Let ${\lambda} \geq 1$ and suppose that $1 \leq R \le n$, \begin{equation} \label{e:Tcond} T \ge C_{9}({\lambda}) R, \end{equation} and $A({\lambda}, n)$ occurs. Then, \begin{equation}\label{e:tautail2} P^0_{\omega}( \tau_R < T ) \le C_{10}({\lambda}) \exp\left(- c_{11}({\lambda}) \frac{R}{G( (T/R)^{1/2}) } \right). \end{equation} \end{proposition} {\medskip\noindent {\bf Proof. }} In this proof, the constants $c_i({\lambda})$, $C_i({\lambda})$ for $1\le i \le 8$ will be as in Proposition \ref{p:kmtau} and Lemma \ref{lem:mc}, and $c_0$ will be as in Lemma \ref{lem:bbi}. We work with the probability $P^0_{\omega}$, so that $X_0=0$. Let $b_0= R/ G((T/R)^{1/2})$ be as in \eqref{e:msat5}, and define the quantities \begin{eqnarray*} \varepsilon = (2C_6({\lambda}))^{-1}, \qquad & \qquad \theta = \frac14 C_8^{-1} c_0 c_5({\lambda}) \varepsilon^2, \qquad & \qquad C^*({\lambda}) = 2\theta^{-1}, \\ m = \lfloor \theta b_0 \rfloor, \qquad & \qquad R' = R/m, \qquad & \qquad t_0 = c_5({\lambda}) F(\varepsilon R'). \end{eqnarray*} We now establish the key facts that we will need about the quantities defined above. We can assume that $b_0 \ge C^*({\lambda})$, since if $b_0 \le C^*({\lambda})$ then, by adjusting the constants $C_{10}({\lambda})$ and $c_{11}({\lambda})$, we will still obtain \eqref{e:tautail2}. Therefore, \begin{equation} \label{e:msat2} 1 \leq \fract12 \theta b_0 \leq m \le \theta b_0. \end{equation} Furthermore, since $m / \theta \leq b_0$, $\theta R /m \ge G((T/R)^{1/2}) \geq 1$ and $\theta/\varepsilon < 1$, we have by Lemma \ref{lem:mc} that $$ T/m \le \theta^{-1} F(\theta R/m) \le C_8 \theta \varepsilon^{-2} F( \varepsilon R/m) \le \fract14 c_0 c_5({\lambda}) F( \varepsilon R/m) = \frac{1}{4} c_0 t_0. $$ Therefore, \begin{equation} \label{e:msat3} T/t_0 < {\textstyle \frac12} c_0 m.
\end{equation} Finally, we choose $$ C_{9}({\lambda}) \geq g(c_4({\lambda})^{-1} \varepsilon^{-1} \theta)^2,$$ so that if $T/R \geq C_{9}({\lambda})$, then $$ G((T/R)^{1/2}) \geq c_4({\lambda})^{-1} \varepsilon^{-1} \theta,$$ and therefore \begin{equation} \label{e:msat1} c_4({\lambda}) \varepsilon R' \geq c_4({\lambda}) \varepsilon R \theta^{-1} b_0^{-1} \geq 1. \end{equation} Having established \eqref{e:msat2}, \eqref{e:msat3} and \eqref{e:msat1}, the proof of the proposition is straightforward. Let $\sF_n = \sigma(X_0, \dots, X_n)$. Define stopping times for $X$ by \begin{align*} T_0 &= 0, \\ T_k &= \min \{ j \ge T_{k-1}: X_j \not\in B_d( X_{T_{k-1}}, R'-1) \}, \end{align*} and let $\xi_k = T_k - T_{k-1}$. Note that $T_m \leq \tau_R$, and that if $k \le m$, then $$ X_{T_k} \in B_d(0, kR' ) \subset B_d(0,n) \subset B(0,n).$$ Therefore, since \eqref{e:msat1} holds and $A({\lambda}, n)$ occurs, we can apply Proposition \ref{p:kmtau} to obtain that \begin{equation*} P^0_{\omega}( \xi_k < c_5({\lambda}) F(\varepsilon R' )| \sF_{k-1} ) \le C_6({\lambda}) \varepsilon = \fract12. \end{equation*} Hence by Lemma \ref{lem:bbi} and \eqref{e:msat3}, \begin{align*} P^0_{\omega}( \tau_R < T ) &\le P^0_{\omega}( \sum_1^m \xi_i < T ) \\ &\le \exp( -c_0 m + T/t_0) \\ &\le \exp(- c_0 m/2) \\ &\le \exp\left(- c_{11}({\lambda}) \frac{R}{G( (T/R)^{1/2}) } \right). \end{align*} {\hfill $\square$ \bigskip} \smallskip\noindent {\bf Proof of Theorem \ref{t:dist}. } We will prove Theorem \ref{t:dist} with $T$ replacing $n$. Let $R = f(T)$; we can assume that $T$ is large enough so that $R \ge 2$. We also let $C_9({\lambda})$, $C_{10}({\lambda})$ and $c_{11}({\lambda})$ be as in Proposition \ref{p:tautail}, and let $p > 0$ be such that $C_{i}({\lambda}) \leq C {\lambda}^{p}$, $i=9,10$ and $c_{11}({\lambda}) \geq c {\lambda}^{-p}$. We have \begin{align} \nonumber E^0_{\omega} d(0, X_T)^q &\le R^q + E^0_{\omega}\Big( \sum_{k=1}^\infty 1_{( e^{k-1} R \le d(0, X_T) < e^k R)} d(0, X_T)^q \Big)\\ \label{e:d-est} &\le R^q + R^q \sum_{k=1}^\infty e^{kq} P^0_{\omega}( e^{k-1} R \le d(0, X_T) \le e^{k} R ) . \end{align} By \eqref{e:kmest} we have \begin{equation} \label{e:apbnd} {\mathbb P}( A({\lambda},n)^c ) \le 4 n^3 e^{-c{\lambda}^{1/9} } \le \exp( -c {\lambda}^{1/9} + C \log n ). \end{equation} Let ${\lambda}_k = k^{10}$. Then $\sum_k {\mathbb P}( A({\lambda}_k, e^k)^c) < \infty$, and so by Borel-Cantelli there exists $K_0({\omega})$ such that $A({\lambda}_k, e^k)$ holds for all $k \ge K_0$. Furthermore, we have \begin{equation} \label{e:Ktail} {\mathbb P}( K_0 \ge n ) \le C e^{-c n^{10/9}}. \end{equation} Suppose now that $k \geq K_0$. To bound the sum \eqref{e:d-est}, we consider two ranges of $k$. If $C_{9}({\lambda}_k)e^{k-1} R > T$, then we let $A_k = B_d(0, e^k R) - B_d(0, e^{k-1} R)$, and by the Carne-Varopoulos bound (see \cite{Ca}), \begin{align} \nonumber e^{kq} P^0_{\omega}( e^{k-1} R \le d(0, X_T) \le e^{k} R ) &\le e^{kq} \sum_{y \in A_k } P^0_{\omega}(X_T =y) \\ \nonumber &\le e^{kq} \sum_{y \in A_k } C \exp( - d(0,y)^2/2T )\\ \nonumber &\le C e^{kq} (e^k R)^2 \exp( - (e^{k-1} R)^2/2T ) \\ \nonumber &\le C \exp( - C_{9}({\lambda}_k)^{-1} e^kR + 2 \log(e^k R) + kq )\\ \label{e:dsum1} &\le C \exp( - c k^{-10p} e^k + C_q k ). \end{align} On the other hand, if $C_{9}({\lambda}_k)e^{k-1} R \leq T$, then we let $m= \lceil k + \log R \rceil$, so that $e^k R \le e^m < e^{k+1} R$.
Then by Proposition \ref{p:tautail}, \begin{align} \nonumber e^{kq} P^0_{\omega}( e^{k-1} R \le d(0, X_T) \le e^k R ) &\le e^{kq} P^0_{\omega} ( \tau_{e^{k-1} R} < T) \\ \nonumber &\le e^{kq} C_{10}({\lambda}_m) \exp\left( - c_{11}({\lambda}_m) \frac{e^{k-1}R}{G((e^{-k+1}T/R)^{1/2})} \right) \\ \nonumber &\le e^{kq} C m^{10p} \exp\left( - c m^{-10p} e^{k} \frac{R}{G((T/R)^{1/2})} \right) \\ &\le C (k + \log R)^{10p} \exp( - c (k + \log R)^{-10p} e^k + kq ). \end{align} Let $k_1= 20p \log \log R$. Then if $k \ge k_1$, $$ (k + \log R)^{10p} \le ( k + e^{k/(20p)})^{10p} \le C e^{k/2}. $$ Hence for $k \ge k_1$, \begin{align} \label{e:dsum2} e^{kq} P^0_{\omega}( e^{k-1} R \le d(0, X_T) \le e^k R ) &\le C \exp( - c e^{k/2} + C_q k ). \end{align} Let $K' = K_0 \vee k_1$. Then since the series given by \eqref{e:dsum1} and \eqref{e:dsum2} both converge, \begin{align}\nonumber \sum_{k=1}^\infty e^{kq} P^0_{\omega}( e^{k-1} R \le d(0, X_T) \le e^k R ) &\le \sum_{k=1}^{K'-1} e^{kq} +C_q \\ \nonumber &\le e^{K' q} + C_q \\ \nonumber &\le e^{K_0 q } + (\log R)^{20 p q} + C_q. \end{align} Hence since $R \le T$, we have that for all $T \geq N_0 = e^{e^{K_0}}$ \begin{equation}\label{e:dub1} E^0_{\omega} d(0, X_T)^q \le C_q R^q( (\log T)^q + (\log T )^{20 pq} ), \end{equation} so that \eqref{e:qdub} holds. Taking expectations in \eqref{e:dub1} and using \eqref{e:Ktail} gives \eqref{e:adub}. {\hfill $\square$ \bigskip} \begin{remark}{\rm It is natural to ask if \eqref{e:adub} holds without the term in $\log T$, as with the averaged estimates in Theorem \ref{t:means}. It seems likely that this is the case; such an averaged estimate was proved for the incipient infinite cluster on regular trees in \cite[Theorem 1.4(a)]{BK06}. The key to obtaining such a bound is to control the exit times $\tau_{e^k R}$; this was done above using the events $A({\lambda}, n)$, but this approach is far from optimal. The argument of Proposition \ref{p:tautail} goes through if only a positive proportion of the points $X_{T_k}$ are at places where the estimate \eqref{e:epstau} can be applied. This idea was used in \cite{BK06} -- see the definition of the event $G_2(N,R)$ on page 48. Suppose we say that $B_d(x,R)$ is ${\lambda}$-bad if $R \not\in J(x, {\lambda})$. Then it is natural to conjecture that there exists ${\lambda}_c$ such that for ${\lambda} > {\lambda}_c$ the bad balls fail to percolate on $\sU$. Given such a result (and suitable control on the size of the clusters of bad balls) it seems plausible that the methods of this paper and \cite{BK06} would then lead to a bound of the form $\bE( E^0_{\omega} d(0,X_T)^q) \le C_q f(T)^q$. } \end{remark} \medskip We now use the arguments in \cite{BCK} to obtain full heat kernel bounds for $p_T(x,y)$ and thereby prove Theorem \ref{t:hk}. Since the techniques are fairly standard, we only give full details for the less familiar steps. \begin{lemma} \label{lem:alam} Suppose $A({\lambda}, n)$ holds. Let $x, y \in B(0,n)$. Then \\ (a) \begin{equation}\label{e:ptondub} p_T(x,y) \le C_{12}({\lambda}) k(T)^{-1}, \quad \hbox{ if } 1 \le T \le F(n). \end{equation} (b) \begin{equation}\label{e:ptndlb} \wt p_T(x,y) \ge c_{13}({\lambda}) k(T)^{-1}, \quad \hbox{ if } 1 \le T \le F(n) \hbox{ and } d(x,y) \le c_{14}({\lambda}) f(T). \end{equation} \end{lemma} {\medskip\noindent {\bf Proof. }} If $x=y$ then (a) is immediate from \cite[Proposition 3.1]{KM}. Since $p_T(x,y)^2 \le \wt p_T(x,x) \wt p_T(y,y)$, the general case then follows. 
\\ (b) The bound when $x=y$ is given by \cite[Proposition 3.3(2)]{KM}. We also have, by \cite[Proposition 3.1]{KM}, \begin{equation*} |\wt p_T(x,y) - \wt p_T(x,z)|^2 \le \frac{c}{T} d(y,z) p_{2 \lfloor T/2 \rfloor}(x,x). \end{equation*} Therefore using (a), \begin{align*} \wt p_T(x,y) &\ge \wt p_T(x,x) - |\wt p_T(x,x) - \wt p_T(x,y)| \\ &\ge c({\lambda}) k(T)^{-1} - \Big( C({\lambda}) d(x,y) T^{-1} k(T)^{-1} \Big)^{1/2} \\ &= c({\lambda}) k(T)^{-1} \Big( 1- \big( C({\lambda}) d(x,y) T^{-1} k(T) \big)^{1/2} \Big). \end{align*} Since $k(T)/T = f(T)^{-1}$, \eqref{e:ptndlb} follows. {\hfill $\square$ \bigskip} Recall that $\Phi(T,x,x) = 0$, and for $x \neq y$, $$ \Phi(T,x,y)= \frac{d(x,y)}{G((T/ d(x,y))^{1/2})}.$$ \begin{proposition} \label{p:ubfix} Suppose that $A({\lambda}, n)$ holds. Let $x, y \in B(0,n)$. If $ d(x,y) \le T \le F(n)$, then \begin{align}\label{e:hkb} \frac{c({\lambda})}{k(T)} \exp\Big( - C({\lambda}) \Phi(T,x,y) \Big) \le \wt p_T(x,y) &\le \frac{C({\lambda})}{k(T)} \exp\Big( - c({\lambda}) \Phi(T,x,y) \Big). \end{align} \end{proposition} {\medskip\noindent {\bf Proof. }} Let $R = d(x,y)$. In this proof we take $c_{13}({\lambda})$ and $c_{14}({\lambda})$ to be as in \eqref{e:ptndlb}. We will choose a constant $C^*({\lambda})\ge 2$ later. Suppose first that $R \le T \le C^*({\lambda})R$. Then the upper bound in \eqref{e:hkb} is immediate from the Carne-Varopoulos bound. If $R + T$ is even, then we have $p_T(x,y) \ge 4^{-T}$, and this gives the lower bound. We can therefore assume that $T \ge C^*({\lambda})R$. The upper bound follows from the bounds \eqref{e:ptondub} and \eqref{e:tautail2} by the same argument as in \cite[Proposition 3.8]{BCK}. It remains to prove the lower bound in the case when $T \ge C^*({\lambda})R$, and for this we use a standard chaining technique which derives \eqref{e:hkb} from the `near diagonal lower bound' \eqref{e:ptndlb}. For its use in a discrete setting see for example \cite[Section 3.3]{BCK}. As in Lemma \ref{lem:mc}, we set \begin{equation}\label{e:mchoose} b_0 = \frac{R}{G((T/R)^{1/2})}. \end{equation} If $b_0< 1$ then we have from Lemma \ref{lem:mc} that $R \le C_8 b_0^{2/3} f(T)$. If $C_8 b_0^{2/3} \le c_{14}({\lambda})$ then $R \le c_{14}({\lambda}) f(T)$ and the lower bound in \eqref{e:hkb} follows from \eqref{e:ptndlb}. We can therefore assume that $C_8 b_0^{2/3} > c_{14}({\lambda})$. We will choose $\theta> 2 (c_{14}/C_8)^{-3/2}$ later; this then implies that $\theta b_0 \ge 2$. Let $m = \lfloor \theta b_0 \rfloor$; we have ${\textstyle \frac12} \theta b_0 \le m \le \theta b_0$. Let $r = R/m$, $t=T/m$; we will require that both $r$ and $t$ are greater than $4$. Choose integers $t_1, \dots, t_m$ so that $|t_i - t| \le 2$ and $\sum t_i = T$. Choose a chain $x=z_0, z_1, \dots, z_m = y$ of points so that $d(z_{i-1}, z_i) \le 2r$, and let $B_i = B(z_i, r)$. If $x_i \in B_i$ for $1\le i \le m$ then $d(x_{i-1}, x_i) \le 4r$. We choose $\theta$ so that we have \begin{equation} \label{e:usend} \wt p_{t_i}(x_{i-1},x_i) \ge c_{13}({\lambda}) k(t)^{-1} \hbox { whenever } x_{i-1} \in B_{i-1}, \, x_i \in B_i. \end{equation} By \eqref{e:ptndlb} it is sufficient for this that \begin{equation} \label{e:rtcond} 4R/m= 4r \le c_{14}({\lambda}) f(t/2) = c_{14}({\lambda}) f(T/2m).
\end{equation} Since $2m/\theta \ge b_0$, Lemma \ref{lem:mc} implies that $f( \theta T/(2m)) \ge \theta R/(2m)$, and therefore \begin{equation} \label{e:et4} 4R/m \le 8 \theta^{-1} f(\theta T/(2m)) \le C \theta^{-1/3} f(T/2m), \end{equation} and so taking $\theta =\max(2 (c_{14}/C_8)^{-3/2}, (C/c_{14}({\lambda}))^3)$ gives \eqref{e:rtcond}. The condition $T \ge C^*({\lambda})R$ implies that $f(T/b_0) = R/b_0 \ge G(C^*({\lambda}))$, so taking $C^*$ large enough ensures that both $r$ and $t$ are greater than 4. The Chapman-Kolmogorov equations give \begin{align} \nonumber \wt p_T(x,y) &\ge \sum_{x_1 \in B_1} \dots \sum_{x_{m-1} \in B_{m-1}} p_{t_1}(x_0,x_1)\mu_{x_1} p_{t_2}(x_1,x_2)\mu_{x_2} \dots \\ \label{e:cklb} & {\qquad} {\qquad} p_{t_{m-1}}(x_{m-2}, x_{m-1})\mu_{x_{m-1}} \wt p_{t_{m}}(x_{m-1},y). \end{align} Since $x_{m-1} \in B_{m-1} $ we have $\wt p_{t_{m}}(x_{m-1},y) \ge c_{13}({\lambda}) k(t)^{-1} \ge c_{13}({\lambda}) k(T)^{-1}$. Note that exactly one of $p_t(x,y)$ and $p_{t+1}(x,y)$ can be non-zero. Using this, and \eqref{e:usend} we deduce that for $1\le i \le m-1$, \begin{equation} \sum_{x_{i} \in B_{i}} p_{t_i}(x_{i-1}, x_i) \mu_{x_i} \ge c({\lambda}) k(t)^{-1} g(r)^2. \end{equation} The choice of $m$ implies that $c'({\lambda}) f(t) \le r \le c({\lambda}) f(t) $, and therefore $$ k(t)^{-1} g(r)^2 = g(r)^2/g(f(t))^2 \ge c({\lambda}). $$ So we obtain \begin{equation} \wt p_T(x,y) \ge k(T)^{-1} c({\lambda})^m \ge k(T)^{-1} \exp( - c({\lambda}) R / G((T/R)^{1/2} )). \end{equation} {\hfill $\square$ \bigskip} \smallskip\noindent {\bf Proof of Theorem \ref{t:hk} } As in the proof of Theorem \ref{t:dist}, we have by \eqref{e:kmest} \begin{equation*} {\mathbb P}( A({\lambda},n)^c ) \le 4 n^3 e^{-c{\lambda}^{1/9} } \le \exp( -c {\lambda}^{1/9} + C \log n ). \end{equation*} Therefore if we let ${\lambda}_n = (\log n)^{18}$, then by Borel-Cantelli, for each $x \in {\mathbb Z}^2$ there exists $N_x$ such that $A_x({\lambda}_n,n)$ holds for all $n \ge N_x$. Further we have that $$ {\mathbb P}( N_x \ge n ) \le C e^{ -c (\log n)^2 }.$$ Let $x, y \in {\mathbb Z}^2$ and $T \ge 1$. To apply the bound in Proposition \ref{p:ubfix} we need to find $n$ such that $T \le F(n)$, $y \in B(x,n)$ and $n \ge N_x$. Hence if $f(T) \vee |x-y| \ge N_x$ we can take $n = f(T) \vee |x-y|$, to obtain \eqref{e:hkb} with constants $c({\lambda}_n) = c (\log n)^p$. Choosing $\alpha$ suitably then gives \eqref{e:hkb2}. {\hfill $\square$ \bigskip} \medskip \begin{remark} {\rm If both $d(x,y)=R$ and $T$ are large then since $d_w = 13/5$ $$ \Phi(T,x,y) \simeq R ((T/R)^{1/2})^{-5/4} = \frac{R^{13/8}}{T^{5/8}} = (R^{d_w}/T)^{1/(d_w-1)}.$$ Thus the term in the exponent takes the usual form one expects for heat kernel bounds on a regular graph with fractal growth -- see the conditions UHK$(\beta)$ and LHK$(\beta)$ on page 1644 of \cite{BCK}. } \end{remark} \smallskip\noindent {\bf Acknowledgment} The first author would like to thank Adam Timar for some valuable discussions on stationary trees in ${\mathbb Z}^d$. The second author would like to thank Greg Lawler for help in proving Lemma \ref{condesc}.
\section{Acknowledgements} \label{sec:acknowledgements} The authors would like to thank Joop Schaye and Claudio Dalla Vecchia for their helpful conversations. JB is supported by STFC studentship ST/R504725/1. MS is supported by the Netherlands Organisation for Scientific Research (NWO) through VENI grant 639.041.749. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. \subsection{Software Citations} This paper made use of the following software packages: \begin{itemize} \item {\tt python} \citep{VanRossum1995}, with the following libraries \begin{itemize} \item {\tt numpy} \citep{Harris2020} \item {\tt scipy} \citep{SciPy1.0Contributors2020} \item {\tt matplotlib} \citep{Hunter2007} \item {\tt numba} \citep{Lam2015} \item {\tt swiftsimio} \citep{Borrow2020a} \end{itemize} \end{itemize} \section{Data Availability} \label{sec:dataavail} No new data were generated or analysed in support of this research. The figures in this paper were all produced from the equations described within, and all programs used to generate those figures are available in the {\tt GitHub} repository at \url{https://github.com/JBorrow/pressure-sph-paper-plots}. They are also available in the online supplementary material accompanying this article. \section{Conclusions} \label{sec:conclusions} The Pressure-Energy and Pressure-Entropy schemes have been prized for their ability to capture contact discontinuities significantly better than their Density-based cousins due to their use of a directly smoothed pressure field \citep{Hopkins2013}. However, there are several disadvantages to using these schemes, as presented in this work: \begin{itemize} \item Injecting energy in a Pressure-Entropy based scheme requires the use of an iterative solver and many transformations between variables. This makes the scheme computationally expensive, and as such an efficient implementation is required for it to be used in practice. Approximate solutions do exist, but result in incorrect amounts of energy being injected into the field when particles are heated only by a small amount (typically by less than 100 times their own internal energy). This occurs even in the case where the fluid is evolved with a single, global time-step, and is complicated even further by the inclusion of the multiple time-stepping scheme that is commonplace in cosmological simulations. \item In a Pressure-Energy based scheme, the injection of energy in a multi-d$t$ simulation requires either `waking up' all of the neighbours of the affected particle (and forcing them to be active in the next time-step), or a loop over these neighbours to back-port changes to their pressure due to the changes in internal energy of the heated particle. This is a computationally expensive procedure, and is generally avoided in the practical use of these schemes. As such, while no explicit energy conservation errors manifest, there is an offset between the energy field represented by the particle distribution and the associated smooth pressure field in practical implementations.
\item These issues also manifest themselves in cases where energy is removed from active particles, such as an `operator-splitting' radiative cooling scheme where energy is directly removed from particles. \item Correctly `drifting' the smoothed pressure of particles (as is required in a multi-d$t$ simulation) requires knowing the time differential of the smoothed pressure. To compute this, either an extra loop over neighbours is required for active particles, or an approximate solution based on the time differential of the density field and internal energy field is used. This approximate solution does not account for the changes taking place in the local internal energy field and as such does not correctly capture the evolution of the smoothed pressure. \item Even when using the `correct' drift operator for the smoothed pressure, significant pressure (and hence force) errors can occur when particles cool rapidly. This can be mitigated somewhat with time-step limiting techniques (either through the use of a time-step limiter like the one described in \citet{Durier2012} or through a careful construction of a more representative sound-speed), but it is not possible to prevent errors on the same order as the relative energy difference between the cooling particle and its neighbours. \end{itemize} All of the above listed issues are symptomatic of one main flaw in these schemes: the SPH method assumes that the variables being smoothed over vary slowly during a single time-step. This is often true for the internal energy or particle entropy in idealised hydrodynamics tests, but in practical simulations with sub-grid radiative cooling (and energy injection) this leads to significant errors. These errors could be mitigated by using a different cooling model, where over a single time-step only small changes in the energies of particles could be made (i.e. by limiting the time-steps of particles to significantly less than their cooling time); however, this would render most cosmological simulations impractical to complete due to the huge increase in the number of time-steps that this would imply. Thankfully, due to the explicit connection between internal energy and pressure in the Density-based SPH schemes, they do not suffer the same ills. They also smooth over the mass field, which either does not vary or generally varies very slowly (on much larger timescales than the local dynamical time). As such, the only recommendation that can be made is to move away from Pressure-based schemes in favour of their Density-based cousins, solving the surface tension issues at contact discontinuities with artificial conduction instead of relying on the smoothed pressure field from Pressure-based schemes. It is worth noting that most modern implementations of the Pressure-based schemes already use an artificial conduction (also known as energy diffusion) term to resolve residual errors in fluid mixing problems \citep{Hu2014,Hopkins2015}. Of particular note is the lack of phase mixing (due to the non-diffusive nature of SPH) between hot and cold fluids, even in Pressure-SPH. \section{A simple galaxy formation model} \label{sec:eagle} The discussion that follows requires an understanding of two pieces of a galaxy formation model: energy injection into the fluid and energy removal from the fluid. These are used to model the processes of supernovae and AGN feedback, and radiative cooling, respectively.
The results presented here are not necessarily tied to the model used, and are applicable to a wide range of current galaxy formation models that use Pressure-based SPH schemes. Here we use a simplified version of the {EAGLE}{} galaxy formation model as an instructive example, as this model used Pressure-Entropy SPH for its hydrodynamics in \citet{Schaye2015} and associated works \citep[of particular note is][, which discusses the effects of the choice of numerical SPH scheme on galaxy properties]{Schaller2015}. \subsection{Cooling} \label{sec:cooling} The following equation is solved implicitly for each particle separately: \begin{equation} u(t + \Delta t) = u(t) + \frac{\mathrm{d}u}{\mathrm{d}t}(t + \Delta t)\Delta t, \end{equation} where $\mathrm{d}u/\mathrm{d}t$ is the `cooling rate' calculated from the underlying atomic processes. The resulting final internal energy is then transformed into an average rate of change of internal energy as a function of time over the step, \begin{equation} \bar{\frac{\mathrm{d}u}{\mathrm{d}t}} = \frac{u(t + \Delta t) - u(t)}{\Delta t}. \label{eqn:avg_cooling_rate} \end{equation} After this occurs, this rate is limited in some circumstances that are not relevant to the discussion here \citep[see][for more detail]{Schaye2015}. This average `cooling rate' is then applied as either an addition to the $\mathrm{d}u/\mathrm{d}t$ or $\mathrm{d}A/\mathrm{d}t$ from the hydrodynamics scheme for each particle, depending on the variable that the scheme tracks. \subsection{Energy Injection Feedback} A common, simple, feedback model is implemented as heating particles \emph{by} a constant temperature jump. It is possible to implement different types of feedback with this method, all being represented with a separate change in temperature $\Delta T$. For supernovae feedback, $\Delta T_{\rm SNII} = 10^{7.5}$ K, and for AGN $\Delta T_{\rm AGN} = 10^{8.5}$ K (in {EAGLE}{}). The change in temperature does not actually ensure that the particle has this temperature once the feedback has taken place, however; the amount of energy corresponding to heating a particle from 0 K to this temperature is added to the particle. This ensures that energy is still injected even in cases where the particle is already hotter than the heating temperature. To apply feedback to a given particle, this change in temperature must be converted to a change in internal energy. This is performed by using a linear relationship between temperature and internal energy to find the internal energy that corresponds to a temperature of $\Delta T$, and adding this additional energy onto the internal energy of the particle.
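As an illustration, both of these operations can be written down compactly. The following {\tt python} sketch assumes an ideal, monatomic gas and a fixed, illustrative mean molecular weight $\mu = 0.6$; the function names and the choice of $\mu$ are ours, purely for illustration, and are not the exact choices made in any particular simulation code.

\begin{verbatim}
# Physical constants (SI units)
K_B = 1.380649e-23   # Boltzmann constant [J / K]
M_H = 1.6735575e-27  # hydrogen mass [kg]

GAMMA = 5.0 / 3.0    # ratio of specific heats
MU = 0.6             # illustrative mean molecular weight


def average_cooling_rate(u_start, u_final, dt):
    """Average du/dt over the step, with u_final the result of the
    implicit solve for u(t + dt)."""
    return (u_final - u_start) / dt


def delta_u_from_delta_T(delta_T):
    """Convert a heating temperature jump [K] to a change in internal
    energy per unit mass [J / kg], using the linear ideal-gas relation
    u = k_B T / ((gamma - 1) mu m_H)."""
    return K_B * delta_T / ((GAMMA - 1.0) * MU * M_H)


# Example: the specific energy injected by a 10^7.5 K heating event
print(delta_u_from_delta_T(10.0 ** 7.5))
\end{verbatim}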
\section{Energy injection in Pressure-Entropy} \label{sec:energyinjection} In cosmology codes it is typical to use the particle-carried entropy as the thermodynamic variable rather than the internal energy. This custom originated because in many codes \citep[of particular note here is {\sc Gadget}{};][]{ Springel2005} the choice of co-ordinates in a space co-moving with the expansion of the Universe is such that the entropy variable is cosmology-less, i.e. it is the same in physical and co-moving space. Entropy is also conserved under adiabatic expansion, meaning that fewer equations of motion are required. This makes it convenient from an implementation point of view to track entropy rather than internal energy. However, at the level of the equation of motion, this makes no difference, as this is essentially just a choice of co-ordinate system. This naturally leads to the Pressure-Entropy variant (i.e. as opposed to Pressure-Energy) of the Pressure-based schemes being frequently chosen; here the main smoothed quantity is pressure, with entropy being the thermodynamic variable. The Pressure-Entropy and Pressure-Energy schemes perform equally well on hydrodynamics tests (see \citet{Hopkins2013} for a collection), but when coupling to sub-grid physics there are some key differences. For an entropy-based scheme, energy injection naturally leads to a conversion between the requested energy input and an increase in entropy for the relevant particle. Considering a Density-Entropy scheme to begin with \citep[e.g.][]{Springel2002}, with only a smooth density $\hat{\rho}$, \begin{align} P_i = (\gamma - 1) u_i \hat{\rho}_i, \end{align} with $P$ the pressure from the equation of state, $\gamma$ the ratio of specific heats, and $u_i$ the particle energy per unit mass. In addition, the expression for the pressure as a function of the entropy $A_i$ is \begin{align} P_i = A_i \hat{\rho}_i^\gamma. \end{align} Given that these should give the same thermodynamic pressure, the pressure variable can be eliminated to give \begin{align} u_i = \frac{A_i \hat{\rho}_i^{\gamma - 1}}{\gamma - 1}, \end{align} and as these variables are independent, for a change in energy $\Delta u$ the change in entropy can be written \begin{align} \Delta A_i = (\gamma - 1)\frac{\Delta u_i}{\hat{\rho}_i^{\gamma - 1}}. \end{align} For any energy based scheme (either Density-Energy or Pressure-Energy), it is possible to directly modify the internal energy per unit mass $u$ of a particle, and this directly corresponds to the same change in total energy of the field. This is also true for the Density-Entropy scheme. Then, the sum of all energies (converted from entropies in the Density-Entropy case) in the box will be the original value plus the injected energy, without the requirement for an extra loop over neighbours\footnote{This is only true given that the values entering the smooth quantities, here the density, are not changed at the same time. In practice, the mass of particles in cosmological simulations either does not change or changes very slowly with time (due to sub-grid stellar enrichment models for instance).}. Now considering Pressure-Entropy, the smoothed pressure shown in Equation \ref{eqn:smoothedpressure} at a particle depends on a smoothed entropy over all of its neighbours. To connect the internal energy and entropy of a particle, the equation of state can again be used by introducing a new variable, the weighted density $\bar{\rho}$, \begin{align} \hat{P}_i = (\gamma - 1) u_i \bar{\rho}_i = A_i \bar{\rho}_i^\gamma, \end{align} which can be rearranged to eliminate the weighted density $\bar{\rho}$ such that \begin{align} A_i(u_i) = \hat{P}_i^{1 - \gamma} \left[ ( \gamma - 1) u_i \right]^\gamma, \label{eqn:Aasfuncu} \end{align} \begin{align} u_i(A_i) = \frac{A_i^{1/\gamma} \hat{P}_i^{1 - 1/\gamma}}{\gamma - 1}. \label{eqn:uasfuncA} \end{align} To inject energy into the \emph{field} by explicitly heating a single particle $i$ in any entropy-based scheme the key is to find $\Delta A_i$ for a given $\Delta u_i$. In a pressure-based scheme this is problematic, as (converting Equation \ref{eqn:Aasfuncu} to a set of differences), \begin{equation} \Delta A = \hat{P}_i(A_i)^{1 - \gamma} \left[ (\gamma - 1) (u_i + \Delta u) \right]^\gamma - A_i, \label{eqn:deltaAasfuncu} \end{equation} to find this difference requires conversion via the smoothed pressure, which directly depends on the value of $A_i$. This also occurs for the particles that neighbour $i$, meaning that there will be a non-zero change in the energy $u_j$ that they report. This means that simply solving a linear equation for $\Delta A(\Delta u)$ is not enough; whilst this can be calculated, the true change in energy of the whole field will not be $\Delta u$ (as it was in Density-Entropy) because of the changing pressures of the neighbours. When attempting to inject energy it is vital that these contributions to the total field energy are considered. To correctly account for these changes, we must turn to an iterative solution.
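For reference, the three conversions used above can be transcribed directly into a short {\tt python} sketch (the variable and function names are ours):

\begin{verbatim}
GAMMA = 5.0 / 3.0


def A_from_u(u, P_hat):
    """Entropy (adiabat) from internal energy at fixed smoothed
    pressure (eqn. Aasfuncu)."""
    return P_hat ** (1.0 - GAMMA) * ((GAMMA - 1.0) * u) ** GAMMA


def u_from_A(A, P_hat):
    """Internal energy from entropy at fixed smoothed pressure
    (eqn. uasfuncA)."""
    return A ** (1.0 / GAMMA) * P_hat ** (1.0 - 1.0 / GAMMA) / (GAMMA - 1.0)


def delta_A_density_entropy(delta_u, rho_hat):
    """Density-Entropy: the change in A for a change in u is local
    and exact; no neighbour information is required."""
    return (GAMMA - 1.0) * delta_u / rho_hat ** (GAMMA - 1.0)


# Round trip: u -> A -> u recovers the input at fixed P_hat
assert abs(u_from_A(A_from_u(2.0, 3.0), 3.0) - 2.0) < 1e-12
\end{verbatim}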
\begin{figure} \centering \includegraphics{plots/energy_ratio_improved.pdf} \caption{Energy injection as a function of iterations of the neighbour loop-based algorithm in Pressure-Entropy. Different coloured lines show ratios of injected energy to the original energy of the chosen particle, increasing in steps of 10. This algorithm allows for the correct energy to be injected into each particle after around 10 iterations; however, more complex convergence criteria could be incorporated. A better estimate of the change in the smoothed pressure $\hat{P}$ could also significantly improve convergence.} \label{fig:energy_injection_better} \end{figure} \begin{figure} \centering \includegraphics{plots/energy_ratio_EAGLE.pdf} \caption{The same as Fig. \ref{fig:energy_injection_better}, however this time using an approximate algorithm that only updates the self-contribution of the heated particle. This version of the algorithm shows non-convergent behaviour at low energy injection values, but is significantly computationally cheaper than solutions that require neighbour loops during the iteration procedure.} \label{fig:energy_injection_EAGLE} \end{figure} A simple algorithm for injecting energy $\Delta u$ in this case would be as follows (a schematic implementation is given after the list): \begin{enumerate} \item Calculate the total energy of all particles that neighbour the one that will have energy injected, $u_{{\rm field}, i} = \sum_j u(A_j, \hat{P}_j)$\footnote{More specifically we actually require all particles $j$ that see particle $i$ as a neighbour (rather than all particles $j$ that $i$ sees as a neighbour), which may be different in regions where the smoothing length varies significantly over a kernel, but this detail is omitted from the main discussion for clarity.}. \vspace{1.5mm} \item Find a target energy for the field, $u_{{\rm field}, t} = u_{{\rm field}, i} + \Delta u$. \vspace{1.5mm} \item While the energy of the field $u_{{\rm field}} = \sum_j u(A_j, \hat{P}_j)$ is outside of the bounds of the target energy: \begin{enumerate} \item Calculate $A_{\rm inject} = A(u_{{\rm field}, t} - u_{{\rm field}}, \hat{P})$ for the particle that will have energy injected (i.e. apply Equation \ref{eqn:deltaAasfuncu} assuming that $\hat{P}_i$ does not change).\vspace{1.5mm} \item Add on $A_{\rm inject}$ to the entropy of the chosen particle.\vspace{1.5mm} \item Re-calculate the smoothed pressures for all neighbouring particles.\vspace{1.5mm} \item Re-calculate the energy of the field $u_{{\rm field}}$ (i.e. go to item \emph{iii} above). \end{enumerate} \end{enumerate}
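A schematic {\tt python} implementation of this loop-based algorithm is given below. Purely for brevity it assumes unit particle masses, a Gaussian kernel, a fixed number of iterations, and a positive injected energy; none of these choices are essential to the method.

\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0


def W(r, h):
    # Illustrative Gaussian kernel; any valid SPH kernel works here.
    return np.exp(-((r / h) ** 2)) / (np.pi ** 1.5 * h ** 3)


def smoothed_pressures(x, A, h):
    # The smoothed pressure for every particle, with unit masses.
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return ((A ** (1.0 / GAMMA))[None, :]
            * W(r, h[:, None])).sum(axis=1) ** GAMMA


def u_from_A(A, P_hat):
    return A ** (1.0 / GAMMA) * P_hat ** (1.0 - 1.0 / GAMMA) / (GAMMA - 1.0)


def A_from_u(u, P_hat):
    return P_hat ** (1.0 - GAMMA) * ((GAMMA - 1.0) * u) ** GAMMA


def inject_with_neighbour_loops(i, x, A, h, delta_u, n_iter=10):
    # Iterate: update A_i at fixed P_hat_i, then re-do the neighbour
    # loops so the field energy is measured self-consistently.
    P_hat = smoothed_pressures(x, A, h)
    u_target = u_from_A(A, P_hat).sum() + delta_u
    for _ in range(n_iter):
        missing = u_target - u_from_A(A, P_hat).sum()
        A[i] = A_from_u(u_from_A(A[i], P_hat[i]) + missing, P_hat[i])
        P_hat = smoothed_pressures(x, A, h)  # step (c): full re-calculation
    return A, P_hat
\end{verbatim}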
The results of this process, for various injection energies, are shown in Fig. \ref{fig:energy_injection_better}. After around 10 iterations, the requested injection of energy is reached. This process is valid only for working on a single particle at a time, however, and as such would be non-trivial to parallelise without the use of locks on particles that are currently being modified. Suddenly changing the energy of a neighbouring particle while this process is being performed would destroy the convergent behaviour that is demonstrated in Fig. \ref{fig:energy_injection_better}. Even without locks, this algorithm is computationally expensive, with many thousands of operations required to change a single variable. Re-calculating the smoothed pressure (step \emph{c}) for every particle multiple times per step is generally infeasible, as it would require many thousands of operations per particle per step. An ideal algorithm would not require neighbour loops, instead only updating the self-contribution for the heated particle\footnote{This algorithm was implemented in the original {EAGLE}{} code using the weighted density $\bar{\rho}$ as the smoothed quantity; however, it has been re-written here to act on the smoothed pressure for simplicity. See Appendix A1.1 of \citet{Schaye2015} for more details.} (a sketch of this procedure follows the list): \begin{enumerate} \item Calculate the total energy of the particle that will have the energy injected, $u_{i, {\rm initial}} = u(A_i, \hat{P}_i)$.\vspace{1.5mm} \item Find a target energy for the particle, $u_{i, {\rm target}} = u_{i, {\rm initial}} + \Delta u$. \vspace{1.5mm} \item While the energy of the particle $u_i = u(A_i, \hat{P}_i)$ is outside of the bounds of the target energy (tolerance here is $10^{-6}$, and is rarely reached) and the number of iterations is below the maximum (10): \begin{enumerate} \item Calculate $A_{\rm inject} = A(u_{i, t} - u_i, \hat{P})$ for the particle that will have the energy injected.\vspace{1.5mm} \item Add on $A_{\rm inject}$ to the entropy of that particle.\vspace{1.5mm} \item Update the self-contribution to the smoothed pressure for the injection particle by $\hat{P}_{i, {\rm new}} = \left[ \hat{P}_{i, {\rm old}}^{1/\gamma} + m_i (A_{\rm new}^{1 / \gamma} - A_{\rm old}^{1 / \gamma})W_0 \right]^\gamma$, with $W_0=W(0, h_i)$ the kernel self-contribution term.\vspace{1.5mm} \item Re-calculate the energy of the particle $u_i = u(A_i, \hat{P}_i)$ using the new entropy and energy of that particle (i.e. go to \emph{iii} above). \end{enumerate} \end{enumerate}
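This approximate procedure is sketched below, re-using the helper functions ({\tt W}, {\tt u\_from\_A} and {\tt A\_from\_u}) and the unit-mass assumption from the previous sketch:

\begin{verbatim}
def inject_self_contribution_only(i, A, P_hat, h, delta_u,
                                  tol=1e-6, max_iter=10):
    # Never leaves particle i: the smoothed pressure is updated via
    # its self term only, so the changed contributions to the
    # neighbours' pressures are never seen. This is the source of the
    # energy error discussed in the text.
    W0 = W(0.0, h[i])  # kernel self-contribution, W(0, h_i)
    u_target = u_from_A(A[i], P_hat[i]) + delta_u
    for _ in range(max_iter):
        u_i = u_from_A(A[i], P_hat[i])
        if abs(u_i - u_target) < tol * u_target:
            break
        A_new = A_from_u(u_target, P_hat[i])
        # update only the self term of the smoothed pressure
        P_hat[i] = (
            P_hat[i] ** (1.0 / GAMMA)
            + (A_new ** (1.0 / GAMMA) - A[i] ** (1.0 / GAMMA)) * W0
        ) ** GAMMA
        A[i] = A_new
    return A, P_hat
\end{verbatim}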
The implementation of the faster procedure is shown in Fig. \ref{fig:energy_injection_EAGLE}. This simple algorithm leads to significantly higher than expected energy injection for low (relative) energy injection events. For the case of the requested energy injection being the same as the initial particle energy, over 50\% too much energy is injected into the field. For events that inject more entropy into particle $i$, the value $A_i^{1/\gamma} W_{ij}$ for all neighbouring kernels becomes the leading component of the smoothed pressure field. This allows the pressure field to be dominated by this one particle, meaning that changes in $A_i^{1/\gamma}$ represent linear changes in the pressures of neighbouring particles, hence allowing the simple methodology to correctly predict the changes in the global internal energy field. \begin{figure} \centering \includegraphics{plots/energy_compare.pdf} \caption{Comparison between the simple energy injection procedure (Fig. \ref{fig:energy_injection_EAGLE}, solid lines) and the method including a neighbour loop each iteration (Fig. \ref{fig:energy_injection_better}, dashed lines) for various energy injection values. The vertical axis here shows the energy offset from the true requested energy (in absolute arbitrary code units). The neighbour loop approach allows for the injected energy error to decrease with each iteration, whereas the simple procedure has a fixed (injection dependent) energy error that is reached rapidly at low values of energy injection where the entropies of neighbouring particles remain dominant.} \label{fig:energy_injection_compare} \end{figure} The error in the computationally cheaper injection method is directly compared against the neighbour loop procedure from Fig. \ref{fig:energy_injection_better} in Fig. \ref{fig:energy_injection_compare}. The extra energy injected per event is clear here; the method using a full neighbour loop each iteration manages to reduce the error each iteration, with the non-neighbour-loop method showing a fixed offset after a few iterations. This also shows that the energy injection error grows as the amount injected grows, despite this becoming a lower relative fraction of the requested energy. It is unclear exactly how much these errors impact the results of a full cosmological run. For the case of supernovae following \citet{DallaVecchia2012}, which has a factor of $u_{\rm new} / u_{\rm old} \approx 10^4$, this should not represent a significant overinjection (the energy converges within 10 iterations to around a percent or so). For feedback pathways that inject a relatively smaller amount of energy (for instance SNIa; AGN events on particles that have been recently heated; events on particles in haloes with a high virial temperature; or schemes that inject using smaller steps of energy or into multiple particles simultaneously) there will be a significantly larger amount of energy injected than initially expected. This uncontrolled energy injection is clearly undesirable. \subsection{A Different Injection Procedure} \begin{figure} \centering \includegraphics{plots/converge.pdf} \caption{The blue line shows the dependence of the change in field energy $u$ as a function of the change in the entropy $A_i$ of a single particle, for a requested change in energy $\Delta u$. This change in energy $\Delta u$ corresponds to a heating event from $10^{3.5}$ K to $10^{7.5}$ K (a factor of $10^{4}$ in $u$), which corresponds to a typical energetic supernovae feedback event. The orange dashed line shows the predicted change in $A_i$ for this change $\Delta u$ from the iterative solution (using the Newton-Raphson method) of Equation \ref{eqn:energyinjectionsolveiteratively}.} \label{fig:betterthaneagle} \end{figure} Pressure-Entropy based schemes have been shown to be unable to inject the correct amount of energy using a simple algorithm based on updating only a single particle (i.e. without neighbour loops); however, it is possible to perform this task exactly within a single step by using an iterative solver to find the change in entropy $\Delta A$. To inject a set amount of energy $\Delta u$, the total energy of the field $u_{\rm tot}$ must be modified by changing the properties of particle $i$ (with neighbouring particles $j$), with \begin{equation} u_{\rm tot} = \frac{1}{\gamma - 1} \sum_{j} m_j A_j^{1/\gamma} \left( \sum_{k} m_k A_k^{1/\gamma} W_{jk} \right)^{\gamma - 1}.
\end{equation} This can be re-arranged to extract components specifically dependent on the injection particle $i$, \begin{align} u_{\rm tot} = \frac{1}{\gamma - 1} \sum_{j \neq i} & m_j A_j^{1 / \gamma} \left( {p}_{j, i} + m_i A_i^{1 / \gamma} W_{ij} \right)^{\gamma - 1} \nonumber \\ + \frac{m_i}{\gamma - 1} & A_i^{1 / \gamma} \left( {p}_{i, i} + m_i A_i^{1 / \gamma} W_{ii} \right)^{\gamma - 1}, \end{align} with \begin{equation} {p}_{a,b} = \hat{P}_a^{1/\gamma} - m_b A^{1/\gamma}_b W_{ab}. \label{eqn:pab} \end{equation} Finally, now considering a change in energy $\Delta u$ as a function of the change in entropy for particle $i$, $\Delta A$, \begin{align} \Delta u = \frac{1}{\gamma - 1} \sum_{j \neq i} & m_j A_j^{1 / \gamma} \left( {p}_{j, i} + m_i (A_i + \Delta A)^{1 / \gamma} W_{ij} \right)^{\gamma - 1} \nonumber \\ + \frac{m_i}{\gamma - 1} & (A_i + \Delta A)^{1 / \gamma} \left( {p}_{i, i} + m_i (A_i + \Delta A)^{1 / \gamma} W_{ii} \right)^{\gamma - 1} \nonumber \\ - & u_{\rm tot}, \label{eqn:energyinjectionsolveiteratively} \end{align} which can be solved iteratively using, for example, the Newton-Raphson method. This method converges very well in just a few steps to calculate the change in entropy $\Delta A$, as demonstrated in Fig. \ref{fig:betterthaneagle}. In practice, this method would require two loops over the neighbours of particle $i$ per injection event. In the first loop, the values of $p_{j, i}$ and $W_{ij}$ would be calculated and stored, with the iterative solver then used to find the appropriate value of $\Delta A$. These changes would then need to be back-propagated to the neighbouring particles, as their smoothed pressures $\hat{P}_j$ will have changed significantly, reversing the procedure in Equation \ref{eqn:pab}. Such a scheme could potentially make a Pressure-Entropy based SPH method viable for a model that uses energy injection. This procedure requires tens of thousands of operations per thermal injection event, however, and as such would require significant effort to implement efficiently. This also highlights a possible issue with Pressure-Energy based SPH schemes, as even in this case, where it is much simpler to make changes to the global energy field, changes to the internal energy of a particle must be back-propagated to neighbours to ensure that the pressure and internal energy fields remain consistent. These errors also compound, should more than one particle in a kernel be heated without the back-propagation of changes.
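A schematic {\tt python} version of this solver is given below, again with unit particle masses for brevity; the derivative is taken numerically rather than analytically, which is sufficient for a sketch (cf. the convergence shown in Fig. \ref{fig:betterthaneagle}).

\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0


def delta_u_of_delta_A(dA, i, A, W_i, p_i, u_tot):
    # The Delta u expression above with unit masses. W_i[j] = W_ij and
    # p_i[j] = p_{j,i} are cached during a first loop over the
    # neighbours of particle i; index i itself is included.
    A_eff = A ** (1.0 / GAMMA)
    A_eff[i] = (A[i] + dA) ** (1.0 / GAMMA)
    P_term = (p_i + A_eff[i] * W_i) ** (GAMMA - 1.0)
    return (A_eff * P_term).sum() / (GAMMA - 1.0) - u_tot


def solve_delta_A(delta_u, i, A, W_i, p_i, u_tot, n_iter=8):
    # Newton-Raphson with a numerical derivative.
    f = lambda dA: delta_u_of_delta_A(dA, i, A, W_i, p_i, u_tot) - delta_u
    dA, eps = 0.0, 1e-8 * A[i]
    for _ in range(n_iter):
        slope = (f(dA + eps) - f(dA - eps)) / (2.0 * eps)
        dA = dA - f(dA) / slope
    return dA
\end{verbatim}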
\section{Equations of Motion} \label{sec:eom} So far only static fields have been under consideration; before moving on to discussing the effects of sub-grid cooling on pressure-based schemes, the \emph{dynamics} part of SPH must be considered. Below only two equations of motion are described: the one corresponding to Density-Energy, and the one for Pressure-Energy SPH. For a more expanded derivation of the following from a Lagrangian and the first law of thermodynamics see \citet{Hopkins2013}, or the {\sc Swift}{} simulation code theory documentation\footnote{ \url{http://www.swiftsim.com}}. \subsection{Density-Energy} For Density-Energy the smoothed quantity of interest is the smoothed mass density (Equation \ref{eqn:sphdensity}). This leads to a corresponding equation of motion for velocity of \begin{align} \frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d}t} = - \sum_j m_j &\left[ f_i \frac{P_i}{\hat{\rho}_i^2} \nabla W(r_{ij}, h_i) + f_j \frac{P_j}{\hat{\rho}_j^2} \nabla W(r_{ji}, h_j) \right], \label{eqn:density_energy_v_eom} \end{align} with the $f_i$ here representing correction factors for interactions between particles with different smoothing lengths \begin{equation} f_i = \left(1 + \frac{h_i}{n_D \hat{\rho}_i} \frac{\partial \hat{\rho}_i}{\partial h_i}\right)^{-1}. \label{eqn:density_energy_f_fac} \end{equation} This factor also enters into the equation of motion for the internal energy \begin{equation} \frac{\mathrm{d}u_i}{\mathrm{d}t} = \sum_j m_j f_i \frac{P_i}{\hat{\rho}_i^2} \mathbf{v}_{ij} \cdot \nabla W(r_{ij}, h_i). \label{eqn:density_energy_u_eom} \end{equation} \subsection{Pressure-Energy} For Pressure-Energy SPH, the thermodynamic quantity $u$ remains the same as for Density-Energy, but the smoothed pressure field $\hat{P}$ is introduced (see Equation \ref{eqn:smoothedpressure}). This is then used in the equation of motion for the particle velocities \begin{equation} \frac{\mathrm{d} \mathbf{v}_i}{\mathrm{d} t} = -\sum_j (\gamma - 1)^2 m_j u_j u_i \left[ \frac{f_{ij}}{\hat{P}_i} \nabla W(r_{ij}, h_i) + \frac{f_{ji}}{\hat{P}_j} \nabla W(r_{ji}, h_j) \right], \label{eqn:pressure_energy_v_eom} \end{equation} with the $f_{ij}$ now depending on both particles $i$ and $j$ \begin{equation} f_{ij} = 1 - \left[\frac{h_i}{n_D (\gamma - 1) \hat{n}_i m_j u_j} \frac{\partial \hat{P}_i}{\partial h_i} \right] \left( 1 + \frac{h_i}{n_D \hat{n}_i} \frac{\partial \hat{n}_i}{\partial h_i} \right)^{-1}, \label{eqn:pressure_energy_f_fac} \end{equation} with $\hat{n}$ the local particle number density (Equation \ref{eqn:numberdensity}). Again, this factor enters into the equation of motion for the internal energy \begin{equation} \frac{\mathrm{d} u_i}{\mathrm{d} t} = (\gamma - 1)^2 \sum_j m_j u_i u_j \frac{f_{ij}}{\hat{P}_i} \mathbf{v}_{ij} \cdot \nabla W_{ij}. \label{eqn:pressure_energy_u_eom} \end{equation} \subsection{Choosing an Appropriate Time-Step} To integrate these forward in time, an appropriate time-step between the evaluation of these smoothed equations of motion must be chosen. SPH schemes typically use a modified version of the Courant–Friedrichs–Lewy \citep[CFL, ][]{Courant1928} condition to determine this step length. The CFL condition takes the form of \begin{equation} \Delta t = C_{\rm CFL} \frac{H_i}{c_{\rm s}}, \label{eqn:cfl} \end{equation} with $c_{\rm s}$ the local sound-speed, and $C_{\rm CFL}$ a constant that should be strictly less than 1.0, typically taking a value of 0.1-0.3\footnote{In practice this $c_s$ is usually replaced with a signal velocity $v_{\rm sig}$ that depends on the artificial viscosity parameters. As the implementation of an artificial viscosity is not discussed here, this detail is omitted for simplicity.}. Computing this sound-speed is a simple affair in density-based SPH, with it being a particle-carried property that is a function solely of other particle-carried properties, \begin{equation} c_{\rm s} = \sqrt{\gamma \frac{P}{\hat{\rho}}} = \sqrt{\gamma (\gamma - 1) u}. \label{eqn:speed_of_sound_density} \end{equation} For pressure-based schemes this requires a little more thought. The same sound-speed can be used, but this is not representative of the variables that actually enter the equation of motion.
To clarify this, first consider the equation of motion for Density-Energy (Equation \ref{eqn:density_energy_v_eom}) and re-write it in terms of the sound-speed, \begin{align} \frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d} t} \sim \frac{c_{{\rm s}, i}^2}{\hat{\rho}_i} \nabla_i W_{ij}, \nonumber \end{align} and for Pressure-Energy (Equation \ref{eqn:pressure_energy_v_eom}) \begin{align} \frac{\mathrm{d}\mathbf{v}_i}{\mathrm{d} t} \sim (\gamma - 1)^2 \frac{u_i u_j}{\hat{P}_i} \nabla_i W_{ij}. \nonumber \end{align} From this it is reasonable to assume that the sound-speed, i.e. the speed at which information propagates in the system through pressure waves, is given by the expression \begin{align} c_{\rm s} = (\gamma - 1) u_i \sqrt{\gamma \frac{\hat{\rho}_i}{\hat{P}_i}}. \label{eqn:pressure_energy_wrong_soundspeed} \end{align} This expression is dimensionally consistent with a sound-speed, and includes the gas density information (through $\hat{\rho}$), traditionally used for sound-speeds, as well as including the extra information from the smoothed pressure $\hat{P}$. However, such a sound-speed leads to a considerably \emph{higher} time-step in front of a shock wave (where the smoothed pressure is higher, but the smooth density is relatively constant), leading to time integration problems. Using \begin{align} c_{\rm s} = \sqrt{\gamma \frac{\hat{P}_i}{\hat{\rho}_i}} \label{eqn:pressure_energy_soundspeed} \end{align} instead of Equation \ref{eqn:pressure_energy_wrong_soundspeed} leads to a sound-speed that does not represent the equation of motion as directly, but does not lead to time-integration problems, and effectively represents a smoothed internal energy field. It is also possible to use the particle-carried internal energy directly in the expression above.
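The candidate sound-speeds discussed in this section, and the associated CFL time-step, are summarised in the sketch below (the function names are ours, and the signal-velocity subtlety mentioned above is ignored):

\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0


def soundspeed_density(u):
    # Density-based schemes: a purely particle-carried property.
    return np.sqrt(GAMMA * (GAMMA - 1.0) * u)


def soundspeed_pressure_eom(u, rho_hat, P_hat):
    # Mirrors the Pressure-Energy equation of motion, but *drops*
    # ahead of a shock (high P_hat at fixed u and rho_hat), giving
    # a considerably higher time-step there.
    return (GAMMA - 1.0) * u * np.sqrt(GAMMA * rho_hat / P_hat)


def soundspeed_pressure_smoothed(rho_hat, P_hat):
    # The better-behaved choice: effectively a smoothed internal
    # energy field.
    return np.sqrt(GAMMA * P_hat / rho_hat)


def cfl_timestep(H, c_s, C_cfl=0.1):
    # The CFL condition, with H the kernel cut-off radius.
    return C_cfl * H / c_s
\end{verbatim}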
\section{Introduction} Over the past three decades, the inclusion of hydrodynamics in (cosmological) galaxy formation simulations has become commonplace \citep{Hernquist1989, Evrard1994, Springel2002, Springel2005, Dolag2009}. One of the first hydrodynamics methods to be used in such simulations was Smoothed Particle Hydrodynamics \citep[SPH, ][]{Gingold1977, Monaghan1992}. SPH is prized for its adaptivity, conservation properties, and stability, and is still used in state-of-the-art simulations by many groups today \citep{Schaye2015, Teklu2015, McCarthy2017, Tremmel2017, Cui2019, Steinwandel2020}; see \citet{Vogelsberger2020} for a recent overview of cosmological simulations. As the SPH method has developed, two key issues have arisen. The first, a consequence of the non-diffusive nature of the SPH equations, was that the method was unable to capture shocks. This was resolved by the addition of a diffusive `artificial viscosity' term \citep{Monaghan1983}. This added diffusivity is only required in shocks, and so many schemes include particle-carried switches for the viscosity \citep{Morris1997,Cullen2010} to prevent unnecessary conversion between kinetic and thermal energy in e.g. shearing flows. The second, artificial surface tension appearing in contact discontinuities \citep[e.g.][]{Agertz2007}, has led to the development of several mitigation procedures. One possible solution is artificial conductivity (also known as energy diffusion) to smooth out the discontinuity \citep[e.g.][]{Price2008, Read2012, Rosswog2019}; this method applies an extra equation of motion to the thermodynamic variable to transfer energy between particles. The alternative solution, generally favoured in the cosmology community, is to reconstruct a smooth pressure field \citep{Ritchie2001, Saitoh2013, Hopkins2013}. This smooth pressure field allows for a gradual pressure transition between hot and cold fluids, suppressing any variation in the thermodynamic variable at scales smaller than the resolution limit. This can be beneficial in fluids where there is a high degree of mixing between phases, such as in gas flowing into galactic haloes \citep[e.g.][]{ Tumlinson2017,Stern2019}. Cosmological simulations typically include so-called `sub-grid' physics that aims to represent underlying physics that is below the (mass) resolution limit \citep[which is usually around $10^{3-7} \;{\rm M}_{\odot}{}$;][]{Vogelsberger2014, Schaye2015, Hopkins2018, Marinacci2019, Dave2019}. This is commonplace in many fields, and is essential in galaxy formation to reproduce many of the observed properties of galaxies. One key piece of sub-grid physics is star formation, which occurs on mass scales smaller than a solar mass. Cold, dense gas is required to enable stars to form; to reach these temperatures and densities, radiative cooling (which occurs on atomic scales) must be included in a sub-grid fashion. Finally, when these stars have reached the end of their life some will produce supernovae explosions, which are modelled using sub-grid `feedback' schemes \citep[such a sub-grid scheme is chosen for many reasons, including but not limited to limited resolution and the `overcooling problem'; see ][and references therein for more information]{Navarro1993, DallaVecchia2012}. Each of these processes has an impact on the hydrodynamics solver which must be carefully examined. Here we employ a simple galaxy formation model including implicit cooling and energetic feedback, based on the {EAGLE}{} galaxy formation model \citep{Schaye2015}, to understand how the inclusion of such a model may affect simulations employing Density- or Pressure-based SPH differently. We note, however, that the results obtained in the following sections are applicable to all kinds of galaxy formation models, including those that instead use instantaneous or `operator-split' cooling. The rest of this paper is organised as follows: In \S \ref{sec:sph} the SPH method is described, along with the Density- and Pressure-based schemes; in \S \ref{sec:eagle} the basics of a galaxy formation model are discussed in more detail; in \S \ref{sec:energyinjection} issues relating to injection of energy into Pressure-based schemes are explored; in \S \ref{sec:eom} the SPH equations of motion are discussed; in \S \ref{sec:timeintegration} the time-integration schemes used in cosmological simulations are presented and issues with sub-grid cooling are explored; and in \S \ref{sec:conclusions} it is concluded that while Pressure-SPH schemes can introduce significant errors it is possible in some cases to use measures (albeit computationally expensive ones) to remedy them. Because of this added expense it is suggested that a Density-based scheme is preferred, with an energy diffusion term used to mediate contact discontinuities. \section{Smoothed Particle Hydrodynamics} \label{sec:sph} \begin{figure*} \centering \includegraphics{plots/sph_description.pdf} \caption{The three leftmost panels show the consequences of choosing a correct (large, left), too large (top right), and too small (bottom right) smoothing length (for $\eta = 1.1$) in 1D on a set of particles with an expected density $\hat{n} = 1$.
This is quantified through both the density, $\hat{n}$, for the central particle $i$, and the ratio between the chosen smoothing length $h_i$ and the expected smoothing length given by $\eta / \hat{n}_i$, parametrised as $\chi_i$. $\chi_i$ is a well behaved function of the smoothing length, and finding the root of $\chi_i - 1$ is a reliable way to choose the value of $h_i$ that corresponds to a given choice of $\eta$. Note how the density is only erroneous in the case with a smoothing length that is too small (bottom panel); the larger smoothing length (top panel) produces the correct density but would be less computationally efficient and inconsistent with the chosen value of $\eta$. The rightmost panel shows a 2D case with a random particle distribution, with the background colour map showing the low (blue) to high (white and then red) density regions and the associated variation in smoothing length. Here, for selected particles, the smoothing length $h$ and kernel cut-off radius $H$ are shown with dotted and dashed lines respectively. In particular, note how the higher density regions show smaller smoothing lengths such that Equation \ref{eqn:numberdensity} is respected. } \label{fig:sph_description} \end{figure*} \input{table_of_symbols} SPH is a Lagrangian method that uses particles to discretise the fluid. To find the equation of motion for the system, and hence integrate a fluid in time, the forces acting on each particle are required. In a fluid, these forces are determined by the local pressure field acting on the particles. The ultimate goal of the SPH method, then, is to find the pressure gradient associated with a set of discretised particles; once this is obtained finding the equations of motion is a relatively simple task. The reader is referred to the first few pages of the review by \citet{Price2012} for more information on the fundamentals of the SPH method. Before continuing, it is important to separate the two types of quantities present in SPH. The first, \emph{particle carried properties} (denoted as symbols with an index corresponding to their particle, e.g. $m_i$ is the mass of particle $i$), are valid only at the positions of particles in the system and include variables such as mass. The second, \emph{field properties} (denoted as symbols with a hat, and with a corresponding index if they are evaluated at particle positions, such as $\hat{\rho}_i$, the density at the position of particle $i$), are valid at all points in the computational domain, and generally are volumetric quantities. These field properties are built out of particle-carried properties by convolving them with the smoothing kernel. The smoothing kernel is a weighting function of two parameters, inter-particle separation ($|\mathbf{r}_i - \mathbf{r}_j| = r_{ij}$) and smoothing length $h_i$, with a shape similar to a Gaussian with a full-width half maximum of $\sqrt{2\ln 2} h_i$. The smoothing lengths of particles are chosen such that, for each particle, the following equation is satisfied: \begin{equation} \hat{n}_i = \sum_{{\rm All ~ particles}~ j} W(r_{ij}, h_i) = \left( \frac{\eta}{h_i}\right)^{n_D}, \label{eqn:numberdensity} \end{equation} where $\hat{n}_i$ is the local number density, $n_D$ the number of spatial dimensions, and the kernel $W(r_{ij}, h_i)$ (henceforth written as $W_{ij}$) has the same dimensions as number density, typically being composed of a dimensionless weighting function $w_{ij} = w(r_{ij} / h_i)$ such that $W_{ij} \propto w_{ij} h_i^{-n_D}$. 
$\eta$ is a dimensionless parameter that determines how smooth the field reconstruction should be (effectively setting the spatial resolution), with larger values leading to kernels that encompass more particles; it typically takes values around $\eta \approx 1.2$\footnote{This corresponds to the popular choice of around 48 neighbours for a cubic spline kernel.}. An important distinction is the difference between the smoothing length, $h_i$, related to the full-width half-maximum (FWHM) of the Gaussian that the kernel approximates, and the kernel cut-off radius $H_i$. This cut-off radius is parametrised as $H_i = \gamma_K h_i$, with $\gamma_K$ a kernel-dependent quantity taking values around $1.5-2.5$, such that $H_i$ gives the maximum value of $r_{ij}$ at which the kernel will be non-zero\footnote{The choice of which variable to store, $h$ or $H$, is tricky; $h$ is more easily motivated \citep{Dehnen2012} and independent of the choice of kernel, but $H$ is much more practical in the code as outside this radius interactions do not need to be considered.}. We note that Table \ref{tab:symbols} shows all symbols used regularly throughout this paper and encourage readers to refer to it when necessary. An example kernel \citep[the cubic spline kernel, see][for significantly more information on kernels]{Dehnen2012} is shown in Fig. \ref{fig:sph_description}, with three choices for the smoothing length that satisfy Equation \ref{eqn:numberdensity}: one that is too large; one that is `just right' for the given choice of $\eta$; and one that is too small. The choice to satisfy this equation is not strictly equivalent to ensuring that the kernel encompasses a fixed number of neighbouring particles; note how the edges of the kernel in the left panel do not coincide with a particle, despite their uniform spacing. To evaluate the mass density of the system at the particle positions, the kernel is again used to re-evaluate the above equation, now including the particle masses, such that the density \begin{equation} \hat{\rho}_i = \sum_{j} m_j W_{ij} \label{eqn:sphdensity} \end{equation} is the sum over the kernel contributions and neighbouring masses $m_j$ that may differ between particles. Note that this summation includes the self-contribution from the particle $i$, $m_i W(0, h_i)$. Typically in SPH, the particle-carried property of either internal energy $u_i$, or entropy $A_i$ (per unit mass)\footnote{Note that this quantity is not really the `entropy', but rather the adiabat that corresponds to this choice of entropy, hence the choice of symbol $A$.} is chosen to encode the thermal properties of the particle. These are related to each other, and the particle-carried pressure, through the ideal gas equation of state \begin{equation} P_i = (\gamma - 1)u_i\hat{\rho}_i = A_i\hat{\rho}_i^{\gamma}, \label{eqn:equationofstate} \end{equation} with the ratio of specific heats $\gamma = C_P / C_V = 5/3$ for the fluids usually considered in cosmological hydrodynamics models. Alternatively, it is possible to construct a smooth pressure field that is evaluated at the particle positions such that \begin{equation} \hat{P}_i = \sum_j (\gamma - 1) m_j u_j W_{ij} = \left(\sum_j m_j A_j^{1/\gamma} W_{ij}\right)^\gamma, \label{eqn:smoothedpressure} \end{equation} directly includes the particle-carried thermal quantities of the neighbours into the definition of the pressure.
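The three smoothed sums above, evaluated for a single particle $i$ in one pass over its neighbours, together with the residual whose root fixes the smoothing length (the $\chi_i - 1$ of Fig. \ref{fig:sph_description}), can be sketched in {\tt python} as follows; the Gaussian kernel and function names are illustrative only:

\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0
N_D = 3      # number of spatial dimensions
ETA = 1.2    # smoothing scale parameter


def W(r, h):
    # Illustrative Gaussian kernel, normalised in 3D; the cubic
    # spline would work identically here.
    return np.exp(-((r / h) ** 2)) / (np.pi ** 1.5 * h ** 3)


def density_loop(i, x, m, u, h_i):
    # One pass over the neighbours of particle i, accumulating the
    # number density, mass density and smoothed pressure together;
    # the j = i self-contribution enters through r = 0.
    r = np.linalg.norm(x - x[i], axis=1)
    w = W(r, h_i)
    n_hat = w.sum()
    rho_hat = (m * w).sum()
    P_hat = (GAMMA - 1.0) * (m * u * w).sum()
    return n_hat, rho_hat, P_hat


def smoothing_length_residual(i, x, m, u, h_i):
    # A root finder on this residual fixes h_i so that
    # n_hat = (ETA / h_i)**N_D is satisfied.
    n_hat, _, _ = density_loop(i, x, m, u, h_i)
    return n_hat * (h_i / ETA) ** N_D - 1.0
\end{verbatim}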
The difference between SPH models that use the particle pressures evaluated through the equation of state and smoothed density (i.e. those that use Equations \ref{eqn:sphdensity} and \ref{eqn:equationofstate}), known as Density SPH, and those that use the smooth pressures (i.e. those that use Equation \ref{eqn:smoothedpressure}), known as Pressure SPH, is the central topic of this paper. Frequently, the SPH scheme is also referred to by its choice of thermodynamic variable, internal energy or entropy, as Density-Energy (Density-Entropy) or Pressure-Energy (Pressure-Entropy). SPH schemes are usually implemented as a fixed number of `loops over neighbours' (often just called loops). For a basic scheme like the ones presented above, two loops are usually used. The first loop, frequently called the `density' loop, goes over all neighbours $j$ of all particles $i$ to calculate their SPH density (Equation \ref{eqn:sphdensity}) or smooth pressure (Equation \ref{eqn:smoothedpressure}). The second loop, often called the `force' loop, evaluates the equation of motion for each particle $i$ through the use of the pre-calculated smoothed quantities of all neighbours $j$. Each loop is computationally expensive, and so schemes that require extra loops are generally unfavourable unless they provide a significant benefit. State-of-the-art schemes typically use three loops, inserting a `gradient' loop between the `density' and `force' loops to calculate either improved gradient estimators \citep{Rosswog2019} or coefficients for artificial viscosity and diffusion schemes \citep{Price2008, Cullen2010}. \section{Time Integration} \label{sec:timeintegration} A typical astrophysics SPH code will use leapfrog integration or a velocity-Verlet scheme to integrate particles through time \citep[see e.g. ][]{Hernquist1989,Springel2005,Borrow2018}. This approach takes the accelerations, $\mathbf{a}_i = \mathrm{d}\mathbf{v}_i / \mathrm{d} t$, and the velocities, $\mathbf{v}_i = \mathrm{d}\mathbf{r}_i / \mathrm{d} t$, and solves the system for the positions $\mathbf{r}_i(t)$ as a function of time. It is convenient to write the equations as follows (for each particle): \begin{align} \mathbf{v}\left(t + \frac{\Delta t}{2}\right) & = \mathbf{v}(t) + \frac{\Delta t}{2}\mathbf{a}(t),\\ \mathbf{r}\left(t + \Delta t\right) & = \mathbf{r}(t) + \mathbf{v}\left(t + \frac{\Delta t}{2}\right)\Delta t, \\ \mathbf{v}\left(t + \Delta t\right) & = \mathbf{v} \left(t + \frac{\Delta t}{2}\right) + \frac{\Delta t}{2}\mathbf{a}(t + \Delta t), \label{eqn:KDK} \end{align} commonly referred to (in order) as a Kick-Drift-Kick scheme. Importantly, these equations must be solved for all variables of interest. This leapfrog time-integration is prized for its second-order accuracy (in $\Delta t$), despite only including first-order operators, thanks to cancelling second-order terms, as well as for its manifest conservation of energy \citep{ Hernquist1989}. \subsection{Multiple Time-Stepping} As noted above, it is possible to find a reasonable time-step with which to evolve a given hydrodynamical system using the CFL condition (Equation \ref{eqn:cfl}). This condition applies on a particle-by-particle basis, meaning that to evolve the whole \emph{system} a method for combining these individual time-steps into a global mechanism must be devised. In less adaptive problems than those considered here (e.g. those with little dynamic range in smoothing length), it is reasonable to find the minimal time-step over all particles, and evolve the whole system with this time-step. This scenario is frequently referred to as `single-d$t$'.
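A minimal sketch of one such Kick-Drift-Kick step, together with a single-d$t$ driver that uses the global minimum of the per-particle CFL time-steps, is given below; {\tt accel}, {\tt dudt} and {\tt cfl\_dt} stand in for the SPH loops that evaluate the equations of motion and time-step criteria:

\begin{verbatim}
def kdk_step(x, v, u, accel, dudt, dt):
    # One Kick-Drift-Kick cycle; the same half/full/half pattern is
    # applied to every integrated variable.
    v_half = v + 0.5 * dt * accel(x, v, u)                     # kick
    u_half = u + 0.5 * dt * dudt(x, v, u)
    x_new = x + dt * v_half                                    # drift
    v_new = v_half + 0.5 * dt * accel(x_new, v_half, u_half)   # kick
    u_new = u_half + 0.5 * dt * dudt(x_new, v_half, u_half)
    return x_new, v_new, u_new


def single_dt_evolve(x, v, u, accel, dudt, cfl_dt, n_steps):
    # Single-dt: every particle marches with the global minimum of
    # the per-particle CFL time-steps.
    for _ in range(n_steps):
        dt = cfl_dt(x, v, u).min()
        x, v, u = kdk_step(x, v, u, accel, dudt, dt)
    return x, v, u
\end{verbatim}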
For a cosmological simulation, however, the huge dynamic range in smoothing length (and hence time-step) amongst particles means that evolving the whole system with a single time-step would render most simulations infeasible \citep{Borrow2018}. Instead, each particle is evolved according to its own time-step (referred to as a multi-d$t$ simulation) using a so-called `time-step hierarchy' as originally described in \citet{Hernquist1989}. This choice is commonplace in astrophysics codes \citep{Teyssier2002, Springel2005}. In some steps in a multi-d$t$ simulation only the particles on the very shortest time-steps are updated in a loop over their neighbours to re-calculate, for example, $\hat{\rho}$ (such particles are referred to as `active'). The rest of the particles are referred to as being `inactive'. As the inactive particles may interact with the active ones, their properties must be interpolated, or drifted, to the current time. For particle-carried quantities, such as the internal energy $u$, a simple first-order equation is used, \begin{equation} u\left(t + \Delta t\right) = u(t) + \frac{\mathrm{d} u}{\mathrm{d} t}\Delta t. \label{eqn:internal_energy_drift} \end{equation} \subsection{Drifting Smoothed Quantities} \label{sec:driftoperators} As a particle may experience many more drift steps than loops over neighbours (that are only performed for active particles), it is important to have drift operators ($\mathrm{d}\hat{x} / \mathrm{d}t$) for smoothed quantities $\hat{x}$ to interpolate their values between full time-steps. This is achieved through taking the time differential of smoothed quantities. Starting with the simplest, the smoothed number density, \begin{align} \frac{\mathrm{d}\hat{n}_i}{\mathrm{d}t} &= \sum_j \frac{\mathrm{d} W(r_{ij}, h_i)}{\mathrm{d}t}, \nonumber \\ &= \sum_j \mathbf{v}_{ij} \cdot \nabla_j W(r_{ij}, h_i). \end{align} Following this process through for the smoothed quantities of interest yields \begin{align} \frac{\mathrm{d}\hat{\rho}_i}{\mathrm{d}t} &= \sum_j m_j \mathbf{v}_{ij} \cdot \nabla_j W(r_{ij}, h_i),\\ \frac{\mathrm{d}\hat{P}_i}{\mathrm{d}t} &= (\gamma - 1)\sum_j m_j\left(W_{ij}\frac{\mathrm{d}u_j}{\mathrm{d}t} + u_j \mathbf{v}_{ij} \cdot \nabla_j W_{ij}\right), \label{eqn:drift_P} \end{align} for the smoothed density and pressure respectively, with $W_{ij} = W(r_{ij}, h_i)$. In the smoothed density case, the pressure is re-calculated at each drift step from the now drifted internal energy and density using the equation of state\footnote{Note that the first equation for the smoothed density corresponds to the SPH discretisation of the continuity equation \citep{Monaghan1992}, but the second equation makes little physical sense.}. The latter drift equation, due to its inclusion of $\mathrm{d}u_j/\mathrm{d}t$ (i.e. the rate of change of internal energy of all neighbours of particle $i$), presents several issues. This sum is difficult to compute in practice; it requires that all of the $\mathrm{d}u_j/\mathrm{d}t$ are set before a neighbour loop takes place. This would require an extra loop over neighbours after the `force' loop, which has generally been considered computationally infeasible for a scheme that purports to be so cheap.
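For concreteness, the extra loop that the full drift operator of Equation \ref{eqn:drift_P} would require is sketched below, assuming a Gaussian kernel and that the {\tt dudt} array has already been filled for every particle by the `force' loop (names and kernel choice are illustrative):

\begin{verbatim}
import numpy as np

GAMMA = 5.0 / 3.0


def dP_hat_dt_exact(i, x, v, m, u, dudt, h_i):
    # Full drift term for particle i: note the dependence on du_j/dt
    # for every neighbour j, which is what forces the extra loop.
    dx = x - x[i]
    r = np.linalg.norm(dx, axis=1)
    w = np.exp(-((r / h_i) ** 2)) / (np.pi ** 1.5 * h_i ** 3)
    grad_w = -2.0 * dx / h_i ** 2 * w[:, None]  # nabla_j W(r_ij, h_i)
    v_ij = v[i] - v
    return (GAMMA - 1.0) * (
        m * (w * dudt + u * (v_ij * grad_w).sum(axis=1))
    ).sum()
\end{verbatim}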
In practice, the following is used to drift the smoothed pressure: \begin{equation} \frac{\mathrm{d}\hat{P}_i}{\mathrm{d}t} = \frac{\mathrm{d}\hat{\rho}_i}{\mathrm{d}t} \cdot \frac{\mathrm{d}u_i}{\mathrm{d}t}~, \label{eqn:drift_P_with_rho} \end{equation} which clearly does not fully capture the expected behaviour of Equation \ref{eqn:drift_P}, as it only includes the rate of change of the internal energy for particle $i$, discarding the contribution from neighbours. Such behaviour becomes particularly problematic in cases where sub-grid cooling is used, where particles within a kernel may have both very large $\mathrm{d}u_j/\mathrm{d}t$ (where $(\mathrm{d}u_j/\mathrm{d}t)\Delta t$ is comparable to $u_j$), and $\mathrm{d}u_j/\mathrm{d}t$ that vary rapidly with time. Consider the case where an active particle cools rapidly from some temperature to the equilibrium temperature in one step (which occurs frequently in a typical cosmological simulation, where no time-step criterion on $\mathrm{d}u/\mathrm{d}t$ is included, so as to keep the number of steps required to complete the calculation reasonable whilst employing implicit cooling). If this particle has a neighbour at the equilibrium temperature that is inactive, the pressure of the neighbouring particle will remain significantly (potentially orders of magnitude) higher than what is mandated by the local internal energy field, leading to force errors of a similar level. To apply these drift operators to smoothed quantities, instead of using a linear drift as in Equation \ref{eqn:internal_energy_drift}, the analytic solution to these first order differential equations is used. A smoothed quantity $\hat{x}$ is drifted forwards in time using \begin{equation} \hat{x}(t + \Delta t) = \hat{x}(t)\cdot \exp\left( \frac{1}{\hat{x}} \frac{\mathrm{d}\hat{x}}{\mathrm{d}t} \Delta t \right). \label{eqn:smooth_drift} \end{equation} This also has the added benefit of preventing the smoothed quantities from becoming negative. For this to be accurate, however, an accurate $\mathrm{d}\hat{x}/\mathrm{d}t$ term is required. \subsection{Impact of Drift Operators in multi-d$t$} \begin{figure} \centering \includegraphics{plots/cooling_pressure_ratio.pdf} \caption{Smooth pressure as a function of time for different strategies in a uniform fluid of `cold' particles, with one initially `hot' particle with a temperature 100 times higher than that of the cold particles, which cools to the `cold' temperature in one time-step. The solid blue line shows the pressure of the central particle as a function of time (relative to its initial pressure). The dashed blue line shows the pressure of the closest neighbour of the `hot' particle in a single-d$t$ scenario, i.e. when the whole system is evolved with time-step d$t_{\rm hot}$. This shows the true answer for the pressure of the neighbour particle. The dotted red line shows the result of drifting the cold particle with Equation \ref{eqn:drift_P_with_rho}. As this particle has no cooling rate, and the fluid is stationary, the pressure does not change. The solid orange line shows the result of drifting using Equation \ref{eqn:drift_P}. This rapidly leads to the particle having a pressure of zero, a highly undesirable result.
Note that the orange line does not follow the dashed blue line in the first few steps due to the different drifting schemes used for smoothed and particle-carried quantities (Equations \ref{eqn:internal_energy_drift} and \ref{eqn:smooth_drift}).} \label{fig:pressure_ratio_drift} \end{figure} Whilst the true drift operator for $\hat{P}$ is impractical from a computational perspective, due to the requirement of another loop over neighbours, at first glance it appears that the use of this correct operator would remedy the issues with cooling. Unfortunately, in a multi-d$t$ simulation where active and inactive particles are mixed, this `correct' operator can still lead to negative pressures when applied. In Fig. \ref{fig:pressure_ratio_drift} the different ways of drifting the smooth pressure in a multi-d$t$ simulation are explored. In this highly idealised test, a cubic volume of uniform `cold' fluid is considered. A single particle at the centre is set to a `hot' temperature 100 times higher than that of the background fluid, and is given a cooling rate that ensures that it cools to the `cold' temperature within its first time-step. This scenario is similar to a hot $10^6$ K particle in the CGM cooling to join particles in the ISM at the $10^4$ K equilibrium temperature. The difference between the time-step of the hot and cold particles, implied by Equation \ref{eqn:cfl}, is a factor of 10 (when using the original definition of sound-speed, see Equation \ref{eqn:speed_of_sound_density}). Here the cold particle is drifted ten times to interact with its hot neighbour over a single time-step of its own. In practice, this scenario would evolve slightly differently, with the previously hot particle having its time-step re-set to d$t_{\rm cool}$ after it has cooled to the equilibrium temperature, but the nuances of the time-step hierarchy are ignored here for simplicity. \begin{figure} \centering \includegraphics{plots/cooling_error_ratio.pdf} \caption{The same lines as Fig. \ref{fig:pressure_ratio_drift}, except now showing the `error' as a function of time relative to the single-d$t$ case (blue dashed line) of the pressure $\hat{P}$ of the nearest neighbour to the `hot' particle. Here the fractional error is defined as $(\hat{P}(t) - \hat{P}_{\rm{single-d}t}) / \hat{P}_{\rm{single-d}t}$. The orange line, corresponding to drifting with Equation \ref{eqn:drift_P}, shows that the pressure rapidly drops to zero after around four steps. The red dotted line (Equation \ref{eqn:drift_P_with_rho}) shows the offset in pressure that is maintained even after the central `hot' particle cools.} \label{fig:pressure_error_drift} \end{figure} The three drifting scenarios proceed very differently. In Fig. \ref{fig:pressure_error_drift} the fractional errors relative to the single-d$t$ case are shown. In the case of the drift using Equation \ref{eqn:drift_P}, the pressure rapidly drops to zero. It is prevented from becoming negative only thanks to the integration strategy that is employed (Equation \ref{eqn:smooth_drift}); the magnitude of $\mathrm{d}\hat{P}/\mathrm{d}t$ is high enough to lead to negative pressures within a few drift steps, should a simple linear integration strategy like that employed for the internal energy (Equation \ref{eqn:internal_energy_drift}) be used.
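For reference, the exponential drift of Equation \ref{eqn:smooth_drift} is a one-line operation in code. The sketch below uses our own naming and is not the implementation of any specific code:
\begin{verbatim}
import numpy as np

def drift_smoothed(x_hat, dx_hat_dt, dt):
    # Analytic solution of the drift ODE at a fixed logarithmic rate
    # (1/x) dx/dt; by construction x_hat can never become negative.
    return x_hat * np.exp((dx_hat_dt / x_hat) * dt)
\end{verbatim}
As stressed in the text, the accuracy of this operation is entirely set by the accuracy of the supplied $\mathrm{d}\hat{x}/\mathrm{d}t$.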
Because only a linear time-integration method is available (with a time-step poorly chosen for the equation being evolved) for what is now a non-linear problem (there is a significant $\mathrm{d}^2 u / \mathrm{d}t^2$ from changes in the cooling rate), errors naturally manifest. The drift operator using a combination of the local cooling rate and the density time differential (Equation \ref{eqn:drift_P_with_rho}) is the safest, leading to pressures that are higher than expected; this does however come at the cost of larger relative errors in the pressure (a 500\% increase vs. a 100\% decrease; both are highly undesirable). \subsubsection{Limiting time-steps} One way to address the issues presented in Fig. \ref{fig:pressure_ratio_drift} is to limit the difference in time-steps between neighbouring particles. Such a `time-step limiter' is common-place in galaxy formation simulations, as limiters are key to capturing the energy injected during feedback events \citep[see e.g.][]{Durier2012}; a minimal sketch of such a limiter is given at the end of this section. In addition, the use of the `smoothed' sound-speed (from Equation \ref{eqn:pressure_energy_soundspeed}) ensures that the neighbouring particle has a time-step much closer to that of the `hot' particle than a sound-speed based solely on the internal energy of each individual particle would give. However, as Fig. \ref{fig:pressure_error_drift} shows, even after only one intervening time-step (i.e. after $\mathrm{d}t_{\rm hot}$), there is a 50\% to 500\% error in the pressure of the neighbouring particle. This error in the pressure of the neighbouring particle represents a poorly tracked non-conservation of energy. An incorrect relationship between the local internal energy and pressure field of the particles leads directly to force errors of the same magnitude. Because of the conservative and symmetric structure of the applied equations of motion, however, this does not lead to the total energy of the fluid changing over time (i.e. the sum of the kinetic and internal energy of the fluid remains constant), instead manifesting as unstable dynamics.
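As an illustration, the following minimal Python sketch caps each particle's time-step at a fixed multiple of the smallest time-step among its neighbours. The factor of four and the data layout are assumptions made for this example, not the parameters of any particular code:
\begin{verbatim}
def limit_timesteps(dt, neighbour_lists, max_ratio=4.0):
    """Clamp per-particle time-steps so that no particle's dt exceeds
    max_ratio times the smallest dt among its neighbours.

    dt              : list of per-particle time-steps
    neighbour_lists : neighbour_lists[i] holds the neighbour indices
                      of particle i
    """
    limited = list(dt)
    for i, neighbours in enumerate(neighbour_lists):
        if neighbours:
            dt_min = min(dt[j] for j in neighbours)
            limited[i] = min(limited[i], max_ratio * dt_min)
    return limited
\end{verbatim}
In a production code this operation is typically folded into the time-step hierarchy itself, waking inactive particles whose time-step must shrink.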
\section{Introduction} \noindent Over the last few years, active matter has emerged as a testing ground for nonequilibrium statistical physics \cite{Seifert2016,ActiveEntropyProduction2,Tailleur2018,BradyReview,DDFTMicroswimmers,MCTActive,HermingHaus2016,PFTActive}. Its relevance comes from the fact that experimental realizations exist \cite{MicroSwimmersReview,PoonEcoli,Drescher} of relatively simple active matter models, such as active Brownian particles (ABPs) and run-and-tumble (RnT) particles \cite{ABPvsRnT}. While describing these systems can be very challenging when they are far from thermodynamic equilibrium \cite{TailleurGeneralThermo,Siddharth2018}, for small activity they are well understood by effective equilibrium approaches \cite{Brader2015,SpeckEffectivePairPotential,MarconiMaggi,MarconiMaggiEffectivePotential,Binder2016}. In particular, it is well established that noninteracting particles at small activity can be described as an equilibrium system at an effective temperature \cite{CugliandoloEffTemp,Wang2011,MarconiMaggi,Fily2012,Szamel2014}. For example, inserting the effective temperature in the Einstein relation yields the enhanced diffusion coefficient of an active particle, and using the effective temperature in the Boltzmann distribution gives the distribution of weakly active particles in a gravitational field \cite{Palacci2010,ES,EoSExperiment,Wolff2013,Szamel2014,ABPvsRnT,Stark2016}.\\ \begin{figure} \centering \includegraphics[width=\linewidth]{SawtoothPotential.pdf} \caption{\raggedright (Dimensionless) ratchet potential $\beta V$, as a function of the Cartesian $x$-coordinate in units of the diffusive length scale $\ell$. The ratchet can be characterized by its height $\beta V_{\text{max}}$, the width of its left side $x_l/\ell$, and its asymmetry $a = (x_l - x_r)/x_r$.} \label{fig:RatchetPotential} \end{figure} \indent However, even weakly active systems can display behavior very different from that of equilibrium systems \cite{YaouensTip,Galajda2007, ActiveRatchets2011,RatchetEffectsReview,Reichhardt2016CollectiveRatchet,Nikola2016,UnderdampedRatchet2017, Caleb2017,Yongjoo2018}. For instance, a single array of funnel-shaped barriers, which is more easily crossed from one lateral direction than from the other, can induce a steady state with ratchet currents that span the entire system \cite{RatchetEffectsReview}. Alternatively, when the boundary conditions deny such a system-wide flux, the result is a steady state with a higher density on one side of the array than on the other \cite{Galajda2007,RatchetEffectsReview}. As the system can be arbitrarily long in the lateral direction, the presence of the funnels influences the density profile at arbitrarily large distances.\\ \indent Needless to say, characterizing such a long-range effect is a challenge, and the natural place to start is in a setting as simple as possible. As we shall show, having an external potential with a long-range influence on the density profile in steady state is only possible with the key ingredients of (1) activity, and (2) an external potential that is nonlinear. Therefore, a good candidate for a minimal model is to study the distribution of active particles over two bulks separated by a potential barrier that is only piecewise linear. Here, we focus on a sawtooth-shaped barrier, known as a ratchet potential (see Fig. \ref{fig:RatchetPotential}). As we will see, the asymmetry of the ratchet induces a flux-free steady state with different densities in both bulks.
Since the bulk sizes can be arbitrarily large, the influence of the ratchet potential is indeed of infinite range. This system has actually already been studied, both experimentally \cite{DiLeonardo2013} and theoretically \cite{ActiveRatchets2014}. However, the former study was performed at a high degree of activity, and the latter study neglected Brownian fluctuations, such that the degree of activity could not be quantified. Thereby, the regime of weak activity, where the statistical physics generally seems best understood \cite{Brader2015,SpeckEffectivePairPotential,MarconiMaggi,MarconiMaggiEffectivePotential,Binder2016,Siddharth2018}, remains largely unexplored.\\ \indent In this work, we study the effect of an external potential on arbitrarily large bulk regions with as few complications as possible. To this end, we investigate how a ratchet potential affects active particles that also undergo translational Brownian motion, such that the degree of activity can be quantified. We ask the questions: can we characterize how the external potential influences the density distribution as a function of activity? And can we understand this distribution in the limit of weak activity?\\ \noindent The article is organized as follows. In section \ref{sec:Models}, we introduce two active particle models, as well as the ratchet potential. In section \ref{sec:NumSol}, we numerically solve for the density and polarization profiles of these active particles in the ratchet potential, and we study how the difference in bulk densities depends on activity, and on the ratchet potential. In section \ref{sec:SmallfpSol}, we specialize to the limit of weak activity, and provide an analytical solution that explicitly shows that the nonzero difference in bulk densities cannot be understood by the use of an effective temperature. Instead, in section \ref{sec:TSModel}, we propose to understand the density difference in terms of a simple transition state model. We end with a discussion, in section \ref{sec:Discussion}, on what ingredients are necessary to have the external potential affect the densities in such a (highly) nonlocal way, and with concluding remarks in section \ref{sec:Conclusion}. \section{Models} \label{sec:Models} \subsection{2D ABPs} \noindent In order to investigate the behavior of active particles in a ratchet potential, we consider the widely employed model of active Brownian particles \cite{ABPs} (ABPs) in two dimensions. For simplicity, we consider spherical, noninteracting particles. Every particle is represented by its position $\mb{r}(t) = x(t) \mb{\hat{x}} + y(t) \mb{\hat{y}}$, where $\mb{\hat{x}}$ and $\mb{\hat{y}}$ are Cartesian unit vectors and $t$ is time, as well as by its orientation $\mb{\hat{e}}(t) \equiv \cos\theta(t) \mb{\hat{x}} + \sin\theta(t) \mb{\hat{y}}$. Its time evolution is governed by the overdamped Langevin equations \begin{subequations} \label{eqn:Langevin} \begin{align*} \partial_t\mb{r}(t) &= v_0 \mb{\hat{e}}(t) - \gamma^{-1}\mbs{\nabla}V(\mb{r}) + \sqrt{2D_t} \mbs{\eta}_t(t), \label{eqn:Langevina} \numberthis\\ \partial_t \theta(t) &= \sqrt{2D_r} \eta_r(t). \label{eqn:Langevinb}\numberthis \end{align*} \end{subequations} \noindent Eq.
(\ref{eqn:Langevina}) expresses that a particle's position changes in response to (i) a propulsion force, which acts in the direction of $\mb{\hat{e}}$ and gives rise to a propulsion speed $v_0$, (ii) an external force, generated by the external potential $V(\mb{r})$, and (iii) the unit-variance Wiener process $\mbs{\eta}_t(t)$, which gives rise to translational diffusion with diffusion coefficient $D_t$. Here $\gamma$ is the friction coefficient. Note that $\beta \equiv (\gamma D_t)^{-1}$ is an inverse energy scale, and that in thermodynamic equilibrium the Einstein relation implies $\beta = (k_BT)^{-1}$, where $k_B$ is the Boltzmann constant and $T$ the temperature. Eq. (\ref{eqn:Langevinb}) expresses that the orientation of a particle changes due to the unit-variance Wiener process $\eta_r(t)$, which leads to rotational diffusion with diffusion coefficient $D_r$.\\ \indent The stochastic Langevin equations (\ref{eqn:Langevin}) induce a probability density $\psi(\mb{r},\theta,t)$, whose time evolution follows the Smoluchowski equation \begin{flalign*} \label{eqn:SE} \partial_t\psi = -\mbs{\nabla} \cdot \Big( v_0 \mb{\hat{e}}\psi - \frac{1}{\gamma} (\mbs{\nabla}V) \psi - D_t \mbs{\nabla} \psi \Big) \mathrlap{+ D_r \partial_{\theta\theta} \psi.} \numberthis && \end{flalign*} \noindent Here $\mbs{\nabla} = (\partial_x,\partial_y)^T$ is the two-dimensional spatial gradient operator. Two useful functions to characterize the probability density $\psi(\mb{r},\theta,t)$ are the density $\rho(\mb{r},t) \equiv \int\mathrm{d}\theta\psi(\mb{r},\theta,t)$ and the polarization $\mb{m}(\mb{r},t) \equiv \int\mathrm{d}\theta\psi(\mb{r},\theta,t)\mb{\hat{e}}(\theta)$. \noindent Their time-evolutions follow from the Smoluchowski equation (\ref{eqn:SE}) as \begin{align*} \label{eqn:SEMoments} \partial_t \rho &= -\mbs{\nabla} \cdot \Big\{ v_0 \mb{m} -\frac{1}{\gamma}(\mbs{\nabla}V)\rho -D_t\mbs{\nabla}\rho \Big\}, \numberthis \\ \partial_t\mb{m} &= - \mbs{\nabla} \cdot \Big\{ v_0 \big(\mbs{\mathcal{S}} + \frac{\Id}{2}\rho\big) - \frac{1}{\gamma}(\mbs{\nabla}V)\mb{m} - D_t \mbs{\nabla} \mb{m} \Big\} -D_r\mb{m}, \end{align*} \noindent where $\Id$ is the $2 \times 2$ identity matrix, and where $\mbs{\mathcal{S}}(\mb{r},t) \equiv \int\mathrm{d}\theta\psi(\mb{r},\theta,t)(\mb{\hat{e}}(\theta)\mb{\hat{e}}(\theta)-\Id/2)$ is the $2 \times 2$ nematic alignment tensor. Due to the appearance of $\mbs{\mathcal{S}}$, Eqs. (\ref{eqn:SEMoments}) are not closed. Therefore, solving Eqs. (\ref{eqn:SEMoments}), rather than the full Smoluchowski Eq. (\ref{eqn:SE}), requires a closure, an example of which we discuss in section \ref{subsec:1DRnT}.\\ \indent We consider a planar geometry that is invariant in the $y$-direction, i.e. $V(\mb{r}) = V(x)$, such that $\psi(\mb{r},\theta,t)=\psi(x,\theta,t)$, $\rho(\mb{r},t) = \rho(x,t)$, $\mb{m}(\mb{r},t) = m_x(x,t)\mb{\hat{x}}$, etc. The geometry consists of two bulks, located at $x\ll0$ and $x\gg0$. These bulk systems are separated by the ratchet potential \begin{flalign*} \label{eqn:RatchetPotential} V(x) = \left\{ \begin{alignedat}{2} &\quad0, \quad&&\text{for}\quad x<-x_l,\\ &V_{\text{max}}\Big(\frac{x}{x_l}+1\Big), \quad&&\mathrlap{\text{for}\quad -x_l < x < 0,}\\ &V_{\text{max}}\Big(1-\frac{x}{x_r}\Big), \quad&&\mathrlap{\text{for}\quad 0 < x < x_r,}\\ &\quad0, \quad&&\text{for}\quad x_r < x, \end{alignedat} \right. \numberthis&& \end{flalign*} \noindent where $x_l$ and $x_r$ are both positive. This sawtooth-shaped potential is illustrated in Fig. \ref{fig:RatchetPotential}.
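For reference, a direct Python transcription of the ratchet potential (\ref{eqn:RatchetPotential}) reads as follows (function and argument names are our own choices for illustration):
\begin{verbatim}
import numpy as np

def ratchet_potential(x, v_max, x_l, x_r):
    """Sawtooth ratchet potential; x may be a scalar or an array."""
    return np.where(x < -x_l, 0.0,
           np.where(x < 0.0,  v_max * (x / x_l + 1.0),
           np.where(x < x_r,  v_max * (1.0 - x / x_r), 0.0)))
\end{verbatim}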
Note that the potential is generally asymmetric, the degree of which is characterized by the asymmetry factor $a \equiv (x_l-x_r)/x_r$. Without loss of generality, we only consider ratchets for which $x_l > x_r$, such that $a>0$. \indent The complete problem is specified by four dimensionless parameters. We use the rotational time $D_r^{-1}$, and the diffusive length scale $\ell \equiv \sqrt{D_t/D_r}$, which is proportional to the size of a particle undergoing free translational and rotational diffusion, to obtain the Peclet number \begin{flalign*} \begin{alignedat}{2} &\text{Pe} \equiv \frac{1}{\sqrt{2}}\frac{v_0}{D_r \ell}, &&\text{ as a measure for the degree of } \mathrlap{\text{activity,}}\\ &\beta V_{\text{max}}, &&\text{ the barrier height},\\ &\frac{x_l}{\ell}, &&\text{ the width of the ratchet's left side,}\\ &a, &&\text{ the asymmetry of the ratchet}. \end{alignedat} \label{eqn:DimlessPars} \numberthis && \end{flalign*} \noindent We caution the reader that the factor $1/\sqrt{2}$ is often omitted from the definition of the Peclet number; it is included here to connect to the model described below. \subsection{1D RnT} \label{subsec:1DRnT} \noindent The fact that there is only one nontrivial dimension in the problem suggests a simpler, one-dimensional model with the same physical ingredients. In this model, which we refer to as the 1D run-and-tumble (RnT) model, particles are characterized by a position $x(t)$, as well as by an orientation $e_x(t)$ that points in either the positive or the negative $x$-direction, i.e. $e_x = \pm 1$. The orientation $e_x$ can flip with probability $D_r$ per unit time. Every particle performs overdamped motion driven by (i) a propulsion force, which acts in the direction of its orientation, (ii) an external force, generated by the ratchet potential (\ref{eqn:RatchetPotential}), and (iii) Brownian motion, with associated diffusion constant $D_t$. The problem can be specified in terms of probability density functions $\psi_{\pm}(x,t)$ to find particles with orientation $e_x = \pm 1$. For our purposes, it is more convenient to consider the density $\rho(x,t) \equiv \psi_+(x,t) + \psi_-(x,t)$, and polarization $m_x(x,t) \equiv [\psi_+(x,t) - \psi_-(x,t)]/\sqrt{2}$. These fields evolve as \begin{flalign*} \label{eqn:RnTEqs} \begin{alignedat}{1} \partial_t \rho &= -\partial_x \Big\{ \sqrt{2} v_0 m_x -\frac{1}{\gamma}(\partial_x V)\rho -D_t\partial_x\rho \Big\} ,\\ \partial_tm_x &= - \partial_x \Big\{ \frac{v_0}{\sqrt{2}} \rho - \frac{1}{\gamma}(\partial_xV)m_x - D_t \partial_x m_x \Big\} \mathrlap{-D_r m_x.} \end{alignedat} \numberthis && \end{flalign*} \noindent Note the similarity of Eqs. (\ref{eqn:RnTEqs}) to Eqs. (\ref{eqn:SEMoments}) of the 2D ABP model. In fact, if we define the Peclet number for the 1D RnT model as $\text{Pe} \equiv v_0/(D_r \ell)$, then supplying the 2D ABP model with the closure $\mbs{\mathcal{S}}(\mb{r},t) = 0$ maps Eqs. (\ref{eqn:SEMoments}) to the 1D RnT model. The mapping is such that if one uses the same values for the dimensionless parameters $\text{Pe}, \beta V_{\text{max}}$, $x_l/\ell$, and $a$, then both models yield equal density profiles $\rho(x)$ and polarization profiles $m_x(x)$. As the closure $\mbs{\mathcal{S}}(\mb{r},t) = 0$ is exact in the limit of weak activity, i.e. $\text{Pe} \ll 1$, this mapping is expected to give good agreement between the two models for small values of the Peclet number $\text{Pe}$.
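To indicate how the Langevin equations (\ref{eqn:Langevin}) can be integrated in practice, the following is a minimal Euler--Maruyama sketch for noninteracting 2D ABPs. It is a sketch under our own naming conventions; a production simulation would add a careful choice of time-step and, where needed, boundary handling:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def abp_step(r, theta, dt, v0, gamma, Dt, Dr, grad_V):
    """One Euler-Maruyama step of the overdamped ABP equations.

    r      : (N, 2) array of positions
    theta  : (N,) array of orientations
    grad_V : callable returning the (N, 2) potential gradient
    """
    e = np.column_stack((np.cos(theta), np.sin(theta)))
    r = r + dt * (v0 * e - grad_V(r) / gamma) \
          + np.sqrt(2.0 * Dt * dt) * rng.standard_normal(r.shape)
    theta = theta + np.sqrt(2.0 * Dr * dt) \
          * rng.standard_normal(theta.shape)
    return r, theta
\end{verbatim}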
\section{Numerical solutions} \label{sec:NumSol} \subsection{Density and mean orientation profiles} \label{subsec:NumSolA} \begin{figure} \includegraphics[width=\linewidth]{TypicalProfiles.pdf} \caption{\raggedright (a) Density profiles $\rho(x)/\rho_l$ and (b) mean orientation profiles $m_x(x)/\rho(x)$ of 2D ABPs and 1D RnT particles, as indicated, in a ratchet potential $V(x)$ of height $\beta V_{\text{max}} = 4$, width $x_l/\ell = 1$, and asymmetry $a=3$. The dashed, vertical lines indicate the positions of the barrier peak ($x=0$) and the ratchet sides ($x=-x_l$ and $x=x_r$). The bulk density to the left of the ratchet is $\rho(x \ll -x_l) = \rho_l$. Passive particles ($\text{Pe} = 0$) are distributed isotropically ($m_x = 0$), with a density profile given by the Boltzmann weight $\rho(x) = \rho_l \exp(-\beta V(x))$. Consequently, the densities $\rho_l$ and $\rho_r$ in the bulks on either side of the ratchet are equal. Active particles ($\text{Pe} = 1$) display much richer behaviour, with an accumulation of particles at either side of the ratchet, with a mean orientation towards the barrier peak, with a depletion of particles near the top of the ratchet, and with the right bulk density $\rho_r$ exceeding the left bulk density $\rho_l$.} \label{fig:TypicalProfiles} \end{figure} \noindent We study steady state solutions of both 2D ABPs and 1D RnT particles in the ratchet potential (\ref{eqn:RatchetPotential}). To find the solutions, for the 2D ABP model we numerically solve Eq. (\ref{eqn:SE}) with $\partial_t\psi = 0$, whereas for the 1D model we numerically solve Eqs. (\ref{eqn:RnTEqs}) with $\partial_t \rho = \partial_t m_x = 0$. We impose the following three boundary conditions. \begin{enumerate} \item{To the left of the ratchet, we imagine an infinitely large reservoir that fixes the density to be $\rho_l$ at $x_{\text{res}} \ll -x_l$, i.e. we impose $\psi(x_{\text{res}},\theta) = (2\pi)^{-1} \rho_l$ for the 2D case, and $\rho(x_{\text{res}}) = \rho_l$, $m_x(x_{\text{res}}) = 0$ for the 1D case.} \item{To the right of the ratchet, we assume an isotropic bulk that is thermodynamically large, yet finite, such that its density follows from the solution of the equations. In technical terms, at $x_{\text{max}} \gg x_r$ we impose $\partial_x \psi(x_{\text{max}},\theta) = 0$ for the 2D case, and $\partial_x \rho(x_{\text{max}})=0$, $m_x(x_{\text{max}}) = 0$ for the 1D case.} \item{Additionally, for the 2D case we assume periodic boundary conditions, i.e. $\psi(x,0)=\psi(x,2\pi)$ and $\partial_{\theta}\psi(x,0)=\partial_{\theta}\psi(x,2\pi)$ for all $x$.} \end{enumerate} \noindent In order to allow the profiles to decay to their bulk values specified by boundary conditions 1 and 2, in our numerical calculations we always ensure that the distance between $x_\text{res}$ (or $x_\text{max}$) and the ratchet potential is at least a multiple of the most significant length scale.\\ \indent Typical solutions are shown in Fig. \ref{fig:TypicalProfiles}. The considered ratchet potential, with height $\beta V_{\text{max}} = 4$, width $x_l / \ell = 1$, and asymmetry $a=3$, is shown as the dashed line in Fig. \ref{fig:TypicalProfiles}(a). We consider both a passive system ($\text{Pe} = 0$) and an active system ($\text{Pe} = 1$), using $x_\text{res}=-11\ell$ and $x_\text{max}=10.25\ell$ in this case. The resulting density profiles and mean orientation profiles are shown in Fig. \ref{fig:TypicalProfiles}(a) and Fig. \ref{fig:TypicalProfiles}(b), respectively.
For the passive system, the solution is isotropic (i.e. $\psi(x,\theta) \propto \rho(x)$ and $m_x(x) = 0$ everywhere), and given by the Boltzmann weight $\rho(x) = \rho_l\exp(-\beta V(x))$. One checks that these solutions indeed solve Eqs. (\ref{eqn:SE}) and (\ref{eqn:RnTEqs}) when the propulsion speed $v_0$ equals $0$. Thus, in accordance with this Boltzmann distribution, the density in the passive system is lower in the ratchet region than in the left bulk, and its value $\rho_r \equiv \rho(x_{\text{max}})$ in the right bulk satisfies $\rho_r = \rho_l$, with $\rho_l$ the density in the left bulk. This is a necessity in thermodynamic equilibrium, even for interacting systems: the equality of the external potential in the two bulks implies that the bulk densities are equal.\\ \indent For the active case ($\text{Pe} = 1$), the behavior is much richer. Firstly, the solution is anisotropic in the ratchet region, even though the external potential is isotropic (it does not depend on the particle orientation). Indeed, Fig. \ref{fig:TypicalProfiles}(b) shows a mean orientation of particles directed towards the barrier on either side of the ratchet. This is consistent with the finding that active particles tend to align against a constant external force \cite{ES,Sedimentation2018}, but is also reminiscent of active particles near a repulsive wall. Indeed, at walls particles tend to accumulate with a mean orientation towards the wall \cite{GompperWallAccumulation,Ran2015}, and a similar accumulation is displayed by the density profiles of Fig. \ref{fig:TypicalProfiles}(a) at the ratchet sides $x = -x_l$ and $x = x_r$. The overall result is an accumulation of particles at the ratchet sides, a depletion of particles near the center of the ratchet, and, remarkably, a density $\rho_r$ in the right bulk that is higher than the density $\rho_l$ in the left bulk.\\ \indent The fact that the difference in bulk densities $\Delta \rho \equiv \rho_r - \rho_l$ is positive is caused by the asymmetry of the ratchet: due to their propulsion force, particles can cross the potential barrier more easily from the shallower, left side than from the steeper, right side. This argument is easily understood in the absence of translational Brownian motion ($D_t = 0$), i.e. when the only force that makes particles move (apart from the external force) is the propulsion force. Indeed, in this case, one can even think of ratchet potentials whose asymmetry is such that particles \emph{can} climb them from the shallow side, but \emph{not} from the steep side \cite{ActiveRatchets2014}. For such a ratchet potential, \emph{all} particles eventually end up on the right side of the ratchet, such that clearly the right bulk density $\rho_r$ exceeds the left bulk density $\rho_l$. The effect of having nonzero translational Brownian motion ($D_t > 0$) is that particles always have \emph{some} probability to climb also the steep side of the ratchet. This leads to a density difference $\Delta \rho$ that is smaller than in the $D_t = 0$ case. Yet, as long as the ratchet is asymmetric, the density difference \emph{always} turns out positive for \emph{any} positive activity $\text{Pe}$.\\ \indent We stress that the inequality $\rho_r > \rho_l$ is actually quite remarkable. The reason is that, whereas the ratchet potential is localized around $x=0$, the right bulk can be arbitrarily large. Since our results clearly show that the right bulk density $\rho_r$ is influenced by the ratchet, this means that the range of influence of the external potential is in some sense infinitely large.
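The 1D steady states can be obtained, for instance, by relaxing Eqs. (\ref{eqn:RnTEqs}) in time until the profiles stop changing. The sketch below is our own minimal discretisation, with illustrative parameters and resolution, of the boundary-value problem of section \ref{subsec:NumSolA} with $\rho_l = 1$; a production solver would use a proper flux discretisation and a convergence criterion rather than a fixed number of sweeps:
\begin{verbatim}
import numpy as np

ell, Dr = 1.0, 1.0
Dt = Dr * ell**2                   # since ell = sqrt(Dt / Dr)
Pe = 1.0
v0 = Pe * Dr * ell                 # 1D RnT convention Pe = v0/(Dr*ell)
bVmax, x_l, a = 4.0, 1.0, 3.0
x_r = x_l / (1.0 + a)

x = np.linspace(-11.0 * ell, 10.25 * ell, 1001)
dx = x[1] - x[0]
bV = np.where(x < -x_l, 0.0, np.where(x < 0.0, bVmax * (x / x_l + 1.0),
     np.where(x < x_r, bVmax * (1.0 - x / x_r), 0.0)))
force = -Dt * np.gradient(bV, dx)  # -(1/gamma) dV/dx, as 1/gamma = Dt*beta

rho, m = np.exp(-bV), np.zeros_like(x)   # start from the passive profile
dt = 0.2 * dx**2 / Dt                    # explicit stability limit
for _ in range(200_000):
    J  = np.sqrt(2.0) * v0 * m + force * rho - Dt * np.gradient(rho, dx)
    Jm = v0 / np.sqrt(2.0) * rho + force * m - Dt * np.gradient(m, dx)
    rho -= dt * np.gradient(J, dx)
    m   -= dt * (np.gradient(Jm, dx) + Dr * m)
    rho[0], m[0], m[-1] = 1.0, 0.0, 0.0  # reservoir fixes rho_l = 1
    rho[-1] = rho[-2]                    # zero-gradient right bulk

print("Delta rho / rho_l =", rho[-1] - rho[0])
\end{verbatim}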
\\ \begin{figure*}[t] \includegraphics[width=\linewidth]{DensityDifferences.pdf} \caption{\raggedright Normalized density difference $\Delta \rho/\rho_l$ as a function of (a) activity $\text{Pe}$, (b) barrier height $\beta V_{\text{max}}$, (c) barrier width $x_l/\ell$, and (d) barrier asymmetry $a$. Results are shown for both the 2D ABP and 1D RnT models, as indicated. In the limiting cases of small and large values of its arguments, the density difference shows power law behavior. The corresponding exponents are listed in Table \ref{tab:powerlaws}. Additionally, (a) shows the density difference obtained analytically in the limit of weak activity (see section \ref{sec:SmallfpSol}), for the same ratchet parameters as used for the numerical solutions. The analytical and numerical solutions show good agreement up to $\text{Pe} \approx 0.5$.} \label{fig:DensityDifference} \end{figure*} \begin{table} \begin{ruledtabular} \begin{tabular}{@{}c c c c c@{}} & & \multicolumn{3}{c}{exponent} \\\cline{3-5} base & limit & \makecell{numerical \\ solution} & \makecell{Pe $\ll 1$ \\ solution} & \makecell{transition state \\ model} \\\colrule Pe & Pe $\ll 1$ & 2 & 2 & 2 \\ & Pe $\gg 1$ & -4 & & \\ $\beta V_{\text{max}}$ & $\beta V_{\text{max}} \ll 1$ & 3 & 3 & \\ & $\beta V_{\text{max}} \gg 1$ & 0 & 0 & 0 \\ $x_l/\ell$ & $x_l/\ell \ll 1$ & 2 & 2 & 2 \\ & $x_l/\ell \gg 1$ & * & -3 & \\ $a$ & $a \ll 1$ & 1 & 1 & 1 \\ & $a \gg 1$ & 0 & 0 & 0 \end{tabular} \end{ruledtabular} * depends on Pe. For $\text{Pe} \ll 1$, this exponent equals $-3$. \caption{\raggedright Power laws $\Delta \rho \propto \text{base}^{\text{exponent}}$, for limiting values of the base. Here the base denotes either the activity Pe, the barrier height $\beta V_{\text{max}}$, the barrier width $x_l/\ell$, or the barrier asymmetry $a$. Exponents were obtained numerically for the 1D RnT and 2D ABP models (yielding consistent exponents), analytically for the case of small activity $\text{Pe} \ll 1$, and for a simple transition state model. Exponents are shown only in limits where the corresponding solution is applicable.} \label{tab:powerlaws} \end{table} \subsection{Scaling of the bulk density difference $\Delta \rho$} \label{subsec:NumSolB} \noindent Next, we examine, one by one, how the density difference $\Delta\rho$ depends on the activity $\text{Pe}$, the barrier height $\beta V_{\text{max}}$, the barrier width $x_l/\ell$, and the barrier asymmetry $a$. The results are shown in Figs. \ref{fig:DensityDifference}(a)-(d), for both the 2D ABP and the 1D RnT models. In all cases, both models give density differences that are quantitatively somewhat different, but qualitatively similar, as they are both consistent with identical power laws\footnote{For numerical reasons, fewer results were obtained for the 2D ABP model than for the 1D RnT model. Therefore, not all of the power laws obtained for the 1D model could be tested for the 2D model. Yet, all 2D results seem consistent with all of the power laws.}.\\ \indent Fig. \ref{fig:DensityDifference}(a) shows the density difference as a function of activity $\text{Pe}$, for two different ratchet potentials. For small $\text{Pe}$, the figure shows that the density difference increases as $\text{Pe}^2$. For large $\text{Pe}$, the density difference decreases again, and decays to $0$ in the limit $\text{Pe} \rightarrow \infty$.
The reason for this decrease is that particles with high activity can easily climb either side of the ratchet potential, such that they hardly notice the presence of the barrier at all. As shown by Fig. \ref{fig:DensityDifference}(a), this decay follows the power law $\Delta \rho \propto \text{Pe}^{-4}$. Whereas the prefactors of these power laws are different for the two different ratchet potentials considered, the exponents were found to be independent of the ratchet parameters, as was tested for many more values of $\beta V_{\text{max}}$, $x_l/\ell$, and $a$.\\ \indent Fig. \ref{fig:DensityDifference}(b) shows the density difference as a function of the barrier height $\beta V_{\text{max}}$. The barrier width, $x_l/\ell = 1$, and asymmetry, $a=3$, are kept fixed, and two levels of activity, $\text{Pe} = 1$ and $\text{Pe}=4$, are considered. For all cases, we find the power law $\Delta \rho \propto (\beta V_{\text{max}})^3$, up to values of the barrier height $\beta V_{\text{max}} \approx 3$. Exploring the behavior for large values of the barrier height $\beta V_{\text{max}}$ was not numerically feasible, but the fact that the curves for activity $\text{Pe} = 1$ level off for barrier heights $\beta V_{\text{max}} \geq 5$ seems consistent with the asymptotic behavior for $\beta V_{\text{max}} \gg 1$ that we shall obtain, in section \ref{sec:SmallfpSol}, in the limit of weak activity.\\ \indent Fig. \ref{fig:DensityDifference}(c) shows the density difference as a function of the width $x_l/\ell$ of the left side of the ratchet. Here the barrier height and asymmetry are fixed, at $\beta V_{\text{max}} = 2$ and $a=1$, respectively, whereas the degree of activity is varied as $\text{Pe} = 0.1, 0.3,$ and $1$. For small barrier widths, i.e. for $x_l/\ell \ll 1$, the curves show the power law $\Delta \rho \propto (x_l/\ell)^{2}$, independent of the activity $\text{Pe}$. For very wide barriers, i.e. for $x_l/\ell \gg 1$, the curves show power law behavior with an exponent that does depend on the activity $\text{Pe}$. For the smallest degree of activity, $\text{Pe} = 0.1$, this exponent is found to equal $-3$. This scaling, $\Delta \rho \propto (x_l/\ell)^{-3}$ for large widths $x_l/\ell \gg 1$, will also be obtained analytically in section \ref{sec:SmallfpSol} for the case of weak activity.\\ \indent Finally, Fig. \ref{fig:DensityDifference}(d) shows the density difference as a function of the barrier asymmetry $a$. The barrier height and width are fixed, at $\beta V_{\text{max}} = 1$ and $x_l/\ell = 1$, respectively, and the degree of activity is varied as $\text{Pe} = 1$ and $\text{Pe} = 4$. For nearly symmetric ratchets, i.e. for $a \ll 1$, all curves show $\Delta \rho \propto a$, whereas for large asymmetries $a \gg 1$ the curves suggest asymptotic behavior, i.e. $\Delta \rho \propto a^0$. This asymptotic behavior can be understood on physical grounds, as the limit $a\rightarrow \infty$ corresponds to a ratchet whose right slope is vertical, a situation that we indeed expect to lead to a finite density difference.\\ \indent All discussed scalings are summarized in Table \ref{tab:powerlaws}. Of these, the scaling $\Delta \rho \propto \text{Pe}^{2}$ for small activity $\text{Pe} \ll 1$ can be regarded as trivial. The reason is that, in an expansion of the density difference $\Delta \rho$ around $\text{Pe} = 0$, the quadratic term is the first term to be expected on general grounds: (i) Eqs.
(\ref{eqn:SE}) and (\ref{eqn:RnTEqs}) are invariant under a simultaneous inversion of the self-propulsion speed ($v_0 \rightarrow -v_0$) and the orientation ($\mb{\hat{e}} \rightarrow -\mb{\hat{e}}$, and hence $m_x \rightarrow -m_x$), such that the expansion of the density difference $\Delta \rho$ contains only even powers of $\text{Pe}$, and (ii) for the passive case ($\text{Pe} =0$), the density difference $\Delta \rho$ equals $0$, such that the zeroth order term is absent. Similarly, the obtained scaling $\Delta \rho \propto a$ is as expected: since a symmetric ratchet ($a=0$) leads to the density difference $\Delta \rho = 0$, the leading order term one expects in an expansion of the density difference $\Delta \rho$ around $a=0$ is linear in the asymmetry $a$. However, all other scalings listed in Table \ref{tab:powerlaws} cannot be predicted by such general arguments, and are therefore nontrivial findings.\\ \indent We emphasize that these results have been obtained and verified by multiple independent approaches. While the presented results have been obtained by numerically solving the differential equations (\ref{eqn:SE}) and (\ref{eqn:RnTEqs}) as explained above, both the 2D ABP model and the 1D RnT model were also solved by separate approaches. For the 2D ABP model, results were additionally obtained by numerically integrating the Langevin equations (\ref{eqn:Langevin}) in particle-based computer simulations. For the 1D RnT model, results were also obtained by solving a lattice model, where particles can hop to neighbouring lattice sites, and change their orientation, with probabilities that reflect the same physical processes of self-propulsion, external forcing, translational Brownian motion, and tumbling \cite{MdJ}. For both the 2D ABP and the 1D RnT model, the two alternative approaches showed full agreement with the presented results. \section{Weak activity limit} \label{sec:SmallfpSol} \noindent Having characterized how the ratchet potential influences the densities of the adjoining bulks, we now turn to the question of whether we can better understand this effect. We first try to answer this question for the simplest case possible, and therefore focus on the limit of weak activity, i.e. $\text{Pe} \ll 1$. Recall that in this limit the 2D ABP model and the 1D RnT model are equivalent. In this section, we present an analytical solution for the $\text{Pe} \ll 1$ limit. In the next section, we propose to rationalize its results by a simple transition state model, which is valid for, but not limited to, weak activity.\\ \begin{figure} \includegraphics[width=\linewidth]{SmallPeSolutions.pdf} \caption{\raggedright (a) Normalized polarization profiles $m_x(x)/\rho_l$ and (b) deviations of the density $\rho(x)$ from the passive solution $\rho_0(x)$, for a ratchet potential of height $\beta V_{\text{max}} = 4$, width $x_l/\ell = 1$, and asymmetry $a=3$. The dashed, vertical lines indicate the positions of the barrier peak ($x=0$) and the ratchet sides ($x=-x_l$ and $x=x_r$). Results are shown for the analytical Pe $\ll 1$ solution, and for the numerical solutions to the 1D RnT model, for activity levels Pe $=0.1, 0.5$ and $1$. The polarizations and density deviations are divided by Pe and Pe$^2$, respectively, such that the curves for the analytical solution are independent of Pe.} \label{fig:SmallfpPlots} \end{figure} \noindent In the case of a small propulsion force, i.e.
of $\text{Pe} \ll 1$, the density can be expanded as $\rho(x) = \rho_0(x) + \text{Pe}^2 \rho_2(x) + \Ov(\text{Pe}^4)$, and the polarization as $m_x(x) = \text{Pe} \, m_1(x) + \Ov(\text{Pe}^3)$. Here $\rho_0(x)$, $\rho_2(x)$ and $m_1(x)$ are assumed to be independent of $\text{Pe}$. We used the fact that the density $\rho(x)$ is an even function of $\text{Pe}$, and the polarization $m_x(x)$ an odd function of $\text{Pe}$, as explained in section \ref{subsec:NumSolB}. With these expansions, Eqs. (\ref{eqn:RnTEqs}) can be solved perturbatively in $\text{Pe}$, separately for each region where the ratchet potential (\ref{eqn:RatchetPotential}) is defined. As shown in the appendix, the solutions within one region are \begin{align*} \label{eqn:SmallfpSol} \rho_0(x) = &A_0 e^{-\beta V(x)},\\ m_1(x) = &-\frac{A_0}{\sqrt{2}} f e^{-\beta V(x)} + B_+ e^{c_+x/\ell} + B_- e^{c_-x/\ell},\\ \rho_2(x) = &\left[A_2-A_0 f \frac{x}{\ell} \right] e^{-\beta V(x)}\\ &+ \frac{\sqrt{2}B_+}{c_+ - f} e^{c_+x/\ell} + \frac{\sqrt{2}B_-}{c_- - f} e^{c_-x/\ell}. \numberthis \end{align*} \noindent Here we defined the non-dimensionalized external force $f(x) \equiv -\beta \ell \partial_x V(x)$, such that $f=0$ for $x<-x_l$, $f = -\beta V_{\text{max}}\ell/x_l$ for $-x_l < x <0$, $f = \beta V_{\text{max}}\ell/x_r$ for $0<x<x_r$, and $f=0$ for $x>x_r$, in accordance with Eq. (\ref{eqn:RatchetPotential}). Furthermore, we defined $c_{\pm} \equiv (f\pm\sqrt{f^2+4})/2$. The integration constants $A_0, A_2, B_+,$ and $B_-$ are found separately for each region, by applying the boundary conditions $\rho(-\infty)=\rho_l$, $m_x(\infty)=m_x(-\infty)=0$, and the appropriate continuity conditions at the region boundaries $x=-x_l$, $x=0$ and $x=x_r$. Applying these conditions to the solutions $\rho_0(x)$ in Eq. (\ref{eqn:SmallfpSol}) shows that the leading order solution is given by the Boltzmann weight, i.e. $\rho_0(x) = \rho_l \exp(-\beta V(x))$ for all $x$. Clearly, this is the correct passive solution. The higher order solutions that follow, i.e. the polarization profile $m_1(x)$ and the density correction $\rho_2(x)$, are plotted in Fig. \ref{fig:SmallfpPlots}. Qualitatively, these plots show the same features as displayed by the numerical solutions in Fig. \ref{fig:TypicalProfiles}: an accumulation of particles facing the barrier at the ratchet sides $x = -x_l$ and $x=x_r$, and a right bulk density $\rho_r$ that exceeds the left bulk density $\rho_l$. To allow for a quantitative comparison, Fig. \ref{fig:SmallfpPlots} also shows polarization profiles $m_x(x)$ and density corrections $\rho(x) - \rho_0(x)$ that were obtained numerically for the 1D RnT model. While the ratchet potential is fixed, with barrier height $\beta V_{\text{max}} = 4$, width $x_l/\ell = 1$, and asymmetry $a=3$, the comparison is made for several degrees of activity, namely $\text{Pe} = 0.1, 0.5, \text{ and } 1$. The analytical and numerical results show good agreement for $\text{Pe} = 0.1$, reasonable agreement for $\text{Pe}=0.5$, and deviate significantly for $\text{Pe} = 1$. All of these observations are as expected, since the analytical solutions (\ref{eqn:SmallfpSol}) are obtained under the assumption $\text{Pe} \ll 1$. \\ \indent The most interesting part of solution (\ref{eqn:SmallfpSol}) is the density correction $\rho_2(x)$, as this correction contains the leading order contribution to the difference in bulk densities $\Delta \rho$.
To gain some understanding of the meaning of the various terms contributing to $\rho_2(x)$, we point out that for small activity, i.e. for $\text{Pe} \ll 1$, active particles are often understood as passive particles at an effective temperature \cite{CugliandoloEffTemp,Wang2011,MarconiMaggi,Fily2012,Palacci2010,Szamel2014}. In our convention, this effective temperature reads $T_{\text{eff}} = T(1+\text{Pe}^2)$. Therefore, one might think that for our weakly active system the density profile is given by the Boltzmann weight at this effective temperature, i.e. by $\rho(x) = A \exp(-V(x)/k_BT_{\text{eff}})$ within one region. Here the prefactor $A$ can depend on the activity $\text{Pe}$. Expanding this effective Boltzmann weight for small $\text{Pe}$, using $\exp(-V/k_BT_{\text{eff}}) = e^{-\beta V}\left[1 + \text{Pe}^2\, \beta V + \Ov(\text{Pe}^4)\right]$, yields the passive solution $\rho_0(x)$, and the terms on the first line of $\rho_2(x)$ in Eq. (\ref{eqn:SmallfpSol}). However, it does \emph{not} reproduce the final two terms that contribute to $\rho_2(x)$ in Eq. (\ref{eqn:SmallfpSol}). Precisely these last two terms are crucial to obtain a nonzero difference $\Delta \rho$ in bulk densities. Indeed, a density profile given solely by the effective Boltzmann weight necessarily yields equal bulk densities $\rho_l = \rho_r$, as the external potential $V(x)$ is equal on either side of the ratchet.\\ \begin{figure}[t] \includegraphics[width=\linewidth]{AnalyticalDensityDifferences.pdf} \caption{\raggedright Normalized leading order coefficient $(\Delta \rho)_2$ in the expansion of the density difference $\Delta \rho$ for small activity $\text{Pe}$, as found from the analytical $\text{Pe} \ll 1$ solution and as predicted by the transition state model, (a) as a function of the barrier height $\beta V_{\text{max}}$, at fixed barrier width $x_l/\ell = 1$ and asymmetry $a=1$, (b) as a function of the barrier width $x_l/\ell$, at fixed barrier height $\beta V_{\text{max}}=1$ and asymmetry $a=1$, and (c) as a function of the asymmetry $a$, at fixed barrier height $\beta V_{\text{max}} = 2$ and barrier width $x_l/\ell = 1$. The power laws shown by the transition state model in its regime of applicability, i.e. for $\beta V_{\text{max}} \gg 1$ and $x_l/\ell \ll 1$, have exponents that agree with the power laws of the analytical solution. These exponents can be found in Table \ref{tab:powerlaws}. The analytical and transition state solutions do not agree quantitatively for these parameter values.} \label{fig:AnaDDs} \end{figure} \indent The analytical expression for the difference in bulk densities $\Delta \rho$, implied by the solutions (\ref{eqn:SmallfpSol}), is rather lengthy and opaque, and is therefore not shown here. Instead, we show the dependence of $\Delta \rho$ on the activity Pe graphically, in Fig. \ref{fig:DensityDifference}(a), for the same two ratchet potentials as used for the numerical solutions. As the density difference $\Delta \rho$ follows from the correction $\rho_2(x)$, it scales as $\text{Pe}^2$, just like the numerical solutions for Pe $\ll 1$. As shown by Fig. \ref{fig:DensityDifference}(a), the analytical and numerical solutions agree quantitatively up to $\text{Pe} \approx 0.5$, as also found in Fig. \ref{fig:SmallfpPlots}. Before we illustrate how the density difference $\Delta \rho$ depends on the ratchet potential, we extract its dependence on activity $\text{Pe}$ by considering $(\Delta \rho)_2 = \Delta \rho / \text{Pe}^2$, i.e. the leading order coefficient in an expansion of $\Delta \rho$ around $\text{Pe} = 0$.
The coefficient $(\Delta\rho)_2$ is independent of $\text{Pe}$, but still depends on the barrier height $\beta V_{\text{max}}$, the barrier width $x_l/\ell$, and the asymmetry $a$. Its dependence on these ratchet parameters is plotted in Figs. \ref{fig:AnaDDs}(a)-(c), respectively. These figures display all the power law behavior that was obtained numerically in section \ref{sec:NumSol}. The power laws are summarized in Table \ref{tab:powerlaws}. \section{Transition State Model} \label{sec:TSModel} \noindent As argued in the previous section, the nonzero difference in bulk densities $\Delta \rho$ cannot be accounted for by the effective temperature that is often employed in the weak activity limit. Instead, to understand the behavior of the bulk density difference $\Delta \rho$ better, we propose the following simple transition state model. The model consists of four states, designed to mimic the 1D RnT model in a minimal way. Particles in the bulk to the left of the ratchet, with an orientation in the positive (negative) $x$-direction, are said to be in state $l_+(l_-)$, whereas particles in the bulk to the right of the ratchet, with positive (negative) $x$-orientation, are in state $r_+(r_-)$. This setting is illustrated in Fig. \ref{fig:TransitionStateModel}. Particles can change their orientation, i.e. transition from $l_{\pm}$ to $l_{\mp}$, and from $r_{\pm}$ to $r_{\mp}$, with a rate $D_r$. Furthermore, particles can cross the potential barrier and transition between the $l$- and $r$-states. The associated rate constants are assumed to be given by modified Arrhenius rates \cite{Friddle2008,Eyring1943,Bell1978}, where the effect of self-propulsion is to effectively increase or decrease the potential barrier. For example, the rate to transition from $l_+$ to $r_+$ is \begin{align*} \label{eqn:Rate1} k_{l_+ \rightarrow r_+} = \frac{\nu_l}{L_l}\exp\left[-\beta(V_{\text{max}}-\gamma v_0 x_l)\right]. \numberthis \end{align*} \noindent As the propulsion force helps the particle to cross the barrier, it effectively lowers the potential barrier $V_{\text{max}}$ by the work $\gamma v_0 x_l$ that the propulsion force performs when the particle climbs the left slope of the ratchet. This modified Arrhenius rate is expected to be valid under the assumptions (a) of a large barrier height $\beta V_{\text{max}} \gg 1$, which is a condition for the Arrhenius rates to be valid even for passive systems \cite{Kramers1940}, (b) of a ratchet potential that is typically crossed faster than a particle reorients, which can be achieved by making the barrier width $x_l/\ell$ sufficiently small, and (c) of a work $\gamma v_0 x_l$ performed by the propulsion force that is much smaller than the barrier height $V_{\text{max}}$. We point out that assumption (c) can be rewritten as $\text{Pe} \ll \beta V_{\text{max}} \ell / x_l$. This means that if assumptions (a) and (b) are satisfied, which imply that $\beta V_{\text{max}} \ell / x_l \gg 1$, then assumption (c) places little further restriction on the activity $\text{Pe}$. The remaining rate constants follow from similar reasoning as \begin{align*} \label{eqn:OtherRates} \begin{alignedat}{1} k_{l_- \rightarrow r_-} &= \frac{\nu_l}{L_l}\exp\left[-\beta(V_{\text{max}}+\gamma v_0 x_l)\right], \\ k_{r_+ \rightarrow l_+} &= \frac{\nu_r}{L_r}\exp\left[-\beta(V_{\text{max}}+\gamma v_0 x_r)\right], \\ k_{r_- \rightarrow l_-} &= \frac{\nu_r}{L_r}\exp\left[-\beta(V_{\text{max}}-\gamma v_0 x_r)\right].
\end{alignedat} \numberthis \end{align*} \noindent For large bulks on either side of the ratchet, the attempt frequencies in the rate expressions (\ref{eqn:Rate1}) and (\ref{eqn:OtherRates}) are inversely proportional to the size of the bulk that is being transitioned from. This size is denoted by $L_l$ for the left bulk, and by $L_r$ for the right bulk. Therefore, the factors $\nu_l$ and $\nu_r$ are independent of the bulk sizes $L_l$ and $L_r$, and can only depend on the shape of the ratchet potential, i.e. on its height $\beta V_{\text{max}}$, on its width $x_l/\ell$, and on its asymmetry $a$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{1Dgassetup5-crop.pdf} \caption{\raggedright Illustration of the states in the transition state model. Particles in the left bulk with positive (negative) $x$-orientation are in state $l_+$($l_-$). Similarly, particles in the right bulk are in state $r_+$ or $r_-$. Within one bulk, particles can change their orientation with rate constant $D_r$. Between the bulks, particles can transition by crossing the potential barrier with the effective Arrhenius rates of Eqs. (\ref{eqn:Rate1}) and (\ref{eqn:OtherRates}), where the effect of self-propulsion is to shift the potential barrier $V_{\text{max}}$ by the work $\gamma v_0 x_l$ ($\gamma v_0 x_r$) performed by the propulsion force when a particle climbs the left (right) slope of the ratchet.} \label{fig:TransitionStateModel} \end{figure}\\ \indent We denote the number of particles in the $l_{\pm}$ and $r_{\pm}$ states by $N_{l_{\pm}}(t)$ and $N_{r_{\pm}}(t)$, respectively. The time evolution of these particle numbers follows from the rates outlined above. For example, the number of particles $N_{l_+}(t)$ in state $l_+$ evolves according to the rate equation \begin{flalign*} \label{eqn:RateEq} \partial_t N_{l_+} \! = \! - \! \left(D_r+k_{l_+ \rightarrow r_+} \! \right) \! N_{l_+} +D_r N_{l_-} + \mathrlap{k_{r_+ \rightarrow l_+} \! N_{r_+} \! .} \numberthis && \end{flalign*} \noindent Similar equations hold for the particle numbers $N_{l_{-}}(t)$, $N_{r_{+}}(t)$ and $N_{r_{-}}(t)$. These rate equations can be solved in steady state, i.e. when $\partial_t N_{l_{\pm}} = \partial_t N_{r_{\pm}} = 0$, for the particle numbers $N_{l_{\pm}}$ and $N_{r_{\pm}}$. We consider infinitely large bulks, i.e. $L_l,L_r \rightarrow \infty$. In this case, the solutions show that $N_{l_+} = N_{l_-}$ and $N_{r_+} = N_{r_-}$, such that the $l$ and $r$ states correspond to isotropic bulks. Furthermore, the solution shows that the bulk densities $\rho_l = (N_{l_+} + N_{l_-})/L_l$ and $\rho_r = (N_{r_+} + N_{r_-})/L_r$ differ by an amount $\Delta \rho = \rho_r - \rho_l$ given by \begin{align*} \label{eqn:TwoStateSol} \frac{\Delta \rho}{\rho_l} = \frac{\nu_l}{\nu_r} \frac{\cosh\left(\text{Pe} \, x_l/\ell\right)-\cosh\left(\text{Pe} \, x_r/\ell\right)} {\cosh\left(\text{Pe} \, x_r/\ell\right)}, \numberthis \end{align*} \noindent where we recall that $x_r = (1+a)^{-1} x_l$. We point out that the ratio $\nu_l/\nu_r$ can generally depend on the ratchet parameters $\beta V_{\text{max}}$, $x_l/\ell$, and $a$. However, in the following we simply assume $\nu_l/\nu_r = 1$, which is justified for nearly symmetric ratchets. \\ \indent To enable a comparison with the analytical solution of the previous section, we now focus on the limit of weak activity, i.e. of $\text{Pe} \ll 1$. This ensures that assumption (c) is satisfied, but we emphasize that the transition state model is not limited to weak activity.
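In code, the prediction (\ref{eqn:TwoStateSol}) is a one-liner; the short Python sketch below (our naming, with the text's assumption $\nu_l/\nu_r = 1$ as the default) also illustrates the quadratic small-$\text{Pe}$ behaviour:
\begin{verbatim}
import numpy as np

def density_difference(Pe, x_l_over_ell, a, nu_ratio=1.0):
    """Delta rho / rho_l from the transition state model."""
    xl = x_l_over_ell
    xr = xl / (1.0 + a)                     # x_r = x_l / (1 + a)
    return nu_ratio * (np.cosh(Pe * xl) - np.cosh(Pe * xr)) \
                    / np.cosh(Pe * xr)

# Leading-order coefficient (Delta rho)_2 = Delta rho / Pe^2 as Pe -> 0;
# for x_l/ell = 1 and a = 1 this tends to (xl^2 - xr^2)/2 = 0.375.
print(density_difference(1e-3, 1.0, 1.0) / 1e-6)
\end{verbatim}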
We expand the density difference (\ref{eqn:TwoStateSol}) as $\Delta \rho = (\Delta \rho)_2 \ \text{Pe}^2 + \Ov(\text{Pe}^4)$, and compare the coefficient $\left(\Delta\rho\right)_2$ with the same coefficient obtained in section \ref{sec:SmallfpSol} for the analytical solution in the weak activity limit. The coefficient $(\Delta\rho)_2$ is plotted in Figs. \ref{fig:AnaDDs}(a)-(c), as a function of the barrier height $\beta V_{\text{max}}$, the barrier width $x_l/\ell$, and the barrier asymmetry $a$, respectively. Fig. \ref{fig:AnaDDs}(a) merely illustrates that the density difference (\ref{eqn:TwoStateSol}) is independent of the barrier height $\beta V_{\text{max}}$. This independence agrees with the asymptotic behavior displayed by the analytical solution for large barrier heights $\beta V_{\text{max}} \gg 1$. Note that the regime $\beta V_{\text{max}} \gg 1$ is indeed assumed for the modified Arrhenius rates (assumption (a)). Fig. \ref{fig:AnaDDs}(b) illustrates that the density difference predicted by the transition state model scales quadratically with the barrier width, i.e. that $\Delta \rho \propto (x_l/\ell)^2$. This scaling agrees with the scaling of the analytical solution in the regime of small barrier widths $x_l/\ell \ll 1$. Again, this regime is assumed for the modified Arrhenius rates, as having a small barrier width is required for having particles cross the ratchet faster than they typically reorient (assumption (b)). Finally, Fig. \ref{fig:AnaDDs}(c) illustrates that the density difference predicted by the transition state model scales linearly with the barrier asymmetry for nearly symmetric ratchets, i.e. $\Delta \rho \propto a$ for $a\ll1$, and asymptotically for very asymmetric ratchets, i.e. $\Delta \rho \propto a^0$ for $a\gg1$. Both scalings are also displayed by the analytical solution. All these power laws can again be found in Table \ref{tab:powerlaws}.\\ \indent Of course, the transition state model reproduces only the power laws that lie inside its regime of applicability. However, the fact that this simple model \emph{does} reproduce all these power laws is quite remarkable, since, as discussed in section \ref{sec:NumSol}, most of these scalings are nontrivial. Furthermore, we note that the transition state model can also be solved for finite bulk sizes, which in fact predicts a turnover of the density difference $\Delta \rho$ as a function of activity Pe, as observed in Fig. \ref{fig:DensityDifference}(a).\\ \indent Quantitatively, Fig. \ref{fig:AnaDDs} clearly shows that the predictions of the transition state model typically differ from the analytical solution by an order of magnitude. A possible reason for this disagreement is that these plots are made for parameter values that do not satisfy assumptions (a) and (b) that underlie the modified Arrhenius rates. In fact, it turned out to be impossible to satisfy these assumptions simultaneously with feasible parameter values. The root of the difficulty is that the time it takes a particle to cross the potential barrier increases with the barrier height $\beta V_{\text{max}}$. As a consequence, having a barrier that is simultaneously very high (assumption (a)), and typically crossed faster than a particle reorients (assumption (b)), turns out to require unrealistically small barrier widths $x_l/\ell$.
The quantitative mismatch of the transition state model with the full solution for small activity might also be attributed to our assumption that the prefactors $\nu_l$ and $\nu_r$ in the rate expressions (\ref{eqn:Rate1}) and (\ref{eqn:OtherRates}) are identical; in fact, their ratio might depend on the precise shape of the barrier. However, exploring this possibility goes beyond the current scope of this paper, and we leave it for future study.\\ \indent We conclude that, whereas it was not possible to quantitatively test the predictions of the transition state model in its regime of applicability, the model \emph{does} reproduce the complete power law behavior of this regime correctly. \section{Discussion} \label{sec:Discussion} \noindent The most interesting aspect of the studied system is that the external potential has a long-range influence on the density profile. This is in sharp contrast to an ideal gas in equilibrium, whose density profile is only a function of the \emph{local} external potential. So what ingredients are necessary to obtain this effect? To answer this question, we consider the 1D RnT model subject to a general external potential $V(x)$. Furthermore, we introduce the particle current $J(x)$ and the orientation current $J_m(x)$ that appear in the evolution equations (\ref{eqn:RnTEqs}), i.e. \begin{align*} \label{eqn:1DCurrents} \begin{alignedat}{1} J(x) &= \sqrt{2} v_0 m_x -\frac{1}{\gamma}(\partial_xV)\rho -D_t\partial_x\rho,\\ J_m(x) &= \frac{v_0}{\sqrt{2}} \rho - \frac{1}{\gamma}(\partial_xV)m_x - D_t \partial_x m_x. \end{alignedat} \numberthis \end{align*} \noindent We focus on a state that is steady, such that $J(x) = \text{constant} \equiv J$, and flux-free, such that $J = 0$. Then Eqs. (\ref{eqn:RnTEqs}) and (\ref{eqn:1DCurrents}) can be recast into the first-order differential equation \begin{align*} \label{eqn:DiffEq} \ell \partial_x \mb{Y}(x) = \mbs{\mathcal{M}}(x) \mb{Y}(x) \numberthis \end{align*} \noindent for the three (non-dimensionalized) unknowns $\mb{Y}(x) \equiv \left( \ell \rho(x), \ell m_x(x), J_m(x)/D_r\right)^T$. The coefficient matrix in Eq. (\ref{eqn:DiffEq}) is given by \begin{align*} \label{eqn:M} \mbs{\mathcal{M}}(x) = \begin{bmatrix} f(x) & \sqrt{2}\text{Pe} & 0 \\ \text{Pe}/\sqrt{2} & f(x) & -1 \\ 0 & -1 & 0 \end{bmatrix}, \numberthis \end{align*} \noindent where $f(x) \equiv -\beta \ell \partial_x V(x)$ is the dimensionless external force, which is now a function of position $x$. For a passive system (Pe $=0$), Eqs. (\ref{eqn:DiffEq}) and (\ref{eqn:M}) show that the density equation decouples. In this case, the density profile is solved by the Boltzmann weight, i.e. $\rho(x) \propto \exp(-\beta V(x))$, as required in thermodynamic equilibrium. For the general case, we observe that, \emph{if} the coefficient matrix $\mbs{\mathcal{M}}(x)$ commutes with its integral $\int_{x_0}^x \mathrm{d}x'\mbs{\mathcal{M}}(x')$, \emph{then} Eq. (\ref{eqn:DiffEq}) is solved by \begin{align*} \label{eqn:SpecialSolution} \mb{Y}(x) = \exp\left(\frac{1}{\ell}\int_{x_0}^x\mathrm{d}x'\mbs{\mathcal{M}}(x')\right) \cdot \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}, \numberthis \end{align*} \noindent where the integration constants $c_1$, $c_2$ and $c_3$ are to be determined from boundary conditions, and $x_0$ is an arbitrary reference position. By virtue of $\int_{x_0}^x\mathrm{d}x'f(x')=-\beta \ell \left(V(x)-V(x_0)\right)$, the solution (\ref{eqn:SpecialSolution}) \emph{is} a local function of the external potential.
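\indent To see why this commutation condition matters, abbreviate $\mathcal{N}(x)\equiv\int_{x_0}^x\mathrm{d}x'\,\mbs{\mathcal{M}}(x')$ and differentiate the exponential series term by term (a formal check):
\begin{align*}
\ell\,\partial_x \frac{\mathcal{N}(x)^k}{\ell^k\, k!} = \frac{\mbs{\mathcal{M}}(x)\,\mathcal{N}(x)^{k-1} + \mathcal{N}(x)\,\mbs{\mathcal{M}}(x)\,\mathcal{N}(x)^{k-2} + \dots + \mathcal{N}(x)^{k-1}\,\mbs{\mathcal{M}}(x)}{\ell^{k-1}\, k!}.
\end{align*}
Precisely when $[\mbs{\mathcal{M}}(x),\mathcal{N}(x)]=0$, the $k$ terms in the numerator coincide, the right-hand side collapses to $\mbs{\mathcal{M}}(x)\,\mathcal{N}(x)^{k-1}/\left(\ell^{k-1}(k-1)!\right)$, and summing over $k$ yields $\ell\,\partial_x \mb{Y}(x) = \mbs{\mathcal{M}}(x)\mb{Y}(x)$, i.e. Eq. (\ref{eqn:DiffEq}).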
An explicit calculation of the commutator shows that $[ \mbs{\mathcal{M}}(x),\int_{x_0}^x\mathrm{d}x'\mbs{\mathcal{M}}(x')] = 0$ if and only if $\beta (V(x)-V(x_0)) = -f(x) \, (x-x_0)/\ell$, i.e. if the external potential is a linear function of $x$. Therefore, for linear potentials, the density profile \emph{is} a local function of the external potential. This explains why in a gravitational field the density profile \emph{can} be found as a local function of the external potential, and why sedimentation profiles stand a chance of being described in terms of an effective temperature in the first place~\cite{Tailleur2008,Tailleur2009,SelfPumpingState1,Palacci2010,ES,Wolff2013,Szamel2014,ABPvsRnT,EoSExperiment,Stark2016,MatthiasSedimentation,Sedimentation2018}. However, for nonlinear external potentials, e.g. for the ratchet studied here that is only piecewise linear, the solution (\ref{eqn:SpecialSolution}) is \emph{not} valid, and a nonlocal dependence on the external potential is to be expected. Therefore, for the ratchet potential (\ref{eqn:RatchetPotential}), the kinks at $x=-x_l$, $x=0$ and $x=x_r$ are crucial to have a density that depends nonlocally on the external potential. Indeed, in the analytical solution for weak activity, presented in section \ref{sec:SmallfpSol}, the nonlocal dependence of the right bulk density $\rho_r$ on the external potential enters through the fact that the integration constants in Eq. (\ref{eqn:SmallfpSol}) are found from continuity conditions that are applied precisely at the positions of these kinks.\\ \indent Summarizing, in order to have the external potential influence the steady-state density of ideal particles in a nonlocal way, one needs (1) particles that are active (such that the system is out of thermodynamic equilibrium), and (2) an external potential that is nonlinear. Thus, the 1D RnT particles in the ratchet potential (\ref{eqn:RatchetPotential}) illustrate the nonlocal, and even long-range, influence of the external potential in a most minimal way.\\ \indent In the discussion above, we have only shown that a linear external potential yields a density profile that is a strictly local function of the potential. Consequently, a nonlinear potential is not guaranteed to influence the density (arbitrarily) far away, and indeed other criteria have been discussed in the literature. For example, in the context of active Ornstein-Uhlenbeck particles, approximate locality was shown for a wide class of nonlinear potentials \cite{MarconiMaggi,MarconiMaggi2}, and it was argued that in order to lose this property it is crucial to have an external potential with nonconvex regions \cite{Fily2017b}. More generally, the fact that the potential barrier is more easily crossed from one side than from the other is a rectification effect, and it has been shown that such effects can occur when the dynamics break time-reversal symmetry, while spatial mirror symmetry is also broken \cite{Magnasco1993,Prost1994}. In our case, these criteria are met by the presence of activity, and by having a ratchet that is asymmetric ($a\neq 0$), respectively.\\ \indent Our results are also fully consistent with the work by Baek et al. \cite{Yongjoo2018}, who study the effect of placing a nonspherical body in a two-dimensional fluid of ABPs. They show that such an inclusion leads to a steady state with a density perturbation that scales in the far field as $1/r$, where $r$ is the distance to the body.
Repeating their derivation for the 1D RnT model in our setting yields a far-field density perturbation that is simply constant, i.e. independent of $r$. This is consistent with our findings. Furthermore, under suitable conditions, in particular that the external potential is small everywhere, the authors of \cite{Yongjoo2018} derive that the far-field density perturbation scales as $(V_{\text{max}})^3$. This confirms our finding of the power law $\Delta \rho \propto (\beta V_{\text{max}})^3$ for small potential barriers $\beta V_{\text{max}} \ll 1$. Moreover, it suggests that this scaling is not limited to the sawtooth-shaped potential barrier considered here, but also holds for external potentials of more general shape. \section{Conclusions} \label{sec:Conclusion} \noindent We have studied the distribution of noninteracting, active particles over two bulks separated by a ratchet potential. The active particles were modelled both as two-dimensional ABPs, and as one-dimensional RnT particles. Our numerical solutions to the steady state Smoluchowski equations show that the ratchet potential influences the distribution of particles over the bulks, even though the potential itself is short-ranged. Thus, the external potential exerts a long-range influence on the density profile. We have shown that such a (highly) nonlocal influence can occur for noninteracting particles only when they are (1) active, and (2) subject to an external potential that is nonlinear. Thus, the piecewise linear setup considered in this work captures this long-range influence in a most minimal way. \\ \indent To characterize the influence of the external potential, we have described how the difference in bulk densities depends on activity, as well as on the ratchet potential itself. Both models of active particles showed consistent power law behavior, which is summarized in Table \ref{tab:powerlaws}.\\ \indent To understand the long-range influence of the potential in the simplest case possible, we focussed on the limit of weak activity. While weakly active systems are often described by an effective temperature, our analytical solution explicitly shows that the long-range influence of the ratchet potential cannot be rationalized in this way. Instead, we propose a simple transition state model, in which particles can cross the potential barrier by Arrhenius rates with an effective barrier height that depends on the degree of activity. While the model could not be tested quantitatively, as its underlying assumptions could not be simultaneously satisfied for feasible parameter values, it does reproduce, in its regime of applicability, the complete power law behavior of the distribution of particles over the bulks.\\ \indent Future questions are whether the power law behavior can also be understood outside the regime where the transition state model applies, and whether the power laws also hold for potential barriers of more generic shape than the sawtooth of Fig. \ref{fig:RatchetPotential}. Our work illustrates that even weakly active, noninteracting particles pose challenges that are fundamental to nonequilibrium systems, and, moreover, that an external potential can exert a long-range influence in such systems. We expect that incorporating such long-range and nonlocal effects will be part of a more generic statistical mechanical description of nonequilibrium systems.
\section{Acknowledgments} \noindent This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). We acknowledge funding of an NWO-VICI grant. S.P. and M.D. acknowledge funding from the Industrial Partnership Programme `Computational Sciences for Energy Research' (Grant No. 14CSER020) of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). This research programme is co-financed by Shell Global Solutions International B.V.
\section{Introduction} Fine-tuning pre-trained large-scale language models (LMs) is the dominant paradigm of current NLP. The LMs proved to be a versatile technology that can help to solve an array of NLP tasks, such as parsing, machine translation, text summarization, sentiment analysis, semantic similarity, etc. The LMs can be used for tasks on various levels of linguistic complexity (syntactic, semantic, etc.), but also with various types of data modalities (text classification, text generation, text comparison, etc.). As such, it seems to be vital for any speech community to develop a proper model for their language and thus push the limits of what NLP can do for them. In this paper, we introduce \textit{SlovakBERT}\footnote{Available at \url{https://github.com/gerulata/slovakbert}}, the first Slovak-only transformers-based language model trained on a corpus of non-trivial size. Although several multilingual models already support Slovak, we believe that developing a Slovak-only model is still important, as it can lead to better results and to more compute-wise and memory-wise efficient processing of the Slovak language. \textit{SlovakBERT} has the RoBERTa architecture~\cite{roberta} and it is trained on a Web-crawled corpus. We evaluate \textit{SlovakBERT} on various existing datasets to study how well it handles different tasks. We also compare it to other available LMs (mainly multilingual) and other existing approaches. The tasks we study are: part-of-speech tagging, semantic textual similarity, sentiment analysis and document classification. As a by-product of our experimentation, we have developed and published the best performing models for selected tasks. These might be used by other researchers or Slovak NLP practitioners in the future as strong baselines. Our main contributions in this paper are: \begin{itemize} \item We trained and published the first proper Slovak-only LM on a dataset of non-trivial size that we collected. \item We evaluated the LM on a series of Slovak NLP tasks. \item We published several fine-tuned models based on this LM, namely a part-of-speech tagger, a sentiment analysis model and a sentence embedding model. \item We published several additional datasets for multiple tasks, namely sentiment analysis test sets and a translated semantic similarity dataset. \end{itemize} The rest of this paper is structured as follows: In Section~\ref{sec:related} we discuss related work about language models and their language-specific versions. In Section~\ref{sec:model} we describe our corpus crawling efforts and how we train our LM with the resulting data. In Section~\ref{sec:evaluation} we evaluate the model on four NLP tasks. In Section~\ref{sec:concl} we conclude our findings. \section{Related Work}\label{sec:related} \subsection{Language Models} Most LMs are currently based on self-attention layers called \textit{transformers}~\cite{transformer}. The models differ in the details of their architecture, as well as in the task they are trained with~\cite{whichbert}. The most common task is the so-called \textit{masked language modeling}~\cite{bert}, where certain parts of the sentence are masked and the model is expected to fill these parts with the original tokens. Models like these are useful mainly as backbones for further fine-tuning. Another approach is to train a generative model~\cite{gpt2}, which always predicts the next word in the sequence and can be used for various text generation tasks.
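To make the masked-language-modeling objective concrete, a minimal usage sketch in Python (with the Hugging Face \texttt{transformers} library) is shown below. The model id \texttt{gerulata/slovakbert} is an assumption based on the repository linked in the introduction, and the example sentence is ours.
\begin{verbatim}
from transformers import pipeline

# Masked language modelling; RoBERTa-style models use the <mask>
# token. The model id is an assumption (see the lead-in above).
fill_mask = pipeline("fill-mask", model="gerulata/slovakbert")
for pred in fill_mask("Bratislava je hlavné mesto <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
\end{verbatim}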
Sequence-to-sequence models~\cite{t5} are also an option, which is in a sense a combination of the previous two approaches. They have an encoder that encodes an arbitrary input and then a decoder that is used to generate text as an output. For all these types, variants exist to make the underlying LMs more efficient~\cite{electra, tinybert}, able to handle longer sentences~\cite{longformer} or to fulfill various other requirements. \subsection{Availability in Different Languages} English is the most commonly used language in NLP, and a \textit{de facto} standard for experimental work. Most of the proposed LM variants are indeed trained and evaluated only on English. Other languages usually have at most a few LMs trained, usually with a very safe choice of model type (e.g. BERT models or RoBERTa models are popular). Languages with available native models are e.g. French~\cite{french}, Dutch~\cite{dutch} or Arabic~\cite{arabic}. There are also models for related Slavic languages, notably Czech~\cite{czech} and Polish~\cite{polish}. There is no Slovak-specific large-scale LM available so far. There is a Slovak version of the WikiBERT model~\cite{wikibert}, but it is trained only on texts from Wikipedia, which is not a large enough corpus for proper language modeling at this scale. The limitations of this model will be shown in the results as well. \subsection{Multilingual Language Models} Multilingual LMs are sometimes proposed as an alternative to training language-specific LMs. These LMs can handle more than one language. In practice, they are often trained with more than 100 languages. Training them is more efficient than training separate models for all the languages. Additionally, cross-lingual transfer learning might improve the performance, with the languages being able to learn from each other. This is especially beneficial for low-resource languages. The first large-scale multilingual LM is MBERT~\cite{bert}, trained on 104 languages. The authors observed that by simply exposing the model to data from multiple languages, the model was able to discover the multilingual signal and it spontaneously developed interesting cross-lingual capabilities, i.e. sentences from different languages with similar meaning also have similar representations. Other models explicitly use multilingual supervision, e.g. dictionaries, parallel corpora or machine translation systems~\cite{xlm, unicoder}. \section{Training}\label{sec:model} In this section we describe our Slovak masked language model: what data were used for training, the architecture of the model, and how it was trained. \subsection{Data} We used a combination of available corpora and our own Web-crawled corpus as our training data. The available corpora we used were: Wikipedia (326MB of text), Open Subtitles (415MB) and the OSCAR corpus (4.6GB). We crawled webpages on the \texttt{.sk} top-level domain, applied language detection and extracted the title and the main content of each page as clean text without HTML tags (17.4GB). The text was then processed with the following steps: \begin{itemize} \item URL and email addresses were replaced with special tokens. \item Elongated punctuation was reduced, i.e. sequences of the same punctuation character were reduced to a single character (e.g. \texttt{--} to \texttt{-}). \item Markdown syntax was deleted. \item All text content in braces \texttt{\{.\}} was eliminated to reduce the amount of markup and programming language text.
\end{itemize} We segmented the resulting corpus into sentences and removed duplicates to get 181.6M unique sentences. In total, the final corpus has 19.35GB of text. \subsection{Model Architecture and Training} The model itself is a RoBERTa model~\cite{roberta}. The details of the architecture are shown in Table~\ref{tab:lms} in the \texttt{SlovakBERT} column. We use a BPE~\cite{bpe} tokenizer with a vocabulary size of 50264. The model was trained for 300k training steps with a batch size of 512. Samples were limited to a maximum of 512 tokens and for each sample we fit as many full sentences as possible. We used the Adam optimization algorithm~\cite{adam} with a $5 \times 10^{-4}$ learning rate and 10k warmup steps. Dropout (dropout rate $0.1$) and weight decay ($\lambda = 0.01$) were used for regularization. We used the \texttt{fairseq}~\cite{fairseq} library for training, which took approximately 248 hours on 4 NVIDIA A100 GPUs. We used 16-bit float precision. \section{Evaluation}\label{sec:evaluation} In this section, we describe the evaluation procedure and results of \textit{SlovakBERT}. We use two main methods to examine the performance of \textit{SlovakBERT}, as well as of various other LMs: \begin{enumerate} \item \textit{Downstream performance.} We fine-tune the LMs for various NLP tasks and we analyze the achieved results. We compare the results with existing solutions based on other approaches, e.g. rule-based solutions or solutions based on word embeddings. \item \textit{Probing.} Probing is a technique that aims to measure how useful the information contained within individual layers of the LM is. We can check how important the individual layers are for various tasks. We use simple \textit{linear probes} in our work, i.e. the hidden representations from the LMs are used as features for linear classification. \end{enumerate} We conducted the evaluation on four different tasks: part-of-speech tagging, semantic textual similarity, sentiment analysis and document classification. For each task, we introduce the dataset that is used, various baseline solutions, the LM-based approach we took and the final results for the task. \subsection{Evaluated Language Models} We evaluate and compare several LMs that support the Slovak language to some extent: \begin{itemize} \item \textbf{XLM-R}~\cite{xlm-r} - XLM-R is a suite of RoBERTa-style LMs. The models support 100 languages, including Slovak. The training data are based on the CommonCrawl Web-crawled corpus; the Slovak part has 23.2 GB (3.5B tokens). The XLM-R models differ in their size, ranging from the Base model with 270M parameters to the XXL model with 10.7B parameters. \item \textbf{MBERT}~\cite{mbert} - MBERT is a multilingual version of the original BERT model trained with a Wikipedia-based corpus containing 104 languages. The authors do not mention the amount of data for each language, but considering the size of the Slovak Wikipedia, we assume that the Slovak part is in the tens of millions of tokens. \item \textbf{WikiBERT}~\cite{wikibert} - WikiBERT is a series of monolingual BERT-style models trained on dumps of Wikipedia. The Slovak model was trained with 39M tokens. \end{itemize} Note that both the XLM-R and MBERT models were trained in a cross-lingually unsupervised manner, i.e. no additional signal about how sentences or words from different languages relate to each other was provided. The models were trained with simple multilingual corpora, although language balancing was performed. In Table~\ref{tab:lms} we provide basic quantitative measures of all the models.
We compare their architecture and training data. We also studied the tokenization productivity on texts from the \textit{Universal Dependencies}~\cite{ud} train set. We show the average token length for each model. Longer tokens are considered to be better, because they can be more semantically meaningful. Also, with longer tokens, we need fewer of them. A model that needs more tokens will take longer to compute, since a self-attention layer has $O(N^2)$ time complexity w.r.t. the input length $N$. We also show how many tokens were used (effective vocabulary) for the tokenization in this particular dataset. Multilingual LMs have a smaller portion of their vocabulary used, since they contain many tokens used in other languages and not in Slovak. These tokens are redundant for Slovak text processing. \begin{table}[] \centering \tiny \begin{tabular}{l|l|l|l|l|l} \textbf{Model} & SlovakBERT & XLM-R-Base & XLM-R-Large & MBERT & WikiBERT \\ \hline Architecture & RoBERTa & \multicolumn{2}{c|}{RoBERTa} & BERT & BERT \\ Num. layers & 12 & 12 & 24 & 12 & 12 \\ Num. attention heads & 12 & 12 & 16 & 12 & 12 \\ Hidden size & 768 & 768 & 1024 & 768 & 768 \\ Num. parameters & 125M & 278M & 560M & 178M & 102M \\ Languages & 1 & 100 & 100 & 104 & 1 \\ Training dataset size (tokens) & 4.6B & \multicolumn{2}{c|}{167B} & n/a & 39M \\ Slovak dataset size (tokens) & 4.6B & \multicolumn{2}{c|}{3.2B} & 25-50M & 39M \\ Vocabulary size & 50K & \multicolumn{2}{c|}{250K} & 120K & 20K \\ Average token length * & 3.23 & \multicolumn{2}{c|}{2.84} & 2.40 & 2.70 \\ Effective vocabulary * & 16.6K & \multicolumn{2}{c|}{9.6K} & 6.7K & 5.8K \\ Effective vocabulary (\%) * & 33.05 & \multicolumn{2}{c|}{3.86} & 5.62 & 29.10 \\ \end{tabular} \caption{Basic statistics about the evaluated LMs. *Data are calculated based on the \textit{Universal Dependencies} dataset.} \label{tab:lms} \end{table} \subsection{Part-of-Speech Tagging} The goal of part-of-speech (POS) tagging is to assign a certain POS tag from a predefined set of possible tags to each word. This task mainly evaluates the syntactic capabilities of the models. \subsubsection{Data} We use the Slovak Dependency Treebank from the \textit{Universal Dependencies} dataset~\cite{slovak-ud, ud} (UD). It contains annotations for both Universal (UPOS, 17 tags) and Slovak-specific (XPOS, 19 tags) POS tagsets. We mainly work with UPOS, but we use XPOS for comparison with other systems as well. XPOS uses a more complicated system: it encodes not only POS tags, but also other morphological categories in the label. In this work, we only use the first letter of each Slovak XPOS label, which corresponds to a typical POS tag. The tagsets and their relations are shown in Table~\ref{tab:pos-tagsets}.
\begin{table}[] \centering \begin{tabular}{l|l||l|l} \multicolumn{2}{c||}{\textbf{XPOS}} & \multicolumn{2}{c}{\textbf{UPOS}} \\ \hline \textbf{Tag} & \textbf{Description} & \textbf{Tag} & \textbf{Description} \\ \hline \hline A & adjective & \multirow{2}*{ADJ} & \multirow{2}*{adjective} \\ \cline{1-2} G & participle & & \\ \hline E & preposition & ADP & adposition \\ \hline D & adverb & ADV & adverb \\ \hline Y & conditional morpheme & \multirow{2}*{AUX} & \multirow{2}*{auxiliary} \\ \cline{1-2} \multirow{2}*{V} & \multirow{2}*{verb} & & \\ \cline{3-4} & & VERB & verb \\ \hline \multirow{2}*{O} & \multirow{2}*{conjunction} & CCONJ & coordinating conjunction \\ \cline{3-4} & & SCONJ & subordinating conjunction \\ \hline \multirow{2}*{P} & \multirow{2}*{pronoun} & DET & determiner \\ \cline{3-4} & & \multirow{2}*{PRON} & \multirow{2}*{pronoun} \\ \cline{1-2} R & reflexive pronoun & & \\ \hline J & interjection & INTJ & interjection \\ \hline \multirow{2}*{S} & \multirow{2}*{noun} & NOUN & noun \\ \cline{3-4} & & PROPN & proper noun \\ \hline N & numeral & \multirow{2}*{NUM} & \multirow{2}*{numeral} \\ \cline{1-2} 0 & digit & & \\ \hline T & particle & PART & particle \\ \hline Z & punctuation & PUNCT & punctuation \\ \hline W & abbreviation & \multirow{4}*{X} & \multirow{4}*{other} \\ \cline{1-2} Q & unidentifiable & & \\ \cline{1-2} \# & non-word element & & \\ \cline{1-2} \% & citation in foreign language & & \\ \hline & & SYM & symbol \\ \end{tabular} \caption{Slovak POS tagsets and their mapping~\cite{slovak-ud}.} \label{tab:pos-tagsets} \end{table} \subsubsection{Previous Work} Since Slovak is an official part of the UD dataset, systems that attempt to cover multiple or all UD languages can often support Slovak as well. The following systems were trained on UD data and support both the UPOS and XPOS tagsets: \begin{itemize} \item \textbf{UDPipe 2}~\cite{udpipe} - A deep learning model based on a multilayer bidirectional LSTM architecture with pre-trained Slovak word embeddings. The model supports multiple languages, but the models themselves are monolingual. \item \textbf{Stanza}~\cite{stanza} - Stanza is a very similar model to UDPipe; it is also based on a multilayer bidirectional LSTM with pre-trained word embeddings. \item \textbf{Trankit}~\cite{trankit} - Trankit is based on adapter-style fine-tuning~\cite{adapter} of XLM-R-Base. The adapters are fine-tuned for specific languages and they are able to handle multiple tasks at the same time. \end{itemize} \subsubsection{Our Fine-Tuning} We use a standard setup for fine-tuning the LMs for token classification. The final layer of the LM that is used to predict the masked tokens is discarded. A linear classifier layer with dropout is used in its place to generate POS tag logits for each token. These logits are then transformed into a probability vector with a softmax function and a cross-entropy is calculated for each token. The loss function for a batch of samples is defined as the average cross-entropy across all the tokens. For inference, we simply pick the class with the highest probability for each token. Note that there is a discrepancy between what we perceive as words and what the models use as tokens. Some words might be tokenized into multiple tokens. In that case, we only make the prediction on the first token, and the final classifier layer is not applied to the subsequent tokens of this word. We use the \texttt{Hugging Face Transformers} library for LM fine-tuning.
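For concreteness, a minimal sketch of this token-classification setup with the \texttt{Hugging Face Transformers} \texttt{Trainer} API follows. The model id, the dataset handling and the output directory are placeholders (the tuned hyperparameter values are those listed in Table~\ref{tab:pos-hparams}); this is an illustration, not our exact training script.
\begin{verbatim}
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer)

tok = AutoTokenizer.from_pretrained("gerulata/slovakbert")  # assumed id
model = AutoModelForTokenClassification.from_pretrained(
    "gerulata/slovakbert", num_labels=17)                   # 17 UPOS tags

def encode(batch):
    # batch["tokens"]: lists of words; batch["upos"]: per-word tag ids
    enc = tok(batch["tokens"], is_split_into_words=True, truncation=True)
    labels = []
    for i, tags in enumerate(batch["upos"]):
        prev, lab = None, []
        for w in enc.word_ids(batch_index=i):
            # predict only on the first sub-token of each word;
            # -100 is ignored by the cross-entropy loss
            lab.append(-100 if w is None or w == prev else tags[w])
            prev = w
        labels.append(lab)
    enc["labels"] = labels
    return enc

args = TrainingArguments("pos-tagger", learning_rate=1e-5,
                         per_device_train_batch_size=32,
                         warmup_steps=1000, weight_decay=0.05)
# Trainer(model=model, args=args, train_dataset=...).train()
\end{verbatim}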
We use a similar setup for probing, but with two changes: (1) we freeze all the weights apart from the classifier layer, and (2) we remove several top layers from the LM, i.e. instead of making predictions from the topmost layer, we make them from another layer instead. This way we can analyze how well the representations generated on the given layer work. \subsubsection{Results} We performed a random hyperparameter search with \textit{SlovakBERT}. The range of individual hyperparameters is shown in Table~\ref{tab:pos-hparams}. We found that weight decay is a beneficial regularization technique, while label smoothing proved inappropriate in our case. Other hyperparameters appeared to have little reliable effect, apart from the learning rate, which proved to be very sensitive. We did not repeat this tuning for the other LMs; instead, we only tuned the learning rate. We found a learning rate of $10^{-5}$ to be appropriate for all models except XLM-R-Large, the biggest model we tested, which needs a smaller learning rate of $10^{-6}$. \begin{table}[] \centering \begin{tabular}{l|r|r} \textbf{Hyperparameter} & \textbf{Range} & \textbf{Selected} \\ \hline Learning rate & $[10^{-7}, 10^{-3}]$ & $10^{-5}$ \\ Batch size & $\{8, 16, 32, 64, 128\}$ & $32$ \\ Warmup steps & $\{0, 500, 1000, 2000\}$ & $1000$ \\ Weight decay & $[0, 0.1]$ & $0.05$ \\ Label smoothing & $[0, 0.2]$ & $0$ \\ Learning rate scheduler & Various\footnotemark & \texttt{linear} \\ \end{tabular} \caption{Hyperparameters used for POS tagging. Adam was used as an optimization algorithm.} \label{tab:pos-hparams} \end{table} \footnotetext{See the list of schedulers supported by the Hugging Face Transformers library.} The results for POS tagging are shown in Table~\ref{tab:pos-results}. We report accuracy for both the UPOS and XPOS tagsets. WikiBERT seems to be the worst-performing LM, probably because of its small training set. \textit{SlovakBERT} seems to be on par with the larger XLM-R-Large. Other models lag behind slightly. Among existing solutions, only the transformers-based Trankit seems to be able to keep up. \begin{table}[] \centering \begin{tabular}{l|r|r} \textbf{Model} & \textbf{UPOS} & \textbf{XPOS} \\ \hline UDPipe 2.0 & 92.83 & 94.74 \\ UDPipe 2.6 & 97.30 & 97.87 \\ Stanza & 96.03 & 97.29 \\ Trankit & 97.85 & 98.03 \\ \hline WikiBERT & 94.41 & 96.54 \\ MBERT & 97.50 & 98.03 \\ XLM-R-Base & 97.61 & 98.23 \\ XLM-R-Large & \textbf{97.96} & 98.34 \\ \hline SlovakBERT & 97.84 & \textbf{98.37} \\ \end{tabular} \caption{Results for POS tagging (accuracy).} \label{tab:pos-results} \end{table} To make sure that the results are statistically significant, we performed McNemar tests between all pairs of models. We used the information about whether each token was predicted correctly or incorrectly as the observed variables. The results are shown in Table~\ref{tab:pos-mcnemar}. We can see that the differences between the top-performing models are mostly not considered statistically significant.
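The pairwise tests can be sketched as follows; \texttt{correct\_a} and \texttt{correct\_b} are assumed to be boolean vectors with one entry per test token, marking whether the respective model predicted that token correctly.
\begin{verbatim}
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_p(correct_a, correct_b):
    # 2x2 contingency table of paired correct/incorrect predictions
    a, b = np.asarray(correct_a), np.asarray(correct_b)
    table = [[np.sum(a & b),  np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=False, correction=True).pvalue
\end{verbatim}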
\begin{table}[] \centering \tiny \begin{tabular}{l|r|r|r|r|r|r|r|r|r} & \rotatebox{90}{UDPipe 2.0} & \rotatebox{90}{UDPipe 2.6} & \rotatebox{90}{Stanza} & \rotatebox{90}{Trankit} & \rotatebox{90}{WikiBERT} & \rotatebox{90}{MBERT} & \rotatebox{90}{XLM-R-Base} & \rotatebox{90}{XLM-R-Large} & \rotatebox{90}{SlovakBERT} \\ UDPipe 2.0 & - & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ UDPipe 2.6 & 0.0000 & - & 0.0000 & 0.0001 & 0.0000 & \textbf{0.5345} & \textbf{0.0543} & 0.0000 & 0.0057 \\ Stanza & 0.0000 & 0.0000 & - & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ Trankit & 0.0000 & 0.0001 & 0.0000 & - & 0.0000 & 0.0006 & \textbf{0.0105} & \textbf{0.1776} & \textbf{0.2693} \\ WikiBERT & 0.0000 & 0.0000 & 0.0000 & 0.0000 & - & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ MBERT & 0.0000 & \textbf{0.5345} & 0.0000 & 0.0006 & 0.0000 & - & \textbf{0.2214} & 0.0000 & \textbf{0.0302} \\ XLM-R-Base & 0.0000 & \textbf{0.0543} & 0.0000 & \textbf{0.0105} & 0.0000 & \textbf{0.2214} & - & 0.0001 & \textbf{0.2652} \\ XLM-R-Large & 0.0000 & 0.0000 & 0.0000 & \textbf{0.1776} & 0.0000 & 0.0000 & 0.0001 & - & \textbf{0.0166} \\ SlovakBERT & 0.0000 & 0.0057 & 0.0000 & \textbf{0.2693} & 0.0000 & \textbf{0.0302} & \textbf{0.2652} & \textbf{0.0166} & - \\ \end{tabular} \caption{Statistical significance from the McNemar test applied on POS results. Bold are results where the difference between the two models is not deemed statistically significant ($p > 0.001$).} \label{tab:pos-mcnemar} \end{table} We also analyzed the dynamics of the LM fine-tuning by measuring the performance for various checkpoints of our LM (checkpoints were made every 1000 training steps). We can see in Figure~\ref{fig:pos} that \textit{SlovakBERT} was saturated w.r.t.\ POS performance quite soon, after approximately 15k steps. We stopped the analysis after the first 125k steps, since the results seemed to be stable. Similar results for probing can be seen in the same figure. We show the performance for all the layers for selected checkpoints. Again, we can see a rapid saturation of the model. Note that the performance across layers peaks quite soon, at layer 6, and then plateaus. The last layers even have degraded performance. This shows that the morphosyntactic information needed for POS tagging is stored and processed mainly in the middle part of the model. This is in accord with the current knowledge about how LMs work, i.e. that they process the text in a bottom-up manner~\cite{bert-pipeline}. \begin{figure}[h] \centering \includegraphics[width=12cm]{figures/fig_pos.pdf} \caption{Analysis of POS tagging learning dynamics. \textit{Left:} Accuracy after fine-tuning for several checkpoints. \textit{Right:} Accuracy of probes on all the layers with selected checkpoints.}\label{fig:pos} \end{figure} \subsection{Semantic Textual Similarity} Semantic textual similarity (STS) is an NLP task where the similarity between pairs of sentences is measured. In our work, we train the LMs to generate sentence embeddings and then we measure how much the cosine similarity between the embeddings correlates with the ground truth labels provided by human annotators. This task measures how well the models handle semantics. On top of that, we can use the resulting models to generate universal sentence embeddings for Slovak. \subsubsection{Data} Currently, there is no native Slovak STS dataset. We decided to translate existing English datasets, namely STSbenchmark~\cite{stsb} and SICK~\cite{sick}, into Slovak.
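The translation itself (with the M2M100 system introduced below) can be sketched as follows; the checkpoint id \texttt{facebook/m2m100\_1.2B} and the example sentence are illustrative assumptions.
\begin{verbatim}
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tok = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")
model = M2M100ForConditionalGeneration.from_pretrained(
    "facebook/m2m100_1.2B")
tok.src_lang = "en"                      # translate English -> Slovak
enc = tok("A man is playing a guitar.", return_tensors="pt")
out = model.generate(**enc, forced_bos_token_id=tok.get_lang_id("sk"))
print(tok.batch_decode(out, skip_special_tokens=True)[0])
\end{verbatim}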
These datasets use a $\langle 0,5 \rangle$ scale that expresses the similarity of two sentences. The meaning of the individual steps on this scale is shown in Table~\ref{tab:sts-labels}. English STS systems also usually use natural language inference (NLI) data to perform additional pre-training. NLI is a task where the goal is to identify cases of entailment or contradiction between two sentences. We translated the SNLI~\cite{snli} and MNLI~\cite{mnli} datasets to Slovak as well. We used the M2M100 machine translation system~\cite{m2m100} (the 1.2B-parameter variant). \begin{table}[] \centering \begin{tabular}{l|p{10cm}} \textbf{Label} & \textbf{Meaning} \\ \hline 0 & The two sentences are completely dissimilar. \\ \hline 1 & The two sentences are not equivalent, but are on the same topic. \\ \hline 2 & The two sentences are not equivalent, but share some details. \\ \hline 3 & The two sentences are roughly equivalent, but some important information differs.\\ \hline 4 & The two sentences are mostly equivalent, but some unimportant details differ. \\ \hline 5 & The two sentences are completely equivalent, as they mean the same thing. \\ \end{tabular} \caption{Annotation schema for STS datasets~\cite{sick}.} \label{tab:sts-labels} \end{table} \subsubsection{Previous Work} No Slovak-specific sentence embedding model has been published yet. For comparison, we use a naive solution based on Slovak word embeddings as well as several available multilingual models: \begin{itemize} \item \textbf{fastText}~\cite{fasttext} - We use pre-trained Slovak fastText word embeddings to generate representations for individual words. The sentence representation is the average over all its words. This represents a very naive baseline, since it completely omits the word order. \item \textbf{LASER}~\cite{laser} - LASER is a model trained to generate multilingual sentence embeddings. It is based on an encoder-decoder LSTM machine translation system that is trained with 93 languages. The encoder is shared across all the languages and, as such, it is able to generate multilingual representations. \item \textbf{LaBSE}~\cite{labse} - LaBSE is an MBERT model fine-tuned with a parallel corpus to produce multilingual sentence representations. \item \textbf{XLM-R$_{EN}$}~\cite{multi-sbert} - An XLM-R model fine-tuned with English STS-related data (SNLI, MNLI and STSbenchmark datasets). This is a zero-shot cross-lingual learning setup, i.e. no Slovak data are used and only English fine-tuning is done. \end{itemize} \subsubsection{Our Fine-Tuning} We use a setup similar to~\cite{multi-sbert}, but we fine-tune with the translated Slovak data. A pre-trained LM is used to initialize a Siamese network. Both branches of the network are the LM with a mean-pooling layer at the top that generates the final sentence embeddings. The embeddings of the two sentences are compared using cosine similarity. The network is trained as a regression model, i.e. the computed similarity is compared with the ground truth similarity using a \textit{mean squared error} loss function. We use the \texttt{SentenceTransformers} library for the fine-tuning. We also performed a layer-wise analysis, where we analyzed which layers have the most suitable representations for this task. We conducted the mean-pooling at different layers and ignored all the subsequent layers. This is similar to probing, but probing is usually done with frozen LM layers.
In this case, we cannot freeze the layers, since all the additional layers we added (mean-pooling, cosine similarity calculation) are not parametric. \subsubsection{Results} We compare the systems using the Spearman correlation between the cosine similarity of the generated sentence representations and the ground truth data. The original STS datasets use a $\langle 0,5 \rangle$ scale. We normalize these scores to the $\langle 0,1 \rangle$ range so that they can be directly compared to the cosine similarities. We performed a hyperparameter search in this case as well. Again, we found that the results are quite stable across various hyperparameter values, with the learning rate being the most sensitive hyperparameter. The details of the hyperparameter tuning are shown in Table~\ref{tab:sts-hparams}. We show the main results in Table~\ref{tab:sts-results}. \begin{table}[] \centering \begin{tabular}{l|r|r} \textbf{Hyperparameter} & \textbf{Range} & \textbf{Selected} \\ \hline Learning rate & $[10^{-7}, 10^{-3}]$ & $10^{-5}$ \\ Batch size & $\{8, 16, 32, 64, 128\}$ & $32$ \\ Warmup steps & $\{0, 500, 1000, 2000\}$ & $1000$ \\ Weight decay & $[0, 0.2]$ & $0.15$ \\ Learning rate scheduler & Various\footnotemark & \texttt{cosine with hard restarts} \\ \end{tabular} \caption{Hyperparameters used for STS fine-tuning. Adam was used as an optimization algorithm.} \label{tab:sts-hparams} \end{table} \footnotetext{See the list of schedulers supported by the Sentence Transformers library.} \begin{table}[] \centering \begin{tabular}{l|r} \textbf{Model} & \textbf{Score} \\ \hline fastText & 0.383 \\ LASER & 0.711 \\ LaBSE & 0.739 \\ XLM-R$_{EN}$ & \textbf{0.801} \\ \hline WikiBERT & 0.673 \\ MBERT & 0.748 \\ XLM-R-Base & 0.767 \\ XLM-R-Large & 0.791 \\ \hline SlovakBERT & 0.791 \\ \end{tabular} \caption{Spearman correlation between cosine similarity of generated representations and desired similarities on the STSbenchmark dataset translated to Slovak.} \label{tab:sts-results} \end{table} We can see that the results are fairly similar to POS tagging w.r.t. how the LMs are relatively ordered. The existing solutions are worse, except for XLM-R trained with English data, which is actually the best performing model in our experiments. It seems that this model, fine-tuned with real data free of machine-translation-induced noise, works better, even if it has to perform the inference cross-lingually on Slovak data. We also experimented with the Slovak-translated NLI data in a setup where the model was first fine-tuned on the NLI task and then the final STS fine-tuning was performed. However, we were not able to outperform the purely STS-based fine-tuning with this approach, and the results remained virtually the same. This result is in contrast with the usual case for English training, where the NLI data regularly improve the results~\cite{sbert}. We theorize that this effect might be caused by noisy machine translation. Figure~\ref{fig:sts} shows the learning dynamics of STS. On the left, we can see that the performance takes much longer to plateau than in the case of POS. This shows that the model needs a longer time to learn about semantics. Still, we can see that the performance ultimately stabilizes just below a score of $0.8$. Furthermore, unlike POS, we can see that the best performing layers are actually the last layers of the model. This suggests that a model with a bigger capacity might be trained successfully. \begin{figure}[h] \centering \includegraphics[width=12cm]{figures/fig_sts.pdf} \caption{Analysis of STS learning dynamics.
\textit{Left:} Spearman correlation after fine-tuning with various checkpoints. \textit{Right:} Spearman correlation on all the layers with selected checkpoints.}\label{fig:sts} \end{figure} \subsection{Sentiment Analysis}\label{sec:sa} The goal of sentiment analysis is to identify the affective sentiment of a given text. It requires semantic analysis of the text, as well as a certain amount of emotional understanding. \subsubsection{Data} We use a Twitter-based dataset~\cite{twitter-sa} annotated on a scale with three values: \textit{negative}, \textit{neutral} and \textit{positive}. Some of the tweets have been removed since the dataset was created. Therefore, we work with a subset of the original dataset. We cleaned the data by removing URLs, retweet prefixes, hashtags, user mentions, quotes, asterisks, redundant whitespace and trailing punctuation. We also deduplicated the samples, as there were cases of identical samples (e.g. retweets) or very similar samples (e.g. automatically generated tweets) with the same, or even with different, labels. After the deduplication, we were left with 41084 tweets: 11160 negative, 6668 neutral and 23256 positive samples. Additionally, we constructed a series of test sets based on reviews from various domains: accommodation, books, cars, games, mobiles and movies. Each domain has approximately 100 manually labeled samples. These are published along with this paper. They serve to check how well the model behavior transfers to other domains. This dataset is called \textit{Reviews} in the results below, while the original Twitter-based dataset is called \textit{Twitter}. \subsubsection{Previous Work and Baselines} The paper introducing the Twitter dataset used an array of traditional classifiers (Naive Bayes and 5 SVM variants) to solve the task. The authors report the macro-F1 score for the positive and negative classes only. Additionally, unlike us, they worked with the whole dataset; approximately 10K tweets have been deleted since the dataset was introduced. \cite{pecar} use the same version of the dataset as we do. They use approaches based on word embeddings and ELMO~\cite{elmo} to solve the task. Note that both published works use cross-validation, but no canonical dataset split is provided in either of them. There are several existing approaches we use for comparison: \begin{itemize} \item \textbf{NLP4SK}\footnote{\url{http://arl6.library.sk/nlp4sk/webapi/analyza-sentimentu}} - A rule-based sentiment analysis system for Slovak that is available online. \item \textbf{Amazon} - We translated the Slovak data into English and tested the performance of Amazon's commercial sentiment analysis API on our test sets. \end{itemize} We have also implemented several baseline classifiers that were trained with the same training data as the LMs (a minimal sketch of the first one follows the list): \begin{itemize} \item \textbf{TF-IDF linear classifier} - A perceptron trained with the SGD algorithm was used as a classifier. The text was represented as a TF-IDF vector computed over N-grams. \item \textbf{fastText classifier} - We used the built-in fastText classifier, with and without pre-trained Slovak word embedding models. \item \textbf{Our STS embedding linear classifier} - A perceptron trained with the SGD algorithm was used as a classifier. The text was represented using the sentence embedding model we trained for STS. \end{itemize} We performed a random search hyperparameter optimization for all the approaches.
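The TF-IDF baseline can be sketched as follows (Python, scikit-learn); the n-gram range and regularization strength are illustrative placeholders, not the values found by the random search.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Perceptron-style linear classifier on TF-IDF n-gram features
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    SGDClassifier(loss="perceptron", alpha=1e-5),
)
# clf.fit(train_texts, train_labels)
# print(f1_score(test_labels, clf.predict(test_texts), average="macro"))
\end{verbatim}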
\subsubsection{Our Fine-Tuning} We fine-tuned the LMs as classifiers with 3 classes. The topmost layer of the LM is discarded, and a multilayer perceptron classifier with one hidden layer and dropout is applied to the representation of the first token instead. Categorical cross-entropy is used as the loss function. During inference, the class with the highest probability coming from the softmax function is selected as the predicted label. We use the \texttt{Hugging Face Transformers} library for fine-tuning. \subsubsection{Results} We report macro-F1 scores over all three classes as our main performance measure. The LMs were trained on the Twitter dataset. We calculate the average F1 over our \textit{Reviews} test sets as an additional measure. Again, we performed a hyperparameter optimization of \textit{SlovakBERT}. The results are similar to those from POS tagging and STS: we found that the learning rate is the most sensitive hyperparameter and that a small amount of weight decay provides beneficial regularization. The main results are shown in Table~\ref{tab:sa-results}. We can see that we were able to obtain better results than those reported previously. However, the comparison is not perfect, as we use slightly different datasets for the aforementioned reasons. \begin{table}[] \centering \begin{tabular}{l|r|r|r} \textbf{Model} & \multicolumn{2}{c|}{\textbf{Twitter F1}} & \textbf{Reviews F1} \\ & \textbf{3-class} & \textbf{2-class} & \textbf{3-class}\\ \hline \cite{twitter-sa}* & - & 0.682 & - \\ \cite{pecar}* & 0.669 & - & - \\ Amazon & 0.502 & 0.472 & 0.766 \\ NLP4SK & 0.489 & 0.468 & \textbf{0.815} \\ \hline TF-IDF & 0.571 & 0.603 & 0.412 \\ fastText & 0.591 & 0.622 & 0.416 \\ fastText w/ emb. & 0.606 & 0.631 & 0.426 \\ STS embeddings & 0.581 & 0.597 & 0.582 \\ \hline WikiBERT & 0.580 & 0.597 & 0.398 \\ MBERT & 0.587 & 0.622 & 0.453 \\ XLM-R-Base & 0.620 & 0.651 & 0.518 \\ XLM-R-Large & 0.655 & \textbf{0.716} & 0.617 \\ \hline SlovakBERT & \textbf{0.672} & 0.705 & 0.583 \\ \end{tabular} \caption{Macro-F1 scores for the sentiment analysis task. The 2-class F1 score for Twitter is calculated only from the positive and negative classes -- a methodology introduced in the original dataset paper. *Indicates different evaluation sets.} \label{tab:sa-results} \end{table} The LMs are ordered by performance similarly to the two previous tasks. \textit{SlovakBERT} seems to be among the best performing models, along with the larger XLM-R-Large. The LMs were also able to successfully transfer their sentiment knowledge to new domains, achieving up to 0.617 macro-F1 on the reviews as well. However, both the Amazon commercial sentiment API and NLP4SK have even better scores there, even though their performance on the Twitter data was not very impressive. This is probably caused by the underlying training data used in their systems, which might match our \textit{Reviews} datasets better than the tweets used for our fine-tuning. \subsection{Document Classification} The final task on which we evaluate our LMs is the classification of documents into news categories. The goal of this task is to ascertain how well LMs handle common classification problems. We use the Slovak Categorized News Corpus~\cite{scnc}, which contains 4.7K news articles classified into 6 classes: Sports, Politics, Culture, Economy, Health and World. We do not use the \textit{Culture} category, since it contains a significantly smaller number of samples.
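The same classification fine-tuning setup described above for sentiment analysis is reused for this task. A minimal sketch with the \texttt{Hugging Face Transformers} \texttt{Trainer} API follows; the model id, output directory and data handling are placeholders rather than our exact script.
\begin{verbatim}
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

tok = AutoTokenizer.from_pretrained("gerulata/slovakbert")  # assumed id
model = AutoModelForSequenceClassification.from_pretrained(
    "gerulata/slovakbert", num_labels=3)  # 3 sentiment classes
                                          # (5 news categories here)

def encode(batch):
    return tok(batch["text"], truncation=True)

args = TrainingArguments("text-classifier", learning_rate=1e-5)
# Trainer(model=model, args=args, train_dataset=...).train()
\end{verbatim}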
Unfortunately, no existing work has used this dataset for document classification, so no previously published results are available. We use the same set of baselines and LM fine-tuning as in the case of sentiment analysis, since both tasks are text classification tasks; see Section~\ref{sec:sa} for more details. \subsubsection{Results} The main results of our experiment are shown in Table~\ref{tab:dc-results}. We can see that the LMs are again the best performing approach. In this case, the results are quite similar, with \textit{SlovakBERT} being the best by a narrow margin. The baselines achieved noticeably worse results. Note that our sentence embedding model has the worst results on this task, while it had competitive performance in sentiment classification. We theorize that the sentence embedding model was trained on sentences and is therefore less capable of handling the longer texts that are typical for the dataset used here. \begin{table}[] \centering \begin{tabular}{l|r} \textbf{Model} & \textbf{F1} \\ \hline TF-IDF & 0.953 \\ fastText & 0.963 \\ fastText w/ emb. & 0.963 \\ STS embeddings & 0.935 \\ \hline WikiBERT & 0.935 \\ MBERT & 0.985 \\ XLM-R-Base & 0.987 \\ XLM-R-Large & 0.985 \\ \hline SlovakBERT & \textbf{0.990} \\ \end{tabular} \caption{Macro-F1 scores for the document classification task.} \label{tab:dc-results} \end{table} \section{Conclusions}\label{sec:concl} We have trained and published \textit{SlovakBERT} -- the first large-scale transformers-based Slovak masked language model, trained using 19.35GB of Web-crawled Slovak text. We evaluated this model on multiple NLP tasks. We conclude that \textit{SlovakBERT} achieves state-of-the-art results on these tasks. We also release the fine-tuned models for the Slovak community. Existing multilingual models can achieve comparable results on some tasks; however, they are less efficient memory-wise and/or compute-wise. \bibliographystyle{apalike}
\section{Introduction} We study output trajectory tracking for a class of nonlinear reaction diffusion equations such that a prescribed performance of the tracking error is achieved. To this end, we utilize the method of funnel control, which was developed in~\cite{IlchRyan02b}; see also the survey~\cite{IlchRyan08}. The funnel controller is a model-free output-error feedback of high-gain type. Therefore, it is inherently robust and of striking simplicity. The funnel controller has been successfully applied e.g.\ in temperature control of chemical reactor models~\cite{IlchTren04}, control of industrial servo-systems~\cite{Hack17} and underactuated multibody systems~\cite{BergOtto19}, speed control of wind turbine systems~\cite{Hack14,Hack15b,Hack17}, current control for synchronous machines~\cite{Hack15a,Hack17}, DC-link power flow control~\cite{SenfPaug14}, voltage and current control of electrical circuits~\cite{BergReis14a}, oxygenation control during artificial ventilation therapy~\cite{PompAlfo14}, control of peak inspiratory pressure~\cite{PompWeye15} and adaptive cruise control~\cite{BergRaue18}. A funnel controller for a large class of systems described by functional differential equations with arbitrary (well-defined) relative degree has been developed in~\cite{BergHoan18}. It is shown in~\cite{BergPuch19b} that this abstract class indeed allows for fairly general infinite-dimensional systems, where the internal dynamics are modeled by a partial differential equation (PDE). In particular, it was shown in~\cite{BergPuch19} that the linearized model of a moving water tank, where sloshing effects appear, belongs to the aforementioned system class. On the other hand, not even every linear, infinite-dimensional system has a well-defined relative degree, in which case the results as in~\cite{BergHoan18,IlchRyan02b} cannot be applied. Instead, the feasibility of funnel control has to be investigated directly for the (nonlinear) closed-loop system; see~\cite{ReisSeli15b} for a boundary controlled heat equation and~\cite{PuchReis19} for a general class of boundary control systems. The nonlinear reaction diffusion system that we consider in the present paper is known as the monodomain model and represents defibrillation processes of the human heart~\cite{Tung78}. The monodomain equations are a reasonable simplification of the well-accepted bidomain equations, which arise in cardiac electrophysiology~\cite{SundLine07}. In the monodomain model the dynamics are governed by a parabolic reaction diffusion equation which is coupled with a linear ordinary differential equation that models the ionic current. It is discussed in~\cite{KuniNagaWagn11} that, under certain initial conditions, reentry phenomena and spiral waves may occur. From a medical point of view, these situations can be interpreted as fibrillation processes of the heart that should be terminated by an external control, for instance by applying an external stimulus to the heart tissue, see~\cite{NagaKuniPlan13}. The present paper is organized as follows: In Section~\ref{sec:mono_main} we introduce the mathematical framework, which strongly relies on preliminaries on Neumann elliptic operators. The control objective is presented in Section~\ref{sec:mono_controbj}, where we also state the main result on the feasibility of the proposed controller design in Theorem~\ref{thm:mono_funnel}.
The proof of this result is given in Section~\ref{sec:mono_proof_mt} and it uses several auxiliary results derived in Appendices~\ref{sec:mono_prep_proof} and~\ref{sec:mono_prep_proof2}. We illustrate our result by a simulation in Section~\ref{sec:numerics}. \textbf{Nomenclature}. The set of bounded operators from $X$ to $Y$ is denoted by $\mathcal{L}(X,Y)$, $X'$ stands for the dual of a~Banach space $X$, and $B'$ is the dual of an operator $B$.\\ For a bounded and measurable set $\Omega\subset{\mathbb{R}}^d$, $p\in[1,\infty]$ and $k\in{\mathbb{N}}_0$, $W^{k,p}(\Omega;{\mathbb{R}}^n)$ denotes the Sobolev space of equivalence classes of $p$-integrable and $k$-times weakly differentiable functions $f:\Omega\to{\mathbb{R}}^n$, $W^{k,p}(\Omega;{\mathbb{R}}^n)\cong (W^{k,p}(\Omega))^n$, and the Lebesgue space of equivalence classes of $p$-integrable functions is $L^p(\Omega)=W^{0,p}(\Omega)$. For $r\in(0,1)$ we further set \[ W^{r,p}(\Omega) := \setdef{f\in L^p(\Omega)}{ \left( (x,y)\mapsto \frac{|f(x)-f(y)|}{|x-y|^{d/p+r}}\right) \in L^p(\Omega\times\Omega)}. \] For a domain $\Omega$ with smooth boundary, $W^{k,p}(\partial\Omega)$ denotes the Sobolev space at the boundary.\\ We identify functions with their restrictions, that is, for instance, if $f\in L^p(\Omega)$ and $\Omega_0\subset \Omega$, then the restriction $f|_{\Omega_0}\in L^p(\Omega_0)$ is again denoted by~$f$. For an interval $J\subset{\mathbb{R}}$, a Banach space $X$ and $p\in[1,\infty]$, we denote by $L^p(J;X)$ the vector space of equivalence classes of strongly measurable functions $f:J\to X$ such that $\|f(\cdot)\|_X\in L^p(J)$. Note that if $J=(a,b)$ for $a,b\in{\mathbb{R}}$, the spaces $L^p((a,b);X)$, $L^p([a,b];X)$, $L^p([a,b);X)$ and $L^p((a,b];X)$ coincide, since the points at the boundary have measure zero. We will simply write $L^p(a,b;X)$, also for the case $a=-\infty$ or $b=\infty$. We refer to \cite{Adam75} for further details on Sobolev and Lebesgue spaces.\\ In the following, let $J\subset{\mathbb{R}}$ be an interval, $X$ be a Banach space and $k\in{\mathbb{N}}_0$. Then $C^k(J;X)$ is defined as the space of $k$-times continuously differentiable functions $f:J\to X$. The space of bounded $k$-times continuously differentiable functions with bounded first $k$ derivatives is denoted by $BC^k(J;X)$, and it is a Banach space endowed with the usual supremum norm. The space of bounded and uniformly continuous functions will be denoted by $BUC(J;X)$. The Banach space of H\"older continuous functions $C^{0,r}(J;X)$ with $r\in(0,1)$ is given by \begin{align*} C^{0,r}(J;X)&:=\setdef{f\in BC(J;X)}{[f]_{r}:=\sup_{t,s\in J,s<t}\frac{\|f(t)-f(s)\|}{(t-s)^r}<\infty},\\ \|f\|_r&:=\|f\|_\infty+[f]_{r}, \end{align*} see \cite[Chap.~0]{Luna95}. We note that for all $0<r<q<1$ we have that \[ C^{0,q}(J;X) \subseteq C^{0,r}(J;X) \subseteq BUC(J;X). \] For $p\in[1,\infty]$, the symbol $W^{1,p}(J;X)$ stands for the Sobolev space of $X$-valued equivalence classes of weakly differentiable and $p$-integrable functions $f:J\to X$ with $p$-integrable weak derivative, i.e., $f,\dot{f}\in L^p(J;X)$. Here, integration (and thus weak differentiation) has to be understood in the Bochner sense, see~\cite[Sec.~5.9.2]{Evan10}. The spaces $L^p_{\rm loc}(J;X)$ and $W^{1,p}_{\rm loc}(J;X)$ consist of all $f$ whose restrictions to any compact interval $K\subset J$ are in $L^p(K;X)$ or $W^{1,p}(K;X)$, respectively. \section{The FitzHugh-Nagumo model} \label{sec:mono_main} Throughout this paper we will frequently use the following assumption.
For $d\in{\mathbb{N}}$ we denote the scalar product in $L^2(\Omega;{\mathbb{R}}^d)$ by $\scpr{\cdot}{\cdot}$ and the norm in $L^2(\Omega)$ by $\|\cdot\|$. \begin{Ass}\label{Ass1} Let $d\le 3$ and $\Omega\subset {\mathbb{R}}^d$ be a bounded domain with Lipschitz boundary $\partial\Omega$. Further, let $D\in L^\infty(\Omega;{\mathbb{R}}^{d\times d})$ be symmetric-valued and satisfy the \emph{ellipticity condition} \begin{equation} \exists\,\delta>0:\ \ \text{for a.e.}\,\zeta\in\Omega\ \forall\, \xi\in{\mathbb{R}}^d:\ \xi^\top D(\zeta) \xi = \sum_{i,j=1}^d D_{ij}(\zeta)\xi_i\xi_j\geq\delta \|\xi\|_{{\mathbb{R}}^d}^2.\label{eq:ellcond} \end{equation} \end{Ass} To formulate the model of interest, we consider the sesquilinear form \begin{equation} \mathfrak{a}:W^{1,2}(\Omega)\times W^{1,2}(\Omega)\to{\mathbb{R}},\ (z_1,z_2)\mapsto\scpr{\nabla z_1}{D\nabla z_2}.\label{eq:sesq} \end{equation} We can associate a linear operator with $\mathfrak{a}$. \begin{Prop}\label{prop:Aop} Let Assumption~\ref{Ass1} hold. Then there exists exactly one operator $\mathcal{A}:\mathcal{D}(\mathcal{A})\subset L^2(\Omega)\to L^2(\Omega)$ with \[ \mathcal{D}(\mathcal{A})=\setdef{\!\!z_2\in W^{1,2}(\Omega)\!\!}{\exists\, y_2\in L^2(\Omega)\ \forall\, z_1\in W^{1,2}(\Omega):\,\mathfrak{a}(z_1,z_2)=-\scpr{z_1}{y_2}\!\!}, \] and \[ \forall\, z_1\in W^{1,2}(\Omega)\ \forall\, z_2\in \mathcal{D}(\mathcal{A}):\ \mathfrak{a}(z_1,z_2)=-\scpr{z_1}{\mathcal{A} z_2}. \] We call $\mathcal{A}$ the {\em Neumann elliptic operator on $\Omega$ associated to $D$}. The operator $\mathcal{A}$ is closed, self-adjoint, and $\mathcal{D}(\mathcal{A})$ is dense in $W^{1,2}(\Omega)$. \end{Prop} \begin{proof} Existence, uniqueness and closedness of $\mathcal{A}$ as well as the density of $\mathcal{D}(\mathcal{A})$ in $W^{1,2}(\Omega)$ follow from Kato's First Representation Theorem~\cite[Sec.~VI.2, Thm~2.1]{Kato80}, whereas self-adjointness is an immediate consequence of the property $\mathfrak{a}(z_1,z_2)={\mathfrak{a}(z_2,z_1)}$ for all $z_1,z_2\in W^{1,2}(\Omega)$. \end{proof} Note that the operator $\mathcal{A}$ in Proposition~\ref{prop:Aop} is well-defined, independent of any further smoothness requirements on $\partial\Omega$. In particular, the classical Neumann boundary trace, i.e., the derivative of a function in the direction of the outward normal unit vector $\nu:\partial\Omega\to{\mathbb{R}}^d$, does not need to exist. However, if $\partial\Omega$ and the coefficient matrix $D$ are sufficiently smooth, then \[ \mathcal{A} z={\rm div}\, D\nabla z,\quad z\in\mathcal{D}(\mathcal{A})=\setdef{z\in W^{2,2}(\Omega) }{ (\nu^\top\cdot D\nabla z)|_{\partial\Omega}=0}, \] see \cite[Thm.~2.2.2.5]{Gris85}. This justifies calling $\mathcal{A}$ a Neumann elliptic operator. We collect some further properties of such operators in Appendix~\ref{sec:neum_lapl}.
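To fix ideas, we briefly mention a classical special case (stated here for illustration only): for $d=1$, $\Omega=(0,1)$ and $D\equiv1$, the operator of Proposition~\ref{prop:Aop} is the Neumann Laplacian, i.e., \[ \mathcal{A} z=z'',\quad \mathcal{D}(\mathcal{A})=\setdef{z\in W^{2,2}(0,1)}{z'(0)=z'(1)=0}, \] and $-\mathcal{A}$ has the eigenvalues $\alpha_k=(k\pi)^2$, $k\in{\mathbb{N}}_0$, with orthonormal eigenfunctions $\theta_0\equiv1$ and $\theta_k(\zeta)=\sqrt{2}\cos(k\pi\zeta)$ for $k\in{\mathbb{N}}$. Eigenfunction expansions of this type underlie the spectral approximation used in the proof of Theorem~\ref{thm:mono_funnel} below.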
Now we are in a position to introduce the model for the electrical activity in the cells, namely \begin{equation}\label{eq:FHN_model} \begin{aligned} \dot v(t)&=\mathcal{A} v(t)+p_3(v(t))-u(t)+I_{s,i}(t)+\mathcal{B} I_{s,e}(t),\quad&v(0)&=v_0,\\ \dot u(t)&=c_5v(t)-c_4u(t),&u(0)&=u_0,\\ y(t) &= \mathcal{B}'v(t), \end{aligned} \end{equation} where \[p_3(v):=-c_1v+c_2v^2-c_3v^3,\] with constants $c_i>0$ for $i=1,\dots,5$, initial values $v_0,u_0\in L^2(\Omega)$, the Neumann elliptic operator $\mathcal{A}:\mathcal{D}(\mathcal{A})\subseteq L^2(\Omega)\to L^2(\Omega)$ on $\Omega$ associated to $D\in L^\infty(\Omega;\mathbb R^{d\times d})$ and control operator $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)')$, where $W^{1,2}(\Omega)'$ is the dual of $W^{1,2}(\Omega)$ with respect to the pivot space $L^{2}(\Omega)$; consequently, $\mathcal{B}'\in\mathcal{L}(W^{1,2}(\Omega),{\mathbb{R}}^m)$. System~\eqref{eq:FHN_model} is known as the FitzHugh-Nagumo model for the ionic current~\cite{Fitz61}, where \[I_{ion}(u,v)=p_3(v)-u.\] The functions $I_{s,i}\in L^2_{{\rm loc}}(0,T;L^2(\Omega))$, $I_{s,e}\in L^2_{{\rm loc}}(0,T;{\mathbb{R}}^m)$, with $T\in(0,\infty]$ as in Definition~\ref{def:solution} below, are the intracellular and extracellular stimulation currents, respectively. In particular, $I_{s,e}$ is the control input of the system, whereas $y$ is the output.\\ Next we introduce the solution concept. \begin{Def}\label{def:solution} Let Assumption~\ref{Ass1} hold and $\mathcal{A}$ be a Neumann elliptic operator on $\Omega$ associated to $D$ (see Proposition~\ref{prop:Aop}), let $\mathcal{B}\in \mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)')$, and $u_0,v_0\in L^2(\Omega)$ be given. Further, let $T\in(0,\infty]$ and $I_{s,i}\in L^2_{{\rm loc}}(0,T;L^2(\Omega))$, $I_{s,e}\in L^2_{{\rm loc}}(0,T;{\mathbb{R}}^m)$. A triple of functions $(u,v,y)$ is called \emph{solution} of~\eqref{eq:FHN_model} on $[0,T)$, if \begin{enumerate}[(i)] \item $v\in L^2(0,T;W^{1,2}(\Omega))\cap C([0,T);L^{2}(\Omega))$ with $v(0)=v_0$; \item $u\in C([0,T);L^{2}(\Omega))$ with $u(0)=u_0$; \item for all $\chi\in L^2(\Omega)$, $\theta\in W^{1,2}(\Omega)$, the scalar functions $t\mapsto\scpr{u(t)}{\chi}$, $t\mapsto\scpr{v(t)}{\theta}$ are weakly differentiable on $[0,T)$, and for almost all $t\in (0,T)$ we have \begin{equation}\label{eq:solution} \begin{aligned} {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}}\scpr{v(t)}{\theta}&=-\mathfrak{a}(v(t),\theta)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{\theta}+\scpr{I_{s,e}(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}} \scpr{u(t)}{\chi}&=\scpr{c_5v(t)-c_4u(t)}{\chi},\\ y(t)&=\mathcal{B}'v(t), \end{aligned} \end{equation} where $\mathfrak{a}:W^{1,2}(\Omega)\times W^{1,2}(\Omega)\to{\mathbb{R}}$ is the sesquilinear form defined in \eqref{eq:sesq}. \end{enumerate} \end{Def} \begin{Rem}\label{rem:openloop} \hspace{1em} \begin{enumerate}[a)] \item\label{rem:openloop1} Weak differentiability of $t\mapsto\scpr{u(t)}{\chi}$, $t\mapsto\scpr{v(t)}{\theta}$ for all $\chi\in L^2(\Omega)$, $\theta\in W^{1,2}(\Omega)$ on $(0,T)$ further leads to $v\in W^{1,2}(0,T;W^{1,2}(\Omega)')$ and $u\in W^{1,2}(0,T;L^{2}(\Omega))$. \item\label{rem:openloop2} The Sobolev Embedding Theorem \cite[Thm.~5.4]{Adam75} implies that the inclusion map $W^{1,2}(\Omega)\hookrightarrow L^6(\Omega)$ is bounded. This guarantees that $p_3(v)\in L^2(0,T;L^2(\Omega))$, whence the first equation in \eqref{eq:solution} is well-defined. \item\label{rem:openloop3} Let $w\in L^2(\Omega)$.
An input operator of the form $\mathcal{B} u=u\cdot w$ corresponds to a distributed input, and we have $\mathcal{B}\in\mathcal{L}({\mathbb{R}},L^{2}(\Omega))$. In this case, the output is given by \[y(t)=\int_\Omega w(\xi)\cdot(v(t))(\xi)d\xi.\] A typical situation is that $w$ is an indicator function on a subset of $\Omega$; such choices have been considered in~\cite{KuniSouz18} for instance. \item\label{rem:openloop4} Let $w\in L^2(\partial\Omega)$. An input operator with \begin{equation}\mathcal{B}' z=\int_{\partial\Omega} w(\xi)\cdot z(\xi)d\sigma\label{eq:neumanncontr}\end{equation} corresponds to a~Neumann boundary control \[\nu(\xi)^\top\cdot (\nabla v(t))(\xi) =w(\xi)\cdot I_{s,e}(t),\quad \xi\in\partial\Omega.\] In this case, the output is given by a~weighted integral of the Dirichlet boundary values. More precisely, \[y(t)=\int_{\partial\Omega} w(\xi)\cdot(v(t))(\xi)d\sigma.\] Note that $\mathcal{B}'$ is the composition of the trace operator \[{\rm tr}: z\mapsto \left. z\right|_{\partial\Omega}\] and the inner product in $L^2(\partial\Omega)$ with respect to $w$. The trace operator satisfies ${\rm tr}\in \mathcal{L}(W^{1/2+\varepsilon,2}(\Omega),L^2(\partial\Omega))$ for all $\varepsilon>0$ by the Trace Theorem \cite[Thm.~1.39]{Yagi10}. In particular, ${\rm tr}\in \mathcal{L}(W^{1,2}(\Omega),L^2(\partial\Omega))$, which implies that $\mathcal{B}'\in \mathcal{L}(W^{1,2}(\Omega),{\mathbb{R}})$ and $\mathcal{B}\in \mathcal{L}({\mathbb{R}},W^{1,2}(\Omega)')$. \end{enumerate} \end{Rem} \section{Control objective}\label{sec:mono_controbj} The objective is that the output~$y$ of the system~\eqref{eq:FHN_model} tracks a given reference signal $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$ with a prescribed performance of the tracking error $e:= y- y_{\rm ref}$, that is, $e$ evolves within the performance funnel \[ \mathcal{F}_\varphi := \setdef{ (t,e)\in[0,\infty)\times{\mathbb{R}}^m}{ \varphi(t) \|e\|_{{\mathbb{R}}^m} < 1} \] defined by a function~$\varphi$ belonging to \[ \Phi_\gamma:=\setdef{\varphi\in W^{1,\infty}(0,\infty;{\mathbb{R}}) }{\varphi|_{[0,\gamma]}\equiv0,\ \forall\delta>0, \inf_{t>\gamma+\delta}\varphi(t)>0 }, \] for some $\gamma>0$. \begin{figure}[h] \begin{minipage}{0.45\textwidth} \begin{tikzpicture}[domain=0.001:4,scale=1.2] \fill[color=blue!20,domain=0.47:4] (0,0)-- plot (\x,{min(2.2,1/\x+2*exp(-3))})--(4,0)-- (0,0); \fill[color=blue!20] (0,0) -- (0,2.2) -- (0.47,2.2) -- (0.47,0) -- (0,0); \draw[->] (-0.2,0) -- (4.3,0) node[right] {$t$}; \draw[->] (0,-0.2) -- (0,2.5) node[above] {}; \draw[color=blue,domain=0.47:4] plot (\x,{min(2.2,1/\x+2*exp(-3))}) node[above] {$1/\varphi(t)$}; \draw[smooth,color=red,style=thick] (1,0) node[below] {$\|e(t)\|_{{\mathbb{R}}^m}$} plot coordinates{(0,0.8)(0.5,1.2)(1,0)}-- plot coordinates{(1,0)(1.25,0.6)(2,0.2)(3,0.3)(4,0.05)} ; \end{tikzpicture} \caption{Error evolution in a funnel $\mathcal F_{\varphi}$ with boundary~$1/\varphi(t)$.} \label{Fig:monodomain_funnel} \end{minipage} \quad \begin{minipage}{0.5\textwidth} The situation is illustrated in Fig.~\ref{Fig:monodomain_funnel}. The funnel boundary given by~$1/\varphi$ is unbounded in a small interval $[0,\gamma]$ to allow for an arbitrary initial tracking error. Since $\varphi$ is bounded, there exists $\lambda>0$ such that $1/\varphi(t) \ge \lambda$ for all $t>0$. Thus, we seek practical tracking with arbitrarily small accuracy $\lambda>0$, but asymptotic tracking is not required in general.
\end{minipage} \end{figure} The funnel boundary is not necessarily monotonically decreasing; in most situations, however, it is convenient to choose a monotonically decreasing funnel boundary. Sometimes, widening the funnel over some later time interval might be beneficial, for instance in the presence of periodic disturbances or strongly varying reference signals. For typical choices of funnel boundaries see e.g.~\cite[Sec.~3.2]{Ilch13}.\\ A controller which achieves the control objective described above is the funnel controller. In the present paper, it suffices to restrict ourselves to the simple version developed in~\cite{IlchRyan02b}, which is the feedback law \begin{equation}\label{eq:monodomain_funnel_controller} I_{s,e}(t)=-\frac{k_0}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}'v(t)-y_{\rm ref}(t)), \end{equation} where $k_0>0$ is some constant used for scaling and agreement of physical units. Note that, by $\varphi|_{[0,\gamma]}\equiv0$, the controller satisfies \[\forall\, t\in[0,\gamma]:\ I_{s,e}(t)=-k_0(\mathcal{B}'v(t)-y_{\rm ref}(t)).\] We are interested in solutions of the closed-loop system~\eqref{eq:FHN_feedback} introduced below, which leads to the following weak solution framework. \begin{Def}\label{def:solution_feedback} Let the assumptions of Definition~\ref{def:solution} hold. Furthermore, let $k_0>0$, $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$, $\gamma>0$ and $\varphi\in\Phi_\gamma$. A triple of functions $(u,v,y)$ is called \emph{solution} of system~\eqref{eq:FHN_model} with feedback~\eqref{eq:monodomain_funnel_controller} on $[0,T)$, if $(u,v,y)$ satisfies the conditions~(i)--(iii) from Definition~\ref{def:solution} with $I_{s,e}$ as in~\eqref{eq:monodomain_funnel_controller}. \end{Def} \begin{Rem}\ \begin{enumerate}[a)] \item Inserting the feedback law~\eqref{eq:monodomain_funnel_controller} into the system~\eqref{eq:FHN_model}, we obtain the closed-loop system \begin{equation}\label{eq:FHN_feedback} \begin{aligned} \dot v(t)&=\mathcal{A} v(t)+p_3(v(t))-u(t)+I_{s,i}(t) -\frac{k_0 \mathcal{B} (\mathcal{B}' v(t)-y_{\rm ref}(t))}{1-\varphi(t)^2\|\mathcal{B}'v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}},\\ \dot u(t)&=c_5v(t)-c_4u(t). \end{aligned} \end{equation} Consequently, $(u,v,y)$ is a {solution} of~\eqref{eq:FHN_model},~\eqref{eq:monodomain_funnel_controller} (resp.~\eqref{eq:FHN_feedback}) if, and only if, \begin{enumerate}[(i)] \item $v\in L^2(0,T;W^{1,2}(\Omega))\cap C([0,T);L^{2}(\Omega))$ with $v(0)=v_0$; \item $u\in C([0,T);L^{2}(\Omega))$ with $u(0)=u_0$; \item for all $\chi\in L^2(\Omega)$, $\theta\in W^{1,2}(\Omega)$, the scalar functions $t\mapsto\scpr{u(t)}{\chi}$, $t\mapsto\scpr{v(t)}{\theta}$ are weakly differentiable on $[0,T)$, and it holds that, for almost all $t\in (0,T)$, \begin{equation}\label{eq:solution_cl} \begin{aligned} {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}}\scpr{v(t)}{\theta}&=-\mathfrak{a}(v(t),\theta)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{\theta}\\&\qquad-{\frac{k_0 \scpr{\mathcal{B}'v(t)-y_{\rm ref}(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m}}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}},\\[2mm] {\textstyle\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}}\scpr{u(t)}{\chi}&=\scpr{c_5v(t)-c_4u(t)}{\chi},\\ y(t)&=\mathcal{B}'v(t). \end{aligned} \end{equation} \end{enumerate} The system~\eqref{eq:FHN_feedback} is a nonlinear and non-autonomous PDE and any solution needs to satisfy that the tracking error evolves in the prescribed performance funnel~$\mathcal{F}_\varphi$.
Therefore, existence and uniqueness of solutions is a nontrivial problem and even if a solution exists on a finite time interval $[0,T)$, it is not clear that it can be extended to a global solution. \item For global solutions it is desirable that $I_{s,e}\in L^\infty(\delta,\infty;{\mathbb{R}}^m)$ for all $\delta>0$. Note that this is equivalent to \[\limsup_{t\to\infty}\varphi(t)^2\|\mathcal{B}'v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}<1.\] It is also desirable that $y$ and $I_{s,e}$ have a certain smoothness. \end{enumerate} \end{Rem} In the following we state the main result of the present paper. We will show that the closed-loop system~\eqref{eq:FHN_feedback} has a unique global solution and that all signals remain bounded. Furthermore, the tracking error stays uniformly away from the funnel boundary. We further show that we obtain more regularity of the solution if $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ for some $r\in [0,1)$ or even $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega))$. Recall that $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ if, and only if, $\mathcal{B}'\in\mathcal{L}(W^{r,2}(\Omega),{\mathbb{R}}^m)$. Furthermore, for any $r\in(0,1)$ we have the inclusions \[ \mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)) \subset \mathcal{L}({\mathbb{R}}^m,L^2(\Omega)) \subset \mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)') \subset \mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega)'). \] \begin{Thm}\label{thm:mono_funnel} Let the assumptions of Definition~\ref{def:solution_feedback} hold. Furthermore, assume that $\ker\mathcal{B}=\{0\}$ and $I_{s,i}\in L^\infty(0,\infty;L^2(\Omega))$. Then there exists a unique solution of~\eqref{eq:FHN_feedback} on $[0,\infty)$ and we have \begin{enumerate}[(i)] \item $u,\dot{u},v\in BC([0,\infty);L^2(\Omega))$; \item for all $\delta>0$ we have \begin{align*} v&\in BUC([\delta,\infty);W^{1,2}(\Omega))\cap C^{0,1/2}([\delta,\infty);L^{2}(\Omega)),\\ y,I_{s,e}&\in BUC([\delta,\infty);{\mathbb{R}}^m); \end{align*} \item $\exists\,\varepsilon_0>0\ \forall\,\delta>0\ \forall\, t\geq\delta:\ \varphi(t)^2\|\mathcal{B}'v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon_0.$ \end{enumerate} Furthermore, \begin{enumerate}[a)] \item if additionally $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ for some $r\in (0,1)$, then for all $\delta>0$ we have that \begin{align*} v\in C^{0,1-r/2}([\delta,\infty);L^{2}(\Omega)),\quad y,I_{s,e}\in C^{0,1-r}([\delta,\infty);{\mathbb{R}}^m). \end{align*} \item if additionally $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,L^2(\Omega))$, then for all $\delta>0$ and all $\lambda\in(0,1)$ we have \begin{align*} v\in C^{0,\lambda}([\delta,\infty);L^{2}(\Omega)),\quad y,I_{s,e}\in C^{0,\lambda}([\delta,\infty);{\mathbb{R}}^m). \end{align*} \item if additionally $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega))$, then for all $\delta>0$ we have $y,I_{s,e}\in W^{1,\infty}(\delta,\infty;{\mathbb{R}}^m)$. \end{enumerate} \end{Thm} \begin{Rem}\label{rem:main} \hspace{1em} \begin{enumerate}[a)] \item\label{rem:main1} The condition $\ker \mathcal{B}=\{0\}$ is equivalent to $\im \mathcal{B}'$ being dense in ${\mathbb{R}}^m$. The latter is equivalent to $\im \mathcal{B}'={\mathbb{R}}^m$ by the finite-dimensionality of ${\mathbb{R}}^m$.\\ Note that surjectivity of $\mathcal{B}'$ is mandatory for tracking control, since it is necessary that any reference signal $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$ can actually be generated by the output $y(t)=\mathcal{B}' v(t)$.
This property is sometimes called \emph{right-invertibility}, see e.g.~\cite[Sec.~8.2]{TrenStoo01}. \item\label{rem:main2} If the input operator corresponds to Neumann boundary control, i.e., $\mathcal{B}$ is as in~\eqref{eq:neumanncontr} for some $w\in L^2(\partial\Omega)$, then $\mathcal{B}\in\mathcal{L}({\mathbb{R}},W^{r,2}(\Omega)')$ for some $r\in(1/2,1)$, cf.\ Remark~\ref{rem:openloop}\,\ref{rem:openloop4}), and the assertions of Theorem~\ref{thm:mono_funnel}\,a) hold. \item\label{rem:main3} If the input operator corresponds to distributed control, that is $\mathcal{B} u=u\cdot w$ for some $w\in L^2(\Omega)$, then $\mathcal{B}\in\mathcal{L}({\mathbb{R}},L^2(\Omega))$, cf.\ Remark~\ref{rem:openloop}\,\ref{rem:openloop3}), and the assertions of Theorem~\ref{thm:mono_funnel}\,b) hold. \end{enumerate} \end{Rem} \section{Proof of Theorem~\ref{thm:mono_funnel}} \label{sec:mono_proof_mt} The proof is inspired by the results of~\cite{Jack90} on existence and uniqueness for the (uncontrolled) FitzHugh-Nagumo equations, which are based on a spectral approximation and subsequent convergence proofs using arguments from~\cite{Lion69}. We divide the proof into two major parts. First, we show that there exists a unique solution on the interval $[0,\gamma]$. After that we show that the solution also exists on $(\gamma,\infty)$, is continuous at $t=\gamma$ and has the desired properties. \subsection{Solution on $[0,\gamma]$} \label{ssec:mono_proof_tleqgamma} For $t\in[0,\gamma]$ we have $\varphi(t)=0$, so we need to show existence of a pair of functions $(v,u)$ with the properties as in Definition~\ref{def:solution}~(i)--(iii), where~\eqref{eq:solution} simplifies to \begin{equation}\label{eq:weak_uv_delta} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v(t)}{\theta}&=-\mathfrak{a}(v(t),\theta)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{\theta}+\scpr{I_{s,e}(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{u(t)}{\chi}&=\scpr{c_5 v(t)-c_4u(t)}{\chi},\\ I_{s,e}(t)&=-k_0(\mathcal{B}' v(t)-y_{\rm ref}(t)),\\ y(t)&= \mathcal{B}' v(t). \end{aligned} \end{equation} Recall that $\mathfrak{a}:W^{1,2}(\Omega)\times W^{1,2}(\Omega)\to{\mathbb{R}}$ is the sesquilinear form \eqref{eq:sesq}. \emph{Step 1: We show existence and uniqueness of a solution.}\\ \emph{Step 1a: We show existence of a local solution on $[0,\gamma]$.} To this end, let $(\theta_i)_{i\in{\mathbb{N}}_0}$ be the eigenfunctions of $-\mathcal{A}$ and $\alpha_i$ be the corresponding eigenvalues, with $\alpha_i\geq0$ for all $i\in{\mathbb{N}}_0$. Recall that $(\theta_i)_{i\in{\mathbb{N}}_0}$ form an orthonormal basis of $L^2(\Omega)$ by Proposition~\ref{prop:Aop_n}\,\ref{item:Aop5}). Hence, with $a_i := \scpr{v_0}{\theta_i}$ and $b_i := \scpr{u_0}{\theta_i}$ for $i\in{\mathbb{N}}_0$ and \[ v_0^n:= \sum_{i=0}^na_{i}\theta_i,\quad u_0^n:= \sum_{i=0}^nb_{i}\theta_i,\quad n\in{\mathbb{N}}, \] we have that $v^n_0\to v_0$ and $u^n_0\to u_0$ strongly in $L^2(\Omega)$.\\ Fix $n\in{\mathbb{N}}_0$ and let $\gamma_i:=\mathcal{B}'\theta_i$ for $i=0,\dots,n$.
Consider, for $j=0,\ldots,n$, the differential equations \begin{equation}\label{eq:muj-nuj-0gamma} \begin{aligned} \dot{\mu}_j(t)&=-\alpha_j\mu_j(t)-\nu_j(t)-\scpr{k_0\left(\sum_{i=0}^n\gamma_i\mu_i(t)-y_{\rm ref}(t)\right)}{\gamma_j}_{{\mathbb{R}}^m}+\scpr{I_{s,i}(t)}{\theta_j} \\ &\quad +\scpr{p_3\left(\sum_{i=0}^n\mu_i(t)\theta_i\right)}{\theta_j},\\ \dot{\nu}_j(t)&=-c_4\nu_j(t)+c_5\mu_j(t),\qquad\qquad \text{with}\ \mu_j(0)=a_j,\ \nu_j(0)=b_j, \end{aligned} \end{equation} defined on ${\mathbb{D}}:=[0,\infty)\times{\mathbb{R}}^{2(n+1)}$. Since the functions on the right-hand side of~\eqref{eq:muj-nuj-0gamma} are continuous, it follows from ODE theory, see e.g.~\cite[\S~10, Thm.~XX]{Walt98}, that there exists a weakly differentiable solution $(\mu^n,\nu^n)=(\mu_0,\dots,\mu_n,\nu_0,\dots,\nu_n):[0,T_n)\to{\mathbb{R}}^{2(n+1)}$ of~\eqref{eq:muj-nuj-0gamma} such that $T_n\in(0,\infty]$ is maximal. Furthermore, the closure of the graph of~$(\mu^n,\nu^n)$ is not a compact subset of ${\mathbb{D}}$.\\ Now, set $v_n(t):=\sum_{i=0}^n{\mu}_i(t)\theta_i$ and $u_n(t):=\sum_{i=0}^n{\nu}_i(t)\theta_i$. Invoking~\eqref{eq:muj-nuj-0gamma} and the orthonormality of the functions $\theta_j$, we find that $(v_n,u_n)$ satisfies, for $j=0,\dots,n$, \begin{equation}\label{eq:weak_delta} \begin{aligned} \scpr{\dot{v}_n(t)}{\theta_j}&=-\mathfrak{a}(v_n(t),\theta_j)- \scpr{u_n(t)}{\theta_j}+\scpr{p_3(v_n(t))}{\theta_j}+\scpr{I_{s,i}(t)}{\theta_j}\\ &\quad -\scpr{k_0(\mathcal{B}' v_n(t)-y_{\rm ref}(t))}{\mathcal{B}'\theta_j}_{{\mathbb{R}}^m},\\ \scpr{\dot{u}_n(t)}{\theta_j} &= -c_4\scpr{u_n(t)}{\theta_j}+c_5\scpr{v_n(t)}{\theta_j}. \end{aligned} \end{equation} \emph{Step 1b: We show boundedness of $(v_n,u_n)$.} Consider the Lyapunov function candidate \begin{equation}\label{eq:Lyapunov} V:L^2(\Omega)\times L^2(\Omega)\to{\mathbb{R}},\ (v,u)\mapsto\frac12(c_5\|v\|^2+\|u\|^2). \end{equation} Observe that, since $(\theta_i)_{i\in{\mathbb{N}}_0}$ are orthonormal, we have $\|v_n\|^2 = \sum_{j=0}^n \mu_j^2$ and $\|u_n\|^2 = \sum_{j=0}^n \nu_j^2$. Hence we find that, for all $t\in[0, T_n)$, \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(v_n(t),u_n(t)) &\stackrel{\eqref{eq:muj-nuj-0gamma}}{=} c_5\sum_{j=0}^n\mu_j(t)\dot{\mu}_j(t)+\sum_{j=0}^n\nu_j(t)\dot{\nu}_j(t)\\ &=-c_5\sum_{j=0}^{n}\alpha_j\mu_j(t)^2-c_4\sum_{j=0}^{n}\nu_j(t)^2\\ &\quad -c_5\scpr{k_0\left(\sum_{i=0}^n\gamma_i\mu_i(t)-y_{\rm ref}(t)\right)}{\sum_{j=0}^n\gamma_j\mu_j(t)}_{{\mathbb{R}}^m}\\ &\quad +c_5\scpr{p_3\left(v_n(t)\right)}{v_n(t)}+c_5\scpr{I_{s,i}(t)}{v_n(t)}, \end{align*} hence, omitting the argument~$t$ for brevity in the following, \begin{equation}\label{eq:Lyapunov_delta} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(v_n,u_n)=&-c_5\mathfrak{a}(v_n,v_n)-c_4\|u_n\|^2+c_5\scpr{I_{s,i}}{v_n}\\ &-c_5k_0\|\overline{e}_n\|_{{\mathbb{R}}^m}^2+c_5k_0\scpr{\overline{e}_n}{y_{\rm ref}}_{{\mathbb{R}}^m}+c_5\scpr{p_3(v_n)}{v_n}, \end{aligned} \end{equation} where \[\overline{e}_n(t):=\sum_{i=0}^n\gamma_i\mu_i(t)-y_{\rm ref}(t)=\mathcal{B}' v_n(t)-y_{\rm ref}(t).\] Before proceeding, recall Young's inequality for products, i.e., for $a,b\ge 0$ and $p,q> 1$ such that $1/p + 1/q = 1$ we have that \[ a b \le \frac{a^p}{p} + \frac{b^q}{q}, \] which will be frequently used in the following.
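In particular, for $a,b\ge0$ and any $\epsilon>0$, writing $ab=(\epsilon a)(\epsilon^{-1}b)$ and applying Young's inequality with $p=\tfrac43$ and $q=4$ yields the weighted form \[ ab\le\frac{3\epsilon^{4/3}}{4}\,a^{4/3}+\frac{b^4}{4\epsilon^4}, \] which is the version invoked (with suitable choices of $a$, $b$ and $\epsilon$) in the estimates that follow.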
Note that \begin{align*} \scpr{p_3(v_n)}{v_n}&=-c_1\|v_n\|^2+c_2\scpr{v_n^2}{v_n}-c_3\|v_n\|^4_{L^4},\\ c_2|\scpr{v_n^2}{v_n}|&=|\scpr{\epsilon v_n^3}{\epsilon^{-1}c_2}|\leq \frac{3\epsilon^{4/3}}{4}\|v_n\|^4_{L^4}+\frac{c_2^4}{4\epsilon^4}|\Omega|, \end{align*} where the latter follows from Young's inequality with $p=\tfrac43$ and $q=4$. Choosing $\epsilon=\left(\tfrac23 c_3\right)^{\tfrac34}$ we obtain \[\scpr{p_3(v_n)}{v_n}\leq \frac{27 c_2^4}{32 c_3^3}|\Omega|-c_1\|v_n\|^2-\frac{c_3}{2}\|v_n\|_{L^4}^4.\] Moreover, \[\scpr{\overline{e}_n}{y_{\rm ref}}_{{\mathbb{R}}^m}\leq\frac{1}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2+\frac{1}{2}\|y_{\rm ref}\|_{{\mathbb{R}}^m}^2\] and \[\scpr{I_{s,i}}{v_n}\leq \frac{c_1}{2}\|v_n\|^2+\frac{1}{2c_1}\|I_{s,i}\|^2,\] such that \eqref{eq:Lyapunov_delta} can be estimated by \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(v_n,u_n) \leq&-c_5\mathfrak{a}(v_n,v_n)-\frac{c_1c_5}{2}\|v_n\|^2-\frac{c_5k_0}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2-\frac{c_3 c_5}{2}\|v_n\|_{L^4}^4\\ &+\frac{k_0c_5}{2}\|y_{\rm ref}\|_{{\mathbb{R}}^m}^2+\frac{c_5}{2c_1}\|I_{s,i}\|^2+\frac{27 c_2^4 c_5}{32 c_3^3}|\Omega|\\ \leq&\ -c_5\mathfrak{a}(v_n,v_n)-\frac{c_1c_5}{2}\|v_n\|^2-\frac{c_5k_0}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2-\frac{c_3 c_5}{2}\|v_n\|_{L^4}^4\\ &+\frac{k_0c_5}{2}\|y_{\rm ref}\|_{\infty}^2+\frac{c_5}{2c_1}\|I_{s,i}\|^2_{2,\infty}+\frac{27 c_2^4 c_5}{32 c_3^3}|\Omega|, \end{align*} where $\|I_{s,i}\|_{2,\infty} = \esssup_{t\ge 0} \left( \int_\Omega |I_{s,i}(\zeta,t)|^2 \ds{\zeta}\right)^{1/2}$. Setting \[C_\infty:=\frac{k_0c_5}{2}\|y_{\rm ref}\|_{\infty}^2+\frac{c_5}{2c_1}\|I_{s,i}\|^2_{2,\infty}+\frac{27 c_2^4 c_5}{32 c_3^3}|\Omega|,\] we obtain that, for all $t\in[0, T_n)$, \begin{align*} &V(v_n(t),u_n(t))+c_5\int_0^t\mathfrak{a}(v_n(s),v_n(s))\ds{s}+\frac{c_1 c_5}{2}\int_0^t\|v_n(s)\|^2\ds{s}\\ &+\frac{c_5k_0}{2}\int_0^t\|\overline{e}_n(s)\|_{{\mathbb{R}}^m}^2\ds{s} +\frac{c_3 c_5}{2}\int_0^t\|v_n(s)\|_{L^4}^4\ds{s}\leq V(v_0^n,u_0^n)+C_\infty t. \end{align*} Since $(v_0^n,u_0^n)\to (v_0,u_0)$ strongly in $L^2(\Omega)$ and we have for all $p\in L^2(\Omega)$ that \[ \left\|\sum_{i=0}^n \scpr{p}{\theta_i} \theta_i\right\|^2=\sum_{i=0}^n \scpr{p}{\theta_i}^2\leq\sum_{i=0}^\infty \scpr{p}{\theta_i}^2=\left\|\sum_{i=0}^\infty \scpr{p}{\theta_i}\theta_i\right\|^2=\|p\|^2, \] it follows that, for all $t\in[0, T_n)$, \begin{equation}\label{eq:bound_1} \begin{aligned} &c_5\|v_n(t)\|^2+\|u_n(t)\|^2+ 2c_5\int_0^t\mathfrak{a}(v_n(s),v_n(s))\ds{s}+ {c_1 c_5}\int_0^t\|v_n(s)\|^2\ds{s} \\ &+ {c_5k_0}\int_0^t\|\overline{e}_n(s)\|_{{\mathbb{R}}^m}^2\ds{s} +{c_3 c_5}\int_0^t\|v_n(s)\|_{L^4}^4\ds{s}\leq 2C_\infty t+c_5\|v_0\|^2+\|u_0\|^2. \end{aligned} \end{equation} \emph{Step 1c: We show that $T_n = \infty$.} Assume that $T_n<\infty$, then it follows from~\eqref{eq:bound_1} together with~\eqref{eq:sesq} that $(v_n,u_n)$ is bounded, thus the solution $(\mu^n,\nu^n)$ of~\eqref{eq:muj-nuj-0gamma} is bounded on $[0,T_n)$. But this implies that the closure of the graph of $(\mu^n,\nu^n)$ is a compact subset of ${\mathbb{D}}$, a contradiction. Therefore, $T_n=\infty$ and in particular the solution is defined for all $t\in[0,\gamma]$.\\ \emph{Step 1d: We show convergence of $(v_n,u_n)$ to a solution of~\eqref{eq:weak_uv_delta} on $[0,\gamma]$.} First note that it follows from~\eqref{eq:bound_1} that \begin{equation}\label{eq:uv_bound_delta} \forall\, t\in[0,\gamma]:\ \|v_n(t)\|^2\leq C_v,\quad \|u_n(t)\|^2\leq C_u \end{equation} for some $C_v, C_u>0$; for instance, $C_u=2C_\infty\gamma+c_5\|v_0\|^2+\|u_0\|^2$ and $C_v=C_u/c_5$ will do.
From \eqref{eq:bound_1} and condition~\eqref{eq:ellcond} in Assumption~\ref{Ass1} it follows that there is a constant $C_\delta>0$ such that \[\int_0^\gamma\|\nabla v_n(t)\|^2\ds{t}\leq\delta^{-1}\int_0^\gamma\mathfrak{a}(v_n(t),v_n(t))\ds{t}\leq C_\delta.\] This together with~\eqref{eq:bound_1} and~\eqref{eq:uv_bound_delta} implies that there exist constants $C_1,C_2>0$ with \begin{equation}\label{eq:extra_bounds_delta} \|v_n\|^4_{L^4(0,\gamma;L^{4}(\Omega))}\leq C_1,\quad \|v_n\|_{L^2(0,\gamma;W^{1,2}(\Omega))}\leq C_2. \end{equation} Note that \eqref{eq:extra_bounds_delta} directly implies that \begin{equation}\label{eq:vn2} \begin{aligned}\|v_n^2\|^2_{L^2(0,\gamma;L^{2}(\Omega))}\leq&\, C_1,\\ \|v_n^3\|_{L^{4/3}(0,\gamma;L^{4/3}(\Omega))} =&\, \left( \|v_n^2\|^2_{L^2(0,\gamma;L^{2}(\Omega))}\right)^{3/4} \le C_1^{3/4}. \end{aligned}\end{equation} Multiplying the second equation in~\eqref{eq:weak_delta} by $\dot{\nu}_j$ and summing up over $j\in\{0,\ldots,n\}$ leads to \begin{align*} \|\dot{u}_n\|^2 &= -\frac{c_4}{2} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|u_n\|^2 + c_5 \scpr{v_n}{\dot{u}_n}\\ &\le -\frac{c_4}{2} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|u_n\|^2 + \frac{c_5^2}{2}\|v_n\|^2+\frac{1}{2}\|\dot{u}_n\|^2, \end{align*} thus \[\|\dot{u}_n\|^2\leq-c_4 \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|u_n\|^2+c_5^2\|v_n\|^2.\] Upon integration over $[0,\gamma]$ and using \eqref{eq:uv_bound_delta} this yields that \[\int_0^\gamma\|\dot{u}_n(t)\|^2\ds{t}\leq c_4 C_u + c_5^2\int_0^\gamma\|v_n(t)\|^2\ds{t} \le \hat{C}_3\] for some $\hat{C}_3>0$, where the last inequality is a consequence of~\eqref{eq:bound_1}. This together with \eqref{eq:uv_bound_delta} implies that there is $C_3>0$ such that $\|u_n\|_{W^{1,2}(0,\gamma;L^2(\Omega))}\leq C_3$. Now, let $P_n$ be the orthogonal projection of $L^2(\Omega)$ onto the subspace generated by the set $\setdef{\theta_i}{i=0,\dots,n}$. Consider the norm \[\|v\|_{W^{1,2}}=\left(\sum_{i=0}^\infty(1+\alpha_i)|\scpr{v}{\theta_i}|^2\right)^{1/2}\] on $W^{1,2}(\Omega)$ according to Proposition \ref{prop:Aop2_n} and Remark \ref{rem:X_alpha}. By duality we have that \[\|\hat v\|_{(W^{1,2})'}=\left(\sum_{i=0}^\infty(1+\alpha_i)^{-1}|\scpr{\hat v}{\theta_i}|^2\right)^{1/2}\] is a norm on $W^{1,2}(\Omega)'$, cf.~\cite[Prop.~3.4.8]{TucsWeis09}. Note that we can consider $P_n:W^{1,2}(\Omega)'\to W^{1,2}(\Omega)'$, which is a bounded linear operator with norm one, independent of $n$. Using this together with the fact that the injection from $L^2(\Omega)$ into $W^{1,2}(\Omega)'$ is continuous and $\mathcal{A}\in\mathcal{L}(W^{1,2}(\Omega),W^{1,2}(\Omega)')$, we can rewrite the weak formulation~\eqref{eq:weak_delta} as \begin{equation}\label{eq:approx_dual} \dot{v}_n=P_n\mathcal{A} v_n+P_np_3(v_n)-P_nu_n+P_nI_{s,i}-P_n\mathcal{B} k_0(\mathcal{B}' v_n-y_{\rm ref}). \end{equation} Since $v_n\in L^2(0,\gamma;W^{1,2}(\Omega))$ and hence, by the Sobolev Embedding Theorem, $v_n\in L^2(0,\gamma;L^p(\Omega))$ for all $2\leq p\leq6$, we find that $p_3(v_n)\in L^2(0,\gamma;L^2(\Omega))$.
We also have $\mathcal{A} v_n\in L^2(0,\gamma;W^{1,2}(\Omega)')$ and $\mathcal{B} k_0(\mathcal{B}' v_n-y_{\rm ref})\in L^2(0,\gamma;W^{1,2}(\Omega)')$ so that by using the previously derived estimates and~\eqref{eq:approx_dual}, there exists $C_4>0$ independent of $n$ and $t$ with \[\|\dot{v}_n\|_{L^2(0,\gamma;W^{1,2}(\Omega)')}\leq C_4.\] Now, by Lemma~\ref{lem:weak_convergence} we have that there exist subsequences of $(u_n)$, $(v_n)$ and $(\dot v_n)$, resp., again denoted in the same way, for which \begin{equation}\label{eq:convergence_subseq} \begin{aligned} u_n\to u&\in W^{1,2}(0,\gamma;L^2(\Omega))\mbox{ weakly},\\ u_n\to u&\in W^{1,\infty}(0,\gamma;L^{2}(\Omega))\mbox{ weak}^\star,\\ v_n\to v&\in L^2(0,\gamma;W^{1,2}(\Omega))\mbox{ weakly},\\ v_n\to v&\in L^\infty(0,\gamma;L^{2}(\Omega))\mbox{ weak}^\star,\\ v_n\to v&\in L^4(0,\gamma;L^{4}(\Omega))\mbox{ weakly},\\ \dot{v}_n\to \dot{v}&\in L^2(0,\gamma;W^{1,2}(\Omega)')\mbox{ weakly}. \end{aligned} \end{equation} Moreover, let $p_0=p_1=2$ and $X=W^{1,2}(\Omega)$, $Y=L^2(\Omega)$, $Z=W^{1,2}(\Omega)'$. Then, \cite[Chap.~1, Thm.~5.1]{Lion69} implies that \[W:=\setdef{u\in L^{p_0}(0,\gamma;X) }{ \dot{u}\in L^{p_1}(0,\gamma;Z) }\] with norm $\|u\|_{L^{p_0}(0,\gamma;X)}+\|\dot{u}\|_{L^{p_1}(0,\gamma;Z)}$ has a compact injection into $L^{p_0}(0,\gamma;Y)$, so that the weakly convergent sequence $v_n\to v\in W$ converges strongly in $L^2(0,\gamma;L^2(\Omega))$ by \cite[Lem.~1.6]{HinzPinn09}. Further, $(u(0),v(0))=(u_0,v_0)$ and by $u\in W^{1,2}(0,\gamma;L^2(\Omega))$, $v\in L^2(0,\gamma;W^{1,2}(\Omega))$ and $\dot{v}\in L^2(0,\gamma;W^{1,2}(\Omega)')$ it follows that $u,v\in C([0,\gamma];L^2(\Omega))$, see for instance \cite[Thm.~1.32]{HinzPinn09}. Moreover, note that $\mathcal{B}' v-y_{\rm ref}\in L^2(0,\gamma;{\mathbb{R}}^m)$. Hence, $(u,v)$ is a solution of \eqref{eq:FHN_feedback} on $[0,\gamma]$ and \begin{equation}\label{eq:strong_delta} \dot{v}(t)=\mathcal{A} v(t)+p_3(v(t))-u(t)+I_{s,i}(t)-\mathcal{B} k_0(\mathcal{B}' v(t)-y_{\rm ref}(t)) \end{equation} is satisfied in $W^{1,2}(\Omega)'$. Moreover, by~\eqref{eq:vn2},~\cite[Chap.~1, Lem.~1.3]{Lion69} and $v_n\to v$ in $L^4(0,\gamma;L^{4}(\Omega))$ we have that $v_n^3\to v^3$ weakly in $L^{4/3}(0,\gamma;L^{4/3}(\Omega))$ and $v_n^2\to v^2$ weakly in $L^2(0,\gamma;L^{2}(\Omega))$.\\ \emph{Step 1e: We show uniqueness of the solution $(v,u)$.} To this end, we separate the linear part of~$p_3$ so that \[p_3(v)=-c_1v-c_3\hat{p}_3(v),\quad \hat{p}_3(v)\coloneqq v^2\left(v-c\right),\quad c\coloneqq c_2/c_3.\] Assume that $(v_1,u_1)$ and $(v_2,u_2)$ are two solutions of~\eqref{eq:FHN_feedback} on $[0,\gamma]$ with the same initial values, $v_1(0) = v_2(0) = v_0$ and $u_1(0) = u_2(0) = u_0$. Let $t_0\in(0,\gamma]$ be given and set $Q_0\coloneqq(0,t_0)\times\Omega$. Define \[\Sigma(\zeta,t):= |v_1(\zeta,t)|+|v_2(\zeta,t)|,\] and let \[Q^\Lambda:= \{(t,\zeta)\in Q_0\ |\ \Sigma(\zeta,t)\leq\Lambda\},\quad \Lambda>0.\] Note that, by convexity of the map $x\mapsto x^p$ on $[0,\infty)$ for $p>1$, we have that \[ \forall\, a,b\ge 0:\ \big(\tfrac12 a+ \tfrac12 b\big)^p \le \tfrac12 a^p + \tfrac12 b^p. \] Therefore, since $v_1,v_2\in L^4(0,\gamma;L^{4}(\Omega))$, we find that $\Sigma\in L^4(0,\gamma;L^{4}(\Omega))$. Hence, by the monotone convergence theorem, for all $\epsilon>0$ we may choose $\Lambda$ large enough such that \[\int_{Q_0\setminus Q^\Lambda}|\Sigma(\zeta,t)|^4\ds{(\zeta,t)}<\epsilon.\] Note that without loss of generality we may assume that $\Lambda>c$.
Let $V:= v_2-v_1$ and $U:= u_2-u_1$, then, by~\eqref{eq:FHN_feedback}, \begin{align*} \dot V &= (\mathcal{A}-c_1I) V -c_3(\hat{p}_3(v_2) - \hat{p}_3(v_1)) - U - k_0 \mathcal{B}\mathcal{B}' V,\\ \dot U &= c_5 V - c_4 U. \end{align*} By~\cite[Thm.~1.32]{HinzPinn09}, we have for all $t\in(0,\gamma)$ that \[\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|V(t)\|^2=\scpr{\dot{V}(t)}{V(t)},\quad \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|U(t)\|^2=\scpr{\dot{U}(t)}{U(t)},\] thus we may compute that \begin{align*} \tfrac{c_5}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|V\|^2+\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|U\|^2&= \scpr{(\mathcal{A}-c_1I) V - U - k_0 \mathcal{B}\mathcal{B}' V}{c_5 V} - c_4 \|U\|^2 + c_5 \langle U,V \rangle\\ &\quad-c_5 c_3\scpr{ \hat{p}_3(v_2) - \hat{p}_3(v_1)}{V}\\ &= c_5 \scpr{(\mathcal{A}-c_1I) V}{V} - c_5 k_0 \scpr{\mathcal{B}' V}{\mathcal{B}' V} - c_4 \|U\|^2 \\ &\quad - c_5c_3 \scpr{\hat{p}_3(v_2)-\hat{p}_3(v_1)}{V}\\ &\le-c_5c_3 \scpr{\hat{p}_3(v_2)-\hat{p}_3(v_1)}{V}. \end{align*} Integration over $[0,t_0]$ and using $(U(0),V(0))=(0,0)$ leads to \begin{align*} \tfrac{c_5}{2}\|V(t_0)\|^2+\tfrac{1}{2}\|U(t_0)\|^2 &\leq-c_5c_3\int_0^{t_0}\int_\Omega (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}\\ &=-c_5c_3\int_{Q^\Lambda} (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}\\ &\quad-c_5c_3\int_{Q_0\setminus Q^\Lambda} (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}. \end{align*} Note that on $Q^\Lambda$ we have $-\Lambda\leq v_1\leq\Lambda$ and $-\Lambda\leq v_2\leq\Lambda$. Let $a,b\in[-\Lambda,\Lambda]$, then the mean value theorem implies \begin{align*} (\hat{p}_3(b)-\hat{p}_3(a))(b-a)=\hat{p}_3'(\xi)(b-a)^2 \end{align*} for some $\xi$ between $a$ and $b$. Since $\hat{p}_3'(\xi)=3\xi^2-2c\xi$ attains its minimum value $-\frac{c^2}{3}$ at $\xi^\ast=\frac{c}{3}$, we have that \[(\hat{p}_3(b)-\hat{p}_3(a))(b-a)=\hat{p}_3'(\xi)(b-a)^2\geq-\frac{c^2}{3}(b-a)^2.\] Using this in the above inequality leads to \begin{align*} \tfrac{c_5}{2}\|V(t_0)\|^2+\tfrac{1}{2}\|U(t_0)\|^2&\leq c_5c_3\frac{c^2}{3}\int_{Q^\Lambda} V(\zeta,t)^2\ds{\zeta}\ds{t}\\ &\quad-c_5c_3\int_{Q_0\setminus Q^\Lambda} (\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t)))V(\zeta,t)\ds{\zeta}\ds{t}\\ &\leq c_5c_3\frac{c^2}{3}\int_{Q_0} V(\zeta,t)^2\ds{\zeta}\ds{t}\\ &\quad+c_5c_3\int_{Q_0\setminus Q^\Lambda} |\hat{p}_3(v_2(\zeta,t))-\hat{p}_3(v_1(\zeta,t))||V(\zeta,t)|\ds{\zeta}\ds{t}\\ &\le c_5c_3\frac{c^2}{3}\int_0^{t_0}\|V(t)\|^2\ds{t}+2c_5c_3\int_{Q_0\setminus Q^\Lambda}|\Sigma(\zeta,t)|^4\ds{(\zeta,t)}\\ &\leq c_3\frac{c^2}{3}\int_0^{t_0}c_5\|V(t)\|^2+\|U(t)\|^2\ds{t}+2c_5c_3\epsilon, \end{align*} where the second to last inequality uses that, pointwise, $|V|\le\Sigma$ and $|\hat{p}_3(v_2)-\hat{p}_3(v_1)|\le(\Sigma^2+c\Sigma)|V|$, so that the integrand on $Q_0\setminus Q^\Lambda$ is at most $(\Sigma^2+c\Sigma)\Sigma^2\le2\Sigma^4$ because $\Sigma>\Lambda>c$ there. Since $\epsilon>0$ was arbitrary we may infer that \[\tfrac{c_5}{2}\|V(t_0)\|^2+\tfrac{1}{2}\|U(t_0)\|^2\leq \frac{2c_3c^2}{3}\int_0^{t_0}\tfrac{c_5}{2}\|V(t)\|^2+\tfrac12\|U(t)\|^2\ds{t}.\] Hence, by Gronwall's lemma and $U(0)=0,V(0)=0$ it follows that $U(t_0)=0$ and $V(t_0)=0$. Since $t_0$ was arbitrary, this shows that $v_1 = v_2$ and $u_1 = u_2$ on $[0,\gamma]$. \emph{Step 2: We show that for all $\epsilon\in(0,\gamma)$ and all $t\in[\epsilon,\gamma]$ we have $v(t)\in W^{1,2}(\Omega)$.}\\ Fix $\epsilon\in(0,\gamma)$. First we show that $v\in BUC([\epsilon,\gamma];W^{1,2}(\Omega))$.
Multiplying the first equation in~\eqref{eq:weak_delta} by $\dot{\mu}_j$ and summing up over $j\in\{0,\dots,n\}$ we obtain \begin{align*} \|\dot{v}_n\|^2&=-\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \mathfrak{a}(v_n,v_n)-\scpr{u_n}{\dot{v}_n}+\scpr{p_3(v_n)}{\dot{v}_n}+\scpr{I_{s,i}}{\dot{v}_n}\\ &\quad -k_0\scpr{\mathcal{B}' v_n-y_{\rm ref}}{\mathcal{B}'\dot{v}_n}_{{\mathbb{R}}^m}\\ &=-\tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \mathfrak{a}(v_n,v_n)-\scpr{u_n}{\dot{v}_n}+\scpr{p_3(v_n)}{\dot{v}_n}+\scpr{I_{s,i}}{\dot{v}_n}\\ &\quad -k_0\scpr{\mathcal{B}' v_n-y_{\rm ref}}{\mathcal{B}'\dot{v}_n-\dot{y}_{\rm ref}}_{{\mathbb{R}}^m} - k_0\scpr{\mathcal{B}' v_n-y_{\rm ref}}{\dot{y}_{\rm ref}}_{{\mathbb{R}}^m}. \end{align*} Furthermore, we may derive that \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} v_n^4 &= 4 v_n^3 \dot v_n = -\frac{4}{c_3}\big( p_3(v_n) - c_2 v_n^2 + c_1 v_n\big) \dot v_n,\quad \text{thus}\\ p_3(v_n) \dot v_n &= -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} v_n^4 + c_2 v_n^2 \dot v_n - c_1 v_n \dot v_n, \end{align*} and this implies, for any $\beta>0$ (not to be confused with the ellipticity constant $\delta$ from Assumption~\ref{Ass1}), \begin{align*} \scpr{p_3(v_n)}{\dot v_n} &= -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|v_n\|_{L^4}^4 + c_2 \scpr{v_n^2}{\dot v_n} - c_1 \scpr{v_n}{\dot v_n}\\ &\le -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|v_n\|_{L^4}^4 + \frac{c_2}{2} \left( \beta \|v_n\|_{L^4}^4 + \frac{1}{\beta} \|\dot v_n\|^2\right) + \frac{c_1}{2} \left(\beta \|v_n\|^2 + \frac{1}{\beta} \|\dot v_n\|^2\right)\\ &\stackrel{\eqref{eq:uv_bound_delta}}{\le} -\frac{c_3}{4} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \|v_n\|_{L^4}^4 + \frac{c_2}{2} \left( \beta \|v_n\|_{L^4}^4 + \frac{1}{\beta} \|\dot v_n\|^2 \right) + \frac{c_1}{2} \left(\beta C_v + \frac{1}{\beta} \|\dot v_n\|^2\right). \end{align*} Moreover, recalling $\overline{e}_n = \mathcal{B}' v_n-y_{\rm ref}$, we find that \begin{align*} \scpr{u_n}{\dot{v}_n} &\le \frac{\beta}{2} \|u_n\|^2 + \frac{1}{2\beta} \|\dot v_n\|^2 \stackrel{\eqref{eq:uv_bound_delta}}{\le} \frac{\beta C_u}{2} + \frac{1}{2\beta} \|\dot v_n\|^2,\\ \scpr{I_{s,i}}{\dot{v}_n} &\le \frac{\beta}{2} \|I_{s,i}\|_{2,\infty}^2 + \frac{1}{2\beta} \|\dot v_n\|^2,\\ \scpr{\overline{e}_n}{\dot{y}_{\rm ref}}_{{\mathbb{R}}^m} &\le \frac{1}{2} \|\overline{e}_n\|^2_{{\mathbb{R}}^m} + \frac12 \|\dot{y}_{\rm ref}\|_\infty^2. \end{align*} Therefore, choosing $\beta$ large enough, we obtain that there exist constants $Q_1,Q_2>0$ independent of $n$ such that \begin{align*} \|\dot{v}_n\|^2\le&-\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \mathfrak{a}(v_n,v_n)-\frac{c_3}{4}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|v_n\|_{L^4}^4-\frac{k_0}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2+\frac{1}{2}\|\dot{v}_n\|^2\\ &+Q_1\|v_n\|^4_{L^4}+Q_2+\frac{k_0}{2}\|\overline{e}_n\|_{{\mathbb{R}}^m}^2, \end{align*} thus, \begin{equation}\label{eq:est-norm-dotvn} \begin{aligned} \|\dot{v}_n\|^2&+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\mathfrak{a}(v_n,v_n)+\frac{c_3}{2}\|v_n\|_{L^4}^4+k_0\|\overline{e}_n\|_{{\mathbb{R}}^m}^2\right)\\ \le &\ 2Q_1\|v_n\|^4_{L^4}+2Q_2+k_0\|\overline{e}_n\|_{{\mathbb{R}}^m}^2.
\end{aligned} \end{equation} As a consequence, we find that for all $t\in[0,\gamma]$ we have \begin{align*} t\|\dot{v}_n(t)\|^2&+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(t\mathfrak{a}(v_n(t),v_n(t))+\frac{c_3t}{2}\|v_n(t)\|_{L^4}^4+k_0t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2\right)\\ \stackrel{\eqref{eq:est-norm-dotvn}}{\le}&\ \left(2Q_1 t +\frac{c_3}{2}\right)\|v_n(t)\|^4_{L^4}+\mathfrak{a}(v_n(t),v_n(t)) +2Q_2 t+k_0(t+1)\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2. \end{align*} Since $t\|\dot{v}_n(t)\|^2\geq0$ and $t\le \gamma$ for all $t\in[0,\gamma]$, it follows that \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}&\left(t\mathfrak{a}(v_n(t),v_n(t))+\frac{c_3t}{2}\|v_n(t)\|_{L^4}^4+k_0t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2\right)\\ \le&\ \left(2Q_1\gamma+\frac{c_3}{2}\right)\|v_n(t)\|^4_{L^4}+\mathfrak{a}(v_n(t),v_n(t)) +2Q_2\gamma+k_0(\gamma+1)\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2. \end{align*} Integrating this inequality and using \eqref{eq:bound_1}, we find that there exist $P_1,P_2>0$ independent of $n$ such that for $t\in[0,\gamma]$ we have \begin{align*} t\mathfrak{a}(v_n(t),v_n(t))+\frac{c_3t}{2}\|v_n(t)\|_{L^4}^4+k_0t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2 \le P_1+P_2t. \end{align*} Thus, there exist constants $C_5,C_6>0$ independent of $n$ such that \[\forall\, t\in[0,\gamma]:\ t\mathfrak{a}(v_n(t),v_n(t))\leq C_5\ \wedge\ t\|\overline{e}_n(t)\|_{{\mathbb{R}}^m}^2\leq C_6.\] Hence, for all $\epsilon\in(0,\gamma)$, it follows from the above estimates together with~\eqref{eq:bound_1} that $v_n\in L^\infty(\epsilon,\gamma;W^{1,2}(\Omega))$ and $\overline{e}_n\in L^\infty(\epsilon,\gamma;{\mathbb{R}}^m)$, so that in addition to~\eqref{eq:convergence_subseq}, from Lemma~\ref{lem:weak_convergence} we further have that there exists a subsequence such that \[v_n\to v\in L^\infty(\epsilon,\gamma;W^{1,2}(\Omega))\mbox{ weak}^\star\] and $\mathcal{B}'v\in L^\infty(\epsilon,\gamma;{\mathbb{R}}^m)$ for all $\epsilon\in(0,\gamma)$, hence $I_{s,e}\in L^2(0,\gamma;{\mathbb{R}}^m)\cap L^\infty(\epsilon,\gamma;{\mathbb{R}}^m)$. By the Sobolev Embedding Theorem, $W^{1,2}(\Omega)\hookrightarrow L^p(\Omega)$ for $2\leq p\leq 6$, and hence $p_3(v)\in L^\infty(\epsilon,\gamma;L^2(\Omega))$. Moreover, since \eqref{eq:strong_delta} holds, we can rewrite it as \[\dot{v}(t)=(\mathcal{A}-c_1 I) v(t)+I_r(t)+\mathcal{B} I_{s,e}(t),\] where $I_r:=c_2v^2-c_3v^3-u+I_{s,i}\in L^2(0,\gamma;L^2(\Omega))\cap L^\infty(\epsilon,\gamma;L^2(\Omega))$ and Proposition~\ref{prop:hoelder} (recall that $W^{1,2}(\Omega)' = X_{-1/2}$ and hence $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,X_{-1/2})$) with $c=c_1$ implies that $v\in BUC([\epsilon,\gamma];W^{1,2}(\Omega))$. Hence, for all $\epsilon\in(0,\gamma)$, $v(t)\in W^{1,2}(\Omega)$ for $t\in[\epsilon,\gamma]$, so that in particular $v(\gamma)\in W^{1,2}(\Omega)$. \subsection{Solution on $(\gamma,\infty)$} \label{ssec:mono_proof_tgeqgamma} The crucial step in this part of the proof is to show that the error remains uniformly bounded away from the funnel boundary and that $v\in L^\infty(\gamma,\infty;W^{1,2}(\Omega))$. The proof is divided into several steps. \emph{Step 1: We show existence of an approximate solution by means of a time-varying state-space transformation.}\\ Again, let $(\theta_i)_{i\in{\mathbb{N}}_0}$ be the eigenfunctions of $-\mathcal{A}$ and let $\alpha_i$ be the corresponding eigenvalues, with $\alpha_i\geq0$ for all $i\in{\mathbb{N}}_0$.
Recall that $(\theta_i)_{i\in{\mathbb{N}}_0}$ form an orthonormal basis of $L^2(\Omega)$ by Proposition~\ref{prop:Aop_n}\,\ref{item:Aop5}). Let $(u_\gamma,v_\gamma):=(u(\gamma),v(\gamma))$, $a_i := \scpr{v_\gamma}{\theta_i}$ and $b_i := \scpr{u_\gamma}{\theta_i}$ for $i\in{\mathbb{N}}_0$ and \[ v_\gamma^n:= \sum_{i=0}^na_{i}\theta_i,\quad u_\gamma^n:= \sum_{i=0}^nb_{i}\theta_i,\quad n\in{\mathbb{N}}. \] Then we have that $v^n_\gamma\to v_\gamma$ strongly in $W^{1,2}(\Omega)$ and $u^n_\gamma\to u_\gamma$ strongly in $L^2(\Omega)$. As stated in Remark~\ref{rem:main}\,\ref{rem:main1}), the condition $\ker\mathcal{B}=\{0\}$ implies $\mathcal{B}' \mathcal{D}(\mathcal{A})={\mathbb{R}}^m$. As a~consequence, there exist $q_1,\ldots,q_m\in\mathcal{D}(\mathcal{A})$ such that $\mathcal{B}'q_k=e_k$ for $k=1,\dots,m$. By Proposition~\ref{prop:Aop_n}\,\ref{item:Aop3}), we further have $q_k\in C^{0,\nu}(\Omega)$ for some $\nu\in(0,1)$.\\ Note that $U\coloneqq\bigcup_{n\in{\mathbb{N}}} U_n$, where $U_n = \mathrm{span}\{\theta_i\}_{i=0}^n$, satisfies $\overline{U}=W^{1,2}(\Omega)$ with respect to the norm of $W^{1,2}(\Omega)$. Moreover, $\overline{\mathcal{B}' U}={\mathbb{R}}^m$. Since ${\mathbb{R}}^m$ is complete and finite-dimensional and $\mathcal{B}'$ is linear and continuous, it follows that $\mathcal{B}' U={\mathbb{R}}^m$. By the surjectivity of $\mathcal{B}'$ we have that for all $k\in\{1,\dots,m\}$ there exist $n_k\in{\mathbb{N}}$ and $q_k\in U_{n_k}$ such that $\mathcal{B}' q_k=e_k$. Thus, there exists $n_0\in{\mathbb{N}}$ with $q_k\in U_{n_0}$ for all $k\in\{1,\dots,m\}$; hence the $q_k$ may be chosen as (finite) linear combinations of the eigenfunctions $\theta_i$.\\ Define $q\in W^{1,2}(\Omega;{\mathbb{R}}^m)\cap C^{0,\nu} (\Omega;{\mathbb{R}}^m)$ by $q(\zeta)=\big(q_1(\zeta),\ldots,q_m(\zeta)\big)^\top$ and $q\cdot y_{\rm ref}$ by \[ (q\cdot y_{\rm ref}) (t,\zeta) := \sum_{k=1}^mq_k(\zeta)y_{{\rm ref},k}(t),\quad \zeta\in \Omega,\, t\ge 0. \] We may define $q\cdot\dot{y}_{\rm ref}$ analogously. Note that we have $(q\cdot y_{\rm ref})\in BC([0,\infty)\times\Omega)$, because \[ |(q\cdot y_{\rm ref}) (t,\zeta)| \le \sum_{k=1}^m \|q_k\|_\infty\, \|y_{{\rm ref},k}\|_\infty \] for all $\zeta\in \Omega$ and $t\ge 0$, where we write $\|\cdot\|_\infty$ for the supremum norm. We define $q_{k,j}:= \scpr{q_k}{\theta_j}$ for $k=1,\ldots,m$, $j\in{\mathbb{N}}_0$ and $q_k^n:=\sum_{j=0}^nq_{k,j}\theta_j$ for $n\in{\mathbb{N}}_0$. Similarly, $q^n:=(q_1^n,\dots,q_m^n)^\top$ for $n\in{\mathbb{N}}$, so that $q^n\to q$ strongly in $W^{1,2}(\Omega)$.
In fact, since $q_k\in U_{n_0}$ for all $k=1,\ldots,m$, it follows that $q^n = q$ for all $n\ge n_0$.\\ Since $\mathcal{B}':W^{r,2}(\Omega)\to{\mathbb{R}}^m$ is continuous for some $r\in[0,1]$, there exists $\Gamma_r>0$ such that \[\forall\,\theta\in W^{r,2}(\Omega):\ \|\mathcal{B}'\theta\|_{{\mathbb{R}}^m}\leq\Gamma_r\|\theta\|_{W^{r,2}}.\] For $n\in{\mathbb{N}}_0$, let \[\kappa_n\coloneqq \big((n+1) \Gamma_r (1+\|v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)\|^2_{W^{r,2}})\big)^{-1}.\] Note that for $v_\gamma\in W^{1,2}(\Omega)$ it holds that $\kappa_n>0$ for all $n\in{\mathbb{N}}_0$, $(\kappa_n)_{n\in{\mathbb{N}}_0}$ is bounded by $\Gamma_r^{-1}$ (and monotonically decreasing) and $\kappa_n\to0$ as $n\to\infty$, and by construction \[\forall\,n\in{\mathbb{N}}_0:\ \kappa_n\|\mathcal{B}' (v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma))\|_{{\mathbb{R}}^m}<1.\] Consider a modification of $\varphi$ induced by $\kappa_n$, namely \[\varphi_n\coloneqq\varphi+\kappa_n,\quad n\in{\mathbb{N}}_0.\] It is clear that for each $n\in{\mathbb{N}}_0$ we have $\varphi_n\in W^{1,\infty}([\gamma,\infty);{\mathbb{R}})$, the estimates $\|\varphi_n\|_\infty\leq\|\varphi\|_\infty+\Gamma_r^{-1}$ and $\|\dot{\varphi}_n\|_\infty=\|\dot{\varphi}\|_\infty$ hold with bounds independent of $n$, and $\varphi_n\to\varphi\in\Phi_\gamma$ uniformly. Moreover, $\inf_{t>\gamma}\varphi_n(t)>0$.\\ Now, fix $n\in{\mathbb{N}}_0$. For $t\ge \gamma$, define \begin{align*} \phi(e)&:= \frac{k_0}{1-\|e\|_{{\mathbb{R}}^m}^2}e,\quad e\in{\mathbb{R}}^m,\ \|e\|_{{\mathbb{R}}^m}<1,\\ \omega_0(t)&:= \dot{\varphi}_n(t)\varphi_n(t)^{-1},\\ F(t,z)&:= \varphi_n(t)f_{-1}(t)+\varphi_n(t)f_0(t)+f_1(t)z+\varphi_n(t)^{-1}f_2(t)z^2\\ &\quad \ -c_3\varphi_n(t)^{-2}z^3,\quad z\in{\mathbb{R}},\\ f_{-1}(t)&:= I_{s,i}(t)+\sum_{k=1}^my_{{\rm ref},k}(t)\mathcal{A} q_k,\\ f_0(t)&:= -q\cdot(\dot{y}_{\rm ref}(t)+c_1y_{\rm ref}(t))+c_2(q\cdot y_{\rm ref}(t))^2-c_3(q\cdot y_{\rm ref}(t))^3,\\ f_1(t)&:= (q\cdot y_{\rm ref}(t))(2c_2-3c_3(q\cdot y_{\rm ref}(t))),\\ f_2(t)&:= c_2-3c_3(q\cdot y_{\rm ref}(t)),\\ g(t)&:= c_5(q\cdot y_{\rm ref}(t)). \end{align*} We have that $f_{-1}\in L^\infty(\gamma,\infty;L^2(\Omega))$, since \begin{align*} \|f_{-1}\|_{2,\infty} &:= \esssup_{t\ge \gamma} \left(\int_\Omega f_{-1}(\zeta,t)^2 \ds{\zeta} \right)^{1/2}\\ &\le \|I_{s,i}\|_{2,\infty} + \sum_{k=1}^m \|y_{{\rm ref},k}\|_\infty\, \|\mathcal{A} q_k\| < \infty.
\end{align*} Furthermore, we have that $f_0\in L^\infty((\gamma,\infty)\times\Omega)$, because \begin{align*} |f_0(\zeta,t)|&\leq\ (\|\dot{y}_{\rm ref}\|_\infty+c_1\|y_{\rm ref}\|_\infty)\sum_{k=1}^m\|q_k\|_\infty+c_2\|y_{\rm ref}\|_\infty^2\left(\sum_{k=1}^m\|q_k\|_\infty\right)^2\\ &\quad +c_3\|y_{\rm ref}\|^3_\infty\left(\sum_{k=1}^m\|q_k\|_\infty\right)^3\text{ for a.a.\ $(\zeta,t)\in \Omega\times [\gamma,\infty)$}, \end{align*} whence \[\|f_0\|_{\infty,\infty}:= \esssup_{t\geq\gamma,\zeta\in\Omega}|f_0(\zeta,t)|<\infty.\] Similarly $\|f_1\|_{\infty,\infty}<\infty$, $\|f_2\|_{\infty,\infty}<\infty$ and $\|g\|_{\infty,\infty}<\infty$.\\ Consider the system of $2(n+1)$ ODEs \begin{equation}\label{eq:appr_ODE} \begin{aligned} \dot{\mu}_j(t)&=-\alpha_j\mu_j(t)-(c_1-\omega_0(t))\mu_j(t)-\nu_j(t)- \scpr{\phi\left(\sum_{i=0}^n \mathcal{B}'\theta_i \mu_i(t)\right)}{\mathcal{B}'\theta_j}_{{\mathbb{R}}^m} \\ &\quad +\scpr{F\left(t,\sum_{i=0}^n\mu_i(t)\theta_i\right)}{\theta_j},\\ \dot{\nu}_j(t)&=-(c_4-\omega_0(t))\nu_j(t)+c_5\mu_j(t)+\varphi_n(t)\scpr{g(t)}{\theta_j} \end{aligned} \end{equation} defined on \[ {\mathbb{D}}:= \setdef{(t,\mu_0,\dots,\mu_n,\nu_0,\dots,\nu_n)\in[\gamma,\infty)\times{\mathbb{R}}^{2(n+1)} }{ \left\|\sum_{i=0}^n\gamma_i\mu_i\right\|_{{\mathbb{R}}^m}<1 }, \] with initial value \[\mu_j(\gamma)=\kappa_n\left(a_j-\sum_{k=1}^mq_{k,j}y_{{\rm ref},k}(\gamma)\right),\quad \nu_j(\gamma)=\kappa_n b_j,\quad j=0,\dots,n.\] Since the functions on the right-hand side of~\eqref{eq:appr_ODE} are continuous, the set~${\mathbb{D}}$ is relatively open in $[\gamma,\infty)\times{\mathbb{R}}^{2(n+1)}$, and the initial condition satisfies $(\gamma,\mu_0(\gamma),\dots,\mu_n(\gamma),\nu_0(\gamma),\dots,\nu_n(\gamma))\in{\mathbb{D}}$ by construction, it follows from ODE theory, see e.g.~\cite[\S~10, Thm.~XX]{Walt98}, that there exists a weakly differentiable solution \[(\mu^n,\nu^n)=(\mu_0,\dots,\mu_n,\nu_0,\dots,\nu_n):[\gamma,T_n)\to{\mathbb{R}}^{2(n+1)}\] such that $T_n\in(\gamma,\infty]$ is maximal. Furthermore, the closure of the graph of~$(\mu^n,\nu^n)$ is not a compact subset of ${\mathbb{D}}$.\\ With that, we may define \[z_n(t):=\sum_{i=0}^n\mu_i(t)\theta_i,\quad w_n(t):=\sum_{i=0}^n\nu_i(t)\theta_i,\quad e_n(t):= \sum_{i=0}^n\mathcal{B}'\theta_i\mu_i(t),\quad t\in[\gamma,T_n)\] and note that \[z_\gamma^n:=z_n(\gamma)=\kappa_n(v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)),\quad w_\gamma^n:=w_n(\gamma)=\kappa_n u_\gamma^n.\] From the orthonormality of the $\theta_i$ we have that \begin{equation}\label{eq:weak} \begin{aligned} \scpr{\dot{z}_n(t)}{\theta_j}&=-\mathfrak{a}(z_n(t),\theta_j)-(c_1-\omega_0(t))\scpr{z_n(t)}{\theta_j} - \scpr{w_n(t)}{\theta_j}\\ &\quad -\scpr{\phi\left(\mathcal{B}' z_n(t)\right)}{\mathcal{B}' \theta_j}_{{\mathbb{R}}^m} +\scpr{F\left(t,z_n(t)\right)}{\theta_j},\\ \scpr{\dot{w}_n(t)}{\theta_j} &= -(c_4-\omega_0(t))\scpr{w_n(t)}{\theta_j}+c_5\scpr{z_n(t)}{\theta_j}+\varphi_n(t)\scpr{g(t)}{\theta_j}. \end{aligned} \end{equation} Define now \begin{equation}\label{eq:transformation} \begin{aligned} v_n(t)&\coloneqq \varphi_n(t)^{-1}z_n(t)+q^n\cdot y_{\rm ref}(t),\\ u_n(t)&\coloneqq \varphi_n(t)^{-1}w_n(t),\\ \tilde{\mu}_i(t) &\coloneqq \varphi_n(t)^{-1}\mu_i(t) +\sum_{k=1}^m q_{k,i}y_{{\rm ref},k}(t),\\ \tilde{\nu}_i(t) &\coloneqq \varphi_n(t)^{-1}\nu_i(t), \end{aligned} \end{equation} then $v_n(t)=\sum_{i=0}^n\tilde{\mu}_i(t)\theta_i$ and $u_n(t)=\sum_{i=0}^n\tilde{\nu}_i(t)\theta_i$.
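Before rewriting the dynamics in the new variables, observe that the transformation~\eqref{eq:transformation} is designed so that $e_n=\mathcal{B}' z_n$ and, for $n\ge n_0$ (where $q^n=q$ and thus $\mathcal{B}'(q\cdot y_{\rm ref}(t))=y_{\rm ref}(t)$), \[ e_n(t)=\varphi_n(t)\big(\mathcal{B}' v_n(t)-y_{\rm ref}(t)\big); \] hence the constraint $\|e_n(t)\|_{{\mathbb{R}}^m}<1$ encoded in the domain~${\mathbb{D}}$ is precisely the funnel condition for the approximate output $\mathcal{B}' v_n$, with $\varphi$ replaced by~$\varphi_n$.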
With this transformation we obtain that $(v_n, u_n)$ satisfies, for all $\theta\in W^{1,2}(\Omega)$, $\chi\in L^2(\Omega)$ and all $t\in[\gamma,T_n)$, \begin{equation*} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v_n(t)}{\theta}=&-\mathfrak{a}(v_n(t),\theta)+\scpr{p_3(v_n(t)+(q-q^n)\cdot y_{\rm ref}(t))-u_n(t)}{\theta}\\ &+\scpr{I_{s,i}(t)-(q-q^n)\cdot\dot{y}_{\rm ref}(t)+\sum_{k=1}^my_{{\rm ref},k}(t)\mathcal{A}(q_k-q_k^n) }{\theta}\\ &+\scpr{I_{s,e}^n(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{u_n(t)}{\chi}=&\scpr{c_5(v_n(t)+(q-q^n)\cdot y_{\rm ref}(t))-c_4u_n(t)}{\chi},\\ I_{s,e}^n(t)=&-\frac{k_0}{1-\varphi_n(t)^2\|\mathcal{B}' (v_n(t)-q^n\cdot y_{\rm ref}(t))\|^2_{{\mathbb{R}}^m}}(\mathcal{B}'( v_n(t)-q^n\cdot y_{\rm ref}(t))), \end{aligned} \end{equation*} with $(u_n(\gamma),v_n(\gamma))=(u_\gamma^n,v_\gamma^n)$. Since there exists some $n_0\in{\mathbb{N}}$ with $q^{n}=q$ for all $n\geq n_0$, we have for all $n\geq n_0$, $\theta\in W^{1,2}(\Omega)$ and $\chi\in L^2(\Omega)$ that \begin{equation}\label{eq:weak_uv} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v_n(t)}{\theta}=&-\mathfrak{a}(v_n(t),\theta)+\scpr{p_3(v_n(t))-u_n(t)}{\theta}\\ &+\scpr{I_{s,i}(t)}{\theta}+\scpr{I_{s,e}^n(t)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m},\\ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{u_n(t)}{\chi}=&\scpr{c_5v_n(t)-c_4u_n(t)}{\chi},\\ I_{s,e}^n(t)=&-\frac{k_0}{1-\varphi_n(t)^2\|\mathcal{B}' v_n(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}' v_n(t)-y_{\rm ref}(t)). \end{aligned} \end{equation} \emph{Step 2: We show boundedness of $(z_n,w_n)$ in terms of~$\varphi_n$.}\\ Consider again the Lyapunov function \eqref{eq:Lyapunov} and observe that $\|z_n(t)\|^2 = \sum_{j=0}^n \mu_j(t)^2$ and $\|w_n(t)\|^2 = \sum_{j=0}^n \nu_j(t)^2$. We find that, for all $t\in[\gamma, T_n)$, \begin{align*} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n(t),w_n(t)) &= c_5\sum_{j=0}^n\mu_j(t)\dot{\mu}_j(t)+\sum_{j=0}^n\nu_j(t)\dot{\nu}_j(t)\\ &=-c_5\sum_{j=0}^{n}\alpha_j\mu_j(t)^2- c_5 (c_1-\omega_0(t))\sum_{j=0}^{n}\mu_j(t)^2\\ &\quad-(c_4-\omega_0(t))\sum_{j=0}^{n}\nu_j(t)^2 -c_5\scpr{\phi(e_n(t))}{e_n(t)}_{{\mathbb{R}}^m} \\ &\quad +\varphi_n (t) \scpr{g(t)}{\sum_{i=0}^n\nu_i(t)\theta_i}\\ &\quad+c_5\scpr{F\left(t,\sum_{i=0}^n\mu_i(t)\theta_i\right)}{\sum_{i=0}^n\mu_i(t)\theta_i}, \end{align*} hence, omitting the argument~$t$ for brevity in the following, \begin{equation}\label{eq:Lyapunov_boundary_1} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n,w_n)=&-c_5\mathfrak{a}(z_n,z_n)-c_5(c_1-\omega_0)\|z_n\|^2-(c_4-\omega_0)\|w_n\|^2\\ &-c_5\frac{k_0\|e_n\|_{{\mathbb{R}}^m}^2}{1-\|e_n\|_{{\mathbb{R}}^m}^2}+c_5\scpr{F(t,z_n)}{z_n}+\varphi_n \scpr{g}{w_n}. \end{aligned} \end{equation} Next we use some Young and Hölder inequalities to estimate the term \begin{align*} \scpr{F(t,z_n)}{z_n}&=\underbrace{\varphi_n(t)\scpr{f_{-1}(t)}{z_n}}_{I_{-1}} +\underbrace{\varphi_n(t)\scpr{f_0(t)}{z_n}}_{I_0}+\underbrace{\scpr{f_1(t)z_n}{z_n}}_{I_1}\\ &\quad+\underbrace{\varphi_n(t)^{-1}\scpr{f_2(t)z_n^2}{z_n}}_{I_2}-c_3\varphi_n(t)^{-2}\underbrace{\scpr{z_n^3}{z_n}}_{=\|z_n\|_{L^4}^4}.
\end{align*} For the first term we derive, using Young's inequality for products with $p=4/3$ and $q=4$, that \begin{align*} I_{-1}&\leq \scpr{\frac{2^{1/2}\varphi_n^{3/2} |I_{s,i}|}{c_3^{1/4}}}{\frac{c_3^{1/4}|z_n|}{2^{1/2}\varphi_n^{1/2}}}+ \sum_{k=1}^m\scpr{\frac{(4m)^{1/4}\varphi_n^{3/2}\|y_{\rm ref}\|_\infty |\mathcal{A} q_k|}{c_3^{1/4}}}{\frac{c_3^{1/4}|z_n|}{(4m)^{1/4}\varphi_n^{1/2}}}\\ &\leq\frac{2^{2/3} 3\varphi_n^2\|I_{s,i}\|_{2,\infty}^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}+ \sum_{k=1}^m\frac{3(4m)^{1/3}\varphi_n^2\|y_{\rm ref}\|_\infty^{4/3}\|\mathcal{A} q_k\|^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}+ \frac{c_3\|z_n\|^4_{L^4}}{8\varphi_n^2} \end{align*} and with the same choice we obtain for the second term \[I_0\leq\scpr{\frac{2^{1/4}\varphi_n^{3/2}\|f_0\|_{\infty,\infty}}{c_3^{1/4}}}{\frac{c_3^{1/4}|z_n|}{2^{1/4}\varphi_n^{1/2}}}\leq \frac{2^{1/3}3\varphi_n^2\|f_0\|_{\infty,\infty}^{4/3}|\Omega|}{4c_3^{1/3}}+\frac{c_3\|z_n\|^4_{L^4}}{8\varphi_n^2}.\] Using $p=q=2$ we find that the third term satisfies \[ I_1\leq\scpr{\frac{2\varphi_n\|f_1\|_{\infty,\infty}}{\sqrt{c_3}}}{\frac{\sqrt{c_3}|z_n|^2}{2\varphi_n}}\leq\frac{2\varphi_n^2\|f_1\|_{\infty,\infty}^2|\Omega|}{c_3}+ \frac{c_3\|z_n\|^4_{L^4}}{8\varphi_n^2}, \] and finally, with $p=4$ and $q=4/3$, \begin{align*} I_2&\leq\scpr{\varphi_n^{-1}\|f_2\|_{\infty,\infty}}{|z_n|^3}= \scpr{\frac{3^{3/2}\varphi_n^{1/2}\|f_2\|_{\infty,\infty}}{c_3^{3/4}}}{\left|\frac{c_3^{1/4}z_n}{\varphi_n^{1/2}\sqrt{3}}\right|^3}\\ &\leq\frac{9^3\varphi_n^2\|f_2\|^4_{\infty,\infty}|\Omega|}{4c_3^3}+\frac{c_3}{12\varphi_n^2}\|z_n\|^4_{L^4}. \end{align*} Summarizing, we have shown that \[\scpr{F(t,z_n)}{z_n}\leq K_0\varphi_n^2-\frac{13c_3}{24\varphi_n^2}\|z_n\|^4_{L^4}\leq K_0\varphi_n^2-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4},\] where \begin{align*} K_0\coloneqq & \ \frac{2^{2/3} 3\|I_{s,i}\|_{2,\infty}^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}+ \sum_{k=1}^m\frac{3(4m)^{1/3}\|y_{\rm ref}\|_\infty^{4/3}\|\mathcal{A} q_k\|^{4/3}|\Omega|^{1/3}}{4c_3^{1/3}}\\ &+ \frac{2^{1/3}3\|f_0\|_{\infty,\infty}^{4/3}|\Omega|}{4c_3^{1/3}}+\frac{2\|f_1\|_{\infty,\infty}^2|\Omega|}{c_3}+\frac{9^3\|f_2\|^4_{\infty,\infty}|\Omega|}{4c_3^3}. \end{align*} Finally, using Young's inequality with $p=q=2$, we estimate the last term in~\eqref{eq:Lyapunov_boundary_1} as follows \[\varphi_n \scpr{g}{w_n}\leq\frac{\varphi_n^2\|g\|_{\infty,\infty}^2|\Omega|}{2c_4}+\frac{c_4}{2}\|w_n\|^2.\] We have thus obtained the estimate \begin{equation}\label{eq:Lyapunov_boundary_2} \begin{aligned} \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n,w_n)\leq&-(\sigma-2\omega_0) V(z_n,w_n)\\ &-c_5\mathfrak{a}(z_n,z_n)-c_5\frac{k_0\|e_n\|_{{\mathbb{R}}^m}^2}{1-\|e_n\|_{{\mathbb{R}}^m}^2}-\frac{c_3c_5}{2\varphi_n^{2}}\|z_n\|_{L^4}^4+\varphi_n^2K_1, \end{aligned} \end{equation} where \begin{align*} \sigma\coloneqq \min\{2c_1,c_4\},\quad K_1\coloneqq c_5K_0+\frac{\|g\|_{\infty,\infty}^2|\Omega|}{2c_4}, \end{align*} and the choice of $\sigma$ accounts for the fact that only $\frac{c_4}{2}\|w_n\|^2$ remains available after absorbing the $g$-term. In particular, we have the conservative estimate \[ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} V(z_n,w_n) \le -(\sigma-2\omega_0) V(z_n,w_n) +\varphi_n^2K_1 \] on $[\gamma,T_n)$, which implies that \[ V(z_n(t),w_n(t)) \le \mathrm{e}^{-K(t,\gamma)} V(z_n(\gamma),w_n(\gamma)) + \int_\gamma^t \mathrm{e}^{-K(t,s)} \varphi_n(s)^2 K_1 \ds{s}, \] where \[ K(t,s) = \int_s^t \big(\sigma -2\omega_0(\tau)\big) \ds{\tau} = \sigma(t-s) - 2\ln \varphi_n(t) + 2 \ln \varphi_n(s),\quad \gamma\le s\le t < T_n.
\] Therefore, invoking $\varphi_n(\gamma)=\kappa_n$, for all $t\in[\gamma, T_n)$ we have \begin{align*} &c_5\|z_n(t)\|^2+\|w_n(t)\|^2 = 2V(z_n(t),w_n(t))\\ &\leq 2\mathrm{e}^{-\sigma (t-\gamma)}\frac{\varphi_n(t)^2}{\kappa_n^2}V(z_n(\gamma),w_n(\gamma))+\frac{2K_1}{\sigma}\varphi_n(t)^2\\ &= \varphi_n(t)^2\left((c_5\|v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)\|^2+\|u_\gamma^n\|^2)\mathrm{e}^{-\sigma (t-\gamma)}+2K_1\sigma^{-1}\right)\\ &\le \varphi_n(t)^2\left(c_5\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2+\|u_\gamma\|^2+2K_1\sigma^{-1}\right). \end{align*} Thus there exist $M,N>0$ which are independent of $n$ and $t$ such that \begin{equation}\label{eq:L2_bound} \forall\, t\in[\gamma,T_n):\ \|z_n(t)\|^2\leq M\varphi_n(t)^2\ \ \text{and}\ \ \|w_n(t)\|^2\leq N\varphi_n(t)^2, \end{equation} and, as a consequence, \begin{equation}\label{eq:L2_bound_uv} \forall\, t\in[\gamma,T_n):\ \|v_n(t)-q^n\cdot y_{\rm ref}(t)\|^2\leq M\ \ \text{and}\ \ \|u_n(t)\|^2\leq N. \end{equation} \emph{Step 3: We show $T_n=\infty$ and that $e_n$ is uniformly bounded away from~1 on~$[\gamma,\infty)$.}\\ \emph{Step 3a: We derive some estimates for $\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2$ and for an integral involving $\|z_n\|^4_{L^4}$.} In a similar way in which we have derived~\eqref{eq:Lyapunov_boundary_2} we can obtain the estimate \begin{equation}\label{eq:energy_boundary_z} \begin{aligned} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-(c_1-\omega_0)\|z_n\|^2+\|z_n\|\|w_n\|\\ &-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_0\varphi_n^2. \end{aligned} \end{equation} Using \eqref{eq:L2_bound} and $-c_1\|z_n\|^2\le 0$ leads to \begin{align*} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}\\ &+\|\dot{\varphi}\|_\infty M\varphi_n+(K_0+\sqrt{MN})\varphi_n^2. \end{align*} Hence, \begin{equation}\label{eq:energy_boundary_2} \begin{aligned} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+\widetilde{K}_1\varphi_n+K_2\varphi_n^2 \end{aligned} \end{equation} on $[\gamma,T_n)$, where $\widetilde{K}_1\coloneqq M\|\dot{\varphi}\|_\infty$ and $K_2\coloneqq K_0+\sqrt{MN}$; note that $\widetilde{K}_1$ is not to be confused with the constant $K_1$ from Step~2. Observe that \[ \frac{c_3}{2}\varphi_n^{-3}\|z_n\|^4_{L^4}\leq-\frac{\varphi_n^{-1}}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+K_3, \] where $K_3\coloneqq \widetilde{K}_1+K_2\|\varphi\|_\infty$.
Therefore, \begin{align*} &\frac{c_3}{2}\int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-3}\|z_n(s)\|^4_{L^4}\ds{s}\\ &\le K_3(\mathrm{e}^t-\mathrm{e}^\gamma)-\frac{1}{2}\int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-1}\tfrac{\text{\normalfont d}}{\text{\normalfont d}s}\|z_n(s)\|^2\ds{s}\\ &= K_3(\mathrm{e}^t-\mathrm{e}^\gamma)-\frac{1}{2}\left(\mathrm{e}^t\varphi_n(t)^{-1}\|z_n(t)\|^2-\frac{\|z_\gamma^n\|^2}{\kappa_n}\mathrm{e}^\gamma\right)\\ &\quad +\frac{1}{2}\int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-2}(\varphi_n(s)-\dot{\varphi}_n(s))\|z_n(s)\|^2\ds{s}\\ &\le \frac{\mathrm{e}^t}{2}(2K_3+(\|\varphi\|_\infty+\Gamma_r^{-1}+\|\dot{\varphi}\|_\infty)M)+\kappa_n\mathrm{e}^\gamma(\|v_\gamma\|^2+\|q\cdot y_{\rm ref}(\gamma)\|^2), \end{align*} and hence there exist $D_0,D_1>0$ independent of $n$ and $t$ such that \begin{equation} \label{eq:expz4_boundary} \forall\, t\in[\gamma,T_n):\ \int_\gamma^t\mathrm{e}^s\varphi_n(s)^{-3}\|z_n(s)\|^4_{L^4}\ds{s}\leq D_1\mathrm{e}^t+\kappa_n D_0. \end{equation} \emph{Step 3b: We derive an estimate for $\|\dot z_n\|^2$.} Multiplying the first equation in~\eqref{eq:weak} by $\dot{\mu}_j$ and summing up over $j\in\{0,\ldots,n\}$ we obtain \begin{align*} \|\dot{z}_n\|^2=&-\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-\frac{c_1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+\frac{k_0}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})\\ &+ \scpr{\omega_0z_n+F\left(t,z_n\right)-w_n}{\dot{z}_n}. \end{align*} We can estimate the last term above by \begin{align*} \scpr{\omega_0z_n}{\dot{z}_n}\leq&\ \frac{7}{2}\|\dot{\varphi}\|_\infty^2\varphi_n^{-2}\|z_n\|^2+\frac{1}{14}\|\dot{z}_n\|^2 \stackrel{\eqref{eq:L2_bound}}{\leq} \frac{7}{2}\|\dot{\varphi}\|_\infty^2M+\frac{1}{14}\|\dot{z}_n\|^2,\\ \scpr{-w_n}{\dot{z}_n}\leq&\ \frac{7}{2}\|w_n\|^2+\frac{1}{14}\|\dot{z}_n\|^2,\\ \scpr{F\left(t,z_n\right)}{\dot{z}_n}\leq&\ \frac{7}{2}\varphi_n^2\left(m\sum_{k=1}^m \|y_{{\rm ref},k}\|_\infty^2 \|\mathcal{A} q_k\|^2+\|I_{s,i}\|^2_{2,\infty}+\|f_0\|_{\infty,\infty}^2|\Omega|\right)\\ &+\frac{7}{2}\|f_1\|^2_{\infty,\infty}\|z_n\|^2+\frac{7}{2}\varphi_n^{-2}\|f_2\|_{\infty,\infty}^2\|z_n\|^4_{L^4}\\ &+\frac{5}{14}\|\dot{z}_n\|^2-\frac{c_3}{4\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}. \end{align*} Inserting these inequalities, subtracting $\tfrac12 \|\dot{z}_n\|^2$ and then multiplying by~$2$ gives \begin{equation*}\label{eq:energy_boundary_1} \begin{aligned} \|\dot{z}_n\|^2\leq&-\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-c_1\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+k_0\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})-\frac{c_3}{2\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}\\ &+7\varphi_n^2\left(m\sum_{k=1}^m\|y_{{\rm ref},k}\|_\infty^2 \|\mathcal{A} q_k\|^2+\|I_{s,i}\|^2_{2,\infty}\!+\|f_0\|_{\infty,\infty}^2|\Omega|+\|f_1\|^2_{\infty,\infty}M\!+\!N\!\right)\\ &+7\|\dot{\varphi}\|_\infty^2M+7\varphi_n^{-2}\|f_2\|_{\infty,\infty}^2\|z_n\|^4_{L^4}.
\end{aligned} \end{equation*} Now we add and subtract $\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2$, thus we obtain \begin{align*} \|\dot{z}_n\|^2\leq&-\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+k_0\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m}) -\frac{c_3}{2\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}\\ &+7(\|\varphi\|_\infty+\Gamma_r^{-1})^2\left(m\sum_{k=1}^m\|y_{{\rm ref},k}\|_\infty^2\|\mathcal{A} q_k\|^2+\|I_{s,i}\|^2_{2,\infty}+\|f_0\|_{\infty,\infty}^2|\Omega|\right.\\ &\left.\phantom{\sum_{i=0}^n}\hspace*{-6mm}+\|f_1\|^2_{\infty,\infty}M\right)+7(N(\|\varphi\|_\infty+\Gamma_r^{-1})^2 +\|\dot{\varphi}\|_\infty^2M)+7\varphi_n^{-2}\|f_2\|_{\infty,\infty}^2\|z_n\|^4_{L^4}\\ &+\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2. \end{align*} By the product rule we have \[-\frac{c_3}{2\varphi_n^2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^4_{L^4}=-\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}\right)- c_3\varphi_n^{-3}\dot{\varphi}_n\|z_n\|^4_{L^4},\] thus we find that \begin{equation}\label{eq:energy_boundary_3} \begin{aligned} \|\dot{z}_n\|^2&+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\mathfrak{a}(z_n,z_n)-k_0\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})+\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}\right)\\ \leq&-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_1+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}+\frac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2, \end{aligned} \end{equation} where \begin{align*} E_1&\coloneqq \ 7(\|\varphi\|_\infty+\Gamma_r^{-1})^2\left(m\sum_{k=1}^m\|y_{{\rm ref},k}\|_\infty^2\|\mathcal{A} q_k\|^2+ \|I_{s,i}\|^2_{2,\infty}+\|f_0\|_{\infty,\infty}^2|\Omega|\right.\\ &\quad\ \ \left.\phantom{\sum_{i=0}^n}\hspace*{-6mm}+\|f_1\|^2_{\infty,\infty}M\right) +7\big(N(\|\varphi\|_\infty+\Gamma_r^{-1})^2+\|\dot{\varphi}\|_\infty^2M\big),\\ E_2&\coloneqq 7\|f_2\|_{\infty,\infty}^2(\|\varphi\|_\infty+\Gamma_r^{-1})+c_3\|\dot{\varphi}\|_\infty \end{align*} are independent of $n$ and $t$.\\ \emph{Step 3c: We show uniform boundedness of~$e_n$.} Using~\eqref{eq:energy_boundary_2} in~\eqref{eq:energy_boundary_3} we obtain \begin{align*} \|\dot{z}_n\|^2+\dot\rho_n\leq&-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_1+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}\\ &-\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+\widetilde{K}_1\varphi_n+K_2\varphi_n^2\\ =&-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}\\ &-\mathfrak{a}(z_n,z_n)-\frac{k_0}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+\Lambda, \end{align*} where \begin{align*} \rho_n&\coloneqq \mathfrak{a}(z_n,z_n)-k_0\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})+\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4},\\ \Lambda&\coloneqq E_1+\widetilde{K}_1(\|\varphi\|_\infty+\Gamma_r^{-1})+K_2(\|\varphi\|_\infty+\Gamma_r^{-1})^2+k_0, \end{align*} and we have used the equality $$\frac{\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}=-1+\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}.$$ Adding and subtracting
$k_0\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})$ leads to \begin{align} \|\dot{z}_n\|^2+\dot\rho_n\leq&-\rho_n-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}\notag\\ &-k_0\left(\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}+\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})\right)+\Lambda\notag\\ \leq&-\rho_n-\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\varphi_n^{-3}\|z_n\|^4_{L^4}+\Lambda, \label{eq:L2zdot_boundary} \end{align} where for the last inequality we have used that \[\forall\, p\in(-1,1):\ \frac{1}{1-p^2}\geq\ln\left(\frac{1}{1-p^2}\right) = -\ln(1-p^2),\] which is a consequence of $x\geq\ln x$ for all $x>0$. We may now use the integrating factor $\mathrm{e}^t$ to obtain \[ \tfrac{\text{\normalfont d}}{\text{\normalfont d}t} \left(\mathrm{e}^t\rho_n\right) = \mathrm{e}^t(\rho_n + \dot \rho_n) \leq -\mathrm{e}^t\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2+E_2\mathrm{e}^t\varphi_n^{-3}\|z_n\|^4_{L^4}+\Lambda \mathrm{e}^t \underset{\le 0}{\underbrace{- \mathrm{e}^t \|\dot z_n\|^2}}. \] Integrating and using \eqref{eq:expz4_boundary} yields that for all $t\in[\gamma,T_n)$ we have \begin{align*} \mathrm{e}^t\rho_n(t)-\rho_n(\gamma)\mathrm{e}^\gamma\leq&\ (E_2D_1+\Lambda)\mathrm{e}^t+\kappa_n E_2D_0-\int_\gamma^t\mathrm{e}^s\left(c_1+\frac{1}{2}\right)\tfrac{\text{\normalfont d}}{\text{\normalfont d}s}\|z_n(s)\|^2\ds{s}\\ \leq&\ (E_2D_1+\Lambda)\mathrm{e}^t+\kappa_n E_2D_0+\left(c_1+\frac{1}{2}\right)\|z_\gamma^n\|^2\mathrm{e}^\gamma\\ &+\left(c_1+\frac{1}{2}\right)\int_\gamma^t\mathrm{e}^s\|z_n(s)\|^2\ds{s}\\ \stackrel{\eqref{eq:L2_bound}}{\leq}&\ (E_2D_1+\Lambda)\mathrm{e}^t+\kappa_n E_2D_0+\left(c_1+\frac12\right)\kappa_n^2\mathrm{e}^\gamma\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2\\ &+\left(c_1+\frac{1}{2}\right)(\|\varphi\|_\infty+\Gamma_r^{-1})^2M\mathrm{e}^t. \end{align*} Thus, there exist $\Xi_1,\Xi_2,\Xi_3>0$ independent of $n$ and $t$, such that \[\rho_n(t)\leq\rho_n(\gamma)\mathrm{e}^{-(t-\gamma)}+\Xi_1+\kappa_n(\Xi_2+\kappa_n\Xi_3)\mathrm{e}^{-(t-\gamma)}.\] Invoking the definition of $\rho_n$ and that $\mathrm{e}^{-(t-\gamma)}\leq1$ for $t\geq\gamma$ we find that \begin{equation}\label{eq:rho} \forall\, t\in[\gamma,T_n):\ \rho_n(t)\leq \rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3, \end{equation} where \begin{align*} \rho_{n}^0\coloneqq &\ \kappa_n^2\mathfrak{a}(v_\gamma^n\!-\!q^n\cdot y_{\rm ref}(\gamma),v_\gamma^n\!-\!q^n\cdot y_{\rm ref}(\gamma))\!-\!k_0\ln(1\!-\!\kappa_n^2\|\mathcal{B}' (v_\gamma^n\!-\!q^n\cdot y_{\rm ref}(\gamma))\|^2_{{\mathbb{R}}^m})\\ &+\frac{c_3}{2}\kappa_n^2\|v_\gamma^n-q^n\cdot y_{\rm ref}(\gamma)\|_{L^4}^4 = \rho_n(\gamma).
\end{align*} Note that, by construction of $\kappa_n$ and the Sobolev Embedding Theorem, the sequence $(\rho_n^0)_{n\in{\mathbb{N}}}$ is bounded and in fact $\rho_n^0\to0$ as $n\to\infty$; in particular, $\rho_n^0$ is bounded independently of $n$.\\ Again using the definition of $\rho_n$ and~\eqref{eq:rho} we find that \begin{align*} k_0 \ln\left(\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}\right) = \rho_n - \mathfrak{a}(z_n,z_n)- \frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4} \le \rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3, \end{align*} and hence \[\frac{1}{1-\|e_n\|^2_{{\mathbb{R}}^m}}\leq\exp\left(\frac{1}{k_0}\left(\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3\right)\right)=\varepsilon(n)^{-1},\] where $\varepsilon(n)\coloneqq \exp\left(-\frac{1}{k_0}\left(\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3\right)\right)\in(0,1)$. We may thus conclude that \begin{equation}\label{eq:err_bounded_away} \forall\, t\in[\gamma,T_n):\ \|e_n(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon(n), \end{equation} or, equivalently, \begin{equation}\label{eq:err_bdd} \forall\, t\in[\gamma,T_n):\ \varphi_n(t)^2\|\mathcal{B}'( v_n(t)-q^n\cdot y_{\rm ref}(t))\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon(n). \end{equation} Moreover, from~\eqref{eq:rho}, the definition of $\rho_n$, $k_0\ln(1-\|e_n\|^2_{{\mathbb{R}}^m})\leq0$ and Assumption~\ref{Ass1} we have that \begin{align*} \delta\|\nabla z_n\|^2+\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4} \leq\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3. \end{align*} Reversing the change of variables leads to \begin{equation}\label{eq:potential} \begin{aligned} \forall\, t\in[\gamma,T_n):\ \delta\varphi_n(t)^2\|\nabla(v_n(t)-q^n\cdot y_{\rm ref}(t))\|^2&+\frac{c_3}{2}\varphi_n(t)^2\|v_n(t)-q^n\cdot y_{\rm ref}(t)\|_{L^4}^4\\ &\leq\rho_{n}^0+\Xi_1+\kappa_n\Xi_2+\kappa_n^2\Xi_3, \end{aligned} \end{equation} which implies that for all $t\in[\gamma,T_n)$ we have $v_n(t)\in W^{1,2}(\Omega)$.\\ \emph{Step 3d: We show that~$T_n=\infty$.} Assume that $T_n<\infty$. Then it follows from~\eqref{eq:err_bounded_away} that the graph of the solution~$(\mu^n,\nu^n)$ from Step~2 is contained in a compact subset of~${\mathbb{D}}$, contradicting the maximality of~$T_n$. Therefore, we have $T_n=\infty$. \emph{Step 4: We show convergence of the approximate solution, uniqueness and regularity of the solution in $[\gamma,\infty)\times \Omega$.}\\ \emph{Step~4a: we prove some inequalities for later use.} From~\eqref{eq:rho} we have that, on $[\gamma,\infty)$, \[\frac{c_3}{2}\varphi_n^{-2}\|z_n\|_{L^4}^4\leq\rho_{n}^0+\Xi_1+\kappa_n\Xi_2 + \kappa_n^2\Xi_3.\] Using a similar procedure as for the derivation of~\eqref{eq:expz4_boundary} we may obtain the estimate \begin{equation}\label{eq:noexpz4_boundary} \forall\, t\ge \gamma:\ \int_\gamma^t\varphi_n(s)^{-3}\|z_n(s)\|^4_{L^4}\ds{s}\leq\kappa_n d_0+d_1t \end{equation} for $d_0,d_1>0$ independent of $n$ and $t$. Further, we can integrate \eqref{eq:L2zdot_boundary} on the interval $[\gamma,t]$ to obtain, invoking $\rho_n(t)\ge 0$ and~\eqref{eq:noexpz4_boundary}, \[ \int_\gamma^t\|\dot{z}_n(s)\|^2\ds{s}\leq\rho_{n}^0+ \left(c_1+\frac12\right)\kappa_n^2\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2+E_2(\kappa_n d_0+d_1t)+\Lambda t \] for all $t\ge \gamma$. Hence, there exist $S_0,S_1,S_2>0$ independent of $n$ and $t$ such that \begin{equation}\label{eq:intzdot} \forall\, t\ge \gamma:\ \int_\gamma^t\|\dot{z}_n(s)\|^2\ds{s}\leq\rho_{n}^0+S_0\kappa_n+S_1\kappa_n^2+S_2t. \end{equation} This implies existence of $S_3,S_4>0$ such that \begin{equation}\label{eq:dot_var_v} \forall\, t\ge \gamma:\ \int_\gamma^t\left\|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_nv_n)\right\|^2\ds{s}\leq\rho_{n}^0+S_0\kappa_n+S_1\kappa_n^2+S_3t+S_4.
\end{equation} In order to improve~\eqref{eq:noexpz4_boundary}, we observe that from~\eqref{eq:energy_boundary_z} it follows that \begin{align*} \tfrac{1}{2}\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\|z_n\|^2\leq&-\mathfrak{a}(z_n,z_n)-(c_1-\omega_0)\|z_n\|^2+\|z_n\|\|w_n\|\\ &-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_0\varphi_n^2\\ \leq&\ \omega_0\|z_n\|^2-\frac{c_3}{2\varphi_n^2}\|z_n\|^4_{L^4}+K_2\varphi_n^2 -\mathfrak{a}(z_n,z_n)-\frac{k_0\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}, \end{align*} which gives \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\left(\varphi_n^{-2}\|z_n\|^2\right)\leq 2K_2-c_3\varphi_n^{-4}\|z_n\|^4_{L^4}-2\varphi_n^{-2}\mathfrak{a}(z_n,z_n)-\frac{2k_0\varphi_n^{-2}\|e_n\|^2_{{\mathbb{R}}^m}}{1-\|e_n\|^2_{{\mathbb{R}}^m}}.\] This implies that for all $t\ge \gamma$ we have \begin{equation}\label{eq:vphi4z4} \begin{aligned} \int_\gamma^t c_3\varphi_n(s)^{-4}\|z_n(s)\|^4_{L^4} &+2\varphi_n(s)^{-2}\mathfrak{a}(z_n(s),z_n(s))+\frac{2k_0\varphi_n(s)^{-2}\|e_n(s)\|^2_{{\mathbb{R}}^m}}{1-\|e_n(s)\|^2_{{\mathbb{R}}^m}}\ds{s}\\ &\leq2K_2t+\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2, \end{aligned} \end{equation} which is bounded independently of $n$. This shows that for all $t\ge \gamma$ we have \begin{equation}\label{eq:L4} \begin{aligned} c_3\int_\gamma^t\|v_n(s)-q^n\cdot y_{\rm ref}(s)\|^4_{L^4}\ds{s}+\int_\gamma^t2\mathfrak{a}(v_n(s)-q^n\cdot y_{\rm ref}(s),v_n(s)-q^n\cdot y_{\rm ref}(s))\ds{s}\\ +\int_\gamma^t\frac{2k_0\|\mathcal{B}' (v_n(s)-q^n\cdot y_{\rm ref}(s))\|^2_{{\mathbb{R}}^m}}{1-\varphi_n(s)^{2}\|\mathcal{B}' (v_n(s)-q^n\cdot y_{\rm ref}(s))\|^2_{{\mathbb{R}}^m}}\ds{s}\leq2K_2t+\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2. \end{aligned} \end{equation} In order to prove that $\|\dot{w}_n\|^2$ is bounded independently of $n$ and $t$, a last calculation is required. Multiply the second equation in~\eqref{eq:weak} by $\dot{\nu}_j$ and sum over $j$ to obtain \[\|\dot{w}_n\|^2=-(c_4-\omega_0)\scpr{w_n}{\dot{w}_n}+c_5\scpr{z_n}{\dot{w}_n}+\varphi_n \scpr{g}{\dot{w}_n}.\] Using $(\omega_0 - c_4) w_n = (\dot\varphi_n - c_4\varphi_n) \varphi_n^{-1} w_n$ and the inequalities \begin{align*} -(c_4-\omega_0)\scpr{w_n}{\dot{w}_n}&\leq\frac{3}{2} |\dot\varphi_n - c_4\varphi_n|^2 \varphi_n^{-2}\|w_n\|^2+\frac{\|\dot{w}_n\|^2}{6}\\ &\leq\frac{3}{2}(\| \dot{\varphi}\|_\infty + c_4(\|\varphi\|_\infty+\Gamma_r^{-1}))^2 N+\frac{\|\dot{w}_n\|^2}{6},\\ c_5\scpr{z_n}{\dot{w}_n}&\leq\frac{3c_5^2}{2}\|z_n\|^2+\frac{1}{6}\|\dot{w}_n\|^2\\ &\leq\frac{3c_5^2M}{2}(\|\varphi\|_\infty+\Gamma_r^{-1})^2+\frac{1}{6}\|\dot{w}_n\|^2,\\ \varphi_n \scpr{g}{\dot{w}_n}& \leq\frac{3}{2}(\|\varphi\|_\infty+\Gamma_r^{-1})^2\|g\|^2_{\infty,\infty}|\Omega|+\frac{1}{6}\|\dot{w}_n\|^2, \end{align*} it follows that for all $t\ge\gamma$ we have \begin{equation}\label{eq:dotw} \begin{aligned} \|\dot{w}_n(t)\|^2\leq&\ 3(\| \dot{\varphi}\|_\infty + c_4(\|\varphi\|_\infty+\Gamma_r^{-1}))^2 N\\&+3c_5^2M(\|\varphi\|_\infty+\Gamma_r^{-1})^2+3(\|\varphi\|_\infty+\Gamma_r^{-1})^2\|g\|^2_{\infty,\infty}|\Omega|,\end{aligned} \end{equation} which is bounded independently of $n$ and $t$.
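We note that each of the three estimates above is an instance of the weighted Young inequality
\[\forall\,\varepsilon>0\ \forall\, a,b\geq0:\quad ab\leq\frac{\varepsilon}{2}a^2+\frac{1}{2\varepsilon}b^2,\]
here applied with $\varepsilon=3$; the three resulting contributions $\tfrac{1}{6}\|\dot{w}_n\|^2$ sum up to $\tfrac{1}{2}\|\dot{w}_n\|^2$ and are absorbed into the left-hand side, which produces the factor~$3$ in~\eqref{eq:dotw}.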
Multiplying the second equation in~\eqref{eq:weak} by $\varphi_n^{-1}$ and $\theta_i$ and summing up over $i\in\{0,\dots,n\}$ leads to \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_n^{-1}w_n)=-\varphi_n^{-2}\dot{\varphi}_nw_n+\varphi_n^{-1}\dot{w}_n=-c_4\varphi_n^{-1}w_n+c_5\varphi_n^{-1}z_n+g_n,\] where \[g_n\coloneqq \sum_{i=0}^n\scpr{g}{\theta_i}\theta_i.\] Taking the norm of the latter gives \begin{align*} \left\|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_n^{-1}w_n)\right\|&\leq c_4\varphi_n^{-1}\|w_n\|+c_5\varphi_n^{-1}\|z_n\|+\|g_n\|\\ &\leq c_4\sqrt{N}+c_5\sqrt{M}+\|g\|_{\infty,\infty}|\Omega|^{1/2}, \end{align*} thus \begin{equation}\label{eq:dot_u} \forall\, t\ge \gamma:\ \|\dot{u}_n(t)\|\leq c_4\sqrt{N}+c_5\sqrt{M}+\|g\|_{\infty,\infty}|\Omega|^{1/2}. \end{equation} \emph{Step 4b: We show that $(v_n,u_n)$ converges weakly.} Let $T>\gamma$ be given. By a similar argument as in Section~\ref{ssec:mono_proof_tleqgamma}, \eqref{eq:L4} together with~\eqref{eq:err_bdd} implies that $I_{s,e}^n\in L^2(\gamma,T;{\mathbb{R}}^m)$ and $v_n\in L^2(\gamma,T;W^{1,2}(\Omega))$, and consequently $\dot{v}_n\in L^2(\gamma,T;W^{1,2}(\Omega)')$.\\ Furthermore, analogously to Section~\ref{ssec:mono_proof_tleqgamma}, we have that there exist subsequences such that \begin{align*} u_n\to u&\in W^{1,2}(\gamma,T;L^{2}(\Omega))\mbox{ weakly},\\ v_n\to v&\in L^2(\gamma,T;W^{1,2}(\Omega))\mbox{ weakly},\\ \dot{v}_n\to\dot{v}&\in L^2(\gamma,T;(W^{1,2}(\Omega))')\mbox{ weakly}, \end{align*} so that $u,v\in C([\gamma,T];L^2(\Omega))$. Also $v_n^2\to v^2$ weakly in $L^2((\gamma,T)\times\Omega)$ and $v_n^3\to v^3$ weakly in $L^{4/3}((\gamma,T)\times\Omega)$.\\ We may infer further properties of $u$ and $v$. By \eqref{eq:L2_bound_uv}, \eqref{eq:potential}, \eqref{eq:dot_var_v} \& \eqref{eq:dot_u} we have that $u_n,\dot{u}_n$ lie in a bounded subset of $L^\infty(\gamma,\infty;L^2(\Omega))$ and that $v_n$ lies in a bounded subset of $L^\infty(\gamma,\infty;L^2(\Omega))$. Moreover, $\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\varphi_nv_n)\in L^2_{\rm loc}(\gamma,\infty;L^2(\Omega))$. Then, using Lemma~\ref{lem:weak_convergence}, we find a subsequence such that \begin{align*} u_n\to u&\in L^\infty(\gamma,T;L^2(\Omega))\mbox{ weak}^\star,\\ \dot{u}_n\to \dot{u}&\in L^\infty(\gamma,T;L^2(\Omega))\mbox{ weak}^\star,\\ v_n\to v&\in L^\infty(\gamma,T;L^{2}(\Omega))\mbox{ weak}^\star,\\ \varphi_nv_n\to \varphi v&\in L^\infty(\gamma,T;W^{1,2}(\Omega))\mbox{ weak}^\star,\\ \dot{v}_n\to \dot{v}&\in L^2(\gamma,T;W^{1,2}(\Omega)')\mbox{ weakly},\\ \varphi_n\dot{v}_n\to \varphi\dot{v}&\in L^2(\gamma,T;L^2(\Omega))\mbox{ weakly}, \end{align*} since $\varphi_n\to\varphi$ in $BC([\gamma,T];{\mathbb{R}})$.
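The continuity statement $u,v\in C([\gamma,T];L^2(\Omega))$ above is a consequence of two standard facts: for $u$ it follows from the embedding $W^{1,2}(\gamma,T;L^2(\Omega))\hookrightarrow C([\gamma,T];L^2(\Omega))$, while for $v$ it follows from the classical embedding for Gelfand triples $V\hookrightarrow H\hookrightarrow V'$, namely
\[w\in L^2(\gamma,T;V)\ \text{ and }\ \dot{w}\in L^2(\gamma,T;V')\quad\Longrightarrow\quad w\in C([\gamma,T];H),\]
applied with $V=W^{1,2}(\Omega)$ and $H=L^2(\Omega)$.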
Moreover, by $\inf_{t>\gamma+\delta}\varphi(t)>0$, we also have that $v\in L^\infty(\gamma+\delta,T;W^{1,2}(\Omega))$ and $\dot{v}\in L^2(\gamma+\delta,T;L^2(\Omega))$ for all~$\delta>0$.\\ Further, $\kappa_n,\rho_n^0\to0$ and \[\varepsilon(n)\underset{n\to\infty}{\to}\varepsilon_0\coloneqq \exp\left(-k_0^{-1}\Xi_1\right).\] Thus, by \eqref{eq:L2_bound_uv}, \eqref{eq:err_bdd}, \eqref{eq:potential} \& \eqref{eq:L4} we have $v\in L^4((\gamma,T)\times\Omega)$ and for almost all $t\in[\gamma,T)$ the following estimates hold: \begin{equation}\label{eq:potential_limit} \begin{aligned} &\|v(t)-q\cdot y_{\rm ref}(t)\|\leq \sqrt{M},\\ &\|u(t)\|\leq \sqrt{N},\\ &\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon_0,\\ &\delta\varphi(t)^2\|\nabla(v(t)-q\cdot y_{\rm ref}(t))\|^2+\frac{c_3}{2}\varphi(t)^2\|v(t)-q\cdot y_{\rm ref}(t)\|_{L^4}^4\leq\Xi_1,\\ &c_3\int_\gamma^t\|v(s)-q\cdot y_{\rm ref}(s)\|^4_{L^4}\ds{s}\leq2K_2t+\|v_\gamma-q\cdot y_{\rm ref}(\gamma)\|^2. \end{aligned} \end{equation} Moreover, as in Section~\ref{ssec:mono_proof_tleqgamma}, $v_n\to v$ strongly in $L^2(\gamma,T;L^2(\Omega))$ and $u,v\in C([\gamma,T);L^2(\Omega))$ with $(u(\gamma),v(\gamma))=(u_\gamma,v_\gamma)$.\\ Hence, for $\chi\in L^2(\Omega)$ and $\theta\in W^{1,2}(\Omega)$ we have that $(u_n,v_n)$ satisfies the integrated version of \eqref{eq:weak_uv}; passing to the limit, we obtain for $t\in(\gamma,T)$ that \begin{align*} \scpr{v(t)}{\theta}=&\ \scpr{v_\gamma}{\theta}+\int_\gamma^t-\mathfrak{a}(v(s),\theta)+\scpr{p_3(v(s))-u(s)+I_{s,i}(s)}{\theta}\ds{s}\\ &+\int_\gamma^t\scpr{I_{s,e}(s)}{\mathcal{B}'\theta}_{{\mathbb{R}}^m}\ds{s},\\ \scpr{u(t)}{\chi}=&\ \scpr{u_\gamma}{\chi}+\int_\gamma^t\scpr{c_5v(s)-c_4u(s)}{\chi}\ds{s},\\ I_{s,e}(t)=&\ -\frac{k_0}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}' v(t)-y_{\rm ref}(t)) \end{align*} by bounded convergence \cite[Thm.~II.4.1]{Dies77}. Hence, $(u,v)$ is a solution of~\eqref{eq:FHN_feedback} in $(\gamma,T)$. Moreover,~\eqref{eq:strong_delta} also holds in $W^{1,2}(\Omega)'$ for $t\geq\gamma$, that is \begin{equation}\label{eq:Xminus} \dot{v}(t)=\mathcal{A} v(t)+p_3(v(t))+\mathcal{B} I_{s,e}(t)-u(t)+I_{s,i}(t). \end{equation} \emph{Step 5: We show uniqueness of the solution on $[0,\infty)$.}\\ Using the same arguments as in Step~1e of Section~\ref{ssec:mono_proof_tleqgamma} together with $v,u\in L^4((\gamma,T)\times\Omega)$, it can be shown that the solution $(v,u)$ of~\eqref{eq:FHN_feedback} is unique on $(\gamma,T)$ for any $T>\gamma$. Combining this with uniqueness on $[0,\gamma]$ we obtain a unique solution on $[0,\infty)$. \emph{Step 6: We show the regularity properties of the solution.}\\ To this end, note that for all $\delta>0$ we have that \[v\in L^2_{\rm loc}(\gamma,\infty;W^{1,2}(\Omega))\cap L^\infty(\gamma+\delta,\infty;W^{1,2}(\Omega)),\] so that $I_r\coloneqq I_{s,i}+c_2v^2-c_3v^3-u\in L^2_{\rm loc}(\gamma,\infty;L^{2}(\Omega))\cap L^\infty(\gamma+\delta,\infty;L^2(\Omega))$, and the application of Proposition \ref{prop:hoelder} yields that $v\in BC([\gamma,\infty);L^2(\Omega))\cap BUC((\gamma,\infty);W^{1,2}(\Omega))$. By the uniform continuity of $v$ and the completeness of $W^{1,2}(\Omega)$, $v$ has a limit at $t=\gamma$, see for instance \cite[Thm.~II.13.D]{Simm63}. Thus, $v\in L^\infty(\gamma,\infty;W^{1,2}(\Omega))$.
From Section \ref{ssec:mono_proof_tleqgamma} and the latter we have that $v\in L^2_{\rm loc}(0,\infty;W^{1,2}(\Omega))\cap L^\infty(\delta,\infty;W^{1,2}(\Omega))$ for all $\delta>0$, so we have \begin{align*} I_{s,e}&\in L^2_{\rm loc}(0,\infty;{\mathbb{R}}^m)\cap L^\infty(\delta,\infty;{\mathbb{R}}^m),\\ v&\in L^2_{\rm loc}(0,\infty;W^{1,2}(\Omega))\cap L^\infty(\delta,\infty;W^{1,2}(\Omega))\\ &\quad \ \ \cap BC([0,\infty);L^2(\Omega)) \cap BUC([\delta,\infty);W^{1,2}(\Omega)), \end{align*} so that $I_r\coloneqq I_{s,i}+c_2v^2-c_3v^3-u\in L^2_{\rm loc}(0,\infty;L^{2}(\Omega))\cap L^\infty(\delta,\infty;L^2(\Omega))$.\\ Recall that by assumption we have $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{r,2}(\Omega)')$ for some $r\in [0,1]$. Applying Proposition \ref{prop:hoelder} we have that for all $\delta>0$ the unique solution of~\eqref{eq:Xminus} satisfies \begin{equation}\label{eq:sol_reg} \begin{aligned} \text{if $r=0$:} &\quad \forall\,\lambda\in(0,1):\ v\in C^{0,\lambda}([\delta,\infty);L^2(\Omega)); \\ \text{if $r\in(0,1)$:} &\quad v\in C^{0,1-r/2}([\delta,\infty);L^2(\Omega));\\ \text{if $r=1$:} &\quad v\in C^{0,1/2}([\delta,\infty);L^2(\Omega)). \end{aligned} \end{equation} Since $u,v\in BC([0,\infty);L^2(\Omega))$ and $\dot{u}=c_5v-c_4u$, we also have $\dot{u}\in BC([0,\infty);L^2(\Omega))$. Now, from~\eqref{eq:sol_reg} and $\mathcal{B}' \in\mathcal{L}(W^{r,2}(\Omega),{\mathbb{R}}^m)$ for $r\in[0,1]$ we obtain that \begin{itemize} \item for $r=0$ and $\lambda\in(0,1)$:\ $y= \mathcal{B}' v\in C^{0,\lambda}([\delta,\infty);{\mathbb{R}}^m)$; \item for $r\in(0,1)$:\ $y= \mathcal{B}' v\in C^{0,1-r}([\delta,\infty);{\mathbb{R}}^m)$; \item for $r=1$:\ $y= \mathcal{B}' v\in BUC([\delta,\infty);{\mathbb{R}}^m)$. \end{itemize} Further, from \eqref{eq:potential_limit} we have \[\forall\,t\geq\delta:\ \varphi(t)^{2}\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}\leq1-\varepsilon_0,\] hence $I_{s,e}\in L^\infty(\delta,\infty;{\mathbb{R}}^m)$ and $I_{s,e}$ has the same regularity properties as $y$, since we have that $\varphi\in\Phi_\gamma$ and $y_{\rm ref}\in W^{1,\infty}(0,\infty;{\mathbb{R}}^m)$. Therefore, we have proved statements (i)--(iii) in Theorem~\ref{thm:mono_funnel} as well as~a) and~b). It remains to show~c), for which we additionally require that $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,W^{1,2}(\Omega))$. Then there exist $b_1,\ldots,b_m\in W^{1,2}(\Omega)$ such that $(\mathcal{B}' x)_i=\scpr{x}{b_i}$ for all $i=1,\dots,m$ and $x\in L^2(\Omega)$.
Using the $b_i$ in the weak formulation for $i=1,\dots,m$, we have \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}\scpr{v(t)}{b_i}=-\mathfrak{a}(v(t),b_i)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{b_i}+\scpr{I_{s,e}(t)}{\mathcal{B}' b_i}_{{\mathbb{R}}^m}.\] Since $(\mathcal{B}' v(t))_i=\scpr{v(t)}{b_i}$, this leads to \[\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\mathcal{B}' v(t))_i=-\mathfrak{a}(v(t),b_i)+\scpr{p_3(v(t))-u(t)+I_{s,i}(t)}{b_i}+\scpr{I_{s,e}(t)}{\mathcal{B}' b_i}_{{\mathbb{R}}^m}.\] Taking the absolute value and using the Cauchy-Schwarz inequality yields \begin{align*} \left|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\mathcal{B}' v(t))_i\right|\leq&\ \|D\|_{L^\infty}\|v(t)\|_{W^{1,2}}\|b_i\|_{W^{1,2}}+\|p_3(v(t))-u(t)+I_{s,i}(t)\|_{L^2}\|b_i\|_{L^2}\\ &+\|I_{s,e}(t)\|_{{\mathbb{R}}^m}\|\mathcal{B}' b_i\|_{{\mathbb{R}}^m}, \end{align*} and therefore \begin{align*} \forall\, i=1,\ldots,m\ \forall\,\delta>0:\ \left\|\tfrac{\text{\normalfont d}}{\text{\normalfont d}t}(\mathcal{B}' v)_i\right\|_{L^\infty(\delta,\infty;{\mathbb{R}})}<\infty, \end{align*} by which $y = \mathcal{B}'v \in W^{1,\infty}(\delta,\infty;{\mathbb{R}}^m)$ as well as $I_{s,e} \in W^{1,\infty}(\delta,\infty;{\mathbb{R}}^m)$. This completes the proof of the theorem. \ensuremath{\hfill\ensuremath{\square}} \newlength\fheight \newlength\fwidth \setlength\fheight{0.3\linewidth} \setlength\fwidth{0.9\linewidth} \section{A numerical example} \label{sec:numerics} In this section, we illustrate the practical applicability of the funnel controller by means of a numerical example. The setup chosen here is a standard test example for termination of reentry waves and has been considered similarly e.g.\ in~\cite{BreiKuni17,KuniNagaWagn11}. All simulations are generated on an AMD Ryzen 7 1800X @ 3.68 GHz x 16, 64 GB RAM, MATLAB\textsuperscript{\textregistered} \;Version 9.2.0.538062 (R2017a). The solutions of the ODE systems are obtained by the MATLAB\textsuperscript{\textregistered}\;routine \texttt{ode23}. The parameters for the FitzHugh-Nagumo model \eqref{eq:FHN_model} used here are as follows: \begin{align*} \Omega&=(0,1)^2,\ \ D=\begin{bmatrix} 0.015 & 0 \\ 0 & 0.015 \end{bmatrix}, \ \ \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix} \approx \begin{pmatrix} 1.614\\ 0.1403 \\ 0.012\\ 0.00015\\ 0.015 \end{pmatrix}. \end{align*} The spatially discrete system of ODEs corresponds to a finite element discretization with piecewise linear finite elements on a uniform $64\times 64$ mesh. For the control action, we assume that $\mathcal{B}\in \mathcal{L}(\mathbb R^4,W^{1,2}(\Omega)')$, where the Neumann control operator is defined by \begin{align*} \mathcal{B}'z &= \begin{pmatrix} \int_{\Gamma_1} z(\xi)\, \mathrm{d}\sigma,\int_{\Gamma_2} z(\xi)\, \mathrm{d}\sigma,\int_{\Gamma_3} z(\xi)\, \mathrm{d}\sigma ,\int_{\Gamma_4} z(\xi) \,\mathrm{d}\sigma\end{pmatrix}^\top, \\ \Gamma_1 &= \{1\}\times [0,1], \ \ \Gamma_2= [0,1]\times \{1\}, \ \ \Gamma_3 = \{0\}\times [0,1], \ \ \Gamma_4=[0,1]\times \{0\}. \end{align*} The purpose of the numerical example is to model a typical defibrillation process as a tracking problem as discussed above. In this context, system \eqref{eq:FHN_model} is initialized with $(v(0),u(0))=(v_0^*,u_0^*)$ and $I_{s,i}=0=I_{s,e}$, where $(v_0^*,u_0^*)$ is an arbitrary snapshot of a reentry wave. The resulting reentry phenomena are shown in Fig.~\ref{fig:reentry_waves} and resemble a dysfunctional heart rhythm which impedes the intracellular stimulation current~$I_{s,i}$.
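For illustration, one possible discrete realization of the output map $\mathcal{B}'$ is sketched below. This is a minimal example, not necessarily the implementation used for the simulations reported here; it assumes nodal values on a uniform $(N+1)\times(N+1)$ grid and trapezoidal quadrature along the four edges.
\begin{verbatim}
import numpy as np

# Sketch: discrete boundary output B'z = (int_{Gamma_1} z, ..., int_{Gamma_4} z)
# for nodal values z[i, j] ~ z(x_i, y_j) on a uniform grid over (0,1)^2.
def boundary_output(z):
    N = z.shape[0] - 1
    h = 1.0 / N
    def edge_integral(vals):          # trapezoidal rule along one edge
        return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return np.array([
        edge_integral(z[-1, :]),      # Gamma_1 = {1} x [0,1]
        edge_integral(z[:, -1]),      # Gamma_2 = [0,1] x {1}
        edge_integral(z[0, :]),       # Gamma_3 = {0} x [0,1]
        edge_integral(z[:, 0]),       # Gamma_4 = [0,1] x {0}
    ])
\end{verbatim}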
The objective is to design a stimulation current $I_{s,e}$ such that the dynamics return to a natural heart rhythm modeled by a reference trajectory $y_{\text{ref}}$. The trajectory $y_{\text{ref}} = \mathcal{B}' v_{\text{ref}}$ corresponds to a solution $(v_{\text{ref}},u_{\text{ref}})$ of~\eqref{eq:FHN_model} with $(v_{\text{ref}}(0),u_{\text{ref}}(0))=(0,0)$, $I_{s,e}=0$ and \begin{align*} I_{s,i}(t) = 101\cdot w(\xi) (\chi_{[49,51]}(t) + \chi_{[299,301]}(t)), \end{align*} where the excitation domain of the intracellular stimulation current~$I_{s,i}$ is described by \begin{align*} w(\xi) = \begin{cases} 1, & \text{if } (\xi_1-\frac{1}{2})^2+(\xi_2-\frac{1}{2})^2 \le 0.0225, \\ 0, & \text{otherwise}. \end{cases} \end{align*} The smoothness of the signal is guaranteed by convolving the original signal with a triangular function. The function $\varphi$ characterizing the performance funnel (see Fig.~\ref{fig:funnel_error}) is chosen as \begin{align*} \varphi(t)= \begin{cases} 0, & t \in [0,0.05],\\ \tanh(\frac{t}{100}), & t > 0.05. \end{cases} \end{align*} \begin{figure}[tb] \begin{subfigure}{.5\linewidth} \begin{center} \includegraphics[scale=0.4]{reentry_1.pdf} \end{center} \end{subfigure} \begin{subfigure}{.5\linewidth} \begin{center} \includegraphics[scale=0.4]{reentry_2.pdf} \end{center} \end{subfigure} \caption{Snapshots of reentry waves for $t=100$ (left) and $t=200$ (right).} \label{fig:reentry_waves} \end{figure} \begin{figure}[tb] \begin{center} \input{funnel_error.tikz} \end{center} \caption{Error dynamics and funnel boundary.} \label{fig:funnel_error} \end{figure} Fig.~\ref{fig:y_funnel} shows the results of the closed-loop system for $(v(0),u(0))=(v_0^*,u_0^*)$ and the control law \begin{align*} I_{s,e}(t)=-\frac{0.75}{1-\varphi(t)^2\|\mathcal{B}' v(t)-y_{\rm ref}(t)\|^2_{{\mathbb{R}}^m}}(\mathcal{B}'v(t)-y_{\rm ref}(t)), \end{align*} which is visualized in Fig.~\ref{fig:u_funnel}. Let us note that the sudden changes in the feedback law are due to the jump discontinuities of the intracellular stimulation current $I_{s,i}$ used for simulating a regular heart beat. \setlength\fheight{0.3\linewidth} \setlength\fwidth{0.4\linewidth} \begin{figure}[tb] \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_1.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_2.tikz} \end{center} \end{subfigure} \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_3.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{y_funnel_4.tikz} \end{center} \end{subfigure} \caption{Reference signals and outputs of the funnel controlled system.} \label{fig:y_funnel} \end{figure} We see from Fig.~\ref{fig:y_funnel} that the controlled system tracks the desired reference signal with the prescribed performance. Also note that the performance constraints are not active on the interval $[0,0.05]$. Fig.~\ref{fig:u_funnel} further shows that the tracking is achieved with a comparably small control effort.
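The feedback law itself is straightforward to implement. The following sketch shows the evaluation of the funnel gain with $k_0=0.75$ and the funnel function $\varphi$ from above; the output \texttt{y} and reference \texttt{y\_ref} are placeholders (illustrative, not our simulation code).
\begin{verbatim}
import numpy as np

def phi(t):                             # funnel function from above
    return 0.0 if t <= 0.05 else np.tanh(t / 100.0)

def funnel_feedback(t, y, y_ref, k0=0.75):
    # I_{s,e}(t) = -k0 / (1 - phi(t)^2 ||y - y_ref||^2) * (y - y_ref)
    e = y - y_ref                       # output error B'v(t) - y_ref(t)
    gate = 1.0 - phi(t) ** 2 * float(np.dot(e, e))
    assert gate > 0.0, "error has left the performance funnel"
    return -(k0 / gate) * e
\end{verbatim}
Note that the gain blows up as the weighted error $\varphi(t)\|e(t)\|_{{\mathbb{R}}^m}$ approaches~$1$; this is precisely the mechanism that keeps the error strictly inside the funnel.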
\begin{figure}[tb] \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_1.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_2.tikz} \end{center} \end{subfigure} \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_3.tikz} \end{center} \end{subfigure}\quad \begin{subfigure}{.5\linewidth} \begin{center} \input{u_funnel_4.tikz} \end{center} \end{subfigure} \caption{Funnel control laws.} \label{fig:u_funnel} \end{figure} \begin{appendices} \section{Neumann elliptic operators}\label{sec:neum_lapl} We collect some further facts on Neumann elliptic operators as introduced in Proposition~\ref{prop:Aop}. \begin{Prop}\label{prop:Aop_n} If Assumption~\ref{Ass1} holds, then the Neumann elliptic operator $\mathcal{A}$ on $\Omega$ associated to $D$ has the following properties: \begin{enumerate}[a)] \item\label{item:Aop3} there exists $\nu\in(0,1)$ such that $\mathcal{D}(\mathcal{A})\subset C^{0,\nu}(\Omega)$; \item\label{item:Aop4} $\mathcal{A}$ has compact resolvent; \item\label{item:Aop5} there exists a real-valued and monotonically increasing sequence $(\alpha_j)_{j\in{\mathbb{N}}_0}$ such that \begin{enumerate}[(i)] \item $\alpha_0=0$, $\alpha_1>0$ and $\lim_{j\to\infty}\alpha_j=\infty$, and \item the spectrum of $\mathcal{A}$ reads $\sigma(\mathcal{A})=\setdef{-\alpha_j}{j\in{\mathbb{N}}_0}$ \end{enumerate} and an orthonormal basis $(\theta_j)_{j\in{\mathbb{N}}_0}$ of $L^2(\Omega)$, such that \begin{equation}\forall\,x\in\mathcal{D}(\mathcal{A}):\ \mathcal{A} x=-\sum_{j=0}^\infty\alpha_j\scpr{x}{\theta_j}\theta_j,\label{eq:spectr}\end{equation} and the domain of $\mathcal{A}$ reads \begin{equation}\mathcal{D}(\mathcal{A})=\setdef{\sum_{j=0}^\infty \lambda_j \theta_j}{(\lambda_j)_{j\in{\mathbb{N}}_0}\text{ with }\sum_{j=1}^\infty \alpha_j^2 |\lambda_j|^2<\infty}.\label{eq:spectrda}\end{equation} \end{enumerate} \end{Prop} \begin{proof} Statement \ref{item:Aop3}) follows from \cite[Prop.~3.6]{Nitt11}.\\ To prove \ref{item:Aop4}), we first use that the ellipticity condition \eqref{eq:ellcond} implies \begin{equation}\delta\|z\|+\|\mathcal{A} z\|\geq \|z\|_{W^{1,2}}.\label{eq:Acoer}\end{equation} Since $\partial\Omega$ is Lipschitz, $\Omega$ has the cone property \cite[p.~66]{Adam75}, and we can apply the Rellich-Kondrachov Theorem~\cite[Thm.~6.3]{Adam75}, which states that $W^{1,2}(\Omega)$ is compactly embedded in $L^2(\Omega)$. Combining this with \eqref{eq:Acoer}, we obtain that $\mathcal{A}$ has compact resolvent.\\ We show~\ref{item:Aop5}). Since $\mathcal{A}$ has compact resolvent and is self-adjoint by Proposition~\ref{prop:Aop}, we obtain from \cite[Props.~3.2.9 \&~3.2.12]{TucsWeis09} that there exists a~real valued sequence $(\alpha_j)_{j\in{\mathbb{N}}_0}$ with $\lim_{j\to\infty}|\alpha_j|=\infty$ and \eqref{eq:spectr}, and the domain of $\mathcal{A}$ has the representation \eqref{eq:spectrda}. Further taking into account that \[\forall\, z\in \mathcal{D}(\mathcal{A}):\ \scpr{z}{\mathcal{A} z}=-\mathfrak{a}(z,z)\leq0,\] we obtain that $\alpha_j\geq0$ for all $j\in{\mathbb{N}}_0$. Consequently, it is no loss of generality to assume that $(\alpha_j)_{j\in{\mathbb{N}}_0}$ is monotonically increasing. 
It remains to prove that $\alpha_0=0$ and $\alpha_1>0$: On the one hand, we have that the constant function $1_\Omega\in L^2(\Omega)$ satisfies $\mathcal{A} 1_\Omega=0$, since \[\forall\, z\in W^{1,2}(\Omega):\ \scpr{z}{\mathcal{A} 1_\Omega}=-\mathfrak{a}(z,1_\Omega)=-\scpr{\nabla z}{D\nabla 1_\Omega}=0.\] On the other hand, if $z\in\ker \mathcal{A}$, then \[0=\scpr{z}{\mathcal{A} z}=-\mathfrak{a}(z,z)=-\scpr{\nabla z}{D\nabla z},\] and the pointwise positive definiteness of $D$ implies $\nabla z=0$, whence $z$ is a constant function. This gives $\dim \ker \mathcal{A}=1$, by which $\alpha_0=0$ and $\alpha_1>0$. \end{proof} \section{Interpolation spaces} \label{sec:mono_prep_proof} We collect some results on interpolation spaces, which are necessary for the proof of Theorem~\ref{thm:mono_funnel}. For a more general interpolation theory, we refer to \cite{Luna18}. \begin{Def} Let $X,Y$ be Hilbert spaces and let $\alpha\in[0,1]$. Consider the function \[K:(0,\infty)\times (X+Y)\to{\mathbb{R}},\ (t,x)\mapsto \inf_{\substack{a\in X,\ b\in Y\\ x=a+b}}\, \|a\|_X+t\|b\|_Y.\] The {\em interpolation space} $(X,Y)_{\alpha}$ is defined by \[(X,Y)_{\alpha}:=\setdef{x\in X+Y}{\Big(t\mapsto t^{-\alpha} K(t,x)\Big)\in L^2(0,\infty)},\] and it is a Hilbert space with the norm \[\|x\|_{(X,Y)_\alpha}=\|t\mapsto t^{-\alpha} K(t,x)\|_{L^2}.\] \end{Def} Note that interpolation can be performed in a more general fashion for Banach spaces $X$, $Y$. More precisely, we may utilize the $L^p$-norm of the map $t\mapsto t^{-\alpha} K(t,x)$ for some $p\in[1,\infty)$ instead of the $L^2$-norm in the above definition. However, for $p\neq2$ this does not in general lead to Hilbert spaces $(X,Y)_\alpha$, even when~$X$ and~$Y$ are Hilbert spaces. For a~self-adjoint operator $A:\mathcal{D}(A)\subset X\to X$, $X$ a Hilbert space and $n\in{\mathbb{N}}$, we may define the space $X_n:=\mathcal{D}(A^n)$ by $X_0=X$ and $X_{n+1}:=\setdef{x\in X_n}{Ax\in X_n}$. This is a Hilbert space with norm $\|z\|_{X_{n+1}}=\|-\lambda z+Az\|_{X_n}$, where $\lambda\in{\mathbb{C}}$ is in the resolvent set of $A$. Likewise, we introduce $X_{-n}$ as the completion of $X$ with respect to the norm $\|z\|_{X_{-n}}=\|(-\lambda I+A)^{-n}z\|$. Note that $X_{-n}$ is the dual of $X_n$ with respect to the pivot space $X$, cf.~\cite[Sec.~2.10]{TucsWeis09}. Using interpolation theory, we may further introduce the spaces~$X_\alpha$ for any $\alpha\in{\mathbb{R}}$ as follows. \begin{Def}\label{Def:int-space-A} Let $\alpha\in{\mathbb{R}}$, $X$ a Hilbert space and $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint. Further, let $n\in{\mathbb{Z}}$ be such that $\alpha\in[n,n+1)$. The space $X_\alpha$ is defined as the interpolation space \[X_\alpha=(X_{n},X_{n+1})_{\alpha-n}.\] \end{Def} The Reiteration Theorem, see~\cite[Cor.~1.24]{Luna18}, together with~\cite[Prop.~3.8]{Luna18} yields that for all $\alpha\in[0,1]$ and $\alpha_1,\alpha_2\in{\mathbb{R}}$ with $\alpha_1\leq\alpha_2$ we have that \begin{equation}\label{eq:Xalpha-reit} (X_{\alpha_1},X_{\alpha_2})_{\alpha}=X_{\alpha_1+\alpha (\alpha_2-\alpha_1)}. \end{equation} Next we characterize interpolation spaces associated with the Neumann elliptic operator $\mathcal{A}$. \begin{Prop}\label{prop:Aop2_n} Let Assumption~\ref{Ass1} hold and $\mathcal{A}$ be the Neumann elliptic operator on $\Omega$ associated to $D$. Further let $X_\alpha$, $\alpha \in{\mathbb{R}}$, be the corresponding interpolation spaces with, in particular, $X=X_0=L^2(\Omega)$.
Then \[X_{r/2}=W^{r,2}(\Omega)\;\text{ for all $r\in[0,1]$}.\] \end{Prop} \begin{proof} The equation $X_{1/2}=W^{1,2}(\Omega)$ is an immediate consequence of Kato's Second Representation Theorem~\cite[Sec.~VI.2, Thm.~2.23]{Kato80}. For general $r\in[0,1]$ equation~\eqref{eq:Xalpha-reit} implies \[X_{r/2}=(X_{0},X_{1/2})_{r}.\] Now using that $X_0=L^2(\Omega)$ by definition and, as already stated, $X_{1/2}=W^{1,2}(\Omega)$, it follows from~\cite[Thm.~1.35]{Yagi10} that \[(L^2(\Omega),W^{1,2}(\Omega))_{r}=W^{r,2}(\Omega),\] and thus $X_{r/2}=W^{r,2}(\Omega)$. \end{proof} \begin{Rem} \label{rem:X_alpha} In terms of the spectral decomposition \eqref{eq:spectr}, the space $X_\alpha$ has the representation \begin{equation}X_\alpha=\setdef{\sum_{j=0}^\infty \lambda_j \theta_j}{(\lambda_j)_{j\in{\mathbb{N}}_0}\text{ with }\sum_{j=1}^\infty \alpha_j^{2\alpha} |\lambda_j|^2<\infty}.\label{eq:Xrspec}\end{equation} This follows from a~combination of~\cite[Thm.~4.33]{Luna18} with~\cite[Thm.~4.36]{Luna18}. \end{Rem} \section{Abstract Cauchy problems and regularity} \label{sec:mono_prep_proof2} We consider mild solutions of certain abstract Cauchy problems and the concept of admissible control operators. This notion is well-known in infinite-dimensional linear systems theory with unbounded control and observation operators and we refer to~\cite{TucsWeis09} for further details. Let $X$ be a real Hilbert space and recall that a semigroup $(\T_t)_{t\geq0}$ on $X$ is an $\mathcal{L}(X,X)$-valued map satisfying ${\mathbb{T}}_0=I_{X}$ and ${\mathbb{T}}_{t+s}={\mathbb{T}}_t {\mathbb{T}}_s$, $s,t\geq0$, where $I_{X}$ denotes the identity operator, and $t\mapsto {\mathbb{T}}_t x$ is continuous for every $x\in X$. Semigroups are characterized by their generator~$A$, which is a (not necessarily bounded) operator on~$X$. If $A:\mathcal{D}(A)\subset X\to X$ is self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, then it generates a~contractive, analytic semigroup $(\T_t)_{t\geq0}$ on $X$, cf.~\cite[Thm.~4.2]{ArenElst12}. If additionally there exists $\omega_0>0$ such that $\scpr{x}{Ax}\leq-\omega_0 \|x\|^2$ for all $x\in\mathcal{D}(A)$, then the semigroup $(\T_t)_{t\geq0}$ generated by $A$ satisfies $\|{\mathbb{T}}_t\|\leq \mathrm{e}^{-\omega_0 t}$ for all $t\geq0$; in particular, the \emph{growth bound} of $(\T_t)_{t\geq0}$ is at most $-\omega_0$. We can further conclude from~\cite[Thm.~6.13\,(b)]{Pazy83} that, for all $\alpha\in{\mathbb{R}}$, $(\T_t)_{t\geq0}$ restricts (resp.\ extends) to an analytic semigroup $(({\mathbb{T}}|_{\alpha})_t)_{t\ge0}$ on $X_\alpha$ with the same growth bound as $(\T_t)_{t\geq0}$. Furthermore, we have $\im {\mathbb{T}}_t\subset X_r$ for all $t>0$ and $r\in{\mathbb{R}}$, see~\cite[Thm.~6.13(a)]{Pazy83}. In the following we present an estimate for the corresponding operator norm. \begin{Lem}\label{lem:A_alpha} Assume that $A:\mathcal{D}(A)\subset X\to X$, $X$ a Hilbert space, is self-adjoint and there exists $\omega_0>0$ with $\scpr{x}{Ax}\leq-\omega_0 \|x\|^2$ for all $x\in\mathcal{D}(A)$.
Then there exist $M,\omega>0$ such that the semigroup $(\T_t)_{t\geq0}$ generated by $A$ satisfies \[ \forall\, \alpha\in[0,2]\ \forall\, t>0:\ \|{\mathbb{T}}_t\|_{\mathcal{L}(X,X_\alpha)}\leq M(1+t^{-\alpha})\mathrm{e}^{-\omega t}.\] Thus, for each $\alpha\in[0,2]$ there exists $K>0$ such that \[\sup_{t\in[0,\infty)}t^\alpha\|{\mathbb{T}}_t\|_{\mathcal{L}(X,X_\alpha)}<K.\] \end{Lem} \begin{proof} Since $A$ with the above properties generates an exponentially stable analytic semigroup $(\T_t)_{t\geq0}$, the cases $\alpha\in[0,1]$ and $\alpha=2$ follow from~\cite[Cor.~3.10.8~\&~Lem.~3.10.9]{Staf05}. The result for $\alpha\in[1,2]$ is a consequence of~\cite[Lem~3.9.8]{Staf05} and interpolation between $X_1$ and $X_2$, cf.\ Appendix~\ref{sec:mono_prep_proof}. \end{proof} Next we consider the abstract Cauchy problem with source term. \begin{Def}\label{Def:Cauchy} Let $X$ be a Hilbert space, $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, $T\in(0,\infty]$, and $\alpha\in[0,1]$. Let $(\T_t)_{t\geq0}$ be the semigroup on $X$ generated by $A$, and let $B\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$. For $x_0\in X$, $p\in[1,\infty]$, $f\in L^p_{\loc}(0,T;X)$ and $u\in L^p_{\loc}(0,T;{\mathbb{R}}^m)$, we call~$x:[0,T)\to X$ a \emph{mild solution} of \begin{equation}\label{eq:abstract_cauchy} \begin{aligned} \dot{x}(t)=Ax(t)+f(t)+Bu(t),\quad x(0)=x_0 \end{aligned} \end{equation} on $[0,T)$, if it satisfies \begin{equation}\label{eq:mild_solution} \forall\, t\in[0,T):\ x(t)={\mathbb{T}}_tx_0+\int_0^t{\mathbb{T}}_{t-s}f(s)\ds{s}+\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s}. \end{equation} We further call $x:[0,T)\to X$ a \emph{strong solution} of \eqref{eq:abstract_cauchy} on $[0,T)$, if $x$ in~\eqref{eq:mild_solution} satisfies $x\in C([0,T);X)\cap W^{1,p}_{\rm loc}(0,T;X_{-1})$. \end{Def} Definition~\ref{Def:Cauchy} requires that the integral $\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s}$ is in $X$, whilst the integrand is not necessarily in $X$. This motivates the definition of admissibility, which is now introduced for self-adjoint $A$. Note that admissibility can also be defined for arbitrary generators of semigroups, see~\cite{TucsWeis09}. \begin{Def}\label{Def:Adm} Let $X$ be a Hilbert space, $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, $T\in(0,\infty]$, $\alpha\in[0,1]$ and $p\in[1,\infty]$. Let $(\T_t)_{t\geq0}$ be the semigroup on $X$ generated by $A$, and let $B\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$. Then $B$ is called an {\em $L^p$-admissible (control operator) for $(\T_t)_{t\geq0}$}, if for some (and hence any) $t> 0$ we have $$\forall\, u\in L^p(0,t;{\mathbb{R}}^m):\ \Phi_{t}u :=\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s} \in X.$$ By a closed graph theorem argument this implies that $\Phi_t\in \mathcal{L}(L^p(0,t;{\mathbb{R}}^m),X)$ for all $t> 0$. We call $B$ an {\em infinite-time $L^p$-admissible (control operator) for $(\T_t)_{t\geq0}$}, if \[ \sup_{t>0} \|\Phi_t\| < \infty. \] \end{Def} In the following we show that for $p\ge 2$ and $\alpha\le 1/2$ any~$B$ is admissible and the mild solution of the abstract Cauchy problem is indeed a strong solution. 
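As a quick sanity check of the scaling in Lemma~\ref{lem:A_alpha}, one may test the bound numerically for a diagonal generator; the following snippet is purely illustrative (the spectrum is an arbitrary choice, not an object from this paper) and uses the elementary fact that $\sup_{\beta>0}\beta^\alpha\mathrm{e}^{-\beta t}=(\alpha/(\mathrm{e}t))^\alpha$.
\begin{verbatim}
import numpy as np

# For A = diag(-beta_j) on l^2, the semigroup is T_t x = (exp(-beta_j t) x_j)_j
# and ||T_t||_{L(X, X_alpha)} = sup_j beta_j^alpha * exp(-beta_j * t),
# which is bounded by (alpha/(e*t))^alpha, i.e., a t^{-alpha} singularity.
beta = np.arange(1.0, 2001.0)          # eigenvalues -beta_j with beta_j >= 1
for alpha in (0.25, 0.5, 1.0):
    for t in (1e-3, 1e-2, 1e-1, 1.0):
        norm_t = np.max(beta ** alpha * np.exp(-beta * t))
        bound = (alpha / (np.e * t)) ** alpha
        assert norm_t <= bound + 1e-12
\end{verbatim}
With this quantitative picture in mind, we turn to admissibility and strong solutions.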
\begin{Lem}\label{lem:abstract_solution} Let $X$ be a Hilbert space, $A:\mathcal{D}(A)\subset X\to X$ be self-adjoint with $\scpr{x}{Ax}\leq0$ for all $x\in\mathcal{D}(A)$, $B\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$ for some $\alpha\in[0,1/2]$, and $(\T_t)_{t\geq0}$ be the analytic semigroup generated by $A$. Then for all $p\in[2,\infty]$ we have that~$B$ is $L^p$-admissible for $(\T_t)_{t\geq0}$. Furthermore, for all $x_0\in X$, $T\in(0,\infty]$, $f\in L^p_{\loc}(0,T;X)$ and $u\in L^p_{\loc}(0,T;{\mathbb{R}}^m)$, the function~$x$ in~\eqref{eq:mild_solution} is a strong solution of~\eqref{eq:abstract_cauchy} on $[0,T)$. \end{Lem} \begin{proof} For the case $p=2$, there exists a unique strong solution in $X_{-1}$ (that is, we replace $X$ by $X_{-1}$ and $X_{-1}$ by $X_{-2}$ in the definition) given by~\eqref{eq:mild_solution} and at most one strong solution in $X$, see for instance~\cite[Thm.~3.8.2~(i)~\&~(ii)]{Staf05}, so we only need to check that all the elements are in the correct spaces. Since $A$ is self-adjoint, the semigroup generated by $A$ is self-adjoint as well. Further, by combining~\cite[Prop.~5.1.3]{TucsWeis09} with~\cite[Thm.~4.4.3]{TucsWeis09}, we find that~$B$ is an $L^2$-admissible control operator for~$(\T_t)_{t\geq0}$. Moreover, by~\cite[Prop.~4.2.5]{TucsWeis09} we have that \[\left(t\mapsto{\mathbb{T}}_t x_0+\int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}Bu(s)\ds{s}\right)\in C([0,T);X)\cap W^{1,2}_{\rm loc}(0,T;X_{-1})\] and from \cite[Thm.~3.8.2~(iv)]{Staf05}, \[\left(t\mapsto\int_0^t{\mathbb{T}}_{t-s}f(s)\ds{s}\right)\in C([0,T);X)\cap W^{1,2}_{\rm loc}(0,T;X_{-1}),\] whence $x\in C([0,T);X)\cap W^{1,2}_{\rm loc}(0,T;X_{-1})$, which proves that~$x$ is a strong solution of~\eqref{eq:abstract_cauchy} on $[0,T)$. Since $B$ is $L^2$-admissible, it follows from the nesting property of $L^p$ on finite intervals that~$B$ is an $L^p$-admissible control operator for~$(\T_t)_{t\geq0}$ for all $p\in[2,\infty]$. Furthermore, for $p>2$, set $\tilde{f}\coloneqq f+Bu$ and apply~\cite[Thm.~3.10.10]{Staf05} with $\tilde{f}\in L^\infty_{\rm loc}(0,T;X_{-\alpha})$ to conclude that~$x$ is a strong solution. \end{proof} Next we show the regularity properties of the solution of~\eqref{eq:abstract_cauchy}, if $A = \mathcal{A}$ and $B = \mathcal{B}$ are as in the model~\eqref{eq:FHN_model}. Note that this result also holds when considering some $t_0\ge 0$, $T\in(t_0,\infty]$, and the initial condition $x(t_0)=x_0$ (instead of $x(0)=x_0$) by some straightforward modifications, cf.~\cite[Sec.~3.8]{Staf05}. \begin{Prop}\label{prop:hoelder} Let Assumption~\ref{Ass1} hold, $\mathcal{A}$ be the Neumann elliptic operator on $\Omega$ associated to $D$, $T\in(0,\infty]$ and $c>0$. Further let $X=X_0=L^2(\Omega)$ and $X_r$, $r\in{\mathbb{R}}$, be the interpolation spaces corresponding to~$\mathcal{A}$ according to Definition~\ref{Def:int-space-A}. Define $\mathcal{A}_0\coloneqq \mathcal{A}-cI$ with $\mathcal{D}(\mathcal{A}_0)=\mathcal{D}(\mathcal{A})$ and consider $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,X_{-\alpha})$ for $\alpha\in[0,1/2]$, $u\in L_{\rm loc}^2(0,T;{\mathbb{R}}^m)\cap L^\infty(\delta,T;{\mathbb{R}}^m)$ and $f\in L_{\rm loc}^2(0,T;X)\cap L^\infty(\delta,T;X)$ for all $\delta>0$.
Then for all $x_0\in X$ and all $\delta>0$ the mild solution of~\eqref{eq:abstract_cauchy} (with $A=\mathcal{A}_0$ and $B=\mathcal{B}$) on~$[0,T)$, given by~$x$ as in~\eqref{eq:mild_solution}, satisfies \begin{enumerate}[(i)] \item if $\alpha=0$, then \[\forall\, \lambda\in(0,1):\ x\in BC([0,T);X)\cap C^{0,\lambda}([\delta,T);X);\] \item if $\alpha\in(0,1/2)$, then \[x\in BC([0,T);X)\cap C^{0,1-\alpha}([\delta,T);X)\cap C^{0,1-2\alpha}([\delta,T);X_{\alpha});\] \item if $\alpha=1/2$, then \[x\in BC([0,T);X)\cap C^{0,1/2}([\delta,T);X)\cap BUC([\delta,T);X_{1/2}).\] \end{enumerate} \end{Prop} \begin{proof} First observe that by Proposition~\ref{prop:Aop} the assumptions of Lemma~\ref{lem:abstract_solution} are satisfied with $p=2$, hence~$x$ as in~\eqref{eq:mild_solution} is a strong solution of~\eqref{eq:abstract_cauchy} on $[0,T)$ in the sense of Definition~\ref{Def:Cauchy}. In the following we restrict ourselves to the case $T=\infty$, and the assertions for $T<\infty$ follow from these arguments by considering the restrictions to $[0,T)$. Define, for $t\ge 0$, the functions \begin{align} x_h(t)\coloneqq {\mathbb{T}}_tx_0,\quad x_f(t)\coloneqq \int_0^t{\mathbb{T}}_{t-s}f(s)\ds{s},\quad x_u(t)\coloneqq \int_0^t({\mathbb{T}}|_{-\alpha})_{t-s}\mathcal{B} u(s)\ds{s},\label{eq:xhxfxu} \end{align} so that $x=x_h+x_f+x_u$. \emph{Step 1}: We show that $x\in BC([0,\infty);X)$. The definition of $\mathcal{A}$ in Proposition~\ref{prop:Aop} implies that $\langle z,\mathcal{A} z\rangle\leq 0$ for all $z\in \mathcal{D}(\mathcal{A})$, and hence $\langle z,\mathcal{A}_0 z\rangle\leq -c\|z\|^2$ for all $z\in \mathcal{D}(\mathcal{A}_0)$. The self-adjointness of $\mathcal{A}$ moreover implies that $\mathcal{A}_0$ is self-adjoint, whence~\cite[Thm.~4.2]{ArenElst12} gives that $\mathcal{A}_0$ generates an analytic, contractive semigroup $(\T_t)_{t\geq0}$ on $X$, which satisfies \begin{equation}\label{eq:est-sg-exp} \forall\, t\ge 0\ \forall\, x\in X:\ \|{\mathbb{T}}_t x\|\leq \mathrm{e}^{-ct}\|x\|. \end{equation} Since, by Lemma~\ref{lem:abstract_solution}, $x$ is a strong solution, we have $x\in C([0,\infty);X)\cap W^{1,2}_{\rm loc}(0,\infty;X_{-1})$. Further observe that $\mathcal{B}$ is $L^\infty$-admissible by Lemma~\ref{lem:abstract_solution}. Then it follows from~\eqref{eq:est-sg-exp} and~\cite[Lem.~2.9\,(i)]{JacoNabi18} that~$\mathcal{B}$ is infinite-time $L^\infty$-admissible, which implies that for $x_u$ as in~\eqref{eq:xhxfxu} we have \[ \|x_u\|_\infty \le \left( \sup_{t>0} \|\Phi_t\|\right) \|u\|_\infty < \infty, \] thus $x_u\in BC([0,\infty);X)$. A direct calculation using~\eqref{eq:est-sg-exp} further shows that $x_h,x_f\in BC([0,\infty);X)$, whence $x\in BC([0,\infty);X)$. \emph{Step 2}: We show~(i). Let $\delta>0$ and set $\tilde{f}:=f+Bu\in L^2_{\rm loc}(0,\infty;X)\cap L^\infty(\delta,\infty;X)$, then we may infer from~\cite[Props.~4.2.3~\&~4.4.1\,(i)]{Luna95} that \[\forall\,\lambda\in(0,1):\ x\in C^{0,\lambda}([\delta,\infty);X).\] From this together with Step~1 we may infer~(i). \emph{Step 3}: We show~(ii).
Let $\delta>0$; then it follows from~\cite[Props.~4.2.3~\&~4.4.1\,(i)]{Luna95} together with $x_0\in X$ and $f\in L^\infty(\delta,\infty;X)$, that \[\begin{aligned} x_h+x_f&\in C^{0,1-\alpha}([\delta,\infty);X_{\alpha})\cap C^{1}([\delta,\infty);X)\\ &\subset C^{0,1-2\alpha}([\delta,\infty);X_{\alpha})\cap C^{0,1-\alpha}([\delta,\infty);X).\end{aligned}\] Since we have shown in Step~1 that $x\in BC([0,\infty),X)$, it remains to show that $x_u\in C^{0,1-2\alpha}([\delta,\infty);X_{\alpha})\cap C^{0,1-\alpha}([\delta,\infty);X)$.\\ To this end, consider the space $Y:=X_{-\alpha}$. Then $(\T_t)_{t\geq0}$ extends to a~semigroup~$\big(({\mathbb{T}}|_{-\alpha})_{t}\big)_{t\ge 0}$ on $Y$ with generator $\mathcal{A}_{0,\alpha}:\mathcal{D}(\mathcal{A}_{0,\alpha})=X_{-\alpha+1}\subset X_{-\alpha}=Y$, cf.~\cite[p.~50]{Luna95}. Now, for $r\in{\mathbb{R}}$, consider the interpolation spaces $Y_r$ as in Definition~\ref{Def:int-space-A} by means of the operator $\mathcal{A}_{0,\alpha}$. Then it is straightforward to show that $Y_n = \mathcal{D}(\mathcal{A}_{0,\alpha}^n) = X_{n-\alpha}$ for all $n\in{\mathbb{N}}$ using the representation~\eqref{eq:Xrspec}. Similarly, we may show that $Y_n = X_{n-\alpha}$ for all $n\in{\mathbb{Z}}$. Then the Reiteration Theorem, see~\cite[Cor.~1.24]{Luna18} and also \eqref{eq:Xalpha-reit}, gives \[\forall\, r\in{\mathbb{R}}\,:\quad Y_r=X_{r-\alpha}.\] Since $\mathcal{B}\in\mathcal{L}({\mathbb{R}}^m,Y)$, \cite[Props.~4.2.3~\&~4.4.1\,(i)]{Luna95} now imply \[\begin{aligned} x_u &\in C^{0,1-2\alpha}([\delta,\infty);Y_{2\alpha})\cap C^{0,1-\alpha}([\delta,\infty);Y_{\alpha})\\ &= C^{0,1-2\alpha}([\delta,\infty);X_{\alpha})\cap C^{0,1-\alpha}([\delta,\infty);X), \end{aligned}\] which completes the proof of (ii). \emph{Step 4}: We show~(iii). The proof of $x\in C^{0,1/2}([\delta,\infty);X)$ is analogous to that of $x\in C^{0,1-\alpha}([\delta,\infty);X)$ in Step~3. Boundedness and continuity of $x$ on $[0,\infty)$ was proved in Step~1. Hence, it remains to show that $x$ is uniformly continuous: Again consider the additive decomposition of $x$ into $x_h$, $x_f$ and $x_u$ as in~\eqref{eq:xhxfxu}. Similar to Step~3 it can be shown that $x_h,x_f\in C^{0,1/2}([\delta,\infty);X_{1/2})$, whence $x_h,x_f\in BUC([\delta,\infty);X_{1/2})$. It remains to show that $x_u\in BUC([\delta,\infty);X_{1/2})$. Note that Lemma \ref{lem:abstract_solution} gives that $x_u(\delta)\in X$. Then $x_u$ solves $\dot z(t) = \mathcal{A}_0 z(t) + \mathcal{B} u(t)$ with $z(\delta) = x_u(\delta)$ and hence, for all $t\geq\delta$ we have \begin{equation}\label{eq:x-T-delta} \begin{aligned} x_u(t)=&\,{\mathbb{T}}_{t-\delta}x_u(\delta)+\underbrace{\int_\delta^t({\mathbb{T}}|_{-\alpha})_{t-s}\mathcal{B} u(s)\ds{s}}_{=:x_u^\delta(t)}. \end{aligned} \end{equation} Since $x_u(\delta)\in X$, it remains to show that $x_u^\delta\in BUC([\delta,\infty);X_{1/2})$. We obtain from Proposition~\ref{prop:Aop_n}\,\ref{item:Aop5}) that $\mathcal{A}_0$ has an eigendecomposition of type~\eqref{eq:spectr} with eigenvalues $(-\beta_j)_{j\in{\mathbb{N}}_0}$, $\beta_j\coloneqq \alpha_j+c$, and eigenfunctions $(\theta_j)_{j\in{\mathbb{N}}_0}$. Moreover, there exist $b_i\in X_{-1/2}$ for $i=1,\dots,m$ such that $\mathcal{B} \xi = \sum_{i=1}^m b_i \cdot \xi_i$ for all $\xi\in{\mathbb{R}}^m$.
Therefore, \begin{align*} x^\delta_u(t)&=\int_\delta^t\sum_{j=0}^\infty \mathrm{e}^{-\beta_j(t-\tau)}\theta_j \sum_{i=1}^m\scpr{b_i \cdot u_i(\tau)}{\theta_j}\ds{\tau}\\ &=\int_\delta^t\sum_{j=0}^\infty \mathrm{e}^{-\beta_j(t-\tau)}\theta_j \sum_{i=1}^m u_i(\tau) \scpr{b_i}{\theta_j}\ds{\tau}, \end{align*} where the last equality holds since $u_i(\tau)\in{\mathbb{R}}$ is a scalar and can be moved out of the inner product. By considering each of the summands in the sum over $i=1,\dots,m$, we can assume without loss of generality that $m=1$ and $b\coloneqq b_1$, so that \[x_u^\delta(t)=\int_\delta^t\sum_{j=0}^\infty \mathrm{e}^{-\beta_j(t-\tau)} u(\tau) \scpr{b}{\theta_j} \theta_j \ds{\tau}.\] Define $b^j \coloneqq \scpr{b}{\theta_j}$ for $j\in{\mathbb{N}}_0$. Since $b\in X_{-1/2}$, the series $\sum_{j=0}^\infty (b^j)^2/\beta_j$ converges, i.e., \begin{equation}S\coloneqq \sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}<\infty.\label{eq:Sdef}\end{equation} Recall that the spaces $X_\alpha$, $\alpha\in{\mathbb{R}}$, are defined by using $\lambda\in{\mathbb{C}}$ belonging to the resolvent set of $\mathcal{A}$, and they are independent of the choice of $\lambda$. Since $c>0$ in the statement of the proposition is in the resolvent set of $\mathcal{A}$, the spaces $X_\alpha$ coincide for $\mathcal{A}$ and $\mathcal{A}_0=\mathcal{A}-cI$.\\ Using the diagonal representation from Remark~\ref{rem:X_alpha} and \cite[Prop.~3.4.8]{TucsWeis09}, we may infer that $x_u^\delta(t)\in X_{1/2}$ for a.e. $t\geq\delta$, namely, \begin{align*} \|x_u^\delta(t)\|_{X_{1/2}}^2&\leq\sum_{j=0}^\infty\beta_j(b^j)^2\|u\|^2_{L^\infty(\delta,\infty)}\left(\int_\delta^t\mathrm{e}^{-\beta_j(t-s)}\ds{s}\right)^2\\ &=\|u\|^2_{L^\infty(\delta,\infty)}\sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}\left(1-\mathrm{e}^{-\beta_j(t-\delta)}\right)^2\\ &\leq\|u\|^2_{L^\infty(\delta,\infty)}\sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}<\infty. \end{align*} Hence, \begin{equation}\label{eq:xudelta_X12} \|x_u^\delta(t)\|_{X_{1/2}}\leq \|u\|_{L^\infty(\delta,\infty)}\sqrt{S}. \end{equation} Now let $t>s>\delta$ and $\sigma>0$ such that $t-s<\sigma$. By dominated convergence \cite[Thm.~II.2.3]{Dies77}, summation and integration can be interchanged, so that \begin{align*} \|x_u^\delta&(t)-x_u^\delta(s)\|_{X_{1/2}}^2\\ \leq&\,\|u\|_{L^\infty(\delta,\infty)}^2\sum_{j=0}^\infty \beta_j (b^j)^2 \left(\int_\delta^s\mathrm{e}^{-\beta_j(s-\tau)}-\mathrm{e}^{-\beta_j(t-\tau)}\ds{\tau}+\int_s^t\mathrm{e}^{-\beta_j(t-\tau)}\ds{\tau}\right)^2\\ \leq&\,4\|u\|_{L^\infty(\delta,\infty)}^2\sum_{j=0}^\infty \frac{(b^j)^2}{\beta_j} \left(1-\mathrm{e}^{-\beta_j (t-s)}\right)^2\\ \le&\, 4\|u\|_{L^\infty(\delta,\infty)}^2\sum_{j=0}^\infty \frac{(b^j)^2}{\beta_j} \left(1-\mathrm{e}^{-\beta_j \sigma}\right)^2. \end{align*} We can conclude from \eqref{eq:Sdef} that the series defining $F:(0,\infty)\to(0,S)$, \[F(\sigma)\coloneqq \sum_{j=0}^\infty\frac{(b^j)^2}{\beta_j}(1-\mathrm{e}^{-\beta_j\sigma})^2,\] converges uniformly, so that $F$ is strictly increasing, continuous and surjective; in particular, $F$ has an inverse and $F(\sigma)\to0$ as $\sigma\to0$. The function $x_u^\delta$ is thus uniformly continuous on~$[\delta,\infty)$ and by~\eqref{eq:xudelta_X12} we obtain boundedness, i.e., $x_u^\delta\in BUC([\delta,\infty);X_{1/2})$. \end{proof} Finally we present a consequence of the Banach-Alaoglu Theorem, see e.g.~\cite[Thm.~3.15]{Rudi91}. \begin{Lem}\label{lem:weak_convergence} Let $T>0$ and $Z$ be a reflexive and separable Banach space.
Then \begin{enumerate}[(i)] \item every bounded sequence $(w_n)_{n\in{\mathbb{N}}}$ in $L^\infty(0,T;Z)$ has a weak$^\star$ convergent subsequence in $L^\infty(0,T;Z)$;\label{it:weak_star} \item every bounded sequence $(w_n)_{n\in{\mathbb{N}}}$ in $L^p(0,T;Z)$ with $p\in(1,\infty)$ has a weakly convergent subsequence in $L^p(0,T;Z)$.\label{it:weak} \end{enumerate} \end{Lem} \begin{proof} Note first that $Z'$ is separable, since its dual $(Z')'=Z''=Z$ is separable. Let $p\in[1,\infty)$. Then $W:=L^p(0,T;Z')$ is a separable Banach space, see~\cite[Sec.~IV.1]{Dies77}. Since $Z$ is reflexive, by \cite[Cor.~III.4]{Dies77} it has the Radon-Nikodým property. Then it follows from~\cite[Thm.~IV.1]{Dies77} that $W'=L^q(0,T;Z)$ is the dual of $W$, where $q\in(1,\infty]$ is such that $p^{-1}+q^{-1}=1$. Assertion~\eqref{it:weak_star} now follows from~\cite[Thm.~3.17]{Rudi91} with $p=1$ and $q=\infty$. On the other hand, statement~\eqref{it:weak} follows from~\cite[Thm.~V.2.1]{Yosi80} by further using that~$W$ is reflexive for $p\in(1,\infty)$. \end{proof} \end{appendices} \section*{Acknowledgments} The authors would like to thank Felix L.\ Schwenninger (U Twente) and Mark R.\ Opmeer (U Bath) for helpful comments on maximal regularity. \bibliographystyle{elsarticle-harv}
\section{Concluding remarks}\label{sec: conclusion} In this work we provided a proof of the spread conjecture for sufficiently large $n$, a proof of an asymptotic version of the bipartite spread conjecture, and an infinite class of counterexamples that illustrates that our asymptotic version of this conjecture is the strongest result possible. There are a number of interesting future avenues of research, some of which we briefly describe below. These avenues consist primarily of considering the spread of more general classes of graphs (directed graphs, graphs with loops) or considering more general objective functions. Our proof of the spread conjecture for sufficiently large $n$ immediately implies a nearly-tight estimate for the maximum spread of the adjacency matrix of an undirected graph with loops; such adjacency matrices are exactly the symmetric $0-1$ matrices. Given a directed graph $G = (V,\mathcal{A})$, the corresponding adjacency matrix $A$ has entry $A_{i,j} = 1$ if the arc $(i,j) \in \mathcal{A}$, and is zero otherwise. In this case, $A$ is not necessarily symmetric, and may have complex eigenvalues. One interesting question is what digraph of order $n$ maximizes the spread of its adjacency matrix, where the spread is defined as the diameter of the spectrum. Is this more general problem also maximized by the same set of graphs as in the undirected case? This problem, for either loop-less directed graphs or directed graphs with loops, is an interesting question, and the latter is equivalent to asking the above question for the set of all $0-1$ matrices. Another approach is to restrict ourselves to undirected graphs or undirected graphs with loops, and further consider the competing interests of simultaneously producing a graph with both $\lambda_1$ and $-\lambda_n$ large, and understanding the trade-off between these two goals. To this end, we propose considering the class of objective functions $$f(G; \beta) = \beta \lambda_1(G) - (1-\beta) \lambda_n(G), \qquad \beta \in [0,1].$$ When $\beta = 0$, this function is maximized by the complete bipartite graph $K_{\lceil n /2 \rceil, \lfloor n/2 \rfloor}$ and when $\beta = 1$, this function is maximized by the complete graph $K_n$. This paper treats the specific case of $\beta = 1/2$, but none of the mathematical techniques used in this work rely on this restriction. In fact, the structural graph-theoretic results of Section \ref{sec:graphs}, suitably modified for arbitrary $\beta$, still hold (see the thesis \cite[Section 3.3.1]{urschel2021graphs} for this general case). Understanding the behavior of the optimum between these three well-studied choices of $\beta = 0,1/2,1$ is an interesting future avenue of research; a small numerical sketch of this trade-off is given below. More generally, any linear combination of graph eigenvalues could be optimized over any family of graphs. Many sporadic examples of this problem have been studied; Nikiforov \cite{Nikiforov} proposed a general framework for it and proved conditions under which the problem is well-behaved. We conclude with some specific instances of the problem that we think are most interesting. Given a graph $F$, maximizing $\lambda_1$ over the family of $n$-vertex $F$-free graphs can be thought of as a spectral version of Tur\'an's problem. Many papers have been written about this problem, which was proposed in full generality in \cite{Nikiforov3}. We remark that these results can often strengthen classical results in extremal graph theory.
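Returning to the trade-off function $f(G;\beta)$ defined above, the following small Python sketch (our own, added purely for illustration; it assumes the \texttt{numpy} library, and the values of $n$ are arbitrary) evaluates $f(G;\beta)$ at $\beta=1/2$ over the join family $G(n,k)$ and reports the maximizing $k$. We stress that the search is restricted to this family; in particular, for $\beta = 0$ the true optimizer $K_{\lceil n/2\rceil,\lfloor n/2\rfloor}$ does not belong to it.
\begin{verbatim}
import numpy as np

def adjacency_of_join(n, k):
    # G(n,k): a clique on the first k vertices, joined to an
    # independent set on the remaining n - k vertices.
    A = np.zeros((n, n))
    A[:k, :] = A[:, :k] = 1.0
    np.fill_diagonal(A, 0.0)
    return A

def f(A, beta):
    lam = np.linalg.eigvalsh(A)    # eigenvalues in ascending order
    return beta * lam[-1] - (1.0 - beta) * lam[0]

for n in (15, 30, 60):
    k_best = max(range(1, n + 1),
                 key=lambda k: f(adjacency_of_join(n, k), 0.5))
    print(n, k_best, 2 * n // 3)   # k_best equals floor(2n/3) here
\end{verbatim}
In each case the maximizing $k$ equals $\lfloor 2n/3\rfloor$, in line with Conjecture \ref{conj:spread}.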
Maximizing $\lambda_1 + \lambda_n$ over the family of triangle-free graphs has been considered in \cite{Brandt} and is related to an old conjecture of Erd\H{o}s on how many edges must be removed from a triangle-free graph to make it bipartite \cite{erdos}. In general, it would be interesting to maximize $\lambda_1 + \lambda_n$ over the family of $K_r$-free graphs. When a graph is regular, the difference between $\lambda_1$ and $\lambda_2$ (the spectral gap) is related to the graph's expansion properties. Aldous and Fill \cite{AldousFill} asked to minimize $\lambda_1 - \lambda_2$ over the family of $n$-vertex connected regular graphs. Partial results were given in \cite{quartic1, quartic2, Guiduli1, Guiduli2}. A nonregular version of the problem was proposed by Stani\'c \cite{Stanic}, who asked to minimize $\lambda_1-\lambda_2$ over connected $n$-vertex graphs. Finally, maximizing $\lambda_3$ or $\lambda_4$ over the family of $n$-vertex graphs seems to be a surprisingly difficult question and even the asymptotics are not known (see \cite{Nikiforov2}). \section{Introduction} The spread $s(M)$ of an arbitrary $n\times n$ complex matrix $M$ is the diameter of its spectrum; that is, \[s(M):= \max_{i,j} |\lambda_i-\lambda_j|,\] where the maximum is taken over all pairs of eigenvalues of $M$. This quantity has been well studied in general; see \cite{deutsch1978spread,johnson1985lower,mirsky1956spread,wu2012upper} for details and additional references. Most notably, Johnson, Kumar, and Wolkowicz produced the lower bound $$ s(M) \ge \textstyle{ \big| \sum_{i \ne j} m_{i,j} \big|/(n-1)}$$ for normal matrices $M = (m_{i,j})$ \cite[Theorem 2.1]{johnson1985lower}, and Mirsky produced the upper bound $$ s(M) \le \sqrt{\textstyle{2 \sum_{i,j} |m_{i,j}|^2 - (2/n)\big| \sum_{i} m_{i,i} \big|^2}}$$ for any $n$ by $n$ matrix $M$, which is tight for normal matrices that have $n-2$ eigenvalues all equal to the arithmetic mean of the remaining two \cite[Theorem 2]{mirsky1956spread}. The spread of a matrix has also received interest in certain particular cases. Consider a simple undirected graph $G = (V(G),E(G))$ of order $n$. The adjacency matrix $A$ of a graph $G$ is the $n \times n$ matrix whose rows and columns are indexed by the vertices of $G$, with entries satisfying $A_{u,v} = 1$ if $\{u,v\} \in E(G)$ and $A_{u,v} = 0$ otherwise. This matrix is real and symmetric, and so its eigenvalues are real and can be ordered $\lambda_1(G) \geq \lambda_2(G)\geq \cdots \geq \lambda_n(G)$. For the adjacency matrix $A$ of a graph $G$, the spread is simply the distance between $\lambda_1(G)$ and $\lambda_n(G)$, and is denoted by $$s(G) := \lambda_1(G) - \lambda_n(G).$$ In this instance, $s(G)$ is referred to as the \emph{spread of the graph}. In \cite{gregory2001spread}, the authors investigated a number of properties regarding the spread of a graph, determining upper and lower bounds on $s(G)$. Furthermore, they made two key conjectures. Let us denote the maximum spread over all $n$-vertex graphs by $s(n)$, the maximum spread over all $n$-vertex graphs of size $e$ by $s(n,e)$, and the maximum spread over all $n$-vertex bipartite graphs of size $e$ by $s_b(n,e)$. Let $K_k$ be the clique of order $k$ and $G(n,k) := K_k \vee \overline{K_{n-k}}$ be the join of the clique $K_k$ and the independent set $\overline{K_{n-k}}$. We say a graph is \emph{spread-extremal} if it has spread $s(n)$. The conjectures addressed in this article are as follows.
\begin{conjecture}[\cite{gregory2001spread}, Conjecture 1.3]\label{conj:spread} For any positive integer $n$, the graph of order $n$ with maximum spread is $G(n,\lfloor 2n/3 \rfloor)$; that is, $s(n)$ is attained only by $G(n,\lfloor 2n/3 \rfloor)$. \end{conjecture} \begin{conjecture}[\cite{gregory2001spread}, Conjecture 1.4]\label{conj:bispread} If $G$ is a graph with $n$ vertices and $e$ edges attaining the maximum spread $s(n,e)$, and if $e\leq \lfloor n^2/4\rfloor$, then $G$ must be bipartite. That is, $s_b(n,e) = s(n,e)$ for all $e \le \lfloor n^2/4\rfloor$. \end{conjecture} Conjecture \ref{conj:spread} is referred to as the Spread Conjecture, and Conjecture \ref{conj:bispread} is referred to as the Bipartite Spread Conjecture. Much of what is known about Conjecture \ref{conj:spread} is contained in \cite{gregory2001spread}, but the reader may also see \cite{StanicBook} for a description of the problem and references to other work on it. In this paper, we resolve both conjectures. We prove the Spread Conjecture for all $n$ sufficiently large, prove an asymptotic version of the Bipartite Spread Conjecture, and provide an infinite family of counterexamples to illustrate that our asymptotic version is as tight as possible, up to lower order error terms. These results are given by Theorems \ref{thm: spread maximum graphs} and \ref{thm: bipartite spread theorem}. \begin{theorem}\label{thm: spread maximum graphs} There exists a constant $N$ so that the following holds: Suppose $G$ is a graph on $n\geq N$ vertices with maximum spread; then $G$ is the join of a clique on $\lfloor 2n/3\rfloor$ vertices and an independent set on $\lceil n/3\rceil$ vertices. \end{theorem} \begin{theorem}\label{thm: bipartite spread theorem} $$s(n,e) - s_b(n,e) \le \frac{1+16 e^{-3/4}}{e^{3/4}} s(n,e)$$ for all $n,e \in \mathbb{N}$ satisfying $e \le \lfloor n^2/4\rfloor$. In addition, for any $\varepsilon>0$, there exists some $n_\varepsilon$ such that $$s(n,e) - s_b(n,e) \ge \frac{1-\varepsilon}{e^{3/4}} s(n,e)$$ for all $n\ge n_\varepsilon$ and some $e \le \lfloor n^2/4\rfloor$ depending on $n$. \end{theorem} The proof of Theorem \ref{thm: spread maximum graphs} is quite involved, and constitutes the main subject of this work. The general technique consists of showing that a spread-extremal graph has certain desirable properties, considering and solving an analogous problem for graph limits, and then using this result to say something about the Spread Conjecture for sufficiently large $n$. For the interested reader, we state the analogous graph limit result in the language of functional analysis. \begin{theorem}\label{thm: functional analysis spread} Let $W:[0,1]^2\to [0,1]$ be a Lebesgue-measurable function such that $W(x,y) = W(y,x)$ for a.e. $(x,y)\in [0,1]^2$ and let $A = A_W$ be the kernel operator on $\mathscr{L}^2[0,1]$ associated to $W$. For all unit functions $f,g\in\mathscr{L}^2[0,1]$, \begin{align*} \langle f, Af\rangle - \langle g, Ag\rangle &\leq \dfrac{2}{\sqrt{3}}. \end{align*} Moreover, equality holds if and only if there exists a measure-preserving transformation $\sigma$ on $[0,1]$ such that for a.e. $(x,y)\in [0,1]^2$, \begin{align*} W(\sigma(x),\sigma(y)) &= \left\{\begin{array}{rl} 0, & (x,y)\in [2/3, 1]\times [2/3, 1]\\ 1, &\text{otherwise} \end{array}\right. . \end{align*} \end{theorem} The proof of Theorem \ref{thm: spread maximum graphs} can be found in Sections 2-6, with certain technical details reserved for the Appendix. 
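Both quantities appearing in Theorem \ref{thm: bipartite spread theorem} can also be computed by brute force for very small parameters. The following Python sketch (ours, purely illustrative and far from the asymptotic regime of the theorem; it assumes \texttt{numpy}) enumerates all graphs with $n=6$ vertices and $e=9=\lfloor n^2/4\rfloor$ edges and reports $s(n,e)$ and $s_b(n,e)$.
\begin{verbatim}
import numpy as np
from itertools import combinations

def is_bipartite(A):
    n = len(A)
    color = [None] * n
    for s in range(n):              # 2-color each component
        if color[s] is None:
            color[s], stack = 0, [s]
            while stack:
                u = stack.pop()
                for v in range(n):
                    if A[u, v]:
                        if color[v] is None:
                            color[v] = 1 - color[u]
                            stack.append(v)
                        elif color[v] == color[u]:
                            return False
    return True

def brute_force_spreads(n, e):
    pairs = list(combinations(range(n), 2))
    s = s_b = 0.0
    for edges in combinations(pairs, e):
        A = np.zeros((n, n))
        for u, v in edges:
            A[u, v] = A[v, u] = 1.0
        lam = np.linalg.eigvalsh(A)
        spread = lam[-1] - lam[0]
        s = max(s, spread)
        if is_bipartite(A):
            s_b = max(s_b, spread)
    return s, s_b

print(brute_force_spreads(6, 9))
\end{verbatim}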
We provide an in-depth overview of the proof of Theorem \ref{thm: spread maximum graphs} in Subsection \ref{sub: outline}. In comparison, the proof of Theorem \ref{thm: bipartite spread theorem} is surprisingly short, making use of the theory of equitable decompositions and a well-chosen class of counterexamples. The proof of Theorem \ref{thm: bipartite spread theorem} can be found in Section \ref{sec:bispread}. Finally, in Section \ref{sec: conclusion}, we discuss further questions and possible future avenues of research. \subsection{High-Level Outline of Spread Proof}\label{sub: outline} Here, we provide a concise, high-level description of our asymptotic proof of the Spread Conjecture.
The proof itself is quite involved, making use of interval arithmetic and a number of fairly complicated symbolic calculations, but is conceptually quite intuitive. Our proof consists of four main steps. \\ \noindent{\bf Step 1: } Graph-Theoretic Results \\ \begin{adjustwidth}{1.5em}{0pt} In Section \ref{sec:graphs}, we observe a number of important structural properties of any graph that maximizes the spread for a given order $n$. In particular, we show that\\ \begin{itemize} \item any graph that maximizes spread must be the join of two threshold graphs (Lemma \ref{lem: graph join}), \item both graphs in this join have order linear in $n$ (Lemma \ref{linear size parts}), \item the unit eigenvectors $\mathbf{x}$ and $\mathbf{z}$ corresponding to $\lambda_1(A)$ and $\lambda_n(A)$ have infinity norms of order $n^{-1/2}$ (Lemma \ref{upper bound on eigenvector entries}), \item the quantities $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$, $u \in V$, are all nearly equal, up to a term of order $n^{-1}$ (Lemma \ref{discrete ellipse equation}).\\ \end{itemize} This last structural property serves as the backbone of our proof. In addition, we note that, by a tensor argument, an asymptotic upper bound for $s(n)$ implies a bound for all $n$. \\ \end{adjustwidth} \noindent{\bf Step 2: } Graphons and a Finite-Dimensional Eigenvalue Problem \\ \begin{adjustwidth}{1.5em}{0pt} In Sections \ref{sec: graphon background} and \ref{sec: graphon spread reduction}, we make use of graphons to understand how spread-extremal graphs behave as $n$ tends to infinity. Section \ref{sec: graphon background} consists of a basic introduction to graphons, and a translation of the graph results of Step 1 to the graphon setting. In particular, we prove the graphon analogue of the graph properties that \\ \begin{itemize} \item vertices $u$ and $v$ are adjacent if and only if $\mathbf{x}_u \mathbf{x}_v - \mathbf{z}_u \mathbf{z}_v >0$ (Lemma \ref{lem: K = indicator function}), \item the quantities $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$, $u \in V$, are all nearly equal (Lemma \ref{lem: local eigenfunction equation}). \\ \end{itemize} Next, in Section \ref{sec: graphon spread reduction}, we show that the spread-extremal graphon for our problem takes the form of a particular stepgraphon with a finite number of blocks (Theorem \ref{thm: reduction to stepgraphon}). In particular, through an averaging argument, we note that the spread-extremal graphon takes the form of a stepgraphon with a fixed structure of symmetric seven by seven blocks, illustrated below. \begin{align*} \input{graphics/stepgraphon7x7} \end{align*} The lengths $\alpha = (\alpha_1,...,\alpha_7)$, $\alpha^T {\bf 1} = 1$, of each row and column in the spread-extremal stepgraphon are unknown. For any choice of lengths $\alpha$, we can associate a $7\times7$ matrix whose spread is identical to that of the associated stepgraphon pictured above. Let $B$ be the $7\times7$ matrix with $B_{i,j}$ equal to the value of the above stepgraphon on block $i,j$, and $D = \text{diag}(\alpha_1,...,\alpha_7)$ be a diagonal matrix with $\alpha$ on the diagonal. Then the matrix $D^{1/2} B D^{1/2}$ has spread equal to the spread of the associated stepgraphon.
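As a quick sanity check of this correspondence, consider the two-block pattern that Theorem \ref{thm: spread maximum graphon} ultimately identifies (we use it here only for illustration, since the optimal $7\times 7$ data is not yet known at this point): for the stepgraphon equal to $0$ on $[2/3,1]^2$ and $1$ elsewhere, with block lengths $\alpha = (2/3,1/3)$, the matrix $D^{1/2}BD^{1/2}$ has spread exactly $2/\sqrt{3}$. A minimal Python sketch (ours, assuming \texttt{numpy}) confirms this numerically.
\begin{verbatim}
import numpy as np

B = np.array([[1.0, 1.0],      # block values of the stepgraphon
              [1.0, 0.0]])
D = np.diag([2.0 / 3.0, 1.0 / 3.0])
M = np.sqrt(D) @ B @ np.sqrt(D)               # D^{1/2} B D^{1/2}
lam = np.linalg.eigvalsh(M)
print(lam[-1] - lam[0], 2.0 / np.sqrt(3.0))   # both 1.1547005...
\end{verbatim}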
\\ \end{adjustwidth} \noindent{\bf Step 3: } Computer-Assisted Proof of a Finite-Dimensional Eigenvalue Problem \\ \begin{adjustwidth}{1.5em}{0pt} In Section \ref{sec:spread_graphon}, we show that the optimizing choice of $\alpha$ is, without loss of generality, given by $\alpha_1 = 2/3$, $\alpha_6 =1/3$, and all other $\alpha_i =0$ (Theorem \ref{thm: spread maximum graphon}). This is exactly the limit of the conjectured spread-extremal graph as $n$ tends to infinity. The proof of this fact is extremely technical, and relies on a computer-assisted proof using both interval arithmetic and symbolic computations. This is the only portion of the proof that requires the use of interval arithmetic. Though not a proof, Figure \ref{fig:contour} provides intuitive visual justification that this result is true. In this figure, we provide contour plots resulting from numerical computations of the spread of the above matrix for various values of $\alpha$. The numerical results suggest that the $2/3-1/3$ two by two block stepgraphon is indeed optimal. See Figure \ref{fig:contour} and the associated caption for details. The actual proof of this fact consists of the following steps: \\ \begin{itemize} \item we reduce the possible choices of non-zero $\alpha_i$ from $2^7$ to $17$ different cases (Lemma \ref{lem: 19 cases}), \item using eigenvalue equations, the graphon version of the fact that the quantities $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$ are all nearly equal, and interval arithmetic, we prove that, of the $17$ cases, only the cases \begin{itemize} \item $\alpha_1,\alpha_7 \ne 0$ \item $\alpha_4,\alpha_5,\alpha_7 \ne 0$ \end{itemize} can produce a spread-extremal stepgraphon (Lemma \ref{lem: 2 feasible sets}), \item prove that the three by three case cannot be spread-extremal, using basic results from the theory of cubic polynomials and computer-assisted symbolic calculations (Lemma \ref{lem: SPR457}). \\ \end{itemize} This proves that the spread-extremal graphon is a two by two stepgraphon that, without loss of generality, takes value zero on the block $[2/3,1]^2$ and one elsewhere (Theorem \ref{thm: spread maximum graphon}). \\ \end{adjustwidth} \begin{figure} \centering \subfigure[$\alpha_i \ne 0$ for all $i$]{\includegraphics[width=2.9in,height = 2.5in]{graphics/125_34_67.png}} \quad \subfigure[$\alpha_2=\alpha_3=\alpha_4 = 0$]{\includegraphics[width=2.9in,height = 2.5in]{graphics/16_5_7.png}} \caption{Contour plots of the spread for some choices of $\alpha$. Each point $(x,y)$ of Plot (a) illustrates the maximum spread over all choices of $\alpha$ satisfying $\alpha_3 + \alpha_4 = x$ and $\alpha_6 + \alpha_7 = y$ (and therefore, $\alpha_1 + \alpha_2 + \alpha_5 = 1 - x - y$) on a grid of step size $1/100$. Each point $(x,y)$ of Plot (b) illustrates the maximum spread over all choices of $\alpha$ satisfying $\alpha_2=\alpha_3=\alpha_4=0$, $\alpha_5 = y$, and $\alpha_7 = x$ on a grid of step size $1/100$. The maximum spread of Plot (a) is achieved at the black x, and implies that, without loss of generality, $\alpha_3 + \alpha_4=0$, and therefore $\alpha_2 = 0$ (indices $\alpha_1$ and $\alpha_2$ can be combined when $\alpha_3 + \alpha_4=0$). Plot (b) treats this case when $\alpha_2 = \alpha_3 = \alpha_4 = 0$, and the maximum spread is achieved on the black line. This implies that either $\alpha_5 =0$ or $\alpha_7 = 0$.
In both cases, this reduces to the block two by two case $\alpha_1,\alpha_7 \ne 0$ (or, if $\alpha_7 = 0$, then $\alpha_1,\alpha_6 \ne 0$).} \label{fig:contour} \end{figure} \noindent{\bf Step 4: } From Graphons to an Asymptotic Proof of the Spread Conjecture \\ \begin{adjustwidth}{1.5em}{0pt} Finally, in Section \ref{sub-sec: graphons to graphs}, we convert our result for the spread-extremal graphon to a statement for graphs. This process consists of two main parts:\\ \begin{itemize} \item using our graphon theorem, we show that any spread-extremal graph takes the form $(K_{n_1}\dot{\cup} \overline{K_{n_2}})\vee \overline{K_{n_3}}$ for $n_1 = (2/3+o(1))n$, $n_2 = o(n)$, and $n_3 = (1/3+o(1))n$ (Lemma \ref{lem: few exceptional vertices}), i.e. any spread-extremal graph is equal up to a set of $o(n)$ vertices to the conjectured optimal graph $K_{\lfloor 2n/3\rfloor} \vee \overline{K_{\lceil n/3 \rceil}}$, \item we show that, for $n$ sufficiently large, the spread of $(K_{n_1}\dot{\cup} \overline{K_{n_2}})\vee \overline{K_{n_3}}$, $n_1 + n_2 + n_3 = n$, is maximized when $n_2 = 0$ (Lemma \ref{lem: no exceptional vertices}).\\ \end{itemize} Together, these two results complete our proof of the spread conjecture for sufficiently large $n$ (Theorem \ref{thm: spread maximum graphs}). \end{adjustwidth} \section{Properties of spread-extremal graphs} \label{sec:graphs} In this section, we review what has already been proven about spread-extremal graphs ($n$-vertex graphs with spread $s(n)$) in \cite{gregory2001spread}, where the original conjectures were made. We then prove a number of properties of spread-extremal graphs and properties of the eigenvectors associated with the maximum and minimum eigenvalues of a spread-extremal graph. Let $G$ be a graph, and let $A$ be the adjacency matrix of $G$, with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$. For unit vectors $\mathbf{x}$, $\mathbf{y} \in \mathbb{R}^n$, we have \[\lambda_1 \geq \mathbf{x}^T A \mathbf{x} \quad \mbox{and} \quad \lambda_n \leq \mathbf{y}^TA\mathbf{y}.\] Hence (as observed in \cite{gregory2001spread}), the spread of a graph can be expressed as \begin{equation}\label{eq: gregory min max} s(G) = \max_{\mathbf{x}, \mathbf{z}}\sum_{u\sim v} (\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v)\end{equation} where the maximum is taken over all pairs of unit vectors $\mathbf{x}, \mathbf{z}$. Furthermore, this maximum is attained only for $\mathbf{x}, \mathbf{z}$ orthonormal eigenvectors corresponding to the eigenvalues $\lambda_1, \lambda_n$, respectively. We refer to such a pair of vectors $\mathbf{x}, \mathbf{z}$ as \emph{extremal eigenvectors} of $G$. For any two vectors $\mathbf{x}$, $\mathbf{z}$ in $\mathbb{R}^n$, let $G(\mathbf{x}, \mathbf{z})$ denote the graph for which distinct vertices $u, v$ are adjacent if and only if $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v\geq 0$. Then from the above, there is some graph $G(\mathbf{x}, \mathbf{z})$ which is a spread-extremal graph, with $\mathbf{x}$, $\mathbf{z}$ orthonormal and $\mathbf{x}$ positive (\cite[Lemma 3.5]{gregory2001spread}). In addition, we enhance \cite[Lemmas 3.4 and 3.5]{gregory2001spread} using some helpful definitions and the language of threshold graphs. Whenever $G = G(\mathbf{x}, \mathbf{z})$ is understood, let $P = P(\mathbf{x}, \mathbf{z}) := \{u \in V(G) : \mathbf{z}_u \geq 0\}$ and $N = N(\mathbf{x}, \mathbf{z}) := V(G)\setminus P$.
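These notions are easy to explore numerically. The following Python sketch (ours, purely illustrative; it assumes \texttt{numpy} and uses the conjectured spread-extremal graph $G(30,20)$) computes the extremal eigenvectors, confirms that $s(G)=\mathbf{x}^TA\mathbf{x}-\mathbf{z}^TA\mathbf{z}$ as in Equation \eqref{eq: gregory min max}, rebuilds $G(\mathbf{x},\mathbf{z})$ from the sign pattern of $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v$, and prints the (small) variation of the quantities $\lambda_1\mathbf{x}_u^2-\lambda_n\mathbf{z}_u^2$ appearing in Lemma \ref{discrete ellipse equation} below.
\begin{verbatim}
import numpy as np

n, k = 30, 20                     # G(30,20) = K_20 join empty graph
A = np.zeros((n, n))
A[:k, :] = A[:, :k] = 1.0
np.fill_diagonal(A, 0.0)

lam, V = np.linalg.eigh(A)        # eigenvalues in ascending order
x, z = V[:, -1], V[:, 0]          # extremal unit eigenvectors
if x.sum() < 0:
    x = -x                        # make the Perron vector positive

print(np.isclose(lam[-1] - lam[0], x @ A @ x - z @ A @ z))  # True

# G(x,z): distinct u ~ v iff x_u x_v - z_u z_v >= 0
G_xz = (np.outer(x, x) - np.outer(z, z) >= 0).astype(float)
np.fill_diagonal(G_xz, 0.0)
print(np.array_equal(G_xz, A))                              # True

q = lam[-1] * x**2 - lam[0] * z**2
print(q.max() - q.min())          # small, of order 1/n
\end{verbatim}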
For our purposes, we say that $G$ is a \emph{threshold graph} if and only if there exists a function $\varphi:V(G)\to (-\infty,\infty]$ such that for all distinct $u,v\in V(G)$, $uv\in E(G)$ if and only if $\varphi(u)+\varphi(v) \geq 0$ \footnote{ Here, we take the usual convention that for all $x\in (-\infty, \infty]$, $\infty + x = x + \infty = \infty$}. Here, $\varphi$ is a {\it threshold function} for $G$ (with $0$ as its {\it threshold}). The following detailed lemma shows that any spread-extremal graph is the join of two threshold graphs with threshold functions which can be made explicit. \begin{lemma}\label{lem: graph join} Let $n> 2$ and suppose $G$ is an $n$-vertex graph such that $s(G) = s(n)$. Denote by $\mathbf{x}$ and $\mathbf{z}$ the extremal unit eigenvectors for $G$. Then \begin{enumerate}[(i)] \item\label{item: matrix 0 or 1} For any two vertices $u,v$ of $G$, $u$ and $v$ are adjacent whenever $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v>0$ and $u$ and $v$ are nonadjacent whenever $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v<0$. \item\label{item: xx-zz nonzero} For any distinct $u,v\in V(G)$, $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v\not=0$. \item\label{item: graph join} Let $P := P(\mathbf{x}, \mathbf{z})$, $N := N(\mathbf{x}, \mathbf{z})$ and let $G_1 := G[P]$ and $G_2 := G[N]$. Then $G = G(\mathbf{x}, \mathbf{z}) = G_1\vee G_2$. \item\label{item: G1 G2 thresholds} For each $i\in \{1,2\}$, $G_i$ is a threshold graph with threshold function defined on all $u\in V(G_i)$ by \begin{align*} \varphi(u) := \log\left| \dfrac{\mathbf{x}_u} {\mathbf{z}_u} \right| . \end{align*} \end{enumerate} \end{lemma} \begin{proof} Suppose $G$ is an $n$-vertex graph such that $s(G) = s(n)$ and write $A = (a_{uv})_{u,v\in V(G)}$ for its adjacency matrix. Item \eqref{item: matrix 0 or 1} is equivalent to Lemma 3.4 from \cite{gregory2001spread}. For completeness, we include a proof. By Equation \eqref{eq: gregory min max} we have that \begin{align*} s(G) &= \max_{ \mathbf{x},\mathbf{z} } \mathbf{x}^T A\mathbf{x} -\mathbf{z}^TA\mathbf{z} = \sum_{u,v\in V(G)} a_{uv}\cdot \left(\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v\right), \end{align*} where the maximum is taken over all pairs of unit vectors in $\mathbb{R}^{|V(G)|}$. If $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v > 0$ and $a_{uv} = 0$, then $s(G+uv) > s(G)$, a contradiction. And if $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v < 0$ and $a_{uv} = 1$, then $s(G-uv) > s(G)$, a contradiction. So Item \eqref{item: matrix 0 or 1} holds. \\ For a proof of Item \eqref{item: xx-zz nonzero}, suppose $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v = 0$ and denote by $G'$ the graph formed by adding or deleting the edge $uv$ from $G$. With $A' = (a_{uv}')_{u,v\in V(G')}$ denoting the adjacency matrix of $G'$, note that \begin{align*} s(G') \geq \mathbf{x}^T A'\mathbf{x} -\mathbf{z}^TA'\mathbf{z} = \mathbf{x}^T A\mathbf{x} -\mathbf{z}^TA\mathbf{z} = s(G) &\geq s(G'), \end{align*} so each inequality is an equality. It follows that $\mathbf{x}, \mathbf{z}$ are eigenvectors for $A'$. Furthermore, without loss of generality, we may assume that $uv\in E(G)$. In particular, writing $\lambda := \lambda_1$, there exists some $\lambda'$ such that \begin{align*} A\mathbf{x} &= \lambda \mathbf{x}\\ (A -{\bf e}_u{\bf e}_v^T -{\bf e}_v{\bf e}_u^T )\mathbf{x} &= \lambda'\mathbf{x} . \end{align*} So $( {\bf e}_u{\bf e}_v^T +{\bf e}_v{\bf e}_u^T )\mathbf{x} = (\lambda - \lambda')\mathbf{x}$. Let $w\in V(G)\setminus \{u,v\}$.
By the above equation, $(\lambda-\lambda')\mathbf{x}_w = 0$, so either $\lambda' = \lambda$ or $\mathbf{x}_w = 0$. To find a contradiction, it is sufficient to note that $G$ is a connected graph with Perron-Frobenius eigenvector $\mathbf{x}$. Indeed, let $P := \{w\in V(G) : \mathbf{z}_w \geq 0\}$ and let $N := V(G)\setminus P$. Then for any $w\in P$ and any $w'\in N$, $\mathbf{x}_w\mathbf{x}_{w'} - \mathbf{z}_w\mathbf{z}_{w'} > 0$ and by Item \eqref{item: matrix 0 or 1}, $ww'\in E(G)$. So $G$ is connected and this completes the proof of Item \eqref{item: xx-zz nonzero}. \\ Now, we prove Item \eqref{item: graph join}. To see that $G = G(\mathbf{x}, \mathbf{z})$, note by Items \eqref{item: matrix 0 or 1} and \eqref{item: xx-zz nonzero}, for all distinct $u,v\in V(G)$, $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v > 0$ if and only if $uv\in E(G)$, and otherwise, $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v < 0$ and $uv\notin E(G)$. To see that $G = G_1\vee G_2$, note that for any $u\in P$ and any $v\in N$, $0 \neq \mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v \geq \mathbf{z}_u\cdot (-\mathbf{z}_v)\geq 0$. \\ Finally, we prove Item \eqref{item: G1 G2 thresholds}. Suppose $u,v$ are distinct vertices such that either $u,v\in P$ or $u,v\in N$. Allowing the possibility that $0\in \{\mathbf{z}_u, \mathbf{z}_v\}$, the following equivalence holds: \begin{align*} \varphi(u) + \varphi(v) &\geq 0 & \text{ if and only if }\\ \log\left| \dfrac{\mathbf{x}_u\mathbf{x}_v}{\mathbf{z}_u\mathbf{z}_v} \right| &\geq 0 & \text{ if and only if }\\ \mathbf{x}_u\mathbf{x}_v - |\mathbf{z}_u\mathbf{z}_v| &\geq 0. \end{align*} Since $\mathbf{z}_u,\mathbf{z}_v$ have the same sign, $|\mathbf{z}_u\mathbf{z}_v| = \mathbf{z}_u\mathbf{z}_v$, and Item \eqref{item: G1 G2 thresholds} follows. This completes the proof. \end{proof} From \cite{thresholdbook}, we recall the following useful characterization in terms of ``nesting'' neighborhoods: $G$ is a threshold graph if and only if there exists a numbering $v_1,\cdots, v_n$ of $V(G)$ such that for all $1\leq i<j\leq n$, if $v_k\in V(G)\setminus\{v_i,v_j\}$, then $v_jv_k\in E(G)$ implies that $v_iv_k\in E(G)$. Given this ordering, if $k$ is the smallest natural number such that $v_kv_{k+1}\in E(G)$, then the set $\{v_1,\cdots, v_k\}$ induces a clique and the set $\{v_{k+1},\cdots, v_n\}$ induces an independent set. The next lemma shows that both $P$ and $N$ have linear size. \begin{lemma}\label{linear size parts} If $G$ is a spread-extremal graph, then both $P$ and $N$ have size $\Omega(n)$. \end{lemma} \begin{proof} We will show that $P$ and $N$ both have size at least $\frac{n}{100}$. First, since $G$ is spread-extremal, it has spread more than $1.1n$ and hence has smallest eigenvalue $\lambda_n < \frac{-n}{10}$. Without loss of generality, for the remainder of this proof we will assume that $|P| \leq |N|$, that $\mathbf{z}$ is normalized to have infinity norm $1$, and that $v$ is a vertex satisfying $| \mathbf{z}_v| = 1$. By way of contradiction, assume that $|P|< \frac{n}{100}$. If $v\in N$, then we have \[ \lambda_n \mathbf{z}_v = -\lambda_n = \sum_{u\sim v} \mathbf{z}_u \leq \sum_{u\in P} \mathbf{z}_u \leq |P| < \frac{n}{100}, \] contradicting that $\lambda_n < \frac{-n}{10}$. Therefore, assume that $v\in P$. Then \[ \lambda_n^2 \mathbf{z}_v = \lambda_n^2 = \sum_{u\sim v} \sum_{w\sim u}\mathbf{z}_w \leq \sum_{u\sim v} \sum_{\substack{w\sim u\\w\in P}} \mathbf{z}_w \leq |P||N| + 2e(P) \leq |P||N| + |P|^2 \leq \frac{99n^2}{100^2} + \frac{n^2}{100^2}. \] This gives $|\lambda_n| \leq \frac{n}{10}$, a contradiction.
\end{proof} \begin{lemma}\label{upper bound on eigenvector entries} Let $G$ be a spread-extremal graph. If $\mathbf{x}$ and $\mathbf{z}$ are unit eigenvectors for $\lambda_1$ and $\lambda_n$, then $\norm{\mathbf{x}}_\infty = O(n^{-1/2})$ and $\norm{\mathbf{z}}_\infty = O(n^{-1/2})$. \end{lemma} \begin{proof} During this proof we will assume that $\hat{u}$ and $\hat{v}$ are vertices satisfying $\norm{\mathbf{x}}_\infty = \mathbf{x}_{\hat{u}}$ and $\norm{\mathbf{z}}_\infty = |\mathbf{z}_{\hat{v}}|$ and without loss of generality that $\hat{v} \in N$. We will use the weak estimates that $\lambda_1 > \frac{n}{2}$ and $\lambda_n < \frac{-n}{10}$. Define sets \begin{align*} A &= \left\{ w: \mathbf{x}_w > \frac{\mathbf{x}_{\hat{u}}}{4} \right\}\\ B &= \left\{ w: \mathbf{z}_w > \frac{-\mathbf{z}_{\hat{v}}}{20} \right\}. \end{align*} It suffices to show that $A$ and $B$ both have size $\Omega(n)$, for then there exists a constant $\epsilon > 0$ such that \[ 1 = \mathbf{x}^T \mathbf{x} \geq \sum_{w\in A} \mathbf{x}_w^2 \geq |A| \frac{\norm{\mathbf{x}}^2_\infty}{16} \geq \epsilon n \norm{\mathbf{x}}^2_\infty, \] and similarly \[ 1 = \mathbf{z}^T \mathbf{z} \geq \sum_{w\in B} \mathbf{z}_w^2 \geq |B| \frac{\norm{\mathbf{z}}^2_\infty}{400} \geq \epsilon n \norm{\mathbf{z}}^2_\infty. \] We now give a lower bound on the sizes of $A$ and $B$ using the eigenvalue-eigenvector equation and the weak bounds on $\lambda_1$ and $\lambda_n$. \[ \frac{n}{2} \norm{\mathbf{x}}_\infty = \frac{n}{2} \mathbf{x}_{\hat{u}} < \lambda_1 \mathbf{x}_{\hat{u}} = \sum_{w\sim \hat{u}} \mathbf{x}_w \leq \norm{\mathbf{x}}_\infty \left(|A| + \frac{1}{4}(n-|A|) \right), \] giving that $|A| > \frac{n}{3}$. Similarly, \[ \frac{n}{10} \norm{\mathbf{z}}_\infty = - \frac{n}{10} \mathbf{z}_{\hat{v}} < \lambda_n \mathbf{z}_{\hat{v}} = \sum_{w\sim \hat{v}} \mathbf{z}_w \leq \norm{\mathbf{z}}_\infty \left( |B| + \frac{1}{20}(n-|B|)\right), \] and so $|B| > \frac{n}{19}$. \end{proof} \begin{lemma}\label{discrete ellipse equation} Let $G$ be a spread-extremal graph and let $\mathbf{x}$ and $\mathbf{z}$ be its extremal unit eigenvectors. Then there exists a constant $C$ such that for any pair of vertices $u$ and $v$, we have \[ |(\lambda_1 \mathbf{x}_u^2 - \lambda_n\mathbf{z}_u^2) - (\lambda_1 \mathbf{x}_v^2 - \lambda_n \mathbf{z}_v^2)| < \frac{C}{n}. \] \end{lemma} \begin{proof} Let $u$ and $v$ be vertices, and create a graph $\tilde{G}$ by deleting $u$ and cloning $v$. That is, $V(\tilde{G}) = \{v'\} \cup V(G) \setminus \{u\}$ and \[E(\tilde{G}) = E(G\setminus \{u\}) \cup \{v'w:vw\in E(G)\}.\] Note that $v \not\sim v'$. Let $\tilde{A}$ be the adjacency matrix of $\tilde{G}$. Define two vectors $\mathbf{\tilde{x}}$ and $\mathbf{\tilde{z}}$ by \[ \mathbf{\tilde{x}}_w = \begin{cases} \mathbf{x}_w & w\not=v'\\ \mathbf{x}_v & w=v', \end{cases} \] and \[ \mathbf{\tilde{z}}_w = \begin{cases} \mathbf{z}_w & w\not=v'\\ \mathbf{z}_v & w=v'. \end{cases} \] Then $\mathbf{\tilde{x}}^T \mathbf{\tilde{x}} = 1 - \mathbf{x}_u^2 + \mathbf{x}_v^2$ and $\mathbf{\tilde{z}}^T \mathbf{\tilde{z}} = 1 - \mathbf{z}_u^2 + \mathbf{z}_v^2$.
Similarly, \begin{align*} \mathbf{\tilde{x}}^T\tilde{A}\mathbf{\tilde{x}} &= \lambda_1 - 2\mathbf{x}_u \sum_{uw\in E(G)} \mathbf{x}_w + 2\mathbf{x}_{v'} \sum_{vw \in E(G)} \mathbf{x}_w - 2A_{uv}\mathbf{x}_v\mathbf{x}_u \\ &= \lambda_1 - 2\lambda_1\mathbf{x}_u^2 + 2\lambda_1 \mathbf{x}_v^2 - 2A_{uv} \mathbf{x}_u \mathbf{x}_v, \end{align*} and \begin{align*} \mathbf{\tilde{z}}^T\tilde{A}\mathbf{\tilde{z}} &= \lambda_n - 2\mathbf{z}_u \sum_{uw\in E(G)} \mathbf{z}_w + 2\mathbf{z}_{v'} \sum_{vw \in E(G)} \mathbf{z}_w - 2A_{uv}\mathbf{z}_v\mathbf{z}_u \\ &= \lambda_n - 2\lambda_n\mathbf{z}_u^2 + 2\lambda_n \mathbf{z}_v^2 - 2A_{uv} \mathbf{z}_u \mathbf{z}_v. \end{align*} By Equation \eqref{eq: gregory min max}, \begin{align*} 0 & \geq \left(\frac{\mathbf{\tilde{x}}^T\tilde{A}\mathbf{\tilde{x}}}{\mathbf{\tilde{x}}^T \mathbf{\tilde{x}}} - \frac{\mathbf{\tilde{z}}^T\tilde{A}\mathbf{\tilde{z}}}{\mathbf{\tilde{z}}^T \mathbf{\tilde{z}}} \right) - (\lambda_1 - \lambda_n) \\ & = \left(\frac{\lambda_1 - 2\lambda_1 \mathbf{x}_u^2 + 2\lambda_1 \mathbf{x}_v^2 - 2A_{uv}\mathbf{x}_u\mathbf{x}_v}{1 - \mathbf{x}_u^2 + \mathbf{x}_v^2} - \frac{\lambda_n - 2\lambda_n \mathbf{z}_u^2 + 2\lambda_n \mathbf{z}_v^2 - 2A_{uv} \mathbf{z}_u\mathbf{z}_v}{1-\mathbf{z}_u^2 + \mathbf{z}_v^2}\right) - (\lambda_1 - \lambda_n) \\ & = \frac{-\lambda_1 \mathbf{x}_u^2 + \lambda_1\mathbf{x}_v^2 - 2A_{uv}\mathbf{x}_u\mathbf{x}_v}{1 - \mathbf{x}_u^2 + \mathbf{x}_v^2} - \frac{-\lambda_n \mathbf{z}_u^2 + \lambda_n\mathbf{z}_v^2 - 2A_{uv}\mathbf{z}_u \mathbf{z}_v}{1 - \mathbf{z}_u^2 + \mathbf{z}_v^2}. \end{align*} By Lemma \ref{upper bound on eigenvector entries}, we have that $|\mathbf{x}_u|$, $|\mathbf{x}_v|$, $|\mathbf{z}_u|$, and $|\mathbf{z}_v|$ are all $O(n^{-1/2})$, and so it follows that \[ |(\lambda_1 \mathbf{x}_u^2 - \lambda_1\mathbf{x}_v^2) - (\lambda_n \mathbf{z}_u^2 - \lambda_n \mathbf{z}_v^2)| < \frac{C}{n}, \] for some absolute constant $C$. Rearranging terms gives the desired result. \end{proof} \section{The spread-extremal problem for graphons}\label{sec: graphon background} Graphons (or graph functions) are analytical objects which may be used to study the limiting behavior of large, dense graphs, and were originally introduced in \cite{BCLSV2012GraphLimitsSpectra} and \cite{lovasz2006limits}. \subsection{Introduction to graphons} Consider the set $\mathcal{W}$ of all bounded symmetric measurable functions $W:[0,1]^2 \to [0,1]$ (by symmetric, we mean $W(x, y)=W(y,x)$ for all $(x, y)\in [0,1]^2$). A function $W\in \mathcal{W}$ is called a \emph{stepfunction} if there is a partition of $[0,1]$ into subsets $S_1, S_2, \ldots, S_m$ such that $W$ is constant on every block $S_i\times S_j$. Every graph has a natural representation as a stepfunction in $\mathcal{W}$ taking values either 0 or 1 (such a graphon is referred to as a \emph{stepgraphon}). In particular, given a graph $G$ on $n$ vertices indexed $\{1, 2, \ldots, n\}$, we can define a measurable set $K_G \subseteq [0,1]^2$ as \[K_G = \bigcup_{u \sim v} \left[\frac{u-1}{n}, \frac{u}{n}\right]\times \left[\frac{v-1}{n}, \frac{v}{n}\right],\] and this represents the graph $G$ as a bounded symmetric measurable function $W_G$ which takes value $1$ on $K_G$ and $0$ everywhere else. For a measurable subset $U$ we will use $m(U)$ to denote its Lebesgue measure.
This representation of a graph as a measurable subset of $[0,1]^2$ lends itself to a visual presentation sometimes referred to as a \emph{pixel picture}; see, for example, Figure \ref{bipartites_pixel} for two representations of a bipartite graph as a measurable subset of $[0,1]^2$. Clearly, this indicates that such a representation is not unique; neither is the representation of a graph as a stepfunction. Using an equivalence relation on $\mathcal{W}$ derived from the so-called \emph{cut metric}, we can identify graphons that are equivalent up to relabelling, and up to any differences on a set of measure zero (i.e. equivalent \emph{almost everywhere}). \begin{figure} \begin{center} \begin{tabular}{||cccccccc||}\hline\hline & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\ & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\ & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\ & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\\hline\hline \end{tabular} \hspace{40pt} \begin{tabular}{||cccccccc||}\hline\hline & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\\hline\hline \end{tabular} \end{center} \caption{Two presentations of a bipartite graph as a stepfunction.} \label{bipartites_pixel} \end{figure} For all symmetric, bounded Lebesgue-measurable functions $W:[0,1]^2\to \mathbb{R}$, we let \[ \|W\|_\square = \sup_{S, T\subseteq [0,1]} \left|\int_{S\times T} W(x,y)\,dx\,dy \right|. \] Here, $\|\cdot\|_\square$ is referred to as the \emph{cut norm}. Next, one can also define a semidistance $\delta_\square$ on $\mathcal{W}$ as follows. First, we define \emph{weak isomorphism} of graphons. Let $\mathcal{S}$ be the set of all measure-preserving functions on $[0,1]$. For every $\varphi\in \mathcal{S}$ and every $W\in \mathcal{W}$, define $W^\varphi:[0,1]^2\to [0,1]$ by \begin{align*} W^\varphi(x,y) := W(\varphi(x),\varphi(y)) \end{align*} for a.e. $(x,y)\in [0,1]^2$. Now for any $W_1,W_2\in \mathcal{W}$, let \[ \delta_\square(W_1, W_2) = \inf_{\varphi\in \mathcal{S}} \|W_1-W_2^{\varphi}\|_\square . \] Define the equivalence relation $\sim$ on $\mathcal{W}$ as follows: for all $W_1, W_2\in \mathcal{W}$, $W_1\sim W_2$ if and only if $\delta_\square(W_1,W_2) = 0$. Furthermore, let $\hat{\mathcal{W}} := \mathcal{W}/\sim$ be the quotient space of $\mathcal{W}$ under $\sim$. Note that $\delta_\square$ induces a metric on $\hat{\mathcal{W}}$.
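Although computing the cut norm of a general kernel is computationally hard, for stepfunctions it reduces to a finite search: the supremum in the definition of $\|\cdot\|_\square$ may be restricted to unions of blocks. The following Python sketch (ours, assuming \texttt{numpy}; the input is the two-block $0$/$1$ pattern appearing later in the paper, recentered by its integral $8/9$ so that the answer is not trivially $\int W$) evaluates such a cut norm exactly.
\begin{verbatim}
import numpy as np
from itertools import product

def cut_norm_step(B, alpha):
    # Cut norm of a stepfunction with block values B and block
    # lengths alpha; for stepfunctions the supremum is attained
    # on unions of blocks, so brute force over subsets is exact.
    m = len(alpha)
    best = 0.0
    for S in product((0.0, 1.0), repeat=m):
        for T in product((0.0, 1.0), repeat=m):
            s = np.array(S) * alpha
            t = np.array(T) * alpha
            best = max(best, abs(s @ B @ t))
    return best

B = np.array([[1.0, 1.0], [1.0, 0.0]]) - 8.0 / 9.0
alpha = np.array([2.0 / 3.0, 1.0 / 3.0])
print(cut_norm_step(B, alpha))    # 8/81 = 0.098765...
\end{verbatim}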
Crucially, by \cite[Theorem 5.1]{lovasz2007szemeredi}, $\hat{\mathcal{W}}$ is a compact metric space. Given $W\in \hat{\mathcal{W}}$, we define the Hilbert-Schmidt operator $A_W: \mathscr{L}^2[0,1] \to \mathscr{L}^2[0,1]$ by \[ (A_Wf)(x) := \int_0^1 W(x,y)f(y) \,dy \] for all $f\in \mathscr{L}^2[0,1]$ and a.e. $x\in [0,1]$. Since $W$ is symmetric and bounded, $A_W$ is a compact Hermitian operator. In particular, $A_W$ has a discrete, real spectrum whose only possible accumulation point is $0$ (cf. \cite{aubin2011applied}). Hence, the maximum and minimum eigenvalues exist and we focus our attention on these extremes. Let $\mu(W)$ and $\nu(W)$ be the maximum and minimum eigenvalue of $A_W$, respectively, and define the \emph{spread} of $W$ as \[ \text{spr}(W) := \mu(W) - \nu(W). \] By the Min-Max Theorem, we have that \[ \mu(W) = \max_{\|f\|_2 = 1} \int_0^1\int_0^1 W(x,y)f(x)f(y) \, dx\, dy, \] and \[ \nu(W) = \min_{\|f\|_2 = 1} \int_0^1\int_0^1 W(x,y)f(x)f(y) \, dx\, dy. \] Both $\mu$ and $\nu$ are continuous functions with respect to $\delta_\square$; in particular, we have the following. \begin{theorem}[cf. Theorem 6.6 from \cite{BCLSV2012GraphLimitsSpectra} or Theorem 11.54 in \cite{Lovasz2012Hombook}]\label{thm: graphon eigenvalue continuity} Let $\{W_n\}_n$ be a sequence of graphons converging to $W$ with respect to $\delta_\square$. Then as $n\to\infty$, \begin{align*} \mu(W_n)\to\mu(W) \quad\text{ and }\quad \nu(W_n)\to\nu(W). \end{align*} \end{theorem} If $W \sim W'$, then $\mu(W) = \mu(W')$ and $\nu(W) = \nu(W')$. By compactness, we may consider the optimization problem on the factor space $\hat{\mathcal{W}}$ \[ \text{spr}(\hat{\mathcal{W}})=\max_{W\in \hat{\mathcal{W}}} \text{spr}(W), \] and furthermore there is a $W \in \hat{\mathcal{W}}$ that attains the maximum. Since every graph is represented by $W_G\in \hat{\mathcal{W}}$, this allows us to give an upper bound for $s(n)$ in terms of $\text{spr}(\hat{\mathcal{W}})$. Indeed, by replacing the eigenvectors of $G$ with their corresponding stepfunctions, the following proposition can be shown. \begin{proposition}\label{graph to graphon eigenvalue scaling} Let $G$ be a graph on $n$ vertices. Then \begin{align*} \lambda_1(G) = n\cdot \mu(W_G) \quad \text{ and }\quad \lambda_n(G) = n\cdot \nu(W_G). \end{align*} \end{proposition} Proposition \ref{graph to graphon eigenvalue scaling} implies that $s(n) \leq n\cdot\text{spr}(\hat{\mathcal{W}})$ for all $n$. Combined with Theorem \ref{thm: functional analysis spread}, this gives the following corollary. \begin{corollary} For all $n$, $s(n) \leq \frac{2n}{\sqrt{3}}$. \end{corollary} This can also be proved more directly by using Theorem \ref{thm: spread maximum graphs} and taking tensor powers. \subsection{Properties of spread-extremal graphons} Our main objective in the next sections is to solve the maximum spread problem for graphons in order to determine this upper bound for $s(n)$. As such, in this subsection we set up some preliminaries to the solution which largely comprise a translation of what is known in the graph setting (see Section~\ref{sec:graphs}). Specifically, we define what it means for a graphon to be connected, and show that spread-extremal graphons must be connected. We then prove a standard corollary of the Perron-Frobenius theorem. Finally, we prove graphon versions of Lemma~\ref{lem: graph join} and Lemma~\ref{discrete ellipse equation}.
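Before doing so, we record a brief numerical illustration (ours, assuming \texttt{numpy}) of Proposition \ref{graph to graphon eigenvalue scaling} and the corollary above: by the proposition, $s(G)/n = \text{spr}(W_G)$, and for the conjectured extremal graphs this normalized spread increases toward the bound $2/\sqrt{3}\approx 1.1547$.
\begin{verbatim}
import numpy as np

def spread(A):
    lam = np.linalg.eigvalsh(A)
    return lam[-1] - lam[0]

for n in (30, 100, 300, 1000):
    k = (2 * n) // 3
    A = np.zeros((n, n))
    A[:k, :] = A[:, :k] = 1.0
    np.fill_diagonal(A, 0.0)
    print(n, spread(A) / n)       # approaches 2/sqrt(3) from below
print(2.0 / np.sqrt(3.0))
\end{verbatim}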
Let $W_1$ and $W_2$ be graphons and let $\alpha_1,\alpha_2$ be positive real numbers with $\alpha_1+\alpha_2 = 1$. We define the \textit{direct sum} of $W_1$ and $W_2$ with weights $\alpha_1$ and $\alpha_2$, denoted $W = \alpha_1W_1\oplus \alpha_2W_2$, as follows. Let $\varphi_1$ and $\varphi_2$ be the increasing affine maps which send $J_1 := [0,\alpha_1]$ and $J_2 := [\alpha_1,1]$ to $[0,1]$, respectively. Then for all $(x,y)\in [0,1]^2$, let \begin{align*} W(x,y) := \left\{\begin{array}{rl} W_i(\varphi_i(x),\varphi_i(y)), &\text{if }(x,y)\in J_i\times J_i\text{ for some }i\in \{1,2\}\\ 0, & \text{otherwise} \end{array}\right. . \end{align*} A graphon $W$ is \textit{connected} if $W$ is not weakly isomorphic to a direct sum $\alpha_1W_1\oplus \alpha_2W_2$ where $\alpha_1\neq 0,1$. Equivalently, $W$ is connected if there does not exist a measurable subset $A\subseteq [0,1]$ of positive measure such that $W(x,y) = 0$ for a.e. $(x,y)\in A\times A^c$. \\ \begin{proposition}\label{prop: disconnected spectrum} Suppose $W_1,W_2$ are graphons and $\alpha_1,\alpha_2$ are positive real numbers summing to $1$. Let $W:=\alpha_1W_1\oplus\alpha_2W_2$, and write $\Lambda(\cdot)$ for the multiset of eigenvalues of the associated kernel operator. Then, as multisets, \begin{align*} \Lambda(W) = \{\alpha_1 u : u \in \Lambda(W_1) \}\cup \{ \alpha_2v : v\in \Lambda(W_2)\}. \end{align*} Moreover, $\text{spr}(W)\leq \alpha_1 \text{spr}(W_1) + \alpha_2 \text{spr}(W_2)$, with equality if and only if $W_1$ or $W_2$ is the all-zeroes graphon. \end{proposition} \begin{proof} For convenience, let $\Lambda_i := \{\alpha_iu : u\in\Lambda(W_i)\}$ for each $i\in\{1,2\}$ and $\Lambda := \Lambda(W)$. The first claim holds simply by considering the restriction of eigenfunctions to the intervals $[0,\alpha_1]$ and $[\alpha_1,1]$. \\ For the second claim, we first write $\text{spr}(W) = \alpha_i\mu(W_i)-\alpha_j\nu(W_j)$ for some $i,j\in\{1,2\}$. Let $I_i := [\min(\Lambda_i), \max(\Lambda_i)]$ for each $i\in\{1,2\}$ and $I := [\min(\Lambda), \max(\Lambda)]$. Clearly $\alpha_i \text{spr}(W_i) = \text{diam}(I_i)$ for each $i\in\{1,2\}$ and $\text{spr}(W) = \text{diam}(I)$. Moreover, $I = I_1\cup I_2$. Since $0\in I_1\cap I_2$, $\text{diam}(I)\leq \text{diam}(I_1)+\text{diam}(I_2)$, with equality if and only if either $I_1$ or $I_2$ equals $\{0\}$. So the desired claim holds. \end{proof} Furthermore, the following basic corollary of the Perron-Frobenius theorem holds. For completeness, we prove it here. \begin{proposition}\label{prop: PF eigenfunction} Let $W$ be a connected graphon and write $f$ for an eigenfunction corresponding to $\mu(W)$. Then $f$ is nonzero with constant sign a.e. \end{proposition} \begin{proof} Let $\mu = \mu(W)$. Since \begin{align*} \mu = \max_{\|h\|_2 = 1}\int_{(x,y)\in [0,1]^2}W(x,y)h(x)h(y), \end{align*} we may assume without loss of generality that $f\geq 0$ a.e. on $[0,1]$. Let $Z:=\{x\in [0,1] : f(x) = 0\}$. Then for a.e. $x\in Z$, \begin{align*} 0 = \mu f(x) = \int_{y\in [0,1]}W(x,y)f(y) = \int_{y\in Z^c} W(x,y)f(y). \end{align*} Since $f > 0$ on $Z^c$, it follows that $W(x,y) = 0$ a.e. on $Z\times Z^c$. Clearly $m(Z^c) \neq 0$. If $m(Z) = 0$ then the desired claim holds, so without loss of generality, $0 < m(Z),m(Z^c)<1$. It follows that $W$ is disconnected, contradicting our assumption. This completes the proof of the desired claim. \end{proof} We may now prove a graphon version of Lemma \ref{lem: graph join}. \begin{lemma}\label{lem: K = indicator function} Suppose $W$ is a graphon achieving maximum spread and let $f,g$ be eigenfunctions for the maximum and minimum eigenvalues of $W$, respectively.
Then the following claims hold: \begin{enumerate}[(i)] \item For a.e. $(x,y)\in [0,1]^2$, \label{item: K = 0 or 1} \begin{align*} W(x,y) = \left\{\begin{array}{rl} 1, & f(x)f(y) > g(x)g(y) \\ 0, & \text{otherwise} \end{array} \right. . \end{align*} \item $f(x)f(y)-g(x)g(y) \neq 0$ for a.e. $(x,y)\in [0,1]^2$. \label{item: |diff| > 0} \end{enumerate} \end{lemma} \begin{proof} We proceed in the following order: \begin{itemize} \item Prove Item \eqref{item: K = 0 or 1} holds for a.e. $(x,y)\in [0,1]^2$ such that $f(x)f(y)\neq g(x)g(y)$. We will call this Item \eqref{item: K = 0 or 1}*. \item Prove Item \eqref{item: |diff| > 0}. \item Deduce that Item \eqref{item: K = 0 or 1} also holds. \end{itemize} By Propositions \ref{prop: disconnected spectrum} and \ref{prop: PF eigenfunction}, we may assume without loss of generality that $f > 0$ a.e.~on $[0,1]$. For convenience, we define the quantity $d(x,y) := f(x)f(y)-g(x)g(y)$. To prove Item \eqref{item: K = 0 or 1}*, we first define a graphon $W'$ by \begin{align*} W'(x,y) = \left\{\begin{array}{rl} 1, & d(x,y) > 0 \\ 0, & d(x,y) < 0 \\ W(x,y) & \text{otherwise} \end{array} \right. . \end{align*} Then by inspection, \begin{align*} \text{spr}(W') &\geq \int_{(x,y)\in [0,1]^2} W'(x,y)(f(x)f(y)-g(x)g(y)) \\ &= \int_{(x,y)\in [0,1]^2} W(x,y)(f(x)f(y)-g(x)g(y)) \\&+ \int_{d(x,y) > 0} (1-W(x,y))d(x,y) - \int_{d(x,y) < 0} W(x,y)d(x,y) \\ &= \text{spr}(W) + \int_{d(x,y) > 0} (1-W(x,y))d(x,y) - \int_{d(x,y) < 0} W(x,y)d(x,y). \end{align*} Since $W$ maximizes spread, both integrals in the last line must be $0$, and hence Item \eqref{item: K = 0 or 1}* holds. \\ Now, we prove Item \eqref{item: |diff| > 0}. For convenience, we define $U$ to be the set of all pairs $(x,y)\in [0,1]^2$ so that $d(x,y) = 0$. Now let $W'$ be any graphon which differs from $W$ only on $U$. Then \begin{align*} \text{spr}(W') &\geq \int_{(x,y)\in [0,1]^2} W'(x,y)(f(x)f(y)-g(x)g(y)) \\ &= \int_{(x,y)\in [0,1]^2} W(x,y)(f(x)f(y)-g(x)g(y))\\&+ \int_{(x,y)\in U} (W'(x,y)-W(x,y))(f(x)f(y)-g(x)g(y)) \\ &= \text{spr}(W). \end{align*} Since $\text{spr}(W)\geq \text{spr}(W')$, $f$ and $g$ are eigenfunctions for $W'$, and we may write $\mu'$ and $\nu'$ for the corresponding eigenvalues. Now, we define \begin{align*} I_{W'}(x) &:= (\mu'-\mu)f(x) \\ &= \int_{y\in [0,1]}(W'(x,y)-W(x,y))f(y)\\ &= \int_{y\in [0,1], \, (x,y)\in U} (W'(x,y)-W(x,y))f(y). \end{align*} Similarly, we define \begin{align*} J_{W'}(x) &:= (\nu'-\nu)g(x) \\ &= \int_{y\in [0,1]}(W'(x,y)-W(x,y))g(y)\\ &= \int_{y\in [0,1],\, (x,y)\in U} (W'(x,y)-W(x,y))g(y). \end{align*} Since $f$ and $g$ are orthogonal, \begin{align*} 0 &= \int_{x\in [0,1]} I_{W'}(x)J_{W'}(x). \end{align*} By definition of $U$, we have that for a.e. $(x,y) \in U$, $0 = d(x,y) = f(x)f(y) - g(x)g(y)$. In particular, since $f(x),f(y) > 0$ for a.e. $(x,y)\in [0,1]^2$, a.e. $(x,y)\in U$ has $g(x)g(y) > 0$. So by letting \begin{align*} U_+ &:= \{(x,y)\in U: g(x),g(y) > 0\}, \\ U_- &:= \{(x,y)\in U:g(x),g(y)<0\},\text{ and} \\ U_0 &:= U\setminus (U_+\cup U_-), \end{align*} $U_0$ has measure $0$. \\ First, let $W'$ be the graphon defined by \begin{align*} W'(x,y) &= \left\{\begin{array}{rl} 1, & (x,y)\in U_+\\ W(x,y), &\text{otherwise} \end{array} \right. . \end{align*} For this choice of $W'$, \begin{align*} I_{W'}(x) &= \int_{y\in [0,1], \, (x,y)\in U_+} (1-W(x,y))f(y), \text{ and} \\ J_{W'}(x) &= \int_{y\in [0,1], \, (x,y)\in U_+} (1-W(x,y))g(y). \end{align*} Clearly $I_{W'}$ and $J_{W'}$ are nonnegative functions, so $I_{W'}(x)J_{W'}(x) = 0$ for a.e. $x\in [0,1]$.
Since $f(y)$ and $g(y)$ are positive for a.e. $(x,y)\in U_+$, it follows that $W(x,y) = 1$ for a.e. $(x,y)\in U_+$. \\ If instead we let $W'(x,y)$ be $0$ for all $(x,y)\in U_+$, it follows by a similar argument that $W(x,y) = 0$ for a.e. $(x,y)\in U_+$. So $U_+$ has measure $0$. Repeating the same argument on $U_-$, we similarly conclude that $U_-$ has measure $0$. This completes the proof of Item \eqref{item: |diff| > 0}. \\ Finally, we note that Items \eqref{item: K = 0 or 1}* and \eqref{item: |diff| > 0} together imply Item \eqref{item: K = 0 or 1}. \end{proof} From here, it is easy to see that any graphon maximizing the spread is a join of two threshold graphons. Next we prove the graphon version of Lemma \ref{discrete ellipse equation}. \begin{lemma}\label{lem: local eigenfunction equation} If $W$ is a graphon achieving the maximum spread with corresponding eigenfunctions $f,g$, then $\mu f^2 - \nu g^2 = \mu-\nu$ almost everywhere. \end{lemma} \begin{proof} We will use the notation $(x,y) \in W$ to denote that $(x,y)\in [0,1]^2$ satisfies $W(x,y) =1$. Let $\varphi:[0,1]\to[0,1]$ be an arbitrary homeomorphism which is {\em orientation-preserving} in the sense that $\varphi(0) = 0$ and $\varphi(1) = 1$. Then $\varphi$ is a continuous strictly monotone increasing function which is differentiable almost everywhere. Now let $\tilde{f} := \varphi'\cdot (f\circ \varphi)$, $\tilde{g} := \varphi'\cdot (g\circ \varphi)$ and $\tilde{W} := \{ (x,y)\in [0,1]^2 : (\varphi(x),\varphi(y))\in W \}$. Using the substitutions $u = \varphi(x)$ and $v = \varphi(y)$, \begin{align*} \tilde{f} \tilde{W} \tilde{f} &= \int_{ (x,y)\in [0,1]^2 } \chi_{(\varphi(x),\varphi(y))\in W}\; \varphi'(x)\varphi'(y)\cdot f(\varphi(x))f(\varphi(y))\, dx\, dy\\ &= \int_{ (u,v)\in [0,1]^2 } \chi_{(u,v)\in W}\; f(u)f(v)\, du\, dv\\ &= \mu. \end{align*} Similarly, $\tilde{g}\tilde{W}\tilde{g} = \nu$. \\ \\ Note however that the $L_2$ norms of $\tilde{f},\tilde{g}$ may not be $1$. Indeed, using the substitution $u = \varphi(x)$, \[ \|\tilde{f}\|_2^2 = \int_{x\in[0,1]} \varphi'(x)^2f(\varphi(x))^2\, dx = \int_{u\in [0,1]} \varphi'(\varphi^{-1}(u))\cdot f(u)^2\, du . \] We exploit this fact as follows. Suppose $I,J$ are disjoint subintervals of $[0,1]$ of the same positive length $m(I) = m(J) = \ell > 0$ and, for any $\varepsilon > 0$ sufficiently small (in terms of $\ell$), let $\varphi$ be the (unique) piecewise linear function which stretches $I$ to length $(1+\varepsilon)m(I)$, shrinks $J$ to length $(1-\varepsilon)m(J)$, and shifts only the elements in between $I$ and $J$. Note that for a.e. $x\in [0,1]$, \[ \varphi'(x) = \left\{\begin{array}{rl} 1+\varepsilon, & x\in I \\ 1-\varepsilon, & x\in J \\ 1, & \text{otherwise}. \end{array}\right. \] Again with the substitution $u = \varphi(x)$, \begin{align*} \|\tilde{f}\|_2^2 &= \int_{x\in[0,1]} \varphi'(x)^2\cdot f(\varphi(x))^2\, dx\\ &= \int_{u\in [0,1]} \varphi'(\varphi^{-1}(u))f(u)^2\, du \\ &= 1 + \varepsilon\cdot ( \|\chi_If\|_2^2 -\|\chi_Jf\|_2^2 ). \end{align*} The same equality holds for $\tilde{g}$ instead of $\tilde{f}$.
After normalizing $\tilde{f}$ and $\tilde{g}$, by optimality of $W$, we get a difference of Rayleigh quotients as \begin{align*} 0 &\leq (fWf-gWg) - \left( \dfrac{\tilde{f}\tilde{W}\tilde{f}}{\|\tilde{f}\|_2^2} - \dfrac{\tilde{g}\tilde{W}\tilde{g}}{\|\tilde{g}\|_2^2} \right) \\ &= \dfrac{ \mu \varepsilon\cdot ( \|\chi_If\|_2^2 -\|\chi_Jf\|_2^2) } {1+\varepsilon\cdot ( \|\chi_If\|_2^2 -\|\chi_Jf\|_2^2) } - \dfrac{ \nu\varepsilon\cdot ( \|\chi_Ig\|_2^2 -\|\chi_Jg\|_2^2) } {1+\varepsilon\cdot ( \|\chi_Ig\|_2^2 -\|\chi_Jg\|_2^2) }\\ &= (1+o(1))\varepsilon\cdot\left( \int_I (\mu f(x)^2-\nu g(x)^2)dx -\int_J (\mu f(x)^2-\nu g(x)^2)dx \right) \end{align*} as $\varepsilon\to 0$. It follows that, for all disjoint intervals $I,J\subseteq [0,1]$ of the same length, the corresponding integrals agree. Taking finer and finer partitions of $[0,1]$, it follows that the integrand $\mu f(x)^2-\nu g(x)^2$ is constant almost everywhere. Since the average of this quantity over all of $[0,1]$ is $\mu-\nu$, the desired claim holds. \end{proof} \section{From graphons to stepgraphons}\label{sec: graphon spread reduction} The main result of this section is as follows. \begin{theorem}\label{thm: reduction to stepgraphon} Suppose $W$ maximizes $\text{spr}(\hat{\mathcal{W}})$. Then $W$ is a stepfunction taking values $0$ and $1$ of the following form \begin{align*} \input{graphics/stepgraphon7x7} \quad . \end{align*} Furthermore, the internal divisions separate according to the sign of the eigenfunction corresponding to the minimum eigenvalue of $W$. \end{theorem} We begin Section \ref{sub-sec: L2 averaging} by mirroring the argument in \cite{terpai}, which proved a conjecture of Nikiforov regarding the largest eigenvalue of a graph and its complement, $\mu+\overline{\mu}$. There, Terpai showed that performing two operations on graphons leads to a strict increase in $\mu+\overline{\mu}$. Furthermore, based on previous work of Nikiforov \cite{Nikiforov4}, the conjecture for graphs reduces directly to maximizing $\mu+\overline{\mu}$ for graphons. Using these operations, Terpai \cite{terpai} reduced the problem to a $4\times 4$ stepgraphon and then completed the proof by hand. In our case, we are not so lucky and are left with a $7\times 7$ stepgraphon after performing similar but more technical operations, detailed in this section. In order to reduce to a $3\times 3$ stepgraphon, we appeal to interval arithmetic (see Section \ref{sub-sec: numerics} and Appendices \ref{sec: ugly} and \ref{sec: appendix}). Furthermore, our proof requires an additional technical argument to translate the result for graphons (Theorem \ref{thm: spread maximum graphon}) to our main result for graphs (Theorem \ref{thm: spread maximum graphs}). In Section \ref{sub-sec: stepgraphon proof}, we prove Theorem \ref{thm: reduction to stepgraphon}. \subsection{Averaging}\label{sub-sec: L2 averaging} For convenience, we introduce some terminology. For any graphon $W$ with $\lambda$-eigenfunction $h$, we say that $x\in [0,1]$ is \textit{typical} (with respect to $W$ and $h$) if \begin{align*} \lambda\cdot h(x) = \int_{y\in [0,1]}W(x,y)h(y). \end{align*} Note that a.e. $x\in [0,1]$ is typical. Additionally, if $U\subseteq [0,1]$ is measurable with positive measure, then we say that $x_0\in U$ is \textit{average} (on $U$, with respect to $W$ and $h$) if \begin{align*} h(x_0)^2 = \dfrac{1}{m(U)}\int_{y\in U}h(y)^2.
\end{align*} Given $W,h,U$, and $x_0$ as above, we define the $L_2[0,1]$ function $\text{av}_{U,x_0}h$ by setting \begin{align*} (\text{av}_{U,x_0}h)(x) := \left\{\begin{array}{rl} h(x_0), & x\in U\\ h(x), & \text{otherwise} \end{array}\right. . \end{align*} Clearly $\|\text{av}_{U,x_0}h\|_2 = \|h\|_2$. Additionally, we define the graphon $\text{av}_{U,x_0}W$ by setting \begin{align*} \text{av}_{U,x_0}W (x,y) := \left\{\begin{array}{rl} 0, & (x,y)\in U\times U \\ W(x_0,y), &(x,y)\in U\times U^c \\ W(x,x_0), &(x,y)\in U^c\times U \\ W(x,y), & (x,y)\in U^c\times U^c \end{array}\right. . \end{align*} In the graph setting, this is analogous to replacing $U$ with an independent set whose vertices are clones of $x_0$. The following lemma indicates how this cloning affects the eigenvalues. \begin{lemma}\label{lem: eigenfunction averaging} Suppose $W$ is a graphon with $h$ a $\lambda$-eigenfunction and suppose there exist disjoint measurable subsets $U_1,U_2\subseteq [0,1]$ of positive measure. Let $U:=U_1\cup U_2$. Moreover, suppose $W = 0$ a.e. on $(U\times U)\setminus (U_1\times U_1)$. Additionally, suppose $x_0\in U_2$ is typical and average on $U$, with respect to $W$ and $h$. Let $\tilde{h} := \text{av}_{U,x_0}h$ and $\tilde{W} := \text{av}_{U,x_0}W$. Then for a.e. $x\in [0,1]$, \begin{align}\label{eq: averaged vector image} (A_{\tilde{W}}\tilde{h})(x) &= \lambda\tilde{h}(x) + \left\{\begin{array}{rl} 0, & x\in U\\ m(U)\cdot W(x_0,x)h(x_0) - \int_{y\in U} W(x,y)h(y), &\text{otherwise} \end{array}\right. . \end{align} Furthermore, \begin{align}\label{eq: averaged vector product} \langle A_{\tilde{W}}\tilde{h}, \tilde{h}\rangle = \lambda + \int_{(x,y)\in U_1\times U_1}W(x,y)h(x)h(y). \end{align} \end{lemma} \begin{proof} We first prove Equation \eqref{eq: averaged vector image}. Note that for a.e. $x\in U$, \begin{align*} (A_{\tilde{W}}\tilde{h})(x) &= \int_{y\in [0,1]} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in U} \tilde{W}(x,y)\tilde{h}(y) + \int_{y\in [0,1]\setminus U} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in [0,1]\setminus U} W(x_0,y)h(y) \\ &= \int_{y\in [0,1]} W(x_0,y)h(y) - \int_{y\in U} W(x_0,y)h(y) \\ &= \lambda h(x_0)\\ &= \lambda \tilde{h}(x), \end{align*} as desired; in the second-to-last equality we used that $x_0$ is typical and that $W(x_0,y) = 0$ for a.e. $y\in U$, since $x_0\in U_2$ and $W = 0$ a.e. on $(U\times U)\setminus (U_1\times U_1)$. Now note that for a.e. $x\in [0,1]\setminus U$, \begin{align*} (A_{\tilde{W}}\tilde{h})(x) &= \int_{y\in [0,1]} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in U} \tilde{W}(x,y)\tilde{h}(y) + \int_{y\in [0,1]\setminus U} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in U} W(x_0,x)h(x_0) + \int_{y\in [0,1]\setminus U} W(x,y)h(y) \\ &= m(U)\cdot W(x_0,x)h(x_0) + \int_{y\in [0,1]} W(x,y)h(y) - \int_{y\in U} W(x,y)h(y) \\ &= \lambda h(x) + m(U)\cdot W(x_0,x)h(x_0) - \int_{y\in U} W(x,y)h(y) . \end{align*} So again, the claim holds, and this completes the proof of Equation \eqref{eq: averaged vector image}. Now we prove Equation \eqref{eq: averaged vector product}.
Indeed by Equation \eqref{eq: averaged vector image}, \begin{align*} \langle (A_{\tilde{W}}\tilde{h}), \tilde{h}\rangle &= \int_{x\in [0,1]} (A_{\tilde{W}}\tilde{h})(x) \tilde{h}(x) \\ &= \int_{x\in [0,1]} \lambda\tilde{h}(x)^2 + \int_{x\in [0,1]\setminus U} \left( m(U)\cdot W(x_0,x)h(x_0) - \int_{y\in U}W(x,y)h(y) \right) \cdot h(x) \\ &= \lambda + m(U)\cdot h(x_0) \left( \int_{x\in [0,1]} W(x_0,x)h(x) - \int_{x\in U} W(x_0,x)h(x) \right) \\ &\quad - \int_{y\in U} \left( \int_{x\in [0,1]} W(x,y)h(x) -\int_{x\in U} W(x,y)h(x) \right) \cdot h(y) \\ &= \lambda + m(U)\cdot h(x_0)\left( \lambda h(x_0) - \int_{y\in U} 0 \right) - \int_{y\in U} \left( \lambda h(y)^2 -\int_{x\in U} W(x,y)h(x)h(y) \right) \\ &= \lambda + \lambda m(U)\cdot h(x_0)^2 - \lambda \int_{y\in U} h(y)^2 + \int_{(x,y)\in U\times U} W(x,y)h(x)h(y) \\ &= \lambda + \int_{(x,y)\in U_1\times U_1} W(x,y)h(x)h(y), \end{align*} and this completes the proof of the desired claims. \end{proof} We have the following useful corollary. \begin{corollary}\label{cor: averaging reduction} Suppose $\text{spr}(W) = \text{spr}(\hat{\mathcal{W}})$ with maximum and minimum eigenvalues $\mu,\nu$ corresponding respectively to eigenfunctions $f,g$. Moreover, suppose that there exist disjoint subsets $A,B\subseteq [0,1]$ and $x_0\in B$ so that the conditions of Lemma \ref{lem: eigenfunction averaging} are met for $W$ with $\lambda = \mu$, $h = f$, $U_1=A$, and $U_2=B$. Then, \begin{enumerate}[(i)] \item \label{item: U independent} $W(x,y) = 0$ for a.e. $(x,y)\in U^2$, and \item \label{item: f,g constant on U} $f$ is constant on $U$. \end{enumerate} \end{corollary} \begin{proof} Without loss of generality, we assume that $\|f\|_2 = \|g\|_2 = 1$. Write $\tilde{W}$ for the graphon and $\tilde{f},\tilde{g}$ for the corresponding functions produced by Lemma \ref{lem: eigenfunction averaging}. By Proposition \ref{prop: PF eigenfunction}, we may assume without loss of generality that $f > 0$ a.e. on $[0,1]$. We first prove Item \eqref{item: U independent}. Note that \begin{align} \text{spr}(\tilde{W}) &\geq \int_{(x,y)\in [0,1]^2} \tilde{W}(x,y)(\tilde{f}(x)\tilde{f}(y)-\tilde{g}(x)\tilde{g}(y)) \nonumber\\ &= (\mu-\nu) + \int_{(x,y)\in A\times A}W(x,y)(f(x)f(y)-g(x)g(y)) \nonumber\\ &= \text{spr}(W) + \int_{(x,y)\in A\times A}W(x,y)(f(x)f(y)-g(x)g(y)). \label{eq: sandwich spread} \end{align} Since $\text{spr}(W)\geq \text{spr}(\tilde{W})$, the integral in \eqref{eq: sandwich spread} is at most $0$. On the other hand, by Lemma \ref{lem: K = indicator function}.\eqref{item: |diff| > 0}, $f(x)f(y)-g(x)g(y) > 0$ for a.e. $(x,y)\in A\times A$ such that $W(x,y) \neq 0$, so $W = 0$ a.e. on $A\times A$. Combined with the hypothesis that $W = 0$ a.e. on $(U\times U)\setminus (A\times A)$, Item \eqref{item: U independent} follows. \\ For Item \eqref{item: f,g constant on U}, we first note that $\tilde{f}$ is a $\mu$-eigenfunction for $\tilde{W}$. Indeed, if not, then the inequality in \eqref{eq: sandwich spread} holds strictly, a contradiction to the fact that $\text{spr}(W)\geq \text{spr}(\tilde{W})$. Again by Lemma \ref{lem: eigenfunction averaging}, \begin{align*} m(U)\cdot W(x_0,x)f(x_0) = \int_{y\in U}W(x,y)f(y) \end{align*} for a.e. $x\in [0,1]\setminus U$. Let $S_1 := \{x\in [0,1]\setminus U : W(x_0,x) = 1\}$ and $S_0 := [0,1]\setminus (U\cup S_1)$. We claim that $m(S_1) > 0$. Suppose first that this is indeed the case. By Lemma \ref{lem: eigenfunction averaging} and by Cauchy-Schwarz, for a.e. $x\in S_1$, \begin{align*} m(U)\cdot f(x_0) &= m(U)\cdot W(x_0,x)f(x_0) \\ &= \int_{y\in U}W(x,y)f(y) \\ &\leq \int_{y\in U} f(y) \\ &\leq m(U)\cdot f(x_0), \end{align*} and by sandwiching, $W(x,y) = 1$ and $f(y) = f(x_0)$ for a.e. $y\in U$.
So if $m(S_1) > 0$, then $f(y) = f(x_0)$ for a.e. $y\in U$, as desired. \\ Now assume otherwise, that $m(S_1) = 0$. Then for a.e. $x\in [0,1]\setminus U$, $W(x_0,x) = 0$ and \begin{align*} 0 &= m(U)\cdot W(x_0,x)f(x_0)= \int_{y\in U} W(x,y)f(y) \end{align*} and since $f>0$ a.e. on $[0,1]$, it follows that $W(x,y) = 0$ for a.e. $y\in U$. So altogether, $W(x,y) = 0$ for a.e. $(x,y)\in ([0,1]\setminus U)\times U$. Together with Item \eqref{item: U independent}, this shows $W = 0$ a.e. on $U\times [0,1]$, so $W$ is disconnected, a contradiction to Fact \ref{prop: disconnected spectrum}. So $m(S_1) > 0$, and the desired claim holds. \end{proof} \subsection{Proof of Theorem \ref{thm: reduction to stepgraphon}}\label{sub-sec: stepgraphon proof} \begin{proof} For convenience, we write $\mu := \mu(W)$ and $\nu := \nu(W)$ and let $f,g$ denote the corresponding unit eigenfunctions. Moreover, by Proposition \ref{prop: PF eigenfunction}, we may assume without loss of generality that $f>0$. \\ First, we show without loss of generality that $f,g$ are monotone on the sets $P := \{x\in [0,1] : g(x)\geq 0\}$ and $N := [0,1]\setminus P$. Indeed, we define a total ordering $\preccurlyeq$ on $[0,1]$ as follows. For all $x$ and $y$, we let $x \preccurlyeq y$ if: \begin{enumerate}[(i)] \item $g(x)\geq 0$ and $g(y)<0$, or \item $g(x)$ and $g(y)$ are either both nonnegative or both negative, and $f(x) > f(y)$, or \item $g(x)$ and $g(y)$ are either both nonnegative or both negative, $f(x) = f(y)$, and $x\leq y$. \end{enumerate} By inspection, the function $\varphi:[0,1]\to[0,1]$ defined by \begin{align*} \varphi(x) := m(\{y\in [0,1] : y\preccurlyeq x\}) \end{align*} is a weak isomorphism between $W$ and its entrywise composition with $\varphi$. By invariance of $\text{spr}(\cdot)$ under weak isomorphism, we make the above replacement and write $f,g$ for the replacement eigenfunctions. That is, we are assuming that our graphon is relabeled so that $[0,1]$ respects $\preccurlyeq$. \\ As above, let $P := \{x\in [0,1] : g(x) \geq 0\}$ and $N:= [0,1]\setminus P$. By Lemma \ref{lem: local eigenfunction equation}, $f$ and $-g$ are monotone nonincreasing on $P$. Additionally, $f$ and $g$ are monotone nonincreasing on $N$. Without loss of generality, we may assume that $W$ is of the form from Lemma \ref{lem: K = indicator function}. Now we let $S := \{x\in [0,1] : f(x) < |g(x)|\}$ and $C:=[0,1]\setminus S$. By Lemma \ref{lem: K = indicator function}, we have that $W(x,y)=1$ for almost every $x,y\in C$ and $W(x,y)=0$ for almost every $x,y\in S \cap P$ or $x,y\in S\cap N$. We have used the notation $C$ and $S$ because the analogous sets in the graph setting form a clique and a stable set, respectively. We first prove the following claim. \\ \\ \textbf{Claim A:} Except on a set of measure $0$, $f$ takes on at most $2$ values on $P\cap S$, and at most $2$ values on $N\cap S$. \\ We first prove this claim for $f$ on $P\cap S$. Let $D$ be the set of all discontinuities of $f$ on the interior of the interval $P\cap S$. Clearly $D$ consists only of jump-discontinuities. By the Darboux-Froda Theorem, $D$ is at most countable and moreover, $(P\cap S)\setminus D$ is a union of at most countably many disjoint intervals $\mathcal{I}$. Moreover, $f$ is continuous on the interior of each $I\in\mathcal{I}$. \\ We now show that $f$ is constant on the interior of each $I\in \mathcal{I}$. Indeed, let $I\in \mathcal{I}$. Since $f$ is a $\mu$-eigenfunction for $W$, \begin{align*} \mu f(x) = \int_{y\in [0,1]}W(x,y)f(y) \end{align*} for a.e. $x\in [0,1]$ and by continuity of $f$ on the interior of $I$, this equation holds everywhere on the interior of $I$.
Additionally, since $f$ is continuous on the interior of $I$, by the Mean Value Theorem, there exists some $x_0$ in the interior of $I$ so that \begin{align*} f(x_0)^2 = \dfrac{1}{m(I)}\int_{x\in I}f(x)^2. \end{align*} By Corollary \ref{cor: averaging reduction}, applied with $U = I$, $f$ is constant on the interior of $I$, as desired. \\ If $|\mathcal{I}|\leq 2$, the desired claim holds, so we may assume otherwise. Then there exist distinct $I_1,I_2,I_3\in \mathcal{I}$. Moreover, $f$ equals constants $f_1,f_2,f_3$ on the interiors of $I_1,I_2,$ and $I_3$, respectively. Additionally, since $I_1,I_2,$ and $I_3$ are separated from each other by at least one jump discontinuity, we may assume without loss of generality that $f_1 < f_2 < f_3$. It follows that there exists a measurable subset $U\subseteq I_1\cup I_2\cup I_3$ of positive measure, intersecting both $I_1$ and $I_3$ in positive measure, so that \begin{align*} f_2^2 &= \dfrac{1}{m(U)}\int_{x\in U}f(x)^2. \end{align*} By Corollary \ref{cor: averaging reduction}, $f$ is constant on $U$, a contradiction. So Claim A holds on $P\cap S$. For Claim A on $N\cap S$, we may repeat this argument with $P$ and $N$ interchanged, and $g$ and $-g$ interchanged. \\ Now we show the following claim. \\ \\ \textbf{Claim B:} For a.e. $(x,y) \in (P\times P)\cup (N\times N)$ such that $f(x)\geq f(y)$, we have that for a.e. $z\in [0,1]$, $W(x,z) = 0$ implies that $W(y,z) = 0$. \\ We first prove the claim for a.e. $(x,y)\in P\times P$. Suppose $W(x,z) = 0$. By Lemma \ref{lem: K = indicator function}, in this case $z\in P$. Then for a.e. such $x,y$, by Lemma \ref{lem: local eigenfunction equation}, $g(x)\leq g(y)$. By Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, $W(x,z) = 0$ implies that $f(x)f(z) < g(x)g(z)$. Since $f(x)\geq f(y)$ and $g(x)\leq g(y)$, $f(y)f(z) < g(y)g(z)$. Again by Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, $W(y,z) = 0$ for a.e. such $x,y,z$, as desired. So the desired claim holds for a.e. $(x,y)\in P\times P$ such that $f(x)\geq f(y)$. We may repeat the argument for a.e. $(x,y)\in N\times N$ to arrive at the same conclusion. \\ \\ The next claim follows directly from Lemma \ref{lem: local eigenfunction equation}. \\ \\ \textbf{Claim C:} For a.e. $x\in [0,1]$, $x\in C$ if and only if $f(x) \geq 1$, if and only if $|g(x)| \leq 1$. \\ \\ Finally, we show the following claim. \\ \\ \textbf{Claim D:} Except on a set of measure $0$, $f$ takes on at most $3$ values on $P\cap C$, and at most $3$ values on $N\cap C$. \\ For a proof, we first write $P\cap S = S_1 \cup S_2$ so that $S_1,S_2$ are disjoint and $f$ equals some constant $f_1$ a.e. on $S_1$ and $f$ equals some constant $f_2$ a.e. on $S_2$. By Lemma \ref{lem: local eigenfunction equation}, $g$ equals some constant $g_1$ a.e. on $S_1$ and $g$ equals some constant $g_2$ a.e. on $S_2$. By definition of $P$, $g_1,g_2\geq 0$. Now suppose $x\in P\cap C$ so that \begin{align*} \mu f(x) &= \int_{y\in [0,1]}W(x,y)f(y). \end{align*} Then by Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, \begin{align*} \mu f(x) &= \int_{y\in (P\cap C)\cup N} f(y) + \int_{y\in S_1}W(x,y)f(y) + \int_{y\in S_2}W(x,y)f(y) . \end{align*} By Claim B, the neighborhood of $x$ within $S_1\cup S_2$ is, up to measure zero, one of $\emptyset$, the block among $S_1,S_2$ with the larger $f$-value, or all of $S_1\cup S_2$; hence this expression for $\mu f(x)$ may take on at most $3$ values. So the desired claim holds on $P\cap C$. Repeating the same argument, the claim also holds on $N\cap C$. \\ We are nearly done with the proof of the theorem, as we have now reduced $W$ to a $10\times 10$ stepgraphon. To complete the proof, we show that we may reduce to at most $7\times 7$.
We now partition $P\cap C, P\cap S, N\cap C$, and $N\cap S$ so that $f$ and $g$ are constant a.e. on each part as: \begin{itemize} \item $P\cap C = U_1\cup U_2\cup U_3$, \item $P\cap S = U_4\cup U_5$, \item $N\cap C = U_6\cup U_7\cup U_8$, and \item $N\cap S = U_9\cup U_{10}$. \end{itemize} Then by Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, there exists a matrix $(m_{ij})_{i,j\in [10]}$ so that for all $(i,j)\in [10]\times [10]$, \begin{itemize} \item $m_{ij}\in \{0,1\}$, \item $W(x,y) = m_{ij}$ for a.e. $(x,y)\in U_i\times U_j$, \item $m_{ij} = 1$ if and only if $f_if_j > g_ig_j$, and \item $m_{ij} = 0$ if and only if $f_if_j < g_ig_j$. \end{itemize} Additionally, we set $\alpha_i = m(U_i)$ and also denote by $f_i$ and $g_i$ the constant values of $f,g$ on each $U_i$, respectively, for each $i= 1, \ldots, 10$. Furthermore, by Claim C and Lemma \ref{lem: K = indicator function} we assume without loss of generality that $f_1 > f_2 > f_3 \geq 1 > f_4 > f_5$ and that $f_6 > f_7 > f_8 \geq 1 > f_9 > f_{10}$. Also by Lemma \ref{lem: local eigenfunction equation}, $0 \leq g_1 < g_2 < g_3 \leq 1 < g_4 < g_5$ and $0 \leq -g_6 < -g_7 < -g_8 \leq 1 < -g_9 < -g_{10}$. Also, by Claim B, no two columns of $M$ are identical within the index sets $\{1,2,3,4,5\}$ and $\{6,7,8,9,10\}$. Shading $m_{ij} = 1$ black and $m_{ij} = 0$ white, we let \begin{align*} M = \begin{tabular}{||ccc||cc|||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\\hline\hline \cellcolor{black} & \cellcolor{black} & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\\hline\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & \\\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\\hline\hline \end{tabular} \quad . \end{align*} Therefore, $W$ is a stepgraphon with values determined by $M$ and the size of each block determined by the $\alpha_i$. \\ We claim that $0\in \{\alpha_3, \alpha_4, \alpha_5\}$ and $0\in\{\alpha_8,\alpha_9,\alpha_{10}\}$.
For the first claim, assume to the contrary that all of $\alpha_3, \alpha_4, \alpha_5$ are positive and note that there exists some $x_4\in U_4$ such that \begin{align*} \mu f_4 = \mu f(x_4) = \int_{y\in [0,1]}W(x_4,y)f(y). \end{align*} Moreover, since $f_5 < f_4 < f_3$, there exist measurable subsets $U_3'\subseteq U_3$ and $U_5'\subseteq U_5$ of positive measure so that, with $U := U_3'\cup U_4\cup U_5'$, \begin{align*} f(x_4)^2 = \dfrac{1}{m(U)}\int_{y\in U}f(y)^2. \end{align*} Note that by Lemma \ref{lem: local eigenfunction equation}, we may assume that $x_4$ is average on $U$ with respect to $g$ as well. The conditions of Corollary \ref{cor: averaging reduction} are met for $W$ with $A=U_3'$, $B=U_4\cup U_5'$, and $x_0 = x_4$. Since $\int_{A\times A}W(x,y)f(x)f(y) > 0$, this contradicts Item \eqref{item: U independent} of the corollary, so the desired claim holds. The same argument may be used to prove that $0\in \{\alpha_8,\alpha_9,\alpha_{10}\}$. \\ We now form the principal submatrix $M'$ by removing the $i$-th row and column from $M$ if and only if $\alpha_i = 0$. Since the removed blocks have measure zero, $W$ is a stepgraphon with values determined by $M'$. Let $M_P'$ denote the principal submatrix of $M'$ corresponding to the indices $i\in\{1,\dots,5\}$ such that $\alpha_i>0$. That is, $M_P'$ corresponds to the upper left hand block of $M$. We use red to indicate rows and columns present in $M$ but not $M_P'$. When forming the submatrix $M_P'$, we borrow the internal subdivisions which are present in the definition of $M$ above to denote where $f\geq 1$ and where $f<1$ (that is, between $C \cap P$ and $S \cap P$). Note that this is not the same as what the internal divisions denote in the statement of the theorem. Since $0\in\{\alpha_3,\alpha_4,\alpha_5\}$, it follows that $M_P'$ is a principal submatrix of \begin{align*} \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & & \\ \cellcolor{black} & & \cellcolor{red} & & \\ \hline\hline \end{tabular} \quad , \quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \\\hline\hline \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & & & \cellcolor{red} & \\ \hline\hline \end{tabular} \quad, \text{ or }\quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & \cellcolor{red}\\\hline\hline \cellcolor{black} & \cellcolor{black} & & & \cellcolor{red} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \hline\hline \end{tabular} \quad . \end{align*} In the second case, columns $2$ and $3$ are identical in $M'$, and in the third case, columns $1$ and $2$ are identical in $M'$.
So without loss of generality, $M_P'$ is a principal submatrix of one of \begin{align*} \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & & \\ \cellcolor{black} & & \cellcolor{red} & & \\ \hline\hline \end{tabular} \quad , \quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{red} & \\\hline\hline \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{red} & & \cellcolor{red} & \\ \hline\hline \end{tabular} \quad, \text{ or }\quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & & \cellcolor{red}\\\hline\hline \cellcolor{black} & \cellcolor{red} & & & \cellcolor{red} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \hline\hline \end{tabular} \quad . \end{align*} In each case, $M_P'$ is a principal submatrix of \begin{align*} \begin{tabular}{||cc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \\\hline\hline \cellcolor{black} & \cellcolor{black} & & \\ \cellcolor{black} & & & \\ \hline\hline \end{tabular} \quad . \end{align*} An identical argument shows that the principal submatrix of $M'$ on the indices $i\in\{6,\dots,10\}$ such that $\alpha_i>0$ is a principal submatrix of \begin{align*} \begin{tabular}{||cc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \\\hline\hline \cellcolor{black} & \cellcolor{black} & & \\ \cellcolor{black} & & & \\ \hline\hline \end{tabular} \quad . \end{align*} Finally, we note that $0\in \{\alpha_1,\alpha_6\}$. Indeed, otherwise columns $1$ and $6$ of $M'$ are identical, which is impossible: the corresponding blocks would have identical neighborhoods, so the eigenfunction equation for $\nu$ would force $g_1 = g_6$, contradicting $g_1 \geq 0 > g_6$. So without loss of generality, row and column $6$ were also removed from $M$ to form $M'$. This completes the proof of the theorem. \end{proof} \section{Spread maximum graphons}\label{sec:spread_graphon} In this section, we complete the proof of the graphon version of the spread conjecture of Gregory, Hershkowitz, and Kirkland from \cite{gregory2001spread}. In particular, we prove the following theorem. For convenience and completeness, we state the result in full detail. \begin{theorem}\label{thm: spread maximum graphon} If $W$ is a graphon that maximizes spread, then $W$ may be represented as follows. For all $(x,y)\in [0,1]^2$, \begin{align*} W(x,y) =\left\{\begin{array}{rl} 0, &(x,y)\in [2/3, 1]^2\\ 1, &\text{otherwise} \end{array}\right. .
\end{align*} Furthermore, \begin{align*} \mu &= \dfrac{1+\sqrt{3}}{3} \quad \text{ and } \quad \nu = \dfrac{1-\sqrt{3}}{3} \end{align*} are the maximum and minimum eigenvalues of $W$, respectively, and if $f,g$ are unit eigenfunctions associated to $\mu,\nu$, respectively, then, up to a change in sign, they may be written as follows. For every $x\in [0,1]$, \begin{align*} f(x) &= \dfrac{1}{2\sqrt{3+\sqrt{3}}} \cdot \left\{\begin{array}{rl} 3+\sqrt{3}, & x\in [0,2/3] \\ 2\cdot\sqrt{3}, & \text{otherwise} \end{array}\right. , \text{ and }\\ g(x) &= \dfrac{1}{2\sqrt{3-\sqrt{3}}} \cdot \left\{\begin{array}{rl} 3-\sqrt{3}, & x\in [0,2/3] \\ -2\cdot\sqrt{3}, & \text{otherwise} \end{array}\right. . \end{align*} \end{theorem} To help outline our proof of Theorem \ref{thm: spread maximum graphon}, let the spread-extremal graphon have block sizes $\alpha_1,\ldots, \alpha_7$. Note that the spread of the graphon is the same as the spread of the matrix $M^*$ in Figure \ref{fig: 7-vertex loop graph}, and so we will optimize the spread of $M^*$ over choices of $\alpha_1,\dots, \alpha_7$. Let $G^*$ be the unweighted graph (with loops) corresponding to the matrix. We proceed in the following steps. \begin{enumerate}[1. ] \item In Appendix \ref{appendix 17 cases}, we reduce the proof of Theorem \ref{thm: spread maximum graphon} to $17$ cases, each corresponding to a subset $S$ of $V(G^*)$. For each such $S$ we define an optimization problem $\SPR_S$, the solution to which gives us an upper bound on the spread of any graphon in the case corresponding to $S$. \item In Section \ref{sub-sec: numerics}, we appeal to interval arithmetic to translate these optimization problems into algorithms. Based on the output of the $17$ programs we wrote, we eliminate $15$ of the $17$ cases. To address the multitude of formulas used throughout, we relocate their statements and proofs to Appendix \ref{sub-sec: formulas}. \item Finally, in Section \ref{sub-sec: cases 4|57 and 1|7}, we complete the proof of Theorem \ref{thm: spread maximum graphon} by analyzing the $2$ remaining cases. Here, we apply Vi\`ete's Formula for roots of cubic equations and make a direct argument. \end{enumerate}
\begin{figure}[ht] \centering \begin{minipage}{0.55\textwidth} \[M^* := D_\alpha^{1/2}\left[\begin{array}{cccc|ccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 & 1 \\\hline 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 \end{array}\right]D_\alpha^{1/2}\]\end{minipage}\quad\begin{minipage}{0.4\textwidth} \scalebox{.7}[.7]{ \input{graphics/7-vertex-graph-all-edges} } \end{minipage} \caption{The matrix $M^*$ with corresponding graph $G^*$, where $D_\alpha$ is the diagonal matrix with entries $\alpha_1, \ldots, \alpha_7$.} \label{fig: 7-vertex loop graph} \end{figure} For concreteness, we define $G^*$ on the vertex set $\{1,\dots,7\}$. Explicitly, the neighborhoods $N_1,\dots, N_7$ of $1,\dots, 7$ are defined as: \begin{align*} \begin{array}{ll} N_1 := \{1,2,3,4,5,6,7\} & N_2 := \{1,2,3,5,6,7\} \\ N_3 := \{1,2,5,6,7\} & N_4 := \{1,5,6,7\} \\ N_5 := \{1,2,3,4,5,6\} & N_6 := \{1,2,3,4,5\} \\ N_7 := \{1,2,3,4\} \end{array} . \end{align*} More compactly, we may note that \begin{align*} \begin{array}{llll} N_1 = \{1,\dots, 7\} & N_2 = N_1\setminus\{4\} & N_3 = N_2\setminus\{3\} & N_4 = N_3\setminus\{2\} \\ & N_5 = N_1\setminus\{7\} & N_6 = N_5\setminus\{6\} & N_7 = N_6\setminus\{5\} \end{array} . \end{align*} \subsection{Stepgraphon case analysis}\label{sub-sec: cases} Let $W$ be a graphon maximizing spread. By Theorem \ref{thm: reduction to stepgraphon}, we may assume that $W$ is a $7\times 7$ stepgraphon corresponding to $G^\ast$. We will break into cases depending on which of the $7$ weights $\alpha_1, \ldots, \alpha_7$ are zero and which are positive. For some of these combinations the corresponding graphons are isomorphic, and in this section we will outline how one can show that we need only consider $17$ cases rather than $2^7$. We will present each case with the set of indices which have strictly positive weight. Additionally, we will use vertical bars to partition the set of integers according to its intersection with the sets $\{1\}$, $\{2,3,4\}$ and $\{5,6,7\}$. Recall that vertices in block $1$ are dominating vertices and vertices in blocks $5$, $6$, and $7$ have negative entries in the eigenfunction corresponding to $\nu$. For example, we use $4|57$ to refer to the case that $\alpha_4, \alpha_5, \alpha_7$ are all positive and $\alpha_1 = \alpha_2 = \alpha_3 = \alpha_6 = 0$; see Figure \ref{457 fig}. \begin{figure}[ht] \begin{center} \begin{minipage}{0.45\textwidth}\centering \input{graphics/457-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/457-graph}}\end{minipage} \end{center} \caption{The family of graphons and the graph corresponding to case $4|57$} \label{457 fig} \end{figure} To give an upper bound on the spread of any graphon corresponding to case $4|57$, we solve a constrained optimization problem. Let $f_4, f_5, f_7$ and $g_4, g_5, g_7$ denote the eigenfunction entries for unit eigenfunctions $f$ and $g$ of the graphon.
Then we maximize $\mu - \nu$ subject to \begin{align*} \alpha_4 + \alpha_5 + \alpha_7 &=1\\ \alpha_4f_4^2 + \alpha_5f_5^2 + \alpha_7 f_7^2 &=1 \\ \alpha_4g_4^2 + \alpha_5g_5^2 + \alpha_7 g_7^2 &= 1\\ \mu f_i^2 - \nu g_i^2 &= \mu-\nu \quad \text{for all } i \in \{4,5,7\}\\ \mu f_4 = \alpha_5 f_5 + \alpha_7 f_7, \quad \mu f_5 &= \alpha_4 f_4 + \alpha_5 f_5, \quad \mu f_7 = \alpha_4 f_4\\ \nu g_4 = \alpha_5 g_5 + \alpha_7 g_7, \quad \nu g_5 &= \alpha_4 g_4 + \alpha_5 g_5, \quad \nu g_7 = \alpha_4 g_4 . \end{align*} The first three constraints say that the weights sum to $1$ and that $f$ and $g$ are unit eigenfunctions. The fourth constraint is from Lemma \ref{lem: local eigenfunction equation}. The final two lines of constraints say that $f$ and $g$ are eigenfunctions for $\mu$ and $\nu$, respectively. Since these equations must be satisfied for any spread-extremal graphon, the solution to this optimization problem gives an upper bound on the spread of any spread-extremal graphon corresponding to case $4|57$; a numerical illustration of this particular case is given at the end of this discussion. For each case we formulate a similar optimization problem in Appendix \ref{appendix 17 cases}. First, if two distinct blocks of vertices have the same neighborhood, then without loss of generality we may assume that only one of them has positive weight. For example, see Figure \ref{123567 fig}: in case $123|567$, blocks $1$ and $2$ have the same neighborhood, and hence without loss of generality we may assume that only block $1$ has positive weight. Furthermore, in this case the resulting graphon could be considered as case $13|567$ or equivalently as case $14|567$; the graphons corresponding to these cases are isomorphic. Therefore cases $123|567$, $13|567$, and $14|567$ reduce to considering only case $14|567$. \begin{figure}[ht] \begin{center} \begin{minipage}{0.3\textwidth}\centering \input{graphics/123567-stepgraphon} \end{minipage}\quad \begin{minipage}{0.3\textwidth}\centering \input{graphics/13567-stepgraphon}\end{minipage}\quad\begin{minipage}{0.3\textwidth}\centering \input{graphics/14567-stepgraphon} \end{minipage} \newline\vspace{10pt} \begin{minipage}{0.3\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/123567-graph}}\end{minipage}\quad\begin{minipage}{0.3\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/13567-graph}}\end{minipage}\quad\begin{minipage}{0.3\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/14567-graph}}\end{minipage} \end{center} \caption{Redundancy, then renaming: we can assume $\alpha_2=0$ in the family of graphons corresponding to $123|567$, which produces families of graphons corresponding to both cases $13|567$ and $14|567$.} \label{123567 fig} \end{figure} Additionally, if there is no dominating vertex, then some pairs of cases may correspond to isomorphic graphons, and the optimization problems are equivalent up to flipping the sign of the eigenvector corresponding to $\nu$. For example, see Figure \ref{23457 24567}, in which cases $23|457$ and $24|567$ reduce to considering only a single case. However, because of how we choose to order the eigenfunction entries when setting up the constraints of the optimization problems, there are some examples of cases corresponding to isomorphic graphons that we solve as separate optimization problems. For example, the graphons corresponding to cases $1|24|7$ and $1|4|57$ are isomorphic, but we will consider them as separate cases; see Figure \ref{1247 1457}.
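Although it is no substitute for the rigorous interval-arithmetic bounds described below, a small numerical experiment illustrates this optimization problem. The sketch below (in which the block ordering $(4,5,7)$, the grid resolution, and all identifiers are our own illustrative choices) encodes the principal submatrix of $M^*$ on blocks $4,5,7$ and scans the weight simplex for the largest spread of $D_\alpha^{1/2}MD_\alpha^{1/2}$; consistent with Lemma \ref{lem: SPR457} below, the maximum is approached as $\alpha_7\to 0$.
\begin{verbatim}
# Illustrative grid search for case 4|57 (not part of the proof).
# Blocks are ordered (4, 5, 7); the 0-1 pattern is read off of M^*:
# block 5 is a clique-block, and block 4 is joined to blocks 5 and 7.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)

def spread(alpha):
    d = np.sqrt(alpha)
    B = d[:, None] * A * d[None, :]   # D_alpha^{1/2} A D_alpha^{1/2}
    w = np.linalg.eigvalsh(B)         # B is symmetric
    return w[-1] - w[0]

steps, best = 200, (0.0, None)
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        a = np.array([i, j, steps - i - j], dtype=float) / steps
        s = spread(a)
        if s > best[0]:
            best = (s, tuple(a))
print(best)  # approx (1.1547, (1/3, 2/3, 0)); note 2/sqrt(3) = 1.1547...
\end{verbatim}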
\begin{figure}[ht] \begin{center} \begin{minipage}{0.45\textwidth}\centering \input{graphics/23457-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/23457-graph}}\end{minipage} \newline \vspace{20pt} \begin{minipage}{0.45\textwidth}\centering \input{graphics/24567-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/24567-graph}}\end{minipage} \end{center} \caption{Changing the sign of $g$: the optimization problems in these cases are equivalent.} \label{23457 24567} \end{figure} \begin{figure} \begin{center} \begin{minipage}{0.45\textwidth}\centering \input{graphics/1247-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/1247-graph}}\end{minipage} \newline \vspace{20pt} \begin{minipage}{0.45\textwidth}\centering \input{graphics/1457-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/1457-graph}}\end{minipage} \end{center} \caption{The cases $1|24|7$ and $1|4|57$ correspond to the same family of graphons, but we consider the optimization problems separately, due to our prescribed ordering of the vertices.} \label{1247 1457} \end{figure} Repeated applications of these three principles show that there are only $17$ distinct cases that we must consider. The details are straightforward to verify; see Lemma \ref{lem: 19 cases}. \begin{figure}[ht] \centering \scalebox{1.0}{ \input{graphics/17-cases} } \caption{ The set $\mathcal{S}_{17}$, as a poset ordered by inclusion. Each element is a subset of $V(G^*) = \{1,\dots,7\}$, written without braces and commas. As noted in the proof of Lemma \ref{lem: 19 cases}, the sets $\{1\}$, $\{2,3,4\}$, and $\{5,6,7\}$ have different behavior in the problems $\SPR_S$. For this reason, we use vertical bars to separate each $S\in\mathcal{S}_{17}$ according to the corresponding partition. } \label{fig: 17 cases} \end{figure} The distinct cases that we must consider are the following, summarized in Figure \ref{fig: 17 cases}. \begin{align*} \mathcal{S}_{17} &:= \left\{\begin{array}{r} 1|234|567, 1|24|567, 1|234|57, 1|4|567, 1|24|57, 1|234|7, 234|567, \\ 24|567, 4|567, 24|57, 1|567, 1|4|57, 1|24|7, 1|57, 4|57, 1|4|7, 1|7 \end{array} \right\} \end{align*} \subsection{Interval arithmetic}\label{sub-sec: numerics} Interval arithmetic is a computational technique which bounds the errors that accumulate during computation. For convenience, let $\mathbb{R}^* := [-\infty, +\infty]$ be the extended real line. To make ordinary floating point arithmetic rigorous, we replace extended real numbers with unions of intervals which are guaranteed to contain them. Moreover, we extend the basic arithmetic operations $+, -, \times, \div$, and $\sqrt{}$ to operations on unions of intervals. This technique has real-world applications in the hard sciences, but has also been used in computer-assisted proofs. For two famous examples, we refer the interested reader to \cite{2002HalesKepler} for Hales' proof of the Kepler Conjecture on optimal sphere-packing in $\mathbb{R}^3$, and to \cite{2002WarwickInterval} for Warwick Tucker's solution of Smale's $14$th problem, showing that the Lorenz attractor is a strange attractor. As stated before, we consider extensions of the binary operations $+, -, \times,$ and $\div$ as well as the unary operation $\sqrt{}$ defined on $\mathbb{R}$ to operations on unions of intervals of extended real numbers.
For example, if $[a,b], [c,d]\subseteq \mathbb{R}$, then we may use the following extensions of $+, -, $ and $\times$: \begin{align*} [a,b] + [c,d] &= [a+c,b+d], \\ [a,b] - [c,d] &= [a-d,b-c], \text{ and}\\ [a,b] \times [c,d] &= \left[ \min\{ac, ad, b c, b d\}, \max\{a c, a d, b c, b d\} \right]. \end{align*} For $\div$, we must address the cases $0\in [c,d]$ and $0\notin [c,d]$. Here, we take the extension \begin{align*} [a,b] \div [c,d] &= [a,b] \times \left( 1 \div [c,d] \right), \end{align*} where \begin{align*} 1 \div [c,d] &= \left\{\begin{array}{rl} \left[ \min\{ c^{-1}, d^{-1} \}, \max\{ c^{-1}, d^{-1} \} \right], & 0\notin [c,d] \\ \left[ d^{-1}, +\infty \right], & c = 0 \\ \left[ -\infty, c^{-1} \right], & d = 0 \\ \left[ -\infty, c^{-1} \right] \cup \left[ d^{-1},+\infty \right], & c < 0 < d \end{array}\right. . \end{align*} Additionally, we may let \begin{align*} \sqrt{[a,b]} &= \left\{\begin{array}{rl} \emptyset, & b < 0\\ \left[ \sqrt{\max\left\{ 0, a \right\}} , \sqrt{b} \right], & \text{otherwise} \end{array}\right. . \end{align*} When endpoints of $[a,b]$ and $[c,d]$ include $-\infty$ or $+\infty$, the definitions above must be modified slightly in a natural way. We use interval arithmetic to prove that any solution to $15$ of the $17$ constrained optimization problems $\SPR_S$ stated in Lemma \ref{lem: 19 cases} attains a value strictly less than $2/\sqrt{3}$, the maximum graphon spread claimed in Theorem \ref{thm: spread maximum graphon}. The constraints in each $\SPR_S$ allow us to derive equations for the variables $(\alpha_i,f_i,g_i)_{i\in S}$ in terms of each other, and $\mu$ and $\nu$. For the reader's convenience, we relocate these formulas and their derivations to Appendix \ref{sub-sec: formulas}. In the programs corresponding to each set $S\in\mathcal{S}_{17}$, we find two indices $i\in S\cap\{1,2,3,4\}$ and $j\in S\cap \{5,6,7\}$ such that for all $k\in S$, $\alpha_k,f_k,$ and $g_k$ may be calculated, step-by-step, from $\alpha_i, \alpha_j, \mu,$ and $\nu$. See Table \ref{tab: i j program search spaces} for each set $S\in\mathcal{S}_{17}$, organized by the chosen values of $i$ and $j$. \begin{table}[ht] \centering \begin{tabular}{c||r|r|r|r} & $1$ & $2$ & $3$ & $4$ \\ \hline\hline \multirow{2}{*}{$5$} & \multirow{2}{*}{$1|57$} & $24|57$ & \multirow{2}{*}{$1|234|57$} & $4|57$ \\ & & $1|24|57$ & & $1|4|57$ \\ \hline \multirow{2}{*}{$6$} & \multirow{2}{*}{$1|567$} & $24|567$ & $234|567$ & $4|567$ \\ & & $1|24|567$ & $1|234|567$ & $1|4|567$ \\ \hline $7$ & $1|7$ & $1|24|7$ & $1|234|7$ & $1|4|7$ \end{tabular} \caption{ The indices $i,j$ corresponding to the search space used to bound solutions to $\SPR_S$. } \label{tab: i j program search spaces} \end{table} In the program corresponding to a set $S\in\mathcal{S}_{17}$, we search a carefully chosen set $\Omega \subseteq [0,1]^3\times [-1,0]$ for values of $(\alpha_i,\alpha_j,\mu,\nu)$ which satisfy $\SPR_S$. We first divide $\Omega$ into a grid of ``boxes''. Starting at depth $0$, we test each box $B$ for feasibility by assuming that $(\alpha_i,\alpha_j,\mu,\nu)\in B$ and that $\mu-\nu\geq 2/\sqrt{3}$. Next, we calculate $\alpha_k,f_k,$ and $g_k$ for all $k\in S$ in interval arithmetic using the formulas from Section \ref{sec: appendix}.
When the calculation detects that a constraint of $\SPR_S$ is not satisfied, e.g., by showing that some $\alpha_k, f_k,$ or $g_k$ lies in an empty interval, or by constraining $\sum_{i\in S}\alpha_i$ to a union of intervals which does not contain $1$, then the box is deemed infeasible. Otherwise, the box is split into two boxes of equal dimensions, with the dimension of the cut alternating cyclically. For each $S\in\mathcal{S}_{17}$, the program $\SPR_S$ has $3$ norm constraints, $2|S|$ linear eigenvector constraints, $|S|$ elliptical constraints, $\binom{|S|}{2}$ inequality constraints, and $3|S|$ interval membership constraints. By using interval arithmetic, we have a computer-assisted proof of the following result. \begin{lemma}\label{lem: 2 feasible sets} Suppose $S\in\mathcal{S}_{17}\setminus\{\{1,7\}, \{4,5,7\}\}$. Then any solution to $\text{SPR}_S$ attains a value strictly less than $2/\sqrt{3}$. \end{lemma} To better understand the role of interval arithmetic in our proof, consider the following example. \begin{example}\label{ex: infeasible box} Suppose $\mu,\nu$, and $(\alpha_i,f_i,g_i)$ is a solution to $\text{SPR}_{\{1,\dots,7\}}$. We show that $(\alpha_3, \mu,\nu)\notin [.7,.8]\times [.9,1]\times [-.2,-.1]$. By Proposition \ref{prop: fg23 assume 23}, $\displaystyle{g_3^2 = \frac{\nu(\alpha_3 + 2 \mu)}{\alpha_3(\mu + \nu) + 2 \mu \nu}}$. Using interval arithmetic, \begin{align*} \nu(\alpha_3 + 2 \mu) &= [-.2,-.1] \times \big([.7,.8] + 2 \times [.9,1] \big) \\ &= [-.2,-.1] \times [2.5,2.8] = [-.56,-.25], \text{ and } \\ \alpha_3(\mu + \nu) + 2 \mu \nu &= [.7,.8]\times \big([.9,1] + [-.2,-.1]\big) + 2 \times [.9,1] \times [-.2,-.1] \\ &= [.7,.8] \times [.7,.9] + [1.8,2] \times [-.2,-.1] \\ &= [.49,.72] + [-.4,-.18] = [.09,.54]. \end{align*} Thus \begin{align*} g_3^2 &= \frac{ \nu (\alpha_3 + 2 \mu) } { \alpha_3(\mu + \nu) + 2 \mu \nu } = [-.56,-.25] \div [.09,.54] = [-6.\overline{2},-.4\overline{629}]. \end{align*} Since $g_3^2\geq 0$, we have a contradiction. \end{example} Example \ref{ex: infeasible box} illustrates a number of key elements. First, we note that through interval arithmetic, we are able to provably rule out the corresponding region. However, the resulting interval for the quantity $g_3^2$ is over fifty times bigger than any of the input intervals. This growth in the size of intervals is common, and so, in some regions, fairly small intervals for variables are needed to provably illustrate the absence of a solution. For this reason, using a computer to complete this procedure is ideal, as doing millions of calculations by hand would be untenable. However, the use of a computer for interval arithmetic brings with it another issue. Computers have limited memory, and therefore cannot represent all numbers in $\mathbb{R}^*$. Instead, a computer can only store a finite subset of numbers, which we will denote by $F\subsetneq \mathbb{R}^*$. This set $F$ is not closed under the basic arithmetic operations, and so when some operation is performed and the resulting answer is not in $F$, some rounding procedure must be performed to choose an element of $F$ to approximate the exact answer. This issue is the cause of roundoff error in floating point arithmetic, and must be treated in order to use computer-based interval arithmetic as a proof. PyInterval is one of many software packages designed to perform interval arithmetic in a manner which accounts for this crucial feature of floating point arithmetic. 
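To make the mechanics of Example \ref{ex: infeasible box} completely concrete, the following minimal sketch carries out the same computation with hand-rolled interval operations in Python. Unlike PyInterval, it omits the outward rounding discussed below, so it is an illustration rather than a certified computation.
\begin{verbatim}
# Interval arithmetic by hand, reproducing the Example computations.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def div(a, b):
    # only the case 0 not in [c,d]; otherwise a union of intervals is needed
    assert b[0] > 0 or b[1] < 0
    p = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(p), max(p))

alpha3, mu, nu = (.7, .8), (.9, 1.0), (-.2, -.1)
two = (2.0, 2.0)

num = mul(nu, add(alpha3, mul(two, mu)))                  # nu*(alpha3 + 2*mu)
den = add(mul(alpha3, add(mu, nu)), mul(two, mul(mu, nu)))
print(num, den, div(num, den))
# -> (-0.56, -0.25), (0.09, 0.54), (-6.22..., -0.4629...)
# g_3^2 would have to lie in an entirely negative interval: infeasible.
\end{verbatim}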
Given some $x \in \mathbb{R}^*$, let $fl_-(x)$ be the largest $y \in F$ satisfying $y \le x$, and $fl_+(x)$ be the smallest $y \in F$ satisfying $y \ge x$. Then, in order to maintain a mathematically accurate system of interval arithmetic on a computer, once an operation is performed to form a union of intervals $\bigcup_{i=1}^k[a_i, b_i]$, the computer forms a union of intervals containing $[fl_-(a_i),fl_+(b_i)]$ for all $1\leq i\leq k$. The programs which prove Lemma \ref{lem: 2 feasible sets} can be found at \cite{2021riasanovsky-spread}. \subsection{Completing the proof of Theorem \ref{thm: spread maximum graphon}}\label{sub-sec: cases 4|57 and 1|7} Finally, we complete the proof of the second main result of this paper. We will need the following lemma. \begin{lemma} \label{lem: SPR457} If $(\alpha_4,\alpha_5,\alpha_7)$ is a solution to $\text{SPR}_{\{4,5,7\}}$, then $\alpha_7=0.$ \end{lemma} We delay the proof of Lemma \ref{lem: SPR457} to Appendix \ref{sec: ugly} because it is technical. We now proceed with the proof of Theorem \ref{thm: spread maximum graphon}. \begin{proof}[Proof of Theorem \ref{thm: spread maximum graphon}] Let $W$ be a graphon such that $\text{spr}(W) = \max_{U\in\mathcal{W}}\text{spr}(U)$. By Lemma \ref{lem: 2 feasible sets}, the only cases which may attain this maximum are $1|7$ and $4|57$, and by Lemma \ref{lem: SPR457}, any solution in case $4|57$ has $\alpha_7 = 0$ and hence also corresponds to a $2\times 2$ stepgraphon of the same form as case $1|7$. So $W$ is a $2\times 2$ stepgraphon. Let the weights of the parts be $\alpha_1$ and $1-\alpha_1$. Thus, it suffices to demonstrate the uniqueness of the desired solution $\mu, \nu,$ and $(\alpha_i,f_i,g_i)_{i\in \{1,7\}}$ to $\text{SPR}_{\{1,7\}}$. Indeed, we first note that with \begin{align*} N(\alpha_1) := \left[\begin{array}{cc} \alpha_1 & 1-\alpha_1\\ \alpha_1 & 0 \end{array}\right], \end{align*} the quantities $\mu$ and $\nu$ are precisely the eigenvalues of $N(\alpha_1)$, that is, the roots of its characteristic polynomial \begin{align*} p(x) = x^2-\alpha_1x-\alpha_1(1-\alpha_1). \end{align*} In particular, \begin{align*} \mu &= \dfrac{ \alpha_1 + \sqrt{\alpha_1(4-3\alpha_1)} } {2} , \quad \nu = \dfrac{ \alpha_1 - \sqrt{\alpha_1(4-3\alpha_1)} } {2} , \end{align*} and \begin{align*} \mu-\nu &= \sqrt{\alpha_1(4-3\alpha_1)}. \end{align*} Optimizing, it follows that $\mu-\nu$ is maximized at $(\alpha_1, 1-\alpha_1) = (2/3, 1/3)$. Calculating the eigenfunctions and normalizing them gives that $\mu, \nu,$ and their respective eigenfunctions match those from the statement of Theorem \ref{thm: spread maximum graphon}.
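As a quick sanity check of this last optimization (ours, purely symbolic, and not part of the formal argument), the computation can be verified in a few lines of sympy:
\begin{verbatim}
# Verify: the spread of N(alpha1) is sqrt(alpha1*(4 - 3*alpha1)),
# maximized at alpha1 = 2/3 with value 2/sqrt(3).
import sympy as sp

a, x = sp.symbols('a x')
p = x**2 - a*x - a*(1 - a)         # characteristic polynomial of N(a)
r1, r2 = sp.solve(p, x)
radicand = sp.expand((r1 - r2)**2)
print(radicand)                               # 4*a - 3*a**2
print(sp.solve(sp.diff(radicand, a), a))      # [2/3]
print(sp.sqrt(radicand.subs(a, sp.Rational(2, 3))))  # 2*sqrt(3)/3
\end{verbatim}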
\end{proof} \section{From graphons to graphs}\label{sub-sec: graphons to graphs} In this section, we show that Theorem \ref{thm: spread maximum graphon} implies Conjecture \ref{thm: spread maximum graphs} for all $n$ sufficiently large; that is, the solution to the problem of maximizing the spread of a graphon implies the solution to the problem of maximizing the spread of a graph for sufficiently large $n$. The outline for our argument is as follows. First, we define the spread-maximum graphon $W$ as in Theorem \ref{thm: spread maximum graphon}. Let $\{G_n\}$ be any sequence where each $G_n$ is a spread-maximum graph on $n$ vertices, and denote by $\{W_n\}$ the corresponding sequence of graphons. We show that, after applying measure-preserving transformations to each $W_n$, the extreme eigenvalues and eigenvectors of each $W_n$ converge suitably to those of $W$. It follows for $n$ sufficiently large that, except for $o(n)$ vertices, $G_n$ is a join of a clique on $(2/3+o(1))n$ vertices and an independent set on $(1/3+o(1))n$ vertices (Lemma \ref{lem: few exceptional vertices}). Using results from Section \ref{sec:graphs}, we precisely estimate the extreme eigenvector entries on this set of $o(n)$ exceptional vertices. Finally, Lemma \ref{lem: no exceptional vertices} shows that the set of $o(n)$ exceptional vertices is actually empty, completing the proof. \\ Before proceeding with the proof, we state the following corollary of the Davis-Kahan theorem \cite{DK}, adapted to graphons. \begin{corollary}\label{cor: DK eigenfunction perturbation} Suppose $W,W':[0,1]^2\to [0,1]$ are graphons. Let $\mu$ be an eigenvalue of $W$ with $f$ a corresponding unit eigenfunction. Let $\{h_k\}$ be an orthonormal eigenbasis for $W'$ with corresponding eigenvalues $\{\mu_k'\}$. Suppose that $|\mu_k'-\mu| > \delta$ for all $k\neq 1$. Then \begin{align*} \sqrt{1 - \langle h_1,f\rangle^2} \leq \dfrac{\|A_{W'-W}f\|_2}{\delta}. \end{align*} \end{corollary} Before proving Theorem \ref{thm: spread maximum graphs}, we prove the following approximate result. For all nonnegative integers $n_1,n_2,n_3$, let $G(n_1,n_2,n_3) := (K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$. \begin{lemma}\label{lem: few exceptional vertices} For all positive integers $n$, let $G_n$ denote a graph on $n$ vertices which maximizes spread. Then $G_n = G(n_1,n_2,n_3)$ for some nonnegative integers $n_1,n_2,n_3$ such that $n_1 = (2/3+o(1))n$, $n_2 = o(n)$, and $n_3 = (1/3+o(1))n$. \end{lemma} \begin{proof} Our argument outline is: \begin{enumerate} \item show that the eigenvectors for the spread-extremal graphs resemble the eigenfunctions of the spread-extremal graphon in an $L_2$ sense, and \item show that, with the exception of a small proportion of vertices, a spread-extremal graph is the join of a clique and an independent set. \end{enumerate} Let $\mathcal{P} := [0,2/3]$ and $\mathcal{N} := [0,1]\setminus \mathcal{P}$. By Theorem \ref{thm: spread maximum graphon}, the graphon $W$ which is the indicator function of the set $[0,1]^2\setminus \mathcal{N}^2$ maximizes spread. Denote by $\mu$ and $\nu$ its maximum and minimum eigenvalues, respectively. For every positive integer $n$, let $G_n$ denote a graph on $n$ vertices which maximizes spread, let $W_n$ be any stepgraphon corresponding to $G_n$, and let $\mu_n$ and $\nu_n$ denote the maximum and minimum eigenvalues of $W_n$, respectively.
By Theorems \ref{thm: graphon eigenvalue continuity} and \ref{thm: spread maximum graphon}, and compactness of $\hat{\mathcal{W}}$, \begin{align*} \max\left\{ |\mu-\mu_n|, |\nu-\nu_n|, \delta_\square(W, W_n) \right\}\to 0. \end{align*} Moreover, we may apply measure-preserving transformations to each $W_n$ so that without loss of generality, $\|W-W_n\|_\square\to 0$. As in Theorem \ref{thm: spread maximum graphon}, let $f$ and $g$ be unit eigenfunctions which take values $f_1,f_2, g_1, g_2$. Furthermore, let $\varphi_n$ be a nonnegative unit $\mu_n$-eigenfunction for $W_n$ and let $\psi_n$ be a unit $\nu_n$-eigenfunction for $W_n$. \\ We show that without loss of generality, $\varphi_n\to f$ and $\psi_n\to g$ in the $L_2$ sense. Since $\mu$ is the only positive eigenvalue of $W$ and it has multiplicity $1$, taking $\delta := \mu/2$, Corollary \ref{cor: DK eigenfunction perturbation} implies that \begin{align*} 1-\langle f,\varphi_n\rangle^2 &\leq \dfrac{4\|A_{W-W_n}f\|_2^2} {\mu^2} \\ &= \dfrac{4}{\mu^2} \cdot \left\langle A_{W-W_n}f, A_{W-W_n}f \right\rangle \\ &\leq \dfrac{4}{\mu^2} \cdot \| A_{W-W_n}f \|_1 \cdot \| A_{W-W_n}f \|_\infty \\ &\leq \dfrac{4}{\mu^2} \cdot \left( \|A_{W-W_n}\|_{\infty\to1}\|f\|_\infty \right) \cdot \|f\|_\infty \\ &\leq \dfrac{ 16\|W-W_n\|_\square \cdot \|f\|_\infty^2} {\mu^2}, \end{align*} where the last inequality follows from Lemma 8.11 of \cite{Lovasz2012Hombook}. Since $\|f\|_\infty\leq 1/\mu$, this proves the first claim. The second claim follows by replacing $f$ with $g$, and $\mu$ with $|\nu|$. \\ \\ \textbf{Note: } For the remainder of the proof, we will introduce quantities $\varepsilon_i > 0$ in lieu of writing complicated expressions explicitly. When we introduce a new $\varepsilon_i$, we will remark that given $\varepsilon_0,\dots,\varepsilon_{i-1}$ sufficiently small, $\varepsilon_i$ can be made small enough to meet some other conditions. \\ Let $\varepsilon_0 > 0$ and for all $n\geq 1$, define \begin{align*} \mathcal{P}_n &:= \{ x\in [0,1] : |\varphi_n(x) - f_1| < \varepsilon_0 \text{ and } |\psi_n(x)-g_1| < \varepsilon_0 \}, \\ \mathcal{N}_n &:= \{ x\in [0,1] : |\varphi_n(x) - f_2| < \varepsilon_0 \text{ and } |\psi_n(x)-g_2| < \varepsilon_0 \}, \text{ and } \\ \mathcal{E}_n &:= [0,1]\setminus (\mathcal{P}_n\cup \mathcal{N}_n). \end{align*} By Chebyshev's inequality, \begin{align*} \varepsilon_0^2\cdot m(\{|\varphi_n-f|\geq \varepsilon_0\}) \leq \int |\varphi_n-f|^2 \to 0, \text{ and } \varepsilon_0^2\cdot m(\{|\psi_n-g|\geq \varepsilon_0\}) \leq \int |\psi_n-g|^2 \to 0, \end{align*} and it follows that \begin{align*} \max\left\{ m(\mathcal{P}_n\setminus\mathcal{P}), m(\mathcal{N}_n\setminus\mathcal{N}), m(\mathcal{E}_n) \right\} \to 0. \end{align*} For all $u\in V(G_n)$, let $S_u$ be the subinterval of $[0,1]$ corresponding to $u$ in $W_n$, and denote by $\varphi_u$ and $\psi_u$ the constant values of $\varphi_n$ and $\psi_n$ on $S_u$, respectively. For convenience, we define the following discrete analogues of $\mathcal{P}_n, \mathcal{N}_n, \mathcal{E}_n$: \begin{align*} P_n &:= \{ u\in V(G_n) : |\varphi_u - f_1| < \varepsilon_0 \text{ and } |\psi_u-g_1| < \varepsilon_0 \}, \\ N_n &:= \{ u\in V(G_n) : |\varphi_u - f_2| < \varepsilon_0 \text{ and } |\psi_u-g_2| < \varepsilon_0 \}, \text{ and } \\ E_n &:= V(G_n) \setminus (P_n\cup N_n). \end{align*} Let $\varepsilon_1>0$.
By Lemma \ref{discrete ellipse equation} and using the fact that $\mu_n\to \mu$ and $\nu_n\to \nu$, \begin{align}\label{eq: recall graph ellipse equation} \left| \mu \varphi_u^2 - \nu\psi_u^2 - (\mu-\nu) \right| &< \varepsilon_1 \quad \text{ for all }u\in V(G_n) \end{align} for all $n$ sufficiently large. Let $\varepsilon_0'>0$. We next need the following claim, which says that the eigenvector entries of the exceptional vertices behave as if they have neighborhood $N_n$. \\ \\ \textbf{Claim I. } Suppose $\varepsilon_0$ is sufficiently small and $n$ is sufficiently large in terms of $\varepsilon_0'$. Then for all $v\in E_n$, \begin{align}\label{eq: exceptional vertex entries} \max\left\{ \left| \varphi_v - \dfrac{f_2}{3\mu} \right|, \left| \psi_v - \dfrac{g_2}{3\nu} \right| \right\} < \varepsilon_0'. \end{align} Indeed, suppose $v \in E_n$ and let \begin{align*} U_n := \{w\in V(G_n) : vw\in E(G_n)\} \quad\text{ and }\quad \mathcal{U}_n := \bigcup_{w\in U_n} S_w. \end{align*} We take two cases, depending on the sign of $\psi_v$. \\ \\ \textbf{Case A: $\psi_v \geq 0$. } Recall that $f_2 > 0 > g_2$. Furthermore, $\varphi_v \geq 0$ and by assumption, $\psi_v\geq 0$. It follows that for all $n$ sufficiently large, $f_2\varphi_v - g_2\psi_v > 0$, so by Lemma \ref{lem: graph join}, $N_n\subseteq U_n$. Since $\varphi_n$ is a $\mu_n$-eigenfunction for $W_n$, \begin{align*} \mu_n \varphi_v &= \int_{y\in [0,1]}W_n(x,y)\varphi_n(y) \\ &= \int_{y\in \mathcal{P}_n\cap \mathcal{U}_n}\varphi_n(y) + \int_{y\in \mathcal{N}_n}\varphi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\varphi_n(y). \end{align*} Similarly, \begin{align*} \nu_n \psi_v &= \int_{y\in [0,1]}W_n(x,y)\psi_n(y) \\ &= \int_{y\in \mathcal{P}_n\cap \mathcal{U}_n}\psi_n(y) + \int_{y\in \mathcal{N}_n}\psi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\psi_n(y). \end{align*} Let $\rho_n := m(\mathcal{P}_n\cap \mathcal{U}_n)$. Note that for all $\varepsilon_2 > 0$, as long as $n$ is sufficiently large and $\varepsilon_1$ is sufficiently small, then \begin{align}\label{eq: eigenvector entries with rho (case A)} \max\left\{ \left| \varphi_v-\dfrac{3\rho_n f_1 + f_2}{3\mu} \right| , \left| \psi_v-\dfrac{3\rho_n g_1 + g_2}{3\nu} \right| \right\} < \varepsilon_2. \end{align} Let $\varepsilon_3 > 0$. By Equations \eqref{eq: recall graph ellipse equation} and \eqref{eq: eigenvector entries with rho (case A)} and with $\varepsilon_1,\varepsilon_2$ sufficiently small, \begin{align*} \left| \mu\cdot \left( \dfrac{3\rho_n f_1 + f_2}{3\mu} \right)^2 -\nu\cdot \left( \dfrac{3\rho_n g_1 + g_2}{3\nu} \right)^2 - (\mu-\nu) \right| < \varepsilon_3. \end{align*} Substituting the values of $f_1,f_2, g_1,g_2$ from Theorem \ref{thm: spread maximum graphon} and simplifying, it follows that \begin{align*} \left| \dfrac{\sqrt{3}}{2} \cdot \rho_n(3\rho_n-2) \right| < \varepsilon_3 . \end{align*} Let $\varepsilon_4 > 0$. It follows that if $n$ is sufficiently large and $\varepsilon_3$ is sufficiently small, then \begin{align}\label{eq: (case A) rho estimates} \min\left\{ \rho_n, |2/3-\rho_n| \right\} < \varepsilon_4.
\end{align} Combining Equations \eqref{eq: eigenvector entries with rho (case A)} and \eqref{eq: (case A) rho estimates}, it follows that with $\varepsilon_2,\varepsilon_4$ sufficiently small, \begin{align*} \max\left\{ \left| \varphi_v - \dfrac{f_2}{3\mu} \right|, \left| \psi_v - \dfrac{g_2}{3\nu} \right| \right\} &< \varepsilon_0', \text{ or } \\ \max\left\{ \left| \varphi_v - \dfrac{2f_1 + f_2}{3\mu} \right|, \left| \psi_v - \dfrac{2g_1 + g_2}{3\nu} \right| \right\} &< \varepsilon_0'. \end{align*} Note that \begin{align*} f_1 &= \dfrac{2f_1 + f_2}{3\mu} \quad \text{ and }\quad g_1 = \dfrac{2g_1 + g_2}{3\nu}. \end{align*} Since $v\in E_n$, the second inequality does not hold, which completes the proof of the desired claim. \\ \\ \textbf{Case B: $\psi_v < 0$. } Recall that $f_1 > g_1 > 0$. Furthermore, $\varphi_v \geq 0$ and by assumption, $\psi_v < 0$. It follows that for all $n$ sufficiently large, $f_1\varphi_v - g_1\psi_v > 0$, so by Lemma \ref{lem: graph join}, $P_n\subseteq U_n$. Since $\varphi_n$ is a $\mu_n$-eigenfunction for $W_n$, \begin{align*} \mu_n \varphi_v &= \int_{y\in [0,1]}W_n(x,y)\varphi_n(y) \\ &= \int_{y\in \mathcal{N}_n\cap \mathcal{U}_n}\varphi_n(y) + \int_{y\in \mathcal{P}_n}\varphi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\varphi_n(y). \end{align*} Similarly, \begin{align*} \nu_n \psi_v &= \int_{y\in [0,1]}W_n(x,y)\psi_n(y) \\ &= \int_{y\in \mathcal{N}_n\cap \mathcal{U}_n}\psi_n(y) + \int_{y\in \mathcal{P}_n}\psi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\psi_n(y). \end{align*} Let $\rho_n := m(\mathcal{N}_n\cap \mathcal{U}_n)$. Note that for all $\varepsilon_5 > 0$, as long as $n$ is sufficiently large and $\varepsilon_1$ is sufficiently small, then \begin{align}\label{eq: eigenvector entries with rho (case B)} \max\left\{ \left| \varphi_v-\dfrac{2f_1 + 3\rho_n f_2}{3\mu} \right| , \left| \psi_v-\dfrac{2g_1 + 3\rho_n g_2}{3\nu} \right| \right\} < \varepsilon_5. \end{align} Let $\varepsilon_6 > 0$. By Equations \eqref{eq: recall graph ellipse equation} and \eqref{eq: eigenvector entries with rho (case B)} and with $\varepsilon_1,\varepsilon_5$ sufficiently small, \begin{align*} \left| \mu\cdot \left( \dfrac{2f_1 + 3\rho_n f_2}{3\mu} \right)^2 -\nu\cdot \left( \dfrac{2g_1 + 3\rho_n g_2}{3\nu} \right)^2 - (\mu-\nu) \right| < \varepsilon_6. \end{align*} Substituting the values of $f_1,f_2, g_1,g_2$ from Theorem \ref{thm: spread maximum graphon} and simplifying, it follows that \begin{align*} \left| 2\sqrt{3} \cdot \rho_n(3\rho_n-1) \right| < \varepsilon_6. \end{align*} Let $\varepsilon_7 > 0$. It follows that if $n$ is sufficiently large and $\varepsilon_6$ is sufficiently small, then \begin{align}\label{eq: (case B) rho estimates} \min\left\{ \rho_n, |1/3-\rho_n| \right\} < \varepsilon_7. \end{align} Combining Equations \eqref{eq: eigenvector entries with rho (case B)} and \eqref{eq: (case B) rho estimates}, it follows that with $\varepsilon_5,\varepsilon_7$ sufficiently small, \begin{align*} \max\left\{ \left| \varphi_v - \dfrac{2f_1}{3\mu} \right|, \left| \psi_v - \dfrac{2g_1}{3\nu} \right| \right\} &< \varepsilon_0', \text{ or } \\ \max\left\{ \left| \varphi_v - \dfrac{2f_1 + f_2}{3\mu} \right|, \left| \psi_v - \dfrac{2g_1 + g_2}{3\nu} \right| \right\} &< \varepsilon_0'. \end{align*} Again, note that \begin{align*} f_1 &= \dfrac{2f_1 + f_2}{3\mu} \quad \text{ and }\quad g_1 = \dfrac{2g_1 + g_2}{3\nu}. \end{align*} Since $v\in E_n$, the second inequality does not hold.
\\ Similarly, note that \begin{align*} f_2 &= \dfrac{2f_1}{3\mu} \quad \text{ and }\quad g_2 = \dfrac{2g_1}{3\nu}. \end{align*} Since $v\in E_n$, the first inequality does not hold either, a contradiction. So the desired claim holds. \\ We now complete the proof of Lemma \ref{lem: few exceptional vertices} by showing that for all $n$ sufficiently large, $G_n$ is the join of an independent set $N_n$ with a disjoint union of a clique $P_n$ and an independent set $E_n$. As above, we let $\varepsilon_0,\varepsilon_0'>0$ be arbitrary. By definition of $P_n$ and $N_n$ and by Equation \eqref{eq: exceptional vertex entries} from Claim I, then for all $n$ sufficiently large, \begin{align*} \max\left\{ \left| \varphi_v - f_1 \right|, \left| \psi_v - g_1 \right| \right\} &< \varepsilon_0 &\text{ for all }v\in P_n \\ \max\left\{ \left| \varphi_v - \dfrac{f_2}{3\mu} \right|, \left| \psi_v - \dfrac{g_2}{3\nu} \right| \right\} &< \varepsilon_0' &\text{ for all }v\in E_n \\ \max\left\{ \left| \varphi_v - f_2 \right|, \left| \psi_v - g_2 \right| \right\} &< \varepsilon_0 &\text{ for all }v\in N_n. \end{align*} With rows and columns respectively corresponding to the vertex sets $P_n, E_n,$ and $N_n$, we note the following inequalities: \begin{align*} \begin{array}{c|c||c} f_1^2 > g_1^2 & f_1\cdot\dfrac{f_2}{3\mu} < g_1\cdot\dfrac{g_2}{3\nu} & f_1f_2 > g_1g_2\\\hline & \left(\dfrac{f_2}{3\mu}\right)^2 < \left(\dfrac{g_2}{3\nu}\right)^2 & \dfrac{f_2}{3\mu}\cdot f_2 > \dfrac{g_2}{3\nu}\cdot g_2\\\hline\hline && f_2^2 < g_2^2 \end{array} \quad . \end{align*} Let $\varepsilon_0, \varepsilon_0'$ be sufficiently small. Then for all $n$ sufficiently large and for all $u,v\in V(G_n)$, $\varphi_u\varphi_v-\psi_u\psi_v < 0$ if and only if $u,v\in E_n$, $u,v\in N_n$, or $(u,v)\in (P_n\times E_n)\cup (E_n\times P_n)$. By Lemma \ref{lem: graph join}, since $m(\mathcal{P}_n) \to 2/3$ and $m(\mathcal{N}_n) \to 1/3$, the proof is complete. \end{proof} We have now shown that the spread-extremal graph is of the form $(K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$ where $n_2 = o(n)$. The next lemma refines this to show that actually $n_2 = 0$. \begin{lemma}\label{lem: no exceptional vertices} For all nonnegative integers $n_1,n_2,n_3$, let $G(n_1, n_2, n_3) := (K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$. Then for all $n$ sufficiently large, the following holds. If $\text{spr}(G(n_1,n_2,n_3))$ is maximized subject to the constraint $n_1+n_2+n_3 = n$ and $n_2 = o(n)$, then $n_2 = 0$. \end{lemma}
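Before giving the proof outline, we remark that the lemma is easy to probe numerically. The following brute-force search (Python with NumPy; an illustrative sketch only, not used in the argument) evaluates the spread of the equitable quotient matrix of $G(n_1,n_2,n_3)$, written out in the outline below, over all admissible triples with $n_2$ small.
\begin{verbatim}
import numpy as np

def spr_G(n1, n2, n3):
    # Spread of the equitable quotient matrix of
    # (K_{n1} cup K_{n2}^c) join K_{n3}^c; see the outline below.
    Q = np.array([[n1 - 1.0, 0.0, n3],
                  [0.0,      0.0, n3],
                  [n1,       n2,  0.0]])
    w = np.linalg.eigvals(Q).real
    return w.max() - w.min()

n = 300
best = max(((n1, n2, n - n1 - n2) for n2 in range(0, 10)
            for n1 in range(n // 2, n - n2)),
           key=lambda t: spr_G(*t))
print(best)  # expect n2 = 0 and n1 near 2n/3, e.g. (200, 0, 100)
\end{verbatim}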
\emph{Proof outline:} We aim to maximize the spread of $G(n_1, n_2, n_3)$ subject to $n_2 = o(n)$. The spread of $G(n_1, n_2, n_3)$ is the same as the spread of the quotient matrix \[ Q_n = \begin{bmatrix} n_1 - 1 & 0 & n_3\\ 0 & 0 & n_3 \\ n_1 & n_2 & 0 \end{bmatrix}. \] We reparametrize with parameters $\varepsilon_1$ and $\varepsilon_2$ representing how far away $n_1$ and $n_3$ are proportionally from $\frac{2n}{3}$ and $\frac{n}{3}$, respectively. Namely, $\varepsilon_1 = \frac{2}{3} - \frac{n_1}{n}$ and $\varepsilon_2 = \frac{1}{3} - \frac{n_3}{n}$. Then $\varepsilon_1 + \varepsilon_2 = \frac{n_2}{n}$. Hence maximizing the spread of $G(n_1,n_2,n_3)$ subject to $n_2 = o(n)$ is equivalent to maximizing the spread of the matrix \[ n\begin{bmatrix} \frac{2}{3} - \varepsilon_1 - \frac{1}{n} & 0 & \frac{1}{3} - \varepsilon_2 \\ 0 & 0 & \frac{1}{3} - \varepsilon_2 \\ \frac{2}{3} - \varepsilon_1 & \varepsilon_1 + \varepsilon_2 & 0 \end{bmatrix} \] subject to the constraint that $\textstyle \frac{2}{3}-\varepsilon_1$ and $\textstyle \frac{1}{3}-\varepsilon_2$ are nonnegative integer multiples of $\frac{1}{n}$ and $\varepsilon_1+\varepsilon_2 = o(1)$. In order to utilize calculus, we instead solve a continuous relaxation of the optimization problem. As such, consider the following matrix. \begin{align*} M_z(\varepsilon_1,\varepsilon_2) := \left[\begin{array}{ccc} \dfrac{2}{3}-\varepsilon_1-z & 0 & \dfrac{1}{3}-\varepsilon_2\\ 0 & 0 & \dfrac{1}{3}-\varepsilon_2\\ \dfrac{2}{3}-\varepsilon_1 & \varepsilon_1+\varepsilon_2 & 0 \end{array}\right] . \end{align*} Since $M_z(\varepsilon_1,\varepsilon_2)$ is diagonalizable, we may let $S_z(\varepsilon_1,\varepsilon_2)$ be the difference between the maximum and minimum eigenvalues of $M_z(\varepsilon_1,\varepsilon_2)$. We consider the optimization problem $\mathcal{P}_{z,C}$ defined for all $z\in\mathbb{R}$ and all $C>0$ such that $|z|$ and $C$ are sufficiently small, by \begin{align*} (\mathcal{P}_{z,C}): \left\{\begin{array}{rl} \max & S_z(\varepsilon_1,\varepsilon_2)\\ \text{s.t.} & \varepsilon_1,\varepsilon_2\in [-C,C]. \end{array}\right. \end{align*} We show that as long as $C$ and $|z|$ are sufficiently small, then the optimum of $\mathcal{P}_{z,C}$ is attained by \begin{align*} (\varepsilon_1, \varepsilon_2) &= \left( (1+o(z))\cdot \dfrac{7z}{30}, (1+o(z))\cdot \dfrac{-z}{3} \right). \end{align*} Moreover, we show that in the feasible region of $\mathcal{P}_{z,C}$, $S_z(\varepsilon_1,\varepsilon_2)$ is concave-down in $(\varepsilon_1,\varepsilon_2)$. We then return to the original problem by imposing the constraint that $\frac{2}{3}-\varepsilon_1$ and $\frac{1}{3}-\varepsilon_2$ are nonnegative integer multiples of $\frac{1}{n}$. Under these added constraints, the optimum is obtained when \begin{align*} (\varepsilon_1, \varepsilon_2) &= \left\{\begin{array}{rl} (0,0), & n\equiv 0 \pmod{3}\\ \left( \dfrac{2}{3n}, \dfrac{-2}{3n} \right), & n\equiv 1 \pmod{3}\\ \left( \dfrac{1}{3n}, \dfrac{-1}{3n} \right), & n\equiv 2 \pmod{3} \end{array}\right. . \end{align*} Together these two observations complete the proof of the lemma. Since the details are straightforward but tedious calculus, we delay this part of the proof to Section \ref{sec: 2 by 2 reduction}. We may now complete the proof of Theorem \ref{thm: spread maximum graphs}. \begin{proof}[Proof of Theorem \ref{thm: spread maximum graphs}] Suppose $G$ is a graph on $n$ vertices which maximizes spread. By Lemma \ref{lem: few exceptional vertices}, $G = (K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$ for some nonnegative integers $n_1,n_2,n_3$ such that $n_1+n_2+n_3 = n$, where \begin{align*} (n_1,n_2,n_3) &= \left( \left( \dfrac{2}{3}+o(1) \right)\cdot n, \; o(n), \; \left( \dfrac{1}{3}+o(1) \right) \cdot n \right). \end{align*} By Lemma \ref{lem: no exceptional vertices}, if $n$ is sufficiently large, then $n_2 = 0$. To complete the proof of the main result, it is sufficient to find the unique maximum of $\text{spr}( K_{n_1}\vee K_{n_2}^c )$, subject to the constraint that $n_1+n_2 = n$.
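As a quick numerical illustration of this last optimization (Python with NumPy; a sketch only, with the closed-form answer quoted next):
\begin{verbatim}
import numpy as np

def quotient_spread(n1, n2):
    # Spread of the 2x2 quotient matrix of K_{n1} join K_{n2}^c.
    w = np.linalg.eigvals(np.array([[n1 - 1.0, n2], [n1, 0.0]]))
    return w.real.max() - w.real.min()

for n in [30, 31, 32, 1000, 1001, 1002]:
    best = max(range(1, n), key=lambda n1: quotient_spread(n1, n - n1))
    print(n, best, round((2 * n - 1) / 3))  # the two values agree
\end{verbatim}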
This is determined in \cite{gregory2001spread} to be the join of a clique on $\lfloor\frac{2n}{3}\rfloor$ vertices and an independent set on $\lceil \frac{n}{3} \rceil$ vertices. The interested reader can prove that $n_1$ is the nearest integer to $(2n-1)/3$ by considering the spread of the quotient matrix \begin{align*} \left[\begin{array}{cc} n_1-1 & n_2\\ n_1 & 0 \end{array}\right] \end{align*} and optimizing the choice of $n_1$. \end{proof} \section{Technical proofs}\label{sec: ugly} \subsection{Reduction to 17 cases}\label{appendix 17 cases} Now, we introduce the following specialized notation. For any nonempty set $S\subseteq V(G^*)$ and any labeled partition $\mathcal{I} = (I_i)_{i\in S}$ of $[0,1]$, we define the stepgraphon $W_\mathcal{I}$ as follows. For all $i,j\in S$, $W_\mathcal{I}$ equals $1$ on $I_i\times I_j$ if and only if $ij$ is an edge (or loop) of $G^*$, and $0$ otherwise. If $\alpha = (\alpha_i)_{i\in S}$ where $\alpha_i = m(I_i)$ for all $i\in S$, we may write $W_\alpha$ to denote the graphon $W_\mathcal{I}$ up to weak isomorphism. \\ To make the observations from Section \ref{sub-sec: cases} more explicit, we note that Theorem \ref{thm: reduction to stepgraphon} implies that a spread-optimal graphon has the form $W = W_\mathcal{I}$, where $\mathcal{I} = (I_i)_{i\in S}$ is a labeled partition of $[0,1]$, $S\subseteq [7]$, and each $I_i$ is measurable with positive measure. Since $W$ is a stepgraphon, its extreme eigenfunctions may be taken to be constant on $I_i$, for all $i\in S$. With $f,g$ denoting the extreme eigenfunctions for $W$, we may let $f_i$ and $g_i$ be the constant values of $f$ and $g$, respectively, on the step $I_i$, for all $i\in S$. Appealing again to Theorem \ref{thm: reduction to stepgraphon}, we may assume without loss of generality that $f_i\geq 0$ for all $i\in S$, and for all $i\in S$, $g_i\geq 0$ implies that $i\in\{1,2,3,4\}$. By Lemma \ref{lem: local eigenfunction equation}, for each $i\in S$, $\mu f_i^2-\nu g_i^2 = \mu-\nu$. Combining these facts, we note that $f_i$ and $g_i$ belong to specific intervals as in Figure \ref{fig: f g interval}. \begin{figure}[ht] \centering \input{graphics/interval-eigenfunction-values} \caption{ Intervals containing the quantities $f_i$ and $g_i$. Note that $f_i$ and $g_i$ are defined only for $i\in S$. } \label{fig: f g interval} \end{figure} For convenience, we define the following intervals. First, let $\mathcal{U} := [0,1]$ and $\mathcal{V} := [1,+\infty)$. With some abuse of notation, we denote $-\mathcal{U} = [-1,0]$ and $-\mathcal{V} = (-\infty,-1]$. For each $i\in V(G^*)$, we define the intervals $F_i$ and $G_i$ by \begin{align*} (F_i, G_i) &:= \left\{\begin{array}{rl} (\mathcal{V}, \mathcal{U}), &i\in \{1,2\}\\ (\mathcal{U}, \mathcal{V}), &i\in \{3,4\} \\ (\mathcal{V}, -\mathcal{U}), &i=5 \\ (\mathcal{U}, -\mathcal{V}), &i\in \{6,7\} \end{array}\right. . \end{align*} Given that the set $S$ and the quantities $(\alpha_i,f_i,g_i)_{i\in S}$ are clear from context, we label the following equation: \begin{align} \sum_{i\in S} \alpha_i &= \sum_{i\in S} \alpha_if_i^2 = \sum_{i\in S} \alpha_ig_i^2 = 1 \label{eq: program norms} .
\end{align} Furthermore, when $i\in S$ is understood from context, we define the equations \begin{align} \mu f_i^2 - \nu g_i^2 &= \mu - \nu \label{eq: program ellipse} \\ \sum_{j\in N_i\cap S}\alpha_jf_j &= \mu f_i \label{eq: program eigen f} \\ \sum_{j\in N_i\cap S}\alpha_jg_j &= \nu g_i \label{eq: program eigen g} \end{align} Additionally, we consider the following inequalities. For all $S\subseteq V(G^*)$ and all distinct $i,j\in S$, \begin{align}\label{ieq: program inequality constraint} f_if_j - g_ig_j &\left\{\begin{array}{rl} \geq 0, &ij\in E(G^*)\\ \leq 0, &ij\notin E(G^*) \end{array}\right. \end{align} Finally, for all nonempty $S\subseteq V(G^*)$, we define the constrained-optimization problem $\text{SPR}_S$ by: \begin{align*} (\text{SPR}_S): \left\{\begin{array}{rll} \max & \mu-\nu \\ \text{s.t.} & \text{Equation }\eqref{eq: program norms} \\ & \text{Equations } \eqref{eq: program ellipse}, \eqref{eq: program eigen f}, \text{ and } \eqref{eq: program eigen g} & \text{ for all }i\in S \\ & \text{Inequality } \eqref{ieq: program inequality constraint} & \text{ for all distinct }i,j\in S \\ & (\alpha_i,f_i,g_i)\in [0,1] \times F_i \times G_i & \text{ for all }i\in S \\ & \mu,\nu\in\mathbb{R} \end{array}\right. . \end{align*} For completeness, we state and prove the following observation. \begin{proposition}\label{prop: problem solutions} Let $W\in\mathcal{W}$ be such that $\text{spr}(W) = \max_{U\in \mathcal{W}}\text{spr}(U)$ and write $\mu,\nu$ for the maximum and minimum eigenvalues of $W$, with corresponding unit eigenfunctions $f,g$. Then for some nonempty set $S\subseteq V(G^*)$, the following holds. There exists a family of triples $(I_i, f_i, g_i)_{i\in S}$, where $(I_i)_{i\in S}$ is a labeled partition of $[0,1]$ with parts of positive measure and $f_i,g_i\in \mathbb{R}$ for all $i\in S$, such that: \begin{enumerate}[(i)] \item\label{item: W = W_I} $W = W_\mathcal{I}$. \item\label{item: f,g constants} After possibly replacing $f$ by $-f$ and $g$ by $-g$: for all $i\in S$, $f$ and $g$ equal $f_i$ and $g_i$ a.e.\ on $I_i$. \item\label{item: problem solution} With $\alpha_i := m(I_i)$ for all $i\in S$, $\text{SPR}_S$ is solved by $\mu,\nu$, and $(\alpha_i, f_i, g_i)_{i\in S}$. \end{enumerate} \end{proposition} \begin{proof} First we prove Item \eqref{item: W = W_I}. By Theorem \ref{thm: reduction to stepgraphon} and the definition of $G^*$, there exists a nonempty set $S\subseteq V(G^*)$ and a labeled partition $\mathcal{I} = (I_i)_{i\in S}$ such that $W = W_\mathcal{I}$. By merging any parts of measure $0$ into some part of positive measure, we may assume without loss of generality that $m(I_i) > 0$ for all $i\in S$. So Item \eqref{item: W = W_I} holds. For Item \eqref{item: f,g constants}, the eigenfunctions corresponding to the maximum and minimum eigenvalues of a stepgraphon must be constant on each block by convexity and the Courant-Fischer Min-Max Theorem. Finally, we prove Item \eqref{item: problem solution}. We first show that for all $i\in S$, $(f_i,g_i)\in F_i\times G_i$. By Lemma \ref{lem: local eigenfunction equation}, \begin{align*} \mu f_i^2-\nu g_i^2 &= \mu-\nu \end{align*} for all $i\in S$. In particular, either $f_i^2\leq 1\leq g_i^2$ or $g_i^2\leq 1\leq f_i^2$. By Lemma \ref{lem: K = indicator function}, for all $i,j\in S$, $f_if_j-g_ig_j\neq 0$, and $ij\in E(G^*)$ if and only if $f_if_j-g_ig_j > 0$. Note that the loops of $G^*$ are $1, 2,$ and $5$. It follows that for all $i\in S$, $f_i^2 > 1 > g_i^2$ if and only if $i\in\{1,2,5\}$, and $g_i^2>1>f_i^2$, otherwise.
Since $f$ is positive on $[0,1]$, this completes the proof that $f_i\in F_i$ for all $i\in S$. Similarly, since $g$ is positive on $\bigcup_{i\in \{1,2,3,4\}\cap S}I_i$ and negative on $\bigcup_{i\in \{5,6,7\}\cap S}I_i$, by inspection $g_i\in G_i$ for all $i\in S$. Likewise, Inequalities \eqref{ieq: program inequality constraint} follow directly from Lemma \ref{lem: K = indicator function}. Continuing, we note the following. Since $W$ is a stepgraphon, if $\lambda\neq 0$ is an eigenvalue of $W$, there exists a $\lambda$-eigenfunction $h$ for $W$ such that for all $i\in S$, $h = h_i$ on $I_i$ for some $h_i\in \mathbb{R}$. Moreover, for all $i\in S$, since $m(I_i) > 0$, \begin{align*} \lambda h_i &= \sum_{j\in N_i\cap S} \alpha_jh_j. \end{align*} In particular, the value of any feasible point of $\text{SPR}_S$ is at most $\mu-\nu$. Since $f,g$ are eigenfunctions corresponding to $W$ and the eigenvalues $\mu,\nu$, respectively, Equations \eqref{eq: program eigen f} and \eqref{eq: program eigen g} hold. Finally, since $(I_i)_{i\in S}$ is a partition of $[0,1]$ and since $\|f\|_2^2 = \|g\|_2^2 = 1$, Equation \eqref{eq: program norms} holds. So $\mu,\nu$, and $(\alpha_i, f_i, g_i)_{i\in S}$ lie in the domain of $\text{SPR}_S$. This completes the proof of Item \eqref{item: problem solution}, and the desired claim. \end{proof} We enhance Proposition \ref{prop: problem solutions} as follows. \begin{lemma}\label{lem: 19 cases} Proposition \ref{prop: problem solutions} holds with the added assumption that $S\in\mathcal{S}_{17}$. \end{lemma} \begin{proof} We begin our proof with the following claim. \\ \\ {\bf Claim A: } Suppose $i\in S$ and $j\in V(G^*)$ are distinct such that $N_i\cap S = N_j\cap S$. Then Proposition \ref{prop: problem solutions} holds with the set $S' := (S\setminus\{i\})\cup \{j\}$ replacing $S$. First, we define the following quantities. For all $k\in S'\setminus \{j\}$, let $(f_k', g_k', I_k') := (f_k, g_k, I_k)$, and also let $(f_j', g_j') := (f_i, g_i)$. If $j\in S$, let $I_j' := I_i\cup I_j$, and otherwise, let $I_j' := I_i$. Additionally, let $\mathcal{I}' := (I_k')_{k\in S'}$ and for each $k\in S'$, let $\alpha_k' := m(I_k')$. By the criteria from Proposition \ref{prop: problem solutions}, the domain criterion $(\alpha_k', f_k', g_k') \in [0,1]\times F_k\times G_k$ as well as Equation \eqref{eq: program ellipse} hold for all $k\in S'$. Since we are reusing $\mu,\nu$, the constraint $\mu,\nu\in \mathbb{R}$ also holds. It suffices to show that Equation \eqref{eq: program norms} holds, and that Equations \eqref{eq: program eigen f} and \eqref{eq: program eigen g} hold for all $k\in S'$. To do this, we first show that for all $k\in S'$, $f = f_k'$ and $g = g_k'$ on $I_k'$. By definition, $f = f_k$ and $g = g_k$ on $I_k' = I_k$ for all $k\in S'\setminus\{j\}$, as needed. Now suppose $j\notin S$. Then $f = f_i = f_j'$ and $g = g_i = g_j'$ on the set $I_j' = I_i$, as needed. Finally, suppose $j\in S$. Note by definition that $f = f_i = f_j'$ and $g = g_i = g_j'$ on $I_i$. Since $I_j' = I_i\cup I_j$, it suffices to prove that $f = f_j'$ and $g = g_j'$ on $I_j$. We first show that $f_j = f_i$ and $g_j = g_i$. Indeed, \begin{align*} \mu f_j &= \sum_{k\in N_j\cap S} \alpha_k f_k = \sum_{k\in N_i\cap S} \alpha_k f_k = \mu f_i \end{align*} and since $\mu\neq 0$, $f_j = f_i$. Similarly, $g_j = g_i$. So $f = f_j = f_i = f_j'$ and $g = g_j = g_i = g_j'$ on the set $I_j' = I_i \cup I_j$. Finally, we claim that $W_{\mathcal{I}'} = W$.
Indeed, this follows directly from Lemma \ref{lem: K = indicator function} and the fact that $W = W_\mathcal{I}$. Since $\mathcal{I}'$ is a partition of $[0,1]$ and since $f,g$ are unit eigenfunctions for $W$, Equation \eqref{eq: program norms} holds, and Equations \eqref{eq: program eigen f} and \eqref{eq: program eigen g} hold for all $k\in S'$. This completes the proof of Claim A. Next, we prove the following claim. \\ {\bf Claim B: } If $S$ satisfies the criteria of Proposition \ref{prop: problem solutions}, then without loss of generality the following holds. \begin{enumerate}[(a)] \item\label{item: vertex 1} If there exists some $i\in S$ such that $N_i\cap S = S$, then $i = 1$. \item\label{item: vertices 1234} $S\cap \{1,2,3,4\}\neq\emptyset$. \item\label{item: vertices 234} $S\cap \{2,3,4\}$ is one of $\emptyset, \{4\}, \{2,4\}$, and $\{2,3,4\}$. \item\label{item: vertices 567} $S\cap \{5,6,7\}$ is one of $\{7\}, \{5,7\}$, and $\{5,6,7\}$. \end{enumerate} Since $N_1\cap S = S = N_i\cap S$, Item \eqref{item: vertex 1} follows from Claim A applied to the pair $(i,1)$. Since $f,g$ are orthogonal and $f$ is positive on $[0,1]$, $g$ is positive on a set of positive measure, so Item \eqref{item: vertices 1234} holds. To prove Item \eqref{item: vertices 234}, we have $4$ cases. If $S\cap \{2,3,4\} = \{2\}$, then $N_2\cap S = N_1\cap S$ and we may apply Claim A to the pair $(2,1)$. If $S\cap \{2,3,4\} = \{3\}$ or $\{3,4\}$, then $N_3\cap S = N_4\cap S$ and we may apply Claim A to the pair $(3,4)$. If $S\cap \{2,3,4\} = \{2,3\}$, then $N_2\cap S = N_1\cap S$ and we may apply Claim A to the pair $(2,1)$. So Item \eqref{item: vertices 234} holds. For Item \eqref{item: vertices 567}, we reduce $S\cap \{5,6,7\}$ to one of $\emptyset, \{7\}, \{5,7\}$, and $\{5,6,7\}$ in the same fashion. To eliminate the case where $S\cap \{5,6,7\} = \emptyset$, we simply note that since $f$ and $g$ are orthogonal and $f$ is positive on $[0,1]$, $g$ is negative on a set of positive measure. This completes the proof of Claim B. \input{graphics/table21} After repeatedly applying Claim B, we may replace $S$ with one of the cases found in Table \ref{tab: table 21}. Let $\mathcal{S}_{21}$ denote the sets in Table \ref{tab: table 21}. By definition, \begin{align*} \mathcal{S}_{21} &= \mathcal{S}_{17} \cup \left\{ \{4,7\}, \{2,4,7\}, \{2,3,4,7\}, \{2,3,4,5,7\} \right\} . \end{align*} Finally, we eliminate the $4$ cases in $\mathcal{S}_{21}\setminus \mathcal{S}_{17}$. If $S = \{4,7\}$, then $W$ is a bipartite graphon, hence $\text{spr}(W) \leq 1$, a contradiction since $\max_{U\in\mathcal{W}}\text{spr}(U) > 1$. For the three remaining cases, let $\tau$ be the permutation on $\{2,\dots,7\}$ defined as follows. For all $i\in \{2,3,4\}$, $\tau(i) := i+3$ and $\tau(i+3) := i$. If $S$ is among $\{2,4,7\}, \{2,3,4,7\}$, and $\{2,3,4,5,7\}$, we apply $\tau$ to $S$ in the following sense. Replace $g$ with $-g$ and replace $(\alpha_i, I_i, f_i, g_i)_{i\in S}$ with $(\alpha_{\tau(i)}, I_{\tau(i)}, f_{\tau(i)}, -g_{\tau(i)})_{i\in \tau(S)}$. By careful inspection, it follows that $\tau(S)$ satisfies the criteria from Proposition \ref{prop: problem solutions}. Since $\tau(\{2,4,7\}) = \{4,5,7\}$, $\tau(\{2,3,4,7\}) = \{4,5,6,7\}$, and $\tau(\{2,3,4,5,7\}) = \{2,4,5,6,7\}$, this completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lem: SPR457}} Throughout this subsection, let $(\alpha_4, \alpha_5, \alpha_7)$ be part of a solution to $\text{SPR}_{\{4,5,7\}}$.
First, let $T := \{(\varepsilon_1,\varepsilon_2)\in(-1/3, 2/3)\times (-2/3, 1/3) : \varepsilon_1+\varepsilon_2 \in (0,1)\}$, and for all $\varepsilon = (\varepsilon_1,\varepsilon_2)\in T$, let \begin{align*} M(\varepsilon) &:= \left[\begin{array}{ccc} 2/3-\varepsilon_1 & 0 & 1/3-\varepsilon_2\\ 0 & 0 & 1/3-\varepsilon_2\\ 2/3-\varepsilon_1 & \varepsilon_1 + \varepsilon_2 & 0 \end{array}\right] . \end{align*} As a motivation, suppose $\mu,\nu$, and $(\alpha_4,\alpha_5,\alpha_7)$ are part of a solution to $\text{SPR}_{\{4,5,7\}}$. Then with $\varepsilon := (\varepsilon_1,\varepsilon_2) = (2/3-\alpha_5, 1/3-\alpha_4)$, $\varepsilon\in T$ and $\mu,\nu$ are the maximum and minimum eigenvalues of $M(\varepsilon)$, respectively. By the end of the proof, we show that any solution of $\text{SPR}_{\{4,5,7\}}$ with $\mu-\nu\geq 1$ has $\alpha_7 = 0$. \\ To proceed, we prove the following claims. \\ \\ {\bf Claim A: } For all $\varepsilon\in T$, $M(\varepsilon)$ has two distinct positive eigenvalues and one negative eigenvalue. Since $M(\varepsilon)$ is diagonalizable, it has $3$ real eigenvalues which we may order as $\mu\geq \delta\geq \nu$. Since $\mu\delta\nu = \det(M(\varepsilon)) = -\alpha_4\alpha_5\alpha_7 < 0$, $M(\varepsilon)$ has an odd number of negative eigenvalues. Since $0 < \alpha_5 = \mu + \delta + \nu$, it follows that $\mu \geq \delta > 0 > \nu$. Finally, note by the Perron-Frobenius Theorem that $\mu > \delta$. This completes the proof of Claim A. \\ Next, we define the following quantities, treated as functions of $\varepsilon$ for all $\varepsilon\in T$. For convenience, we suppress the argument ``$\varepsilon$'' in most places. Let $k(x) = ax^3+bx^2+cx+d$ be the characteristic polynomial of $M(\varepsilon)$. By inspection, \begin{align*} &a = 1 &b = \varepsilon_1-\dfrac{2}{3} \\ &c = \dfrac{ (3\varepsilon_2+2) (3\varepsilon_2-1) } {9} &d = \dfrac{ (\varepsilon_1+\varepsilon_2) (3\varepsilon_1-2) (3\varepsilon_2-1) } {9} \end{align*} Continuing, let \begin{align*} &p := \dfrac{ 3a c -b^2 } { 3a^2 } &q := \dfrac{ 2b^3 -9a b c +27a^2 d } { 27a^3 } \\ &A := 2\sqrt{ \dfrac{-p} {3} } &B := \dfrac{ -b } {3a} \\ &\phi := \arccos\left( \dfrac{ 3q } { A p } \right). \end{align*} Let $S(\varepsilon)$ be the difference between the maximum and minimum eigenvalues of $M(\varepsilon)$. We show the following claim. \\ \\ {\bf Claim B: } For all $\varepsilon\in T$, \begin{align*} S(\varepsilon) &= \sqrt{3}\cdot A(\varepsilon)\cdot\cos\left( \dfrac{2\phi(\varepsilon) - \pi}{6} \right). \end{align*} Moreover, $S$ is analytic on $T$. Indeed, by Vi\`{e}te's Formula, using the fact that $k$ has exactly $3$ distinct real roots, the quantities $a(\varepsilon),\dots,\phi(\varepsilon)$ are analytic on $T$. Moreover, the eigenvalues of $M(\varepsilon)$ are $x_0, x_1, x_2$ where, for all $k\in\{0,1,2\}$, \begin{align*} x_k(\varepsilon) &= A(\varepsilon)\cdot \cos\left( \dfrac{\phi + 2\pi\cdot k}{3} \right) + B(\varepsilon) . \end{align*} Moreover, $x_0(\varepsilon),x_1(\varepsilon),x_2(\varepsilon)$ are analytic on $T$. For all $k,\ell\in \{0,1,2\}$, let \begin{align*} D(k,\ell,x) &:= \cos\left( x+\dfrac{2\pi k}{3} \right) - \cos\left( x+\dfrac{2\pi \ell}{3} \right) \end{align*} For all $(k,\ell)\in \{(0,1), (0,2), (2,1)\}$, note the trigonometric identities \begin{align*} D(k,\ell,x) &= \sqrt{3}\cdot\left\{\begin{array}{rl} \cos\left( x - \dfrac{\pi}{6} \right), & (k,\ell) = (0, 1) \\ \cos\left( x + \dfrac{\pi}{6} \right), & (k,\ell) = (0, 2) \\ \sin(x), & (k,\ell) = (2, 1) \end{array}\right. .
\end{align*} By inspection, for all $x\in (0,\pi/3)$, \begin{align*} D(0,1,x) &> \max\left\{ D(0,2,x), D(2,1,x) \right\} \geq \min\left\{ D(0,2,x), D(2,1,x) \right\} \geq 0. \end{align*} Since $A > 0$ and $\phi \in (0,\pi/3)$, the claimed equality holds. Since $x_0(\varepsilon),x_1(\varepsilon)$ are analytic, $S(\varepsilon)$ is analytic on $T$. This completes the proof of Claim B. \\ Next, we compute the derivatives of $S(\varepsilon)$ on $T$. For convenience, for $i\in \{1,2\}$ we denote by $A_i$, $\phi_i$, and $S_i$ the partial derivatives of $A$, $\phi$, and $S$, respectively, with respect to $\varepsilon_i$. Furthermore, let \begin{align*} \psi(\varepsilon) &:= \dfrac{2\phi(\varepsilon)-\pi}{6}. \end{align*} The next claim follows directly from Claim B. \\ \\ {\bf Claim C: } For $i\in \{1,2\}$, on the set $T$ we have \begin{align*} 3S_i &= \sqrt{3}\cdot\left( 3A_i\cdot \cos\left( \psi \right) - A\phi_i \sin\left( \psi \right) \right) . \end{align*} Moreover, each expression is analytic on $T$. Finally, we solve $\text{SPR}_{\{4,5,7\}}$. \\ \\ {\bf Claim D: } If $(\alpha_4,\alpha_5,\alpha_7)$ is a solution to $\text{SPR}_{\{4,5,7\}}$, then $0\in\{\alpha_4,\alpha_5,\alpha_7\}$. With $(\alpha_4,\alpha_5,\alpha_7) := (1/3-\varepsilon_2, 2/3-\varepsilon_1, \varepsilon_1+\varepsilon_2)$ and using the fact that $S$ is analytic on $T$, it is sufficient to eliminate all common zeroes of $S_1$ and $S_2$ on $T$. With the help of a computer algebra system and the formulas for $S_1$ and $S_2$ from Claim C, we replace the system $S_1 = 0$ and $S_2 = 0$ with a polynomial system of equations $P = 0$ and $Q = 0$ whose real solution set contains all solutions of the original system. Here, \begin{align*} P(\varepsilon) &= 9\varepsilon_1^3 + 18\varepsilon_1^2\varepsilon_2 + 54\varepsilon_1\varepsilon_2^2 + 18\varepsilon_2^3 - 15\varepsilon_1^2 - 33\varepsilon_1\varepsilon_2 - 27\varepsilon_2^2 + 5\varepsilon_1 + \varepsilon_2 \end{align*} and $Q = 43046721\varepsilon_1^{18}\varepsilon_2+\cdots + (-532480\varepsilon_2)$ is a polynomial of degree $19$, with coefficients between $-184862311457373$ and $192054273812559$. For brevity, we do not express $Q$ explicitly. To complete the proof of Claim D, it suffices to show that no common real solution to $P = Q = 0$ which lies in $T$ also satisfies $S_1 = S_2 = 0$. Again using a computer algebra system, we first find all common zeroes of $P$ and $Q$ on $\mathbb{R}^2$. Included are the rational solutions $(2/3, -2/3), (-1/3, 1/3), (0,0), (2/3, 1/3),$ and $(2/3, -1/6)$, which do not lie in $T$. Furthermore, the solution $(1.2047\dots, 0.0707\dots)$ may also be eliminated. For the remaining $4$ zeroes, $S_1, S_2\neq 0$. A notebook showing these calculations can be found at \cite{2021riasanovsky-spread}.
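To complement Claim D, the following coarse search (Python with NumPy; an illustration only, with the rigorous elimination of critical points done symbolically above) maximizes the spread of the matrix $N(\alpha)$ from Claim E below over a grid on the simplex $\alpha_4+\alpha_5+\alpha_7 = 1$, and locates the maximum at $\alpha_7 = 0$ and $\alpha_4 \approx 1/3$.
\begin{verbatim}
import numpy as np

def spr(a4, a5, a7):
    # Spread of N(alpha), defined in Claim E below.
    N = np.array([[a5, 0.0, a4], [0.0, 0.0, a4], [a5, a7, 0.0]])
    w = np.linalg.eigvals(N).real
    return w.max() - w.min()

grid = np.linspace(0.0, 1.0, 201)
best = max(((a4, a7) for a4 in grid for a7 in grid if a4 + a7 <= 1),
           key=lambda t: spr(t[0], 1 - t[0] - t[1], t[1]))
print(best, spr(best[0], 1 - best[0] - best[1], best[1]))
# expect a7 = 0, a4 near 1/3, and spread near 2/sqrt(3)
\end{verbatim}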
{\bf Claim E: } If $\mu, \nu$, and $\alpha = (\alpha_4,\alpha_5,\alpha_7)$ are part of a solution to $\text{SPR}_{\{4,5,7\}}$ such that $\mu-\nu\geq 1$, then $\alpha_7 = 0$. By definition of $\text{SPR}_{\{4,5,7\}}$, $\mu$ and $\nu$ are eigenvalues of the matrix \begin{align*} N(\alpha) := \left[\begin{array}{ccc} \alpha_5 & 0 & \alpha_4\\ 0 & 0 & \alpha_4\\ \alpha_5 & \alpha_7 & 0 \end{array}\right]. \end{align*} Furthermore, $N(\alpha)$ has characteristic polynomial \begin{align*} p(x) &= x^3 - \alpha_5 x^2 -\alpha_4(\alpha_5+\alpha_7)\,x +\alpha_4\alpha_5\alpha_7 . \end{align*} Recall that $\alpha_4+\alpha_5+\alpha_7 = 1$. By Claim D, $0\in\{\alpha_4,\alpha_5,\alpha_7\}$, and it follows that $p\in\{p_4, p_5, p_7\}$, where $p_i$ denotes the characteristic polynomial in the case $\alpha_i = 0$: \begin{align*} p_4(x) &:= x^2\cdot (x-\alpha_5), \\ p_5(x) &:= x\cdot (x^2-\alpha_4(1-\alpha_4)), \text{ and } \\ p_7(x) &:= x\cdot (x^2-(1-\alpha_4)x-\alpha_4(1-\alpha_4)). \end{align*} If $p = p_4$, then $\mu-\nu = \alpha_5\leq 1$, and if $p = p_5$, then $\mu-\nu = 2\sqrt{\alpha_4(1-\alpha_4)} \leq 1$. So $p = p_7$, which completes the proof of Claim E. This completes the proof of Lemma \ref{lem: SPR457}. \subsection{Proof of Lemma \ref{lem: no exceptional vertices}} \label{sec: 2 by 2 reduction} First, we find $S_z(\varepsilon_1,\varepsilon_2)$ using Vi\`{e}te's Formula. In doing so, we define functions $k_z(\varepsilon_1,\varepsilon_2;x),\dots,\delta_z(\varepsilon_1,\varepsilon_2)$. To ease the burden on the reader, we suppress the subscript $z$ and the arguments $\varepsilon_1,\varepsilon_2$ when convenient and unambiguous. Let $k(x) = ax^3+bx^2+cx+d$ be the characteristic polynomial of $M_z(\varepsilon_1,\varepsilon_2)$. By inspection, \begin{align*} &a = 1 &b = \varepsilon_1+z-\dfrac{2}{3} \\ &c = \dfrac{ (3\varepsilon_2+2) (3\varepsilon_2-1) } {9} &d = \dfrac{ (\varepsilon_1+\varepsilon_2) (3\varepsilon_1+3z-2) (3\varepsilon_2-1) } {9} \end{align*} Continuing, let \begin{align*} &p := \dfrac{ 3a c -b^2 } { 3a^2 } &q := \dfrac{ 2b^3 -9a b c +27a^2 d } { 27a^3 } \\ &A := 2\sqrt{ \dfrac{-p} {3} } &B := \dfrac{ -b } {3a} \\ &\phi := \arccos\left( \dfrac{ 3q } { A p } \right). \end{align*} By Vi\`{e}te's Formula, the roots of $k_z(\varepsilon_1,\varepsilon_2;x)$ are the suggestively defined quantities \begin{align*} \mu := A \cos\left( \dfrac{ \phi } {3} \right) +B, \qquad \nu := A \cos\left( \dfrac{ \phi +2\pi } {3} \right) +B, \qquad \delta := A \cos\left( \dfrac{ \phi +4\pi } {3} \right) +B . \end{align*} First, we prove the following claim. \\ \\ {\bf Claim A: } If $(\varepsilon_1,\varepsilon_2,z)$ is sufficiently close to $(0,0,0)$, then \begin{align}\label{eq: spread trig formula} S_z(\varepsilon_1,\varepsilon_2) &= A_z(\varepsilon_1,\varepsilon_2)\sqrt{3}\, \cdot\cos\left( \dfrac{2\phi_z(\varepsilon_1,\varepsilon_2)-\pi}{6} \right) . \end{align} Indeed, suppose $z>0$ and $z\to 0$. Then for all $(\varepsilon_1,\varepsilon_2)\in (-3z,3z)^2$, $\varepsilon_1,\varepsilon_2\to 0$. With the help of a computer algebra system, we substitute in $z=0$ and $\varepsilon_1,\varepsilon_2=0$ to find the limits: \begin{align*} (a,b,c,d) &\to \left( 1, \dfrac{-2}{3}, \dfrac{-2}{9}, 0 \right) \\ (p,q) &\to \left( \dfrac{-10}{27}, \dfrac{-52}{729} \right) \\ (A,B,\phi) &\to \left( \dfrac{2\sqrt{10}}{9}, \dfrac{2}{9}, \arccos\left( \dfrac{13\sqrt{10}}{50} \right) \right). \end{align*} Using a computer algebra system, these substitutions imply that \begin{align*} (\mu,\nu,\delta) \to \left( 0.9107\dots, -0.2440\dots, 0 \right). \end{align*} So for all $z$ sufficiently small, $S = \mu-\nu$. After some trigonometric simplification, \begin{align*} \mu - \nu &= A\cdot\left( \cos\left( \dfrac{\phi}{3} \right) -\cos\left( \dfrac{\phi+2\pi}{3} \right) \right) = A\sqrt{3}\, \cdot\cos\left( \dfrac{2\phi-\pi}{6} \right), \end{align*} which proves Equation \eqref{eq: spread trig formula}. This completes the proof of Claim A. \\ Now we prove the following claim. \\ \\ {\bf Claim B: } There exists a constant $C_0>0$ such that the following holds. If $|z|$ is sufficiently small, then $S_z$ is concave-down on $[-C_0,C_0]^2$ and strictly decreasing on $[-C_0,C_0]^2\setminus [-C_0z, C_0z]^2$.
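Before turning to the proof, we note that the second-derivative values quoted below can be reproduced numerically. The following finite-difference sketch (Python with NumPy; illustrative only, the symbolic computations cited below are the ones used in the argument) approximates the Hessian of $S_0$ at $(0,0)$ using the matrix $M_z(\varepsilon_1,\varepsilon_2)$ defined above.
\begin{verbatim}
import numpy as np

def S(e1, e2, z):
    # Spread of M_z(eps1, eps2), the matrix defined above.
    M = np.array([[2/3 - e1 - z, 0.0,     1/3 - e2],
                  [0.0,          0.0,     1/3 - e2],
                  [2/3 - e1,     e1 + e2, 0.0]])
    w = np.linalg.eigvals(M).real
    return w.max() - w.min()

h = 1e-4  # central finite differences at (0, 0) with z = 0
f = lambda e1, e2: S(e1, e2, 0.0)
S11 = (f(h, 0) - 2 * f(0, 0) + f(-h, 0)) / h**2
S22 = (f(0, h) - 2 * f(0, 0) + f(0, -h)) / h**2
S12 = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h**2)
print(S11, S12, S22)       # approx -8.66, -8.66, -11.26
print(S11 * S22 - S12**2)  # approx 22.5 = D_0(0, 0)
\end{verbatim}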
First, we define \begin{align*} D_z(\varepsilon_1,\varepsilon_2) := \restr{\left( \dfrac{\partial^2 S_z}{\partial\varepsilon_1^2} \cdot \dfrac{\partial^2 S_z}{\partial\varepsilon_2^2} - \left(\dfrac{\partial^2 S_z}{\partial\varepsilon_1\partial\varepsilon_2}\right)^2 \right)} {(\varepsilon_1,\varepsilon_2,z)} . \end{align*} As a function of $(\varepsilon_1,\varepsilon_2)$, $D_z$ is the determinant of the Hessian matrix of $S_z$. Using a computer algebra system, we note that \begin{align*} D_0(0,0) &= 22.5\dots, \quad \text{ and } \\ \restr{\left( \dfrac{ \partial^2S } { \partial \varepsilon_1^2 }, \dfrac{ \partial^2S } { \partial \varepsilon_1 \partial \varepsilon_2 }, \dfrac{ \partial^2S } { \partial \varepsilon_2^2 } \right) } {(0,0,0)} &= \left( -8.66\dots, -8.66\dots, -11.26\dots \right). \end{align*} Since $S$ is analytic at $(0,0,0)$, there exist constants $C_1, C_2>0$ such that the following holds. For all $z\in [-C_1,C_1]$, $S_z$ is concave-down on $[-C_1,C_1]^2$. This completes the proof of the first claim. Moreover, for all $z\in [-C_1,C_1]$ and for all $(\varepsilon_1,\varepsilon_2)\in [-C_1,C_1]^2$, \begin{align*} \restr{\max\left\{ \dfrac{\partial^2S_z} {\partial\varepsilon_1^2}, \dfrac{\partial^2S_z} {\partial\varepsilon_1\partial\varepsilon_2}, \dfrac{\partial^2S_z} {\partial\varepsilon_2^2} \right\}} {(\varepsilon_1,\varepsilon_2,z)} &\leq -C_2. \end{align*} To complete the proof of the second claim, note also that since $S$ is analytic at $(0,0,0)$, there exist constants $C_3,C_4>0$ such that for all $z\in [-C_3,C_3]$ and all $(\varepsilon_1,\varepsilon_2)\in [-C_3, C_3]^2$, \begin{align*} \dfrac{\partial^2 S}{\partial z\partial\varepsilon_i} \leq C_4. \end{align*} Since $(0,0)$ is a local maximum of $S_0$, \begin{align*} \restr{ \dfrac{\partial S} {\partial \varepsilon_i} } {(\varepsilon_1,\varepsilon_2,z)} &= \restr{ \dfrac{\partial S} {\partial \varepsilon_i} } {(0,0,0)} +\int_{w=0}^{z} \restr{ \dfrac{\partial^2 S} {\partial z\partial \varepsilon_i} } {(0,0,w)} dw \\ &\quad + \int_{ {\bf u} = (0,0)}^{ (\varepsilon_1,\varepsilon_2) } \restr{\dfrac{ \partial^2 S } { \partial {\bf u}\partial \varepsilon_i }} {({\bf u}, z)} d{\bf u} \\ &\leq C_4\cdot z - C_2 \cdot \|(\varepsilon_1,\varepsilon_2)\|_2. \end{align*} Since $C_2, C_4>0$, taking $C_0$ sufficiently small in terms of $C_1,\dots,C_4$ completes the proof of Claim B. \\ Next, we prove the following claim. \\ \\ {\bf Claim C: } If $z$ is sufficiently small, then $\mathcal{P}_{z,C_0}$ is solved by a unique point $(\varepsilon_1^*,\varepsilon_2^*) = (\varepsilon_1^*(z),\varepsilon_2^*(z))$. Moreover as $z\to0$, \begin{align}\label{eq: optimal epsilon approximation} \left( \varepsilon_1^*, \varepsilon_2^* \right) &= \left( (1+o(z))\, \dfrac{7z}{30}, (1+o(z))\, \dfrac{-z}{3} \right). \end{align} Indeed, the existence of a unique maximum $(\varepsilon_1^*, \varepsilon_2^*)$ on $[-C_0,C_0]^2$ follows from the fact that $S_z$ is strictly concave-down and bounded on $[-C_0, C_0]^2$ for all $z$ sufficiently small. Since $S_z$ is strictly decreasing on $[-C_0,C_0]^2\setminus (-C_0z, C_0z)^2$, it follows that $(\varepsilon_1^*, \varepsilon_2^*)\in (-C_0z, C_0z)^2$.
For the second claim, note that since $S$ is analytic at $(0,0,0)$, \begin{align*} 0 &= \restr{ \dfrac{\partial S} {\partial \varepsilon_i} } {(\varepsilon_1^*, \varepsilon_2^*, z)} = \sqrt{3}\cdot \left( \dfrac{\partial A}{\partial\varepsilon_i}\cdot \cos\left( \dfrac{2\phi-\pi}{6} \right) - \dfrac{A}{3}\cdot \dfrac{\partial\phi}{\partial\varepsilon_i}\cdot \sin\left( \dfrac{2\phi-\pi}{6} \right) \right) \end{align*} for both $i = 1$ and $i = 2$. Let \begin{align*} \tau_i := \dfrac{ 3\cdot \dfrac{\partial A}{\partial\varepsilon_i} } { A\cdot \dfrac{\partial \phi}{\partial \varepsilon_i} } \end{align*} for both $i = 1$ and $i = 2$. Then by Equation \eqref{eq: spread trig formula}, \begin{align*} \restr{ \arctan(\tau_i) } {(\varepsilon_1^*,\varepsilon_2^*,z)} &= \restr{ \dfrac{2\phi-\pi}{6} } {(\varepsilon_1^*,\varepsilon_2^*,z)} \end{align*} for both $i=1$ and $i=2$. We first consider the linear approximations of the above quantities under the limit $(\varepsilon_1,\varepsilon_2,z)\to (0,0,0)$. Here, we write $f(\varepsilon_1,\varepsilon_2,z) \sim g(\varepsilon_1,\varepsilon_2,z)$ to mean that \begin{align*} f(\varepsilon_1,\varepsilon_2,z) &= \left( 1 +o\left(\max\left\{ |\varepsilon_1|, |\varepsilon_2|, |z| \right\}\right) \right) \cdot g(\varepsilon_1,\varepsilon_2,z). \end{align*} With the help of a computer algebra system, we note that \begin{align*} \arctan\left( \tau_1 \right) &\sim \dfrac{ -78\varepsilon_1 -96\varepsilon_2 -3z -40\arctan\left( \dfrac{1}{3} \right) } {40} \\ \arctan\left( \tau_2 \right) &\sim \dfrac{ -64\varepsilon_1 -103\varepsilon_2 -14z -20\arctan\left( \dfrac{1}{3} \right) } {20} \\ \dfrac{2\phi-\pi} {6} &\sim \dfrac{ 108\varepsilon_1 +81\varepsilon_2 +18z +20\arccos\left( \dfrac{13\sqrt{10}}{50} \right) -10\pi } {60} . \end{align*} By inspection, the constant terms match due to the identity \begin{align*} -\arctan\left( \dfrac{1}{3} \right) &= \dfrac{1}{3} \arccos\left( \dfrac{13\sqrt{10}}{50} \right) -\dfrac{\pi}{6}. \end{align*} Since $\max\left\{|\varepsilon_1^*|, |\varepsilon_2^*|\right\}\leq C_0z$, replacing $(\varepsilon_1,\varepsilon_2)$ with $(\varepsilon_1^*,\varepsilon_2^*)$ implies that \begin{align*} \dfrac{-78\varepsilon_1^*-96\varepsilon_2^*-3z}{2} &= (1+o(z))\cdot (36\varepsilon_1^*+27\varepsilon_2^*+6z), \quad \text{ and } \\ -64\varepsilon_1^*-103\varepsilon_2^*-14z &= (1+o(z))\cdot (36\varepsilon_1^*+27\varepsilon_2^*+6z) \end{align*} as $z\to 0$. Solving this system of two equations in the two unknowns $\varepsilon_1^*, \varepsilon_2^*$ by Gaussian Elimination, it follows that \begin{align*} (\varepsilon_1^*, \varepsilon_2^*) &= \left( (1+o(z))\cdot \dfrac{7z}{30}, (1+o(z))\cdot \dfrac{-z}{3} \right). \end{align*} This completes the proof of Claim C. \\ \\ For the next step, we prove the following claim. First, let $\mathcal{Q}_{n}$ denote the program formed from $\mathcal{P}_{n^{-1}, C_0}$ subject to the added constraint that $\textstyle n\cdot( \frac{2}{3}-\varepsilon_1 ),n\cdot( \frac{1}{3}-\varepsilon_2 )\in\mathbb{Z}$. \\ \\ {\bf Claim D: } For all $n$ sufficiently large, $\mathcal{Q}_{n}$ is solved by a unique point $(n_1^*, n_3^*)$ which satisfies $n_1^* + n_3^* = n$. \\ \\ Note by Lemma \ref{lem: few exceptional vertices} that for all $n$ sufficiently large, \begin{align*} \max\left\{ \left|\dfrac{n_1}{n}-\dfrac{2}{3}\right|, \left|\dfrac{n_3}{n}-\dfrac{1}{3}\right| \right\} &\leq C_0.
\end{align*} Moreover, by Claim C, $\mathcal{P}_{n^{-1},C_0}$ is solved uniquely by \begin{align*} (\varepsilon_1^*, \varepsilon_2^*) &= \left( (1+o(1))\cdot \dfrac{7}{30n}, (1+o(1))\cdot \dfrac{-1}{3n} \right). \end{align*} Since \begin{align*} \dfrac{2n}{3}-n\cdot \varepsilon_1^* &= \dfrac{2n}{3}-(1+o(1))\cdot \dfrac{7}{30} \end{align*} and $7/30 < 1/3$, it follows for $n$ sufficiently large that $2n/3-n\cdot\varepsilon_1^*\in I_1$ where \begin{align*} I_1 &:= \left\{\begin{array}{rl} \left( \dfrac{2n}{3} -1, \dfrac{2n}{3} \right), & 3 \mid n \\ \left( \left\lfloor \dfrac{2n}{3} \right\rfloor, \left\lceil \dfrac{2n}{3} \right\rceil \right), & 3 \nmid n \end{array}\right. . \end{align*} Similarly, since \begin{align*} n\cdot(\varepsilon_1^* + \varepsilon_2^*) &= (1+o(1)) \cdot \left( \dfrac{7}{30}-\dfrac{1}{3} \right) = (1+o(1)) \cdot \dfrac{-1}{10} \end{align*} and $1/10 < 1$, it follows that $n\cdot (\varepsilon_1^*+\varepsilon_2^*)\in (-1,0)$. Altogether, \begin{align*} \left( \dfrac{2n}{3}-n\cdot \varepsilon_1^*, n\cdot(\varepsilon_1^*+\varepsilon_2^*) \right) &\in I_1\times (-1, 0). \end{align*} Note that to solve $\mathcal{Q}_{n}$, it is sufficient to maximize $S_{n^{-1}}$ on the set $[-C_0, C_0]^2 \cap \left\{ \left( \tfrac{2}{3}-\tfrac{u}{n},\ \tfrac{1}{3}-\tfrac{v}{n} \right) \right\}_{u,v\in\mathbb{N}}$. Since $S_{n^{-1}}$ is concave-down on $I_1\times (-1, 0)$ (viewed in the coordinates above), $(n_1^*, n-n_1^*-n_3^*)$ is a corner of the box $I_1\times (-1,0)$. So $n_1^*+n_3^* = n$, which implies Claim D. This completes the proof of the main result. \begin{comment} For completeness, we prove this result directly. Let $G = K_{n_1}\vee K_{n_2}^c$. Then $A_G$ has the same spread as the ``reduced matrix'' defined as \begin{align*} A(n_1,n_2) &= \left[\begin{array}{cc} n_1-1 & n_2\\ n_1 & 0 \end{array}\right]. \end{align*} By inspection, $A(n_1,n_2)$ has characteristic polynomial $x^2 - (n_1 - 1)x - n_1n_2$ and thus its eigenvalues are \begin{align*} \dfrac{ n_1-1 \pm \sqrt{ n_1^2 + 4n_1n_2 - 2n_1+1 } } {2} \end{align*} and \begin{align*} \text{spr}(G) &= \sqrt{ n_1^2 +4n_1n_2 -2n_1 +1 } . \end{align*} Making the substitution $(n_1, n_2) = (rn, (1-r)n)$ and simplifying, we see that \begin{align*} \text{spr}(G)^2 &= -3n^2r^2 +2n(2n-1)r +1 \\ &= -3n^2 \cdot\left( \left( r-\dfrac{2n-1}{3n} \right)^2 -\dfrac{4(n^2-n+1)}{9n^2} \right), \end{align*} which is maximized when $r = n_1/n$ is nearest $2/3-1/(3n)$, or equivalently, when $n_1$ is nearest $(2n-1)/3$. After noting that \begin{align*} \left\lfloor \dfrac{2n}{3} \right\rfloor - \dfrac{2n-1}{3} &= \left\{\begin{array}{rl} 1/3, & n\equiv0\pmod3\\ -1/3, & n\equiv1\pmod3\\ 0, & n\equiv2\pmod3 \end{array}\right. , \end{align*} the desired claim follows. \end{comment} \section{The Bipartite Spread Conjecture}\label{sec:bispread} In \cite{gregory2001spread}, the authors investigated the structure of graphs which maximize the spread over all graphs with a fixed number of vertices $n$ and edges $m$, denoted by $s(n,m)$. In particular, they proved the upper bound \begin{equation}\label{eqn:spread_bound} s(G) \le \lambda_1 + \sqrt{2 m - \lambda^2_1} \le 2 \sqrt{m}, \end{equation} and noted that equality holds throughout if and only if $G$ is the union of isolated vertices and $K_{p,q}$, for some $p+q \le n$ satisfying $m=pq$ \cite[Thm. 1.5]{gregory2001spread}. This led the authors to conjecture that if $G$ has $n$ vertices, $m \le \lfloor n^2/4 \rfloor$ edges, and spread $s(n,m)$, then $G$ is bipartite \cite[Conj. 1.4]{gregory2001spread}.
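The following sketch (Python with NumPy; an illustration only) checks Inequality \eqref{eqn:spread_bound} on random graphs and confirms the equality case $s(K_{p,q}) = 2\sqrt{pq}$.
\begin{verbatim}
import numpy as np

def spread(A):
    w = np.linalg.eigvalsh(A)
    return w[-1] - w[0]

def complete_bipartite(p, q):
    A = np.zeros((p + q, p + q))
    A[:p, p:] = 1.0
    A[p:, :p] = 1.0
    return A

# Equality case: s(K_{p,q}) = 2 sqrt(pq).
for p, q in [(2, 3), (3, 5), (4, 4)]:
    print(spread(complete_bipartite(p, q)), 2 * np.sqrt(p * q))

# Random sanity check of s(G) <= 2 sqrt(m).
rng = np.random.default_rng(0)
for _ in range(100):
    U = np.triu((rng.random((8, 8)) < 0.5).astype(float), 1)
    A = U + U.T
    m = int(A.sum()) // 2
    assert spread(A) <= 2 * np.sqrt(m) + 1e-9
\end{verbatim}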
In this section, we prove an asymptotic form of this conjecture and provide an infinite family of counterexamples to the exact conjecture which verifies that the error in the aforementioned asymptotic result is of the correct order of magnitude. Recall that $s_b(n,m)$, $m \le \lfloor n^2/4 \rfloor$, is the maximum spread over all bipartite graphs with $n$ vertices and $m$ edges. To explicitly compute the spread of certain graphs, we make use of the theory of equitable partitions. In particular, we note that if $\phi$ is an automorphism of $G$, then the quotient matrix of $A(G)$ with respect to $\phi$, denoted by $A_\phi$, satisfies $\Lambda(A_\phi) \subset \Lambda(A)$, and therefore $s(G)$ is at least the spread of $A_\phi$ (for details, see \cite[Section 2.3]{brouwer2011spectra}). Additionally, we require two propositions, one regarding the largest spectral radius of subgraphs of $K_{p,q}$ of a given size, and another regarding the largest gap between sizes which correspond to a complete bipartite graph of order at most $n$. Let $K_{p,q}^m$, $0 \le pq-m <\min\{p,q\}$, be the subgraph of $K_{p,q}$ resulting from removing $pq-m$ edges all incident to some vertex in the larger side of the bipartition (if $p=q$, the vertex can be from either set). In \cite{liu2015spectral}, the authors proved the following result. \begin{proposition}\label{prop:bi_spr} If $0 \le pq-m <\min\{p,q\}$, then $K_{p,q}^m$ maximizes $\lambda_1$ over all subgraphs of $K_{p,q}$ of size $m$. \end{proposition} We also require estimates regarding the longest sequence of consecutive sizes $m < \lfloor n^2/4\rfloor$ for which there does not exist a complete bipartite graph on at most $n$ vertices and exactly $m$ edges. As pointed out by \cite{pc1}, the result follows quickly by induction. However, for completeness, we include a brief proof. \begin{proposition}\label{prop:seq} The length of the longest sequence of consecutive sizes $m < \lfloor n^2/4\rfloor$ for which there does not exist a complete bipartite graph on at most $n$ vertices and exactly $m$ edges is zero for $n \le 4$ and at most $\sqrt{2n-1}-1$ for $n \ge 5$. \end{proposition} \begin{proof} We proceed by induction. By inspection, for every $n \le 4$ and $m \le \lfloor n^2/4 \rfloor$, there exists a complete bipartite graph of size $m$ and order at most $n$, and so the length of the longest sequence is trivially zero for $n \le 4$. When $n = m = 5$, there is no complete bipartite graph of order at most five with exactly five edges. This is the only such instance for $n = 5$, and so the length of the longest sequence for $n = 5$ is one. Now, suppose that the statement holds for graphs of order at most $n-1$, for some $n > 5$. We aim to show the statement for graphs of order at most $n$. By our inductive hypothesis, it suffices to consider only sizes $m \ge\lfloor (n-1)^2/4 \rfloor$ and complete bipartite graphs on $n$ vertices. We have $$\left( \frac{n}{2} + k \right)\left( \frac{n}{2} - k \right) \ge \frac{(n-1)^2}{4} \qquad \text{ for} \quad |k| \le \frac{\sqrt{2n-1}}{2}.$$ When $1 \le k \le \sqrt{2n-1}/2$, the difference between the sizes of $K_{n/2+k-1,n/2-k+1}$ and $K_{n/2+k,n/2-k}$ satisfies \begin{align*} \big| E\big(K_{\frac{n}{2}+k-1,\frac{n}{2}-k+1}\big)\big| - \big| E\big(K_{n/2+k,n/2-k}\big)\big| &=2k-1 \le \sqrt{2n-1} -1. \end{align*} Let $k^*$ be the largest value of $k$ satisfying $k \le \sqrt{2n-1}/2$ and $n/2 + k \in \mathbb{N}$.
Then \begin{align*} \big| E\big(K_{\frac{n}{2}+k^*,\frac{n}{2}-k^*}\big)\big| &< \left(\frac{n}{2} + \frac{\sqrt{2n-1}}{2} -1 \right)\left(\frac{n}{2} - \frac{\sqrt{2n-1}}{2} +1 \right) \\ &= \sqrt{2n-1} + \frac{(n-1)^2}{4} - 1, \end{align*} and the difference between the sizes of $K_{n/2+k^*,n/2-k^*}$ and $K_{\lceil \frac{n-1}{2}\rceil,\lfloor \frac{n-1}{2}\rfloor}$ is at most \begin{align*} \big| E\big(K_{\frac{n}{2}+k^*,\frac{n}{2}-k^*}\big)\big| - \big| E\big(K_{\lceil \frac{n-1}{2}\rceil,\lfloor \frac{n-1}{2}\rfloor}\big)\big| &< \sqrt{2n-1} + \frac{(n-1)^2}{4} -\left\lfloor \frac{(n-1)^2}{4} \right\rfloor - 1 \\ &< \sqrt{2n-1}. \end{align*} Combining these two estimates completes our inductive step, and the proof. \end{proof} We are now prepared to prove an asymptotic version of \cite[Conjecture 1.4]{gregory2001spread}, and provide an infinite class of counterexamples that illustrates that the asymptotic version under consideration is the tightest version of this conjecture possible. \begin{theorem} $$s(n,m) - s_b(n,m) \le \frac{1+16 \,m^{-3/4}}{m^{3/4}}\, s(n,m)$$ for all $n,m \in \mathbb{N}$ satisfying $m \le \lfloor n^2/4\rfloor$. In addition, for any $\epsilon>0$, there exists some $n_\epsilon$ such that $$s(n,m) - s_b(n,m) \ge \frac{1-\epsilon}{m^{3/4}} \, s(n,m)$$ for all $n\ge n_\epsilon$ and some $m \le \lfloor n^2/4\rfloor$ depending on $n$. \end{theorem} \begin{proof} The main idea of the proof is as follows. To obtain an upper bound on $s(n,m) - s_b(n,m)$, we upper bound $s(n,m)$ by $2 \sqrt{m}$ using Inequality \eqref{eqn:spread_bound}, and we lower bound $s_b(n,m)$ by the spread of some specific bipartite graph. To obtain a lower bound on $s(n,m) - s_b(n,m)$ for a specific $n$ and $m$, we explicitly compute $s_b(n,m)$ using Proposition \ref{prop:bi_spr}, and lower bound $s(n,m)$ by the spread of some specific non-bipartite graph. First, we analyze the spread of $K_{p,q}^m$, $0 < pq-m <q \le p$, a quantity that will be used in the proof of both the upper and lower bound. Let us denote the vertices in the bipartition of $K_{p,q}^m$ by $u_1,...,u_p$ and $v_1,...,v_{q}$, and suppose without loss of generality that $u_1$ is not adjacent to $v_1,...,v_{pq-m}$. Then $$\phi = (u_1)(u_2,...,u_p)(v_1,...,v_{pq-m})(v_{pq-m+1},...,v_{q})$$ is an automorphism of $K^m_{p,q}$. The corresponding quotient matrix is given by $$ A_\phi = \begin{pmatrix} 0 & 0 & 0 & m-(p-1)q \\ 0 & 0 & pq-m & m-(p-1)q \\ 0 & p-1 & 0 & 0 \\ 1 & p-1 & 0 & 0 \end{pmatrix},$$ which has characteristic polynomial $$Q(p,q,m) = \det[A_\phi - \lambda I] = \lambda^4 -m \lambda^2 + (p-1)(m-(p-1)q)(pq-m),$$ and, therefore, \begin{equation}\label{eqn:bispread_exact} s\left(K^m_{p,q}\right) \ge 2 \left( \frac{m + \sqrt{m^2-4(p-1)(m-(p-1)q)(pq-m)}}{2} \right)^{1/2}. \end{equation} For $pq = \Omega(n^2)$ and $n$ sufficiently large, this lower bound is actually an equality, as $A(K^m_{p,q})$ is a perturbation of the adjacency matrix of a complete bipartite graph with each partite set of size $\Omega(n)$ by an $O(\sqrt{n})$ norm matrix. For the upper bound, we only require the inequality, but for the lower bound, we assume $n$ is large enough so that this is indeed an equality. Next, we prove the upper bound. For some fixed $n$ and $m\le \lfloor n^2/4 \rfloor$, let $m = pq -r$, where $p,q,r \in \mathbb{N}$, $p+q \le n$, and $r$ is as small as possible. If $r = 0$, then by \cite[Thm. 1.5]{gregory2001spread} (described above), $s(n,m) = s_b(n,m)$ and we are done.
Otherwise, we note that $0<r < \min \{p,q\}$, and so Inequality \eqref{eqn:bispread_exact} is applicable (in fact, by Proposition \ref{prop:seq}, $r = O(\sqrt{n})$). Using the upper bound $s(n,m) \le 2 \sqrt{m}$ and Inequality \eqref{eqn:bispread_exact}, we have \begin{equation}\label{eqn:spr_upper} \frac{s(n,pq-r)-s\left(K^{m}_{p,q}\right)}{s(n,pq-r)} \le 1 - \left(\frac{1}{2}+\frac{1}{2} \sqrt{1-\frac{4(p-1)(q-r) r}{(pq-r)^2}} \right)^{1/2}. \end{equation} To upper bound $r$, we use Proposition \ref{prop:seq} with $n'=\lceil 2 \sqrt{m}\rceil \le n$ and $m$. This implies that $$ r \le \sqrt{2 \lceil 2 \sqrt{m}\rceil -1} -1 < \sqrt{2 ( 2 \sqrt{m}+1) -1} -1 = \sqrt{ 4 \sqrt{m} +1}-1 \le 2 m^{1/4}.$$ Recall that $\sqrt{1-x} \ge 1 - x/2 - x^2/2$ for all $x \in [0,1]$, and so \begin{align*} 1 - \big(\tfrac{1}{2} + \tfrac{1}{2} \sqrt{1-x} \big)^{1/2} &\le 1 - \big( \tfrac{1}{2} + \tfrac{1}{2} (1 - \tfrac{1}{2}x - \tfrac{1}{2}x^2) \big)^{1/2} = 1 - \big(1 - \tfrac{1}{4} (x + x^2) \big)^{1/2} \\ &\le 1 - \big(1 - \tfrac{1}{8}(x + x^2) - \tfrac{1}{32}(x + x^2)^2 \big) \\ &\le \tfrac{1}{8} x + \tfrac{1}{4} x^2 \end{align*} for $ x \in [0,1]$. To simplify Inequality \eqref{eqn:spr_upper}, we observe that $$\frac{4(p-1)(q-r)r}{(pq-r)^2} \le \frac{4r}{m} \le \frac{8}{m^{3/4}}.$$ Therefore, $$\frac{s(n,pq-r)-s\left(K^{m}_{p,q}\right)}{s(n,pq-r)} \le \frac{1}{m^{3/4}}+ \frac{16}{m^{3/2}}.$$ This completes the proof of the upper bound. Finally, we proceed with the proof of the lower bound. Let us fix some $0<\epsilon<1$, and consider some sufficiently large $n$. Let $m = (n/2+k)(n/2-k)+1$, where $k$ is the smallest number satisfying $n/2 + k \in \mathbb{N}$ and $\hat \epsilon:=1 - 2k^2/n < \epsilon/2$ (here we require $n = \Omega(1/\epsilon^2)$). Denote the vertices in the bipartition of $K_{n/2+k,n/2-k}$ by $u_1,...,u_{n/2+k}$ and $v_1,...,v_{n/2-k}$, and consider the graph $K^+_{n/2+k,n/2-k}:=K_{n/2+k,n/2-k} \cup \{(v_1,v_2)\}$ resulting from adding one edge to $K_{n/2+k,n/2-k}$ between two vertices in the smaller side of the bipartition. Then $$ \phi = (u_1 ,...,u_{n/2+k})(v_1, v_2)(v_3,...,v_{n/2-k})$$ is an automorphism of $K^+_{n/2+k,n/2-k}$, and $$A_\phi = \begin{pmatrix} 0 & 2 & n/2- k - 2 \\ n/2+k & 1 & 0 \\ n/2+k & 0 & 0 \end{pmatrix} $$ has characteristic polynomial \begin{align*} \det[A_\phi - \lambda I] &= -\lambda^3 +\lambda^2 + \left(n^2/4 - k^2\right) \lambda - (n/2+k)(n/2-k-2) \\ &= -\lambda^3 + \lambda^2 + \left(\frac{ n^2}{4} - \frac{(1-\hat \epsilon) n}{2} \right)\lambda - \left(\frac{n^2}{4} - \frac{(3-\hat \epsilon)n}{2} -\sqrt{2(1-\hat \epsilon)n} \right). \end{align*} By matching higher order terms, we obtain $$ \lambda_{max}(A_\phi) = \frac{n}{2}-\frac{1-\hat \epsilon}{2} + \frac{\left( 8-(1-\hat \epsilon)^2 \right)}{4 n} +o(1/n),$$ $$\lambda_{min}(A_\phi) = -\frac{n}{2}+\frac{1-\hat \epsilon}{2} + \frac{\left( 8+(1-\hat \epsilon)^2 \right)}{4 n} +o(1/n),$$ and $$s(K^+_{n/2+k,n/2-k}) \ge n-(1-\hat \epsilon)-\frac{(1-\hat \epsilon)^2}{2n} + o(1/n).$$ Next, we aim to compute $s_b(n,m)$, $m = (n/2+k)(n/2-k)+1$. By Proposition \ref{prop:bi_spr}, $s_b(n,m)$ is equal to the maximum of $s(K^m_{n/2+\ell,n/2-\ell})$ over all $\ell \in [0,k-1]$, $k-\ell \in \mathbb{N}$. 
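As a numerical sanity check of Inequality \eqref{eqn:bispread_exact} (a sketch, independent of the argument), one may compare the spread of $K^m_{p,q}$ with the closed form for small parameters:
\begin{verbatim}
import numpy as np

def spread(A):
    w = np.linalg.eigvalsh(A)
    return w[-1] - w[0]

def K_pq_m(p, q, m):
    # K_{p,q} minus (pq - m) edges, all incident to u_1.
    A = np.zeros((p + q, p + q))
    A[:p, p:] = 1.0
    A[p:, :p] = 1.0
    for j in range(p * q - m):
        A[0, p + j] = A[p + j, 0] = 0.0
    return A

for p, q, r in [(7, 5, 3), (10, 9, 4), (20, 18, 5)]:
    m = p * q - r
    disc = np.sqrt(m**2 - 4 * (p - 1) * (m - (p - 1) * q) * (p * q - m))
    rhs = 2 * np.sqrt((m + disc) / 2)    # right side of the inequality
    print(spread(K_pq_m(p, q, m)), rhs)  # spread is at least rhs
\end{verbatim}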
As previously noted, for $n$ sufficiently large, the quantity $s(K^m_{n/2+\ell,n/2-\ell})$ is given exactly by Equation (\ref{eqn:bispread_exact}), and so the optimal choice of $\ell$ minimizes \begin{align*} f(\ell) &:= (n/2+\ell-1)(k^2-\ell^2-1)(n/2-\ell-(k^2-\ell^2-1))\\ &=(n/2+\ell)\big((1-\hat \epsilon)n/2-\ell^2\big)\big(\hat \epsilon n/2 +\ell^2-\ell \big) + O(n^2). \end{align*} We have $$ f(k-1) = (n/2+k-2)(2k-2)(n/2-3k+3),$$ and if $\ell \le \frac{4}{5} k$, then $f(\ell) = \Omega(n^3)$. Therefore the minimizing $\ell$ is in $ [\frac{4}{5} k,k]$. The derivative of $f(\ell)$ is given by \begin{align*} f'(\ell) &=(k^2-\ell^2-1)(n/2-\ell-k^2+\ell^2+1)\\ &\qquad-2\ell(n/2+\ell-1)(n/2-\ell-k^2+\ell^2+1)\\ &\qquad+(2\ell-1)(n/2+\ell-1)(k^2-\ell^2-1). \end{align*} For $\ell \in [\frac{4}{5} k,k]$, \begin{align*} f'(\ell) &\le \frac{n(k^2-\ell^2)}{2}-\ell n(n/2-\ell-k^2+\ell^2)+2\ell(n/2+\ell)(k^2-\ell^2)\\ &\le \frac{9 k^2 n}{50} - \tfrac{4}{5} kn(n/2-k- \tfrac{9}{25} k^2)+ \tfrac{18}{25} (n/2+k)k^3 \\ &= \frac{81 k^3 n}{125}-\frac{2 k n^2}{5} + O(n^2)\\ &=kn^2\left(\frac{81(1-\hat \epsilon)}{250}-\frac{2}{5}\right)+O(n^2)<0 \end{align*} for sufficiently large $n$. This implies that the optimal choice is $\ell = k-1$, and $s_b(n,m) = s(K^m_{n/2+k-1,n/2-k+1})$. The characteristic polynomial $Q(n/2+k-1,n/2-k+1,n^2/4 -k^2+1)$ equals $$ \lambda^4 - \left(n^2/4 -k^2+1 \right)\lambda^2+2(n/2+k-2)(n/2-3k+3)(k-1).$$ By matching higher order terms, the extreme root of $Q$ is given by $$\lambda = \frac{n}{2} -\frac{1-\hat \epsilon}{2} - \sqrt{\frac{2(1-\hat \epsilon)}{n}}+\frac{27-14\hat \epsilon-\hat \epsilon^2}{4n}+o(1/n),$$ and so $$ s_b(n,m) = n -(1-\hat \epsilon) - 2 \sqrt{\frac{2(1-\hat \epsilon)}{n}}+\frac{27-14\hat \epsilon-\hat \epsilon^2}{2n}+o(1/n),$$ and \begin{align*} \frac{s(n,m)-s_b(n,m)}{s(n,m)} &\ge \frac{2^{3/2}(1-\hat \epsilon)^{1/2}}{n^{3/2}} - \frac{14-8\hat \epsilon}{n^2}+o(1/n^2)\\ &=\frac{(1-\hat \epsilon)^{1/2}}{m^{3/4}} + \frac{(1-\hat \epsilon)^{1/2}}{(n/2)^{3/2}}\bigg[1-\frac{(n/2)^{3/2}}{m^{3/4}}\bigg] - \frac{14-8\hat \epsilon}{n^2}+o(1/n^2)\\ &\ge \frac{1-\epsilon/2}{m^{3/4}} +o(1/m^{3/4}). \end{align*} This completes the proof. \end{proof} \section*{Acknowledgements} The work of A. Riasanovsky was supported in part by NSF award DMS-1839918 (RTG). The work of M. Tait was supported in part by NSF award DMS-2011553. The work of J. Urschel was supported in part by ONR Research Contract N00014-17-1-2177. The work of J.~Breen was supported in part by NSERC Discovery Grant RGPIN-2021-03775. The authors are grateful to Louisa Thomas for greatly improving the style of presentation. \section{A Computer-Assisted Proof of Lemma \ref{lem: 2 feasible sets}}\label{sec: appendix} In this appendix, we derive a number of formulas that a stepgraphon corresponding to some set $S \subseteq \{1,2,3,4,5,6,7\}$ in Lemma \ref{lem: 19 cases} satisfies, and detail how these formulas are used to provide a computer-assisted proof of Lemma \ref{lem: 2 feasible sets}. \subsection{Formulas}\label{sub-sec: formulas} In this subsection, we derive the formulas used in our computer-assisted proof, from the equations described in Section \ref{appendix 17 cases}. First, we define a number of functions which will ease the notational burden in the results that follow. 
Let \begin{align*} F_1(x) &:= (\mu+\nu)x + 2 \mu \nu, \\ F_2(x) &:= 2 ( \mu \nu + (\mu+\nu) x)^2 + (\mu+\nu) x^3, \\ F_3(x) &:= 4\mu^2\nu^2\cdot (\mu\nu + (\mu+\nu)x)^2 \\ &\quad - 2(\mu+\nu)x^3\cdot ((\mu+\nu)x + \mu\nu)((\mu+\nu)x+3\mu\nu) \\ &\quad - (\mu+\nu)x^5\cdot (2\mu\nu + (\mu+\nu)x), \\ F_4(x) &:= 4\mu^2\nu^2x\cdot ((3(\mu+\nu)x + \mu\nu)\cdot (2(\mu+\nu)x+\mu\nu)-\mu\nu(\mu+\nu)x) \\ &\quad +4(\mu+\nu)x^4\cdot (((\mu+\nu)x+\mu\nu)^2 + (\mu+\nu)^2\cdot ((\mu+\nu)x+4\mu\nu)) \\ &\quad + (\mu+\nu)^2x^7. \end{align*} Letting $S := \{i\in\{1,\dots,7\} : \alpha_i>0\}$, we prove the following formulas. \begin{proposition}\label{prop: fg23 assume 23} Let $i\in \{1,2,5\} \cap S$ and $j\in \{3,4,6,7\} \cap S$ be such that $N_i \cap S = (N_j\cap S)\dot{\cup}\{j\}$. Then \begin{align*} f_j^2 &= \dfrac{(\alpha_j+2\nu)\mu}{F_1(\alpha_j)}, \quad \quad g_j^2 = \dfrac{(\alpha_j+2\mu)\nu}{F_1(\alpha_j)} \end{align*} and \begin{align*} f_i &= \left( 1 + \frac{\alpha_j}{\mu} \right) f_j, \quad \quad g_i = \left( 1 + \frac{\alpha_j}{\nu} \right) g_j. \end{align*} Moreover, $F_1(\alpha_j)$ and $\alpha_j+2\nu$ are negative. \end{proposition} \begin{proof} By Lemma \ref{lem: local eigenfunction equation}, \begin{align*} \mu f_i^2 - \nu g_i^2 &= \mu-\nu\\ \mu f_j^2 - \nu g_j^2 &= \mu-\nu. \end{align*} By taking the difference of the eigenvector equations for $f_i$ and $f_j$ (and also $g_i$ and $g_j$), we obtain \begin{align*} \alpha_j f_j &= \mu(f_i - f_j)\\ \alpha_j g_j &= \nu(g_i - g_j), \end{align*} or, equivalently, \begin{align*} f_i &= \left( 1 + \frac{\alpha_j}{\mu} \right) f_j\\ g_i &= \left( 1 + \frac{\alpha_j}{\nu} \right) g_j. \end{align*} This leads to the system of equations \begin{align*} \left[\begin{array}{cc} \mu & -\nu\\ \mu\cdot \left(1+\dfrac{\alpha_j}{\mu}\right)^2 & -\nu\cdot \left(1+\dfrac{\alpha_j}{\nu}\right)^2 \end{array}\right] \cdot \left[ \begin{array}{c} f_j^2\\ g_j^2 \end{array} \right] = \left[\begin{array}{c} \mu-\nu\\ \mu-\nu \end{array}\right]. \end{align*} If the corresponding matrix is invertible, then after substituting the claimed formulas for $f_j^2,g_j^2$ and simplifying, it follows that they are the unique solutions. To verify that $F_1(\alpha_j)$ and $\alpha_j+2\nu$ are negative, it is sufficient to inspect the formulas for $f_j^2$ and $g_j^2$, noting that $\nu$ is negative and both $\mu$ and $\alpha_j+2\mu$ are positive. \\ Suppose the matrix is not invertible. By assumption $\mu, \nu \ne 0$, and so $$\bigg( 1+\frac{\alpha_j}{\mu}\bigg)^2 = \bigg( 1+\frac{\alpha_j}{\nu}\bigg)^2.$$ But, since $i\in \{1,2,5\}$ and $j\in \{3,4,6,7\}$, \begin{align*} 1 > f_j^2g_i^2 = f_j^2g_j^2\cdot \left( 1+\dfrac{\alpha_j}{\nu} \right)^2 = f_j^2g_j^2\cdot \left( 1+\dfrac{\alpha_j}{\mu} \right)^2 = f_i^2g_j^2 > 1, \end{align*} a contradiction. \end{proof} \begin{proposition}\label{prop: fg24 assume 2N4} Let $i\in \{1,2,5\} \cap S$ and $j\in \{3,4,6,7\} \cap S$ be such that $N_i \cap S = (N_j\cap S)\dot{\cup}\{i\}$. Then \begin{align*} f_i^2 &= \dfrac{(\alpha_i-2\nu)\mu} {-F_1(-\alpha_i)}, \quad \quad g_i^2 = \dfrac{(\alpha_i-2\mu)\nu} {-F_1(-\alpha_i)}, \end{align*} and \begin{align*} f_j &= \left( 1- \frac{\alpha_i}{\mu} \right) f_i, \quad \quad g_j = \left( 1- \frac{\alpha_i}{\nu} \right) g_i. \end{align*} Moreover, $-F_1(-\alpha_i)$ is positive and $\alpha_i-2\mu$ is negative. \end{proposition} \begin{proof} The proof of Proposition \ref{prop: fg23 assume 23}, slightly modified, gives the desired result.
\end{proof} \begin{proposition}\label{prop: a2 assume 234} Suppose $i,j,k\in S$ where $(i,j,k)$ is either $(2,3,4)$ or $(5,6,7)$. Then \begin{align*} f_k &= \dfrac{ \mu f_j-\alpha_if_i }{\mu}, \quad \quad g_k = \dfrac{ \nu g_j-\alpha_ig_i }{\nu}, \end{align*} and $$\alpha_i = \frac{2 \mu^2 \nu^2 \alpha_j}{ F_2(\alpha_j)}.$$ \end{proposition} \begin{proof} Using the eigenfunction equations for $f_j,f_k$ and for $g_j,g_k$, it follows that \begin{align*} f_k &= \dfrac{ \mu f_j-\alpha_if_i }{\mu}, \quad \quad g_k = \dfrac{ \nu g_j-\alpha_ig_i }{\nu}. \end{align*} Combined with Lemma \ref{lem: local eigenfunction equation}, it follows that \begin{align*} 0 &= \mu f_k^2 - \nu g_k^2 - (\mu-\nu) \\ &= \mu\left( \dfrac{ \mu f_j-\alpha_if_i }{\mu} \right)^2 -\nu\left( \dfrac{ \nu g_j-\alpha_ig_i }{\nu} \right)^2 - (\mu-\nu). \end{align*} After expanding, we note that the right-hand side can be expressed purely in terms of $\mu, \nu, \alpha_i, f_i^2, f_if_j, f_j^2, g_i^2, g_ig_j,$ and $g_j^2$. Note that Proposition \ref{prop: fg23 assume 23} gives explicit formulas for $f_i^2, f_if_j$, and $f_j^2$, as well as $g_i^2, g_ig_j$, and $g_j^2$, purely in terms of $\mu, \nu$, and $\alpha_j$. With the help of a computer algebra system, we make these substitutions and factor the right-hand side as: \begin{align*} 0 &= (\mu-\nu)\cdot \alpha_i \cdot \dfrac{ 2\mu^2\nu^2\cdot \alpha_j - F_2(\alpha_j)\cdot \alpha_i } { \mu^2\nu^2 \cdot F_1(\alpha_j) } . \end{align*} Since $\alpha_i, (\mu-\nu) \neq 0$, the desired claim holds. \end{proof} \begin{proposition}\label{prop: a4 assume 1234} Suppose $1,i,j,k \in S$ where $(i,j,k)$ is either $(2,3,4)$ or $(5,6,7)$. Then \begin{align*} f_1 &= \dfrac{\mu f_i + \alpha_k f_k}{\mu}, \quad\quad g_1 = \dfrac{\nu g_i + \alpha_k g_k}{\nu}, \end{align*} and \begin{align*} \alpha_k &= \dfrac{ \alpha_j\cdot F_2(\alpha_j)^2 } { F_3(\alpha_j) }. \end{align*} \end{proposition} \begin{proof} Using the eigenfunction equations for $f_1, f_i, f_j, f_k$ and for $g_1, g_i, g_j, g_k$, it follows that \begin{align*} f_1 &= \dfrac{\mu f_i + \alpha_k f_k}{\mu}, \quad\quad g_1 = \dfrac{\nu g_i + \alpha_k g_k}{\nu}, \end{align*} and \begin{align*} f_k &= \dfrac{ \mu f_j-\alpha_if_i }{\mu}, \quad \quad g_k = \dfrac{ \nu g_j-\alpha_ig_i }{\nu}. \end{align*} Altogether, \begin{align*} f_1 &= \dfrac{\mu^2 f_i + \alpha_k( \mu f_j - \alpha_if_i )}{\mu^2}, \quad\quad g_1 = \dfrac{\nu^2 g_i + \alpha_k( \nu g_j - \alpha_ig_i )}{\nu^2}. \end{align*} Combined with Lemma \ref{lem: local eigenfunction equation}, it follows that \begin{align*} 0 &= \mu f_1^2-\nu g_1^2 - (\mu-\nu) \\ &= \mu\left( \dfrac{\mu^2 f_i + \alpha_k( \mu f_j - \alpha_if_i )}{\mu^2} \right)^2 -\nu\left( \dfrac{\nu^2 g_i + \alpha_k( \nu g_j - \alpha_ig_i )}{\nu^2} \right)^2 -(\mu-\nu). \end{align*} After expanding, we note that the right-hand side can be expressed purely in terms of $\mu,\nu, \alpha_i, \alpha_k, f_i^2, f_if_j, f_j^2, g_i^2, g_ig_j,$ and $g_j^2$. Note that Proposition \ref{prop: fg23 assume 23} gives explicit formulas for $f_i^2, f_if_j, f_j^2, g_i^2, g_ig_j,$ and $g_j^2$ purely in terms of $\mu, \nu$, and $\alpha_j$. With the help of a computer algebra system, we make these substitutions and factor the right-hand side as: \begin{align*} 0 &= 2\alpha_k\cdot (\mu-\nu)\cdot \dfrac{ \alpha_j\cdot F_2(\alpha_j)^2 - \alpha_k\cdot F_3(\alpha_j) } { F_1(\alpha_j)\cdot F_2(\alpha_j)^2 } . \end{align*} So the desired claim holds. \end{proof} \begin{proposition}\label{prop: a4 assume 12N4} Suppose $1,i,k\in S$ and $j\notin S$ where $(i,j,k)$ is either $(2,3,4)$ or $(5,6,7)$.
Then, \begin{align*} f_1 &= \dfrac{\mu f_i + \alpha_k f_k}{\mu}, \quad\quad g_1 = \dfrac{\nu g_i + \alpha_k g_k}{\nu}, \end{align*} and \begin{align*} \alpha_k &= \dfrac{2\alpha_i \mu^2\nu^2} { F_2(-\alpha_i)}. \end{align*} \end{proposition} \begin{proof} Using the eigenfunction equations for $f_1, f_i, f_k$ and for $g_1, g_i, g_k$, it follows that \begin{align*} f_1 &= \dfrac{\mu f_i + \alpha_k f_k}{\mu}, \quad\quad g_1 = \dfrac{\nu g_i + \alpha_k g_k}{\nu}, \end{align*} and \begin{align*} f_k &= \dfrac{ \mu f_i-\alpha_if_i }{\mu}, \quad \quad g_k = \dfrac{ \nu g_i-\alpha_ig_i }{\nu}. \end{align*} Altogether, \begin{align*} f_1 &= \dfrac{\mu^2 f_i + \alpha_k( \mu f_i - \alpha_if_i )}{\mu^2}, \quad\quad g_1 = \dfrac{\nu^2 g_i + \alpha_k( \nu g_i - \alpha_ig_i )}{\nu^2}. \end{align*} Combined with Lemma \ref{lem: local eigenfunction equation}, it follows that \begin{align*} 0 &= \mu f_1^2-\nu g_1^2 - (\mu-\nu) \\ &= \mu\left( \dfrac{\mu^2 f_i + \alpha_k( \mu f_i - \alpha_if_i )}{\mu^2} \right)^2 -\nu\left( \dfrac{\nu^2 g_i + \alpha_k( \nu g_i - \alpha_ig_i )}{\nu^2} \right)^2 -(\mu-\nu). \end{align*} After expanding, we note that the right-hand side can be expressed purely in terms of $\mu,\nu, \alpha_i, \alpha_k, f_i^2,$ and $g_i^2$. Note that Proposition \ref{prop: fg24 assume 2N4} gives explicit formulas for $f_i^2$ and $g_i^2$ purely in terms of $\mu, \nu$, and $\alpha_i$. With the help of a computer algebra system, we make these substitutions and factor the right-hand side as: \begin{align*} 0 &= \alpha_k\cdot (\mu-\nu)\cdot \dfrac{ 2\alpha_i\mu^2\nu^2 - \alpha_k\cdot F_2(-\alpha_i) } { \mu^2\nu^2\cdot F_1(-\alpha_i) } . \end{align*} So the desired claim holds. \end{proof} \begin{proposition}\label{prop: a4 assume N2347} Suppose $1\notin S$ and $i,j,k,\ell\in S$ where $(i,j,k,\ell)$ is either $(2,3,4,7)$ or $(5,6,7,4)$. Then \begin{align*} \alpha_k &= \dfrac{F_4(\alpha_j)}{F_3(\alpha_j)}. \end{align*} \end{proposition} \begin{proof} Using the eigenfunction equations for $f_\ell, f_i, f_j, f_k$ and for $g_\ell, g_i, g_j, g_k$, it follows that \begin{align*} f_\ell &= \dfrac{\alpha_if_i + \alpha_jf_j + \alpha_k f_k}{\mu}, \quad\quad g_\ell = \dfrac{\alpha_ig_i + \alpha_jg_j + \alpha_k g_k}{\nu}, \end{align*} and \begin{align*} f_k &= \dfrac{ \mu f_j-\alpha_if_i }{\mu}, \quad \quad g_k = \dfrac{ \nu g_j-\alpha_ig_i }{\nu}. \end{align*} Altogether, \begin{align*} f_\ell &= \dfrac{ \mu\alpha_if_i + \mu\alpha_jf_j + \alpha_k (\mu f_j-\alpha_if_i) }{\mu^2}, \quad\quad g_\ell = \dfrac{ \nu\alpha_ig_i + \nu\alpha_jg_j + \alpha_k (\nu g_j-\alpha_ig_i) }{\nu^2}. \end{align*} Combined with Lemma \ref{lem: local eigenfunction equation}, it follows that \begin{align*} 0 &= \mu f_\ell^2 - \nu g_\ell^2 - (\mu-\nu)\\ &= \mu\left( \dfrac{ \mu\alpha_if_i + \mu\alpha_jf_j + \alpha_k (\mu f_j-\alpha_if_i) }{\mu^2} \right)^2-\nu\left( \dfrac{ \nu\alpha_ig_i + \nu\alpha_jg_j + \alpha_k (\nu g_j-\alpha_ig_i) }{\nu^2} \right)^2\\ &\quad -(\mu-\nu). \end{align*} After expanding, we note that the right-hand side can be expressed purely in terms of $\mu,\nu, \alpha_i, \alpha_j, \alpha_k, f_i^2, f_if_j, f_j^2, g_i^2, g_ig_j,$ and $g_j^2$. Note that Proposition \ref{prop: fg23 assume 23} gives explicit formulas for $f_i^2, f_if_j, f_j^2, g_i^2, g_ig_j,$ and $g_j^2$, and Proposition \ref{prop: a2 assume 234} gives an explicit formula for $\alpha_i$, purely in terms of $\mu, \nu$, and $\alpha_j$.
With the help of a computer algebra system, we make these substitutions and factor the right-hand side as: \begin{align*} 0 &= 2(\mu-\nu)\cdot\alpha_k\cdot \dfrac{ F_4(\alpha_j)-\alpha_k\cdot F_3(\alpha_j) } {F_1(\alpha_j)\cdot F_2(\alpha_j)^2} . \end{align*} Since $\alpha_k, (\mu-\nu) \neq 0$, the desired claim holds. \end{proof} \begin{proposition}\label{prop: a47 assume 2457} Suppose $2,4,5,7\in S$ and let $\alpha_{\ne 4,7}:=\sum_{\substack{i\in S,\\ i \ne 4,7}} \alpha_i$. Then \begin{align*} \alpha_4 &= \frac{ (1-\alpha_{\ne 4,7}) f_7 - \mu (f_2 - f_5)}{f_4 +f_7}, \\ \alpha_7 &= \frac{ (1-\alpha_{\ne 4,7}) f_4 - \mu (f_5 - f_2)}{f_4 +f_7}, \end{align*} and \begin{align*} \alpha_4 &= \frac{ (1-\alpha_{\ne 4,7}) g_7 - \nu (g_2 - g_5)}{g_4 +g_7}, \\ \alpha_7 &= \frac{ (1-\alpha_{\ne 4,7}) g_4 - \nu (g_5 - g_2)}{g_4 +g_7}. \end{align*} \end{proposition} \begin{proof} Taking the difference of the eigenvector equations for $f_2$ and $f_5$, and for $g_2$ and $g_5$, we have $$\alpha_7 f_7 -\alpha_4 f_4 = \mu(f_2 - f_5), \qquad \alpha_7 g_7 -\alpha_4 g_4 = \nu(g_2 - g_5).$$ Combining these equalities with the equation $\alpha_4 + \alpha_7 = 1-\alpha_{\ne 4,7}$ completes the proof. \end{proof} \subsection{Algorithm} In this subsection, we briefly detail how the computer-assisted proof of Lemma \ref{lem: 2 feasible sets} works. This proof is via interval arithmetic, and, at a high level, consists largely of iteratively decomposing the domain of feasible choices of $(\alpha_3,\alpha_6,\mu,\nu)$ for a given $S$ into smaller subregions (boxes) until all subregions violate some required equality or inequality. We provide two similar, but slightly different, computer-assisted proofs of this fact, both of which can be found at the spread\_numeric GitHub repository \cite{2021riasanovsky-spread}. The first, found in the folder interval$1$, is a shorter and simpler version, containing slightly fewer formulas, albeit at the cost of increased computation and run time. The second, found in the folder interval$2$, contains slightly more formulas and makes a greater attempt to optimize computation and run time. Below, we further detail the exact output and run time of both versions (exact output can be found in \cite{2021riasanovsky-spread}), but for now, we focus on the main aspects of both proofs, and consider both together, saving a more detailed discussion of the differences for later. These algorithms are implemented in Python using the PyInterval package. The algorithms consist of two parts: a main file containing useful formulas and subroutines, and $17$ different files used to rule out each of the $17$ cases for $S$. The main file, casework\_helper, contains functions with the formulas of Appendix Subsection \ref{sub-sec: formulas} (suitably modified to limit error growth), and functions used to check that certain equalities and inequalities are satisfied.
In particular, casework\_helper contains formulas for \begin{itemize} \item $\alpha_2$, assuming $\{2,3,4\} \subset S$ (using Proposition \ref{prop: a2 assume 234}) \item $\alpha_4$, assuming $\{1,2,3,4\} \subset S$ (using Proposition \ref{prop: a4 assume 1234}) \item $\alpha_4$, assuming $\{2,3,4,7\} \subset S$, $1 \not \in S$ (using Proposition \ref{prop: a4 assume N2347}) \item $\alpha_4$, assuming $\{1,2,4\}\subset S$, $3 \not \in S$ (using Proposition \ref{prop: a4 assume 12N4}) \item $f_3$ and $g_3$, assuming $\{2,3\}\subset S$ (using Proposition \ref{prop: fg23 assume 23}) \item $f_2$ and $g_2$, assuming $\{2,3\} \subset S$ (using Proposition \ref{prop: fg23 assume 23}) \item $f_4$ and $g_4$, assuming $\{2,3,4\} \subset S$ (using Proposition \ref{prop: a2 assume 234}) \item $f_1$ and $g_1$, assuming $\{1,2,4\} \subset S$ (using Propositions \ref{prop: a4 assume 1234} and \ref{prop: a4 assume 12N4}) \item $f_2$ and $g_2$, assuming $\{2,4\} \subset S$, $3 \not\in S$ (using Proposition \ref{prop: fg24 assume 2N4}) \item $f_4$ and $g_4$, assuming $\{2,4\} \subset S$, $3 \not\in S$ (using Proposition \ref{prop: fg24 assume 2N4}) \end{itemize} as a function of $\alpha_3$, $\mu$, and $\nu$ (and $\alpha_2$ and $\alpha_4$, which can be computed as functions of $\alpha_3$, $\mu$, and $\nu$). Some of the formulas are slightly modified compared to their counterparts in this Appendix, for the purpose of minimizing accumulated error. Each formula is evaluated using interval arithmetic, while restricting the resulting interval solution to the correct range. In addition, we recall that we have the inequalities \begin{itemize} \item $\alpha_i \in [0,1]$, for $i \in S$ \item $|g_2|,|f_3|\le 1$, $|f_2|,|g_3| \ge 1$, for $\{2,3\}\subset S$ \item $|f_4|\le 1$, $|g_4|\ge 1$, for $4 \in S$ \item $|f_1|\ge 1$, $|g_1|\le 1$, for $\{1,2,4\}\subset S$ \item $|f_4|,|g_2| \le 1$, $|f_2|,|g_4|\ge 1$, for $\{2,4\} \subset S$, $3 \not \in S$ \item $\alpha_3 + 2 \nu \le 0$, for $\{2,3\} \subset S$ (using Proposition \ref{prop: fg23 assume 23}) \item $\alpha_2 - 2 \mu \le 0$, for $\{2,4\} \subset S$, $3 \not \in S$ (using Proposition \ref{prop: fg24 assume 2N4}). \end{itemize} These inequalities are also used at various points in the algorithms. This completes a brief overview of the casework\_helper file. Next, we consider the different files used to test feasibility for a specific choice of $S \subset \{1,...,7\}$, each denoted by case$\{\text{elements of S}\}$; e.g., for $S = \{1,4,5,7\}$, the associated file is case$1457$. For each specific case, there are a number of different properties which can be checked, including eigenvector equations, bounds on edge density, norm equations for the eigenvectors, and the ellipse equations. Each of these properties has an associated function which returns FALSE if the property cannot be satisfied, given the intervals for each variable, and returns TRUE otherwise. The implementation of each of these properties is rather intuitive, and we refer the reader to the programs themselves (which contain comments) for exact details \cite{2021riasanovsky-spread}. Each feasibility file consists of two parts. The first part is a function is\_feasible(mu,nu,a3,a6) that, given bounding intervals for $\mu$, $\nu$, $\alpha_3$, $\alpha_6$, computes intervals for all other variables (using interval arithmetic) and checks feasibility using the functions in the casework\_helper file.
If any checked equation or inequality in the file is proven to be unsatisfiable (see Example \ref{ex: infeasible box}), then this function outputs `FALSE'; otherwise, the function outputs `TRUE' by default. The second part is a divide-and-conquer algorithm that breaks the hypercube $$ (\mu, \nu, \alpha_3,\alpha_6) \in [.65,1] \times [-.5,-.15] \times [0,1] \times [0,1]$$ into sub-boxes of size $1/20$ by $1/20$ by $1/10$ by $1/10$, checks feasibility in each box using is\_feasible, and subdivides any box that does not rule out feasibility (i.e., subdivides any box that returns `TRUE'). This subdivision breaks a single box into two boxes of equal size, by subdividing along one of the four variables. The variable used for this subdivision is chosen iteratively, in the order $\alpha_3,\alpha_6,\mu, \nu, \alpha_3,...$. The divide-and-conquer algorithm terminates once all sub-boxes, and therefore the entire domain $$ (\mu, \nu, \alpha_3,\alpha_6) \in [.65,1] \times [-.5,-.15] \times [0,1] \times [0,1],$$ have been shown to be infeasible, at which point the algorithm prints `infeasible'. Alternatively, if the number of subdivisions reaches some threshold, then the algorithm terminates and outputs `feasible'. Next, we briefly detail the output of the algorithms casework\_helper/intervals$1$ and casework\_helper/intervals$2$. Both algorithms ruled out 15 of the 17 choices for $S$ using a maximum depth of 26, and failed to rule out cases $S = \{4,5,7\}$ and $S = \{1,7\}$ up to depth 51. For the remaining 15 cases, intervals$1$ considered a total of 5.5 million boxes, was run serially on a personal computer, and terminated in slightly over twelve hours. For these same 15 cases, intervals$2$ considered a total of 1.3 million boxes, was run in parallel using the Penn State math department's `mathcalc' computer, and terminated in under 140 minutes. The exact output for both versions of the spread\_numeric algorithm can be found at \cite{2021riasanovsky-spread}.
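For intuition, the following is a minimal Python sketch of the divide-and-conquer driver just described; it is illustrative only and is not taken from \cite{2021riasanovsky-spread}. In particular, it omits the initial uniform grid, and the function \texttt{is\_feasible} is a hypothetical stand-in for the interval-arithmetic tests implemented in casework\_helper (the actual implementation uses the PyInterval package).
\begin{verbatim}
# Sketch of the divide-and-conquer driver (illustrative only).
# A box is a tuple of four (lo, hi) pairs, one per variable,
# ordered as (mu, nu, a3, a6).

def is_feasible(box):
    # Hypothetical stand-in: must return False only when interval
    # arithmetic *proves* that the box contains no solution.
    raise NotImplementedError

def split(box, axis):
    # Bisect the box along the given coordinate.
    lo, hi = box[axis]
    mid = (lo + hi) / 2.0
    left, right = list(box), list(box)
    left[axis] = (lo, mid)
    right[axis] = (mid, hi)
    return tuple(left), tuple(right)

def refute(box, depth=0, max_depth=51):
    # Return True if every sub-box of `box` is proven infeasible.
    if not is_feasible(box):
        return True                 # ruled out by the interval test
    if depth >= max_depth:
        return False                # threshold reached: report feasible
    axis = (2, 3, 0, 1)[depth % 4]  # cycle a3, a6, mu, nu, a3, ...
    left, right = split(box, axis)
    return (refute(left, depth + 1, max_depth)
            and refute(right, depth + 1, max_depth))

# Initial domain: [.65, 1] x [-.5, -.15] x [0, 1] x [0, 1].
domain = ((0.65, 1.0), (-0.5, -0.15), (0.0, 1.0), (0.0, 1.0))
\end{verbatim}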
\section{Concluding remarks}\label{sec: conclusion} In this work we provided a proof of the spread conjecture for sufficiently large $n$, a proof of an asymptotic version of the bipartite spread conjecture, and an infinite class of counterexamples that illustrates that our asymptotic version of this conjecture is the strongest result possible. There are a number of interesting future avenues of research, some of which we briefly describe below. These avenues consist primarily of considering the spread of more general classes of graphs (directed graphs, graphs with loops) or considering more general objective functions. Our proof of the spread conjecture for sufficiently large $n$ immediately implies a nearly-tight estimate for the maximum spread of the adjacency matrix of undirected graphs with loops, also commonly referred to as symmetric $0$--$1$ matrices. Given a directed graph $G = (V,\mathcal{A})$, the corresponding adjacency matrix $A$ has entry $A_{i,j} = 1$ if the arc $(i,j) \in \mathcal{A}$, and is zero otherwise. In this case, $A$ is not necessarily symmetric, and may have complex eigenvalues. One interesting question is what digraph of order $n$ maximizes the spread of its adjacency matrix, where spread is defined as the diameter of the spectrum. Is this more general problem also maximized by the same set of graphs as in the undirected case? This problem for either loop-less directed graphs or directed graphs with loops is an interesting question, and the latter is equivalent to asking the above question for the set of all $0$--$1$ matrices. Another approach is to restrict ourselves to undirected graphs or undirected graphs with loops, and further consider the competing interests of simultaneously producing a graph with both $\lambda_1$ and $-\lambda_n$ large, and understanding the trade-off between these two goals. To this end, we propose considering the class of objective functions $$f(G; \beta) = \beta \lambda_1(G) - (1-\beta) \lambda_n(G), \qquad \beta \in [0,1].$$ When $\beta = 0$, this function is maximized by the complete bipartite graph $K_{\lceil n /2 \rceil, \lfloor n/2 \rfloor}$ and when $\beta = 1$, this function is maximized by the complete graph $K_n$. This paper treats the specific case of $\beta = 1/2$, but none of the mathematical techniques used in this work rely on this restriction. In fact, the structural graph-theoretic results of Section \ref{sec:graphs}, suitably modified for arbitrary $\beta$, still hold (see the thesis \cite[Section 3.3.1]{urschel2021graphs} for this general case). Understanding the behavior of the optimum between these three well-studied choices of $\beta = 0,1/2,1$ is an interesting future avenue of research. More generally, any linear combination of graph eigenvalues could be optimized over any family of graphs. Many sporadic examples of this problem have been studied, and Nikiforov \cite{Nikiforov} proposed a general framework for it and proved some conditions under which the problem is well-behaved. We conclude with some specific instances of the problem that we think are most interesting.
Given a graph $F$, maximizing $\lambda_1$ over the family of $n$-vertex $F$-free graphs can be thought of as a spectral version of Tur\'an's problem. Many papers have been written about this problem, which was proposed in full generality in \cite{Nikiforov3}. We remark that these results can often strengthen classical results in extremal graph theory. Maximizing $\lambda_1 + \lambda_n$ over the family of triangle-free graphs has been considered in \cite{Brandt} and is related to an old conjecture of Erd\H{o}s on how many edges must be removed from a triangle-free graph to make it bipartite \cite{erdos}. In general it would be interesting to maximize $\lambda_1 + \lambda_n$ over the family of $K_r$-free graphs. When a graph is regular, the difference between $\lambda_1$ and $\lambda_2$ (the spectral gap) is related to the graph's expansion properties. Aldous and Fill \cite{AldousFill} asked to minimize $\lambda_1 - \lambda_2$ over the family of $n$-vertex connected regular graphs. Partial results were given by \cite{quartic1, quartic2, Guiduli1, Guiduli2}. A nonregular version of the problem was proposed by Stani\'c \cite{Stanic}, who asked to minimize $\lambda_1-\lambda_2$ over connected $n$-vertex graphs. Finally, maximizing $\lambda_3$ or $\lambda_4$ over the family of $n$-vertex graphs seems to be a surprisingly difficult question, and even the asymptotics are not known (see \cite{Nikiforov2}). \section{Technical proofs}\label{sec: ugly} \subsection{Reduction to 17 cases}\label{appendix 17 cases} Now, we introduce the following specialized notation. For any nonempty set $S\subseteq V(G^*)$ and any labeled partition $(I_i)_{i\in S}$ of $[0,1]$, we define the stepgraphon $W_\mathcal{I}$ as follows. For all $i,j\in S$, $W_\mathcal{I}$ equals $1$ on $I_i\times I_j$ if and only if $ij$ is an edge (or loop) of $G^*$, and $0$ otherwise. If $\alpha = (\alpha_i)_{i\in S}$ where $\alpha_i = m(I_i)$ for all $i\in S$, we may write $W_\alpha$ to denote the graphon $W_\mathcal{I}$ up to weak isomorphism. \\ {To make the observations from Section \ref{sub-sec: cases} more explicit, we note that Theorem \ref{thm: reduction to stepgraphon} implies that a spread-optimal graphon has the form $W = W_\mathcal{I}$ where $\mathcal{I} = (I_i)_{i\in S}$ is a labeled partition of $[0,1]$, $S\subseteq [7]$, and each $I_i$ is measurable with positive measure. Since $W$ is a stepgraphon, its extreme eigenfunctions may be taken to be constant on $I_i$, for all $i\in S$. With $f,g$ denoting the extreme eigenfunctions for $W$, we may let $f_i$ and $g_i$ be the constant value of $f$ and $g$, respectively, on step $I_i$, for all $i\in S$. Appealing again to Theorem \ref{thm: reduction to stepgraphon}, we may assume without loss of generality that $f_i\geq 0$ for all $i\in S$, and for all $i\in S$, $g_i\geq 0$ implies that $i\in\{1,2,3,4\}$. By Lemma \ref{lem: local eigenfunction equation}, for each $i\in S$, $\mu f_i^2-\nu g_i^2 = \mu-\nu$. Combining these facts, we note that $f_i$ and $g_i$ belong to specific intervals as in Figure \ref{fig: f g interval}. } \begin{figure}[ht] \centering \input{graphics/interval-eigenfunction-values} \caption{ Intervals containing the quantities $f_i$ and $g_i$. Note that $f_i$ and $g_i$ are defined only for $i\in S$. } \label{fig: f g interval} \end{figure} {For convenience, we define the following sets $F_i$ and $G_i$, for all $i\in S$. First, let $\mathcal{U} := [0,1]$ and $\mathcal{V} := [1,+\infty]$.
With some abuse of notation, we denote $-\mathcal{U} = [-1,0]$ and $-\mathcal{V} = [-\infty,-1]$. } For each $i\in V(G^*)$, we define the intervals $F_i$ and $G_i$ by \begin{align*} (F_i, G_i) &:= \left\{\begin{array}{rl} (\mathcal{V}, \mathcal{U}), &i\in \{1,2\}\\ (\mathcal{U}, \mathcal{V}), &i\in \{3,4\} \\ (\mathcal{V}, -\mathcal{U}), &i=5 \\ (\mathcal{U}, -\mathcal{V}), &i\in \{6,7\} \end{array}\right. . \end{align*} Given that the set $S$ and the quantities $(\alpha_i,f_i,g_i)_{i\in S}$ are clear from context, we label the following equation: \begin{align} \sum_{i\in S} \alpha_i &= \sum_{i\in S} \alpha_if_i^2 = \sum_{i\in S} \alpha_ig_i^2 = 1 \label{eq: program norms} . \end{align} Furthermore, when $i\in S$ is understood from context, we define the equations \begin{align} \mu f_i^2 - \nu g_i^2 &= \mu - \nu \label{eq: program ellipse} \\ \sum_{j\in N_i\cap S}\alpha_jf_j &= \mu f_i \label{eq: program eigen f} \\ \sum_{j\in N_i\cap S}\alpha_jg_j &= \nu g_i \label{eq: program eigen g} \end{align} Additionally, we consider the following inequalities. For all $S\subseteq V(G^*)$ and all distinct $i,j\in S$, \begin{align}\label{ieq: program inequality constraint} f_if_j - g_ig_j &\left\{\begin{array}{rl} \geq 0, &ij\in E(G^*)\\ \leq 0, &ij\notin E(G^*) \end{array}\right. \end{align} Finally, for all nonempty $S\subseteq V(G^*)$, we define the constrained-optimization problem $\SPR_S$ by: \begin{align*} (\text{SPR}_S): \left\{\begin{array}{rll} \max & \mu-\nu \\ \text{s.t.} & \text{Equation }\eqref{eq: program norms} \\ & \text{Equations } \eqref{eq: program ellipse}, \eqref{eq: program eigen f}, \text{ and } \eqref{eq: program eigen g} & \text{ for all }i\in S \\ & \text{Inequality } \eqref{ieq: program inequality constraint} & \text{ for all distinct }i,j\in S \\ & (\alpha_i,f_i,g_i)\in [0,1] \times F_i \times G_i & \text{ for all }i\in S \\ & \mu,\nu\in\mathbb{R} \end{array}\right. . \end{align*} For completeness, we state and prove the following observation. \begin{proposition}\label{prop: problem solutions} Let $W\in\mathcal{W}$ be such that $\text{spr}(W) = \max_{U\in \mathcal{W}}\text{spr}(U)$ and write $\mu,\nu$ for the maximum and minimum eigenvalues of $W$, with corresponding unit eigenfunctions $f,g$. Then for some nonempty set $S\subseteq V(G^*)$, the following holds. There exists a family of triples $(I_i, f_i, g_i)_{i\in S}$, where $(I_i)_{i\in S}$ is a labeled partition of $[0,1]$ with parts of positive measure and $f_i,g_i\in \mathbb{R}$ for all $i\in S$, such that: \begin{enumerate}[(i)] \item\label{item: W = W_I} $W = W_\mathcal{I}$. \item\label{item: f,g constants} After possibly replacing $f$ by $-f$ and $g$ by $-g$, for all $i\in S$, $f$ and $g$ equal $f_i$ and $g_i$ a.e. on $I_i$. \item\label{item: problem solution} With $\alpha_i := m(I_i)$ for all $i\in S$, $\SPR_S$ is solved by $\mu,\nu$, and $(\alpha_i, f_i, g_i)_{i\in S}$. \end{enumerate} \end{proposition} \begin{proof} First, we prove Item \eqref{item: W = W_I}. By Theorem \ref{thm: reduction to stepgraphon} and the definition of $G^*$, there exists a nonempty set $S\subseteq V(G^*)$ and a labeled partition $\mathcal{I} = (I_i)_{i\in S}$ such that $W = W_\mathcal{I}$. By merging any parts of measure $0$ into some part of positive measure, we may assume without loss of generality that $m(I_i) > 0$ for all $i\in S$. So Item \eqref{item: W = W_I} holds.
For Item \eqref{item: f,g constants}, the eigenfunctions corresponding to the maximum and minimum eigenvalues of a stepgraphon must be constant on each block by convexity and the Courant-Fischer Min-Max Theorem. Finally, we prove Item \eqref{item: problem solution}. We first show that, for all $i\in S$, $(f_i,g_i)\in F_i\times G_i$. By Lemma \ref{lem: local eigenfunction equation}, \begin{align*} \mu f_i^2-\nu g_i^2 &= \mu-\nu \end{align*} for all $i\in S$. In particular, either $f_i^2\leq 1\leq g_i^2$ or $g_i^2\leq 1\leq f_i^2$. By Lemma \ref{lem: K = indicator function}, for all $i,j\in S$, $f_if_j-g_ig_j\neq 0$ and $ij\in E(G^*)$ if and only if $f_if_j-g_ig_j > 0$. Note that the loops of $G^*$ are $1, 2, $ and $5$. It follows that for all $i\in S$, $f_i^2 > 1 > g_i^2$ if and only if $i\in\{1,2,5\}$, and $g_i^2>1>f_i^2$, otherwise. Since $f$ is positive on $[0,1]$, this completes the proof that $f_i\in F_i$ for all $i\in S$. Similarly, since $g$ is positive on $\bigcup_{i\in \{1,2,3,4\}\cap S}I_i$ and negative on $\bigcup_{i\in \{5,6,7\}\cap S}I_i$, by inspection $g_i\in G_i$ for all $i\in S$. Moreover, Inequalities \eqref{ieq: program inequality constraint} follow directly from Lemma \ref{lem: K = indicator function}. Continuing, we note the following. Since $W$ is a stepgraphon, if $\lambda\neq 0$ is an eigenvalue of $W$, there exists a $\lambda$-eigenfunction $h$ for $W$ such that for all $i\in S$, $h = h_i$ on $I_i$ for some $h_i\in \mathbb{R}$. Moreover, for all $i\in S$, since $m(I_i) > 0$, \begin{align*} \lambda h_i &= \sum_{j\in N_i\cap S} \alpha_jh_j. \end{align*} In particular, any solution to $\SPR_S$ is at most $\mu-\nu$. Since $f,g$ are eigenfunctions corresponding to $W$ and the eigenvalues $\mu,\nu$, respectively, Equations \eqref{eq: program eigen f} and \eqref{eq: program eigen g} hold. Finally, since $(I_i)_{i\in S}$ is a partition of $[0,1]$ and since $\|f\|_2^2 = \|g\|_2^2 = 1$, Equation \eqref{eq: program norms} holds. So $\mu,\nu$, and $(\alpha_i, f_i, g_i)_{i\in S}$ lie in the domain of $\SPR_S$. This completes the proof of Item \eqref{item: problem solution}, and the desired claim. \end{proof} We enhance Proposition \ref{prop: problem solutions} as follows. \begin{lemma}\label{lem: 19 cases} Proposition \ref{prop: problem solutions} holds with the added assumption that $S\in\mathcal{S}_{17}$. \end{lemma} \begin{proof} We begin our proof with the following claim. \\ \\ {\bf Claim A: } Suppose $i\in S$ and $j\in V(G^*)$ are distinct such that $N_i\cap S = N_j\cap S$. Then Proposition \ref{prop: problem solutions} holds with the set $S' := (S\setminus\{i\})\cup \{j\}$ replacing $S$. First, we define the following quantities. For all $k\in S'\setminus \{j\}$, let $(f_k', g_k', I_k') := (f_k, g_k, I_k)$, and also let $(f_j', g_j') := (f_i, g_i)$. If $j\in S$, let $I_j' := I_i\cup I_j$, and otherwise, let $I_j' := I_i$. Additionally, let $\mathcal{I}' := (I_k')_{k\in S'}$ and for each $k\in S'$, let $\alpha_k' := m(I_k')$. By the criteria from Proposition \ref{prop: problem solutions}, the domain criterion $(\alpha_k', f_k', g_k') \in [0,1]\times F_k\times G_k$ as well as Equation \eqref{eq: program ellipse} holds for all $k\in S'$. Since we are reusing $\mu,\nu$, the constraint $\mu,\nu\in \mathbb{R}$ also holds. It suffices to show that Equation \eqref{eq: program norms} holds, and that Equations \eqref{eq: program eigen f} and \eqref{eq: program eigen g} hold for all $k\in S'$. To do this, we first note that for all $k\in S'$, $f = f_k'$ and $g = g_k'$ on $I_k'$.
By definition, $f = f_k$ and $g = g_k$ on $I_k' = I_k$ for all $k\in S'\setminus\{j\}$, as needed by Claim A. Now suppose $j\notin S$. Then $f = f_i = f_j'$ and $g = g_i = g_j'$ on the set $I_j' = I_i$, matching Claim A. Finally, suppose $j\in S$. Note by definition that $f = f_i = f_j'$ and $g = g_i = g_j'$ on $I_i$. Since $I_j' = I_i\cup I_j$, it suffices to prove that $f = f_j'$ and $g = g_j'$ on $I_j$. We first show that $f_j = f_i$ and $g_j = g_i$. Indeed, \begin{align*} \mu f_j &= \sum_{k\in N_j\cap S} \alpha_k f_k = \sum_{k\in N_i\cap S} \alpha_k f_k = \mu f_i \end{align*} and since $\mu\neq 0$, $f_j = f_i$. Similarly, $g_j = g_i$. So $f = f_j = f_i = f_j'$ and $g = g_j = g_i = g_j'$ on the set $I_j' = I_i \cup I_j$. Finally, we claim that $W_{\mathcal{I}'} = W$. Indeed, this follows directly from Lemma \ref{lem: K = indicator function} and the fact that $W = W_\mathcal{I}$. Since $\mathcal{I}'$ is a partition of $[0,1]$ and since $f,g$ are unit eigenfunctions for $W$, Equation \eqref{eq: program norms} holds, and Equations \eqref{eq: program eigen f} and \eqref{eq: program eigen g} hold for all $k\in S'$. This completes the proof of Claim A. Next, we prove the following claim. \\ {\bf Claim B: } If $S$ satisfies the criteria of Proposition \ref{prop: problem solutions}, then without loss of generality the following holds. \begin{enumerate}[(a)] \item\label{item: vertex 1} If there exists some $i\in S$ such that $N_i = S$, then $i = 1$. \item\label{item: vertices 1234} $S\cap \{1,2,3,4\}\neq\emptyset$. \item\label{item: vertices 234} $S\cap \{2,3,4\}$ is one of $\emptyset, \{4\}, \{2,4\}$, and $\{2,3,4\}$. \item\label{item: vertices 567} $S\cap \{5,6,7\}$ is one of $\{7\}, \{5,7\}$, and $\{5,6,7\}$. \end{enumerate} Since $N_1\cap S = S = N_i\cap S$, item \eqref{item: vertex 1} follows from Claim A applied to the pair $(i,1)$. Since $f,g$ are orthogonal and $f$ is positive on $[0,1]$, $g$ is positive on a set of positive measure, so item \eqref{item: vertices 1234} holds. To prove item \eqref{item: vertices 234}, we have $4$ cases. If $S\cap \{2,3,4\} = \{2\}$, then $N_2\cap S = N_1\cap S$ and we may apply Claim A to the pair $(2,1)$. If $S\cap \{2,3,4\} = \{3\}$ or $\{3,4\}$, then $N_3\cap S = N_4\cap S$ and we may apply Claim A to the pair $(3,4)$. If $S\cap \{2,3,4\} = \{2,3\}$, then $N_2\cap S = N_1\cap S$ and we may apply Claim A to the pair $(2,1)$. So item \eqref{item: vertices 234} holds. For item \eqref{item: vertices 567}, we reduce $S\cap \{5,6,7\}$ to one of $\emptyset, \{7\}, \{5,7\}$, and $\{5,6,7\}$ in the same fashion. To eliminate the case where $S\cap \{5,6,7\} = \emptyset$, we simply note that since $f$ and $g$ are orthogonal and $f$ is positive on $[0,1]$, $g$ is negative on a set of positive measure. This completes the proof of Claim B. \input{graphics/table21} After repeatedly applying Claim B, we may replace $S$ with one of the cases found in Table \ref{tab: table 21}. Let $\mathcal{S}_{21}$ denote the sets in Table \ref{tab: table 21}. By definition, \begin{align*} \mathcal{S}_{21} &= \mathcal{S}_{17} \cup \left\{ \{4,7\}, \{2,4,7\}, \{2,3,4,7\}, \{2,3,4,5,7\} \right\} . \end{align*} Finally, we eliminate the $4$ cases in $\mathcal{S}_{21}\setminus \mathcal{S}_{17}$. If $S = \{4,7\}$, then $W$ is a bipartite graphon, hence $\text{spr}(W) \leq 1$, a contradiction since $\max_{U\in\mathcal{W}}\text{spr}(U) > 1$. For the three remaining cases, let $\tau$ be the permutation on $\{2,\dots,7\}$ defined as follows.
For all $i\in \{2,3,4\}$, $\tau(i) := i+3$ and $\tau(i+3) := i$. If $S$ is among $\{2,4,7\}, \{2,3,4,7\}, \{2,3,4,5,7\}$, we apply $\tau$ to $S$ in the following sense. Replace $g$ with $-g$ and replace $(\alpha_i, I_i, f_i, g_i)_{i\in S}$ with $(\alpha_{\tau(i)}, I_{\tau(i)}, f_{\tau(i)}, -g_{\tau(i)})_{i\in \tau(S)}$. By careful inspection, it follows that $\tau(S)$ satisfies the criteria from Proposition \ref{prop: problem solutions}. Since $\tau(\{2,4,7\}) = \{4,5,7\}$, $\tau(\{2,3,4,7\}) = \{4,5,6,7\}$, and $\tau(\{2,3,4,5,7\}) = \{2,4,5,6,7\}$, this completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lem: SPR457}} Let $(\alpha_4, \alpha_5, \alpha_7)$ be a solution to $\text{SPR}_{\{4,5,7\}}$. First, let $T := \{(\varepsilon_1,\varepsilon_2)\in(-1/3, 2/3)\times (-2/3, 1/3) : \varepsilon_1+\varepsilon_2 \in (0,1)\}$, and for all $\varepsilon = (\varepsilon_1,\varepsilon_2)\in T$, let \begin{align*} M(\varepsilon) &:= \left[\begin{array}{ccc} 2/3-\varepsilon_1 & 0 & 1/3-\varepsilon_2\\ 0 & 0 & 1/3-\varepsilon_2\\ 2/3-\varepsilon_1 & \varepsilon_1 + \varepsilon_2 & 0 \end{array}\right] . \end{align*} As motivation, suppose $\mu,\nu$, and $(\alpha_4,\alpha_5,\alpha_7)$ are part of a solution to $\text{SPR}_{\{4,5,7\}}$. Then with $\varepsilon := (\varepsilon_1,\varepsilon_2) = (2/3-\alpha_5, 1/3-\alpha_4)$, $\varepsilon\in T$ and $\mu,\nu$ are the maximum and minimum eigenvalues of $M(\varepsilon)$, respectively. By the end of the proof, we show that any solution of $\text{SPR}_{\{4,5,7\}}$ has $\alpha_7 = 0$. \\ To proceed, we prove the following claims. \\ \\ {\bf Claim A: } For all $\varepsilon\in T$, $M(\varepsilon)$ has two distinct positive eigenvalues and one negative eigenvalue. Since $M(\varepsilon)$ is diagonalizable, it has $3$ real eigenvalues which we may order as $\mu\geq \delta\geq \nu$. Since $\mu\delta\nu = \det(M(\varepsilon)) = -\alpha_4\alpha_5\alpha_7 < 0$, $M(\varepsilon)$ has an odd number of negative eigenvalues. Since $0 < \alpha_5 = \mu + \delta + \nu$, it follows that $\mu \geq \delta > 0 > \nu$. Finally, note by the Perron-Frobenius Theorem that $\mu > \delta$. This completes the proof of Claim A. \\ Next, we define the following quantities, treated as functions of $\varepsilon$ for all $\varepsilon\in T$. For convenience, we suppress the argument ``$\varepsilon$'' in most places. Let $k(x) = ax^3+bx^2+cx+d$ be the characteristic polynomial of $M(\varepsilon)$. By inspection, \begin{align*} &a = 1 &b = \varepsilon_1-\dfrac{2}{3} \\ &c = \dfrac{ (3\varepsilon_2+2) (3\varepsilon_2-1) } {9} &d = \dfrac{ (\varepsilon_1+\varepsilon_2) (3\varepsilon_1-2) (3\varepsilon_2-1) } {9} \end{align*} Continuing, let \begin{align*} &p := \dfrac{ 3a c -b^2 } { 3a^2 } &q := \dfrac{ 2b^3 -9a b c +27a^2 d } { 27a^3 } \\ &A := 2\sqrt{ \dfrac{-p} {3} } &B := \dfrac{ -b } {3a} \\ &\phi := \arccos\left( \dfrac{ 3q } { A p } \right). \end{align*} Let $S(\varepsilon)$ be the difference between the maximum and minimum eigenvalues of $M(\varepsilon)$. We show the following claim. \\ \\ {\bf Claim B: } For all $\varepsilon\in T$, \begin{align*} S(\varepsilon) &= \sqrt{3}\cdot A(\varepsilon)\cdot\cos\left( \dfrac{2\phi(\varepsilon) - \pi}{6} \right). \end{align*} Moreover, $S$ is analytic on $T$. Indeed, by Vi\`{e}te's Formula, using the fact that $k(x)$ has exactly $3$ distinct real roots, the quantities $a(\varepsilon),\dots,\phi(\varepsilon)$ are analytic on $T$.
Moreover, the eigenvalues of $M(\varepsilon)$ are $x_0, x_1, x_2$ where, for all $k\in\{0,1,2\}$, \begin{align*} x_k(\varepsilon) &= A(\varepsilon)\cdot \cos\left( \dfrac{\phi + 2\pi\cdot k}{3} \right) + B(\varepsilon) . \end{align*} Moreover, $x_0(\varepsilon),x_1(\varepsilon),x_2(\varepsilon)$ are analytic on $T$. For all $k,\ell\in \{0,1,2\}$, let \begin{align*} D(k,\ell,x) &:= \cos\left( x+\dfrac{2\pi k}{3} \right) - \cos\left( x+\dfrac{2\pi \ell}{3} \right) \end{align*} For all $(k,\ell)\in \{(0,1), (0,2), (2,1)\}$, note the trigonometric identities \begin{align*} D(k,\ell,x) &= \sqrt{3}\cdot\left\{\begin{array}{rl} \cos\left( x - \dfrac{\pi}{6} \right), & (k,\ell) = (0, 1) \\ \cos\left( x + \dfrac{\pi}{6} \right), & (k,\ell) = (0, 2) \\ \sin(x), & (k,\ell) = (2, 1) \end{array}\right. . \end{align*} By inspection, for all $x\in (0,\pi/3)$, \begin{align*} D(0,1,x) &> \max\left\{ D(0,2,x), D(2,1,x) \right\} \geq \min\left\{ D(0,2,x), D(2,1,x) \right\} \geq 0. \end{align*} Since $A > 0$ and $\phi/3 \in (0,\pi/3)$, the claimed equality holds. Since $x_0(\varepsilon),x_1(\varepsilon)$ are analytic, $S(\varepsilon)$ is analytic on $T$. This completes the proof of Claim B. \\ Next, we compute the derivatives of $S(\varepsilon)$ on $T$. For convenience, for $i\in \{1,2\}$ we denote by $A_i$, $\phi_i$, and $S_i$ the partial derivatives of $A$, $\phi$, and $S$ with respect to $\varepsilon_i$, respectively. Furthermore, let \begin{align*} \psi(\varepsilon) &:= \dfrac{2\phi(\varepsilon)-\pi}{6}. \end{align*} The next claim follows directly from Claim B. \\ \\ {\bf Claim C: } For all $i\in \{1,2\}$, on the set $T$ we have \begin{align*} S_i &= \sqrt{3}\cdot\left( A_i\cdot \cos\left( \psi \right) - \dfrac{A\phi_i}{3}\cdot \sin\left( \psi \right) \right) . \end{align*} Moreover, each expression is analytic on $T$. Finally, we solve $\text{SPR}_{\{4,5,7\}}$. \\ \\ {\bf Claim D: } If $(\alpha_4,\alpha_5,\alpha_7)$ is a solution to $\text{SPR}_{\{4,5,7\}}$, then $0\in\{\alpha_4,\alpha_5,\alpha_7\}$. With $(\alpha_4,\alpha_5,\alpha_7) := (1/3-\varepsilon_2, 2/3-\varepsilon_1, \varepsilon_1+\varepsilon_2)$ and using the fact that $S$ is analytic on $T$, it is sufficient to eliminate all common zeroes of $S_1$ and $S_2$ on $T$. With the help of a computer algebra system and the formulas for $S_1$ and $S_2$ from Claim C, we replace the system $S_1 = 0$ and $S_2 = 0$ with a polynomial system of equations $P = 0$ and $Q = 0$ whose real solution set contains all previous solutions. Here, \begin{align*} P(\varepsilon) &= 9\varepsilon_1^3 + 18\varepsilon_1^2\varepsilon_2 + 54\varepsilon_1\varepsilon_2^2 + 18\varepsilon_2^3 - 15\varepsilon_1^2 - 33\varepsilon_1\varepsilon_2 - 27\varepsilon_2^2 + 5\varepsilon_1 + \varepsilon_2 \end{align*} and $Q = 43046721\varepsilon_1^{18}\varepsilon_2+\cdots + (-532480\varepsilon_2)$ is a polynomial of degree $19$, with coefficients between $-184862311457373$ and $192054273812559$. For brevity, we do not express $Q$ explicitly. To complete the proof of Claim D, it suffices to show that no common real solution to $P = Q = 0$ which lies in $T$ also satisfies $S_1 = S_2 = 0$. Again using a computer algebra system, we first find all common zeroes of $P$ and $Q$ on $\mathbb{R}^2$. Included are the rational solutions $(2/3, -2/3), (-1/3, 1/3), (0,0), (2/3, 1/3), $ and $(2/3, -1/6)$, which do not lie in $T$. Furthermore, the solution $(1.2047\dots, 0.0707\dots)$ may also be eliminated, as it lies outside $T$. For the remaining $4$ zeroes, $S_1, S_2\neq 0$. A notebook showing these calculations can be found at \cite{2021riasanovsky-spread}.
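\medskip \noindent {\bf Remark: } To indicate the shape of the computer-algebra step in Claim D, the following \texttt{sympy} sketch locates the common real zeroes of such a polynomial system and filters them against $T$. Since $Q$ is too long to reproduce here, the sketch pairs $P$ with a hypothetical stand-in polynomial (marked as such below); it only illustrates the workflow, and the actual verification is the notebook at \cite{2021riasanovsky-spread}.
\begin{verbatim}
import sympy as sp

e1, e2 = sp.symbols('e1 e2', real=True)

# The polynomial P from Claim D, as printed above.
P = (9*e1**3 + 18*e1**2*e2 + 54*e1*e2**2 + 18*e2**3
     - 15*e1**2 - 33*e1*e2 - 27*e2**2 + 5*e1 + e2)

# The true Q has degree 19 and is too long to print; this stand-in
# is hypothetical, used only so that the sketch runs end to end.
Q = sp.diff(P, e1)

def in_T(a, b):
    # Membership test for the open region T defined above.
    return (sp.Rational(-1, 3) < a < sp.Rational(2, 3)
            and sp.Rational(-2, 3) < b < sp.Rational(1, 3)
            and 0 < a + b < 1)

# Solve the polynomial system exactly; sympy returns algebraic numbers
# (CRootOf objects) when roots have no closed form.
for sol in sp.solve([P, Q], [e1, e2], dict=True):
    a, b = sol[e1], sol[e2]
    if a.is_real and b.is_real and in_T(a, b):
        # Any surviving point must still be checked against
        # S_1 = S_2 = 0, e.g., in interval arithmetic.
        print('candidate in T:', a, b)
\end{verbatim}
Applied to the true pair $(P,Q)$, the analogous filtering leaves only the four remaining zeroes, at which one then checks that $S_1, S_2\neq 0$.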
{\bf Claim E: } If $\mu, \nu$, and $\alpha = (\alpha_4,\alpha_5,\alpha_7)$ are part of a solution to $\text{SPR}_{\{4,5,7\}}$ such that $\mu-\nu > 1$, then $\alpha_7 = 0$. By definition of $\text{SPR}_{\{4,5,7\}}$, $\mu$ and $\nu$ are eigenvalues of the matrix \begin{align*} N(\alpha) := \left[\begin{array}{ccc} \alpha_5 & 0 & \alpha_4\\ 0 & 0 & \alpha_4\\ \alpha_5 & \alpha_7 & 0 \end{array}\right]. \end{align*} Furthermore, $N(\alpha)$ has characteristic polynomial \begin{align*} p(x) &= x^3 - \alpha_5 x^2 -\alpha_4(\alpha_5+\alpha_7)x +\alpha_4\alpha_5\alpha_7 . \end{align*} Recall that $\alpha_4+\alpha_5+\alpha_7 = 1$. By Claim D, $0\in\{\alpha_4,\alpha_5,\alpha_7\}$, and it follows that $p\in\{p_4, p_5, p_7\}$ where \begin{align*} p_4(x) &:= x^2\cdot (x-\alpha_5), \\ p_5(x) &:= x\cdot (x^2-\alpha_4(1-\alpha_4)), \text{ and } \\ p_7(x) &:= x\cdot (x^2-(1-\alpha_4)x-\alpha_4(1-\alpha_4)). \end{align*} If $p = p_4$, then $\mu-\nu = \alpha_5\leq 1$, and if $p = p_5$, then $\mu-\nu = 2\sqrt{\alpha_4(1-\alpha_4)} \leq 1$; either case contradicts $\mu-\nu > 1$. So $p = p_7$, that is, $\alpha_7 = 0$, which completes the proof of Claim E. Since $\max_{U\in\mathcal{W}}\text{spr}(U) > 1$, Claim E applies, and this completes the proof of Lemma \ref{lem: SPR457}. \subsection{Proof of Lemma \ref{lem: no exceptional vertices}} \label{sec: 2 by 2 reduction} First, we find $S_z(\varepsilon_1,\varepsilon_2)$ using Vi\`{e}te's Formula. In doing so, we define functions $k_z(\varepsilon_1,\varepsilon_2;x),\dots,\delta_z(\varepsilon_1,\varepsilon_2)$. To ease the burden on the reader, we suppress the subscript $z$ and the arguments $\varepsilon_1,\varepsilon_2$ when convenient and unambiguous. Let $k(x) = ax^3+bx^2+cx+d$ be the characteristic polynomial of $M_z(\varepsilon_1,\varepsilon_2)$. By inspection, \begin{align*} &a = 1 &b = \varepsilon_1+z-\dfrac{2}{3} \\ &c = \dfrac{ (3\varepsilon_2+2) (3\varepsilon_2-1) } {9} &d = \dfrac{ (\varepsilon_1+\varepsilon_2) (3\varepsilon_1+3z-2) (3\varepsilon_2-1) } {9} \end{align*} Continuing, let \begin{align*} &p := \dfrac{ 3a c -b^2 } { 3a^2 } &q := \dfrac{ 2b^3 -9a b c +27a^2 d } { 27a^3 } \\ &A := 2\sqrt{ \dfrac{-p} {3} } &B := \dfrac{ -b } {3a} \\ &\phi := \arccos\left( \dfrac{ 3q } { A p } \right). \end{align*} By Vi\`{e}te's Formula, the roots of $k_z(\varepsilon_1,\varepsilon_2;x)$ are the suggestively defined quantities \begin{align*} &\mu := A \cos\left( \dfrac{ \phi } {3} \right) +B &\nu := A \cos\left( \dfrac{ \phi +2\pi } {3} \right) +B \\ &\delta := A \cos\left( \dfrac{ \phi +4\pi } {3} \right) +B . \end{align*} First, we prove the following claim. \\ \\ {\bf Claim A: } If $(\varepsilon_1,\varepsilon_2,z)$ is sufficiently close to $(0,0,0)$, then \begin{align}\label{eq: spread trig formula} S_z(\varepsilon_1,\varepsilon_2) &= A_z(\varepsilon_1,\varepsilon_2)\sqrt{3}\, \cdot\cos\left( \dfrac{2\phi_z(\varepsilon_1,\varepsilon_2)-\pi}{6} \right) . \end{align} Indeed, suppose $z>0$ and $z\to 0$. Then for all $(\varepsilon_1,\varepsilon_2)\in (-3z,3z)^2$, $\varepsilon_1,\varepsilon_2\to 0$. With the help of a computer algebra system, we substitute in $z=0$ and $\varepsilon_1,\varepsilon_2=0$ to find the limits: \begin{align*} (a,b,c,d) &\to \left( 1, \dfrac{-2}{3}, \dfrac{-2}{9}, 0 \right) \\ (p,q) &\to \left( \dfrac{-10}{27}, \dfrac{-52}{729} \right) \\ (A,B,\phi) &\to \left( \dfrac{2\sqrt{10}}{9}, \dfrac{2}{9}, \arccos\left( \dfrac{13\sqrt{10}}{50} \right) \right). \end{align*} Using a computer algebra system, these substitutions imply that \begin{align*} (\mu,\nu,\delta) \to \left( 0.9107\dots, -0.2440\dots, 0 \right) . \end{align*} So for all $z$ sufficiently small, $S = \mu-\nu$.
After some trigonometric simplification, \begin{align*} \mu - \nu &= A\cdot\left( \cos\left( \dfrac{\phi}{3} \right) -\cos\left( \dfrac{\phi+2\pi}{3} \right) \right) = A\sqrt{3}\, \cdot\cos\left( \dfrac{2\phi-\pi}{6} \right) \end{align*} which proves Equation \eqref{eq: spread trig formula}. This completes the proof of Claim A. \\ Now we prove the following claim. \\ \\ {\bf Claim B: } There exists a constant $C_0>0$ such that the following holds. If $|z|$ is sufficiently small, then $S_z$ is concave-down on $[-C_0,C_0]^2$ and strictly decreasing on $[-C_0,C_0]^2\setminus [-C_0z, C_0z]^2$. \\ \\ First, we define \begin{align*} D_z(\varepsilon_1,\varepsilon_2) := \restr{\left( \dfrac{\partial^2 S_z}{\partial\varepsilon_1^2} \cdot \dfrac{\partial^2 S_z}{\partial\varepsilon_2^2} - \left(\dfrac{\partial^2 S_z}{\partial\varepsilon_1\partial\varepsilon_2}\right)^2 \right)} {(\varepsilon_1,\varepsilon_2,z)} . \end{align*} As a function of $(\varepsilon_1,\varepsilon_2)$, $D_z$ is the determinant of the Hessian matrix of $S_z$. Using a computer algebra system, we note that \begin{align*} D_0(0,0) &= 22.5\dots, \quad \text{ and } \\ \restr{\left( \dfrac{ \partial^2S } { \partial \varepsilon_1^2 }, \dfrac{ \partial^2S } { \partial \varepsilon_1 \partial \varepsilon_2 }, \dfrac{ \partial^2S } { \partial \varepsilon_2^2 } \right) } {(0,0,0)} &= \left( -8.66\dots, -8.66\dots, -11.26\dots \right). \end{align*} Since $S$ is analytic at $(0,0,0)$, there exist constants $C_1, C_2>0$ such that the following holds. For all $z\in [-C_1,C_1]$, $S_z$ is concave-down on $[-C_1,C_1]^2$. This completes the proof of the first statement. Moreover for all $z\in [-C_1,C_1]$ and for all $(\varepsilon_1,\varepsilon_2)\in [-C_1,C_1]^2$, \begin{align*} \restr{\max\left\{ \dfrac{\partial^2S_z} {\partial\varepsilon_1^2}, \dfrac{\partial^2S_z} {\partial\varepsilon_1\partial\varepsilon_2}, \dfrac{\partial^2S_z} {\partial\varepsilon_2^2} \right\}} {(\varepsilon_1,\varepsilon_2,z)} &\leq -C_2. \end{align*} To complete the proof of the second statement, note also that since $S$ is analytic at $(0,0,0)$, there exist constants $C_3,C_4>0$ such that for all $z\in [-C_3,C_3]$ and all $(\varepsilon_1,\varepsilon_2)\in [-C_3, C_3]^2$, \begin{align*} \dfrac{\partial^2 S}{\partial z\partial\varepsilon_i} \leq C_4. \end{align*} Since $(0,0)$ is a local maximum of $S_0$, \begin{align*} \restr{ \dfrac{\partial S} {\partial \varepsilon_i} } {(\varepsilon_1,\varepsilon_2,z)} &= \restr{ \dfrac{\partial S} {\partial \varepsilon_i} } {(0,0,0)} +\int_{w=0}^{z} \restr{ \dfrac{\partial^2 S} {\partial z\partial \varepsilon_i} } {(0,0,w)} dw \\ &\quad + \int_{ {\bf u} = (0,0)}^{ (\varepsilon_1,\varepsilon_2) } \restr{\dfrac{ \partial^2 S } { \partial {\bf u}\partial \varepsilon_i }} {({\bf u}, z)} d{\bf u} \\ &\leq C_4\cdot z - C_2 \cdot \|(\varepsilon_1,\varepsilon_2)\|_2. \end{align*} Since $C_2, C_4>0$, this partial derivative is negative whenever $\|(\varepsilon_1,\varepsilon_2)\|_2 > (C_4/C_2)\cdot z$, and choosing $C_0$ appropriately in terms of $C_1$, $C_3$, and $C_4/C_2$ completes the proof of Claim B. \\ Next, we prove the following claim. \\ \\ {\bf Claim C: } If $z$ is sufficiently small, then $\mathcal{P}_{z,C_0}$ is solved by a unique point $(\varepsilon_1^*,\varepsilon_2^*) = (\varepsilon_1^*(z),\varepsilon_2^*(z))$. Moreover as $z\to0$, \begin{align}\label{eq: optimal epsilon approximation} \left( \varepsilon_1^*, \varepsilon_2^* \right) &= \left( (1+o(z))\, \dfrac{7z}{30}, (1+o(z))\, \dfrac{-z}{3} \right).
\end{align} Indeed, the existence of a unique maximum $(\varepsilon_1^*, \varepsilon_2^*)$ on $[-C_0,C_0]^2$ follows from the fact that $S_z$ is strictly concave-down and bounded on $[-C_0, C_0]^2$ for all $z$ sufficiently small. Since $S_z$ is strictly decreasing on $[-C_0,C_0]^2\setminus (-C_0z, C_0z)^2$, it follows that $(\varepsilon_1^*, \varepsilon_2^*)\in (-C_0z, C_0z)^2$. For the second claim, note that since $S$ is analytic at $(0,0,0)$, \begin{align*} 0 &= \restr{ \dfrac{\partial S} {\partial \varepsilon_i} } {(\varepsilon_1^*, \varepsilon_2^*, z)} = \sqrt{3}\cdot \left( \dfrac{\partial A}{\partial\varepsilon_i}\cdot \cos\left( \dfrac{2\phi-\pi}{6} \right) - \dfrac{A}{3}\cdot \dfrac{\partial\phi}{\partial\varepsilon_i}\cdot \sin\left( \dfrac{2\phi-\pi}{6} \right) \right) \end{align*} for both $i = 1$ and $i = 2$. Let \begin{align*} \tau_i := \dfrac{ 3\cdot \dfrac{\partial A}{\partial\varepsilon_i} } { A\cdot \dfrac{\partial \phi}{\partial \varepsilon_i} } \end{align*} for both $i = 1$ and $i = 2$. Then by Equation \eqref{eq: spread trig formula}, \begin{align*} \restr{ \arctan(\tau_i) } {(\varepsilon_1^*,\varepsilon_2^*,z)} &= \restr{ \dfrac{2\phi-\pi}{6} } {(\varepsilon_1^*,\varepsilon_2^*,z)} \end{align*} for both $i=1$ and $i=2$. We first consider a linear approximation of the above quantities under the limit $(\varepsilon_1,\varepsilon_2,z)\to (0,0,0)$. Here, we write $f(\varepsilon_1,\varepsilon_2,z) \sim g(\varepsilon_1,\varepsilon_2,z)$ to mean that \begin{align*} f(\varepsilon_1,\varepsilon_2,z) &= \left( 1 +o\left(\max\left\{ |\varepsilon_1|, |\varepsilon_2|, |z| \right\}\right) \right) \cdot g(\varepsilon_1,\varepsilon_2,z). \end{align*} With the help of a computer algebra system, we note that \begin{align*} \arctan\left( \tau_1 \right) &\sim \dfrac{ -78\varepsilon_1 -96\varepsilon_2 -3z -40\arctan\left( \dfrac{1}{3} \right) } {40} \\ \arctan\left( \tau_2 \right) &\sim \dfrac{ -64\varepsilon_1 -103\varepsilon_2 -14z -20\arctan\left( \dfrac{1}{3} \right) } {20} \\ \dfrac{2\phi-\pi} {6} &\sim \dfrac{ 108\varepsilon_1 +81\varepsilon_2 +18z +20\arccos\left( \dfrac{13\sqrt{10}}{50} \right) -10\pi } {60} . \end{align*} By inspection, the constant terms match due to the identity \begin{align*} -\arctan\left( \dfrac{1}{3} \right) &= \dfrac{1}{3} \arccos\left( \dfrac{13\sqrt{10}}{50} \right) -\dfrac{\pi}{6}. \end{align*} Since $\max\left\{|\varepsilon_1^*|, |\varepsilon_2^*|\right\}\leq C_0z$, replacing $(\varepsilon_1,\varepsilon_2)$ with $(\varepsilon_1^*,\varepsilon_2^*)$ implies that \begin{align*} \dfrac{-78\varepsilon_1^*-96\varepsilon_2^*-3z}{2} &= (1+o(z))\cdot (36\varepsilon_1^*+27\varepsilon_2^*+6z), \quad \text{ and } \\ -64\varepsilon_1^*-103\varepsilon_2^*-14z &= (1+o(z))\cdot (36\varepsilon_1^*+27\varepsilon_2^*+6z) \end{align*} as $z\to 0$. After applying Gaussian elimination to this system of two equations in the unknowns $\varepsilon_1^*$ and $\varepsilon_2^*$, it follows that \begin{align*} (\varepsilon_1^*, \varepsilon_2^*) &= \left( (1+o(z))\cdot \dfrac{7z}{30}, (1+o(z))\cdot \dfrac{-z}{3} \right). \end{align*} This completes the proof of Claim C. \\ \\ For the next step, we prove the following claim. First, let $\mathcal{Q}_{n}$ denote the program formed from $\mathcal{P}_{n^{-1}, C_0}$ subject to the added constraint that $\textstyle n\cdot( \frac{2}{3}-\varepsilon_1 ),n\cdot( \frac{1}{3}-\varepsilon_2 )\in\mathbb{Z}$. \\ \\ {\bf Claim D: } For all $n$ sufficiently large, $\mathcal{Q}_{n}$ is solved by a unique point $(n_1^*, n_3^*)$ which satisfies $n_1^* + n_3^* = n$.
\\ \\ Note by Lemma \ref{lem: few exceptional vertices} that for all $n$ sufficiently large, \begin{align*} \max\left\{ \left|\dfrac{n_1}{n}-\dfrac{2}{3}\right|, \left|\dfrac{n_3}{n}-\dfrac{1}{3}\right| \right\} &\leq C_0. \end{align*} Moreover, by Claim C, $\mathcal{P}_{n^{-1},C_0}$ is solved uniquely by \begin{align*} (\varepsilon_1^*, \varepsilon_2^*) &= \left( (1+o(1))\cdot \dfrac{7}{30n}, (1+o(1))\cdot \dfrac{-1}{3n} \right). \end{align*} Since \begin{align*} \dfrac{2n}{3}-n\cdot \varepsilon_1^* &= \dfrac{2n}{3}-(1+o(1))\cdot \dfrac{7}{30} \end{align*} and $7/30 < 1/3$, it follows for $n$ sufficiently large that $2n/3-n\cdot\varepsilon_1^*\in I_1$ where \begin{align*} I_1 &:= \left\{\begin{array}{rl} \left( \dfrac{2n}{3} -1, \dfrac{2n}{3} \right), & 3 \mid n \\ \left( \left\lfloor \dfrac{2n}{3} \right\rfloor, \left\lceil \dfrac{2n}{3} \right\rceil \right), & 3 \nmid n \end{array}\right. . \end{align*} Similarly since \begin{align*} n\cdot(\varepsilon_1^* + \varepsilon_2^*) &= (1+o(1)) \cdot \left( \dfrac{7}{30}-\dfrac{1}{3} \right) = (1+o(1)) \cdot \dfrac{-1}{10} \end{align*} and $1/10 < 1/3$, it follows that $n\cdot (\varepsilon_1^*+\varepsilon_2^*)\in (-1,0)$. Altogether, \begin{align*} \left( \dfrac{2n}{3}-n\cdot \varepsilon_1^*, n\cdot(\varepsilon_1^*+\varepsilon_2^*) \right) &\in I_1\times (-1, 0). \end{align*} Note that to solve $\mathcal{Q}_{n}$, it is sufficient to maximize $S_{n^{-1}}$ on the set $[-C_0, C_0]^2 \cap \left\{ \left( \tfrac{2}{3}-\tfrac{n_1}{n},\, \tfrac{1}{3}-\tfrac{n_3}{n} \right) \right\}_{n_1,n_3\in\mathbb{Z}}$. Since $S_{n^{-1}}$ is concave-down on $I_1\times (-1, 0)$, $(n_1^*, n-n_1^*-n_3^*)$ is a corner of the rectangle $I_1\times (-1,0)$. So $n_1^*+n_3^* = n$, which implies Claim D. This completes the proof of the main result. \begin{comment} For completeness, we prove this result directly. Let $G = K_{n_1}\vee K_{n_2}^c$. Then $A_G$ has the same spread as the ``reduced matrix'' defined as \begin{align*} A(n_1,n_2) &= \left[\begin{array}{cc} n_1-1 & n_2\\ n_1 & 0 \end{array}\right]. \end{align*} By inspection, $A(n_1,n_2)$ has characteristic polynomial $x^2 - (n_1 - 1)x + -n_1n_2$ and thus its eigenvalues are \begin{align*} \dfrac{ n_1-1 \pm \sqrt{ n_1^2 + 4n_1n_2 - 2n_1+1 } } {2} \end{align*} and \begin{align*} \text{spr}(G) &= \sqrt{ n_1^2 +4n_1n_2 -2n_1 +1 } . \end{align*} Making the substitution $(n_1, n_2) = (rn, (1-r)n)$ and simplifying, we see that \begin{align*} \text{spr}(G)^2 &= -3n^2r^2 +2n(2n-1)a +1 \\ &= -3n^2 \cdot\left( \left( r-\dfrac{2n-1}{3n} \right)^2 -\dfrac{2(2n^2-2n-1)}{9n^2} \right), \end{align*} which is maximized when $r = n_1/n$ is nearest $2/3-1/(3n)$, or equivalently, when $n_1$ is nearest $(2n-1)/3$. After noting that \begin{align*} \left\lfloor \dfrac{2n}{3} \right\rfloor - \dfrac{2n-1}{3} &= \left\{\begin{array}{rl} 1/3, & n\equiv0\pmod3\\ -1/3, & n\equiv1\pmod3\\ 0, & n\equiv2\pmod3 \end{array}\right. , \end{align*} the desired claim follows. \end{comment} \section{Spread maximum graphons}\label{sec:spread_graphon} In this section, we complete the proof of the graphon version of the spread conjecture of Gregory, Hershkowitz, and Kirkland from \cite{gregory2001spread}. In particular, we prove the following theorem. For convenience and completeness, we state this result in the following level of detail. \begin{theorem}\label{thm: spread maximum graphon} If $W$ is a graphon that maximizes spread, then $W$ may be represented as follows. For all $(x,y)\in [0,1]^2$, \begin{align*} W(x,y) =\left\{\begin{array}{rl} 0, &(x,y)\in [2/3, 1]^2\\ 1, &\text{otherwise} \end{array}\right. .
\end{align*} Furthermore, \begin{align*} \mu &= \dfrac{1+\sqrt{3}}{3} \quad \text{ and } \quad \nu = \dfrac{1-\sqrt{3}}{3} \end{align*} are the maximum and minimum eigenvalues of $W$, respectively, and if $f,g$ are unit eigenfunctions associated to $\mu,\nu$, respectively, then, up to a change in sign, they may be written as follows. For every $x\in [0,1]$, \begin{align*} f(x) &= \dfrac{1}{2\sqrt{3+\sqrt{3}}} \cdot \left\{\begin{array}{rl} 3+\sqrt{3}, & x\in [0,2/3] \\ 2\cdot\sqrt{3}, & \text{otherwise} \end{array}\right. , \text{ and }\\ g(x) &= \dfrac{1}{2\sqrt{3-\sqrt{3}}} \cdot \left\{\begin{array}{rl} 3-\sqrt{3}, & x\in [0,2/3] \\ -2\cdot\sqrt{3}, & \text{otherwise} \end{array}\right. . \end{align*} \end{theorem} To help outline our proof of Theorem \ref{thm: spread maximum graphon}, let the spread-extremal graphon have block sizes $\alpha_1,\ldots, \alpha_7$. Note that the spread of the graphon is the same as the spread of the matrix $M^*$ in Figure \ref{fig: 7-vertex loop graph}, and so we will optimize the spread of $M^*$ over choices of $\alpha_1,\dots, \alpha_7$. Let $G^*$ be the unweighted graph (with loops) corresponding to the matrix. We proceed in the following steps. \begin{enumerate}[1. ] \item In Appendix \ref{appendix 17 cases}, we reduce the proof of Theorem \ref{thm: spread maximum graphon} to $17$ cases, each corresponding to a subset $S$ of $V(G^*)$. For each such $S$ we define an optimization problem $\SPR_S$, the solution to which gives us an upper bound on the spread of any graphon in the case corresponding to $S$. \item In Section \ref{sub-sec: numerics}, we appeal to interval arithmetic to translate these optimization problems into algorithms. Based on the output of the $17$ programs we wrote, we eliminate $15$ of the $17$ cases. The many formulas used throughout are stated and proved in Appendix \ref{sub-sec: formulas}. \item Finally in Section \ref{sub-sec: cases 4|57 and 1|7}, we complete the proof of Theorem \ref{thm: spread maximum graphon} by analyzing the $2$ remaining cases. Here, we apply Vi\`ete's Formula for roots of cubic equations and make a direct argument. \end{enumerate} \begin{comment} \begin{figure}[ht] \centering \[\left[\begin{array}{ccccccc} \alpha_1 &\sqrt{\alpha_1\alpha_2}&\sqrt{\alpha_1\alpha_3}&\sqrt{\alpha_1\alpha_4}&\sqrt{\alpha_1\alpha_5}&\sqrt{\alpha_1\alpha_6}&\sqrt{\alpha_1\alpha_7}\\ \sqrt{\alpha_1\alpha_2}&\alpha_2&\sqrt{\alpha_2\alpha_3}&0&\sqrt{\alpha_2\alpha_5}&\sqrt{\alpha_2\alpha_6}&\sqrt{\alpha_2\alpha_7}\\ \sqrt{\alpha_1\alpha_3}&\sqrt{\alpha_2\alpha_3}&0 &0&\sqrt{\alpha_3\alpha_5}&\sqrt{\alpha_3\alpha_6}&\sqrt{\alpha_3\alpha_7}\\ \sqrt{\alpha_1\alpha_4}&0&0&0&\sqrt{\alpha_4\alpha_5}&\sqrt{\alpha_4\alpha_6}&\sqrt{\alpha_4\alpha_7}\\ \sqrt{\alpha_1\alpha_5} &\sqrt{\alpha_2\alpha_5}&\sqrt{\alpha_3\alpha_5}&\sqrt{\alpha_4\alpha_5}&\alpha_5&\sqrt{\alpha_5\alpha_6}&0 \\ \sqrt{\alpha_1\alpha_6}&\sqrt{\alpha_2\alpha_6}&\sqrt{\alpha_3\alpha_6}&\sqrt{\alpha_4\alpha_6}&\sqrt{\alpha_5\alpha_6}&0&0\\ \sqrt{\alpha_1\alpha_7}&\sqrt{\alpha_2\alpha_7}&\sqrt{\alpha_3\alpha_7}&\sqrt{\alpha_4\alpha_7}&0&0&0 \end{array}\right]\] \newline \scalebox{.8}[.8]{ \input{graphics/7-vertex-graph-all-edges} } \caption{{\color{blue}The graph $G^*$ corresponding to the matrix $M^*$.
} } \label{fig: 7-vertex loop graph} \end{figure} \end{comment} \begin{figure}[ht] \centering \begin{minipage}{0.55\textwidth} \[M^* := D_\alpha^{1/2}\left[\begin{array}{cccc|ccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 & 1 \\\hline 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 \end{array}\right]D_\alpha^{1/2}\]\end{minipage}\quad\begin{minipage}{0.4\textwidth} \scalebox{.7}[.7]{ \input{graphics/7-vertex-graph-all-edges} } \end{minipage} \caption{{The matrix $M^*$ with corresponding graph $G^*$, where $D_\alpha$ is the diagonal matrix with entries $\alpha_1, \ldots, \alpha_7$. } } \label{fig: 7-vertex loop graph} \end{figure} For concreteness, we define $G^*$ on the vertex set $\{1,\dots,7\}$. Explicitly, the neighborhoods $N_1,\dots, N_7$ of $1,\dots, 7$ are defined as: \begin{align*} \begin{array}{ll} N_1 := \{1,2,3,4,5,6,7\} & N_2 := \{1,2,3,5,6,7\} \\ N_3 := \{1,2,5,6,7\} & N_4 := \{1,5,6,7\} \\ N_5 := \{1,2,3,4,5,6\} & N_6 := \{1,2,3,4,5\} \\ N_7 := \{1,2,3,4\} \end{array} . \end{align*} More compactly, we may note that \begin{align*} \begin{array}{llll} N_1 = \{1,\dots, 7\} & N_2 = N_1\setminus\{4\} & N_3 = N_2\setminus\{3\} & N_4 = N_3\setminus\{2\} \\ & N_5 = N_1\setminus\{7\} & N_6 = N_5\setminus\{6\} & N_7 = N_6\setminus\{5\} \end{array} . \end{align*} \subsection{Stepgraphon case analysis}\label{sub-sec: cases} Let $W$ be a graphon maximizing spread. By Theorem \ref{thm: reduction to stepgraphon}, we may assume that $W$ is a $7\times 7$ stepgraphon corresponding to $G^\ast$. We will break into cases depending on which of the $7$ weights $\alpha_1, \ldots, \alpha_7$ are zero and which are positive. For some of these combinations the corresponding graphons are isomorphic, and in this section we will outline how one can show that we need only consider $17$ cases rather than $2^7$. We will present each case with the set of indices which have strictly positive weight. Additionally, we will use vertical bars to partition the set of integers according to its intersection with the sets $\{1\}$, $\{2,3,4\}$ and $\{5,6,7\}$. Recall that vertices in block $1$ are dominating vertices and vertices in blocks $5$, $6$, and $7$ have negative entries in the eigenfunction corresponding to $\nu$. For example, we use $4|57$ to refer to the case that $\alpha_4, \alpha_5, \alpha_7$ are all positive and $\alpha_1 = \alpha_2 = \alpha_3 = \alpha_6 = 0$; see Figure \ref{457 fig}. \begin{figure}[ht] \begin{center} \begin{minipage}{0.45\textwidth}\centering \input{graphics/457-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/457-graph}}\end{minipage} \end{center} \caption{The family of graphons and the graph corresponding to case $4|57$} \label{457 fig} \end{figure} To give an upper bound on the spread of any graphon corresponding to case $4|57$, we solve a constrained optimization problem. Let $f_4, f_5, f_7$ and $g_4, g_5, g_7$ denote the eigenfunction entries for unit eigenfunctions $f$ and $g$ of the graphon.
Then we maximize $\mu - \nu$ subject to \begin{align*} \alpha_4 + \alpha_5 + \alpha_7 &=1\\ \alpha_4f_4^2 + \alpha_5f_5^2 + \alpha_7 f_7^2 &=1 \\ \alpha_4g_4^2 + \alpha_5g_5^2 + \alpha_7 g_7^2 &= 1\\ \mu f_i^2 - \nu g_i^2 &= \mu-\nu \quad \text{for all } i \in \{4,5,7\}\\ \mu f_4 = \alpha_5 f_5 + \alpha_7 f_7, \quad \mu f_5 &= \alpha_4 f_4 + \alpha_5 f_5, \quad \mu f_7 = \alpha_4 f_4\\ \nu g_4 = \alpha_5 g_5 + \alpha_7 g_7, \quad \nu g_5 &= \alpha_4 g_4 + \alpha_5 g_5, \quad \nu g_7 = \alpha_4 g_4 \end{align*} The first three constraints say that the weights sum to $1$ and that $f$ and $g$ are unit eigenfunctions. The fourth line of constraints is from Lemma \ref{lem: local eigenfunction equation}. The final two lines of constraints say that $f$ and $g$ are eigenfunctions for $\mu$ and $\nu$, respectively. Since these equations must be satisfied for any spread-extremal graphon, the solution to this optimization problem gives an upper bound on the spread of any spread-extremal graphon corresponding to case $4|57$. For each case we formulate a similar optimization problem in Appendix \ref{appendix 17 cases}. First, if two distinct blocks of vertices have the same neighborhood, then without loss of generality we may assume that only one of them has positive weight. For example, see Figure \ref{123567 fig}: in case $123|567$, blocks $1$ and $2$ have the same neighborhood, and hence without loss of generality we may assume that only block $1$ has positive weight. Furthermore, in this case the resulting graphon could be considered as case $13|567$ or equivalently as case $14|567$; the graphons corresponding to these cases are isomorphic. Therefore cases $123|567$, $13|567$, and $14|567$ reduce to considering only case $14|567$. \begin{figure}[ht] \begin{center} \begin{minipage}{0.3\textwidth}\centering \input{graphics/123567-stepgraphon} \end{minipage}\quad \begin{minipage}{0.3\textwidth}\centering \input{graphics/13567-stepgraphon}\end{minipage}\quad\begin{minipage}{0.3\textwidth}\centering \input{graphics/14567-stepgraphon} \end{minipage} \newline\vspace{10pt} \begin{minipage}{0.3\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/123567-graph}}\end{minipage}\quad\begin{minipage}{0.3\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/13567-graph}}\end{minipage}\quad\begin{minipage}{0.3\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/14567-graph}}\end{minipage} \end{center} \caption{Redundancy, then renaming: we can assume $\alpha_2=0$ in the family of graphons corresponding to $123|567$, which produces families of graphons corresponding to both cases $13|567$ and $14|567$.} \label{123567 fig} \end{figure} Additionally, if there is no dominating vertex, then some pairs of cases may correspond to isomorphic graphons, and the optimization problems are equivalent up to flipping the sign of the eigenfunction corresponding to $\nu$. For example, see Figure \ref{23457 24567}, in which cases $234|57$ and $24|567$ reduce to considering only a single one. However, because of how we choose to order the eigenfunction entries when setting up the constraints of the optimization problems, there are some examples of cases corresponding to isomorphic graphons that we solve as separate optimization problems. For example, the graphons corresponding to cases $1|24|7$ and $1|4|57$ are isomorphic, but we will consider them as separate cases; see Figure \ref{1247 1457}.
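\medskip \noindent {\bf Remark: } The optimization problems above are easy to explore non-rigorously before formalizing the numerics. The following \texttt{numpy} sketch scans block sizes for case $4|57$ and locates the maximum spread near $(\alpha_4,\alpha_5,\alpha_7) = (1/3, 2/3, 0)$, with value $2/\sqrt{3}\approx 1.1547$, consistent with Lemma \ref{lem: SPR457} and Theorem \ref{thm: spread maximum graphon}. This is an illustration only; the verification in Section \ref{sub-sec: numerics} uses rigorous interval arithmetic instead.
\begin{verbatim}
import numpy as np

# Adjacency pattern for case 4|57 (blocks ordered 4, 5, 7): block 5 has
# a loop, block 4 is joined to blocks 5 and 7, and 7 is joined only to 4.
B = np.array([[0., 1., 1.],
              [1., 1., 0.],
              [1., 0., 0.]])

def spread(alpha):
    # Spread of the stepgraphon with block sizes alpha: its nonzero
    # spectrum is that of the symmetric matrix D^(1/2) B D^(1/2).
    d = np.sqrt(alpha)
    M = d[:, None] * B * d[None, :]
    w = np.linalg.eigvalsh(M)          # eigenvalues in ascending order
    return w[-1] - w[0]

# Brute-force scan of the simplex alpha_4 + alpha_5 + alpha_7 = 1.
n = 400
best, best_alpha = -np.inf, None
for i in range(n + 1):
    for j in range(n + 1 - i):
        a = np.array([i, j, n - i - j]) / n
        s = spread(a)
        if s > best:
            best, best_alpha = s, a

print(best, best_alpha)  # ~1.1547 = 2/sqrt(3) at about (1/3, 2/3, 0)
\end{verbatim}
The scan suggests, as Lemma \ref{lem: SPR457} confirms rigorously, that the optimum for this case puts no weight on block $7$.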
\begin{figure}[ht] \begin{center} \begin{minipage}{0.45\textwidth}\centering \input{graphics/23457-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/23457-graph}}\end{minipage} \newline \vspace{20pt} \begin{minipage}{0.45\textwidth}\centering \input{graphics/24567-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/24567-graph}}\end{minipage} \end{center} \caption{Changing the sign of $g$: the optimization problems in these cases are equivalent.} \label{23457 24567} \end{figure} \begin{figure} \begin{center} \begin{minipage}{0.45\textwidth}\centering \input{graphics/1247-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/1247-graph}}\end{minipage} \newline \vspace{20pt} \begin{minipage}{0.45\textwidth}\centering \input{graphics/1457-stepgraphon} \end{minipage}\quad\begin{minipage}{0.45\textwidth}\centering \scalebox{.7}[.7]{\input{graphics/1457-graph}}\end{minipage} \end{center} \caption{The cases $1|24|7$ and $1|4|57$ correspond to the same family of graphons, but we consider the optimization problems separately, due to our prescribed ordering of the vertices.} \label{1247 1457} \end{figure} Repeated applications of these three principles show that there are only $17$ distinct cases that we must consider. The details are straightforward to verify; see Lemma \ref{lem: 19 cases}. \begin{figure}[ht] \centering \scalebox{1.0}{ \input{graphics/17-cases} } \caption{ The set $\mathcal{S}_{17}$, as a poset ordered by inclusion. Each element is a subset of $V(G^*) = \{1,\dots,7\}$, written without braces and commas. As noted in the proof of Lemma \ref{lem: 19 cases}, the sets $\{1\}$, $\{2,3,4\}$, and $\{5,6,7\}$ have different behavior in the problems $\SPR_S$. For this reason, we use vertical bars to separate each $S\in\mathcal{S}_{17}$ according to the corresponding partition. } \label{fig: 17 cases} \end{figure} The distinct cases that we must consider are the following, summarized in Figure \ref{fig: 17 cases}. \begin{align*} \mathcal{S}_{17} &:= \left\{\begin{array}{r} 1|234|567, 1|24|567, 1|234|57, 1|4|567, 1|24|57, 1|234|7, 234|567, \\ 24|567, 4|567, 24|57, 1|567, 1|4|57, 1|24|7, 1|57, 4|57, 1|4|7, 1|7 \end{array} \right\} \end{align*} \subsection{Interval arithmetic}\label{sub-sec: numerics} Interval arithmetic is a computational technique which bounds errors that accumulate during computation. For convenience, let $\mathbb{R}^* := [-\infty, +\infty]$ be the extended real line. To enhance ordinary floating point arithmetic, we replace extended real numbers with unions of intervals which are guaranteed to contain them. Moreover, we extend the basic arithmetic operations $+, -, \times, \div$, and $\sqrt{}$ to operations on unions of intervals. This technique has real-world applications in the hard sciences, but has also been used in computer-assisted proofs. For two famous examples, we refer the interested reader to \cite{2002HalesKepler} for Hales' proof of the Kepler Conjecture on optimal sphere-packing in $\mathbb{R}^3$, and to \cite{2002WarwickInterval} for Tucker's solution of Smale's $14$th problem, showing that the Lorenz attractor is a strange attractor. As stated before, we consider extensions of the binary operations $+, -, \times,$ and $\div$ as well as the unary operation $\sqrt{}$ defined on $\mathbb{R}$ to operations on unions of intervals of extended real numbers.
For example, if $[a,b], [c,d]\subseteq \mathbb{R}$, then we may use the following extensions of $+, -, $ and $\times$: \begin{align*} [a,b] + [c,d] &= [a+c,b+d], \\ [a,b] - [c,d] &= [a-d,b-c], \text{ and}\\ [a,b] \times [c,d] &= \left[ \min\{ac, ad, b c, b d\}, \max\{a c, a d, b c, b d\} \right]. \end{align*} For $\div$, we must address the cases $0\in [c,d]$ and $0\notin [c,d]$. When $0\notin [c,d]$, we may take the extension \begin{align*} [a,b] \div [c,d] &= \left[ \min\bigg\{\frac{a}{c}, \frac{a}{d}, \frac{b}{c}, \frac{b}{d}\bigg\}, \max\bigg\{\frac{a}{c}, \frac{a}{d}, \frac{b}{c}, \frac{b}{d}\bigg\} \right], \end{align*} and in general we set $[a,b]\div [c,d] := [a,b]\times\left( 1\div [c,d] \right)$, where \begin{align*} 1 \div [c,d] &= \left\{\begin{array}{rl} \left[ \min\{ c^{-1}, d^{-1} \}, \max\{ c^{-1}, d^{-1} \} \right], & 0\notin [c,d] \\ \left[ d^{-1}, +\infty \right], & c = 0 \\ \left[ -\infty, c^{-1} \right], & d = 0 \\ \left[ -\infty,\frac{1}{c} \right] \cup \left[ \frac{1}{d},+\infty \right], & c < 0 < d \end{array}\right. . \end{align*} Additionally, we may let \begin{align*} \sqrt{[a,b]} &= \left\{\begin{array}{rl} \emptyset, & b < 0\\ \left[ \sqrt{\max\left\{ 0, a \right\}} , \sqrt{b} \right], & \text{otherwise} \end{array}\right. . \end{align*} When endpoints of $[a,b]$ and $[c,d]$ include $-\infty$ or $+\infty$, the definitions above must be modified slightly in a natural way. We use interval arithmetic to prove that any solution to $15$ of the $17$ constrained optimization problems $\SPR_S$ stated in Lemma \ref{lem: 19 cases} attains a value strictly less than $2/\sqrt{3}$, the maximum graphon spread claimed in Theorem \ref{thm: spread maximum graphon}. The constraints in each $\SPR_S$ allow us to derive equations for the variables $(\alpha_i,f_i,g_i)_{i\in S}$ in terms of each other, and $\mu$ and $\nu$. For the reader's convenience, we relocate these formulas and their derivations to Appendix \ref{sub-sec: formulas}. In the programs corresponding to each set $S\in\mathcal{S}_{17}$, we find two indices $i\in S\cap\{1,2,3,4\}$ and $j\in S\cap \{5,6,7\}$ such that for all $k\in S$, $\alpha_k,f_k,$ and $g_k$ may be calculated, step-by-step, from $\alpha_i, \alpha_j, \mu,$ and $\nu$. See Table \ref{tab: i j program search spaces} for each set $S\in\mathcal{S}_{17}$, organized by the chosen values of $i$ and $j$. \begin{table}[ht] \centering \begin{tabular}{c||r|r|r|r} & $1$ & $2$ & $3$ & $4$ \\ \hline\hline \multirow{2}{*}{$5$} & \multirow{2}{*}{$1|57$} & $24|57$ & \multirow{2}{*}{$1|234|57$} & $4|57$ \\ & & $1|24|57$ & & $1|4|57$ \\ \hline \multirow{2}{*}{$6$} & \multirow{2}{*}{$1|567$} & $24|567$ & $234|567$ & $4|567$ \\ & & $1|24|567$ & $1|234|567$ & $1|4|567$ \\ \hline $7$ & $1|7$ & $1|24|7$ & $1|234|7$ & $1|4|7$ \end{tabular} \caption{ The indices $i,j$ corresponding to the search space used to bound solutions to $\SPR_S$. } \label{tab: i j program search spaces} \end{table} In the program corresponding to a set $S\in\mathcal{S}_{17}$, we search a carefully chosen set $\Omega \subseteq [0,1]^3\times [-1,0]$ for values of $(\alpha_i,\alpha_j,\mu,\nu)$ which satisfy $\SPR_S$. We first divide $\Omega$ into a grid of ``boxes''. Starting at depth $0$, we test each box $B$ for feasibility by assuming that $(\alpha_i,\alpha_j,\mu,\nu)\in B$ and that $\mu-\nu\geq 2/\sqrt{3}$. Next, we calculate $\alpha_k,f_k,$ and $g_k$ for all $k\in S$ in interval arithmetic using the formulas from Section \ref{sec: appendix}.
When the calculation detects that a constraint of $\SPR_S$ is not satisfied, e.g., by showing that some $\alpha_k, f_k,$ or $g_k$ lies in an empty interval, or by constraining $\sum_{i\in S}\alpha_i$ to a union of intervals which does not contain $1$, then the box is deemed infeasible. Otherwise, the box is split into two boxes of equal dimensions, with the dimension of the cut alternating cyclically. For each $S\in\mathcal{S}_{17}$, the program $\SPR_S$ has $3$ norm constraints, $2|S|$ linear eigenvector constraints, $|S|$ elliptical constraints, $\binom{|S|}{2}$ inequality constraints, and $3|S|$ interval membership constraints. By using interval arithmetic, we have a computer-assisted proof of the following result. \begin{lemma}\label{lem: 2 feasible sets} Suppose $S\in\mathcal{S}_{17}\setminus\{\{1,7\}, \{4,5,7\}\}$. Then any solution to $\text{SPR}_S$ attains a value strictly less than $2/\sqrt{3}$. \end{lemma} To better understand the role of interval arithmetic in our proof, consider the following example. \begin{example}\label{ex: infeasible box} Suppose $\mu,\nu$, and $(\alpha_i,f_i,g_i)_{i\in\{1,\dots,7\}}$ form a solution to $\text{SPR}_{\{1,\dots,7\}}$. We show that $(\alpha_3, \mu,\nu)\notin [.7,.8]\times [.9,1]\times [-.2,-.1]$. By Proposition \ref{prop: fg23 assume 23}, $\displaystyle{g_3^2 = \frac{\nu(\alpha_3 + 2 \mu)}{\alpha_3(\mu + \nu) + 2 \mu \nu}}$. Using interval arithmetic, \begin{align*} \nu(\alpha_3 + 2 \mu) &= [-.2,-.1] \times \big([.7,.8] + 2 \times [.9,1] \big) \\ &= [-.2,-.1] \times [2.5,2.8] = [-.56,-.25], \text{ and } \\ \alpha_3(\mu + \nu) + 2 \mu \nu &= [.7,.8]\times \big([.9,1] + [-.2,-.1]\big) + 2 \times [.9,1] \times [-.2,-.1] \\ &= [.7,.8] \times [.7,.9] + [1.8,2] \times [-.2,-.1] \\ &= [.49,.72] + [-.4,-.18] = [.09,.54]. \end{align*} Thus \begin{align*} g_3^2 &= \frac{ \nu (\alpha_3 + 2 \mu) } { \alpha_3(\mu + \nu) + 2 \mu \nu } = [-.56,-.25] \div [.09,.54] = [-6.\overline{2},-.4\overline{629}]. \end{align*} Since $g_3^2\geq 0$, we have a contradiction. \end{example} Example \ref{ex: infeasible box} illustrates a number of key elements. First, we note that through interval arithmetic, we are able to provably rule out the corresponding region. However, the resulting interval for the quantity $g_3^2$ is over fifty times bigger than any of the input intervals. This growth in the size of intervals is common, and so, in some regions, fairly small intervals for variables are needed to provably rule out the existence of a solution. For this reason, using a computer to complete this procedure is ideal, as doing millions of calculations by hand would be untenable. However, the use of a computer for interval arithmetic brings with it another issue. Computers have limited memory, and therefore cannot represent all numbers in $\mathbb{R}^*$. Instead, a computer can only store a finite subset of numbers, which we will denote by $F\subsetneq \mathbb{R}^*$. This set $F$ is not closed under the basic arithmetic operations, and so when some operation is performed and the resulting answer is not in $F$, some rounding procedure must be performed to choose an element of $F$ to approximate the exact answer. This issue is the cause of roundoff error in floating point arithmetic, and must be treated in order to use computer-based interval arithmetic as a proof. PyInterval is one of many software packages designed to perform interval arithmetic in a manner which accounts for this crucial feature of floating point arithmetic.
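\medskip \noindent {\bf Remark: } To make the bookkeeping above concrete, the following Python sketch implements the interval operations defined earlier with exact rational endpoints (so that no rounding is needed) and reproduces the computation of Example \ref{ex: infeasible box}. It is a toy illustration, not the PyInterval interface, and it omits the unbounded cases and the case $0\in[c,d]$.
\begin{verbatim}
from fractions import Fraction

class Interval:
    # A closed interval [lo, hi] with exact rational endpoints.
    def __init__(self, lo, hi):
        self.lo, self.hi = Fraction(lo), Fraction(hi)

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, o):
        assert not (o.lo <= 0 <= o.hi), "0 in divisor: split into rays"
        q = [self.lo/o.lo, self.lo/o.hi, self.hi/o.lo, self.hi/o.hi]
        return Interval(min(q), max(q))

    def __repr__(self):
        return f"[{float(self.lo):.4f}, {float(self.hi):.4f}]"

# The box for (alpha_3, mu, nu) from the example above.
a3 = Interval('0.7', '0.8')
mu = Interval('0.9', '1')
nu = Interval('-0.2', '-0.1')
two = Interval(2, 2)

# g_3^2 = nu*(a3 + 2*mu) / (a3*(mu + nu) + 2*mu*nu)
g3_sq = (nu * (a3 + two * mu)) / (a3 * (mu + nu) + two * mu * nu)
print(g3_sq)          # [-6.2222, -0.4630]
assert g3_sq.hi < 0   # but g_3^2 >= 0: the box is infeasible
\end{verbatim}
With floating point endpoints, the same computation requires the outward rounding described next.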
Given some $x \in \mathbb{R}^*$, let $fl_-(x)$ be the largest $y \in F$ satisfying $y \le x$, and $fl_+(x)$ be the smallest $y \in F$ satisfying $y \ge x$. Then, in order to maintain a mathematically accurate system of interval arithmetic on a computer, once an operation is performed to form a union of intervals $\bigcup_{i=1}^k[a_i, b_i]$, the computer forms a union of intervals containing $[fl_-(a_i),fl_+(b_i)]$ for all $1\leq i\leq k$. The programs which prove Lemma \ref{lem: 2 feasible sets} can be found at \cite{2021riasanovsky-spread}. \subsection{Completing the proof of Theorem \ref{thm: spread maximum graphon}}\label{sub-sec: cases 4|57 and 1|7} Finally, we complete the proof of the second main result of this paper. We will need the following lemma. \begin{lemma} \label{lem: SPR457} If $(\alpha_4,\alpha_5,\alpha_7)$ is a solution to $\text{SPR}_{\{4,5,7\}}$, then $\alpha_7=0.$ \end{lemma} We delay the proof of Lemma \ref{lem: SPR457} to Section \ref{sec: ugly} because it is technical. We now proceed with the proof of Theorem \ref{thm: spread maximum graphon}. \begin{proof}[Proof of Theorem \ref{thm: spread maximum graphon}] Let $W$ be a graphon such that $\text{spr}(W) = \max_{U\in\mathcal{W}}\text{spr}(U)$. By Lemma \ref{lem: 2 feasible sets} and Lemma \ref{lem: SPR457}, $W$ is a $2\times 2$ stepgraphon. Let the weights of the parts be $\alpha_1$ and $1-\alpha_1$. \begin{comment} By Lemma \ref{lem: SPR457}, Finally, we complete the proof of the desired claim. Note that the \begin{align*} \left[\begin{array}{ccc} \alpha_5 & 0 & 0\\ 0 & 0 & 0\\ \alpha_5 & 1-\alpha_5 & 0 \end{array}\right] \quad \text{ and }\quad \left[\begin{array}{ccc} \alpha_5 & 0\\ \alpha_5 & 1-\alpha_5 \end{array}\right] \end{align*} have the same multiset of nonzero eigenvalues. By Claim F and the definition of $\text{SPR}_{\{4,5,7\}}$ and $\text{SPR}_{\{4,5\}}$, we have the following. If$\mu, \nu, $ and $(\alpha_4,\alpha_5,\alpha_7)$ are part of a solution to $\text{SPR}_{\{4,5,7\}}$, then $\alpha_7 = 0$ and the quantities $\mu, \nu, $ and $(\alpha_4,1-\alpha_4)$ are part of a solution to $\text{SPR}_{\{4,5\}}$. By setting $(\alpha_1',\alpha_7') := (\alpha_5,1-\alpha_5)$, it follows by inspection of the definition of $\text{SPR}_{\{4,5\}}$ and $\text{SPR}_{\{1,7\}}$ that $\mu, \nu$, and $(\alpha_1',\alpha_7')$ are part of a solution to $\text{SPR}_{\{1,7\}}$. \end{comment} Thus, it suffices to demonstrate the uniqueness of the desired solution $\mu, \nu, $ and $(\alpha_i,f_i,g_i)_{i\in \{1,7\}}$ to $\text{SPR}_{\{1,7\}}$. Indeed, we first note that with \begin{align*} N(\alpha_1) := \left[\begin{array}{cc} \alpha_1 & 1-\alpha_1\\ \alpha_1 & 0 \end{array}\right], \end{align*} the quantities $\mu$ and $\nu$ are precisely the roots of the characteristic polynomial \begin{align*} p(x) = x^2-\alpha_1x-\alpha_1(1-\alpha_1). \end{align*} In particular, \begin{align*} \mu &= \dfrac{ \alpha_1 + \sqrt{\alpha_1(4-3\alpha_1)} } {2} , \quad \nu = \dfrac{ \alpha_1 - \sqrt{\alpha_1(4-3\alpha_1)} } {2} , \end{align*} and \begin{align*} \mu-\nu &= \sqrt{\alpha_1(4-3\alpha_1)}. \end{align*} Optimizing, it follows that $(\alpha_1, 1-\alpha_1) = (2/3, 1/3)$. Calculating the eigenfunctions and normalizing them gives that $\mu, \nu, $ and their respective eigenfunctions match those from the statement of Theorem \ref{thm: spread maximum graphon}. \end{proof} \section*{Acknowledgements} The work of A. Riasanovsky was supported in part by NSF award DMS-1839918 (RTG). The work of M. Tait was supported in part by NSF award DMS-2011553.
The work of J. Urschel was supported in part by ONR Research Contract N00014-17-1-2177. The work of J.~Breen was supported in part by NSERC Discovery Grant RGPIN-2021-03775. The authors are grateful to Louisa Thomas for greatly improving the style of presentation. \section{From graphons to stepgraphons}\label{sec: graphon spread reduction} The main result of this section is as follows. \begin{theorem}\label{thm: reduction to stepgraphon} Suppose $W$ maximizes $\text{spr}(\hat{\mathcal{W}})$. Then $W$ is a stepfunction taking values $0$ and $1$ of the following form \begin{align*} \input{graphics/stepgraphon7x7} \quad . \end{align*} Furthermore, the internal divisions separate according to the sign of the eigenfunction corresponding to the minimum eigenvalue of $W$. \end{theorem} We begin Section \ref{sub-sec: L2 averaging} by mirroring the argument in \cite{terpai} which proved a conjecture of Nikiforov regarding the largest eigenvalue of a graph and its complement, $\mu+\overline{\mu}$. There, Terpai showed that performing two operations on graphons leads to a strict increase in $\mu+\overline{\mu}$. Furthermore, based on previous work of Nikiforov from \cite{Nikiforov4}, the conjecture for graphs reduced directly to maximizing $\mu+\overline{\mu}$ for graphons. Using these operations, Terpai \cite{terpai} reduced to a $4\times 4$ stepgraphon and then completed the proof by hand. In our case, we are not so lucky and are left with a $7\times 7$ stepgraphon after performing similar but more technical operations, detailed in this section. In order to reduce to a $3\times 3$ stepgraphon, we appeal to interval arithmetic (see Section \ref{sub-sec: numerics} and Appendices \ref{sec: ugly} and \ref{sec: appendix}). Furthermore, our proof requires an additional technical argument to translate the result for graphons (Theorem \ref{thm: spread maximum graphon}) to our main result for graphs (Theorem \ref{thm: spread maximum graphs}). In Section \ref{sub-sec: stepgraphon proof}, we prove Theorem \ref{thm: reduction to stepgraphon}. \subsection{Averaging}\label{sub-sec: L2 averaging} For convenience, we introduce some terminology. For any graphon $W$ with $\lambda$-eigenfunction $h$, we say that $x\in [0,1]$ is \textit{typical} (with respect to $W$ and $h$) if \begin{align*} \lambda\cdot h(x) = \int_{y\in [0,1]}W(x,y)h(y). \end{align*} Note that a.e. $x\in [0,1]$ is typical. Additionally if $U\subseteq [0,1]$ is measurable with positive measure, then we say that $x_0\in U$ is \textit{average} (on $U$, with respect to $W$ and $h$) if \begin{align*} h(x_0)^2 = \dfrac{1}{m(U)}\int_{y\in U}h(y)^2. \end{align*} Given $W,h,U$, and $x_0$ as above, we define the $L_2[0,1]$ function $\text{av}_{U,x_0}h$ by setting \begin{align*} (\text{av}_{U,x_0}h)(x) := \left\{\begin{array}{rl} h(x_0), & x\in U\\ h(x), & \text{otherwise} \end{array}\right. . \end{align*} Clearly $\|\text{av}_{U,x_0}h\|_2 = \|h\|_2$. Additionally, we define the graphon $\text{av}_{U,x_0}W$ by setting \begin{align*} \text{av}_{U,x_0}W (x,y) := \left\{\begin{array}{rl} 0, & (x,y)\in U\times U \\ W(x_0,y), &(x,y)\in U\times U^c \\ W(x,x_0), &(x,y)\in U^c\times U \\ W(x,y), & (x,y)\in U^c\times U^c \end{array}\right. . \end{align*} In the graph setting, this is analogous to replacing $U$ with an independent set whose vertices are clones of $x_0$. The following lemma indicates how this cloning affects the eigenvalues.
\begin{lemma}\label{lem: eigenfunction averaging} Suppose $W$ is a graphon with $h$ a $\lambda$-eigenfunction and suppose there exist disjoint measurable subsets $U_1,U_2\subseteq [0,1]$ of positive measures $\alpha$ and $\beta$, respectively. Let $U:=U_1\cup U_2$. Moreover, suppose $W = 0$ a.e. on $(U\times U)\setminus (U_1\times U_1)$. Additionally, suppose $x_0\in U_2$ is typical and average on $U$, with respect to $W$ and $h$. Let $\tilde{h} := \text{av}_{U,x_0}h$ and $\tilde{W} := \text{av}_{U,x_0}W$. Then for a.e. $x\in [0,1]$, \begin{align}\label{eq: averaged vector image} (A_{\tilde{W}}\tilde{h})(x) &= \lambda\tilde{h}(x) + \left\{\begin{array}{rl} 0, & x\in U\\ m(U)\cdot W(x_0,x)h(x_0) - \int_{y\in U} W(x,y)h(y), &\text{otherwise} \end{array}\right. . \end{align} Furthermore, \begin{align}\label{eq: averaged vector product} \langle A_{\tilde{W}}\tilde{h}, \tilde{h}\rangle = \lambda + \int_{(x,y)\in U_1\times U_1}W(x,y)h(x)h(y). \end{align} \end{lemma} \begin{proof} We first prove Equation \eqref{eq: averaged vector image}. Note that for a.e. $x\in U$, \begin{align*} (A_{\tilde{W}}\tilde{h})(x) &= \int_{y\in [0,1]} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in U} \tilde{W}(x,y)\tilde{h}(y) + \int_{y\in [0,1]\setminus U} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in [0,1]\setminus U} W(x_0,y)h(y) \\ &= \int_{y\in [0,1]} W(x_0,y)h(y) - \int_{y\in U} W(x_0,y)h(y) \\ &= \lambda h(x_0)\\ &= \lambda \tilde{h}(x), \end{align*} as desired; here the final equalities use that $W(x_0,y) = 0$ for a.e. $y\in U$ (since $x_0\in U_2$) and that $x_0$ is typical. Now note that for a.e. $x\in [0,1]\setminus U$, \begin{align*} (A_{\tilde{W}}\tilde{h})(x) &= \int_{y\in [0,1]} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in U} \tilde{W}(x,y)\tilde{h}(y) + \int_{y\in [0,1]\setminus U} \tilde{W}(x,y)\tilde{h}(y) \\ &= \int_{y\in U} W(x_0,x)h(x_0) + \int_{y\in [0,1]\setminus U} W(x,y)h(y) \\ &= m(U)\cdot W(x_0,x)h(x_0) + \int_{y\in [0,1]} W(x,y)h(y) - \int_{y\in U} W(x,y)h(y) \\ &= \lambda h(x) + m(U)\cdot W(x_0,x)h(x_0) - \int_{y\in U} W(x,y)h(y) . \end{align*} So again, the claim holds and this completes the proof of Equation \eqref{eq: averaged vector image}. Now we prove Equation \eqref{eq: averaged vector product}. Indeed by Equation \eqref{eq: averaged vector image}, \begin{align*} \langle (A_{\tilde{W}}\tilde{h}), \tilde{h}\rangle &= \int_{x\in [0,1]} (A_{\tilde{W}}\tilde{h})(x) \tilde{h}(x) \\ &= \int_{x\in [0,1]} \lambda\tilde{h}(x)^2 + \int_{x\in [0,1]\setminus U} \left( m(U)\cdot W(x_0,x)h(x_0) - \int_{y\in U}W(x,y)h(y) \right) \cdot h(x) \\ &= \lambda + m(U)\cdot h(x_0) \left( \int_{x\in [0,1]} W(x_0,x)h(x) - \int_{x\in U} W(x_0,x)h(x) \right) \\ &\quad - \int_{y\in U} \left( \int_{x\in [0,1]} W(x,y)h(x) -\int_{x\in U} W(x,y)h(x) \right) \cdot h(y) \\ &= \lambda + m(U)\cdot h(x_0)\left( \lambda h(x_0) - \int_{y\in U} 0 \right) - \int_{y\in U} \left( \lambda h(y)^2 -\int_{x\in U} W(x,y)h(x)h(y) \right) \\ &= \lambda + \lambda m(U)\cdot h(x_0)^2 - \lambda \int_{y\in U} h(y)^2 + \int_{(x,y)\in U\times U} W(x,y)h(x)h(y) \\ &= \lambda + \int_{(x,y)\in U_1\times U_1} W(x,y)h(x)h(y), \end{align*} and this completes the proof of the desired claims. \end{proof} We have the following useful corollary. \begin{corollary}\label{cor: averaging reduction} Suppose $\text{spr}(W) = \text{spr}(\hat{\mathcal{W}})$ with maximum and minimum eigenvalues $\mu,\nu$ corresponding respectively to eigenfunctions $f,g$. Moreover, suppose that there exist disjoint subsets $A,B\subseteq [0,1]$ and $x_0\in B$ so that the conditions of Lemma \ref{lem: eigenfunction averaging} are met for $W$ with $\lambda = \mu$, $h = f$, $U_1=A $, and $U_2=B$.
Then, \begin{enumerate}[(i)] \item \label{item: U independent} $W(x,y) = 0$ for a.e. $(x,y)\in U^2$, and \item \label{item: f,g constant on U} $f$ is constant on $U$. \end{enumerate} \end{corollary} \begin{proof} Without loss of generality, we assume that $\|f\|_2 = \|g\|_2 = 1$. Write $\tilde{W}$ for the graphon and $\tilde{f},\tilde{g}$ for the corresponding functions produced by Lemma \ref{lem: eigenfunction averaging}. By Proposition \ref{prop: PF eigenfunction}, we may assume without loss of generality that $f > 0$ a.e. on $[0,1]$. We first prove Item \eqref{item: U independent}. Note that \begin{align} \text{spr}(\tilde{W}) &\geq \int_{(x,y)\in [0,1]^2} \tilde{W}(x,y)(\tilde{f}(x)\tilde{f}(y)-\tilde{g}(x)\tilde{g}(y)) \nonumber\\ &= (\mu-\nu) + \int_{(x,y)\in A\times A}W(x,y)(f(x)f(y)-g(x)g(y)) \nonumber\\ &= \text{spr}(W) + \int_{(x,y)\in A\times A}W(x,y)(f(x)f(y)-g(x)g(y)). \label{eq: sandwich spread} \end{align} Since $\text{spr}(W)\geq \text{spr}(\tilde{W})$ and by Lemma \ref{lem: K = indicator function}.\eqref{item: |diff| > 0}, $f(x)f(y)-g(x)g(y) > 0$ for a.e. $(x,y)\in A\times A$ such that $W(x,y) \neq 0$. Item \eqref{item: U independent} follows. \\ For Item \eqref{item: f,g constant on U}, we first note that $f$ is a $\mu$-eigenfunction for $\tilde{W}$. Indeed, if not, then the inequality in \eqref{eq: sandwich spread} holds strictly, a contradiction to the fact that $\text{spr}(W)\geq \text{spr}(\tilde{W})$. Again by Lemma \ref{lem: eigenfunction averaging}, \begin{align*} m(U)\cdot W(x_0,x)f(x_0) = \int_{y\in U}W(x,y)f(y) \end{align*} for a.e. $x\in [0,1]\setminus U$. Let $S_1 := \{x\in [0,1]\setminus U : W(x_0,x) = 1\}$ and $S_0 := [0,1]\setminus (U\cup S_1)$. Suppose first that $m(S_1) > 0$. By Lemma \ref{lem: eigenfunction averaging} and by Cauchy-Schwarz, for a.e. $x\in S_1$ \begin{align*} m(U)\cdot f(x_0) &= m(U)\cdot W(x_0,x)f(x_0) \\ &= \int_{y\in U}W(x,y)f(y) \\ &\leq \int_{y\in U} f(y) \\ &\leq m(U)\cdot f(x_0), \end{align*} and by sandwiching, $W(x,y) = 1$ and $f(y) = f(x_0)$ for a.e. $y\in U$. Since $m(S_1) > 0$, it follows that $f(y) = f(x_0)$ for a.e. $y\in U$, as desired. \\ So we may assume otherwise, that $m(S_1) = 0$. Then for a.e. $x\in [0,1]\setminus U$, $W(x_0,x) = 0$ and \begin{align*} 0 &= m(U)\cdot W(x_0,x)f(x_0)= \int_{y\in U} W(x,y)f(y) \end{align*} and since $f>0$ a.e. on $[0,1]$, it follows that $W(x,y) = 0$ for a.e. $y\in U$. So altogether, $W(x,y) = 0$ for a.e. $(x,y)\in ([0,1]\setminus U)\times U$. So $W$ is disconnected, a contradiction to Fact \ref{prop: disconnected spectrum}. So the desired claim holds. \end{proof} \subsection{Proof of Theorem \ref{thm: reduction to stepgraphon}}\label{sub-sec: stepgraphon proof} \begin{proof} For convenience, we write $\mu := \mu(W)$ and $\nu := \nu(W)$ and let $f,g$ denote the corresponding unit eigenfunctions. Moreover by Proposition \ref{prop: PF eigenfunction}, we may assume without loss of generality that $f>0$. \\ First, we show without loss of generality that $f,g$ are monotone on the sets $P := \{x\in [0,1] : g(x)\geq 0\}$ and $N := [0,1]\setminus P$. Indeed, we define a total ordering $\preccurlyeq$ on $[0,1]$ as follows. For all $x$ and $y$, we let $x \preccurlyeq y$ if: \begin{enumerate}[(i)] \item $g(x)\geq 0$ and $g(y)<0$, or \item $g(x)$ and $g(y)$ are either both nonnegative or both negative, and $f(x) > f(y)$, or \item $g(x)$ and $g(y)$ are either both nonnegative or both negative, $f(x) = f(y)$, and $x\leq y$. \end{enumerate} By inspection, the function $\varphi:[0,1]\to[0,1]$ defined by \begin{align*} \varphi(x) := m(\{y\in [0,1] : y\preccurlyeq x\}).
\end{align*} is a weak isomorphism between $W$ and its entrywise composition with $\varphi$. By invariance of $\text{spr}(\cdot)$ under weak isomorphism, we make the above replacement and write $f,g$ for the replacement eigenfunctions. That is, we are assuming that our graphon is relabeled so that $[0,1]$ respects $\preccurlyeq$. \\ As above, let $P := \{x\in [0,1] : g(x) \geq 0\}$ and $N:= [0,1]\setminus P$. By Lemma \ref{lem: local eigenfunction equation}, $f$ and $-g$ are monotone nonincreasing on $P$. Additionally, $f$ and $g$ are monotone nonincreasing on $N$. Without loss of generality, we may assume that $W$ is of the form from Lemma \ref{lem: K = indicator function}. Now we let $S := \{x\in [0,1] : f(x) < |g(x)|\}$ and $C:=[0,1]\setminus S$. By Lemma \ref{lem: K = indicator function} we have that $W(x,y)=1$ for almost every $x,y\in C$ and $W(x,y)=0$ for almost every $x,y\in S \cap P$ or $x,y\in S\cap N$. We have used the notation $C$ and $S$ because the analogous sets in the graph setting form a clique and a stable set, respectively. We first prove the following claim. \\ \\ \textbf{Claim A:} Except on a set of measure $0$, $f$ takes on at most $2$ values on $P\cap S$, and at most $2$ values on $N\cap S$. \\ We first prove this claim for $f$ on $P\cap S$. Let $D$ be the set of all discontinuities of $f$ on the interior of the interval $P\cap S$. Clearly $D$ consists only of jump-discontinuities. By the Darboux-Froda Theorem, $D$ is at most countable and moreover, $(P\cap S)\setminus D$ is a union of at most countably many disjoint intervals $\mathcal{I}$. Moreover, $f$ is continuous on the interior of each $I\in\mathcal{I}$. \\ We now show that $f$ is constant on the interior of each $I\in \mathcal{I}$. Indeed, let $I\in \mathcal{I}$. Since $f$ is a $\mu$-eigenfunction for $W$, \begin{align*} \mu f(x) = \int_{y\in [0,1]}W(x,y)f(y) \end{align*} for a.e. $x\in [0,1]$ and by continuity of $f$ on the interior of $I$, this equation holds everywhere on the interior of $I$. Additionally since $f$ is continuous on the interior of $I$, by the Mean Value Theorem, there exists some $x_0$ in the interior of $I$ so that \begin{align*} f(x_0)^2 = \dfrac{1}{m(I)}\int_{x\in I}f(x)^2. \end{align*} By Corollary \ref{cor: averaging reduction}, $f$ is constant on the interior of $I$, as desired. \\ If $|\mathcal{I}|\leq 2$, the desired claim holds, so we may assume otherwise. Then there exist distinct $I_1,I_2,I_3\in \mathcal{I}$. Moreover, $f$ equals a constant $f_1,f_2,f_3$ on the interiors of $I_1,I_2,$ and $I_3$, respectively. Additionally since $I_1,I_2,$ and $I_3$ are separated from each other by at least one jump discontinuity, we may assume without loss of generality that $f_1 < f_2 < f_3$. It follows that there exists a measurable subset $U\subseteq I_1\cup I_2\cup I_3$ of positive measure so that \begin{align*} f_2^2 &= \dfrac{1}{m(U)}\int_{x\in U}f(x)^2. \end{align*} By Corollary \ref{cor: averaging reduction}, $f$ is constant on $U$, a contradiction. So Claim A holds on $P\cap S$. For Claim A on $N\cap S$, we may repeat this argument with $P$ and $N$ interchanged, and $g$ and $-g$ interchanged. \\ Now we show the following claim. \\ \\ \textbf{Claim B:} For a.e. $(x,y) \in (P\times P)\cup (N\times N)$ such that $f(x)\geq f(y)$, we have that for a.e. $z\in [0,1]$, $W(x,z) = 0$ implies that $W(y,z) = 0$. \\ We first prove the claim for a.e. $(x,y)\in P\times P$. Suppose $W(x,z) = 0$. By Lemma \ref{lem: K = indicator function}, in this case $z\in P$. Then for a.e.
such $x,y$, by Lemma \ref{lem: local eigenfunction equation}, $g(x)\leq g(y)$. By Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, $W(x,z) = 0$ implies that $f(x)f(z) < g(x)g(z)$. Since $f(x)\geq f(y)$ and $g(x)\leq g(y)$, $f(y)f(z) < g(y)g(z)$. Again by Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, $W(y,z) = 0$ for a.e. such $x,y,z$, as desired. So the desired claim holds for a.e. $(x,y)\in P\times P$ such that $f(x)\geq f(y)$. We may repeat the argument for a.e. $(x,y)\in N\times N$ to arrive at the same conclusion. \\ \\ The next claim follows directly from Lemma \ref{lem: local eigenfunction equation}. \\ \\ \textbf{Claim C:} For a.e. $x\in [0,1]$, $x\in C$ if and only if $f(x) \geq 1$, if and only if $|g(x)| \leq 1$. \\ \\ Finally, we show the following claim. \\ \\ \textbf{Claim D:} Except on a set of measure $0$, $f$ takes on at most $3$ values on $P\cap C$, and at most $3$ values on $N\cap C$. \\ For a proof, we first write $P\cap S = S_1 \cup S_2$ so that $S_1,S_2$ are disjoint and $f$ equals some constant $f_1$ a.e. on $S_1$ and $f$ equals some constant $f_2$ a.e. on $S_2$. By Lemma \ref{lem: local eigenfunction equation}, $g$ equals some constant $g_1$ a.e. on $S_1$ and $g$ equals some constant $g_2$ a.e. on $S_2$. By definition of $P$, $g_1,g_2\geq 0$. Now suppose $x\in P\cap C$ so that \begin{align*} \mu f(x) &= \int_{y\in [0,1]}W(x,y)f(y). \end{align*} Then by Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, \begin{align*} \mu f(x) &= \int_{y\in (P\cap C)\cup N} f(y) + \int_{y\in S_1}W(x,y)f(y) + \int_{y\in S_2}W(x,y)f(y) . \end{align*} By Claim B, this expression for $\mu f(x)$ may take on at most $3$ values. So the desired claim holds on $P\cap C$. Repeating the same argument, the claim also holds on $N\cap C$. \\ We are nearly done with the proof of the theorem, as we have now reduced $W$ to a $10\times 10$ stepgraphon. To complete the proof, we show that we may reduce to at most $7\times 7$. We now partition $P\cap C, P\cap S, N\cap C$, and $N\cap S$ so that $f$ and $g$ are constant a.e. on each part as: \begin{itemize} \item $P\cap C = U_1\cup U_2\cup U_3$, \item $P\cap S = U_4\cup U_5$, \item $N\cap C = U_6\cup U_7\cup U_8$, and \item $N\cap S = U_9\cup U_{10}$. \end{itemize} Then by Lemma \ref{lem: K = indicator function}.\eqref{item: K = 0 or 1}, there exists a matrix $(m_{ij})_{i,j\in [10]}$ so that for all $(i,j)\in [10]\times [10]$, \begin{itemize} \item $m_{ij}\in \{0,1\}$, \item $W(x,y) = m_{ij}$ for a.e. $(x,y)\in U_i\times U_j$, \item $m_{ij} = 1$ if and only if $f_if_j > g_ig_j$, and \item $m_{ij} = 0$ if and only if $f_if_j < g_ig_j$. \end{itemize} Additionally, we set $\alpha_i = m(U_i)$ and also denote by $f_i$ and $g_i$ the constant values of $f,g$ on each $U_i$, respectively, for each $i= 1, \ldots, 10$. Furthermore, by Claim C and Lemma \ref{lem: K = indicator function} we assume without loss of generality that $f_1 > f_2 > f_3 \geq 1 > f_4 > f_5$ and that $f_6 > f_7 > f_8 \geq 1 > f_9 > f_{10}$. Also by Lemma \ref{lem: local eigenfunction equation}, $0 \leq g_1 < g_2 < g_3 \leq 1 < g_4 < g_5$ and $0 \leq -g_6 < -g_7 < -g_8 \leq 1 < -g_9 < -g_{10}$. Also, by Claim B, no two columns of $m$ are identical within the sets $\{1,2,3,4,5\}$ and within $\{6,7,8,9,10\}$.
Shading $m_{ij} = 1$ black and $m_{ij} = 0$ white, we let \begin{align*} M = \begin{tabular}{||ccc||cc|||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\\hline\hline \cellcolor{black} & \cellcolor{black} & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\\hline\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & \\\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\\hline\hline \end{tabular} \quad . \end{align*} Therefore, $W$ is a stepgraphon with values determined by $M$ and the size of each block determined by the $\alpha_i$. \\ We claim that $0\in \{\alpha_3, \alpha_4, \alpha_5\}$ and $0\in\{\alpha_8,\alpha_9,\alpha_{10}\}$. For the first claim, assume to the contrary that all of $\alpha_3, \alpha_4, \alpha_5$ are positive and note that there exists some $x_4\in U_4$ such that \begin{align*} \mu f_4 = \mu f(x_4) = \int_{y\in [0,1]}W(x_4,y)f(y). \end{align*} Moreover, there exist measurable subsets $U_3'\subseteq U_3$ and $U_5'\subseteq U_5$ of positive measure so that, with $U := U_3'\cup U_4\cup U_5'$, \begin{align*} f(x_4)^2 = \dfrac{1}{m(U)}\int_{y\in U}f(y)^2. \end{align*} Note that by Lemma \ref{lem: local eigenfunction equation}, we may assume that $x_4$ satisfies the analogous averaging identity on $U$ with respect to $g$ as well. The conditions of Corollary \ref{cor: averaging reduction} are met for $W$ with $A=U_3'$, $B=U_4\cup U_5'$, and $x_0 = x_4$. Since $\int_{A\times A}W(x,y)f(x)f(y) > 0$, this is a contradiction to the corollary, so the desired claim holds. The same argument may be used to prove that $0\in \{\alpha_8,\alpha_9,\alpha_{10}\}$. \\ We now form the principal submatrix $M'$ by removing the $i$-th row and column from $M$ if and only if $\alpha_i = 0$. Since the removed blocks have measure zero, $W$ is a stepgraphon with values determined by $M'$. Let $M_P'$ denote the principal submatrix of $M'$ corresponding to the indices $i\in\{1,\dots,5\}$ such that $\alpha_i>0$. That is, $M_P'$ corresponds to the upper left hand block of $M$. We use red to indicate rows and columns present in $M$ but not $M_P'$.
When forming the submatrix $M_P'$, we borrow the internal subdivisions present in the definition of $M$ above to indicate where $f\geq 1$ and where $f<1$ (that is, the division between $C \cap P$ and $S \cap P$). Note that this is not the same as what the internal divisions denote in the statement of the theorem. Since $0\in\{\alpha_3,\alpha_4,\alpha_5\}$, it follows that $M_P'$ is a principal submatrix of \begin{align*} \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & & \\ \cellcolor{black} & & \cellcolor{red} & & \\ \hline\hline \end{tabular} \quad , \quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \\\hline\hline \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & & & \cellcolor{red} & \\ \hline\hline \end{tabular} \quad, \text{ or }\quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & \cellcolor{red}\\\hline\hline \cellcolor{black} & \cellcolor{black} & & & \cellcolor{red} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \hline\hline \end{tabular} \quad . \end{align*} In the second case, columns $2$ and $3$ are identical in $M'$, and in the third case, columns $1$ and $2$ are identical in $M'$.
So without loss of generality, $M_P'$ is a principal submatrix of one of \begin{align*} \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{red} & & \\ \cellcolor{black} & & \cellcolor{red} & & \\ \hline\hline \end{tabular} \quad , \quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{red} & \cellcolor{black} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{red} & \\\hline\hline \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{red} & & \cellcolor{red} & \\ \hline\hline \end{tabular} \quad, \text{ or }\quad \begin{tabular}{||ccc||cc||}\hline\hline \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & \cellcolor{black} & \cellcolor{red} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \cellcolor{black} & \cellcolor{red} & \cellcolor{black} & & \cellcolor{red}\\\hline\hline \cellcolor{black} & \cellcolor{red} & & & \cellcolor{red} \\ \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} & \cellcolor{red} \\ \hline\hline \end{tabular} \quad . \end{align*} In each case, $M_P'$ is a principal submatrix of \begin{align*} \begin{tabular}{||cc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \\\hline\hline \cellcolor{black} & \cellcolor{black} & & \\ \cellcolor{black} & & & \\ \hline\hline \end{tabular} \quad . \end{align*} An identical argument shows that the principal submatrix of $M'$ on the indices $i\in\{6,\dots,10\}$ such that $\alpha_i>0$ is a principal submatrix of \begin{align*} \begin{tabular}{||cc||cc||}\hline\hline \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \\\hline\hline \cellcolor{black} & \cellcolor{black} & & \\ \cellcolor{black} & & & \\ \hline\hline \end{tabular} \quad . \end{align*} Finally, we note that $0\in \{\alpha_1,\alpha_6\}$. Indeed, otherwise columns $1$ and $6$ would be identical in $M'$, a contradiction. So without loss of generality, row and column $6$ were also removed from $M$ to form $M'$. This completes the proof of the theorem. \end{proof} \section{From graphons to graphs}\label{sub-sec: graphons to graphs} In this section, we show that Theorem \ref{thm: spread maximum graphon} implies Theorem \ref{thm: spread maximum graphs} for all $n$ sufficiently large; that is, the solution to the problem of maximizing the spread of a graphon yields the solution to the problem of maximizing the spread of a graph for sufficiently large $n$. The outline for our argument is as follows. First, we define the spread-maximum graphon $W$ as in Theorem \ref{thm: spread maximum graphon}. Let $\{G_n\}$ be any sequence where each $G_n$ is a spread-maximum graph on $n$ vertices and denote by $\{W_n\}$ the corresponding sequence of graphons.
We show that, after applying measure-preserving transformations to each $W_n$, the extreme eigenvalues and eigenvectors of each $W_n$ converge suitably to those of $W$. It follows for $n$ sufficiently large that except for $o(n)$ vertices, $G_n$ is a join of a clique on approximately $2n/3$ vertices and an independent set on approximately $n/3$ vertices (Lemma \ref{lem: few exceptional vertices}). Using results from Section \ref{sec:graphs}, we precisely estimate the extreme eigenvector entries on this $o(n)$ set. Finally, Lemma \ref{lem: no exceptional vertices} shows that the set of $o(n)$ exceptional vertices is actually empty, completing the proof. \\ Before proceeding with the proof, we state the following corollary of the Davis-Kahan theorem \cite{DK}, stated for graphons. \begin{corollary}\label{cor: DK eigenfunction perturbation} Suppose $W,W':[0,1]^2\to [0,1]$ are graphons. Let $\mu$ be an eigenvalue of $W$ with $f$ a corresponding unit eigenfunction. Let $\{h_k\}$ be an orthonormal eigenbasis for $W'$ with corresponding eigenvalues $\{\mu_k'\}$. Suppose that $|\mu_k'-\mu| > \delta$ for all $k\neq 1$. Then \begin{align*} \sqrt{1 - \langle h_1,f\rangle^2} \leq \dfrac{\|A_{W'-W}f\|_2}{\delta}. \end{align*} \end{corollary} Before proving Theorem \ref{thm: spread maximum graphs}, we prove the following approximate result. For all nonnegative integers $n_1,n_2,n_3$, let $G(n_1,n_2,n_3) := (K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$. \begin{lemma}\label{lem: few exceptional vertices} For all positive integers $n$, let $G_n$ denote a graph on $n$ vertices which maximizes spread. Then $G_n = G(n_1,n_2,n_3)$ for some nonnegative integers $n_1,n_2,n_3$ such that $n_1 = (2/3+o(1))n$, $n_2 = o(n)$, and $n_3 = (1/3+o(1))n$. \end{lemma} \begin{proof} Our argument outline is: \begin{enumerate} \item show that the eigenvectors for the spread-extremal graphs resemble the eigenfunctions of the spread-extremal graphon in an $L_2$ sense, and \item show that, with the exception of a small proportion of vertices, a spread-extremal graph is the join of a clique and an independent set. \end{enumerate} Let $\mathcal{P} := [0,2/3]$ and $\mathcal{N} := [0,1]\setminus \mathcal{P}$. By Theorem \ref{thm: spread maximum graphon}, the graphon $W$ which is the indicator function of the set $[0,1]^2\setminus \mathcal{N}^2$ maximizes spread. Denote by $\mu$ and $\nu$ its maximum and minimum eigenvalues, respectively. For every positive integer $n$, let $G_n$ denote a graph on $n$ vertices which maximizes spread, let $W_n$ be any stepgraphon corresponding to $G_n$, and let $\mu_n$ and $\nu_n$ denote the maximum and minimum eigenvalues of $W_n$, respectively. By Theorems \ref{thm: graphon eigenvalue continuity} and \ref{thm: spread maximum graphon}, and compactness of $\hat{\mathcal{W}}$, \begin{align*} \max\left\{ |\mu-\mu_n|, |\nu-\nu_n|, \delta_\square(W, W_n) \right\}\to 0. \end{align*} Moreover, we may apply measure-preserving transformations to each $W_n$ so that without loss of generality, $\|W-W_n\|_\square\to 0$. As in Theorem \ref{thm: spread maximum graphon}, let $f$ and $g$ be unit eigenfunctions which take the constant values $f_1,f_2$ and $g_1,g_2$ on $\mathcal{P}$ and $\mathcal{N}$, respectively. Furthermore, let $\varphi_n$ be a nonnegative unit $\mu_n$-eigenfunction for $W_n$ and let $\psi_n$ be a $\nu_n$-eigenfunction for $W_n$. \\ We show that without loss of generality, $\varphi_n\to f$ and $\psi_n\to g$ in the $L_2$ sense.
Since $\mu$ is the only positive eigenvalue of $W$ and it has multiplicity $1$, taking $\delta := \mu/2$, Corollary \ref{cor: DK eigenfunction perturbation} implies that \begin{align*} 1-\langle f,\varphi_n\rangle^2 &\leq \dfrac{4\|A_{W-W_n}f\|_2^2} {\mu^2} \\ &= \dfrac{4}{\mu^2} \cdot \left\langle A_{W-W_n}f, A_{W-W_n}f \right\rangle \\ &\leq \dfrac{4}{\mu^2} \cdot \| A_{W-W_n}f \|_1 \cdot \| A_{W-W_n}f \|_\infty \\ &\leq \dfrac{4}{\mu^2} \cdot \left( \|A_{W-W_n}\|_{\infty\to1}\|f\|_\infty \right) \cdot \|f\|_\infty \\ &\leq \dfrac{ 16\|W-W_n\|_\square \cdot \|f\|_\infty^2} {\mu^2}, \end{align*} where the last inequality follows from Lemma 8.11 of \cite{Lovasz2012Hombook}. Since $\|f\|_\infty\leq 1/\mu$, this proves the first claim. The second claim follows by replacing $f$ with $g$, and $\mu$ with $|\nu|$. \\ \\ \textbf{Note: }For the remainder of the proof, we will introduce quantities $\varepsilon_i > 0$ in lieu of writing complicated expressions explicitly. When we introduce a new $\varepsilon_i$, we will remark that given $\varepsilon_0,\dots,\varepsilon_{i-1}$ sufficiently small, $\varepsilon_i$ can be made small enough to meet the required conditions. \\ Let $\varepsilon_0 > 0$ and for all $n\geq 1$, define \begin{align*} \mathcal{P}_n &:= \{ x\in [0,1] : |\varphi_n(x) - f_1| < \varepsilon_0 \text{ and } |\psi_n(x)-g_1| < \varepsilon_0 \}, \\ \mathcal{N}_n &:= \{ x\in [0,1] : |\varphi_n(x) - f_2| < \varepsilon_0 \text{ and } |\psi_n(x)-g_2| < \varepsilon_0 \}, \text{ and } \\ \mathcal{E}_n &:= [0,1]\setminus (\mathcal{P}_n\cup \mathcal{N}_n). \end{align*} Since, by Chebyshev's inequality, \begin{align*} \varepsilon_0^2\, m\left(\left\{|\varphi_n-f|\geq \varepsilon_0\right\}\right) &\leq \int_{} |\varphi_n-f|^2 \to 0, \text{ and }\\ \varepsilon_0^2\, m\left(\left\{|\psi_n-g|\geq \varepsilon_0\right\}\right) &\leq \int_{} |\psi_n-g|^2 \to 0, \end{align*} it follows that \begin{align*} \max\left\{ m(\mathcal{P}_n\setminus\mathcal{P}), m(\mathcal{N}_n\setminus\mathcal{N}), m(\mathcal{E}_n) \right\} \to 0. \end{align*} For all $u\in V(G_n)$, let $S_u$ be the subinterval of $[0,1]$ corresponding to $u$ in $W_n$, and denote by $\varphi_u$ and $\psi_u$ the constant values of $\varphi_n$ and $\psi_n$ on $S_u$, respectively. For convenience, we define the following discrete analogues of $\mathcal{P}_n, \mathcal{N}_n, \mathcal{E}_n$: \begin{align*} P_n &:= \{ u\in V(G_n) : |\varphi_u - f_1| < \varepsilon_0 \text{ and } |\psi_u-g_1| < \varepsilon_0 \}, \\ N_n &:= \{ u\in V(G_n) : |\varphi_u - f_2| < \varepsilon_0 \text{ and } |\psi_u-g_2| < \varepsilon_0 \}, \text{ and } \\ E_n &:= V(G_n) \setminus (P_n\cup N_n). \end{align*} Let $\varepsilon_1>0$. By Lemma \ref{discrete ellipse equation} and using the fact that $\mu_n\to \mu$ and $\nu_n\to \nu$, \begin{align}\label{eq: recall graph ellipse equation} \left| \mu \varphi_u^2 - \nu\psi_u^2 - (\mu-\nu) \right| &< \varepsilon_1 \quad \text{ for all }u\in V(G_n) \end{align} for all $n$ sufficiently large. Let $\varepsilon_0'>0$. We next need the following claim, which says that the eigenvector entries of the exceptional vertices behave as if their neighborhood were exactly $N_n$. \\ \\ \textbf{Claim I. } Suppose $\varepsilon_0$ is sufficiently small and $n$ is sufficiently large in terms of $\varepsilon_0'$. Then for all $v\in E_n$, \begin{align}\label{eq: exceptional vertex entries} \max\left\{ \left| \varphi_v - \dfrac{f_2}{3\mu} \right|, \left| \psi_v - \dfrac{g_2}{3\nu} \right| \right\} < \varepsilon_0'.
\end{align} Indeed, suppose $v \in E_n$ and let \begin{align*} U_n := \{w\in V(G_n) : vw\in E(G_n)\} \quad\text{ and }\quad \mathcal{U}_n := \bigcup_{w\in U_n} S_w. \end{align*} We take two cases, depending on the sign of $\psi_v$. \\ \\ \textbf{Case A: $\psi_v \geq 0$. } Recall that $f_2 > 0 > g_2$. Furthermore, $\varphi_v \geq 0$ and by assumption, $\psi_v\geq 0$. It follows that for all $n$ sufficiently large, $f_2\varphi_v - g_2\psi_v > 0$, so by Lemma \ref{lem: graph join}, $N_n\subseteq U_n$. Since $\varphi_n$ is a $\mu_n$-eigenfunction for $W_n$, for a.e. $x\in S_v$, \begin{align*} \mu_n \varphi_v &= \int_{y\in [0,1]}W_n(x,y)\varphi_n(y) \\ &= \int_{y\in \mathcal{P}_n\cap \mathcal{U}_n}\varphi_n(y) + \int_{y\in \mathcal{N}_n}\varphi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\varphi_n(y). \end{align*} Similarly, \begin{align*} \nu_n \psi_v &= \int_{y\in [0,1]}W_n(x,y)\psi_n(y) \\ &= \int_{y\in \mathcal{P}_n\cap \mathcal{U}_n}\psi_n(y) + \int_{y\in \mathcal{N}_n}\psi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\psi_n(y). \end{align*} Let $\rho_n := m(\mathcal{P}_n\cap \mathcal{U}_n)$. Note that for all $\varepsilon_2 > 0$, as long as $n$ is sufficiently large and $\varepsilon_1$ is sufficiently small, \begin{align}\label{eq: eigenvector entries with rho (case A)} \max\left\{ \left| \varphi_v-\dfrac{3\rho_n f_1 + f_2}{3\mu} \right| , \left| \psi_v-\dfrac{3\rho_n g_1 + g_2}{3\nu} \right| \right\} < \varepsilon_2. \end{align} Let $\varepsilon_3 > 0$. By Equations \eqref{eq: recall graph ellipse equation} and \eqref{eq: eigenvector entries with rho (case A)} and with $\varepsilon_1,\varepsilon_2$ sufficiently small, \begin{align*} \left| \mu\cdot \left( \dfrac{3\rho_n f_1 + f_2}{3\mu} \right)^2 -\nu\cdot \left( \dfrac{3\rho_n g_1 + g_2}{3\nu} \right)^2 - (\mu-\nu) \right| < \varepsilon_3. \end{align*} Substituting the values of $f_1,f_2, g_1,g_2$ from Theorem \ref{thm: spread maximum graphon} and simplifying, it follows that \begin{align*} \left| \dfrac{\sqrt{3}}{2} \cdot \rho_n(3\rho_n-2) \right| < \varepsilon_3. \end{align*} Let $\varepsilon_4 > 0$. It follows that if $n$ is sufficiently large and $\varepsilon_3$ is sufficiently small, then \begin{align}\label{eq: (case A) rho estimates} \min\left\{ \rho_n, |2/3-\rho_n| \right\} < \varepsilon_4. \end{align} Combining Equations \eqref{eq: eigenvector entries with rho (case A)} and \eqref{eq: (case A) rho estimates}, it follows that, with $\varepsilon_2,\varepsilon_4$ sufficiently small, \begin{align*} \max\left\{ \left| \varphi_v - \dfrac{f_2}{3\mu} \right|, \left| \psi_v - \dfrac{g_2}{3\nu} \right| \right\} &< \varepsilon_0', \text{ or } \\ \max\left\{ \left| \varphi_v - \dfrac{2f_1 + f_2}{3\mu} \right|, \left| \psi_v - \dfrac{2g_1 + g_2}{3\nu} \right| \right\} &< \varepsilon_0'. \end{align*} Note that \begin{align*} f_1 &= \dfrac{2f_1 + f_2}{3\mu} \quad \text{ and }\quad g_1 = \dfrac{2g_1 + g_2}{3\nu}. \end{align*} Since $v\in E_n$, the second inequality does not hold (it would place $v$ in $P_n$), which completes the proof of the desired claim in this case. \\ \\ \textbf{Case B: $\psi_v < 0$. } Recall that $f_1 > g_1 > 0$. Furthermore, $\varphi_v \geq 0$ and by assumption, $\psi_v < 0$. It follows that for all $n$ sufficiently large, $f_1\varphi_v - g_1\psi_v > 0$, so by Lemma \ref{lem: graph join}, $P_n\subseteq U_n$.
Since $\varphi_n$ is a $\mu_n$-eigenfunction for $W_n$, for a.e. $x\in S_v$, \begin{align*} \mu_n \varphi_v &= \int_{y\in [0,1]}W_n(x,y)\varphi_n(y) \\ &= \int_{y\in \mathcal{N}_n\cap \mathcal{U}_n}\varphi_n(y) + \int_{y\in \mathcal{P}_n}\varphi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\varphi_n(y). \end{align*} Similarly, \begin{align*} \nu_n \psi_v &= \int_{y\in [0,1]}W_n(x,y)\psi_n(y) \\ &= \int_{y\in \mathcal{N}_n\cap \mathcal{U}_n}\psi_n(y) + \int_{y\in \mathcal{P}_n}\psi_n(y) + \int_{y\in \mathcal{E}_n\cap \mathcal{U}_n}\psi_n(y). \end{align*} Let $\rho_n := m(\mathcal{N}_n\cap \mathcal{U}_n)$. Note that for all $\varepsilon_5 > 0$, as long as $n$ is sufficiently large and $\varepsilon_1$ is sufficiently small, \begin{align}\label{eq: eigenvector entries with rho (case B)} \max\left\{ \left| \varphi_v-\dfrac{2f_1 + 3\rho_n f_2}{3\mu} \right| , \left| \psi_v-\dfrac{2g_1 + 3\rho_n g_2}{3\nu} \right| \right\} < \varepsilon_5. \end{align} Let $\varepsilon_6 > 0$. By Equations \eqref{eq: recall graph ellipse equation} and \eqref{eq: eigenvector entries with rho (case B)} and with $\varepsilon_1,\varepsilon_5$ sufficiently small, \begin{align*} \left| \mu\cdot \left( \dfrac{2f_1 + 3\rho_n f_2}{3\mu} \right)^2 -\nu\cdot \left( \dfrac{2g_1 + 3\rho_n g_2}{3\nu} \right)^2 - (\mu-\nu) \right| < \varepsilon_6. \end{align*} Substituting the values of $f_1,f_2, g_1,g_2$ from Theorem \ref{thm: spread maximum graphon} and simplifying, it follows that \begin{align*} \left| 2\sqrt{3} \cdot \rho_n(3\rho_n-1) \right| < \varepsilon_6. \end{align*} Let $\varepsilon_7 > 0$. It follows that if $n$ is sufficiently large and $\varepsilon_6$ is sufficiently small, then \begin{align}\label{eq: (case B) rho estimates} \min\left\{ \rho_n, |1/3-\rho_n| \right\} < \varepsilon_7. \end{align} Combining Equations \eqref{eq: eigenvector entries with rho (case B)} and \eqref{eq: (case B) rho estimates}, it follows that, with $\varepsilon_5,\varepsilon_7$ sufficiently small, \begin{align*} \max\left\{ \left| \varphi_v - \dfrac{2f_1}{3\mu} \right|, \left| \psi_v - \dfrac{2g_1}{3\nu} \right| \right\} &< \varepsilon_0', \text{ or } \\ \max\left\{ \left| \varphi_v - \dfrac{2f_1 + f_2}{3\mu} \right|, \left| \psi_v - \dfrac{2g_1 + g_2}{3\nu} \right| \right\} &< \varepsilon_0'. \end{align*} Again, note that \begin{align*} f_1 &= \dfrac{2f_1 + f_2}{3\mu} \quad \text{ and }\quad g_1 = \dfrac{2g_1 + g_2}{3\nu}. \end{align*} Since $v\in E_n$, the second inequality does not hold (it would place $v$ in $P_n$). \\ Similarly, note that \begin{align*} f_2 &= \dfrac{2f_1}{3\mu} \quad \text{ and }\quad g_2 = \dfrac{2g_1}{3\nu}. \end{align*} Since $v\in E_n$, the first inequality does not hold either (it would place $v$ in $N_n$), a contradiction. So the desired claim holds. \\ We now complete the proof of Lemma \ref{lem: few exceptional vertices} by showing that for all $n$ sufficiently large, $G_n$ is the join of an independent set $N_n$ with a disjoint union of a clique $P_n$ and an independent set $E_n$. As above, we let $\varepsilon_0,\varepsilon_0'>0$ be arbitrary.
By definition of $P_n$ and $N_n$ and by Equation \eqref{eq: exceptional vertex entries} from Claim I, for all $n$ sufficiently large, \begin{align*} \max\left\{ \left| \varphi_v - f_1 \right|, \left| \psi_v - g_1 \right| \right\} &< \varepsilon_0 &\text{ for all }v\in P_n \\ \max\left\{ \left| \varphi_v - \dfrac{f_2}{3\mu} \right|, \left| \psi_v - \dfrac{g_2}{3\nu} \right| \right\} &< \varepsilon_0' &\text{ for all }v\in E_n \\ \max\left\{ \left| \varphi_v - f_2 \right|, \left| \psi_v - g_2 \right| \right\} &< \varepsilon_0 &\text{ for all }v\in N_n \end{align*} With rows and columns respectively corresponding to the vertex sets $P_n, E_n,$ and $N_n$, we note the following inequalities: \begin{align*} \begin{array}{c|c||c} f_1^2 > g_1^2 & f_1\cdot\dfrac{f_2}{3\mu} < g_1\cdot\dfrac{g_2}{3\nu} & f_1f_2 > g_1g_2\\\hline & \left(\dfrac{f_2}{3\mu}\right)^2 < \left(\dfrac{g_2}{3\nu}\right)^2 & \dfrac{f_2}{3\mu}\cdot f_2 > \dfrac{g_2}{3\nu}\cdot g_2\\\hline\hline && f_2^2 < g_2^2 \end{array} \quad . \end{align*} Let $\varepsilon_0, \varepsilon_0'$ be sufficiently small. Then for all $n$ sufficiently large and for all $u,v\in V(G_n)$, we have $\varphi_u\varphi_v-\psi_u\psi_v < 0$ if and only if $u,v\in E_n$, $u,v\in N_n$, or $(u,v)\in (P_n\times E_n)\cup (E_n\times P_n)$. By Lemma \ref{lem: graph join}, since $|P_n|/n \to 2/3$ and $|N_n|/n \to 1/3$, the proof is complete. \end{proof} We have now shown that the spread-extremal graph is of the form $(K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$ where $n_2 = o(n)$. The next lemma refines this to show that actually $n_2 = 0$. \begin{lemma}\label{lem: no exceptional vertices} For all nonnegative integers $n_1,n_2,n_3$, let $G(n_1, n_2, n_3) := (K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$. Then for all $n$ sufficiently large, the following holds. If $\text{spr}(G(n_1,n_2,n_3))$ is maximized subject to the constraint $n_1+n_2+n_3 = n$ and $n_2 = o(n)$, then $n_2 = 0$. \end{lemma} \emph{Proof outline:} We aim to maximize the spread of $G(n_1, n_2, n_3)$ subject to $n_2 = o(n)$. The spread of $G(n_1, n_2, n_3)$ is the same as the spread of the quotient matrix \[ Q_n = \begin{bmatrix} n_1 - 1 & 0 & n_3\\ 0 & 0 & n_3 \\ n_1 & n_2 & 0 \end{bmatrix}. \] We reparametrize with parameters $\varepsilon_1$ and $\varepsilon_2$ representing how far away $n_1$ and $n_3$ are proportionally from $\frac{2n}{3}$ and $\frac{n}{3}$, respectively. Namely, $\varepsilon_1 = \frac{2}{3} - \frac{n_1}{n}$ and $\varepsilon_2 = \frac{1}{3} - \frac{n_3}{n}$. Then $\varepsilon_1 + \varepsilon_2 = \frac{n_2}{n}$. Hence maximizing the spread of $G(n_1,n_2,n_3)$ subject to $n_2 = o(n)$ is equivalent to maximizing the spread of the matrix \[ n\begin{bmatrix} \frac{2}{3} - \varepsilon_1 - \frac{1}{n} & 0 & \frac{1}{3} - \varepsilon_2 \\ 0 & 0 & \frac{1}{3} - \varepsilon_2 \\ \frac{2}{3} - \varepsilon_1 & \varepsilon_1 + \varepsilon_2 & 0 \end{bmatrix} \] subject to the constraint that $\textstyle \frac{2}{3}-\varepsilon_1$ and $\textstyle \frac{1}{3}-\varepsilon_2$ are nonnegative integer multiples of $\frac{1}{n}$ and $\varepsilon_1+\varepsilon_2 = o(1)$. In order to utilize calculus, we instead solve a continuous relaxation of the optimization problem. As such, consider the following matrix. \begin{align*} M_z(\varepsilon_1,\varepsilon_2) := \left[\begin{array}{ccc} \dfrac{2}{3}-\varepsilon_1-z & 0 & \dfrac{1}{3}-\varepsilon_2\\ 0 & 0 & \dfrac{1}{3}-\varepsilon_2\\ \dfrac{2}{3}-\varepsilon_1 & \varepsilon_1+\varepsilon_2 & 0 \end{array}\right] .
\end{align*} Since $M_z(\varepsilon_1,\varepsilon_2)$ is diagonalizable, we may let $S_z(\varepsilon_1,\varepsilon_2)$ be the difference between the maximum and minimum eigenvalues of $M_z(\varepsilon_1,\varepsilon_2)$. We consider the optimization problem $\mathcal{P}_{z,C}$ defined for all $z\in\mathbb{R}$ and all $C>0$ such that $|z|$ and $C$ are sufficiently small, by \begin{align*} (\mathcal{P}_{z,C}): \left\{\begin{array}{rl} \max & S_z(\varepsilon_1,\varepsilon_2)\\ \text{s.t.} & \varepsilon_1,\varepsilon_2\in [-C,C]. \end{array}\right. \end{align*} We show that, as long as $C$ and $|z|$ are sufficiently small, the optimum of $\mathcal{P}_{z,C}$ is attained by \begin{align*} (\varepsilon_1, \varepsilon_2) &= \left( (1+o(z))\cdot \dfrac{7z}{30}, (1+o(z))\cdot \dfrac{-z}{3} \right). \end{align*} Moreover, we show that in the feasible region of $\mathcal{P}_{z,C}$, $S_z(\varepsilon_1,\varepsilon_2)$ is concave-down in $(\varepsilon_1,\varepsilon_2)$. We return to the original problem by imposing the constraint that $\frac{2}{3}-\varepsilon_1$ and $\frac{1}{3}-\varepsilon_2$ are multiples of $\frac{1}{n}$. Under these added constraints, the optimum is attained when \begin{align*} (\varepsilon_1, \varepsilon_2) &= \left\{\begin{array}{rl} (0,0), & n\equiv 0 \pmod{3}\\ \left(\frac{2}{3n},-\frac{2}{3n}\right), & n\equiv 1 \pmod{3}\\ \left(\frac{1}{3n},-\frac{1}{3n}\right), & n\equiv 2 \pmod{3} \end{array}\right. . \end{align*} Together these two observations complete the proof of the lemma. Since the details are straightforward but tedious calculus, we delay this part of the proof to Section \ref{sec: 2 by 2 reduction}. We may now complete the proof of Theorem \ref{thm: spread maximum graphs}. \begin{proof}[Proof of Theorem \ref{thm: spread maximum graphs}] Suppose $G$ is a graph on $n$ vertices which maximizes spread. By Lemma \ref{lem: few exceptional vertices}, $G = (K_{n_1}\dot{\cup} K_{n_2}^c)\vee K_{n_3}^c$ for some nonnegative integers $n_1,n_2,n_3$ such that $n_1+n_2+n_3 = n$ where \begin{align*} (n_1,n_2,n_3) &= \left( \left( \dfrac{2}{3}+o(1) \right)\cdot n,\; o(n),\; \left( \dfrac{1}{3}+o(1) \right) \cdot n \right). \end{align*} By Lemma \ref{lem: no exceptional vertices}, if $n$ is sufficiently large, then $n_2 = 0$. To complete the proof of the main result, it is sufficient to find the unique maximum of $\text{spr}( K_{n_1}\vee K_{n_2}^c )$, subject to the constraint that $n_1+n_2 = n$. This is determined in \cite{gregory2001spread} to be the join of a clique on $\lfloor 2n/3\rfloor$ vertices and an independent set on $\lceil n/3 \rceil$ vertices. The interested reader can prove that $n_1$ is the nearest integer to $(2n-1)/3$ by considering the spread of the quotient matrix \begin{align*} \left[\begin{array}{cc} n_1-1 & n_2\\ n_1 & 0 \end{array}\right] \end{align*} and optimizing the choice of $n_1$. \end{proof} \section{Introduction} The spread $s(M)$ of an arbitrary $n\times n$ complex matrix $M$ is the diameter of its spectrum; that is, \[s(M):= \max_{i,j} |\lambda_i-\lambda_j|,\] where the maximum is taken over all pairs of eigenvalues of $M$. This quantity has been well studied in general; see \cite{deutsch1978spread,johnson1985lower,mirsky1956spread,wu2012upper} for details and additional references.
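For concreteness, the spread of a given Hermitian matrix is immediate to compute numerically. The following minimal Python sketch (using NumPy; an illustration of the definition only, playing no role in any proof) evaluates $s(M)$ for the adjacency matrix of the triangle $K_3$, whose spectrum is $\{2,-1,-1\}$.
\begin{verbatim}
import numpy as np

def spread(M):
    # For a Hermitian matrix the spectrum is real, so the diameter of
    # the spectrum is simply lambda_max - lambda_min.
    eigs = np.linalg.eigvalsh(M)   # eigenvalues in ascending order
    return eigs[-1] - eigs[0]

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])       # adjacency matrix of K_3
print(spread(A))                   # 3.0, since the spectrum is {2, -1, -1}
\end{verbatim}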
Most notably, Johnson, Kumar, and Wolkowicz produced the lower bound $$ s(M) \ge \textstyle{ \big| \sum_{i \ne j} m_{i,j} \big|/(n-1)}$$ for normal matrices $M = (m_{i,j})$ \cite[Theorem 2.1]{johnson1985lower}, and Mirsky produced the upper bound $$ s(M) \le \sqrt{\textstyle{2 \sum_{i,j} |m_{i,j}|^2 - (2/n)\big| \sum_{i} m_{i,i} \big|^2}}$$ for any $n$ by $n$ matrix $M$, which is tight for normal matrices having $n-2$ eigenvalues all equal to the arithmetic mean of the other two \cite[Theorem 2]{mirsky1956spread}. The spread of a matrix has also received interest in certain particular cases. Consider a simple undirected graph $G = (V(G),E(G))$ of order $n$. The adjacency matrix $A$ of a graph $G$ is the $n \times n$ matrix whose rows and columns are indexed by the vertices of $G$, with entries satisfying $A_{u,v} = 1$ if $\{u,v\} \in E(G)$ and $A_{u,v} = 0$ otherwise. This matrix is real and symmetric, and so its eigenvalues are real, and can be ordered $\lambda_1(G) \geq \lambda_2(G)\geq \cdots \geq \lambda_n(G)$. When considering the spread of the adjacency matrix $A$ of some graph $G$, the spread is simply the distance between $\lambda_1(G)$ and $\lambda_n(G)$, denoted by $$s(G) := \lambda_1(G) - \lambda_n(G).$$ In this instance, $s(G)$ is referred to as the \emph{spread of the graph}. In \cite{gregory2001spread}, the authors investigated a number of properties regarding the spread of a graph, determining upper and lower bounds on $s(G)$. Furthermore, they made two key conjectures. Let us denote the maximum spread over all $n$-vertex graphs by $s(n)$, the maximum spread over all $n$-vertex graphs of size $e$ by $s(n,e)$, and the maximum spread over all $n$-vertex bipartite graphs of size $e$ by $s_b(n,e)$. Let $K_k$ be the clique of order $k$ and $G(n,k) := K_k \vee \overline{K_{n-k}}$ be the join of the clique $K_k$ and the independent set $\overline{K_{n-k}}$. We say a graph is \emph{spread-extremal} if it has spread $s(n)$. The conjectures addressed in this article are as follows. \begin{conjecture}[\cite{gregory2001spread}, Conjecture 1.3]\label{conj:spread} For any positive integer $n$, the graph of order $n$ with maximum spread is $G(n,\lfloor 2n/3 \rfloor)$; that is, $s(n)$ is attained only by $G(n,\lfloor 2n/3 \rfloor)$. \end{conjecture} \begin{conjecture}[\cite{gregory2001spread}, Conjecture 1.4]\label{conj:bispread} If $G$ is a graph with $n$ vertices and $e$ edges attaining the maximum spread $s(n,e)$, and if $e\leq \lfloor n^2/4\rfloor$, then $G$ must be bipartite. That is, $s_b(n,e) = s(n,e)$ for all $e \le \lfloor n^2/4\rfloor$. \end{conjecture} Conjecture \ref{conj:spread} is referred to as the Spread Conjecture, and Conjecture \ref{conj:bispread} is referred to as the Bipartite Spread Conjecture. Much of what is known about Conjecture \ref{conj:spread} is contained in \cite{gregory2001spread}, but the reader may also see \cite{StanicBook} for a description of the problem and references to other work on it. In this paper, we resolve both conjectures. We prove the Spread Conjecture for all $n$ sufficiently large, prove an asymptotic version of the Bipartite Spread Conjecture, and provide an infinite family of counterexamples to illustrate that our asymptotic version is as tight as possible, up to lower order error terms. These results are given by Theorems \ref{thm: spread maximum graphs} and \ref{thm: bipartite spread theorem}.
\begin{theorem}\label{thm: spread maximum graphs} There exists a constant $N$ so that the following holds: Suppose $G$ is a graph on $n\geq N$ vertices with maximum spread; then $G$ is the join of a clique on $\lfloor 2n/3\rfloor$ vertices and an independent set on $\lceil n/3\rceil$ vertices. \end{theorem} \begin{theorem}\label{thm: bipartite spread theorem} $$s(n,e) - s_b(n,e) \le \frac{1+16 e^{-3/4}}{e^{3/4}} s(n,e)$$ for all $n,e \in \mathbb{N}$ satisfying $e \le \lfloor n^2/4\rfloor$. In addition, for any $\varepsilon>0$, there exists some $n_\varepsilon$ such that $$s(n,e) - s_b(n,e) \ge \frac{1-\varepsilon}{e^{3/4}} s(n,e)$$ for all $n\ge n_\varepsilon$ and some $e \le \lfloor n^2/4\rfloor$ depending on $n$. \end{theorem} The proof of Theorem \ref{thm: spread maximum graphs} is quite involved, and constitutes the main subject of this work. The general technique consists of showing that a spread-extremal graph has certain desirable properties, considering and solving an analogous problem for graph limits, and then using this result to say something about the Spread Conjecture for sufficiently large $n$. For the interested reader, we state the analogous graph limit result in the language of functional analysis. \begin{theorem}\label{thm: functional analysis spread} Let $W:[0,1]^2\to [0,1]$ be a Lebesgue-measurable function such that $W(x,y) = W(y,x)$ for a.e. $(x,y)\in [0,1]^2$ and let $A = A_W$ be the kernel operator on $\mathscr{L}^2[0,1]$ associated to $W$. For all unit functions $f,g\in\mathscr{L}^2[0,1]$, \begin{align*} \langle f, Af\rangle - \langle g, Ag\rangle &\leq \dfrac{2}{\sqrt{3}}. \end{align*} Moreover, equality holds if and only if there exists a measure-preserving transformation $\sigma$ on $[0,1]$ such that for a.e. $(x,y)\in [0,1]^2$, \begin{align*} W(\sigma(x),\sigma(y)) &= \left\{\begin{array}{rl} 0, & (x,y)\in [2/3, 1]\times [2/3, 1]\\ 1, &\text{otherwise} \end{array}\right. . \end{align*} \end{theorem} The proof of Theorem \ref{thm: spread maximum graphs} can be found in Sections 2--6, with certain technical details reserved for the Appendix. We provide an in-depth overview of the proof of Theorem \ref{thm: spread maximum graphs} in Subsection \ref{sub: outline}. In comparison, the proof of Theorem \ref{thm: bipartite spread theorem} is surprisingly short, making use of the theory of equitable decompositions and a well-chosen class of counterexamples. The proof of Theorem \ref{thm: bipartite spread theorem} can be found in Section \ref{sec:bispread}. Finally, in Section \ref{sec: conclusion}, we discuss further questions and possible future avenues of research.
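The extremal value $2/\sqrt{3}$ in Theorem \ref{thm: functional analysis spread} is easy to check numerically: as described in Step 2 of Subsection \ref{sub: outline}, a stepgraphon with block values $B$ and block measures $\alpha$ has the same spread as the matrix $D^{1/2}BD^{1/2}$, where $D = \text{diag}(\alpha)$. The following minimal Python sketch (using NumPy; an illustration only, not part of the proofs) applies this reduction to the extremal two-block stepgraphon.
\begin{verbatim}
import numpy as np

# Extremal graphon: value 0 on [2/3,1]^2 and 1 elsewhere, viewed as a
# stepgraphon with two blocks of measure 2/3 and 1/3.
B = np.array([[1., 1.],
              [1., 0.]])            # block values
Dh = np.diag(np.sqrt([2/3, 1/3]))   # D^{1/2} for D = diag(2/3, 1/3)

eigs = np.linalg.eigvalsh(Dh @ B @ Dh)   # eigenvalues 1/3 +- 1/sqrt(3)
print(eigs[-1] - eigs[0], 2/np.sqrt(3))  # both approximately 1.1547
\end{verbatim}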
\subsection{High-Level Outline of Spread Proof}\label{sub: outline} Here, we provide a concise, high-level description of our asymptotic proof of the Spread Conjecture. The proof itself is quite involved, making use of interval arithmetic and a number of fairly complicated symbolic calculations, but is conceptually quite intuitive. Our proof consists of four main steps. \\ \noindent{\bf Step 1: } Graph-Theoretic Results \\ \begin{adjustwidth}{1.5em}{0pt} In Section \ref{sec:graphs}, we observe a number of important structural properties of any graph that maximizes the spread for a given order $n$. In particular, we show that\\ \begin{itemize} \item any graph that maximizes spread must be the join of two threshold graphs (Lemma \ref{lem: graph join}), \item both graphs in this join have order linear in $n$ (Lemma \ref{linear size parts}), \item the unit eigenvectors $\mathbf{x}$ and $\mathbf{z}$ corresponding to $\lambda_1(A)$ and $\lambda_n(A)$ have infinity norms of order $n^{-1/2}$ (Lemma \ref{upper bound on eigenvector entries}), \item the quantities $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$, $u \in V$, are all nearly equal, up to a term of order $n^{-1}$ (Lemma \ref{discrete ellipse equation}).\\ \end{itemize} This last structural property serves as the backbone of our proof. In addition, we note that, by a tensor argument, an asymptotic upper bound for $s(n)$ implies a bound for all $n$. \\ \end{adjustwidth} \noindent{\bf Step 2: } Graphons and a Finite-Dimensional Eigenvalue Problem \\ \begin{adjustwidth}{1.5em}{0pt} In Sections \ref{sec: graphon background} and \ref{sec: graphon spread reduction}, we make use of graphons to understand how spread-extremal graphs behave as $n$ tends to infinity.
Section \ref{sec: graphon background} consists of a basic introduction to graphons, and a translation of the graph results of Step 1 to the graphon setting. In particular, we prove the graphon analogue of the graph properties that \\ \begin{itemize} \item vertices $u$ and $v$ are adjacent if and only if $\mathbf{x}_u \mathbf{x}_v - \mathbf{z}_u \mathbf{z}_v >0$ (Lemma \ref{lem: K = indicator function}), \item the quantities $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$, $u \in V$, are all nearly equal (Lemma \ref{lem: local eigenfunction equation}). \\ \end{itemize} Next, in Section \ref{sec: graphon spread reduction}, we show that the spread-extremal graphon for our problem takes the form of a particular stepgraphon with a finite number of blocks (Theorem \ref{thm: reduction to stepgraphon}). In particular, through an averaging argument, we note that the spread-extremal graphon takes the form of a stepgraphon with a fixed structure of symmetric seven by seven blocks, illustrated below. \begin{align*} \input{graphics/stepgraphon7x7} \end{align*} The lengths $\alpha = (\alpha_1,\ldots,\alpha_7)$, $\alpha^T {\bf 1} = 1$, of the rows and columns in the spread-extremal stepgraphon are unknown. For any choice of lengths $\alpha$, we can associate a $7\times7$ matrix whose spread is identical to that of the associated stepgraphon pictured above. Let $B$ be the $7\times7$ matrix with $B_{i,j}$ equal to the value of the above stepgraphon on block $i,j$, and $D = \text{diag}(\alpha_1,\ldots,\alpha_7)$ be a diagonal matrix with $\alpha$ on the diagonal. Then the matrix $D^{1/2} B D^{1/2}$ has spread equal to the spread of the associated stepgraphon. \\ \end{adjustwidth} \noindent{\bf Step 3: } Computer-Assisted Proof of a Finite-Dimensional Eigenvalue Problem \\ \begin{adjustwidth}{1.5em}{0pt} In Section \ref{sec:spread_graphon}, we show that the optimizing choice of $\alpha$ is, without loss of generality, given by $\alpha_1 = 2/3$, $\alpha_6 =1/3$, and all other $\alpha_i =0$ (Theorem \ref{thm: spread maximum graphon}). This is exactly the limit of the conjectured spread-extremal graph as $n$ tends to infinity. The proof of this fact is extremely technical, and relies on a computer-assisted proof using both interval arithmetic and symbolic computations. This is the only portion of the proof that requires the use of interval arithmetic. Though not a proof, in Figure \ref{fig:contour} we provide intuitive visual justification that this result is true. In this figure, we provide contour plots resulting from numerical computations of the spread of the above matrix for various values of $\alpha$. The numerical results suggest that the $2/3-1/3$ two by two block stepgraphon is indeed optimal. See Figure \ref{fig:contour} and the associated caption for details. The actual proof of this fact consists of the following steps: \\ \begin{itemize} \item we reduce the possible choices of non-zero $\alpha_i$ from $2^7$ to $17$ different cases (Lemma \ref{lem: 19 cases}), \item using eigenvalue equations, the graphon version of the fact that the quantities $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$ are all nearly equal, and interval arithmetic, we prove that, of the $17$ cases, only the cases \begin{itemize} \item $\alpha_1,\alpha_7 \ne 0$, and \item $\alpha_4,\alpha_5,\alpha_7 \ne 0$ \end{itemize} can produce a spread-extremal stepgraphon (Lemma \ref{lem: 2 feasible sets}), \item prove that the three by three case cannot be spread-extremal, using basic results from the theory of cubic polynomials and computer-assisted symbolic calculations (Lemma \ref{lem: SPR457}).
\\ \end{itemize} This proves that the spread-extremal graphon is a two by two stepgraphon that, without loss of generality, takes value zero on the block $[2/3,1]^2$ and one elsewhere (Theorem \ref{thm: spread maximum graphon}). \\ \end{adjustwidth} \begin{figure} \centering \subfigure[$\alpha_i \ne 0$ for all $i$]{\includegraphics[width=2.9in,height = 2.5in]{graphics/125_34_67.png}} \quad \subfigure[$\alpha_2=\alpha_3=\alpha_4 = 0$]{\includegraphics[width=2.9in,height = 2.5in]{graphics/16_5_7.png}} \caption{Contour plots of the spread for some choices of $\alpha$. Each point $(x,y)$ of Plot (a) illustrates the maximum spread over all choices of $\alpha$ satisfying $\alpha_3 + \alpha_4 = x$ and $\alpha_6 + \alpha_7 = y$ (and therefore, $\alpha_1 + \alpha_2 + \alpha_5 = 1 - x - y$) on a grid of step size $1/100$. Each point $(x,y)$ of Plot (b) illustrates the maximum spread over all choices of $\alpha$ satisfying $\alpha_2=\alpha_3=\alpha_4=0$, $\alpha_5 = y$, and $\alpha_7 = x$ on a grid of step size $1/100$. The maximum spread of Plot (a) is achieved at the black x, and implies that, without loss of generality, $\alpha_3 + \alpha_4=0$, and therefore $\alpha_2 = 0$ (indices $\alpha_1$ and $\alpha_2$ can be combined when $\alpha_3 + \alpha_4=0$). Plot (b) treats this case when $\alpha_2 = \alpha_3 = \alpha_4 = 0$, and the maximum spread is achieved on the black line. This implies that either $\alpha_5 =0$ or $\alpha_7 = 0$. In both cases, this reduces to the block two by two case $\alpha_1,\alpha_7 \ne 0$ (or, if $\alpha_7 = 0$, then $\alpha_1,\alpha_6 \ne 0$).} \label{fig:contour} \end{figure} \noindent{\bf Step 4: } From Graphons to an Asymptotic Proof of the Spread Conjecture \\ \begin{adjustwidth}{1.5em}{0pt} Finally, in Section \ref{sub-sec: graphons to graphs}, we convert our result for the spread-extremal graphon to a statement for graphs. This process consists of two main parts:\\ \begin{itemize} \item using our graphon theorem, we show that any spread-extremal graph takes the form $(K_{n_1}\dot{\cup} \overline{K_{n_2}})\vee \overline{K_{n_3}}$ for $n_1 = (2/3+o(1))n$, $n_2 = o(n)$, and $n_3 = (1/3+o(1))n$ (Lemma \ref{lem: few exceptional vertices}), i.e., any spread-extremal graph is equal up to a set of $o(n)$ vertices to the conjectured optimal graph $K_{\lfloor 2n/3\rfloor} \vee \overline{K_{\lceil n/3 \rceil}}$, \item we show that, for $n$ sufficiently large, the spread of $(K_{n_1}\dot{\cup} \overline{K_{n_2}})\vee \overline{K_{n_3}}$, $n_1 + n_2 + n_3 = n$, is maximized when $n_2 = 0$ (Lemma \ref{lem: no exceptional vertices}).\\ \end{itemize} Together, these two results complete our proof of the Spread Conjecture for sufficiently large $n$ (Theorem \ref{thm: spread maximum graphs}). \end{adjustwidth} \section{The Bipartite Spread Conjecture}\label{sec:bispread} In \cite{gregory2001spread}, the authors investigated the structure of graphs which maximize the spread over all graphs with a fixed number of vertices $n$ and edges $m$, denoted by $s(n,m)$. In particular, they proved the upper bound \begin{equation}\label{eqn:spread_bound} s(G) \le \lambda_1 + \sqrt{2 m - \lambda^2_1} \le 2 \sqrt{m}, \end{equation} and noted that equality holds throughout if and only if $G$ is the union of isolated vertices and $K_{p,q}$, for some $p+q \le n$ satisfying $m=pq$ \cite[Thm. 1.5]{gregory2001spread}. This led the authors to conjecture that if $G$ has $n$ vertices, $m \le \lfloor n^2/4 \rfloor$ edges, and spread $s(n,m)$, then $G$ is bipartite \cite[Conj. 1.4]{gregory2001spread}.
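Equality throughout \eqref{eqn:spread_bound} for a complete bipartite graph is easy to verify numerically. The following minimal Python sketch (using NumPy; an illustration only) does so for $K_{3,4}$, for which $\lambda_1 = \sqrt{pq}$ and $s(K_{p,q}) = 2\sqrt{m}$.
\begin{verbatim}
import numpy as np

p, q = 3, 4
m = p * q
# Adjacency matrix of K_{p,q} in block form.
A = np.block([[np.zeros((p, p)), np.ones((p, q))],
              [np.ones((q, p)), np.zeros((q, q))]])
eigs = np.linalg.eigvalsh(A)
lam1 = eigs[-1]                          # lambda_1 = sqrt(pq)

print(eigs[-1] - eigs[0])                # spread: 2*sqrt(12) = 6.9282...
print(lam1 + np.sqrt(2 * m - lam1**2))   # first bound, equal here
print(2 * np.sqrt(m))                    # second bound, also equal here
\end{verbatim}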
In this section, we prove an asymptotic form of this conjecture and provide an infinite family of counterexamples to the exact conjecture which verifies that the error in the aforementioned asymptotic result is of the correct order of magnitude. Recall that $s_b(n,m)$, $m \le \lfloor n^2/4 \rfloor$, is the maximum spread over all bipartite graphs with $n$ vertices and $m$ edges. To explicitly compute the spread of certain graphs, we make use of the theory of equitable partitions. In particular, we note that if $\phi$ is an automorphism of $G$, then the quotient matrix of $A(G)$ with respect to $\phi$, denoted by $A_\phi$, satisfies $\Lambda(A_\phi) \subset \Lambda(A)$, and therefore $s(G)$ is at least the spread of $A_\phi$ (for details, see \cite[Section 2.3]{brouwer2011spectra}). Additionally, we require two propositions, one regarding the largest spectral radius of subgraphs of $K_{p,q}$ of a given size, and another regarding the largest gap between sizes which correspond to a complete bipartite graph of order at most $n$. Let $K_{p,q}^m$, $0 \le pq-m <\min\{p,q\}$, be the subgraph of $K_{p,q}$ resulting from removing $pq-m$ edges all incident to some vertex in the larger side of the bipartition (if $p=q$, the vertex can be from either set). In \cite{liu2015spectral}, the authors proved the following result. \begin{proposition}\label{prop:bi_spr} If $0 \le pq-m <\min\{p,q\}$, then $K_{p,q}^m$ maximizes $\lambda_1$ over all subgraphs of $K_{p,q}$ of size $m$. \end{proposition} We also require estimates regarding the longest sequence of consecutive sizes $m < \lfloor n^2/4\rfloor$ for which there does not exist a complete bipartite graph on at most $n$ vertices and exactly $m$ edges. As pointed out by \cite{pc1}, the result follows quickly by induction. However, for completeness, we include a brief proof. \begin{proposition}\label{prop:seq} The length of the longest sequence of consecutive sizes $m < \lfloor n^2/4\rfloor$ for which there does not exist a complete bipartite graph on at most $n$ vertices and exactly $m$ edges is zero for $n \le 4$ and at most $\sqrt{2n-1}-1$ for $n \ge 5$. \end{proposition} \begin{proof} We proceed by induction. By inspection, for every $n \le 4$, $m \le \lfloor n^2/4 \rfloor$, there exists a complete bipartite graph of size $m$ and order at most $n$, and so the length of the longest sequence is trivially zero for $n \le 4$. When $n = m = 5$, there is no complete bipartite graph of order at most five with exactly five edges. This is the only such instance for $n = 5$, and so the length of the longest sequence for $n = 5$ is one. Now, suppose that the statement holds for graphs of order at most $n-1$, for some $n > 5$. We aim to show the statement for graphs of order at most $n$. By our inductive hypothesis, it suffices to consider only sizes $m \ge \lfloor (n-1)^2/4 \rfloor$ and complete bipartite graphs on $n$ vertices. We have $$\left( \frac{n}{2} + k \right)\left( \frac{n}{2} - k \right) \ge \frac{(n-1)^2}{4} \qquad \text{for} \quad |k| \le \frac{\sqrt{2n-1}}{2}.$$ When $1 \le k \le \sqrt{2n-1}/2$, the difference between the sizes of $K_{n/2+k-1,n/2-k+1}$ and $K_{n/2+k,n/2-k}$ is at most \begin{align*} \big| E\big(K_{\frac{n}{2}+k-1,\frac{n}{2}-k+1}\big)\big| - \big| E\big(K_{n/2+k,n/2-k}\big)\big| &=2k-1 \le \sqrt{2n-1} -1. \end{align*} Let $k^*$ be the largest value of $k$ satisfying $k \le \sqrt{2n-1}/2$ and $n/2 + k \in \mathbb{N}$.
Then \begin{align*} \big| E\big(K_{\frac{n}{2}+k^*,\frac{n}{2}-k^*}\big)\big| &< \left(\frac{n}{2} + \frac{\sqrt{2n-1}}{2} -1 \right)\left(\frac{n}{2} - \frac{\sqrt{2n-1}}{2} +1 \right) \\ &= \sqrt{2n-1} + \frac{(n-1)^2}{4} - 1, \end{align*} and the difference between the sizes of $K_{n/2+k^*,n/2-k^*}$ and $K_{\lceil \frac{n-1}{2}\rceil,\lfloor \frac{n-1}{2}\rfloor}$ is at most \begin{align*} \big| E\big(K_{\frac{n}{2}+k^*,\frac{n}{2}-k^*}\big)\big| - \big| E\big(K_{\lceil \frac{n-1}{2}\rceil,\lfloor \frac{n-1}{2}\rfloor}\big)\big| &< \sqrt{2n-1} + \frac{(n-1)^2}{4} -\left\lfloor \frac{(n-1)^2}{4} \right\rfloor - 1 \\ &< \sqrt{2n-1}. \end{align*} Combining these two estimates completes our inductive step, and the proof. \end{proof} We are now prepared to prove an asymptotic version of \cite[Conjecture 1.4]{gregory2001spread}, and provide an infinite class of counterexamples that illustrates that the asymptotic version under consideration is the tightest version of this conjecture possible. \begin{theorem} $$s(n,m) - s_b(n,m) \le \frac{1+16 \,m^{-3/4}}{m^{3/4}}\, s(n,m)$$ for all $n,m \in \mathbb{N}$ satisfying $m \le \lfloor n^2/4\rfloor$. In addition, for any $\epsilon>0$, there exists some $n_\epsilon$ such that $$s(n,m) - s_b(n,m) \ge \frac{1-\epsilon}{m^{3/4}} \, s(n,m)$$ for all $n\ge n_\epsilon$ and some $m \le \lfloor n^2/4\rfloor$ depending on $n$. \end{theorem} \begin{proof} The main idea of the proof is as follows. To obtain an upper bound on $s(n,m) - s_b(n,m)$, we upper bound $s(n,m)$ by $2 \sqrt{m}$ using Inequality \eqref{eqn:spread_bound}, and we lower bound $s_b(n,m)$ by the spread of some specific bipartite graph. To obtain a lower bound on $s(n,m) - s_b(n,m)$ for a specific $n$ and $m$, we explicitly compute $s_b(n,m)$ using Proposition \ref{prop:bi_spr}, and lower bound $s(n,m)$ by the spread of some specific non-bipartite graph. First, we analyze the spread of $K_{p,q}^m$, $0 < pq-m <q \le p$, a quantity that will be used in the proof of both the upper and the lower bound. Let us denote the vertices in the bipartition of $K_{p,q}^m$ by $u_1,...,u_p$ and $v_1,...,v_{q}$, and suppose without loss of generality that $u_1$ is not adjacent to $v_1,...,v_{pq-m}$. Then $$\phi = (u_1)(u_2,...,u_p)(v_1,...,v_{pq-m})(v_{pq-m+1},...,v_{q})$$ is an automorphism of $K^m_{p,q}$. The corresponding quotient matrix $$ A_\phi = \begin{pmatrix} 0 & 0 & 0 & m-(p-1)q \\ 0 & 0 & pq-m & m-(p-1)q \\ 0 & p-1 & 0 & 0 \\ 1 & p-1 & 0 & 0 \end{pmatrix}$$ has characteristic polynomial $$Q(p,q,m) = \det[A_\phi - \lambda I] = \lambda^4 -m \lambda^2 + (p-1)(m-(p-1)q)(pq-m),$$ and, therefore, \begin{equation}\label{eqn:bispread_exact} s\left(K^m_{p,q}\right) \ge 2 \left( \frac{m + \sqrt{m^2-4(p-1)(m-(p-1)q)(pq-m)}}{2} \right)^{1/2}. \end{equation} For $pq = \Omega(n^2)$ and $n$ sufficiently large, this lower bound is actually an equality, as $A(K^m_{p,q})$ is a perturbation of the adjacency matrix of a complete bipartite graph with each partite set of size $\Omega(n)$ by a matrix of norm $O(\sqrt{n})$. For the upper bound, we only require the inequality, but for the lower bound, we assume $n$ is large enough so that this is indeed an equality. Next, we prove the upper bound. For some fixed $n$ and $m\le \lfloor n^2/4 \rfloor$, let $m = pq -r$, where $p,q,r \in \mathbb{N}$, $p+q \le n$, and $r$ is as small as possible. If $r = 0$, then by \cite[Thm. 1.5]{gregory2001spread} (described above), $s(n,m) = s_b(n,m)$ and we are done.
Otherwise, we note that $0<r < \min \{p,q\}$, and so Inequality \eqref{eqn:bispread_exact} is applicable (in fact, by Proposition \ref{prop:seq}, $r = O(\sqrt{n})$). Using the upper bound $s(n,m) \le 2 \sqrt{m}$ and Inequality \eqref{eqn:bispread_exact}, we have \begin{equation}\label{eqn:spr_upper} \frac{s(n,pq-r)-s\left(K^{m}_{p,q}\right)}{s(n,pq-r)} \le 1 - \left(\frac{1}{2}+\frac{1}{2} \sqrt{1-\frac{4(p-1)(q-r) r}{(pq-r)^2}} \right)^{1/2}. \end{equation} To upper bound $r$, we use Proposition \ref{prop:seq} with $n'=\lceil 2 \sqrt{m}\rceil \le n$ and $m$. This implies that $$ r \le \sqrt{2 \lceil 2 \sqrt{m}\rceil -1} -1 < \sqrt{2 ( 2 \sqrt{m}+1) -1} -1 = \sqrt{ 4 \sqrt{m} +1}-1 \le 2 m^{1/4}.$$ Recall that $\sqrt{1-x} \ge 1 - x/2 - x^2/2$ for all $x \in [0,1]$, and so \begin{align*} 1 - \big(\tfrac{1}{2} + \tfrac{1}{2} \sqrt{1-x} \big)^{1/2} &\le 1 - \big( \tfrac{1}{2} + \tfrac{1}{2} (1 - \tfrac{1}{2}x - \tfrac{1}{2}x^2) \big)^{1/2} = 1 - \big(1 - \tfrac{1}{4} (x + x^2) \big)^{1/2} \\ &\le 1 - \big(1 - \tfrac{1}{8}(x + x^2) - \tfrac{1}{32}(x + x^2)^2 \big) \\ &\le \tfrac{1}{8} x + \tfrac{1}{4} x^2 \end{align*} for $ x \in [0,1]$. To simplify Inequality \eqref{eqn:spr_upper}, we observe that $$\frac{4(p-1)(q-r)r}{(pq-r)^2} \le \frac{4r}{m} \le \frac{8}{m^{3/4}}.$$ Therefore, $$\frac{s(n,pq-r)-s\left(K^{m}_{p,q}\right)}{s(n,pq-r)} \le \frac{1}{m^{3/4}}+ \frac{16}{m^{3/2}}.$$ This completes the proof of the upper bound. Finally, we proceed with the proof of the lower bound. Let us fix some $0<\epsilon<1$, and consider some sufficiently large $n$. Let $m = (n/2+k)(n/2-k)+1$, where $k$ is the smallest number satisfying $n/2 + k \in \mathbb{N}$ and $\hat \epsilon:=1 - 2k^2/n < \epsilon/2$ (here we require $n = \Omega(1/\epsilon^2)$). Denote the vertices in the bipartition of $K_{n/2+k,n/2-k}$ by $u_1,...,u_{n/2+k}$ and $v_1,...,v_{n/2-k}$, and consider the graph $K^+_{n/2+k,n/2-k}:=K_{n/2+k,n/2-k} \cup \{(v_1,v_2)\}$ resulting from adding one edge to $K_{n/2+k,n/2-k}$ between two vertices in the smaller side of the bipartition. Then $$ \phi = (u_1 ,...,u_{n/2+k})(v_1, v_2)(v_3,...,v_{n/2-k})$$ is an automorphism of $K^+_{n/2+k,n/2-k}$, and $$A_\phi = \begin{pmatrix} 0 & 2 & n/2- k - 2 \\ n/2+k & 1 & 0 \\ n/2+k & 0 & 0 \end{pmatrix} $$ has characteristic polynomial \begin{align*} \det[A_\phi - \lambda I] &= -\lambda^3 +\lambda^2 + \left(n^2/4 - k^2\right) \lambda - (n/2+k)(n/2-k-2) \\ &= -\lambda^3 + \lambda^2 + \left(\frac{ n^2}{4} - \frac{(1-\hat \epsilon) n}{2} \right)\lambda - \left(\frac{n^2}{4} - \frac{(3-\hat \epsilon)n}{2} -\sqrt{2(1-\hat \epsilon)n} \right). \end{align*} By matching higher order terms, we obtain $$ \lambda_{max}(A_\phi) = \frac{n}{2}-\frac{1-\hat \epsilon}{2} + \frac{\left( 8-(1-\hat \epsilon)^2 \right)}{4 n} +o(1/n),$$ $$\lambda_{min}(A_\phi) = -\frac{n}{2}+\frac{1-\hat \epsilon}{2} + \frac{\left( 8+(1-\hat \epsilon)^2 \right)}{4 n} +o(1/n),$$ and $$s(K^+_{n/2+k,n/2-k}) \ge n-(1-\hat \epsilon)-\frac{(1-\hat \epsilon)^2}{2n} + o(1/n).$$ Next, we aim to compute $s_b(n,m)$, $m = (n/2+k)(n/2-k)+1$. By Proposition \ref{prop:bi_spr}, $s_b(n,m)$ is equal to the maximum of $s(K^m_{n/2+\ell,n/2-\ell})$ over all $\ell \in [0,k-1]$, $k-\ell \in \mathbb{N}$. 
As previously noted, for $n$ sufficiently large, the quantity $s(K^m_{n/2+\ell,n/2-\ell})$ is given exactly by Equation (\ref{eqn:bispread_exact}), and so the optimal choice of $\ell$ minimizes \begin{align*} f(\ell) &:= (n/2+\ell-1)(k^2-\ell^2-1)(n/2-\ell-(k^2-\ell^2-1))\\ &=(n/2+\ell)\big((1-\hat \epsilon)n/2-\ell^2\big)\big(\hat \epsilon n/2 +\ell^2-\ell \big) + O(n^2). \end{align*} We have $$ f(k-1) = (n/2+k-2)(2k-2)(n/2-3k+3),$$ and if $\ell \le \frac{4}{5} k$, then $f(\ell) = \Omega(n^3)$. Therefore the minimizing $\ell$ is in $ [\frac{4}{5} k,k]$. The derivative of $f(\ell)$ is given by \begin{align*} f'(\ell) &=(k^2-\ell^2-1)(n/2-\ell-k^2+\ell^2+1)\\ &\qquad-2\ell(n/2+\ell-1)(n/2-\ell-k^2+\ell^2+1)\\ &\qquad+(2\ell-1)(n/2+\ell-1)(k^2-\ell^2-1). \end{align*} For $\ell \in [\frac{4}{5} k,k]$, \begin{align*} f'(\ell) &\le \frac{n(k^2-\ell^2)}{2}-\ell n(n/2-\ell-k^2+\ell^2)+2\ell(n/2+\ell)(k^2-\ell^2)\\ &\le \frac{9 k^2 n}{50} - \tfrac{4}{5} kn(n/2-k- \tfrac{9}{25} k^2)+ \tfrac{18}{25} (n/2+k)k^3 \\ &= \frac{81 k^3 n}{125}-\frac{2 k n^2}{5} + O(n^2)\\ &=kn^2\left(\frac{81(1-\hat \epsilon)}{250}-\frac{2}{5}\right)+O(n^2)<0 \end{align*} for sufficiently large $n$. This implies that the optimal choice is $\ell = k-1$, and $s_b(n,m) = s(K^m_{n/2+k-1,n/2-k+1})$. The characteristic polynomial $Q(n/2+k-1,n/2-k+1,n^2/4 -k^2+1)$ equals $$ \lambda^4 - \left(n^2/4 -k^2+1 \right)\lambda^2+2(n/2+k-2)(n/2-3k+3)(k-1).$$ By matching higher order terms, the extreme root of $Q$ is given by $$\lambda = \frac{n}{2} -\frac{1-\hat \epsilon}{2} - \sqrt{\frac{2(1-\hat \epsilon)}{n}}+\frac{27-14\hat \epsilon-\hat \epsilon^2}{4n}+o(1/n),$$ and so $$ s_b(n,m) = n -(1-\hat \epsilon) - 2 \sqrt{\frac{2(1-\hat \epsilon)}{n}}+\frac{27-14\hat \epsilon-\hat \epsilon^2}{2n}+o(1/n),$$ and \begin{align*} \frac{s(n,m)-s_b(n,m)}{s(n,m)} &\ge \frac{2^{3/2}(1-\hat \epsilon)^{1/2}}{n^{3/2}} - \frac{14-8\hat \epsilon}{n^2}+o(1/n^2)\\ &=\frac{(1-\hat \epsilon)^{1/2}}{m^{3/4}} + \frac{(1-\hat \epsilon)^{1/2}}{(n/2)^{3/2}}\bigg[1-\frac{(n/2)^{3/2}}{m^{3/4}}\bigg] - \frac{14-8\hat \epsilon}{n^2}+o(1/n^2)\\ &\ge \frac{1-\epsilon/2}{m^{3/4}} +o(1/m^{3/4}). \end{align*} This completes the proof. \end{proof} \section{The spread-extremal problem for graphons}\label{sec: graphon background} Graphons (or graph functions) are analytical objects which may be used to study the limiting behavior of large, dense graphs, and were originally introduced in \cite{BCLSV2012GraphLimitsSpectra} and \cite{lovasz2006limits}. \subsection{Introduction to graphons} Consider the set $\mathcal{W}$ of all bounded symmetric measurable functions $W:[0,1]^2 \to [0,1]$ (by symmetric, we mean $W(x, y)=W(y,x)$ for all $(x, y)\in [0,1]^2$). A function $W\in \mathcal{W}$ is called a \emph{stepfunction} if there is a partition of $[0,1]$ into subsets $S_1, S_2, \ldots, S_m$ such that $W$ is constant on every block $S_i\times S_j$. Every graph has a natural representation as a stepfunction in $\mathcal{W}$ taking values either 0 or 1 (such a graphon is referred to as a \emph{stepgraphon}). In particular, given a graph $G$ on $n$ vertices indexed $\{1, 2, \ldots, n\}$, we can define a measurable set $K_G \subseteq [0,1]^2$ as \[K_G = \bigcup_{u \sim v} \left[\frac{u-1}{n}, \frac{u}{n}\right]\times \left[\frac{v-1}{n}, \frac{v}{n}\right],\] and this represents the graph $G$ as a bounded symmetric measurable function $W_G$ which takes value $1$ on $K_G$ and $0$ everywhere else. For a measurable subset $U$ we will use $m(U)$ to denote its Lebesgue measure.
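In other words, $W_G$ is read off from the adjacency matrix of $G$ by rescaling vertex indices to $[0,1]$. A minimal Python sketch (using NumPy; a hypothetical helper for illustration only) that evaluates $W_G$ at a point of $[0,1]^2$:
\begin{verbatim}
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])   # adjacency matrix of a path on 3 vertices

def W_G(x, y):
    # W_G takes the value A[u-1, v-1] on the block
    # [(u-1)/n, u/n] x [(v-1)/n, v/n].
    n = A.shape[0]
    u = min(int(x * n), n - 1)   # index of the block containing x
    v = min(int(y * n), n - 1)
    return A[u, v]

print(W_G(0.1, 0.5))   # 1: block (1,2), and vertices 1 and 2 are adjacent
print(W_G(0.5, 0.9))   # 0: block (2,3), and vertices 2 and 3 are not
\end{verbatim}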
This representation of a graph as a measurable subset of $[0,1]^2$ lends itself to a visual presentation sometimes referred to as a \emph{pixel picture}; see, for example, Figure \ref{bipartites_pixel} for two representations of a bipartite graph as a measurable subset of $[0,1]^2.$ Clearly, this indicates that such a representation is not unique; neither is the representation of a graph as a stepfunction. Using an equivalence relation on $\mathcal{W}$ derived from the so-called \emph{cut metric}, we can identify graphons that are equivalent up to relabelling, and up to any differences on a set of measure zero (i.e. equivalent \emph{almost everywhere}). \begin{figure} \begin{center} \begin{tabular}{||cccccccc||}\hline\hline & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\ & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\ & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\ & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} \\ \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & & \cellcolor{black} & \\\hline\hline \end{tabular} \hspace{40pt} \begin{tabular}{||cccccccc||}\hline\hline & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ & & & & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\ \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & \cellcolor{black} & & & & \\\hline\hline \end{tabular} \end{center} \caption{Two presentations of a bipartite graph as a stepfunction.} \label{bipartites_pixel} \end{figure} For all symmetric, bounded Lebesgue-measurable functions $W:[0,1]^2\to \mathbb{R}$, we let \[ \|W\|_\square = \sup_{S, T\subseteq [0,1]} \left|\int_{S\times T} W(x,y)\,dx\,dy \right|. \] Here, $\|\cdot\|_\square$ is referred to as the \emph{cut norm}. Next, one can also define a semidistance $\delta_\square$ on $\mathcal{W}$ as follows. First, we define \emph{weak isomorphism} of graphons. Let $\mathcal{S}$ be the set of all measure-preserving functions on $[0,1]$. For every $\varphi\in \mathcal{S}$ and every $W\in \mathcal{W}$, define $W^\varphi:[0,1]^2\to [0,1]$ by \begin{align*} W^\varphi(x,y) := W(\varphi(x),\varphi(y)) \end{align*} for a.e. $(x,y)\in [0,1]^2$. Now for any $W_1,W_2\in \mathcal{W}$, let \[ \delta_\square(W_1, W_2) = \inf_{\varphi\in \mathcal{S}} \|W_1-W_2^{\varphi}\|_\square . \] Define the equivalence relation $\sim$ on $\mathcal{W}$ as follows: for all $W_1, W_2\in \mathcal{W}$, $W_1\sim W_2$ if and only if $\delta_\square(W_1,W_2) = 0$; in this case, we say $W_1$ and $W_2$ are \emph{weakly isomorphic}. Furthermore, let $\hat{\mathcal{W}} := \mathcal{W}/\sim$ be the quotient space of $\mathcal{W}$ under $\sim$. Note that $\delta_\square$ induces a metric on $\hat{\mathcal{W}}$.
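Computing the cut norm of a general kernel is hard, but for a stepfunction with $n$ equal blocks the supremum defining $\|\cdot\|_\square$ is attained on unions of blocks, since the integral is bilinear in the block masses $m(S\cap S_i)$ and $m(T\cap S_j)$. For very small $n$ it can therefore be brute-forced; the sketch below (exponential time, for intuition only) does exactly that.
\begin{verbatim}
# Brute-force cut norm of a stepfunction given by the n x n matrix M of its
# block values, with n equal blocks of measure 1/n. Exponential in n.
from itertools import product
import numpy as np

def cut_norm_step(M):
    n = M.shape[0]
    best = 0.0
    for s in product([0, 1], repeat=n):       # S = union of blocks with s_i = 1
        for t in product([0, 1], repeat=n):   # T = union of blocks with t_j = 1
            best = max(best, abs(np.array(s) @ M @ np.array(t)) / n**2)
    return best

M = np.array([[0.0, 1.0], [1.0, 0.0]]) - 0.5  # 2-block bipartite graphon minus 1/2
print(cut_norm_step(M))                       # 0.125, at S = {S_1}, T = {S_2}
\end{verbatim}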
Crucially, by \cite[Theorem 5.1]{lovasz2007szemeredi}, $\hat{\mathcal{W}}$ is a compact metric space. Given $W\in \hat{\mathcal{W}}$, we define the Hilbert-Schmidt operator $A_W: \mathscr{L}^2[0,1] \to \mathscr{L}^2[0,1]$ by \[ (A_Wf)(x) := \int_0^1 W(x,y)f(y) \,dy \] for all $f\in \mathscr{L}^2[0,1]$ and a.e. $x\in [0,1]$. Since $W$ is symmetric and bounded, $A_W$ is a compact Hermitian operator. In particular, $A_W$ has a discrete, real spectrum whose only possible accumulation point is $0$ (cf. \cite{aubin2011applied}); the maximum and minimum eigenvalues therefore exist, and we focus our attention on these extremes. Let $\mu(W)$ and $\nu(W)$ be the maximum and minimum eigenvalue of $A_W$, respectively, and define the \emph{spread} of $W$ as \[ \text{spr}(W) := \mu(W) - \nu(W). \] By the Min-Max Theorem, we have that \[ \mu(W) = \max_{\|f\|_2 = 1} \int_0^1\int_0^1 W(x,y)f(x)f(y) \, dx\, dy, \] and \[ \nu(W) = \min_{\|f\|_2 = 1} \int_0^1\int_0^1 W(x,y)f(x)f(y) \, dx\, dy. \] Both $\mu$ and $\nu$ are continuous functions with respect to $\delta_\square$: in particular we have the following. \begin{theorem}[cf. Theorem 6.6 from \cite{BCLSV2012GraphLimitsSpectra} or Theorem 11.54 in \cite{Lovasz2012Hombook}]\label{thm: graphon eigenvalue continuity} Let $\{W_n\}_n$ be a sequence of graphons converging to $W$ with respect to $\delta_\square$. Then as $n\to\infty$, \begin{align*} \mu(W_n)\to\mu(W) \quad\text{ and }\quad \nu(W_n)\to\nu(W). \end{align*} \end{theorem} If $W \sim W'$, then $\mu(W) = \mu(W')$ and $\nu(W) = \nu(W')$. By compactness, we may consider the optimization problem on the factor space $\hat{\mathcal{W}}$ \[ \text{spr}(\hat{\mathcal{W}})=\max_{W\in \hat{\mathcal{W}}} \text{spr}(W), \] and furthermore there is a $W \in \hat{\mathcal{W}}$ that attains the maximum. Since every graph is represented by $W_G\in \hat{\mathcal{W}}$, this allows us to give an upper bound for $s(n)$ in terms of $\text{spr}(\hat{\mathcal{W}})$. Indeed, by replacing the eigenvectors of $G$ with their corresponding stepfunctions, the following proposition can be shown. \begin{proposition}\label{graph to graphon eigenvalue scaling} Let $G$ be a graph on $n$ vertices. Then \begin{align*} \lambda_1(G) = n\cdot \mu(W_G) \quad \text{ and }\quad \lambda_n(G) = n\cdot \nu(W_G). \end{align*} \end{proposition} Proposition \ref{graph to graphon eigenvalue scaling} implies that $s(n) \leq n\cdot\text{spr}(\hat{\mathcal{W}})$ for all $n$. Combined with Theorem \ref{thm: functional analysis spread}, this gives the following corollary. \begin{corollary} For all $n$, $s(n) \leq \frac{2n}{\sqrt{3}}$. \end{corollary} This can be proved more directly using Theorem \ref{thm: spread maximum graphs} and taking tensor powers. \subsection{Properties of spread-extremal graphons} Our main objective in the next sections is to solve the maximum spread problem for graphons in order to determine this upper bound for $s(n)$. As such, in this subsection we set up some preliminaries to the solution, which largely comprise a translation of what is known in the graph setting (see Section~\ref{sec:graphs}). Specifically, we define what it means for a graphon to be connected, and show that spread-extremal graphons must be connected. We then prove a standard corollary of the Perron-Frobenius theorem. Finally, we prove graphon versions of Lemma~\ref{lem: graph join} and Lemma~\ref{discrete ellipse equation}.
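Proposition \ref{graph to graphon eigenvalue scaling} is also easy to test numerically. In the sketch below (illustrative only), we sample $W_G$ on a uniform grid and discretize $A_{W_G}$ by quadrature; for a stepgraphon this discretization is exact, and its nonzero spectrum is $\{\lambda_i(G)/n\}$.
\begin{verbatim}
# Check lambda_1(G) = n * mu(W_G) and lambda_n(G) = n * nu(W_G) numerically.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # a small test graph
n, m = A.shape[0], 50
W_grid = np.kron(A, np.ones((m, m)))         # W_G sampled on an (n*m) x (n*m) grid
spec = np.linalg.eigvalsh(W_grid / (n * m))  # quadrature discretization of A_{W_G}
mu, nu = spec.max(), spec.min()

lam = np.linalg.eigvalsh(A)
print(np.isclose(lam[-1], n * mu), np.isclose(lam[0], n * nu))  # True True
\end{verbatim}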
Let $W_1$ and $W_2$ be graphons and let $\alpha_1,\alpha_2$ be positive real numbers with $\alpha_1+\alpha_2 = 1$. We define the \textit{direct sum} of $W_1$ and $W_2$ with weights $\alpha_1$ and $\alpha_2$, denoted $W = \alpha_1W_1\oplus \alpha_2W_2$, as follows. Let $\varphi_1$ and $\varphi_2$ be the increasing affine maps which send $J_1 := [0,\alpha_1]$ and $J_2 := [\alpha_1,1]$ to $[0,1]$, respectively. Then for all $(x,y)\in [0,1]^2$, let \begin{align*} W(x,y) := \left\{\begin{array}{rl} W_i(\varphi_i(x),\varphi_i(y)), &\text{if }(x,y)\in J_i\times J_i\text{ for some }i\in \{1,2\}\\ 0, & \text{otherwise} \end{array}\right. . \end{align*} A graphon $W$ is \textit{connected} if $W$ is not weakly isomorphic to a direct sum $\alpha_1W_1\oplus \alpha_2W_2$ where $\alpha_1\neq 0,1$. Equivalently, $W$ is connected if there does not exist a measurable subset $A\subseteq [0,1]$ of positive measure such that $W(x,y) = 0$ for a.e. $(x,y)\in A\times A^c$. \\ \begin{proposition}\label{prop: disconnected spectrum} Suppose $W_1,W_2$ are graphons and $\alpha_1,\alpha_2$ are positive real numbers summing to $1$. Let $W:=\alpha_1W_1\oplus\alpha_2W_2$. Then as multisets, \begin{align*} \Lambda(W) = \{\alpha_1 u : u \in \Lambda(W_1) \}\cup \{ \alpha_2v : v\in \Lambda(W_2)\}. \end{align*} Moreover, $\text{spr}(W)\leq \alpha_1 \text{spr}(W_1) + \alpha_2 \text{spr}(W_2)$ with equality if and only if $W_1$ or $W_2$ is the all-zeroes graphon. \end{proposition} \begin{proof} For convenience, let $\Lambda_i := \{\alpha_iu : u\in\Lambda(W_i)\}$ for each $i\in\{1,2\}$ and $\Lambda := \Lambda(W)$. The first claim holds simply by considering the restriction of eigenfunctions to the intervals $[0,\alpha_1]$ and $[\alpha_1,1]$. \\ For the second claim, we first write $\text{spr}(W) = \alpha_i\mu(W_i)-\alpha_j\nu(W_j)$ for some $i,j\in\{1,2\}$. Let $I_i := [\min(\Lambda_i), \max(\Lambda_i)]$ for each $i\in\{1,2\}$ and $I := [\min(\Lambda), \max(\Lambda)]$. Clearly $\alpha_i \text{spr}(W_i) = \text{diam}(I_i)$ for each $i\in\{1,2\}$ and $\text{spr}(W) = \text{diam}(I)$. Moreover, $I = I_1\cup I_2$. Since $0\in I_1\cap I_2$, $\text{diam}(I)\leq \text{diam}(I_1)+\text{diam}(I_2)$ with equality if and only if either $I_1$ or $I_2$ equals $\{0\}$. So the desired claim holds. \end{proof} Furthermore, the following basic corollary of the Perron-Frobenius theorem holds. For completeness, we prove it here. \begin{proposition}\label{prop: PF eigenfunction} Let $W$ be a connected graphon and write $f$ for an eigenfunction corresponding to $\mu(W)$. Then $f$ is nonzero with constant sign a.e. \end{proposition} \begin{proof} Let $\mu = \mu(W)$. Since \begin{align*} \mu = \max_{\|h\|_2 = 1}\int_{(x,y)\in [0,1]^2}W(x,y)h(x)h(y), \end{align*} we may assume without loss of generality that $f\geq 0$ a.e. on $[0,1]$. Let $Z:=\{x\in [0,1] : f(x) = 0\}$. Then for a.e. $x\in Z$, \begin{align*} 0 = \mu f(x) = \int_{y\in [0,1]}W(x,y)f(y) = \int_{y\in Z^c} W(x,y)f(y). \end{align*} Since $f > 0$ on $Z^c$, it follows that $W(x,y) = 0$ a.e. on $Z\times Z^c$. Clearly $m(Z^c) \neq 0$. If $m(Z) = 0$ then the desired claim holds, so without loss of generality, $0 < m(Z),m(Z^c)<1$. It follows that $W$ is disconnected, a contradiction to our assumption, which completes the proof of the desired claim. \end{proof} We may now prove a graphon version of Lemma \ref{lem: graph join}. \begin{lemma}\label{lem: K = indicator function} Suppose $W$ is a graphon achieving maximum spread and let $f,g$ be eigenfunctions for the maximum and minimum eigenvalues for $W$, respectively.
Then the following claims hold: \begin{enumerate}[(i)] \item For a.e. $(x,y)\in [0,1]^2$, \label{item: K = 0 or 1} \begin{align*} W(x,y) = \left\{\begin{array}{rl} 1, & f(x)f(y) > g(x)g(y) \\ 0, & \text{otherwise} \end{array} \right. . \end{align*} \item $f(x)f(y)-g(x)g(y) \neq 0$ for a.e. $(x,y)\in [0,1]^2$. \label{item: |diff| > 0} \end{enumerate} \end{lemma} \begin{proof} We proceed in the following order: \begin{itemize} \item Prove Item \eqref{item: K = 0 or 1} holds for a.e. $(x,y)\in [0,1]^2$ such that $f(x)f(y)\neq g(x)g(y)$. We will call this Item \eqref{item: K = 0 or 1}*. \item Prove Item \eqref{item: |diff| > 0}. \item Deduce Item \eqref{item: K = 0 or 1} also holds. \end{itemize} By Propositions \ref{prop: disconnected spectrum} and \ref{prop: PF eigenfunction}, we may assume without loss of generality that $f > 0$ a.e.~on $[0,1]$. For convenience, we define the quantity $d(x,y) := f(x)f(y)-g(x)g(y)$. To prove Item \eqref{item: K = 0 or 1}*, we first define a graphon $W'$ by \begin{align*} W'(x,y) = \left\{\begin{array}{rl} 1, & d(x,y) > 0 \\ 0, & d(x,y) < 0 \\ W(x,y) & \text{otherwise} \end{array} \right. . \end{align*} Then by inspection, \begin{align*} \text{spr}(W') &\geq \int_{(x,y)\in [0,1]^2} W'(x,y)(f(x)f(y)-g(x)g(y)) \\ &= \int_{(x,y)\in [0,1]^2} W(x,y)(f(x)f(y)-g(x)g(y)) \\&+ \int_{d(x,y) > 0} (1-W(x,y))d(x,y) - \int_{d(x,y) < 0} W(x,y)d(x,y) \\ &= \text{spr}(W) + \int_{d(x,y) > 0} (1-W(x,y))d(x,y) - \int_{d(x,y) < 0} W(x,y)d(x,y). \end{align*} Since $W$ maximizes spread, both integrals in the last line must be $0$ and hence Item \eqref{item: K = 0 or 1}* holds. \\ Now, we prove Item \eqref{item: |diff| > 0}. For convenience, we define $U$ to be the set of all pairs $(x,y)\in [0,1]^2$ so that $d(x,y) = 0$. Now let $W'$ be any graphon which differs from $W$ only on $U$. Then \begin{align*} \text{spr}(W') &\geq \int_{(x,y)\in [0,1]^2} W'(x,y)(f(x)f(y)-g(x)g(y)) \\ &= \int_{(x,y)\in [0,1]^2} W(x,y)(f(x)f(y)-g(x)g(y))\\&+ \int_{(x,y)\in U} (W'(x,y)-W(x,y))(f(x)f(y)-g(x)g(y)) \\ &= \text{spr}(W). \end{align*} Since also $\text{spr}(W)\geq \text{spr}(W')$ by maximality of $W$, equality holds throughout; in particular, $f$ and $g$ attain the maximum and minimum Rayleigh quotients for $W'$, so they are eigenfunctions for $W'$, and we may write $\mu'$ and $\nu'$ for the corresponding eigenvalues. Now, we define \begin{align*} I_{W'}(x) &:= (\mu'-\mu)f(x) \\ &= \int_{y\in [0,1]}(W'(x,y)-W(x,y))f(y)\\ &= \int_{y\in [0,1], \, (x,y)\in U} (W'(x,y)-W(x,y))f(y). \end{align*} Similarly, we define \begin{align*} J_{W'}(x) &:= (\nu'-\nu)g(x) \\ &= \int_{y\in [0,1]}(W'(x,y)-W(x,y))g(y)\\ &= \int_{y\in [0,1],\, (x,y)\in U} (W'(x,y)-W(x,y))g(y). \end{align*} Since $f$ and $g$ are orthogonal, \begin{align*} 0 &= \int_{x\in [0,1]} I_{W'}(x)J_{W'}(x). \end{align*} By definition of $U$, we have that for a.e. $(x,y) \in U$, $0 = d(x,y) = f(x)f(y) - g(x)g(y)$. In particular, since $f(x),f(y) > 0$ for a.e. $(x,y)\in [0,1]^2$, then a.e. $(x,y)\in U$ has $g(x)g(y) > 0$. So by letting \begin{align*} U_+ &:= \{(x,y)\in U: g(x),g(y) > 0\}, \\ U_- &:= \{(x,y)\in U:g(x),g(y)<0\},\text{ and} \\ U_0 &:= U\setminus (U_+\cup U_-), \end{align*} $U_0$ has measure $0$. \\ First, let $W'$ be the graphon defined by \begin{align*} W'(x,y) &= \left\{\begin{array}{rl} 1, & (x,y)\in U_+\\ W(x,y), &\text{otherwise} \end{array} \right. . \end{align*} For this choice of $W'$, \begin{align*} I_{W'}(x) &= \int_{y\in [0,1], \, (x,y)\in U_+} (1-W(x,y))f(y), \text{ and} \\ J_{W'}(x) &= \int_{y\in [0,1], \, (x,y)\in U_+} (1-W(x,y))g(y). \end{align*} Clearly $I_{W'}$ and $J_{W'}$ are nonnegative functions, so the orthogonality relation above forces $I_{W'}(x)J_{W'}(x) = 0$ for a.e. $x\in [0,1]$.
Since $f(y)$ and $g(y)$ are positive for a.e. $(x,y)\in U_+$, it follows that $W(x,y) = 1$ a.e. on $U_+$. \\ If instead we let $W'(x,y)$ be $0$ for all $(x,y)\in U_+$, it follows by a similar argument that $W(x,y) = 0$ for a.e. $(x,y)\in U_+$. So $U_+$ has measure $0$. Repeating the same argument on $U_-$, we similarly conclude that $U_-$ has measure $0$. This completes the proof of Item \eqref{item: |diff| > 0}. \\ Finally we note that Items \eqref{item: K = 0 or 1}* and \eqref{item: |diff| > 0} together imply Item \eqref{item: K = 0 or 1}. \end{proof} From here, it is easy to see that any graphon maximizing the spread is a join of two threshold graphons. Next we prove the graphon version of Lemma \ref{discrete ellipse equation}. \begin{lemma}\label{lem: local eigenfunction equation} If $W$ is a graphon achieving the maximum spread with corresponding eigenfunctions $f,g$, then $\mu f^2 - \nu g^2 = \mu-\nu$ almost everywhere. \end{lemma} \begin{proof} We will use the notation $(x,y) \in W$ to denote that $(x,y)\in [0,1]^2$ satisfies $W(x,y) =1$. Let $\varphi:[0,1]\to[0,1]$ be an arbitrary homeomorphism which is {\em orientation-preserving} in the sense that $\varphi(0) = 0$ and $\varphi(1) = 1$. Then $\varphi$ is a continuous strictly monotone increasing function which is differentiable almost everywhere. Now let $\tilde{f} := \varphi'\cdot (f\circ \varphi)$, $\tilde{g} := \varphi'\cdot (g\circ \varphi)$ and $\tilde{W} := \{ (x,y)\in [0,1]^2 : (\varphi(x),\varphi(y))\in W \}$. Using the substitutions $u = \varphi(x)$ and $v = \varphi(y)$, \begin{align*} \tilde{f} \tilde{W} \tilde{f} &= \int_{ (x,y)\in [0,1]^2 } \chi_{(\varphi(x),\varphi(y))\in W}\; \varphi'(x)\varphi'(y)\cdot f(\varphi(x))f(\varphi(y)) dx\, dy\\ &= \int_{ (u,v)\in [0,1]^2 } \chi_{(u,v)\in W}\; f(u)f(v)\, du\, dv\\ &= \mu. \end{align*} Similarly, $\tilde{g}\tilde{W}\tilde{g} = \nu$. \\ \\ Note however that the $L_2$ norms of $\tilde{f},\tilde{g}$ may not be $1$. Indeed using the substitution $u = \varphi(x)$, \[ \|\tilde{f}\|_2^2 = \int_{x\in[0,1]} \varphi'(x)^2f(\varphi(x))^2\, dx = \int_{u\in [0,1]} \varphi'(\varphi^{-1}(u))\cdot f(u)^2\, du . \] We exploit this fact as follows. Suppose $I,J$ are disjoint subintervals of $[0,1]$ of the same positive length $m(I) = m(J) = \ell > 0$ and for any $\varepsilon > 0$ sufficiently small (in terms of $\ell$), let $\varphi$ be the (unique) piecewise linear function which stretches $I$ to length $(1+\varepsilon)m(I)$, shrinks $J$ to length $(1-\varepsilon)m(J)$, and shifts only the elements in between $I$ and $J$. Note that for a.e. $x\in [0,1]$, \[ \varphi'(x) = \left\{\begin{array}{rl} 1+\varepsilon, & x\in I \\ 1-\varepsilon, & x\in J \\ 1, & \text{otherwise}. \end{array}\right. \] Again with the substitution $u = \varphi(x)$, \begin{align*} \|\tilde{f}\|_2^2 &= \int_{x\in[0,1]} \varphi'(x)^2\cdot f(\varphi(x))^2\, dx\\ &= \int_{u\in [0,1]} \varphi'(\varphi^{-1}(u))f(u)^2\, du \\ &= 1 + \varepsilon\cdot ( \|\chi_If\|_2^2 -\|\chi_Jf\|_2^2 ). \end{align*} The same equality holds for $\tilde{g}$ instead of $\tilde{f}$.
After normalizing $\tilde{f}$ and $\tilde{g}$, by optimality of $W$, we get a difference of Rayleigh quotients as \begin{align*} 0 &\leq (fWf-gWg) - \left( \dfrac{\tilde{f}\tilde{W}\tilde{f}}{\|\tilde{f}\|_2^2} - \dfrac{\tilde{g}\tilde{W}\tilde{g}}{\|\tilde{g}\|_2^2} \right) \\ &= \dfrac{ \mu \varepsilon\cdot ( \|\chi_If\|_2^2 -\|\chi_Jf\|_2^2) } {1+\varepsilon\cdot ( \|\chi_If\|_2^2 -\|\chi_Jf\|_2^2) } - \dfrac{ \nu\varepsilon\cdot ( \|\chi_Ig\|_2^2 -\|\chi_Jg\|_2^2) } {1+\varepsilon\cdot ( \|\chi_Ig\|_2^2 -\|\chi_Jg\|_2^2) }\\ &= (1+o(1))\varepsilon\cdot\left( \int_I (\mu f(x)^2-\nu g(x)^2)dx -\int_J (\mu f(x)^2-\nu g(x)^2)dx \right) \end{align*} as $\varepsilon\to 0$. It follows that for all disjoint intervals $I,J\subseteq [0,1]$ of the same length, the corresponding integrals agree. Taking finer and finer partitions of $[0,1]$, it follows that the integrand $\mu f(x)^2-\nu g(x)^2$ is constant almost everywhere. Since the average of this quantity over all $[0,1]$ is $\mu-\nu$, the desired claim holds. \end{proof} \section{Properties of spread-extremal graphs} \label{sec:graphs} In this section, we review what has already been proven about spread-extremal graphs ($n$-vertex graphs with spread $s(n)$) in \cite{gregory2001spread}, where the original conjectures were made. We then prove a number of properties of spread-extremal graphs and properties of the eigenvectors associated with the maximum and minimum eigenvalues of a spread-extremal graph. Let $G$ be a graph, and let $A$ be the adjacency matrix of $G$, with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$. For unit vectors $\mathbf{x}$, $\mathbf{y} \in \mathbb{R}^n$, we have \[\lambda_1 \geq \mathbf{x}^T A \mathbf{x} \quad \mbox{and} \quad \lambda_n \leq \mathbf{y}^TA\mathbf{y}.\] Hence (as observed in \cite{gregory2001spread}), the spread of a graph can be expressed \begin{equation}\label{eq: gregory min max} s(G) = \max_{\mathbf{x}, \mathbf{z}}\sum_{u\sim v} (\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v)\end{equation} where the maximum is taken over all unit vectors $\mathbf{x}, \mathbf{z}$. Furthermore, this maximum is attained only for $\mathbf{x}, \mathbf{z}$ orthonormal eigenvectors corresponding to the eigenvalues $\lambda_1, \lambda_n$, respectively. We refer to such a pair of vectors $\mathbf{x}, \mathbf{z}$ as \emph{extremal eigenvectors} of $G$. For any two vectors $\mathbf{x}$, $\mathbf{z}$ in $\mathbb{R}^n$, let $G(\mathbf{x}, \mathbf{z})$ denote the graph for which distinct vertices $u, v$ are adjacent if and only if $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v\geq 0$. Then from the above, there is some graph $G(\mathbf{x}, \mathbf{z})$ which is a spread-extremal graph, with $\mathbf{x}$, $\mathbf{z}$ orthonormal and $\mathbf{x}$ positive (\cite[Lemma 3.5]{gregory2001spread}). In addition, we enhance \cite[Lemmas 3.4 and 3.5]{gregory2001spread} using some helpful definitions and the language of threshold graphs. Whenever $G = G(\mathbf{x}, \mathbf{z})$ is understood, let $P = P(\mathbf{x}, \mathbf{z}) := \{u \in V(G) : \mathbf{z}_u \geq 0\}$ and $N = N(\mathbf{x}, \mathbf{z}) := V(G)\setminus P$. For our purposes, we say that $G$ is a \emph{threshold graph} if and only if there exists a function $\varphi:V(G)\to (-\infty,\infty]$ such that for all distinct $u,v\in V(G)$, $uv\in E(G)$ if and only if $\varphi(u)+\varphi(v) \geq 0$ \footnote{ Here, we take the usual convention that for all $x\in (-\infty, \infty]$, $\infty + x = x + \infty = \infty$}.
Here, $\varphi$ is a {\it threshold function} for $G$ (with $0$ as its {\it threshold}). The following detailed lemma shows that any spread-extremal graph is the join of two threshold graphs with threshold functions which can be made explicit. \begin{lemma}\label{lem: graph join} Let $n> 2$ and suppose $G$ is an $n$-vertex graph such that $s(G) = s(n)$. Denote by $\mathbf{x}$ and $\mathbf{z}$ the extremal unit eigenvectors for $G$. Then \begin{enumerate}[(i)] \item\label{item: matrix 0 or 1} For any two vertices $u,v$ of $G$, $u$ and $v$ are adjacent whenever $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v>0$ and $u$ and $v$ are nonadjacent whenever $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v<0$. \item\label{item: xx-zz nonzero} For any distinct $u,v\in V(G)$, $\mathbf{x}_u\mathbf{x}_v-\mathbf{z}_u\mathbf{z}_v\not=0$. \item\label{item: graph join} Let $P := P(\mathbf{x}, \mathbf{z})$, $N := N(\mathbf{x}, \mathbf{z})$ and let $G_1 := G[P]$ and $G_2 := G[N]$. Then $G = G(\mathbf{x}, \mathbf{z}) = G_1\vee G_2$. \item\label{item: G1 G2 thresholds} For each $i\in \{1,2\}$, $G_i$ is a threshold graph with threshold function defined on all $u\in V(G_i)$ by \begin{align*} \varphi(u) := \log\left| \dfrac{\mathbf{x}_u} {\mathbf{z}_u} \right| . \end{align*} \end{enumerate} \end{lemma} \begin{proof} Suppose $G$ is an $n$-vertex graph such that $s(G) = s(n)$ and write $A = (a_{uv})_{u,v\in V(G)}$ for its adjacency matrix. Item \eqref{item: matrix 0 or 1} is equivalent to Lemma 3.4 from \cite{gregory2001spread}. For completeness, we include a proof. By Equation \eqref{eq: gregory min max} we have that \begin{align*} s(G) &= \max_{\mathbf{x},\mathbf{z}} \mathbf{x}^T A\mathbf{x} -\mathbf{z}^TA\mathbf{z} = \sum_{u,v\in V(G)} a_{uv}\cdot \left(\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v\right), \end{align*} where the maximum is taken over all pairs of unit vectors in $\mathbb{R}^{|V(G)|}$. If $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v > 0$ and $a_{uv} = 0$, then $s(G+uv) > s(G)$, a contradiction. And if $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v < 0$ and $a_{uv} = 1$, then $s(G-uv) > s(G)$, a contradiction. So Item \eqref{item: matrix 0 or 1} holds. \\ For a proof of Item \eqref{item: xx-zz nonzero} suppose $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v = 0$ and denote by $G'$ the graph formed by adding or deleting the edge $uv$ from $G$. With $A' = (a_{uv}')_{u,v\in V(G')}$ denoting the adjacency matrix of $G'$, note that \begin{align*} s(G') \geq \mathbf{x}^T A'\mathbf{x} -\mathbf{z}^TA'\mathbf{z} = \mathbf{x}^T A\mathbf{x} -\mathbf{z}^TA\mathbf{z} = s(G) &\geq s(G'), \end{align*} so each inequality is an equality. It follows that $\mathbf{x}, \mathbf{z}$ are eigenvectors for $A'$. Furthermore, without loss of generality, we may assume that $uv\in E(G)$. In particular, there exists some $\lambda'$ such that \begin{align*} A\mathbf{x} &= \lambda_1 \mathbf{x}\\ (A -{\bf e}_u{\bf e}_v^T -{\bf e}_v{\bf e}_u^T )\mathbf{x} &= \lambda'\mathbf{x} . \end{align*} So $( {\bf e}_u{\bf e}_v^T +{\bf e}_v{\bf e}_u^T )\mathbf{x} = (\lambda_1 - \lambda')\mathbf{x}$. Let $w\in V(G)\setminus \{u,v\}$. By the above equation, $(\lambda_1-\lambda')\mathbf{x}_w = 0$ and either $\lambda' = \lambda_1$ or $\mathbf{x}_w = 0$. To find a contradiction, it is sufficient to note that $G$ is a connected graph with Perron-Frobenius eigenvector $\mathbf{x}$. Indeed, let $P := \{w\in V(G) : \mathbf{z}_w \geq 0\}$ and let $N := V(G)\setminus P$.
Then for any $w\in P$ and any $w'\in N$, $\mathbf{x}_w\mathbf{x}_{w'} - \mathbf{z}_w\mathbf{z}_{w'} > 0$ and by Item \eqref{item: matrix 0 or 1}, $ww'\in E(G)$. So $G$ is connected and this completes the proof of Item \eqref{item: xx-zz nonzero}. \\ Now, we prove Item \eqref{item: graph join}. To see that $G = G(\mathbf{x}, \mathbf{z})$, note by Items \eqref{item: matrix 0 or 1} and \eqref{item: xx-zz nonzero}, for all distinct $u,v\in V(G)$, $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v > 0$ if and only if $uv\in E(G)$, and otherwise, $\mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v < 0$ and $uv\notin E(G)$. To see that $G = G_1\vee G_2$, note that for any $u\in P$ and any $v\in N$, $0 \neq \mathbf{x}_u\mathbf{x}_v - \mathbf{z}_u\mathbf{z}_v \geq \mathbf{z}_u\cdot (-\mathbf{z}_v)\geq 0$. \\ Finally, we prove Item \eqref{item: G1 G2 thresholds}. Suppose $u,v$ are distinct vertices such that either $u,v\in P$ or $u,v\in N$. Allowing the possibility that $0\in \{\mathbf{z}_u, \mathbf{z}_v\}$, the following equivalence holds: \begin{align*} \varphi(u) + \varphi(v) &\geq 0 & \text{ if and only if }\\ \log\left| \dfrac{\mathbf{x}_u\mathbf{x}_v}{\mathbf{z}_u\mathbf{z}_v} \right| &\geq 0 & \text{ if and only if }\\ \mathbf{x}_u\mathbf{x}_v - |\mathbf{z}_u\mathbf{z}_v| &\geq 0. \end{align*} Since $\mathbf{z}_u,\mathbf{z}_v$ have the same sign, Item \eqref{item: G1 G2 thresholds} follows. This completes the proof. \end{proof} From \cite{thresholdbook}, we recall the following useful characterization in terms of ``nesting'' neighborhoods: $G$ is a threshold graph if and only if there exists a numbering $v_1,\cdots, v_n$ of $V(G)$ such that for all $1\leq i<j\leq n$, if $v_k\in V(G)\setminus\{v_i,v_j\}$, $v_jv_k\in E(G)$ implies that $v_iv_k\in E(G)$. Given this ordering, if $k$ is the smallest natural number such that $v_kv_{k+1}\in E(G)$ then we have that the set $\{v_1,\cdots, v_k\}$ induces a clique and the set $\{v_{k+1},\cdots, v_n\}$ induces an independent set. The next lemma shows that both $P$ and $N$ have linear size. \begin{lemma}\label{linear size parts} If $G$ is a spread-extremal graph, then both $P$ and $N$ have size $\Omega(n)$. \end{lemma} \begin{proof} We will show that $P$ and $N$ both have size at least $\frac{n}{100}$. First, since $G$ is spread-extremal it has spread more than $1.1n$ and hence has smallest eigenvalue $\lambda_n < \frac{-n}{10}$. Without loss of generality, for the remainder of this proof we will assume that $|P| \leq |N|$, that $\mathbf{z}$ is normalized to have infinity norm $1$, and that $v$ is a vertex satisfying $| \mathbf{z}_v| = 1$. By way of contradiction, assume that $|P|< \frac{n}{100} $. If $v\in N$, then we have \[ \lambda_n \mathbf{z}_v = -\lambda_n = \sum_{u\sim v} \mathbf{z}_u \leq \sum_{u\in P} \mathbf{z}_u \leq |P| < \frac{n}{100}, \] contradicting that $\lambda_n < \frac{-n}{10}$. Therefore, assume that $v\in P$. Then \[ \lambda_n^2 \mathbf{z}_v = \lambda_n^2 = \sum_{u\sim v} \sum_{w\sim u}\mathbf{z}_w \leq \sum_{u\sim v} \sum_{\substack{w\sim u\\w\in P}} \mathbf{z}_w \leq |P||N| + 2e(P) \leq |P||N| + |P|^2 \leq \frac{99n^2}{100^2} + \frac{n^2}{100^2}. \] This gives $|\lambda_n| \leq \frac{n}{10}$, a contradiction. \end{proof} \begin{lemma}\label{upper bound on eigenvector entries} If $\mathbf{x}$ and $\mathbf{z}$ are unit eigenvectors for $\lambda_1$ and $\lambda_n$ of a spread-extremal graph, then $\norm{\mathbf{x}}_\infty = O(n^{-1/2})$ and $\norm{\mathbf{z}}_\infty = O(n^{-1/2})$.
\end{lemma} \begin{proof} During this proof we will assume that $\hat{u}$ and $\hat{v}$ are vertices satisfying $\norm{\mathbf{x}}_\infty = \mathbf{x}_{\hat{u}}$ and $\norm{\mathbf{z}}_\infty = |\mathbf{z}_{\hat{v}}|$ and without loss of generality that $\hat{v} \in N$. We will use the weak estimates that $\lambda_1 > \frac{n}{2}$ and $\lambda_n < \frac{-n}{10}$. Define sets \begin{align*} A &= \left\{ w: \mathbf{x}_w > \frac{\mathbf{x}_{\hat{u}}}{4} \right\}\\ B &= \left\{ w: \mathbf{z}_w > \frac{-\mathbf{z}_{\hat{v}}}{20} \right\}. \end{align*} It suffices to show that $A$ and $B$ both have size $\Omega(n)$, for then there exists a constant $\epsilon > 0$ such that \[ 1 = \mathbf{x}^T \mathbf{x} \geq \sum_{w\in A} \mathbf{x}_w^2 \geq |A| \frac{\norm{\mathbf{x}}^2_\infty}{16} \geq \epsilon n \norm{\mathbf{x}}^2_\infty, \] and similarly \[ 1 = \mathbf{z}^T \mathbf{z} \geq \sum_{w\in B} \mathbf{z}_w^2 \geq |B| \frac{\norm{\mathbf{z}}^2_\infty}{400} \geq \epsilon n \norm{\mathbf{z}}^2_\infty. \] We now give a lower bound on the sizes of $A$ and $B$ using the eigenvalue-eigenvector equation and the weak bounds on $\lambda_1$ and $\lambda_n$. \[ \frac{n}{2} \norm{\mathbf{x}}_\infty = \frac{n}{2} \mathbf{x}_{\hat{u}} < \lambda_1 \mathbf{x}_{\hat{u}} = \sum_{w\sim \hat{u}} \mathbf{x}_w \leq \norm{\mathbf{x}}_\infty \left(|A| + \frac{1}{4}(n-|A|) \right), \] giving that $|A| > \frac{n}{3}$. Similarly, \[ \frac{n}{10} \norm{\mathbf{z}}_\infty = - \frac{n}{10} \mathbf{z}_{\hat{v}} < \lambda_n \mathbf{z}_{\hat{v}} = \sum_{w\sim \hat{v}} \mathbf{z}_w \leq \norm{\mathbf{z}}_\infty \left( |B| + \frac{1}{20}(n-|B|)\right), \] and so $|B| > \frac{n}{19}$. \end{proof} \begin{lemma}\label{discrete ellipse equation} Let $G$ be a spread-extremal graph and assume that $\mathbf{x}$ and $\mathbf{z}$ are its extremal unit eigenvectors. Then there exists a constant $C$ such that for any pair of vertices $u$ and $v$, we have \[ |(\lambda_1 \mathbf{x}_u^2 - \lambda_n\mathbf{z}_u^2) - (\lambda_1 \mathbf{x}_v^2 - \lambda_n \mathbf{z}_v^2)| < \frac{C}{n}. \] \end{lemma} \begin{proof} Let $u$ and $v$ be vertices, and create a graph $\tilde{G}$ by deleting $u$ and cloning $v$. That is, $V(\tilde{G}) = \{v'\} \cup V(G) \setminus \{u\}$ and \[E(\tilde{G}) = E(G\setminus \{u\}) \cup \{v'w:vw\in E(G)\}.\] Note that $v \not\sim v'$. Let $\tilde{A}$ be the adjacency matrix of $\tilde{G}$. Define two vectors $\mathbf{\tilde{x}}$ and $\mathbf{\tilde{z}}$ by \[ \mathbf{\tilde{x}}_w = \begin{cases} \mathbf{x}_w & w\not=v'\\ \mathbf{x}_v & w=v', \end{cases} \] and \[ \mathbf{\tilde{z}}_w = \begin{cases} \mathbf{z}_w & w\not=v'\\ \mathbf{z}_v & w=v'. \end{cases} \] Then $\mathbf{\tilde{x}}^T \mathbf{\tilde{x}} = 1 - \mathbf{x}_u^2 + \mathbf{x}_v^2$ and $\mathbf{\tilde{z}}^T \mathbf{\tilde{z}} = 1 - \mathbf{z}_u^2 + \mathbf{z}_v^2$. Similarly, \begin{align*} \mathbf{\tilde{x}}^T\tilde{A}\mathbf{\tilde{x}} &= \lambda_1 - 2\mathbf{x}_u \sum_{uw\in E(G)} \mathbf{x}_w + 2\mathbf{x}_{v'} \sum_{vw \in E(G)} \mathbf{x}_w - 2A_{uv}\mathbf{x}_v\mathbf{x}_u \\ &= \lambda_1 - 2\lambda_1\mathbf{x}_u^2 + 2\lambda_1 \mathbf{x}_v^2 - 2A_{uv} \mathbf{x}_u \mathbf{x}_v, \end{align*} and \begin{align*} \mathbf{\tilde{z}}^T\tilde{A}\mathbf{\tilde{z}} &= \lambda_n - 2\mathbf{z}_u \sum_{uw\in E(G)} \mathbf{z}_w + 2\mathbf{z}_{v'} \sum_{vw \in E(G)} \mathbf{z}_w - 2A_{uv}\mathbf{z}_v\mathbf{z}_u \\ &= \lambda_n - 2\lambda_n\mathbf{z}_u^2 + 2\lambda_n \mathbf{z}_v^2 - 2A_{uv} \mathbf{z}_u \mathbf{z}_v.
\end{align*} By Equation \eqref{eq: gregory min max}, \begin{align*} 0 & \geq \left(\frac{\mathbf{\tilde{x}}^T\tilde{A}\mathbf{\tilde{x}}}{\mathbf{\tilde{x}}^T \mathbf{\tilde{x}}} - \frac{\mathbf{\tilde{z}}^T\tilde{A}\mathbf{\tilde{z}}}{\mathbf{\tilde{z}}^T \mathbf{\tilde{z}}} \right) - (\lambda_1 - \lambda_n) \\ & = \left(\frac{\lambda_1 - 2\lambda_1 \mathbf{x}_u^2 + 2\lambda_1 \mathbf{x}_v^2 - 2A_{uv}\mathbf{x}_u\mathbf{x}_v}{1 - \mathbf{x}_u^2 + \mathbf{x}_v^2} - \frac{\lambda_n - 2\lambda_n \mathbf{z}_u^2 + 2\lambda_n \mathbf{z}_v^2 - 2A_{uv} \mathbf{z}_u\mathbf{z}_v}{1-\mathbf{z}_u^2 + \mathbf{z}_v^2}\right) - (\lambda_1 - \lambda_n) \\ & = \frac{-\lambda_1 \mathbf{x}_u^2 + \lambda_1\mathbf{x}_v^2 - 2A_{uv}\mathbf{x}_u\mathbf{x}_v}{1 - \mathbf{x}_u^2 + \mathbf{x}_v^2} - \frac{-\lambda_n \mathbf{z}_u^2 + \lambda_n\mathbf{z}_v^2 - 2A_{uv}\mathbf{z}_u \mathbf{z}_v}{1 - \mathbf{z}_u^2 + \mathbf{z}_v^2}. \end{align*} By Lemma \ref{upper bound on eigenvector entries}, we have that $|\mathbf{x}_u|$, $|\mathbf{x}_v|$, $|\mathbf{z}_u|$, and $|\mathbf{z}_v|$ are all $O(n^{-1/2})$, and so it follows that \[ |(\lambda_1 \mathbf{x}_u^2 - \lambda_1\mathbf{x}_v^2) - (\lambda_n \mathbf{z}_u^2 - \lambda_n \mathbf{z}_v^2)| < \frac{C}{n}, \] for some absolute constant $C$. Rearranging terms gives the desired result. \end{proof}
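Lemma \ref{discrete ellipse equation} can be seen in action on the graph conjectured in \cite{gregory2001spread} to maximize the spread: the join of a clique on roughly $2n/3$ vertices with an independent set. In the sketch below (an illustration, not a proof), the quantity $\lambda_1 \mathbf{x}_u^2 - \lambda_n \mathbf{z}_u^2$ is nearly constant across vertices, with variation of order $1/n$ around its mean $(\lambda_1-\lambda_n)/n$.
\begin{verbatim}
# Illustrate Lemma (discrete ellipse equation) on K_{2n/3} join an empty graph.
import numpy as np

n = 300
a = 2 * n // 3                       # clique part; the rest is independent
A = np.ones((n, n)) - np.eye(n)
A[a:, a:] = 0.0                      # remove edges inside the independent set
lam, V = np.linalg.eigh(A)
x, z = V[:, -1], V[:, 0]             # unit eigenvectors for lambda_1, lambda_n
q = lam[-1] * x**2 - lam[0] * z**2
print(q.max() - q.min())             # small: decays like ~1/n
print((lam[-1] - lam[0]) / n)        # the mean scale of q, close to 2/sqrt(3)
\end{verbatim}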
\section{Introduction} \label{sec:Introduction} Massive, young stars are sites of nucleosynthesis, not just of the stable nuclides, but radioactive isotopes as well. Long-lived radioisotopes with decay times of billions of years or longer, such as $^{40}$K, mix into the interstellar medium (ISM), accumulating and providing a very low level source of energetic radiation in all gas \citep{Cameron62,Umebayashi81}. More unstable isotopes are also synthesized; some have decay times of a few years or less and cannot reach most of the ISM. In between those two extremes are the short-lived radionuclides (SLRs) of the ISM: with $\sim \textrm{Myr}$ decay times, they can be present in a galaxy's gas but only as long as star formation replenishes their abundances. The most prominent of the SLRs is \textrm{$^{26}$Al}, which is detected in gamma-ray decay lines from star-formation regions throughout the Milky Way \citep{Mahoney84,Diehl06}. Another SLR that has recently been detected in gamma-ray lines is $^{60}$Fe \citep{Harris05}. SLRs are not just passive inhabitants of the ISM. By releasing energetic particles and radiation when they decay, they inject power that can heat or ionize the surrounding matter. In the Milky Way's molecular clouds, the radioactivity is overwhelmed by that of cosmic rays, which sustain an ionization rate of $\zeta_{\rm CR} \approx 5 \times 10^{-17}\ \sec^{-1}$. Rapidly star-forming galaxies known as starbursts may have elevated levels of cosmic rays and ionization rates a thousand times or more higher than the Milky Way (\citealt*{Suchkov93}; \citealt{Papadopoulos10-CRDRs}). However, it is possible that cosmic rays are stopped by the abundant gas in starbursts before they enter the densest molecular gas. Gamma rays can provide ionization through large columns; but while the gamma-ray ionization rate can reach up to $\zeta_{\gamma} \approx 10^{-16}\ \sec^{-1}$ in the densest starbursts, in most starbursts gamma rays sustain relatively weak ionization \citep{Lacki12-GRDRs}. SLRs like \textrm{$^{26}$Al} can in principle provide ionization through any column of gas, and if abundant enough, maintain moderate levels of ionization. The major open question for SLR-induced ionization is how well mixed the SLRs are with the gas of the galaxy, a process which can take up to 100 Myr; if the mixing times are longer than a few Myr, the SLRs are not abundant in most star-forming cores \citep{Meyer00,Huss09}. Meteorites recording the composition of the primordial Solar system demonstrate that SLRs were present during its formation. Assuming the SLRs were not created in situ by energetic radiation from the Sun \citep{Lee98}, the SLRs provide evidence that the Solar system formed near a star-forming region with young massive stars \citep[e.g.,][]{Adams10}. In fact, \textrm{$^{26}$Al} was overabundant by a factor $\sim 6$ in the primordial Solar system, with $X (\textrm{$^{26}$Al}) \approx 10^{-10}$ ($\textrm{$^{26}$Al} / ^{27}{\rm Al} \approx 5 \times 10^{-5}$), compared to its present-day abundance in the Milky Way (e.g., \citealt*{Lee77}; \citealt*{MacPherson95}; \citealt{Diehl06}; \citealt{Huss09}). Their quick decay times also indicate that the Solar system formed quickly, within a few Myr. SLRs, particularly \textrm{$^{26}$Al}, were a primary source of ionization in the Solar Nebula \citep{Stepinski92,Finocchi97,Umebayashi09}, affecting the conductivity and ultimately the accretion rate in the protoplanetary disc.
Moreover, \textrm{$^{26}$Al} and other SLRs may have regulated the early geological evolution of the Solar system by being a major source of heat in early planetesimals, driving their differentiation and rock metamorphism \citep[e.g.,][]{Hutcheon89,Grimm93,Shukolyukov93}. The contemporary Milky Way, with a star-formation rate (SFR) of a few solar masses of stars per year, is not a typical environment for most of the star-formation in the history of the Universe, however. Roughly 5-20\% of star-formation at all times occurred in rapid starbursts mainly driven by galaxy-galaxy interactions and mergers \citep{Rodighiero11,Sargent12}. Furthermore, most of the star-formation in `normal' galaxies occurred in massive galaxies with a much higher star-formation rate ($\ga 10\ \textrm{M}_{\sun}\ \textrm{yr}^{-1}$) at redshifts $z$ of 1 and higher, when most star-formation took place \citep[e.g.,][]{Magnelli09}. These high star-formation rates translate into large masses of SLRs present in these galaxies. I will show that \textrm{$^{26}$Al} in these galaxies, if it is well mixed with the gas, can sustain rather high ionization rates in their ISMs. This has consequences for both star formation and planet formation. When necessary, I assume a Hubble constant of $H_0 = 72\ \textrm{km}~\textrm{s}^{-1}\ \textrm{Mpc}^{-1}$, a matter density of $\Omega_M = 0.25$, and a cosmological constant $\Omega_{\Lambda} = 0.75$ for the cosmology. \section{The Equilibrium Abundance of SLR\lowercase{s}} In a one-zone model of a galaxy, which disregards spatial inhomogeneities, the complete equation for the SLR mass $M_{\rm SLR}$ in the ISM is \begin{equation} \label{eqn:MeqSLRFull} \frac{dM_{\rm SLR}}{dt} = Q_{\rm SLR}(t) - \frac{M_{\rm SLR} (t)}{\tau_{\rm SLR}}, \end{equation} where $\tau_{\rm SLR}$ is the lifetime of the SLR in the galaxy. $Q_{\rm SLR} (t)$, the injection rate of the SLR, depends on the past star-formation history: \begin{equation} \label{eqn:QSLR} Q_{\rm SLR} (t) = \int_{-\infty}^t Y_{\rm SLR}(t - t^{\prime}) \times {\rm SFR}(t^{\prime}) dt^{\prime}. \end{equation} For a coeval stellar population of age $t$, the yield $Y_{\rm SLR}(t)$ is the mass ejection rate of the SLR into the interstellar medium per unit stellar mass \citep{Cervino00}. If there are no big fluctuations in the star-formation rate over the past few Myr, then the SLR abundance approaches a steady state. The equilibrium mass of an SLR in a galaxy is proportional to its star-formation (or supernova) rate averaged over the previous few Myr. We can then parametrize the injection of SLRs in the ISM by a yield $\Upsilon_{\rm SLR}$ per supernova: \begin{equation} \Upsilon_{\rm SLR} = \varepsilon^{-1} \int_0^{\infty} Y_{\rm SLR}(t^{\prime\prime})\, dt^{\prime\prime}, \end{equation} regardless of whether SNe are actually the source of SLRs. The $\varepsilon$ factor is the ratio of the supernova rate $\Gamma_{\rm SN}$ to the star-formation rate. Then the equilibrium SLR mass is given by \citep[e.g.,][]{Diehl06} \begin{equation} \label{eqn:MeqSLR} M_{\rm SLR}^{\rm eq} = \Gamma_{\rm SN} \Upsilon_{\rm SLR} \tau_{\rm SLR}. \end{equation} The supernova rate is proportional to the star-formation rate, so $\Gamma_{\rm SN} = \varepsilon {\rm SFR}$. The abundance of an SLR is given by $X_{\rm SLR} = M_{\rm SLR}^{\rm eq} m_H / (M_{\rm gas} m_{\rm SLR})$, where $m_{\rm SLR}$ is the mass of one atom of the SLR and $M_{\rm gas}$ is the gas mass in the galaxy.
Therefore the abundance of an SLR is \begin{equation} X_{\rm SLR} = \varepsilon \frac{\rm SFR}{M_{\rm gas}} \frac{\Upsilon_{\rm SLR} \tau_{\rm SLR} m_H}{m_{\rm SLR}}. \end{equation} The quantity $M_{\rm gas} / {\rm SFR} = \tau_{\rm gas}$ is the gas consumption time. Note that it is related to the specific star formation rate, ${\rm SSFR} = {\rm SFR} / M_{\star}$, as $\tau_{\rm gas} = f_{\rm gas} / ((1 - f_{\rm gas}) {\rm SSFR})$, where $M_{\star}$ is the stellar mass and $f_{\rm gas} = M_{\rm gas} / (M_{\rm gas} + M_{\star})$ is the gas fraction. Therefore, we can express the equilibrium abundance of the SLR in a galaxy as \begin{equation} X_{\rm SLR} = \frac{\varepsilon \Upsilon_{\rm SLR} \tau_{\rm SLR} m_H}{\tau_{\rm gas} m_{\rm SLR}} = \varepsilon \frac{1 - f_{\rm gas}}{f_{\rm gas}} {\rm SSFR} \frac{\Upsilon_{\rm SLR} \tau_{\rm SLR} m_H}{m_{\rm SLR}}. \end{equation} Finally, the ratio of SLR abundance in a galaxy to that in the Milky Way is \begin{eqnarray} \nonumber \frac{X_{\rm SLR}}{X_{\rm SLR}^{\rm MW}} & = & \frac{\tau_{\rm gas}^{\rm MW}}{\tau_{\rm gas}} \frac{\tau_{\rm SLR}}{\tau_{\rm SLR}^{\rm MW}}\\ & = & \frac{1 - f_{\rm gas}}{1 - f_{\rm gas}^{\rm MW}} \frac{f_{\rm gas}^{\rm MW}}{f_{\rm gas}} \frac{\rm SSFR}{\rm SSFR^{\rm MW}} \frac{\tau_{\rm SLR}}{\tau_{\rm SLR}^{\rm MW}}, \end{eqnarray} with a MW superscript referring to values in the present-day Milky Way. Thus, galaxies with short gas consumption times (and generally those with high SSFRs) should have high abundances of SLRs. The reason is that in such galaxies, more of the gas is converted into stars and SLRs within the residence time of an SLR. The greatest uncertainty in these abundances is the residence time $\tau_{\rm SLR}$. In the Milky Way, these times are just the radioactive decay times, defined here as the e-folding time. In starburst galaxies, however, much of the volume is occupied by a hot, low-density gas which forms into a galactic wind with characteristic speeds $v$ of several hundred kilometres per second (e.g., \citealt{Chevalier85}; \citealt*{Heckman90}; \citealt{Strickland09}). If massive stars emit SLRs at random locations in the starburst, most of them will dump their SLRs into the wind phase of the ISM. The wind-crossing time is $\tau_{\rm wind} = 330\ \textrm{kyr}\ (h / 100\ \textrm{pc}) (v / 300\ \textrm{km}~\textrm{s}^{-1})^{-1}$, where $h$ is the gas scale-height. The residence time in starburst galaxies is then $\tau = [\tau_{\rm decay}^{-1} + \tau_{\rm wind}^{-1}]^{-1}$. Furthermore, the SLRs ejected into the wind may never mix with the molecular gas, so the fraction of SLRs injected into the molecular medium may be $\ll 1$ (I discuss this issue further in section~\ref{sec:Mixing}). However, very massive stars are found close to their birth environments where there is a lot of molecular gas to enrich, and these may be the source of \textrm{$^{26}$Al}, as supported by the correlation of the 1.809 MeV \textrm{$^{26}$Al} decay line emission and free-free emission from massive young stars \citep{Knoedlseder99}. Turning to the specific example of \textrm{$^{26}$Al}, I note that the yield of \textrm{$^{26}$Al} is thought to be $\Upsilon_{\rm Al-26} \approx 1.4 \times 10^{-4}\ \textrm{M}_{\sun}$ per supernova \citep{Diehl06}.
For a Salpeter initial mass function from $0.1 - 100\ \textrm{M}_{\sun}$, the supernova rate is $\Gamma_{\rm SN} = 0.0064\ \textrm{yr}^{-1} ({\rm SFR} / \textrm{M}_{\sun}\ \textrm{yr}^{-1})$, or $\Gamma_{\rm SN} = 0.11\ \textrm{yr}^{-1} (L_{\rm TIR} / 10^{11}\ \textrm{L}_{\sun})$ in terms of the total infrared ($8 - 1000\ \mu\textrm{m}$) luminosity $L_{\rm TIR}$ of starbursts (\citealt{Kennicutt98}; \citealt*{Thompson07}). If I suppose all of the \textrm{$^{26}$Al} is retained by the molecular gas, so that the residence time is the \textrm{$^{26}$Al} decay time of 1.04 Myr, then the equilibrium abundance of \textrm{$^{26}$Al} in a galaxy is just \begin{equation} \label{eqn:XAl26Numer} X (\textrm{$^{26}$Al}) = 3.4 \times 10^{-11} \left(\frac{\tau_{\rm gas}}{\textrm{Gyr}}\right)^{-1} = 1.7 \times 10^{-9} \left(\frac{\tau_{\rm gas}}{20\ \textrm{Myr}}\right)^{-1} \end{equation} \subsection{High-Redshift Normal Galaxies} \label{sec:MSGalaxies} The star-formation rates and stellar masses of normal star-forming galaxies lie on a `main sequence' with a characteristic SSFR that varies weakly, if at all, with stellar mass \citep[e.g.,][]{Brinchmann04}. However, the characteristic SSFR evolves rapidly with redshift \citep[e.g.,][]{Daddi07,Noeske07,Karim11}, with ${\rm SSFR} \propto (1 + z)^{2.8}$ out to $z \approx 2.5$ -- a rise of a factor of $\sim 30$ \citep{Sargent12}. At $z \ga 2.5$, the SSFR of the main sequence then seems to remain constant \citep{Gonzalez10}. Countering this rise in the SSFR, the gas fractions of normal galaxies at high $z$ were also higher: the high equilibrium masses of SLRs are diluted to some extent by higher gas masses. \citet{Hopkins10} provide a convenient equation, motivated by the Schmidt law \citep{Kennicutt98}, to describe the evolution of gas fraction: \begin{equation} \label{eqn:fGas} f_{\rm gas} (z) = f_0 [1 - (t_L (z) / t_0) (1 - f_0^{3/2})]^{-2/3}, \end{equation} assuming a gas fraction $f_0$ at $z = 0$, with a lookback time of $t_L (z) = \int_0^z dz^{\prime} / [H_0 (1 + z^{\prime}) \sqrt{\Omega_{\Lambda} + \Omega_M (1 + z^{\prime})^3}]$ and a current cosmic age of $t_0$ \citep[see also][]{Hopkins09}. Since the gas fractions of normal galaxies at present are small, the evolution at low redshifts can be approximated as $f_{\rm gas} (z) = f_0 [1 - (t_L (z) / t_0)]^{-2/3}$. After calculating the mean abundances of SLRs in normal galaxies, I find that the rapid SSFR evolution overwhelms the modest evolution in $f_{\rm gas}$ at high $z$: the SLR abundances of normal galaxies evolve quickly. These enhancements are plotted in Fig.~\ref{fig:NormGalaxy}. Observational studies of high redshift main sequence galaxies indicate a slower evolution in $\tau_{\rm gas}$, resulting from a quicker evolution of $f_{\rm gas}$. Although equation~\ref{eqn:fGas} implies that $f_{\rm gas}$ was about twice as high ten billion years ago at $z \approx 2$, massive disc galaxies are observed with gas fractions of $\sim 40$ -- $50\%$, which is 3 to 10 times greater than at present \citep[e.g.,][]{Tacconi10,Daddi10}. According to \citet{Genzel10}, the typical (molecular) gas consumption time at redshifts 1 to 2.5 was $\sim 500\ \textrm{Myr}$. In the \citet{Daddi10} sample of BzK galaxies at $z \approx 1.5$, gas consumption times are likewise $\sim 300$ -- $700\ \textrm{Myr}$. To compare, the molecular gas consumption times at the present are estimated to be 1.5 to 3 Gyr \citep{Diehl06,Genzel10,Bigiel11,Rahman12}, implying an enhancement of a factor 3 to 6 in SLR abundances at $z \ga 1$.
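As a quick numerical illustration (a sketch; all input values are the ones quoted above), equation~\ref{eqn:XAl26Numer} and the factor of $\sim$3--6 enhancement can be reproduced in a few lines:
\begin{verbatim}
# Equilibrium 26Al abundance, X = eps * Upsilon * tau * m_H / (tau_gas * m_26).
EPS_SN  = 0.0064    # supernovae per Msun of stars formed (Salpeter IMF)
UPSILON = 1.4e-4    # Msun of 26Al per supernova (Diehl et al. 2006)
TAU     = 1.04e6    # yr; 26Al e-folding decay time
A26     = 26.0      # m_26 = 26 m_H

def x_al26(tau_gas_yr):
    return EPS_SN * UPSILON * TAU / (tau_gas_yr * A26)

print(x_al26(1e9))                # ~3.6e-11; eq. (XAl26Numer) to rounding
print(x_al26(20e6))               # ~1.8e-9; the quoted 1.7e-9 to rounding
print(x_al26(5e8) / x_al26(2e9))  # 4.0: within the factor 3-6 quoted above
\end{verbatim}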
But note that the BzK galaxies are not the direct ancestors of galaxies like the present Milky Way, whose progenitors are less massive. The SSFR, when observed to have any mass dependence, is greater in low mass galaxies at all $z$ \citep{Sargent12}. This means that lower mass galaxies at all $z$ have shorter $\tau_{\rm gas}$, as indicated by observations of present galaxies \citep{Saintonge11}. The early Milky Way therefore may have had a gas consumption time smaller than $500\ \textrm{Myr}$. So far, I have ignored possible metallicity $Z$ dependencies in the yield $\Upsilon$ of SLRs. It may be generally expected that star-forming galaxies had lower metallicity in the past, since less of the gas has been processed by stellar nucleosynthesis. However, observations of the age-metallicity relation of G dwarfs near the Sun reveal that they have nearly the same metallicity at ages approaching 10 Gyr (e.g., \citealt{Twarog80,Haywood06}; \citealt*{Holmberg07}), though the real significance of the lack of a trend remains unclear, since there is a wide scatter in metallicity with age \citep[see the discussion by][]{Prantzos09}. Observations of external star-forming galaxies find weak evolution at constant stellar mass, with metallicity $Z$ decreasing by $\sim 0.1-0.2$ dex per unit redshift (\citealt*{Lilly03}; \citealt{Kobulnicky04}). After adopting a metallicity dependence of $Z(z) = Z(0) \times 10^{-0.2 z}$ (0.2 dex decrease per unit redshift), I show in Fig.~\ref{fig:NormGalaxy} the revised SLR abundances, assuming that the SLR yield goes as $Z^{-1}$, $Z^{-0.5}$, $Z^{0.5}$, $Z$, $Z^{1.5}$, and $Z^2$. If the yields are smaller at lower metallicity, the SLR abundances are still elevated at high redshift, though not by as much as for metallicity-independent yields. As an example, the yield of \textrm{$^{26}$Al} in the winds of Wolf-Rayet stars is believed to scale as $\Upsilon \propto Z^{1.5}$ \citep{Palacios05}. According to \citet{Limongi06}, these stellar winds contribute only a minority of the \textrm{$^{26}$Al} yield, so it is unclear how the \textrm{$^{26}$Al} yield really scales. \citet{Martin10} considered the \textrm{$^{26}$Al} and \textrm{$^{60}$Fe} yields from stars with half Solar metallicity. They found that, because reduced metallicity lowers wind losses, more SLRs are produced in supernovae. This mostly compensates for the reduced wind \textrm{$^{26}$Al} yield, and actually raises the synthesized amount of \textrm{$^{60}$Fe}. \begin{figure} \centerline{\includegraphics[width=8cm]{f1.eps}} \caption{Plot of the SLR abundance enhancements in normal galaxies lying on the `main sequence', for a gas fraction evolution described by equation~\ref{eqn:fGas}. The rapid evolution of SSFRs leads to big enhancements of SLRs at high $z$. Even during the epoch of Solar system formation, the mean SLR abundance was twice the present value. The different lines are for different $f_{\rm gas}$ at $z = 0$, assuming SLR yields are independent of metallicity: 0.05 (dotted), 0.1 (solid), 0.2 (dashed). The shading shows the abundances for $0.05 \le f_{\rm gas} \le 0.2$ when the SLR yield depends on metallicity, assuming a 0.2 dex decrease in metallicity per unit redshift.
\label{fig:NormGalaxy}} \end{figure} \subsection{Starbursts} \begin{table*} \begin{minipage}{170mm} \caption{$^{26}$A\lowercase{l} Abundances and Associated Ionization Rates} \label{table:Al26Abundances} \begin{tabular}{lccccccccc} \hline Starburst & SFR & $\Gamma_{\rm SN}$ & $M_{\rm Al-26}^{\rm eq}$ & $M_H$ & $\tau_{\rm gas}$ & $X (\textrm{$^{26}$Al})^a$ & $\displaystyle \frac{^{26}\rm Al}{^{27}\rm Al}^b$ & $\zeta_{\rm Al-26}(e^+)^c$ & $\zeta_{\rm Al-26}(e^+ \gamma)^d$\\ & ($\textrm{M}_{\sun}\ \textrm{yr}^{-1}$) & (yr$^{-1}$) & ($\textrm{M}_{\sun}$) & ($\textrm{M}_{\sun}$) & ($\textrm{Myr}$) & & & $(\sec^{-1})$ & $(\sec^{-1})$\\ \hline Milky Way ($z = 0$)$^e$ & 3.0 & 0.019 & 2.8 & $4.5 \times 10^9$ & 1500 & $2.4 \times 10^{-11}$ & $9.4 \times 10^{-6}$ & $1.9 \times 10^{-20}$ & $7.1 \times 10^{-20}$\\ Galactic Centre CMZ$^f$ & 0.071 & $4.6 \times 10^{-4}$ & 0.067 & $3 \times 10^7$ & 420 & $8.6 \times 10^{-11}$ & $3.4 \times 10^{-5}$ & $6.9 \times 10^{-20}$ & $2.6 \times 10^{-19}$\\ NGC 253 core$^{g,h}$ & 3.6 & 0.023 & 3.3 & $3 \times 10^7$ & 8.3 & $4.3 \times 10^{-9}$ & $1.8 \times 10^{-3}$ & $3.4 \times 10^{-18}$ & $1.3 \times 10^{-17}$\\ M82$^{g,i}$ & 10.5 & 0.067 & 9.8 & $2 \times 10^8$ & 19 & $1.9 \times 10^{-9}$ & $7.5 \times 10^{-4}$ & $1.5 \times 10^{-18}$ & $5.7 \times 10^{-18}$\\ Arp 220 nuclei$^j$ & 50 & 0.3 & 44 & $10^9$ & 20 & $1.7 \times 10^{-9}$ & $6.7 \times 10^{-4}$ & $1.3 \times 10^{-18}$ & $5.0 \times 10^{-18}$\\ Submillimetre galaxy$^k$ & 1000 & 6.4 & 930 & $2.5 \times 10^{10}$ & 25 & $1.4 \times 10^{-9}$ & $5.7 \times 10^{-4}$ & $1.1 \times 10^{-18}$ & $4.3 \times 10^{-18}$\\ BzK galaxies$^l$ & 200 & 1 & 200 & $7 \times 10^{10}$ & 400 & $1 \times 10^{-10}$ & $4 \times 10^{-5}$ & $8 \times 10^{-20}$ & $3 \times 10^{-19}$\\ \hline \end{tabular} \\$^a$: Mean abundance of \textrm{$^{26}$Al}, calculated assuming the \textrm{$^{26}$Al} is well-mixed with the gas and resides there for a full decay time (instead of, for example, a wind-crossing time). \\$^b$: Calculated assuming Solar metallicity with $\log_{10} [N(^{27}{\rm Al})/N(H)] = -5.6$. \\$^c$: Ionization rate from \textrm{$^{26}$Al} with the derived abundance, with ionization only from MeV positrons released by the decay, assuming effective stopping. \\$^d$: Ionization rate from \textrm{$^{26}$Al} with the derived abundance, where ionization from the 1.809 MeV decay line and 0.511 keV positron annihilation gamma rays is included, assuming they are all stopped. \\$^e$: Supernova rate and gas mass from \citet{Diehl06}; SFR calculated from supernova rate using Salpeter IMF for consistency. \\$^f$: Inner 100 pc of the Milky Way. SFR and $\Gamma_{\rm SN}$ from IR luminosity in \citet*{Launhardt02}; gas mass from \citet{Molinari11}. \citet{PiercePrice00} gives a gas mass of $5 \times 10^7\ \textrm{M}_{\sun}$. \\$^g$: SFR and $\Gamma_{\rm SN}$ from IR luminosity in \citet{Sanders03}. \\$^h$: Gas mass from \citet*{Harrison99}. \\$^i$: Gas mass from \citet{Weiss01}. \\$^j$: Assumes IR luminosity of $3 \times 10^{11}\ \textrm{L}_{\sun}$ for SFR and $\Gamma_{\rm SN}$ and gas mass given in \citet{Downes98}. \\$^k$: Typical gas mass and SFR of submillimetre galaxies from \citet{Tacconi06}. \\$^l$: Mean SFR and gas mass of the 6 BzK galaxies in \citet{Daddi10}, which are representative of main sequence galaxies at $z \approx 1.5$.
\end{minipage} \end{table*} The true starbursts, driven by galaxy mergers and galaxies interacting with each other, represent $\sim 10\%$ of star formation at all redshifts \citep{Rodighiero11,Sargent12}. They have SSFRs that are up to an order of magnitude higher than those of $z = 2$ normal galaxies. The mean, background abundances of SLRs in starbursts are therefore about 100 times greater than in the present-day Milky Way. I show the \textrm{$^{26}$Al} abundances in some nearby starburst galaxies in Table~\ref{table:Al26Abundances}. In the Galactic Centre region, the \textrm{$^{26}$Al} abundance is only a few times that of the present Milky Way as a whole. However, the \textrm{$^{26}$Al} abundances are extremely high in the other starbursts, $\sim 2 \times 10^{-9}$, about twenty times that of the primordial Solar nebula. The $^{26}{\rm Al}/^{27}{\rm Al}$ ratio in these starbursts is also very high. Assuming Solar metallicity with an $^{27}$Al abundance of $\log_{10} [N(^{27}{\rm Al})/N(H)] = -5.6$ \citep{Diehl06}, this ratio is $\sim (0.6 - 1.8) \times 10^{-3}$. Again, this ratio for Solar metallicity gas is $\sim 10 - 30$ times higher than that of the early Solar Nebula, $\sim 5 \times 10^{-5}$. \section{Systematic Uncertainties} \subsection{Effects of Variable Star-Formation Rates} The steady-state abundance (equation~\ref{eqn:MeqSLR}) is only appropriate when the star-formation rate is slowly varying on time-scales of a few Myr. Since young stellar populations produce SLRs for several Myr, and since \textrm{$^{26}$Al} and \textrm{$^{60}$Fe} themselves survive for $\ga 1\ \textrm{Myr}$, the injection rate of SLRs is smoothed over those time-scales (equation~\ref{eqn:QSLR}). Very high-frequency fluctuations in the SFR therefore have little effect on the abundance of SLRs. At the opposite extreme, when the fluctuations in the SFR are slow compared to a few Myr, we can simply take the present SFR and use it in equation~\ref{eqn:MeqSLR} for accurate results. However, intermediate-frequency variability invalidates the use of equation~\ref{eqn:MeqSLR}, and can result in the SLR abundance being out of phase with the SFR. Normal main sequence galaxies at high redshift built up their stellar populations over Gyr time-scales, evolving secularly \citep[c.f.,][]{Wuyts11}. They are also large enough to contain many stellar clusters, so that stochastic effects average out. It is reasonable to suppose that they have roughly constant SFRs over the relevant time-scales. True starbursts, on the other hand, are violent events that last no more than $\sim 100\ \textrm{Myr}$, as evinced by their short $\tau_{\rm gas}$. They are relatively small, so stochastic fluctuations in their star-formation rates are more likely. \citet{Forster03} studied the nearby, bright starburst M82 and concluded that its star-formation history is in fact bursty. The star-formation histories of other starbursts are poorly known, but \citet{Mayya04} present evidence for large fluctuations on $\sim 4\ \textrm{Myr}$ time-scales. I estimate the magnitude of these fluctuations for the prototypical starburst M82 with the full equation for SLR mass in a one-zone model (equation~\ref{eqn:MeqSLRFull}).
The solution to equation~\ref{eqn:MeqSLRFull} for $M_{\rm SLR}$ is \begin{equation} M_{\rm SLR} (t) = \int_{-\infty}^t {\rm SFR}(t^{\prime}) \times m_{\rm SLR}(t - t^{\prime}) dt^{\prime}, \end{equation} where \begin{equation} m_{\rm SLR}(t^{\prime}) = \int_0^{t^{\prime}} Y_{\rm SLR}(t^{\prime\prime}) \exp\left(-\frac{t^{\prime} - t^{\prime\prime}}{\tau_{\rm SLR}}\right) dt^{\prime\prime}. \end{equation} The quantity $m_{\rm SLR}(t^{\prime})$ represents the SLR mass in the ISM from a coeval stellar population of unit mass and age $t^{\prime}$. It is given by \citet{Cervino00} and \citet{Voss09} for \textrm{$^{26}$Al} and $^{60}$Fe. I use the star-formation history derived by \citet{Forster03} for the `3D region' of M82, which consists of two peaks at 4.7 Myr ago and 8.9 Myr ago. The peaks are modelled as Gaussians with the widths given in \citet{Forster03} (standard deviations $\sigma$ of 0.561 Myr for the more recent burst, and 0.867 Myr for the earlier burst). I convert the star-formation rate from a Salpeter IMF from 1 to 100$\ \textrm{M}_{\sun}$ given in \citet{Forster03} to a Salpeter IMF from 0.1 to 100$\ \textrm{M}_{\sun}$ for consistency with the rest of the paper.\footnote{I ignore the relatively small difference between the upper mass limit of 100$\ \textrm{M}_{\sun}$ in \citet{Forster03} and 120$\ \textrm{M}_{\sun}$ in \citet{Cervino00} and \citet{Voss09}. Since stars with masses 100 to 120$\ \textrm{M}_{\sun}$ can affect stellar diagnostics, converting to that IMF may require an adjustment to the star-formation history beyond a simple mass scaling.} This region does not include the entire starburst; it has roughly $1/3$ of the luminosity of the starburst, but the stellar mass formed within the 3D region over the past 10 Myr gives an average SFR of 10 $\textrm{M}_{\sun}\ \textrm{yr}^{-1}$ in the \citet{Forster03} history. Note that \citet*{RodriguezMerino11} derives different age distributions for stellar clusters (compare with \citealt{Satyapal97}). \citet{Strickland09} has also argued that the star-formation history of M82's starburst core is not well constrained before 10 Myr ago (as observed from Earth), and may have extended as far back as 60 Myr ago. Thus, I take the \citet{Forster03} history merely as a representative example of fluctuating SFRs. \begin{figure} \centerline{\includegraphics[width=8cm]{f2.eps}} \caption{History of the SLR masses in M82's `3D region' ISM for the star-formation history given in \citet{Forster03}. The black lines are for \textrm{$^{26}$Al} and grey lines are for $^{60}$Fe; solid lines are using the yields in \citet{Voss09} and dashed lines are using the \citet{Cervino00} yields. We presently observe M82 at $t = 0$; I assume there are no bursts of star-formation afterwards, so that the masses inevitably decay away. \label{fig:M82SLRHistory}} \end{figure} The calculated \textrm{$^{26}$Al} (black) and \textrm{$^{60}$Fe} (grey) masses are plotted in Fig.~\ref{fig:M82SLRHistory}. At first, there is no SLR mass in the starburst, because it takes a few Myr for SLR injection to start. With the \citet{Voss09} yields, the SLR masses rise quickly and peak $\sim 5\ \textrm{Myr}$ ago (as observed from Earth). The SLR masses drop afterwards. Yet they are still within a factor of 1.7 of their peak values even now, $\sim 5\ \textrm{Myr}$ after the last star-formation burst. If there is no further star-formation, the SLRs will mostly vanish over the next 10 Myr.
The \citet{Cervino00} yields predict a greater role for supernovae from lower mass stars, so the fluctuations are not as great; the \textrm{$^{60}$Fe} mass remains roughly the same even 10 Myr from now. As long as there has been star-formation in the past $\sim 5\ \textrm{Myr}$, the SLR abundances are at least half of those predicted by the steady-state assumption. There is a more fundamental reason to expect that the steady-state SLR abundances are roughly correct for starbursts. A common way of estimating star-formation rates in starbursts is to use the total infrared luminosity \citep{Kennicutt98}, which is nearly the bolometric luminosity for these dust-obscured galaxies. Young stellar populations, containing very massive stars, are brighter and contribute disproportionately to the bolometric luminosity. Therefore, both the luminosity and the SLR abundances primarily trace young stars. To make this comparison quantitative, I ran a Starburst99 (v6.04) model of a $Z = 0.02$ metallicity coeval stellar population with a Salpeter IMF ($dN/dM \propto M^{-2.35}$) between 0.1 and 120$\ \textrm{M}_{\sun}$ \citep{Leitherer99}. I then calculate the SFR that would be derived from these luminosities using the \citet{Kennicutt98} conversion, and then from that, the expected steady-state SLR masses from equation~\ref{eqn:MeqSLR}. The `bolometric' \textrm{$^{26}$Al} masses are compared to the actual masses in Fig.~\ref{fig:LBolVsMAl26}. \begin{figure} \centerline{\includegraphics[width=8cm]{f3.eps}} \caption{How the bolometric luminosity traces \textrm{$^{26}$Al} mass for a coeval stellar population with age $t$. The grey line is the predicted steady state \textrm{$^{26}$Al} mass I would predict from the bolometric luminosity of the population, whereas the black lines are the actual mass of \textrm{$^{26}$Al} (solid for \citealt{Voss09} and dashed for \citealt{Cervino00}).\label{fig:LBolVsMAl26}} \end{figure} Although the very youngest stellar populations are bright but not yet making SLRs, the bolometric luminosity (grey) is a good tracer of \textrm{$^{26}$Al} mass (black) for stellar populations with ages between 3 and 20 Myr. For most of the interval, the bolometric \textrm{$^{26}$Al} masses are within a factor of 2 of the actual masses. For populations between 15 Myr and 20 Myr, the \citet{Voss09} and \citet{Cervino00} predictions envelop the bolometric \textrm{$^{26}$Al} masses. At 20 to 25 Myr old, the bolometric \textrm{$^{26}$Al} masses are about twice the true masses. For older populations still, the true \textrm{$^{26}$Al} masses finally die away while the bolometric luminosity only slowly declines. Note that, if stars have been forming continuously for the past 100 Myr, over half of the luminosity comes from stars younger than 20 Myr. Thus, the use of the bolometric luminosities introduces a systematic error of a factor $\la 3$. In short, the use of bolometric luminosity as an SFR indicator and the natural variability in the star-formation rates of starbursts can lead to overestimates of the SLR abundances by a factor of $\sim 3$. But I estimate the SLR abundances of true starbursts are a hundred times higher than in the present Milky Way (equation~\ref{eqn:XAl26Numer} and Table~\ref{table:Al26Abundances}). The ratio is so great that the systematic effects do not undermine the basic conclusion that SLR abundances are much larger in true starbursts.
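The calculation behind Fig.~\ref{fig:M82SLRHistory} is straightforward to sketch numerically. The short Python script below convolves a two-burst Gaussian star-formation history (peaks 4.7 and 8.9 Myr ago, $\sigma = 0.561$ and 0.867 Myr, normalized to an average SFR of 10 $\textrm{M}_{\sun}\ \textrm{yr}^{-1}$ over the past 10 Myr) with a decay-weighted injection kernel. The boxcar injection window (ages 3--7 Myr), the specific yield, and the \textrm{$^{26}$Al} mean lifetime of 1.03 Myr are crude stand-ins for the \citet{Cervino00} and \citet{Voss09} population-synthesis $m_{\rm SLR}(t^{\prime})$ curves, so the absolute masses are illustrative only:
\begin{verbatim}
import numpy as np

# Sketch of M_SLR(t) for a two-burst star-formation history.
tau26 = 1.03    # assumed ^26Al mean lifetime (Myr)
y26 = 1e-5      # assumed Msun of ^26Al ejected per Msun of stars formed

def sfr(t):
    """SFR in Msun/Myr; t in Myr, with t = 0 'now' (negative = past)."""
    g = np.exp(-0.5*((t + 4.7)/0.561)**2) + np.exp(-0.5*((t + 8.9)/0.867)**2)
    # normalize so ~1e8 Msun forms in total (<SFR> ~ 10 Msun/yr over 10 Myr)
    return 1e8/(np.sqrt(2.0*np.pi)*(0.561 + 0.867))*g

def m26(age):
    """^26Al mass (Msun) per Msun formed: boxcar injection at ages 3-7 Myr
    at a rate y26/4 per Myr, followed by exponential decay."""
    age = np.asarray(age, dtype=float)
    top = np.minimum(age, 7.0)
    m = (y26/4.0)*tau26*(np.exp(-(age - top)/tau26)
                         - np.exp(-(age - 3.0)/tau26))
    return np.where(age > 3.0, m, 0.0)

age = np.linspace(0.0, 40.0, 4001)
da = age[1] - age[0]
for t_obs in (-5.0, 0.0, 5.0, 10.0):
    M = np.sum(sfr(t_obs - age)*m26(age))*da   # discretized convolution
    print(f"t = {t_obs:+5.1f} Myr: M(26Al) ~ {M:8.1f} Msun")
\end{verbatim}
The qualitative behaviour matches the figure: the mass peaks a few Myr after the bursts and decays away within $\sim 10$ Myr once star formation stops.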
\subsection{Are SLRs mixed quickly enough into the gas?} \label{sec:Mixing} Although the average levels of SLRs in starbursts and high-$z$ normal galaxies are high, that does not by itself mean the SLRs influence the environments for star-formation. While SLRs can play an important role in star-forming regions, by elevating the ionization rates and by being incorporated into solid bodies, SLRs trapped in ionized gas are irrelevant for these processes. The mixing of metals from young stars into the ISM gas mass is usually thought to be very slow in the present Milky Way, compared to SLR lifetimes. The massive stars responsible for making SLRs often live in star clusters, which blow hot and rarefied bubbles in the ISM. Supernovae also excavate the coronal phase of the ISM \citep{McKee77}. Turbulence within the bubbles mixes the SLRs and homogenizes their abundances \citep[e.g.,][]{Martin10}, over a time scale $t_{\rm mix} \approx L / v_{\rm turb}$, where $L$ is the outer scale of turbulence (typical size of the largest eddies) and $v_{\rm turb}$ is the turbulent speed \citep{Roy95,Pan10}. The large outer scale of turbulence, $\sim$100 -- 1000$\ \textrm{pc}$, and the slow turbulent speeds ($\sim 5$--$10\ \textrm{km}~\textrm{s}^{-1}$) in the Milky Way imply mixing times of $\sim 10$ -- $200\ \textrm{Myr}$. Even if the \textrm{$^{26}$Al} is homogenized within the superbubbles, this low density hot gas requires a long time to mechanically affect cold star-forming clouds, because of the large density contrast \citep{deAvillez02}. Mixing between the phases, particularly warm and hot gas, is accelerated by Rayleigh-Taylor and Kelvin-Helmholtz instabilities \citep{Roy95}, but overall, mixing takes tens of Myr to operate in the Milky Way (\citealt{deAvillez02}; see also \citealt{Clayton83}, where mixing times are between the warm ISM from evaporated H I clouds, cool and large H I clouds, and molecular clouds). Thus, SLRs like \textrm{$^{26}$Al} are thought to decay long before they are mixed thoroughly with the star-forming gas. Indeed, studies of the abundances of longer lived isotopes in the primordial Solar system support longer mixing times of $\sim 50$ -- 100$\ \textrm{Myr}$ \citep[e.g.,][]{Meyer00,Huss09}. It is thought that these obstacles existed, at least qualitatively, in the $z \approx 0.44$ Milky Way, when the Solar system formed. These problems are part of the motivation for invoking a local source of SLRs, including energetic particles from the Sun itself \citep{Lee98}, injection from a nearby AGB star (\citealt*{Busso03}), or injection from an anomalously nearby supernova \citep{Cameron77} or Wolf-Rayet star (\citealt*{Arnould97}). Recently, though, several authors have proposed models that might overcome the mixing obstacle, where young stars are able to inject SLRs into star-forming clouds. A motivation behind these models is the idea that molecular clouds are actually intermittent high density turbulent fluctuations in the ISM \citep[e.g.,][]{MacLow04}, and the supernovae that partly drive the turbulence -- indirectly forming the molecular clouds -- are also the sources of SLRs \citep{Gounelle09}. In the model of \citet{Gounelle09}, old superbubbles surrounding stellar clusters form into molecular clouds after ploughing through the ISM for $\sim$10 Myr. Supernovae continue going off in the star clusters, adding their SLRs into these newborn molecular clouds \citep[see also][]{Gounelle12}.
Simulations by \citet*{Vasileiadis13} also demonstrate that SLRs from supernovae going off very near giant molecular clouds are mixed thoroughly with the star-forming gas. On a different note, \citet{Pan12} argued that supernova remnants are clumpy, and that clumps could penetrate into molecular clouds surrounding star clusters and inject SLRs. If these scenarios are common, then a large fraction of the produced SLRs reaches the star-forming gas before decaying. In fact, these mechanisms may be so efficient that SLRs are concentrated only into star-forming molecular gas, a minority of the Milky Way gas mass. If so, then the abundance of SLRs within Galactic molecular gas ($M_{\rm SLR} / M_{\rm H2}$) is greater than the mean background level ($M_{\rm SLR} / M_{\rm gas}$); in this way, SLR abundances could reach the elevated levels that existed in the early Solar system \citep{Gounelle09,Vasileiadis13}. On the other hand, young stars can trigger star-formation in nearby gas without polluting it. This can occur when a shock from a supernova or from an overpressured H II region propagates into a molecular cloud, causing the cores within it to collapse (\citealt{Bertoldi89}; \citealt*{Elmegreen95}). This process has been inferred to happen in the Upper Scorpius OB association \citep{Preibisch99}. Since the cores are pre-existing, they may not be enriched with SLRs (although supernova shocks can also inject SLRs into a molecular cloud; see \citealt{Boss10} and \citealt{Boss12}). The triggering can also occur before any supernovae enrich the material with SLRs. Star formation can also be triggered when a shock from a H II region sweeps up a shell of material, which eventually becomes gravitationally unstable and collapses (\citealt{Elmegreen77}; see also \citealt{Gritschneder09}). I note, however, that the homogeneity of the \textrm{$^{26}$Al} abundance in the early Solar system is controversial; if the abundance was inhomogeneous, that would be inconsistent with efficient SLR mixing within the Solar system's progenitor molecular cloud. Although \citet*{Villeneuve09} conclude that \textrm{$^{26}$Al} had a constant abundance, \citet{Makide11} find that the \textrm{$^{26}$Al} abundance varied during the earliest stages of Solar system formation, when aluminium first condensed from the Solar nebula. What is even less clear, though, is how similar the Milky Way is to starbursts and the massive high-$z$ normal galaxies that host the majority of the cosmic star formation. As in the Milky Way, supernovae in starbursts like M82 probably blast out a hot phase. But the hot phase escapes in a rapid hot wind in starbursts with high star-formation densities ($\ga 0.1\ \textrm{M}_{\sun}\ \textrm{yr}^{-1}\ \textrm{kpc}^{-2}$; \citealt{Chevalier85,Heckman90}). There is direct evidence for this hot wind from X-ray observations \citep{Strickland07,Strickland09}. Furthermore, supernova remnants are observed to expand quickly in M82, implying that they are going off in a very low density environment (e.g., \citealt{Beswick06}; compare with \citealt{Chevalier01}). Cool and warm gas is observed outflowing alongside the hotter wind, possibly entrained by the hot wind \citep{Strickland09}. Whereas the edges of superbubbles eventually cool and fade back into the ISM in the Milky Way after 10 -- 100 Myr, any superbubble gas that does cool in these starbursts could still be pushed out by the wind. If the SLRs are trapped in the wind, they reside in these starbursts for only a few hundred kyr.
But the ISM conditions in starbursts are very different, with higher densities in all phases, higher pressures, and higher turbulent speeds \citep[e.g.,][]{Smith06}. Starbursts are physically small, with radii of $\sim 200\ \textrm{pc}$ at $z = 0$ -- comparable to the size of some individual superbubbles in the Milky Way. The eddy sizes therefore must be smaller and mixing processes could be faster. To demonstrate just how small superbubbles are in starbursts, \citet*{Silich07} modelled the superbubble surrounding the star cluster M82 A-1 in M82, which has a mass of $2 \times 10^6\ \textrm{M}_{\sun}$. They find that the wind propagates only for a few parsecs before being shocked and cooled. Stellar ejecta in the core of the cluster also cool rapidly in their model. The turbulent mixing time is therefore much smaller, $\sim 200\ \textrm{kyr}$ for an eddy length of 10 pc and a turbulent speed of $50\ \textrm{km}~\textrm{s}^{-1}$. Conditions are even more extreme in present day compact Ultraluminous Infrared Galaxy starbursts, where the ISM is so dense ($\sim 10^4\ \textrm{cm}^{-3}$; \citealt{Downes98}) that a hot phase may be unable to form (\citealt{Thornton98}; \citealt*{Thompson05}). Instead, the ISM is almost entirely molecular \citep{Downes98}. Indeed, observations of supernovae in Arp 220 indicate they are going off in average density molecular gas \citep{Batejat11}. Supernova remnants fade within a few tens of kyr into the ISM, due to powerful radiative losses \citep{McKee77}. The SLRs then are incorporated into the molecular ISM in a turbulent mixing time, the whole process taking just a few hundred kyr. The main uncertainty is then not whether the SLRs are injected into the molecular gas, but whether these SLR-polluted regions of the molecular gas fill the entire starburst. Turbulent mixing smooths abundances over regions the size of the largest eddies \citep{Pan10}, but if the distribution of SLR injection sites varies over larger scales, the final SLR abundance may also vary on these large scales. We know very little about the conditions in high redshift galaxies. At high redshift, star formation in main sequence galaxies is dominated by massive systems with large star-formation rates. These massive galaxies are several kpc in radius, but contain large amounts of molecular gas \citep{Daddi10}. They also have large star-formation densities and host winds. In the more extreme galaxies, radiative bremsstrahlung losses stall any hot wind \citep{Silich10}. Turbulent speeds in these galaxies are high \citep{Green10}, implying faster turbulent mixing than in the Milky Way. But it is not clear which phase the SLRs are injected into or how long it takes for them to mix throughout star formation regions. The effects of clustering in the ISM are also uncertain, but clustering is probably important in these galaxies, where huge clumps ($\ga 10^8\ \textrm{M}_{\sun}$ and a kpc wide) are observed \citep[e.g.,][]{Genzel11}. To summarize, while there are reasons to expect that most of the SLRs synthesized by young stars in the Milky Way decay before reaching star-forming gas, this is not necessarily true in starbursts or high-$z$ normal galaxies. Turbulent mixing is probably fast, at least in compact starbursts which are physically small. On the other hand, winds might blow out SLRs before they reach the star-forming gas, at least in the weaker starbursts. Clearly, this issue deserves further study.
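As a check on the order-of-magnitude estimates in this section, the mixing time $t_{\rm mix} \approx L / v_{\rm turb}$ can be evaluated directly for the Milky Way and starburst numbers quoted above:
\begin{verbatim}
# t_mix ~ L / v_turb for the outer scales and turbulent speeds in the text.
PC_KM = 3.086e13     # km per parsec
MYR_S = 3.156e13     # seconds per Myr

def t_mix_myr(L_pc, v_kms):
    return L_pc*PC_KM/v_kms/MYR_S

# Milky Way: L ~ 100-1000 pc, v_turb ~ 5-10 km/s -> ~10-200 Myr
print(f"Milky Way: {t_mix_myr(100, 10):.0f} - {t_mix_myr(1000, 5):.0f} Myr")
# Compact starburst (M82 A-1 scale): L ~ 10 pc, v_turb ~ 50 km/s -> ~200 kyr
print(f"Starburst: {t_mix_myr(10, 50)*1e3:.0f} kyr")
\end{verbatim}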
\section{Implications} \subsection{Implications for the early Solar system and Galactic stars of similar age} The rapid evolution in SSFRs implies that Galactic background SLR abundances were up to twice as high during the epoch of Solar system formation (4.56 Gyr ago; $z \approx 0.44$). If the evolution of the Milky Way's gas fraction was comparable to that in observed massive main sequence galaxies, the enhancement may have been only $\sim 50\%$ above present values (see the discussion in section~\ref{sec:MSGalaxies}; \citealt{Genzel10}). The inferred primordial abundances of $^{60}$Fe in the Solar system are in fact up to twice as high as in the contemporary Milky Way, as determined with gamma-ray lines \citep{Williams08,Gounelle08}. This overabundance is cited as evidence for an individual, rare event enriching the early Solar system, or the gas that formed into the Solar system, with SLRs. However, my calculations show this is not necessarily the case: the factor of two higher abundances of $^{60}$Fe in the early Solar system can arise \emph{simply because the Galaxy was more efficient at converting gas into stars 4.5 Gyr ago.} The primordial abundance of \textrm{$^{26}$Al} in the Solar system was about six times higher than the mean Galactic value at present \citep{Diehl06}, or three times higher than the mean Galactic abundance at $z = 0.44$ assuming that equation~\ref{eqn:fGas} holds. Even so, the normal effects of galaxy evolution are a potential contributor to the greater \textrm{$^{26}$Al} abundances, assuming efficient mixing of \textrm{$^{26}$Al} with the molecular gas of the Milky Way. Furthermore, the high abundances of \textrm{$^{26}$Al} in the early Solar system are actually typical of star-formation at $z \approx 1 - 2$ -- when most cosmic star-formation occurred. In this sense, the early Solar system's \textrm{$^{26}$Al} abundance may be normal for most planetary systems in the Universe. As I have discussed in Section~\ref{sec:Mixing}, it is not clear whether the background abundances of \textrm{$^{26}$Al} and other SLRs actually represent those of typical star-forming gas; if mixing takes more than a few Myr, these SLRs could not have affected star and planet formation \citep{Meyer00,deAvillez02,Huss09}. But although there may have been a wide distribution of abundances if mixing is inefficient, the mean of the distribution is still higher simply because there were more supernovae and young stars per unit gas mass. Thus, a greater fraction of star formation occurred above any given threshold in the past. In addition, the Galactic background level can be meaningful if a large fraction of the SLRs from young stars make it into the cold gas, and \citet{Gounelle09}, \citet{Gounelle12}, \citet{Pan12}, and \citet{Vasileiadis13} have presented mechanisms where this can happen. Although there is suggestive evidence that these mechanisms did not operate for the Solar system \citep{Makide11}, there is no reason they could not have worked for other star systems of similar ages. Then the Solar system's relatively high abundance of SLRs, and \textrm{$^{60}$Fe} in particular, may be common for Galactic stars of similar ages, even if through a different process. Finally, as I noted, these conclusions depend on how the yields of SLRs from massive stars change with metallicity, and what the mean Galactic metallicity was at the epoch of Solar system formation.
\subsection{The Ionization Rate and Physical Conditions in Starbursts' Dense Clouds} Radioactive decay from SLRs injects energy in the form of daughter particles into the ISM. The decay particles, with typical energies of order an MeV, ionize gas and ultimately heat the ISM, if they do not escape entirely. The high abundances of SLRs, including \textrm{$^{26}$Al}, can alter the ionization state of molecular gas in these galaxies. The ionization rate, in particular, is important in determining whether magnetic fields can thread through the densest gas. I focus here on the contribution from \textrm{$^{26}$Al}, which dominated the ionization rate from SLRs in the primordial Solar system \citep{Umebayashi09}. For the sake of discussion, I assume that the SLRs are well-mixed into the gas, despite the uncertainties (section~\ref{sec:Mixing}). Each \textrm{$^{26}$Al} decay releases an energy $E_{\rm decay}$ into the ISM in the form of MeV positrons and gamma rays. If each atom in the ISM takes an energy $E_{\rm ion}$ to be ionized, each \textrm{$^{26}$Al} decay can therefore ionize $E_{\rm decay} / E_{\rm ion}$ atoms. In 82\% of the decays, a positron with a kinetic energy of 1.16 MeV is released, and the positron is slowed by ionization losses \citep{Finocchi97}. The minimum energy per decay, after accounting for this branching ratio, that goes into ionization is $E_e^{\rm min} = 0.95\ \textrm{MeV}$, when the medium stops the positron (in-flight annihilation losses are negligible at these energies; \citealt{Beacom06}). The annihilation of the positron with an electron in the ISM produces gamma rays of total energy $2 \times 0.511 = 1.022$ MeV. In addition, in very nearly all \textrm{$^{26}$Al} decays, a 1.809 MeV gamma ray is produced. These gamma rays only interact with the ISM over columns of several $\textrm{g}~\textrm{cm}^{-2}$, so only in particularly dense regions will they contribute to the ionization \citep{Finocchi97}. When they do, $E_{\rm decay}^{e \gamma} = 3.60\ \textrm{MeV}$. \citet{Stepinski92} gives $E_{\rm ion}$ as 36.3 eV, so that the ionization rate is \begin{equation} \zeta_{\rm Al-26} = \frac{X_{\rm Al-26} E_{\rm decay}}{(36.3\ \textrm{eV}) \tau_{\rm decay}}. \end{equation} In terms of the gas consumption time, the ionization rate from \textrm{$^{26}$Al} is \begin{equation} \zeta = (1.4 - 5.1) \times 10^{-18}\ \sec^{-1}\ \left(\frac{\tau_{\rm gas}}{20\ \textrm{Myr}}\right)^{-1}. \end{equation} My results for the mean ionization rate from \textrm{$^{26}$Al} of some characteristic starbursts are shown in Table~\ref{table:Al26Abundances}; they are in the range $10^{-18} - 10^{-17}\ \sec^{-1}$. The maximal ionization rates are roughly an order of magnitude higher than that found in the early Solar system, a dense environment with $\zeta \approx (0.6 - 1) \times 10^{-18}\ \sec^{-1}$ \citep{Finocchi97,Umebayashi09}. Even if the \textrm{$^{26}$Al} is in the cold star-forming gas, it could actually be condensed into dust grains instead of existing in the gas phase. Yet the decay products still escape into the ISM from within the grain. The attenuation of gamma rays at $\sim 1$ MeV is dominated by Compton scattering, requiring columns of a few $\textrm{g}~\textrm{cm}^{-2}$ to be important. Thus, gamma rays pass freely through dust grains that are smaller than a centimetre.
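The ionization-rate expression above is simple enough to evaluate directly. In the sketch below, the abundance $X(\textrm{$^{26}$Al}) \approx 2 \times 10^{-9}$ is the starburst value quoted earlier from Table~\ref{table:Al26Abundances}, and the \textrm{$^{26}$Al} mean lifetime of 1.03 Myr is an assumed input not stated explicitly in this section:
\begin{verbatim}
# zeta_Al26 = X(26Al) * E_decay / (36.3 eV * tau_decay)
TAU_DECAY_S = 1.03*3.156e13    # assumed ^26Al mean lifetime in seconds
X_AL26 = 2e-9                  # starburst abundance relative to H (Table)
for e_mev, label in ((0.95, "positron only"),
                     (3.60, "positron + gamma rays")):
    ions_per_decay = e_mev*1e6/36.3
    zeta = X_AL26*ions_per_decay/TAU_DECAY_S
    print(f"{label}: zeta ~ {zeta:.1e} s^-1")
# ~1.6e-18 and ~6.1e-18 s^-1, bracketing the quoted (1.4 - 5.1)e-18 range
\end{verbatim}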
The energy loss rate of relativistic electrons or positrons in neutral matter is approximately \begin{equation} \frac{dK}{ds} = \frac{9}{4} m_e c^2 \sigma_T \sum_j \frac{\rho_j Z_j}{A_j m_H} \left[\ln \frac{K + m_e c^2}{m_e c^2} + \frac{2}{3} \ln \frac{m_e c^2}{\mean{E_j}}\right] \end{equation} from \citet{Schlickeiser02}, where $K$ is the particle kinetic energy, $s$ is the physical length, and $\sigma_T$ is the Thomson cross section. The sum is over elements $j$; for heavy elements $Z_j \approx A_j / 2$, $\rho_j$ is the partial density of each element within the grain, and $\mean{E_j}$ is related to the atomic properties of the element. I take the bracketed term to have a value $\sim 5$ and $\rho \approx 3\ \textrm{g}~\textrm{cm}^{-3}$. Then the stopping length is $K / (dK / ds) \approx 0.3\ \textrm{cm}\ (K / \textrm{MeV})$, much bigger than the typical grain radius. Thus, \textrm{$^{26}$Al} (and other SLRs) in dust grains still contribute to the ionization of the ISM. On the other hand, are the positrons actually stopped in starburst molecular clouds, or do they escape? For neutral interstellar gas, the stopping column of MeV positrons is $\sim K \rho / (dK / ds) \approx 0.2\ \textrm{g}~\textrm{cm}^{-2}$ through ionization and excitation of atoms \citep{Schlickeiser02}. In cold molecular gas, ionization and excitation continue to cool the positrons until $K \approx 10\ \textrm{eV}$, at which point they start annihilating by charge exchange reactions or they thermalize \citep*{Guessoum05}. The column densities of starbursts range from $\sim 0.1 - 10\ \textrm{g}~\textrm{cm}^{-2}$ \citep[e.g.,][]{Kennicutt98}, and the columns through individual molecular clouds are expected to be similar to those of the galaxies (\citealt*{Hopkins12}). In the denser molecular clouds, positrons are stopped even if they are not confined at all. In massive main sequence galaxies, the columns are $\sim 0.1\ \textrm{g}~\textrm{cm}^{-2}$ \citep{Daddi10}, insufficient to stop positrons moving in straight lines. If magnetic waves with a wavelength near the positron gyroradius scale exist in these clouds, they efficiently scatter positrons and confine them. As a result, the propagation of the positrons can be described with a diffusion constant, as widely used when interpreting the Galactic GeV positron spectrum (e.g., \citealt{Moskalenko98}; \citealt*{Hooper09}), although it is unclear how relevant these studies are for MeV positrons \citep{Jean09,Prantzos11,Martin12}. However, these waves are probably damped quickly in dense neutral gas (see \citealt{Jean09}; \citealt*{Higdon09}; \citealt{Prantzos11} and references therein). On the other hand, positrons move along magnetic field lines, and if the lines themselves are twisted on a scale $\lambda_B$, the positrons are forced to random walk with a similar mean free path \citep{Martin12}. As long as $\lambda_B$ is less than about a third of the molecular cloud size, positrons are stopped in these galaxies. If it is well mixed with the molecular gas, does \textrm{$^{26}$Al} dominate the ionization rate in molecular gas in starbursts, and if so, what physical conditions does it induce? The starburst \textrm{$^{26}$Al} ionization rates are about an order of magnitude lower than the canonical cosmic ray-sustained ionization rates in most Milky Way molecular clouds, but in some of the densest Galactic starless cores, the ionization rate drops to $\sim 10^{-18}\ \sec^{-1}$ \citep{Caselli02,Bergin07}.
Cosmic rays in starbursts can sustain much higher ionization rates (up to $\sim 10^{-14}\ \sec^{-1}$; c.f., \citealt{Papadopoulos10-CRDRs}), but cosmic rays can be deflected by magnetic fields, possibly preventing them from penetrating high columns. Aside from cosmic rays themselves, starbursts are also bright sources of GeV gamma rays, which are generated when cosmic rays interact with the prevalent gas \citep[e.g.,][]{Ackermann12}. These gamma rays can penetrate molecular clouds and induce low levels of ionization \citep{Lacki12-GRDRs}. In \citet{Lacki12-GRDRs}, I found that the gamma-ray ionization rate in starbursts can be anywhere from $10^{-22} - 10^{-16}\ \sec^{-1}$, with values of $\sim (1 - 3) \times 10^{-19}\ \sec^{-1}$ in M82 and $\sim (5 - 8) \times 10^{-17}\ \sec^{-1}$ in Arp 220's radio nuclei. In the dense clouds of most starbursts, \textrm{$^{26}$Al} radioactivity could exceed the ionization rate from gamma rays, setting a floor to the ionization rate. In the most extreme starbursts, with mean gas surface densities of $\ga 3\ \textrm{g}~\textrm{cm}^{-2}$ (c.f. equation 10 of \citealt{Lacki12-GRDRs}), however, gamma-ray ionization is more important, since the gamma-ray ionization rate depends strongly on the gas density and the compactness of the starbursts. Unlike the uncertainty of how SLRs and their positron decay products are transported and mixed with the gas of starbursts, gamma rays propagate relatively simply, so the gamma-ray ionization rates are more secure. An \textrm{$^{26}$Al}-dominated ionization rate has implications for the physical conditions of star-forming clouds. According to \citet{McKee89}, the ionization fraction of a cloud with hydrogen number density $n_H$ is \begin{equation} \label{eqn:xELow} x_e \approx 1.4 \times 10^{-8} \left(\frac{\zeta}{10^{-18}\ \sec^{-1}}\right)^{1/2} \left(\frac{n_H}{10^4\ \textrm{cm}^{-3}}\right)^{-1/2} \end{equation} when the ionization rate is low. We see that the ionization fraction of a cloud with density $n_H = 10^4\ \textrm{cm}^{-3}$ is $(1 - 4) \times 10^{-8}$, if the ionization is powered solely by \textrm{$^{26}$Al} decay. For these ionization fractions, the ambipolar diffusion time of a molecular core, the time for magnetic fields to slip from the gas, is a few times its free-fall time. Since clouds with strong magnetic fields do not collapse until the field slips away by ambipolar diffusion \citep{Mestel56,Mouschovias76}, this means that \textrm{$^{26}$Al}-ionized clouds in starbursts collapse quasi-statically, as in the Milky Way. On the other hand, the energy injection from \textrm{$^{26}$Al} has essentially no effect on the gas temperature. \citet{Papadopoulos10-CRDRs} gives the minimum gas temperature as \begin{equation} T_k^{\rm min} = 6.3\ \textrm{K}\ [(0.0707 n_4^{1/2} \zeta_{-18} + 0.186^2 n_4^3)^{1/2} - 0.186 n_4^{3/2}]^{2/3}, \end{equation} which was derived under the assumption that there is no heating from interactions with dust grains or the dissipation of turbulence in the gas, for gas with density $n_4 = (n_H / 10^4\ \textrm{cm}^{-3})$ and ionization rate $\zeta_{-18} = (\zeta / 10^{-18}\ \sec^{-1})$. In typical starbursts, I find that \textrm{$^{26}$Al} decay alone heats gas of density $n_H = 10^{4}\ \textrm{cm}^{-3}$ to $\sim 2 - 5\ \textrm{K}$ ($0.1 - 0.5\ \textrm{K}$ for $n_H = 10^6\ \textrm{cm}^{-3}$).
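The numbers in the last few paragraphs follow from the formulas as stated; the sketch below evaluates the grain stopping length (using the energy-loss formula above with the bracketed term $\sim 5$ and $\rho \approx 3\ \textrm{g}~\textrm{cm}^{-3}$), the ionization fraction of equation~(\ref{eqn:xELow}), and the minimum temperature formula, for the \textrm{$^{26}$Al}-only ionization rates:
\begin{verbatim}
import math

ME_C2 = 8.187e-7        # electron rest energy (erg)
SIGMA_T = 6.652e-25     # Thomson cross section (cm^2)
M_H = 1.674e-24         # hydrogen mass (g)
MEV = 1.602e-6          # 1 MeV in erg

# dK/ds in grain material with Z/A ~ 1/2, rho ~ 3 g/cm^3, bracket ~ 5
dKds = 2.25*ME_C2*SIGMA_T*(3.0*0.5/M_H)*5.0
print(f"grain stopping length ~ {MEV/dKds:.2f} cm per MeV")   # ~0.3 cm

def x_e(zeta18, n4):      # low-ionization-rate fraction (eqn xELow)
    return 1.4e-8*math.sqrt(zeta18/n4)

def t_k_min(zeta18, n4):  # minimum gas temperature formula above
    return 6.3*((0.0707*n4**0.5*zeta18 + 0.186**2*n4**3)**0.5
                - 0.186*n4**1.5)**(2.0/3.0)

for zeta18 in (1.4, 5.1):   # 26Al-only rates in units of 1e-18 s^-1
    print(f"zeta18 = {zeta18}: x_e = {x_e(zeta18, 1.0):.1e}, "
          f"T_min = {t_k_min(zeta18, 1.0):.1f} K (n_H = 1e4 cm^-3), "
          f"{t_k_min(zeta18, 100.0):.2f} K (n_H = 1e6 cm^-3)")
\end{verbatim}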
As I note in \citet{Lacki12-GRDRs}, under such conditions, dust heating is more likely to set the temperature of the gas than ionization, raising the temperature to the dust temperature for densities $\ga 40000\ \textrm{cm}^{-3}$. \section{Conclusions} The high SSFRs of starbursts and high-$z$ normal galaxies imply high abundances of \textrm{$^{26}$Al} and other SLRs in their ISMs. In true starbursts, these abundances are enormous, with $X (\textrm{$^{26}$Al}) \approx 10^{-9}$ and $^{26}$Al/$^{27}$Al $\approx 10^{-3}$. The SSFRs of normal galaxies evolve rapidly with $z$; even taking into account higher gas fractions, the SLR abundances were about 3 -- 10 times higher at $z \approx 2$ than in the present Milky Way. Even at the epoch of Solar system formation, the mean SLR abundance of the Milky Way was 1.5 to 2 times as high as at present (Fig.~\ref{fig:NormGalaxy}). This alone could explain the high abundances of $^{60}$Fe in the early Solar system, and reduce the discrepancy in the \textrm{$^{26}$Al} abundances from a factor of $\sim 6$ to $\sim 3$. In this way, the cosmic evolution of star-forming galaxies may have direct implications for the early geological history of the Solar system. The first main uncertainty is whether the SLRs produced by massive stars are well-mixed with the molecular gas: they may instead be ejected by the galactic winds known to be present in starbursts and high-$z$ galaxies, or decay before they can propagate far from their injection sites, or before they penetrate cold gas. I discussed these uncertainties in section~\ref{sec:Mixing}. The other uncertainty is how SLR yields depend on metallicity. The most direct way to test the high \textrm{$^{26}$Al} abundances of starburst galaxies is to detect the 1.809 MeV gamma-ray line produced by the decay of \textrm{$^{26}$Al}, directly informing us of the equilibrium mass of \textrm{$^{26}$Al}. Whether most of the \textrm{$^{26}$Al} is ejected by the superwind can be resolved with spectral information. The turbulent velocities of molecular gas in starbursts, $\sim 100\ \textrm{km}~\textrm{s}^{-1}$ \citep{Downes98}, are much smaller than the bulk speed of the superwind, which is hundreds or even thousands of $\textrm{km}~\textrm{s}^{-1}$ \citep{Chevalier85}. Unfortunately, the \textrm{$^{26}$Al} line fluxes predicted for even the nearest external starbursts ($\sim 10^{-8}\ \textrm{cm}^{-2}\ \sec^{-1}$) are too low to detect with any planned instrument (\citealt*{Lacki12-MeV}). However, \citet{Crocker11-Wild} have argued that the inner 100 pc of the Galactic Centre are an analogue of starburst galaxies, launching a strong superwind. The \textrm{$^{26}$Al} line from this region should have a flux of $\sim 2 \times 10^{-5}\ \textrm{cm}^{-2}\ \sec^{-1}$, easily achievable with possible future MeV telescopes like the Advanced Compton Telescope (ACT; \citealt{Boggs06}) or Gamma-Ray Burst Investigation via Polarimetry and Spectroscopy (GRIPS; \citealt{Greiner11}). A search for the \textrm{$^{26}$Al} signal from this region may inform us about its propagation, since it is nearby and resolved spatially.\footnote{\citet{Naya96} reported that the \textrm{$^{26}$Al} signal from the inner Galaxy had line widths corresponding to speeds of several hundred $\textrm{km}~\textrm{s}^{-1}$, but this was not verified by later observations by RHESSI \citep{Smith03} and INTEGRAL \citep{Diehl06-Linewidth}.
Instead, recent observations indicate that Galactic \textrm{$^{26}$Al} is swept up in superbubbles expanding at 200 $\textrm{km}~\textrm{s}^{-1}$ into the interarm regions \citep{Kretschmer13}. However, this signal is from the entire inner Galactic disc on kiloparsec scales; the inner 100 pc of the Galactic Centre is a much smaller region and just a small part of this signal, so the kinematics of its \textrm{$^{26}$Al} are currently unconstrained. Current observations of the \textrm{$^{26}$Al} decay signal from the Galactic Centre region are summarized in \citet{Wang09}.} If the \textrm{$^{26}$Al} generated in the Centre region is actually advected away by the wind, the `missing' \textrm{$^{26}$Al} should be visible several hundred pc above and below the Galactic Plane near the Galactic Centre.\footnote{This will also be true in other starbursts, but these starbursts would not be resolved spatially by proposed MeV telescopes. The total amount of \textrm{$^{26}$Al} line emission from other starbursts would therefore not inform us of whether it is in the starburst proper or in the superwind; other information, such as the Doppler width of the line, is necessary to determine that.} Resolved measurements of the Galactic Centre can also inform us on whether the \textrm{$^{26}$Al} is present in all molecular clouds in the region (and therefore is well-mixed), or whether it is trapped only near a few injection sites. If the SLRs do mix with the star-forming molecular gas of these galaxies, there are implications for both their star formation and planet formation. Ionization rates of $10^{-18} - 10^{-17}\ \sec^{-1}$, like those in some Milky Way starless cores, result from the energy injection of \textrm{$^{26}$Al} decay in starbursts. While cosmic ray ionization rates can easily exceed those ionization rates by orders of magnitude in gas they penetrate, and while gamma rays produce higher ionization rates in the most extreme starbursts like Arp 220, \textrm{$^{26}$Al} might dominate the ionization rate in the dense clouds shielded from cosmic rays in typical starbursts. In starbursts' protoplanetary discs, \textrm{$^{26}$Al} can provide moderate ionization through all columns, possibly eliminating the `dead zones' where there is little accretion (e.g., \citealt{Gammie96}; \citealt*{Fatuzzo06}). Any planetesimals that do form in starbursts may have much higher radiogenic heating from \textrm{$^{26}$Al}. Admittedly, studying the geological history of planets, much less planetesimals, in other galaxies is very difficult for the foreseeable future. However, at $z \approx 2$ the Milky Way likely had background SLR abundances $\sim 10$ times higher than at present, so the effects of elevated SLR abundances may be studied in planetary systems around old Galactic stars. On a final point, \citet{Gilmour09} propose that the elevated abundances of \textrm{$^{26}$Al} in the early Solar system are mandated by anthropic selection, since high SLR abundances are necessary for planetesimal differentiation and the loss of volatiles, but that explanation may be difficult to maintain. If high \textrm{$^{26}$Al} abundances, far from being very rare, are actually typical of high-$z$ and starburst solar systems (and indeed, much of the star-formation in the Universe's history), the anthropic principle would imply that most technological civilizations would develop in solar systems formed in these environments.
Instead of asking why we find ourselves in a system with an \textrm{$^{26}$Al} abundance just right to power differentiation and evaporate volatiles, we must ask why we find ourselves in one of the rare solar systems with sufficient \textrm{$^{26}$Al} that formed in a normal spiral galaxy at $z \approx 0.4$, instead of the common \textrm{$^{26}$Al}-enriched solar systems formed at $z \approx 2$ or in starbursts. \section*{Acknowledgements} During this work, I was supported by a Jansky Fellowship from the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation.
\section{BEAM-BEAM EFFECTS} At the Interaction Point (IP) of the ILC, beam-beam effects due to the strong electromagnetic fields that the bunches experience during collisions cause a mutual focusing, called pinch effect, which enhances the luminosity in the case of \ep collisions. The opposite is true for \ee collisions. In this case the luminosity is reduced by mutual defocusing, or anti-pinching, and is only about 20$\%$ of the \ep one (see Fig.~\ref{lumi-GP}). Moreover, this repulsion between the bunches causes the luminosity to drop with a vertical offset at the IP much more rapidly for the \ee case than for \ep. Another effect of this strong repulsive electromagnetic field is the much steeper beam-beam deflection curve (see Fig.~\ref{deflection-GP}). Since the fast intra-train feedback system used to maintain the beams in collision at the IP \cite{feedback-GWhite} exploits these deflections as its main signal and because of the higher sensitivity to the vertical offsets, it is important to compare average performances for \ee and \ep for a set of representative values of initial beam offsets and bunch-to-bunch jitter. \begin{figure}[htb] \centering \includegraphics*[width=85mm]{MOPLS061f1.eps} \caption{Luminosity versus vertical half beam-beam offset, for \ep and \ee collisions simulated with GUINEA-PIG \cite{guinea}, using idealised Gaussian beam distributions with ILC nominal parameters at 500 GeV in the center-of-mass.} \label{lumi-GP} \end{figure} \begin{figure}[htb] \centering \includegraphics*[width=85mm]{MOPLS061f2.eps} \caption{Vertical deflection angle versus vertical half beam-beam offset, for \ep and \ee collisions at the ILC with nominal parameters at 500 GeV in the center-of-mass.} \label{deflection-GP} \end{figure} \section{FEEDBACK SIMULATION} A simplified simulation of the feedback has been carried out using parametrized information from the last bunch crossing with a single proportional factor to relate the measured deflection angle of the outgoing beam to the correction of the offset at the IP \cite{feedback-Schreiber}. At frequencies of a few Hz corresponding to the ILC train repetition rate, offsets of the order of hundreds of nm are predicted (see e.g. \cite{ground-motion}). In addition bunch-to-bunch jitter of a fraction of the beam size can be expected. The simulation has been done for different assumptions on the initial train offset and bunch-to-bunch jitter, and including a 10$\%$ error on the correction to represent measurement uncertainties. The factor relating the correction to the measured deflection angle was optimized, independently for \ep and \ee beam parameters, to maximize the speed of the correction without amplifying the bunch-to-bunch jitter by over-correcting. Nominal beam parameters \cite{parameters} were used at $\sqrt{s}=$ 500~GeV. The average luminosity loss over a train was found to be almost independent of the offset at the beginning of the pulse for the range considered (up to 500~nm). The \ee luminosity loss was, however, found to be a factor of 2 greater compared to \ep for the same assumption on the jitter. This is due to the greater sensitivity to the vertical offset. The ability to decrease this sensitivity with alternative beam parameters could be important if jitter conditions are worse than expected, e.g. during early ILC operation.
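The qualitative behaviour of such a feedback can be illustrated with a toy model. The sketch below applies a proportional correction derived from the previous bunch crossing, with a 10$\%$ error on the applied correction and Gaussian bunch-to-bunch jitter; the linear deflection-to-offset calibration, the Gaussian luminosity-offset curve, and the gain value are simplifying assumptions (the actual \ee curves in Figs.~\ref{lumi-GP} and \ref{deflection-GP} are steeper than Gaussian), so the numbers are indicative only:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
SIGMA_Y = 5.7       # nm, nominal vertical IP beam size
N_BUNCH = 2820      # bunches per train

def mean_train_luminosity(offset0, jitter_rms, gain):
    correction, lum = 0.0, []
    for _ in range(N_BUNCH):
        dy = offset0 + rng.normal(0.0, jitter_rms) - correction
        lum.append(np.exp(-dy**2/(4.0*SIGMA_Y**2)))  # Gaussian overlap
        # proportional feedback on the offset-equivalent deflection,
        # with a 10% scale error on the applied correction
        correction += gain*dy*(1.0 + rng.normal(0.0, 0.1))
    return float(np.mean(lum))

for off in (100.0, 500.0):   # nm initial train offsets
    print(f"offset {off:5.0f} nm: <L>/L_peak ~ "
          f"{mean_train_luminosity(off, 1.0, 0.7):.3f}")
\end{verbatim}
Consistent with the full simulation, the average train luminosity in this toy model is nearly independent of the initial offset, because the correction converges within a few bunches out of the 2820 in a train.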
To decrease this sensitivity, sets of alternative beam parameters with smaller disruption have been derived by decreasing the bunch length and varying the transverse beam sizes, in order to maximize the luminosity while limiting the beamstrahlung energy loss to 5$\%$ (see Table~\ref{parameters_table}). These alternative parameters have increased luminosity and, in some cases, smaller sensitivity to the IP offset, compared to the nominal case for \ee. The average train luminosity for different amplitudes of the jitter applied to each beam is also improved compared to the nominal parameters (see Fig.~\ref{feedback_eminus}). \begin{table}[hbt] \begin{center} \caption{Luminosity and beamstrahlung energy loss for \ee collision for different parameter sets with a beam energy of 250~GeV. The nominal values for the beam sizes at the IP are $\sigma_{z0}^{*}=$300~$\mu$m and $\sigma_{x0/y0}^{*}=$655.2/5.7~nm and the nominal intensity is $N_{0}=2\times10^{10}$ particles.} \vspace*{0,6cm} \begin{tabular}{|l||c|c|c|c|c|} \hline \textbf{} & \textbf{nom.} & \textbf{set 1} & \textbf{set 2} & \textbf{set 3} & \textbf{low P} \\ \hline \hline $N/N_{0}$ & 1 & 1 & 1 & 1 & 0.5 \\ \hline $\sigma_{z}^{*}$/$\sigma_{z0}^{*}$ & 1 & 0.7 & 0.5 & 0.5 & 0.5 \\ $\sigma_{x}^{*}$/$\sigma_{x0}^{*}$ & 1 & 0.7 & 0.8 & 0.9 & 0.7 \\ $\sigma_{y}^{*}$/$\sigma_{y0}^{*}$ & 1 & 1.5 & 1.5 & 1 & 0.6 \\ \hline \hline $\epsilon_{x}^{*}$($\mu$m) & 10 & 10 & 10 & 10 & 9.6 \\ $\epsilon_{y}^{*}$($\mu$m) & 0.04 & 0.04 & 0.04 & 0.04 & 0.03 \\ \hline $\beta_{x}^{*}$(mm) & 21.0 & 10.3 & 13.4 & 17.0 & 10.0 \\ $\beta_{y}^{*}$(mm) & 0.4 & 0.9 & 0.9 & 0.4 & 0.2 \\ \hline $L~(\times10^{33})$ & 3.9 & 4.6 & 4.9 & 5.8 & 3.0 \\ ($cm^{-2}s^{-1}$) & & & & & \\ \hline $\delta_{B}$ ($\%$) & 2.24 & 4.9 & 5.0 & 4.3 & 2.2 \\ \hline \end{tabular} \label{parameters_table} \end{center} \end{table} \begin{figure}[htb] \centering \includegraphics*[width=85mm]{MOPLS061f3.eps} \caption{Average train luminosity normalized to the peak luminosity with nominal parameters for \ee versus r.m.s. vertical offset difference between the beams. The results shown include a 100~nm initial offset.} \label{feedback_eminus} \end{figure} An additional set of parameters is also being investigated with only half of the bunch charge, while keeping the same number of bunches per train. It has a smaller peak luminosity (see Table~\ref{parameters_table}) and similar sensitivity to IP offsets as the \ee nominal case. Such a parameter set could be important for early ILC operation and flexibility. \section{OPTICS STUDIES FOR THE $20\; {\rm mrad}$ CROSSING-ANGLE GEOMETRY} \subsection{Final focus} The optics of the Final Focus (FF) (corresponding to the 20~mrad crossing-angle geometry) has been refitted to obtain the new $\beta$-functions at the IP for the alternative beam parameters in Table~\ref{parameters_table}. Only the quadrupoles upstream of the chromatic correction section and the sextupoles were readjusted. This allows us to maintain the geometry and the overall optimization of high-order effects. The $\beta$-functions and the dispersion for the parameter set 2 in Table~\ref{parameters_table} are shown in Fig.~\ref{20mrad-par2}.
\begin{figure}[h] \hspace*{-1cm} \rotatebox{-90}{\includegraphics*[width=65mm]{MOPLS061f4.eps}} \caption{Optics solution for the parameter set 2, for the 20 mrad crossing-angle geometry.} \label{20mrad-par2} \end{figure} The optical bandwidth for the different sets of parameters has been studied, considering beams with a uniform flat momentum distribution with an energy spread of 0.1$\%$. The distribution of particles at the entrance of the FF was created with PLACET \cite{placet} for different average momentum offsets. The beam was then tracked through the FF with MAD8 \cite{mad} and used as input for GUINEA-PIG to compute the luminosity. The results (see Fig.~\ref{bandwidth-20mrad}) show similar off-momentum behavior for all parameter sets, with the alternative sets having better peak performance. \begin{figure}[htb] \centering \includegraphics*[width=79mm]{MOPLS061f5.eps} \caption{Optical bandwidth for the different \ee sets of parameters. All the luminosities are normalized with respect to that obtained with GUINEA-PIG at nominal energy for nominal parameters and with ideal beam parameters at the IP (i.e. without higher order optical effects).} \label{bandwidth-20mrad} \end{figure} \subsection{Extraction line} The effective parameters corresponding to the disrupted beam have been computed along the extraction line for the different beam parameter sets in Table \ref{parameters_table} (see Fig.~\ref{extraction-optics-20mrad}), and for the different parameter sets suggested for \ep in \cite{parameters}. The largest values found for the $\beta$-functions in the \ee and \ep cases were comparable. The tracking of the disrupted beams has been simulated with BDSIM \cite{bdsim} and the power losses along the line have been computed. For the parameter set 2 the losses are smaller than for the high luminosity parameters for \ep (see Fig.~\ref{extraction-tracking-20mrad}). \begin{figure}[htb] \hspace{-1cm} \rotatebox{-90}{\includegraphics*[width=65mm]{MOPLS061f6.eps}} \caption{$\beta$-functions for the disrupted outgoing beam for the parameter set 2.} \label{extraction-optics-20mrad} \end{figure} \begin{figure}[htb] \centering \includegraphics*[width=84mm]{MOPLS061f7.eps} \caption{Power losses along the extraction line for the parameter set 2 for \ee and the high luminosity parameters for \ep, at 500 GeV in the center-of-mass.} \label{extraction-tracking-20mrad} \end{figure} \section{OPTICS STUDIES FOR THE $2\; {\rm mrad}$ CROSSING-ANGLE GEOMETRY} In the 2~mrad crossing-angle geometry the spent beam is transported off-axis through the last defocusing quadrupole of the final focus. The kick produced by this quadrupole is used to extract the spent beam. This scheme does not work for the \ee option unless one can reverse the signs of the focusing and defocusing final doublet quadrupoles (and sextupoles), while keeping at least the strength of the last quadrupole to maintain the kick needed for extraction. A first attempt in this direction \cite{snowmass-Seryi} indicated that this was feasible, but a large $\beta_{y}$-value had to be used at the IP to limit the vertical beam size in the final doublet, which is important to keep a reasonable collimation depth. This resulted, however, in about a factor of 2 lower peak luminosity. Improvements with half the bunch length are also being investigated, with, for example, $\beta^{*}_{x/y}=10/3$ mm. In this case, more acceptable overall performance is expected.
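A rough sense of the luminosity penalty of such large-$\beta_{y}$ solutions can be obtained from the geometric scaling $L \propto 1/\sqrt{\beta_{x}^{*}\beta_{y}^{*}}$ at fixed emittances; hourglass, crossing-angle and beam-beam (anti-pinch) effects are neglected, so this is only a guide to the full simulation results:
\begin{verbatim}
import math

# Geometric luminosity ratio: sigma_{x,y} ~ sqrt(beta_{x,y}) at fixed
# emittance, so L ~ 1/sqrt(beta_x* beta_y*).
def l_ratio(bx, by, bx0=21.0, by0=0.4):   # mm; nominal e-e- Table 1 values
    return math.sqrt((bx0*by0)/(bx*by))

print(f"beta* = 10/3 mm: L/L0 ~ {l_ratio(10.0, 3.0):.2f}")   # ~0.5
\end{verbatim}
For $\beta^{*}_{x/y}=10/3$~mm the geometric factor is already $\sim 0.5$ of the nominal value; the halved bunch length should mitigate the accompanying hourglass loss, which is consistent with the more acceptable overall performance expected above.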
\vspace*{0,5cm} \section{CONCLUSIONS AND PROSPECTS} For the 20 mrad crossing-angle geometry, beam parameters can be obtained for the \ee option by decreasing the bunch length, with improved peak luminosity, smaller sensitivity to IP offsets, and similar beam losses in the extraction line as those found for \ep. For the 2 mrad crossing-angle, it is necessary to go to larger vertical beam sizes at the IP, which decreases the luminosity. With half of the bunch length and optimizing the transverse beam sizes taking into account collimation requirements, a first study indicates that some of this reduction can be recovered. In the near future, these problems will be studied to further characterize the \ee option at the ILC. \vspace*{0,5cm}
\section{Introduction} According to general relativity, in the asymptotic regime near spacelike singularities, a spacetime would oscillate between Kasner states. The BKL conjectures~\cite{art:LK63,art:BKL1970,art:BKL1982} hold except where and when spikes occur~\cite{art:BergerMoncrief1993,art:GarfinkleWeaver2003}. Spikes are a recurring inhomogeneous phenomenon in which the fabric of spacetime temporarily develops a spiky structure as the spacetime oscillates between Kasner states. See the introduction section of~\cite{art:HeinzleUgglaLim2012} for a comprehensive background. Previously in \cite{art:Lim2008} the orthogonally transitive (OT) $G_2$ spike solution, which is important in describing the recurring spike oscillation, was generated by applying the Rendall-Weaver transformation~\cite{art:RendallWeaver2001} to a Kasner seed solution. The solution is unsatisfactory, however, in that it contains permanent spikes, and there is a debate over whether permanent spikes are actually unresolved spike transitions in the oscillatory regime or are really permanent. In other words, would the yet undiscovered non-OT $G_2$ spike solution contain permanent spikes? The proponents of permanent spikes argue that the spatial derivative terms of a permanent spike are negligible, and hence the spike stays permanent~\cite{art:Garfinkle2007}. The opponents base their argument on numerical evidence that the permanent spike is mapped by an $R_1$ frame transition to a regime where the spatial derivative terms are not negligible, which allows the spike to resolve~\cite{art:HeinzleUgglaLim2012}. To settle the debate, we need to find the non-OT $G_2$ spike solution. It was found that Geroch's transformation~\cite{art:Geroch1971,art:Geroch1972} would generate the desired solution, which always resolves its spike. The next section describes the generation process. \section{Generating the solution} For our purpose, we express a metric $g_{ab}$ using the Iwasawa frame~\cite{art:HeinzleUgglaRohr2009}, as follows. Indices $0,1,2,3$ correspond to the coordinates $\tau,x,y,z$. Assume zero vorticity (zero shift). The metric components in terms of $b$'s and $n$'s are given by \begin{align} g_{00} &= -N^2 \\ g_{11} &= e^{-2b_1},\quad g_{12} = e^{-2b_1} n_1,\quad g_{13} = e^{-2b_1} n_2 \\ g_{22} &= e^{-2b_2} + e^{-2b_1} n_1^2,\quad g_{23} = e^{-2b_1} n_1 n_2 + e^{-2b_2} n_3 \\ g_{33} &= e^{-2b_3} + e^{-2b_1} n_2^2 + e^{-2b_2} n_3^2. \end{align} One advantage of the Iwasawa frame is that the determinant of the metric is given by \begin{equation} \det g_{ab} = -N^2 e^{-2b_1-2b_2-2b_3}. \end{equation} A pedagogical starting point is the Kasner solution with the following parametrization: \begin{equation} b_1 = \frac14(w^2-1)\tau,\quad b_2 = \frac12(w+1)\tau,\quad b_3 = -\frac12(w-1)\tau,\quad N^2 = {\rm e}^{-2b_1-2b_2-2b_3} = {\rm e}^{-\frac12(w^2+3)\tau}, \end{equation} and $n_1=n_2=n_3=0$. We shall use a linear combination of all three Killing vector fields (KVFs) \begin{equation} a_1 \partial_x + a_2 \partial_y + a_3 \partial_z. \end{equation} as the KVF in Geroch's transformation, so that the transformation generates the most general metric possible from the given seed. \subsection{Change of coordinates} To simplify the KVF before applying Geroch's transformation, make the coordinate change \begin{equation} x = X + n_{10} Y + n_{20} Z,\quad y = Y + n_{30} Z ,\quad z = Z \end{equation} where $n_{10}$, $n_{20}$, $n_{30}$ are constants.
Then the metric parameters $b_1$, $b_2$, $b_3$ and $N$ are unchanged but $n_1=n_{10}$, $n_2=n_{20}$, $n_3=n_{30}$ are now constants instead of zero. The KVF becomes \begin{equation} (a_3 (n_{10} n_{30} - n_{20}) - a_2 n_{10} + a_1) \partial_X + (a_2 - a_3 n_{30}) \partial_Y + a_3 \partial_Z. \end{equation} We cannot set the $Z$ component to zero, but we can set the $X$ and $Y$ components to zero, leading to \begin{equation} n_{30} = \frac{a_2}{a_3},\quad n_{10} = \frac{a_1}{a_3}. \end{equation} Without loss of generality, we set $a_3=1$, and so $n_{30} = a_2$ and $n_{10} = a_1$. $n_{20}$ remains free. We will see later that it can be used to eliminate any $y$-dependence. To make transparent the effect of Geroch's transformation on the $b$'s (see (\ref{nonOT_spike})--(\ref{nonOT_spike_b3}) below), it is best to adapt the KVF to $\partial_x$. So we make another coordinate change to swap $X$ and $Z$: \begin{equation} X = \tilde{z},\quad Y = \tilde{y},\quad Z = \tilde{x}, \end{equation} which in effect introduces frame rotations to the Kasner solution. The Kasner solution now has \begin{align} \label{Kasner_rotated} N^2 &= {\rm e}^{-\frac12(w^2+3)\tau} \\ {\rm e}^{-2b_1} &= {\rm e}^{(w-1)\tau} + n_{20}^2 {\rm e}^{-\frac12(w^2-1)\tau} + n_{30}^2 {\rm e}^{-(w+1)\tau} \\ {\rm e}^{-2b_2} &= \frac{\mathcal{A}^2}{{\rm e}^{-2b_1}} \\ {\rm e}^{-2b_3} &= {\rm e}^{-\frac12(w^2+3)\tau} \mathcal{A}^{-2} \\ n_1 &= \frac{n_{30} {\rm e}^{-(w-1)\tau} + n_{10} n_{20} {\rm e}^{-\frac12(w^2-1)\tau}}{{\rm e}^{-2b_1}} \\ n_2 &= \frac{n_{20} {\rm e}^{-\frac12(w^2-1)\tau}}{{\rm e}^{-2b_1}} \\ \label{Kasner_rotated_n3} n_3 &= {\rm e}^{-\frac12(w^2-1)\tau}\mathcal{A}^{-2}\left[n_{30}(n_{10}n_{30}-n_{20}){\rm e}^{-(w+1)\tau}+n_{10}{\rm e}^{(w-1)\tau} \right], \intertext{where} \label{area} \mathcal{A}^2 &= (n_{10} n_{30} - n_{20})^2 {\rm e}^{-\frac12(w+1)^2\tau} + n_{10}^2 {\rm e}^{-\frac12(w-1)^2\tau} + {\rm e}^{-2\tau}. \end{align} Effectively, we are applying Geroch's transformation to the seed solution (\ref{Kasner_rotated})--(\ref{Kasner_rotated_n3}), using the KVF $\partial_{\tilde{x}}$. We shall now drop the tilde from the coordinates. \subsection{Applying Geroch's transformation} Applying Geroch's transformation using a KVF $\xi_a$ involves the following steps. First compute \begin{equation} \lambda=\xi^a \xi_a \end{equation} and integrate the equation \begin{equation} \nabla_a \omega =\varepsilon_{abcd}\xi ^b\nabla^c \xi^d \end{equation} for the general solution for $\omega$. $\omega$ is determined up to an additive constant $\omega_0$. In our case we get \begin{equation} \lambda = {\rm e}^{-2b_1} = {\rm e}^{(w-1)\tau} + {\rm e}^{-\frac12(w^2-1)\tau} n_{20}^2 + {\rm e}^{-(w+1)\tau} n_{30}^2,\quad \omega = 2w n_{30} z - K y + \omega_0, \end{equation} where the constant $K$ is given by \begin{equation} K = \frac12 (w-1)(w+3) n_{20} - 2 w n_{10} n_{30}. \end{equation} We could absorb $\omega_0$ by a translation in the $z$ direction if $w n_{30} \neq 0$, but we shall keep $\omega_0$ for the case $w n_{30} = 0$. The next step involves finding a particular solution for $\alpha_a$ and $\beta_a$: \begin{align} \nabla_{[a}\alpha_{b]} &=\frac{1}{2}\varepsilon_{abcd} \nabla^c \xi^d,\quad \xi^a \alpha_a =\omega, \\ \nabla_{[a}\beta_{b]} &=2\lambda \nabla_a \xi_b + \omega \varepsilon_{abcd} \nabla^c \xi^d,\quad \xi^a \beta_a =\omega^2 + \lambda^2 - 1. \end{align} Without loss of generality, we choose $\theta=\frac{\pi}{2}$ in Geroch's transformation, so $\alpha_a$ is not needed in $\eta_a$ below. 
We assume that $\beta_a$ has zero $\tau$-component. Its other components are \begin{align} \beta_1 &= \omega^2 + \lambda^2 -1 \\ \beta_2 &= n_{10} n_{20}^3 {\rm e}^{-(w^2-1)\tau} + \left[ 2 \frac{w-1}{w+1} n_{10} n_{20} n_{30}^2 + \frac{4}{w+1} n_{20}^2 n_{30} \right] {\rm e}^{-\frac12(w+1)^2 \tau} \notag\\ &\quad + 2 \frac{w+1}{w-1} n_{10} n_{20} {\rm e}^{-\frac12(w-1)^2\tau} + (w+1) n_{30} {\rm e}^{-2\tau} + n_{30}^3 {\rm e}^{-2(w+1)\tau} + F_2(y,z) \\ \beta_3 &= n_{20}^3 {\rm e}^{-(w^2-1)\tau} + 2 n_{20} n_{30}^2 \frac{w-1}{w+1} {\rm e}^{-\frac12(w+1)^2 \tau} + 2 n_{20} \frac{w+1}{w-1} {\rm e}^{-\frac12(w-1)^2\tau} + F_3(y,z) \end{align} where $F_2(y,z)$ and $F_3(y,z)$ satisfy the constraint equation \begin{equation} - \partial_z F_2 + \partial_y F_3 + 2(w-1)\omega = 0. \end{equation} For our purpose, we want $F_3$ to be as simple as possible, so we choose \begin{equation} F_3 = 0,\quad F_2 = \int 2(w-1)\omega {\rm d} z = 2w(w-1) n_{30} z^2 - 2 (w-1) K y z +2(w-1)\omega_0 z. \end{equation} The last step constructs the new metric. Define $\tilde{\lambda}$ and $\eta_a$ as \begin{align} \frac{\lambda}{\tilde{\lambda}} &= (\cos\theta-\omega\sin\theta)^2 +\lambda^2 \sin^2\theta, \\ \eta_a &=\tilde{\lambda}^{-1} \xi_a +2 \alpha_a \cos\theta\sin\theta-\beta_a \sin^2\theta. \end{align} The new metric is given by \begin{equation} \tilde{g}_{ab}=\frac{\lambda}{\tilde{\lambda}}(g_{ab}-\lambda^{-1} \xi_a\xi_b)+\tilde{\lambda} \eta_a \eta_b. \end{equation} In our case $\tilde{g}_{ab}$ is given by the metric parameters \begin{align} \label{nonOT_spike} \tilde{N}^2 &= N^2 (\omega^2+\lambda^2) \\ {\rm e}^{-2\tilde{b}_1} &= \frac{{\rm e}^{-2b_1}}{\omega^2+\lambda^2} \\ {\rm e}^{-2\tilde{b}_2} &= {\rm e}^{-2b_2} (\omega^2+\lambda^2) \\ \label{nonOT_spike_b3} {\rm e}^{-2\tilde{b}_3} &= {\rm e}^{-2b_3} (\omega^2+\lambda^2) \\ \tilde{n}_1 &= -2w(w-1) n_{30} z^2 + 2 (w-1) K y z - 2(w-1)\omega_0 z + \frac{\omega^2}{\lambda}(n_{30} {\rm e}^{-(w+1)\tau} + n_{10} n_{20} {\rm e}^{-\frac12(w^2-1)\tau}) \notag\\ &\quad -\left[ n_{30} w {\rm e}^{-2\tau} + \frac{w+3}{w-1} n_{10} n_{20} {\rm e}^{-\frac12(w-1)^2\tau} + \frac{w-3}{w+1} n_{20} n_{30} (n_{10} n_{30} - n_{20}) {\rm e}^{-\frac12(w+1)^2\tau} \right] \\ \tilde{n}_2 &= n_{20} {\rm e}^{-\frac12(w^2-1)\tau} \left[ - \frac{w+3}{w-1} {\rm e}^{(w-1)\tau} - n_{30}^2 \frac{w-3}{w+1} {\rm e}^{-(w+1)\tau} + \frac{\omega^2}{\lambda} \right] \\ \tilde{n}_3 &= \mathcal{A}^{-2} \left[ n_{10} {\rm e}^{-\frac12(w-1)^2\tau} + n_{30} (n_{10} n_{30} - n_{20}) {\rm e}^{-\frac12(w+1)^2\tau} \right], \label{nonOT_spike_n3} \end{align} and $\mathcal{A}$, given by (\ref{area}), is the area density~\cite{art:vEUW2002} of the $G_2$ orbits. Note that the $w=\pm1$ cases would have to be computed separately, which we shall leave to future work. The new solution admits two commuting KVFs: \begin{equation} \partial_x,\quad [- (w-1)K^2y^2+2(w-1)K\omega_0 y] \partial_x + 2wn_{30}\partial_y + K \partial_z. \end{equation} Their $G_2$ action is non-OT, unless $n_{10}=n_{20}=0$. The solution is also the first non-OT Abelian $G_2$ explicit solution found. In the next section we shall focus on the case where $K=0$, or equivalently, where \begin{equation} \label{n20choice} n_{20} = \frac{4w}{(w-1)(w+3)} n_{10} n_{30}, \end{equation} which turns off the $R_2$ frame transition (which is shown to be asymptotically suppressed in~\cite{art:HeinzleUgglaRohr2009}), and eliminates the $y$-dependence. 
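Two ingredients of this construction are easy to verify symbolically. The following sketch (Python with SymPy) pushes the diagonal Kasner seed through the composed coordinate change $x = \tilde{z} + n_{10} \tilde{y} + n_{20} \tilde{x}$, $y = \tilde{y} + n_{30} \tilde{x}$, $z = \tilde{x}$, reproducing for instance the ${\rm e}^{-2b_1}$ of (\ref{Kasner_rotated}), and checks that the choice (\ref{n20choice}) indeed makes $K$ vanish:
\begin{verbatim}
import sympy as sp

tau, w, n10, n20, n30 = sp.symbols('tau w n10 n20 n30', real=True)
# Diagonal Kasner seed: g_xx = e^{-2 b1}, g_yy = e^{-2 b2}, g_zz = e^{-2 b3}
g = sp.diag(sp.exp(-(w**2 - 1)*tau/2),
            sp.exp(-(w + 1)*tau),
            sp.exp((w - 1)*tau))
# Jacobian d(x,y,z)/d(x~,y~,z~) of the composed coordinate change
J = sp.Matrix([[n20, n10, 1],
               [n30, 1,   0],
               [1,   0,   0]])
gt = sp.expand(J.T*g*J)   # spatial metric in the new coordinates
# e^{(w-1)tau} + n20^2 e^{-(w^2-1)tau/2} + n30^2 e^{-(w+1)tau}:
print(gt[0, 0])

# K vanishes for the n20 of (n20choice)
K = sp.Rational(1, 2)*(w - 1)*(w + 3)*n20 - 2*w*n10*n30
assert sp.simplify(K.subs(n20, 4*w*n10*n30/((w - 1)*(w + 3)))) == 0
print("K = 0 under (n20choice)")
\end{verbatim}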
Setting (\ref{n20choice}) in the rotated Kasner solution (\ref{Kasner_rotated})--(\ref{Kasner_rotated_n3}) also turns off the $R_2$ frame transition there, giving the explicit solution that describes the double frame transition $\mathcal{T}_{R_3 R_1}$ in~\cite{art:HeinzleUgglaRohr2009}. The mixed frame/curvature transition $\mathcal{T}_{N_1 R_1}$ in~\cite{art:HeinzleUgglaRohr2009} is described by the metric $\tilde{g}_{ab}$ with $n_{20} = n_{30} = 0$. Both the double frame transition and the mixed frame/curvature transition are encountered in the exceptional Bianchi type VI$_{-1/9}^*$ cosmologies~\cite{art:Hewittetal2003}. Setting $n_{10}=n_{20}=0$ yields the OT $G_2$ spike solution in~\cite{art:Lim2008}. To adapt the solutions in~\cite{art:Lim2008} to the Iwasawa frame here, let \begin{equation} b_1 = - \frac12 (P(\tau,z)-\tau), \quad b_2 = \frac12 (P(\tau,z)+\tau), \quad b_3 = - \frac14 (\lambda(\tau,z)+\tau), \quad n_1 = -Q(\tau,z), \quad n_2=n_3=0, \end{equation} where $x$-dependence in~\cite{art:Lim2008} becomes $z$-dependence here, and set $w$ to $-w$, $\lambda_2=\ln 16$, $Q_0 = 1$, $Q_2 = 0$ there, and set $n_{30}=1$, $\omega_0=0$ here. As pointed out in~\cite{art:Limetal2009} and~\cite{art:Binietal2009}, the factor 4 in Equation (34) of~\cite{art:Lim2008} should not be there. \section{The dynamics of the solution} To describe the dynamics of the non-OT spike solution, we shall plot the state space orbit projected onto the Hubble-normalized $(\Sigma_+,\Sigma_-)$ plane, as done in~\cite{art:Lim2008}. The formulas are \begin{align} \Sigma_+ &= -1 + \frac14 \mathcal{N}^{-1} \partial_\tau (\mathcal{A}^2) \\ \Sigma_- &= \frac{1}{2\sqrt{3}} \mathcal{N}^{-1} \partial_\tau(\tilde{b}_2 - \tilde{b}_1) \\ \mathcal{N} &= \frac16\left[ \frac{\partial_\tau (\lambda^2)}{\omega^2+\lambda^2} + \partial_\tau \ln (N^2) \right] \end{align} A different orientation is used in~\cite{art:HeinzleUgglaRohr2009}, whose $(\Sigma_+,\Sigma_-)$ are given in terms of the above by \begin{align} \Sigma_+ &= -\frac12(\Sigma_+ + \sqrt{3}\Sigma_-) \\ \Sigma_- &= -\frac12(\sqrt{3}\Sigma_+ - \Sigma_-) \end{align} The non-OT spike solution (with $K=0$, $\omega_0=0$) goes from a Kasner state with $2 < w < 3$, through a few intermediate Kasner states, and arrives at the final Kasner state with $w < -1$. The transitions are composed of spike transitions and $R_1$ frame transitions. The non-OT spike solution always resolves its spike, unlike the OT spike solution with $|w|<1$, which has a permanent spike. For a typical Kasner source with $2 < w < 3$, there are six non-OT spike solutions, some of which are equivalent, that start there. For example, non-OT spike solutions with $|w| = \tfrac13,\ 2,\ 5$ all start at $w_\text{source} = \frac73$. From there, however, there are two extreme alternative spike orbits. The first alternative is to form a ``permanent'' spike, followed by an $R_1$ transition, and lastly to resolve the spike. This was described in~\cite{art:HeinzleUgglaLim2012} as the joint spike transition. This alternative is more commonly encountered (assuming that permanent spikes are more common than no spike at the end of a Kasner era). The second alternative is to undergo an $R_1$ transition first, followed by a transient spike transition, and finish with another $R_1$ transition. By varying $n_{10}$ and $n_{30}$, one can get orbits that are close to one extreme alternative or the other, or some indistinct mix. The sequence of $w$-values of the Kasner states for the spike orbit is given below.
\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{w5alternatives_v2_cropped.png}}
\caption{Alternative spike orbits for $w=5$. The top row uses the orientation of~\cite{art:Lim2008}; the bottom row uses the orientation of~\cite{art:HeinzleUgglaRohr2009}. The left column shows the first alternative orbit, the right column the second alternative. Spike orbits ($z=0$) are in red, faraway orbits ($z=10^{12}$) in blue. The left column is generated with $n_{10}=10^{-3}$, $n_{30}=1$, the right column with $n_{10}=10^9$, $n_{30}=10^{-9}$. A red circle marks the start of the orbits, a red star marks the end.}
\label{fig:w5alternatives}
\end{center}
\end{figure}
\section{Summary}
In this paper, we went through the steps of generating the non-OT $G_2$ spike solution, and illustrated its state space orbits for the case $K=0$, which exhibit two extreme alternative orbits. More importantly, the non-OT $G_2$ spike solution always resolves its spikes, in contrast to its OT $G_2$ special case, which produces an unresolved permanent spike for some parameter values. The non-OT $G_2$ spike solution shows that, in the oscillatory regime near spacelike singularities, unresolved permanent spikes are artefacts of restricting oneself to the OT $G_2$ case, and that spikes are resolved in the more general non-OT $G_2$ case. Therefore spikes are expected to recur in the oscillatory regime rather than to become permanent.
We also obtained explicit solutions describing the double frame transition and the mixed frame/curvature transition in~\cite{art:HeinzleUgglaRohr2009}. We leave the further analysis of the non-OT $G_2$ spike solution to future work.
\section*{Acknowledgment}
Part of this work was carried out at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) and Dalhousie University. I would like to thank Claes Uggla and Alan Coley for useful discussions. The symbolic computation software {\tt MAPLE} and the numerical software {\tt MATLAB} were essential to this work.
\section{Introduction} The interference between quantum transition amplitudes is the underlying principle of coherent control \cite{doi:10.1146/annurev.pc.43.100192.001353,doi:10.1146/annurev.physchem.48.1.601,doi:10.1146/annurev.physchem.040808.090427}. Given some independent excitation pathways connecting an initial state to a final target state of a quantum system, the total transition probability depends on the modulus squared of the sum of the corresponding complex transition amplitudes. Therefore, upon changing the relative phase of these amplitudes, it is possible to modify the final yield. This concept has found several applications in atomic, molecular and semiconductor physics, among other areas \cite{Petta2180}. In addition, it has contributed towards the development of quantum optimal control theory (QOCT), which seeks to find external controls that drive a given transition while maximizing some performance criterion \cite{rabitz2000,brif2010,peirce1988}. Coherent control has been applied in the context of ultracold atomic gases \cite{koch2012,PhysRevLett.96.173002,PhysRevA.75.051401,PhysRevA.95.013411} and in particular to Bose-Einstein condensates (BEC) \cite{Thomasen_2016,Bohi2009,Lozada_Vera_2013}. For instance, the control of the onset of self-trapping of a condensate in a periodically modulated double well has been demonstrated \cite{PhysRevA.64.011601}. Also, SU(2) rotations on spinor condensates have been coherently controlled by stimulated Raman adiabatic passage \cite{PhysRevA.94.053636}. In addition, there have been several successful experimental and theoretical implementations of quantum optimal control algorithms for BEC \cite{Nat1,Nat2,hohenester2014,hohenester2007,Mennemann_2015,0953-4075-46-10-104012,PhysRevA.90.033628,PhysRevA.98.022119}. A challenging control problem is the generation of non-ground-state condensates, which can exhibit several interesting features such as mode locking, critical dynamics, interference patterns and atomic squeezing \cite{yukalov69}. In the mean-field picture, non-ground-states can be represented as stationary solutions of the Gross-Pitaevskii equation (GPE), and are termed nonlinear topological modes \cite{yukalov66}. It has been shown that a given nonlinear mode can be generated by a time-dependent modulation of the trapping potential tuned to the resonance between the ground state and the desired excited nonlinear mode \cite{yukalov56}. Alternatively, it is also possible to generate a nonlinear mode by means of a resonant time-dependent modulation of the scattering length \cite{yukalov78,PhysRevA.81.053627}, which can be produced by an alternating magnetic field close to a Feshbach resonance \cite{PhysRevA.67.013605,PhysRevA.81.053627,clark2015,arunkumar2019}. Since quantum transition amplitudes can be attributed to these different modulations, their relative phase can be used to control the overall transition. Nevertheless, due to the nonlinear nature of the GPE, the transition amplitudes of the trapping and scattering-length modulations are not completely independent, and new aspects of coherent control can be expected in comparison with the linear Schr\"odinger equation. It is worth noting that QOCT has been applied to maximize the transition between nonlinear modes utilizing temporal and spatial modulation of both the trapping potential and the scattering length \cite{PhysRevA.93.053612}. However, the control fields obtained from QOCT are often complex and difficult to implement in the laboratory.
Additionally, the detailed role of the interference between the two modulations has not been addressed. In the present work, we explore the possibility of using both modulations to resonantly generate non-ground-state condensates. In particular, we consider the role of the relative phase of the modulations in the transitions between nonlinear modes. In sec.~\ref{sec2}, we introduce the theoretical framework for the production of the nonlinear coherent modes. In sec.~\ref{twolevels}, a time-averaging technique is applied to the GPE, allowing the description of the dynamics by only two modes. A perturbative treatment is used to obtain an analytic expression for the transition probability in sec.~\ref{sec4}. Numerical results confirming the predictions are presented in sec.~\ref{sec5}. Finally, conclusions are drawn in sec.~\ref{sec6}. \section{Excitation of nonlinear modes}\label{sec2} We consider the dynamics of a Bose gas wavefunction $\Psi({\bf r},t)$ described by the Gross-Pitaevskii equation \cite{RevModPhys.71.463,pethick}, \begin{equation} {\rm i}\hbar\frac{\partial}{\partial t}\Psi({\bf r},t)=\left(-\frac{\hbar^2}{2m}\nabla^2+V({\bf r},t)+g({\bf r},t)N|\Psi({\bf r},t)|^2\right)\Psi({\bf r},t), \label{eq:TDGPE} \end{equation} where $m$ is the atomic mass and $N$ is the number of atoms in the condensate. The trapping potential $V({\bf r},t)$ is composed of two parts, \begin{equation} V({\bf r},t)=V_{\rm trap}({\bf r})+V_{\rm mod}({\bf r},t), \label{eq:trappingwm} \end{equation} where $V_{\rm trap}({\bf r})$ is a fixed trapping potential and $V_{\rm mod}({\bf r},t)$ a time-dependent modulating potential. The nonlinear interaction amplitude $g({\bf r},t)$ is also composed of two parts, \begin{equation} g({\bf r},t)=g_{0}+g_{\rm mod}({\bf r},t), \label{eq:nonlinearitygwm} \end{equation} where $g_0=4\pi\hbar^2a_0/m$ is a fixed nonlinearity and $g_{\rm mod}({\bf r},t)=4\pi\hbar^2a({\bf r},t)/m$ is a modulating nonlinearity, with the s-wave scattering length $a_s$ near a Feshbach resonance being written as $a_s=a_0+a({\bf r},t)$. The normalization condition of the wavefunction is $\int d{\rm \bf r}|\Psi({\bf r},t)|^2=1$. With both modulations turned off, i.e., $V_{\rm mod}({\bf r},t)=0$ and $g_{\rm mod}({\bf r},t)=0$, the system can be described by the nonlinear Hamiltonian $H_0$, \begin{equation} H_0[\phi({\bf r})]=-\frac{\hbar^2}{2m}\nabla^2+V_{\rm trap}({\bf r})+{g}_{0}N|\phi({\bf r})|^2. \end{equation} We consider the nonlinear topological modes of $H_0$, which are solutions of the eigenvalue problem \cite{yukalov56}, \begin{equation} H_0[\phi_n({\bf r})]\phi_n({\bf r})=\mu_n\phi_n({\bf r}), \label{multiindex0} \end{equation} with $n$ generally being a multi-index label for the quantum states and $\mu_n$ the corresponding chemical potential. Here, we are concerned with inducing transitions between stationary solutions $\phi_n({\bf r})$. This task can be accomplished by modulating the trapping potential with an oscillatory field with frequency $\omega_t$ \cite{yukalov56,courteille}. Alternatively, one may also modulate the atomic scattering length with frequency $\omega_g$ \cite{yukalov78}. We assume that both modulations are present and that they possess a phase difference given by $\theta$, \begin{equation} V_{\rm mod}({\bf r},t)=V({\bf r})\cos(\omega_t t+\theta), \label{Vmod} \end{equation} and \begin{equation} g_{\rm mod}({\bf r},t)=g({\bf r})\cos(\omega_g t).
\label{gmod} \end{equation} For definiteness, we consider a transition from an initial state $\phi_1({\bf r})$ to a final state $\phi_2({\bf r})$, with $\mu_1<\mu_2$, and we associate the resonance frequency $\omega_{21}=(\mu_2-\mu_1)/\hbar$ with this transition. As we have already pointed out, the transition can be induced by resonant modulations, in which case the system can be approximately described solely by the topological modes involved in the transition \cite{yukalov56,courteille}. Thus, we assume that the modulating frequencies $\omega_t$ and $\omega_g$ are close to $\omega_{21}$. More specifically, we assume that $|\Delta\omega_{t}/\omega_t|\ll1$ and $|\Delta\omega_{g}/\omega_g|\ll1$, with the detunings defined by $\Delta\omega_{t}=\omega_{t}-\omega_{21}$ and $\Delta\omega_{g}=\omega_{g}-\omega_{21}$. \section{Two-Level approximation}\label{twolevels} In order to simplify the dynamical equations, we consider that the wave function $\Psi({\bf r},t)$ can be written as an expansion in terms of nonlinear modes \cite{yukalov56}, \begin{equation} \Psi( {\bf r},t) =\sum_{m}c_{m}(t)\phi_{m}({\bf r})\exp\left(-i\mu_m t/\hbar\right), \label{expan0} \end{equation} and that the following condition for mode separation is valid, \begin{equation} \frac{\hbar}{\mu_m} \left|\frac{dc_{m}}{dt}\right|\ll1, \label{condvary0} \end{equation} meaning that the $c_m(t)$ are slowly varying functions of time in comparison with $\exp(-i\mu_m t/\hbar)$. Substituting expansion (\ref{expan0}) into the GPE and performing a time-averaging procedure, with the coefficients $c_m(t)$ treated as quasi-invariants, results in a set of coupled nonlinear differential equations for the coefficients $c_m(t)$ (see Ref. \cite{yukalov66} for details). As a consequence, if at the initial time only the levels $n=1,2$ are populated and the frequencies of the modulations are close to $\omega_{21}$, the only relevant coefficients are $c_1(t)$ and $c_2(t)$ and we obtain the set of equations, \begin{subequations}\label{pop2} \begin{alignat}{2} i\frac{d{c}_{1}}{dt} = & {\alpha}_{12}{\left| c_{2} \right|}^{2}{c}_{1} +\frac{1}{2}{\beta}_{12}c_{2}{\rm{e}}^{i\left(\Delta \omega_{t} t +\theta \right)}+ \frac{1}{2}{\rm{e}}^{-i\Delta\omega_{g} t}c_{2}^{*}{c_{1}}^{2}\gamma_{21} \nonumber \\ & +\frac{1}{2}{\rm{e}}^{i\Delta\omega_{g} t}\left( {\left|c_{2} \right|}^{2}c_{2}\gamma_{12} + 2{\left|c_{1} \right|}^{2}c_{2}{\gamma}_{21}^{*} \right) \ , \\ i\frac{d{c}_{2}}{dt}= & {\alpha}_{21}{\left| c_{1} \right|}^{2}{c}_{2}+\frac{1}{2}{\beta}_{12}^{*}c_{1}{\rm{e}}^{-i\left(\Delta \omega_{t} t +\theta\right)} + \frac{1}{2}{\rm{e}}^{i\Delta\omega_{g} t}c_{1}^{*}{c_{2}}^{2}\gamma_{12} \nonumber \\ & + \frac{1}{2}{\rm{e}}^{-i\Delta\omega_{g} t}\left( {\left|c_{1} \right|}^{2}c_{1}\gamma_{21} + 2{\left|c_{2} \right|}^{2}c_{1}{\gamma}_{12}^{*} \right), \end{alignat} \end{subequations} with the coupling constants $\alpha_{mn}$, $\beta_{mn}$ and $\gamma_{mn}$ given by \begin{equation} \alpha_{mn} \equiv g_{0}\frac{N}{\hbar}\int{d{\bf r}{|\phi_{m}({\bf r})|}^{2}\left[ 2{|\phi_{n}({\bf r})|}^{2}-{|\phi_{m}({\bf r})|}^{2}\right]}, \label{alpha0} \end{equation} \begin{equation} \beta_{mn}\equiv \frac{1}{\hbar}\int{d{\bf r}{\phi}_{m}^{*}({\bf r})V({\bf r}){\phi}_{n}({\bf r})} \ , \label{beta0} \end{equation} and \begin{equation} \gamma_{mn}\equiv \frac{N}{\hbar}\int{d{\bf r}{\phi}_{m}^{*}({\bf r})g({\bf r}){|\phi_{n}({\bf r})|}^{2}}\phi_{n}({\bf r}) \ .
\label{I0} \end{equation} In order to fulfill condition (\ref{condvary0}), the couplings are assumed to be much smaller than the transition frequency, i.e., $ |\alpha_{mn}/\omega_{21}|\ll1 $, $ |\beta_{mn}/\omega_{21} |\ll1 $ and $ |\gamma_{mn}/\omega_{21} |\ll1 $ \cite{yukalov66}. From the dynamical equations (\ref{pop2}), we note that the modulation of the trap couples the modes by means of the linear term containing $\beta_{mn}$ and the nonlinear term with $\alpha_{mn}$, while the modulation of the scattering length couples the modes by means of distinct nonlinear terms containing $\gamma_{mn}$. Therefore, there is interference between the linear and nonlinear terms, and by varying the phase $\theta$ this interference can be controlled. We also note that when the modulation of the scattering length is absent, $g_{\rm mod}({\bf r},t)=0$, an approximate analytic solution to (\ref{pop2}) has been derived, which shows that the population oscillates between the two states with a Rabi-like chirped frequency \cite{yukalov56}. This chirped frequency depends on the populations $|c_m(t)|^2$. Unfortunately, such an approximate solution is not possible for $g_{\rm mod}({\bf r},t)\neq0$ due to the presence of terms with $c_m(t)^2$. Thus, we resort to perturbation theory to gain more insight into the role of $\theta$ in the transition. \section{Perturbative Approximation}\label{sec4} We assume that the modulating fields (\ref{Vmod}) and (\ref{gmod}) can be treated as small perturbations in order to apply canonical perturbation theory \cite{cohen}. To this end, we introduce a perturbation parameter $\lambda\ll1$ such that the Hamiltonian can be written as \begin{equation} H[\Psi]=H_0[\Psi]+\lambda \left[V_{\rm mod}({\bf r},t)+g_{\rm mod}({\bf r},t)N|\Psi|^2\right] \ . \label{pertH} \end{equation} We are interested in the transition probability from state $\phi_1$ to state $\phi_2$, often defined as $P_{1\rightarrow 2}(t)=\left|\bra{\phi_2}\left.\Psi({\bf r},t)\right>\right|^2$, and we assume the initial conditions $c_1(0)=1$ and $c_2(0)=0$. However, from the approximations of the last section, one can deduce the normalization condition for the coefficients, $\sum_{m}\left|c_m(t)\right|^2=1$. Thus, despite the fact that the set of nonlinear modes is not orthogonal, we can define the transition probability simply as $P_{1\rightarrow 2}(t)=\left|c_2(t)\right|^2$. As usual, we write the coefficients $c_{j}(t)$ as a power series in $\lambda$, \begin{equation} c_{j}(t)={c}_{j}^{(0)}(t)+\lambda{c}_{j}^{(1)}(t)+{\lambda}^{2}{c}_{j}^{(2)}(t)+\cdots, \label{cserie} \end{equation} and substitute this series into the dynamical equations (\ref{pop2}), equating like powers of $\lambda$. To zeroth order, this yields \begin{subequations}\label{zerotho} \begin{alignat}{2} i\frac{d{c}_{1}^{(0)}}{dt}=\alpha_{12}{|{c}_{2}^{(0)} |}^{2}{c}_{1}^{(0)} , \label{c1lambda0} \\ i\frac{d{c}_{2}^{(0)}}{dt}=\alpha_{21}{|{c}_{1}^{(0)}|}^{2}{c}_{2}^{(0)} \ . \label{c2lambda0} \end{alignat} \end{subequations} We thus obtain the zeroth-order solution ${c}_{1}^{(0)}(t)=1$, ${c}_{2}^{(0)}(t)=0$.
To first order in $\lambda$, the equations are \begin{subequations}\label{firsto} \begin{alignat}{2} \label{c1lambda1} i\frac{d{c}_{1}^{(1)}}{dt}=\alpha_{12}\left[\left({c}_{2}^{*(0)}{c}_{2}^{(1)}+{c}_{2}^{*(1)}{c}_{2}^{(0)} \right){c}_{1}^{(0)} +{|{c}_{2}^{(0)} |}^{2}{c}_{1}^{(1)} \right]+\frac{1}{2}{\beta}_{12}{c}_{2}^{(0)}{\rm{e}}^{i\left(\Delta\omega_{t} t+\theta \right)} \nonumber \\ +\frac{1}{2}{\rm{e}}^{i\Delta\omega_{g} t}\left[ {| {c}_{2}^{(0)}|}^{2}{c}_{2}^{(0)}\gamma_{12}+2{|{c}_{1}^{(0)} |}^{2}{c}_{2}^{(0)}{\gamma}_{21}^{*} \right]+\frac{1}{2}{\rm{e}}^{-i\Delta\omega_{g} t} {c}_{2}^{*(0)}{{c}_{1}^{(0)}}^{2}\gamma_{21} \ , \\ i\frac{d{c}_{2}^{(1)}}{dt}=\alpha_{21}\left[\left({c}_{1}^{*(0)}{c}_{1}^{(1)}+{c}_{1}^{*(1)}{c}_{1}^{(0)} \right){c}_{2}^{(0)} +{|{c}_{1}^{(0)} |}^{2}{c}_{2}^{(1)} \right]+\frac{1}{2}{\beta}_{12}^{*}{c}_{1}^{(0)}{\rm{e}}^{-i\left(\Delta\omega_{t} t+\theta \right)} \nonumber \\ +\frac{1}{2}{\rm{e}}^{-i\Delta\omega_{g} t}\left[ {| {c}_{1}^{(0)}|}^{2}{c}_{1}^{(0)}\gamma_{21}+2{|{c}_{2}^{(0)} |}^{2}{c}_{1}^{(0)}{\gamma}_{12}^{*} \right]+\frac{1}{2}{\rm{e}}^{i\Delta\omega_{g} t} {c}_{1}^{*(0)}{{c}_{2}^{(0)}}^{2}\gamma_{12} \ . \label{c2lambda1} \end{alignat} \end{subequations} Substituting the zeroth-order solutions into (\ref{c1lambda1}) and (\ref{c2lambda1}), these equations simplify to \begin{subequations}\label{firsto2} \begin{alignat}{2} i\frac{d{c}_{1}^{(1)}}{dt} & =0 \ , \label{c11lambda1} \\ i\frac{d{c}_{2}^{(1)}}{dt} & =\alpha_{21}{|{c}_{1}^{(0)}|}^{2}{c}_{2}^{(1)}+\frac{1}{2}{\beta}_{12}^{*}{\rm{e}}^{-i\left(\Delta\omega_{t} t +\theta \right)}+\frac{1}{2}\gamma_{21}{\rm{e}}^{-i\Delta\omega_{g} t} \ . \label{c22lambda1} \end{alignat} \end{subequations} Thus, within first order, ${c}_{1}^{(1)}(t)=0$ and \begin{equation} {c}_{2}^{(1)}(t)=-\frac{1}{2}\frac{{\beta}_{12}^{*}}{(\alpha_{21}-\Delta\omega_{t})}{\rm{e}}^{-i\theta}\left( {\rm{e}}^{-i\Delta\omega_{t} t}-1\right)-\frac{1}{2}\frac{\gamma_{21}}{(\alpha_{21}-\Delta\omega_{g})}\left( {\rm{e}}^{-i\Delta\omega_{g}t}-1\right) \ . \label{sc22lambda1} \end{equation} Hence, the transition probability to first order is \begin{eqnarray} P_{1\rightarrow 2}(t)\approx \frac{{|\beta_{12}|}^{2}}{2{|\alpha_{21}-\Delta\omega_{t}|}^{2}}\left[1- \cos(\Delta\omega_{t} t)\right]+\frac{{|\gamma_{21}|}^{2}}{2{|\alpha_{21}-\Delta\omega_{g}|}^{2}}\left[1- \cos(\Delta\omega_{g} t)\right] \nonumber \\ + \frac{{\beta}_{12}\gamma_{21}}{4{(\alpha_{21}-\Delta\omega_{t})}^{*}(\alpha_{21}-\Delta\omega_{g})}{\rm{e}}^{i\theta}\left[1+{\rm{e}}^{i(\Delta\omega_{t}-\Delta\omega_{g})t}-{\rm{e}}^{i\Delta\omega_{t}t}-{\rm{e}}^{-i\Delta\omega_{g}t}\right] \nonumber \\ +\frac{{\gamma}_{21}^{*}{\beta}_{12}^{*}}{4{(\alpha_{21}-\Delta{\omega}_{g})}^{*}(\alpha_{21}-\Delta\omega_{t})}{\rm{e}}^{-i\theta}\left[1+{\rm{e}}^{-i(\Delta\omega_{t}-\Delta\omega_{g})t}-{\rm{e}}^{-i\Delta\omega_{t}t}-{\rm{e}}^{i\Delta\omega_{g}t}\right] \ . \label{P12t} \end{eqnarray} When the frequencies of the modulations are equal, $\Delta \omega_{t}=\Delta\omega_{g}=\Delta\omega$, the expression for the transition probability simplifies to \begin{equation} P_{1\rightarrow 2}\approx \left[ \frac{{|\beta_{12}|}^{2}+{|\gamma_{21}|}^{2}+2\Re\{{\beta}_{12}{\gamma}_{21}\exp(i\theta)\}}{{|\alpha_{21}-\Delta\omega|}^{2}}\right]{\sin}^{2}\left(\frac{\Delta\omega t}{2}\right) \ , \label{P12twr} \end{equation} where $\Re\{\cdot\}$ stands for the real part.
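To make the interference structure of (\ref{P12twr}) concrete, the following is a minimal numerical sketch (in Python) that evaluates the first-order transition probability for equal detunings as a function of the relative phase $\theta$. The coupling values are the ones quoted for the transition to the first excited mode in sec.~\ref{sec5}; taking $\beta_{12}$ and $\gamma_{21}$ real and positive, as well as the particular detuning and time, are our own illustrative assumptions.
\begin{verbatim}
# Minimal sketch: first-order transition probability for equal detunings,
# P = [|b|^2 + |g|^2 + 2 Re(b g e^{i theta})] / |a - dw|^2 * sin^2(dw t/2).
# Couplings as quoted in the numerical-results section; b, g taken real
# and positive, and the detuning dw is an illustrative choice.
import numpy as np

a, b, g = 0.124, 7.7e-2, 4.6e-2   # alpha_21, beta_12, gamma_21
dw = 0.05                         # common detuning Delta omega

def P12(t, theta):
    amp = (b**2 + g**2 + 2*np.real(b*g*np.exp(1j*theta)))/abs(a - dw)**2
    return amp*np.sin(dw*t/2)**2

for theta in (0.0, np.pi/2, np.pi):
    print(f"theta = {theta:4.2f}:  P(t=10) = {P12(10.0, theta):.5f}")
# theta = 0 enhances the transition; theta = pi suppresses it.
\end{verbatim}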
Although expression (\ref{P12twr}) is only valid for very short times, for which the population of the state $\phi_1$ is still very close to $1$, it makes evident the role of the relative phase $\theta$ in the transition. For instance, if $({\beta}_{12}\gamma_{21})$ is real and positive, then for $\theta=\pi$ the modulations act destructively, decreasing the transition probability, whereas for $\theta=0$ they act constructively. The extent of the interference is dictated by the magnitude of the coupling parameters $\beta_{12}$ and $\gamma_{21}$. Additionally, according to (\ref{P12twr}), if the modulation of the scattering length is absent, then variation of $\theta$ plays no role in the dynamics. \section{Numerical results}\label{sec5} We have carried out direct numerical calculations of the GPE, solving Eq.~(\ref{eq:TDGPE}) in its 1D version, \begin{equation} {\rm i}\frac{\partial}{\partial t}\Psi(x,t)=H[\Psi]\Psi(x,t), \label{eq:TDGPE_1D} \end{equation} with the nonlinear Hamiltonian given by \begin{equation} H[\Psi]= -\frac{\partial^2}{\partial x^2}+V(x,t)+g(x,t)|\Psi(x,t)|^2, \end{equation} and considering arbitrary units such that $\hbar=m=N=g_0=1$. The nonlinear Hamiltonian operator has been written as a matrix over a grid of points according to the Chebyshev spectral method \cite{trefethen,mason}. In order to solve the time-dependent equation (\ref{eq:TDGPE_1D}), we express the corresponding time evolution operator, which connects the initial time $t=0$ to the final time $t=t_f$, as a product of $N$ small time-step $\Delta t$ evolution operators, \begin{equation} U(t_f,0)=\prod_{k=1}^{N}U\left(k\Delta t,(k-1)\Delta t\right). \label{evolEP} \end{equation} Each one of the small time-step evolution operators is calculated as an expansion in Chebyshev polynomials \cite{kosloff1984,LEFORESTIER199159,formanek2010}, \begin{equation} U\left(k\Delta t,(k-1)\Delta t\right)\approx\sum_{n=0}^{N_p}a_n\chi_n(-iH[\Psi((k-1)\Delta t)]\Delta t), \end{equation} where $a_n$ are the expansion coefficients, $\chi_n$ are the complex Chebyshev polynomials and $N_p$ sets the number of terms in the expansion. The propagation of the wavefunction in the $k$-th time step is obtained by applying $U\left(k\Delta t,(k-1)\Delta t\right)$ to the wavefunction calculated in the previous step, $\Psi((k-1)\Delta t)$. The relaxation method, which in essence consists of performing the propagation in imaginary time, $t\rightarrow {\rm i}t$, has been applied to obtain the ground state \cite{kosloff1986}. The excited modes of the condensate have been determined by the spectrum-adapted scheme described in Ref. \cite{PhysRevA.93.053612}. We have found very good agreement when comparing our results with those from Refs. \cite{MURUGANANDAM20091888,PhysRevA.93.053612}. For harmonic trapping potentials and modulating fields that vary linearly with distance, no transition to excited modes is possible through modulation of the trap \cite{yukalov69}. Thus, we have fixed the trapping potential to $V_{\rm trap}(x)=x^4/4$, allowing for a simple form of the spatial dependence of $V(x)$. For this trap, we have obtained the chemical potentials $\mu_{0}= 0.808$, $\mu_{1}=1.857$, and $\mu_{2}= 3.279$, for the ground, first and second nonlinear modes, respectively. We have considered transitions from the ground state to the first and to the second excited modes. In the first case, we have set $g(x)=A_g x$ and $V(x)=A_t x$, while in the second case, $g(x)=A_g x^2$ and $V(x)=A_t x^2$.
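As a check of this setup, the ground state and its chemical potential can be obtained with a few lines of code. The following minimal sketch (in Python) performs the imaginary-time relaxation for the quartic trap; it uses split-step Fourier propagation rather than the Chebyshev scheme described above, and the grid size and time step are illustrative choices.
\begin{verbatim}
# Minimal sketch: imaginary-time relaxation to the GPE ground state for
# H = -d^2/dx^2 + x^4/4 + g0 |psi|^2 with g0 = 1 (units as in the text).
# Split-step Fourier propagation is used instead of the Chebyshev scheme;
# grid and step sizes are illustrative.
import numpy as np

L, M, dt = 20.0, 512, 1e-3
x = np.linspace(-L/2, L/2, M, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(M, d=dx)
V, g0 = x**4/4, 1.0

psi = np.exp(-x**2)                      # arbitrary starting guess
for _ in range(20000):
    psi = psi*np.exp(-0.5*dt*(V + g0*np.abs(psi)**2))
    psi = np.fft.ifft(np.exp(-dt*k**2)*np.fft.fft(psi))   # kinetic step
    psi = psi*np.exp(-0.5*dt*(V + g0*np.abs(psi)**2))
    psi = psi/np.sqrt(np.sum(np.abs(psi)**2)*dx)          # renormalize

phat2 = np.abs(np.fft.fft(psi))**2
kin = np.sum(k**2*phat2)/np.sum(phat2)   # <-d^2/dx^2>
mu = kin + np.sum((V + g0*np.abs(psi)**2)*np.abs(psi)**2)*dx
print(mu)   # should be close to mu_0 = 0.808 (up to discretization error)
\end{verbatim}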
The frequencies of the modulations are set to be equal, $\omega_t=\omega_g=\omega$, and are chosen to satisfy the resonance condition for each target. Figure~\ref{figure1} compares single modulation with double modulation for $\theta=0$ by showing the corresponding target population dynamics, denoted by $n_j\equiv\left|\bra{\phi_j}\left.\Psi(x,t)\right>\right|^2$. In panel (a), the target is the first excited state, while in panel (b) the target is the second excited state. In both cases, we observe that the double modulation performs a faster transition than the individual modulations. Additionally, the double modulation enhances the target population beyond the sum of the individual modulations, which is evidence of quantum interference. Panels (a) and (c) of Fig.~\ref{figure2} show the population of the target modes, the first and second modes, respectively, as a function of the relative phase of the modulations for some fixed times. For $\theta=\pi$ the transition is essentially inhibited, whereas for $\theta=0$ the target population is enhanced, in agreement with the perturbative analysis. Panels (b) and (d) show the corresponding population dynamics for some fixed phases. We observe that as the phase varies from $0$ to $\pi$, the transitions become slower while transferring fewer atoms. This behavior is not captured by the perturbative expression and may be attributed to the nonlinear character of the GPE. Figure~\ref{figure3} compares the two-level approximation, obtained by solving Eqs.~(\ref{pop2}) with a fourth-order Runge-Kutta method, with the direct numerical solution of the GPE. For the transition to the first excited mode we have obtained the coupling parameters $\alpha_{21}\approx0.124$, $\beta_{12}\approx7.7\times{10}^{-2}$ and $\gamma_{21}\approx4.6\times{10}^{-2}$. Panel (a) shows the target population dynamics for $\theta=0$, while panel (b) shows the target population at $t=31$ as a function of the relative phase. Although the two-level approximation departs from the expected solution as the evolution takes place, these two panels illustrate the good qualitative agreement between the approaches that we have generally found in our calculations, corroborating our analysis. Figure \ref{figure4} considers the impact of the phase when only a single modulating field is present. In panels (a) and (b), the modulation of the nonlinearity is turned off, $g_{\rm mod}(x,t)=0$, whereas in panels (c) and (d) the modulation of the trap is turned off, $V_{\rm mod}(x,t)=0$, and Eq.~(\ref{gmod}) reads $g_{\rm mod}=g(x)\cos(\omega_g t+\theta)$. The upper panels show the population dynamics of the first mode, while the lower panels show the population of the first mode as a function of $\theta$ for some fixed times. In contrast with the prediction of the perturbative approach, in both cases the phase does have an impact on the population dynamics. However, this impact is very small compared to the case in which both modulating fields are present. \section{Conclusion}\label{sec6} Non-ground-state BEC can be created from the ground state by resonantly modulating either the trapping potential or the atomic interactions. We have explored the effect of applying both modulations simultaneously, and the impact of their relative phase, on the formation of the excited modes in the framework of the GPE. Numerical as well as approximate analytical methods have been applied.
We have shown that the relative phase can be used to coherently control the transition to the excited modes by enhancing or suppressing it. We have also shown that the double modulation can affect the speed of the transitions. This behavior, which is not often found in ordinary quantum dynamics, can be attributed to the nonlinear character of the GPE. This work should stimulate the search for experimental evidence of coherent control of transitions between nonlinear modes induced by double modulation. It should also motivate the study of different control problems using double modulation. \section*{Acknowledgments}\label{sec7} This study was financed in part by the Coordination for the Improvement of Higher Education Personnel (CAPES) - Finance Code 001. EFL acknowledges support from Sao Paulo Research Foundation, FAPESP (grant 2014/23648-9) and from National Council for Scientific and Technological Development, CNPq (grant 423982/2018-4). \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{archive1_new_mod.pdf} \end{center} \caption{\label{figure1} a) Population of the first excited state versus time for a system driven by the trap modulation, the scattering-length modulation, and the double modulation ($\theta=0$), with amplitudes $A_{t}=0.1$ and $A_{g}=0.3$. b) Population of the second excited state for the trap, scattering-length and double modulations, with amplitudes $A_{t}=0.08$ and $A_{g}=0.4$.} \end{figure} \pagebreak \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{archive2_new.pdf} \end{center} \caption{\label{figure2} a) Population of the first excited mode versus relative phase of the modulations for some fixed times, with the parameters of panel a) of Fig.~\ref{figure1}. b) Population of the first excited state versus time for some fixed phases (same parameters as in a)). c) Population of the second excited state versus relative phase for some fixed times, with the parameters of panel b) of Fig.~\ref{figure1}. d) Population of the second excited mode versus time for some fixed phases (same parameters as in c)).} \end{figure} \pagebreak \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{archive3_new.pdf} \end{center} \caption{\label{figure3} a) Population of the first excited mode versus time, comparing the direct numerical calculations with the two-level model, for the same system and parameters as in Fig.~\ref{figure1} and $\theta=0$. b) Population versus relative phase of the modulations at $t=31$.} \end{figure} \pagebreak \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{archive4_new.pdf} \end{center} \caption{\label{figure4} a) Population of the first excited mode versus time for a system driven by the trap modulation only, with $\theta=0$ and amplitude $A_{t}=0.1$. b) Population versus relative phase for some fixed times (same parameters as in a)). c) Population of the first excited state versus time for a system driven by the scattering-length modulation only, with $\theta=0$ and amplitude $A_{g}=0.3$. d) Population of the first excited mode versus relative phase for some fixed times (same parameters as in c)).} \end{figure} \pagebreak
\section{Introduction} A chemical reactor is any vessel in which chemical reactions take place. In general, a reactor is designed so as to maximize the yield of some particular products while requiring the least amount of money to purchase and operate. Normal operating expenses include energy input, energy removal, raw material costs, labor, etc. Energy changes can occur in the form of heating, cooling, or agitation. The latter is quite important because an appropriate mixing has a large influence on the yield. Therefore, the design and operation of mixing devices often determines the profitability of the whole plant. Theoretically, the effect of stirring in reactant media has also attracted considerable attention \cite{theory}. In particular, in the widely developed continuous stirred tank reactors (CSTR), one or more fluid reagents are introduced into a tank equipped with an impeller while the reactor effluent is removed \cite{Epstein}. The impeller stirs the reagents to ensure proper mixing. Classical CSTR dynamical models, based on coupled deterministic ordinary differential equations (ODEs), are the usual approach to chemical systems at the macroscopic scale, and they have been demonstrated to have considerable usefulness. However, their validity relies on many assumptions that limit the situations in which they can be applied. One of the most important is that chemical systems are discrete at the molecular level, and statistical fluctuations in concentration and temperature occur at the local scale. The elaboration of models considering this discreteness is therefore important. Seybold \cite{Seybold 1997} and Neuforth \cite{Neuforth 2000} have shown that stochastic cellular automata models can be successfully applied in simulating first-order chemical reactions. In their papers, they worked on a square arrangement of cells, each of them containing a chemical species. The reactions are performed by considering a probability of change, from reactant A to product B, proportional to the kinetic constant that defines the chemical equation. However, this type of calculation does not apply directly to the CSTR case, where a chemical feed flow is present. In this paper we extend the stochastic CA model to CSTRs by simulating the feed flow through a random selection of a subset of cells, on which the inflow conditions for chemical concentration and temperature are imposed. We would like to remark that mixing in a stirred tank is complicated and not well described despite the extensive usage of dimensionless numbers and models based on ODEs \cite{Chakraborty 2003}. Therefore, more accurate models are essential for developing and testing control strategies or even for exploring new reactor geometries. The organization of the paper is the following. Section \ref{S2} presents shortly the standard ODE-based CSTR model. Section \ref{S3} describes the CA method that we implemented for jacketed CSTRs. The simulations are displayed and briefly discussed in Section \ref{S4}. The paper ends with several concluding remarks.
\section{The Deterministic Dynamical Systems Model}\label{S2} As already mentioned, we consider an ideal jacketed CSTR where the following exothermic and irreversible first-order reaction is taking place: \begin{eqnarray} A \longrightarrow B \nonumber \end{eqnarray} The CSTR modeling equations in dimensionless form are the following \cite{Silva 1999} \begin{eqnarray} \frac{d\,X_1}{d\,\tau} &=& -\phi\,X_1\,k(X_2)+q\,(X_{1_{f}}-X_1)\\ \frac{d\,X_2}{d\,\tau} &=& \beta\,\phi\,X_1\,k(X_2)-(q+\delta)\,X_{2}+\delta\,X_3+q\,X_{2_{f}}\\ \frac{d\,X_3}{d\,\tau} &=& \frac{q_c}{\delta_1}\,\left(X_{3_{f}}-X_3\right)+\frac{\delta}{\delta_1\,\delta_2}\,(X_2-X_3)~, \end{eqnarray} where $X_1$, $X_2$, and $X_3$ are the dimensionless concentration, temperature, and cooling jacket temperature, respectively. We note that it is possible to use the dimensionless coolant flow rate, $q_c$, to manipulate $X_2$. \begin{figure}[htb] \begin{center} \includegraphics[height=5cm]{cstr2.eps} \put(-109,-10){$A+B$} \put(-160,122){\line(-1,0){15}} \put(-160,122){\vector(1,0){2}} \put(-172,126){$A$} \put(-155,0){\tiny{Coolant}} \put(-159,-7){\tiny{inlet-stream}} \put(-30,100){\tiny{Coolant}} \put(-34,95){\tiny{outlet-stream}} \put(-108,70){CSTR} \end{center} \caption{Schematic representation of the jacketed CSTR.} \end{figure} The relationships between the dimensionless parameters and variables and the physical variables are the following: \begin{eqnarray} k(X_2)=\exp\left(\frac{X_2}{1+X_2\,\gamma^{-1}}\right),\ \gamma=\frac{E}{R\,T_{f_{0}}},\ X_3=\frac{T_c-T_{f_{0}}}{T_{f_{0}}}\,\gamma,\ X_2=\frac{T-T_{f_{0}}}{T_{f_{0}}}\,\gamma\nonumber\\ \beta=\frac{(-\Delta H)\ C_f}{\rho C_p t_{f_0}},\ \delta=\frac{U\ A}{\rho \ C_p\ Q_0},\ \phi=\frac{V}{Q_0}k_0\ e^{-r}, \ X_1=\frac{C}{C_{f_0}},\ X_{1_f}=\frac{C_f}{C_{f_0}},\ \nonumber\\ X_{2_f}=\frac{T_f-T_{f_0}}{T_{f_0}}\gamma,\ \delta_1=\frac{V_c}{V},\ \tau=\frac{Q_0}{V}t,\ X_{3_f}=\frac{T_{c_f}-T_{f_0}}{T_{f_0}}\gamma,\ \delta_2=\frac{\rho_c\ C_{p_c}}{\rho\ C_p} \nonumber \end{eqnarray} where the meaning of the symbols is given in Table 1. In the rest of the paper we shall use the solution of this ODE model as the theoretical case with which the CA simulations will be compared. \begin{table}[htb] \centering \caption{Parameters of the model} \label{tab:1} \begin{tabular}{lll} \hline\noalign{\smallskip} Symbol & \ \ \ \ \ \ \ \ \ \ \ Meaning & \ \ \ \ Value \\ & \ \ \ \ \ \ \ \ \ \ & (arb. 
units) \\ \noalign{\smallskip}\hline\noalign{\smallskip} $C$ & Reactor composition&\ \ \ \ \ $0.001$ \\ $C_f$ & Feed composition&\ \ \ \ \ $1.0$ \\ $q$ & Dimensionless reactor feed flow rate&\ \ \ \ \ $1.0$ \\ $q_c$ & Dimensionless coolant flow rate&\ \ \ \ \ $1.65$ \\ $q_{cs}$ & Steady-state value of $q_c$ &\ \ \ \ \ $1.0$ \\ $T$ & Reactor temperature &\ \ \ \ \ $1.0$ \\ $T_c$ & Coolant temperature &\ \ \ \ \ $1.0$ \\ $UA$ & Heat transfer coefficient times the heat transfer area &\ \ \ \ \ $1.0$ \\ $V$ & Reactor volume &\ \ \ \ \ $1.0$ \\ $V_c$ & Cooling jacket volume &\ \ \ \ \ $1.0$ \\ $X_{1f}$ & Dimensionless feed concentration &\ \ \ \ \ $1.0$ \\ $X_{2f}$ & Dimensionless feed temperature &\ \ \ \ \ $0.0$ \\ $X_{3f}$ & Dimensionless coolant feed temperature&\ \ \ \ \ $1.0$ \\ $\beta$ & Dimensionless heat of reaction &\ \ \ \ \ $8.0$ \\ $\delta _1$ & Dimensionless volume ratio &\ \ \ \ \ $0.1$ \\ $\delta _2$ & Dimensionless density multiplied by the heat capacity of coolant &\ \ \ \ \ $1.0$ \\ $\phi$ & Hill's threshold parameter &\ \ \ \ \ $0.072$ \\ $\gamma$ & Dimensionless activation energy &\ \ \ \ \ $20.0$ \\ $\rho _c C_{p_c}$ & Density multiplied by the heat capacity of coolant&\ \ \ \ \ $1.0$ \\ $\tau$ & Dimensionless time &\ \ \ \ \ $--$ \\ \\\noalign{\smallskip}\hline \end{tabular} \end{table} The nominal parameter values used here are given by: \begin{eqnarray} \beta = 8.0,\ \delta = 0.3,\ X_{1_f} = 1.0,\ X_{3_f} = 1.0,\ q = 1.0,\ q_{c_s} = 1.65\nonumber\\ \gamma = 20.0,\ X_{2_f} = 0.0,\ \phi = 0.072,\ \delta_1 = 0.1,\ \delta_2 = 0.5~.\nonumber \end{eqnarray} \section{Stochastic CA Model for Jacketed CSTR}\label{S3} The process simulated in this work is the exothermic reaction that converts a chemical A into a product B in a jacketed CSTR. Our model is composed of three square arrangements of cells, all of the same size. The first lattice is for chemicals A and B and represents the chemical distribution in the tank reactor. Each cell contains either one unit of reactant A or one unit of product B (not necessarily representing a single molecule), with all cells occupied. The second arrangement is for the tank temperature. It contains the temperatures $t_{ij}$ as real values, with each cell in this arrangement corresponding to the respective cell in the first arrangement. The third arrangement represents the coolant system. We have used a square lattice of the same size as the temperature array, in such a way that each tank temperature cell is in ``contact'' with a coolant jacket cell. In our model the first process in each time step is the irreversible conversion of chemical A into product B. The conversion rate is determined by $\phi\,k(X_{2})$ as in \cite{Silva 1999}, where $X_{2}$ is the average temperature of the tank temperature arrangement. This first-order kinetic ``constant'' is multiplied by the time step in order to get the proportion of reactant A that is expected to be converted into product B in each evolution step. This number can also be considered as the probability that a molecule of chemical A is converted into product B, if the time step is small enough. This proportion is compared with a randomly generated number, one for each cell containing reactant A. If the random number is less than the proportion, reactant A is changed to product B in that cell.
Since the reaction is exothermic, the temperature value in the temperature array is increased by $\beta$ (according to Eq.~(2)) in the corresponding cell.\\ The second simulated process in our model is the tank temperature diffusion, which can simultaneously be considered an energy diffusion. It can be performed by means of finite differences, but in order to obtain a model almost independent of the ODE description we have implemented a moving-average method, in which the temperature of a cell at the next time step is the average temperature of its neighborhood. This procedure gives results similar to those of finite differences, as shown by Weimar for reaction-diffusion systems simulated by cellular automata \cite{Weimar 1997}. We used a square neighborhood formed by $(2R+1)^{2}$ cells, where $R$ is the number of steps that we have to walk from the center of the cell in order to reach the farthest horizontal (vertical) cell of the neighborhood. The third simulated process is the tank feed flow. We have simulated the feed flow rate $q$ in a stochastic way. In order to get an approximation to the proportion of the tank that must be replaced by the incoming flow, $q$ is multiplied by the time step and by the total number of cells in the arrangement. This gives a real number $x$. Then, following Weimar \cite{Weimar 1997}, we used a probabilistic minimal-noise rule, \textit{i.e.}, the probability $p = x- \lfloor x \rfloor$ is defined in order to decide whether $\lfloor x \rfloor$ or $\lfloor x \rfloor +1$ cells will be replaced by the flow. We choose $\lfloor x \rfloor$ with probability $1-p$ and $\lfloor x \rfloor +1$ with probability $p$. This method conserves the proportion $x$ in a statistical sense. Subsequently, a cell is selected at random by means of two random numbers, which are used to select a row and a column, in such a way that all cells have the same probability of being selected. If the cell has already been selected in the same time step, a new selection is made. This is repeated until we have reached the number of cells that must be replaced. Finally, the selected cells are set to the feed flow temperature $X_{2f}$ in the temperature arrangement, and in the reactant arrangement a unit of reactant A is placed with probability $X_{1f}$, which represents the concentration of chemical A in the feed flow. In the simulations presented in this work we used $X_{1f}=1$. This method of flow simulation could be improved in several ways, in order to simulate different tank geometries or to represent the flow direction. However, in this work we only want to show that the CA method can reproduce the CSTR behavior to a very good approximation, with the added advantage of spatial analysis. The fourth simulated process is the energy interchange between the tank and the jacket. This has been done by directly calculating the energy/temperature interchange between each tank temperature cell and its corresponding jacket temperature cell. This interchange is dictated by the difference between the two temperatures and is weighted by $\delta$, $\delta_{1}$, and $\delta_{2}$ as in Eqs.~(2) and (3). The fifth and sixth simulated processes are the coolant flow and the coolant temperature diffusion, respectively. Both are performed in a way similar to that used for the tank concentration and temperature.
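To fix ideas, the following minimal sketch (in Python with {\tt numpy}) implements one CA time step combining the first three processes: stochastic conversion, temperature diffusion by moving average with $R=1$, and stochastic feed flow replacement. The array size, the wrap-around boundary treatment of the neighborhood, and the random seed are our own illustrative choices; the jacket processes are handled analogously.
\begin{verbatim}
# Minimal sketch of one CA time step: stochastic A -> B conversion,
# moving-average temperature diffusion (R = 1, wrap-around boundaries)
# and stochastic feed flow with the minimal-noise rule.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 100, 1e-3
phi, beta, gamma = 0.072, 8.0, 20.0
q, X1f, X2f = 1.0, 1.0, 0.0

A = np.ones((n, n), dtype=bool)   # True = reactant A, False = product B
T = np.full((n, n), 0.1)          # dimensionless tank temperature

def k(X2):                        # k(X2) = exp(X2/(1 + X2/gamma))
    return np.exp(X2/(1 + X2/gamma))

def step(A, T):
    # 1. conversion: each A-cell converts with probability phi*k(<X2>)*dt
    p = phi*k(T.mean())*dt
    convert = A & (rng.random(A.shape) < p)
    A = A & ~convert
    T = T + beta*convert          # exothermic release in converted cells
    # 2. diffusion: average over the 3x3 neighborhood of each cell
    T = sum(np.roll(np.roll(T, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1))/9.0
    # 3. feed flow: replace floor(x) or floor(x)+1 randomly chosen cells
    x = q*dt*A.size
    m = int(np.floor(x)) + int(rng.random() < x - np.floor(x))
    idx = rng.choice(A.size, size=m, replace=False)
    A.ravel()[idx] = rng.random(m) < X1f
    T.ravel()[idx] = X2f
    return A, T

for _ in range(1000):
    A, T = step(A, T)
print(A.mean(), T.mean())         # estimates of X1 and X2 after 1000 steps
\end{verbatim}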
\section{Simulations}\label{S4} In this section we first present the comparison between the curves obtained from the differential equations and those obtained from the implemented cellular automata model (see Fig.~\ref{comparativo}). It is clear that the cellular automata simulations reproduce with excellent agreement the values of the concentration of chemical A, the tank temperature and the jacket temperature at all times. Using a time step of 0.001, the curves coincide at the initial time, during the transient, and in the steady state. We have found that we can maintain this remarkable agreement by adjusting the time step to a sufficiently small value. \begin{figure}[htb] \centering \includegraphics[scale=0.5]{comparativo2.ps} \caption[]{Comparison between the curves from the differential equations and the curves obtained from simulations with the cellular automata approach. Initial values are: $X_{1}=0.1$, $X_{2}=0.1$, $X_{3}=0.1$. For the six curves: 100 points separated by a dimensionless time of 0.2 were taken from 20000-point simulations with a dimensionless time step of 0.001.} \label{comparativo} \end{figure} When a kinetic constant based on the average temperature is used, it is implicitly assumed that the mixing in the CSTR is perfect, eliminating any temperature inhomogeneities. One could ask how the tank behavior changes if the mixing is only almost perfect. This can be studied with a model that considers the spatial distribution of temperature. We studied this effect by calculating the kinetic constant for each cell based on its corresponding temperature. The effect for a $1000 \times 1000$ cell array and a time step of 0.001 is shown in Fig.~\ref{local}. It can be observed that the tank temperature curve for perfect mixing and the one for the locally calculated kinetic constant are the same for almost all times. However, they separate during the transient period, leading to a reduction in the magnitude of the peak and a slight delay in its appearance. The curve was calculated with one tank temperature diffusion process per time step with an $R=1$ neighborhood. If more diffusion steps are used per time step, the curves obtained tend to the theoretical one, as expected. Alternatively, this can be interpreted as the effect of a perfectly mixed tank made of material in which each component tends to maintain its energy. \begin{figure}[htb] \centering \includegraphics[scale=0.55]{local.ps} \caption[]{Comparison between the tank temperature evolution curves for a kinetic constant based on the average temperature and for a kinetic constant calculated on the basis of the local temperature. Initial values are: $X_{1}=0.1$, $X_{2}=0.1$, $X_{3}=0.1$. The array is of $1000 \times 1000$ cells and the dimensionless time step is 0.001; one temperature diffusion step per time step.} \label{local} \end{figure} The ODEs represent the characteristics of the global system when the number of elements is large enough that the statistical fluctuations have small amplitude. However, when the system size and the number of elements are reduced, the statistical fluctuations become increasingly important.
In this way, further advantages of the cellular automata method proposed here are its flexibility with respect to the reactor size and its stochastic nature, which allow one to study how much the system is affected by the initial conditions and by the stochastic features of the process. We have performed several simulations applying the parameters presented above to arrangements of small size, where the model is no longer representative of a three-dimensional CSTR. However, we can think of it as representing a catalytic surface dividing two regions, one carrying chemical A and the other acting as a temperature reservoir. Therefore, this model is a simple approach, useful as a first approximation, to the analysis and study of microreactors or even nanoreactors. We recall that the usage of microreactors for {\em in situ} and on-demand chemical production is gaining increasing importance, as the field of microreaction engineering has already demonstrated potential to impact a wide spectrum of chemical, biological, and process system applications \cite{Jensen}. There are already many successfully developed microreactors for chemical applications such as partial oxidation reactions \cite{partial}, phosgene synthesis \cite{phosgene}, multiphase processing \cite{multiphase}, and (bio)chemical detection \cite{detection}. Figure \ref{smallconc} displays the variability that can be found in CSTR systems at small scales. It is clear evidence that statistical fluctuations are a dominant issue at this scale. In addition, one may notice that the dynamical behavior can be totally different from the expected behavior of a larger system, e.g., a $1000 \times 1000$ cell arrangement. \begin{figure}[htb] \centering \includegraphics[scale=0.55]{smallconc.ps} \caption[]{Behaviors that can be found in systems with a small number of elements (cells for the CAs, clusters of molecules in the real case). These behaviors are different from those expected in systems with a large number of elements. The initial values are $X_{1}=0.1$, $X_{2}=0.1$, $X_{3}=0.1$. One hundred points separated by a time lag of 0.2 were taken from 20000-point simulations with a time step of 0.001; one temperature diffusion step per time step.} \label{smallconc} \end{figure} Finally, direct stochastic simulation of small systems can give us insight into how an open CSTR behaves when the statistical fluctuations and the initial configuration are important. In Fig.~\ref{rep} one can see that the possible behaviors of a system of size $20 \times 20$ show large deviations from the average (theoretical) value. This kind of variability is not provided by pure ODEs (without a stochastic term). We think that this stochastic CA approach could be an important tool for testing control strategies, since it can be seen as a step between the ODE models and the specific experimental situation. \begin{figure}[htb] \centering \includegraphics[scale=0.55]{rep.ps} \caption[]{Different concentration behaviors of chemical A for systems of the same size ($20 \times 20$) treated by the same method, which could be the underlying dynamical characteristic of microreactors. Five simulations are displayed together with the average curve (thick line) of 20 individual simulations. The initial configuration and the stochasticity introduced in the model lead to a time-distributed behavior. The initial values are: $X_{1}=0.1$, $X_{2}=0.1$, $X_{3}=0.1$.
The time step is 0.001; one temperature diffusion step per time step.} \label{rep} \end{figure} \section{Concluding Remarks}\label{S5} A cellular automata approach for the CSTR with a cooling jacket has been presented in this paper. It is able to reproduce the CSTR dynamical behavior calculated by ODEs to a good approximation and in a simple way. The stochastic model presented allows us to study the behavior of the tank variables when the reaction probability depends on the local temperature. It also gives us an approach to studying systems with few elements, such as micro- and nanoreactors, or catalytic membranes separating two phases. The main advantages of the CA approach presented here are its stochastic nature and the direct involvement of a spatial structure. This also represents a tool for studying the role of the initial configuration and of stochastic fluctuations in systems with few elements. Additionally, the CA approach is a clear improvement in CSTR modeling and can be applied to different reactor and jacket geometries, as well as to a more detailed treatment of the real mass flow in the tank geometry.
\section{Introduction}\label{sec:intro} We argue that a very general family of covariance matrix distributions, known as the \emph{Inverse G-Wishart} family, plays a fundamental role in the modularization of variational inference algorithms via variational message passing when a factor graph fragment (Wand, 2017) approach is used. A factor graph fragment, or \emph{fragment} for short, is a sub-graph of the relevant factor graph consisting of a factor and all of its neighboring nodes. Even though use of the Inverse G-Wishart distribution is not necessary, its adoption allows for fundamental factor graph fragment natural parameter updates to be expressed elegantly and succinctly. An essential aspect of this strategy is that the Inverse G-Wishart distribution is the \emph{only} distribution used for covariance matrix and variance parameters. The family includes as special cases the Inverse Chi-Squared, Inverse Gamma and Inverse Wishart distributions. Therefore, just a single distribution is required, which leads to savings in notation and code. Whilst similar comments concerning modularity apply to Monte Carlo-based approaches to approximate Bayesian inference, here we focus on variational inference. Two of the most common contemporary approaches to fast approximate Bayesian inference are mean field variational Bayes (e.g.\ Attias, 1999) and expectation propagation (e.g.\ Minka, 2001). Minka (2005) explains how each approach can be expressed as message passing on relevant \emph{factor graphs} with \emph{variational message passing} (Winn \&\ Bishop, 2005) being the name used for the message passing version of mean field variational Bayes. Wand (2017) introduced the concept of \emph{factor graph fragments}, or \emph{fragments} for short, for compartmentalization of variational message passing into atom-like components. Chen \&\ Wand (2020) demonstrate the use of fragments for expectation propagation. Explanations of factor graph-based variational message passing that match the current exposition are given in Sections 2.4--2.5 of Wand (2017). Sections 4.1.2--4.1.3 of Wand (2017) introduce two variational message passing fragments known as the \emph{Inverse Wishart prior} fragment and the \emph{iterated Inverse G-Wishart} fragment. The first of these simply corresponds to imposing an Inverse Wishart prior on a covariance matrix. In the scalar case this reduces to imposing an Inverse Chi-Squared or, equivalently, an Inverse Gamma prior on a variance parameter. The iterated Inverse G-Wishart fragment facilitates the imposition of arbitrarily non-informative priors on standard deviation parameters such as members of the Half-$t$ family (Gelman, 2006; Polson \&\ Scott, 2012). An extension to the covariance matrix case, for which there is the option to impose marginal Uniform distribution priors over the interval $(-1,1)$ on correlation parameters, is elucidated in Huang \&\ Wand (2013). Mulder \&\ Pericchi (2018) provide a different type of extension that is labelled the \emph{Matrix-$F$} distribution. These two fragments arise in many classes of Bayesian models, such as both Gaussian and generalized response linear mixed models (e.g. McCulloch \textit{et al.}, 2008), Bayesian factor models (e.g. Conti \textit{et al.}, 2014), vector autoregressive models (e.g. Assaf \textit{et al.}, 2019), and generalized additive mixed models and group-specific curve models (e.g. Harezlak \textit{et al.}, 2018).
Despite the fundamental role of Inverse G-Wishart-based fragments in variational message passing, the main reference to date, Wand (2017), is brief in its exposition and contains some errors that affect certain cases. In this article we provide a detailed exposition of the Inverse G-Wishart distribution in the context of variational message passing and list the Inverse Wishart prior and iterated Inverse G-Wishart fragment updates in full ready-to-code forms. \textsf{R} functions (\textsf{R} Core Team, 2020) that implement these algorithms are provided as part of the supplementary material of this article. We also explain the errors in Wand (2017). Section \ref{sec:GWandIGW} contains relevant definitions and results concerning the G-Wishart and Inverse G-Wishart distributions. Connections with the Huang-Wand and Matrix-$F$ families of marginally noninformative prior distributions for covariance matrices are summarized in Section \ref{sec:HWandMatrixFconns}, and in Section \ref{sec:VMPbackground} we point to background material on variational message passing. In Sections \ref{sec:IGWprior} and \ref{sec:iterIGWfrag} we provide detailed accounts of the two variational message passing fragments pertaining to variance and covariance matrix parameters, expanding on what is presented in Sections 4.1.2 and 4.1.3 of Wand (2017), and making some corrections to what is presented there. In Section \ref{sec:priorSpec} we provide explicit instructions on how the two fragments are used to specify different types of prior distributions on standard deviation and covariance matrix parameters in variational message passing-based approximate Bayesian inference. Section \ref{sec:illustrative} contains a data analytic example that illustrates the use of the covariance matrix fragment update algorithms. Some closing discussion is given in Section \ref{sec:closing}. A web-supplement contains relevant details. \section{The G-Wishart and Inverse G-Wishart Distributions}\label{sec:GWandIGW} A random matrix $\boldsymbol{X}$ has an Inverse G-Wishart distribution if and only if $\boldsymbol{X}^{-1}$ has a G-Wishart distribution. In this section we first review the G-Wishart distribution, which has an established literature. Then we discuss the Inverse G-Wishart distribution and list properties that are relevant to its employment in variational message passing. Let $G$ be an undirected graph with $d$ nodes labeled $1,\ldots,d$ and edge set $E$ consisting of pairs of nodes that are connected by an edge. We say that the symmetric $d\times d$ matrix $\boldsymbol{M}$ \emph{respects} $G$ if $$\boldsymbol{M}_{ij}=0\quad\mbox{for all}\quad \{i,j\}\notin E.$$ Figure \ref{fig:MandG} shows the zero/non-zero entries of four $5\times5$ symmetric matrices. For each matrix, the $5$-node graph that the matrix respects is shown underneath. \begin{figure}[h] \null\vskip4mm $\Mone\quad\Mtwo\quad\Mthree\quad\Mfour$ \centering \subfigure{\includegraphics[width=0.22\textwidth]{MandGa.pdf}} \quad\subfigure{\includegraphics[width=0.22\textwidth]{MandGb.pdf}} \quad\subfigure{\includegraphics[width=0.22\textwidth]{MandGc.pdf}} \quad\subfigure{\includegraphics[width=0.22\textwidth]{MandGd.pdf}} \caption{\textit{The zero/non-zero entries of four $5\times5$ symmetric matrices with non-zero entries denoted by ${\Large\mbox{$\times$}}$. Underneath each matrix is the $5$-node undirected graph that the matrix respects. The nodes are numbered according to the rows and columns of the matrices.
A graph edge is present between nodes $i$ and $j$ whenever the $(i,j)$ entry of the matrix is non-zero. The graph respected by the full matrix is denoted by} $G_{\mbox{\tiny{\rm full}}}$. \textit{The graph respected by the diagonal matrix is denoted by} $G_{\mbox{\tiny{\rm diag}}}$.} \label{fig:MandG} \end{figure} The first graph in Figure \ref{fig:MandG} is totally connected and corresponds to the matrix being full. Hence we denote this graph by $G_{\mbox{\tiny{\rm full}}}$. At the other end of the spectrum is the last graph of Figure \ref{fig:MandG}, which is totally disconnected. Since this corresponds to the matrix being diagonal we denote this graph by $G_{\mbox{\tiny{\rm diag}}}$. An important concept in G-Wishart and Inverse G-Wishart distribution theory is graph decomposability. An undirected graph $G$ is \emph{decomposable} if and only if all cycles of four or more nodes have an edge that is not part of the cycle but connects two nodes of the cycle. In Figure \ref{fig:MandG} the first, third and fourth graphs are decomposable. However, the second graph is not decomposable since it contains a four-node cycle without a chord, that is, without an edge that connects two non-adjacent nodes of the cycle. Alternative labels for decomposable graphs are \emph{chordal} graphs and \emph{triangulated} graphs. In Sections \ref{sec:GWishartDistr} and \ref{sec:IGWdistn} we define the G-Wishart and Inverse G-Wishart distributions and treat important special cases. This exposition depends on particular notation, which we define here. For a generic proposition ${\mathcal P}$ we define $I({\mathcal P})$ to equal $1$ if ${\mathcal P}$ is true and zero otherwise. If the random variables $x_j$, $1\le j\le d$, are independent such that $x_j$ has distribution ${\mathcal D}_j$ we write $x_j\stackrel{{\tiny \mbox{ind.}}}{\sim}{\mathcal D}_j$, $1\le j\le d$. For a $d\times 1$ vector $\boldsymbol{v}$ let $\mbox{diag}(\boldsymbol{v})$ be the $d\times d$ diagonal matrix with diagonal comprising the entries of $\boldsymbol{v}$ in order. For a $d\times d$ matrix $\boldsymbol{M}$ let $\mbox{diagonal}(\boldsymbol{M})$ denote the $d\times 1$ vector comprising the diagonal entries of $\boldsymbol{M}$ in order. The $\mbox{\rm vec}$ and $\mbox{\rm vech}$ matrix operators are well-established (e.g. Gentle, 2007). If $\boldsymbol{a}$ is a $d^2\times1$ vector then $\mbox{\rm vec}^{-1}(\boldsymbol{a})$ is the $d\times d$ matrix such that $\mbox{\rm vec}\big(\mbox{\rm vec}^{-1}(\boldsymbol{a})\big)=\boldsymbol{a}$. The matrix $\boldsymbol{D}_d$, known as the \emph{duplication matrix of order $d$}, is the $d^2\times\{\frac{1}{2}d(d+1)\}$ matrix containing only zeros and ones such that $\boldsymbol{D}_d\mbox{\rm vech}(\boldsymbol{A})=\mbox{\rm vec}(\boldsymbol{A})$ for any symmetric $d\times d$ matrix $\boldsymbol{A}$ (Magnus \&\ Neudecker, 1999). For example, $$\boldsymbol{D}_2=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]. $$ The Moore-Penrose inverse of $\boldsymbol{D}_d$ is $\boldsymbol{D}_d^+\equiv(\boldsymbol{D}_d^T\boldsymbol{D}_d)^{-1}\boldsymbol{D}_d^T$ and is such that $\boldsymbol{D}_d^+\mbox{\rm vec}(\boldsymbol{A})=\mbox{\rm vech}(\boldsymbol{A})$ for a symmetric matrix $\boldsymbol{A}$.
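These operations are simple to code. The following minimal \textsf{R} sketch (ours, not part of the article's supplementary material; all function names are illustrative and only base \textsf{R} is assumed) implements $\mbox{\rm vech}$, $\boldsymbol{D}_d$ and $\boldsymbol{D}_d^+$ and checks the two defining identities:
\begin{verbatim}
# Minimal R sketch (base R only; function names are ours) of the vech,
# duplication matrix and Moore-Penrose inverse operations defined above:
vech <- function(A) A[lower.tri(A, diag = TRUE)]
duplication.matrix <- function(d)
{
   D <- matrix(0, d^2, d*(d + 1)/2)  ;  k <- 0
   for (j in 1:d) for (i in j:d)
   {
      k <- k + 1
      D[(j - 1)*d + i, k] <- 1  ;  D[(i - 1)*d + j, k] <- 1
   }
   return(D)
}
Dd.plus <- function(D) solve(crossprod(D), t(D))   # (D^T D)^{-1} D^T

# Check the defining identities on a random symmetric matrix:
d <- 3  ;  A <- crossprod(matrix(rnorm(d^2), d, d))
D <- duplication.matrix(d)
all.equal(as.vector(D%*%vech(A)), as.vector(A))          # D_d vech(A) = vec(A)
all.equal(as.vector(Dd.plus(D)%*%as.vector(A)), vech(A)) # D_d^+ vec(A) = vech(A)
\end{verbatim}
For $d=2$, \texttt{duplication.matrix(2)} reproduces the $\boldsymbol{D}_2$ matrix displayed above.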
\subsection{The G-Wishart Distribution}\label{sec:GWishartDistr} The \textit{G-Wishart distribution} (Atay-Kayis \&\ Massam, 2005) is defined as follows: \begin{definition} Let $\boldsymbol{X}$ be a $d\times d$ symmetric and positive definite random matrix and $G$ be a $d$-node undirected graph such that $\boldsymbol{X}$ respects $G$. For $\delta>0$ and a symmetric positive definite $d\times d$ matrix $\boldsymbol{\Lambda}$ we say that $\boldsymbol{X}$ has a G-Wishart distribution with graph $G$, shape parameter $\delta$ and rate matrix $\boldsymbol{\Lambda}$, and write $$\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G,\delta,\boldsymbol{\Lambda}),$$ if and only if the non-zero values of the density function of $\boldsymbol{X}$ satisfy \begin{equation} p(\boldsymbol{X})\propto|\boldsymbol{X}|^{(\delta-2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X})\}. \label{eq:GWishartDensKern} \end{equation} \label{def:GWdefn} \end{definition} Obtaining an expression for the normalizing factor of a general G-Wishart density function is a challenging problem that was recently resolved by Uhler \textit{et al.} (2018). In the special case where $G$ is a decomposable graph, a relatively simple expression for the normalizing factor exists and is given, for example, by equation (1.4) of Uhler \textit{et al.} (2018). The non-decomposable case is much more difficult and is treated in Section 3 of Uhler \textit{et al.} (2018), but the normalizing factor does not have a succinct expression for general $G$. Similar comments apply to expressions for the mean of a G-Wishart random matrix. As discussed in Section 3 of Atay-Kayis \&\ Massam (2005), the G-Wishart distribution has connections with other distributional constructs such as the hyper Wishart law defined by Dawid \&\ Lauritzen (1993). Let $G_{\mbox{\tiny{\rm full}}}$ be the totally connected $d$-node undirected graph and $G_{\mbox{\tiny{\rm diag}}}$ be the totally disconnected $d$-node undirected graph. The special cases of $G=G_{\mbox{\tiny{\rm full}}}$ and $G=G_{\mbox{\tiny{\rm diag}}}$ are such that the normalizing factor and mean do have simple closed form expressions. Since these cases arise in fundamental variational message passing algorithms we now turn our attention to them. \subsubsection{The $G=G_{\mbox{\tiny{\rm full}}}$ Special Case} In the case where $G$ is a fully connected graph we have: \begin{result} If the $d\times d$ random matrix $\boldsymbol{X}$ is such that $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm full}}},\delta,\boldsymbol{\Lambda})$ then \begin{equation} \begin{array}{rcl} p(\boldsymbol{X})&=&\frac{|\boldsymbol{\Lambda}|^{(\delta+d-1)/2}}{2^{d(\delta+d-1)/2}\pi^{d(d-1)/4} \prod_{j=1}^d\Gamma(\frac{\delta+d-j}{2})}\, |\boldsymbol{X}|^{(\delta-2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X})\}\\[1ex] &&\quad\times I(\boldsymbol{X}\ \mbox{a symmetric and positive definite $d\times d$ matrix}). \end{array} \label{eq:pXfullCase} \end{equation} The mean of $\boldsymbol{X}$ is $$E(\boldsymbol{X})=(\delta+d-1)\,\boldsymbol{\Lambda}^{-1}.$$ \label{res:GfullResult} \end{result} Result \ref{res:GfullResult} is not novel, since the $G=G_{\mbox{\tiny{\rm full}}}$ case corresponds to $\boldsymbol{X}$ having a Wishart distribution. In other words, (\ref{eq:pXfullCase}) is simply the density function of a Wishart random matrix. However, it is worth pointing out that the shape parameter used here is different from that commonly used for the Wishart distribution.
For example, in Table A.1 of Gelman \textit{et al.} (2014) the shape parameter is denoted by $\nu$ and is related to the shape parameter of (\ref{eq:pXfullCase}) according to $$\nu=\delta+d-1$$ and therefore the two shape parameters are the same only in the special case of $\boldsymbol{X}$ being scalar. Also, note that Definition \ref{def:GWdefn} and Result \ref{res:GfullResult} use the rate matrix parameterisation, whereas Table A.1 of Gelman \textit{et al.} (2014) uses the scale matrix parameterisation for the Wishart distribution. The scale matrix is $\boldsymbol{\Lambda}^{-1}$. \subsubsection{The $G=G_{\mbox{\tiny{\rm diag}}}$ Special Case} Before treating the $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\delta,\boldsymbol{\Lambda})$ situation, we define the notation \begin{equation} x\sim\mbox{Gamma}(\alpha,\beta) \label{eq:FriedCigar} \end{equation} to mean that the scalar random variable $x$ has a Gamma distribution with shape parameter $\alpha$ and rate parameter $\beta$. The density function corresponding to (\ref{eq:FriedCigar}) is $$p(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}\exp(-\beta\,x)I(x>0).$$ The $\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\delta,\boldsymbol{\Lambda})$ distribution is tied intimately to the Gamma distribution, as Result \ref{res:GdiagResult} shows. \begin{result} Suppose that the $d\times d$ random matrix $\boldsymbol{X}$ is such that $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\delta,\boldsymbol{\Lambda})$. Then the non-zero entries of $\boldsymbol{X}$ satisfy $$X_{jj}\stackrel{{\tiny \mbox{ind.}}}{\sim}\mbox{Gamma}\big({\textstyle{\frac{1}{2}}}\delta,{\textstyle{\frac{1}{2}}}\Lambda_{jj}\big),\quad 1\le j\le d,$$ where $\Lambda_{jj}$ is the $j$th diagonal entry of $\boldsymbol{\Lambda}$. The density function of $\boldsymbol{X}$ is {\setlength\arraycolsep{3pt} \begin{eqnarray*} p(\boldsymbol{X})&=&\frac{|\boldsymbol{\Lambda}|^{\delta/2}}{2^{d\delta/2}\Gamma(\delta/2)^d} \,|\boldsymbol{X}|^{(\delta-2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X})\}\prod_{j=1}^d I(X_{jj}>0)\\[1ex] &=&\frac{\prod_{j=1}^d\Lambda_{jj}^{\delta/2}}{2^{d\delta/2}\Gamma(\delta/2)^d}\, \prod_{j=1}^d\,X_{jj}^{(\delta-2)/2}\,\exp\left(-{\textstyle{\frac{1}{2}}}\sum_{j=1}^d\Lambda_{jj}\,X_{jj}\right) \prod_{j=1}^d I(X_{jj}>0). \end{eqnarray*} } The mean of $\boldsymbol{X}$ is $$E(\boldsymbol{X})=\delta\,\boldsymbol{\Lambda}^{-1}=\delta\,\mbox{\rm diag}(1/\Lambda_{11},\ldots,1/\Lambda_{dd}).$$ \label{res:GdiagResult} \end{result} \noindent We now make some remarks concerning Result \ref{res:GdiagResult}. \begin{enumerate} \item When $G=G_{\mbox{\tiny{\rm diag}}}$ the off-diagonal entries of $\boldsymbol{\Lambda}$ have no effect on the distribution of $\boldsymbol{X}$. In other words, the declaration $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\delta,\boldsymbol{\Lambda})$ is equivalent to the declaration $\boldsymbol{X}\sim\mbox{\rm G-Wishart}\big(G_{\mbox{\tiny{\rm diag}}},\delta,\mbox{diag}\{\mbox{diagonal}(\boldsymbol{\Lambda})\}\big)$. \item The declaration $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\delta,\boldsymbol{\Lambda})$ is equivalent to the diagonal entries of $\boldsymbol{X}$ being independent Gamma random variables with shape parameter ${\textstyle{\frac{1}{2}}}\delta$ and rate parameters equalling the diagonal entries of ${\textstyle{\frac{1}{2}}}\boldsymbol{\Lambda}$.
\item Even though statements concerning the distributions of independent random variables may seem simpler than a statement of the form $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\delta,\boldsymbol{\Lambda})$, the major thrust of this article is the elegance provided by key variational message passing fragment updates being expressed in terms of a single family of distributions. \end{enumerate} \subsubsection{Exponential Family Form and Natural Parameterisation} Suppose that $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G,\delta,\boldsymbol{\Lambda})$. Then for $\boldsymbol{X}$ such that $p(\boldsymbol{X})>0$ we have \begin{equation} p(\boldsymbol{X})\propto \exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{X}|\\ \mbox{\rm vech}(\boldsymbol{X}) \end{array} \right]^T \left[ \begin{array}{c} {\textstyle{\frac{1}{2}}}(\delta-2)\\ -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}) \end{array} \right] \right\}=\exp\{\boldsymbol{T}(\boldsymbol{X})^T\boldsymbol{\eta}\} \label{eq:vechFirst} \end{equation} where $$ \boldsymbol{T}(\boldsymbol{X})\equiv\left[ \begin{array}{c} \log|\boldsymbol{X}|\\ \mbox{\rm vech}(\boldsymbol{X}) \end{array} \right] \quad\mbox{and}\quad \boldsymbol{\eta} \equiv \left[ \begin{array}{c} \eta_1\\ \boldsymbol{\eta}_2 \end{array} \right] = \left[ \begin{array}{c} {\textstyle{\frac{1}{2}}}(\delta-2)\\ -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}) \end{array} \right] $$ are, respectively, sufficient statistic and natural parameter vectors. The inverse of the natural parameter mapping is $$ \left\{ \setlength\arraycolsep{1pt}{ \begin{array}{rcl} \delta&=&2(\eta_1+1),\\[1ex] \boldsymbol{\Lambda}&=&-2\,\mbox{\rm vec}^{-1}(\boldsymbol{D}_d^{+T}\boldsymbol{\eta}_2). \end{array} } \right. $$ Note that, throughout this article, we use $\mbox{\rm vech}(\boldsymbol{X})$ rather than $\mbox{\rm vec}(\boldsymbol{X})$ since the former is more compact and avoids duplications. Section \ref{sec:vecANDvech} in the web-supplement has further discussion on this matter. \subsection{The Inverse G-Wishart Distribution}\label{sec:IGWdistn} Suppose that $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G,\delta,\boldsymbol{\Lambda})$, where $\boldsymbol{X}$ is $d\times d$, and $\boldsymbol{Y}=\boldsymbol{X}^{-1}$. Let the density functions of $\boldsymbol{X}$ and $\boldsymbol{Y}$ be denoted by $p_{\boldsymbol{X}}$ and $p_{\boldsymbol{Y}}$ respectively. Then the density function of $\boldsymbol{Y}$ is \begin{equation} p_{\boldsymbol{Y}}(\boldsymbol{Y})=p_{\boldsymbol{X}}(\boldsymbol{Y}^{-1})\,|J(\boldsymbol{Y})| \label{eq:IGWdensFunction} \end{equation} where $$J(\boldsymbol{Y})\equiv\mbox{the determinant of}\ \frac{\partial\mbox{\rm vec}(\boldsymbol{Y}^{-1})}{\partial\mbox{\rm vec}(\boldsymbol{Y})^T}$$ is the Jacobian of the transformation. An important observation is that the form of $J(\boldsymbol{Y})$ is dependent on the graph $G$. In the case of $G$ being a decomposable graph an expression for $J(\boldsymbol{Y})$ is given by (2.4) of Letac \&\ Massam (2007), with credit given to Roverato (2000). Therefore, if $G$ is decomposable, the density function of an Inverse G-Wishart random matrix can be obtained by substitution of (2.4) of Letac \&\ Massam (2007) into (\ref{eq:IGWdensFunction}). However, depending on the complexity of $G$, simplification of the density function expression may be challenging.
With variational message passing in mind, we now turn to the $G=G_{\mbox{\tiny{\rm full}}}$ and $G=G_{\mbox{\tiny{\rm diag}}}$ special cases. The $G=G_{\mbox{\tiny{\rm diag}}}$ case is simple since it involves products of univariate density functions and we have \begin{equation} \mbox{if $G=G_{\mbox{\tiny{\rm diag}}}$ then $|J(\boldsymbol{Y})|=|\boldsymbol{Y}|^{-2}$ for any $d\in{\mathbb N}$}. \label{eq:JacForDiag} \end{equation} The $G=G_{\mbox{\tiny{\rm full}}}$ case is more challenging and is the focus of Theorem 2.1.8 of Muirhead (1982): \begin{equation} \mbox{if $G=G_{\mbox{\tiny{\rm full}}}$ then $|J(\boldsymbol{Y})|=|\boldsymbol{Y}|^{-(d+1)}$}. \label{eq:JacForFull} \end{equation} This result is also stated as Lemma 2.1 in Letac \&\ Massam (2007). Combining (\ref{eq:IGWdensFunction}), (\ref{eq:JacForDiag}) and (\ref{eq:JacForFull}) we have: \begin{result} Suppose that $\boldsymbol{Y}=\boldsymbol{X}^{-1}$ where $\boldsymbol{X}\sim\mbox{\rm G-Wishart}(G,\delta,\boldsymbol{\Lambda})$ and $\boldsymbol{X}$ is $d\times d$. \begin{itemize} % \item[(a)] If $G=G_{\mbox{\tiny{\rm full}}}$ then $p(\boldsymbol{Y})\propto|\boldsymbol{Y}|^{-(\delta+2d)/2} \exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{Y}^{-1})\}$. % \item[(b)] If $G=G_{\mbox{\tiny{\rm diag}}}$ then $p(\boldsymbol{Y})\propto|\boldsymbol{Y}|^{-(\delta+2)/2} \exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{Y}^{-1})\}$. % \end{itemize} \label{res:GWtoIGW} \end{result} Whilst Result \ref{res:GWtoIGW} only covers $G=G_{\mbox{\tiny{\rm full}}}$ or $G=G_{\mbox{\tiny{\rm diag}}}$, it shows that, in these special cases, the density function of an Inverse G-Wishart random matrix $\boldsymbol{Y}$ is proportional to a power of $|\boldsymbol{Y}|$ multiplied by an exponentiated trace of a matrix multiplied by $\boldsymbol{Y}^{-1}$. This form does not necessarily arise for $G\notin\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$. Since the motivating variational message passing fragment update algorithms only involve the $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ cases we focus on them for the remainder of this section. \subsubsection{The Inverse G-Wishart Distribution When $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$} For succinct statement of variational message passing fragment update algorithms involving variance and covariance matrix parameters it is advantageous to have a single Inverse G-Wishart distribution notation for the $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ cases. \begin{definition} Let $\boldsymbol{X}$ be a $d\times d$ symmetric and positive definite random matrix and $G$ be a $d$-node undirected graph such that $\boldsymbol{X}^{-1}$ respects $G$. Let $\xi>0$ and let $\boldsymbol{\Lambda}$ be a symmetric positive definite $d\times d$ matrix.
\begin{itemize} % \item[(a)] If $G=G_{\mbox{\tiny{\rm full}}}$ and $\xi$ is restricted such that $\xi>2d-2$ then we say that $\boldsymbol{X}$ has an Inverse G-Wishart distribution with graph $G$, shape parameter $\xi$ and scale matrix $\boldsymbol{\Lambda}$, and write % $$\boldsymbol{X}\sim\mbox{\rm Inverse G-Wishart}(G,\xi,\boldsymbol{\Lambda}),$$ % if and only if the non-zero values of the density function of $\boldsymbol{X}$ satisfy $$p(\boldsymbol{X})\propto|\boldsymbol{X}|^{-(\xi+2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X}^{-1})\}.$$ \item[(b)] If $G=G_{\mbox{\tiny{\rm diag}}}$ then we say that $\boldsymbol{X}$ has an Inverse G-Wishart distribution with graph $G$, shape parameter $\xi$ and scale matrix $\boldsymbol{\Lambda}$, and write % $$\boldsymbol{X}\sim\mbox{\rm Inverse G-Wishart}(G,\xi,\boldsymbol{\Lambda}),$$ % if and only if the non-zero values of the density function of $\boldsymbol{X}$ satisfy $$p(\boldsymbol{X})\propto|\boldsymbol{X}|^{-(\xi+2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X}^{-1})\}.$$ % \item[(c)] If $G\notin\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ then $\boldsymbol{X}\sim\mbox{\rm Inverse G-Wishart}(G,\xi,\boldsymbol{\Lambda})$ is not defined. % \end{itemize} \label{def:IGWdefn} \end{definition} The shape parameter $\xi$ used in Definition \ref{def:IGWdefn} is a reasonable compromise between various competing parameterisation choices for the Inverse G-Wishart distribution for $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ and for use in variational message passing algorithms. It has the following attractions: \begin{itemize} \item The exponent of the determinant in the density function expression is $-(\xi+2)/2$ regardless of whether $G=G_{\mbox{\tiny{\rm full}}}$ or $G=G_{\mbox{\tiny{\rm diag}}}$, which is consistent with the G-Wishart distributional notation used in Definition \ref{def:GWdefn}. \item In the $d=1$ case $\xi$ matches the shape parameter in the most common parameterisation of the Inverse Chi-Squared distribution, such as that used in Table A.1 of Gelman \textit{et al.} (2014). \end{itemize} In the case where $\boldsymbol{X}\sim\mbox{\rm Inverse G-Wishart}(G_{\mbox{\tiny{\rm full}}},\xi,\boldsymbol{\Lambda})$ we have the following: \begin{result} If the $d\times d$ random matrix $\boldsymbol{X}$ is such that $\boldsymbol{X}\sim\mbox{\rm Inverse G-Wishart}(G_{\mbox{\tiny{\rm full}}},\xi,\boldsymbol{\Lambda})$ then $$ \begin{array}{rcl} p(\boldsymbol{X})&=&\displaystyle{\frac{|\boldsymbol{\Lambda}|^{(\xi-d+1)/2}} {2^{d(\xi-d+1)/2}\pi^{d(d-1)/4} \prod_{j=1}^d\Gamma(\frac{\xi-d-j}{2}+1)}}\, |\boldsymbol{X}|^{-(\xi+2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X}^{-1})\}\\[1ex] &&\quad\times I(\boldsymbol{X}\ \mbox{a symmetric and positive definite $d\times d$ matrix}). \end{array} $$ The mean of $\boldsymbol{X}^{-1}$ is $$E(\boldsymbol{X}^{-1})=(\xi-d+1)\,\boldsymbol{\Lambda}^{-1}.$$ \label{res:IGfullWres} \end{result} Result \ref{res:IGfullWres} follows directly from the fact that $\boldsymbol{X}\sim\mbox{\rm Inverse G-Wishart}(G_{\mbox{\tiny{\rm full}}},\xi,\boldsymbol{\Lambda})$ if and only if $\boldsymbol{X}$ has an Inverse Wishart distribution, together with established results for the density function and mean of this distribution given in, for example, Table A.1 of Gelman \textit{et al.} (2014). We now deal with the $G=G_{\mbox{\tiny{\rm diag}}}$ case. \begin{definition} Let $x$ be a random variable.
For $\delta>0$ and $\lambda>0$ we say that the random variable $x$ has an Inverse Chi-Squared distribution with shape parameter $\delta$ and rate parameter $\lambda$, and write $$x\sim\mbox{{\rm Inverse}-$\chi^2$}(\delta,\lambda),$$ if and only if $1/x\sim\mbox{Gamma}\big({\textstyle{\frac{1}{2}}}\delta,{\textstyle{\frac{1}{2}}}\lambda\big)$ in the notation of (\ref{eq:FriedCigar}). If $x\sim\mbox{{\rm Inverse}-$\chi^2$}(\delta,\lambda)$ then the density function of $x$ is $$p(x)=\frac{(\lambda/2)^{\delta/2}}{\Gamma(\delta/2)} x^{-(\delta+2)/2}\,\exp\{-(\lambda/2)\big/x\}I(x>0). $$ \end{definition} \noindent We are now ready to state: \begin{result} Suppose that the $d\times d$ random matrix $\boldsymbol{X}$ is such that $\boldsymbol{X}\sim\mbox{\rm Inverse-G-Wishart}(G_{\mbox{\tiny{\rm diag}}},\xi,\boldsymbol{\Lambda})$. Then the non-zero entries of $\boldsymbol{X}$ satisfy $$X_{jj}\stackrel{{\tiny \mbox{ind.}}}{\sim}\mbox{{\rm Inverse}-$\chi^2$}(\xi,\Lambda_{jj}),\quad 1\le j\le d,$$ where $\Lambda_{jj}$ is the $j$th diagonal entry of $\boldsymbol{\Lambda}$. The density function of $\boldsymbol{X}$ is {\setlength\arraycolsep{3pt} \begin{eqnarray*} p(\boldsymbol{X})&=&\frac{|\boldsymbol{\Lambda}|^{\xi/2}}{2^{d\xi/2}\Gamma(\xi/2)^d} \,|\boldsymbol{X}|^{-(\xi+2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X}^{-1})\}\prod_{j=1}^d I(X_{jj}>0)\\[1ex] &=&\frac{\prod_{j=1}^d\Lambda_{jj}^{\xi/2}}{2^{d\xi/2}\Gamma(\xi/2)^d} \prod_{j=1}^d\,X_{jj}^{-(\xi+2)/2}\,\exp\left\{-{\textstyle{\frac{1}{2}}}\sum_{j=1}^d(\Lambda_{jj}/X_{jj})\right\} \prod_{j=1}^d I(X_{jj}>0). \end{eqnarray*} } The mean of $\boldsymbol{X}^{-1}$ is $$E(\boldsymbol{X}^{-1})=\xi\boldsymbol{\Lambda}^{-1}=\xi\,\mbox{\rm diag}(1/\Lambda_{11},\ldots,1/\Lambda_{dd}).$$ \end{result} \subsubsection{Natural Parameter Forms and Sufficient Statistic Expectations} Suppose that $\boldsymbol{X}\sim\mbox{Inverse-G-Wishart}(G,\xi,\boldsymbol{\Lambda})$ where $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$. Then for $\boldsymbol{X}$ such that $p(\boldsymbol{X})>0$, $$ p(\boldsymbol{X})\propto\exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{X}|\\ \mbox{\rm vech}(\boldsymbol{X}^{-1}) \end{array} \right]^T \left[ \begin{array}{c} -(\xi+2)/2\\ -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}) \end{array} \right] \right\}=\exp\{\boldsymbol{T}(\boldsymbol{X})^T\boldsymbol{\eta}\} $$ where \begin{equation} \boldsymbol{T}(\boldsymbol{X})\equiv\left[ \begin{array}{c} \log|\boldsymbol{X}|\\[1ex] \mbox{\rm vech}(\boldsymbol{X}^{-1}) \end{array} \right] \quad\mbox{and}\quad \boldsymbol{\eta} \equiv \left[ \begin{array}{c} \eta_1\\ \boldsymbol{\eta}_2 \end{array} \right] = \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}(\xi+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}) \end{array} \right] \label{eq:IsawLara} \end{equation} are, respectively, sufficient statistic and natural parameter vectors. The inverse of the natural parameter mapping is \begin{equation} \left\{ \begin{array}{rcl} \xi&=&-2\eta_1-2,\\[1ex] \boldsymbol{\Lambda}&=&-2\,\mbox{\rm vec}^{-1}(\boldsymbol{D}_d^{+T}\boldsymbol{\eta}_2). \end{array} \right. \label{eq:IGWinvMap} \end{equation} As explained in Section \ref{sec:vecANDvech} of the web-supplement, alternatives to (\ref{eq:IsawLara}) are those that use $\mbox{\rm vec}(\boldsymbol{X})$ instead of $\mbox{\rm vech}(\boldsymbol{X})$. Throughout this article we use the more compact ``$\mbox{\rm vech}$'' form.
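The mapping (\ref{eq:IsawLara}) and its inverse (\ref{eq:IGWinvMap}) are straightforward to code. A minimal \textsf{R} sketch (ours; the function names are illustrative and the \texttt{duplication.matrix()} and \texttt{Dd.plus()} helpers from the earlier sketch are assumed) is:
\begin{verbatim}
# Illustrative R functions (names are ours) for the Inverse G-Wishart
# natural parameter mapping and its inverse:
igw.to.natural <- function(xi, Lambda)
{
   d <- nrow(Lambda)  ;  D <- duplication.matrix(d)
   return(c(-(xi + 2)/2, -0.5*as.vector(t(D)%*%as.vector(Lambda))))
}
igw.from.natural <- function(eta, d)
{
   D <- duplication.matrix(d)
   # Lambda = -2 vec^{-1}(D_d^{+T} eta_2):
   Lambda <- matrix(-2*as.vector(t(Dd.plus(D))%*%eta[-1]), d, d)
   return(list(xi = -2*eta[1] - 2, Lambda = Lambda))
}

# Round-trip check:
Lambda <- crossprod(matrix(rnorm(9), 3, 3))
eta <- igw.to.natural(4.5, Lambda)
igw.from.natural(eta, 3)        # recovers xi = 4.5 and Lambda
\end{verbatim}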
The following result is fundamental to succinct formulation of the covariance and variance parameter fragment updates for variational message passing: \begin{result} Suppose that $\boldsymbol{X}$ is a $d\times d$ random matrix having an Inverse G-Wishart distribution with graph $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ and natural parameter vector $\boldsymbol{\eta}$. Then $$E(\boldsymbol{X}^{-1})=\left\{ \begin{array}{ll} \{\eta_1+{\textstyle{\frac{1}{2}}}(d+1)\}\{\mbox{\rm vec}^{-1}(\boldsymbol{D}_d^{+T}\boldsymbol{\eta}_2)\}^{-1} & \mbox{if}\ \ G=G_{\mbox{\tiny{\rm full}}}\\[1.5ex] (\eta_1+1)\{\mbox{\rm vec}^{-1}(\boldsymbol{D}_d^{+T}\boldsymbol{\eta}_2)\}^{-1}& \mbox{if}\ \ G=G_{\mbox{\tiny{\rm diag}}}. \end{array} \right. $$ \label{eq:recipMoment} \end{result} \subsubsection{Relationships with the Hyper Inverse Wishart Distributions} Throughout this article we follow the G-Wishart nomenclature as used by, for example, Atay-Kayis \&\ Massam (2005), Letac \&\ Massam (2007) and Uhler \textit{et al.} (2018) in our naming of the Inverse G-Wishart family. Some earlier articles, such as Roverato (2000), use the term \emph{Hyper Inverse Wishart} for the same family of distributions. The naming used here is in keeping with the more recent literature concerning Wishart distributions with graphical restrictions. \section{Connections with Some Recent Covariance Matrix Distributions}\label{sec:HWandMatrixFconns} Recently, Huang \&\ Wand (2013) and Mulder \&\ Pericchi (2018) developed covariance matrix distributional families that are attractive in terms of the types of marginal prior distributions that can be imposed on interpretable parameters within the covariance matrix. Mulder \&\ Pericchi (2018) referred to their proposal as the \emph{Matrix-$F$} family of distributions. \subsection{The Huang-Wand Family of Distributions} A major motivation for working with the Inverse G-Wishart distribution is the fact that the family of marginally non-informative priors proposed in Huang \&\ Wand (2013) can be expressed succinctly in terms of the $\mbox{Inverse-G-Wishart}(G,\xi,\boldsymbol{\Lambda})$ family where $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$. This means that variational message passing fragments that cater for Huang-Wand prior specification, as well as Inverse-Wishart prior specification, only require natural parameter vector manipulations within a single distributional family. If $\boldsymbol{\Sigma}$ is a $d\times d$ symmetric positive definite matrix then, for $\nu_{\mbox{\tiny HW}}>0$ and $s_1,\ldots,s_d>0$, the specification \begin{equation} \begin{array}{c} \boldsymbol{\Sigma}|\boldsymbol{A}\sim\mbox{Inverse-G-Wishart}\Big(G_{\mbox{\tiny{\rm full}}},\nu_{\mbox{\tiny HW}}+2d-2,\boldsymbol{A}^{-1}\Big),\\[2ex] \boldsymbol{A}\sim\mbox{Inverse-G-Wishart}\Big(G_{\mbox{\tiny{\rm diag}}},1,\big\{\nu_{\mbox{\tiny HW}}\,\mbox{diag}(s_1^2,\ldots,s_d^2)\big\}^{-1}\Big) \end{array} \label{eq:HuangWandGeneralPrior} \end{equation} places a distribution of the type given in Huang \&\ Wand (2013) on $\boldsymbol{\Sigma}$ with shape parameter $\nu_{\mbox{\tiny HW}}$ and scale parameters $s_1,\ldots,s_d$. The specification (\ref{eq:HuangWandGeneralPrior}) matches (2) of Huang \&\ Wand (2013) but with some differences in notation. Firstly, $d$ is used for matrix dimension here rather than $p$ in Huang \&\ Wand (2013). Also, the $s_j$, $1\le j\le d$, scale parameters are denoted by $A_j$ in Huang \&\ Wand (2013).
The $a_j$ auxiliary variables in (2) of Huang \&\ Wand (2013) are related to the matrix $\boldsymbol{A}$ via the expression $\mbox{diag}(a_1,\ldots,a_d)=2\nu_{\mbox{\tiny HW}}\boldsymbol{A}$. As discussed in Huang \&\ Wand (2013), special cases of (\ref{eq:HuangWandGeneralPrior}) correspond to marginally noninformative prior specification of the covariance matrix $\boldsymbol{\Sigma}$ in the sense that the standard deviation parameters $\sigma_j\equiv(\boldsymbol{\Sigma})^{1/2}_{jj}$, $1\le j\le d$, can have Half-$t$ priors with arbitrarily large scale parameters, controlled by the $s_j$ values. This is in keeping with the advice given in Gelman (2006). Moreover, correlation parameters $\rho_{jj'}\equiv(\boldsymbol{\Sigma})_{jj'}/(\sigma_j\sigma_{j'})$, for each $j\ne j'$ pair, have a Uniform distribution over the interval $(-1,1)$ when $\nu_{\mbox{\tiny HW}}=2$. We refer to this special case as the \emph{Huang-Wand} marginally non-informative prior distribution with scale parameters $s_1,\ldots,s_d$ and write \begin{equation} \boldsymbol{\Sigma}\sim\mbox{Huang-Wand}(s_1,\ldots,s_d) \label{eq:HWprior} \end{equation} as a shorthand for (\ref{eq:HuangWandGeneralPrior}) with $\nu_{\mbox{\tiny HW}}=2$. \subsection{The Matrix-$F$ Family of Distributions} For $\nu_{\mbox{\tiny MP}}>d-1$, $\delta_{\mbox{\tiny MP}}>0$ and $\boldsymbol{B}_{\mbox{\tiny MP}}$ a $d\times d$ symmetric positive definite matrix, Mulder \&\ Pericchi (2018) defined a $d\times d$ random matrix $\boldsymbol{\Sigma}$ to have a Matrix-$F$ distribution, written \begin{equation} \boldsymbol{\Sigma}\sim F(\nu_{\mbox{\tiny MP}},\delta_{\mbox{\tiny MP}},\boldsymbol{B}_{\mbox{\tiny MP}}), \label{eq:CockOfTheNorth} \end{equation} if its density function has the form $$p(\boldsymbol{\Sigma})\propto \frac{|\boldsymbol{\Sigma}|^{(\nu_{\mbox{\tiny MP}}-d-1)/2}\, I(\boldsymbol{\Sigma}\ \mbox{symmetric and positive definite})} {|\boldsymbol{I}_d+\boldsymbol{\Sigma}\boldsymbol{B}_{\mbox{\tiny MP}}^{-1}|^{(\nu_{\mbox{\tiny MP}}+\delta_{\mbox{\tiny MP}}+d-1)/2}}. $$ However, standard manipulations of results given in Mulder \&\ Pericchi (2018) show that specification (\ref{eq:CockOfTheNorth}) is equivalent to \begin{equation} \begin{array}{c} \boldsymbol{\Sigma}|\boldsymbol{A}\sim\mbox{Inverse-G-Wishart}\Big(G_{\mbox{\tiny{\rm full}}},\delta_{\mbox{\tiny MP}}+2d-2,\boldsymbol{A}^{-1}\Big),\\[2ex] \boldsymbol{A}\sim\mbox{Inverse-G-Wishart}\Big(G_{\mbox{\tiny{\rm full}}},\nu_{\mbox{\tiny MP}}+d-1,\boldsymbol{B}_{\mbox{\tiny MP}}^{-1}\Big) \end{array} \label{eq:FMatrixPrior} \end{equation} in the notation used in the current paper. An important difference between (\ref{eq:HuangWandGeneralPrior}) and (\ref{eq:FMatrixPrior}) is that the former involves $\boldsymbol{A}$ having an Inverse G-Wishart distribution with the restriction $G=G_{\mbox{\tiny{\rm diag}}}$, whilst the latter has $G=G_{\mbox{\tiny{\rm full}}}$. Section 2.4 of Mulder \&\ Pericchi (2018) compares the two specifications in terms of the types of prior distributions that can be imposed on standard deviation and correlation parameters. \section{Variational Message Passing Background}\label{sec:VMPbackground} The overarching goal of this article is to identify and specify algebraic primitives for flexible imposition of covariance matrix priors within a variational message passing framework. In Wand (2017) these algebraic primitives are organised into fragments. This formalism is also used in Nolan \&\ Wand (2017), Maestrini \&\ Wand (2018) and McLean \&\ Wand (2019).
Even though variational message passing is a central theme of this article, we do not provide a detailed description of it here. Instead we refer the reader to Sections 2--4 of Wand (2017) for the relevant variational message passing background material. Since the notational conventions for messages used in these references also appear throughout the remainder of this article, we summarize them here. If $f$ denotes a generic factor and $\theta$ denotes a generic stochastic variable that is a neighbour of $f$ in the factor graph then the message passed from $f$ to $\theta$ and the message passed from $\theta$ to $f$ are both functions of $\theta$ and are denoted by, respectively, $$\biggerm_{\mbox{\scriptsize$f\rightarrow\theta$}}(\theta)\quad\mbox{and}\quad\biggerm_{\mbox{\scriptsize$\theta\rightarrow f$}}(\theta).$$ Typically, the messages are proportional to an exponential family density function with sufficient statistic $\boldsymbol{T}(\theta)$, and we have $$\biggerm_{\mbox{\scriptsize$f\rightarrow\theta$}}(\theta)\propto\exp\left\{\boldsymbol{T}(\theta)^T\biggerbdeta_{\mbox{\scriptsize$f\rightarrow\theta$}}\right\} \quad\mbox{and}\quad \biggerm_{\mbox{\scriptsize$\theta\rightarrow f$}}(\theta)\propto\exp\left\{\boldsymbol{T}(\theta)^T\biggerbdeta_{\mbox{\scriptsize$\theta\rightarrow f$}}\right\} $$ where $\biggerbdeta_{\mbox{\scriptsize$f\rightarrow\theta$}}$ and $\biggerbdeta_{\mbox{\scriptsize$\theta\rightarrow f$}}$ are the message natural parameter vectors. Such vectors play a central role in variational message passing iterative algorithms. We also adopt the notation $$\biggerbdeta_{\mbox{\scriptsize$f\leftrightarrow\theta$}}\equiv\biggerbdeta_{\mbox{\scriptsize$f\rightarrow\theta$}}+\biggerbdeta_{\mbox{\scriptsize$\theta\rightarrow f$}}.$$ \section{The Inverse G-Wishart Prior Fragment}\label{sec:IGWprior} The Inverse G-Wishart prior fragment corresponds to the following prior imposition on a $d\times d$ covariance matrix $\boldsymbol{\Theta}$: $$\boldsymbol{\Theta}\sim\mbox{Inverse-G-Wishart}(G_{\scriptscriptstyle\boldsymbol{\Theta}},\xi_{\scriptscriptstyle\boldsymbol{\Theta}},\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}})$$ for a $d$-node undirected graph $G_{\scriptscriptstyle\boldsymbol{\Theta}}$, scalar shape parameter $\xi_{\scriptscriptstyle\boldsymbol{\Theta}}$ and scale matrix $\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}$. The fragment's factor is {\setlength\arraycolsep{3pt} \begin{eqnarray*} p(\boldsymbol{\Theta})&\propto&|\boldsymbol{\Theta}|^{-(\xi_{\scriptscriptstyle\boldsymbol{\Theta}}+2)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{tr}(\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}\boldsymbol{\Theta}^{-1})\}\\[2ex] &&\quad\times I(\boldsymbol{\Theta}\ \mbox{is symmetric and positive definite and}\ \boldsymbol{\Theta}^{-1}\ \mbox{respects}\ G_{\scriptscriptstyle\boldsymbol{\Theta}}).
\end{eqnarray*} } \begin{figure}[h] \centering {\includegraphics[width=0.25\textwidth]{InvGWishPriorFrag.pdf}} \caption{\it Diagram of the Inverse G-Wishart prior fragment.} \label{fig:InvGWishPriorFrag} \end{figure} Figure \ref{fig:InvGWishPriorFrag} is a diagram of the fragment, which shows that its only factor to stochastic node message is $$\biggerm_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}(\bTheta)\propto p(\boldsymbol{\Theta})$$ which leads to $$\biggerm_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}(\bTheta)=\exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{\Theta}|\\[1ex] \mbox{\rm vech}(\boldsymbol{\Theta}^{-1}) \end{array} \right]^T \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}\,(\xi_{\scriptscriptstyle\boldsymbol{\Theta}}+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\,\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}) \end{array} \right] \right\}. $$ Therefore, the natural parameter update is $$\biggerbdeta_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}\longleftarrow\left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}\,(\xi_{\scriptscriptstyle\boldsymbol{\Theta}}+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\,\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}) \end{array} \right]. $$ Apart from the natural parameter vector, the graph should also be passed out of the fragment. This entails the update: $$G_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}\longleftarrow G_{\scriptscriptstyle\boldsymbol{\Theta}}.$$ Algorithm \ref{alg:InvGWishPriorFrag} provides the inputs, updates and outputs for the Inverse G-Wishart prior fragment. \begin{algorithm}[!th] \begin{center} \begin{minipage}[t]{165mm} \textbf{Hyperparameter Inputs:} $G_{\scriptscriptstyle\boldsymbol{\Theta}},\xi_{\scriptscriptstyle\boldsymbol{\Theta}},\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}$. \vskip3mm\noindent\noindent \textbf{Updates:} \begin{itemize} \setlength\itemsep{0pt} \item[] $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}\longleftarrow\left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}\,(\xi_{\scriptscriptstyle\boldsymbol{\Theta}}+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\,\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}) \end{array} \right]$\ \ \ ;\ \ \ $G_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}\longleftarrow G_{\scriptscriptstyle\boldsymbol{\Theta}}$ \end{itemize} \textbf{Outputs:} $G_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}$. \vskip3mm\noindent \end{minipage} \end{center} \caption{\it The inputs, updates and outputs for the Inverse G-Wishart prior fragment.} \label{alg:InvGWishPriorFrag} \end{algorithm} \section{The Iterated Inverse G-Wishart Fragment}\label{sec:iterIGWfrag} The iterated Inverse G-Wishart fragment corresponds to the following specification involving a $d\times d$ covariance matrix $\boldsymbol{\Sigma}$: $$\boldsymbol{\Sigma}|\boldsymbol{A}\sim\mbox{Inverse-G-Wishart}(G,\xi,\boldsymbol{A}^{-1})$$ where $G$ is a $d$-node undirected graph such that $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ and $\xi$ is a particular deterministic value of the Inverse G-Wishart shape parameter, in accordance with Definition \ref{def:IGWdefn}.
Figure \ref{fig:iterInvGWishFrag} is a diagram of this fragment, showing that it has a factor $p(\boldsymbol{\Sigma}|\boldsymbol{A})$ connected to two stochastic nodes $\boldsymbol{\Sigma}$ and $\boldsymbol{A}$. \begin{figure}[h] \centering {\includegraphics[width=0.4\textwidth]{iterInvGWishFrag.pdf}} \caption{\it Diagram of the iterated Inverse G-Wishart fragment.} \label{fig:iterInvGWishFrag} \end{figure} The factor of the iterated Inverse G-Wishart fragment is, as a function of both $\boldsymbol{\Sigma}$ and $\boldsymbol{A}$, {\setlength\arraycolsep{3pt} \begin{eqnarray*} p(\boldsymbol{\Sigma}|\boldsymbol{A})\propto\left\{\begin{array}{l} |\boldsymbol{A}|^{-(\xi-d+1)/2}|\boldsymbol{\Sigma}|^{-(\xi+2)/2}\, \exp\{-{\textstyle{\frac{1}{2}}}\mbox{tr}(\boldsymbol{A}^{-1}\boldsymbol{\Sigma}^{-1})\}\quad\mbox{if $G=G_{\mbox{\tiny{\rm full}}}$,}\\[2ex] |\boldsymbol{A}|^{-\xi/2}|\boldsymbol{\Sigma}|^{-(\xi+2)/2}\,\exp\{-{\textstyle{\frac{1}{2}}}\mbox{tr}(\boldsymbol{A}^{-1}\boldsymbol{\Sigma}^{-1})\} \quad\mbox{if $G=G_{\mbox{\tiny{\rm diag}}}$.} \end{array} \right. \end{eqnarray*} } As shown in Section \ref{sec:drvFirstMsg} of the web-supplement, both of the factor to stochastic node messages of this fragment, $$\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}(\bSigma)\quad\mbox{and}\quad\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}(\bA),$$ are proportional to Inverse G-Wishart density functions with graph $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$. We assume the following conjugacy constraints: \begin{center} \begin{minipage}[t]{130mm} All messages passed to $\boldsymbol{\Sigma}$ and $\boldsymbol{A}$ from outside the fragment are proportional to Inverse G-Wishart density functions with graph $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$. The Inverse G-Wishart messages passed between $\boldsymbol{\Sigma}$ and $p(\boldsymbol{\Sigma}|\boldsymbol{A})$ have the same graph. The Inverse G-Wishart messages passed between $\boldsymbol{A}$ and $p(\boldsymbol{\Sigma}|\boldsymbol{A})$ have the same graph. \end{minipage} \end{center} \noindent Under these constraints, and in view of, for example, equation (7) of Wand (2017), the message passed from $\boldsymbol{\Sigma}$ to $p(\boldsymbol{\Sigma}|\boldsymbol{A})$ has the form $$ \biggerm_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}(\bSigma)= \exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{\Sigma}|\\[1ex] \mbox{\rm vech}(\boldsymbol{\Sigma}^{-1}) \end{array} \right]^T\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}\right\} $$ and the message passed from $\boldsymbol{A}$ to $p(\boldsymbol{\Sigma}|\boldsymbol{A})$ has the form $$ \biggerm_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}(\bA)= \exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{A}|\\[1ex] \mbox{\rm vech}(\boldsymbol{A}^{-1}) \end{array} \right]^T\biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}\right\}. $$ Algorithm \ref{alg:IterInvGWishFrag} gives the full set of updates of the message natural parameter vectors and graphs for the iterated Inverse-G-Wishart fragment. The derivation of Algorithm \ref{alg:IterInvGWishFrag} is given in Section \ref{sec:mainAlgDrv} of the web-supplement.
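To make the updates of Algorithm \ref{alg:IterInvGWishFrag} below concrete, the following minimal \textsf{R} sketch (ours, not the article's supplementary code; function names are illustrative and the helpers from the earlier sketches are assumed) implements the core natural parameter computations. Here \texttt{eta.Sigma} and \texttt{eta.A} denote the two-way sums of the messages passed between the factor and, respectively, $\boldsymbol{\Sigma}$ and $\boldsymbol{A}$, and \texttt{G.Sigma} and \texttt{G.A} denote the corresponding message graphs, coded as \texttt{"full"} or \texttt{"diag"}:
\begin{verbatim}
# Minimal R sketch (ours) of the core updates in Algorithm 2:
iter.igw.update <- function(xi, d, eta.Sigma, eta.A, G.Sigma, G.A)
{
   D <- duplication.matrix(d)  ;  Dp <- Dd.plus(D)
   omega <- function(G) if (G == "full") (d + 1)/2 else 1
   E.inv <- function(eta, G)    # E(X^{-1}) from the natural parameters
      (eta[1] + omega(G))*solve(matrix(t(Dp)%*%eta[-1], d, d))
   E.A.inv <- E.inv(eta.A, G.A)
   if (G.Sigma == "diag") E.A.inv <- diag(diag(E.A.inv), nrow = d)
   eta.to.Sigma <- c(-(xi + 2)/2, -0.5*as.vector(t(D)%*%as.vector(E.A.inv)))
   E.Sigma.inv <- E.inv(eta.Sigma, G.Sigma)
   if (G.A == "diag") E.Sigma.inv <- diag(diag(E.Sigma.inv), nrow = d)
   omega2 <- omega(G.Sigma)
   eta.to.A <- c(-(xi + 2 - 2*omega2)/2,
                 -0.5*as.vector(t(D)%*%as.vector(E.Sigma.inv)))
   return(list(eta.to.Sigma = eta.to.Sigma, eta.to.A = eta.to.A))
}
\end{verbatim}
In this notation, the single update of Algorithm \ref{alg:InvGWishPriorFrag} is simply \texttt{igw.to.natural(xi.Theta, Lambda.Theta)} from the earlier sketch.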
\begin{algorithm}[!th] \begin{center} \begin{minipage}[t]{165mm} \textbf{Graph Input:} $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$.\\[1ex] \textbf{Shape Parameter Input:} $\xi>0$.\\[1ex] \textbf{Message Graph Input:}\ $G_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$.\\[1ex] \textbf{Natural Parameter Inputs:} $\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}$,\ $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}$,\ $\biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}$,\ $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}$.\\[1ex] \textbf{Updates:} \begin{itemize} \setlength\itemsep{0pt} \item[] $\GSUBpSigmaATOSigma\longleftarrow G$\ \ \ ;\ \ \ $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}\longleftarrow G_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}$ \item[] $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}+\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}$ \item[] $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}+\biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}$ \item[] If $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm full}}}$ then $\omega_1\longleftarrow(d+1)/2$ \item[] If $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm diag}}}$ then $\omega_1\longleftarrow 1$ \item[] $E_{q}(\boldsymbol{A}^{-1})\longleftarrow\Big\{\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\big)_1+\omega_1\Big\} \left\{\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_d^{+T}\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\big)_2\Big)\right\}^{-1}$ \item[] If $\GSUBpSigmaATOSigma=G_{\mbox{\tiny{\rm diag}}}$ then $E_{q}(\boldsymbol{A}^{-1})\longleftarrow\mbox{diag}\left\{\mbox{diagonal}\Big(E_{q}(\boldsymbol{A}^{-1})\Big)\right\}$ \item[] $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}\longleftarrow \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}\,(\xi+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}\Big(E_{q}(\boldsymbol{A}^{-1})\Big) \end{array} \right]$ \item[] If $\GSUBpSigmaATOSigma=G_{\mbox{\tiny{\rm full}}}$ then $\omega_2\longleftarrow(d+1)/2$ \item[] If $\GSUBpSigmaATOSigma=G_{\mbox{\tiny{\rm diag}}}$ then $\omega_2\longleftarrow 1$ \item[] $E_{q}(\boldsymbol{\Sigma}^{-1})\longleftarrow\Big\{\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\big)_1+\omega_2\Big\} \left\{\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_d^{+T}\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\big)_2\Big)\right\}^{-1}$ \item[]If $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm diag}}}$ then $E_{q}(\boldsymbol{\Sigma}^{-1})\longleftarrow\mbox{diag}\left\{\mbox{diagonal}\Big(E_{q}(\boldsymbol{\Sigma}^{-1})\Big)\right\}$ \item[] $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}\longleftarrow \left[ \begin{array}{c} -(\xi+2-2\omega_2)/2 \\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}\Big(E_{q}(\boldsymbol{\Sigma}^{-1})\Big) \end{array} \right]$ \end{itemize} \textbf{Outputs:}
$\GSUBpSigmaATOSigma,G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}},\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}},\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}$. \vskip3mm\noindent \end{minipage} \end{center} \caption{\it The inputs, updates and outputs for the iterated Inverse G-Wishart fragment.} \label{alg:IterInvGWishFrag} \end{algorithm} \null\vfill\eject \subsection{Corrections to Section 4.1.3 of Wand (2017)} The iterated Inverse G-Wishart fragment was introduced in Section 4.1.3 of Wand (2017) and it is one of the five fundamental fragments of semiparametric regression given in Table 1 of that article. However, there are some errors due to the author of Wand (2017) failing to recognise particular subtleties regarding the Inverse G-Wishart distribution, as discussed in Section \ref{sec:IGWdistn}. We now point out misleading or erroneous aspects in Section 4.1.3 of Wand (2017). Firstly, in Wand (2017) $\boldsymbol{\Theta}_1$ plays the role of $\boldsymbol{\Sigma}$ and $\boldsymbol{\Theta}_2$ plays the role of $\boldsymbol{A}$. The dimension of $\boldsymbol{\Theta}_1$ and $\boldsymbol{\Theta}_2$ is denoted by $d^{\Theta}$. The first displayed equation of Section 4.1.3 is \begin{equation} \boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2\sim\mbox{Inverse-G-Wishart}(G,\kappa,\boldsymbol{\Theta}_2^{-1}) \label{eq:XmasEve} \end{equation} for $\kappa>d^{\Theta}-1$ but it is only in the $G=G_{\mbox{\tiny{\rm full}}}$ case that such a statement is reasonable for general $d^{\Theta}\in{\mathbb N}$. When $G=G_{\mbox{\tiny{\rm full}}}$ then $\kappa=\xi-d^{\Theta}+1$ according to the notation used in the current article. Therefore, (\ref{eq:XmasEve}) involves a different parameterisation to that used throughout this article. Hence, our first correction is to replace the first displayed equation of Section 4.1.3 of Wand (2017) by: $$\boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2\sim\mbox{Inverse-G-Wishart}(G,\xi,\boldsymbol{\Theta}_2^{-1})$$ where $\xi>0$ if $G=G_{\mbox{\tiny{\rm diag}}}$ and $\xi>2d^{\Theta}-2$ if $G=G_{\mbox{\tiny{\rm full}}}$. The following sentence in Section 4.1.3 of Wand (2017): ``The fragment factor is of the form $$p(\boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2)\propto|\boldsymbol{\Theta}_2|^{-\kappa/2} |\boldsymbol{\Theta}_1|^{-(\kappa+d^{\Theta}+1)/2} \exp\left\{-{\textstyle{\frac{1}{2}}}\mbox{tr}(\boldsymbol{\Theta}_1^{-1}\boldsymbol{\Theta}_2^{-1})\right\}\mbox{\null''} $$ should instead be ``The fragment factor is of the form $$p(\boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2)\propto \left\{ \begin{array}{l} |\boldsymbol{\Theta}_2|^{-(\xi-d^{\Theta}+1)/2} |\boldsymbol{\Theta}_1|^{-(\xi+2)/2} \exp\left\{-{\textstyle{\frac{1}{2}}}\mbox{tr}(\boldsymbol{\Theta}_1^{-1}\boldsymbol{\Theta}_2^{-1})\right\} \quad\mbox{if $G=G_{\mbox{\tiny{\rm full}}}$,}\\[1ex] |\boldsymbol{\Theta}_2|^{-\xi/2} |\boldsymbol{\Theta}_1|^{-(\xi+2)/2} \exp\left\{-{\textstyle{\frac{1}{2}}}\mbox{tr}(\boldsymbol{\Theta}_1^{-1}\boldsymbol{\Theta}_2^{-1})\right\} \quad\mbox{if $G=G_{\mbox{\tiny{\rm diag}}}$.''}\\[1ex] \end{array} \right.
$$ In equation (31) of Wand (2017), the first entry of the vector on the right-hand side of the $\longleftarrow$ should be $$-(\xi+2)/2\quad\mbox{rather than}\quad -(\kappa+d^{\Theta}+1)/2.$$ To match the correct parameterisation of the Inverse G-Wishart distribution, as used in the current article, equation (32) of Wand (2017) should be $$\mbox{\null``}E(\boldsymbol{X}^{-1})\quad\mbox{where}\quad \boldsymbol{X}\sim\mbox{Inverse-G-Wishart}(G,\xi,\boldsymbol{\Lambda})\mbox{\null''}.$$ The equation in Section 4.1.3 of Wand (2017): $$ \mbox{\null``} \etaSUBpThetaOneThetaTwoTOThetaTwo\longleftarrow \left[ \begin{array}{c} -\kappa/2\\[1ex] -{\textstyle{\frac{1}{2}}}\mbox{\rm vec}\Big( E_{\mbox{\footnotesize$p(\boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2)\to\boldsymbol{\Theta}_2$}} (\boldsymbol{\Theta}_1^{-1})\Big) \end{array} \right]\mbox{\null''} $$ should be replaced by $$ \mbox{\null``} \etaSUBpThetaOneThetaTwoTOThetaTwo\longleftarrow \left[ \begin{array}{c} -(\xi+2-2\omega_2)/2\\[1ex] -{\textstyle{\frac{1}{2}}}\mbox{\rm vec}\Big( E_{\mbox{\footnotesize$p(\boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2)\to\boldsymbol{\Theta}_2$}} (\boldsymbol{\Theta}_1^{-1})\Big) \end{array} \right] $$ where $\omega_2$ depends on the graph of the Inverse G-Wishart distribution corresponding to $E_{\mbox{\footnotesize$p(\boldsymbol{\Theta}_1|\boldsymbol{\Theta}_2)\to\boldsymbol{\Theta}_2$}}$. If the graph is $G_{\mbox{\tiny{\rm full}}}$ then $\omega_2=(d^{\Theta}+1)/2$ and if the graph is $G_{\mbox{\tiny{\rm diag}}}$ then $\omega_2=1$.'' Lastly, the iterated Inverse G-Wishart fragment natural parameter updates given by equations (36) and (37) of Wand (2017) are affected by the oversights described in the preceding paragraphs. They should be replaced by the updates given in Algorithm \ref{alg:IterInvGWishFrag} with $\boldsymbol{\Theta}_1=\boldsymbol{\Sigma}$ and $\boldsymbol{\Theta}_2=\boldsymbol{A}$. \section{Use of the Fragments for Covariance Matrix Prior Specification}\label{sec:priorSpec} The underlying rationale for the Inverse G-Wishart prior and iterated Inverse G-Wishart fragments is their ability to facilitate the specification of a wide range of covariance matrix priors within the variational message passing framework. In the $d=1$ special case, covariance matrix parameters reduce to variance parameters and their square roots are standard deviation parameters. In this section we spell out how the fragments, and their natural parameter updates in Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag}, can be used for prior specification in important special cases. \subsection{Imposing an Inverse Chi-Squared Prior on a Variance Parameter} Let $\sigma^2$ be a variance parameter and consider the prior imposition $$\sigma^2\sim\mbox{Inverse-$\chi^2$}(\delta_{\sigma^2},\lambda_{\sigma^2})$$ for hyperparameters $\delta_{\sigma^2},\lambda_{\sigma^2}>0$, within a variational message passing scheme.
Then Algorithm \ref{alg:InvGWishPriorFrag} should be called with inputs set to: $$G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm full}}},\quad \xi_{\scriptscriptstyle\boldsymbol{\Theta}}=\delta_{\sigma^2},\quad \boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=\lambda_{\sigma^2}.$$ \subsection{Imposing an Inverse Gamma Prior on a Variance Parameter} Let $\sigma^2$ be a variance parameter and consider the prior imposition \begin{equation} \sigma^2\sim\mbox{Inverse-Gamma}(\alpha_{\sigma^2},\beta_{\sigma^2}) \label{eq:IGpriorSpec} \end{equation} for hyperparameters $\alpha_{\sigma^2},\beta_{\sigma^2}>0$. The density function corresponding to (\ref{eq:IGpriorSpec}) is $$p(\sigma^2;\alpha_{\sigma^2},\beta_{\sigma^2})\propto (\sigma^2)^{-\alpha_{\sigma^2}-1} \exp\{-\beta_{\sigma^2}/(\sigma^2)\}I(\sigma^2>0).$$ Note that the Inverse Chi-Squared and Inverse Gamma distributions are simple reparameterisations of each other since $$x\sim\mbox{Inverse-$\chi^2$}(\delta,\lambda) \quad\mbox{if and only if}\quad x\sim\mbox{Inverse-Gamma}\big({\textstyle{\frac{1}{2}}}\delta,{\textstyle{\frac{1}{2}}}\lambda\big). $$ To achieve (\ref{eq:IGpriorSpec}) Algorithm \ref{alg:InvGWishPriorFrag} should be called with inputs set to: $$G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm full}}},\quad \xi_{\scriptscriptstyle\boldsymbol{\Theta}}=2\alpha_{\sigma^2},\quad \boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=2\beta_{\sigma^2}.$$ \subsection{Imposing an Inverse Wishart Prior on a Covariance Matrix Parameter} A random matrix $\boldsymbol{X}$ is defined to have an Inverse Wishart distribution with shape parameter $\kappa$ and scale matrix $\boldsymbol{\Lambda}$, written $\boldsymbol{X}\sim\mbox{Inverse-Wishart}(\kappa,\boldsymbol{\Lambda})$, if and only if the density function of $\boldsymbol{X}$ is \begin{equation} \begin{array}{rcl} p(\boldsymbol{X})&=&\displaystyle{\frac{|\boldsymbol{\Lambda}|^{\kappa/2}}{2^{\kappa d/2}\pi^{d(d-1)/4} \prod_{j=1}^d\Gamma(\frac{\kappa+1-j}{2})}}\, |\boldsymbol{X}|^{-(\kappa+d+1)/2}\exp\{-{\textstyle{\frac{1}{2}}}\mbox{\rm tr}(\boldsymbol{\Lambda}\boldsymbol{X}^{-1})\}\\[2ex] &&\quad\times I(\boldsymbol{X}\ \mbox{a symmetric and positive definite $d\times d$ matrix}). \end{array} \label{eq:GelmanTable} \end{equation} Note that this is the common parameterisation of the Inverse Wishart distribution (e.g. Table A.1 of Gelman \textit{et al.}, 2014). Crucially, (\ref{eq:GelmanTable}) uses a \emph{different} shape parameterisation from that used for the Inverse G-Wishart distribution in Definition \ref{def:IGWdefn} when $G=G_{\mbox{\tiny{\rm full}}}$ with the relationship between the two shape parameters given by $\kappa=\xi-d+1$. Even though the more general Inverse G-Wishart family is important for the internal workings of variational message passing, the ordinary Inverse Wishart distribution, with the parameterisation as given in (\ref{eq:GelmanTable}), is more common when imposing a prior on a covariance matrix. Let $\boldsymbol{\Sigma}$ be a $d\times d$ covariance matrix and consider the prior imposition \begin{equation} \boldsymbol{\Sigma}\sim\mbox{Inverse-Wishart}(\kappa_{\scriptscriptstyle\boldsymbol{\Sigma}},\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Sigma}}) \label{eq:classicalIW} \end{equation} for a scalar hyperparameter $\kappa_{\scriptscriptstyle\boldsymbol{\Sigma}}>0$ and a symmetric positive definite hyperparameter matrix $\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Sigma}}$, within a variational message passing scheme.
Then Algorithm \ref{alg:InvGWishPriorFrag} should be called with inputs set to: $$G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm full}}},\quad \xi_{\scriptscriptstyle\boldsymbol{\Theta}}=\kappa_{\scriptscriptstyle\boldsymbol{\Sigma}}+d-1,\quad \boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Sigma}}.$$ \subsection{Imposing a Half-$t$ Prior on a Standard Deviation Parameter} Consider the prior imposition \begin{equation} \sigma\sim\mbox{Half-$t$}(s_{\sigma},\nu_{\sigma}) \label{eq:sigmaHt} \end{equation} for a scale parameter $s_{\sigma}>0$ and a degrees of freedom parameter $\nu_{\sigma}>0$. The density function corresponding to (\ref{eq:sigmaHt}) is such that $p(\sigma)\propto \{1+(\sigma/s_{\sigma})^2/\nu_{\sigma}\}^{-(\nu_{\sigma}+1)/2}I(\sigma>0)$. This is equivalent to \begin{equation} \sigma^2|a\sim\mbox{Inverse-$\chi^2$}(\nu_{\sigma},1/a)\quad\mbox{and}\quad a\sim\mbox{Inverse-$\chi^2$}(1,1/s_{\sigma}^2). \label{eq:HtPriorAux} \end{equation} Since $d=1$, the graphs $G_{\mbox{\tiny{\rm full}}}$ and $G_{\mbox{\tiny{\rm diag}}}$ are the same -- a single node graph. Treating $\sigma^2$ and $a$ as $1\times1$ matrices we can re-write (\ref{eq:HtPriorAux}) as $$\sigma^2|a\sim\mbox{Inverse-G-Wishart}(G_{\mbox{\tiny{\rm full}}},\nu_{\sigma},a^{-1})\quad\mbox{and}\quad a\sim\mbox{Inverse-G-Wishart}(G_{\mbox{\tiny{\rm diag}}},1,(\nu_{\sigma}\,s_{\sigma}^2)^{-1}) $$ (e.g. Armagan \textit{et al.}, 2011). The specification $$a\sim\mbox{Inverse-G-Wishart}(G_{\mbox{\tiny{\rm diag}}},1,(\nu_{\sigma}\,s_{\sigma}^2)^{-1})$$ involves calling Algorithm \ref{alg:InvGWishPriorFrag} with $$G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm diag}}},\quad\xi_{\scriptscriptstyle\boldsymbol{\Theta}}=1\quad\mbox{and}\quad \boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=(\nu_{\sigma}\,s_{\sigma}^2)^{-1}.$$ The output is the single node graph $G_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}$ and the $2\times1$ natural parameter vector $$\biggerbdeta_{\mbox{\scriptsize$\pDens(\bTheta)\rightarrow\bTheta$}}=\mbox{\Large $\bdeta$}_{p(a)\rightarrow a}.$$ The specification $$\sigma^2|a\sim\mbox{Inverse-G-Wishart}(G_{\mbox{\tiny{\rm full}}},\nu_{\sigma},a^{-1})$$ implies that Algorithm \ref{alg:IterInvGWishFrag} is called with graph input $G=G_{\mbox{\tiny{\rm full}}}$, shape parameter input $\xi=\nu_{\sigma}$ and message parameter inputs $$\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}=\mbox{\Large $\bdeta$}_{p(\sigma^2|a)\to\sigma^2}, \quad\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}=\mbox{\Large $\bdeta$}_{\sigma^2\to p(\sigma^2|a)},$$ and $$G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm diag}}},\quad\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=\mbox{\Large $\bdeta$}_{p(\sigma^2|a)\to a} \quad\mbox{and}\quad\biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}=\mbox{\Large $\bdeta$}_{a\to p(\sigma^2|a)}.$$ Note that in this $d=1$ special case $G_{\mbox{\tiny{\rm full}}}$ and $G_{\mbox{\tiny{\rm diag}}}$ are both the single node graph. \subsubsection{The Half-Cauchy Special Case} The special case of \begin{equation} \sigma\sim\mbox{Half-Cauchy}(s_{\sigma}) \label{eq:sigmaHC} \end{equation} corresponds to $\nu_{\sigma}=1$. The density function corresponding to (\ref{eq:sigmaHC}) is such that $p(\sigma)\propto \{1+(\sigma/s_{\sigma})^2\}^{-1}I(\sigma>0)$.
Therefore, one should set $\xi=1$ in the call to Algorithm \ref{alg:IterInvGWishFrag}. \subsection{Imposing a Huang-Wand Prior on a Covariance Matrix} To impose the Huang-Wand prior $$\boldsymbol{\Sigma}\sim\mbox{Huang-Wand}(s_{\mbox{\tiny{$\bSigma,1$}}},\ldots,s_{\mbox{\tiny{$\bSigma,d$}}})$$ in a variational message passing framework, the inputs to Algorithm \ref{alg:InvGWishPriorFrag} should be as follows: $$G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm diag}}},\quad\xi_{\scriptscriptstyle\boldsymbol{\Theta}}=1 \quad\mbox{and}\quad\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=\big\{2\,\mbox{diag}(s_{\mbox{\tiny{$\bSigma,1$}}}^2,\ldots,s_{\mbox{\tiny{$\bSigma,d$}}}^2)\big\}^{-1}.$$ The graph parameter input to Algorithm \ref{alg:IterInvGWishFrag} should be $G=G_{\mbox{\tiny{\rm full}}}$ and the shape parameter input should be $\xi=2d$. \subsection{Imposing a Matrix-$F$ Prior on a Covariance Matrix} To impose the Matrix-$F$ prior $$\boldsymbol{\Sigma}\sim F(\nu_{\mbox{\tiny MP}},\delta_{\mbox{\tiny MP}},\boldsymbol{B}_{\mbox{\tiny MP}})$$ in a variational message passing framework the inputs to Algorithm \ref{alg:InvGWishPriorFrag} should be as follows: $$G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm full}}},\quad\xi_{\scriptscriptstyle\boldsymbol{\Theta}}=\nu_{\mbox{\tiny MP}}+d-1 \quad\mbox{and}\quad\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=\boldsymbol{B}_{\mbox{\tiny MP}}^{-1}.$$ The graph parameter input to Algorithm \ref{alg:IterInvGWishFrag} should be $G=G_{\mbox{\tiny{\rm full}}}$ and the shape parameter input should be $\xi=\delta_{\mbox{\tiny MP}}+2d-2$. \subsection{Tabular Summary of Fragment-Based Prior Specification} Table \ref{tab:fragPrior} summarizes the results of this section and is a crucial reference for placing priors on covariance matrix, variance and standard deviation parameters in variational message passing schemes that make use of Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag}. \begin{table}[ht] \begin{center} {\setlength{\tabcolsep}{4pt} \begin{tabular}{lccccccc} \hline \multicolumn{1}{c}{\null}& \multicolumn{3}{c}{Algorithm 1}& \multicolumn{1}{c}{\null}& \multicolumn{3}{c}{Algorithm 2}\\ \cmidrule(r){2-4}\cmidrule(l){6-8} \multicolumn{1}{c}{prior specification}& \multicolumn{1}{c}{$G_{\scriptscriptstyle\boldsymbol{\Theta}}$} & \multicolumn{1}{c}{$\xi_{\scriptscriptstyle\boldsymbol{\Theta}}$}& \multicolumn{1}{c}{$\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}$}& \multicolumn{1}{c}{\null}& \multicolumn{1}{c}{$\xi$}& \multicolumn{1}{c}{$G$}& \multicolumn{1}{c}{${\small G}_{\mbox{\tiny$\bA\rightarrow \pDens(\bSigma|\bA)$}}$ } \\ \hline\\[-1.5ex] \begin{small}$\sigma^2\sim\mbox{Inverse-$\chi^2$}(\delta_{\sigma^2},\lambda_{\sigma^2})$\end{small} & $G_{\mbox{\tiny{\rm full}}}$ & $\delta_{\sigma^2}$ & $\lambda_{\sigma^2}$&\null& N.A.& N.A. & N.A. \\[1ex] \begin{small}$\sigma^2\sim\mbox{Inv.-Gamma}(\alpha_{\sigma^2},\beta_{\sigma^2})$\end{small} & $G_{\mbox{\tiny{\rm full}}}$ & $2\alpha_{\sigma^2}$& $2\beta_{\sigma^2}$ &\null & N.A.& N.A. & N.A. \\[1ex] \begin{small}$\boldsymbol{\Sigma}\sim\mbox{Inv.-Wishart}(\kappa_{\scriptscriptstyle\boldsymbol{\Sigma}},\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Sigma}})$\end{small} & $G_{\mbox{\tiny{\rm full}}}$ & $\kappa_{\scriptscriptstyle\boldsymbol{\Sigma}}+d-1$ & $\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Sigma}}$ &\null & N.A.& N.A. & N.A.
\\[1ex] \begin{small}$\sigma\sim\mbox{Half-$t$}(s_{\sigma},\nu_{\sigma})$\end{small} & $G_{\mbox{\tiny{\rm diag}}}$ & $1$ & $(\nu_{\sigma}s_{\sigma}^2)^{-1}$ &\null & $\nu_{\sigma}$ & $G_{\mbox{\tiny{\rm full}}}$ & $G_{\mbox{\tiny{\rm diag}}}$\\[1ex] \begin{small}$\sigma\sim\mbox{Half-Cauchy}(s_{\sigma})$\end{small} & $G_{\mbox{\tiny{\rm diag}}}$ & $1$ & $(s_{\sigma}^2)^{-1}$ &\null & $1$ & $G_{\mbox{\tiny{\rm full}}}$ & $G_{\mbox{\tiny{\rm diag}}}$\\[1ex] \begin{small}$\boldsymbol{\Sigma}\sim\mbox{Huang-Wand}$ \end{small} & $G_{\mbox{\tiny{\rm diag}}}$ & $1$ & $\big\{2\,\mbox{diag}(s_{\mbox{\tiny{$\bSigma,1$}}}^2,$ &\null & $2d$ & $G_{\mbox{\tiny{\rm full}}}$ & $G_{\mbox{\tiny{\rm diag}}}$\\[0ex] $\qquad\ (s_{\mbox{\tiny{$\bSigma,1$}}},\ldots,s_{\mbox{\tiny{$\bSigma,d$}}})$ & & & $\null\ \ \ldots,s_{\mbox{\tiny{$\bSigma,d$}}}^2)\big\}^{-1}$ &\null & & & \\[1ex] \begin{small}$\boldsymbol{\Sigma}\sim\mbox{Matrix-$F$}$ \end{small} &$G_{\mbox{\tiny{\rm full}}}$ & $\nu_{\mbox{\tiny MP}}+$ & $\boldsymbol{B}_{\mbox{\tiny MP}}^{-1}$ &\null & $\delta_{\mbox{\tiny MP}}+$ & $G_{\mbox{\tiny{\rm full}}}$ & $G_{\mbox{\tiny{\rm full}}}$\\[0ex] $\qquad\ (\nu_{\mbox{\tiny MP}},\delta_{\mbox{\tiny MP}},\boldsymbol{B}_{\mbox{\tiny MP}})$ & & $\ d-1$ & &\null & $\ 2d-2$ & & \\[1ex] \hline \end{tabular} } \end{center} \caption{\it Specifications of inputs of Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag} for several variance, standard deviation and covariance matrix prior impositions. The abbreviation N.A. stands for `not applicable', since Algorithm \ref{alg:IterInvGWishFrag} is not needed for the first three prior impositions.} \label{tab:fragPrior} \end{table} \section{Illustrative Example}\label{sec:illustrative} We illustrate the use of Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag} for the case of Bayesian linear mixed models with $t$ distribution responses. Such $t$-based models provide a form of robustness in situations where the responses are susceptible to outlying values (e.g. Lange \textit{et al.}, 1989). The notation $y\sim t(\mu,\sigma,\nu)$ indicates that the random variable $y$ has a $t$ distribution with location parameter $\mu$, scale parameter $\sigma>0$ and degrees of freedom parameter $\nu>0$. The corresponding density function of $y$ is $$p(y)=\displaystyle{\frac{\Gamma\left(\frac{\nu+1}{2}\right)} {\sigma\sqrt{\pi\nu}\Gamma(\nu/2)[1+\{(y-\mu)/\sigma\}^2/\nu]^{\frac{\nu+1}{2}}}}. $$ Now suppose that the response data consist of repeated measures within each of $m$ groups. Let $$y_{ij}\equiv\mbox{the $j$th response for the $i$th group},\quad 1\le j\le n_i,\ 1\le i\le m,$$ and then let $\boldsymbol{y}_i$, $1\le i\le m$, be the $n_i\times 1$ vectors containing $y_{ij}$ data for the $i$th group. For each $1\le i\le m$, let $\boldsymbol{X}_i$ be the $n_i\times p$ design matrix corresponding to the fixed effects and $\boldsymbol{Z}_i$ be the $n_i\times q$ design matrix corresponding to the random effects. Next put \begin{equation} \boldsymbol{y}\equiv\left[ \begin{array}{c} \boldsymbol{y}_1\\ \vdots\\ \boldsymbol{y}_m \end{array} \right], \quad \boldsymbol{X}\equiv\left[ \begin{array}{c} \boldsymbol{X}_1\\ \vdots\\ \boldsymbol{X}_m \end{array} \right] \quad\mbox{and}\quad \boldsymbol{Z}\equiv\blockdiag{1\le i\le m}(\boldsymbol{Z}_i) \label{eq:yXZdefn} \end{equation} and define $N=n_1+\ldots+n_m$ to be the number of rows in each of $\boldsymbol{y}$, $\boldsymbol{X}$ and $\boldsymbol{Z}$.
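To make the notation concrete, the following \textsf{R} snippet is a minimal sketch, with hypothetical dimensions and predictor values, of how $\boldsymbol{X}$ and $\boldsymbol{Z}$ in (\ref{eq:yXZdefn}) can be assembled; the random intercept and slope structure $\boldsymbol{Z}_i=\boldsymbol{X}_i$ is an illustrative assumption, and \texttt{bdiag()} from the \textsf{Matrix} package forms the block-diagonal matrix.

\begin{verbatim}
# A sketch of assembling the stacked and block-diagonal design matrices;
# the dimensions, predictor values and Z_i = X_i structure are assumptions.
library(Matrix)                        # provides bdiag()
m <- 20 ; nVec <- rep(15, m)
XList <- lapply(nVec, function(ni) cbind(1, runif(ni)))
X <- do.call(rbind, XList)             # N x p stacked matrix with p = 2
Z <- as.matrix(bdiag(XList))           # N x (m*q) block-diagonal, q = 2
N <- sum(nVec)                         # N = n_1 + ... + n_m
\end{verbatim}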
Let $y_{\ell}$ be the $\ell$th entry of $\boldsymbol{y}$, $1\le\ell\le N$. The family of Bayesian $t$ response linear mixed models that we consider is \begin{equation} \begin{array}{c} y_{\ell}|\boldsymbol{\beta},\boldsymbol{u},\sigma\stackrel{{\tiny \mbox{ind.}}}{\sim} t\big((\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{Z}\boldsymbol{u})_{\ell},\sigma,\nu\big), \quad 1\le \ell\le N,\quad \boldsymbol{u}|\boldsymbol{\Sigma}\sim N(\boldsymbol{0},\boldsymbol{I}_m\otimes\boldsymbol{\Sigma}),\\[2ex] \boldsymbol{\beta}\sim N(\boldsymbol{0},\sigma_{\boldsymbol{\beta}}^2\boldsymbol{I}),\quad \sigma\sim\mbox{Half-Cauchy}(s_{\sigma}),\quad {\textstyle{\frac{1}{2}}}\nu\sim\mbox{Moon-Rock}(0,\lambda_{\nu}),\\[2ex] \boldsymbol{\Sigma}\sim\mbox{Huang-Wand}\big(s_{\mbox{\tiny{$\bSigma,1$}}},\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}\big) \end{array} \label{eq:tRespModel} \end{equation} for hyperparameters $\sigma_{\boldsymbol{\beta}},s_{\sigma},\lambda_{\nu},s_{\mbox{\tiny{$\bSigma,1$}}},\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}>0$. As explained in McLean \&\ Wand (2019), the Moon Rock family of distributions is conjugate for the parameter ${\textstyle{\frac{1}{2}}}\nu$, with the notation $x\sim\mbox{Moon-Rock}(\alpha,\beta)$ indicating that the corresponding density function satisfies $p(x)\propto\{x^x/\Gamma(x)\}^{\alpha}\exp(-\beta x)I(x>0)$. In the variational message passing treatment of the degrees of freedom parameter it is simpler to work with $$\upsilon\equiv {\textstyle{\frac{1}{2}}}\nu\quad\mbox{so that}\quad\upsilon\sim\mbox{Moon-Rock}(0,\lambda_{\nu}).$$ After the approximate posterior density function of $\upsilon$ is obtained via variational message passing, it is then trivial to obtain the same for $\nu$. Hence, we work with $\upsilon$, rather than $\nu$, in the upcoming description of variational message passing-based fitting and inference for (\ref{eq:tRespModel}). Next note that $$y_{\ell}|\boldsymbol{\beta},\boldsymbol{u},\sigma\stackrel{{\tiny \mbox{ind.}}}{\sim} t\Big((\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{Z}\boldsymbol{u})_{\ell},\sigma,2\upsilon\Big), \quad 1\le \ell\le N$$ is equivalent to \begin{equation} y_{\ell}|\boldsymbol{\beta},\boldsymbol{u},\sigma^2,b_{\ell}\sim N\big((\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{Z}\boldsymbol{u})_{\ell},b_{\ell}\sigma^2\big), \quad b_{\ell}|\upsilon\stackrel{{\tiny \mbox{ind.}}}{\sim}\mbox{Inverse-$\chi^2$}\left(2\upsilon,2\upsilon\right), \label{eq:auxVarOne} \end{equation} $\sigma\sim\mbox{Half-Cauchy}(s_{\sigma})$ is equivalent to \begin{equation} \sigma^2|a\sim\mbox{Inverse-$\chi^2$}(1,1/a)\quad\mbox{and}\quad a\sim\mbox{Inverse-$\chi^2$}(1,1/s_{\sigma}^2) \label{eq:auxVarTwo} \end{equation} and $\boldsymbol{\Sigma}\sim\mbox{Huang-Wand}\big(s_{\mbox{\tiny{$\bSigma,1$}}},\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}\big)$ is equivalent to \begin{equation} \begin{array}{c} \boldsymbol{\Sigma}|\boldsymbol{A}\sim\mbox{Inverse-G-Wishart}(G_{\mbox{\tiny{\rm full}}},2q,\boldsymbol{A}^{-1}),\\[1ex] \boldsymbol{A}\sim\mbox{Inverse-G-Wishart} \Big(G_{\mbox{\tiny{\rm diag}}},1,\big\{2\,\mbox{diag}(s_{\mbox{\tiny{$\bSigma,1$}}}^2,\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}^2)\big\}^{-1}\Big). \end{array} \label{eq:auxVarThree} \end{equation} Substitution of (\ref{eq:auxVarOne}), (\ref{eq:auxVarTwo}) and (\ref{eq:auxVarThree}) into (\ref{eq:tRespModel}) leads to the hierarchical Bayesian model depicted as a directed acyclic graph in Figure \ref{fig:tMixModDAG} with $\boldsymbol{b}\equiv(b_1,\ldots,b_N)$.
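The equivalence (\ref{eq:auxVarOne}) is easily confirmed numerically. The following \textsf{R} sketch, included purely as a sanity check with arbitrary parameter values, draws from the Normal--Inverse-$\chi^2$ hierarchy and compares the result with direct $t$ draws.

\begin{verbatim}
# Monte Carlo check that the auxiliary variable representation recovers
# a t(mu, sigma, nu) response; mu, sigma and nu values are arbitrary.
set.seed(2)
nSim <- 1e5 ; mu <- 1 ; sigma <- 0.5 ; nu <- 1.5   # nu = 2*upsilon
b <- nu/rchisq(nSim, nu)            # b ~ Inverse-chi^2(2*upsilon, 2*upsilon)
y <- rnorm(nSim, mu, sigma*sqrt(b)) # y|b ~ N(mu, b*sigma^2)
yDirect <- mu + sigma*rt(nSim, nu)  # direct t(mu, sigma, nu) draws
qqplot(y, yDirect) ; abline(0, 1)   # points hug the 45-degree line
\end{verbatim}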
The unshaded circles in Figure \ref{fig:tMixModDAG} correspond to model parameters and auxiliary variables and will be referred to as \emph{hidden nodes}. \begin{figure}[h] \centering {\includegraphics[width=0.8\textwidth]{tMixModDAG.pdf}} \caption{\it Directed acyclic graph corresponding to the $t$ response linear mixed model (\ref{eq:tRespModel}) with auxiliary variable representations (\ref{eq:auxVarOne})--(\ref{eq:auxVarThree}). The shaded circle corresponds to the observed data. The unshaded circles correspond to model parameters and auxiliary variables. The small solid circles correspond to hyperparameters. } \label{fig:tMixModDAG} \end{figure} Consider the following mean field approximation of the joint posterior of the hidden nodes in Figure \ref{fig:tMixModDAG}: \begin{equation} p(\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\upsilon,\boldsymbol{\Sigma},a,\boldsymbol{A},\boldsymbol{b}|\boldsymbol{y}) \approx q(\boldsymbol{\beta},\boldsymbol{u},a,\boldsymbol{A},\boldsymbol{b})q(\sigma^2,\boldsymbol{\Sigma},\upsilon) \label{eq:lessStringent} \end{equation} where $q$ denotes the approximate posterior density functions of the relevant parameters. Application of induced factor results (e.g. Bishop, 2006; Section 10.2.5) leads to the additional factorizations $$q(\boldsymbol{\beta},\boldsymbol{u},a,\boldsymbol{A},\boldsymbol{b})=q(\boldsymbol{\beta},\boldsymbol{u})q(a)q(\boldsymbol{A})\prod_{\ell=1}^Nq(b_{\ell}) \quad\mbox{and}\quad q(\sigma^2,\boldsymbol{\Sigma},\upsilon)=q(\sigma^2)q(\boldsymbol{\Sigma})q(\upsilon) $$ and so the restriction given in (\ref{eq:lessStringent}) is equivalent to \begin{equation} p(\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\upsilon,\boldsymbol{\Sigma},a,\boldsymbol{A},\boldsymbol{b}|\boldsymbol{y}) \approx q(\boldsymbol{\beta},\boldsymbol{u})q(\sigma^2)q(\upsilon)q(\boldsymbol{\Sigma})q(a)q(\boldsymbol{A}) \prod_{\ell=1}^Nq(b_{\ell}). \label{eq:finalMF} \end{equation} Figure \ref{fig:tMixModFacGraph} is a factor graph representation of the joint density function of all random variables and vectors, or stochastic nodes, in the Figure \ref{fig:tMixModDAG} hierarchical model, with unshaded circles for the stochastic nodes according to the $q$-density factorization given in (\ref{eq:finalMF}) and filled-in rectangles corresponding to the factors on the right-hand side of \begin{equation} \begin{array}{l} p(\boldsymbol{y},\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\upsilon,\boldsymbol{\Sigma},a,\boldsymbol{A},\boldsymbol{b})\\[1ex] \qquad\qquad =p(\boldsymbol{A})p(a)p(\boldsymbol{\Sigma}|\boldsymbol{A})p(\sigma^2|a) p(\boldsymbol{\beta},\boldsymbol{u}|\boldsymbol{\Sigma})p(\boldsymbol{y}|\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b}) p(\boldsymbol{b}|\upsilon)p(\upsilon). \end{array} \label{eq:fullFactor} \end{equation} Edges join each factor to the stochastic nodes that appear in that factor. To aid upcoming discussion, the fragments are numbered $1$ to $8$ according to appearance from left to right. Recall that a fragment is a sub-graph consisting of a factor and all of its neighboring nodes. Figure \ref{fig:tMixModFacGraph} uses shading to show the distinction between adjacent fragments. \begin{figure}[h] \centering {\includegraphics[width=1\textwidth]{tMixModFacGraph.pdf}} \caption{\it Factor graph corresponding to the $t$ response linear mixed model (\ref{eq:tRespModel}) with auxiliary variable representations (\ref{eq:auxVarOne})--(\ref{eq:auxVarThree}).
The circular nodes correspond to stochastic nodes in the $q$-density factorization in (\ref{eq:finalMF}). The rectangular nodes correspond to the factors on the right-hand side of (\ref{eq:fullFactor}). The fragments are numbered $1$ to $8$ according to appearance from left to right. Shading is used to show the distinction between adjacent fragments. } \label{fig:tMixModFacGraph} \end{figure} Note that (e.g. Minka, 2005; Wand, 2017) the variational message passing iteration loop has the following generic steps: $$ \begin{array}{lll} &\mbox{1. Choose a factor.}\\ &\mbox{2. Update the parameter vectors of the messages passed from}\\ &\mbox{\ \ \ \ the factor's neighboring stochastic nodes to the factor.}\\ &\mbox{3. Update the parameter vectors of the messages passed}\\ &\mbox{\ \ \ \ from the factor to its neighboring stochastic nodes.} \end{array} $$ Step 2 is very simple and has the generic form given by, for example, (7) of Wand (2017). In the Figure \ref{fig:tMixModFacGraph} factor graph an example of Step 2 is: \begin{equation} {\setlength\arraycolsep{3pt} \begin{array}{rcl} &&\mbox{the message passed from $\boldsymbol{\Sigma}$ to $p(\boldsymbol{\beta},\boldsymbol{u}|\boldsymbol{\Sigma})$}\\[1ex] &&\qquad\qquad=\mbox{the message passed from $p(\boldsymbol{\Sigma}|\boldsymbol{A})$ to $\boldsymbol{\Sigma}$ in the previous iteration.} \end{array} } \label{eq:StoFexamp} \end{equation} In terms of natural parameter vector updates, (\ref{eq:StoFexamp}) corresponds to: $$\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}.$$ Most of the other stochastic node to factor updates in Figure \ref{fig:tMixModFacGraph} have an analogous form. The exceptions are the messages passed within fragments 6 and 7, which require the slightly more complicated form given by, for example, equation (7) of Wand (2017). It remains to discuss Step 3, corresponding to the factor to stochastic node updates: \begin{itemize} \item Fragments 1 and 2 are Inverse G-Wishart prior fragments and the factor to stochastic node parameter vector updates are performed according to Algorithm \ref{alg:InvGWishPriorFrag}. In view of Table \ref{tab:fragPrior}, the graph and shape hyperparameter inputs are $G_{\scriptscriptstyle\boldsymbol{\Theta}}=G_{\mbox{\tiny{\rm diag}}}$ and $\xi_{\scriptscriptstyle\boldsymbol{\Theta}}=1$. For fragment 1 the rate hyperparameter is $\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}} =\{2\mbox{diag}\big(s_{\mbox{\tiny{$\bSigma,1$}}}^2,\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}^2\big)\}^{-1}$. For fragment 2 the rate hyperparameter is $\boldsymbol{\Lambda}_{\scriptscriptstyle\boldsymbol{\Theta}}=(s^2_{\sigma})^{-1}$. \item Fragments 3 and 4 are iterated Inverse G-Wishart fragments and the factor to stochastic node parameter vector updates are performed according to Algorithm \ref{alg:IterInvGWishFrag}. As shown in Table \ref{tab:fragPrior}, the graph inputs should be $$G_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}=G_{\mbox{\tiny{\rm diag}}},\ G_{\mbox{\scriptsize$a\rightarrow p(\sigma^2|a)$}}=G_{\mbox{\tiny{\rm diag}}},\ G_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}=G_{\mbox{\tiny{\rm full}}}, \ \mbox{and}\ G_{\mbox{\scriptsize$\sigma^2\rightarrow \pDens(\sigma^2|a)$}}=G_{\mbox{\tiny{\rm full}}}. $$ The first two of these are imposed by the messages passed from fragments 1 and 2. For fragment 3, the shape parameter input is $\xi=2q$.
For fragment 4, the shape parameter input is $\xi=1$. \item Fragment 5 is the Gaussian penalization fragment described in Section 4.1.4 of Wand (2017) with, in the notation given there, $L=1$, $\boldsymbol{\mu}_{\boldsymbol{\theta}_0}=\boldsymbol{0}$ and $\boldsymbol{\Sigma}_{\boldsymbol{\theta}_0}=\sigma_{\boldsymbol{\beta}}^2\boldsymbol{I}$. \item Fragments 6 and 7 correspond to the $t$ likelihood fragment. Its natural parameter updates are provided by Algorithm 2 of McLean \&\ Wand (2019). \item Fragment 8 corresponds to the imposition of a Moon Rock prior distribution on a shape parameter. This is a very simple fragment for which the only inputs are the Moon Rock prior specification hyperparameters and the output is the natural parameter vector of the Moon Rock prior density function. Since this fragment is not listed as an algorithm in this article or elsewhere, we provide further details in the paragraph after the next one. \end{itemize} For Fragments 5, 6 and 7, simple conversions between two different versions of natural parameter vectors need to be made. Section \ref{sec:vecANDvech} of the web-supplement explains these conversions. The most general Moon Rock prior specification for a generic parameter $\theta$ is $$\theta\sim\mbox{Moon-Rock}(\alpha_{\theta},\beta_{\theta}).$$ This corresponds to the prior density function having exponential family form $$p(\theta)\propto\exp\left\{ \left[ \begin{array}{c} \theta\log(\theta)-\log\Gamma(\theta)\\[1ex] \theta \end{array} \right]^T \left[ \begin{array}{c} \alpha_{\theta}\\[1ex] -\beta_{\theta} \end{array} \right] \right\}. $$ The inputs of the Moon Rock prior fragment are $\alpha_{\theta}\ge 0$ and $\beta_{\theta}>0$ and the output is the natural parameter vector $$\biggerbdeta_{\mbox{\scriptsize$\pDens(\theta)\rightarrow\theta$}}\longleftarrow\left[ \begin{array}{c} \alpha_{\theta}\\[1ex] -\beta_{\theta} \end{array} \right]. $$ Since, for the $t$ response mixed model illustrative example, we have the prior imposition $\upsilon\sim\mbox{Moon-Rock}(0,\lambda_{\nu})$, we simply call the Moon Rock prior fragment with $(\alpha_{\theta},\beta_{\theta})$ set to $(0,\lambda_{\nu})$. To demonstrate variational message passing for fitting and inference for model (\ref{eq:tRespModel}), we simulated data according to the dimension values $p=q=2$ and the true parameter values \begin{equation} \bbeta_{\mbox{\tiny true}}= \left[ \begin{array}{c} -0.58 \\[1ex] 1.89 \end{array} \right], \quad \sigma^2_{\mbox{\tiny true}}=0.2, \quad \bSigma_{\mbox{\tiny true}} = \left[ \begin{array}{cc} 2.58 & 0.22 \\[1ex] 0.22 & 1.73 \end{array} \right] \quad\mbox{and}\quad \nu_{\mbox{\tiny true}}=1.5. \label{eq:trueValues} \end{equation} The sample sizes were $m=20$, with $n_i=15$ observations per group, and the predictor data were generated from the Uniform distribution on the unit interval. The hyperparameter values were set at $$\sigma_{\boldsymbol{\beta}}=s_{\sigma}=s_{\mbox{\tiny{$\bSigma,1$}}}=s_{\mbox{\tiny{$\bSigma,2$}}}=10^5\quad\mbox{and}\quad \lambda_{\nu}=0.01.$$ We ran the variational message passing algorithm as described above until the relative change in the variational parameters was below $10^{-10}$, as well as Markov chain Monte Carlo via the \textsf{R} language (\textsf{R} Core Team, 2020) package \texttt{rstan} (Stan Development Team, 2019). For Markov chain Monte Carlo fitting, a warmup of size 1000 was used, followed by chains of size 5000 retained for inference.
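For concreteness, a minimal \textsf{R} sketch of the data-generating step is as follows; the random intercept and slope design $\boldsymbol{Z}_i=\boldsymbol{X}_i$ and all variable names are illustrative assumptions, with the code that we actually used being part of the web-supplement bundle described below.

\begin{verbatim}
# Illustrative simulation of data from the t response linear mixed model
# with the true parameter values given above; Z_i = X_i is an assumption.
set.seed(3)
m <- 20 ; ni <- 15
betaTrue <- c(-0.58, 1.89) ; sigmaTrue <- sqrt(0.2) ; nuTrue <- 1.5
SigmaTrue <- matrix(c(2.58, 0.22, 0.22, 1.73), 2, 2)
uTrue <- t(chol(SigmaTrue))%*%matrix(rnorm(2*m), 2, m)  # (u_i0,u_i1) cols
dataList <- lapply(1:m, function(i) {
   x <- runif(ni)
   meanVec <- cbind(1, x)%*%(betaTrue + uTrue[, i])
   data.frame(group = i, x = x,
              y = as.vector(meanVec) + sigmaTrue*rt(ni, nuTrue))
})
simData <- do.call(rbind, dataList)
\end{verbatim}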
\begin{figure}[h] \centering {\includegraphics[width=0.95\textwidth]{tMixModPosterDens.pdf}} \caption{\it Approximate posterior density functions for the parameters in model (\ref{eq:tRespModel}) based on both variational message passing (VMP) and Markov chain Monte Carlo (MCMC) algorithms applied to data simulated according to the true values (\ref{eq:trueValues}) and sample sizes, predictor values and hyperparameter values as described in the text. The vertical lines indicate true parameter values. } \label{fig:tMixModPosterDens} \end{figure} Figure \ref{fig:tMixModPosterDens} compares the approximate posterior density functions based on both variational message passing (VMP) and Markov chain Monte Carlo (MCMC). The middle row performs the comparison for the random intercept and slope parameters, $u_{i0}$ and $u_{i1}$, for $i=1,2$. The parameters in the third row of Figure \ref{fig:tMixModPosterDens} are the standard deviation and correlation parameters in the $\boldsymbol{\Sigma}$ matrix, according to the notation $(\boldsymbol{\Sigma})_{11}=\sigma_1^2$, $(\boldsymbol{\Sigma})_{22}=\sigma_2^2$ and $(\boldsymbol{\Sigma})_{12}=\sigma_1\sigma_2\rho$. For most of the stochastic nodes, the accuracy of variational message passing is seen to be very good. For $\sigma$ and $\nu$, some under-approximation of the spread and a slight locational shift are apparent. A likely root cause is the imposition of the product restriction $q(\sigma,\nu)=q(\sigma)q(\nu)$ even though these two parameters have a significant amount of posterior dependence. We have prepared a bundle of \textsf{R} language code that carries out variational message passing for this illustrative example, including use of Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag} for the imposition of Half Cauchy and Huang-Wand priors. This code is part of the web-supplement for this article. Lastly, we point out that this illustrative example does not involve matrix algebraic streamlining for random effects models. This relatively new area of variational message passing research, which streamlines calculations involving the sparse matrix forms that arise in linear mixed models, is described in Nolan, Menictas \&\ Wand (2020). \section{Closing Remarks}\label{sec:closing} Algorithms \ref{alg:InvGWishPriorFrag} and, especially, \ref{alg:IterInvGWishFrag}, together with their underpinnings, are quite involved and depend upon a careful study of particular special cases of the inverses of G-Wishart random matrices. The amount of detail provided by this article is tedious, but necessary, to ensure that the fragment updates based on a single distributional structure, the Inverse G-Wishart distribution with $G\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$, are correct. The good news is that these algorithms only need to be derived once. Their implementations, within a suite of computer programmes for carrying out variational message passing for models containing variance and covariance matrix parameters, can be isolated into subroutines which, once working as intended, never have to be revisited. Given the ubiquity of variance and covariance parameters throughout statistics and machine learning, Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag} are important and fundamental contributions to variational message passing. \section*{Acknowledgements} We are grateful to two referees for their comments and suggestions.
This research was supported by Australian Research Council Discovery Project DP140100441. \section*{References} \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Armagan, A., Dunson, D.B. \&\ Clyde, M. (2011). Generalized beta mixtures of Gaussians. In \textit{Advances in Neural Information Processing Systems 24}, J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F. Pereira and K.Q. Weinberger (eds.), pp. 523--531. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Assaf, A.G., Li, G., Song, H. \&\ Tsionas, M.G. (2019). Modeling and forecasting regional tourism demand using the Bayesian global vector autoregressive (BGVAR) model. \textit{Journal of Travel Research}, {\bf 58}, 383--397. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Atay-Kayis, A. \&\ Massam, H. (2005). A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models. \textit{Biometrika}, {\bf 92}, 317--335. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Attias, H. (1999). Inferring parameters and structure of latent variable models by variational Bayes. In Laskey, K.B. \&\ Prade, H. (eds.), \textit{Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence}, pp. 21--30. San Francisco: Morgan Kaufmann. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Bishop, C.M. (2006). {\it Pattern Recognition and Machine Learning.} New York: Springer. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Chen, W.Y. \&\ Wand, M.P. (2020). Factor graph fragmentization of expectation propagation. \textit{Journal of the Korean Statistical Society}, in press. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Conti, G., Fr\"{u}hwirth-Schnatter, S., Heckman, J.J. \&\ Piatek, R. (2014). Bayesian exploratory factor analysis. \textit{Journal of Econometrics}, {\bf 183}, 31--57. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Dawid, A.P. \&\ Lauritzen, S.L. (1993). Hyper Markov laws in the statistical analysis of decomposable graphical models. \textit{The Annals of Statistics}, \textbf{21}, 1272--1317. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. \textit{Bayesian Analysis}, {\bf 1}, 515--533. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Gelman, A., Carlin, J.B., Stern, H.S., Dunson, D.B., Vehtari, A. \&\ Rubin, D.B. (2014). \textit{Bayesian Data Analysis, Third Edition}. Boca Raton, Florida: CRC Press. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Gentle, J.E. (2007). \textit{Matrix Algebra}. New York: Springer. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Harezlak, J., Ruppert, D. \&\ Wand, M.P. (2018). {\it Semiparametric Regression with R}. New York: Springer. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Huang, A. \&\ Wand, M.P. (2013). Simple marginally noninformative prior distributions for covariance matrices. \textit{Bayesian Analysis}, {\bf 8}, 439--452. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Lange, K.L., Little, R.J.A. \&\ Taylor, J.M.G. (1989). Robust statistical modeling using the $t$-distribution. {\it Journal of the American Statistical Association}, {\bf 84}, 881--896. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Letac, G. \&\ Massam, H. (2007). Wishart distributions for decomposable graphs. \textit{The Annals of Statistics}, {\bf 35}, 1278--1323. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Maestrini, L. \&\ Wand, M.P. (2018).
Variational message passing for skew $t$ regression. \emph{Stat}, {\bf 7}, e196. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Magnus, J.R. \&\ Neudecker, H. (1999). \emph{Matrix Differential Calculus with Applications in Statistics and Econometrics, Revised Edition}. Chichester, U.K.: Wiley. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 McCulloch, C.E., Searle, S.R. \&\ Neuhaus, J.M. (2008). \textit{Generalized, Linear, and Mixed Models, Second Edition}. New York: John Wiley \& Sons. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 McLean, M.W. \&\ Wand, M.P. (2019). Variational message passing for elaborate response regression models. \textit{Bayesian Analysis}, {\bf 14}, 371--398. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Minka, T.P. (2001). Expectation propagation for approximate Bayesian inference. In J.S. Breese \&\ D. Koller (eds.), \textit{Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence}, pp. 362--369. Burlington, Massachusetts: Morgan Kaufmann. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Minka, T. (2005). Divergence measures and message passing. \textit{Microsoft Research Technical Report Series}, {\bf MSR-TR-2005-173}, 1--17. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Muirhead, R.J. (1982). \textit{Aspects of Multivariate Statistical Theory.} New York: John Wiley \& Sons. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Mulder, J. \&\ Pericchi, L.R. (2018). The Matrix-$F$ prior for estimating and testing covariance matrices. \textit{Bayesian Analysis}, {\bf 13}, 1193--1214. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Nolan, T.H., Menictas, M. \&\ Wand, M.P. (2020). Streamlined computing for variational inference with higher level random effects. Unpublished manuscript available at \textit{https://arxiv.org/abs/1903.06616}. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Nolan, T.H. \&\ Wand, M.P. (2017). Accurate logistic variational message passing: algebraic and numerical details. \textit{Stat}, {\bf 6}, 102--112. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Polson, N.G. \&\ Scott, J.G. (2012). On the half-Cauchy prior for a global scale parameter. \textit{Bayesian Analysis}, {\bf 7}, 887--902. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 \textsf{R} Core Team (2020). \textsf{R}: A language and environment for statistical computing. \textsf{R} Foundation for Statistical Computing. Vienna, Austria. \texttt{https://www.R-project.org/} \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Roverato, A. (2000). Cholesky decomposition of a hyper inverse Wishart matrix. \textit{Biometrika}, {\bf 87}, 99--112. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Stan Development Team (2019). \textsf{RStan}: the \textsf{R} interface to \textsf{Stan}. \textsf{R} package version 2.19.2. \texttt{http://mc-stan.org/}. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Uhler, C., Lenkoski, A. \&\ Richards, D. (2018). Exact formulas for the normalizing constants of Wishart distributions for graphical models. \textit{The Annals of Statistics}, {\bf 46}, 90--118. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Wand, M.P. (2017). Fast approximate inference for arbitrarily large semiparametric regression models via message passing (with discussion). \textit{Journal of the American Statistical Association}, {\bf 112}, 137--168. \vskip12pt\par\noindent\hangindent=1 true cm\hangafter=1 Winn, J. \&\ Bishop, C.M. (2005).
Variational message passing. \textit{Journal of Machine Learning Research}, {\bf 6}, 661--694. \vfill\eject \renewcommand{\theequation}{S.\arabic{equation}} \renewcommand{\thesection}{S.\arabic{section}} \renewcommand{\thetable}{S.\arabic{table}} \setcounter{equation}{0} \setcounter{table}{0} \setcounter{section}{0} \setcounter{page}{1} \setcounter{footnote}{0} \begin{center} {\Large Web-supplement for:} \vskip3mm \centerline{\Large\bf The Inverse G-Wishart Distribution and Variational Message Passing} \vskip7mm \ifthenelse{\boolean{UnBlinded}}{\centerline{\normalsize\sc By L. Maestrini and M.P. Wand} \vskip5mm \centerline{\textit{University of Technology Sydney}} \vskip6mm}{\null} \end{center} \section{Natural Parameter Versions and Mappings}\label{sec:vecANDvech} Throughout this article we use the ``$\mbox{\rm vech}$'' versions of the natural parameter forms of the Multivariate Normal and Inverse G-Wishart distributions. However, Wand (2017) and McLean \&\ Wand (2019) used ``$\mbox{\rm vec}$'' versions of these distributions. The ``$\mbox{\rm vech}$'' version has the attraction of being more compact since entries of symmetric matrices are not duplicated. However, adoption of the ``$\mbox{\rm vech}$'' version entails use of duplication matrices. For implementation in the \textsf{R} language (\textsf{R} Core Team, 2020) we note that the function \texttt{duplication.matrix()} in the package \textsf{matrixcalc} (Novomestky, 2012) returns the duplication matrix of a given order. First we explain the two versions for the Multivariate Normal distribution. Suppose that the $d\times1$ random vector $\boldsymbol{v}$ has a $N(\boldsymbol{\mu},\boldsymbol{\Sigma})$ distribution. Then the density function of $\boldsymbol{v}$ is $$p(\boldsymbol{v})\propto \exp\left\{\left[ \begin{array}{c} \boldsymbol{v}\\ \mbox{\rm vec}(\boldsymbol{v}\bv^T) \end{array} \right]^T\bdeta_v^{\mbox{\tiny vec}} \right\} =\exp\left\{\left[ \begin{array}{c} \boldsymbol{v}\\ \mbox{\rm vech}(\boldsymbol{v}\bv^T) \end{array} \right]^T\bdeta_v^{\mbox{\tiny vech}} \right\} $$ where $$\bdeta_v^{\mbox{\tiny vec}}\equiv \left[ \begin{array}{c} \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}\\[1ex] -{\textstyle{\frac{1}{2}}}\mbox{\rm vec}(\boldsymbol{\Sigma}^{-1}) \end{array} \right] \quad \mbox{and} \quad \bdeta_v^{\mbox{\tiny vech}}\equiv \left[ \begin{array}{c} \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Sigma}^{-1}) \end{array} \right]. $$ The two natural parameter vectors can be mapped between each other using \begin{equation} \bdeta_v^{\mbox{\tiny vech}}=\mbox{blockdiag}(\boldsymbol{I}_d,\boldsymbol{D}_d^T)\bdeta_v^{\mbox{\tiny vec}} \quad\mbox{and}\quad \bdeta_v^{\mbox{\tiny vec}}=\mbox{blockdiag}(\boldsymbol{I}_d,\boldsymbol{D}_d^{+T})\bdeta_v^{\mbox{\tiny vech}}. \label{eq:vecvechmapsMVN} \end{equation}
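As a small illustration of (\ref{eq:vecvechmapsMVN}), the following \textsf{R} snippet, a sketch with arbitrary inputs, constructs both natural parameter versions for a $N(\boldsymbol{\mu},\boldsymbol{\Sigma})$ specification and verifies that the two mappings are mutual inverses; \texttt{duplication.matrix()} is from the \textsf{matrixcalc} package mentioned above, and the Moore--Penrose inverse $\boldsymbol{D}_d^{+}$ is computed directly.

\begin{verbatim}
# Verify the vec/vech natural parameter mappings for the Multivariate
# Normal distribution; the Sigma and mu values are arbitrary.
library(matrixcalc)                    # provides duplication.matrix()
set.seed(4) ; d <- 3
Sigma <- crossprod(matrix(rnorm(d^2), d, d)) + diag(d)
mu <- rnorm(d)
Dd <- duplication.matrix(d)
DdPlus <- solve(t(Dd)%*%Dd)%*%t(Dd)    # Moore-Penrose inverse of Dd
SigInv <- solve(Sigma)
etaVec <- c(SigInv%*%mu, -0.5*as.vector(SigInv))
etaVech <- c(SigInv%*%mu, -0.5*t(Dd)%*%as.vector(SigInv))
max(abs(t(Dd)%*%etaVec[-(1:d)] - etaVech[-(1:d)]))      # approx. zero
max(abs(t(DdPlus)%*%etaVech[-(1:d)] - etaVec[-(1:d)]))  # approx. zero
\end{verbatim}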
Now we explain the interplay between the ``$\mbox{\rm vec}$'' and ``$\mbox{\rm vech}$'' forms of the Inverse G-Wishart distribution. Let the $d\times d$ matrix $\boldsymbol{V}$ have an $\mbox{Inverse-G-Wishart}(G,\xi,\boldsymbol{\Lambda})$ distribution. Then the density function of $\boldsymbol{V}$ is $$p(\boldsymbol{V})\propto \exp\left\{\left[ \begin{array}{c} \log|\boldsymbol{V}|\\[1ex] \mbox{\rm vec}(\boldsymbol{V}^{-1}) \end{array} \right]^T\bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vec}} \right\} =\exp\left\{\left[ \begin{array}{c} \log|\boldsymbol{V}|\\[1ex] \mbox{\rm vech}(\boldsymbol{V}^{-1}) \end{array} \right]^T\bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vech}} \right\} $$ where $$\bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vec}}\equiv \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}(\xi+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\mbox{\rm vec}(\boldsymbol{\Lambda}) \end{array} \right] \quad\mbox{and}\quad \bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vech}}\equiv \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}(\xi+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Lambda}) \end{array} \right]. $$ Mappings between the two natural parameter vectors are as follows: \begin{equation} \bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vech}}=\mbox{blockdiag}(1,\boldsymbol{D}_d^T)\,\bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vec}} \quad\mbox{and}\quad \bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vec}}=\mbox{blockdiag}(1,\boldsymbol{D}_d^{+T})\,\bdeta_{\mbox{\tiny$\bV$}}^{\mbox{\tiny vech}}. \label{eq:vecvechmapsIGW} \end{equation} \section{Justification of Algorithm \ref{alg:IterInvGWishFrag}}\label{sec:mainAlgDrv} We now provide justification for Algorithm \ref{alg:IterInvGWishFrag}, which is concerned with the graph and natural parameter updates for the iterated Inverse G-Wishart fragment. \subsection{The Updates for $\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}(\bSigma)$}\label{sec:drvFirstMsg} As a function of $\boldsymbol{\Sigma}$, $$\log\,p(\boldsymbol{\Sigma}|\boldsymbol{A})= \left[ \begin{array}{c} \log|\boldsymbol{\Sigma}|\\[1ex] \mbox{\rm vech}(\boldsymbol{\Sigma}^{-1}) \end{array} \right]^T \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}\,(\xi+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{A}^{-1}) \end{array} \right]+\mbox{const} $$ where `const' denotes terms that do not depend on $\boldsymbol{\Sigma}$. Hence $$\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}(\bSigma)=\exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{\Sigma}|\\[1ex] \mbox{\rm vech}(\boldsymbol{\Sigma}^{-1}) \end{array} \right]^T\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}\right\} $$ where \begin{equation} \biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}} = \left[ \begin{array}{c} -{\textstyle{\frac{1}{2}}}\,(\xi+2)\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}\Big(E_{q}(\boldsymbol{A}^{-1})\Big) \end{array} \right] \label{eq:roosterCrow} \end{equation} and $E_{q}$ denotes expectation with respect to the normalization of $$\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}(\bA)\,\biggerm_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}(\bA).$$ Let $q(\boldsymbol{A})$ denote this normalized density function. Then $q(\boldsymbol{A})$ is an Inverse-G-Wishart density function with graph $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ and natural parameter vector $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}$.
From Result \ref{eq:recipMoment}, $$E_{q}(\boldsymbol{A}^{-1}) =\left\{ \begin{array}{l} \Big\{\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\big)_1+{\textstyle{\frac{1}{2}}}(d+1)\Big\} \left\{\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_d^{+T}\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\big)_2\Big)\right\}^{-1}\\[2ex] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mbox{if $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm full}}}$},\\[2ex] \Big\{\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\big)_1+1\Big\} \left\{\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_d^{+T}\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}}\big)_2\Big)\right\}^{-1}\\[2ex] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \mbox{if $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm diag}}}$}. \end{array} \right. $$ Noting that the first factor of $E_{q}(\boldsymbol{A}^{-1})$ is $(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bA$}})_1+\omega_1$, where $$\omega_1=\omega_1(d,G)=\left\{ \begin{array}{ll} (d+1)/2 & \mbox{if $G=G_{\mbox{\tiny{\rm full}}}$}\\[1ex] 1 & \mbox{if $G=G_{\mbox{\tiny{\rm diag}}}$}, \end{array} \right. $$ the first update of $E_{q}(\boldsymbol{A}^{-1})$ in Algorithm \ref{alg:IterInvGWishFrag} is justified. Lastly, we may need to adjust for the fact that $\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}(\bSigma)$ is proportional to an Inverse G-Wishart density function with $G=G_{\mbox{\tiny{\rm diag}}}$. This is achieved by the conditional step: \centerline{If $\GSUBpSigmaATOSigma=G_{\mbox{\tiny{\rm diag}}}$ then $E_{q}(\boldsymbol{A}^{-1})\longleftarrow\mbox{diag}\left\{\mbox{diagonal}\Big(E_{q}(\boldsymbol{A}^{-1})\Big)\right\}.$} \subsection{The Updates for $\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}(\bA)$} As a function of $\boldsymbol{A}$, $$\log\,p(\boldsymbol{\Sigma}|\boldsymbol{A})= \left[ \begin{array}{c} \log|\boldsymbol{A}|\\[1ex] \mbox{\rm vech}(\boldsymbol{A}^{-1}) \end{array} \right]^T \left[ \begin{array}{c} -(\xi+2-2\omega_2)/2\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}(\boldsymbol{\Sigma}^{-1}) \end{array} \right]+\mbox{const} $$ where \begin{equation} \omega_2=\omega_2(d,G)=\left\{ \begin{array}{ll} (d+1)/2 & \mbox{if $G=G_{\mbox{\tiny{\rm full}}}$},\\[1ex] 1 & \mbox{if $G=G_{\mbox{\tiny{\rm diag}}}$} \end{array} \right. \label{eq:fryingPan} \end{equation} and `const' denotes terms that do not depend on $\boldsymbol{A}$. Hence $$\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}(\bA)=\exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{A}|\\[1ex] \mbox{\rm vech}(\boldsymbol{A}^{-1}) \end{array} \right]^T\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}\right\} $$ where \begin{equation} \biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}} = \left[ \begin{array}{c} -(\xi+2-2\omega_2)/2\\[1ex] -{\textstyle{\frac{1}{2}}}\boldsymbol{D}_d^T\mbox{\rm vec}\Big(E_{q}(\boldsymbol{\Sigma}^{-1})\Big) \end{array} \right] \label{eq:boobyTrap} \end{equation} and $E_{q}$ denotes expectation with respect to the normalization of $$\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}(\bSigma)\,\biggerm_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}(\bSigma).$$ Let $q(\boldsymbol{\Sigma})$ denote this normalized density function.
Then $q(\boldsymbol{\Sigma})$ is an Inverse-G-Wishart density function with graph $\GSUBpSigmaATOSigma\in\{G_{\mbox{\tiny{\rm full}}},G_{\mbox{\tiny{\rm diag}}}\}$ and natural parameter vector $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}$. From Result \ref{eq:recipMoment}, $$E_{q}(\boldsymbol{\Sigma}^{-1}) =\left\{ \begin{array}{l} \Big\{\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\big)_1+{\textstyle{\frac{1}{2}}}(d+1)\Big\} \left\{\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_d^{+T}\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\big)_2\Big)\right\}^{-1}\\[2ex] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mbox{if $\GSUBpSigmaATOSigma=G_{\mbox{\tiny{\rm full}}}$},\\[2ex] \Big\{\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\big)_1+1\Big\} \left\{\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_d^{+T}\big(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}}\big)_2\Big)\right\}^{-1}\\[2ex] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \mbox{if $\GSUBpSigmaATOSigma=G_{\mbox{\tiny{\rm diag}}}$}. \end{array} \right. $$ Noting that the first factor of $E_{q}(\boldsymbol{\Sigma}^{-1})$ is $(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\leftrightarrow\bSigma$}})_1+\omega_2$, where $\omega_2$ is given by (\ref{eq:fryingPan}), the first update of $E_{q}(\boldsymbol{\Sigma}^{-1})$ in Algorithm \ref{alg:IterInvGWishFrag} is justified. Finally, we may need to adjust for the fact that $\biggerm_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}(\bA)$ is proportional to an Inverse G-Wishart density function with $G=G_{\mbox{\tiny{\rm diag}}}$. This is achieved by the conditional step: \centerline{If $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}=G_{\mbox{\tiny{\rm diag}}}$ then $E_{q}(\boldsymbol{\Sigma}^{-1})\longleftarrow\mbox{diag}\left\{\mbox{diagonal}\Big(E_{q}(\boldsymbol{\Sigma}^{-1})\Big)\right\}.$} \section{Illustrative Example Variational Message Passing Details} The variational message passing approach to fitting and approximate inference for statistical models is still quite a new concept. In this section we provide details on the approach for the illustrative example involving the $t$ response linear mixed model described in Section \ref{sec:illustrative}. \subsection{Data and Hyperparameter Inputs} Let $\boldsymbol{y}$ be the vector of responses as defined in (\ref{eq:yXZdefn}). Also, let $$\boldsymbol{C}=[\boldsymbol{X}\ \boldsymbol{Z}]$$ be the full design matrix, where the matrices $\boldsymbol{X}$ and $\boldsymbol{Z}$ are as defined in (\ref{eq:yXZdefn}). The data inputs are $\boldsymbol{y}$ and $\boldsymbol{C}$. The hyperparameter inputs are $$\sigma_{\boldsymbol{\beta}},s_{\sigma},\lambda_{\nu},s_{\mbox{\tiny{$\bSigma,1$}}},\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}>0.$$ \subsection{Factor to Stochastic Node Parameter Initialisations} \noindent Initialize $G_{\mbox{\footnotesize$p(\boldsymbol{A})\rightarrow\boldsymbol{A}$}}$ and $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{A})\rightarrow \boldsymbol{A}$}}$ via a call to Algorithm \ref{alg:InvGWishPriorFrag} with hyperparameter inputs: $$G_{\mbox{\tiny{$\boldsymbol{\Theta}$}}}=G_{\mbox{\tiny{\rm diag}}},\quad\xi_{\mbox{\tiny{$\boldsymbol{\Theta}$}}}=1\quad\mbox{and}\quad \boldsymbol{\Lambda}_{\mbox{\tiny{$\boldsymbol{\Theta}$}}}=\{2\,\mbox{diag}(s_{\mbox{\tiny{$\bSigma,1$}}}^2,\ldots,s_{\mbox{\tiny{$\bSigma,q$}}}^2)\}^{-1}.
$$ \vskip5mm \noindent Initialize $G_{\mbox{\footnotesize$p(a)\rightarrow a$}}$ and $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(a)\rightarrow a$}}$ via a call to Algorithm \ref{alg:InvGWishPriorFrag} with hyperparameter inputs: $$G_{\mbox{\tiny{$\boldsymbol{\Theta}$}}}=G_{\mbox{\tiny{\rm diag}}},\quad\xi_{\mbox{\tiny{$\boldsymbol{\Theta}$}}}=1\quad\mbox{and}\quad \boldsymbol{\Lambda}_{\mbox{\tiny{$\boldsymbol{\Theta}$}}}=(s_{\sigma}^2)^{-1}. $$ \noindent Note that the initialisations of $G_{\mbox{\footnotesize$p(\boldsymbol{A})\rightarrow\boldsymbol{A}$}}$, $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{A})\rightarrow \boldsymbol{A}$}}$, $G_{\mbox{\footnotesize$p(a)\rightarrow a$}}$ and $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(a)\rightarrow a$}}$ are part of the prior impositions for $\boldsymbol{\Sigma}$ and $\sigma^2$. These four factor to stochastic node parameters remain constant throughout the variational message passing iterations. \vskip5mm \noindent Initialize $$ \mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\upsilon)\rightarrow\upsilon$}}\longleftarrow \left[ \begin{array}{c} 0\\[1ex] -\lambda_{\nu} \end{array} \right]. $$ \noindent This initialization of $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\upsilon)\rightarrow\upsilon$}}$ corresponds to the prior imposition for $\upsilon$. This factor to stochastic node natural parameter remains constant throughout the variational message passing iterations. \vskip5mm The remaining factor to stochastic node natural parameters in the Figure \ref{fig:tMixModFacGraph} factor graph are updated in the variational message passing iterations, but require initial values. In theory, they can be set to any legal value according to the relevant exponential family. The following initialisations, which are used in the code that produced Figure \ref{fig:tMixModPosterDens}, are simple legal natural parameter vectors: $$G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}\longleftarrow G_{\mbox{\tiny{\rm diag}}},\ \ \biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}\longleftarrow\left[ \begin{array}{c} -\frac{1}{2}\\[1ex] -\frac{1}{2}\boldsymbol{D}_q^T\mbox{\rm vec}(\boldsymbol{I}_q) \end{array} \right], $$ $$\GSUBpSigmaATOSigma\longleftarrow G_{\mbox{\tiny{\rm full}}},\ \ \biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}\longleftarrow\left[ \begin{array}{c} -\frac{1}{2}\\[1ex] -\frac{1}{2}\boldsymbol{D}_q^T\mbox{\rm vec}(\boldsymbol{I}_q) \end{array} \right], $$ $$G_{\mbox{\footnotesize$p(\sigma^2|\,a)\rightarrow a$}}\longleftarrow G_{\mbox{\tiny{\rm diag}}},\ \ \biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow a$}}\longleftarrow\left[ \begin{array}{c} -2\\[1ex] -1 \end{array} \right], $$ $$G_{\mbox{\footnotesize$p(\sigma^2|\,a)\rightarrow\sigma^2$}}\longleftarrow G_{\mbox{\tiny{\rm full}}},\ \biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow\sigma^2$}}\longleftarrow\left[ \begin{array}{c} -2\\[1ex] -1 \end{array} \right], $$ $$\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow\bSigma$}}\longleftarrow\left[ \begin{array}{c} -\frac{1}{2}\\[1ex] -\frac{1}{2}\mbox{\rm vec}(\boldsymbol{I}_q) \end{array} \right],\quad \biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow(\bbeta,\bu)$}}\longleftarrow\left[ \begin{array}{c} \boldsymbol{0}_{p+m q}\\[1ex] -\frac{1}{2}\mbox{\rm vec}(\boldsymbol{I}_{p+m q}) \end{array} \right], $$
$$\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow(\boldsymbol{\beta},\boldsymbol{u})$}}\longleftarrow\left[ \begin{array}{c} \boldsymbol{0}_{p+m q}\\[1ex] -\frac{1}{2}\mbox{\rm vec}(\boldsymbol{I}_{p+mq}) \end{array} \right],\quad \mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow\sigma^2$}} \longleftarrow \left[ \begin{array}{c} -2\\[1ex] -1 \end{array} \right] $$ and $$\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{b}|\,\upsilon)\rightarrow\upsilon$}}\longleftarrow\left[ \begin{array}{c} 1\\[1ex] -1.1 \end{array} \right]. $$ The messages involving the $b_{\ell}$, $1\le\ell\le N$, nodes do not need to be included here since these messages are subsumed in the calculations used for the natural parameter updates for the model parameters in Algorithm 2 of McLean \&\ Wand (2019). \subsection{Variational Message Passing Iterations} With all factor to stochastic node initialisations accomplished, now we describe the iterative updates inside the variational message passing cycle loop. Each iteration involves: \begin{itemize} \item updating the stochastic node to factor message parameters. \item updating the factor to stochastic node message parameters. \end{itemize} \subsubsection{Stochastic Node to Factor Message Parameter Updates} The stochastic node to factor message updates are quite simple and follow from, e.g., equation (7) of Wand (2017). For the Figure \ref{fig:tMixModFacGraph} factor graph the updates are: $$G_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}\longleftarrow G_{\mbox{\footnotesize$p(\boldsymbol{A})\rightarrow\boldsymbol{A}$}},\quad \biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}\longleftarrow\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{A})\rightarrow \boldsymbol{A}$}}, $$ $$ G_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}\longleftarrow G_{\mbox{\tiny{\rm full}}},\quad \biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow\bSigma$}}, $$ $$ \biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bbeta,\bu|\bSigma)$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}},\quad \biggerbdeta_{\mbox{\scriptsize$(\bbeta,\bu)\rightarrow \pDens(\bbeta,\bu|\bSigma)$}}\longleftarrow\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow(\boldsymbol{\beta},\boldsymbol{u})$}}, $$ $$\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$(\boldsymbol{\beta},\boldsymbol{u})\rightarrow p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow(\bbeta,\bu)$}},\quad G_{\mbox{\scriptsize$a\rightarrow p(\sigma^2|a)$}}\longleftarrow G_{\mbox{\footnotesize$p(a)\rightarrow a$}}, $$ $$ \biggerbdeta_{\mbox{\scriptsize$a\rightarrow \pDens(\sigma^2|a)$}}\longleftarrow\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(a)\rightarrow a$}},\quad G_{\mbox{\scriptsize$\sigma^2\rightarrow \pDens(\sigma^2|a)$}}\longleftarrow G_{\mbox{\tiny{\rm full}}}, $$ $$\biggerbdeta_{\mbox{\scriptsize$\sigma^2\rightarrow \pDens(\sigma^2|a)$}}\longleftarrow\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow\sigma^2$}},\quad
\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$\sigma^2\rightarrow p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})$}}\longleftarrow\biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow\sigma^2$}}$$ and $$\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$\upsilon\rightarrow p(\boldsymbol{b}|\,\upsilon)$}}\longleftarrow\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\upsilon)\rightarrow\upsilon$}}.$$ \vskip5mm Some additional remarks concerning stochastic node to factor updates are: \begin{itemize} \item The stochastic node to factor messages corresponding to the extremities of the Figure \ref{fig:tMixModFacGraph} factor graph, such as the message from $\boldsymbol{A}$ to $p(\boldsymbol{A})$, are not required in the variational message passing iterations. Therefore, updates for these messages can be omitted. \item Some of the stochastic node to factor message parameter updates, such as that for $\biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}$, remain constant throughout the iterations. However, for simplicity of exposition, we list all of the updates together. \end{itemize} \subsubsection{Factor to Stochastic Node Message Parameter Updates} The updates for the parameters of factor to stochastic node messages are a good deal more complicated than those for the reverse messages. For the illustrative example, these updates are encapsulated in three algorithms across three different articles. Algorithm \ref{alg:IterInvGWishFrag} plays an important role for the variance and covariance matrix parameter parts of the factor graph. \begin{itemize} \setlength\itemsep{0pt} \item[] \textrm{Use Algorithm \ref{alg:IterInvGWishFrag} with:} \begin{itemize} \setlength\itemsep{0pt} \item[] \textbf{\small Shape Parameter Input}: $1$. \item[] \textbf{\small Graph Inputs}: $G_{\mbox{\scriptsize$\sigma^2\rightarrow \pDens(\sigma^2|a)$}}$, $G_{\mbox{\scriptsize$a\rightarrow p(\sigma^2|a)$}}$.
\item[] \textbf{\small Natural Parameter Inputs:} $\biggerbdeta_{\mbox{\scriptsize$\sigma^2\rightarrow \pDens(\sigma^2|a)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow\sigma^2$}}$, $\biggerbdeta_{\mbox{\scriptsize$a\rightarrow \pDens(\sigma^2|a)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow a$}}$. \item[] \textbf{\small Outputs:} $G_{\mbox{\footnotesize$p(\sigma^2|\,a)\rightarrow\sigma^2$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow\sigma^2$}}$, $G_{\mbox{\footnotesize$p(\sigma^2|\,a)\rightarrow a$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow a$}}$. \end{itemize} % \item[] \textrm{Use Algorithm \ref{alg:IterInvGWishFrag} with:} \begin{itemize} \setlength\itemsep{0pt} \item[] \textbf{\small Shape Parameter Input}: $2q$. \item[] \textbf{\small Graph Inputs}: $G_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}$, $G_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}$. \item[] \textbf{\small Natural Parameter Inputs:} $\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bSigma|\bA)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}$, $\biggerbdeta_{\mbox{\scriptsize$\bA\rightarrow \pDens(\bSigma|\bA)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}$. \item[] \textbf{\small Outputs:} $\GSUBpSigmaATOSigma$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}$, $G_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bA$}}$. \end{itemize} \item[] \textrm{Use the Gaussian Penalisation Fragment of Wand (2017, Section 4.1.4):} \begin{itemize} \setlength\itemsep{0pt} \item[] \textbf{\small Hyperparameter Input:} $\sigma^2_{\boldsymbol{\beta}}$. \item[] \textbf{\small Natural Parameter Inputs:} $\biggerbdeta_{\mbox{\scriptsize$(\bbeta,\bu)\rightarrow \pDens(\bbeta,\bu|\bSigma)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow(\bbeta,\bu)$}}$,\\ \null$\qquad\qquad\qquad\qquad\qquad\quad\,\biggerbdeta_{\mbox{\scriptsize$\bSigma\rightarrow \pDens(\bbeta,\bu|\bSigma)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow\bSigma$}}$. \item[] \textbf{\small Outputs:} $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow(\bbeta,\bu)$}}$, $\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow\bSigma$}}$. \end{itemize} \item[] \textrm{Use the $t$ Likelihood Fragment of McLean \&\ Wand (2019, Algorithm 2):} \begin{itemize} \setlength\itemsep{0pt} \item[] \textbf{\small Data Inputs:} $\boldsymbol{y}$, $\boldsymbol{C}$. \item[] \textbf{\small Natural Parameter Inputs:} $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$(\boldsymbol{\beta},\boldsymbol{u})\rightarrow p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})$}}$, $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow(\boldsymbol{\beta},\boldsymbol{u})$}}$,\\ \null$\qquad\qquad\qquad\qquad\qquad\quad\,\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$\sigma^2\rightarrow p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})$}}$, $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow\sigma^2$}}$,\\ \null$\qquad\qquad\qquad\qquad\qquad\quad\,\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$\upsilon\rightarrow p(\boldsymbol{b}|\,\upsilon)$}}$, $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{b}|\,\upsilon)\rightarrow\upsilon$}}$.
\item[] \textbf{\small Outputs:} $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow(\boldsymbol{\beta},\boldsymbol{u})$}}$, $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow\sigma^2$}}$, $\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{b}|\,\upsilon)\rightarrow\upsilon$}}$. \end{itemize} \end{itemize} Regarding the last two fragment updates, it should be noted that Wand (2017) and McLean \&\ Wand (2019) work with the ``$\mbox{\rm vec}$'' versions of Multivariate Normal and Inverse G-Wishart natural parameter vectors. To match the ``$\mbox{\rm vech}$'' natural parameter forms used in Algorithms \ref{alg:InvGWishPriorFrag} and \ref{alg:IterInvGWishFrag} of the current article, the conversions given by (\ref{eq:vecvechmapsMVN}) and (\ref{eq:vecvechmapsIGW}) are required. \subsection{Determination of Posterior Density Function Approximations} After convergence of the variational message passing iterations, the optimal $q^*$-densities for each stochastic node are obtained by multiplying together all of the messages passed to that node. See, for example, (10) of Wand (2017). We now give details for the model parameters $\boldsymbol{\Sigma}$, $\sigma^2$, $(\boldsymbol{\beta},\boldsymbol{u})$ and $\upsilon$. \subsubsection{Determination of $q^*(\boldsymbol{\Sigma})$} From (10) of Wand (2017): $$q^*(\boldsymbol{\Sigma})\propto\exp\left\{ \left[ \begin{array}{c} \log|\boldsymbol{\Sigma}|\\[1ex] \mbox{\rm vech}(\boldsymbol{\Sigma}^{-1}) \end{array} \right]^T \left(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}+\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow\bSigma$}}\right) \right\}. $$ It is apparent that $q^*(\boldsymbol{\Sigma})$ is an Inverse Wishart density function with natural parameter vector $$\mbox{\Large $\bdeta$}_{q(\boldsymbol{\Sigma})}\equiv \biggerbdeta_{\mbox{\scriptsize$\pDens(\bSigma|\bA)\rightarrow\bSigma$}}+\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow\bSigma$}}.$$ \subsubsection{Determination of $q^*(\sigma^2)$} Using (10) of Wand (2017): $$q^*(\sigma^2)\propto\exp\left\{ \left[ \begin{array}{c} \log(\sigma^2)\\[1ex] 1/\sigma^2 \end{array} \right]^T \left(\biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow\sigma^2$}}+\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow\sigma^2$}}\right) \right\}.
$$ We see that $q^*(\sigma^2)$ is an Inverse Chi-Squared density function with natural parameter vector $$\mbox{\Large $\bdeta$}_{q(\sigma^2)}\equiv \biggerbdeta_{\mbox{\scriptsize$\pDens(\sigma^2|a)\rightarrow\sigma^2$}}+\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow\sigma^2$}}.$$ \subsubsection{Determination of $q^*(\boldsymbol{\beta},\boldsymbol{u})$} Another application of (10) of Wand (2017) leads to: $$q^*(\boldsymbol{\beta},\boldsymbol{u})\propto\exp\left\{ \left[ \begin{array}{c} \boldsymbol{\beta}\\[0ex] \boldsymbol{u}\\[1ex] \mbox{\rm vech}\left(\left[ \begin{array}{c} \boldsymbol{\beta}\\[0ex] \boldsymbol{u} \end{array} \right] \left[ \begin{array}{c} \boldsymbol{\beta}\\[0ex] \boldsymbol{u} \end{array} \right]^T \right) \end{array} \right]^T \left(\biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow(\bbeta,\bu)$}}+\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow(\boldsymbol{\beta},\boldsymbol{u})$}}\right) \right\}. $$ It follows that $q^*(\boldsymbol{\beta},\boldsymbol{u})$ is a Multivariate Normal density function with natural parameter vector $$\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}\equiv \biggerbdeta_{\mbox{\scriptsize$\pDens(\bbeta,\bu|\bSigma)\rightarrow(\bbeta,\bu)$}}+\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{y}|\,\boldsymbol{\beta},\boldsymbol{u},\sigma^2,\boldsymbol{b})\rightarrow(\boldsymbol{\beta},\boldsymbol{u})$}}.$$ \subsubsection{Determination of $q^*(\upsilon)$} One last application of (10) of Wand (2017) gives: $$q^*(\upsilon)\propto\exp\left\{ \left[ \begin{array}{c} \upsilon\log(\upsilon)-\log\{\Gamma(\upsilon)\}\\[1ex] \upsilon \end{array} \right]^T \left(\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{b}|\,\upsilon)\rightarrow\upsilon$}}+\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\upsilon)\rightarrow\upsilon$}}\right) \right\}. $$ Therefore, $q^*(\upsilon)$ is a Moon Rock density function with natural parameter vector $$\mbox{\Large $\bdeta$}_{q(\upsilon)}\equiv\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\boldsymbol{b}|\,\upsilon)\rightarrow\upsilon$}}+\mbox{\Large $\bdeta$}_{\mbox{\footnotesize$p(\upsilon)\rightarrow\upsilon$}}.$$ \subsection{Conversion from Natural Parameters to Common Parameters} A final set of steps involves conversion of the $q^*$-densities to common parameter forms. \subsubsection{Conversion of $q^*(\boldsymbol{\Sigma})$ to Common Parameter Form} The common parameter form of $q^*(\boldsymbol{\Sigma})$ is the $\mbox{Inverse-G-Wishart}(G_{\mbox{\tiny{\rm full}}},\xi_{q(\boldsymbol{\Sigma})},\boldsymbol{\Lambda}_{q(\boldsymbol{\Sigma})})$ density function where $$\xi_{q(\boldsymbol{\Sigma})}=-2\big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\Sigma})}\big)_1-2 \quad\mbox{and}\quad \boldsymbol{\Lambda}_{q(\boldsymbol{\Sigma})}=-2\mbox{\rm vec}^{-1}\Big(\boldsymbol{D}_q^{+T}\big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\Sigma})}\big)_2\Big). $$ Alternatively, $q^*(\boldsymbol{\Sigma})$ is the $\mbox{Inverse-Wishart}(\kappa_{q(\boldsymbol{\Sigma})},\boldsymbol{\Lambda}_{q(\boldsymbol{\Sigma})})$ density function, as defined by (\ref{eq:GelmanTable}), where $$\kappa_{q(\boldsymbol{\Sigma})}=\xi_{q(\boldsymbol{\Sigma})}-q+1.$$
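As an illustration of these conversion formulas, the following \textsf{R} lines, a sketch using an arbitrary legal natural parameter vector and $q=2$, recover $(\xi_{q(\boldsymbol{\Sigma})},\boldsymbol{\Lambda}_{q(\boldsymbol{\Sigma})})$ and $\kappa_{q(\boldsymbol{\Sigma})}$; the variable names are hypothetical.

\begin{verbatim}
# Hypothetical conversion of a natural parameter vector for q*(Sigma)
# to common Inverse-G-Wishart and Inverse-Wishart parameters; q = 2.
library(matrixcalc)                    # provides duplication.matrix()
q <- 2
Dq <- duplication.matrix(q)
DqPlus <- solve(t(Dq)%*%Dq)%*%t(Dq)    # Moore-Penrose inverse of Dq
etaSigma <- c(-3, -0.5*t(Dq)%*%as.vector(diag(q)))  # arbitrary legal value
xiSigma <- -2*etaSigma[1] - 2
LambdaSigma <- -2*matrix(t(DqPlus)%*%etaSigma[-1], q, q)
kappaSigma <- xiSigma - q + 1
\end{verbatim}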
$$ Alternatively, $q^*(\boldsymbol{\Sigma})$ is the $\mbox{Inverse-Wishart}(\kappa_{q(\boldsymbol{\Sigma})},\boldsymbol{\Lambda}_{q(\boldsymbol{\Sigma})})$ density function, as defined by (\ref{eq:GelmanTable}), where $$\kappa_{q(\boldsymbol{\Sigma})}=\xi_{q(\boldsymbol{\Sigma})}-q+1.$$ \subsubsection{Conversion of $q^*(\sigma^2)$ to Common Parameter Form} The common parameter form of $q^*(\sigma^2)$ is the $\mbox{{\rm Inverse}-$\chi^2$}(\delta_{q(\sigma^2)},\lambda_{q(\sigma^2)})$ density function where $$\delta_{q(\sigma^2)}=-2\big(\mbox{\Large $\bdeta$}_{q(\sigma^2)}\big)_1-2 \quad\mbox{and}\quad \lambda_{q(\sigma^2)}=-2\big(\mbox{\Large $\bdeta$}_{q(\sigma^2)}\big)_2. $$ \subsubsection{Conversion of $q^*(\boldsymbol{\beta},\boldsymbol{u})$ to Common Parameter Form} The common parameter form of $q^*(\boldsymbol{\beta},\boldsymbol{u})$ is the $N\big(\boldsymbol{\mu}_{q(\boldsymbol{\beta},\boldsymbol{u})},\boldsymbol{\Sigma}_{q(\boldsymbol{\beta},\boldsymbol{u})}\big)$ density function where $$\boldsymbol{\mu}_{q(\boldsymbol{\beta},\boldsymbol{u})}=-{\textstyle{\frac{1}{2}}}\left\{\mbox{\rm vec}^{-1} \Big(\boldsymbol{D}_{p+mq}^{+T} \big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}\big)_2\Big) \right\}^{-1}\big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}\big)_1 $$ and $$ \boldsymbol{\Sigma}_{q(\boldsymbol{\beta},\boldsymbol{u})}=-{\textstyle{\frac{1}{2}}}\left\{\mbox{\rm vec}^{-1} \Big(\boldsymbol{D}_{p+mq}^{+T} \big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}\big)_2\Big) \right\}^{-1}. $$ Here $\big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}\big)_1$ denotes the first $p+mq$ entries of $\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}$ and $\big(\mbox{\Large $\bdeta$}_{q(\boldsymbol{\beta},\boldsymbol{u})}\big)_2$ denotes the remaining entries of the same vector. \subsubsection{Conversion of $q^*(\upsilon)$ to Common Parameter Form and Conversion to $q^*(\nu)$ } Recall that $q^*(\upsilon)$ is a Moon Rock density function. The Moon Rock distribution is not as established as the other distributions appearing in this subsection. Nevertheless, the web-supplement of McLean \&\ Wand (2019) defines a random variable $x$ to have a Moon Rock distribution with parameters $\alpha>0$ and $\beta>\alpha$, written $x\sim\mbox{Moon-Rock}(\alpha,\beta)$, if the density function of $x$ is $$p(x)=\left[\int_0^{\infty}\{t^t/\Gamma(t)\}^{\alpha}\exp(-\beta\,t)\,dt\right]^{-1} \{x^x/\Gamma(x)\}^{\alpha}\exp(-\beta\,x),\quad x>0. $$ Therefore, $q^*(\upsilon)$ has a $\mbox{Moon-Rock}(\alpha_{q(\upsilon)},\beta_{q(\upsilon)})$ density function where $$\alpha_{q(\upsilon)}=\big(\mbox{\Large $\bdeta$}_{q(\upsilon)}\big)_1 \quad\mbox{and}\quad\beta_{q(\upsilon)}=\,-\big(\mbox{\Large $\bdeta$}_{q(\upsilon)}\big)_2. $$ Explicitly, $$q^*(\upsilon)=\left[\int_0^{\infty}\{t^t/\Gamma(t)\}^{\alpha_{q(\upsilon)}} \exp(-\beta_{q(\upsilon)}\,t)\,dt\right]^{-1} \{\upsilon^{\upsilon}/\Gamma(\upsilon)\}^{\alpha_{q(\upsilon)}} \exp(-\beta_{q(\upsilon)}\,\upsilon),\quad \upsilon>0. $$ Lastly, we note that since $\nu=2\upsilon$, the $q^*$-density function of $\nu$ is {\setlength\arraycolsep{3pt} \begin{eqnarray*} &&q^*(\nu)={\textstyle{\frac{1}{2}}} \left[\int_0^{\infty}\{t^t/\Gamma(t)\}^{\alpha_{q(\upsilon)}} \exp(-\beta_{q(\upsilon)}\,t)\,dt\right]^{-1}\\[1ex] &&\qquad\qquad\qquad\qquad\times \{(\nu/2)^{\nu/2}/\Gamma(\nu/2)\}^{\alpha_{q(\upsilon)}} \exp\big(-{\textstyle{\frac{1}{2}}} \beta_{q(\upsilon)}\nu\big), \quad\nu>0. \end{eqnarray*} }
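To make the above conversions concrete, the following \textsf{Python} sketch (our illustration only; the function names and the duplication-matrix helper are ours and are not part of the Wand (2017) or McLean \&\ Wand (2019) software) recovers the common parameters of $q^*(\sigma^2)$ and $q^*(\boldsymbol{\beta},\boldsymbol{u})$ from their natural parameter vectors.
\begin{verbatim}
import numpy as np

def duplication_matrix(d):
    # D_d maps vech(A) to vec(A) for symmetric d x d matrices A.
    D = np.zeros((d * d, d * (d + 1) // 2))
    k = 0
    for j in range(d):
        for i in range(j, d):
            D[i * d + j, k] = 1.0
            D[j * d + i, k] = 1.0
            k += 1
    return D

def inv_chisq_natural_to_common(eta):
    # delta = -2*(eta)_1 - 2  and  lambda = -2*(eta)_2.
    return -2.0 * eta[0] - 2.0, -2.0 * eta[1]

def mvn_natural_to_common(eta, d):
    # Recover (mu, Sigma) from eta = (eta_1, eta_2), eta_2 in vech form.
    eta1, eta2 = eta[:d], eta[d:]
    Dplus = np.linalg.pinv(duplication_matrix(d))  # Moore-Penrose inverse
    M = (Dplus.T @ eta2).reshape(d, d)             # vec^{-1}(D^{+T} eta_2)
    Sigma = -0.5 * np.linalg.inv(M)
    return Sigma @ eta1, Sigma

eta = np.array([2.0, -0.5])  # natural parameters of a N(mu, sigma^2), d = 1
print(mvn_natural_to_common(eta, d=1))  # approximately (2.0, 1.0)
\end{verbatim}
As a sanity check, for $d=1$ the Gaussian natural parameter pair $(\mu/\sigma^2,\,-1/(2\sigma^2))$ is mapped back to $(\mu,\sigma^2)$, in agreement with the displayed formulas.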
\end{document}
\section{Introduction} We identify the $d$-dimensional flat torus $\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}$ with the unit cube $\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}$ and we recall that a sequence $\{t_{j}\}_{j=1}^{+\infty}\subset\mathbb{T}^{d}$ is \textit{uniformly distributed} if one of the following three equivalent conditions is satisfied: (i) for every $d$-dimensional box $I\subset\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}$ with volume $|I|$ \[ \lim_{N\rightarrow+\infty}\frac{1}{N}\,\operatorname*{card}\{t_{j}\in I:1\leq j\leq N\}=\left\vert I\right\vert \ ; \] (ii) for every continuous function $f$ on $\mathbb{T}^{d}$ \[ \lim_{N\rightarrow+\infty}\frac{1}{N}\sum_{j=1}^{N}f(t_{j})=\int_{\mathbb{T}^{d}}f\left( t\right) \ dt\ ; \] and (iii) for every $0\neq k\in\mathbb{Z}^{d}$ \[ \lim_{N\rightarrow+\infty}\frac{1}{N}\sum_{j=1}^{N}e^{2\pi ik\cdot t_{j}}=0\ , \] where \textquotedblleft$\cdot$\textquotedblright\ denotes the $d$-dimensional inner product. The concept of uniform distribution and the defining properties given above go back to a fundamental paper written one hundred years ago by H. Weyl \cite{W}; see \cite{KN} for the basic reference on uniformly distributed sequences. Observe that the above definition gives no information on the quality of a uniformly distributed sequence. In the late thirties J. van der Corput coined the term \textit{discrepancy}: let $\mathfrak{D}_{N}:=\{t_{j}\}_{j=1}^{N}$ be a sequence of $N$ points in $\mathbb{T}^{d}$, henceforth called a \textit{distribution} (of $N$ points), and let \[ D\left( \mathfrak{D}_{N}\right) :=\sup_{I\subset\mathbb{T}^{d}}\left\vert \operatorname*{card}\left( \{t_{j}\}_{j=1}^{N}\cap I\right) -N|I|\right\vert \] be the (non-normalized) discrepancy associated with $\mathfrak{D}_{N}$ with respect to the $d$-dimensional boxes $I$ in $\mathbb{T}^{d}$. There are different approaches to defining a discrepancy that measures the quality of a distribution of points; see e.g. \cite{BC88,CST,DT,KN,Mat,DP} for an introduction to discrepancy theory. See \cite{BCCGST,CF,DG,DHRZ} for the connections of discrepancy to energy and numerical integration. Throughout this paper we shall denote by $c,c_{1},\ldots$ positive constants which may change from step to step. \bigskip K. Roth \cite{Roth} proved the following lower estimate: for every distribution $\mathfrak{D}_{N}$ of $N$ points in $\mathbb{T}^{2}$, we have \begin{equation} \int_{\mathbb{T}^{2}}\left\vert \operatorname*{card}(\mathfrak{D}_{N}\cap I_{x,y})-Nxy\right\vert ^{2}\ dxdy\geq c\ \log N\ , \label{roth} \end{equation} where $I_{x,y}:=[0,x]\times\lbrack0,y]$ and $0\leq x,y<1$. This yields $D(\mathfrak{D}_{N})\geq c\ \log^{1/2}N$. H. Davenport \cite{Dav} proved that the estimate (\ref{roth}) is sharp. W. Schmidt \cite{Scmdt} investigated the discrepancy with respect to discs. His results were improved and extended, independently, by J. Beck \cite{Beck} and H. Montgomery \cite{mont}: for every convex body $C\subset\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}$ of diameter less than one and for every distribution $\mathfrak{D}_{N}$ of $N$ points in $\mathbb{T}^{d}$, one has \begin{equation} \int_{0}^{1}\int_{SO(d)}\int_{\mathbb{T}^{d}}\left\vert \operatorname*{card}(\mathfrak{D}_{N}\cap(\lambda\sigma(C)+t))-\lambda^{d}N|C|\right\vert ^{2}\ dtd\sigma d\lambda\geq c\ N^{(d-1)/d}\ .
\label{bm} \end{equation} This relation implies that for every distribution $\mathfrak{D}_{N}$ there exists a translated, rotated, and dilated copy $\overline{C}$ of a given convex body $C\subset\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}$ having diameter less than one, such that \[ \left\vert \operatorname*{card}(\mathfrak{D}_{N}\cap\overline{C})-N|\overline{C}|\right\vert \geq c\ N^{(d-1)/(2d)}\ . \] J. Beck and W. Chen \cite{BC} proved that (\ref{bm}) is sharp. Indeed, they showed that for every positive integer $N$ there exists a distribution $\widetilde{\mathfrak{D}}_{N}\subset\mathbb{T}^{d}$ satisfying \begin{equation} \int_{SO(d)}\int_{\mathbb{T}^{d}}\left\vert \operatorname*{card}(\widetilde{\mathfrak{D}}_{N}\cap(\sigma(C)+t))-N|C|\right\vert ^{2}\ dtd\sigma\leq c\ N^{(d-1)/d}\ . \label{pallino discr} \end{equation} This distribution $\widetilde{\mathfrak{D}}_{N}$ can be obtained either by applying a probabilistic argument or by reduction to a lattice point problem; see \cite{BGT, CHEN,CT,T} for a comparison of probabilistic and deterministic results. \bigskip In the following, we shall consider bounds for the integral in (\ref{pallino discr}) for distributions of $N$ points that are restrictions of a shrunk integer lattice to the unit cube $\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}$. Due to an argument in \cite[p. 3533]{BIT} that also extends to higher dimensions, we may assume that $N$ is a $d$th power $N=M^{d}$ for a positive integer $M$. More precisely, we consider the distribution \[ \mathfrak{D}_{N}:=\left( \frac{1}{N^{1/d}}\mathbb{Z}^{d}\right) \cap\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}\ . \] Given a convex body $C\subset\left[ -\frac{1}{2},\frac{1}{2}\right) ^{d}$ of diameter less than one, we then have \begin{equation} \operatorname*{card}\left( \mathfrak{D}_{N}\cap C\right) -N\left\vert C\right\vert =\operatorname*{card}\left( \mathbb{Z}^{d}\cap N^{1/d}C\right) -N\left\vert C\right\vert \ . \label{N ro lattice} \end{equation} Estimation of the right-hand side in (\ref{N ro lattice}) is a classical lattice point problem. Results concerning lattice points are extensively used in different areas of pure and applied mathematics; see, for example, \cite{GL,H,K}. For the definition of a suitable discrepancy function, we change the discrete dilation $N^{1/d}$ in (\ref{N ro lattice}) to an arbitrary dilation $\rho\geq1$ and replace the convex body $C$ in (\ref{N ro lattice}) with a translated, rotated and then dilated copy $\rho\sigma\left( C\right) +t$, where $\sigma\in SO\left( d\right) $ and $t\in\mathbb{T}^{d}$. Thus the discrepancy \[ D_{C}^{\rho}(\sigma,t):=\operatorname*{card}(\mathbb{Z}^{d}\cap(\rho\sigma(C)+t))-\rho^{d}|C|=\sum_{k\in\mathbb{Z}^{d}}\chi_{\rho\sigma(C)+t}(k)-\rho^{d}|C| \] is defined as the difference between the number of integer lattice points in the set $\rho\sigma\left( C\right) +t$ and its volume $\rho^{d}\left\vert C\right\vert $ (here, $\chi_{A}$ denotes the characteristic function of the set $A$). It is easy to see (e.g., \cite{BGT}) that the periodic function $t\mapsto D_{C}^{\rho}(\sigma,t)$ has the Fourier series expansion \begin{equation} \rho^{d}\sum_{0\neq m\in\mathbb{Z}^{d}}\widehat{\chi_{\sigma\left( C\right) }}\left( \rho m\right) \ e^{2\pi im\cdot t}\ . \label{fourier D} \end{equation} D. Kendall \cite{Kendall} seems to have been the first to realize that multiple Fourier series expansions can be helpful in certain lattice point problems.
Using our notation, he proved that for every convex body $C\subset\mathbb{R}^{d}$ and $\rho\geq1$: \begin{equation} \Vert D_{C}^{\rho}\Vert_{L^{2}\left( SO(d)\times\mathbb{T}^{d}\right) }\leq c\ \rho^{(d-1)/2}\ . \label{kend} \end{equation} This also follows from more recent results in \cite{Pod91} and \cite{BHI}, as demonstrated next. Given a convex body $C\subset\mathbb{R}^{d}$, we define the (spherical) average decay of $\widehat{\chi_{C}}$ as \[ \left\Vert \widehat{\chi_{C}}\left( \rho\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{d-1}\right) }:=\left\{ \int_{\Sigma_{d-1}}\left\vert \widehat{\chi_{C}}\left( \rho\tau\right) \right\vert ^{2}\ d\tau\right\} ^{1/2}\ , \] where $\Sigma_{d-1}:=\left\{ t\in\mathbb{R}^{d}:\left\vert t\right\vert =1\right\} $ and $\tau$ is the rotation invariant normalized measure on $\Sigma_{d-1}$. Extending an earlier result of A. Podkorytov \cite{Pod91}, L. Brandolini, S. Hofmann, and A. Iosevich \cite{BHI} proved that \begin{equation} \left\Vert \widehat{\chi_{C}}\left( \rho\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{d-1}\right) }\leq c\ \rho^{-\left( d+1\right) /2}\ . \label{bhi} \end{equation} By applying the Parseval identity to the Fourier series (\ref{fourier D}) of the discrepancy function, we obtain Kendall's result (\ref{kend}); i.e., \begin{align} \Vert D_{C}^{\rho}\Vert_{L^{2}\left( SO(d)\times\mathbb{T}^{d}\right) }^{2} & =\rho^{2d}\sum_{0\neq k\in\mathbb{Z}^{d}}\int_{SO\left( d\right) }\left\vert \widehat{\chi_{\sigma\left( C\right) }}\left( \rho k\right) \right\vert ^{2}\ d\sigma\label{parsev}\\ & \leq c\ \rho^{2d}\sum_{0\neq k\in\mathbb{Z}^{d}}\left\vert \rho k\right\vert ^{-\left( d+1\right) }\leq c_{1}\ \rho^{d-1}\ .\nonumber \end{align} We are interested in the reversed inequality \begin{equation} \Vert D_{C}^{\rho}\Vert_{L^{2}\left( SO(d)\times\mathbb{T}^{d}\right) }^{2}\geq c_{1}\ \rho^{d-1}\ , \label{reverse} \end{equation} which, as we shall see, may or may not hold. To understand this, let us assume that (\ref{bhi}) can be reversed: \begin{equation} \left\Vert \widehat{\chi_{C}}\left( \rho\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{d-1}\right) }\geq c_{1}\ \rho^{-\left( d+1\right) /2}\ . \label{simplex reverse} \end{equation} This relation (\ref{simplex reverse}) is true for a simplex (see \cite[Theorem 2.3]{BCT}) but it is not true for every convex body (see the next section). The following result was proved in \cite[Proof of Theorem 3.7]{BCT}. \begin{proposition} \label{prop}Let $C$ in $\mathbb{R}^{d}$ be a convex body which satisfies (\ref{simplex reverse}). Then $C$ satisfies (\ref{reverse}). \end{proposition} \begin{proof} Indeed, \begin{align} \Vert D_{C}^{\rho}\Vert_{L^{2}\left( SO(d)\times\mathbb{T}^{d}\right) }^{2} & =\rho^{2d}\sum_{0\neq k\in\mathbb{Z}^{d}}\int_{SO\left( d\right) }\left\vert \widehat{\chi_{\sigma\left( C\right) }}\left( \rho k\right) \right\vert ^{2}\ d\sigma\label{dalbassocondecad}\\ & \geq c\ \rho^{2d}\int_{SO\left( d\right) }\left\vert \widehat{\chi_{\sigma\left( C\right) }}\left( \rho k^{\prime}\right) \right\vert ^{2}\ d\sigma\geq c_{1}\ \rho^{d-1}\ ,\nonumber \end{align} where $k^{\prime}$ is any non-zero element in $\mathbb{Z}^{d}$. \end{proof} We are going to see that (\ref{reverse}) does not imply (\ref{simplex reverse}).
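Before turning to the regularity results, a small numerical experiment (ours, not part of the original paper; all parameter choices are arbitrary) can make Kendall's bound (\ref{kend}) tangible in the planar case $d=2$: the following \textsf{Python} sketch estimates $\Vert D_{C}^{\rho}\Vert_{L^{2}(\mathbb{T}^{2})}$ for a disc $C$ by direct lattice point counting; rotations may be ignored since a disc is rotation invariant. By (\ref{kend}) the printed ratios against $\rho^{1/2}$ should remain bounded.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def disc_discrepancy_rms(rho, n_samples=2000, radius=0.25):
    # Monte Carlo estimate of ||D_C^rho||_{L^2(T^2)} for a disc C
    # of the given radius; rotations are irrelevant for a disc.
    R = rho * radius
    area = np.pi * R ** 2
    M = int(np.ceil(R)) + 2
    ii, jj = np.mgrid[-M:M + 1, -M:M + 1]  # candidate lattice points
    pts = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    sq_errors = []
    for _ in range(n_samples):
        t = rng.random(2)  # random translation in [0,1)^2
        count = np.sum(np.sum((pts - t) ** 2, axis=1) <= R ** 2)
        sq_errors.append((count - area) ** 2)
    return np.sqrt(np.mean(sq_errors))

for rho in [20, 40, 80, 160]:
    print(rho, disc_discrepancy_rms(rho) / rho ** 0.5)
\end{verbatim}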
\section{$L^{2}$-regularity of convex bodies} We say that a convex body $C\subset\mathbb{R}^{d}$ is $L^{2}$\emph{-regular} if there exists a positive constant $c_{1}$ such that \begin{equation} c_{1}\ \rho^{(d-1)/2}\leq\Vert D_{C}^{\rho}\Vert_{L^{2}\left( SO(d)\times\mathbb{T}^{d}\right) } \label{L2reg} \end{equation} (by (\ref{kend}) we already know that $\Vert D_{C}^{\rho}\Vert_{L^{2}\left( SO(d)\times\mathbb{T}^{d}\right) }\leq c_{2}\ \rho^{(d-1)/2}$ for some $c_{2}>0$). If (\ref{L2reg}) fails we say that $C$ is $L^{2}$\emph{-irregular}. Let $d>1$. L. Parnovski and A. Sobolev \cite{PS} proved that the $d$-dimensional ball $B_{d}:=\left\{ t\in\mathbb{R}^{d}:\left\vert t\right\vert \leq1\right\} $ is $L^{2}$-regular if and only if $d\not \equiv 1\ (\operatorname{mod}4)$. More generally, it was proved in \cite{BCGT} that if $C\subset\mathbb{R}^{d}$ ($d>1$) is a convex body with smooth boundary, having everywhere positive Gaussian curvature, then (i) if $C$ is not symmetric about a point, or if $d\not \equiv 1\left( \operatorname{mod}4\right) $, then $C$ is $L^{2}$-regular; (ii) if $C$ is symmetric about a point and if $d\equiv1\,\left( \operatorname{mod}4\right) $ then $C$ is $L^{2}$-irregular. L. Parnovski and N. Sidorova \cite{psi} studied the above problem for the non-convex case of a $d$-dimensional annulus ($d>1$). They provided a complete answer in terms of the width of the annulus. In the case of a polyhedron $P$, inequality (\ref{kend}) was extended to $L^{p}$ norms in \cite{BCT}: for any $p>1$ and $\rho\geq1$ we have \[ \Vert D_{P}^{\rho}\Vert_{L^{p}\left( SO(d)\times\mathbb{T}^{d}\right) }\leq c_{p}\ \rho^{(d-1)(1-1/p)} \] and, specifically for simplices $S$, one has \[ c_{p}^{\prime}\ \rho^{(d-1)(1-1/p)}\leq\Vert D_{S}^{\rho}\Vert_{L^{p}\left( SO(d)\times\mathbb{T}^{d}\right) }\leq c_{p}\ \rho^{(d-1)(1-1/p)}\ . \] In particular, this implies that the $d$-dimensional simplices are $L^{2}$-regular. For the planar case it was proved in \cite[Theorem 6.2]{BRT} that every convex body with piecewise $C^{\infty}$ boundary that is not a polygon is $L^{2}$-regular. Related results can be found in \cite{BCT,CT,ko}. Until now no example of an $L^{2}$-irregular polyhedron has been found. We are interested in identifying the $L^{2}$-regular convex polyhedra. In this paper we give a complete answer for the planar case. \bigskip Let us first compare the $L^{2}$-regularity of a disc $B\subset\mathbb{R}^{2}$ and a square $Q\subset\mathbb{R}^{2}$. Their characteristic functions $\chi_{B}$ and $\chi_{Q}$ do not satisfy (\ref{simplex reverse}). Indeed, $\widehat{\chi_{B}}\left( \xi\right) =\left\vert \xi\right\vert ^{-1}J_{1}\left( 2\pi\left\vert \xi\right\vert \right) $, where $J_{1}$ is the Bessel function (see e.g. \cite{T}). Then the zeroes of $J_{1}$ yield an increasing diverging sequence $\left\{ \rho_{u}\right\} _{u=1}^{\infty}$ such that \[ \left\Vert \widehat{\chi_{B}}\left( \rho_{u}\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{1}\right) }=0\ . \] Less obvious is the fact that the inequality $\left\Vert \widehat{\chi_{Q}}\left( \rho\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{1}\right) }\geq c\ \rho^{-3/2}$ fails for the square $Q$: it was observed in \cite{BCT} that there exists a positive constant $c$ such that, for every positive integer $n$, one has \begin{equation} \left\Vert \widehat{\chi_{Q}}\left( n\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{1}\right) }\leq c\ n^{-7/4}\ . \label{7quarti} \end{equation} For completeness we give the short proof of (\ref{7quarti}).
Indeed, let $Q=\left[ -\frac{1}{2},\frac{1}{2}\right] ^{2}$ and let $n$ be a positive integer. Let $\Theta:=\left( \cos\theta,\sin\theta\right) $. Then an explicit computation of $\widehat{\chi_{Q}}$ yields \begin{align*} & \int_{0}^{2\pi}\left\vert \widehat{\chi_{Q}}(n\Theta)\right\vert ^{2}~d\theta=8\int_{0}^{\pi/4}\left\vert \dfrac{\sin(\pi n\cos\theta)}{\pi n\cos\theta}\frac{\sin(\pi n\sin\theta)}{\pi n\sin\theta}\right\vert ^{2}~d\theta\\ & \leq c\,\frac{1}{n^{4}}\int_{0}^{\pi/4}\left\vert \dfrac{\sin(\pi n\cos\theta)}{\sin\theta}\right\vert ^{2}~d\theta=c\,\frac{1}{n^{4}}\int_{0}^{\pi/4}\left\vert \dfrac{\sin(\pi n\left( 1-2\sin^{2}\left( \theta/2\right) \right) )}{\sin\theta}\right\vert ^{2}~d\theta\\ & \leq c^{\prime}\,\frac{1}{n^{4}}\int_{0}^{\pi/4}\left\vert \sin(2\pi n\sin^{2}\left( \theta/2\right) )\right\vert ^{2}\theta^{-2}~d\theta\\ & \leq c^{\prime\prime}\,\frac{1}{n^{4}}\int_{0}^{n^{-1/2}}n^{2}\theta^{2}~d\theta+c^{\prime\prime}\ \frac{1}{n^{4}}\int_{n^{-1/2}}^{\pi/4}\theta^{-2}~d\theta\leq c^{\prime\prime\prime}\ n^{-7/2}\;. \end{align*} Then $B$ and $Q$ \textit{may} be $L^{2}$-irregular. On the one hand, it is known that a disc $B$ is $L^{2}$-regular (see \cite{PS} or \cite[Theorem 6.2]{BRT}), so that (\ref{reverse}) does not imply (\ref{simplex reverse}). On the other hand, we shall prove in this paper that $Q$ is $L^{2}$-irregular. The $L^{2}$-irregularity of the square $Q$ is shared by each member of the family of polygons described in the following definition. \begin{definition} \label{def Gotic B}Let $\mathfrak{P}$ be the family of all convex polygons in $\mathbb{R}^{2}$ which can be inscribed in a circle and are symmetric about the centre. \end{definition} \section{Statements of the results} We now state our main result. \begin{theorem} \label{maintheorem}A convex polygon $P$ is $L^{2}$-regular if and only if $P\notin\mathfrak{P}$. \end{theorem} The \textquotedblleft only if\textquotedblright\ part is a consequence of the following more precise result. \begin{proposition} \label{propeps}If $P\in\mathfrak{P}$, then for every $\varepsilon>0$ there is an increasing diverging sequence $\left\{ \rho_{u}\right\} _{u=1}^{\infty}$ such that \[ \Vert D_{P}^{\rho_{u}}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }\leq c_{\varepsilon}\ \rho_{u}^{1/2}\log^{-1/\left( 32+\varepsilon\right) }(\rho_{u})\ . \] \end{proposition} Theorem \ref{maintheorem} above and \cite[Theorem 6.2]{BRT} yield the following more general result. \begin{corollary} Let $C$ be a convex body in $\mathbb{R}^{2}$ having piecewise smooth boundary. Then $C$ is not $L^{2}$-regular if and only if it belongs to $\mathfrak{P}$. \end{corollary} The following result shows that Theorem \ref{maintheorem} is essentially sharp. \begin{proposition} \label{stimaepsilon} For every $P\in\mathfrak{P}$, for $\varepsilon>0$ arbitrarily small, and for any $\rho$ large enough, \[ \Vert D_{P}^{\rho}\Vert_{L^{2}(SO(2)\times\mathbb{T}^{2})}\geq c_{\varepsilon}\ \rho^{1/2-\varepsilon}\ , \] where $c_{\varepsilon}$ is independent of $\rho$. \end{proposition} The \textquotedblleft if\textquotedblright\ part of Theorem \ref{maintheorem} is a consequence of the following three lemmas. \begin{lemma} \label{lemma1}Let $P$ in $\mathbb{R}^{2}$ be a polygon having a side not parallel to any other side. Then $P$ is $L^{2}$-regular. \end{lemma} \begin{lemma} \label{lemma2}Let $P$ in $\mathbb{R}^{2}$ be a convex polygon with a pair of parallel sides having different lengths. Then $P$ is $L^{2}$-regular.
\end{lemma} \begin{lemma} \label{lemma3} Let $P$ in $\mathbb{R}^{2}$ be a convex polygon which cannot be inscribed in a circle. Then $P$ is $L^{2}$-regular. \end{lemma} \section{Notation and preliminary arguments} In the remainder of the paper, a polygon $P$ is given by its vertex set $\{P_{h}\}_{h=1}^{s}$, where it is assumed that the numbering indicates counterclockwise ordering of the vertices; we write $P\sim\{P_{h}\}_{h=1}^{s}$. For convenience we use periodic labeling; i.e., $P_{h+s},$ $P_{h+2s},$ $\ldots$ refer to the same point $P_{h}$ for $1\leq h\leq s$. For every $h$ let \[ \tau_{h}:=\frac{P_{h+1}-P_{h}}{\left\vert P_{h+1}-P_{h}\right\vert } \] be the direction of the oriented side $P_{h}P_{h+1}$ and $\ell_{h}:=\left\vert P_{h+1}-P_{h}\right\vert $ its length. For every $h$ let $\nu_{h}$ be the outward unit normal vector corresponding to the side $P_{h}P_{h+1}$. Let \[ \mathcal{L}_{h}:=\left\vert P_{h}+P_{h+1}\right\vert \] be the length of the vector $P_{h}+P_{h+1}$. Observe that if $\left\vert P_{h}\right\vert =\left\vert P_{h+1}\right\vert $ (in particular if the polygon $P$ is inscribed in a circle centred at the origin) then \[ P_{h}+P_{h+1}=\mathcal{L}_{h}\nu_{h}\ . \] We shall always assume $\ell_{h}\geq1$ and $\mathcal{L}_{h}\geq1$. Let $\nu(s)$ be the outward unit normal vector at a point $s\in\partial P$ which is not a vertex of $P$. By applying Green's formula we see that, for any $\rho\geq1$, we have \begin{align} & \widehat{\chi_{P}}(\rho\Theta)\nonumber\\ & =\int_{P}e^{-2\pi i\rho\Theta\cdot t}\ dt=-\frac{1}{2\pi i\rho}\int_{\partial P}e^{-2\pi i\rho\Theta\cdot s}\ \left( \Theta\cdot\nu(s)\right) \ ds\nonumber\\ & =-\frac{1}{2\pi i\rho}\sum_{h=1}^{s}\ell_{h}\left( \Theta\cdot\nu_{h}\right) \int_{0}^{1}e^{-2\pi i\rho\Theta\cdot\left( P_{h}+\lambda\left( P_{h+1}-P_{h}\right) \right) }\ d\lambda\nonumber\\ & =-\frac{1}{4\pi^{2}\rho^{2}}\sum_{h=1}^{s}\frac{\Theta\cdot\nu_{h}}{\Theta\cdot\tau_{h}}\ \left[ e^{-2\pi i\rho\Theta\cdot P_{h+1}}-e^{-2\pi i\rho\Theta\cdot P_{h}}\right] \nonumber\\ & =-\frac{1}{4\pi^{2}\rho^{2}}\sum_{h=1}^{s}\frac{\Theta\cdot\nu_{h}}{\Theta\cdot\tau_{h}}\ e^{-\pi i\rho\Theta\cdot(P_{h+1}+P_{h})}\left[ e^{-\pi i\rho\Theta\cdot(P_{h+1}-P_{h})}-e^{\pi i\rho\Theta\cdot(P_{h+1}-P_{h})}\right] \nonumber\\ & =\frac{i}{2\pi^{2}\rho^{2}}\sum_{h=1}^{s}\frac{\Theta\cdot\nu_{h}}{\Theta\cdot\tau_{h}}\ e^{-\pi i\rho\mathcal{L}_{h}\Theta\cdot\nu_{h}}\sin(\pi\rho\ell_{h}\Theta\cdot\tau_{h})\ . \label{divergenza} \end{align} For any $1\leq h\leq s$, let $\theta_{h}\in\lbrack0,2\pi)$ be the angle defined by \begin{equation} \tau_{h}=:(\cos\theta_{h},\sin\theta_{h})\ . \label{thetaj} \end{equation} Hence \begin{equation} \nu_{h}=(\sin\theta_{h},-\cos\theta_{h}) \label{nu j} \end{equation} and, if $\Theta:=\left( \cos\theta,\sin\theta\right) $, \[ \Theta\cdot\tau_{h}=\cos(\theta-\theta_{h})\ ,\ \ \ \ \ \Theta\cdot\nu_{h}=-\sin(\theta-\theta_{h})\ .
\] Then (\ref{divergenza}) can be written as \[ \widehat{\chi_{P}}(\rho\Theta)=-\frac{i}{2\pi^{2}\rho^{2}}\sum_{h=1}^{s}\frac{\sin\left( \theta-\theta_{h}\right) }{\cos\left( \theta-\theta_{h}\right) }\ e^{\pi i\rho\mathcal{L}_{h}\sin\left( \theta-\theta_{h}\right) }\sin\left( \pi\rho\ell_{h}\cos\left( \theta-\theta_{h}\right) \right) \] and the equality in (\ref{parsev}) yields \begin{align} & \Vert D_{P}^{\rho}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }^{2}\label{generale}\\ & =\rho^{4}\sum_{0\neq k\in\mathbb{Z}^{2}}\int_{0}^{2\pi}\left\vert \widehat{\chi_{P}}(\rho\left\vert k\right\vert \Theta)\right\vert ^{2}\ d\theta\nonumber\\ & =c\sum_{0\neq k\in\mathbb{Z}^{2}}\frac{1}{|k|^{4}}\nonumber\\ & \times\int_{0}^{2\pi}\left\vert \sum_{h=1}^{s}\frac{\sin(\theta-\theta_{h})}{\cos(\theta-\theta_{h})}\ e^{-\pi i\rho\left\vert k\right\vert \mathcal{L}_{h}\sin\left( \theta-\theta_{h}\right) }\sin(\pi\rho\left\vert k\right\vert \ell_{h}\cos(\theta-\theta_{h}))\right\vert ^{2}d\theta\ .\nonumber \end{align} For $P\in\mathfrak{P}$, relation (\ref{generale}) can be further simplified. Let $P\in\mathfrak{P}$ have $s=2n$ sides (i.e. $P\sim\{P_{h}\}_{h=1}^{2n}$) and be inscribed in a circle centered at the origin. Then $P_{h}P_{h+1}=-P_{n+h}P_{n+h+1}$ for any $1\leq h\leq n$ and $P_{h+1}+P_{h}=\mathcal{L}_{h}\nu_{h}$. Therefore, for every $1\leq h\leq n$, \[ \tau_{h}=-\tau_{n+h}\ ,\ \ \ \ \ \nu_{h}=-\nu_{n+h}\ ,\ \ \ \ \ \ell_{h}=\ell_{n+h}\ ,\ \ \ \ \ \mathcal{L}_{h}=\mathcal{L}_{n+h}\ . \] Then the relation (\ref{divergenza}) becomes \begin{equation} \widehat{\chi_{P}}(\rho\Theta)=\frac{1}{\pi^{2}\rho^{2}}\sum_{h=1}^{n}\frac{\sin(\theta-\theta_{h})}{\cos(\theta-\theta_{h})}\sin(\pi\rho\mathcal{L}_{h}\sin(\theta-\theta_{h}))\sin(\pi\rho\ell_{h}\cos(\theta-\theta_{h})) \label{divergenza2} \end{equation} and the equality in (\ref{parsev}) yields \begin{align} & \Vert D_{P}^{\rho}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }^{2}\label{final}\\ & =c\sum_{0\neq k\in\mathbb{Z}^{2}}\frac{1}{|k|^{4}}\nonumber\\ & \times\int_{0}^{2\pi}\left\vert \sum_{h=1}^{n}\frac{\sin(\theta-\theta_{h})}{\cos(\theta-\theta_{h})}\sin(\pi\rho|k|\ell_{h}\cos(\theta-\theta_{h}))\sin(\pi\rho|k|\mathcal{L}_{h}\sin(\theta-\theta_{h}))\right\vert ^{2}d\theta\nonumber\\ & \leq c\ \sum_{0\neq k\in\mathbb{Z}^{2}}\frac{1}{|k|^{4}}\sum_{h=1}^{n}\int_{0}^{\pi/2}\left\vert \frac{\sin(\pi\rho|k|\ell_{h}\sin\theta)}{\sin\theta}\sin(\pi\rho|k|\mathcal{L}_{h}\cos\theta)\right\vert ^{2}d\theta\ .\nonumber \end{align} The last relation holds for every $P\in\mathfrak{P}$ with $2n$ sides. \section{Proofs} \begin{proof} [Proof of Lemma \ref{lemma1}]The proof of Lemma \ref{lemma1} is essentially the proof of \cite[Theorem 3.7]{BCT}, which is stated for a simplex but whose argument also works for every polyhedron having a face not parallel to any other face. \end{proof} \begin{proof} [Proof of Lemma \ref{lemma2}]By Lemma \ref{lemma1} we can assume that $P\sim\{P_{h}\}_{h=1}^{2n}$ is a convex polygon with an even number of sides, and that for every $h=1,\ldots,n$ the sides $P_{h}P_{h+1}$ and $P_{h+n}P_{h+n+1}$ are parallel. Suppose that the length $\ell_{j}$ of the $j$th side $P_{j}P_{j+1}$ is greater than the length $\ell_{j+n}$ of the opposite side $P_{j+n}P_{j+n+1}$. Then there exist $0<\varepsilon<1$ and $0<\alpha<1$ such that \begin{equation} (1+\varepsilon)\frac{\ell_{j+n}}{\ell_{j}}<\alpha\ .
\label{alfa} \end{equation} Let $H>1$ be a large constant satisfying \begin{equation} \sin\left( \theta-\theta_{j}\right) \geq\sqrt{\alpha}\left( \theta-\theta_{j}\right) \ \ \ \ \ \text{if \ }0\leq\theta-\theta_{j}\leq\frac{1+\varepsilon}{H}\ . \label{alfa A} \end{equation} We further assume (recall that $\rho\geq1$) that \[ \frac{1}{H\pi\rho\ell_{j}}\leq\theta-\theta_{j}\leq\frac{1+\varepsilon}{H\pi\rho\ell_{j}}\ . \] Observe that (\ref{alfa}) and (\ref{alfa A}) yield \begin{align*} & |\sin(\pi\rho\ell_{j}\sin\left( \theta-\theta_{j}\right) )|-|\sin(\pi\rho\ell_{j+n}\sin\left( \theta-\theta_{j+n}\right) )|\\ & \geq\sin(\pi\rho\ell_{j}\sqrt{\alpha}\left( \theta-\theta_{j}\right) )-\sin(\pi\rho\ell_{j+n}\left( \theta-\theta_{j}\right) )\geq\frac{\alpha}{H}-\frac{1+\varepsilon}{H}\frac{\ell_{j+n}}{\ell_{j}}=:a_{j}>0\ . \end{align*} Hence \begin{align} & \left\vert \frac{\sin(\pi\rho\ell_{j}\sin(\theta-\theta_{j}))}{\sin(\theta-\theta_{j})}\cos(\theta-\theta_{j})e^{-\pi i\rho\Theta\cdot\left( P_{j+1}+P_{j}\right) }\right. \label{19}\\ & \left. +\frac{\sin(\pi\rho\ell_{j+n}\sin(\theta-\theta_{j}))}{\sin(\theta-\theta_{j})}\cos(\theta-\theta_{j+n})e^{-\pi i\rho\Theta\cdot\left( P_{j+n+1}+P_{j+n}\right) }\right\vert \nonumber\\ & \geq\frac{|\cos(\theta-\theta_{j})|}{|\sin(\theta-\theta_{j})|}\left( |\sin(\pi\rho\ell_{j}\sin(\theta-\theta_{j}))|-|\sin(\pi\rho\ell_{j+n}\sin(\theta-\theta_{j+n}))|\right) \nonumber\\ & \geq a_{j}\ \frac{|\cos(\theta-\theta_{j})|}{|\sin(\theta-\theta_{j})|}\ .\nonumber \end{align} We use the previous estimates to evaluate the last integral in (\ref{generale}) in a neighborhood of $\theta_{j}$ and therefore obtain an estimate from below of $\Vert D_{P}^{\rho}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }$. By the arguments in \cite[Theorem 2.3]{BCT} or \cite[Lemma 10.6]{T}, the contribution of all the sides $P_{h}P_{h+1}$ (with $h\neq j$ and $h\neq j+n$) to the term $\Vert D_{P}^{\rho}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }$ is $\mathcal{O}\left( 1\right) $. Then (\ref{dalbassocondecad}), (\ref{divergenza2}) and (\ref{19}) yield \[ \Vert D_{P}^{\rho}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }^{2}\geq c\ \int_{\frac{1}{H\pi\rho\ell_{j}}}^{\frac{1+\varepsilon}{H\pi\rho\ell_{j}}}\frac{\cos^{2}\theta}{\sin^{2}\theta}\ d\theta+c_{1}\geq c\int_{\frac{1}{H\pi\rho\ell_{j}}}^{\frac{1+\varepsilon}{H\pi\rho\ell_{j}}}\frac{d\theta}{\theta^{2}}+c_{1}\geq c_{2}\ \rho\ . \] \end{proof} \begin{proof} [Proof of Lemma \ref{lemma3}]We can assume that $P\sim\{P_{h}\}_{h=1}^{2n}$ is a convex polygon such that for every $h=1,\ldots,n$ the sides $P_{h}P_{h+1}$ and $P_{h+n}P_{h+n+1}$ are parallel and of the same length (that is, $\ell_{h}=\ell_{h+n}$, $\ \tau_{h}=-\tau_{h+n}$, \ $\nu_{h}=-\nu_{h+n}$). Then we may assume that $P$ is symmetric about the origin. As $P$ cannot be inscribed in a circle, there exists an index $1\leq j\leq n$ such that the two opposite equal and parallel sides $P_{j}P_{j+1}$ and $P_{j+n}P_{j+n+1}$ are not the sides of a rectangle. Then $P_{j}+P_{j+1}$ is not orthogonal to $P_{j+1}-P_{j}$. Let $\phi_{j}\in\lbrack\theta_{j}-\pi,\theta_{j}]$ be defined by \[ P_{j+1}+P_{j}=\mathcal{L}_{j}(\cos\phi_{j},\sin\phi_{j})\ . \] Since $\tau_{j}=(\cos\theta_{j},\sin\theta_{j})$ and $\nu_{j}=(\cos(\theta_{j}-\frac{\pi}{2}),\sin(\theta_{j}-\frac{\pi}{2}))$, see (\ref{thetaj}) and (\ref{nu j}), we have $\phi_{j}-\theta_{j}\neq-\frac{\pi}{2}$. We put $\varphi_{j}:=\phi_{j}-\theta_{j}$. Then \[ \varphi_{j}\in\lbrack-\pi,0]\setminus\{-\frac{\pi}{2}\}\ .
\] Again we need to find a lower bound for the last integral in (\ref{generale}). As in the previous proof it is enough to consider \begin{align*} F_{j}(\theta) & :=\sum_{h\in\{j,j+n\}}\frac{\sin(\theta-\theta_{h})}{\cos(\theta-\theta_{h})}\sin(\pi\rho\ell_{j}\cos(\theta-\theta_{h}))e^{-\pi i\rho\Theta\cdot(P_{h+1}+P_{h})}\\ & =\frac{\sin(\theta-\theta_{j})}{\cos(\theta-\theta_{j})}\sin(\pi\rho\ell_{j}\cos(\theta-\theta_{j}))\left[ e^{-\pi i\rho\mathcal{L}_{j}\cos(\theta-\phi_{j})}-e^{\pi i\rho\mathcal{L}_{j}\cos(\theta-\phi_{j})}\right] \\ & =-2i\ \frac{\sin(\theta-\theta_{j})}{\cos(\theta-\theta_{j})}\sin(\pi\rho\ell_{j}\cos(\theta-\theta_{j}))\sin(\pi\rho\mathcal{L}_{j}\cos(\theta-\phi_{j}))\ . \end{align*} We write \begin{align*} & \int_{0}^{2\pi}\left\vert \frac{\sin(\theta-\theta_{j})}{\cos(\theta-\theta_{j})}\sin(\pi\rho\ell_{j}\cos(\theta-\theta_{j}))\sin(\pi\rho\mathcal{L}_{j}\cos(\theta-\phi_{j}))\right\vert ^{2}\ d\theta\\ & =\int_{0}^{2\pi}\left\vert \frac{\sin\left( \pi\rho\ell_{j}\sin\theta\right) }{\sin\theta}\cos\theta\sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j}))\right\vert ^{2}\ d\theta\ . \end{align*} We shall integrate $\theta$ in a neighborhood of $0$ (actually $0\leq\theta\leq1$ suffices). As for $\varphi_{j}$, we first assume $\varphi_{j}\in(-\frac{\pi}{2},0]$. Then $\cos\varphi_{j}>0$ and $\sin\varphi_{j}\leq0$. Let $0<\gamma<1$ satisfy $\cos\varphi_{j}>\gamma$. In order to prove that $\left\vert \sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j}))\right\vert \geq c$ we consider two cases. \noindent\emph{Case 1:} $|\sin(\pi\rho\mathcal{L}_{j}\sin\varphi_{j})|>\gamma/2$. \noindent We need to bound $\sin(\theta-\varphi_{j})-\left\vert \sin\varphi_{j}\right\vert $. Since $\sin\varphi_{j}\leq0$ one has \[ \frac{\theta}{2}\cos\varphi_{j}+\left[ 1-\frac{\theta^{2}}{2}\right] |\sin\varphi_{j}|\leq\sin\theta\cos\varphi_{j}-\cos\theta\sin\varphi_{j}\leq\theta\cos\varphi_{j}+|\sin\varphi_{j}|\ . \] Therefore \begin{equation} \frac{\theta}{2}\gamma-\frac{\theta^{2}}{2}\leq\sin(\theta-\varphi_{j})-\left\vert \sin\varphi_{j}\right\vert \leq\theta\ . \label{newast} \end{equation} Let $\rho\geq1$ and assume \[ \frac{\gamma}{8\pi\rho\mathcal{L}_{j}}\leq\theta\leq\frac{\gamma}{4\pi\rho\mathcal{L}_{j}}\ . \] We recall that $\mathcal{L}_{j}\geq1$. Again we have to estimate $\sin(\theta-\varphi_{j})-\left\vert \sin\varphi_{j}\right\vert $. By (\ref{newast}) we have \[ 0<\frac{\gamma}{16\pi\rho\mathcal{L}_{j}}-\frac{\gamma^{2}}{32(\pi\rho\mathcal{L}_{j})^{2}}<\sin(\theta-\varphi_{j})-|\sin\varphi_{j}|\leq\frac{\gamma}{4\pi\rho\mathcal{L}_{j}}\ . \] Therefore \begin{equation} 0<\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j})-\pi\rho\mathcal{L}_{j}|\sin\varphi_{j}|\leq\frac{\gamma}{4}\ . \label{astetrig1} \end{equation} Hence the assumption of \emph{Case 1} and (\ref{astetrig1}) yield \begin{align*} & \left\vert \sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j}))\right\vert \\ & =\left\vert \sin(\pi\rho\mathcal{L}_{j}\left[ \sin(\theta-\varphi_{j})+\sin\varphi_{j}\right] -\pi\rho\mathcal{L}_{j}\sin\varphi_{j})\right\vert \\ & =\left\vert \sin\left( \pi\rho\mathcal{L}_{j}\left[ \sin(\theta-\varphi_{j})-\left\vert \sin\varphi_{j}\right\vert \right] \right) \cos\left( \pi\rho\mathcal{L}_{j}\sin\varphi_{j}\right) \right. \\ & \left.
-\cos\left( \pi\rho\mathcal{L}_{j}\left[ \sin(\theta-\varphi_{j})-\left\vert \sin\varphi_{j}\right\vert \right] \right) \sin\left( \pi\rho\mathcal{L}_{j}\sin\varphi_{j}\right) \right\vert \\ & \geq\left\vert \sin(\pi\rho\mathcal{L}_{j}\sin\varphi_{j})\right\vert \left\vert \cos(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j})-\pi\rho\mathcal{L}_{j}|\sin\varphi_{j}|)\right\vert \\ & -\left\vert \sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j})-\pi\rho\mathcal{L}_{j}|\sin\varphi_{j}|)\right\vert \\ & >\frac{\gamma}{2}\left[ 1-\frac{\gamma^{2}}{32}\right] -\frac{\gamma}{4}\\ & >\frac{\gamma}{5}\ . \end{align*} \noindent\emph{Case 2:} $|\sin(\pi\rho\mathcal{L}_{j}\sin\varphi_{j})|\leq\gamma/2$. \noindent Let $\rho$ be large so that $0\leq\theta\leq\frac{3}{2\pi\rho\mathcal{L}_{j}}$ implies $\sin\theta\geq(1-\delta)\theta$, with $\delta<1/20$. Then for \begin{equation} \frac{1}{\pi\rho\mathcal{L}_{j}}\leq\theta\leq\frac{3}{2\pi\rho\mathcal{L}_{j}} \label{newpo} \end{equation} we have \[ \theta(1-\delta)\gamma+\left[ 1-\frac{\theta^{2}}{2}\right] |\sin\varphi_{j}|\leq\sin\theta\cos\varphi_{j}-\cos\theta\sin\varphi_{j}\leq\theta+|\sin\varphi_{j}| \] and \begin{equation} \theta\gamma(1-\delta)-\frac{\theta^{2}}{2}\leq\sin(\theta-\varphi_{j})-|\sin\varphi_{j}|\leq\theta\ . \label{newsta} \end{equation} For $\rho$ large enough we have $\frac{9}{8(\pi\rho\mathcal{L}_{j})^{2}}<\frac{\gamma\delta}{\pi\rho\mathcal{L}_{j}}$. Then (\ref{newpo}) and (\ref{newsta}) yield \begin{align*} \frac{\gamma(1-2\delta)}{\pi\rho\mathcal{L}_{j}} & <\frac{\gamma(1-\delta)}{\pi\rho\mathcal{L}_{j}}-\frac{9}{8(\pi\rho\mathcal{L}_{j})^{2}}\leq\theta\gamma\left( 1-\delta\right) -\frac{\theta^{2}}{2}\\ & <\sin(\theta-\varphi_{j})-|\sin\varphi_{j}|\leq\theta\leq\frac{3}{2\pi\rho\mathcal{L}_{j}} \end{align*} and \begin{equation} \gamma(1-2\delta)<\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j})-\pi\rho\mathcal{L}_{j}|\sin\varphi_{j}|\leq\frac{3}{2}\ . \label{caso2tr} \end{equation} We choose $\gamma$ small enough so that \[ \sin(\gamma(1-2\delta))\geq(1-2\delta)^{2}\gamma\quad\text{and}\quad\gamma^{2}/4<2\delta\ . \] Then (\ref{caso2tr}) and the assumption of \emph{Case 2} yield \begin{align} & \left\vert \sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j}))\right\vert =\left\vert \sin(\pi\rho\mathcal{L}_{j}\left[ \sin(\theta-\varphi_{j})-\sin\varphi_{j}\right] +\pi\rho\mathcal{L}_{j}\sin\varphi_{j})\right\vert \nonumber\\ & \geq\left\vert \cos(\pi\rho\mathcal{L}_{j}\sin\varphi_{j})\right\vert \left\vert \sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j})-\pi\rho\mathcal{L}_{j}|\sin\varphi_{j}|)\right\vert -\left\vert \sin(\pi\rho\mathcal{L}_{j}\sin\varphi_{j})\right\vert \nonumber\\ & \geq\gamma(1-2\delta)^{2}\sqrt{1-\frac{\gamma^{2}}{4}}-\frac{\gamma}{2}>\gamma\left[ (1-2\delta)^{5/2}-\frac{1}{2}\right] >\frac{\gamma}{4}\ .\nonumber \end{align} \emph{Case 1} and \emph{Case 2} prove that for a suitable choice of $0<\gamma<1$, such that $\cos\varphi_{j}>\gamma$, there exist $0<\alpha<\beta$ such that for $\frac{\alpha}{\pi\rho\mathcal{L}_{j}}\leq\theta\leq\frac{\beta}{\pi\rho\mathcal{L}_{j}}$ and $\rho$ large enough we have \begin{equation} \left\vert \sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j}))\right\vert >\frac{\gamma}{5}\ . \label{stimaseno} \end{equation} If $\varphi_{j}\in\lbrack-\pi,-\frac{\pi}{2})$ we have $\cos\varphi_{j}<0$ and $\sin\varphi_{j}\leq0$.
Then for $0\leq\theta<1$ we have \[ -\theta+\left[ 1-\frac{\theta^{2}}{2}\right] |\sin\varphi_{j}|\leq\sin\theta\cos\varphi_{j}-\cos\theta\sin\varphi_{j}\leq-\sin\theta|\cos\varphi_{j}|+|\sin\varphi_{j}|\ . \] Hence, for a positive constant $K$, \[ \sin\theta|\cos\varphi_{j}|\leq|\sin\varphi_{j}|-\sin(\theta-\varphi_{j})\leq K\ \theta\ . \] If we choose a suitable constant $\gamma>0$ such that $|\cos\varphi_{j}|>\gamma$, we can prove as for the case $\varphi_{j}\in(-\frac{\pi}{2},0]$ that (\ref{stimaseno}) still holds for $\frac{\alpha}{\pi\rho\mathcal{L}_{j}}\leq\theta\leq\frac{\beta}{\pi\rho\mathcal{L}_{j}}$, with $0<\alpha<\beta$ and $\rho$ large enough. Then (\ref{stimaseno}) yields \begin{align*} & \int_{0}^{2\pi}\left\vert F_{j}(\theta)\right\vert ^{2}d\theta\geq\int_{\frac{\alpha}{\pi\rho\mathcal{L}_{j}}}^{\frac{\beta}{\pi\rho\mathcal{L}_{j}}}\left\vert \frac{\sin(\pi\rho\ell_{j}\sin\theta)}{\sin\theta}\cos\theta\sin(\pi\rho\mathcal{L}_{j}\sin(\theta-\varphi_{j}))\right\vert ^{2}d\theta\\ & \geq c\ \gamma^{2}\int_{\frac{\alpha}{\pi\rho\mathcal{L}_{j}}}^{\frac{\beta}{\pi\rho\mathcal{L}_{j}}}\left\vert \frac{\sin(\pi\rho\ell_{j}\sin\theta)}{\sin\theta}\right\vert ^{2}d\theta\geq c_{1}\ \rho\int_{c_{1}}^{c_{2}}\left\vert \frac{\sin(t)}{t}\right\vert ^{2}dt\geq c_{2}\ \rho. \end{align*} This ends the proof. \end{proof} The proof of Theorem \ref{maintheorem} will be complete after the proof of Proposition \ref{propeps}. We need a simultaneous approximation lemma from \cite{PS}. \begin{lemma} \label{diri} Let $r_{1},r_{2},...,r_{n}\in\mathbb{R}$. For every positive integer $j$ there exists $j\leq q\leq j^{n+1}$ such that $\left\Vert r_{s}q\right\Vert <j^{-1}$ for any $1\leq s\leq n$, where $\left\Vert x\right\Vert $ denotes the distance of a real number $x$ from the integers. \end{lemma} \begin{proof} [Proof of Proposition \ref{propeps}]Let $P\sim\{P_{j}\}_{j=1}^{2n}$ be a polygon in $\mathfrak{P}$. For every positive integer $u$ let \[ A_{u}^{j}:=\{k\in\mathbb{Z}^{2}:0<\mathcal{L}_{j}|k|\leq u^{2}\}\ \ \ \text{for }j=1,\ldots,n\ ,\ \ \ \ \ \ \ \ A_{u}:=\bigcup_{j=1}^{n}A_{u}^{j}\ . \] Observe that $\operatorname*{card}(A_{u}^{j})\leq4u^{4}$ and therefore $\operatorname*{card}(A_{u})\leq4nu^{4}$. By Lemma \ref{diri} there exists a sequence $\left\{ \rho_{u}\right\} _{u=1}^{+\infty}$ of positive integers such that, for every $k\in A_{u}$ and every $j=1,\ldots,n$, \begin{equation} u\leq\rho_{u}\leq u^{4nu^{4}+1}\ ,\ \ \ \ \ |\sin(\pi\rho_{u}|k|\mathcal{L}_{j})|<1/u\ . \label{i e ii} \end{equation} Observe that (\ref{i e ii}) implies \begin{equation} u\geq c_{\varepsilon}\log^{\frac{1}{4+\varepsilon}}(\rho_{u}) \label{iii} \end{equation} for every $\varepsilon>0$. For any $1\leq j\leq n$ and $k\in A_{u}^{j}$ we split the integral in (\ref{final}) into several parts. Let \[ E_{1,j,|k|}^{\rho}:=\int_{0}^{\left( 8\rho_{u}|k|\right) ^{-1}}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\sin(\pi\rho_{u}|k|\mathcal{L}_{j}\cos\theta)\right\vert ^{2}d\theta\ . \] For $0\leq\theta\leq\left( 8\rho_{u}|k|\right) ^{-1}$ we have $0\leq1-\cos\theta\leq\left( 128\rho_{u}^{2}|k|^{2}\right) ^{-1}$.
Then (\ref{i e ii}) yields \begin{align} & \left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j}\cos\theta)\right\vert \label{33}\\ & =\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j}\left[ \cos\theta-1+1\right] )\right\vert \nonumber\\ & \leq\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j}(\cos\theta-1))\cos(\pi\rho_{u}|k|\mathcal{L}_{j})\right\vert \nonumber\\ & +\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j})\cos(\pi\rho_{u}|k|\mathcal{L}_{j}(\cos\theta-1))\right\vert \nonumber\\ & \leq\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j}(1-\cos\theta))\right\vert +\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j})\right\vert \nonumber\\ & \leq\frac{\pi\mathcal{L}_{j}}{128\rho_{u}|k|}+\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j})\right\vert \nonumber\\ & \leq c\ \frac{1}{u}\ .\nonumber \end{align} By (\ref{33}) we obtain \begin{align*} E_{1,j,|k|}^{\rho} & \leq c\ \frac{1}{u^{2}}\int_{0}^{\left( 8\rho_{u}|k|\right) ^{-1}}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\right\vert ^{2}d\theta\\ & \leq c_{1}\ \frac{|k|\rho_{u}}{u^{2}}\int_{0}^{1}\left\vert \frac{\sin(t)}{t}\right\vert ^{2}dt\leq c_{2}\ \frac{|k|\rho_{u}}{u^{2}}\ . \end{align*} Let \[ E_{2,j,|k|}^{\rho}:=\int_{\left( 8\rho_{u}|k|\right) ^{-1}}^{\left( 8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\right) ^{-1}}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\sin(\pi\rho_{u}|k|\mathcal{L}_{j}\cos\theta)\right\vert ^{2}d\theta\ . \] For $\left( 8\rho_{u}|k|\right) ^{-1}\leq\theta\leq\left( 8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\right) ^{-1}$ we have \[ \frac{1}{2000\rho_{u}^{2}|k|^{2}}\leq2\sin^{2}\left( \theta/2\right) =1-\cos\theta\leq\frac{1}{128u^{1/2}\rho_{u}|k|}\ . \] As in (\ref{33}) we obtain \begin{align*} \left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j}\cos\theta)\right\vert & \leq\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j}(1-\cos\theta))\right\vert +\left\vert \sin(\pi\rho_{u}|k|\mathcal{L}_{j})\right\vert \\ & \leq\frac{\pi\mathcal{L}_{j}}{128u^{1/2}}+\frac{1}{u}\leq c\ u^{-1/2} \end{align*} and then \begin{align*} E_{2,j,|k|}^{\rho} & \leq c\ \frac{1}{u}\int_{\left( 8\rho_{u}|k|\right) ^{-1}}^{\left( 8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\right) ^{-1}}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\right\vert ^{2}d\theta\\ & \leq c_{1}\ \frac{1}{u}\int_{\left( 8\rho_{u}|k|\right) ^{-1}}^{\left( 8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\right) ^{-1}}\frac{d\theta}{\theta^{2}}\leq c_{2}\ \frac{\rho_{u}|k|}{u}\ . \end{align*} Let $1/4<\lambda<1/2$ and let \[ E_{3,j,|k|}^{\rho}:=\int_{\left( 8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\right) ^{-1}}^{\lambda}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\sin(\pi\rho_{u}|k|\mathcal{L}_{j}\cos\theta)\right\vert ^{2}d\theta\ . \] We have \[ E_{3,j,|k|}^{\rho}\leq\int_{\left( 8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\right) ^{-1}}^{\lambda}\frac{d\theta}{\theta^{2}}\leq8u^{1/4}\rho_{u}^{1/2}\left\vert k\right\vert ^{1/2}\ . \] Finally we have \[ E_{4,j,|k|}^{\rho}:=\int_{\lambda}^{\frac{\pi}{2}}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\sin(\pi\rho_{u}|k|\mathcal{L}_{j}\cos\theta)\right\vert ^{2}d\theta\leq c\ .
\] By the above estimates, (\ref{final}), (\ref{i e ii}) and (\ref{iii}) we have \begin{align*} & \Vert D_{P}^{\rho_{u}}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }^{2}\\ & \leq c\ \rho_{u}\sum_{k\in A_{u}}\frac{1}{|k|^{3}}\left( \frac{1}{u^{2}}+\frac{1}{u}+u^{1/4}\rho_{u}^{-1/2}\left\vert k\right\vert ^{-1/2}+\rho_{u}^{-1}\left\vert k\right\vert ^{-1}\right) \\ & +c_{1}\ \sum_{k\notin A_{u}}\frac{1}{|k|^{4}}\int_{0}^{\pi/2}\left\vert \frac{\sin(\pi\rho_{u}|k|\ell_{j}\sin\theta)}{\sin\theta}\right\vert ^{2}\ d\theta\\ & \leq c\ \rho_{u}\sum_{0\neq k\in A_{u}}\frac{1}{|k|^{3}}u^{-1/4}\\ & +c_{1}\ \sum_{\left\vert k\right\vert >c_{1}u^{2}}\frac{1}{|k|^{4}}\left( \int_{0}^{\left( \rho_{u}\left\vert k\right\vert \right) ^{-1/2}}\left( \rho_{u}\left\vert k\right\vert \right) \ d\theta+\int_{\left( \rho_{u}\left\vert k\right\vert \right) ^{-1/2}}^{\pi/2}\frac{1}{\theta^{2}}\ d\theta\right) \\ & \leq c_{\varepsilon}\ \rho_{u}\sum_{0\neq k\in\mathbb{Z}^{2}}\frac{1}{|k|^{3}}\log^{-\frac{1}{16+\varepsilon}}\left( \rho_{u}\right) +c\ \sum_{\left\vert k\right\vert >c_{1}u^{2}}\frac{1}{|k|^{4}}\left( \rho_{u}\left\vert k\right\vert \right) ^{1/2}\\ & \leq c_{\varepsilon}\ \rho_{u}\log^{-\frac{1}{16+\varepsilon}}\left( \rho_{u}\right) +c\ \rho_{u}^{1/2}\int_{\{t\in\mathbb{R}^{2}:|t|>c_{1}u^{2}\}}\frac{1}{\left\vert t\right\vert ^{7/2}}\ dt\\ & \leq c_{\varepsilon}\ \rho_{u}\log^{-\frac{1}{16+\varepsilon}}\left( \rho_{u}\right) \ . \end{align*} \end{proof} We now turn to the proof of Proposition \ref{stimaepsilon}, which depends on the following lemma proved by L. Parnovski and A. Sobolev \cite{PS}. \begin{lemma} \label{PSlemma} For any $\varepsilon>0$ there exist $\rho_{0}\geq1$ and $0<\alpha<1/2$ such that for every $\rho\geq\rho_{0}$ there exists $k\in\mathbb{Z}^{d}$ such that $|k|\leq\rho^{\varepsilon}$ and $\Vert\rho|k|\Vert\geq\alpha$, where $\Vert x\Vert$ is the distance of a real number $x$ from the integers. \end{lemma} \begin{proof} [Proof of Proposition \ref{stimaepsilon}]Let $P\sim\{P_{j}\}_{j=1}^{2n}$ be a polygon in $\mathfrak{P}$. Let $\varepsilon>0$ and let $j\in\left\{ 1,2,\ldots,n\right\} $. By Lemma \ref{PSlemma} there exist $\rho_{0}\geq1$ and $0<a<1/2$ such that for any $\rho\geq\rho_{0}$ there is $\widetilde{k}\in\mathbb{Z}^{2}$ such that $|\widetilde{k}|\leq\rho^{\varepsilon/3}$ and $|\sin(\pi\rho|\widetilde{k}|\mathcal{L}_{j})|>a$. Then we consider the interval \begin{equation} \theta_{j}\leq\theta\leq\theta_{j}+\frac{1}{\pi\rho|\widetilde{k}|}\ . \label{interval j} \end{equation} We have \[ 0\leq1-\cos(\theta-\theta_{j})\leq\frac{1}{2(\pi\rho|\widetilde{k}|)^{2}}\ . \] Then for large $\rho$ we have \begin{align} & |\sin(\pi\rho|\widetilde{k}|\mathcal{L}_{j}\cos(\theta-\theta_{j}))|\label{epsi1}\\ & \geq|\sin(\pi\rho|\widetilde{k}|\mathcal{L}_{j})||\cos(\pi\rho|\widetilde{k}|\mathcal{L}_{j}(1-\cos(\theta-\theta_{j})))|\nonumber\\ & -|\sin(\pi\rho|\widetilde{k}|\mathcal{L}_{j}(1-\cos(\theta-\theta_{j})))|\nonumber\\ & \geq c\ \left[ 1-\frac{\mathcal{L}_{j}^{2}}{8(\pi\rho|\widetilde{k}|)^{2}}\right] -\frac{\mathcal{L}_{j}}{2\pi\rho|\widetilde{k}|}>c_{1}\ .\nonumber \end{align} As before, the sides not parallel to $P_{j}P_{j+1}$ give a bounded contribution to the integration of $\left\vert D_{P}^{\rho}\right\vert ^{2}$ over the interval in (\ref{interval j}).
Finally (\ref{epsi1}) yields \begin{align*} & \Vert D_{P}^{\rho}\Vert_{L^{2}\left( SO(2)\times\mathbb{T}^{2}\right) }^{2}\\ & \geq c+c_{1}\ \frac{1}{|\widetilde{k}|^{4}}\int_{\theta_{j}}^{\theta_{j}+1/(\pi\rho|\widetilde{k}|)}\left\vert \frac{\sin(\pi\rho|\widetilde{k}|\ell_{j}\sin(\theta-\theta_{j}))}{\sin(\theta-\theta_{j})}\right\vert ^{2}\\ & \times|\sin(\pi\rho|\widetilde{k}|\mathcal{L}_{j}\cos(\theta-\theta_{j}))|^{2}|\cos(\theta-\theta_{j})|^{2}\ d\theta\\ & \geq c+c_{2}\ \frac{1}{|\widetilde{k}|^{4}}\int_{0}^{1/(\pi\rho|\widetilde{k}|)}\left\vert \frac{\sin(\pi\rho|\widetilde{k}|\ell_{j}\sin\theta)}{\sin\theta}\right\vert ^{2}\ d\theta\\ & \geq c+c_{3}\frac{\rho}{|\widetilde{k}|^{3}}\\ & \geq c_{4}\ \rho^{1-\varepsilon}\ . \end{align*} \end{proof} The proofs of Lemmas \ref{lemma1}, \ref{lemma2}, and \ref{lemma3} actually show that $\left\Vert \widehat{\chi_{P}}\left( \rho\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{1}\right) }\geq c\ \rho^{-3/2}$ whenever $P\notin\mathfrak{P}$. Hence Theorem \ref{maintheorem} and Proposition \ref{prop} readily yield the following result. \begin{corollary} Let $P$ be a polygon in $\mathbb{R}^{2}$. Then $P$ satisfies \[ \left\Vert \widehat{\chi_{P}}\left( \rho\cdot\right) \right\Vert _{L^{2}\left( \Sigma_{1}\right) }\geq c\ \rho^{-3/2} \] if and only if $P\notin\mathfrak{P}$. \end{corollary} \bigskip The results in this paper (apart from Lemma \ref{lemma1}) seem to be tailored to the planar case. A different (perhaps simpler) approach might be necessary in order to deal with the higher-dimensional cases.
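As a quick numerical illustration of the corollary (again ours, not part of the original paper), one can evaluate the angular average $\left\Vert \widehat{\chi_{Q}}(\rho\cdot)\right\Vert _{L^{2}\left( \Sigma_{1}\right) }$ for the square $Q=\left[ -\frac{1}{2},\frac{1}{2}\right] ^{2}$, for which $\widehat{\chi_{Q}}(\xi)=\frac{\sin(\pi\xi_{1})}{\pi\xi_{1}}\,\frac{\sin(\pi\xi_{2})}{\pi\xi_{2}}$. Multiplying by $\rho^{3/2}$ exposes, at integer radii, the extra decay predicted by (\ref{7quarti}), in contrast with generic radii.
\begin{verbatim}
import numpy as np

def square_ft_angular_norm(rho, n_theta=20000):
    # L^2 average over directions of |hat{chi}_Q(rho * Theta)|, where
    # hat{chi}_Q(xi) = sinc(xi_1) * sinc(xi_2) and
    # np.sinc(t) = sin(pi t) / (pi t).
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    vals = np.sinc(rho * np.cos(theta)) * np.sinc(rho * np.sin(theta))
    return np.sqrt(np.mean(vals ** 2))

for n in [10, 20, 40, 80]:
    print(n,
          square_ft_angular_norm(n) * n ** 1.5,              # integer radii
          square_ft_angular_norm(n + 0.5) * (n + 0.5) ** 1.5)  # generic radii
\end{verbatim}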
\section{Introduction}\label{sec_intro} The motivation of this paper lies in the minimization of a differentiable function $F:\mathbb{R}^n\rightarrow \mathbb{R}$ with at least one minimizer. Inspired by Nesterov's pioneering work \cite{nesterov1983method}, we study the following ordinary differential equation (ODE): \begin{equation} \label{ODE} \ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\nabla F(x(t))=0, \end{equation} where $\alpha>0$, with $t_0>0$, $x(t_0)=x_0$ and $\dot x(t_0)=v_0$. This ODE is associated to the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) \cite{beck2009fast} and to the Accelerated Gradient Method \cite{nesterov1983method}: \begin{equation} x_{n+1}=y_n-h \nabla F(y_n) \text{ and } y_n=x_n+\frac{n}{n+\alpha}(x_n-x_{n-1}), \end{equation} with $h$ and $\alpha$ positive parameters. This equation, with or without an additional perturbation term, has been widely studied in the literature \cite{attouch2000heavy,su2016differential,cabot2009long,balti2016asymptotic,may2015asymptotic}. It belongs to a family of similar equations with various viscosity terms, and it is impossible to mention all the works related to the heavy ball equation and its variants. We refer the reader to the following recent works \cite{Begout2015,jendoubi2015asymptotics,may2015asymptotic,cabot2007asymptotics,attouch2002dynamics,polyak2017lyapunov,attouch2017asymptotic} and the references therein. Throughout the paper, we assume that, for any initial conditions $(x_0,v_0)\in \mathbb{R}^n\times\mathbb{R}^n$, the Cauchy problem associated with the differential equation \eqref{ODE} has a unique global solution $x$ satisfying $(x(t_0),\dot x(t_0) )=(x_0,v_0)$. This is guaranteed for instance when the gradient function $\nabla F$ is Lipschitz on bounded subsets of $\mathbb{R}^n$. In this work we investigate the convergence rates of the values $F(x(t))-F^*$ for the trajectories of the ODE \eqref{ODE}. It was proved in \cite{attouch2018fast} that if $F$ is convex with Lipschitz gradient and if $\alpha>3$, then the values $F(x(t))$ converge to the minimum $F^*$ of $F$. It is also known that for $\alpha\geqslant 3$ and $F$ convex we have: \begin{equation} F(x(t))-F^*=O \left( t^{-2} \right). \end{equation} Extending to the continuous setting the work of Chambolle-Dossal~\cite{chambolle2015convergence} on the convergence of the iterates of FISTA, Attouch et al. \cite{attouch2018fast} proved that for $\alpha>3$ the trajectory $x$ converges (weakly in infinite-dimensional Hilbert spaces) to a minimizer of $F$. Su et al. \cite{su2016differential} proposed some new results, proving the integrability of $t\mapsto t(F(x(t))-F^*)$ when $\alpha>3$, and they gave more accurate bounds on $F(x(t))-F^*$ in the case of strong convexity. Still in the case when $F$ is strongly convex, Attouch, Chbani, Peypouquet and Redont proved in \cite{attouch2018fast} that the trajectory $x(t)$ satisfies $F(x(t))-F^*=O\left( t^{-\frac{2\alpha}{3}}\right)$ for any $\alpha>0$. More recently, several studies including a perturbation term \cite{attouch2018fast,AujolDossal,aujol2015stability,vassilis2018differential} have been proposed. In this work, we focus on the decay of $F(x(t))-F^*$ under more general assumptions on the geometry of $F$ around its set of minimizers than strong convexity. Indeed, Attouch et al. in \cite{attouch2018fast} proved that if $F$ is convex then for any $\alpha>0$, $F(x(t))-F^*$ tends to $0$ when $t$ goes to infinity.
Combined with the coercivity of $F$, this convergence implies that the distance $d(x(t),X^*)$ between $x(t)$ and the set of minimizers $X^*$ tends to $0$. To analyse the asymptotic behavior of $F(x(t))-F^*$, we can thus make assumptions on $F$ only in a neighborhood of $X^*$, and we may avoid the difficult question of the convergence of the trajectory $x(t)$ to a point of $X^*$. More precisely, we consider functions behaving like $\norm{x-x^*}^{\gamma}$ around their set of minimizers, for any $\gamma\geqslant 1$. Our aim is to show the optimal convergence rates that can be obtained depending on this local geometry. In particular we prove that if $F$ is strongly convex with a Lipschitz continuous gradient, the decay is actually better than $O\left( t^{-\frac{2\alpha}{3}}\right)$. We also prove that the actual decay for quadratic functions is $O\left( t^{-\alpha} \right)$. These results rely on two geometrical conditions: a first one ensuring that the function is sufficiently flat around the set of minimizers, and a second one ensuring that it is sufficiently sharp. In this paper, we will show that both conditions are important to get the expected convergence rates: the flatness assumption ensures that the function is not too sharp and may prevent bad oscillations of the solution, while the sharpness condition ensures that the magnitude of the gradient of the function is not too low in the neighborhood of the minimizers. The paper is organized as follows. In Section~\ref{sec_geom}, we introduce the geometrical hypotheses we consider on the function $F$, and their relation with the \L ojasiewicz property. We then recap the state-of-the-art results on the ODE \eqref{ODE} in Section~\ref{sec_state}. We present the contributions of the paper in Section~\ref{sec_contrib}: depending on the geometry of the function $F$ and the value of the damping parameter $\alpha$, we give optimal rates of convergence. The proofs of the theorems are given in Section~\ref{sec_proofs}. Some technical proofs are postponed to Appendix~\ref{appendix}. \section{Local geometry of convex functions}\label{sec_geom} Throughout the paper we assume that the ODE \eqref{ODE} is defined in $\mathbb{R}^n$ equipped with the Euclidean scalar product $\langle \cdot,\cdot\rangle$ and the associated norm $\|\cdot\|$. As usual, $B(x^*,r)$ denotes the open Euclidean ball with center $x^*$ and radius $r>0$, while $\bar B(x^*,r)$ denotes the closed Euclidean ball with center $x^*$ and radius $r>0$. In this section we introduce two notions describing the geometry of a convex function around its minimizers. \begin{definition} Let $F:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex differentiable function, $X^*:=\textup{argmin}\, F\neq \emptyset$ and $F^*:=\inf F$. \begin{enumerate} \item[(i)] Let $\gamma \geqslant 1$. The function $F$ satisfies the hypothesis $\textbf{H}_1(\gamma)$ if, for any minimizer $x^*\in X^*$, there exists $\eta>0$ such that: $$\forall x\in B(x^*,\eta),\quad F(x) - F^* \leqslant \frac{1}{\gamma} \langle \nabla F(x),x-x^*\rangle.$$ \item[(ii)] Let $r\geqslant 1$. The function $F$ satisfies the growth condition $\textbf{H}_2(r)$ if, for any minimizer $x^*\in X^*$, there exist $K>0$ and $\varepsilon>0$ such that: \begin{equation*} \forall x\in B(x^*,\varepsilon),\quad K d(x,X^*)^{r}\leqslant F(x)-F^*. \end{equation*} \end{enumerate} \end{definition} The hypothesis $\textbf{H}_1(\gamma)$ has already been used in \cite{cabot2009long} and later in \cite{su2016differential,AujolDossal}.
This is a mild assumption, requiring slightly more than the convexity of $F$ in the neighborhood of its minimizers. Observe that any convex function automatically satisfies $\textbf{H}_1(1)$ and that any differentiable function $F$ for which $(F-F^*)^{\frac{1}{\gamma}}$ is convex for some $\gamma\geqslant 1$, satisfies $\textbf{H}_1(\gamma)$. Nevertheless, getting a better intuition of the geometry of convex functions satisfying $\textbf{H}_1(\gamma)$ for some $\gamma\geqslant 1$ requires a little more effort: \begin{lemma} Let $F:\mathbb{R}^n \rightarrow \mathbb{R}$ be a convex differentiable function with $X^*=\textup{argmin}\, F\neq \emptyset$, and $F^*=\inf F$. If $F$ satisfies $\textbf{H}_1(\gamma)$ for some $\gamma\geqslant 1$, then: \begin{enumerate} \item $F$ satisfies $\textbf{H}_1(\gamma')$ for all $\gamma'\in[1,\gamma]$. \item For any minimizer $x^*\in X^*$, there exist $M>0$ and $\eta >0$ such that: \begin{equation} \forall x\in B(x^*,\eta),~F(x) -F^* \leqslant M \|x-x^*\|^\gamma.\label{hyp:H1} \end{equation} \end{enumerate} \label{lem:geometry} \end{lemma} \begin{proof} The proof of the first point of Lemma \ref{lem:geometry} is straightforward. The second point relies on the following elementary result in dimension $1$: let $g:\mathbb{R}\rightarrow \mathbb{R}$ be a convex differentiable function such that $0\in \textup{argmin}\, g$, $g(0)=0$ and: $$\forall t\in [0,1],~g(t) \leqslant \frac{t}{\gamma}g'(t),$$ for some $\gamma\geqslant 1$. Then the function $t\mapsto t^{-\gamma}g(t)$ is monotonically increasing on $(0,1]$ and: \begin{equation} \forall t\in [0,1],~g(t)\leqslant g(1)t^\gamma.\label{majo1D} \end{equation} Consider now any convex differentiable function $F:\mathbb{R}^n \rightarrow \mathbb{R}$ satisfying the condition $\textbf{H}_1(\gamma)$, and $x^*\in X^*$. There then exists $\eta>0$ such that: $$\forall x\in B(x^*,\eta),\quad 0\leqslant F(x) - F^* \leqslant \frac{1}{\gamma} \langle \nabla F(x),x-x^*\rangle.$$ Let $\eta'\in(0,\eta)$. For any $x\in\bar B(x^*,\eta')$ with $x \neq x^*$, we introduce the following univariate function: $$g_x:t\in [0,1]\mapsto F\left(x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\right)-F^*.$$ First observe that, for all $x\in \bar B(x^*,\eta')$ with $x \neq x^*$ and for all $t\in [0,1]$, we have: $x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\in \bar B(x^*,\eta').$ Since $F$ is continuous on the compact set $\bar B(x^*,\eta')$, we deduce that: \begin{equation} \exists M>0,~\forall x\in \bar B(x^*,\eta') \ \mbox{ with $x \neq x^*$},~ \forall t\in[0,1],~g_x(t)\leqslant M.\label{bounded} \end{equation} Note here that the constant $M$ only depends on the point $x^*$ and the real constant $\eta'$. Then, by construction, $g_x$ is a convex differentiable function satisfying $0\in \textup{argmin}\,(g_x)$, $g_{x}(0)=0$ and: \begin{eqnarray*} \forall t\in (0,1],~g_x'(t) &=& \left\langle \nabla F\left(x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\right),\eta'\frac{x-x^*}{\|x-x^*\|}\right\rangle\\ &\geqslant & \frac{\gamma}{t}\left(F\left(x^*+t\eta'\frac{x-x^*}{\|x-x^*\|}\right)-F^*\right) = \frac{\gamma}{t} g_x(t). \end{eqnarray*} Thus, using the one dimensional result \eqref{majo1D} and the uniform bound \eqref{bounded}, we get: \begin{equation} \forall x\in \bar B(x^*,\eta') \ \mbox{ with $x \neq x^*$},~\forall t\in [0,1],~g_{x}(t) \leqslant g_x(1)t^{\gamma}\leqslant Mt^{\gamma}. \end{equation} Finally, by choosing $t=\frac{1}{\eta'}\|x-x^{\ast}\|$, we obtain the expected result.
\end{proof} In other words, the hypothesis $\textbf{H}_1(\gamma)$ can be seen as a ``flatness'' condition on the function $F$, in the sense that it ensures that $F$ is sufficiently flat (at least as flat as $x\mapsto \|x\|^\gamma$) in the neighborhood of its minimizers. The hypothesis $\textbf{H}_2(r)$, $r\geqslant 1$, is a growth condition on the function $F$ around any minimizer (any critical point in the non-convex case). It is sometimes also called $r$-conditioning \cite{garrigos2017convergence} or a H\"olderian error bound \cite{Bolte2017}. This assumption is motivated by the fact that, when $F$ is convex, $\textbf{H}_2(r)$ is equivalent, with exponent $\theta = 1-\frac{1}{r}$, to the famous \L ojasiewicz inequality \cite{Loja63,Loja93}, a key tool in the mathematical analysis of continuous (or discrete) subgradient dynamical systems. \begin{definition} \label{def_loja} A differentiable function $F:\mathbb R^n \to \mathbb R$ is said to have the \L ojasiewicz property with exponent $\theta \in [0,1)$ if, for any critical point $x^*$, there exist $c> 0$ and $\varepsilon >0$ such that: \begin{equation} \forall x\in B(x^*,\varepsilon),~\|\nabla F(x)\| \geqslant c|F(x)-F(x^*)|^{\theta},\label{loja} \end{equation} where, by convention, $0^0=0$ when $\theta=0$. \end{definition} When the set $X^*$ of the minimizers is a connected compact set, the \L ojasiewicz inequality turns into a geometrical condition on $F$ around its set of minimizers $X^*$, usually referred to as H\"older metric subregularity \cite{kruger2015error}, whose proof can be easily adapted from \cite[Lemma 1]{AttouchBolte2009}: \begin{lemma} Let $F:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex differentiable function satisfying the growth condition $\textbf{H}_2(r)$ for some $r\geqslant 1$. Assume that the set $X^*=\textup{argmin}\, F$ is compact. Then there exist $K>0$ and $\varepsilon >0$ such that for all $x\in \mathbb{R}^n$: $$d(x,X^*) \leqslant \varepsilon\Rightarrow K d(x,X^*)^{r}\leqslant F(x)-F^*.$$\label{lem:H2} \end{lemma} Typical examples of functions having the \L ojasiewicz property are real-analytic functions, $C^1$ subanalytic functions and semi-algebraic functions \cite{Loja63,Loja93}. Strongly convex functions satisfy a global \L ojasiewicz property with exponent $\theta=\frac{1}{2}$ \cite{AttouchBolte2009}, or equivalently a global version of the hypothesis $\textbf{H}_2(2)$, namely: $$\forall x\in \mathbb{R}^n,\quad F(x)-F^*\geqslant \frac{\mu}{2}\|x-x^*\|^2,$$ where $\mu>0$ denotes the parameter of strong convexity and $x^*$ the unique minimizer of $F$. By extension, uniformly convex functions of order $p\geqslant 2$ satisfy the global version of the hypothesis $\textbf{H}_2(p)$ \cite{garrigos2017convergence}. Let us now present two simple examples of convex differentiable functions to illustrate situations where the hypotheses $\textbf{H}_1$ and $\textbf{H}_2$ are satisfied. Let $\gamma > 1$ and consider the function defined by $F:x\in \mathbb{R}\mapsto |x|^\gamma$. We easily check that $F$ satisfies the hypothesis $\textbf{H}_1(\gamma')$, $\gamma'\geq 1$, if and only if $\gamma'\in [1,\gamma]$. By definition, $F$ also naturally satisfies $\textbf{H}_2(r)$ if and only if $r\geqslant \gamma$.
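For concreteness, the computation behind this one-dimensional example is immediate: since $F'(x)=\gamma\,\textup{sign}(x)|x|^{\gamma-1}$, we have
\begin{equation*}
\frac{1}{\gamma'}F'(x)\,x=\frac{\gamma}{\gamma'}|x|^{\gamma}=\frac{\gamma}{\gamma'}\left(F(x)-F^*\right),
\end{equation*}
so the inequality defining $\textbf{H}_1(\gamma')$ holds if and only if $\gamma'\leqslant \gamma$, while the growth inequality $K|x|^{r}\leqslant |x|^{\gamma}$ near $0$ forces $r\geqslant \gamma$.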
The same conditions on $\gamma'$ and $r$ can be derived, without uniqueness of the minimizer, for functions of the form: \begin{equation} F(x) = \left\{\begin{array}{ll} \max(|x|-a,0)^\gamma &\mbox{ if } |x| \geqslant a,\\ 0 &\mbox{otherwise,} \end{array}\right.\label{ex2} \end{equation} with $a>0$, whose set of minimizers is $X^*=[-a,a]$, since the conditions $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$ only come into play around the extremal points of $X^*$. Let us now investigate the relation between the parameters $\gamma$ and $r$ in the general case: any convex differentiable function $F$ satisfying both $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$ has to be at least as flat as $x\mapsto \|x\|^\gamma$ and as sharp as $x\mapsto \|x\|^r$ in the neighborhood of its minimizers. Combining the flatness condition $\textbf{H}_1(\gamma)$ and the growth condition $\textbf{H}_2(r)$, we consistently deduce: \begin{lemma} If a convex differentiable function satisfies both $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$, then necessarily $r\geqslant \gamma$. \label{lem:geometry2} \end{lemma} Finally, we conclude this section by showing that the additional assumption of Lipschitz continuity of the gradient provides additional information on the local geometry of $F$: indeed, for convex functions, the Lipschitz continuity of the gradient is equivalent to a quadratic upper bound on $F$: \begin{equation}\label{lipschitz} \forall (x,y)\in \mathbb{R}^n\times \mathbb{R}^n, ~F(x)-F(y) \leqslant \langle \nabla F(y),x-y \rangle + \frac{L}{2}\|x-y\|^{2}. \end{equation} Applying \eqref{lipschitz} at $y=x^*$, we then deduce: \begin{equation}\label{lipschitz2} \forall x\in \mathbb{R}^n,~F(x)-F^* \leqslant \frac{L}{2}\|x-x^*\|^{2}, \end{equation} which indicates that $F$ is at least as flat as $\|x-x^*\|^2$ around $X^*$. More precisely: \begin{lemma} Let $F:\mathbb{R}^n \rightarrow \mathbb{R}$ be a convex differentiable function with an $L$-Lipschitz continuous gradient for some $L>0$. Assume also that $F$ satisfies the growth condition $\textbf{H}_2(2)$ for some constant $K>0$. Then $F$ automatically satisfies $\textbf{H}_1(\gamma)$ with $\gamma=1+\frac{K}{2L}\in(1,2]$.\label{lem:Lipschitz} \end{lemma} \begin{proof} Since $F$ is convex with an $L$-Lipschitz continuous gradient, we have: $$\forall (x,y)\in \mathbb{R}^n\times\mathbb{R}^n,\quad F(y)-F(x)-\langle \nabla F(x),y-x\rangle \geqslant \frac{1}{2L}\|\nabla F(y)-\nabla F(x)\|^2,$$ hence: $$\forall x\in \mathbb{R}^n,\quad F(x)-F^*\leqslant \langle\nabla F(x),x-x^*\rangle -\frac{1}{2L}\|\nabla F(x)\|^2.$$ Assume in addition that $F$ satisfies the growth condition $\textbf{H}_2(2)$ for some constant $K>0$. Then $F$ has the \L ojasiewicz property with exponent $\theta=\frac{1}{2}$ and constant $c=\sqrt{K}$, i.e. $\|\nabla F(x)\|^2\geqslant K(F(x)-F^*)$ in the neighborhood of its minimizers. Thus: $$\left(1+\frac{K}{2L}\right)(F(x)-F^*) \leqslant\langle\nabla F(x),x-x^*\rangle$$ in the neighborhood of its minimizers, which means that $F$ satisfies $\textbf{H}_1(\gamma)$ with $\gamma=1+\frac{K}{2L}$. \end{proof} \begin{remark} Observe that Lemma \ref{lem:Lipschitz} can be easily extended to the case of convex differentiable functions with a $\nu$-H\"older continuous gradient. Indeed, let $F$ be a convex differentiable function with a $\nu$-H\"older continuous gradient for some $\nu\in(0,1]$. If $F$ also satisfies the growth condition $\textbf{H}_2(1+\nu)$ (for some constant $K>0$), then $F$ automatically satisfies $\textbf{H}_1(\gamma)$ with $\gamma =1 + \frac{\nu K}{(1+\nu)L^\frac{1}{\nu}}$.
This result is based on a notion of generalized co-coercivity for functions having a H\"older continuous gradient. \end{remark} \section{Related results}\label{sec_state} In this section, we recall some classical state-of-the-art results on the convergence properties of the trajectories of the ODE \eqref{ODE}. Let us first recall that as soon as $\alpha>0$, $F(x(t))$ converges to $F^*$ \cite{AujolDossal,attouch2017rate}, but a larger value of $\alpha$ is required to show the convergence of the trajectory $x(t)$. More precisely, if $F$ is convex and $\alpha >3$, or if $F$ satisfies the hypothesis $\textbf{H}_1(\gamma)$ and $\alpha>1+\frac{2}{\gamma}$, then: \begin{equation*} F(x(t))-F^*=o\left(\frac{1}{t^2}\right), \end{equation*} and the trajectory $x(t)$ converges (weakly in an infinite dimensional space) to a minimizer $x^*$ of $F$ \cite{su2016differential,AujolDossal,may2015asymptotic}. This last point generalizes what is known on convex functions: thanks to the additional hypothesis $\textbf{H}_1(\gamma)$, the optimal decay $\frac{1}{t^2}$ can be achieved for a damping parameter $\alpha$ smaller than $3$. In the sub-critical case (namely when $\alpha <3$), it has been proven in \cite{attouch2017rate,AujolDossal} that if $F$ is convex, the convergence rate is then given by: \begin{equation} F(x(t))-F^*=O\left(\frac{1}{t^\frac{2\alpha}{3}}\right), \end{equation} but we can no longer prove the convergence of the trajectory $x(t)$. The purpose of this paper is to prove that, by exploiting the geometry of the function $F$, better rates of convergence can be achieved for the values $F(x(t))-F^*$. Consider first the case when $F$ is convex and $\alpha \leqslant 1+\frac{2}{\gamma}$. A first contribution in this paper is to provide convergence rates for the values when $F$ only satisfies $\textbf{H}_1(\gamma)$. Although we can no longer prove the convergence of the trajectory $x(t)$, we still have the following convergence rate for $F(x(t))-F^*$: \begin{equation} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma\alpha}{2+\gamma}}}\right), \end{equation} and this decay is optimal and achieved for $F(x)=\vert x\vert^{\gamma}$ for any $\gamma\geqslant 1$. These results were first stated and proved in the unpublished report \cite{AujolDossal} by Aujol and Dossal in 2017 for convex differentiable functions such that $(F-F^*)^\frac{1}{\gamma}$ is convex. Observe that this decay is still valid for $\gamma=1$, i.e. with the sole assumption of convexity as shown in \cite{attouch2017rate}, and that the constant hidden in the big $O$ is explicit and available also for $\gamma<1$, that is for non-convex functions (for example for functions whose square is convex). Consider now the case when $\alpha > 1+\frac{2}{\gamma}$. In that case, with the sole assumption $\textbf{H}_1(\gamma)$ on $F$ for some $\gamma \geqslant 1$, it is not possible to get a bound on the decay rate like $O(\frac{1}{t^\delta})$ with $\delta>2$. Indeed, as shown in \cite[Example 2.12]{attouch2018fast}, for any $\eta>2$ and for a large friction parameter $\alpha$, the solution $x$ of the ODE associated with $F(x)=|x|^{\eta}$ satisfies: $$F(x(t))-F^*=Kt^{-\frac{2\eta}{\eta-2}},$$ and the power $\frac{2\eta}{\eta-2}$ can be chosen arbitrarily close to $2$. More conditions are thus needed to obtain a decay faster than $O\left(\frac{1}{t^2}\right)$, which is the uniform rate that can be achieved for $\alpha \geqslant 3$ for convex functions.
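To make these decay rates concrete, the following minimal sketch (ours, not taken from the cited works) integrates the trajectory numerically, assuming that \eqref{ODE} denotes the Su-Boyd-Cand\`es dynamics $\ddot{x}(t)+\frac{\alpha}{t}\dot{x}(t)+\nabla F(x(t))=0$ as the references above suggest, and fits the decay exponent of $F(x(t))-F^*$ for $F(x)=|x|^{\eta}$; all numerical values are illustrative placeholders.
\begin{verbatim}
# Sketch (ours): integrate x'' + (alpha/t) x' + grad F(x) = 0 for F(x) = |x|^eta
# and fit the decay exponent of F(x(t)) - F*; parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

alpha, eta = 6.0, 3.0        # large friction: expected exponent 2*eta/(eta-2) = 6

def rhs(t, y):
    x, v = y
    return [v, -alpha / t * v - eta * np.sign(x) * abs(x) ** (eta - 1.0)]

sol = solve_ivp(rhs, (1.0, 1e4), [0.5, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-13)

# F(x(t)) may oscillate, so fit block maxima (the upper envelope) in log-log scale.
ts = np.logspace(2.0, 4.0, 4000)
Fs = np.abs(sol.sol(ts)[0]) ** eta
blocks = np.array_split(np.arange(ts.size), 40)
tb = np.array([ts[b].mean() for b in blocks])
Fb = np.array([Fs[b].max() for b in blocks])
print("fitted exponent:", -np.polyfit(np.log(tb), np.log(Fb), 1)[0])
\end{verbatim}
For $\eta=3$ and $\alpha=6$, the fitted exponent should be close to $\frac{2\eta}{\eta-2}=6$, in line with the example of \cite{attouch2018fast} recalled above.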
Our main contribution is to show that the flatness condition $\textbf{H}_1$, combined with classical sharpness conditions such as the \L ojasiewicz property, provides new and better decay rates on the values $F(x(t))-F^*$, and to prove the optimality of these rates in the sense that they are achieved, for instance, for the function $F(x)=|x|^\gamma$, $x\in \mathbb{R}$, $\gamma\geqslant 1$. We then compare our results with well-known results in the literature. In particular we focus on the case when $F$ is strongly convex or has a strong minimizer \cite{cabot2009long}. In that case, Attouch, Chbani, Peypouquet and Redont \cite{attouch2018fast}, following Su, Boyd and Cand\`es \cite{su2016differential}, proved that for any $\alpha>0$ we have: $$F(x(t))-F^*=O\left(t^{-\frac{2\alpha}{3}}\right)$$ (see also \cite{attouch2017rate} for a more general viscosity term in that setting). In Section~\ref{sec_contrib}, we will prove the optimality of the power $\frac{2\alpha}{3}$ in \cite{attouch2016fast}, and that if $F$ additionally has a Lipschitz continuous gradient, then the decay rate of $F(x(t))-F^*$ is always strictly better than $O\left(t^{-\frac{2\alpha}{3}}\right)$. Finally, several results about the convergence rate of the solutions of the ODE associated with classical gradient descent, \begin{equation}\label{EDOGrad} \dot{x}(t)+\nabla F(x(t))=0, \end{equation} or of the ODE associated with the heavy ball method, \begin{equation}\label{EDO} \ddot{x}+\alpha\dot x(t)+\nabla F(x(t))=0, \end{equation} under geometrical conditions such as the \L ojasiewicz property have been proposed; see for example Polyak and Shcherbakov~\cite{polyak2017lyapunov}. The authors prove that if the function $F$ satisfies $\textbf{H}_2(2)$ and some other conditions, the decay of $F(x(t))-F^*$ is exponential for the solutions of both previous equations. These rates are the continuous counterparts of the exponential decay rates of the classical gradient descent algorithm and of the heavy ball algorithm for strongly convex functions. In the next section we will prove that this exponential rate does not hold for solutions of \eqref{ODE}, even for quadratic functions, and we will prove that, from an optimization point of view, the classical Nesterov acceleration may be less efficient than classical gradient descent. \section{Contributions}\label{sec_contrib} In this section, we state the optimal convergence rates that can be achieved when $F$ satisfies hypotheses such as $\textbf{H}_1(\gamma)$ and/or $\textbf{H}_2(r)$. The first result gives an optimal control of $F(x(t))-F^*$ under the sole flatness condition $\textbf{H}_1(\gamma)$, for small values of the friction parameter $\alpha$: \begin{theorem}\label{Theo1} Let $\gamma\geqslant 1$ and $\alpha >0$. If $F$ satisfies $\textbf{H}_1(\gamma)$ and if $\alpha\leqslant 1+\frac{2}{\gamma}$ then: \begin{equation*} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma\alpha}{\gamma+2}}}\right). \end{equation*} \end{theorem} \begin{figure}[h] \includegraphics[width=\textwidth]{RateTheo1.png} \caption{Decay rate $r(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}$ depending on $\alpha$ when $\alpha\leqslant 1+\frac{2}{\gamma}$ and when $F$ satisfies $\textbf{H}_1(\gamma)$ (as in Theorem \ref{Theo1}), for four values of $\gamma$: $\gamma_1=1.5$ (dashed line), $\gamma_2=2$ (solid line), $\gamma_3=3$ (dotted line) and $\gamma_4=5$ (dash-dotted line).} \end{figure} Note that a proof of Theorem \ref{Theo1} has been proposed in the unpublished report \cite{AujolDossal}. The obtained decay is proved to be optimal in the sense that it is achieved for some explicit functions $F$ for any $\gamma \geqslant 1$.
As a consequence, one cannot expect a $o(t^{-\frac{2\gamma\alpha}{\gamma+2}})$ decay when $\alpha <1+\frac{2}{\gamma}$. Let us now consider the case when $\alpha > 1+\frac{2}{\gamma}$. The second result in this paper provides optimal convergence rates for functions whose geometry is sharp, with a large friction coefficient: \begin{theorem}\label{Theo1b} Let $\gamma\in[1,2]$ and $\alpha>1+\frac{2}{\gamma}$. If $F$ satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(2)$, and if $F$ has a unique minimizer, then \begin{equation*}\label{eqTheo1} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma\alpha}{\gamma+2}}}\right). \end{equation*} Moreover this decay is optimal in the sense that for any $\gamma\in(1,2]$ this rate is achieved for the function $F(x)=\vert x\vert^\gamma$. \end{theorem} \begin{figure}[h] \includegraphics[width=\textwidth]{RateSharp.png} \caption{Decay rate $r(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}$ depending on the value of $\alpha$ when $F$ satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(2)$ (as in Theorem \ref{Theo1b}) with $\gamma\leqslant 2$, for two values of $\gamma$: $\gamma_1=1.5$ (dashed line) and $\gamma_2=2$ (solid line).} \end{figure} Note that Theorem~\ref{Theo1b} only applies for $\gamma\leqslant 2$, since there is no function that satisfies both conditions $\textbf{H}_1(\gamma)$ with $\gamma>2$ and $\textbf{H}_2(2)$ (see Lemma \ref{lem:geometry2}). The optimality of the convergence rate is precisely stated in the next proposition: \begin{proposition} \label{prop_optimal} Let $\gamma\in (1,2]$ and $\alpha>0$. Let $x$ be a solution of \eqref{ODE} with $F(x)=\vert x\vert^{\gamma}$, $|x(t_0)|<1$ and $\dot{x}(t_0)=0$, where $t_0>\sqrt{\max(0,\frac{\alpha \gamma(\alpha -1-2/\gamma)}{(\gamma +2)^2})}$. There exists $K >0$ such that for any $T>0$, there exists $t \geqslant T$ such that \begin{equation} F(x(t))-F^* \geqslant \frac{K}{t^{\frac{2 \gamma \alpha}{\gamma +2}}}. \end{equation} \end{proposition} Let us make several observations. First, to apply Theorem~\ref{Theo1b}, more conditions are needed than for Theorem~\ref{Theo1}: the hypothesis $\textbf{H}_2(2)$ and the uniqueness of the minimizer are needed to prove a decay faster than $O(\frac{1}{t^2})$, which is the uniform rate that can be achieved with $\alpha\geqslant 3$ for convex functions \cite{su2016differential}. The uniqueness of the minimizer is crucial in the proof of Theorem~\ref{Theo1b}, but it is still an open problem to know whether this uniqueness is a necessary condition. In particular, observe that if $\dot x(t_0)=0$, then for all $t \geqslant t_0$, $x(t)$ belongs to $x_0+ {\rm Im} (\nabla F)$, where ${\rm Im} (\nabla F)$ stands for the vector space generated by $\nabla F (x)$ for all $x$ in $\mathbb{R}^n$. As a consequence, Theorem~\ref{Theo1b} still holds true as long as the assumptions are valid in $x_0+ {\rm Im} (\nabla F)$. \begin{remark}[The Least-Squares problem] Let us consider the classical Least-Squares problem defined by: $$\displaystyle\min_{x\in \mathbb{R}^n}F(x):=\frac{1}{2}\|Ax-b\|^2,$$ where $A$ is a linear operator and $b\in \mathbb{R}^n$. If $\dot x(t_0)=0$, then for all $t \geqslant t_0$, $x(t)$ belongs to the affine subspace $x_0+{\rm Im}(A^*)$. Since the minimizer is unique on $x_0+{\rm Im}(A^*)$, Theorem~\ref{Theo1b} can be applied.
\end{remark} We can also remark that if $F$ is a quadratic function in the neighborhood of $x^*$, then $F$ satisfies $\textbf{H}_1(\gamma)$ for any $\gamma \in [1,2]$. Consequently, Theorem~\ref{Theo1b} applies with $\gamma=2$ and thus: \begin{equation*} F(x(t))-F^*=O\left(\frac{1}{t^{\alpha}}\right). \end{equation*} Observe that the optimality result provided by Proposition~\ref{prop_optimal} ensures that we cannot expect an exponential decay of $F(x(t))-F^*$ for quadratic functions, whereas this exponential decay can be achieved for the ODEs associated with gradient descent or the heavy ball method \cite{polyak2017lyapunov}. Likewise, if $F$ is a convex differentiable function with a Lipschitz continuous gradient, and if $F$ satisfies the growth condition $\textbf{H}_2(2)$, then $F$ automatically satisfies the assumption $\textbf{H}_1(\gamma)$ with some $1<\gamma\leqslant 2$, as shown by Lemma~\ref{lem:Lipschitz}, and Theorem~\ref{Theo1b} applies with $\gamma>1$. Finally, if $F$ is strongly convex or has a strong minimizer, then $F$ naturally satisfies $\textbf{H}_1(1)$ and a global version of $\textbf{H}_2(2)$. Since we prove the optimality of the decay rates given by Theorem~\ref{Theo1b}, a consequence of this work is also the optimality of the power $\frac{2\alpha}{3}$ in \cite{attouch2016fast} for strongly convex functions and functions having a strong minimizer. In both cases, we thus obtain convergence rates which are strictly better than the rate $O(t^{-\frac{2\alpha}{3}})$ proposed for strongly convex functions by Su et al. \cite{su2016differential} and Attouch et al. \cite{attouch2018fast}. Finally, it is worth noticing that the decay for strongly convex functions is not exponential, whereas it is for the classical gradient descent scheme (see e.g. \cite{garrigos2017convergence}). This shows that applying the classical Nesterov acceleration to convex functions, without taking the geometrical properties of the objective function into account, may lead to sub-optimal algorithms. Let us now focus on flat geometries, i.e. geometries associated with $\gamma>2$. Note that the uniqueness of the minimizer is not needed anymore: \begin{theorem}\label{Theo2} Let $\gamma_1>2$ and $\gamma_2 >2$. Assume that $F$ is coercive and satisfies $\textbf{H}_1(\gamma_1)$ and $\textbf{H}_2(\gamma_2)$ with $\gamma_1\leqslant \gamma_2$. If $\alpha\geqslant \frac{\gamma_1+2}{\gamma_1-2}$ then we have: \begin{equation*}\label{eqTheo2} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma_2}{\gamma_2-2}}}\right). \end{equation*} \end{theorem} In the case when $\gamma_1= \gamma_2$, we have furthermore the convergence of the trajectory: \begin{corollary}\label{Corol2} Let $\gamma>2$. If $F$ is coercive and satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(\gamma)$, and if $\alpha\geqslant \frac{\gamma+2}{\gamma-2}$, then we have: \begin{equation*}\label{eqCorol2} F(x(t))-F^*=O\left(\frac{1}{t^{\frac{2\gamma}{\gamma-2}}}\right), \end{equation*} and \begin{equation} \norm{\dot x(t)}=O\left(\frac{1}{t^{\frac{\gamma}{\gamma-2}}}\right). \end{equation} Moreover the trajectory $x(t)$ has a finite length and it converges to a minimizer $x^*$ of $F$.
\end{corollary} \begin{figure}[h] \includegraphics[width=\textwidth]{RateFlat.png} \caption{Decay rate $r(\alpha,\gamma)=\frac{2\gamma}{\gamma-2}$ depending on the value of $\alpha$ when $\alpha\geqslant \frac{\gamma+2}{\gamma-2}$ and $F$ satisfies $\textbf{H}_1(\gamma)$ (as in Theorem \ref{Theo2}), for two values of $\gamma$: $\gamma_3=3$ (dotted line) and $\gamma_4=5$ (dash-dotted line).}\label{fig:flat1} \end{figure} \begin{figure}[h] \includegraphics[width=\textwidth]{RateAll.png} \caption{Decay rate $r(\alpha,\gamma)$ depending on the value of $\alpha$ if $F$ satisfies $\textbf{H}_1(\gamma)$ and $\textbf{H}_2(r)$ with $r=\max(2,\gamma)$, for four values of $\gamma$: $\gamma_1=1.5$ (dashed line), $\gamma_2=2$ (solid line), $\gamma_3=3$ (dotted line) and $\gamma_4=5$ (dash-dotted line).}\label{fig:flat2} \end{figure} Observe that the decay obtained in Corollary \ref{Corol2} is optimal, since Attouch et al. proved in \cite{attouch2018fast} that it is achieved for the function $F(x)=\vert x\vert^\gamma$.\\ From Theorems \ref{Theo1}, \ref{Theo1b} and \ref{Theo2}, we can make the following comments. First, in Theorems~\ref{Theo1b} and \ref{Theo2}, both conditions $\textbf{H}_1$ and $\textbf{H}_2$ are used to get a decay rate, and it turns out that these two conditions are important. With the sole hypothesis $\textbf{H}_2(\gamma)$ it seems difficult to establish an optimal rate. Consider for instance the function $F(x)=|x|^3$, which satisfies $\textbf{H}_1(3)$ and $\textbf{H}_2(3)$. Applying Theorem \ref{Theo2} with $\gamma_1=\gamma_2=3$, we know that for this function with $\alpha=\frac{\gamma_1+2}{\gamma_1-2}=5$, we have $F(x(t))-F^*=O\left(\frac{1}{t^6}\right)$. But, with the sole hypothesis $\textbf{H}_2(3)$, such a decay cannot be guaranteed. Indeed, the function $F(x)=|x|^{2}$ satisfies $\textbf{H}_2(3)$, but from the optimality part of Theorem \ref{Theo1b} we know that we cannot achieve a decay better than $\frac{1}{t^{\frac{2\alpha \gamma}{\gamma+2}}}=\frac{1}{t^5}$ for $\alpha=5$. Consider now a convex function $F$ behaving like $\norm{x-x^*}^{\gamma}$ in the neighborhood of its unique minimizer $x^*$. The decay of $F(x(t))-F^*$ then depends directly on $\alpha$ if $\gamma\leqslant 2$, but it does not depend on $\alpha$ for large $\alpha$ if $\gamma>2$. Moreover, for such functions the best decay rate of $F(x(t))-F^*$ is $O\left(\frac{1}{t^{\alpha}}\right)$, and it is achieved for $\gamma=2$, i.e. for quadratic-like functions around the minimizer. If $\gamma<2$, it seems that the oscillations of the solution $x(t)$ prevent us from getting an optimal decay rate: the inertia seems to be too large for such functions. If $\gamma>2$, for large $\alpha$, the decay is not as fast because the gradient of the function decays too fast in the neighborhood of the minimizer. For these functions a larger inertia could be more efficient. Finally, observe that, as shown in Figures \ref{fig:flat1} and \ref{fig:flat2}, the case when $1+\frac{2}{\gamma}<\alpha<\frac{\gamma+2}{\gamma-2}$ is not covered by our results. Although we did not get a better convergence rate than $\frac{1}{t^2}$ in that case, we can prove that there exist some initial conditions for which the convergence rate cannot be better than $t^{-\frac{2\gamma\alpha}{\gamma+2}}$: \begin{proposition}\label{PropOpt2} Let $\gamma >2$ and $1+\frac{2}{\gamma}<\alpha<\frac{\gamma+2}{\gamma-2}$. Let $x$ be a solution of \eqref{ODE} with $F(x)=|x|^\gamma$, $|x(t_0)|<1$ and $\dot x(t_0)=0$ for any given $t_0>0$.
Then there exists $K>0$ such that for any $T>0$, there exists $t\geqslant T$ such that: $$F(x(t))-F^* \geqslant \dfrac{K}{t^{\frac{2\gamma\alpha}{\gamma+2}}}.$$\label{prop:gap} \end{proposition} \paragraph{Numerical Experiments} In the following numerical experiments, the optimality of the decay rates given in the previous theorems is tested for various choices of $\alpha$ and $\gamma$. More precisely, we use a discrete Nesterov scheme to approximate the solution of \eqref{ODE} for $F(x)=|x|^{\gamma}$ on the interval $[t_0,T]$ with $t_0=0$ and $\dot{x}(t_0)=0$, see \cite{su2016differential}. If $\gamma\geqslant 2$, $\nabla F$ is a Lipschitz function and we define the sequence $(x_n)_{n\in\mathbb{N}}$ as follows: \begin{equation*} x_{n+1}=y_n-h\nabla F(y_n)\text{ with }y_n=x_n+\frac{n}{n+\alpha}(x_n-x_{n-1}), \end{equation*} where $h\in(0,1)$ is a time step. If $\gamma<2$, we use a proximal step: \begin{equation*} x_{n+1}=prox_{h F}(y_n)\text{ with }y_n=x_n+\frac{n}{n+\alpha}(x_n-x_{n-1}). \end{equation*} It has been shown that $x_n\approx x(n\sqrt{h})$, where the function $x$ is a solution of the ODE \eqref{ODE}. In the following numerical experiments the sequence $(x_n)_{n\in\mathbb{N}}$ is computed for various pairs $(\gamma,\alpha)$. The step size is always set to $h=10^{-7}$. We define the function $rate(\alpha,\gamma)$ as the expected rate given in the previous theorems and in Proposition \ref{PropOpt2}, that is: \begin{eqnarray*} rate(\alpha,\gamma)&:=&\left\{\begin{array}{ll} \dfrac{2\alpha \gamma}{\gamma+2} &\text{ if } \gamma\leqslant 2 \text{ or if }\gamma>2\text{ and }\alpha\leqslant 1+\frac{2}{\gamma}, \\ \dfrac{2\gamma}{\gamma-2}& \text{ if } \gamma>2\text{ and }\alpha\geqslant \frac{\gamma+2}{\gamma-2}, \\ \dfrac{2\alpha \gamma}{\gamma+2} &\text{ if } \gamma>2 \text{ and } \alpha\in(1+\frac{2}{\gamma}, \frac{\gamma+2}{\gamma-2}). \end{array}\right. \end{eqnarray*} If the function $z(t):=\left(F(x(t))-F(x^*)\right)t^{\delta}$ is bounded but does not tend to 0, we can deduce that $\delta$ is the largest value such that $F(x(t))-F(x^*)=O\left(t^{-\delta}\right)$. We define \begin{equation*} z_n:=(F(x_n)-F(x^*))\times (n\sqrt{h})^{rate(\alpha,\gamma)}\approx (F(x(t))-F(x^*))t^{rate(\alpha,\gamma)}, \end{equation*} and if the function $rate(\alpha,\gamma)$ is optimal we expect that the sequence $(z_n)_{n\in\mathbb{N}}$ is bounded but does not decay to 0. A minimal implementation sketch of this experiment is given at the end of Section~\ref{sec_proofs}. The following figures show, for various choices of $(\alpha,\gamma)$, the trajectory of the sequence $(z_n)_{n\in\mathbb{N}}$. The values are re-scaled such that the maximum is always $1$. In all these numerical examples, we observe that the sequence $(z_n)_{n\in\mathbb{N}}$ is bounded and does not tend to $0$. \begin{figure}[h] \includegraphics[width=0.495\textwidth]{Trajgam1dot5alpha1r0dot86.png} \includegraphics[width=0.495\textwidth]{Trajgam1dot5alpha6r5dot14.png} \caption{Case when $\gamma=1.5$. On the left, $\alpha=1$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=\frac{6}{7}$. On the right, $\alpha=6$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=\frac{36}{7}$.}\label{fig:gamma1.5} \end{figure} \begin{figure}[h] \includegraphics[width=0.495\textwidth]{Trajgam2alpha1r1.png} \includegraphics[width=0.495\textwidth]{Trajgam2alpha6r6.png} \caption{Case when $\gamma=2$. On the left, $\alpha=1$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=1$.
On the right, $\alpha=6$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=6$.}\label{fig:gamma2} \end{figure} \begin{figure}[h] \includegraphics[width=0.495\textwidth]{Trajgam3alpha1r1dot2.png} \includegraphics[width=0.495\textwidth]{Trajgam3alpha4r4dot8.png} \includegraphics[width=0.495\textwidth]{Trajgam3alpha6r6.png} \includegraphics[width=0.495\textwidth]{Trajgam3alpha8r6.png} \caption{Case when $\gamma=3$. On the top left, $\alpha=1$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=1.2$; on the top right, $\alpha=4$ and $rate(\alpha,\gamma)=\frac{2\alpha\gamma}{\gamma+2}=4.8$; on the bottom left, $\alpha=6$ and $rate(\alpha,\gamma)=\frac{2\gamma}{\gamma-2}=6$; on the bottom right, $\alpha=8$ and $rate(\alpha,\gamma)=\frac{2\gamma}{\gamma-2}=6$.}\label{fig:gamma3} \end{figure} \begin{itemize} \item Figures \ref{fig:gamma1.5} and \ref{fig:gamma2}, with $\gamma=1.5$ and $\gamma=2$, illustrate Theorem \ref{Theo1}, Theorem \ref{Theo1b} and Proposition \ref{prop_optimal}. Indeed, for sharp functions (i.e. for $\gamma\leqslant 2$), the rate is proved to be optimal. \item In the case $\gamma=3$ and $\alpha=1$, the fact that $(F(x(t))-F(x^*))t^{rate(\alpha,\gamma)}$ is bounded is also a consequence of Theorem \ref{Theo1}. The optimality of this rate is not proved, but the experiments indicate that it holds numerically. \item In the case $\gamma=3$ and $\alpha=4$, we have $\alpha\in(1+\frac{2}{\gamma},\frac{\gamma+2}{\gamma-2})$, and the fact that $(F(x(t))-F(x^*))t^{rate(\alpha,\gamma)}$ is bounded is not proved, but the experiments from Figure \ref{fig:gamma3} indicate that it holds numerically. Moreover, Proposition \ref{PropOpt2} proves that the sequence $(z_n)_{n\in\mathbb{N}}$ does not tend to 0, which is illustrated by the experiments. \item When $\gamma=3$ and $\alpha=6$ or $\alpha=8$, Theorem \ref{Theo2} ensures that the sequence $(z_n)_{n\in\mathbb{N}}$ is bounded. This rate is proved to be optimal, and the numerical experiments from Figure \ref{fig:gamma3} show that it is actually achieved for this specific choice of parameters. \end{itemize} \section{Proofs}\label{sec_proofs} In this section, we detail the proofs of the results presented in Section~\ref{sec_contrib}, namely Theorems \ref{Theo1}, \ref{Theo1b} and \ref{Theo2}, Propositions~\ref{prop_optimal} and \ref{prop:gap}, and Corollary~\ref{Corol2}. The proofs of the theorems rely on the Lyapunov functions $\mathcal{E}$ and $\mathcal{H}$ introduced by Su, Boyd and Cand\`es \cite{su2016differential}, Attouch, Chbani, Peypouquet and Redont \cite{attouch2018fast} and Aujol and Dossal \cite{AujolDossal}: \begin{equation*} \mathcal{E}(t)=t^2(F(x(t))-F^*)+\frac{1}{2} \norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2+\frac{\xi}{2}\norm{x(t)-x^*}^2, \end{equation*} where $x^*$ is a minimizer of $F$, and $\lambda$ and $\xi$ are two real numbers. The function $\mathcal{H}$ is defined from $\mathcal{E}$ and depends on another real parameter $p$: \begin{equation*} \mathcal{H}(t)=t^p\mathcal{E}(t). \end{equation*} Using the following notations: \begin{align*} a(t)&=t(F(x(t))-F^*),\\ b(t)&=\frac{1}{2t}\norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2,\\ c(t)&=\frac{1}{2t}\norm{x(t)-x^*}^2, \end{align*} we have: \begin{equation*} \mathcal{E}(t)=t(a(t)+b(t)+\xi c(t)).
\end{equation*} From now on we will choose \begin{equation*} \xi=\lambda(\lambda+1-\alpha), \end{equation*} and we will use the following lemma, whose proof is postponed to Appendix~\ref{appendix}: \begin{lemma}\label{LemmeFonda} If $F$ satisfies $\textbf{H}_1(\gamma)$ for some $\gamma\geq 1$, and if $\xi=\lambda(\lambda-\alpha+1)$, then \begin{equation*} \mathcal{H}'(t)\leqslant t^{p}\left((2-\gamma\lambda+p)a(t)+(2\lambda+2-2\alpha+p)b(t)+\lambda(\lambda+1-\alpha)(-2\lambda+p)c(t)\right). \end{equation*} \end{lemma} Note that this inequality is actually an equality for the specific choice $F(x)=\vert x\vert^\gamma$, $\gamma>1$. \subsection{Proof of Theorems \ref{Theo1} and \ref{Theo1b}} In this section we prove Theorem \ref{Theo1} and Theorem \ref{Theo1b}. Note that a complete proof of Theorem~\ref{Theo1}, including the optimality of the rate, can be found in the unpublished report \cite{AujolDossal} under the hypothesis that $(F-F^*)^\frac{1}{\gamma}$ is convex. The proofs of both theorems are actually similar. The choices of $p$ and $\lambda$ are the same but, for the first one, due to the value of $\alpha$, the function $\mathcal{H}$ is non-increasing and a sum of non-negative terms, which simplifies the analysis and requires fewer hypotheses to conclude. We choose here $p=\frac{2\gamma \alpha}{\gamma+2}-2$ and $\lambda=\frac{2\alpha}{\gamma+2}$, and thus \begin{equation*} \xi=\frac{2\alpha\gamma}{(\gamma+2)^2}\left(1+\frac{2}{\gamma}-\alpha\right). \end{equation*} From Lemma \ref{LemmeFonda}, it appears that: \begin{equation}\label{ineqH1} \mathcal{H}'(t)\leqslant K_1t^{p}c(t), \end{equation} where the real constant $K_1$ is given by: \begin{eqnarray*} K_1 & = & \lambda(\lambda+1-\alpha)(-2\lambda+p)\\ &=& \frac{2\alpha}{\gamma+2} \left(\frac{2\alpha}{\gamma+2}+1-\alpha\right) \left(-2 \frac{2\alpha}{\gamma+2}+\frac{2\gamma \alpha}{\gamma+2}-2 \right) \\ & = & \frac{4\alpha}{(\gamma+2)^3} \left(2\alpha+\gamma+2-\alpha \gamma -2 \alpha \right) \left(-2\alpha+\gamma \alpha - \gamma -2 \right) \\ & = & \frac{4\alpha}{(\gamma+2)^3} \left(\gamma+2-\alpha \gamma\right) \left(\alpha(-2+\gamma) - \gamma -2 \right). \end{eqnarray*} Hence: \begin{equation}\label{eqdefK1} K_1=\frac{4\alpha\gamma}{(\gamma+2)^3} \left(1+\frac{2}{\gamma}-\alpha\right) \left(\alpha(-2+\gamma) - \gamma -2 \right). \end{equation} Consider first the case when $\alpha\leqslant 1+\frac{2}{\gamma}$. In that case, we observe that $\xi\geq 0$, so that the energy $\mathcal{H}$ is actually a sum of non-negative terms. Coming back to \eqref{ineqH1}, we have: \begin{equation} \mathcal{H}'(t)\leqslant K_1t^{p}c(t).\label{ineqH1b1} \end{equation} Since $\alpha\leqslant 1+\frac{2}{\gamma}$, the sign of the constant $K_1$ is the same as that of $\alpha(-2+\gamma) - \gamma -2$, and thus $K_1\leqslant 0$ for any $\gamma\geqslant 1$. According to \eqref{ineqH1b1}, the energy $\mathcal{H}$ is thus non-increasing and bounded, i.e.: $$\forall t\geqslant t_0,~\mathcal{H}(t)\leqslant \mathcal{H}(t_0).$$ Since $\mathcal{H}$ is a sum of non-negative terms, it follows directly that: $$\forall t\geqslant t_0,~t^{p+2}(F(x(t))-F^*)\leqslant \mathcal{H}(t_0),$$ which concludes the proof of Theorem~\ref{Theo1}. Consider now the case when $\alpha > 1+\frac{2}{\gamma}$. In that case, we first observe that $\xi<0$, so that $\mathcal{H}$ is not a sum of non-negative functions anymore, and an additional growth condition $\textbf{H}_2(2)$ will be needed to bound the term in $\norm{x(t)-x^*}^2$.
Coming back to \eqref{ineqH1}, we have: \begin{equation} \mathcal{H}'(t)\leqslant K_1t^{p}c(t).\label{ineqH1b2} \end{equation} Since $\alpha > 1+\frac{2}{\gamma}$, the sign of the constant $K_1$ is the opposite of the sign of $\alpha(\gamma -2)-(\gamma+2)$. Moreover, since $\gamma\leqslant 2$, we have $\alpha(\gamma -2)-(\gamma+2)<0$ and thus $K_1 >0$. Using the hypothesis $\textbf{H}_2(2)$ and the uniqueness of the minimizer, there exists $K>0$ such that: \begin{equation*} Kt\norm{x(t)-x^*}^2\leqslant t(F(x(t))-F^*)=a(t), \end{equation*} and thus \begin{equation} c(t)\leqslant \frac{1}{2Kt^2}a(t).\label{eqct} \end{equation} Since $\xi<0$ with our choice of parameters, we get: \begin{eqnarray} \mathcal{H}(t) &\geqslant & t^{p+1}(a(t) + \xi c(t)) \geqslant t^{p+1}\left(1+\frac{\xi}{2Kt^2}\right)a(t).\label{H:bound} \end{eqnarray} It follows that there exists $t_1$ such that for all $t\geqslant t_1$, $\mathcal{H}(t)\geqslant 0$ and: \begin{equation} \mathcal{H}(t) \geqslant \frac{1}{2}t^{p+1}a(t).\label{eqat} \end{equation} From \eqref{ineqH1b2}, \eqref{eqct} and \eqref{eqat}, we get: \begin{equation*} \mathcal{H}'(t)\leqslant \frac{K_1}{K}\frac{\mathcal{H}(t)}{t^3}. \end{equation*} From the Gr\"onwall Lemma in its differential form, there exists $A>0$ such that for all $t\geqslant t_1$, we have $\mathcal{H}(t)\leqslant A$. According to \eqref{eqat}, we then conclude that $t^{p+2}(F(x(t))-F^*)=t^{p+1}a(t)$ is bounded, which concludes the proof of Theorem \ref{Theo1b}. \subsection{Proof of Proposition~\ref{prop_optimal} (Optimality of the convergence rates)} Before proving the optimality of the convergence rate stated in Proposition~\ref{prop_optimal}, we need the following technical lemma: \begin{lemma} \label{lemmatech} Let $y$ be a continuously differentiable function with values in $\mathbb R$. Let $T>0$ and $\epsilon > 0$. If $y$ is bounded, then there exists $t_1 >T$ such that: \begin{equation*} |\dot{y}(t_1) | \leqslant \frac{\epsilon}{t_1}. \end{equation*} \end{lemma} \begin{proof} Assume by contradiction that $|\dot{y}(t)|>\frac{\epsilon}{t}$ for all $t>T$. Since $\dot y$ is continuous and does not vanish on $(T,+\infty)$, it has a constant sign there; assume for instance $\dot{y}(t)>0$ (the other case is similar). Integrating $\dot{y}(t)>\frac{\epsilon}{t}$ then yields $y(t)\geqslant y(2T)+\epsilon\ln\frac{t}{2T}$ for all $t\geqslant 2T$, so that $y$ cannot be bounded, a contradiction. \end{proof} Let us now prove Proposition~\ref{prop_optimal}. The idea of the proof is the following: we first show that $\mathcal{H}$ is bounded from below. Since $\mathcal{H}$ is a sum of $3$ terms including the term $F-F^*$, we then show that given $t_1 \geq t_0$, there always exists a time $t \geq t_1$ such that the value of $\mathcal{H}$ is concentrated on the term $F-F^*$. We start the proof by using the fact that, for the function $F(x)=\vert x\vert^{\gamma}$, $\gamma>1$, the inequality of Lemma~\ref{LemmeFonda} is actually an equality. Using the values $p=\frac{2\gamma \alpha}{\gamma+2}-2$ and $\lambda=\frac{2\alpha}{\gamma+2}$ of Theorems~\ref{Theo1} and \ref{Theo1b}, we have a closed form for the derivative of the function $\mathcal{H}$: \begin{equation}\label{eqH1} \mathcal{H}'(t) = K_1 t^{p}c(t) =\frac{K_1}{2} t^{p-1}\vert x(t)\vert^{2}, \end{equation} where $K_1$ is the constant given in \eqref{eqdefK1}. We will now prove that there exists $\ell>0$ such that, for $t$ large enough: \begin{equation*} \mathcal{H}(t) \geqslant \ell. \end{equation*} To prove that point we consider two cases depending on the sign of $\alpha-(1+\frac{2}{\gamma})$.
\begin{enumerate} \item Case $\alpha\leqslant 1+\frac{2}{\gamma}$, so that $\xi\geqslant 0$ and $K_1\leqslant 0$. We first observe that $\mathcal{H}$ is a non-negative and non-increasing function. Moreover, there exists $\tilde t\geqslant t_0$ such that for $t\geqslant \tilde t$, $|x(t)|\leqslant 1$ and: \begin{equation*} t^pc(t)\leqslant \frac{t^pa(t)}{2t^2}\leqslant \frac{\mathcal{H}(t)}{t^3}, \end{equation*} which implies, using \eqref{eqH1}, that: \begin{equation*} |\mathcal{H}'(t)|\leqslant |K_1|\frac{\mathcal{H}(t)}{t^3}. \end{equation*} If we denote $G(t)=\ln (\mathcal{H}(t))$, we get for all $t\geqslant \tilde t$: \begin{equation*} |G(t)-G(\tilde t)|\leqslant \int_{\tilde t}^t\frac{|K_1|}{s^3}ds. \end{equation*} We deduce that $G(t)$ is bounded from below, and hence that there exists $\ell>0$ such that for $t$ large enough: \begin{equation*} \mathcal{H}(t) \geqslant \ell. \end{equation*} \item Case $\alpha> 1+\frac{2}{\gamma}$, so that $\xi< 0$ and $K_1> 0$. This implies in particular that $\mathcal{H}$ is non-decreasing. Moreover, from Theorem~\ref{Theo1b}, $\mathcal{H}$ is bounded above. Coming back to the inequality \eqref{H:bound}, we observe that $\mathcal{H}(t_0)>0$ provided that $1+\frac{\xi}{2t_0^2}>0$, with $K=1$ and $\xi = \lambda(\lambda-\alpha+1)$, i.e.: $$t_0>\sqrt{\frac{\alpha\gamma}{(\gamma +2)^2}\left(\alpha-\left(1+\frac{2}{\gamma}\right)\right)}.$$ In particular, we have, for any $t\geqslant t_0$: \begin{equation*} \mathcal{H}(t) \geqslant \ell, \end{equation*} with $\ell=\mathcal{H}(t_0)$. \end{enumerate} Hence for any $\alpha>0$ and for $t$ large enough, \begin{equation*} a(t)+b(t)+ \xi c(t) \geqslant \frac{\ell}{t^{p+1}}. \end{equation*} Moreover, since $c(t)=o(a(t))$ when $t \to + \infty$, we have, for $t$ large enough: \begin{equation*} a(t)+b(t) \geqslant \frac{\ell}{2 t^{p+1}}. \end{equation*} Let $T>0$ and $\epsilon >0$. We set: \begin{equation*} y(t):=t^{\lambda} x(t), \end{equation*} where $\lambda =\frac{2 \alpha}{\gamma+2}$. From Theorems~\ref{Theo1} and \ref{Theo1b}, we know that $y(t)$ is bounded. Hence, from Lemma~\ref{lemmatech}, there exists $t_1 > T$ such that \begin{equation} \label{eqt1} |\dot{y}(t_1) |\leqslant \frac{\epsilon}{t_1}. \end{equation} But: \begin{equation*} \dot{y}(t) = t^{\lambda-1} \left(\lambda x(t) +t \dot{x}(t) \right). \end{equation*} Hence, using \eqref{eqt1}: \begin{equation*} t_1^{\lambda} \left|\lambda x(t_1) +t_1 \dot{x}(t_1) \right| \leqslant \epsilon. \end{equation*} We recall that $b(t)=\frac{1}{2t}\norm{\lambda(x(t)-x^*)+t\dot{x}(t)}^2$. We thus have: \begin{equation*} b(t_1) \leqslant \frac{\epsilon^2}{2 t_1^{2\lambda+1}}. \end{equation*} Since $\gamma \leqslant 2$, $\lambda =\frac{2 \alpha}{\gamma+2}$ and $p=\frac{2\gamma \alpha}{\gamma+2}-2$, we have $ 2\lambda+1 \geq p+1 $, and thus \begin{equation*} b(t_1) \leqslant \frac{\epsilon^2}{2 t_1^{p+1}}. \end{equation*} For $\epsilon =\sqrt{\frac{ \ell}{2}}$ for example, there exists thus some $t_1 >T$ such that $b(t_1) \leqslant \frac{\ell}{4 t_1^{p+1}}$. Then $a(t_1) \geqslant \frac{\ell}{4 t_1^{p+1}}$, i.e. $F(x(t_1))-F^* \geqslant \frac{\ell}{4 t_1^{p+2}}$. Since $p+2= \frac{2 \gamma \alpha}{\gamma +2}$, this concludes the proof. \subsection{Proof of Theorem \ref{Theo2}} We detail here the proof of Theorem \ref{Theo2}. Let us consider $\gamma_1>2$, $\gamma_2 >2$, and $\alpha\geqslant \frac{\gamma_1+2}{\gamma_1-2}$. We consider here the functions $\mathcal{H}$ associated with each $x^*$ in the set $X^*$ of minimizers of $F$, and prove that these functions are uniformly bounded.
More precisely, for any $x^*\in X^*$ we define $\mathcal{H}(t)$ with $p=\frac{4}{\gamma_1-2}$ and $\lambda=\frac{2}{\gamma_1-2}$. With this choice of $\lambda$ and $p$, using the hypothesis $\textbf{H}_1(\gamma_1)$, we have from Lemma~\ref{LemmeFonda}: \begin{equation*} \mathcal{H}'(t)\leqslant 2t^{\frac{4}{\gamma_1-2}}\left(\frac{\gamma_1+2}{\gamma_1-2}-\alpha\right)b(t), \end{equation*} which is non-positive when $\alpha\geqslant\frac{\gamma_1+2}{\gamma_1-2}$, so that the function $\mathcal{H}$ is non-increasing and hence bounded above. Since this holds for any choice of $x^*$ in the set of minimizers $X^*$, and since the set of minimizers is bounded ($F$ is coercive), there exist $A>0$ and $t_0$ such that for all choices of $x^*$ in $X^*$, \begin{equation*} \label{inegHHAtz} \mathcal{H}(t_0)\leqslant A, \end{equation*} which implies that for all $x^* \in X^{*}$ and for all $t\geqslant t_0$, \begin{equation*} \label{inegHHA} \mathcal{H}(t)\leqslant A. \end{equation*} Hence for all $t\geqslant t_0$ and for all $x^*\in X^*$, \begin{equation*}\label{BoundWold} t^{\frac{4}{\gamma_1-2}}t^2(F(x(t))-F^*)\leqslant \frac{\vert \xi\vert}{2} t^{\frac{4}{\gamma_1-2}}\norm{x(t)-x^*}^2 +A, \end{equation*} which implies that \begin{equation}\label{BoundW} t^{\frac{4}{\gamma_1-2}}t^2(F(x(t))-F^*)\leqslant \frac{\vert \xi\vert}{2} t^{\frac{4}{\gamma_1-2}}d(x(t),X^{*})^2 +A. \end{equation} We now set: \begin{equation}\label{defv} v(t):=t^{\frac{4}{\gamma_2-2}}d(x(t),X^*)^2. \end{equation} Using \eqref{BoundW} we have: \begin{equation}\label{Boundu1bos} t^{\frac{2 \gamma_1}{\gamma_1-2}}(F(x(t))-F^*)\leqslant \frac{\vert \xi\vert}{2} t^{\frac{4}{\gamma_1-2}-\frac{4}{\gamma_2-2}} v(t)+A. \end{equation} Using the hypothesis $\textbf{H}_2(\gamma_2)$ applied under the form given by Lemma \ref{lem:H2} (since $X^*$ is compact), there exists $K>0$ such that \begin{equation*} K\left(t^{-\frac{4}{\gamma_2-2}}v(t)\right)^{\frac{\gamma_2}{2}}\leqslant F(x(t))-F^*, \end{equation*} which is equivalent to \begin{equation*} Kv(t)^{\frac{\gamma_2}{2}} t^{\frac{-2\gamma_2}{\gamma_2-2}} \leqslant F(x(t))-F^*. \end{equation*} Hence: \begin{equation*} K t^{\frac{2\gamma_1}{\gamma_1-2}} t^{\frac{-2\gamma_2}{\gamma_2-2}} v(t)^{\frac{\gamma_2}{2}}\leqslant t^{\frac{2\gamma_1}{\gamma_1-2}} (F(x(t))-F^*). \end{equation*} Using \eqref{Boundu1bos}, we obtain: \begin{equation*} K t^{\frac{2\gamma_1}{\gamma_1-2}-\frac{2\gamma_2}{\gamma_2-2}} v(t)^{\frac{\gamma_2}{2}}\leqslant \frac{|\xi|}{2} t^{\frac{4}{\gamma_1-2}-\frac{4}{\gamma_2-2}} v(t)+A, \end{equation*} i.e.: \begin{equation}\label{vbounded} K v(t)^{\frac{\gamma_2}{2}}\leqslant \frac{|\xi|}{2} v(t)+A t^{\frac{4}{\gamma_2-2}-\frac{4}{\gamma_1-2}}. \end{equation} Since $2<\gamma_1 \leqslant \gamma_2$, we deduce that $v$ is bounded. Hence, using \eqref{Boundu1bos}, there exists some positive constant $B$ such that: \begin{equation*} F(x(t))-F^*\leqslant B t^{\frac{-2 \gamma_2}{\gamma_2-2}} +A t^{\frac{-2 \gamma_1}{\gamma_1-2}}. \end{equation*} Since $2<\gamma_1 \leqslant \gamma_2$, we have $\frac{-2 \gamma_2}{\gamma_2-2} \geqslant \frac{-2 \gamma_1}{\gamma_1-2}$. Hence we deduce that $F(x(t))-F^* = O \left( t^{\frac{-2 \gamma_2}{\gamma_2-2}}\right)$.\\ \subsection{Proof of Corollary~\ref{Corol2}} We are now in a position to prove Corollary~\ref{Corol2}. The first point of Corollary~\ref{Corol2} is just a particular instance of Theorem~\ref{Theo2}. In the sequel, we prove the second point of Corollary~\ref{Corol2}.
Let $t\geqslant t_0$ and $\tilde x\in X^*$ such that \begin{equation*} \|x(t)-\tilde x\|=d(x(t),X^*). \end{equation*} We previously proved that there exists $A>0$ such that for any $t\geqslant t_0$ and any $x^*\in X^*$, \begin{equation*} \mathcal{H}(t)\leqslant A. \end{equation*} For the choice $x^*=\tilde x$, this inequality ensures that \begin{equation*} \frac{t^{\frac{4}{\gamma-2}}}{2}\norm{\lambda (x(t)-\tilde x)+t\dot x(t)}^2+t^{\frac{4}{\gamma-2}}\frac{\xi}{2}\norm{x(t)-\tilde x}^2\leqslant A, \end{equation*} which is equivalent to \begin{equation*} \frac{t^{\frac{4}{\gamma-2}}}{2}\norm{\lambda (x(t)-\tilde x)+t\dot x(t)}^2\leqslant \frac{|\xi|}{2}v(t)+A, \end{equation*} where $v(t)$ is defined in \eqref{defv} with $\gamma_2=\gamma$. Using the fact that the function $v$ is bounded (a consequence of \eqref{vbounded}), we deduce that there exists a positive constant $A_1>0$ such that: \begin{equation*} \norm{\lambda (x(t)-\tilde x)+t\dot x(t)}\leqslant \frac{A_1}{t^{\frac{2}{\gamma-2}}}. \end{equation*} Thus: \begin{equation*} t\norm{\dot x(t)}\leqslant \frac{A_1}{t^{\frac{2}{\gamma-2}}}+|\lambda|\,\norm{x(t)-\tilde x}=\frac{A_1+|\lambda|\sqrt{v(t)}}{t^{\frac{2}{\gamma-2}}}. \end{equation*} Using once again the fact that the function $v$ is bounded, we deduce that there exists a real number $A_2$ such that \begin{equation*} \norm{\dot x(t)}\leqslant \frac{A_2}{t^{\frac{\gamma}{\gamma-2}}}, \end{equation*} which implies that $\norm{\dot x(t)}$ is an integrable function, since $\frac{\gamma}{\gamma-2}>1$. As a consequence, the trajectory $x(t)$ has a finite length; being of finite length, it converges to some point, which necessarily belongs to $X^*$ since $F(x(t))\rightarrow F^*$. \subsection{Proof of Proposition \ref{prop:gap}} The idea of the proof is very similar to that of Proposition~\ref{prop_optimal} (optimality of the convergence rate in the sharp case, i.e. when $\gamma \in (1,2]$). For the exact same choice of parameters $p=\frac{2\gamma\alpha}{\gamma+2}-2$ and $\lambda=\frac{2\alpha}{\gamma+2}$, and assuming that $1+\frac{2}{\gamma}<\alpha < \frac{\gamma+2}{\gamma-2}$, we first show that the energy $\mathcal H$ is non-decreasing and then: \begin{equation} \forall t\geqslant t_0,~\mathcal H(t) \geqslant \ell,\label{infH} \end{equation} where $\ell=\mathcal H(t_0)>0$. Indeed, since $\gamma>2$ and $\alpha <\frac{\gamma+2}{\gamma-2}$, a straightforward computation shows that $\lambda^2 -|\xi|>0$, so that: \begin{eqnarray*} \mathcal H(t_0) &=& t_0^{p+2}|x(t_0)|^\gamma + \frac{t_0^p}{2}\left(|\lambda x(t_0)+t_0\dot x(t_0)|^2 -|\xi||x(t_0)|^2\right)\\ &=& t_0^{p+2}|x(t_0)|^\gamma + \frac{t_0^p}{2}\left( \lambda^2 -|\xi|\right)|x(t_0)|^2 >0, \end{eqnarray*} without any additional assumption on the initial time $t_0>0$. Let $T>t_0$. We set $y(t)=t^\lambda x(t)$. If $y(t)$ is bounded, as it is in Proposition~\ref{prop_optimal}, by the exact same arguments we prove that there exists $t_1>T$ such that $b(t_1) \leq \frac{\ell}{4t_1^{p+1}}$. Moreover, since $\xi<0$, we deduce from \eqref{infH} that: $$t_1^{p+1}(a(t_1)+b(t_1)) \geqslant \ell.$$ Hence: $$a(t_1)=t_1(F(x(t_1))-F^*) \geqslant \frac{\ell}{4t_1^{p+1}},$$ i.e.: $F(x(t_1))-F^* \geqslant \frac{\ell}{4t_1^{p+2}} = \frac{\ell}{4t_1^\frac{2\alpha\gamma}{\gamma+2}}$. If $y(t)$ is not bounded, then the proof is even simpler: indeed, in that case, for any $K>0$, there exists $t_1\geqslant T$ such that $|y(t_1)|\geq K$, hence: $$F(x(t_1))-F^*=|x(t_1)|^\gamma \geqslant \frac{K^\gamma}{t_1^{\lambda\gamma}}=\frac{K^\gamma}{t_1^{\frac{2\alpha\gamma}{\gamma+2}}},$$ which concludes the proof.
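As a complement, here is a minimal sketch (ours, not the authors' code) of the discrete experiment described at the end of Section~\ref{sec_contrib}: iterate the Nesterov scheme for $F(x)=|x|^{\gamma}$, $\gamma\geqslant 2$, and monitor $z_n=(F(x_n)-F^*)(n\sqrt{h})^{rate(\alpha,\gamma)}$; the step size and iteration count below are illustrative placeholders (the experiments above use $h=10^{-7}$).
\begin{verbatim}
# Sketch (ours) of the numerical experiment of the contributions section:
# Nesterov iterations for F(x) = |x|^gamma and the rescaled values z_n.
import numpy as np

def rate(alpha, gamma):
    # Expected exponent, as collected in the definition of rate(alpha, gamma).
    if gamma <= 2 or alpha <= 1 + 2 / gamma:
        return 2 * alpha * gamma / (gamma + 2)
    if alpha >= (gamma + 2) / (gamma - 2):
        return 2 * gamma / (gamma - 2)
    return 2 * alpha * gamma / (gamma + 2)

def z_sequence(alpha, gamma, h=1e-4, n_iter=200000, x0=0.5):
    # x_{n+1} = y_n - h grad F(y_n),  y_n = x_n + n/(n+alpha) (x_n - x_{n-1})
    grad = lambda y: gamma * np.sign(y) * abs(y) ** (gamma - 1.0)
    x_prev, x = x0, x0                       # zero initial velocity
    r, z = rate(alpha, gamma), np.empty(n_iter)
    for n in range(1, n_iter + 1):
        y = x + n / (n + alpha) * (x - x_prev)
        x_prev, x = x, y - h * grad(y)
        z[n - 1] = abs(x) ** gamma * (n * np.sqrt(h)) ** r
    return z

z = z_sequence(alpha=6.0, gamma=3.0)
print(z[-1000:].min(), z[-1000:].max())  # bounded, not tending to 0, if optimal
\end{verbatim}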
\section*{Acknowledgement} This study has been carried out with financial support from the French state, managed by the French National Research Agency (ANR GOTMI) (ANR-16-VCE33-0010-01) and partially supported by ANR-11-LABX-0040-CIMI within the program ANR-11-IDEX-0002-02. J.-F. Aujol is a member of Institut Universitaire de France.
\section*{TOC Graphic} \begin{figure}% \centering \includegraphics[width=5.1cm]{graph_abs.pdf} \end{figure} \textbf{Keywords}: Thiophene, P3HT, fullerene, blend, C$_{60}$, Classical Molecular Dynamics, UFF \newpage Organic photovoltaics is a rich and growing area of research, and among the most promising device architectures we find those in which the active layer contains thiophene polymers and fullerenes, more specifically poly(3-hexyl-thiophene) P3HT and [6,6]-Phenyl-C61-butyric acid methyl ester PCBM.\cite{brab+10am,verp+10afm,dang+11am,chen+11nl,holm+13semsc,chin+16nsc,busb+17acsami,rich+17cr} This active layer is known as a bulk heterojunction (BHJ), in the sense that we find regions (domains) where P3HT dominates the mass proportion, and others where PCBM dominates. The frontier between domains can be well defined (bicontinuous heterojunction) or not, and likewise the prevalence inside a domain can be absolute (bicontinuous) or not, in the sense that the material, in the domains or at the frontier, is formed by a blend of the two components. There is still ongoing discussion on these topics; however, it is agreed that the photo-conversion efficiency is related to charge transfer, exciton migration and recombination, inside the domains and at the frontier between domains. A plethora of experimental and theoretical works concerning these issues is available, and it emerges that one of the most important features is the blend morphology.\cite{schi+05cm,guo+10jacs} PCBM is a reasonably simple molecule, consisting of a fullerene C$_{60}$ cage with a phenyl-butyric acid methyl ester side chain; nevertheless, due to the lower cost of C$_{60}$ production, blends consisting of P3HT and C$_{60}$ (or a mixture of the two molecules) are also interesting\cite{rait+07semsc,li+09oe,tada-onod12semsc,comi+15mcp,mori+17jpst}. Since the alkyl side chains can strongly interact, the morphology of the final complete blend, polymer and molecule, can be more or less complex, more homogeneous or phase-separated, depending on the average length of the P3HT main chain. The molecules can segregate and form almost-crystalline molecular domains, or migrate into the polymer region, depending on whether the vicinal polymer region is more or less crystalline. P3HT belongs to the second generation of electronic polymers\cite{brab+10am}, and is one of the most studied to this day\cite{busb+17acsami}. The hexyl functionalization enhances solubility and, when regioregular, may contribute\cite{nort07prb,guo+10jacs} to the formation of stacked crystalline domains depending on the growth method; it is however usually found that in devices the polymer is present in a complex morphology, with an important proportion of amorphous domains. Since P3HT is a semiconducting polymer, its optical and transport properties are defined by the character and distribution of electron and hole states in the bulk, which in turn are defined by the \textit{local} morphology. The exciton splitting and charge transfer properties in the blend are also defined by the local morphology, and the investigation of such transitions, in theory or experiment, must take that into account.
In order to study such properties theoretically, we must have access to a reliable description of the structural phases, which is only accessible through Classical Molecular Dynamics (CMD), since we are no longer describing periodic crystals with a number of atoms per cell that could be studied through atomistic first-principles calculations \cite{buss+02apl,ruin+03sm,ferr+04prb,humm+05pssb}. CMD calculations can be carried out through simplified formulations, usually referred to as coarse-grained molecular dynamics, which allow for the treatment of systems with a very large number of atoms; this procedure has recently been applied to P3HT and blends\cite{negi+16mts,root+16mm,ales+17jacs,yoo+17cms}, and contributes to the understanding of morphology in these complex organic compounds. On the other hand, when we need to look more deeply into the local structural characteristics of these compounds, we must turn to atomistic simulations which, even in the classical formulation\cite{luko+17ms}, restrict the number of included atoms to some tens of thousands. We present here an atomistic CMD simulation of condensed P3HT, and of P3HT blends not with PCBM but with simple fullerene C$_{60}$; the models are built with not-so-short oligomers for the thiophene unit (30 3HT mers) and with a P3HT:C$_{60}$ blend mass proportion very close to 1:1, as often found\cite{dang+11am,holm+13semsc,busb+17acsami} in the experimental literature for P3HT:PCBM blends. We apply for this a well-tuned force field, which we describe explicitly, based on the Universal Force Field of Rapp\'{e} and collaborators.\cite{rapp+92jacs} We start from two different initial models, with random or phase-separated P3HT/C$_{60}$ spatial distributions. We find that after reaching equilibrium conditions, the P3HT chains present mostly non-linear structure, and concomitantly the polymer domains present a disordered amorphous character. We see weak intermixing of C$_{60}$ and P3HT for the phase-separated starting point, and strong clustering of C$_{60}$ molecules for the random one. Our main conclusion is that for blends at room temperature, even with these non-crystalline polymer domains, phase separation dominates. \section{Methodology} One of the most used and reliable force fields for CMD of polymers is the Universal Force Field UFF of Rapp\'{e} and collaborators\cite{rapp+92jacs}, which however presents unusual problems in the case of thiophenes. Specifically, it was found\cite{alve-cald09sm} that the standard charge assignment scheme of UFF (charge equilibration) for sulfur S atoms results in a negative partial charge $Q_S<0$, as frequently happens in other molecules, which is not correct for thiophene (T), since the special [$=HC-S-CH=$] sequence in the T rings results in a positive $Q_S>0$ effective charge. Furthermore, the dihedral ring-to-ring torsion angle in oligothiophenes (OTs) was also found not to be properly described, which led us to adapt the related parameters. Here we describe only briefly the methodology for the reparametrization of the force field. In short, the atom-atom interactions in the UFF can be broadly separated into bonded and non-bonded functions. For bonded functions we have 2-, 3- and 4-body potentials: bond length (2), direct angle (3), dihedral and inversion angles (4). For non-bonded functions, force fields normally adopt pair potentials including electrostatic Coulomb long-range interactions, van der Waals attraction and Pauli repulsion; these last two are grouped in the Lennard-Jones (LJ) format in the UFF.
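As an illustration of this grouping, here is a minimal sketch (ours, with placeholder parameter values) of a 12-6 Lennard-Jones pair term written in the UFF format, where we read $D_{IJ}$ as the pair well depth and $R_{IJ}$ as the pair equilibrium distance introduced in the next paragraph:
\begin{verbatim}
# Sketch (ours): 12-6 Lennard-Jones pair energy in the UFF format, with a
# minimum of depth -D_IJ at r = R_IJ; parameter values are placeholders.
import numpy as np

def lj_uff(r, D_IJ, R_IJ):
    s = (R_IJ / r) ** 6
    return D_IJ * (s * s - 2.0 * s)

r = np.linspace(3.0, 8.0, 6)             # distances in angstrom
print(lj_uff(r, D_IJ=0.1, R_IJ=3.5))     # well depth 0.1 (e.g. kcal/mol)
\end{verbatim}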
Adequate LJ parameters are crucial for the reliable description of the condensate morphology. Starting with the non-bonded potentials, as mentioned above we need reliable values for the atomic charges. We adopt fixed atomic charges, and define specific atomic types also for the C- and H-atoms in the T-ring and side chains. To arrive at the atomic type charges ATC (more than one ATC for the same atom type of the UFF), we perform calculations with Density Functional Theory (DFT), using the FHI-aims code\cite{blum+09cpc} with the PBE exchange-correlation functional\cite{perd+96prl}, known to provide reliable structural properties for organic compounds. The partial charges for all atoms in our chosen prototype-molecule ensemble were calculated through the Hirshfeld method\cite{hirs77tca} and averaged in order to arrive at the series of ATCs. For the Lennard-Jones potentials we calculate the dispersion coefficients $D_{IJ}$ and equilibrium distances $R_{IJ}$ for atom pairs of the ensemble using the Tkatchenko-Scheffler formalism\cite{tkat-sche09prl}. Our final values for the ATCs, $D$s and $R$s are tabulated in the Supplementary Information. We check our ATC results against experimental data\cite{bak+61jms,harr+53jcs,frin+77ahc,spoe-cols98cp} for the electric dipole of the single thiophene molecule T1, which range from 0.46 to 0.60D; our PBE result adopting the TS approach is 0.50D, in excellent agreement with the experimental data, and from our Nanomol-FF ATCs we arrive at 0.52D. With respect to the complete reparametrization of the dihedral T-T angle and all non-bonded parameters for the HT molecules and polymers, we selected 13 different\cite{nist} thiophene-based molecular crystals, from clean T2 to longer oligomer chains, including also linked oligomers, alkyl-terminated oligomers and polymers (details in the Supplementary Information); for this ensemble we performed geometry optimizations through Molecular Mechanics with the conjugate gradient procedure and strict convergence criteria. The maximal deviations in lattice constants and angles with respect to experimental data are below 10$\%$, again indicating that our parameters allow for a reliable description of condensed systems. Moving to the bonded, dihedral inter-ring torsion angle parametrization, we base our procedure on experimental data\cite{bucc+jacs74} for bi-thiophene T2, which state that at room temperature, in a liquid crystalline solvent, 70$\%$ of the molecules present a quasi-antiparallel arrangement with a dihedral angle of 140$^{\circ}$, while the remaining 30$\%$ equilibrate at 40$^{\circ}$, quasi-parallel. This is in very good agreement with many-body theoretical calculations, which allows us to adopt the full theoretical potential curve\cite{alve06phd} for the parametrization. To simulate longer polymers we perform calculations for the T4 oligomer, and adopt the displaced-dihedral form for the 4-body potential. The equation and related parameters are shown in the Supplementary Information. The other bonded 2-body parameters from the original UFF are kept. All CMD and Molecular Mechanics (MM, temperature 0K) calculations were performed with the Cerius$^2$ package, Accelrys Inc.\cite{cerius2}. We adopt periodic boundary conditions in order to simulate condensates, with a fixed number of particles N. For CMD we use fixed or variable volume V and pressure P, and fixed temperature T=300K. The time step for the sequential resolution of the MD equations is 10$^{-3}$ps, using the Verlet integrator\cite{verl67pr}.
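For reference, the classical Verlet update has the form $x_{n+1}=2x_n-x_{n-1}+\frac{\Delta t^2}{m}F(x_n)$; the following toy sketch (ours, not the Cerius$^2$ implementation) illustrates it on a one-dimensional harmonic oscillator:
\begin{verbatim}
# Sketch (ours) of the classical Verlet update x_{n+1} = 2 x_n - x_{n-1}
# + (dt^2/m) F(x_n), tested on a harmonic oscillator (exact solution cos t).
import numpy as np

def verlet_step(x, x_prev, force, dt, mass):
    return 2.0 * x - x_prev + (dt ** 2 / mass) * force(x)

force = lambda x: -x                 # unit mass, unit spring constant
dt = 1e-3                            # same order as the 10^-3 ps step above
x_prev, x = np.cos(-dt), 1.0         # positions at t = -dt and t = 0
for _ in range(1000):
    x, x_prev = verlet_step(x, x_prev, force, dt, 1.0), x
print(x, np.cos(1000 * dt))          # both close to cos(1.0) ~ 0.5403
\end{verbatim}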
Depending on the sequence of steps, we adopt the microcanonical NVE, canonical NVT or isothermal-isobaric NPT ensembles\cite{alle-tild05book}. For NVT we use the Berendsen scheme\cite{bere+84jpc}, and for NPT the Parrinello-Rahman scheme\cite{parr-rahm81jap}. The relaxation time is 0.1\,ps. For MM we adopt different energy-minimization protocols: conjugate gradient for the crystalline structures used in the definition of the Nanomol-FF parameters, and the Smart Minimizer present in the Cerius$^2$ package for the condensates. This last protocol incorporates sequential steps of steepest-descent, quasi-Newton and truncated-Newton minimization\cite{rapp-case97book}. \begin{figure}[!htb] \begin{center} \begin{tabular}{cc} \hline\\ \vspace{0.0cm}\\ \multicolumn{2}{c}{\includegraphics[width=0.7\textwidth]{laminar.pdf}}\\ \multicolumn{2}{c}{\includegraphics[width=0.7\textwidth]{mod1A.pdf}}\\ \hline\\ \vspace{0.0cm}\\ \includegraphics[width=0.35\textwidth]{mod2.pdf} & \includegraphics[width=0.4\textwidth]{mod3.pdf}\\ \hline \end{tabular} \caption{Condensed amorphous models obtained in this work; we show the initial random distribution, and the final equilibrated distribution in unit-by-unit and full-cell views. Top, P3HT laminar deposition, side and top views; Center, P3HT isotropic distribution; Bottom left, segregated P3HT:C$_{60}$ blend (different viewpoints for the unit-by-unit and full cell); Bottom right, isotropic P3HT:C$_{60}$ blend. } \label{condensation} \end{center} \end{figure} We simulate condensates of regioregular P3HT, pure and in blends with fullerene C$_{60}$. We use not-so-short oligomers with 30 3HT units, O3HT$_{30}$, of molecular weight $\sim$5\,kg$\cdot$mol$^{-1}$, feasible for our atomistic molecular simulation and small but comparable to regular experimentally grown samples\cite{dang+11am,holm+13semsc,spol+15oe}. The simulation is performed for four different initial configurations, namely pure P3HT in laminar and isotropic distributions and P3HT:C$_{60}$ in segregated and isotropic blends. In the case of pure P3HT, the first distribution simulates a patterned deposition and the second a spin-cast deposition. For pure P3HT we include 40 chains of O3HT$_{30}$ for the laminar distribution (30080 atoms), and 50 chains for the isotropic one (37600 atoms); for the blends we include 40 chains of O3HT$_{30}$ and 250 C$_{60}$ molecules (45080 atoms). The initial and final atomic distributions are illustrated in Fig.~\ref{condensation}. The initial spatial distribution of units is done in a very disperse molecular packing, with large inter-molecular distances, via the Packmol package.\cite{packmol} A smooth, cautious procedure is then adopted in order to allow for a realistic arrangement of the molecules and oligomers, without unnatural high-temperature or high-pressure effects. We first reduce the cell parameters slowly, by hand, and for each fixed set of cell parameters perform CMD at 300\,K; this slow cell-reduction scheme continues until reaching the minimum of the van der Waals energy for the cell. With these optimal cell parameters we start the free NPT CMD relaxation until convergence is attained. \section{Results and Discussion} The formation of heterojunctions in P3HT:PCBM films has been extensively investigated from the experimental side, and more than one detail of the growth method has been highlighted as a factor to be considered\cite{schi+05cm,agos+11afm,dang+11am,chen+11nl,holm+13semsc,spol+15oe,chin+16nsc,rich+17cr}.
When the complete film is grown by deposition and then annealing of an in-solution mixture, as normally done, the relevant factors are, at least: for P3HT, the molecular weight or average molecular mass (crystalline regions are expected to form\cite{schi+05cm} if it reaches $\sim$10\,kg$\cdot$mol$^{-1}$) and the degree of regioregularity; for the blend, the mass proportion, the solvation of each compound in the deposition solution (choice of solvent), the annealing time and temperature, and finally even the architecture of the monitored sample\cite{busb+17acsami}. A general understanding is that PCBM will diffuse (with thermal annealing) only into disordered regions of P3HT, will not migrate into already crystallized regions of the polymer, and will more probably segregate, forming the junction structure. When the junction is built from bilayer deposition of the two compounds the process is different,\cite{trea+11aem} since the original P3HT layer already has a good fraction of crystalline regions; it is found that with temperature there is fast diffusion of PCBM, however only into the disordered regions of P3HT. The adoption of C$_{60}$ for the blend is highly desirable, and experimental investigations of the P3HT:C$_{60}$ blend have been carried out. The blend properties have also been shown to depend on the growth methodology\cite{rait+07semsc,li+09oe,tada-onod12semsc,comi+15mcp,mori+17jpst} and, as expected, the behavior of C$_{60}$ is slightly different from that of PCBM; the main trends are however recovered: dependence on the solvent, best efficiency for 1:1 mass proportion, and so forth. P3HT, C$_{60}$ or PCBM are soluble in polar solvents and usually, for the blend, weighed amounts of polymer and molecule are mixed to arrive at a desired mass proportion, which can vary\cite{maye+09afm,dang+11am,geli+14sci} from 1:4 to 3:2 (P3HT:molecule), the latter possibly being the maximum proportion allowing for efficient exciton charge-transfer splitting. In this work, we use the Nanomol-FF to investigate phase segregation in a P3HT/C$_{60}$ blend. We first simulate the pure systems, and in sequence the blends. For the blend simulations we adopt a mass proportion of 11:10, very close to the suggested\cite{rait+07semsc,dang+11am,holm+13semsc} proportion 1:1. \subsection{P3HT properties in films and blends} We first analyse the P3HT single-chain morphological distribution for the different condensates, using the ring-to-ring angular distribution. For each T-ring, characteristic vector axes can be defined: $\vec{c}$, linking the two C-atoms bonded to the S-atom, which defines the bonding or chain direction; a normal vector $\vec{n}$ perpendicular to the T-plane; and a basal ``dipole'' $\vec{d}$-vector, $\vec{n}=\vec{c}\times\vec{d}$, in-plane and pointing from the 2 other C-atoms to the S-atom. In this way, for each single chain we can follow the orientation of one ring to the next, T$_{i+1}$-T$_{i}$, through the angles $\varphi_{i+1,i}=\arccos({\vec{n}_{i+1}}\cdot{\vec{n}_i})$; $\theta_{i+1,i}=\arccos[(\vec{n}_{i+1}\cdot\vec{c}_i) / (\sqrt{(\vec{n}_{i+1}\cdot\vec{d}_i)^2+(\vec{n}_{i+1}\cdot\vec{n}_i)^2})]$. Here $\varphi$ describes mostly the dihedral torsion angle between neighboring units, while $\theta$ describes roughly the concavity at that segment. We show in Fig.~\ref{map} the results for the four condensates.
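To make the angle definitions concrete, a minimal sketch of how $(\varphi, \theta)$ can be evaluated for one pair of consecutive rings, assuming the unit vectors $\vec{n}$, $\vec{c}$ and $\vec{d}$ have already been extracted from the atomic coordinates, is:
\begin{verbatim}
import numpy as np

def ring_angles(n_next, n_i, c_i, d_i):
    """Torsion-like angle phi and concavity-like angle theta (degrees)
    between consecutive thiophene rings, following the definitions in
    the text; all inputs are unit vectors."""
    phi = np.degrees(np.arccos(np.clip(np.dot(n_next, n_i), -1.0, 1.0)))
    num = np.dot(n_next, c_i)
    den = np.sqrt(np.dot(n_next, d_i)**2 + np.dot(n_next, n_i)**2)
    # clip guards the arccos argument against round-off
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return phi, theta
\end{verbatim}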
As a first output, we see that for the laminar-deposition condensate the final geometry of the individual chains mostly displays the alternate $\varphi > 90^{\circ}$ ring-to-ring pattern, albeit with a non-zero signal for $\varphi<90^\circ$, and still some remaining traces of linear (straight ideal $\theta=90^{\circ}$) ring-to-ring orientation. For the pure P3HT isotropic condensate we see a ``butterfly'' pattern characteristic of torsion-angle maxima around 130$^{\circ}$ (quasi-antiparallel), with a smaller but definite proportion around 50$^{\circ}$ (quasi-parallel); the proportion of linear ring-to-ring orientation is almost null. For the blend simulations, for which the polymer region is isotropically initialized, we also find the butterfly pattern, now with roughly equal percentages of parallel and anti-parallel ring-to-ring patterns. At the same time, we see that the vast majority of chains in these three phases show concavity, that is, linearity is not a relevant characteristic, which is a clear indication of the disordered, amorphous character of the domains. We should point out that this is expected for the molecular weight we simulate.\cite{schi+05cm} \begin{figure}[!htb] \begin{center} \includegraphics[width=\textwidth]{graphic.pdf} \caption{(\textit{color online}) Map of torsion and concavity angles $(\varphi, \theta)$ for the condensates: laminar P3HT (left); isotropic P3HT (middle left); segregated P3HT:C$_{60}$ (middle right); isotropic P3HT:C$_{60}$ (right). Extreme values $\theta=0^{\circ},\ 180^{\circ}$ would correspond to extreme kinks, not realistic; moderate angles correspond to chain concavity; $\varphi<90^{\circ}$ non-alternate, $\varphi>90^{\circ}$ alternate ring-to-ring torsion angle. } \label{map} \end{center} \end{figure} Summarizing our results for the polymer configurations in the different simulation schemes, we see that in the laminar structure the individual chains mostly maintain linearity and ring-to-ring alternation. In the other three cases, for pure or blended P3HT, linearity is no longer a signature, and we find a strong presence of chiral mer-to-mer angles (butterfly pattern). More significant is that for the two blend simulations, initially segregated or isotropic, the final individual polymer structures show strong similarity. \subsection{C$_{60}$ and P3HT-C$_{60}$ properties in the blends} We now analyse the difference between the two blend models focusing on the C$_{60}$ distribution; to do that we use the radial distribution function (RDF) of the fullerene molecules, shown in Fig.~\ref{RDF-C-C} for the two blend simulations. In the figure we also show, as (blue) vertical lines, the results obtained through the Nanomol-FF at 0\,K (geometry minimization) for the ideal crystal, starting with the ideal \textit{fcc} lattice; we find the structure to be very close to \textit{fcc}, with one lattice constant slightly larger than the other two, so the second-neighbor distances come out very close but not equal, and the corresponding lines cannot be resolved at the resolution used in the figure. Focusing on the blend results, the RDFs are built from 100 snapshots of each CMD simulation, and we see that the first-neighbor distance is a little larger than in the 0\,K structure, as expected due to temperature effects; the value is in very good accord with available experimental data\cite{heba93arm} for the pure fullerene compound at normal temperatures.
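As an aside on methodology, a minimal sketch of how such a center-to-center RDF can be accumulated over the snapshots, assuming an orthorhombic periodic cell and precomputed molecular centers, is:
\begin{verbatim}
import numpy as np

def rdf_from_snapshots(snapshots, box, r_max=20.0, nbins=200):
    """Center-to-center RDF accumulated over CMD snapshots.
    snapshots: list of (N, 3) arrays of molecular centers;
    box: orthorhombic cell lengths, shape (3,)."""
    edges = np.linspace(0.0, r_max, nbins + 1)
    hist = np.zeros(nbins)
    for pos in snapshots:
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)      # minimum-image convention
        r = np.sqrt((d**2).sum(-1))[np.triu_indices(len(pos), k=1)]
        hist += np.histogram(r, bins=edges)[0]
    # normalize by the ideal-gas pair count in each spherical shell
    n = snapshots[0].shape[0]
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * n * (n / box.prod()) * shell * len(snapshots)
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
\end{verbatim}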
For the segregated model we find that the RDF peaks are clearly defined up to the fourth-neighbor distances, and especially up to the second neighbor (with a spread), which indicates the prevalence of locally ordered C$_{60}$ domains. It should be noticed that also for the initially isotropic blend we see (less defined) RDF peaks up to the third neighbor, which points to a natural segregation of the C$_{60}$ molecules. The main conclusion here is that we see a tendency to the formation of separate polymer and fullerene domains in both simulations: even with the relatively low polymer molecular weight we adopt, which as we find does not favor the formation of crystalline P3HT domains, our results indicate segregation of the C$_{60}$ molecules. \begin{figure}[!htb] \begin{center} \includegraphics[width=\textwidth]{rdf-CC-fig.pdf} \caption{(\textit{color online}) Radial distribution function (RDF) for fullerene-fullerene molecules, 100 snapshots, in the segregated (left) and isotropic (right) P3HT:C$_{60}$ blends. Arbitrary units, same normalization ratio for the two systems. The vertical blue lines correspond to first- to third-neighbor RDF signals for ideal \textit{fcc} crystalline C$_{60}$, see text, obtained with the same Nanomol-FF at 0\,K. } \label{RDF-C-C} \end{center} \end{figure} We finally analyse the RDF, shown in Fig.~\ref{RDF-T-C}, for the distance from a fullerene molecule surface to the central point of a thiophene ring. For both types of blend, we see in the figure a broad peak at $\sim$4\,{\AA} measuring the distance of the closest-neighbor T-rings from a fullerene surface; the decrease at $\sim$5\,{\AA} indicates the curvature of the polymer chain when adjacent to the molecule, that is, a T-chain will not be linear when adjacent to a C$_{60}$ molecule. \begin{figure}[!htb] \begin{center} \includegraphics[width=\textwidth]{rdf-CT-fig.pdf} \caption{Radial distribution function (RDF) for fullerene molecule-thiophene unit, 100 snapshots, in the segregated (left) and isotropic (right) P3HT:C$_{60}$ blends. Arbitrary units, same normalization ratio for the two systems. The distance is measured from the center of a C$_{60}$ molecule to the center of the thiophene rings, after subtracting the nominal fullerene molecular radius.} \label{RDF-T-C} \end{center} \end{figure} We now focus on the occurrence of thiophene-fullerene T-C$_{60}$ compared to C$_{60}$-C$_{60}$ first-neighbor pairs, from Figs.~\ref{RDF-C-C} and \ref{RDF-T-C}. We find a $\sim$60$\%$ higher occurrence of T-C$_{60}$ pairs in the initially isotropic compared to the initially segregated blend, and the opposite proportion for C$_{60}$-C$_{60}$ pairs: these proportions are not so high considering the difference in the initial structures, and again point to the natural segregation of C$_{60}$ molecules. Summarizing our results, overall we see that for the molecular weight we adopt, and for the spin-cast type of deposition, there is no formation of crystalline P3HT domains. Even so, we see a tendency to segregation of the fullerene molecules, indicating the formation of nano-domains typical of bulk heterojunctions. \section{Conclusions} As said in the Introduction, one of the most important factors affecting efficiency in bulk-heterojunction polymer/molecule photovoltaic cells is the probability of exciton migration against recombination, and the ease of charge transfer. This is governed by the morphology of the junctions, which stems from different factors including the blend mass proportion and the average molecular weight of the polymer.
We simulated, through our finely tuned force-field based on the UFF\cite{rapp+92jacs} and a cautious CMD procedure, different blend mixings for P3HT/C$_{60}$. We find that, even at the relatively low molecular weight of $\sim$5\,kg$\cdot$mol$^{-1}$, for which there is no polymer crystallization, fullerene segregation is dominant. This indicates that the use of simple fullerene for the blend should bring high performance to photovoltaic cells. \begin{acknowledgement} The authors are grateful to M. Alves-Santos for useful discussions. We acknowledge support from INEO, FAPESP and CNPq (Brazil). \end{acknowledgement}
\section{Introduction} Artificial Neural Networks (ANNs) have undergone a revolution from their initial stages of research \cite{C57, C61, C58, C59, C62, C60} to the highly extensible and customisable structures they have now \cite{C63, C64, C65, C66, C11}. The Deep Learning community has been successful in extending its framework to solve a wide variety of problems, with applications ranging from visual classification tasks \cite{C1, C2, C3, C4} to natural language processing \cite{C5, C6, C7, C8, C9, C10} and various other miscellaneous tasks \cite{C67, C11, C12, C13}. The field has experienced a resurgence since \cite{C1}, and from then on researchers have continued to produce meaningful modifications to each part of the ANN architecture to improve upon state-of-the-art methods. Such great flexibility in designing each element of the entire structure separately is one of the main reasons Deep Learning has been so successful. That said, it is also important to mention the years of research devoted to improving each and every component of neural networks, significant or not: for example, changes at the data preprocessing stage, i.e. prior to feeding the data into the network, such as normalization \cite{C14}, data augmentation \cite{C1, C15}, or the use of large-scale training data \cite{C16}. Further, important performance-improving techniques that engineer the internal structure of the network include deeper architectures \cite{C16, C15}, changed stride sizes \cite{C17}, different pooling types \cite{C18}, regularization techniques \cite{C20, C21, C22}, activation functions \cite{C19, C23, C24, C25}, and improved network optimization \cite{C26, C27}. However, even after such breakthroughs driven by the desire for modular performance, one element has been left essentially untouched--the loss function. For almost 30 years, since the works \cite{C68, C69, C70} proposed cross-entropy as a loss function, it has universally remained the default choice for most classification and prediction tasks involving deep learning. Since then, little work has been done on improving the prevalent loss functions compared to other parts of the ANN architecture, and little variety has been developed in the choice of loss functions. Yet the loss function is one of the most important characteristics in the design of ANNs \cite{C28}. Moreover, swapping out the loss function imposes only nominal intervention in the designed model, compared to other methods that often demand an exhaustive changeover of the entire network design. Still, research on the improvement of loss functions remains scarce relative to that on the other components of the network. During the earlier stages of research, the Mean Squared Error (MSE) loss had been the loss function of choice \cite{C29, C30, C31}. It was originally used for regression problems before being derived for neural networks from the maximum likelihood principle by assuming normally distributed target data \cite{C32}. The problem observed with this error function lies in that very assumption: real-world data are not always normally distributed. Also, when designing a neural network it is often desirable that its cost-function gradients be of considerable magnitude so that it can learn properly.
Unfortunately, networks start producing very small gradients when MSE is used as a cost function, and this often causes the learning to saturate quickly. It is also sometimes observed that the MSE function fails to train a model even when it produces highly incorrect outputs \cite{C28}. This led to the move to cross-entropy as the choice of loss function. All of the properties noted for MSE apply to this function as well; for instance, it too can be derived using the principle of maximum likelihood. There are several other reasons that motivate the use of the cross-entropy function. One well-established reason is that, when coupled with the commonly used softmax output layer, it undoes those exponentiated output units. Further, the properties of the cross-entropy function can be thought of as closely related to those of the Kullback-Leibler (KL) divergence, which describes the difference between two probability distributions; minimizing the cross-entropy function thus results in minimizing the KL divergence. Due to these and many other reasons, in recent years cross-entropy has been the persistent choice of loss function. Some of the literature has mentioned using other functions, such as the Hinge loss, when it comes to performing classification tasks \cite{C33, C34, C35, C36}. In addition to these popularly used loss functions, a few other more complex loss functions have been proposed, such as the triplet loss \cite{C37} and the contrastive loss \cite{C38}. The triplet loss requires 3 input samples to be injected simultaneously, whereas the contrastive loss inputs two samples simultaneously and requires the features of the same class to be as similar as possible. These loss functions, even with such pervasive intuition behind their usage, seem to miss one issue: they are not implicitly flexible about the amount of information to be back-propagated, which should depend both on the data on which the model is trained and on the design of the network model. To this end, we propose a new adaptable loss function built upon the core idea of flexibility. Subsequently, through analysis and experimental validation, we justify that the function not only possesses the fundamental properties seen in other cost functions but also presents some unique and advantageous properties that give us interesting directions to work upon in the future. The key intuition behind this function is that the neural network has to be flexible in learning, and therefore in estimating its error as well, depending upon the characteristics exhibited by the data and the learning environment. Such adaptability to the characteristics of the data can have several upsides. Because the calculated loss is scalable, its difficulty can be adjusted, which can force the network to learn only the most noteworthy and discriminative features, those contributing the most to the corresponding class. This can make the neural network more immune to overfitting, give it a higher convergence rate, and therefore possibly also require less data to train. One may find upon investigation that the field of Deep Learning has, every now and then, drawn inspiration from the learnings of Neuroscience \cite{C40}. Our intuition is likewise backed by the learnings of neuro-psychology, a branch of neuroscience. We understand that our brain assesses our mistakes differently under different circumstances.
One trivial instance of this is the difference in manually diagnosing and predicting images from the MNIST and CIFAR datasets. On interpreting an image containing a handwritten 6 as being an 8, the mistake can easily be excused or pardoned as clumsy handwriting. On the other hand, the error of incorrectly classifying a bird as an airplane is taken much more seriously, almost to the extent that such a mistake never happens again. It can be argued that the difference in learning between the two cases is implicit in the different pixel arrangements of the MNIST and CIFAR datasets. However, we suggest that in the manual case the difference in errors, and the resultant learning, owes more to the difference between the respective categories of the datasets. Similarly, we sometimes fail to recognize the same image correctly when under different mindsets or moods. This suggests that our decisions and perceptions integrate various factors and are therefore flexible and adaptable under different conditions. This inspires us to make the loss function, which is responsible for calculating the errors and driving decisions in a deep learning model, flexible and adaptable in its task as well. As a result, the amount of information that flows backward as the correction of a mistake becomes flexible too. Therefore, in this work we propose a novel flexible loss function for neural networks. Using this function, we show how the flexibility of the loss curve can be adjusted to improve performance: reducing fluctuations in learning, attaining higher convergence rates, and so on. This can altogether help in achieving state-of-the-art performance in a more plausible manner. Commonly, when starting out with a novel, nature-inspired idea, one needs to mention the current state-of-the-art methodologies and justify how the proposed idea, even while differing from them, fits well into the existing scenario; then the intuition behind the idea needs to be described, and finally the idea is to be proved through experimental validation \cite{C39}. We follow the same chronology in this paper. The paper is organized as follows: Section \ref{second} reviews key concepts behind loss functions and the work done to add adaptability to them. In Section \ref{third}, we propose the loss function and describe its various properties. In Section \ref{four} the technicalities of the experimental validation are specified, and subsequently in Section \ref{fifth} a discussion is presented of the results achieved through the experimental validation performed. Lastly, in Section \ref{six}, we make concluding remarks. \section{Background} \label{second} \subsection{Preliminaries \& Notation} In general, a problem of $c$-class classification is considered. Here each input from the feature space $ \mathbb{X} \subset \mathbb{R}^d$ is classified as belonging to one element of the label space $ \mathbb{Y}=\{ 1, \dots, c \}$, and the classifier function $f(x;\theta)$ is used to map the input feature space to the classes, $f : \mathbb{X} \rightarrow \mathbb{Y}$. The function $f$ predicts the posterior probabilities for each class, i.e. for each element of the set $\mathbb{Y}$.
The goal, according to the Maximum Likelihood (ML) principle, is to estimate the optimal set of parameters $\theta$ for which the expectation of the classifier $f(X;\theta)$ generating the true probability distribution over $\mathbb{Y}$ is maximal, which means \begin{equation}\theta^*=\underset{\theta}{arg\,max}\,\mathbb{E}_{x\sim y}f(X;\theta).\end{equation} Maximizing the expectation that the classifier function $f(X;\theta)$'s prediction is equivalent to the true label $y$ amounts to minimizing the dissimilarity between the two distributions. This quantification of the difference between the model output $f(x;\theta)$ and the true label is done using the loss function \begin{equation}\mathcal{L}(f(X;\theta), y),\end{equation} whose total over the classes reads \begin{equation}\sum_{j=1}^{c}\mathcal{L}(f(x;\theta), j).\end{equation} The end goal, however, is to find an optimal function $f^*$ such that the dissimilarity is at its minimum: \begin{equation} f^*=\underset{f}{arg\,min}\,\mathbb{E}_{x\sim y}\mathcal{L}(f(X;\theta), y) \label{eq4} \end{equation} There are different loss functions proposed, and each of them has its own properties and quantifies the difference in its own way. The good design of the loss function is thus an important issue for appropriate learning by the neural network. \subsection{Related Work} The core agenda of the proposed work is to introduce adaptability, or flexibility, into the quantification of loss. Hence, in this subsection we review the efforts that have been made to make the calculation of loss by cost functions more adaptable. The most common method is to couple the loss given by the model with some regularization method, such as a parameter norm penalty, that can act as a constraint. Such a constraint can penalize the capacity of the model, causing it to learn the most important features first \cite{C28}. Further, when it comes to adding a constraint to the model, providing a neural network with symbolic knowledge has also proved to be a satisfactory choice \cite{C41}. Such symbolic knowledge is interpreted as a constraint by the model, which can guide the model even more fittingly, especially under weak supervision. Another way to make the model's learning more adaptable is to appropriately characterize the separability between the learned features of the input data \cite{C42}. In this method, the last fully connected layer, the softmax function and the cross-entropy are combined, and an angular-similarity functionality between the input sample and the parameters of the corresponding class is added to explicitly encourage intra-class compactness and inter-class separability of the learned features. The combination is termed the softmax loss and is developed into a large-margin softmax (L-Softmax) loss by adding the flexibility of an inter-class angular-margin constraint. Further, model performance often also suffers due to noise present in the data in the form of mislabelled examples. Recent efforts have been made to generalize loss functions from the noise-robustness viewpoint \cite{C44, C43, C45}. The noise robustness of the Mean Absolute Error (MAE) is proved theoretically by Ghosh et al.\cite{C43} They mention that, since the term $1/f(x;\theta)$ is not present in its gradient, MAE treats every sample equally, which makes it more robust to noisy labels. Nevertheless, it can be demonstrated empirically that this can lead to a longer training procedure.
Also, the stochasticity that results without implicit weighting can make the learning more difficult. On the other hand, cross-entropy has implicit weighting functionality but cannot be replaced by MAE when it comes to making the learning noise-robust, because of its asymmetric and unbounded nature. Therefore, Zhang et al. \cite{C45} propose to exploit both the implicit weighting feature of cross-entropy and the noise-robustness of MAE by using the negative of the Box-Cox transformation as a loss function. By adjusting a single parameter, it can interchangeably act as both MAE and cross-entropy. Since the cross-entropy itself is sensitive to noise, Amid et al. \cite{C46} recently introduced a parameter called ``temperature'' in both the final softmax output layer and the cross-entropy loss. By tuning both ``temperature'' parameters, they make the training all the more robust to noise. When it comes to using various loss functions interchangeably, an unconventional approach is taken by Wu et al. \cite{C47}, where a meta-learning concept is developed and explained as a teacher-student relation: a parametric model (teacher) is trained to output the loss function that is most appropriate for the machine learning model (student). One work that deserves to be particularly addressed is that of the Focal Loss \cite{C71}. That function proposes flexibility as a means of focusing more on hard, misclassified samples. We spot some differences between the objectives of the function proposed here and the focal loss. First, even though the function's curves are shown to be flexible, its limits remain unaltered; this means there is still one side of the function asymptoting to infinity, and hence still a possibility of the loss-leakage phenomenon of hitting a NaN loss while learning. Second, since the function is based on cross-entropy, its acceptance is also partially backed by the intuition that the log function undoes the exponentiated units of softmax, allowing its gradients to be back-propagated linearly; we show with our loss function that even further exponentiating the already exponentiated outputs of softmax can achieve state-of-the-art performance. Third, its importance is less justified for straightforward image classification tasks, and lastly, this function too is based on extending the design of cross-entropy to incorporate flexibility in it. Hence, all of the approaches described above have one thing in common: all of them seek performance improvement through the respective loss function itself or through its combination with some other element of the model. In a nutshell, it can be inferred that none of these functions allows the model to learn flexibly and robustly in its primary structure. Thus, unlike any of the previous approaches described, we introduce a loss function that has flexibility as its core idea. And as we shall see, this flexibility property shall also allow the function to emulate the behaviour of other loss functions. \section{Adma: An Adaptable Loss Function} \label{third} Following the notation described in the Background, the exact definition of the Adma function is: \begin{equation} \mathcal{L}(f(x;\theta), y)= y(e-e^{f(x;\theta)^a})\label{eq5} \end{equation} where the parameter $a$ is the \textit{scaling factor} that controls the flexibility while quantifying the loss.
Consequently, substituting eq. \ref{eq5} into eq. \ref{eq4}, the final optimized function $f^*$ or set of parameters $\theta^*$ can be obtained as \begin{equation} f^*=\underset{f}{arg\,min}\,y(e-e^{f(x;\theta)^a}) \end{equation} \begin{equation} \theta^*=\underset{\theta}{arg\,min}\,y(e-e^{f(x;\theta)^a}) \end{equation} From our experiments, it is observed that the model yields its best performance when $a \in [0.0, 0.5]$. Also, the function satisfies the following basic mathematical requirements with $a$ in this range, which otherwise may be violated: \begin{itemize} \item \textbf{Well-defined} range $[0, e-1]$. \item \textbf{Non-negative} in this range. \item \textbf{Differentiable} over its domain. \item \textbf{Monotonically} decreasing nature. \end{itemize} Therefore, we implicitly assume $a$ to lie in this range when analysing the properties of the function. Further, the loss function assumes $f(x;\theta)$ to be the set of posterior probabilities for each of the classes produced by the model. The function clearly has a well-defined range, which makes it more shielded against numerical instabilities such as hitting a \textit{NaN} loss or starting to diverge quickly. Since, for the same output value, increasing the scaling factor diminishes the effective exponential output, the scaling factor can indirectly be thought of as a regularization parameter that limits the capacity of the model. This may force the model to learn the most reasonable features first and prevent it from overfitting. The function's flexibility in shape and slope is clearly demonstrated in Figure \ref{fig1}: for the same range of values, the Adma loss function shows high flexibility in the quantification of the loss for the model's output. It can be deduced from the figure that the function can even imitate a shape equivalent to that of cross-entropy for some value of $a$. However, on increasing the value of $a$ beyond 0.5, the function begins to lose its convexity and starts showing a concave nature. \begin{figure}[H] \centering \includegraphics[scale=0.28]{flexibility.pdf} \caption{The loss curves of various loss functions. The flexible nature of Adma allows it to match the loss curve of cross-entropy.} \label{fig1} \end{figure} \subsection{Gradient Analysis} The gradient of eq. \eqref{eq5} is, \begin{equation} \frac{\partial\mathcal{L}(f(x;\theta), y)}{\partial\theta}= -a{f(x;\theta)^{a-1}}(e^{f(x;\theta)^a})\cdot\nabla_{\theta}f(x;\theta) \label{eq8} \end{equation} (here we omit the one-hot label factor $y$, which simply selects the true-class term). The explicit components of the scaling factor, the predicted value and the exponential function are observed in the gradient, from which some evident advantages can be directly inferred. \begin{enumerate} \item The property of contrast for the scaling parameter $a$ observed in the function's expression is also observed here amongst the coefficients of the gradient, \begin{equation} a \propto \frac{1}{f(x;\theta)^{a-1}(e^{f(x;\theta)^a})} \end{equation} This suggests that there has to be an optimal value of $a$ for training: either an excessively large or an excessively small value of $a$ can make learning cumbersome for the model.
\item Eq. \ref{eq8} can be rewritten as, \begin{equation} \frac{\partial\mathcal{L}(f(x;\theta), y)}{\partial\theta}= -a{\frac{1}{f(x;\theta)^{1-a}}}e^{f(x;\theta)^a}\cdot\nabla_{\theta}f(x;\theta) \label{eq10} \end{equation} \begin{equation} \implies \frac{\partial\mathcal{L}(f(x;\theta), y)}{\partial\theta} \propto \frac{1}{f(x;\theta)} \label{eq11} \end{equation} For the presumed range of values of $a$, the coefficient of the model's output (present on the R.H.S. of eq. \ref{eq10}) makes the gradient inversely proportional to the model's output (refer to eq. \ref{eq11}). Therefore, this function also has an implicit weighting scheme analogous to the one present in the cross-entropy function, which means that the term $1/f(x;\theta)^{1-a}$ weighs comparatively more on samples that match weakly with the true label and, conversely, weighs less on samples matching strongly with the ground truth. \item The gradient has the component $e^{f(x;\theta)^a}$, where \begin{equation} f(x;\theta)^a \in [0, 1] \implies e^{f(x;\theta)^a} \in [1, e] \end{equation} So the gradients during backpropagation are always amplified, and in turn satisfy one of the most important criteria for the efficient design of a neural network, i.e. having adequately large gradients so that the function can serve as a coherent guide for learning without saturating quickly. \end{enumerate} Hence, the above theoretical discussion advocates that the proposed function, besides being flexible, can also emulate to a notable extent the behavior observed in commonly used loss functions. We therefore now move forward to evaluate the model's performance with the proposed function on various datasets and in diverse settings. \section{Experimental Setup} \label{four} We provide an empirical study to experimentally validate the capabilities of the proposed loss function. In this section, we describe all of the configurations and conditions under which our loss function was tested. To ensure a genuine comparison, we perform all tests by replacing only the loss function and keeping the rest of the configuration identical. \subsection{Architecture Design} To perform the tests with our loss function, we consider a manually designed convolutional neural network model, the reason being the need to explore the settings and hyper-parameters under which the function tends to perform best. As described in Figure \ref{fig2}, the CNN architecture has 3 blocks, with each block having exponentially more convolutions than the previous one. The model uses the ELU activation unit. Further, the structure of this baseline architecture is varied by editing its components, e.g. changing the activation function, adjusting the dropout rate, or introducing regularization. The performance change corresponding to such variations is evaluated under each of the considered loss functions. In the rest of the paper, this architecture is referred to simply as ConvNet. Lastly, to confirm the results, we perform a revised round of experiments with a pretrained model, ResNet34. This model is used for benchmarking purposes only and therefore does not go through experimentation as extensive as that of the manually designed ConvNet.
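For concreteness, eq.~\eqref{eq5} can be sketched directly in Python/NumPy, assuming one-hot targets and softmax outputs (the value of $a$ is illustrative, within the range reported in the tables below):
\begin{verbatim}
import numpy as np

def adma_loss(y_true, y_pred, a=0.26):
    """Adma loss of eq. (5): y * (e - e^{f^a}), summed over classes and
    averaged over the batch. y_true is one-hot, y_pred holds softmax
    probabilities; a is the scaling factor (the paper reports best
    results for a in [0, 0.5])."""
    per_class = y_true * (np.e - np.exp(y_pred ** a))
    return per_class.sum(axis=-1).mean()

# toy check: a confident correct prediction gives a small loss,
# while a weak match with the true label gives a larger one
y = np.array([[0.0, 1.0, 0.0]])
print(adma_loss(y, np.array([[0.05, 0.90, 0.05]])))  # ~0.07
print(adma_loss(y, np.array([[0.45, 0.10, 0.45]])))  # ~0.99
\end{verbatim}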
\begin{figure}[!htbp] \centering \includegraphics[width=0.5\columnwidth]{UntitledDiagram_1_.pdf} \caption{Layout of the custom-built CNN (ConvNet).} \label{fig2} \end{figure} \subsection{Dataset Selection} We evaluate our model's performance on a total of 8 datasets: MNIST\cite{C49}, Fashion-MNIST \cite{C50}, CIFAR10 \cite{C51}, CIFAR100 \cite{C51}, Flower Recognition \cite{C48}, Skin Cancer \cite{C52}, Street View House Numbers \cite{C53} and Dogs vs. Cats \cite{C54}. The following passages describe the characteristics of the chosen datasets and the baseline settings under which they are trained. \subsubsection{MNIST} We start by considering the most commonly used image classification dataset. It consists of a total of 70,000 28x28 greyscale images of handwritten digits belonging to 10 classes. From the total dataset, 50,000 images are used for training, 10,000 for validation and the last 10,000 for testing. Since this is the most standard dataset, with elementary characteristics, models usually end up achieving almost 100\% accuracy. Therefore, we only run our experiments for 10 iterations, where each iteration lasts 50 epochs. The dataset is, however, augmented by allowing a rotation range of $30^{\circ}$ and a shifting range of 20\%. \subsubsection{FASHION-MNIST} This dataset consists of Zalando's downsampled low-resolution article images. It aims to serve as a more challenging yet standard replacement for MNIST. Other than its classes, all of its characteristics are similar to those of the MNIST dataset. Therefore, we apply identical pre-processing and training setups as for MNIST, except that the model is trained for 75 epochs. \subsubsection{CIFAR-10} The CIFAR-10 dataset is formed as a small labelled subset of the original 80 million tiny images dataset. It is comparatively more sophisticated than the previous two datasets. Here we employ the standard subset of the entire dataset for our experiments. It consists of 32x32 colored RGB images belonging to 10 classes, split into 50,000 training and 10,000 validation images. Our experiments on this dataset run for 20 iterations, where each iteration in turn runs for 125 epochs. Data normalization, horizontal random flips, warping, and height and width shifts of up to 20\% were performed as part of data preprocessing and augmentation. The batch size is 64. \subsubsection{CIFAR-100} We also evaluate our loss function on the original CIFAR-100 dataset. It consists of a total of 50,000 32x32 training images over 100 labelled categories and another 10,000 images for validation. We train our models on this dataset for 20 iterations of 170 epochs. Since the dataset has quite a few classes, we impose relatively mild data augmentation compared to CIFAR10: a $15^{\circ}$ rotation range, 10\% horizontal as well as vertical shifts, and horizontal random flips. \subsubsection{Flower Recognition} This dataset consists of colored images of 5 different flowers, namely chamomile, tulip, rose, sunflower and dandelion. It has a total of 4,242 flower images, with approximately 800 images of each flower. The dataset originally has a resolution of 320x240. However, since it is quite difficult to train the model on such high-resolution images with the available computational resources, the images have been resized to a 150x150 resolution. The train-test split was kept at a 3:1 ratio, i.e.
from each class, around 600 samples were utilized for training and the remaining 200 were left for validation. Further, the ConvNet model is employed for training with a batch size of 64 for 20 iterations, each iteration lasting 150 epochs. The learning rate was $4\times10^{-4}$. \subsubsection{Skin Cancer} This dataset has 10,015 dermatoscopic images. The images belong to one of seven types of cancer: actinic keratoses (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (vasc). The dataset originally has images of shape 450x600x3. Due to computational constraints, we resize the images to a shape of 75x100. Apart from this, the training procedure is identical to that of the Flower Recognition dataset. \subsubsection{Street View House Numbers} The Street View House Numbers (SVHN) dataset consists of RGB images of house numbers collected by Google Street View. The images contain multiple digits, and the goal is to correctly identify the digit at the center of the image. The dataset has a total of 73,257 images in the training set and 26,032 images in the test set. Other than these, it also has an additional 531,131 less difficult examples. Since the task is to identify only a single number, we ignore the additional examples and only consider the standard training and testing sets for our experiments. \subsubsection{Dogs vs. Cats} Lastly, as a binary classification problem, we test the loss function on the classical Dogs vs. Cats dataset. The dataset originally consists of over 3 million images. However, for validation, only a subset of 25,000 images is considered, with 12,500 images belonging to each class. All of the images are cropped to a 150x150 resolution. Since this is only a binary classification problem, the model is run for only 50 epochs. \subsection{Other Settings} Here we describe all the details of our training setup other than the models and datasets. For the optimization of our models, we consider SGD and Adam, as they are the most widely used optimizers and, to some extent, possess the beneficial properties of the other optimizers. However, both optimizers have their own pros and cons, and there are no grounded proofs that one optimizer is better than the other under all circumstances. The experimental difference between the two is that Adam, being adaptive, has a higher convergence rate and requires less experimentation, whereas SGD tends to have better generalization capability for over-parametrized models \cite{C55}. Therefore, we perform the experiments sequentially, considering both optimizers. Further, the results are validated both in the presence and in the absence of regularization. Generally, a high variance is introduced in the model by such regularization. Therefore, to validate the robustness of our loss function compared to the others, we evaluate the performance of the model under L2 regularization with a weight decay of $10^{-4}$ employed in each of the convolutional layers. Its impact is discussed in Section \ref{fifth}. Lastly, the results of the function are compared with the most commonly used functions, i.e. Categorical Cross-Entropy (CCE), Mean Squared Error (MSE) and Squared Hinge. As an evaluation parameter, we report the average validation accuracies (VA) obtained at the end of every iteration for each of the datasets and each of the models.
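As an illustration of this setup, a hedged Keras-style sketch is given below; the exact API calls and the tiny placeholder model are our assumptions for illustration, not the actual training scripts. It combines CIFAR-10-style augmentation (20\% shifts, horizontal flips), the ELU activation, the optional L2 weight decay of $10^{-4}$, and Adma as a custom loss with an illustrative scaling factor:
\begin{verbatim}
import tensorflow as tf
from tensorflow import keras

def adma(a=0.26):
    """Adma loss (eq. 5) as a Keras-compatible closure."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)  # numerical safety
        per_class = y_true * (tf.exp(1.0) - tf.exp(tf.pow(y_pred, a)))
        return tf.reduce_sum(per_class, axis=-1)
    return loss

# augmentation matching the CIFAR-10 settings described above
datagen = keras.preprocessing.image.ImageDataGenerator(
    width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True)

# tiny placeholder model (the actual ConvNet is the one of Fig. 2)
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation='elu', input_shape=(32, 32, 3),
                        kernel_regularizer=keras.regularizers.l2(1e-4)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=keras.optimizers.Adam(),
              loss=adma(a=0.26), metrics=['accuracy'])
\end{verbatim}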
\section{Results \& Discussion} \label{fifth} Since flexibility is the central theme of the function, we analysed and tested it under a plethora of settings. In this section, we describe all these settings and discuss the characteristics that play the most significant role in the performance of our loss function. Initially, the ConvNet is trained one by one on each dataset using the Adam optimizer, as it is adaptable and has a faster convergence rate than SGD. These experiments show that Adma was able to nearly parallel the performance of CCE, and even surpass it by a marginal value in some cases (refer to Tables \ref{tab1} and \ref{tab2} and Fig. \ref{fig:3}). One thing that can be noticed from both the figures and the tables is that the prime contention throughout the experimentation was only between Adma and CCE. For this reason, we ran each of our experiments under the settings for which the performance with CCE was found to be optimal, and then tried to surpass that accuracy by adjusting the \textit{scaling factor} of Adma. Besides this, there are certain conditions under which Adma's accuracy surpasses all of the other cost functions' accuracies by a marginal value; we mention these conditions, and the reasons behind this behaviour, as we proceed through the discussion. Thus, the primary objective of this section is to discuss the observations and results obtained during the experiments, and further to develop an intuition towards using the function to improve results. \begin{table}[H] \begin{tabular}{|c|c|c|c|c|} \hline & Adma & CCE & Squared-Hinge & MSE \\ \hline MNIST & \textbf{(a=0.2624) 99.34\%} & 99.30\% & 98.45\% & 98.74\% \\ \hline FASHION-MNIST & \textbf{(a=0.2580) 91.48\%} & 91.42\% & 86.13\% & 88.76\% \\ \hline CIFAR10 & (a=0.2622) 86.76\% & \textbf{87.28\%} & 82.41\% & 85.42\% \\ \hline CIFAR100 & (a=0.2820) 57.60\% & \textbf{57.76\%} & 20.23\% & 35.19\% \\ \hline Flower Recognition & (a=0.2625) 77.70\% & \textbf{77.78\%} & 55.93\% & 58.34\% \\ \hline Skin Cancer & \textbf{(a=0.2625) 77.08\%} & 75.39\% & 66.03\% & 67.40\% \\ \hline SVHN & (a=0.2612) 95.14\% & \textbf{95.67\%} & 92.21\% & 93.12\% \\ \hline Dogs vs. Cats & (a=0.2451) 98.31\% & 97.67\% & 98.53\% & \textbf{98.64\%} \\ \hline \end{tabular} \caption{Accuracies achieved with regularization and Adam as the optimizer.} \label{tab1} \end{table} \begin{table}[H] \begin{tabular}{|c|c|c|c|c|} \hline & Adma & CCE & Squared-Hinge & MSE \\ \hline MNIST & (a=0.2712) 98.54\% & \textbf{98.78\%} & 96.87\% & 97.46\% \\ \hline FASHION-MNIST & (a=0.2729) 83.21\% & \textbf{84.31\%} & 79.71\% & 78.06\% \\ \hline CIFAR10 & \textbf{(a=0.2809) 83.74\%} & 83.45\% & 78.46\% & 82.03\% \\ \hline CIFAR100 & (a=0.3110) 31.31\% & \textbf{40.91\%} & 12.39\% & 20.23\% \\ \hline Flower Recognition & (a=0.2762) 54.68\% & \textbf{57.32\%} & 45.15\% & 49.41\% \\ \hline Skin Cancer & \textbf{(a=0.2918) 71.11\%} & 68.05\% & 37.58\% & 63.96\% \\ \hline SVHN & (a=0.3007) 69.92\% & \textbf{74.12\%} & 49.01\% & 53.46\% \\ \hline
Dogs vs. Cats & \textbf{(a=0.2727) 88.16\%} & 85.29\% & 79.98\% & 78.45\% \\ \hline \end{tabular} \caption{Accuracies achieved with regularization and SGD as the optimizer.} \label{tab2} \end{table} \begin{table}[H] \begin{tabular}{|c|c|c|c|c|} \hline & Adma & CCE & Squared-Hinge & MSE \\ \hline MNIST & \textbf{(a=0.2624) 99.51\%} & 99.42\% & 99.29\% & 99.46\% \\ \hline FASHION-MNIST & \textbf{(a=0.2481) 92.14\%} & 91.86\% & 90.15\% & 91.56\% \\ \hline CIFAR10 & \textbf{(a=0.2489) 85.08\%} & 84.92\% & 82.92\% & 83.81\% \\ \hline CIFAR100 & (a=0.3080) 57.74\% & \textbf{58.65\%} & 34.22\% & 43.28\% \\ \hline Flower Recognition & \textbf{(a=0.2671) 80.72\%} & 80.57\% & 53.36\% & 58.87\% \\ \hline Skin Cancer & \textbf{(a=0.2625) 78.12\%} & 75.77\% & 62.12\% & 66.95\% \\ \hline SVHN & (a=0.2631) 95.80\% & 95.56\% & 95.45\% & \textbf{95.88\%} \\ \hline Dogs vs. Cats & (a=0.2653) 96.88\% & 97.18\% & \textbf{97.53\%} & 95.44\% \\ \hline \end{tabular} \caption{Accuracies achieved without regularization and Adam as the optimizer.} \label{tab3} \end{table} \begin{table}[H] \begin{tabular}{|c|c|c|c|c|} \hline & Adma & CCE & Squared-Hinge & MSE \\ \hline MNIST & (a=0.2628) 98.97\% & \textbf{99.07\%} & 97.06\% & 98.37\% \\ \hline FASHION-MNIST & \textbf{(a=0.2654) 87.75\%} & 86.16\% & 81.45\% & 85.78\% \\ \hline CIFAR10 & (a=0.2807) 83.99\% & \textbf{84.12\%} & 80.03\% & 82.71\% \\ \hline CIFAR100 & (a=0.3071) 25.31\% & \textbf{45.13\%} & 19.57\% & 32.14\% \\ \hline Flower Recognition & (a=0.2697) 59.92\% & \textbf{65.25\%} & 50.87\% & 56.15\% \\ \hline Skin Cancer & \textbf{(a=0.2789) 76.36\%} & 72.46\% & 47.81\% & 65.42\% \\ \hline SVHN & \textbf{(a=0.2986) 68.49\%} & 66.25\% & 59.58\% & 56.78\% \\ \hline Dogs vs. Cats & \textbf{(a=0.3118) 93.24\%} & 92.71\% & 83.94\% & 86.64\% \\ \hline \end{tabular} \caption{Accuracies achieved without regularization and SGD as the optimizer.} \label{tab4} \end{table} \begin{figure*}[!ht] \begin{multicols}{4} \includegraphics[width=\linewidth]{mnist.pdf}\par \includegraphics[width=\linewidth]{fashionmnist.pdf}\par \includegraphics[width=\linewidth]{cifar10.pdf}\par \includegraphics[width=\linewidth]{cifar100.pdf}\par \end{multicols} \begin{multicols}{4} \includegraphics[width=\linewidth]{svhn.pdf}\par \includegraphics[width=\linewidth]{flower.pdf}\par \includegraphics[width=\linewidth]{skincancer.pdf}\par \includegraphics[width=\linewidth]{dogscats.pdf}\par \end{multicols} \caption{The above set of figures depicts a typical tracing of learning in the model. The order, left to right and row-wise, is: MNIST, Fashion MNIST, CIFAR10, CIFAR100, SVHN, Flower Recognition, Skin Cancer, Dogs vs. Cats.} \label{fig:3} \end{figure*} Firstly, with the L2 regularization technique applied to the model, the performance with Adma is highest on half of the datasets, namely MNIST, FASHION-MNIST, Flower Recognition and Skin Cancer. The highest margin is observed for the Skin Cancer dataset. For the rest of the datasets the difference is minimal. Therefore, it may not be incorrect to say that under some alternative conditions CCE may be better than Adma. However, one distinguishing characteristic of the Skin Cancer dataset compared to the other datasets is that its images have a rectangular shape, which hints that Adma might be better suited to irregularly shaped data. Further, shifting from with regularization to without regularization, Adma takes the lead on two more datasets. This shows that when the model is used plainly, without any implicit support, Adma enables more appropriate learning.
On the other hand, CCE, even though it starts out with lower accuracy, later learns more rapidly. The ConvNet, with Adma as the loss function, exhibits state-of-the-art performance not only on the common but also on somewhat peculiar datasets. For example, CIFAR100 is a 100-label dataset with only 500 images per category for training. On it, the model outputs an accuracy of approximately 57\%, which remains invariant even when the regularization is removed. The performance in this case is very comparable to CCE and remarkably higher than with the other two functions. Next, on the Flower Recognition dataset, which is a small dataset with high-resolution RGB images, the performance with Adma follows the same pattern as on the other datasets, i.e. highly competitive with CCE. In fact, without the support of regularization, Adma performs best of all, and this accuracy is the highest registered on this dataset over all of the explored hyperparameter search space. Lastly, on the SVHN dataset as well, where the task is to correctly classify the centre digit in images that may contain several digits, Adma gives reasonable results. Thus, from the performance on all these varied datasets, it can be concluded that Adma is a reliable and genuine loss function. Furthermore, we believe that the graphs presented in Fig. \ref{fig:3} indicate some common properties of Adma. The first behaviour that can be observed is that Adma almost always starts out better than the rest of the functions, and by a significant margin. For example, on the most common dataset, MNIST, Adma was the only function to achieve an accuracy of approximately 98\% during its first epoch, while CCE was at $\sim$96.5\%, MSE at $\sim$95.5\% and Squared Hinge at $\sim$93.5\%. Another thing the figures make evident is that, throughout training, Adma learns gradually and is relatively more stable (showing less variance) than the other cost functions. From this it can be interpreted that Adma adapts more efficiently to the noise in the data and is, in a true sense, an adaptable loss function. These two observations together point towards the fact that the model shows a higher convergence rate with Adma as the loss function. Another thing to note here is that even though the performance of CCE emulates (or in some cases exceeds) that of Adma, comparatively higher variance is observed for CCE. Further, regarding the other loss functions, some datasets, namely Fashion MNIST, CIFAR100, Flower Recognition and Skin Cancer, show Squared Hinge to be the worst-performing function of all; in fact, learning seems to be completely impaired for the CIFAR100 and Skin Cancer datasets. Next, MSE behaves similarly but performs somewhat better than Squared Hinge in terms of accuracy and variance. Still, with MSE as the loss function, the model struggles to achieve accuracies similar to those of Adma or CCE on all the datasets except SVHN and Dogs vs. Cats. Thus, overall, collectively considering accuracy, variance and convergence rate, Adma clearly seems to be the most suitable choice. With SGD as the optimizer, it was quite hard to find the optimal scaling factor, and therefore a competitive accuracy. It was observed from the experiments that Adma, when used with SGD, requires a higher scaling factor to rise to a satisfactory accuracy. Here, the model was compiled with SGD keeping the default value of the learning rate, i.e. $1\times10^{-3}$.
Therefore, to lift the performance of Adma, we tried incorporating more complex functionalities such as enabling Nesterov momentum, a learning-rate annealer, etc. However, a similar proportion of improvement was observed for almost all of the loss functions. But amid such readjustments, when we reduced the learning rate to as low as $2\times10^{-4}$, Adma outperformed the rest of the functions. On applying the same technique with Adam, identical behavior was observed, and Adma achieved the leading accuracy on the SVHN, CIFAR10 and Flower Recognition datasets as well. Thus, for some reduced learning rate, the ConvNet with Adma as the loss function was able to surpass the performance recorded with all the other loss functions. One thing that can be derived from this is that, for non-adaptive optimizers like SGD, getting higher accuracy with our loss function seems to be an interplay between the chosen learning rate and the scaling factor: ultimately, for some combination of these two, we were able to achieve an accuracy higher than or parallel to that of the other loss functions. However, it is important to mention that the accuracies observed under this condition of a lowered learning rate are not the benchmarking accuracies; also, the lowered learning rate with SGD may lead to prolonged convergence. Note as well that in this particular experimentation the objective was to identify conditions demonstrating the possibility of Adma fully exceeding the prevalent functions' standards, so that they may serve as perceptive guidelines for future improvements. We would like to present another such setting, the test-train split, that sets Adma apart from the other functions. The test-train split of the standard datasets is fixed, and thus we do not tamper with it. In this particular scenario, we regard the non-standard datasets as those that are not made available by the deep learning frameworks themselves. For the non-standard datasets, the proportion of the test-train split can be easily changed. By default, for each of the non-standard datasets, the test-train split was 0.8. With this setting, Adma already emulates the performance of all the objective functions on the Flower Recognition and Skin Cancer datasets. Further, on reducing the test-train split to 0.75, all of the cost functions' performances dropped, but the drop for Adma was the smallest. Finally, on bringing the test-train split down to as low as 0.6, Adma led the CCE function by approximately 4\% on average. This means that when there is less data to train on and, correspondingly, more data to validate on, Adma seems a significantly better fit. Moreover, under this set of circumstances the training accuracy with CCE was always the highest, but on the validation data the accuracy with Adma was found to be higher than with all the other functions. One interpretation of this is that Adma enables a more generalized learning of the data and is less prone to overfitting. Thus, this is another result confirming the robustness of Adma, besides the one achieved upon the removal of regularization. We now briefly mention the rest of the environments and settings considered for testing our loss function that did not turn out to be so fruitful. One substitution was of the activation functions.
The default activation function in all of the experiments is ELU. We therefore replaced it with ReLU, its variant Leaky ReLU, and Maxout. Nevertheless, no significant change was observed for Adma. Similarly, we also tried modifying the internal structure of the ConvNet, such as adjusting the dropout rate, changing the window size and stride length, etc., but no significant improvement over the other functions was observed for Adma. Lastly, to validate the performance on a state-of-the-art pretrained model, we shifted to ResNet34. Due to lack of time, however, we only tested the model on 4 standard datasets: MNIST, Fashion MNIST, CIFAR10, and CIFAR100. Also, owing to the limited computational resources available, the model was trained directly by inheriting the pre-trained weights rather than from scratch; only the top 5 layers of the model were allowed to train. All of the same behaviour and results were observed on the ResNet34 model as well. To say the least, the model successfully achieves state-of-the-art performance on all the tested datasets: MNIST has an error rate of 0.025\%, Fashion MNIST of around 6.5\%, CIFAR10 $\sim$9\%, and CIFAR100 $\sim$38\%. Moreover, we again observed the highest first-epoch accuracy, indicating the stronger initial convergence of the function. However, on this model, readjusting the learning rate did not produce any substantial boost. We suppose that this is because only the top 5 layers were allowed to train; hence the propagated errors, and thus the update steps, were already sufficiently large for all 5 trainable layers. \section{Conclusion} \label{six} We conclude by recapitulating our novel, flexible loss function and putting forth concluding remarks and the future scope of this work. It has been shown by empirical demonstration that the proposed loss function, motivated by the idea of flexibility, can closely match or even surpass the performance of the prevalent loss functions. We have presented a more detailed treatment of the experimental validation than of the theoretical analysis because we feel it will be a more helpful guide to the community for further analysis and improvement; moreover, several works in deep learning report theoretical results that deviate substantially from experimental ones. To the best of our knowledge, this is the pioneering work on the development of a purely flexible loss function for neural networks. The importance of such a function is supported by the biological plausibility of variable loss elicitation in our prefrontal cortex. The presented work broaches fundamental reasons for it to be noteworthy enough to reinstigate active research on the improvement of loss functions. We also feel that extensive research on incorporating the criteria of flexibility, adaptability, and dynamicity into the structure of neural networks could cause a paradigm shift and help us more quickly achieve unparalleled performance in Artificial General Intelligence. One may notice that this work questions one of the most commonly held beliefs about why CCE is widely accepted: the logarithm undoes the exponentiated softmax output units so that the gradients show linear behaviour during back-propagation. In the Adma loss function, by contrast, the exponentiated output units are exponentiated even further, and similar performance is still achieved. This suggests that one should not take even the Adma function as given; rather, one should question its structure as well, for example by
trying out base values other than $e$, since the base has only numerical importance. Owing to the exploratory nature of the proposed work, we intentionally refrain from providing a solid theoretical framework for the function. A theoretical framework might prematurely constrain the structure of the proposed function, which still needs to be explored; we believe this is best done through experimental exploration rather than theoretical proofs. In general, the intuition about the optimal value of the scaling factor, just like that for many other hyperparameter values, can be developed as one proceeds with the experimentation. Future work could instead turn the scaling factor into a parameter that is learned during the training process. Further, the function can be investigated with more complicated networks, such as generative models and image segmentation networks, where the design and choice of the loss function play a more significant role. Thus, all of the above facts suggest that the proposed Adma loss function is already able to deliver state-of-the-art performance, while leaving room for significant exploration and further possibilities. This may henceforth introduce a new paradigm of flexibility and adaptability in deep learning research. \bibliographystyle{IEEEtran}
\section{Introduction} The octupole (or pear-like) deformation in nuclei is one of the most prominent and studied themes in nuclear structure physics \cite{butler1996}. Measurement of permanent octupole deformation has implications for new physics beyond the Standard Model of elementary particles. Experiments using radioactive-ion beams are planned or already operational around the world to find evidence for strong octupole deformation in several mass regions, e.g., $A\approx 220$ and $A\approx 144$. In this context, timely, systematic, and reliable nuclear structure calculations of octupole deformations and the related spectroscopic properties over a wide range of the chart of nuclides are required. We have carried out large-scale spectroscopic studies on octupole shapes and excitations in medium-heavy and heavy nuclei \cite{nomura2013oct,nomura2014,nomura2015,nomura2018oct}. The theoretical method is based on the microscopic framework provided by the nuclear energy density functional theory (DFT) \cite{bender2003} and the interacting boson model (IBM) \cite{IBM}. The self-consistent mean-field (SCMF) calculation with a given relativistic or non-relativistic energy density functional (EDF) is performed to obtain the potential energy surface (PES) in terms of the axial quadrupole $\beta_{20}$ and octupole $\beta_{30}$ shape degrees of freedom. The low-lying positive- ($\pi=+1$) and negative- ($\pi=-1$) parity states, and the electromagnetic transition rates that characterize the octupole collectivity, are computed by means of the IBM: the strength parameters of the IBM Hamiltonian, which comprises both positive- and negative-parity bosons, are completely determined by mapping the DFT energy surface onto the expectation value of the Hamiltonian in the boson condensate state. In this contribution we discuss specifically the quantum phase transitions (QPTs) of nuclear shapes \cite{cejnar2010} with quadrupole and octupole degrees of freedom in Th, Ra, Sm, Gd, and Ba isotopes \cite{nomura2013oct,nomura2014,nomura2015}, and the octupole correlations in neutron-rich odd-mass Ba isotopes \cite{nomura2018oct}. \section{Octupole shape phase transitions in light actinide and rare-earth nuclei} The axially-symmetric quadrupole and octupole PESs are shown in Figs.~\ref{fig:pes-thra} and \ref{fig:pes-smba}, which are computed by the constrained SCMF calculation within the relativistic Hartree-Bogoliubov method with the DD-PC1 EDF \cite{niksic2011}. Already at the SCMF level, features of the shape-phase transitions are observed: non-zero $\beta_{30}$ deformation appears at $^{224}$Th, and this octupole minimum becomes much more pronounced at $^{226,228}$Th, for which rigid octupole deformation is predicted. One then sees a transition to octupole-soft shapes at $^{230,232}$Th. The quadrupole $\beta_{20}$ deformation stays constant at $\beta_{20}\approx 0.2$ for $A\geq 226$. A similar observation applies to the PESs for the $^{220-230}$Ra isotopes. As for the Sm isotopes (Fig.~\ref{fig:pes-smba}), the most pronounced octupole minimum appears around the neutron number $N=88$ ($^{150}$Sm) and, for heavier Sm isotopes, the octupole minimum is no longer present. A somewhat similar systematics is found for Ba. \begin{figure}[h] \begin{center} \includegraphics[width=0.48\linewidth]{th_pes_v2.png} \includegraphics[width=0.48\linewidth]{ra_pes_v2.png}\\ \end{center} \caption{\label{fig:pes-thra} Axially-symmetric $(\beta_{20},\beta_{30})$ SCMF PESs for the $^{222-232}$Th, and $^{222-232}$Ra isotopes.
Contours are plotted in steps of 0.5 MeV, and the global minimum is identified by an open circle.} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.48\linewidth]{sm_pes_v2.png} \includegraphics[width=0.48\linewidth]{ba_pes_v2.png}\\ \end{center} \caption{\label{fig:pes-smba} Same as Fig.~\ref{fig:pes-thra}, but for the $^{222-232}$Sm, and $^{142-150}$Ba isotopes.} \end{figure} For a more quantitative analysis of the QPT, it is necessary to compute spectroscopic properties, including excitation spectra and transition rates, by taking into account dynamical correlations beyond the mean-field approximation, i.e., those arising from symmetry restoration and from fluctuations of the collective coordinates. To this end, we resort to the diagonalization of the IBM Hamiltonian, which is determined by mapping, at each configuration $(\beta_{20},\beta_{30})$, the SCMF PES $E_\mathrm{SCMF}(\beta_{20},\beta_{30})$ onto the bosonic one $E_\mathrm{IBM}(\beta_{20},\beta_{30})$, i.e., $E_\mathrm{SCMF}(\beta_{20},\beta_{30})\approx E_\mathrm{IBM}(\beta_{20},\beta_{30})$. The boson system consists of the positive-parity $0^+$ ($s$) and $2^+$ ($d$) and negative-parity $3^-$ ($f$) bosons. The bosonic PES is represented by the expectation value of the $sdf$-IBM Hamiltonian in the boson coherent state. See Refs.~\cite{nomura2008,nomura2014} for details of the whole procedure. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\linewidth]{{energies.pos}.pdf}\\ \includegraphics[width=0.8\linewidth]{{energies.neg}.pdf} \end{center} \caption{\label{fig:level} Excitation energies for the $\pi=\pm 1$ yrast states in the Th, Ra, Sm, and Ba isotopes. } \end{figure} Figure~\ref{fig:level} depicts the calculated positive- and negative-parity low-lying levels in comparison with the experimental data. Firstly, one should notice the very nice agreement between our calculation and the data, even though no phenomenological adjustment of the IBM parameters is made. In all the considered isotopic chains, the positive-parity yrast levels are lowered with increasing neutron number within each isotopic chain, suggesting a transition from spherical vibrational to strongly axially deformed states. What is of particular interest is the behavior of the low-lying negative-parity states. They demonstrate a parabolic systematics as a function of the neutron number, centered at a particular nucleus, e.g., $^{226}$Th, where the corresponding PES indicates the most pronounced octupole global minimum. In the Th isotopic chain, for instance, at $^{226}$Th the positive- and negative-parity bands are very close in energy to each other and seem to form an approximate alternating-parity band typical of stable octupole deformation. For nuclei heavier than $^{226}$Th, however, the positive- and negative-parity bands begin to form separate bands. A similar result is obtained in the Ra isotopic chain. As for Sm, the negative-parity bands become lower in energy toward $^{150}$Sm but, from this nucleus on, stay rather constant, which means there is no notable change in the evolution of octupole collectivity. In Ba, the negative-parity levels become lowest at $^{144}$Ba and remain constant for the heavier Ba nuclei.
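Before moving on, we make the mapping step described above concrete with a short schematic sketch. It is our own illustration, assuming a plain least-squares fit over the deformation grid, whereas the actual procedure of Refs.~\cite{nomura2008,nomura2014} is more elaborate (e.g., it emphasizes the topology of the PES near the minimum):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

# Schematic PES-mapping sketch (our illustration, not the cited procedure):
# fit parameters c of a bosonic surface E_IBM(b20, b30; c) to the SCMF
# surface E_SCMF sampled on a (beta20, beta30) grid.
def map_pes(b20, b30, e_scmf, e_ibm, c0):
    """b20, b30, e_scmf: flat arrays over the grid points;
    e_ibm(b20, b30, c): coherent-state expectation value of the
    sdf-IBM Hamiltonian; c0: initial guess for the parameters."""
    residuals = lambda c: e_ibm(b20, b30, c) - e_scmf
    return least_squares(residuals, c0).x
\end{verbatim}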
\begin{figure}[h] \begin{center} \includegraphics[width=\linewidth]{transitions.pdf} \caption{\label{fig:trans} The $B(E3)$ and $B(E1)$ values for the considered Th, Ra, Sm, and Ba isotopes.} \end{center} \end{figure} Next we show in Fig.~\ref{fig:trans} the $B(E3; 3^-_1\to 0^+_1)$ and $B(E1; 1_1^-\to 0^+_1)$ transition rates. In particular, the $B(E3)$ rates are a good measure of the octupole collectivity and, indeed, the predicted $B(E3)$ value becomes maximal at the nucleus where the PES exhibits the most pronounced $\beta_3\neq 0$ octupole minimum in each isotopic chain. On the other hand, the E1 property is accounted for by single-particle degrees of freedom, which are, by construction, not included in the model, as it is built only on the collective valence nucleons. That is the reason why the calculation fails to reproduce some experimental $B(E1)$ systematics. \begin{figure}[h] \begin{center} \includegraphics[width=0.48\linewidth]{{th.ratio}.pdf} \includegraphics[width=0.48\linewidth]{{ra.ratio}.pdf} \\ \includegraphics[width=0.48\linewidth]{{sm.ratio}.pdf} \includegraphics[width=0.48\linewidth]{{ba.ratio}.pdf} \\ \end{center} \caption{\label{fig:ratio} Energy ratios $E(I^\pi)/E(2^+_1)$ for the $\pi=\pm 1$ yrast states with angular momentum $I$.} \end{figure} As another signature of the octupole QPT, we show in Fig.~\ref{fig:ratio} the energy ratio $E(I^\pi)/E(2^+_1)$ for the $\pi=\pm 1$ yrast states plotted against the angular momentum $I$. If the nucleus has stable octupole deformation and exhibits an alternating-parity band, the ratio increases linearly with $I$. The staggering pattern shown in the figure, which sets in from a particular nucleus, e.g., $^{226}$Th in the Th chain, indicates that the $\pi=+1$ and $\pi=-1$ yrast bands are decoupled and an octupole vibrational structure emerges. We have also carried out a spectroscopic study of the octupole deformations in the Sm and Gd isotopic chains by using the non-relativistic Gogny EDF \cite{nomura2015}. There we confirmed the robustness of the mapping procedure: irrespective of whether a relativistic or a non-relativistic EDF is employed, a very nice description of the experimental low-lying positive- and negative-parity spectra, as well as of the evolution of octupole deformation, is obtained. Another interesting result in Ref.~\cite{nomura2015} is that many of the excited $0^+$ states in the considered Sm and Gd nuclei could have in their wave functions a double octupole-phonon (i.e., two-$f$-boson) component, and this result gives a possible explanation for why so many low-lying excited $0^+$ states are observed in rare-earth nuclei. \section{Octupole correlations in odd-mass systems} Extension to odd-mass systems is made by introducing an unpaired nucleon, which is then coupled to the octupole-deformed even-even nucleus as a core. The low-lying structure of the even-even nucleus is described in terms of the interacting $s$, $d$, and $f$ bosons, and the particle-boson coupling is modeled within the interacting boson-fermion model (IBFM) \cite{IBFM}.
The Hamiltonian for the IBFM consists of the $sdf$-IBM Hamiltonian $\hat H_\mathrm{B}$, the Hamiltonian for the single neutron $\hat H_\mathrm{F}$, and the term $\hat H_\mathrm{BF}$ that couples the fermion and boson spaces \cite{nomura2018oct}: \begin{equation*} \hat H_\mathrm{IBFM} = \hat H_\mathrm{B}+\hat H_\mathrm{F}+\hat H_\mathrm{BF}. \end{equation*} Inputs from the SCMF calculation are the spherical single-particle energies $\epsilon_j$ (needed for $\hat H_\mathrm{F}$) and the occupation numbers $v^2_j$ (for $\hat H_\mathrm{BF}$) of the odd particle in orbital $j$. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{ba145.pdf} & \includegraphics[width=0.48\linewidth]{ba144.pdf}\\ \end{tabular} \end{center} \caption{\label{fig:ba}The theoretical and experimental excitation spectra for the $^{145}$Ba and $^{144}$Ba nuclei.} \end{figure} Here we illustrate the method in its application to the isotope $^{145}$Ba. Since the corresponding even-even boson-core nucleus $^{144}$Ba exhibits an octupole-soft potential at the SCMF level, we expect that the octupole correlations also play an important role in the low-energy spectra of this odd-mass nucleus. The calculated excitation spectrum for $^{145}$Ba is compared to the corresponding experimental bands in Fig.~\ref{fig:ba}. The calculated positive- and negative-parity bands shown in bold in the figure are made of the one-$f$-boson configuration coupled with a single neutron in the $p_{1/2,3/2}f_{5/2,7/2}h_{9/2}$ and $i_{13/2}$ orbitals, respectively. The corresponding experimental bands that are suggested to be octupole in nature are also depicted in the figure. The absolute energies of the bandheads and the energy spacings within the bands are well reproduced by the calculation. We have also predicted the $B(E3)$ transition rates from the octupole bands to the ground-state bands to be typically within the range 20--30 W.u., which is of the same order of magnitude as the calculated $B(E3;3^-\rightarrow 0^+)$ value of 23 W.u. for the neighboring even-even nucleus $^{144}$Ba. To examine the quality of the model predictions, more experimental information on the $B(E3)$ transitions in the odd-mass Ba isotopes is awaited. \section{Summary} Based on a global theoretical framework of the nuclear DFT, we have analyzed the octupole deformations and the related spectroscopic properties in a large set of medium-heavy and heavy nuclei. The results of the SCMF calculations with given relativistic and non-relativistic EDFs are used to completely determine the Hamiltonian of the IBM, which then provides excitation spectra and transition rates. The evolution of the axially-symmetric quadrupole and octupole PESs, the resultant positive- and negative-parity low-lying states, and the E3 transition rates and moments all consistently point to the onset of octupole deformations and to the phase transition between stable octupole deformation and octupole-soft shapes in the considered Th, Ra, Sm, Gd, and Ba isotopic chains. The octupole correlation plays an important role in describing low-lying spectra in odd-mass nuclei as well as in neighboring octupole-deformed even-even nuclei. The theoretical method presented here is general, allows a computationally feasible prediction of octupole collective states, and will therefore be extended to study many other radioactive nuclei that are becoming of increasing importance for the RIB experiments. \section*{Acknowledgments} The results presented in this contribution are based on the works with D. Vretenar, T.
Nik{\v s}i\'c, B.-N. Lu, L. M. Robledo, and R. Rodr\'iguez-Guzm\'an. This work is financed within the Tenure Track Pilot Programme of the Croatian Science Foundation and the \'Ecole Polytechnique F\'ed\'erale de Lausanne, and by the Project TTP-2018-07-3554 Exotic Nuclear Structure and Dynamics, with funds of the Croatian-Swiss Research Programme.
\section{Introduction} \paragraph*{Motivation.} Since the early days of the quantum theory physicists have looked for ways to represent quantum states as probability distributions in the phase space, a procedure in which information about the coordinate and momentum representations is encoded in one single function \cite{weyl27, wigner32, groen46, moyal49, hillery84}. This type of representation, widely employed in classical statistical mechanics, is nowadays broadly used to study quantum systems, with applications in experimental photonics, quantum information, numerical semiclassical methods and other applied and theoretical fields. In general, functions $Q$ representing probability densities in the phase space satisfy the continuity equation \begin{equation} \frac{\partial Q}{\partial t}+\text{div} \mathbf{J}=\sigma, \label{eq:ContGeneral} \end{equation} where the divergence is taken with respect to the coordinates $x$ and momenta $p$. This equation defines the probability current $\mathbf{J}$ and the generation rates $\sigma$ for $Q$, which can be either positive or negative. In classical mechanics $\sigma = 0$ due to probability conservation, and the equations of motion for the coordinates determine the current \cite{marsden}. For example, in a one-dimensional system \begin{equation*} \mathbf{J}=\left( \begin{array}{c} J_{x} \\ J_{p} \end{array} \right) = Q \left( \begin{array}{c} \dot{x} \\ \dot{p} \end{array} \right), \end{equation*} where the dots indicate total derivatives with respect to time. Classical solutions of equation (\ref{eq:ContGeneral}) can be obtained from an initial function $Q$ and are given in terms of the trajectories governed by the equations of motion: $Q \left( x , p ; t \right) = Q \left( x_0 , p_0 ; 0 \right)$, where $x_0$ and $p_0$ are the initial conditions of the trajectory ending at $x$ and $p$ at time $t$. These classical probability functions have two properties: their marginal distributions are the physical coordinate and momentum distributions, and they are positive semi-definite, $Q \geq 0$. The quantum mechanical analog of this formulation uses phase space functions to represent states, but the general solutions have different properties from those of the classical ones. Firstly, it has long been known that any quantum function violates at least one of the two classical properties stated before. For example, the Wigner function \cite{hillery84, ozorio98} has as marginal distributions the coordinate and momentum probability densities of the state, but it is not positive, and the Husimi function \cite{perelomov, klauder} is positive, but it does not have these densities as its marginal distributions. Secondly, quantum solutions are not guided by trajectories, because it is impossible to assign both coordinate and momentum to a given test particle, even though the quantum functions satisfy the continuity equation and have a well-defined probability current \cite{steuer2013, veronez2013}. Since authentic quantum trajectories do not exist as in the classical dynamics, in order to characterize both dynamical regimes in the phase space on the same grounds we need to base the analysis on the probability flow itself. \paragraph*{Quantum phase space flow.} Continuity equations in quantum mechanics cannot be derived from equations of motion.
Instead they are built by casting the von Neumann equation for the dynamics of the states in some representation, in which the density operator of the state is mapped to the function $Q$ in the phase space, where a suitable current $\mathbf{J}$ is obtained. In recent years attention has been drawn to the topological structures the quantum current generates and their relation to the classical counterpart. So far it has been shown that the Wigner current has a strict topological order in the dynamics of its stagnation points \cite{steuer2013}, and in a recent work the same kind of order was explored for the Husimi function \cite{veronez2013}. The Husimi function is obtained by averaging the density operator in a basis of Gaussian coherent states, and it can be interpreted as the probability of measuring the position and momentum of the quantum system within an uncertainty area centered on the basis state. This function inherits the analyticity of the coherent states, which restricts the dynamics of the stagnation points. In a previous work \cite{veronez2013} we presented a detailed demonstration of the construction of the Husimi current for one-dimensional systems, and we showed that the current adds new topological structures to the dynamics when compared with its classical counterpart. We also conjectured that these new structures emerge because every zero of the Husimi function is also a saddle point of the flow. In this paper we prove this conjecture and explore the effects of these topological structures in a toy model. Specifically, we consider transmission through a Gaussian barrier and show the connection between the classical and quantum transmission coefficients and the position of the zeros relative to the peaks of the Husimi function and the classical energy levels. It has already been observed that the emergence of zeros in the phase space is a signature of quantum interference \cite{leboeuf90, cibils92}; our work adds another layer of understanding of this signature by analysing a dynamical feature of the zeros. \paragraph*{Structure of the paper.} In section 2 we present a summary of our previous work \cite{veronez2013}. Section 3 contains the demonstration of the conjecture for the two-dimensional phase space. In section 4 we use the Gaussian barrier as a toy model to analyse the dynamical effect of the topological structures. Section 5 contains the final remarks. \section{The Husimi flow} The coherent states $\vert z \rangle$ of the harmonic oscillator of mass $m$ and frequency $\omega$ are defined as the eigenstates of the annihilation operator, \begin{equation*} \widehat{a} \vert z \rangle = z \vert z \rangle, \end{equation*} where the complex number $z$ is the eigenvalue and labels the state. The normalized coherent states can be written as \begin{equation*} \vert z \rangle = e^{- \bar{z} z / 2} e^{z \widehat{a}^{\dagger}} \vert 0 \rangle, \end{equation*} where $\bar{z}$ is the complex conjugate of $z$ and $\vert 0 \rangle$ is the ground state of the harmonic oscillator. Throughout this paper we use a bar to denote the complex conjugate.
The annihilation and creation operators, $\widehat{a}$ and $\widehat{a}^{\dagger}$, are given by \begin{equation*} \widehat{a} = \frac{\widehat{x}}{\sigma_{x}} + i \frac{\widehat{p}}{\sigma_{p}} , \qquad \widehat{a}^{\dagger} = \frac{\widehat{x}}{\sigma_{x}} - i \frac{\widehat{p}}{\sigma_{p}}, \end{equation*} where $\sigma_{x} = \sqrt{\hbar / 2 m \omega}$ and $\sigma_{p} = \sqrt{\hbar m \omega / 2}$ are the coordinate and momentum widths of the ground state, respectively \cite{perelomov, klauder}. The coherent states are minimum uncertainty localized Gaussian wavepackets centered at $x = \left \langle z \vert \widehat{x} \vert z \right \rangle = \sigma_{x} \mathfrak{Re} \left( z \right)$ and $p = \left \langle z \vert \widehat{p} \vert z \right \rangle = \sigma_{p} \mathfrak{Im} \left( z \right)$. In terms of $x$ and $p$ we can rewrite $z$ as \begin{equation} z = \frac{x}{\sigma_{x}} + i \frac{p}{\sigma_{p}} , \qquad \bar{z} = \frac{x}{\sigma_{x}} - i \frac{p}{\sigma_{p}}. \label{eq:definitionZxp} \end{equation} The change of variables $\left( x , p \right) \mapsto \left( z , i \hbar \bar{z} \right)$ is a canonical transformation and, due to the scaling by $\sigma_{x}$ and $\sigma_{p}$, the dimensionless $z$ coordinate has a characteristic size of $1 / \sqrt{\hbar}$, which is important when expansions in powers of $\hbar$ are needed. The identity operator in the coherent state representation is \begin{equation*} \widehat{\mathbf{1}} = \int \text{d}^{2} z \vert z \rangle \langle z \vert, \end{equation*} where $\text{d}^{2} z = \text{d} \bar{z} \text{d} z / 2 \pi i = \text{d} x \text{d} p / 2 \pi \hbar$ is the displacement-invariant volume element of the phase space. A quantum state represented by the density operator $\widehat{\rho}$ can be mapped to a phase space function by means of its average over coherent states: \begin{equation} Q \left( \bar{z} , z \right) = \text{tr} \left( \widehat{\rho} \vert z \rangle \langle z \vert \right). \label{eq:definitionQ} \end{equation} This is the Husimi function of the quantum state, also called its $Q$-symbol. If the state is pure the Husimi function is the squared modulus of the wavefunction and is non-negative. It represents the minimal uncertainty probability of measuring the position and momentum of the state. From the definition of $\vert z\rangle$, the wavefunction in the coherent state representation can be written as \begin{equation} \psi \left( \bar{z} , z \right) \equiv \langle z \vert \psi \rangle = e^{-\bar{z} z / 2} \theta \left( \bar{z} \right), \label{eq:definitionTheta} \end{equation} where $\theta \left( \bar{z} \right) = \langle 0 \vert e^{\bar{z} \widehat{a}} \vert \psi \rangle$ is an analytic function of $\bar{z}$ \cite{perelomov, klauder}. Similarly, $\bar{\psi} \left( z , \bar{z} \right) = \langle \psi \vert z \rangle = e^{-\bar{z} z / 2} \bar{\theta} \left( z \right)$. This factorization of the wave function will be important in the next section. The dynamics of the density operator is governed by the von Neumann equation, \begin{equation} i \hbar \frac{\partial\widehat{\rho}}{\partial t} = \left[ \widehat{H} , \widehat{\rho} \right], \label{eq:definitionVonN} \end{equation} where $\widehat{H}$ is the Hamiltonian operator and $\left[ \cdot , \cdot \right]$ is the commutator.
In order to represent the dynamics in the phase space, we define a Hamiltonian function for $\widehat{H}$, \begin{equation} H \left( \bar{z} , z \right) = \text{tr} \left( \widehat{H} \vert z \rangle \langle z \vert \right), \label{eq:definitionH} \end{equation} which is an average of the operator over the coherent states. If the Hamiltonian operator can be written as a normal ordered power series in $\widehat{a}$ and $\widehat{a}^{\dagger}$, namely $\widehat{H} = \sum_{m,n} h_{mn} \widehat{a}^{\dagger m} \widehat{a}^{n}$, then its averaged function is $H = \sum_{m,n} h_{mn} \bar{z}^{m} z^{n}$. With this definition, equation (\ref{eq:definitionVonN}) can be rewritten as a differential equation in the phase space for the Husimi function: \begin{equation*} i \hbar \frac{\partial Q}{\partial t} = \sum_{m,n} h_{mn} \bar{z}^{m} \left( \frac{\partial}{\partial\bar{z}} + z \right)^{n} Q - \text{c.c.}, \end{equation*} where $\text{c.c.}$ stands for the complex conjugate of the expression immediately preceding it. Taking the limit $\hbar \rightarrow 0$ the right hand side of this equation reduces to the Poisson bracket, and the classical dynamics is obtained. In this limit the evolution of a phase space function can be further written as a continuity equation: \begin{equation*} \frac{\partial Q}{\partial t} + \frac{\partial J_{cl}}{\partial z} + \frac{\partial\bar{J}_{cl}}{\partial\bar{z}} = 0, \end{equation*} where $J_{cl} \left( \bar{z} , z \right) = \frac{1}{i \hbar} Q \frac{\partial H}{\partial \bar{z}}$ is the classical probability current. This classical dynamics is based on the existence of trajectories, guided by the equation of motion $\dot{z} = \frac{1}{i \hbar} \frac{\partial H}{\partial \bar{z}}$, that carry the $Q$ function over the phase space. In a similar fashion we can transform the full quantum dynamical equation into a sourceless continuity equation if the Hamiltonian operator is Hermitian: \begin{equation} \frac{\partial Q}{\partial t} + \frac{\partial J}{\partial z} + \frac{\partial \bar{J}}{\partial \bar{z}} = 0; \label{eq:definitionCont} \end{equation} where $J\left(\bar{z},z\right)$ is the quantum probability current and is given by \begin{equation} J = \frac{1}{i \hbar} \sum_{m,n = 0}^{\infty} \sum_{k = 0}^{\min \left( m , n \right)} \sum_{l = 1}^{m - k} \frac{h_{mn} \left( -1 \right)^{k} m! n!}{k! l! \left( m - k - l \right)! \left( n - k \right)!} \frac{\partial^{l - 1}}{\partial z^{l - 1}} \left(\bar{z}^{m - k - l}z^{n - k} Q \right). \label{eq:definitionJ1} \end{equation} The lowest order $\hbar$ term on the right hand side of (\ref{eq:definitionJ1}) is obtained with $l = 1$ and $k = 0$, and this term retrieves the classical current $J_{cl}$. In this way, the quantum current can be separated into the classical current plus higher order corrections in $\hbar$. If the Hamiltonian is not Hermitian, $h_{mn} \neq \bar{h}_{nm}$, a source term \begin{equation*} \sigma = \frac{Q}{i \hbar}e^{-\frac{\partial}{\partial \bar{z}}\frac{\partial}{\partial z}} \left( H - \bar{H} \right) \end{equation*} is added to the right hand side of (\ref{eq:definitionCont}). This term accounts for the absence of norm conservation of the Husimi function. In this paper we will only consider Hermitian Hamiltonians. 
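As a simple worked example, added here for illustration (it is not part of the original derivation), take the normal-ordered harmonic oscillator $\widehat{H} = \hbar \omega \, \widehat{a}^{\dagger} \widehat{a}$, whose only nonvanishing coefficient is $h_{11} = \hbar \omega$. In (\ref{eq:definitionJ1}) the sums then collapse to the single term $m = n = 1$, $k = 0$, $l = 1$, and
\begin{equation*}
J = \frac{1}{i \hbar} \, \hbar \omega \, z \, Q = - i \omega z Q = J_{cl},
\end{equation*}
so all quantum corrections vanish and the Husimi function is rigidly transported by the classical flow, as expected for a quadratic Hamiltonian.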
\paragraph*{Simplification of $J$.} The expression (\ref{eq:definitionJ1}) can be further simplified by changing the summation indices in the following way: \begin{equation*} \sum_{m = 0}^{\infty} \sum_{n = 0}^{\infty} \sum_{k = 0}^{\min \left( m , n \right)} \sum_{l = 1}^{m - k} \mapsto \sum_{k = 0}^{\infty} \sum_{l = 1}^{\infty} \sum_{m = k + l + 1}^{\infty} \sum_{n = k}^{\infty}. \end{equation*} Expanding the derivatives in $z$ contained in (\ref{eq:definitionJ1}) and gathering the terms conveniently, we obtain \begin{equation} J = \frac{1}{i \hbar} \sum_{l = 1}^{\infty} \frac{\partial^{l - 1} Q}{\partial z^{l - 1}} \sum_{k = 0}^{\infty} \frac{\left( - 1 \right)^{k}}{\left( k + l \right)!} \frac{\partial^{2k + l} H}{\partial \bar{z}^{k + l} \partial z^{k}}, \label{eq:definitionJ2} \end{equation} which is a more compact and manageable expression than the original (\ref{eq:definitionJ1}). Our interest concerns the stagnation points of the flow, those points of the phase space where $J = 0$. \section{Stagnation Points of the Flow} In \cite{veronez2013} it was conjectured that the zeros of the Husimi function are also saddle points of $J$. This means that whenever a zero of the function occurs, there are topological differences between the classical and quantum probability flows induced by these new stagnation points, and the phase space dynamics of the two regimes are different. In this section we prove that this conjecture is true, and we analyse how the existence of the saddle points can change the dynamical behaviour of a system. The quantum current (\ref{eq:definitionJ2}), like the classical one, has a gauge freedom under the transformation $J \mapsto J + i \frac{\partial}{\partial \bar{z}} \Phi$, for any real phase-space function $\Phi$. Apart from this gauge, in this work we regard (\ref{eq:definitionJ2}) as the probability current associated with the Husimi function due to its special factorization (\ref{eq:factorizationJ}) shown below. Let $z_{0}$ be a stagnation point, $J \left( \bar{z}_{0} , z_{0} \right) = 0$. The probability current can be expanded around $z_{0}$ as \begin{eqnarray*} J & \approx & \left( z - z_{0} \right) \left. \frac{\partial J}{\partial z} \right|_{\bar{z}_{0} , z_{0}} + \left( \bar{z} - \bar{z}_{0} \right) \left. \frac{\partial J}{\partial \bar{z}} \right|_{\bar{z}_{0} , z_{0}},\\ \bar{J} & \approx & \left( z - z_{0} \right) \left. \frac{\partial \bar{J}}{\partial z} \right|_{\bar{z}_{0} , z_{0}} + \left( \bar{z} - \bar{z}_{0} \right) \left. \frac{\partial \bar{J}}{\partial \bar{z}} \right|_{\bar{z}_{0} , z_{0}}. \end{eqnarray*} The topological behaviour of the current about this point is determined by the eigenvalues $\lambda$ of the matrix $\mathcal{G}$ of the linear coefficients of the expansion. This matrix is also the vector gradient of the current, and is given by \begin{equation*} \mathcal{G} \left. \left[ \bar{J} , J \right] \right|_{\bar{z}_{0} , z_{0}} = \left. \left( \begin{array}{cc} \frac{\partial J}{\partial z} & \frac{\partial J}{\partial \bar{z}}\\ \frac{\partial \bar{J}}{\partial z} & \frac{\partial \bar{J}}{\partial \bar{z}} \end{array} \right) \right|_{\bar{z}_{0} , z_{0}}. \end{equation*} As an illustration consider the classical case, where the current is given by $J_{cl} = \frac{1}{i \hbar} Q \frac{\partial H}{\partial \bar{z}}$, with $Q = \left| \psi \left( \bar{z} , z \right) \right|^{2}$. There are two sets of stagnation points, given by $Q = 0$ and by $\frac{\partial H}{\partial \bar{z}} = 0$.
The eigenvalues of $\mathcal{G}$ at these points are \begin{equation} \begin{cases} \lambda_{cl} = 0, & \text{if } Q = 0,\\ \lambda_{cl} = \pm \frac{Q}{\hbar} \sqrt{\det \mathcal{K}}, & \text{if } \frac{\partial H}{\partial \bar{z}} = 0; \end{cases} \label{eq:lambdaClassical} \end{equation} where $\mathcal{K}$ is the Hessian matrix of the Hamiltonian function evaluated at the stagnation point: \begin{equation*} \mathcal{K} = \left. \left( \begin{array}{cc} \frac{\partial ^{2} H}{\partial z ^{2}} & \frac{\partial ^{2} H}{\partial \bar{z} \partial z}\\ \frac{\partial ^{2} H}{\partial \bar{z} \partial z} & \frac{\partial ^{2} H}{\partial \bar{z} ^{2}} \end{array} \right) \right|_{\bar{z}_{0} , z_{0}}. \end{equation*} When $Q = 0$ the structure of the stagnation point cannot be inferred. When $\frac{\partial H}{\partial \bar{z}} = 0$ the eigenvalues are either real numbers with opposite signs, if the Hessian determinant is positive, or purely imaginary conjugate numbers, if the determinant is negative. In the former case the point is a saddle of the flow, with an attractive and a repulsive direction; in the latter, it is a vortex \cite{marsden}. For each stagnation point $z_{0}$ the topological characterization of the probability current can be done through the index $I$ of the flow, which counts the number of times the current rotates completely while we move clockwise around the point. Taking a loop $L$ around $z_{0}$ containing no other stagnation point, $I$ is calculated as the clockwise integral of the angle $\phi$ between the flow and some fixed reference axis in $L$: \begin{equation*} I \left( z_{0} \right) = \frac{1}{2 \pi} \oint_{L} \text{d} \phi. \end{equation*} One counterclockwise rotation of $J$ adds $-1$, whereas one clockwise rotation adds $+1$. In general, for saddle points $I = -1$ and for vortices $I = +1$. In this way the index defines a topological charge $\pm 1$ for each point \cite{marsden, guillemin}. When we analyse the quantum flow, the behaviour of the stagnation points is different from that of the classical flow. For pure states, the Husimi function can be factored as \begin{equation} Q \left( \bar{z} , z \right) = e^{-\bar{z} z} \theta \left( \bar{z} \right) \bar{\theta} \left( z \right), \label{eq:factorizationQ} \end{equation} where $\theta$ is given in (\ref{eq:definitionTheta}). As the quantum current (\ref{eq:definitionJ2}) depends only on the derivatives of $Q$ with respect to $z$, it can be written as \begin{eqnarray} J & = & \theta \left( \bar{z} \right) \frac{1}{i \hbar} \sum_{k \geq 0 , l \geq 1} \frac{\left( -1 \right)^{k}}{\left( k + l \right)!} \frac{\partial^{2k + l} H}{\partial \bar{z}^{k + l} \partial z^{k}} \frac{\partial^{l - 1} \left( e^{-\bar{z} z} \bar{\theta} \left( z \right) \right)}{\partial z^{l - 1}} \nonumber \\ & = & \theta \left( \bar{z} \right) f \left( \bar{z} , z \right). \label{eq:factorizationJ} \end{eqnarray} Here there are two possibilities that produce $J = 0$: $\theta = 0$ or $f = 0$. The points that satisfy the condition $\theta = 0$, which are the zeros of the Husimi function, will be named the \emph{trivial stagnation points}, while those that satisfy $f = 0$, given by an intricate relation between the phase space functions $Q$ and $H$, will be called the \emph{non-trivial stagnation points}.
The eigenvalues of the vector gradient for both classes of stagnation points can be calculated using the factorization (\ref{eq:factorizationJ}): \begin{equation*} \mathcal{G} \equiv \left( \begin{array}{cc} \theta \frac{\partial f}{\partial z} & \frac{\partial \left(\theta f \right)}{\partial \bar{z}}\\ \frac{\partial \left( \bar{\theta} \bar{f} \right)}{\partial z} & \bar{\theta} \frac{\partial \bar{f}}{\partial \bar{z}} \end{array} \right), \end{equation*} and are given by the roots of the secular equation \begin{equation} \lambda^{2} - \lambda \left( \theta \frac{\partial f}{\partial z} + \bar{\theta} \frac{\partial \bar{f}}{\partial \bar{z}} \right) + \theta \bar{\theta}\frac{\partial f}{\partial z} \frac{\partial \bar{f}}{\partial \bar{z}} - \left( \theta \frac{\partial f}{\partial \bar{z}} + \frac{\partial \theta}{\partial \bar{z}} f \right) \left( \bar{\theta} \frac{\partial \bar{f}}{\partial z} + \frac{\partial \bar{\theta}}{\partial z} \bar{f} \right) = 0. \label{eq:eqLambda} \end{equation} The solutions of (\ref{eq:eqLambda}) are different for each case considered before and are given by \begin{equation} \begin{cases} \lambda^{\pm} = \pm \left| \frac{\partial \theta}{\partial \bar{z}} f \right| , & \text{if } \theta = 0;\\ \lambda^{\pm} = \frac{1}{2} \left( \text{tr} \mathcal{G} \pm \sqrt{\text{tr}^{2} \mathcal{G} - 4 \det \mathcal{G}} \right) , & \text{if } f = 0. \end{cases} \label{eq:lambdaQuantum} \end{equation} Therefore, the trivial stagnation points are saddles of the flow, and their indices are equal to $-1$. This proves the previous conjecture. There are two possibilities for the eigenvalues of the non-trivial stagnation points. First, when the term under the square root is positive, the eigenvalues are both negative (positive) numbers, and the stagnation point is an attractive (repulsive) node. Second, when the term inside the root is negative, the eigenvalues are a pair of complex conjugate numbers, and in this case the stagnation point is an attractive (repulsive) spiral if their real parts are negative (positive). For both possibilities, the real parts of the $\lambda^{\pm}$'s add up to $- \frac{\partial Q}{\partial t}$ and the index of the point is equal to $+1$. During the time evolution of the quantum state, the movement of the Husimi function is accompanied by the movement of its zeros. In view of the Poincar\'{e}-Hopf theorem the total index of the flow must be conserved during the dynamics, thus the emergence of a saddle point must always be accompanied by the emergence of a non-trivial stagnation point; for this reason stagnation points exist \emph{in pairs}. In \cite{veronez2013} it has been observed that the stagnation points in a pair move close to each other in the phase space and form a structure similar to the one depicted in Figure 1. Since this structure is similar to a dipole with opposite charges, in this work we name it a \emph{topological dipole}. In the next section we analyse a toy model where the presence of the topological dipoles works as a signature of differences between two regimes of transmission across a potential barrier. \begin{figure}[!ht] \begin{centering} \includegraphics[scale=0.75]{Figure1.pdf} \par\end{centering} \caption{{\small Sketch of one topological dipole, comprising a saddle point (left, blue spot) and a spiral (right, red spot). The total index around both points sums to zero.
The vortex can also have an attractive or repulsive character.}} \label{fig01} \end{figure} \section{Tunneling in the Gaussian Barrier} We consider a particle of mass $m$ scattering off a one-dimensional Gaussian barrier with amplitude $V_{0}$ and width $1/\sqrt{2k}$. We are interested in the comparison between the classical and quantum transmission rates through the barrier, $T_{C}$ and $T_{Q}$, respectively. The classical Hamiltonian is \begin{equation} H_{cl} = \frac{p^{2}}{2m} + V_{0} \exp \left( - k x^{2} \right), \label{eq:Hclassical} \end{equation} and the quantum Hamiltonian for the model is given by \begin{equation} \widehat{H} = \frac{\widehat{p}^{2}}{2m} + V_{0} \exp \left( - k \widehat{x}^{2} \right). \label{eq:Hquantum} \end{equation} In general, the classical Hamiltonian is different from the averaged Hamiltonian function defined in (\ref{eq:definitionH}), and the latter contains terms of higher order in $\hbar$. For the operator above, the averaged Hamiltonian (\ref{eq:definitionH}) is given by \begin{equation} H = \frac{p^{2}}{2m} + \frac{\hbar \omega}{4} + \alpha V_{0} \exp \left( - \alpha k x^{2} \right), \label{eq:Haverage} \end{equation} where $x$ and $p$ are given by expressions (\ref{eq:definitionZxp}) and $\alpha = \left( 1 + 2k \sigma_{x}^{2} \right)^{-1 / 2}$ is a smoothing factor. The classical Hamiltonian is recovered from the averaged one when $\hbar \rightarrow 0$. The initial state was chosen as a coherent state centered at $x_{0}$ and $p_{0}$ with position and momentum widths $\sigma_{x}$ and $\sigma_{p}$, respectively. The parameters were set to $m = \omega = 1$ and $\hbar = 1 / 100$, which implies $\sigma_{x} = \sigma_{p} = 1/ \left( 10 \sqrt{2} \right)$. We also fixed $V_{0} = 2$, $k = 3$ and $x_{0} = -4$. We also define $p_{C}=\sqrt{2m V_0}=2$, corresponding to a classical kinetic energy equal to the barrier top. For further details on the numerical calculations we refer the reader to the Appendix. In order to quantify the relative classical-to-quantum transmission $T$ and reflection $R=1-T$ rates we define \begin{eqnarray*} D_{T} & = & \frac{T_{C} - T_{Q}}{T_{C}},\\ D_{R} & = & \frac{R_{C} - R_{Q}}{R_{C}}. \end{eqnarray*} If $D_{T} > 0$ and $D_{R} < 0$ ($D_{T} < 0$ and $D_{R} > 0$) the probability of the classical particle crossing the barrier is higher (smaller) than the tunneling probability of the quantum particle. There are two different regimes of transmission according to the average initial momentum $p_{0}$ of the particle, as can be seen in Figure 2a. For $p_{0} \lessapprox p_{C}$ the classical transmission is greater than the quantum one, while if $p_{0} \gtrapprox p_{C}$ the quantum transmission is greater. We investigated the structure of the quantum flow in both regimes for states with initial momenta $p_{0} = 1.8$ and $p_{0} = 2.1$. The quantum movement of the Husimi function is shown for reference in Figure 2b for $p_{0} = 1.8$. \begin{figure} \begin{centering} \includegraphics[scale=0.35]{Figure2.pdf} \par\end{centering} \caption{{\small (a) Relative transmission and reflection coefficients for different values of $p_{0}$. (b) Husimi function at selected times for $p_{0}=1.8$. Along the arrow, time steps increase by $0.2$ with initial time $1.7$. Amplitude scale relative to the highest value, chosen to be 1.}} \label{fig02} \end{figure} Figure 3 shows the logarithmic plot of the Husimi function, highlighting the position of its zeros, the so-called stellar representation \cite{majorana32, leboeuf90, cibils92}.
The panels show the Husimi function at different times when the particle is hitting the barrier. In all panels it is possible to see a row of zeros in front of the maximum of the Husimi function (marked by an ellipse with the letters RZ) in a region where classical trajectories cross the potential barrier ($p>0$). The presence of the zeros causes the quantum flow to be highly distorted with respect to the classical flow. This is seen in Figure 4, which shows the Husimi function superimposed with the quantum current. Three zeros, which are saddles of the current, and their corresponding center companions are clearly visible (marked by squares, triangles and circles). The flow, which would classically go through the top of the barrier to the other side, gets partially blocked by the topological dipoles. This dynamical feature leads to a smaller quantum probability of transmission compared to the classical one. \begin{figure} \begin{centering} \includegraphics[scale=0.25]{Figure3.pdf} \par\end{centering} \caption{{\small Gray scale log-plot of the Husimi function for time instants $T = 1.9 \text{, } 2.1 \text{, } 2.3 \text{ and } 2.5$ (see Figure 2b), organized in the reading direction. Black represents the absolute maximum and white represents zero. A row of zeros (RZ) is seen in front of the Husimi maximal values, which is framed by the border zeros (BZ). The outer zeros (OZ) in the external region are numerical artifacts.}} \label{fig03} \end{figure} In a similar fashion, the zeros of the Husimi function and the associated stagnation points of the current help to understand the dynamics of the transmission for $p_0 > p_{C}$, when the quantum transmission is larger than the classical one. Figure 4b shows a few classical trajectories superimposed with the Husimi function for $p_{0} = 2.1$. Once again the row of zeros is visible, but this time they are situated in a region near the classical separatrix. Below the separatrix, where the Husimi function is large and the classical trajectories are reflected back, the zeros distort the flow again, allowing portions of the Husimi function to cross it. Notice that alongside the last zero of the row, below the separatrix, lies the vortex of the topological pair, allowing the flow to circulate around it and move to the other side of the separatrix. This leads to a higher quantum than classical transmission. \begin{figure} \begin{centering} \includegraphics[scale=0.42]{Figure4.pdf} \par\end{centering} \caption{{\small (a) Zoom in on the row of zeros for $T = 2.1$ and $p_{0} = 1.8$. The black continuous lines are the energy levels of the classical Hamiltonian, which coincide with the direction of the classical flow. Three topological dipoles are visible in the image, marked with a square, a triangle and a circle. (b) Husimi function for $p_{0} = 2.1$ and $T = 2.2$. The black continuous lines identify the classical energy levels and the green curve is the separatrix. The row of zeros can be seen crossing the separatrix.}} \label{fig04} \end{figure} In summary, the position of the zeros relative to the Husimi function's maximum and to the classical flow lines is a signature of the transmission regime. This particular model exhibited two particular possibilities of relative position; other systems could offer different situations to be explored. \section{Final Remarks} In this work we studied the dynamics of the Husimi function and the role of its zeros in producing stagnation points of the corresponding phase space flow.
We showed that the zeros are stagnation saddle points of the quantum probability flow, leading to new topological structures in the current when compared to the classical dynamics. These new stagnation points of the current are created in pairs due to topological restrictions, with indices equal to $+1$ and $-1$, so that each pair behaves as a topological dipole with total index equal to zero. As an example we studied the scattering of a wave packet through a Gaussian barrier, where two regimes of transmission can be identified according to the initial average momentum of the particle. For initial average kinetic energy below the barrier top the classical transmission is greater than the quantum one, whereas the quantum transmission is larger than the classical one if the average kinetic energy is above the barrier top. In each case the relative position of the topological dipoles is different, accounting for the dynamical mechanism behind the differences between the classical and quantum flows. When the classical transmission is greater than the quantum one, the dipoles are located in regions of the phase space where classical trajectories cross the barrier, partially blocking the transmission. When the quantum transmission is greater than the classical one, the dipoles are situated near the classical separatrix and they offer a path for probability transmission across it, a classically forbidden mechanism. The zeros of the Husimi function have already been pointed out as signatures of quantum phenomena. Understanding how these zeros change the phase space probability flow adds new information about the dynamical mechanisms of quantum phenomena in the phase space. \section*{Acknowledgements} This work was partly supported by FAPESP and CNPq. \section*{Appendix} \paragraph*{Quantum dynamics of the Husimi function} Given an initial coherent quantum state $\vert z_{0} \rangle$, with mean position and momentum $x_{0}$ and $p_{0}$, respectively, its time evolution is obtained by the split-time operator method (STOM) \cite{bandrauk93}. The output of this method is the wavefunction $\psi \left( x \right)$ in the coordinate representation, and the Husimi function (\ref{eq:definitionQ}) is obtained through a convolution with the coherent state function $\phi^{*} \left( \bar{z} , z ; x \right) = \langle z \vert x \rangle$: \begin{equation*} Q \left( \bar{z} , z \right) = \left|\int \psi \left( x \right) \phi^{*} \left( \bar{z} , z ; x \right) \text{d} x \right|^{2}. \end{equation*} The quantum current is then evaluated with (\ref{eq:definitionJ2}): \begin{equation*} J = \frac{1}{i \hbar} \sum_{l = 1}^{\infty} \frac{\partial^{l - 1} Q}{\partial z^{l - 1}} \sum_{k = 0}^{\infty} \frac{\left(-1 \right)^{k}}{\left(k + l \right)!} \frac{\partial^{2k + l} H}{\partial \bar{z}^{k + l} \partial z^{k}}. \end{equation*} In our model the averaged Hamiltonian (\ref{eq:Haverage}) is not a finite polynomial, and for computational purposes we need to truncate the sums. If we consider an $\hbar$ expansion of the current, the contribution of order $\hbar^{N}$ is obtained when $l + k = N + 1$. For example, the classical current has order $\hbar^{0}$, and we would use only the $l = 1$, $k = 0$ term; the first quantum correction, of order $\hbar^{1}$, would use $l = 2$, $k = 0$ \emph{and} $l = 1$, $k = 1$, and so on. In our calculations we went up to $N=10$. The STOM routine for the wavefunction was run on the range $-10.0 \leq x \leq 10.0$ with resolution $\Delta x = 0.0025$. The time step was $\Delta t = 0.01$.
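To make this post-processing step concrete, the following minimal sketch (our own illustration, not the authors' code) evaluates the Husimi function on a phase-space grid from a wavefunction sampled on the STOM grid. The Gaussian form assumed for $\langle z \vert x \rangle$ is the standard coherent-state wavefunction; overall phase factors and the overall normalization of $Q$ are immaterial here, since only the squared modulus is needed.
\begin{verbatim}
import numpy as np

# Minimal sketch: Husimi function from a wavefunction psi sampled on the
# grid x, via Q(x0, p0) = | int psi(x) <z|x> dx |^2 (up to normalization).
def husimi(psi, x, x0_grid, p0_grid, hbar=0.01):
    sx = np.sqrt(hbar / 2.0)            # m = omega = 1, as in the text
    dx = x[1] - x[0]
    pref = (2.0 * np.pi * sx**2) ** (-0.25)
    Q = np.empty((len(p0_grid), len(x0_grid)))
    for i, p0 in enumerate(p0_grid):
        for j, x0 in enumerate(x0_grid):
            # <z|x>: conjugate of the coherent-state wavefunction <x|z>
            phi_bar = pref * np.exp(-(x - x0)**2 / (4.0 * sx**2)
                                    - 1j * p0 * x / hbar)
            Q[i, j] = np.abs(np.sum(psi * phi_bar) * dx) ** 2
    return Q
\end{verbatim}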
In Figures 3 and 4 the grid of the Husimi function has a resolution of $500 \times 500$ points. In the sequence of images in Figure 3 the region where the Husimi function assumes its greatest values is surrounded by a border of aligned zeros, with an outlying ``sea of zeros''. These zeros were tested for robustness and only the border was observed to be accurately reproduced; the position of the zeros in the sea is very sensitive to the range and the resolution of the STOM. \paragraph*{Classical dynamics of the Husimi function} The classical evolution of the initial Husimi function was computed by integrating the trajectories generated by the classical Hamiltonian (\ref{eq:Hclassical}). At time instant $t$, the classical function is \begin{equation*} Q \left( x \left( t \right) , p \left( t \right) ; t \right) = Q \left( x , p ; 0 \right), \end{equation*} where $\left( x \left( t \right) , p \left( t \right) \right)$ is the trajectory with initial condition $\left( x \left( 0 \right) , p \left( 0 \right) \right) = \left( x , p \right)$, driven by Hamilton's equations of motion: \begin{equation*} \dot{x} = \frac{\partial H_{cl}}{\partial p} , \qquad \dot{p} = - \frac{\partial H_{cl}}{\partial x}. \end{equation*} In the classical case we used as initial probability density the Husimi function of this state.
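For completeness, a corresponding minimal sketch of the classical transport (again our own illustration): each phase-space point is propagated backward in time with a fourth-order Runge-Kutta integrator, and the initial density is evaluated at the resulting preimage, which implements $Q(x(t), p(t); t) = Q(x, p; 0)$.
\begin{verbatim}
import numpy as np

# Minimal sketch: classical evolution of Q for
# H_cl = p^2/(2m) + V0*exp(-k x^2).
# Q0 is a callable giving the initial density; x, p may be scalars or arrays.
def classical_Q(Q0, x, p, t, m=1.0, V0=2.0, k=3.0, dt=0.01):
    def deriv(state):
        x_, p_ = state
        force = 2.0 * k * V0 * x_ * np.exp(-k * x_**2)   # -dV/dx
        return np.array([p_ / m, force])
    state = np.array([x, p], dtype=float)
    for _ in range(int(round(t / dt))):   # RK4, backward in time
        k1 = deriv(state)
        k2 = deriv(state - 0.5 * dt * k1)
        k3 = deriv(state - 0.5 * dt * k2)
        k4 = deriv(state - dt * k3)
        state = state - dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return Q0(state[0], state[1])
\end{verbatim}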
\section{Introduction} \label{sec:intro} There has been a lot of recent work in modeling network data, primarily driven by novel applications in social network analysis, molecular biology, public health, etc. A common feature of network data in numerous applications is the presence of {\em community structure}, which means that a subset of nodes exhibits a higher degree of connectivity amongst themselves than the remaining nodes in the network. The problem of community detection has been extensively studied in the statistics and networks literature, and various approaches proposed, including spectral clustering (\cite{white2005spectral}, \cite{rohe2011spectral} etc.), likelihood-based methods (\cite{airoldi2008mixed}, \cite{amini2013}, \cite{nowicki2001estimation} etc.), and modularity-based techniques (\cite{girvan2002community}), as well as approaches inspired by statistical physics principles (\cite{fortunato2010community}). For likelihood-based methods, a popular generative statistical model is the \textit{Stochastic Block Model} (SBM) (\cite{holland1983stochastic}). Edges in this model are generated at random with probabilities corresponding to entries of an inter-community probability matrix, which in turn leads to community structures in the network. However, in many applications, the network data are complemented by either node-specific or edge-specific covariates. Some of the available work in the literature focuses on node covariates for the SBM (or some variant of it) (\citet{tallberg2004bayesian, mariadassou2010uncovering, choi2012stochastic, airoldi2008mixed}), while other papers focus on edge-specific covariates (\cite{hoff2002latent, mariadassou2010uncovering,choi2012stochastic}). The objective of this work is to obtain maximum likelihood estimates (MLE) of the model parameters in large-scale SBMs with covariates. This is a challenging computational problem, since the latent structure of the model requires an EM-type algorithm to obtain the estimates. It is known (\citet{snijders1997estimation, handcock2007model}) that for a network of size $n$, each EM update requires $O(n^2)$ computations, an expensive calculation for large networks. Further, one also needs $O(nK)$ calculations to obtain the community memberships, which could also prove to be a computationally expensive step for large $n$, especially if the number of communities $K$ scales with $n$. \citet{amini2013} provided a pseudo-likelihood method for community detection in large sparse networks, which can be used for fast parameter estimation in a regular SBM, but it is not readily applicable to settings where the SBM also has covariates. The recent work of \cite{ma2017exploration} deals with large-scale likelihood-based inference for networks, but focuses on latent space models. Hence, there is a need to scale up likelihood-based inference for large SBMs with covariates, and the goal of this work is to fill that gap. To deal with the computational problem we develop a divide-and-conquer parallelizable algorithm that can take advantage of multi-processor computers. The algorithm allows communication between the processors during its iterations. As shown in Section \ref{sec:methods}, this communication step improves estimation accuracy, while creating little extra computational overhead compared to a straightforward divide-and-conquer parallelizable algorithm. We believe that such an algorithm is particularly beneficial for inference purposes when the data exhibit intricate dependencies, such as in an SBM.
To boost performance, the proposed algorithm is enhanced with a case-control approximation of the log-likelihood. The remainder of the paper is organized as follows: In Section \ref{sec:sbm}, we describe the general K-class SBM with covariates, and we present a Monte Carlo EM algorithm for it in Section \ref{mcem:covsec}. In Section \ref{sec:ccapprox}, we give a general overview of the case-control approximation used for faster computation of the log-likelihood in large network data and also discuss the specific approximation employed for the log-likelihood in SBMs. In Section \ref{par:algodesc}, we describe two generic parallel schemes for estimating the parameters of the model. In Section \ref{simresults}, we provide numerical evidence on simulated data regarding the performance of the proposed algorithm, together with comparisons with two existing latent space models utilizing covariate information, viz. (1) an additive and mixed effects model focusing on dyadic networks (AMEN) (\citet{hoff2005bilinear,hoff2015dyadic}) and (2) a latent position cluster model using a Variational Bayes implementation (VBLPCM) (\citet{salter2013variational}). We conclude with a real data application involving Facebook networks of US colleges with a specific set of covariates in Section \ref{sec:application}. \section{Modeling Framework and a Scalable Algorithm} \label{sec:methods} \subsection{A SBM with covariates} \label{sec:sbm} Suppose that we have a $0-1$ symmetric adjacency matrix $A=((a_{ij}))\in\mathbb R^{n\times n}$, where $a_{ii}=0$. It corresponds to an undirected graph with nodes $\{1,\ldots,n\}$, where there is an edge between nodes $i$ and $j$ if $a_{ij}=1$. Suppose that in addition to the adjacency matrix $A$, we observe some symmetric covariates $X(i,j)=X(j,i)\in\mathbb R^p$ on each pair of nodes $(i,j)$ of the graph that influence the formation of edges. In such cases, it is natural to extend the basic SBM to include the covariate information. Let $Z=(Z_1,\ldots,Z_n)$ denote the group memberships of the $n$ nodes. We assume that $Z_i\in\{1,\ldots,K\}$, and that the $Z_i$'s are independent random variables with a multinomial distribution with probabilities $\pi=(\pi_1,\ldots,\pi_K)$. We assume that given $Z$, the random variables $\{a_{ij},\;1\leq i<j\leq n\}$ are conditionally independent Bernoulli random variables, and \begin{equation} \label{sbm:covariate} a_{ij}\sim\mbox{Ber}(P_{ij}),\;\mbox{ where } \log\frac{P_{ij}}{1-P_{ij}}=\theta_{Z_iZ_j}+\beta^TX(i,j),\;\;\;1\leq i<j\leq n, \end{equation} with $\theta\in\mathbb{R}^{K\times K}$ being a symmetric matrix. The parameter of the model is $\xi \equiv (\theta,\beta,\pi)\in\Xi\ensuremath{\stackrel{\mathrm{def}}{=}}\mathbb R^{K\times K}\times \mathbb R^p\times \Delta$, where $\Delta$ is the set of probability distributions on $\{1,\ldots,K\}$; for notational convenience we shall henceforth write $\xi$ to denote the parameter triplet $(\theta,\beta,\pi)$. A recent paper by \cite{latouche2015goodness} also considered a logistic model for random graphs with covariate information. Their goal was to assess the goodness of fit of the model, where the network structure is captured by a graphon component. To overcome the intractability of the graphon function, the original model is approximated by a sequence of models involving a block structure. An instance of that approximation corresponds to the proposed model, but the direct objectives of the two works are rather different.
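To fix ideas, the following minimal sketch (our own illustration, not the authors' code; the covariate distribution and the values of $\theta$ are assumptions made purely for the example) simulates one network from model~\eqref{sbm:covariate}.
\begin{verbatim}
# Minimal sketch: simulate one network from the SBM with covariates,
# logit(P_ij) = theta_{z_i z_j} + beta' X(i,j).  Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
n, K, p = 200, 3, 3
pi = np.array([1/3, 1/3, 1/3])                   # class probabilities
theta = np.full((K, K), -4.0) + 2.0 * np.eye(K)  # assumed symmetric theta
beta = np.array([1.0, -2.0, 1.0])

z = rng.choice(K, size=n, p=pi)                  # latent memberships
X = rng.binomial(1, 0.5, size=(n, n, p)).astype(float)
iu = np.triu_indices(n, 1)
for d in range(p):                               # enforce X(i,j) = X(j,i)
    X[:, :, d][(iu[1], iu[0])] = X[:, :, d][iu]

logits = theta[z[:, None], z[None, :]] + X @ beta
P = 1.0 / (1.0 + np.exp(-logits))
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T                                      # undirected, no self-loops
\end{verbatim}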
The log-likelihood of the posited model for the observed data is given by \begin{equation} \label{ll:obdata} \log\int_{\mathcal{Z}} L(\theta,\beta,\pi|A,z) dz, \end{equation} where $\mathcal{Z} = \{1,\ldots,K\}^n$, and $L(\xi|A,z) = L(\theta,\beta,\pi|A,z)$ is the complete data likelihood given by \begin{equation} \label{lcomp:cov} L(\xi|A,z) =\prod_{i<j}\left(\frac{\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}}{1+\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}}\right)^{a_{ij}}\left(\frac{1}{1+\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}}\right)^{1-a_{ij}}\prod_{i=1}^n \pi_{z_i}. \end{equation} Although $\mathcal{Z}$ is a discrete set, we write the marginalization as an integral with respect to a counting measure for notational convenience. When $n$ is large, obtaining the maximum-likelihood estimate (MLE) $$(\hat{\theta},\hat{\beta},\hat{\pi})=\displaystyle\argmax_{(\theta,\beta,\pi)\in\Xi}\;\log\int_{\mathcal{Z}} L(\theta,\beta,\pi|A,z) dz$$ is a difficult computational problem. We describe below a Monte Carlo EM (MCEM) implementation for parameter estimation of the proposed SBM with covariates. \subsection{Monte Carlo EM for SBM with Covariates} \label{mcem:covsec} As mentioned in the introductory section, since direct computation of the log-likelihood or its gradient is intractable, estimating SBMs is a nontrivial computational task, especially for large networks. The MCEM algorithm (\cite{wei1990monte}) is a natural candidate to tackle this problem. Let $p(\cdot\vert \xi, A)$ denote the posterior distribution on $\mathcal{Z}$ of the latent variables $z=(z_1,\ldots,z_n)$ given parameter $\xi = (\theta,\beta,\pi)$ and data $A$. More precisely, \begin{equation}\label{eq:cond:dist:latent:z} p(z\vert \xi, A) \propto \prod_{i<j}\left(\frac{\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}}{1+\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}}\right)^{a_{ij}}\left(\frac{1}{1+\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}}\right)^{1-a_{ij}} \prod_{i=1}^n \pi_{z_i}.\end{equation} We assume that we have available a Markov kernel $\mathcal{K}_{\xi, A}$ on $\mathcal{Z}$ with invariant distribution $p(\cdot\vert\xi, A)$ that we can use to generate MCMC draws from $p(\cdot\vert \xi, A)$. In all our simulations below, a Gibbs sampler (\cite{robert2013monte}) is used for that purpose. We now present the main steps of the MCEM algorithm for an SBM with covariates. \begin{algorithm}[!ht] \caption{Basic Monte Carlo EM}\label{basic:mcem} \begin{itemize} \item Initialize $\xi_0 = (\theta_0,\beta_0,\pi_0)$ \item At the $r$-th iteration, given the working estimate $\xi_r = (\theta_r,\beta_r,\pi_r)$, do the following two steps. \begin{description} \item [(E-step)] Generate a Markov sequence\footnotemark{} $(z^{(1)}_{r+1}, \ldots, z_{r+1}^{(M_r)})$, using the Markov kernel $\mathcal{K}_{\xi_r,A}$ with invariant distribution $p(\cdot|\xi_r,A)$. Use this Monte Carlo sample to derive the approximate Q-function \begin{equation} \label{Q-function:EM covariate} \widehat{Q}\left(\xi;\xi_r\right)=\frac{1}{M_r}\displaystyle\sum_{m=1}^{M_r}\log L\left(\theta,\beta,\pi|A,z^{(m)}_{r+1}\right). \end{equation} \item [(M-step)] Maximize the approximate Q-function to obtain the new estimate: \[\xi_{r+1} = \left(\theta_{r+1},\beta_{r+1},\pi_{r+1}\right)=\displaystyle\argmax_{\xi\in\Xi} \ \ \widehat{Q}\left(\xi;\xi_r\right).\] \end{description} \item Repeat the above two steps for $r=1,2,\ldots$ until convergence. \end{itemize} \end{algorithm} \footnotetext{We draw the initial state $z_{r+1}^{(0)}$ using spectral clustering with perturbation (\citet{amini2013}).
However, other choices are possible.} Because the Monte Carlo samples are allowed to change with the iterations, the MCEM algorithm described above generates a non-homogeneous Markov chain with a sequence of transition kernels $\{\mathcal{M}_r,\;r\geq 1\}$, where $\mathcal{M}_r(\xi_r,A;\cdot)$ denotes the conditional distribution of $\xi_{r+1}$ given $(\xi_0,\ldots,\xi_r)$. We made explicit the dependence of these transition kernels on the dataset $A$. This notation will come in handy later on, as we run the same algorithm on different datasets. Using this notation, the MCEM algorithm can be succinctly presented as follows: choose some initial estimate $\xi_0\in\Xi$; for $r=0,1,\ldots$, draw \[\xi_{r+1}\vert (\xi_0,\ldots,\xi_r) \sim \mathcal{M}_r(\xi_r,A;\cdot).\] This representation is very convenient, and helps provide a clear description of the main algorithm introduced below. The $r$-th iteration of the MCEM algorithm outlined above requires $\mathcal{O}(n^2M_r)$ calculations\footnote{A more precise cost estimate is $\mathcal{O}(dn^2M_r)$, where $d$ is the number of covariates. However, here we assume that $d$ is much smaller than $n$ and $M_r$.}, where $M_r$ is the number of Monte Carlo samples used at iteration $r$ and $n$ denotes the number of network nodes. Note that since MCMC is used for the Monte Carlo approximation, large values of $M_r$ are typically needed to obtain reasonably good estimates\footnote{In fact, since the mixing of the MCMC algorithm would typically depend on the size of $\mathcal{Z}=\{1,\ldots,K\}^n$ (and hence on $n$), how large $M_r$ should be to obtain a reasonably good Monte Carlo approximation in the E-step depends in an increasing fashion on $n$.}. This demonstrates that obtaining the MLE for the posited model becomes computationally expensive as the size of the network $n$ grows. The main bottleneck is the computation of the complete data log-likelihood \begin{equation} \label{loglik:compldata} \log L(\xi|A,z)=\sum_{i<j}\left[a_{ij}\left(\theta_{z_iz_j}+\beta^TX(i,j)\right) - \log\left(1+\mathrm{e}^{\theta_{z_iz_j}+\beta^TX(i,j)}\right)\right] + \sum_{i=1}^n \log\pi_{z_i}. \end{equation} We use the case-control approximation (\citet{raftery2012fast}) to obtain a fast approximation of the log-likelihood $\log L\left(\xi|A,z\right)$. A general overview of this approximation and the specific implementation for the model under consideration are provided in the next section. \subsection{Case-Control Approximation in Monte Carlo EM} \label{sec:ccapprox} The main idea of case-control approximations comes from cohort studies, where the presence of case subjects is relatively rare compared to that of control subjects (for more details see \citet{breslow1996statistics, breslow1982statistical}). In a network context, if the network topology is relatively sparse (there are a number of tightly connected communities, but few connections between members of different communities), then the number of edges (cases) is relatively small compared to the number of absent edges (controls). Then, the sum in Equation~\eqref{loglik:compldata} consists mostly of terms with $a_{ij}=0$, and therefore fast computation of the likelihood through the case-control approximation (\citet{raftery2012fast}) becomes attractive.
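To see the scale of the problem in code, a minimal vectorized sketch of \eqref{loglik:compldata} (reusing the hypothetical arrays from the earlier simulation sketch; not the authors' implementation) must touch all ${n\choose 2}$ dyads on every call, which is exactly the cost that the case-control idea attacks.
\begin{verbatim}
# Minimal sketch: complete-data log-likelihood; each call touches all
# n(n-1)/2 dyads, the O(n^2) bottleneck noted in the text.
import numpy as np

def complete_loglik(A, X, z, theta, beta, pi):
    eta = theta[z[:, None], z[None, :]] + X @ beta  # n x n matrix of logits
    iu = np.triu_indices(A.shape[0], 1)             # keep i < j only
    return (np.sum(A[iu] * eta[iu] - np.log1p(np.exp(eta[iu])))
            + np.sum(np.log(pi[z])))
\end{verbatim}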
Specifically, splitting the individuals by group, we can express the log-likelihood as \begin{equation} \label{loglik:ccapprox} \ell(\theta,\beta,\pi\vert A,z) \equiv \log L(\theta,\beta,\pi|A,z)= \displaystyle\sum_{k=1}^K \sum_{i:\;z_i=k} \left[\frac{1}{2}\ell_i\left(\theta,\beta|A,z\right) + \log\pi_k\right], \end{equation} where the factor $1/2$ compensates for each dyad being counted twice across the $\ell_i$'s, and where \begin{align*} \begin{split} \ell_i\left(\theta,\beta|A,z\right) &\equiv\displaystyle\sum_{j\neq i}\left\{a_{ij}\left(\theta_{z_iz_j}+\beta^T X(i,j)\right)-\log\left(1+\mathrm{e}^{\theta_{z_iz_j}+\beta^T X(i,j)}\right)\right\}\\ &=\displaystyle\sum_{j\neq i,a_{ij}=1}\left\{\left(\theta_{z_iz_j}+\beta^T X(i,j)\right)-\log\left(1+\mathrm{e}^{\theta_{z_iz_j}+\beta^T X(i,j)}\right)\right\}\\ &-\displaystyle\sum_{j\neq i,a_{ij}=0}\log\left(1+\mathrm{e}^{\theta_{z_iz_j}+\beta^T X(i,j)}\right)\\ &=\ell_{i,1} + \ell_{i,0}, \end{split} \end{align*} where \[\ell_{i,0} \equiv -\displaystyle\sum_{j\neq i,a_{ij}=0}\log\left(1+\mathrm{e}^{\theta_{z_iz_j}+\beta^T X(i,j)}\right).\] Given a node $i$, with $z_i=k$, we set $\mathcal{N}_{i,0}=\{j\neq i:\; a_{ij}=0\}$, and $\mathcal{N}_{i,g,0}=\{j\neq i:\; z_j=g,\; a_{ij}=0\}$ for some group index $g$. Using these notations we further split the term $\ell_{i,0}$ as \[ \ell_{i,0} = -\sum_{g=1}^K \displaystyle\sum_{j\in \mathcal{N}_{i,g,0}}\log\left(1+\mathrm{e}^{\theta_{kg}+\beta^T X(i,j)}\right).\] Let $\mathcal{S}_{i,g}$ denote a randomly selected\footnote{We do an equal-probability random selection with replacement. If $m_0\geq |\mathcal{N}_{i,g,0}|$, an exhaustive sampling is done.} subset of size $m_0$ from the set $\mathcal{N}_{i,g,0}$. Following the case-control approximation, we approximate the term $\ell_{i,0}$ by \[\tilde{\ell}_{i,0}=-\sum_{g=1}^K \frac{N_{i,g,0}}{m_0}\displaystyle\sum_{J\in\mathcal{S}_{i,g}}\log\left(1+\mathrm{e}^{\theta_{kg}+\beta^T X(i,J)}\right),\] where $N_{i,g,0} = |\mathcal{N}_{i,g,0}|$ is the cardinality of $\mathcal{N}_{i,g,0}$. Note that $\tilde{\ell}_{i,0}$ is an unbiased Monte Carlo estimate of $\ell_{i,0}$. Hence \[\tilde \ell_i(\theta,\beta|A,z) = \ell_{i,1} + \tilde \ell_{i,0}\] is an unbiased Monte Carlo estimate of $\ell_i\left(\theta,\beta\vert A,z\right)$, and \begin{equation} \label{approx:compll} \tilde \ell(\theta,\beta,\pi|A,z)=\displaystyle\sum_{k=1}^K\displaystyle\sum_{i:z_i=k} \left[\frac{1}{2}\tilde \ell_i(\theta,\beta|A,z) + \log\pi_k\right] \end{equation} is an unbiased estimator of the log-likelihood. Hence, one can use a relatively small sample of size $m_0K$ per node to obtain an unbiased and fast approximation of the complete log-likelihood, whose variance decays like $O(1/(Km_0))$. In this work we have used a simple random sampling scheme. Other sampling schemes developed with variance reduction in mind can be used as well; these include stratified case-control sampling (\citet{raftery2012fast}) and local case-control subsampling (\citet{fithian2014local}). However, these schemes come with additional computational costs. The case-control approximation leads to an approximation of the conditional distribution of the latent variables $z$ given by \[\tilde p(z\vert \xi, A) \propto e^{\tilde \ell(\theta,\beta,\pi|A,z)},\] which replaces (\ref{eq:cond:dist:latent:z}). As with the basic MCEM algorithm, we assume that we can design, for any $\xi\in\Xi$, a Markov kernel $\widetilde{\mathcal{K}}_{\xi,A}$ on $\mathcal{Z}$ with invariant distribution $\tilde p(\cdot\vert \xi, A)$ that can be easily implemented. In our implementation a Gibbs sampler is used.
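For concreteness, a minimal sketch of the case-control estimate $\tilde\ell_{i,0}$ (using the same hypothetical arrays as in the earlier sketches; not the authors' implementation) illustrates the reweighting: each group $g$ contributes $N_{i,g,0}$ times the average of the sampled terms.
\begin{verbatim}
# Minimal sketch: unbiased case-control estimate of ell_{i,0}.
# Subsample m0 non-edges per group g and reweight by |N_{i,g,0}| / m0;
# pool.size * mean(...) covers both the sampled and the exhaustive case.
import numpy as np

def ell_i0_cc(i, A, X, z, theta, beta, K, m0, rng):
    est = 0.0
    for g in range(K):
        pool = np.where((z == g) & (A[i] == 0))[0]   # the set N_{i,g,0}
        pool = pool[pool != i]
        if pool.size == 0:
            continue
        S = pool if pool.size <= m0 else rng.choice(pool, m0, replace=True)
        eta = theta[z[i], g] + X[i, S] @ beta
        est -= pool.size * np.mean(np.log1p(np.exp(eta)))
    return est
\end{verbatim}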
We thus obtain a new (case-control approximation based) Monte Carlo EM algorithm. \begin{algorithm}[H] \caption{Case-Control Monte Carlo EM}\label{cc:mcem} \begin{itemize} \item Initialize $\xi_0 = (\theta_0,\beta_0,\pi_0)$ \item At the $r$-th iteration, given the working estimate $\xi_r = (\theta_r,\beta_r,\pi_r)$, do the following two steps. \begin{enumerate} \item \label{approx Estep:MCEM} Generate a Markov chain $(z^{(1)}_{r+1}, \ldots, z_{r+1}^{(M_r)})$ with transition kernel $\widetilde{\mathcal{K}}_{\xi_r,A}$ and invariant distribution $\tilde p(\cdot |\xi_r,A)$. Use this Monte Carlo sample to form \begin{equation} \label{Q-function:ccEM covariate} \widetilde{Q}\left(\xi;\xi_r\right)=\frac{1}{M_r}\displaystyle\sum_{m=1}^{M_r} \tilde{\ell}\left(\theta,\beta,\pi|A,z^{(m)}_{r+1}\right). \end{equation} \item \label{approx Mstep:MCEM} Compute the new estimate \[\xi_{r+1} = \displaystyle\argmax_{\xi\in\Xi} \ \ \widetilde{Q}\left(\xi;\xi_r\right).\] \end{enumerate} \item Repeat the above two steps for $r=1,2,\ldots$ until convergence. \end{itemize} \end{algorithm} As with the MCEM algorithm, we will compactly represent the Case-Control MCEM algorithm as a non-homogeneous Markov chain with a sequence of transition kernels $\{\widetilde{\mathcal{M}}_r,\;r\geq 1\}$. In conclusion, using the case-control approximation reduces the computational cost of every EM iteration from $O(n^2M_r)$ to $O(Km_0nM_r)$, where $Km_0\ll n$ is the case-control sample size. In our simulations, we choose $m_0 = \lambda r$, where $\lambda$ is the average node degree of the network, and $r$ is the global case-to-control rate. \section{Parallel implementation by sub-sampling} \label{par:algodesc} The Case-Control Monte Carlo EM described in \textbf{Algorithm} \ref{cc:mcem} could still be expensive to use for very large networks. We propose a parallel implementation of the algorithm to further reduce the computational cost. The main idea is to draw several sub-adjacency matrices that are processed in parallel on different machines. The computational cost is hence further reduced, since the case-control MCEM algorithm is now applied to smaller adjacency matrices. The novelty of our approach resides in the proposed parallelization scheme. Parallelizable algorithms have recently become popular for very large-scale statistical optimization problems; for example, \cite{nedic2009distributed,ram2010distributed,johansson2009randomized,duchi2012dual} considered distributed computation for minimizing a sum of convex objective functions. For solving the corresponding optimization problem, they considered subgradient methods in a distributed setting. \cite{zhang2013communication} considered a straightforward divide and conquer strategy and showed a reduction in the mean squared error for the parameter vector minimizing the population risk under the parallel implementation compared to a serial method. Their applications include large scale linear regression and gradient based optimization. The simple divide and conquer strategy of parallel implementation has also been studied for some classification and estimation problems by \cite{mcdonald2009efficient, mcdonald2010distributed}, for certain stochastic approximation methods by \cite{zinkevich2010parallelized}, and by \cite{recht2011hogwild} for a variant of parallelizable stochastic gradient descent.
\citet{dekel2012optimal} considered a gradient based online prediction algorithm in a distributed setting, while \citet{agarwal2011distributed} considered optimization in an asynchronous distributed setting based on delayed stochastic gradient information. Most of the literature outlined above has focused on the divide and conquer (with no communication) strategy. However, this strategy works only in cases where the random subsamples from the dataset produce unbiased estimates of the gradient of the objective function. Because of the inherent heterogeneity of network data, this property does not hold for the SBM. Indeed, fitting the SBM on a randomly selected sub-adjacency matrix can lead to sharply biased estimates of the parameter\footnote{Consider for instance the extreme case where all the nodes selected belong to the same community.}. We introduce a parallelization scheme where running estimates are shared between the machines to help mitigate the bias. Suppose that we have $T$ machines to be used to fit the SBM. Let $\{A^{(u)},\;u=1,\ldots,T\}$ be a set of $T$ randomly and independently selected sub-adjacency matrices from $A$, where $A^{(u)}\in\{0,1\}^{n_0\times n_0}$. These sub-matrices can be drawn in many different ways. Here we proceed as follows. Given an initial clustering of the nodes (by spectral clustering with perturbation (\citet{amini2013})) into $K$ groups, we draw the sub-matrix $A^{(u)}$ by randomly selecting $\lfloor n_0/K\rfloor$ nodes with replacement from each of the $K$ groups. The sub-matrix $A^{(u)}$ is then assigned (and sent) to machine $u$. A divide and conquer approach to fitting the SBM consists of running, without any further communication between machines, the case-control MCEM algorithm for $R$ iterations on each machine: for each $u=1,\ldots,T$, \[\xi^{(u)}_r \vert(\xi^{(u)}_0,\ldots,\xi_{r-1}^{(u)}) \sim \widetilde{\mathcal{M}}_{r-1}(\xi_{r-1}^{(u)},A^{(u)};\cdot),\;\;\;r=1,\ldots,R.\] Then we estimate $\xi$ by \[\frac{1}{T}\sum_{u=1}^T \xi_R^{(u)}.\] This plain divide and conquer algorithm is summarized in \textbf{Algorithm} \ref{parallel:wcomm}. To mitigate the potential bias due to using sub-adjacency matrices, we allow the machines to exchange their running estimates after each iteration. More precisely, after the $r$-th iteration a master processor collects all the running estimates $\{\xi_r^{(i)},\;1\leq i\leq T\}$ (where $T$ is the number of slave processors), then sends estimate $\xi_r^{(1)}$ to processor $2$, $\xi_r^{(2)}$ to processor $3$, and so on, and finally sends $\xi_r^{(T)}$ to processor $1$. In this fashion, after $T$ iterations or more, each running estimate has been updated based on all available sub-adjacency matrices, and this helps mitigate any potential bias induced by the selected sub-matrices. The algorithm is summarized in \textbf{Algorithm} \ref{parallel:comm}; a minimal code sketch of the exchange step is given in Section \ref{simresults}. The computational cost is similar to the no-communication scheme, but we now have the additional cost of communication, which on most shared-memory computing architectures would be relatively small. At the end of the $R$-th iteration, we estimate $\xi$ by \[\frac{1}{T}\sum_{u=1}^T \xi_R^{(u)}.\] \begin{algorithm}[H] \caption{Parallel Case-Control Monte Carlo EM without Communication}\label{parallel:wcomm} \begin{algorithmic}[1] \Require Adjacency matrix $A\in\mathbb R^{n\times n}$, random subsamples $\left\{A^{(i)}\right\}_{i=1}^T\in\mathbb{R}^{n_0\times n_0}$, Number of machines $T$, Number of iterations $R$.
\Ensure $\bar{\xi}_R=\frac{1}{T}\displaystyle\sum_{i=1}^T\xi_R^{(i)}$ \State\hskip-\ALG@thistlm For each machine $i$ initialize $\xi_0^{(i)}=\left(\theta_0^{(i)},\beta_0^{(i)},\pi_0^{(i)}\right)$ \ParFor {$i=1$ to $T$} (for each machine) \For {$r=1$ to $R$}, draw \State $\xi_r^{(i)}\vert (\xi_0^{(i)},\ldots,\xi_{r-1}^{(i)}) \sim \widetilde{\mathcal{M}}_{r-1}\left(\xi_{r-1}^{(i)},A^{(i)};\cdot\right)$. \EndFor \State \textbf{end} \EndParFor \State \textbf{end} \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Parallel Case-Control Monte Carlo EM with Communication}\label{parallel:comm} \begin{algorithmic}[1] \Require Adjacency matrix $A\in\mathbb R^{n\times n}$, random subsamples $\left\{A^{(i)}\right\}_{i=1}^T\in\mathbb{R}^{n_0\times n_0}$, Number of machines $T$, Number of iterations $R$. \Ensure $\bar{\xi}_R=\frac{1}{T}\displaystyle\sum_{i=1}^T\xi_R^{(i)}$ \State\hskip-\ALG@thistlm For each machine $i$ initialize $\xi_0^{(i)}=\left(\theta_0^{(i)},\beta_0^{(i)},\pi_0^{(i)}\right)$ \For {$r=1$ to $R$} (for each iteration) \ParFor {$i=1$ to $T$} (parallel computation), \State $\check\xi_r^{(i)}\vert (\xi_0^{(i)},\ldots,\xi_{r-1}^{(i)})\sim\widetilde{\mathcal{M}}_{r-1}\left(\xi_{r-1}^{(i)},A^{(i)};\cdot\right)$. \EndParFor \State \textbf{end} \State Set $\xi = \check\xi_r^{(T)}$. \For { $i=2$ to $T$} (exchange of running estimates) \State $\xi_r^{(i)} = \check\xi_r^{(i-1)}$. \EndFor \State \textbf{end} \State $\xi_r^{(1)} = \xi$. \EndFor \State \textbf{end} \end{algorithmic} \end{algorithm} \section{Performance evaluation} \label{simresults} We compare the proposed algorithm (Algorithm \ref{parallel:comm}) with Algorithm \ref{parallel:wcomm} (non-communication case-control MCEM), and with the baseline MCEM algorithm using the full data (Algorithm \ref{basic:mcem}). We also include in the comparison the pseudo-likelihood method of \cite{amini2013}. We simulate observations from the SBM given in Equation~\eqref{sbm:covariate} as follows. We fix the number of communities to $K=3$, and the network size to $n=1000$. We generate the latent membership vector $z=\left(z_1,z_2,\ldots,z_n\right)$ as independent random variables from a multinomial distribution with parameter $\pi$. We experiment with two different class probabilities for the 3 communities, viz. $\pi=(1/3, 1/3, 1/3)^{\prime}$ (balanced community size) and $\pi=(0.5, 0.3, 0.2)^{\prime}$ (unbalanced community size). We vary two intrinsic quantities related to the network, namely the out-in-ratio (OIR) (denoted $\mu$) and the average degree (denoted $\lambda$). The OIR $\mu$ (\citet{decelle2011asymptotic}) is the ratio of the number of links between members of different communities to the number of links between members of the same community. We vary $\mu$ as $(0.04, 0.08, 0.2)$, which we term \textit{low} OIR, \textit{medium} OIR and \textit{high} OIR, respectively. The average degree $\lambda$ is defined as $n$ times the ratio of the total number of links present in the network to the total number of possible pairwise connections (that is, ${n\choose 2}$). We vary $\lambda$ in the set $(4, 8, 14)$, which we term the \textit{low}, \textit{medium} and \textit{high} degree regimes, respectively.
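As an aside, the communication step being evaluated amounts to a cyclic shift of the running estimates; the following minimal sketch (our own illustration; \texttt{mcem\_step} is a hypothetical stand-in for one draw from $\widetilde{\mathcal{M}}_{r}(\cdot,A^{(u)};\cdot)$) makes the scheme of Algorithm \ref{parallel:comm} concrete.
\begin{verbatim}
# Minimal sketch of Algorithm 4's communication step: after each sweep the
# master cyclically shifts the running estimates, so after T rounds every
# estimate has visited every sub-adjacency matrix.  mcem_step(xi, A) is a
# hypothetical stand-in for one case-control MCEM update.
def parallel_mcem_with_comm(xis, subA, R, mcem_step):
    T = len(subA)
    for _ in range(R):
        xis = [mcem_step(xis[u], subA[u]) for u in range(T)]  # in parallel
        xis = xis[-1:] + xis[:-1]  # xi^(u) -> machine u+1, xi^(T) -> machine 1
    return xis  # final estimate: average the T parameter sets component-wise
\end{verbatim}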
Using $\mu$ and $\lambda$, and following \cite{amini2013}, we generate the link probability matrix $\theta\in \mathbb{R}^{3\times 3}$ as follows \[\theta = \frac{\lambda}{(n-1)\pi^T\theta^{(0)}\pi}\theta^{(0)},\;\;\mbox{ where }\;\; \theta^{(0)} = \left(\begin{tabular}{ccc} $\mu$ & 1 & 1 \\ 1 & $\mu$ & 1 \\ 1 & 1 & $\mu$ \end{tabular}\right).\] We set the number of covariates to $p=3$ and the regression coefficients $\beta$ to $(1,-2,1)$. For each pair of nodes $(i,j)$, its covariates are generated by drawing $p$ independent $\text{Ber}(0.5)$ random variables. We then obtain the probability of a link between any two individuals $i$ and $j$ in the network as \[P_{ij}=\frac{\exp(\theta_{z_iz_j}+\beta^TX(i,j))}{1+\exp(\theta_{z_iz_j}+\beta^TX(i,j))}.\] Given the latent membership vector $z$, we then draw the entries of an adjacency matrix $A=((a_{ij}))_{n\times n}$ as \[a_{ij}\stackrel{\text{ind}}{\sim}\mbox{Ber}(P_{ij}),\;\;1\leq i<j\leq n.\] We evaluate the algorithms using the mean squared error (MSE) of the parameters $\pi,\theta,\beta$, and a measure of recovery of the latent node labels obtained by computing the Normalized Mutual Information (NMI) between the recovered clustering and the true clustering (\citet{amini2013}). The Normalized Mutual Information between two sets of clusters $C$ and $C^{\prime}$ is defined as \[\text{NMI}=\frac{I(C,C^{\prime})}{H(C)+H(C^{\prime})},\] where $H(\cdot)$ is the entropy function and $I(\cdot,\cdot)$ is the mutual information between the two sets of clusters. We have $\text{NMI}\in [0,1]$, and the two sets of clusters are similar if the NMI is close to 1. For all algorithms we initialize $\xi$ as follows. We initialize the node labels $z$ using spectral clustering with perturbations (\cite{amini2013}), which we subsequently use to initialize $\pi_0$ as \[\pi_{0k} = \frac{1}{n}\sum_{i=1}^n \textbf{1}(z_i=k).\] We initialize $\theta$ by \[\theta_0(a,b) = \frac{\sum_{i\neq j} A_{ij}\textbf{1}(z_{0i} = a)\textbf{1}(z_{0j}=b)}{\sum_{i\neq j} \textbf{1}(z_{0i} = a)\textbf{1}(z_{0j}=b)},\] and we initialize the regression parameter $\beta$ by fitting a logistic regression with the binary entries $A(i,j)$ of the adjacency matrix as responses and $X(i,j)$ as covariates. For the case-control algorithms we employ a global case-to-control rate $r=7$, so that the case-control sample sizes are set to $\lambda r = 7\lambda$. We also choose the subsample size to be $\lfloor\frac{n_0}{K}\rfloor=50$ from each group, where $K$ is the number of groups. All the simulations were replicated 30 times. We first illustrate the statistical and computational performance of the parallelizable MCEM algorithm {\em with} and {\em without} communication on a small network of size $n=100$ with $K=3$ communities and latent class probability vector $\pi=\left(1/3, 1/3, 1/3\right)^{\prime}$. The results are depicted in Table \ref{toyexample:methods}. \begin{center} \captionof{table}{Estimation Errors and NMI Values (standard errors are in parentheses) for Balanced Community Size with Varying out-in-ratio (OIR)} \label{toyexample:methods} \input{n100_toyexample.tex} \end{center} It can be seen that both versions of the parallel MCEM algorithm are almost five times faster than the serial one; further, the communication-based variant is about 10\% inferior to the full-data MCEM in terms of statistical accuracy on all parameters of interest, while the performance of the non-communication one is about 20\% worse.
Similar performance of the communication variant, albeit with larger estimation gains, has been observed in many other settings of the problem.\\Tables \ref{perftab1result} and \ref{perftab2result} depict the results when the $\text{OIR}$ is varied from \textit{low} to \textit{high}. In Tables \ref{perftab1result} and \ref{perftab2result}, the average degree is kept at 8. Along with the MSE, we also report the bias of the parameters $\left(\pi,\theta,\beta\right)$ in parentheses in Tables \ref{perftab1result}-\ref{perftab4result}. One can observe that the MSE for the different parameters for parallel MCEM with communication is only around 10\% worse than the corresponding values for MCEM on the full data. On the other hand, the non-communicative parallel version can be more than 50\% worse than MCEM on the full data, and possibly even worse in the high OIR regime for unbalanced communities. In Table \ref{perftab1result}, for $\text{OIR}=0.2$ one can observe that the bias reduction in the parameter estimates is between 60-90\%, whereas the gain in the NMI is only about 3\% (colored red in Table \ref{perftab1result}).\\Tables \ref{perftab3result} and \ref{perftab4result} show the performance of the three different methods when the average degree $\lambda$ is varied from the \textit{low} to the \textit{high} regime. The OIR is kept at 0.04 in both Tables. As before, we observe significant improvements in MSE for the communication version over its non-communication counterpart, with the gain being even higher for smaller $\lambda$ values compared to the higher ones. In Table \ref{perftab3result}, for $\lambda=4$ one can observe that the bias reduction in the parameter estimates is between 62-90\%, whereas the NMI increases from the non-communication setting to the communication one by only 2\% (colored red in Table \ref{perftab3result}). A similar trend in bias reduction relative to the NMI value, albeit with different percentages of reduction, is observable in other settings of OIR and $\lambda$. Further, the performance of parallel MCEM with communication is close to the level of performance of MCEM on the full data over different values of $\lambda$. The NMI values for the communication version are around 4\% better than for the non-communication one.\\ We also compare the proposed modeling approach and \textbf{Algorithm} \ref{parallel:comm} to two other models in the literature: (1) an additive and mixed effects model focusing on dyadic networks (AMEN) (\citet{hoff2005bilinear,hoff2015dyadic}) and (2) a latent position cluster model using a Variational Bayes implementation (VBLPCM) (\citet{salter2013variational}). As before, we use two different settings, balanced and unbalanced community size, and make the comparison in the bar diagrams given in Figures~\ref{fig3} and~\ref{fig4}, respectively. In one case, we keep the average degree $\lambda$ fixed at 8 and vary the OIR as $(0.04, 0.08, 0.2)$, while in the other we fix the OIR at 0.04 and vary $\lambda$ as $(4, 8, 14)$. To compare the performance of our parallel communication algorithm to AMEN and VBLPCM with respect to community detection, we use bar diagrams of the NMI values under the settings described above. Based on the results depicted in Figures~\ref{fig3} and \ref{fig4}, we observe that both AMEN and VBLPCM tend to exhibit a slightly better performance in terms of NMI values and RMSE of parameter estimates when the OIR is low (assortative network structure) or medium and $\lambda$ is medium or high.
Our parallel algorithm tends to perform significantly better than both AMEN and VBLPCM when the OIR is high and $\lambda$ is low. In fact, the gains for AMEN and VBLPCM over \textbf{Algorithm} \ref{parallel:comm} in the aforementioned settings are smaller than the gain of \textbf{Algorithm} \ref{parallel:comm} over its competitors in the high OIR (disassortative network structure) and low $\lambda$ (sparse) settings. The simulation studies convey the fact that for sparse networks and in cases where communities interact heavily with each other (many real-world networks have one or both of these features), \textbf{Algorithm} \ref{parallel:comm} exhibits a superior performance compared to AMEN or VBLPCM for likelihood based inference in SBMs. \begin{small} \begin{center} \captionof{table}{Comparison of performance of three different methods for $\lambda=8$, $n=1000$, $K=3$ and balanced community size with varying OIR (the bias of the estimates is given in parentheses)\label{perftab1result}} \begin{tabular}{l c c c c c} \hline\hline OIR & Methods & est.err($\pi$) & est.err($\theta$) & est.err($\beta$) & NMI \\ [0.5ex] \hline \multirow{3}{*}{0.04} & \text{MCEM on Full Data} & 0.0313 & 0.0893 & 0.0185 & 1.0000\\ & \text{Parallel Communication} & 0.0340 (0.0020) & 0.0987 (0.0049) & 0.0232 (0.0016) & 1.0000\\ & \text{Parallel Non-communication} & 0.0483 (0.0039) & 0.1194 (0.0078) & 0.0433 (0.0035) & 0.9000\\ \hline \multirow{3}{*}{0.08} & \text{MCEM on Full Data} & 0.0321 & 0.0916 & 0.0228 & 0.9876\\ & \text{Parallel Communication} & 0.0349 (0.0024) & 0.1042 (0.0060) & 0.0320 (0.0020) & 0.9830\\ & \text{Parallel Non-communication} & 0.0568 (0.0043) & 0.1377 (0.0104) & 0.0549 (0.0039) & 0.8939\\ \hline \multirow{3}{*}{\textcolor{red}{0.2}} & \text{MCEM on Full Data} & 0.0385 & 0.0988 & 0.0378 & 0.7916\\ & \textcolor{red}{Parallel Communication} & \textcolor{red}{0.0406 (0.0029)} & \textcolor{red}{0.1061 (0.0079)} & \textcolor{red}{0.0476 (0.0036)} & \textcolor{red}{0.7796}\\ & \textcolor{red}{Parallel Non-communication} & \textcolor{red}{0.0617 (0.0358)} & \textcolor{red}{0.1459 (0.0671)} & \textcolor{red}{0.0701 (0.0091)} & \textcolor{red}{0.7534}\\ \hline \end{tabular} \end{center} \begin{center} \captionof{table}{Comparison of performance of three different methods for $\lambda=8$, $n=1000$, $K=3$ and unbalanced community size with varying OIR (the bias of the estimates is given in parentheses)\label{perftab2result}} \begin{tabular}{l c c c c c} \hline\hline OIR & Methods & est.err($\pi$) & est.err($\theta$) & est.err($\beta$) & NMI \\ [0.5ex] \hline \multirow{3}{*}{0.04} & \text{MCEM on Full Data} & 0.0511 & 0.0879 & 0.0412 & 0.9510\\ & \text{Parallel Communication} & 0.0604 (0.0036) & 0.0937 (0.0047) & 0.0644 (0.0045) & 0.9327\\ & \text{Parallel Non-communication} & 0.0782 (0.0051) & 0.1185 (0.0077) & 0.0750 (0.0053) & 0.8681\\ \hline \multirow{3}{*}{0.08} & \text{MCEM on Full Data} & 0.0589 & 0.0933 & 0.0612 & 0.9054\\ & \text{Parallel Communication} & 0.0736 (0.0048) & 0.1048 (0.0068) & 0.0732 (0.0051) & 0.8852\\ & \text{Parallel Non-communication} & 0.0874 (0.0065) & 0.1253 (0.0125) & 0.0867 (0.0069) & 0.8428\\ \hline \multirow{3}{*}{0.2} & \text{MCEM on Full Data} & 0.0657 & 0.1041 & 0.0804 & 0.8251\\ & \text{Parallel Communication} & 0.0803 (0.0058) & 0.1187 (0.0088) & 0.0954 (0.0072) & 0.7896\\ & \text{Parallel Non-communication} & 0.1010 (0.0586) & 0.1503 (0.0691) & 0.1309 (0.0170) & 0.7314\\ \hline \end{tabular} \end{center} \begin{center}
\captionof{table}{Comparison of performance of three different methods for $\text{OIR}=0.04$, $n=1000$, $K=3$ and balanced community size with varying $\lambda$ (the bias of the estimates is given in parentheses)\label{perftab3result}} \begin{tabular}{l c c c c c} \hline\hline $\lambda$ & Methods & est.err($\pi$) & est.err($\theta$) & est.err($\beta$) & NMI \\ [0.5ex] \hline \multirow{3}{*}{\textcolor{red}{4}} & \text{MCEM on Full Data} & 0.0467 & 0.0885 & 0.0455 & 0.8532\\ & \textcolor{red}{Parallel Communication} & \textcolor{red}{0.0508 (0.0037)} & \textcolor{red}{0.0948 (0.0070)} & \textcolor{red}{0.0516 (0.0049)} & \textcolor{red}{0.8240}\\ & \textcolor{red}{Parallel Non-communication} & \textcolor{red}{0.0664 (0.0385)} & \textcolor{red}{0.1343 (0.0698)} & \textcolor{red}{0.0724 (0.0145)} & \textcolor{red}{0.8084}\\ \hline \multirow{3}{*}{8} & \text{MCEM on Full Data} & 0.0389 & 0.0703 & 0.0393 & 0.9976\\ & \text{Parallel Communication} & 0.0451 (0.0028) & 0.0721 (0.0053) & 0.0487 (0.0034) & 0.9889\\ & \text{Parallel Non-communication} & 0.0604 (0.0054) & 0.0925 (0.0148) & 0.0613 (0.0061) & 0.9670\\ \hline \multirow{3}{*}{14} & \text{MCEM on Full Data} & 0.0302 & 0.0508 & 0.0297 & 1.0000\\ & \text{Parallel Communication} & 0.0340 (0.0020) & 0.0540 (0.0035) & 0.0354 (0.0025) & 0.9968\\ & \text{Parallel Non-communication} & 0.0515 (0.0031) & 0.0805 (0.0056) & 0.0575 (0.0046) & 0.9856\\ \hline \end{tabular} \end{center} \begin{center} \captionof{table}{Comparison of performance of three different methods for $\text{OIR}=0.04$, $n=1000$, $K=3$ and unbalanced community size with varying $\lambda$ (the bias of the estimates is given in parentheses)\label{perftab4result}} \begin{tabular}{l c c c c c} \hline\hline $\lambda$ & Methods & est.err($\pi$) & est.err($\theta$) & est.err($\beta$) & NMI \\ [0.5ex] \hline \multirow{3}{*}{4} & \text{MCEM on Full Data} & 0.0778 & 0.1189 & 0.0651 & 0.7832\\ & \text{Parallel Communication} & 0.0853 (0.0061) & 0.1244 (0.0092) & 0.0706 (0.0053) & 0.7447\\ & \text{Parallel Non-communication} & 0.1052 (0.0610) & 0.1605 (0.0738) & 0.1082 (0.0141) & 0.7192\\ \hline \multirow{3}{*}{8} & \text{MCEM on Full Data} & 0.0554 & 0.1087 & 0.0543 & 0.8982\\ & \text{Parallel Communication} & 0.0628 (0.0041) & 0.1186 (0.0071) & 0.0612 (0.0043) & 0.8681\\ & \text{Parallel Non-communication} & 0.0815 (0.0059) & 0.1419 (0.0114) & 0.0811 (0.0081) & 0.8337\\ \hline \multirow{3}{*}{14} & \text{MCEM on Full Data} & 0.0368 & 0.0974 & 0.0410 & 0.9889\\ & \text{Parallel Communication} & 0.0433 (0.0026) & 0.1047 (0.0052) & 0.0478 (0.0033) & 0.9668\\ & \text{Parallel Non-communication} & 0.0575 (0.0040) & 0.1286 (0.0077) & 0.0695 (0.0049) & 0.9334\\ \hline \end{tabular} \end{center} \end{small} \begin{figure}[ht] \centering \begin{tabular}{ccc} \epsfig{file=unbalanced_esterr_lambda_P_cov,width=0.3\linewidth,clip=} & \epsfig{file=unbalanced_esterr_lambda_Pi_cov,width=0.3\linewidth,clip=} & \epsfig{file=unbalanced_nmi_lambda_label_cov,width=0.3\linewidth,clip=} \\ \epsfig{file=balanced_esterr_lambda_P_cov,width=0.3\linewidth,clip=} & \epsfig{file=balanced_esterr_lambda_Pi_cov,width=0.3\linewidth,clip=} & \epsfig{file=balanced_nmi_lambda_label_cov,width=0.3\linewidth,clip=} \\ \end{tabular} \caption{Comparison of \textbf{Algorithm} \ref{parallel:comm} to the additive and mixed effects model for networks (AMEN) and a variational Bayes implementation of the latent position cluster model (VBLPCM) for parameter estimation in Stochastic Blockmodels for low, medium and high degree networks, respectively.
Top row corresponds to the unbalanced, while the bottom row to the balanced community size case.} \label{fig3} \end{figure} \begin{figure}[ht] \centering \begin{tabular}{ccc} \epsfig{file=unbalanced_esterr_oir_P_cov,width=0.3\linewidth,clip=} & \epsfig{file=unbalanced_esterr_oir_Pi_cov,width=0.3\linewidth,clip=} & \epsfig{file=unbalanced_nmi_oir_label_cov,width=0.3\linewidth,clip=} \\ \epsfig{file=balanced_esterr_oir_P_cov,width=0.3\linewidth,clip=} & \epsfig{file=balanced_esterr_oir_Pi_cov,width=0.3\linewidth,clip=} & \epsfig{file=balanced_nmi_oir_label_cov,width=0.3\linewidth,clip=} \\ \end{tabular} \caption{Comparison of \textbf{Algorithm} \ref{parallel:comm} to the additive and mixed effects model for networks (AMEN) and a variational Bayes implementation of the latent position cluster model (VBLPCM) for parameter estimation in Stochastic Blockmodels for low, medium and high OIR networks, respectively. Top row corresponds to the unbalanced, while the bottom row to the balanced community size case.} \label{fig4} \end{figure} \section{Application to Collegiate Facebook Data} \label{sec:application} We use the proposed model to analyze a publicly available social network data set. The data come from {\tt https://archive.org/details/oxford-2005-facebook-matrix}, which contains the social structure of Facebook friendship networks at one hundred American colleges and universities at a single point in time. This data set was analyzed by \cite{traud2012social}. The focus of their study was to illustrate how the relative importance of different characteristics of individuals varies across different institutions. They examine the influence of common attributes at the dyad level in terms of assortativity coefficients and regression models. We, on the other hand, pick the data set corresponding to a particular university, show the performance of our algorithm on it, and compare the clusters obtained with those obtained from fitting an SBM without covariates.\\We examine the Rice University data set from the list of one hundred American colleges and universities and use our K-class SBM with and without covariates to identify group/community structures in the data set. We examine the role of the user attributes (dorm/house number, gender and class year) along with the latent structure.\\ Dorm/house number is a multi-category variable taking values such as 202, 203, 204, etc., gender is a binary ($\left\{0,1\right\}$) variable, and class year is an integer valued variable (e.g. ``2004'', ``2005'', ``2006'' etc.). We evaluate the performance of \textbf{Algorithm} \ref{parallel:comm} fitted to the SBM with covariates, viz. model~\eqref{sbm:covariate}.\\ There are some missing values in the data set, although they amount to only around 5\%. Since the network size is 4087, which is large enough, we discard the cases with missing values. We also consider the covariate values only between the years 2004 and 2010. Further, we drop those nodes with degree less than or equal to 1. After this initial cleaning up, the adjacency matrix is of order $3160\times 3160$. We choose the number of communities to be $K=20$. The choice of the number of communities is made by employing the Bayesian Information Criterion (BIC), where the observed data likelihood is computed by path sampling (\citet{gelman1998simulating}). The corresponding plot is given in Figure~\ref{fig_bic}, where the possible numbers of communities are plotted along the horizontal axis and the BIC values along the vertical one.
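A minimal sketch of this selection step is given below; here \texttt{loglik\_obs} is a hypothetical stand-in for the path-sampling estimate of the observed-data log-likelihood, and the parameter count is our own accounting rather than the authors' stated formula.
\begin{verbatim}
# Minimal sketch of BIC-based selection of K.  loglik_obs(K) is a
# hypothetical stand-in for the path-sampling estimate of the
# observed-data log-likelihood; the parameter count is our accounting.
import numpy as np

def bic(loglik, K, n, p):
    q = K * (K + 1) // 2 + p + (K - 1)  # theta, beta, and pi parameters
    n_dyads = n * (n - 1) // 2          # number of observed dyads
    return -2.0 * loglik + q * np.log(n_dyads)

# e.g. K_hat = min(range(2, 31), key=lambda K: bic(loglik_obs(K), K, n, p))
\end{verbatim}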
\begin{figure} \includegraphics[scale=0.7]{BIC_realdata_Rice} \vspace{-15em} \caption{Choice of the number of clusters (communities) in the Rice University dataset. Plot of the BIC values over the possible numbers of clusters in the dataset.} \label{fig_bic} \end{figure} Recall the K-class SBM with covariates \begin{equation} \label{sbm:covariate:real data} \log\frac{P_{ij}}{1-P_{ij}}=\theta_{z_iz_j}+\beta^TX(i,j),\;\;\;1\leq i<j\leq n, \end{equation} where $P$ is the matrix describing the probability of the edges between any two individuals in the network, and the probability of a link between $i$ and $j$ is assumed to be composed of the ``latent'' part given by $\theta_{z_iz_j}$ and the ``covariate'' part given by $\beta^TX(i,j)$, where $\beta$ is a parameter of size $3\times 1$ and $X(i,j)$ a vector of covariates of the same order indicating shared group membership. The vector $\beta$ is implemented here with sum-to-zero identifiability constraints. \\We apply \textbf{Algorithm} \ref{parallel:comm} to fit model~\eqref{sbm:covariate:real data} to the Rice University Facebook network with the three covariates dorm/house number, gender and class year.\\We plot the communities found by fitting an SBM without covariates ($\beta=0$ in model (\ref{sbm:covariate})) and a blockmodel with covariates to the given data. Let $\mathcal{C}$ and $\mathcal{C}^{\prime}$ be the two clusterings obtained by fitting the blockmodel with and without covariates, respectively. We use a measure called the Minimal Matching Distance (MMD) (\cite{von2010clustering}) to find a \textit{best greedy} 1-1 matching between the two sets of clusters. Suppose $\Pi=\left\{\pi\right\}$ denotes the set of all permutations of the $K$ labels. Then the MMD is defined as \[\text{MMD}=\frac{1}{n}\min_{\pi\in\Pi}\displaystyle\sum_{i=1}^n 1_{\pi(\mathcal{C}(i))\neq\mathcal{C}^{\prime}(i)},\] where $\mathcal{C}(i)$ ($\mathcal{C}^{\prime}(i)$, respectively) denotes the cluster label of $i$ in $\mathcal{C}$ ($\mathcal{C}^{\prime}$, respectively). Finding the best permutation then reduces to a problem of maximum bipartite matching, and we align the two clusterings (with and without covariates) by finding the maximum overlap between the two sets of clusters. The two clustering solutions (with and without covariates, respectively) are plotted in Fig. \ref{fig:community}. The estimate of the parameter $\beta$ linked with the covariate effects is given by $$\hat{\beta}=[0.7956, -0.1738, -0.6218]^{\prime}.$$\\ We compare this finding with the ones observed in \citet{traud2012social}. They studied the ``Facebook'' friendship networks of one hundred American institutions at a given point in time. In particular, they calculate the assortativity coefficients and the regression coefficients based on the observed ties to understand homophily at the local level. Further, exploring the community structure reveals the corresponding macroscopic structure. For the Rice University data set, their findings support the conclusion that residence/dorm number plays a key role in the organization of the friendship network. In fact, residence/dorm number provides the highest assortativity values for the Rice University network. We obtain a similar result, observing that the effect of the first component of $\hat{\beta}$ is quite high. Further, their study reveals that class year also plays a strong role in influencing the community structure. This is again supported by our finding, as the magnitude of the third component of $\hat{\beta}$ is sufficiently large.
Finally, as seen in the analysis in \citet{traud2012social}, gender plays a less significant role in the organization of the community structure; a similar conclusion is obtained by examining the magnitude of the second component of $\hat{\beta}$. \begin{figure}[ht] \centering \begin{tabular}{ccc} \hspace{-40pt}\epsfig{file=MCEM_K20_cov_comm_new,width=0.5\linewidth,clip=} & \hspace{40pt}\epsfig{file=MCEM_K20_wcov_comm_new,width=0.5\linewidth,clip=} \\ \end{tabular} \vspace{-5em} \caption{Community detection plots for parallel MCEM with and without covariates, respectively. The two clusterings are very similar, although the one with covariates (left) appears to be less noisy than the one without covariates (right).}\label{fig:community} \end{figure} Further, we employ the Normalized Mutual Information (NMI) to compare the two sets of clusters. The NMI between the two clusterings (with and without covariates) is 0.8071, which indicates that the two clusterings are quite close, i.e., the effect of the covariates on the clustering of the individuals into groups is not strong. \section{Conclusion} Large heterogeneous network data are ubiquitous in many application domains. The SBM framework is useful in analyzing networks with a community/group structure. Often, the interest lies in extracting the underlying community structure (inferring the latent membership vector $z$) in the network, whereas in other situations (where the observed network can be thought of as a sample from a large population) the interest lies in the estimation of the model parameters $(\theta,\beta,\pi)$. There are certainly fast methods (e.g. the pseudo-likelihood based method in \citep{amini2013}) available for community detection in large networks, but these approximations are not readily applicable to settings where covariate information is also available for the nodes. Further, the comparison with some of the existing latent space models with covariates reveals that in certain settings (for sparse networks and in cases where communities interact heavily) our proposed algorithm performs much better than the existing ones. Obtaining maximum likelihood estimates in a large SBM with covariates is computationally challenging. Traditional approaches like MCEM become computationally infeasible, and hence there is a need for fast computational algorithms. Our proposed algorithm provides a solution in this direction. The proposed parallel implementation of case-control MCEM across different cores with communication offers the following advantages: (1) fast computation of the ML estimates of the model parameters, by reducing the EM update cost from $O(n^2M_r)$ to $O(Km_0n_0M_r)$, where $Km_0$ is the case-control sample size and $n_0$ the size of the subsampled networks; (2) the parallel version with communication also exhibits further benefits over its non-communication counterpart, since it provides a bias reduction of the final estimates. It is evident from the results in Section \ref{simresults} that the communication-based variant performs much better than the non-communication one when compared to the MCEM on the full data. \section{Supplementary Materials} \begin{description} \item\hspace{20pt} We provide the Matlab codes for the simulations and the real data analysis in the supplementary materials. The Rice University dataset is also provided there.
We also provide two additional figures: (a) the degree distribution of the Rice University network and (b) a plot of the estimated class probabilities for the covariate model, inside the supplementary material. All the materials are zipped into a file named \verb|supp_materials.zip|. This file includes a detailed readme file that describes the contents and instructs the reader on their use. The readme file also contains diagrammatic representations of the two parallel algorithms. All the supplementary files are contained in a single archive and can be obtained via a single download. \end{description} \bibliographystyle{Chicago}
\section{Introduction} Recently, data has come to affect every field in the world as its commercial value is gradually being recognized. It plays an important role in commercial ecosystems where many producers sell data such as movies, novels, and videos for profit. Meanwhile, artificial intelligence takes data as raw material, generating useful data for further usage. To facilitate data exchange, data trading intermediaries (also known as dealers, agents, or data executors) have recently proliferated \cite{liu2019novel}, and many of them earn agency fees by mediating between sellers and consumers. Consumers, sellers, and dealers together form a thriving data trading ecosystem. In a traditional data trading ecosystem, a consumer has to pay the seller for data acquisition, and the seller makes a profit by providing the appropriate data. Fairness and security are both guaranteed by a trusted dealer. However, such a high degree of centralization is likely to become a weak spot open to attack. On the one hand, any participating role may act maliciously in an unsupervised system. In a transaction, sellers may provide fake data for profit, as they may not own the data they claim to. The consumer may refuse to pay after receiving the appropriate data. The service providers (dealers), such as cloud service providers, may manipulate the stored data without permission from users \cite{wang2011toward}\cite{zhu2011dynamic}. On the other hand, relying on centralized servers entails heavy communication and security pressure, greatly constraining the efficiency of the entire trading system. A single-point service provider may lead to an unpleasant downloading experience due to its limited resources, such as bandwidth, and may fail to cope with an environment of large-scale downloading. Moreover, the service providers offering download services may fabricate data to earn profits. Rewarding or punishing them for their (mis)behaviors is difficult due to the absence of valid evidence, and an inappropriate incentive may decrease their willingness to act honestly. These challenges lead to the following question, \smallskip \textit{Is it possible to propose a protocol for the data trading system with guaranteed fairness for all participating parties without significantly compromising efficiency?} \smallskip Traditional solutions \cite{kupccu2010usable}\cite{micali2003simple} provide countermeasures by using cryptographic technologies. This series of schemes implicitly relies on a strong assumption of a trusted third party (TTP). This means a TTP must obey all the predefined rules and act honestly. However, realizing a completely fair exchange protocol without a TTP is impossible \cite{pagnia1999impossibility}, and it is widely acknowledged that finding such a TTP is hard in practice. Instead of using the seemingly old-fashioned cryptographic \textit{gradual release} method\footnote{The \textit{gradual release} method means each party in the game takes turns releasing a small part of the secret. Once a party is detected doing evil, the other parties can stop immediately and recover the desired output with similar computational resources.} \cite{blum1983exchange}\cite{pinkas2003fair}, many solutions \cite{dziembowski2018fairswap}\cite{he2021fair}\cite{shin2017t}\cite{choudhuri2017fairness}\cite{kiayias2016fair}\cite{eckey2020optiswap} have been proposed by leveraging blockchain technologies \cite{wood2014ethereum} for better decentralization.
Blockchain provides a public bulletin board for every participating party, with persistent evidence. Any malicious activity can be traced back later for compensation claims. Based on such considerations, we adopt the blockchain technique as our baseline solution. Blockchain brings benefits to our fair exchange protocol in three ways. Firstly, thanks to the decentralized character of blockchain, this mechanism ensures that the platform always acts benignly and follows the agreed principles. A normally operating blockchain platform greatly reduces risks such as single-point failures or compromise by adversaries. Secondly, exposing parts of the data on the blockchain as public information decreases the risk of sellers providing data far from its description (which is called \textit{fake data} below). The content of these parts can be verified by potential consumers, so that fake data cannot be accepted and will not create profits. Thirdly, using distributed and peer-to-peer optimizations as references, we also allow any participant who owns a data backup to provide downloading services for extra revenue. Consumers can download from multiple dealers directly and accordingly pay some fees to these self-service providers. This incentivizes the involved participants. Specifically, we implement our scheme with the strict logic of fair exchange on the Hyperledger blockchain \cite{androulaki2018hyperledger} to ensure exchange fairness, which is threatened from four directions: \textit{(i)} the seller may provide \textit{fake data}, \textit{(ii)} the seller may provide \textit{a false decryption key}, \textit{(iii)} the consumer may \textit{not pay for authentic data}, or \textit{(iv)} may \textit{resell data} multiple times. In our design, the deployed blockchain smart contracts regulate that: the data uploaded by users and the related decryption keys from sellers are authentic, whereas fake ones cannot pass the verification (countering \textit{i} and \textit{ii}); a consumer automatically gets charged for the requested data (overcoming \textit{iii}); and every piece of data can only be sold once, since each transaction is unique in the blockchain system (addressing \textit{iv}). Notably, we use a uniqueness index mechanism and compare the Merkle roots of different data to prevent someone from reselling data purchased from others. We only guarantee that the data sold on this platform is unique; reselling through other platforms is beyond this article's scope. \smallskip \noindent\textbf{Contributions.} Overall, implementing a data trading system with both exchange fairness and efficiency for real usage is the key task of this paper. To fill the gap, our contributions are as follows. \begin{itemize} \item We propose BDTS, an innovative data trading system based on blockchain. The proposed scheme realizes exchange fairness for \textit{all} participating parties, namely, the \textit{consumer}, the \textit{seller} and the \textit{service provider}. Each party has to obey the rules, and benign actors can fairly obtain rewards as an incentive. \item We prove the security of our scheme mainly from the economic side, based on game theory. Our game-theoretic proof simulates the behaviors of the different parties in realistic scenarios, effectively capturing their reactions to conflicts as well as the actions they take under competition with peers. The proofs demonstrate that our game reaches a \textit{subgame perfect equilibrium}.
\item We implement our scheme on the Hyperledger blockchain platform with comprehensive evaluations. Experimental results demonstrate the practicability and efficiency of our scheme. Compared to existing solutions with complex crypto-algorithms (such as ZKP), our scheme is sufficiently fast for lightweight devices. \end{itemize} \noindent\textbf{Paper Structure.} Section \ref{sec-prelimi} presents preliminaries. Section \ref{sec-archi} provides a bird's-eye view of our scheme with security assumptions and models. Section \ref{sec-system} details our system. Section \ref{sec-imple} presents our implementation on Hyperledger. Section \ref{sec-security} proves its security, while Section \ref{sec-efficiency} discusses efficiency. Section \ref{sec-discuss} provides further discussions. Section \ref{sec-rw} reviews related work surrounding our system. Appendices A-C provide supplementary details of this work. \section{Preliminary} \label{sec-prelimi} This section introduces the core components of blockchain systems that are closely related to our scheme, including the smart contract and the Merkle tree. Also, we give a snapshot of the concept of Nash equilibrium used in our security proofs. \smallskip \noindent\textbf{Smart Contract.} The smart contract, proposed by Szabo \cite{szabo1996smart}, aims to enable the self-execution of clauses and logic. Since the emergence of Ethereum \cite{wood2014ethereum}, the concept of the smart contract has become prevalent \cite{dwivedi2021legally}\cite{li2022sok}. It is a program deployed and executed on a blockchain, with the benefits of decentralization, automation, and unforgeability. A normally operating blockchain system can trigger the state transitions of the contract, and miners guarantee the consistency of the state across different nodes. A smart contract can be seen as a procedure that is executed by almost all miners, where the result agreed upon by more than half of them is considered the final result. Also, miners will verify the results and reject false ones. Thus, the smart contract can hardly be falsified, and it is considered to be sufficiently secure in the blockchain ecosystem. Technically, in Ethereum, a smart contract consists of a 160-bit address named the \textit{contract account}, the \textit{runtime}, the \textit{bytecode}, and a set of related \textit{transaction}s \cite{delmolino2016step}. After being written and compiled into bytecode, a smart contract has to be deployed so that miners can record and then invoke it. The deployer builds a transaction (Tx) with the bytecode, a signature, and so on. Then, the deployer sends the Tx to miners, waiting for them to pack it. A miner will verify the signature and then pack the Tx into a block so that every node can reach an agreement on the same state. After being deployed, the smart contract gets its unique account address, and everyone can interact with it by sending another Tx to that address. This latter Tx will be packed into a block as well, and every miner will execute the smart contract in the Ethereum Virtual Machine (EVM). The result will be recorded and confirmed by miners. \smallskip \noindent\textbf{Merkle Tree.} A Merkle tree is a tree structure designed in a layered architecture. Every parent node (the top one is denoted as the \textit{root}) stores the hash of its child nodes, while the leaf nodes in the bottom layer store the hashes of the data. A typical Merkle tree construction consists of three algorithms, namely, $\mathsf{Mtree}$, $\mathsf{Mproof}$, and $\mathsf{Mvrfy}$.
\begin{wraptable}{r}{3.5cm}
\centering
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{\textit{Strategy}} & \multicolumn{2}{c}{$p_i$} \\
\multicolumn{2}{c|}{} & \textbf{A} & \textbf{B} \\
\midrule
\multirow{2}{*}{$p_j$} & \textbf{A} & 1,1 & 1,-1 \\
 & \textbf{B} & -1,1 & 0,0 \\
\bottomrule
\end{tabular}
\end{wraptable}

\smallskip
\noindent\textbf{Game Theory.} Game theory provides an ideal environment to simulate the interactions among rational decision-makers (also known as \textit{player}s). It utilizes a set of mathematical tools to derive every possible strategy and its corresponding impact. Each decision-maker in the game aims to maximize its own utility while also considering the strategies of others. The game reaches a Nash equilibrium if no player can gain higher utility by altering its strategy while the others keep theirs unchanged. For instance, in the most generic model (the static non-cooperative game \cite{harker1991generalized}) with the payoff matrix shown in the table, if player $p_i$ adopts strategy \textbf{A} whereas player $p_j$ selects \textbf{B} (or vice versa), the two cannot both obtain the maximum utility. If $p_i$ and $p_j$ both adopt \textbf{A}, the system reaches a Nash equilibrium. Such an equilibrium can be checked mechanically, as sketched below. Our analysis is based on a dynamic sequential game model \cite{fang2021introduction}, in which a follower acts later than the preceding player. Our security goal is to make each participant adopt an honest strategy so that the game finally achieves the subgame perfect Nash equilibrium \cite{moore1988subgame}, which is also the Nash equilibrium of the entire system.
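As an illustration, the pure-strategy equilibria of the $2\times2$ game above can be found by brute force. The following sketch enumerates all strategy pairs and checks every unilateral deviation; on the matrix above, it prints the single equilibrium $(\textbf{A},\textbf{A})$.

\begin{verbatim}
// Brute-force search for pure-strategy Nash equilibria of
// the 2x2 game above (players p_i and p_j, strategies A, B).
package main

import "fmt"

func main() {
    // payoff[si][sj] = {utility of p_i, utility of p_j},
    // with strategy index 0 = A and 1 = B.
    payoff := [2][2][2]int{
        {{1, 1}, {1, -1}},
        {{-1, 1}, {0, 0}},
    }
    for si := 0; si < 2; si++ {
        for sj := 0; sj < 2; sj++ {
            best := true
            for d := 0; d < 2; d++ { // unilateral deviations
                if payoff[d][sj][0] > payoff[si][sj][0] ||
                    payoff[si][d][1] > payoff[si][sj][1] {
                    best = false
                }
            }
            if best {
                fmt.Printf("Nash equilibrium: (%c,%c)\n",
                    'A'+si, 'A'+sj)
            }
        }
    }
}
\end{verbatim}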
\section{Architecture and Security Model}
\label{sec-archi}

In this section, we first introduce the system targets and architecture. Then, we provide the security assumptions and the security model.

\subsection{Problem Overview}

\smallskip
\noindent\textbf{Entities.} First of all, we clarify the participating roles in our scheme. A total of three types of entities are involved: the \textit{consumer (CM)}, the \textit{seller (SL)}, and the \textit{service provider (SP)}\footnote{Service providers play the role of centralized authorities, such as dealers and agencies, in a traditional fair exchange protocol.}. Consumers pay for data and downloading services with cryptocurrencies such as Ether. Sellers provide encrypted data and expose segments of the divided data when necessary to guarantee correctness. Service providers undertake the download services; any participant who stores the encrypted data can be regarded as a service provider. Miners and other entities participating in the system are omitted, as they are out of the scope of this paper.

\smallskip
\noindent\textbf{Problem Statement.} Then, we give a quick summary of the problems existing in current data trading systems.

\noindent\hangindent 1em \textit{The fairness of data trading cannot be guaranteed}. The fairness of a data trading system is twofold: the data providers (sellers) should be paid according to their contributions, and the data users (consumers) should be charged faithfully. Data reselling is another threat that may harm the sellers' profits.

\noindent\hangindent 1em \textit{The quality of the data may not be as good as the descriptions provided by sellers}. Sellers should provide authentic data, just as they claim. False advertising damages the interests of consumers, and preventing sellers from providing lousy data is a challenge as well.

\noindent\hangindent 1em \textit{Rewarding or punishing service providers for their behaviors is difficult}. Service providers offer download services to earn profits, but they may manipulate or misuse data. Judging whether a service provider sends appropriate data is a hard task.

\noindent\hangindent 1em \textit{Large-scale downloading requests may lead to high communication pressure}. For trading markets, the trading volume is huge and the efficiency requirements are strict. The traditional single-point model cannot bear this pressure and may reduce efficiency, whereas the involvement of multiple service providers can speed up the download.

\smallskip
\noindent\textbf{Desired Requirements.} To address such issues, the proposed BDTS system should fulfill the following requirements.

\noindent\hangindent 1em \textit{A protocol with guarantees on exchange fairness.} The fairness ensures that both sellers and consumers can faithfully obtain rewards according to their actual activities.

\noindent\hangindent 1em \textit{Every piece of uploaded data should be authentic.} This requirement mainly refers to the on-chain stage, rather than the off-chain phase. The system should guarantee that the data transferred in the system cannot be falsified or destroyed, which ensures that participants exchange high-quality data.

\noindent\hangindent 1em \textit{Incentive designs should be considered to avoid unnecessary frictions.} Economic incentives are important for encouraging the involved parties to act honestly.

\noindent\hangindent 1em \textit{The system should operate with satisfactory performance.} Regarding high-volume data such as streaming data (video/audio), the system should have enough bandwidth and sufficient efficiency.

\subsection{Architecture}

We design a novel data trading ecosystem built on top of a blockchain platform. A typical workflow in BDTS is as follows: \textit{sellers upload their encrypted data and descriptions to service providers; service providers store the received data and establish download services; consumers decide which pieces of data to purchase based on the related descriptions; at last, the consumer downloads from service providers and pays the negotiated prices to the sellers and service providers}. Our fair exchange scheme ensures that every participant can exchange data for payments without cheating or sudden denial. The detailed execution flow is presented as follows.

\noindent\hangindent 1em \textit{\textbf{Data Upload.}} The seller first sends his description of the data to the blockchain. The description and other information, such as the Merkle root of the data, are recorded by the blockchain. Here, the seller must broadcast the Merkle root and is demanded to provide some parts in plaintext form. Service providers can decide whether they are going to store the data based on the stated information. At this time, the seller waits for the decision of the service providers.
If a service provider decides to store the encrypted data to earn future downloading fees, he first sends his information to the blockchain. The seller then posts the encrypted data to the service provider, and the service provider starts to store it. Notably, the seller can also become a self-maintained service provider if he can build up similar basic services.

\noindent\hangindent 1em \textit{\textbf{Data Download.}} The consumer decides whether to download according to the description and the exposed parts provided by the seller. Before downloading, the consumer should first deposit enough tokens in the smart contract. Then, the consumer requests the data from the service providers. A service provider re-encrypts the stored data with his private key and sends it to the consumer. For security and efficiency, these processes are executed via smart contracts, except for the data encryption and downloading.

\noindent\hangindent 1em \textit{\textbf{Decryption and Appealing.}} At last, the consumer pays for the data and gets the corresponding decryption keys. The service provider and the seller provide their decryption keys separately; the keys are broadcast through the blockchain so that they cannot be tampered with. The consumer can appeal if he confronts any issue: according to the predefined rules, the smart contract arbitrates based on the evidence provided by the consumer.

\subsection{Security Assumption}

In this scheme, we make three basic security assumptions.

\noindent\hangindent 1em \textit{The blockchain itself is secure.} Our scheme operates on a secure blockchain model with well-promised \textit{liveness} and \textit{safety} \cite{garay2015bitcoin}. Meanwhile, miners are considered to be honest but \textit{curious}: they execute smart contracts correctly but may be curious about the plaintext recorded on-chain.

\noindent\hangindent 1em \textit{The underlying cryptographic algorithms are secure.} This assumption indicates that the encryption and decryption algorithms will not suffer major attacks that may destroy the system. Specifically, AES and the elliptic curves used for asymmetric encryption are supposed to be sufficiently secure in cryptography.

\noindent\hangindent 1em \textit{Participants in this scheme are rational.} Following the assumption of game theory, all players (consumer, seller, and service provider) are assumed to be rational: the three types of players pursue profits within the legal scope.

\subsection{Security Model}\label{subsec-secritymodel}

Before analyzing the game among \textit{consumers}, \textit{sellers}, and \textit{service providers}, we dive into the strategies of each party.

\noindent\hangindent 1em \textit{\textbf{Seller}}. Sellers intend to obtain more payments by selling their data. In our scheme, a seller needs to provide mainly three items: the \textit{data}, its \textit{description}, and the \textit{decryption key}. To earn profits, a seller would claim that the data is popular and deserves to be downloaded, but he may provide fake data. From a commercial perspective, the truthfulness of a data description is hard to verify, yet consumers pay for the data merely by relying on it. The exchange is deemed \textit{fair} if consumers obtain authentic data that matches the claimed description; the seller then receives the rewards. The decryption key is another component provided by the seller: only the correct decryption key can decrypt the encrypted data, whereas a false one cannot.
In summary, there are four potential strategies for a seller: a) \textit{matched data (towards the corresponding description) and matched key (towards the data)}, b) \textit{non-matched data but matched key}, c) \textit{matched data but non-matched key}, and d) \textit{non-matched data and non-matched key}.

\noindent\hangindent 1em \textit{\textbf{Consumer}}. Consumers intend to exchange their money for data and downloading services; downloading the ciphertext and decrypting it to obtain the plaintext is in their favor. In our scheme, consumers pay the related fees and then download the encrypted data from the service providers who store it. To earn profits, they would like to obtain the data without paying for it, or by paying less than expected; from this selfish viewpoint, paying the full price for the data is a sub-optimal choice. The payment of a consumer is divided into two parts: paying the seller for the decryption key and paying the service provider for the downloading service. Based on that, there are four strategies for a consumer: a) \textit{pay both the seller and the service provider sufficiently}, b) \textit{pay the seller insufficiently but the service provider sufficiently}, c) \textit{pay the seller sufficiently but the service provider insufficiently}, and d) \textit{pay both insufficiently}.

\noindent\hangindent 1em \textit{\textbf{Service Provider}}. Service providers intend to provide the downloading service and earn profits; they obtain a downloading fee paid by consumers instead of a conventional storage fee. Service providers receive the encrypted data uploaded by sellers and then provide downloading services to consumers. For uploading, a service provider can choose whether to store the data or not; here, a seller can act as a service provider himself if he can provide similar storage and download services. For downloading, the service provider provides the encrypted data and the corresponding decryption key. The strategies of a service provider are listed as follows: a) \textit{authentic data and matched key}, b) \textit{fake data but matched key}, c) \textit{authentic data but non-matched key}, and d) \textit{fake data and non-matched key}. Strategies a) and c) presuppose that the provider has stored the seller's data.

\noindent For security, the ideal outcome is a Nash equilibrium covering all participants: sellers adopt the \textit{matched data and matched key} strategy, consumers \textit{pay both the seller and the service provider sufficiently}, and the service providers who offer storage adopt the \textit{authentic data and matched key} strategy (discussed in Sec.\ref{sec-security}).

\section{The BDTS Scheme}
\label{sec-system}

In this section, we provide the concrete construction. To achieve the security goals discussed above, we propose our blockchain-based trading system, BDTS. It includes four stages: \textit{contract deployment}, \textit{encrypted data uploading}, \textit{data downloading}, and \textit{decryption and appealing}. Our scheme involves three types of contracts. Here, we omit procedures such as signature verification and block mining, since they are common knowledge.

\subsection{Module Design}

The system contains three types of smart contracts: the seller-service provider matching contract (SSMC), the service provider-consumer matching contract (SCMC), and the consumer payment contract (CPC). An illustrative sketch of their responsibilities is given after this list.

\noindent\hangindent 1em \textit{\textbf{Seller-service provider matching contract.}} SSMC records the description and the Merkle root of the data. The seller is required to broadcast certain parts of the data, whose indexes are randomly generated by the blockchain; these indexes cannot be changed once they have been identified. Last, SSMC matches service providers for every seller.

\noindent\hangindent 1em \textit{\textbf{Service provider-consumer matching contract.}} SCMC helps service providers and consumers reach agreements. It receives and stores the consumers' requests, including the requested data and the related information, and requires consumers to send the payment, which is then forwarded to CPC for the next step.

\noindent\hangindent 1em \textit{\textbf{Consumer payment contract.}} CPC directs consumers to pay for the data and directs sellers to provide the decryption keys. It achieves the fair exchange between the decryption key and the payment, as well as between the download and the payment.
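The following Go sketch summarizes the responsibilities of the three contracts as interfaces. The method names and signatures are our own shorthand for the interactions detailed in the following subsections, not the actual chaincode API of our implementation.

\begin{verbatim}
// Illustrative interfaces for the three contracts; the
// method names and signatures are shorthand only.
package contracts

// SSMC: seller-service provider matching contract.
type SSMC interface {
    // Record the description, roots r_d and r_ed, deposit.
    RegisterSeller(desc string, size int,
        rD, rED []byte, deposit uint64) error
    // Expose the randomly indexed pieces with proofs.
    RevealPieces(pieces [][]byte, proofs [][][]byte) error
    // Match a registered service provider with a seller.
    MatchProvider(sellerAddr, spAddr, dataID string) error
}

// SCMC: service provider-consumer matching contract.
type SCMC interface {
    // Escrow the consumer's tokens for one download request.
    Request(spIP, cmAddr, dataID string, tokens uint64) error
}

// CPC: consumer payment contract.
type CPC interface {
    SubmitKey(encKey []byte) error // Enc_{Pub_cm}(K)
    // Evidence (Pri_cm, i, Enc_{K_sp}(D_i)) for arbitration.
    Appeal(priCM []byte, i int, encDi []byte) error
    Settle() error // route tokens by verification result
}
\end{verbatim}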
\subsection{Encrypted Data Upload}

In this module, a seller registers on SSMC and exposes some parts of his data, and the service provider stores the encrypted data (cf. Fig.\ref{fig:combined}.a).

\noindent\hangindent 1em \textit{Step1}. When a seller expects to sell data for profits, he should first divide the data into several pieces and encrypt them separately with different keys (denoted as $K_i$, where $i=1,2,...,n$), which are generated based on a master key $K$. Each piece should be individually meaningful so that others can judge the quality of the full data from the received segments. Here, $D_i=Enc^{AES}_{K_i}(Data_i)$ is the encrypted data (a sketch of this step is given after this list).

\noindent\hangindent 1em \textit{Step2}. The seller sends a registration demand in the form of a transaction. The registration demand includes the seller's information and the data description. The seller's information consists of $A_{seller}$ and $IP_{seller}$. The data description includes four main parts: the \textit{content description}, the \textit{data size}, \textit{the root} $r_d$, and \textit{the root} $r_{ed}$. Here, $r_d$ is the root of $M_d$ and $r_{ed}$ is the root of $M_{ed}$, where $M_d=\mathsf{Mtree}(Data_1,Data_2,...,Data_n)$ and $M_{ed}=\mathsf{Mtree}(D_1,D_2,...,D_n)$; they will be recorded in SSMC. The deposit is also sent in this step. SSMC rejects the request if the corresponding $r_d$ is the same as that of previously recorded data; this mechanism prevents reselling on the blockchain platform.

\noindent\hangindent 1em \textit{Step3} and \textit{Step4}. After approving the seller's registration demand, SSMC stores the useful information. The blockchain generates the hash of the next block, which is used as a public random $seed$.

\noindent\hangindent 1em \textit{Step5} and \textit{Step6}. The seller runs $\mathsf{Rand}(seed)$ to get $I_{rand}$ and provides $(Data_{I_{rand}},P_{d_i},P_{ed_i})$ to SSMC, where $P_{d_i}=\mathsf{Mproof}(M_d,i)$ and $P_{ed_i}=\mathsf{Mproof}(M_{ed},i)$. Then, the contract SSMC checks $\mathsf{Mvrfy}(i,r_d,Data_i,P_{d_i})==1$ and $\mathsf{Mvrfy}(i,r_{ed},D_i,P_{ed_i})==1$; if not, SSMC stops the execution and returns an error. Then, the exposed pieces of data are compared to the pieces of other data by utilizing the uniqueness index \cite{chen2017bootstrapping}. Data plagiarism results in the loss of the deposit, preventing reselling behavior.

\noindent\hangindent 1em \textit{Step7} and \textit{Step8}. The service provider (SP) registration demand consists of $IP_{sp}$, $A_{sp}$, and $ID_{data}$.

\noindent\hangindent 1em \textit{Step9}, \textit{Step10}, and \textit{Step11}. The seller sends the encrypted data and the Merkle proofs to the service provider according to $IP_{sp}$ and confirms the service provider's registration demand, so that the corresponding service provider can participate in the next stage.
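For concreteness, the following sketch illustrates the seller-side sharding encryption of \textit{Step1}. The derivation of each $K_i$ as the hash of $K$ concatenated with $i$, as well as the use of AES-GCM, are our own illustrative choices; the scheme only fixes that each piece $Data_i$ is encrypted under a key $K_i$ generated from the master key $K$.

\begin{verbatim}
// Illustrative seller-side sharding encryption (Step 1).
// Assumptions: K_i = SHA-256(K || i) and AES-GCM; both are
// example choices, not mandated by the scheme.
package seller

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "crypto/sha256"
    "encoding/binary"
)

// deriveKey derives the per-piece key K_i from the master K.
func deriveKey(master []byte, i uint32) []byte {
    buf := make([]byte, 4)
    binary.BigEndian.PutUint32(buf, i)
    h := sha256.Sum256(append(append([]byte{}, master...), buf...))
    return h[:] // 32 bytes, i.e., AES-256
}

// ShardEncrypt splits data into n pieces and returns
// D_i = Enc_{K_i}(Data_i) for i = 0..n-1.
func ShardEncrypt(master, data []byte, n int) ([][]byte, error) {
    size := (len(data) + n - 1) / n
    out := make([][]byte, 0, n)
    for i := 0; i < n; i++ {
        lo, hi := i*size, (i+1)*size
        if hi > len(data) {
            hi = len(data)
        }
        block, err := aes.NewCipher(deriveKey(master, uint32(i)))
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            return nil, err
        }
        // The nonce is stored alongside the ciphertext.
        out = append(out, gcm.Seal(nonce, nonce, data[lo:hi], nil))
    }
    return out, nil
}
\end{verbatim}

The resulting pieces $D_i$ are exactly the leaves from which $M_{ed}$ is built in \textit{Step2}.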
\subsection{Matching and Data Downloading}

In this module, a consumer registers on SCMC and selects the service providers to download the data from (see Fig.\ref{fig:combined}.b).

\noindent\hangindent 1em \textit{Step1} and \textit{Step2}. The consumer queries the data description from the corresponding service provider. Once the consumer receives the feedback from SSMC, he selects data according to his requirements.

\noindent\hangindent 1em \textit{Step3}, \textit{Step4}, and \textit{Step5}. The consumer stores the tuple $(IP_{sp},A_{cm},ID_{data})$ on SCMC and sends enough tokens to pay for the download service. These tokens will be forwarded to CPC and, if the service provider or the seller cheats in this transaction, will be returned to the consumer. When receiving the demand, SCMC queries SSMC with $ID_{data}$ to obtain the price and the data size. Then, SCMC verifies $Tkn_{cm} \geq price+size\times unit\,price$. Failed transactions are discarded, while the rest are broadcast. The consumer can determine which parts of the data to download, and from which service providers, by giving the index $i$ and the corresponding address.

\noindent\hangindent 1em \textit{Step6}, \textit{Step7}, and \textit{Step8}. The consumer contacts the service provider based on $IP_{sp}$, which is received in \textit{Step2}. In \textit{Step7}, the service provider encrypts the data $D$ with a random key $K_{sp}$. The service provider calculates $M_{eed}$, where $M_{eed}=\mathsf{Mtree}(Enc^{AES}_{K_{sp}}(D_1), Enc^{AES}_{K_{sp}}(D_2),...,Enc^{AES}_{K_{sp}}(D_n))$, with the Merkle root $r_{eed}$, and uploads $P_{eed_i}$, where $P_{eed_i}=\mathsf{Mproof}(M_{eed},i)$ and $i$ is the index.

\noindent\hangindent 1em \textit{Step9}. The selected service provider information is provided in this step. It is composed of $A_{sp}$ and the indexes of the pieces to be downloaded from the corresponding service providers. For efficiency, the consumer can download the data from different service providers in parallel, as sketched after this list.

\noindent\hangindent 1em \textit{Step12}, \textit{Step13}, and \textit{Step14}. The consumer acquires the above information from SCMC. After the service provider sends $Enc^{AES}_{K_{sp}}(D_i)$ and $P_{eed_i}$ to the consumer, the consumer verifies whether $\mathsf{Mvrfy}(i,r_{eed},Enc^{AES}_{K_{sp}}(D_i),P_{eed_i})==1$. If the (double-)encrypted data cannot pass this verification, it is considered erroneous, and the consumer, as a result, will not execute \textit{Step14}.
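The following consumer-side sketch shows how the pieces can be fetched from several service providers concurrently. The helper \texttt{fetchPiece} is a hypothetical stand-in for the off-chain transfer of one piece from one provider; it is not part of our contracts.

\begin{verbatim}
// Illustrative parallel download of pieces from multiple
// service providers; fetchPiece abstracts the network I/O.
package consumer

import "sync"

type Provider struct{ Addr string }

// fetchPiece downloads the (double-)encrypted piece i from
// one provider; the actual network I/O is omitted here.
func fetchPiece(p Provider, i int) []byte {
    return nil // placeholder
}

// ParallelDownload assigns piece i to provider i mod m and
// fetches all n pieces concurrently with goroutines.
func ParallelDownload(providers []Provider, n int) [][]byte {
    pieces := make([][]byte, n)
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            pieces[i] = fetchPiece(providers[i%len(providers)], i)
        }(i)
    }
    wg.Wait()
    return pieces
}
\end{verbatim}

This round-robin assignment is one simple policy; our evaluation in Section \ref{sec-efficiency} measures how the number of providers affects the download time.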
\subsection{Decryption and Appealing}

In this module, the consumer pays both the service provider and the seller (see Fig.\ref{fig:combined}.c).

\begin{itemize}
    \item[-] \textit{Step1}, \textit{Step2}, \textit{Step3}. SCMC transfers the tokens and $(A_{cm}, A_{sp}, ID_{data})$ to CPC. The consumer generates a key pair $(Pub_{cm}, Pri_{cm})$ and broadcasts $Pub_{cm}$ to CPC. CPC then waits for the service provider to submit $Enc_{Pub_{cm}}(K_{sp})$. Upon obtaining it, CPC waits for a while for a possible appealing process; if no appeal occurs, CPC sends the tokens to the service provider directly.
    \item[-] \textit{Step4}, \textit{Step5}, and \textit{Step6}. The consumer obtains $Enc_{Pub_{cm}}(K_{sp})$ from CPC and decrypts it with his private key. He then decrypts the downloaded data with $K_{sp}$ to get $D_i$. If $\mathsf{Mvrfy}(i,r_{ed},D_i,P_{ed_i})\neq 1$, the consumer executes the appealing phase. The appeal contains $(Pri_{cm},i, Enc^{AES}_{K_{sp}}(D_i))$; here, $(Pub_{cm},Pri_{cm})$ is freshly generated in every download process, so revealing $Pri_{cm}$ in an appeal is harmless.
    \item[-] \textit{Step7}, \textit{Step8}, \textit{Step9}. CPC calculates both $K_{sp}$ and $D_i$, where $K_{sp} = Dec_{Pri_{cm}}(Enc_{Pub_{cm}}(K_{sp}))$ and $D_i=Dec^{AES}_{K_{sp}}(Enc^{AES}_{K_{sp}}(D_i))$. Then, CPC verifies whether $\mathsf{Mvrfy}(i,r_{ed},D_i,P_{ed_i})=1$. If the decrypted piece fails this verification, the appeal is valid and CPC withdraws the tokens, refunding the consumer; otherwise, CPC pays the service provider.
\end{itemize}

\noindent Paying the seller is similar to paying the service provider; the differences mainly concentrate on \textit{Step2}, \textit{Step3}, \textit{Step4}, \textit{Step7}, and \textit{Step8} (see Fig.\ref{fig:combined}.d).

\begin{itemize}
    \item[-] \textit{Step2}, \textit{Step3}, and \textit{Step4}. The consumer generates a new public-private key pair $(Pub_{cm},Pri_{cm})$ and broadcasts $Pub_{cm}$ to CPC. After listening to CPC to get $Pub_{cm}$, the seller calculates $Enc_{Pub_{cm}}(K_{seller})$ and sends it to CPC.
    \item[-] \textit{Step7} and \textit{Step8}. During the appealing phase, the consumer relies on his private key to prove his ownership. CPC verifies the encryption of the corresponding data, similarly to the steps of paying the service provider; the verification determines the token flow. A simplified sketch of this arbitration logic is given below.
\end{itemize}
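The following sketch condenses the arbitration of \textit{Step7}-\textit{Step9} into a single function. The decryption primitives and the Merkle verifier are passed in as parameters, since the concrete algorithms (sketched earlier) are orthogonal to the arbitration itself; the on-chain token transfer is reduced to a boolean outcome.

\begin{verbatim}
// Simplified arbitration logic run by CPC on an appeal.
// All cryptographic primitives are injected as parameters;
// the token transfer is abstracted into the return value.
package cpc

// Arbitrate returns true if the appeal is upheld (tokens go
// back to the consumer) and false if the provider is paid.
func Arbitrate(
    priCM []byte, // consumer private key from the appeal
    encKsp []byte, // Enc_{Pub_cm}(K_sp) recorded on-chain
    encDi []byte, // Enc_{K_sp}(D_i) submitted by the consumer
    i int, rED []byte, proof [][]byte, // on-chain Merkle data
    decAsym func(priv, ct []byte) []byte, // asymmetric dec.
    decAES func(key, ct []byte) []byte, // symmetric dec.
    mvrfy func(i int, root, leaf []byte, proof [][]byte) bool,
) bool {
    ksp := decAsym(priCM, encKsp) // recover K_sp
    di := decAES(ksp, encDi)      // recover D_i
    // If D_i does not match the on-chain root r_ed, the
    // provider (or seller) cheated, so the appeal is valid.
    return !mvrfy(i, rED, di, proof)
}
\end{verbatim}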
\section{Implementations}
\label{sec-imple}

In this section, we describe the detailed implementation of three major functions: \textit{sharding encryption}, which splits a full message into several pieces; \textit{product matching}, which shows the progress of finding a targeted product; and \textit{payment}, which presents the way each participant gets paid. Our full practical implementation is based on the Go language, with 5 files realizing the major functions of each contract, and can be operated on the Hyperledger platform\footnote{Source code: Hidden due to the anonymity policy.}. We provide the implementation details in Appendix A.

\section{Security Analysis}
\label{sec-security}

In this section, we provide an analysis of BDTS based on game theory. The basic model of our solution is a \textit{dynamic sequential game} with \textit{multiple players}, and the analyses are based on \textit{backward induction}. We prove that our model achieves a subgame perfect Nash equilibrium (SPNE) if all participants behave honestly. Specifically, our proposed scheme involves three types of parties: the seller (SL), the service provider (SP), and the consumer (CM), as shown in Fig.\ref{fig-game} (\textit{left}). These parties act one by one, forming a sequential game in which a following party can learn the actions of the previous ones. Specifically, an SL first uploads the data with the corresponding encryption key to the SP (workflow in the \textit{black} line). Once receiving the data, the SP encrypts the data with his private key, stores it locally, and puts the related information on-chain. A CM searches online to find products and pays for the favorite ones, both to the SP and to the SL, via smart contracts (in the \textit{blue} line). Last, the SP sends the stored data and the related keys to the CM (in the \textit{brown} line). Based on that, we define our analysis model as follows.

\begin{defi} Our SL-SP-CM system forms an extensive game denoted by $$\mathcal{G}=\{ \mathcal{N},\mathcal{H},\mathcal{R},P,u_i \}.$$ Here, $\mathcal{N}$ represents the participating players, where $\mathcal{N}=\{SL,SP,CM\}$; $\mathcal{R}$ is the strategy set; $\mathcal{H}$ is the set of histories; $P$ is the player function $P: \mathcal{H} \xrightarrow{} \mathcal{N}$, assigning to each non-terminal history the player who acts next; and $u_i$ is the payoff function. \end{defi}

Each participating party has four strategies, as defined in Sec.\ref{subsec-secritymodel} (\textit{security model}). SL acts on both the uploaded data and the related decryption keys (AES, for the raw data), forming his strategy set $\mathcal{R}_{SL}=\{a,b,c,d\}$. Similarly, CM has strategies $\mathcal{R}_{CM}=\{e,f,g,h\}$ for his payments to SL and SP, and SP has strategies $\mathcal{R}_{SP}=\{i,j,k,l\}$ for his actions on the downloaded data and the related keys. We list them in Tab.\ref{tab:game}. However, this is not enough for a quantitative analysis, as the costs of these actions are unknown. According to market prices and operation costs, we suppose that a piece of raw data is worth $10$ \textit{unit}s, while generating a key costs $1$ \textit{unit}; the service fee during a transaction is $1$ \textit{unit} for each party. On this basis, we provide the cost of each strategy in Tab.\ref{tab:game}. The parameters $x$ and $y$ are the actual payments from CM, where $0\leq x<20$, $0\leq y<4$, and $x+y<24$.

\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{img/game.png}
\caption{Game and Game Tree}
\label{fig-game}
\end{figure}

Then, we dive into the history set $\mathcal{H}$, which reflects the strategies conducted by all parties; for instance, the history $aei$ represents all parties performing honestly. There are in total 64 possible combinations (calculated by $64=4\times4\times4$) based on the sequential steps of SL, CM, and SP. We provide the game tree in Fig.\ref{fig-game} (\textit{right}) and omit its detailed description due to its intuitive construction. Our analysis builds on these fundamental definitions. We separately show the optimal strategy (with maximum rewards) for each party, and then show how to reach the subgame perfect Nash equilibrium, which is also the Nash equilibrium of the entire game. Before diving into the details of each subgame, we first derive a series of lemmas.

\begin{lem}\label{lma-seller} If a seller provides data that does not correspond to the description, the seller cannot obtain the payment. \end{lem}

\begin{prf} The description and the Merkle root of the data are broadcast before the generation of the random indexes. Once the seller completes the registration, the blockchain generates a random index. The exposed pieces are required to match the Merkle roots, so the seller cannot provide fake ones; meanwhile, these pieces ensure that the data conforms to the description. Otherwise, consumers will not pay for the content, and service providers will not store it, either. \qed \end{prf}

\begin{lem}\label{lma-sellerkey} If a seller provides a decryption key that does not match the data, the seller cannot obtain the payment. \end{lem}

\begin{prf} The seller encrypts the data (segmented data included) with his private keys. The results of the encryption and the related evidence are recorded by the smart contract, covering the Merkle root of the encrypted data and the Merkle root of the data.
If the seller provides a mismatched key, the consumer cannot decrypt the data and can start the appealing process. As $D_i$ and the receipt are owned by the consumer, if the consumer cannot obtain the correct data, he can appeal with this evidence, and the smart contract automatically judges the appeal. If the submitted evidence is correct and the decryption result cannot match the Merkle root of the data, the smart contract returns the deposited tokens to the consumer. \qed \end{prf}

\begin{table}[!hbt]
\caption{Strategies and Costs}\label{tab:game}
\begin{center}
\resizebox{0.85\linewidth}{!}{
\begin{tabular}{c|cc}
\toprule
\multicolumn{1}{c}{\textit{\textbf{SL Strategy}}} & \multicolumn{1}{|c}{Matched data} & Non-matched data \\
\midrule
Matched key & a, -11 & b, -1 \\
Non-matched key & c, -10 & d, 0 \\
\midrule
\multicolumn{1}{c}{\textit{\textbf{CM Strategy}}} & \multicolumn{1}{|c}{Sufficient (to SL)} & Insufficient (to SL) \\
\midrule
Sufficient (to SP) & e, -24 & f, -(x+4) \\
Insufficient (to SP) & g, -(y+20) & h, -(x+y) \\
\midrule
\multicolumn{1}{c}{\textit{\textbf{SP Strategy}}} & \multicolumn{1}{|c}{Authentic data} & Non-authentic data \\
\midrule
Matched key & i, -2 & j, -1 \\
Non-matched key & k, -1 & l, 0 \\
\bottomrule
\end{tabular}
}\end{center}
\begin{tablenotes}
\footnotesize
\item \quad\quad\quad A cost of $-11$ \textit{unit}s is written as $-11$; this applies to all entries.
\item \quad\quad\quad The data is sold at 20 (to SL), while the service fee is 4 (to SP).
\end{tablenotes}
\end{table}

\begin{lem}\label{lma-cm} A consumer who does not pay sufficiently cannot normally use the data. \end{lem}

\begin{prf} The consumer first sends enough tokens to SCMC, whose contract code is assumed to be safe. The smart contract verifies whether the received tokens are sufficient for the purchase. After the seller and the service provider provide their decryption keys through the smart contract, the consumer can appeal within a certain time; otherwise, the keys are considered correct, and the payments are distributed to the seller and the service provider. \qed \end{prf}

\begin{lem} If a service provider provides data that does not conform to that of the seller, he cannot obtain the payment. \end{lem}

\begin{prf} This proof is similar to that of Lemma \ref{lma-seller}. \qed \end{prf}

\begin{lem}\label{lma-sp} If a service provider provides a decryption key that does not match the data, he cannot obtain the payment. \end{lem}

\begin{prf} This proof is similar to that of Lemma \ref{lma-sellerkey}. \qed \end{prf}

Here, \textbf{Lemma \ref{lma-seller}} to \textbf{Lemma \ref{lma-sp}} establish the payoff of each behavior. Based on these analyses, we can precisely calculate the payoff of each combined strategy in our sequential game. As discussed before, a total of 64 possible combinations exist, and we accordingly calculate the corresponding profits, as presented in Tab.\ref{tab:net}. We demonstrate that the system reaches the subgame perfect Nash equilibrium in the following theorem.

\begin{thm} The game achieves its only subgame perfect Nash equilibrium (SPNE) if all three parties act honestly: the seller uploads matched data and a matched key, the service provider provides authentic data and a matched decryption key, and the consumer purchases with sufficient payments. Meanwhile, this SPNE is also the optimal strategy for the entire system as a Nash equilibrium. \end{thm}

\begin{prf} First, we dive into the rewards of each role, investigating the payoffs under different strategies. For the seller, we observe that the system is not stable (cannot reach a Nash equilibrium) under his individually optimal strategies.
As shown in Tab.\ref{tab:net}, the optimal strategies for the seller (\textit{dei}, \textit{dej}, \textit{dek}, \textit{del}, \textit{dgi}, \textit{dgj}, \textit{dgk}, \textit{dgl}) are to provide mismatched keys and data while still obtaining payments from consumers. However, based on Lemma \ref{lma-seller} and Lemma \ref{lma-sellerkey}, the seller in such cases cannot obtain the payments, due to the punishment from the smart contracts; these strategies turn out to be impractical when launching the backward induction over the subgame tree in Fig.\ref{fig-game}. Similarly, for both consumers and service providers, the system is not stable and cannot reach a Nash equilibrium under their individually optimal strategies. Based on that, we find that the optimal strategy of a single party is not the optimal strategy of the system.

Then, we focus on the strategies with the highest total payoffs (equiv., utilities). As illustrated in Tab.\ref{tab:net} (blue text), the strategies \textit{aei}, \textit{afi}, and \textit{agi} hold the maximal total payoffs, where $u_{aei}=u_{afi}=u_{agi}=7$. For instance, under \textit{aei}, the seller nets $20-11=9$, the consumer pays $24$ for data sold at $20$, netting $-4$, and the service provider nets $4-2=2$, which sums to $7$. These payoffs are greater than those of all competing strategies in the history set $\mathcal{H}$, which means the system reaches a Nash equilibrium under these three strategies. However, multiple Nash equilibria cannot directly yield the optimal strategy, because some of them are impractical. We conduct the backward induction for each game with a Nash equilibrium and find that only one of them is the subgame perfect Nash equilibrium feasible in the real world. Based on Lemma \ref{lma-cm}, a consumer with insufficient payments, either to the seller or to the service provider, cannot successfully decrypt the raw data; he loses all the paid money ($x+y$). This means that both \textit{afi} and \textit{agi} are impractical. With the previous analyses in hand, we finally conclude that only the strategy \textit{aei}, in which all parties act honestly, reaches the subgame perfect Nash equilibrium. This strategy is also the Nash equilibrium for the entire BDTS game. The enumeration underlying Tab.\ref{tab:net} can be reproduced mechanically, as sketched after this proof. \qed \end{prf}
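The following sketch shows how the 64 histories can be enumerated to locate the maximal total payoffs. The payoff function is passed in as a stand-in for Tab.\ref{tab:net}; the lemma-based feasibility filtering performed in the proof above is intentionally left out of this fragment.

\begin{verbatim}
// Illustrative enumeration of the 64 histories of the
// sequential game; payoff stands in for the payoff table.
package game

// BestHistories returns the histories with the maximal
// total payoff over all SL x CM x SP strategy combinations.
func BestHistories(payoff func(h string) [3]int) ([]string, int) {
    best, max := []string{}, -1<<31
    for _, sl := range "abcd" {
        for _, cm := range "efgh" {
            for _, sp := range "ijkl" {
                h := string(sl) + string(cm) + string(sp)
                p := payoff(h)
                total := p[0] + p[1] + p[2]
                if total > max {
                    best, max = []string{h}, total
                } else if total == max {
                    best = append(best, h)
                }
            }
        }
    }
    return best, max
}
\end{verbatim}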
\begin{table}[!hbt]
\caption{Payoff Function and Profits}\label{tab:net}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
\multicolumn{8}{c}{\textbf{Payoffs} of the histories $\mathcal{H}$, in the form of (SL, CM, SP)} \\
\midrule
\textit{aei} & \textcolor{blue}{\text{(9,-4,2)}} & \textit{bei} & (19,-24,2) & \textit{cei} & (10,-24,2) & \textit{dei} & (20,-24,2) \\
\textit{aej} & (9,-24,3) & \textit{bej} & (19,-24,3) & \textit{cej} & (10,-24,3) & \textit{dej} & (20,-24,3) \\
\textit{aek} & (9,-24,3) & \textit{bek} & (19,-24,3) & \textit{cek} & (10,-24,3) & \textit{dek} & (20,-24,3) \\
\textit{ael} & (9,-24,4) & \textit{bel} & (19,-24,4) & \textit{cel} & (10,-24,4) & \textit{del} & (20,-24,4) \\
\midrule
\textit{afi} & \textcolor{blue}{\text{(x-11,16-x,2)}} & \textit{bfi} & (x-1,-24,2) & \textit{cfi} & (x-10,-24,2) & \textit{dfi} & (x,-24,2) \\
\textit{afj} & (x-11,-24,3) & \textit{bfj} & (x-1,-24,3) & \textit{cfj} & (x-10,-24,3) & \textit{dfj} & (x,-24,3) \\
\textit{afk} & (x-11,-24,3) & \textit{bfk} & (x-1,-24,3) & \textit{cfk} & (x-10,-24,3) & \textit{dfk} & (x,-24,3) \\
\textit{afl} & (x-11,-24,4) & \textit{bfl} & (x-1,-24,4) & \textit{cfl} & (x-10,-24,4) & \textit{dfl} & (x,-24,4) \\
\midrule
\textit{agi} & \textcolor{blue}{\text{(9,-y,y-2)}} & \textit{bgi} & (19,-24,y-2) & \textit{cgi} & (10,-24,y-2) & \textit{dgi} & (20,-24,y-2) \\
\textit{agj} & (9,-24,y-1) & \textit{bgj} & (19,-24,y-1) & \textit{cgj} & (10,-24,y-1) & \textit{dgj} & (20,-24,y-1) \\
\textit{agk} & (9,-24,y-1) & \textit{bgk} & (19,-24,y-1) & \textit{cgk} & (10,-24,y-1) & \textit{dgk} & (20,-24,y-1) \\
\textit{agl} & (9,-24,y) & \textit{bgl} & (19,-24,y) & \textit{cgl} & (10,-24,y) & \textit{dgl} & (20,-24,y) \\
\midrule
\textit{ahi} & (x-11,-x-y,y-2) & \textit{bhi} & (x-1,-24,y-2) & \textit{chi} & (x-10,-24,y-2) & \textit{dhi} & (x,-24,y-2) \\
\textit{ahj} & (x-11,-24,y-1) & \textit{bhj} & (x-1,-24,y-1) & \textit{chj} & (x-10,-24,y-1) & \textit{dhj} & (x,-24,y-1) \\
\textit{ahk} & (x-11,-24,y-1) & \textit{bhk} & (x-1,-24,y-1) & \textit{chk} & (x-10,-24,y-1) & \textit{dhk} & (x,-24,y-1) \\
\textit{ahl} & (x-11,-24,y) & \textit{bhl} & (x-1,-24,y) & \textit{chl} & (x-10,-24,y) & \textit{dhl} & (x,-24,y) \\
\bottomrule
\end{tabular}
}
\begin{tablenotes}
\footnotesize
\item \quad The entries in blue reach a Nash equilibrium.
\end{tablenotes}
\end{center}
\end{table}

\section{Efficiency Analysis}
\label{sec-efficiency}

In this section, we evaluate the performance. We first provide a theoretical analysis of the system and make comparisons with competing schemes. Then, we launch experimental tests in multiple dimensions, covering the data type, the data size, and the storage capacity.

\smallskip
\noindent\textbf{Experimental Configurations.} Our implementation operates on the Hyperledger Fabric blockchain \cite{androulaki2018hyperledger}, running on a desktop server with an Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz and 8.00 GB of RAM. We simulate each role of BDTS (\textit{consumer}, \textit{seller}, and \textit{service provider}) on three virtual nodes, respectively. These nodes are enclosed inside separate Docker containers under the Ubuntu 18.04 LTS operating system.

\subsection{Theoretical Analysis}

We first analyze the computational complexity.
We use $\tau_E$, $\tau_{E_{A}}$, $\tau_D$, $\tau_{D_{A}}$, $\tau_M$, and $\tau_V$ to represent, respectively, the asymmetric encryption time, the symmetric (AES) encryption time, the asymmetric decryption time, the symmetric decryption time, the Merkle tree merging time, and the Merkle proof verification time. We give the theoretical analysis of each step in Tab.\ref{tab-complexity}.

Firstly, in the \textit{encrypted data uploading} module, the seller divides the entire data into several pieces and uploads their proofs on-chain. The main off-chain procedure is to encrypt the segmented data. We assume the data has been split into $i$ pieces, and every piece $Data_i$ needs to be encrypted into $D_i$. The pieces and their ciphertexts are then placed at the Merkle leaves, merging $Data_i$ and $D_i$ into the trees $M_d$ and $M_{ed}$ with roots $r_d$ and $r_{ed}$. Secondly, in the \textit{matching and data downloading} module, the consumer can select service providers and download different data segments from them. Before providing the service, a service provider needs to encrypt the received $D_i$ with his private key, accompanied by the corresponding Merkle proofs, as in the previous step; here, the encryption is based on a symmetric algorithm. Once completed, multiple downloads occur at the same time, and more service providers improve the downloading efficiency because the P2P connections make full use of the network speed. Last, in the \textit{decryption and appealing} module, the consumer obtains each encrypted piece of data and starts to decrypt them. He needs to verify whether the received data and its proof match; if all checks pass, he can use the valid keys (after payment) for the decryption. Here, the appeal time is related to the number of appealed parts instead of the appeal size.

We further make a comparison, in terms of on-chain costs, with existing blockchain-based fair exchange protocols. Gringotts \cite{goyal2019secure} spends $\mathcal{O}(n)$, as it stores all the chunks of the delivered data on-chain. CacheCash \cite{almashaqbeh2019cachecash} takes a cost in the range of $[\mathcal{O}(1), \mathcal{O}(n)]$ due to its \textit{lottery tickets} mechanism. FairDownload \cite{he2021fair}, as claimed, spends $\mathcal{O}(1)$, but it separates the functions of delivering streaming content and downloading chunks. Our protocol retains these functions without compromising efficiency, and only takes $\mathcal{O}(1)$.

\begin{table}[!hbt]
\caption{Computational Complexity and Comparison}
\label{tab-complexity}
\resizebox{0.95\linewidth}{!}{
\begin{tabular}{c|c}
\toprule
\multicolumn{1}{c}{\textbf{Algorithm}} & \multicolumn{1}{c}{\textbf{Complexity}} \\
\midrule
Encrypted data uploading & $i{\tau_E}+2{\tau_M}+2{\tau_V}$ \\
Matching and data downloading & $i{\tau_{E_{A}}}+2{\tau_M}+2{\tau_V}$ \\
Decryption and appealing & $i({\tau_D}+{\tau_{D_{A}}})+2{\tau_M}+2{\tau_V}$ \\
\midrule
\multicolumn{1}{c}{\textbf{Competitive Schemes}} & \\
\midrule
\multicolumn{1}{c|}{Gringotts \cite{goyal2019secure}} & $\mathcal{O}(n)$ \\
\multicolumn{1}{c|}{CacheCash \cite{almashaqbeh2019cachecash}} & $[\mathcal{O}(1), \mathcal{O}(n)]$ \\
\multicolumn{1}{c|}{FairDownload \cite{he2021fair}} & $\mathcal{O}(1)$ \\
\multicolumn{1}{c|}{\textbf{\textit{our BDTS}}} & $\mathcal{O}(1)$ \\
\bottomrule
\end{tabular}
}
\begin{tablenotes}
\footnotesize
\item \quad $i$ is the number of segmented data pieces; $n$ represents a full chunk of data.
\end{tablenotes}
\end{table}

\subsection{Experimental Evaluation}

Then, we evaluate the practical performance. We focus on the functionality of \textbf{\textit{download}}, which is the most essential function (due to its high frequency and large bandwidth) invoked by users. We set up the experiments along three orthogonal dimensions, covering different data types, data sizes, and storage capacities.

\smallskip
\noindent\textbf{Data Type.} We evaluate three mainstream data types, covering text, image, and video. The text-based file is the most prevalent data format on personal computers; as a standard, a text file contains plain text that can be opened and edited in any word-processing program. The image format encompasses a variety of subtypes, such as TIFF, PNG, JPEG, and BMP, which are used in different scenarios like printing or web graphics; we omit the subtle differences between the sub-formats because they perform equivalently in terms of download services. Similarly, video has many sub-types, including MP4, MOV, WMV, AVI, FLV, etc., and we only focus on the general type. From the results in Fig.\ref{fig-tests}, shown in distinguished colors, we observe that all three types of data have approximately the same performance under different configurations of data size and storage capacity. The results indicate that \textit{the performance of the download service has no significant relationship with the data type}. This is an intuitive outcome that matches common sense: the upload and download service merely opens a channel for the inside data, regardless of its content and type.

\smallskip
\noindent\textbf{Data Size.} We adjust the data sizes at three levels, namely 10M, 100M, and 1G, to represent a wide range of applications at each level. As shown in Fig.\ref{fig-tests}, 10M of data (text, 1 storage device) costs no more than $2$ seconds, 100M of data in the same format spends around 18s, and 1G of data costs around 170s. The results indicate that \textit{the download time is positively proportional to the data size}: the larger the data, the slower the download. This also applies to the other data types and storage capacities. A useful inspiration from the evaluations of data size is to keep each traded piece small, which is also a major reason for splitting data into pieces: the splitting procedure can significantly improve the service quality for both uploading and downloading, and the sharded data can be reassembled into its full version once all segments are received.

\smallskip
\noindent\textbf{Storage Capacity.} The storage capacity refers to the number of storage devices that provide the download services. A device is a general term that can be a single laptop or a cluster of cloud servers; if each service provider maintains one device, the number of devices equals the number of participating service providers. We adjust the storage capacity from 1 device to 4 devices for each data type and data size. All the subfigures (the \textit{left}, \textit{middle}, and \textit{right} columns) in Fig.\ref{fig-tests} show the same trend: \textit{increasing the storage capacity of the distributed network shortens the download time.} This result applies to all data types and data sizes. The most obvious change in this series of experiments appears when going from 1 device to 2, which almost halves the download time.
A reasonable explanation is that a single-point service is easily affected by factors such as the network connection, bandwidth usage, or propagation latency, and any change in these factors may greatly influence the user's download experience. Once another device is added, the single-point risk diminishes, as the download service becomes decentralized and robust. More connections drive better availability, as further confirmed by setting the number of devices to 2, 3, and 4.

\begin{figure*}[!hbt]
\centering
\includegraphics[width=\linewidth]{img/tests.png}
\caption{\textbf{Download Times for Different Data Types, Data Sizes, and Storage Capacities}: We evaluate three data formats, including video \textit{(the grey color)}, image \textit{(orange)}, and text \textit{(blue)}. For each data type, we separately test the download times at distinguished data sizes of 10M \textit{(the left figure)}, 100M \textit{(middle)}, and 1G \textit{(right)}. Meanwhile, we also investigate the performance along with an increased number of storage devices \textit{(from 1 to 4)}, or equivalently, the number of service providers.}
\label{fig-tests}
\end{figure*}

\smallskip
\noindent\textbf{Average Time.} We dive into one of the data types (text) to evaluate i) the average download time, measured in seconds per MB over repeated experiments under different data sizes, and ii) its trend along with an increased number of storage devices. Compared to the previous evaluations, this series of experiments scrutinizes the subtle variations under different configurations, yielding a suite of curves. As stated in Tab.\ref{tab-avgtime}, the average per-MB download times under storage capacities from 1 to 6 are, respectively, $0.167$s, $0.102$s, $0.068$s, $0.051$s, $0.039$s, and $0.031$s. The improvement gradually flattens, approaching a convex (downward) curve, as illustrated in Fig.\ref{fig-performance}. This indicates that the download time is not strictly inversely proportional to the storage capacity: adding capacity helps, but with a diminishing marginal effect.

\section{Discussion}
\label{sec-discuss}

In this section, we highlight several major features of BDTS, covering its \textit{usability}, \textit{compatibility}, and \textit{extensibility}.

\smallskip
\noindent\textbf{Usability.} Our proposed scheme improves usability in two ways. Firstly, we separately store the raw data and the abstract data. The raw data provided by the sellers is stored at the local servers of the service providers, while the corresponding abstract data (in the context of this paper, covering the \textit{data} records, \textit{description}, and \textit{proof}) is recorded on-chain. A successful download requires matching both the decryption keys and the data proofs under the supervision of smart contracts. Secondly, the data traded in our system includes all types of streaming data, such as video, audio, and text, which cover most of the existing online resources.

\smallskip
\noindent\textbf{Compatibility.} Our solution can be integrated with existing crypto-based schemes. For instance, to avoid repeated payments for the same data, simply relying on the index technique is not enough due to its weaknesses. The watermarking technique \cite{yang2020collusion} is a practical way to embed a specific mark into data without significantly changing its functionality; it can also incorporate bio-information from users, greatly enhancing security.
Beyond that, the stored (encrypted) data can leverage a hierarchical scheme \cite{gentry2002hierarchical} to manage complicated data while retaining the efficiency of fast queries.

\smallskip
\noindent\textbf{Extensibility.} BDTS can extend its functionalities by incorporating off-chain payment techniques (also known as layer-two solutions) \cite{gudgeon2020sok}. Off-chain payment has the advantage of low transaction fees for multiple trades with the same person. Besides, existing off-chain payment solutions enjoy many advanced properties, such as privacy preservation and concurrency \cite{malavolta2017concurrency}. Our implementation only sets up the backbone protocol for fair exchange, leaving many flexible slots for extending the functionalities with matured techniques.

\begin{figure*}
\begin{minipage}[!h]{0.65\linewidth}
\captionof{table}{Average Download Time}
\label{tab-avgtime}
\resizebox{0.93\linewidth}{!}{
\begin{tabular}{c|cccccc|c}
\toprule
\multicolumn{1}{c}{\quad} & \multicolumn{6}{c}{\quad\textbf{Data Size (Text)}\quad} & \multicolumn{1}{c}{\quad\textbf{Average Time}\quad} \\
\midrule
\multicolumn{1}{c|}{\textbf{Storage}} & {\textbf{1M}} & {\textbf{10M}} & {\textbf{50M}} & {\textbf{100M}} & {\textbf{500M}} & \textbf{1G} & \quad\textbf{(s/MB)}\quad \\
\midrule
1 & 0.16 & 1.78 & 7.96 & 16.55 & 80.52 & 166.45 & 0.167 \\
2 & 0.10 & 0.98 & 4.89 & 8.60 & 43.48 & 88.04 & 0.102 \\
3 & 0.07 & 0.77 & 2.54 & 5.29 & 27.44 & 56.15 & 0.068 \\
4 & 0.05 & 0.61 & 2.03 & 4.21 & 22.22 & 43.51 & 0.051 \\
5 & 0.04 & 0.38 & 1.79 & 3.33 & 18.88 & 34.52 & 0.039 \\
6 & 0.03 & 0.32 & 1.56 & 2.88 & 14.69 & 29.48 & 0.031 \\
\bottomrule
\end{tabular}
}
\end{minipage}
\begin{minipage}[!h]{0.33\linewidth}
\caption{Trend of Download Time}
\label{fig-performance}
\includegraphics[height = 3.65cm]{img/performance.png}
\end{minipage}
\end{figure*}

\section{Related Work}
\label{sec-rw}

In this section, we review the related primitives surrounding \textit{fair exchange} protocols and \textit{blockchain}s. We also give the background of the supporting technique of \textit{game theory}.

\smallskip
\noindent\textbf{Blockchain in Trading Systems.} Blockchain has been widely used in trading systems due to its advantageous security properties of non-repudiation, non-equivocation, and non-frameability \cite{li2020accountable}. Many scholars have built robust trading systems by leveraging blockchain and smart contracts. Jung et al. \cite{jung2017accounttrade} propose AccountTrade, an accountable trading system among customers who distrust each other; any misbehaving consumer can be detected and punished through its book-keeping abilities. Chen et al. \cite{chen2017bootstrapping} design an offline digital content trading system in which, if a dispute occurs, an arbitration institution handles it. Dai et al. \cite{dai2019sdte} propose SDTE, a trading system that protects data as well as preventing the analysis code from leakage; they employ TEE technology (especially Intel SGX) to protect the data in an isolated area at the hardware level. Similarly, Li et al. \cite{li2020accountable} leverage the TEE-assisted smart contract to trace the evidence of investigators' actions, where automatic execution enables accountable warrant execution with the help of the TEE. Zhou et al. \cite{zhou2018distributed} introduce a data trading system that prevents index-data leakage, in which participants exchange data based on a smart contract.
These solutions rely on the blockchain to generate persistent evidence and act as a transparent authority for resolving disputes. However, they are effective merely for trading textual data, rather than data cast in streaming channels, such as TV shows and films, which are costly to deliver. Nor has the fairness issue in such trades been seriously discussed.

\smallskip
\noindent\textbf{Fair Exchange using Blockchain.} Traditional ways of promoting fair exchange across distrustful parties rely on trusted third parties, which can monitor the activities of participants and judge whether they have behaved faithfully. However, centralization is the major hurdle. Blockchain, with its intrinsic nature of decentralization, automation, and accountability, can replace the role of the TTP. The behaviors of the involved parties are transparently recorded on-chain, avoiding any type of cheating and compromise. Meanwhile, a predefined incentive model can be automatically operated by smart contracts, guaranteeing that each participant is rewarded according to its contributions. Based on that, blockchain-based fair exchange protocols have been well researched. Dziembowski et al. \cite{dziembowski2018fairswap} propose FairSwap, utilizing the smart contract to guarantee fairness; the contract plays the role of an external judge to resolve disagreements. He et al. \cite{he2021fair} propose a fair content delivery scheme using blockchain, scrutinizing the distinct concepts of exchange fairness and delivery fairness during the trades. Eckey et al. \cite{eckey2020optiswap} propose a smart contract-based fair exchange protocol with an optimistic mode, maximally decreasing the interaction between different parties. Janin et al. \cite{janin2020filebounty} present FileBounty, a fair protocol using the smart contract, which ensures that a buyer purchases data at an agreed price without compromising content integrity. Besides, blockchains are further applied to multi-party computations in trading systems \cite{shin2017t}\cite{choudhuri2017fairness}\cite{kiayias2016blockchain}.

\smallskip
\noindent\textbf{Game Theory in Blockchain.} Game theory is frequently adopted in the blockchain field due to the players' profit-pursuing nature \cite{liu2019survey}. It is used to simulate the actual behaviors of each rational player under complex scenarios. In such analyses, the first step is to set the constraints: static or dynamic, two-party or multi-party, etc. Based on that, the theory can be used to simulate the optimal profitable behaviors of a rational miner in many different game scenarios, such as the stochastic game \cite{kiayias2016blockchain}, the cooperative game \cite{lewenberg2015bitcoin}, the evolutionary game \cite{kim2019mining}, and the Stackelberg game \cite{chen2022absnft}. The second step is to select a suitable model that matches each assumed condition and apply it to real cases. Lohr et al. \cite{lohr2022formalizing} and Janin et al. \cite{janin2020filebounty} give analyses of two-party exchange protocols. Analyzing whether a behavior deviates from the Nash equilibrium yields many useful results on the loss and gain of a protocol design. Besides, other interesting outcomes can be investigated, especially for the fairness and security of mining procedures.
Existing studies independently observe the miner's strategy in selfish mining \cite{eyal2015miner}\cite{kwon2017selfish}\cite{negy2020selfish}\cite{sapirshtein2016optimal}, the strategies of multiple miners \cite{bai2021blockchain}, the compliance strategy towards PoW/PoS \cite{karakostas2022blockchain}, pooled strategies \cite{wang2019pool}\cite{li2020mining}, and fickle mining across different chains \cite{kwon2019bitcoin}. Our analyses are compliant with these research principles.

\section{Conclusion}

In this paper, we explore the fairness issue in current data trading solutions. Traditional centralized authorities are not reliable due to their unsupervised superpower. Our proposed scheme, BDTS, addresses this issue by leveraging blockchain technology with well-designed smart contracts. The scheme utilizes automatically-operating smart contracts to act as the data executor with transparency and accountability. Meanwhile, BDTS incentivizes the involved parties to behave honestly. Our analyses, based on strict game-theoretic induction, prove that the game achieves a subgame perfect Nash equilibrium with optimal payoffs when all players act benignly. Furthermore, we implement the scheme on top of the Hyperledger Fabric platform with comprehensive evaluations. The results demonstrate that our system provides fast and reliable services for users.

\bibliographystyle{unsrt}
These challenges lead to the following question: \smallskip

\textit{Is it possible to propose a protocol for data trading systems that guarantees fairness for all participating parties without significantly compromising efficiency?}

\smallskip

Traditional solutions \cite{kupccu2010usable}\cite{micali2003simple} provide countermeasures based on cryptographic technologies. This series of schemes implicitly relies on a strong assumption of a trusted third party (TTP): the TTP must obey all the predefined rules and act honestly. However, realizing a completely fair exchange protocol without a TTP is impossible \cite{pagnia1999impossibility}, and it is widely acknowledged that finding such a TTP is hard in practice. Instead of using the seemingly old-fashioned cryptographic \textit{gradual release} method\footnote{The \textit{gradual release} method means each party in the game takes turns to release a small part of the secret. Once a party is detected doing evil, the other parties can stop immediately and recover the desired output with comparable computational resources.} \cite{blum1983exchange}\cite{pinkas2003fair}, many solutions \cite{dziembowski2018fairswap}\cite{he2021fair}\cite{shin2017t}\cite{choudhuri2017fairness}\cite{kiayias2016fair}\cite{eckey2020optiswap} have been proposed by leveraging blockchain technologies \cite{wood2014ethereum} for better decentralization. Blockchain provides a public bulletin board that supplies every participating party with persistent evidence, and any malicious activity can be traced back later for a claim of compensation. Based on such investigations, we adopt the blockchain technique as our baseline solution.

Blockchain benefits our fair exchange protocol in three ways. Firstly, thanks to the decentralized character of blockchain, the mechanism ensures that the platform always acts benignly and follows the agreed principles. A normally operating blockchain platform greatly reduces the risk of attacks such as single-point failure or compromise by adversaries. Secondly, exposing parts of the data on the blockchain as public information decreases the risk of sellers providing data far from the description (called \textit{fake data} below). The content of these parts can be verified by potential consumers, so fake data cannot be accepted and will not create profits. Thirdly, taking distributed and peer-to-peer optimization as references, we also allow any participant who owns a data backup to provide downloading services for extra revenue. Consumers can download from multiple dealers directly and accordingly pay fees to these self-service providers. This incentivizes involved participants. Specifically, we implement our scheme with strict fair-exchange logic on the Hyperledger blockchain \cite{androulaki2018hyperledger} to ensure exchange fairness, which is threatened from four aspects: \textit{(i)} the seller may provide \textit{fake data}; \textit{(ii)} the seller may provide \textit{a false decryption key}; \textit{(iii)} the consumer may \textit{not pay for authentic data}; or \textit{(iv)} someone may \textit{resell data} multiple times.
In our design, the deployed blockchain smart contracts regulate that: the data uploaded by users and the related decryption keys from sellers are authentic, whereas fake ones cannot pass the verification (addressing \textit{i} and \textit{ii}); a consumer automatically gets charged for requested data (addressing \textit{iii}); and every piece of data can only be sold once, since each transaction is unique in blockchain systems (addressing \textit{iv}). Notably, we use the uniqueness index mechanism and compare the Merkle roots of different data to prevent someone from reselling data purchased from others. We only guarantee that the data sold on this platform is unique; reselling through other platforms is beyond this article's scope.

\smallskip
\noindent\textbf{Contributions.} Overall, implementing a data trading system with both exchange fairness and efficiency for real usage is the key task of this paper. To fill the gap, our contributions are as follows.

\begin{itemize}
    \item We propose BDTS, an innovative data trading system based on blockchain. The proposed scheme realizes exchange fairness for \textit{all} participating parties, namely, the \textit{consumer}, the \textit{seller}, and the \textit{service provider}. Each party has to obey the rules, and benign actors can fairly obtain rewards as the incentive.
    \item We prove the security of our scheme, mainly from the economic side, based on game theory. Our game-theoretic proof simulates the behaviors of different parties in real scenarios, which effectively captures their actual reactions to conflicts as well as actions affected by competitive peers. The proofs demonstrate that our game reaches a \textit{subgame perfect equilibrium}.
    \item We implement our scheme on the Hyperledger blockchain platform with comprehensive evaluations. Experimental results demonstrate the practicability and efficiency of our scheme. Compared to existing solutions with complex crypto-algorithms (such as ZKP), our scheme is sufficiently fast for lightweight devices.
\end{itemize}

\noindent\textbf{Paper Structure.} Section \ref{sec-prelimi} presents preliminaries. Section \ref{sec-archi} provides a bird's-eye view of our scheme with security assumptions and models. Section \ref{sec-system} details our system. Section \ref{sec-imple} presents our implementation on Hyperledger. Section \ref{sec-security} proves its security, while Section \ref{sec-efficiency} discusses efficiency. Section \ref{sec-discuss} provides further discussions. Section \ref{sec-rw} reviews related work surrounding our system. Appendices A--C provide supplementary details of this work.

\section{Preliminary}
\label{sec-prelimi}
This section introduces the core components of blockchain systems that are closely related to our scheme, namely the smart contract and the Merkle tree. We also give a snapshot of the concept of Nash equilibrium used in our security proofs.

\smallskip
\noindent\textbf{Smart Contract.} The smart contract, proposed by Szabo \cite{szabo1996smart}, aims to enable the self-execution of clauses and logic. Since the emergence of Ethereum \cite{wood2014ethereum}, the concept of the smart contract has become prevalent \cite{dwivedi2021legally}\cite{li2022sok}. It is a program deployed and executed on a blockchain, with the benefits of decentralization, automation, and unforgeability. A normally operating blockchain system can trigger the state transitions of the contract, and miners guarantee the consistency of its state across different nodes.
A smart contract can be seen as a procedure that is executed by almost all miners, where the result agreed upon by more than half of them is considered final. Miners also verify the results and reject false ones. Thus, the smart contract can hardly be falsified and is considered sufficiently secure in the blockchain ecosystem. Technically, in Ethereum, a smart contract comprises a 160-bit address named the \textit{contract account}, \textit{runtime}, \textit{bytecode}, and a bunch of related \textit{transaction}s \cite{delmolino2016step}. After being written and compiled into bytecode, a smart contract is deployed so that miners can record and then invoke it. The deployer builds a transaction (Tx) with the bytecode, a signature, and other fields, and sends the Tx to miners, waiting for it to be packed. A miner verifies the signature and packs the Tx into a block so that every node reaches an agreement on the same state. After deployment, the smart contract gets its unique account address, and everyone can interact with it by sending another Tx to that address. This latter Tx is packed into a block as well, and every miner executes the smart contract in the Ethereum Virtual Machine (EVM). Its result is recorded and confirmed by miners.

\smallskip
\noindent\textbf{Merkle Tree.} The Merkle tree is a tree with a layered architecture. Every parent node (the top one is denoted as the \textit{root}) stores the hash of its child nodes, while each leaf node in the bottom layer stores the hash of a data block. A typical Merkle tree consists of three algorithms, namely $\mathsf{Mtree}$, $\mathsf{Mproof}$, and $\mathsf{Mvrfy}$. The algorithm $\mathsf{Mtree}$ takes data $(x_1,x_2,...,x_n)$ as input and outputs a Merkle tree $M$ with root $r$. $\mathsf{Mproof}$ takes an index $i$ and $M$ as input and outputs a proof $\pi$. $\mathsf{Mvrfy}$ takes $(i,r,x_i,\pi)$ as input and returns \textit{true} if $x_i$ is consistent with the root $r$, and \textit{false} otherwise. The Merkle tree is used to efficiently verify a large body of stored data: it ensures that the data received from peers stays undamaged and unchanged. It is one of the foundational components for building a blockchain, securing the system with a strong data-integrity promise.

\begin{wraptable}{r}{3.5cm}
\centering
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{\textit{Strategy}} & \multicolumn{2}{c}{$p_i$} \\
& & \textbf{A} & \textbf{B} \\
\midrule
\multirow{2}{*}{$p_j$} & \textbf{A} & 1,1 & 1,-1 \\
& \textbf{B} & -1,1 & 0,0 \\
\bottomrule
\end{tabular}
\end{wraptable}

\smallskip
\noindent\textbf{Game Theory.} Game theory provides an ideal environment to simulate the interactions among rational decision-makers (also known as \textit{player}s). It utilizes a set of mathematical tools to derive every possibility and its corresponding impact. Each decision-maker in the game aims to maximize its utility while also considering the strategies of others. The game reaches a Nash equilibrium if no player can gain higher utility by altering its strategy while the strategies of the others remain unchanged. For instance, in the most generic model (the static non-cooperative game \cite{harker1991generalized}) shown in the table, if player $p_i$ adopts strategy \textbf{A} whereas player $p_j$ selects \textbf{B}, the two players cannot obtain the maximum overall utility. If both $p_i$ and $p_j$ adopt \textbf{A}, the system reaches a Nash equilibrium.
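Returning to the Merkle tree primitives introduced above, the following is a minimal sketch in Go (the language of our implementation in Section \ref{sec-imple}). The SHA-256 hash and the duplicate-last-node padding rule for odd-sized levels are our illustrative assumptions rather than a normative part of the scheme.

\begin{verbatim}
package merkle

import (
    "bytes"
    "crypto/sha256"
)

// Mtree builds the tree bottom-up; levels[0] holds the
// leaf hashes and the last level holds the root.
func Mtree(data [][]byte) [][][]byte {
    level := make([][]byte, len(data))
    for i, d := range data {
        h := sha256.Sum256(d)
        level[i] = h[:]
    }
    levels := [][][]byte{level}
    for len(level) > 1 {
        if len(level)%2 == 1 { // pad odd levels
            level = append(level, level[len(level)-1])
        }
        next := make([][]byte, len(level)/2)
        for i := 0; i < len(level); i += 2 {
            h := sha256.Sum256(append(level[i], level[i+1]...))
            next[i/2] = h[:]
        }
        levels = append(levels, next)
        level = next
    }
    return levels
}

// Root returns the Merkle root r of the tree M.
func Root(levels [][][]byte) []byte {
    return levels[len(levels)-1][0]
}

// Mproof returns the sibling path pi for leaf i.
func Mproof(levels [][][]byte, i int) [][]byte {
    var proof [][]byte
    for _, level := range levels[:len(levels)-1] {
        if len(level)%2 == 1 {
            level = append(level, level[len(level)-1])
        }
        proof = append(proof, level[i^1]) // sibling node
        i /= 2
    }
    return proof
}

// Mvrfy recomputes the root from (i, x_i, pi) and
// compares it with r.
func Mvrfy(i int, r, xi []byte, proof [][]byte) bool {
    h := sha256.Sum256(xi)
    cur := h[:]
    for _, sib := range proof {
        var s [32]byte
        if i%2 == 0 { // current node is a left child
            s = sha256.Sum256(append(cur, sib...))
        } else {
            s = sha256.Sum256(append(sib, cur...))
        }
        cur = s[:]
        i /= 2
    }
    return bytes.Equal(cur, r)
}
\end{verbatim}

For example, a seller can publish \texttt{Root(Mtree(pieces))} on-chain as the root $r$, and a consumer can later check any received piece $x_i$ against it with \texttt{Mvrfy}.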
Our analysis is based on a (dynamic) sequential game-theoretic model \cite{fang2021introduction}, in which a follower acts later than the preceding player. Our security goal is to make each participant adopt an honest strategy so that the game finally achieves the subgame perfect Nash equilibrium \cite{moore1988subgame}, which is also the Nash equilibrium for the entire system.

\section{Architecture and Security Model}
\label{sec-archi}
In this section, we first introduce the system targets and architecture. Then, we provide the security assumptions and the security model.

\subsection{Problem Overview}
\smallskip
\noindent\textbf{Entities.} First of all, we clarify the participating roles in our scheme. A total of three types of entities are involved: the \textit{consumer (CM)}, the \textit{seller (SL)}, and the \textit{service provider (SP)}\footnote{Service providers generally act in the role of centralized authorities, such as dealers and agencies, in a traditional fair exchange protocol.}. Consumers pay for data and downloading services with cryptocurrencies such as Ether. Sellers provide encrypted data and expose segments of the divided data when necessary to guarantee correctness. Service providers take on the task of download services; any participant who stores encrypted data can be regarded as a service provider. Miners and other entities participating in the system are omitted, as they are out of the scope of this paper.

\smallskip
\noindent\textbf{Problem Statement.} Next, we give a quick summary of the problems existing in current data trading systems.

\noindent\hangindent 1em \textit{The fairness of data trading cannot be guaranteed}. The fairness of a data trading system is twofold: data providers (sellers) should be paid according to their contributions, and data users (consumers) should be charged faithfully. Data reselling is another threat that may harm sellers' profits.

\noindent\hangindent 1em \textit{The quality of data may not be as true as the descriptions provided by sellers}. Sellers should provide authentic data just as they claim. False advertising will damage the interests of consumers. How to prevent sellers from providing lousy data is a challenge as well.

\noindent\hangindent 1em \textit{Rewarding or punishing service providers for their (mis)behaviors is difficult}. Service providers offer download services to earn profits, but they may manipulate or misuse data. Judging whether a service provider sends appropriate data or not is a hard task.

\noindent\hangindent 1em \textit{A large-scale downloading request may lead to high communication pressure}. For trading markets, the trading volume is huge and the efficiency requirement is strict. The traditional single-point model cannot bear this pressure and may reduce efficiency. The involvement of multiple service providers can speed up the download.

\smallskip
\noindent\textbf{Desired Requirements.} To address such issues, the proposed BDTS system should fulfill the following requirements.

\noindent\hangindent 1em \textit{A protocol with guarantees on exchange fairness.} The fairness ensures that both sellers and consumers can faithfully obtain rewards based on their actual activities.

\noindent\hangindent 1em \textit{Every piece of uploaded data should be authentic.} This requirement mainly refers to the on-chain stage, rather than the off-chain phase. The system should guarantee that the data transferred in the system cannot be falsified or destroyed. This ensures participants can exchange high-quality data.
\noindent\hangindent 1em \textit{Incentive designs should be considered to avoid unnecessary frictions.} Economic incentives are important to encourage involved parties to act honestly.

\noindent\hangindent 1em \textit{The system should operate with satisfactory performance.} Regarding high-volume data such as streaming data (video/audio), the system should have enough bandwidth and sufficient efficiency.

\subsection{Architecture}
We design a novel data trading ecosystem built on top of a blockchain platform. A typical workflow in BDTS is as follows: \textit{sellers upload their encrypted data and descriptions to service providers; service providers store the received data and establish download services; consumers decide which pieces of data to purchase based on the related descriptions; at last, the consumer downloads from service providers and pays the negotiated prices to sellers and service providers}. Our fair exchange scheme ensures that every participant can exchange data for payments without cheating or sudden denial. The detailed execution flow is presented as follows.

\noindent\hangindent 1em \textit{\textbf{Data Upload.}} The seller first sends the description of his data to the blockchain. The description and other information, such as the Merkle root of the data, are recorded by the blockchain. Here, the seller must broadcast the Merkle root and is required to provide some parts of the data in plaintext form. Service providers can decide whether to store the data based on the stated information; meanwhile, the seller waits for their decisions. If a service provider decides to store the encrypted data to earn future downloading fees, it first sends its information to the blockchain. The seller then posts the encrypted data to the service provider, which starts to store it. Notably, the seller can also become a self-maintained service provider if he can build up similar basic services.

\noindent\hangindent 1em \textit{\textbf{Data Download.}} The consumer decides whether to download according to the description and the exposed parts provided by the seller. Before downloading, the consumer should first deposit enough tokens in the smart contract. Then, the consumer requests the data from service providers. A service provider sends the data to the consumer after encrypting it with its private key. For security and efficiency, these processes are executed via smart contracts, except for the data encryption and downloading.

\noindent\hangindent 1em \textit{\textbf{Decryption and Appealing.}} At last, the consumer pays for the data and obtains the corresponding decryption keys. The service provider and the seller provide their decryption keys separately. The decryption keys are broadcast through the blockchain so that they cannot be tampered with. The consumer can appeal if he confronts any issues. According to the predefined rules, the smart contract will arbitrate based on the evidence provided by the consumer.

\subsection{Security Assumption}
In this scheme, we make three basic security assumptions.

\noindent\hangindent 1em \textit{The blockchain itself is safe.} Our scheme operates on a safe blockchain model with well-promised \textit{liveness} and \textit{safety} \cite{garay2015bitcoin}. Meanwhile, miners are considered to be honest but \textit{curious}: they will execute smart contracts correctly but may be curious about the plaintext recorded on-chain.
\noindent\hangindent 1em \textit{The basic crypto-related algorithms are safe.} This assumption indicates that the encryption and decryption algorithms will not suffer major attacks that could destroy the system. Specifically, AES and elliptic curves, used as our symmetric and asymmetric encryption algorithms, respectively, are assumed to be cryptographically secure.

\noindent\hangindent 1em \textit{Participants in this scheme are rational.} Following the standard assumption of game theory, all players (consumer, seller, and service provider) are assumed to be rational: they pursue profits within the legal scope and select whichever strategies maximize their payoffs.

\subsection{Security Model}\label{subsec-secritymodel}
Before analyzing the game among \textit{consumers}, \textit{sellers}, and \textit{service providers}, we dive into the strategies of each party.

\noindent\hangindent 1em \textit{\textbf{Seller}}. Sellers intend to obtain more payments by selling their data. In our scheme, a seller needs to provide three main items: the \textit{data}, its \textit{description}, and the \textit{decryption key}. To earn profits, a seller would claim that the data is popular and deserves to be downloaded, but he may provide fake data. From the perspective of commercialization, the description of data is hard to verify, yet consumers pay for data merely by relying on it. The exchange is deemed \textit{fair} if consumers obtain authentic data matching the claimed description; the seller then receives the rewards. The decryption key is another component provided by the seller: only the correct decryption key can decrypt the encrypted data. In summary, there are four potential strategies for sellers: a) \textit{matched data (towards the corresponding description) and matched key (towards the data)}, b) \textit{matched data and non-matched key}, c) \textit{non-matched data and matched key}, and d) \textit{non-matched data and non-matched key}.

\noindent\hangindent 1em \textit{\textbf{Consumer}}. Consumers intend to exchange their money for data and downloading services. Downloading the ciphertext and decrypting it to obtain the plaintext is in their favor. Consumers provide the related fees in our scheme and then download the encrypted data from the service providers who store it. To earn profits, they may try to obtain data without paying for it, or by paying less than expected; paying the full price is, to them, a sub-optimal choice. The payment of consumers can be divided into two parts: paying the seller for the decryption key and paying the service providers for the downloading service. Based on that, there are four strategies for consumers: a) \textit{pay enough to sellers}, b) \textit{pay less to sellers}, c) \textit{pay enough to service providers}, and d) \textit{pay less to service providers}.

\noindent\hangindent 1em \textit{\textbf{Service Provider}}. Service providers intend to provide the downloading service and earn profits. They obtain a downloading fee paid by consumers instead of a conventional storage fee. Service providers receive the encrypted data uploaded by sellers and then provide downloading services to consumers. For uploading, service providers can choose whether to store the data or not. Here, a seller can act as a service provider by himself if he can provide similar storage and download services. For downloading, service providers provide the encrypted data and the corresponding decryption key.
The strategies for service providers are listed as follows: a) \textit{authentic data and matched key}, b) \textit{authentic data and non-matched key}, c) \textit{fake data and matched key}, and d) \textit{fake data and non-matched key}. The first two presuppose that the provider stores the seller's data.

\noindent For security, the ideal outcome is a Nash equilibrium in which all participants behave honestly: sellers adopt the \textit{matched data and matched key} strategy, consumers adopt the \textit{pay enough to sellers} and \textit{pay enough to service providers} strategies, and service providers who offer storage adopt the \textit{authentic data and matched key} strategy (discussed in Sec.\ref{sec-security}).

\section{The BDTS Scheme}
\label{sec-system}
In this section, we provide the concrete construction. To achieve the security goals discussed above, we propose our blockchain-based trading system, BDTS. It includes four stages: \textit{contract deployment}, \textit{encrypted data uploading}, \textit{data downloading}, and \textit{decryption and appealing}. Our scheme involves three types of contracts. Here, we omit procedures such as signature verification and block mining because they are common knowledge.

\subsection{Module Design}
The system contains three types of smart contracts: the seller-service provider matching contract (SSMC), the service provider-consumer matching contract (SCMC), and the consumer payment contract (CPC).

\noindent\hangindent 1em \textit{\textbf{Seller-service provider matching contract.}} SSMC records the description and the Merkle root of the data. The seller is required to broadcast certain parts of the data, whose indexes are randomly generated by the blockchain. It should be noted that these indexes cannot be changed once identified. Last, SSMC matches service providers for every seller.

\noindent\hangindent 1em \textit{\textbf{Service provider-consumer matching contract.}} SCMC helps service providers and consumers reach an agreement. It receives consumers' requests, including the required data and related information, and then stores them. The contract requires consumers to send the payment, which is then forwarded to CPC for the next step.

\noindent\hangindent 1em \textit{\textbf{Consumer payment contract.}} CPC requires consumers to pay for the data and sellers to provide the decryption key. It achieves fair exchange between the decryption key and the payment, as well as between the downloading service and the payment.

\subsection{Encrypted Data Upload}
In this module, a seller registers on SSMC and exposes some parts of his data, and the service provider stores the encrypted data (cf. Fig.\ref{fig:combined}.a).

\noindent\hangindent 1em \textit{Step1}. When a seller expects to sell data for profit, he should first divide the data into several pieces and encrypt them separately with different keys (denoted as $K_i$, where $i=1,2,...,n$) generated from a master key $K$. Each piece of data should be valuable on its own so that others can judge the quality of the full data from the received segments. Here, $D_i=Enc^{AES}_{K_i}(Data_i)$ is the encrypted data.

\noindent\hangindent 1em \textit{Step2}. The seller sends a registration demand in the form of a transaction. The registration demand includes the seller's information and the data description. The seller's information consists of $A_{seller}$ and $IP_{seller}$. The data description includes four main parts: the \textit{content description}, the \textit{data size}, \textit{the root} $r_d$, and \textit{the root} $r_{ed}$.
Here, $r_d$ is the root of $M_d$ and $r_{ed}$ is the root of $M_{ed}$, where $M_d=\mathsf{Mtree}(Data_1,Data_2,...,Data_n)$ and $M_{ed}=\mathsf{Mtree}(D_1,D_2,...,D_n)$. They will be recorded in SSMC, and the deposit is also sent in this step. SSMC rejects the request if the corresponding $r_d$ is the same as that of previously recorded data. This mechanism prevents reselling on the blockchain platform.

\noindent\hangindent 1em \textit{Step3} and \textit{Step4}. After approving the seller's registration demand, SSMC stores the useful information. The blockchain generates the hash of the next block, which is used as a public random $seed$.

\noindent\hangindent 1em \textit{Step5} and \textit{Step6}. The seller runs $\mathsf{Rand}(seed)$ to get $I_{rand}$ and provides $(Data_{I_{rand}},P_{d_i},P_{ed_i})$ to SSMC, where $P_{d_i}=\mathsf{Mproof}(M_d,i)$ and $P_{ed_i}=\mathsf{Mproof}(M_{ed},i)$. Then, SSMC checks $\mathsf{Mvrfy}(i,r_d,Data_i,P_{d_i})==1$ and $\mathsf{Mvrfy}(i,r_{ed},D_i,P_{ed_i})==1$. If either fails, SSMC stops execution and returns an error. Then, the exposed pieces of data are compared to other pieces by utilizing the uniqueness index \cite{chen2017bootstrapping}. Data plagiarism results in deposit loss, preventing reselling behavior.

\noindent\hangindent 1em \textit{Step7} and \textit{Step8}. The service provider (SP) registration demand can be divided into $IP_{sp}$, $A_{sp}$, and $ID_{data}$.

\noindent\hangindent 1em \textit{Step9}, \textit{Step10}, and \textit{Step11}. The seller sends the encrypted data and Merkle proofs to the service provider according to $IP_{sp}$ and confirms the service provider's registration demand so that the corresponding service provider can participate in the next stage.

\subsection{Matching and Data Downloading}
In this module, a consumer registers on SCMC and selects service providers to download data from (see Fig.\ref{fig:combined}.b).

\noindent\hangindent 1em \textit{Step1} and \textit{Step2}. The consumer queries the data description. Once receiving the feedback from SSMC, he selects data according to his requirements.

\noindent\hangindent 1em \textit{Step3}, \textit{Step4}, and \textit{Step5}. The consumer stores the tuple $(IP_{sp},A_{cm},ID_{data})$ on SCMC and sends enough tokens to pay for the download service. These tokens are then sent to CPC and, if the service provider or seller cheats in this transaction, will be returned to the consumer. When receiving the demand, SCMC queries SSMC with $ID_{data}$ to obtain the price and the data size. Then, SCMC verifies $Tkn_{cm} \geq price + size \times unit\,price$. Failed transactions are discarded while the rest are broadcast. The consumer can determine which parts of the data to download, and from which service providers, by giving the index $i$ and the corresponding address.

\noindent\hangindent 1em \textit{Step6}, \textit{Step7}, and \textit{Step8}. The consumer contacts the service provider based on $IP_{sp}$, which is received in \textit{Step2}. In \textit{Step7}, the service provider encrypts the data $D$ with a random key $K_{sp}$. The service provider calculates $M_{eed}=\mathsf{Mtree}(Enc^{AES}_{K_{sp}}(D_1), Enc^{AES}_{K_{sp}}(D_2),...,Enc^{AES}_{K_{sp}}(D_n))$ with the Merkle root $r_{eed}$ and uploads $P_{eed_i}$, where $P_{eed_i}=\mathsf{Mproof}(M_{eed},i)$ and $i$ is the index.

\noindent\hangindent 1em \textit{Step9}. The selected service provider information is provided in this step.
It is composed of $A_{sp}$ and the indexes of the pieces to download from the corresponding service providers. The consumer can download data from different service providers for efficiency.

\noindent\hangindent 1em \textit{Step12}, \textit{Step13}, and \textit{Step14}. The consumer acquires this information from SCMC. After the service provider sends $Enc^{AES}_{K_{sp}}(D_i)$ and $P_{eed_i}$ to the consumer, the consumer verifies whether $\mathsf{Mvrfy}(i,r_{eed},Enc^{AES}_{K_{sp}}(D_i),P_{eed_i})==1$. If not, the (double-)encrypted data is considered erroneous, and the consumer, as a result, will not execute \textit{Step14}.

\subsection{Decryption and Appealing}
In this module, the consumer pays both the service provider and the seller (see Fig.\ref{fig:combined}.c).

\begin{itemize}
    \item[-] \textit{Step1}, \textit{Step2}, and \textit{Step3}. SSMC transfers the tokens and $(A_{cm}, A_{sp}, ID_{data})$ to CPC. The consumer generates a key pair $(Pub_{cm}, Pri_{cm})$ and broadcasts $Pub_{cm}$ to CPC. CPC waits for the service provider to submit $Enc_{Pub_{cm}}(K_{sp})$. Upon obtaining it, CPC waits for a period to allow a possible appeal; if no appeal arrives, CPC sends the tokens to the service provider directly.
    \item[-] \textit{Step4}, \textit{Step5}, and \textit{Step6}. The consumer obtains $Enc_{Pub_{cm}}(K_{sp})$ from CPC and decrypts it with his private key. He then decrypts the data with $K_{sp}$ to get $D_i$. If $\mathsf{Mvrfy}(i,r_{ed},D_i,P_{ed_i})\neq 1$, the consumer executes the appealing phase. The appeal contains $(Pri_{cm},i, Enc^{AES}_{K_{sp}}(D_i))$. Here, a fresh $Pri_{cm}$ is generated for every download process.
    \item[-] \textit{Step7}, \textit{Step8}, and \textit{Step9}. CPC calculates both $K_{sp}$ and $D_i$, where $K_{sp} = Dec_{Pri_{cm}}(Enc_{Pub_{cm}}(K_{sp}))$ and $D_i=Dec^{AES}_{K_{sp}}(Enc^{AES}_{K_{sp}}(D_i))$. Then, CPC verifies whether $\mathsf{Mvrfy}(i,r_{ed},D_i,P_{ed_i})\neq 1$. If the appeal passes this verification, CPC withdraws the tokens back to SSMC; otherwise, CPC pays the service provider.
\end{itemize}

\noindent Paying the seller is similar to paying the service provider; the differences mainly concentrate on \textit{Step2}, \textit{Step3}, \textit{Step4}, \textit{Step7}, and \textit{Step8} (see Fig.\ref{fig:combined}.d).

\begin{itemize}
    \item[-] \textit{Step2}, \textit{Step3}, and \textit{Step4}. The consumer generates a new public-private key pair $(Pub_{cm},Pri_{cm})$ and broadcasts $Pub_{cm}$ to CPC. After listening to CPC to obtain $Pub_{cm}$, the seller calculates $Enc_{Pub_{cm}}(K_{seller})$ and sends it to CPC.
    \item[-] \textit{Step7} and \textit{Step8}. During the appealing phase, the consumer relies on his private key to prove his ownership. CPC verifies the encryption of the corresponding data, similar to the steps for paying the service providers. The verification determines the token flow.
\end{itemize}

\section{Implementations}
\label{sec-imple}
In this section, we describe the implementation of three major functions: \textit{sharding encryption}, which splits a full message into several pieces; \textit{product matching}, which shows the process of finding a targeted product; and \textit{payment}, which presents the ways to pay each participant. Our full practical implementation is written in Go across 5 files, realizing the major functions of each contract so that they can be operated on the Hyperledger platform\footnote{Source code: Hidden due to the anonymity policy.}. We provide implementation details in Appendix A.
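As a concrete illustration of the \textit{sharding encryption} function, the following Go sketch mirrors \textit{Step1} of the upload module: it splits the data into pieces and encrypts each piece $Data_i$ into $D_i$ under a per-piece key $K_i$ derived from the master key $K$. The derivation rule ($K_i$ as the hash of $K$ and the index $i$) and the AES-GCM mode are illustrative assumptions, not the exact choices of our implementation.

\begin{verbatim}
package sharding

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/sha256"
    "encoding/binary"
)

// deriveKey derives the per-piece key K_i from the master
// key K and the piece index i (illustrative rule:
// K_i = SHA-256(K || i)).
func deriveKey(master []byte, i uint32) []byte {
    buf := make([]byte, 4)
    binary.BigEndian.PutUint32(buf, i)
    h := sha256.Sum256(append(append([]byte{}, master...), buf...))
    return h[:] // 32 bytes -> AES-256
}

// Shard splits data into at most n roughly equal pieces
// Data_1..Data_n (n must be >= 1).
func Shard(data []byte, n int) [][]byte {
    pieces := make([][]byte, 0, n)
    size := (len(data) + n - 1) / n
    for off := 0; off < len(data); off += size {
        end := off + size
        if end > len(data) {
            end = len(data)
        }
        pieces = append(pieces, data[off:end])
    }
    return pieces
}

// EncryptPieces computes D_i = Enc_{K_i}(Data_i) with
// AES-GCM. A zero nonce is tolerable here only because
// each K_i encrypts exactly one message; production code
// should use random nonces.
func EncryptPieces(master []byte, pieces [][]byte) ([][]byte, error) {
    out := make([][]byte, len(pieces))
    for i, p := range pieces {
        block, err := aes.NewCipher(deriveKey(master, uint32(i)))
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        out[i] = gcm.Seal(nil, nonce, p, nil)
    }
    return out, nil
}
\end{verbatim}

The resulting ciphertexts $D_i$ are then fed to $\mathsf{Mtree}$ to obtain $r_{ed}$, as described in \textit{Step2}.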
\section{Security Analysis}
\label{sec-security}
In this section, we analyze BDTS based on game theory. The basic model of our solution is a \textit{dynamically sequential game} with \textit{multiple players}, and the analysis relies on \textit{backward induction}. We prove that our model achieves a subgame perfect Nash equilibrium (SPNE) when all participants behave honestly. Specifically, our proposed scheme involves three types of parties, the seller (SL), the service provider (SP), and the consumer (CM), as shown in Fig.\ref{fig-game} (\textit{left}). These parties act one by one, forming a sequential game in which the following party can learn the actions of the previous ones. Specifically, an SL first uploads the data with the corresponding encryption key to the SP (workflow in the \textit{black} line). Once receiving the data, the SP encrypts it with his private key, storing the raw data locally and the related information on-chain. A CM searches online to find products and pays for the favorite ones, both to the SP and the SL, via smart contracts (in the \textit{blue} line). Last, the SP sends the raw data and related keys to the CM (in the \textit{brown} line). Based on that, we define our analysis model as follows.

\begin{defi} Our SL-SP-CM system forms an extensive game denoted by $$\mathcal{G}=\{ \mathcal{N},\mathcal{H},\mathcal{R},P,u_i \}.$$ Here, $\mathcal{N}$ represents the participating players, where $\mathcal{N}=\{SL,SP,CM\}$; $\mathcal{R}$ is the strategy set; $\mathcal{H}$ is the history; $P$ is the player function, where $P: \mathcal{N}\times\mathcal{R} \to \mathcal{H}$; and $u_i$ is the payoff function.
\end{defi}

Each participating party has four strategies, as defined in Sec.\ref{subsec-secritymodel} (\textit{security model}). SL has actions on both the uploaded data and the related decryption keys (AES for the raw data), forming his strategy set $\mathcal{R}_{SL}=\{a,b,c,d\}$. Similarly, CM has strategies $\mathcal{R}_{CM}=\{e,f,g,h\}$ for his actions on payments to SL and SP. SP has strategies $\mathcal{R}_{SP}=\{i,j,k,l\}$ for actions on the downloaded data and related keys. We list them in Tab.\ref{tab:game}. However, this is not enough for a quantitative analysis, as the costs of these actions are unknown. According to market prices and operational costs, we suppose that a piece of raw data is worth $10$ \textit{unit}s, while generating keys costs $1$ \textit{unit}; the service fee during a transaction is $1$ \textit{unit} for each party. We thus provide the cost of each strategy in Tab.\ref{tab:game}. The parameters $x$ and $y$ are the actual payments from CM, where $0\leq x<20$, $0\leq y<4$, and $x+y<24$.

\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{img/game.png}
\caption{Game and Game Tree}
\label{fig-game}
\end{figure}

Then, we dive into the history set $\mathcal{H}$, which reflects the strategies conducted by all parties; for instance, the history $aei$ represents all parties performing honestly. There are in total 64 possible combinations ($64=4\times4\times4$) based on the sequential steps of SL, CM, and SP. We provide the game tree in Fig.\ref{fig-game} (\textit{right}) and omit its detailed enumeration, which is straightforward. Our analysis builds on these definitions.
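As a worked example of the payoff function, consider the all-honest history $aei$ under the cost assumptions above, additionally assuming (consistently with Tab.\ref{tab:net}) that the consumer values the received data at its price of 20 \textit{unit}s:
\begin{align*}
u_{SL}(aei) &= 20 - (10 + 1) = 9,\\
u_{CM}(aei) &= 20 - (20 + 4) = -4,\\
u_{SP}(aei) &= 4 - (1 + 1) = 2,
\end{align*}
which matches the tuple $(9,-4,2)$ for $aei$ in Tab.\ref{tab:net}.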
We separately show the optimal strategy (with maximal rewards) for each party, and then show how to reach a subgame perfect Nash equilibrium, which is also the Nash equilibrium of the entire game. Before diving into the details of each subgame, we first derive a series of lemmas.

\begin{lem}\label{lma-seller} If a seller provides data not corresponding to the description, the seller cannot obtain payments.
\end{lem}

\begin{prf} The description and the Merkle root of the data are broadcast before the generation of the random indexes. Once the seller's registration is completed, the blockchain generates a random index. The exposed pieces are required to match the Merkle roots, so the seller cannot provide fake ones. Meanwhile, these pieces ensure that the data conforms to the description. Otherwise, consumers will not pay for the content, and service providers will not store it, either. \qed
\end{prf}

\begin{lem}\label{lma-sellerkey} If a seller provides a decryption key not conforming to the description, the seller cannot obtain payments.
\end{lem}

\begin{prf} The seller encrypts the data (segmented data included) with his private keys. The results of the encryption and the related evidence are recorded by the smart contract, covering the Merkle root of the encrypted data and the Merkle root of the data. If a seller provides a mismatched key, the consumer cannot decrypt the data and will start the appealing process. As $D_i$ and the receipt are owned by the consumer, if the consumer cannot obtain the correct data, he can appeal with evidence. The smart contract automatically judges this appeal. If the submitted evidence is correct and the decryption result does not match the Merkle root of the data, the smart contract returns the deposited tokens to the consumer. \qed
\end{prf}

\begin{table}[!hbt]
\caption{Strategies and Costs}\label{tab:game}
\begin{center}
\resizebox{0.85\linewidth}{!}{
\begin{tabular}{c|cc}
\toprule
\multicolumn{1}{c}{\textit{\textbf{SL Strategy}} } & \multicolumn{1}{|c}{Matched data} & {Non-matched data} \\
\midrule
{Matched key} & a, -11 & b, -1 \\
{Non-matched key} & c, -10 & d, 0\\
\midrule
\multicolumn{1}{c}{\textit{\textbf{CM Strategy}} } & \multicolumn{1}{|c}{Sufficiently paid (to SL)} & {Insufficiently paid (to SL)} \\
\midrule
{Sufficiently paid (to SP)} & e, -24 & f, -(x+4) \\
{Insufficiently paid (to SP)} & g, -(y+20) & h, -(x+y) \\
\midrule
\multicolumn{1}{c}{\textit{\textbf{SP Strategy}} } & \multicolumn{1}{|c}{Authentic data} & Non-authentic data \\
\midrule
{Matched key} & i, -2 & j, -1 \\
{Non-matched key} & k, -1 & l, 0 \\
\bottomrule
\end{tabular}
}\end{center}
\begin{tablenotes}
\footnotesize
\item \quad\quad\quad Costs such as $-11$ are short for $-11$ \textit{unit}s, applicable to all entries.
\item \quad\quad\quad Data is sold at 20 (to SL), while the service fee is 4 (to SP).
\end{tablenotes}
\end{table}

\begin{lem}\label{lma-cm} A consumer without sufficient payments cannot normally use the data.
\end{lem}

\begin{prf} The consumer first sends enough tokens to SCMC, whose smart contract code is safe. The smart contract verifies whether the received tokens are enough for the purchase. After the seller and the consumer provide their decryption keys through the smart contract, the consumer can appeal within a certain time; otherwise, the keys are considered correct, and the payments are distributed to the seller and the service providers. \qed
\end{prf}

\begin{lem} If a service provider provides data not conforming to that of the seller, he cannot obtain payments.
\end{lem}

\begin{prf} The proof is similar to that of Lemma \ref{lma-seller}. \qed
\end{prf}

\begin{lem}\label{lma-sp} If a service provider provides a decryption key not conforming to the data, he cannot obtain payments.
\end{lem}

Lemma \ref{lma-seller} to Lemma \ref{lma-sp} establish the payoff of each behavior. Based on these analyses, we can precisely calculate the payoff function of each combined strategy in our sequential game. As discussed before, a total of 64 possible combinations exist, and we calculate the corresponding profits as presented in Tab.\ref{tab:net}. We demonstrate that the system reaches the subgame perfect Nash equilibrium in the following theorem.

\begin{thm} The game achieves its only subgame perfect Nash equilibrium (SPNE) if all three parties act honestly: the seller uploads matched data and a matched key, the service provider provides authentic data and a matched decryption key, and the consumer purchases with sufficient payments. Meanwhile, this SPNE is also the optimal strategy for the entire system as a Nash equilibrium.
\end{thm}

\begin{prf} First, we dive into the rewards of each role, investigating their payoffs under different strategies. For the seller, we observe that the system is not stable (cannot reach a Nash equilibrium) under his optimal strategies. As shown in Tab.\ref{tab:net}, the optimal strategies for the seller (\textit{dei}, \textit{dej}, \textit{dek}, \textit{del}, \textit{dgi}, \textit{dgj}, \textit{dgk}, \textit{dgl}) are to provide mismatched keys and data while still obtaining payments from consumers. However, based on Lemma \ref{lma-seller} and Lemma \ref{lma-sellerkey}, the seller in such cases cannot obtain payments due to the punishment from smart contracts. These strategies are thus impractical when launching the backward induction on the subgame tree in Fig.\ref{fig-game}. Similarly, for both consumers and service providers, the system is not stable and cannot reach a Nash equilibrium under their individually optimal strategies. Based on that, we find that the optimal strategy for each party is not the optimal strategy for the system.

Then, we focus on the strategies with the highest payoffs (equiv. utilities). As illustrated in Tab.\ref{tab:net} (blue text), the strategies \textit{aei}, \textit{afi}, and \textit{agi} hold the maximal total payoffs, where $u_{aei}=u_{afi}=u_{agi}=7$. Their payoffs are greater than those of all competing strategies in the history set $\mathcal{H}$. This means the system reaches a Nash equilibrium under these three strategies. However, multiple Nash equilibria cannot yield a single optimal strategy, because some of them are impractical. We conduct the backward induction for each game with a Nash equilibrium and find that only one of them is a subgame perfect Nash equilibrium feasible in the real world. Based on Lemma \ref{lma-cm}, a consumer with insufficient payments, either to the seller or to the service provider, cannot successfully decrypt the raw data; he loses all the paid money ($x+y$). This means both \textit{afi} and \textit{agi} are impractical. With the previous analyses in hand, we conclude that only the strategy \textit{aei}, in which all parties act honestly, reaches the subgame perfect Nash equilibrium. This strategy is also the Nash equilibrium of the entire BDTS game.
\qed
\end{prf}

\begin{table}[!hbt]
\caption{Payoff Function and Profits}\label{tab:net}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
\multicolumn{8}{c}{\textbf{Payoff} of each history $\mathcal{H}$ in the form of (SL,CM,SP)} \\
\midrule
\textit{aei} & \textcolor{blue}{\text{(9,-4,2)}} & \textit{bei} & (19,-24,2) & \textit{cei} & (10,-24,2) & \textit{dei} & (20,-24,2) \\
\textit{aej} & (9,-24,3) & \textit{bej} & (19,-24,3) & \textit{cej}& (10,-24,3) & \textit{dej} & (20,-24,3) \\
\textit{aek} & (9,-24,3) & \textit{bek} & (19,-24,3) & \textit{cek}& (10,-24,3) & \textit{dek} & (20,-24,3) \\
\textit{ael} & (9,-24,4) & \textit{bel} & (19,-24,4) & \textit{cel} & (10,-24,4) & \textit{del} & (20,-24,4) \\
\midrule
\textit{afi} & \textcolor{blue}{\text{(x-11,16-x,2)}} & \textit{bfi} & (x-1,-24,2) & \textit{cfi}& (x-10,-24,2) & \textit{dfi} & (x,-24,2) \\
\textit{afj} & (x-11,-24,3) & \textit{bfj} & (x-1,-24,3) & \textit{cfj}& (x-10,-24,3) & \textit{dfj} & (x,-24,3) \\
\textit{afk} & (x-11,-24,3) & \textit{bfk} & (x-1,-24,3) & \textit{cfk}& (x-10,-24,3) & \textit{dfk} & (x,-24,3) \\
\textit{afl} & (x-11,-24,4) & \textit{bfl} & (x-1,-24,4) & \textit{cfl}& (x-10,-24,4) & \textit{dfl} & (x,-24,4) \\
\midrule
\textit{agi} & \textcolor{blue}{\text{(9,-y,y-2)}} & \textit{bgi} & (19,-24,y-2) & \textit{cgi}& (10,-24,y-2) & \textit{dgi} & (20,-24,y-2) \\
\textit{agj} & (9,-24,y-1) & \textit{bgj} & (19,-24,y-1) & \textit{cgj}& (10,-24,y-1) & \textit{dgj} & (20,-24,y-1) \\
\textit{agk} & (9,-24,y-1) & \textit{bgk} & (19,-24,y-1) & \textit{cgk}& (10,-24,y-1) & \textit{dgk} & (20,-24,y-1) \\
\textit{agl} & (9,-24,y) & \textit{bgl} & (19,-24,y) & \textit{cgl}& (10,-24,y) & \textit{dgl} & (20,-24,y) \\
\midrule
\textit{ahi} &(x-11,-x-y,y-2) & \textit{bhi} & (x-1,-24,y-2) & \textit{chi}& (x-10,-24,y-2) & \textit{dhi} & (x,-24,y-2) \\
\textit{ahj} & (x-11,-24,y-1) & \textit{bhj} & (x-1,-24,y-1) & \textit{chj}& (x-10,-24,y-1) & \textit{dhj} & (x,-24,y-1) \\
\textit{ahk} & (x-11,-24,y-1) & \textit{bhk} & (x-1,-24,y-1) & \textit{chk}& (x-10,-24,y-1) & \textit{dhk} & (x,-24,y-1) \\
\textit{ahl} & (x-11,-24,y) & \textit{bhl} & (x-1,-24,y) & \textit{chl}& (x-10,-24,y) & \textit{dhl} & (x,-24,y) \\
\bottomrule
\end{tabular}
}
\begin{tablenotes}
\footnotesize
\item \quad The entries in blue reach a Nash equilibrium.
\end{tablenotes}
\end{center}
\end{table}

\section{Efficiency Analysis}
\label{sec-efficiency}
In this section, we evaluate the performance. We first provide a theoretical analysis of the system and make comparisons with competitive schemes. Then, we launch experimental tests along multiple dimensions, covering the data type, data size, and storage capacity.

\smallskip
\noindent\textbf{Experimental Configurations.} Our implementation operates on the Hyperledger Fabric blockchain \cite{androulaki2018hyperledger}, running on a desktop server with an Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz and 8.00 GB RAM. We simulate the three roles of BDTS (\textit{consumer}, \textit{seller}, and \textit{service provider}) on three virtual nodes, respectively. These nodes are enclosed inside separate Docker containers under the Ubuntu 18.04 LTS operating system.

\subsection{Theoretical Analysis}
We first analyze the computational complexity.
We set $\tau_E$, $\tau_{E_{A}}$, $\tau_D$, $\tau_{D_{A}}$, $\tau_M$, and $\tau_V$ to represent, respectively, the asymmetric encryption time, the symmetric (AES) encryption time, the asymmetric decryption time, the symmetric decryption time, the Merkle tree merging time, and the Merkle proof verification time. We give the theoretical analysis of each step in Tab.\ref{tab-complexity}.

Firstly, in the \textit{encrypted data uploading} module, the seller divides the entire data into several pieces and uploads their proofs on-chain. The main off-chain procedure is to encrypt the segmented data. We assume the data has been split into $i$ pieces, and every piece $Data_i$ needs to be encrypted into $D_i$. Then, both $Data_i$ and $D_i$ are stored at the Merkle leaves and merged to obtain $r_d$ and $r_{ed}$. Secondly, in the \textit{matching and data downloading} module, the consumer can select service providers and download different data segments from them. Before providing the service, a service provider needs to encrypt the received $D_i$ with its private key, accompanied by the corresponding Merkle proofs as in the previous step. Here, the encryption is based on a symmetric encryption algorithm. Once completed, multiple downloads occur at the same time. More service providers improve the downloading efficiency because the P2P connections can make full use of the network speed. Last, in the \textit{decryption and appealing} module, the consumer obtains each encrypted piece of data and starts to decrypt them. He needs to verify whether the received data and its proof match. If all checks pass, he can use the valid keys (after payment) for the decryption. Here, the appeal time is related to the number of appealed parts instead of the appeal size.

We further compare the on-chain costs with existing blockchain-based fair exchange protocols. Gringotts \cite{goyal2019secure} spends $\mathcal{O}(n)$, as it stores all the chunks of the delivered data on-chain. CacheCash \cite{almashaqbeh2019cachecash} takes a cost in the range $[\mathcal{O}(1), \mathcal{O}(n)]$ due to its \textit{lottery tickets} mechanism. FairDownload \cite{he2021fair}, as claimed, spends $\mathcal{O}(1)$, but it separates the functions of delivering streaming content and downloading chunks. Our protocol retains these functions without compromising efficiency and only takes $\mathcal{O}(1)$.

\begin{table}[!hbt]
\caption{Computational Complexity and Comparison}
\label{tab-complexity}
\resizebox{0.95\linewidth}{!}{
\begin{tabular}{c|c}
\toprule
\multicolumn{1}{c}{\textbf{Algorithm}} & \multicolumn{1}{c}{\textbf{Complexity}} \\
\midrule
Encrypted data uploading & $i\tau_E+2\tau_M+2\tau_V$ \\
Matching and data downloading & $i\tau_{E_{A}}+2\tau_M+2\tau_V$ \\
Decryption and appealing & $i\tau_D+\tau_{D_{A}}+2\tau_M+2\tau_V$ \\
\midrule
\multicolumn{1}{c}{\textbf{Competitive Schemes}} & \\
\midrule
\multicolumn{1}{c|}{Gringotts \cite{goyal2019secure} } & $\mathcal{O}(n)$ \\
\multicolumn{1}{c|}{ CacheCash \cite{almashaqbeh2019cachecash} } & $[\mathcal{O}(1), \mathcal{O}(n)]$ \\
\multicolumn{1}{c|}{ FairDownload \cite{he2021fair} } & $\mathcal{O}(1)$ \\
\multicolumn{1}{c|}{\textbf{\textit{our BDTS}}} & $\mathcal{O}(1)$ \\
\bottomrule
\end{tabular}
}
\begin{tablenotes}
\footnotesize
\item \quad $i$ is the number of segmented data pieces; $n$ represents a full chunk of data.
\end{tablenotes}
\end{table}

\subsection{Experimental Evaluation}
Next, we evaluate the practical performance. We focus on the functionality of \textbf{\textit{download}}, which is the most essential function (due to its high frequency and large bandwidth) invoked by users. We set up the experiments along three orthogonal dimensions, covering different data types, data sizes, and storage capacities.

\smallskip
\noindent\textbf{Data Type.} We evaluate three mainstream data types: text, image, and video. The text-based file is the most prevalent data format on personal computers; as a standard, a text file contains plain text that can be opened and edited in any word-processing program. The image format encompasses a variety of subtypes, such as TIFF, PNG, JPEG, and BMP, which are used in different scenarios like printing or web graphics. We omit the subtle differences between the sub-formats because they perform equivalently in terms of download services. Similarly, video has many sub-types, including MP4, MOV, WMV, AVI, FLV, etc., and we only focus on the general type. From the results in Fig.\ref{fig-tests}, distinguished by color, we observe that all three types of data have approximately the same performance under different configurations of data size and storage capacity. The results indicate that \textit{the performance of the download service has no significant relationship with the data type.} This is an intuitive outcome: the upload and download service merely opens a channel for the data inside, regardless of its content and type.

\smallskip
\noindent\textbf{Data Size.} We adjust the data size at three levels, 10M, 100M, and 1G, to represent a wide range of applications at each level. As shown in Fig.\ref{fig-tests}, 10M data (text, 1 storage device) costs no more than $2$ seconds, 100M data in the same format takes around 18s, and 1G data costs 170s. The results indicate that \textit{the download time is positively proportional to the data size}: the larger the data, the longer the download. This applies to all data types and storage capacities. A useful lesson from the data-size evaluations is to keep each piece small, which is also a major reason for splitting data into pieces. The splitting procedure can significantly improve the service quality of both uploading and downloading, and sharded data can be reassembled into its full version once all segments are received.

\smallskip
\noindent\textbf{Storage Capacity.} The storage capacity refers to the number of storage devices that provide download services. A device is a general term that can be a single laptop or a cluster of cloud servers. If each service provider maintains one device, the number of devices equals the number of participating service providers. We adjust the storage capacity from 1 device to 4 devices for each data type and data size. All the subfigures (the \textit{left}, \textit{middle}, and \textit{right} columns) in Fig.\ref{fig-tests} show the same trend: \textit{increasing the storage capacity over the distributed network shortens the download time.} The result applies to all data types and data sizes. The most obvious change in this series of experiments occurs when going from 1 device to 2, which almost halves the download time; a sketch of the underlying parallel download pattern is given below.
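The following Go sketch conveys this parallel download pattern from the consumer's side; the \texttt{FetchFunc} helper and the round-robin piece assignment are hypothetical, shown only to illustrate the design. With $p$ providers and a per-piece time $t$, the wall-clock time drops from roughly $nt$ toward $\lceil n/p\rceil t$.

\begin{verbatim}
package download

import "sync"

// FetchFunc is a hypothetical helper that downloads the
// i-th encrypted piece from the given provider address.
type FetchFunc func(providerAddr string, i int) ([]byte, error)

// Parallel fetches n pieces concurrently, assigning piece i
// to provider i mod len(providers) in round-robin fashion
// (providers must be non-empty).
func Parallel(fetch FetchFunc, providers []string, n int) ([][]byte, error) {
    pieces := make([][]byte, n)
    errs := make([]error, n)
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            // Writes to distinct slice indexes are race-free.
            pieces[i], errs[i] = fetch(providers[i%len(providers)], i)
        }(i)
    }
    wg.Wait()
    for _, err := range errs {
        if err != nil {
            return nil, err
        }
    }
    return pieces, nil
}
\end{verbatim}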
A reasonable explanation is that a single-point service is easily affected by factors such as the network connection, bandwidth usage, or propagation latency, and any change in these factors may greatly influence the download service experienced by users. Once another device is added, the single-point risk diminishes, as the download service becomes decentralized and robust. More connections drive better availability, as also shown by setting the number of devices to 2, 3, and 4.

\begin{figure*}[!hbt]
\centering
\includegraphics[width=\linewidth]{img/tests.png}
\caption{\textbf{Download Times for Different Data Types, Data Sizes, and Storage Capacities}: We evaluate three data formats: video \textit{(grey)}, image \textit{(orange)}, and text \textit{(blue)}. For each data type, we separately test the download times at different data sizes: 10M \textit{(left)}, 100M \textit{(middle)}, and 1G \textit{(right)}. We also investigate the performance with an increasing number of storage devices \textit{(from 1 to 4)}, or equivalently, the number of service providers.}
\label{fig-tests}
\end{figure*}

\smallskip
\noindent\textbf{Average Time.} We dive into one of the data types to evaluate i) the average download time, measured in seconds per MB by repeating the experiments multiple times under different data sizes, and ii) the trend with an increasing number of storage devices. Compared to the previous evaluations, this series of experiments scrutinizes the subtle variations under different configurations, yielding a suite of curves. As stated in Tab.\ref{tab-avgtime}, the average download times under storage capacities from 1 to 6 are, respectively, $0.167$, $0.102$, $0.068$, $0.051$, $0.039$, and $0.031$ seconds per MB. The improvement gradually diminishes, approaching a convex (downward) curve as illustrated in Fig.\ref{fig-performance}. This indicates that the download speedup is not strictly proportional to the storage capacity; the two merely have a positive relation that follows a diminishing marginal effect.

\section{Discussion}
\label{sec-discuss}
In this section, we highlight several major features of BDTS, covering its \textit{usability}, \textit{compatibility}, and \textit{extensibility}.

\smallskip
\noindent\textbf{Usability.} Our proposed scheme improves usability in two ways. Firstly, we separately store the raw data and the abstract data: the raw data provided by the sellers is stored on the local servers of the service providers, while the corresponding abstract data (in the context of this paper, the \textit{data}, \textit{description}, and \textit{proof}) is recorded on-chain. A successful download requires matching both the decryption keys and the data proofs under the supervision of smart contracts. Secondly, the data traded in our system includes all types of streaming data, such as video, audio, and text, which cover most of the existing online resources.

\smallskip
\noindent\textbf{Compatibility.} Our solution can be integrated with existing crypto-based schemes. For instance, to avoid repeated payments for the same data, simply relying on the index technique is not enough. The watermarking technique \cite{yang2020collusion} is a practical way to embed a specific mark into data without significantly changing its functionality; it can also incorporate bio-information from users, greatly enhancing security.
Beyond that, the stored (encrypted) data can leverage a hierarchical scheme \cite{gentry2002hierarchical} to manage complicated data while retaining the efficiency of fast queries.

\smallskip
\noindent\textbf{Extensibility.} BDTS can extend its functionalities by incorporating off-chain payment techniques (also known as layer-two solutions) \cite{gudgeon2020sok}. Off-chain payment has the advantage of low transaction fees across multiple trades with the same person. Besides, existing off-chain payment solutions offer many advanced properties, such as privacy preservation and concurrency \cite{malavolta2017concurrency}. Our implementation only sets up the backbone protocol for fair exchange, leaving many flexible slots for extending functionalities with mature techniques.

\begin{figure*}
\begin{minipage}[!h]{0.65\linewidth}
\captionof{table}{Average Download Time}
\label{tab-avgtime}
\resizebox{0.93\linewidth}{!}{
\begin{tabular}{c|cccccc|cc}
\toprule
\multicolumn{1}{c}{\quad} & \multicolumn{6}{c}{\quad\textbf{Data Size (Text)}\quad} & \multicolumn{1}{c}{\quad\textbf{Average Time}\quad} \\
\midrule
\multicolumn{1}{c|}{\textbf{Storage}} & {\textbf{1M}} & {\textbf{10M}} & {\textbf{50M}} & {\textbf{100M}} & {\textbf{500M}} & \textbf{1G} & \quad\textbf{(s/MB)}\quad \\
\midrule
1 & 0.16 & 1.78 & 7.96 & 16.55 & 80.52 & 166.45 & 0.167 \\
2 & 0.10 & 0.98 & 4.89 & 8.60 & 43.48 & 88.04 & 0.102 \\
3 & 0.07 & 0.77 & 2.54 & 5.29 & 27.44 & 56.15 & 0.068 \\
4 & 0.05 & 0.61 & 2.03 & 4.21 & 22.22 & 43.51 & 0.051 \\
5 & 0.04 & 0.38 & 1.79 & 3.33 & 18.88 & 34.52 & 0.039 \\
6 & 0.03 & 0.32 & 1.56 & 2.88 & 14.69 & 29.48 & 0.031 \\
\bottomrule
\end{tabular}
}
\end{minipage}
\begin{minipage}[!h]{0.33\linewidth}
\caption{Trend of Download Time}
\label{fig-performance}
\includegraphics[height = 3.65cm]{img/performance.png}
\end{minipage}
\end{figure*}

\section{Related Work}
\label{sec-rw}
In this section, we review related primitives surrounding \textit{fair exchange} protocols and \textit{blockchain}s, and give background on the supporting technique of \textit{game theory}.

\smallskip
\noindent\textbf{Blockchain in Trading Systems.} Blockchain has been widely used in trading systems due to its advantageous security properties of non-repudiation, non-equivocation, and non-frameability \cite{li2020accountable}. Many scholars have worked to build robust trading systems by leveraging blockchain and smart contracts. Jung et al. \cite{jung2017accounttrade} propose AccountTrade, an accountable trading system among customers who distrust each other; any misbehaving consumer can be detected and punished through its book-keeping abilities. Chen et al. \cite{chen2017bootstrapping} design an offline digital content trading system in which an arbitration institution handles any dispute that occurs. Dai et al. \cite{dai2019sdte} propose SDTE, a trading system that protects data and prevents analysis-code leakage; they employ TEE technology (especially Intel SGX) to protect the data in an isolated area at the hardware level. Similarly, Li et al. \cite{li2020accountable} leverage a TEE-assisted smart contract to trace the evidence of investigators' actions, where automatic execution enables accountability for warrant execution. Zhou et al. \cite{zhou2018distributed} introduce a data trading system that prevents index-data leakage, in which participants exchange data based on a smart contract.
These solutions rely on blockchain to generate persistent evidence and to act as a transparent authority for resolving disputes. However, they are only effective for trading \textit{text}-type data, rather than data cast in streaming channels such as TV shows and films, which are costly to deliver. Nor has the fairness issue in such trades been seriously discussed. \smallskip \noindent\textbf{Fair Exchanges using Blockchain.} Traditional ways of promoting fair exchange across distrustful parties rely on trusted third parties, which can monitor the activities of participants and judge whether they have behaved faithfully. However, centralization is the major hurdle. Blockchain, with its intrinsic decentralization, automation, and accountability, can replace the role of the TTP. The behaviors of the involved parties are transparently recorded on-chain, deterring all types of cheating and compromise. Meanwhile, a predefined incentive model can be automatically operated by smart contracts, guaranteeing that each participant is rewarded according to their contribution. Based on that, blockchain-based fair exchange protocols have been well researched. Dziembowski et al. \cite{dziembowski2018fairswap} propose FairSwap, utilizing the smart contract to guarantee fairness; the contract plays the role of an external judge to resolve disagreements. He et al. \cite{he2021fair} propose a fair content delivery scheme using blockchain, scrutinizing the distinction between exchange fairness and delivery fairness during trades. Eckey et al. \cite{eckey2020optiswap} propose a smart contract-based fair exchange protocol with an optimistic mode, which minimizes the interaction between the parties. Janin et al. \cite{janin2020filebounty} present FileBounty, a fair protocol using the smart contract, which ensures that a buyer purchases data at an agreed price without compromising content integrity. Besides, blockchains are further applied to multi-party computations in trading systems \cite{shin2017t}\cite{choudhuri2017fairness}\cite{kiayias2016blockchain}. \smallskip \noindent\textbf{Game Theory in Blockchain.} Game theory is frequently adopted in the blockchain field due to the players' profit-pursuing nature \cite{liu2019survey}. It is used to simulate the actual behaviors of each rational player under complex scenarios. The first step of such an analysis is to set the constraints: static or dynamic, two-party or multi-party, etc. Based on that, the theory can be used to derive the optimal profitable behaviors of a rational miner in many different game scenarios, such as the stochastic game \cite{kiayias2016blockchain}, the cooperative game \cite{lewenberg2015bitcoin}, the evolutionary game \cite{kim2019mining} and the Stackelberg game \cite{chen2022absnft}. The second step is to select a suitable model matching the assumed conditions and apply it to real cases. Lohr et al. \cite{lohr2022formalizing} and Janin et al. \cite{janin2020filebounty} give analyses of two-party exchange protocols. Analyzing whether a behavior deviates from the Nash equilibrium yields many useful results on the losses and gains of a protocol design. Besides, other interesting outcomes can also be investigated, especially concerning the fairness and security of mining procedures.
Existing studies independently observe miners' strategies in selfish mining \cite{eyal2015miner}\cite{kwon2017selfish}\cite{negy2020selfish}\cite{sapirshtein2016optimal}, multi-miner strategies \cite{bai2021blockchain}, compliance strategies towards PoW/PoS \cite{karakostas2022blockchain}, pooled strategies \cite{wang2019pool}\cite{li2020mining}, and fickle mining across different chains \cite{kwon2019bitcoin}. Our analyses are consistent with these established research principles. \section{Conclusion} In this paper, we explore the fairness issue in current data trading solutions. Traditional centralized authorities are not reliable because they hold unsupervised, excessive power. Our proposed scheme, BDTS, addresses such issues by leveraging the blockchain technology with well-designed smart contracts. The scheme utilizes automatically-operating smart contracts to act in the role of the data executor with transparency and accountability. Meanwhile, BDTS can incentivize the involved parties to behave honestly. Our analyses, based on rigorous game-theoretic induction, prove that the game can achieve a subgame perfect Nash equilibrium with optimal payoffs under the benign actions of all players. Furthermore, we implement the scheme on top of the Hyperledger Fabric platform with comprehensive evaluations. The results demonstrate that our system provides fast and reliable services for users. \bibliographystyle{unsrt}
\section{Introduction} There is growing evidence for the existence of non-ordinary hadrons that do not follow the quark model, {\it i.e.} the quark-antiquark-meson or three-quark-baryon classification. Meson Regge trajectories relate resonance spins $J$ to the square of their masses, and for ordinary mesons they are approximately linear. The functional form of a Regge trajectory depends on the underlying dynamics and, for example, the linear trajectory for mesons is consistent with the quark model, as it can be explained in terms of a rotating relativistic flux tube that connects the quark with the antiquark. Regge trajectories associated with non-ordinary mesons do not, however, have to be linear. The non-ordinary nature of the lightest scalar meson, the $f_0(500)$, also referred to as the $\sigma$, together with a few other scalars, was postulated long ago \cite{Jaffe:1976ig}. In the context of the Regge classification, a recent study of the meson spectrum \cite{Anisovich:2000kxa} concluded that the $\sigma$ meson does not belong to the same set of trajectories that many ordinary mesons do. In \cite{Masjuan:2012gc}, it was concluded that the $\sigma$ can be omitted from the fits to linear $(J,M^2)$ trajectories because of its large width. The reason is that its width was taken as a measure of the uncertainty on its mass, and it was found that, when fitting trajectory parameters, its contribution to the overall $\chi^2$ was insignificant. In a recent work \cite{Londergan:2013dza} we developed a formalism based on dispersion relations that, instead of fitting a specific, {\it e.g.} linear, form to the spins and masses of various resonances, enables us to calculate the trajectory using as input the position and the residue of a complex resonance pole in a scattering amplitude. When the method was applied to the $\rho(770)$ resonance, which appears as a pole in the elastic $P$-wave $\pi\pi$ scattering amplitude, the resulting trajectory was found to be, to a good approximation, linear. The resulting slope and intercept are in good agreement with phenomenological Regge fits. The slope, which is slightly less than 1 GeV$^{-2}$, is expected to be universal for all ordinary trajectories. It is worth noting that in this approach the resonance width is, as it should be, related to the imaginary part of the trajectory, and not a source of uncertainty. The $\sigma$ meson also appears as a pole in the $\pi\pi$ $S$-wave scattering amplitude. The position and residue of the pole have recently been accurately determined in \cite{Caprini:2005zr} using rigorous dispersive formalisms. When the same method was applied to the $\sigma$ meson, however, we found quite a different trajectory. It has a significantly larger imaginary part, and the slope parameter, computed at the physical mass as the derivative of the spin with respect to the mass squared, is more than one order of magnitude smaller than the universal slope. The trajectory is far from linear; instead it is qualitatively similar to the trajectory of a Yukawa potential. We also note that a deviation from linearity is not necessarily implied by the large width of the $\sigma$, since it was also shown in \cite{Londergan:2013dza} that resonances with large widths may belong to linear trajectories. Our findings give further support to the non-ordinary nature of the $\sigma$.
Still, one may wonder whether the single case of the $\rho$ meson, where the method agrees with Regge phenomenology, gives sufficient evidence that it can distinguish between ordinary and non-ordinary mesons. In this letter, therefore, we show that other ordinary trajectories can be predicted with the same technique, as long as the underlying resonances are almost elastic. For this purpose, we have concentrated on resonances that decay nearly $100\%$ to two mesons. In addition to the $\rho$ there are two other well-known examples: the $f_2(1270)$, whose branching ratio to $\pi\pi$ is $84.8^{+2.4}_{-1.2}\%$, and the $f_2'(1525)$, with a branching ratio to $K\bar K$ of $(88.7\pm2.2)\%$. These resonances are well established in the quark model and, as we show below, the Regge trajectories predicted by our method come out almost real and linear, with a slope close to the universal one. There is an additional check on the method that we perform here. Since the formalism used in the case of the $\rho$ was based on a twice-subtracted dispersion relation, the trajectory had a linear term plus a dispersive integral over the imaginary part. Since the imaginary part of the trajectory is closely related to the decay width, one might wonder whether the $\rho(770)$, $f_2(1270)$ and $f_2'(1525)$ trajectories come out straight just because their widths are small; in other words, whether for narrow resonances the straight-line behavior is not predicted but already built in through the subtractions. For this reason, in this work we also consider three subtractions, and show that for the ordinary resonances under study the quadratic term is negligible. The paper is organized as follows. In the next section we briefly review the dispersive method, and in Sect.~\ref{sec:numerical results} we present the numerical results. In Sect.~\ref{sec:3subs} we discuss the results of the calculation with three subtractions. A summary and outlook are given in Sect.~\ref{conclusions}. \section{Dispersive determination of a Regge trajectory from a single pole} The partial wave expansion of the elastic scattering amplitude, $T(s,t)$, of two spinless mesons of mass $m$ is given by \begin{equation} T(s,t)=32 K \pi \sum_l (2l+1) t_l(s) P_l(z_s(t)), \label{fullamp} \end{equation} where $z_s(t)$ is the $s$-channel scattering angle and $K=1,2$ depending on whether the two mesons are distinguishable or not. The partial waves $t_l(s)$ are normalized according to \begin{equation} t_l(s) = e^{i\delta_l(s)}\sin{\delta_l(s)}/\rho(s), \quad \rho(s) = \sqrt{1-4m^2/s}, \end{equation} where $\delta_l(s)$ is the phase shift. The unitarity condition on the real axis in the elastic region, \begin{equation} \mbox{Im}t_l(s)=\rho(s)|t_l(s)|^2, \label{pwunit} \end{equation} is automatically satisfied. When $t_l(s)$ is continued from the real axis to the entire complex plane, unitarity determines the amplitude discontinuity across the cut on the real axis above $s=4m^2$. It also determines the continuation in $s$, at fixed $l$, onto the second sheet, where resonance poles are located. It follows from Regge theory that the same resonance poles appear when the amplitude is continued into the complex $l$-plane \cite{Reggeintro}, leading to \begin{equation} t_l(s) = \frac{\,\beta(s)}{l-\alpha(s)\,} + f(l,s), \label{Reggeliket} \end{equation} where $f(l,s)$ is analytic near $l=\alpha(s)$. The Regge trajectory $\alpha(s)$ and residue $\beta(s)$ satisfy $\alpha(s^*)=\alpha^*(s)$ and $\beta(s^*)=\beta^*(s)$ in the complex-$s$ plane cut along the real axis for $s > 4m^2$.
Thus, as long as the pole dominates in Eq.\eqref{Reggeliket}, partial wave unitarity, Eq.\eqref{pwunit}, analytically continued to complex $l$, implies \begin{equation} \mbox{Im}\,\alpha(s) = \rho(s) \beta(s), \label{unit} \end{equation} and determines the analytic continuation of $\alpha(s)$ to the complex plane \cite{Chu:1969ga}. At threshold, partial waves behave as $t_l(s) \propto q^{2l}$, where $q^2=s/4-m^2$, so that if the Regge pole dominates the amplitude we must have $\beta(s) \propto q^{2\alpha(s)}$. Moreover, following Eq.\eqref{fullamp}, the Regge pole contribution to the full amplitude is proportional to $(2\alpha + 1) P_\alpha(z_s)$, so that, in order to cancel the poles of the Legendre function $P_\alpha(z_s)\propto\Gamma(\alpha + 1/2)$, the residue has to vanish when $\alpha + 3/2$ is a negative integer, {\it i.e.}, \begin{equation} \beta(s) = \gamma(s) \hat s^{\alpha(s)} /\Gamma(\alpha(s) + 3/2). \label{reduced} \end{equation} Here we have defined $\hat s =(s-4m^2)/s_0$ and introduced a scale $s_0$ to fix the dimensions. The so-called reduced residue, $\gamma(s)$, is a real analytic function. Hence, on the real axis above threshold, since $\beta(s)$ is real, the phase of $\gamma$ is \begin{equation} \mbox{arg}\,\gamma(s) = - \mbox{Im}\alpha(s) \log(\hat s) + \arg \Gamma(\alpha(s) + 3/2). \end{equation} Consequently, we can write a dispersion relation for $\gamma(s)$: \begin{equation} \gamma(s) = P(s) \exp\left(c_0 + c' s + \frac{s}{\pi} \int_{4m^2}^\infty \!\!\!\!ds' \frac{\mbox{arg}\,\gamma(s')}{s' (s' - s)} \right), \label{g} \end{equation} where $P(s)$ is an entire function. Note that the behavior at large $s$ cannot be determined from first principles but, as we expect linear Regge trajectories for ordinary mesons, we should allow $\alpha$ to behave as a first-order polynomial at large $s$. This implies that $\mbox{Im}\alpha(s)$ decreases with growing $s$, and thus it obeys the dispersion relation~\cite{Reggeintro,Collins-PLB}: \begin{equation} \alpha(s) = \alpha_0 + \alpha' s + \frac{s}{\pi} \int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s' (s' -s)}. \label{alphadisp} \end{equation} Assuming $\alpha' \ne 0$, from unitarity, Eq.\eqref{unit}, in order to match the asymptotic behaviors of $\beta(s)$ and $\mbox{Im}\alpha(s)$ it is required that $c' = \alpha' ( \log(\alpha' s_0) - 1)$ and that $P(s)$ be at most a constant, $P(s) = \mbox{const}$. Therefore, using Eq.\eqref{Reggeliket}, we arrive at the following three equations, which define the ``constrained Regge-pole'' amplitude~\cite{Chu:1969ga}: \begin{align} \mbox{Re} \,\alpha(s) & = \alpha_0 + \alpha' s + \frac{s}{\pi} PV \int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s' (s' -s)}, \label{iteration1}\\ \mbox{Im}\,\alpha(s)&= \frac{ \rho(s) b_0 \hat s^{\alpha_0 + \alpha' s} }{|\Gamma(\alpha(s) + \frac{3}{2})|} \exp\Bigg( - \alpha' s[1-\log(\alpha' s_0)] + \!\frac{s}{\pi} PV\!\!\!\int_{4m^2}^\infty\!\!\!\!\!\!\!ds' \frac{ \mbox{Im}\alpha(s') \log\frac{\hat s}{\hat s'} + \mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s' (s' - s)} \Bigg), \label{iteration2}\\ \beta(s) &= \frac{ b_0\hat s^{\alpha_0 + \alpha' s}}{\Gamma(\alpha(s) + \frac{3}{2})} \exp\Bigg( -\alpha' s[1-\log(\alpha' s_0)] + \frac{s}{\pi} \int_{4m^2}^\infty \!\!\!\!\!\!\!ds' \frac{ \mbox{Im}\alpha(s') \log\frac{\hat s}{\hat s'} + \mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s' (s' - s)} \Bigg), \label{betafromalpha} \end{align} where $PV$ denotes the principal value. For real $s$, the last two equations reduce to Eq.\eqref{unit}.
The three equations are solved numerically, with the free parameters fixed by demanding that the pole on the second sheet of the amplitude in Eq.~(\ref{Reggeliket}) is at a given location. Thus we will be able to obtain the two independent trajectories corresponding to the $f_2(1270)$ and $f_2'(1525)$ resonances from their respective pole parameters. Note that we are not imposing, but just allowing, linear trajectories. \section{Numerical Results} \label{sec:numerical results} In principle, the method described in the previous section is suitable for resonances that appear in an elastic scattering amplitude, {\it i.e.} resonances that decay into a single two-body channel. For simplicity we also focus on cases where the two mesons in the scattering state have the same mass. We assume that both the $f_2(1270)$ and the $f_2'(1525)$ resonances can be treated as purely elastic, and we will use their decay fractions into channels other than $\pi\pi$ and $K\bar K$, respectively, as an additional systematic uncertainty in their widths and couplings. In our numerical analysis we fit the pole, $s_p$, and residue, $|g|^2$, found on the second Riemann sheet of the Regge amplitude. In this amplitude, $\alpha(s)$ and $\beta(s)$ are constrained to satisfy the dispersion relations in Eqs.\eqref{iteration2} and \eqref{betafromalpha}. Thus, the fit determines the parameters $\alpha_0, \alpha',b_0$ for the trajectory of each resonance. In practice, we minimize the sum of squared differences between the input and output values for the real and imaginary parts of the pole position and for the absolute value of the squared coupling, divided by the square of the corresponding uncertainties. At each step of the minimization procedure a set of $\alpha_0, \alpha'$ and $b_0$ parameters is chosen and the system of Eqs.\eqref{iteration1} and \eqref{iteration2} is solved iteratively. The resulting Regge amplitude for each $\alpha_0, \alpha'$ and $b_0$ is then continued to the complex plane, in order to determine the resonance pole on the second Riemann sheet, and the $\chi^2$ is calculated by comparing this pole to the corresponding input.
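Explicitly, the figure of merit just described takes the standard least-squares form (in an obvious notation; this is our own rendering of the procedure, which is stated above only in words):
\[
\chi^2=\sum_{\mathcal{O}}\frac{\left(\mathcal{O}^{\rm out}-\mathcal{O}^{\rm in}\right)^2}{(\delta\mathcal{O})^2},
\qquad \mathcal{O}\in\left\{{\rm Re}\sqrt{s_p},\;{\rm Im}\sqrt{s_p},\;|g|^2\right\},
\]
where ``in'' refers to the input pole parameters with uncertainties $\delta\mathcal{O}$, and ``out'' to the corresponding quantities extracted from the constrained Regge-pole amplitude at each minimization step.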
\subsection{$f_2(1270)$ resonance} \label{subsec:f2(1270)} In the case of the $f_2(1270)$ resonance, we use as input the pole obtained from the conformal parameterization of the D0 wave from Ref.~\cite{GarciaMartin:2011cn}. In that work the authors use different parameterizations in different energy regions and impose a matching condition. Here we will use the parameterization valid in the region where the resonance dominates the scattering amplitude, namely in the interval $2m_K\le s^{1/2}\le 1420$ MeV. Moreover, we will decrease the width down to $85\%$ of the value found in \cite{GarciaMartin:2011cn} to account for the inelastic channels. The conformal parameterization results in a pole located at $$\sqrt{s_{f_2}}=M-i\Gamma/2=1267.3^{+0.8}_{-0.9}-i(87\pm9)\text{ MeV}$$ and a coupling of $$|g_{f_2\pi\pi}|^2=25\pm3\text{ GeV}^{-2}.$$ With these input parameters, we follow the minimization procedure explained above, until we get a Regge pole at $\sqrt{s_{f_2}}=(1267.3\pm0.9)-i(89\pm10)\text{ MeV}$ and coupling $|g_{f_2\pi\pi}|^2=25\pm3\text{ GeV}^{-2}$. In Fig.~\ref{Fig:ampl_f2} we show the corresponding constrained Regge-pole amplitude on the real axis versus the conformal parameterization that was constrained by the data \cite{GarciaMartin:2011cn}. This comparison is a check that our Regge-pole amplitude, which neglects the background $f(l,s)$ term in Eq.\eqref{Reggeliket}, describes the amplitude well in the pole region, namely for $(M-\Gamma/2)^2<s<(M+\Gamma/2)^2$. The grey bands cover the uncertainties arising from the errors of the input and include an additional $15\%$ systematic uncertainty in the width, as explained above. Taking into account that only the parameters of the pole have been fitted, not the whole amplitude on the real axis, and that we have completely neglected the background in Eq.~(\ref{Reggeliket}), the agreement between the two amplitude models is very good, particularly in the resonance region. Of course, the agreement deteriorates as we move away from the peak region, as illustrated by the shadowed energy regions $s<(M-\Gamma/2)^2$ and $s>(M+\Gamma/2)^2$. \begin{figure} \hspace*{-.6cm} \includegraphics[scale=0.9,angle=-90]{amplitud-f2-both-band-abs.eps} \caption{\rm \label{Fig:ampl_f2} The solid line represents the absolute value of the constrained Regge-pole amplitude for the $f_2(1270)$ resonance. The gray bands cover the uncertainties due to the errors in the input pole parameters. The dashed line corresponds to the absolute value of the data fit obtained in \cite{GarciaMartin:2011cn}. Let us recall that only the parameters of the pole given by this parameterization have been used as input, and not the amplitude itself. The regions covered with a mesh correspond to $s<(M-\Gamma/2)^2$ and $s>(M+\Gamma/2)^2$, where the background might not be negligible anymore. } \end{figure} Since our constrained Regge amplitude provides a good description of the resonance region, we can trust the resulting Regge trajectory. The parameters of the trajectory obtained through our minimization procedure are as follows: \begin{equation} \alpha_0=0.9^{+0.2}_{-0.3}\,;\hspace{3mm} \alpha'=0.7^{+0.3}_{-0.2} \text{ GeV}^{-2};\hspace{3mm}b_0=1.3^{+1.4}_{-0.8}\,.\label{eq:paramf2} \end{equation} In Fig.~\ref{Fig:alpha_f2} we show the real and imaginary parts of $\alpha(s)$, with solid and dashed lines, respectively. Again, the gray bands cover the uncertainties coming from the errors in the input pole parameters. We find that the real part of the trajectory is almost linear and much larger than the imaginary part, as expected for Regge trajectories of ordinary mesons. For comparison, we also show, with a dotted line, the Regge trajectory obtained in~\cite{Anisovich:2000kxa} by fitting a linear Regge trajectory to the meson states associated with the $f_2(1270)$, traditionally referred to as the $P'$ trajectory. We see that the two trajectories are in good agreement. Indeed, our parameters are compatible, within errors, with those in~\cite{Anisovich:2000kxa}: $\alpha_{P'}\approx0.71$ and $\alpha'_{P'}\approx0.83\text{ GeV}^{-2}$. We also include in Fig.~\ref{Fig:alpha_f2} the resonances from the PDG~\cite{PDG} listings that could be associated with this trajectory. In Fig.~\ref{Fig:alpha_f2} the trajectory has been extrapolated to high energies, where the elastic approximation does not hold any more and we cannot hope to give a precise prediction for its behavior. The only reason to do this is to show the position of the candidate states connected to the $f_2(1270)$. In the figure, this region is covered with a mesh to the right of the line at the $s$ that corresponds to the resonance mass plus three half-widths.
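As a rough orientation before discussing the candidates, one can estimate where the $J=4$ state should lie by keeping only the linear part of Eq.\eqref{iteration1} with the central values of Eq.\eqref{eq:paramf2} (our own estimate; it neglects the dispersive integral, which is small when ${\rm Im}\,\alpha$ is small):
\[
{\rm Re}\,\alpha(M_{f_2}^2)\approx 0.9+0.7\times(1.2673)^2\simeq 2.0,
\qquad
{\rm Re}\,\alpha(s)=4 \;\Rightarrow\; \sqrt{s}\approx\sqrt{\frac{4-0.9}{0.7}}\simeq 2.1\ {\rm GeV},
\]
so the linear part indeed reproduces the spin of the $f_2(1270)$ at its mass and places the $J=4$ partner in the $2.1$ GeV region discussed next.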
Of course, we cannot confirm which of these resonances belongs to the $f_2(1270)$ trajectory, but we observe that the $J=4$ resonance could be the $f_4(2050)$, as proposed in~\cite{Anisovich:2000kxa}, the $f_J(2220)$\footnote{This resonance still ``needs confirmation'' and it is not yet known whether its spin is 2 or 4 \cite{PDG}.}, or even the $f_4(2300)$. All these resonances appear in the PDG, but are omitted from the summary tables. \begin{figure} \hspace*{-.6cm} \includegraphics[scale=0.9,angle=-90]{alpha-f2.eps} \caption{\rm \label{Fig:alpha_f2} Real (solid) and imaginary (dashed) parts of the $f_2(1270)$ Regge trajectory. The gray bands cover the uncertainties due to the errors in the input pole parameters. The area covered with a mesh is the mass region starting three half-widths above the resonance mass, where our elastic approach should be considered only as a mere extrapolation. For comparison, we show with a dotted line the $f_2(1270)$ Regge trajectory obtained in~\cite{Anisovich:2000kxa}, traditionally called the $P'$ trajectory. We also show the resonances listed in the PDG that are candidates for this trajectory. Note that their average mass does not always coincide with the nominal one, as is the case for the $f_2(1270)$.} \end{figure} \subsection{$f_2'(1525)$ resonance} \label{subsec:f2'(1525)} As commented above, the $f_2'(1525)$ decays mainly to two kaons. Although there are no scattering data on the $l=2$ elastic $\bar KK$ phase shift in this mass region, the mass and width of the $f_2'(1525)$ are given in the PDG~\cite{PDG}. Thus we use $M_{f_2'}=1525\pm5$ MeV and $\Gamma^{KK}_{f_2'}=69^{+10}_{-9}$ MeV, where the central value of this width corresponds to the decay into $\bar KK$ only. Now, we infer the scattering pole parameters assuming the $f_2'(1525)$ is well described by an elastic Breit-Wigner shape, so that we take the pole to be at $s_{f_2'}=(M_{f_2'}-i \Gamma_{f_2'}/2)^2$ and the residue to be ${\rm Res}=-M_{f_2'}^2\Gamma_{f_2'}^{KK}/2p$, where $p$ is the CM momentum of the two kaons. Since $\vert g \vert^2=-16 \pi (2l+1)\,{\rm Res}/(2p)^{2l}$, we find $|g_{f_2'KK}|^2=19\pm 3 \text{ GeV}^{-2}$.
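For orientation, this value can be reproduced by a quick back-of-the-envelope evaluation (our own arithmetic, using the central values above and taking $m_K\simeq 496$ MeV): with $l=2$,
\[
p=\sqrt{M_{f_2'}^2/4-m_K^2}\simeq 0.58\ {\rm GeV},
\qquad
|g_{f_2'KK}|^2=\frac{16\pi\,(2l+1)}{(2p)^{4}}\,\frac{M_{f_2'}^2\,\Gamma^{KK}_{f_2'}}{2p}\simeq 19\ {\rm GeV}^{-2},
\]
consistent with the quoted $19\pm3\text{ GeV}^{-2}$.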
With these input parameters we solve the dispersion relations using the same minimization method and obtain the following Regge pole parameters: $\sqrt{s_{f_2'}}=(1525\pm5 )-i(34^{+4}_{-5})\text{ MeV}$ and $|g_{f_2'KK}|^2=19\pm3\text{ GeV}^{-2}$. Since we lack experimental data to compare the amplitudes against, we proceed directly to examining the trajectory. The parameters that we obtain are \begin{equation} \alpha_0=0.53^{+0.10}_{-0.44}\,;\hspace{3mm} \alpha'=0.63^{+0.20}_{-0.05} \text{ GeV}^{-2};\hspace{3mm}b_0=1.33^{+0.63}_{-0.09}\,,\label{eq:paramsf2p} \end{equation} which give the Regge trajectory shown in Fig.~\ref{Fig:alpha_ff2}. Again, we find the real part nearly linear and much larger than the imaginary part. As in the case of the $f_2(1270)$, the slope is compatible with that found for the $P'$ trajectory in~\cite{Anisovich:2000kxa}, $\alpha'_{P'}\approx0.83\text{ GeV}^{-2}$, and the intercepts also agree. As we did for the $f_2(1270)$, we include in Fig.~\ref{Fig:alpha_ff2} the $J=4$ candidates for the $f_2'(1525)$ trajectory. These are the $f_J(2220)$ and the $f_4(2300)$. We remark that there is no experimental evidence for the $f_4(2150)$ that was predicted in~\cite{Anisovich:2000kxa} from their analysis of the $f_2'(1525)$ trajectory. As commented before, these resonances lie in a region, covered with a mesh in Fig.~\ref{Fig:alpha_ff2}, beyond the strict applicability limit of our approach, where our results must be considered qualitative at most. Finally, we remark that the PDG lists another $f_2$ resonance, albeit one requiring confirmation, with a mass between those of the $f_2(1270)$ and the $f_2'(1525)$; it could also have either the $f_J(2220)$ or the $f_4(2300)$ as its higher-mass partner. \begin{figure} \hspace*{-.6cm} \includegraphics[scale=0.9,angle=-90]{alpha-ff2.eps} \caption{\rm \label{Fig:alpha_ff2} Real (solid) and imaginary (dashed) parts of the $f_2'(1525)$ Regge trajectory. The gray bands cover the uncertainties due to the errors in the input pole parameters and the area covered with a mesh is the mass region starting three half-widths above the resonance mass, where our elastic approach must be considered just as an extrapolation. For comparison, we show with a dotted line the Regge trajectory obtained in~\cite{Anisovich:2000kxa} and the resonances listed in the PDG that could belong to this trajectory.} \end{figure} \section{Dispersion relation with three subtractions} \label{sec:3subs} As already mentioned in the introduction, one may wonder whether the linearity of the trajectories we obtain for the two $D$-wave resonances, as well as for the $\rho(770)$ in~\cite{Londergan:2013dza}, is related to the use of two subtractions in the dispersion relation for $\alpha(s)$. In particular, since the resonances are rather narrow, one could expect the imaginary parts of their trajectories to be small, so that if the last term in Eq.~\eqref{alphadisp} were dropped the trajectory would reduce to a straight line. Thus, in order to show that the linearity of the trajectory is not forced by the particular parameterization, we repeated the calculations using three subtractions in the dispersion relations, \begin{align} \mbox{Re} \,\alpha(s) & = \alpha_0 + \alpha' s + \alpha'' s^2 + \frac{s^2}{\pi} PV \int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s'^2 (s' -s)}, \label{iteration1-3sub}\\ \mbox{Im}\,\alpha(s)&= \frac{ \rho(s) b_0 \hat s^{\alpha_0 + \alpha' s+\alpha'' s^2} }{|\sqrt{\Gamma(\alpha(s) + \frac{3}{2})}|} \exp\Bigg( - \frac{1}{2}[1-\log(\alpha'' s_0^2)] s (R+\alpha'' s)-Q s\nonumber\\ & \hspace{4.4cm} + \!\frac{s^2}{\pi} PV\!\!\!\int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s') \log\frac{\hat s}{\hat s'} + \frac{1}{2}\mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s'^2 (s' - s)} \Bigg), \label{iteration2-3sub} \end{align} with \begin{equation} R=B-\frac1{\pi}\int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s'^2}, \end{equation} and \begin{equation} Q=-\frac1{\pi}\int_{4m^2}^\infty ds' \frac{ -\mbox{Im}\alpha(s')\log{\hat s'}+\frac{1}{2}\mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s'^2}. \end{equation} The constants $R$ and $Q$ and the square root of $\Gamma$ have been introduced to ensure that, at large $s$, $\mbox{Im}\,\alpha(s)$ behaves as $1/s$. The parameters that we obtain for the trajectories with these dispersion relations are shown in Table~\ref{Tab:3sub}.
\renewcommand{\arraystretch}{1.3} \begin{table} \centering \caption{Parameters of the $f_2(1270)$, $f_2'(1525)$ and $\rho(770)$ Regge trajectories using three-time-subtracted dispersion relations.} \label{Tab:3sub} \begin{tabular*}{0.7\textwidth}{@{\extracolsep{\fill} }ccccc}\hline & $\alpha_0$ & $\alpha'$ (GeV$^{-2}$) & $\alpha''$ (GeV$^{-4}$) & \hspace{5mm}$b_0$\hspace{5mm} \\\hline $f_2(1270)$ & 1.01 & 0.97 & 0.04 & 2.13\\ $f_2'(1525)$ & 0.42 & 0.65 & 0.02 & 4.58 \\ $\rho(770)$ & 0.56 & 1.11 & 0.03 & 0.88 \\ \hline \end{tabular*} \end{table} With the above parameterization we obtain for the fitted pole parameters $\sqrt{s_{f_2}}=1267.3-i 90 \text{ MeV}$, $|g_{f_2\pi\pi}|^2=25\text{ GeV}^{-2}$, $\sqrt{s_{f_2'}}=1525-i35\text{ MeV}$, $|g_{f_2'KK}|^2=19\text{ GeV}^{-2}$, $\sqrt{s_{\rho}}=763-i 74\text{ MeV}$ and $|g_{\rho\pi\pi}|^2=35\text{ GeV}^{-2}$. Therefore, despite having four parameters to fit three numbers, we find no real improvement in the description of the poles. In the case of three subtractions, neglecting the imaginary part of the trajectories would result in a quadratic trajectory. Therefore, in Fig.~\ref{Fig:3sub} we compare the trajectories obtained using three (solid line) and two (dashed line) subtractions in the dispersion relations. We observe that in both cases there is a curvature, but that in the elastic region the trajectories are almost linear. The difference between the two methods only becomes apparent for masses well above the range of applicability. Moreover, the difference between the results obtained using two and three subtractions can be used as an indicator of the stability of our results, and it therefore confirms that the applicability range of our method is well estimated and ends as soon as the inelasticity in the wave becomes sizable. \begin{figure} \begin{tabular}{ccc} \includegraphics[scale=0.6,angle=-90]{alpha-f2-3sub.eps}& \hspace{-3mm}\includegraphics[scale=0.6,angle=-90]{alpha-ff2-3sub.eps}& \hspace{-3mm}\includegraphics[scale=0.6,angle=-90]{alpha-rho-3sub.eps} \end{tabular} \caption{\rm \label{Fig:3sub} Regge trajectories obtained using three-time-subtracted dispersion relations (solid lines) compared to the ones obtained with twice-subtracted dispersion relations (dashed lines with gray error bands).} \end{figure} \section{Discussion, conclusions and outlook} \label{conclusions} In~\cite{Londergan:2013dza} a dispersive method was developed to calculate the Regge trajectories of resonances that appear in the elastic scattering of two mesons. We showed how, using the scattering pole associated with a resonance, it is possible to determine whether its trajectory is of the standard type, {\it i.e.} real and linear as followed by ``ordinary'' $\bar qq$-mesons, or not. This method thus provides a possible benchmark for identifying non-ordinary mesons. In particular, the ordinary Regge trajectory of the $\rho(770)$, which is a well-established $\bar qq$ state, was successfully predicted, whereas the $\sigma$ meson, a long-term candidate for a non-ordinary meson, was found to follow a completely different trajectory. In the first part of this work we have successfully predicted the trajectories of two other well-established ordinary resonances, the $f_2(1270)$ and $f_2'(1525)$. In particular, from the parameters of the associated poles in the complex energy plane we have calculated their trajectories and have shown that they are almost real and very close to a straight line, as expected.
In the second part of this work we have addressed the question of whether choosing two subtractions in the dispersion relations of \cite{Londergan:2013dza} was actually imposing that the real part of the trajectory be a straight line for relatively narrow resonances. To address this question we analyzed the same resonances using a dispersion relation with an additional subtraction. We have shown that, within the range of applicability of our approach, which basically coincides with the elastic regime, the resulting trajectories are once again very close to a straight line. In the future it will be interesting to use such dispersive methods to determine the trajectories of other mesons, {\it e.g.} the $K^*(892)$, as well as its controversial scalar ``partner'', the $K^*(800)$, which is another long-time candidate for a non-ordinary meson. Heavy mesons in the charm and beauty sectors can also be examined. We also plan to extend the method to meson-baryon scattering, where, for example, the $\Delta(1232)$ is another candidate for an ordinary resonance. We are also extending the approach to coupled channels. {\bf Acknowledgments} We would like to thank M.R. Pennington for several discussions. JRP and JN are supported by the Spanish project FPA2011-27853-C02-02. JN acknowledges funding by the Fundaci\'on Ram\'on Areces. APS's work is supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contracts DE-AC05-06OR23177 and DE-FG0287ER40365. \vspace*{-.2cm}
\section{Introduction} Throughout this paper, $\k$ denotes an algebraically closed field of characteristic $0$, and all vector spaces are over $\k$. All algebras considered in this paper are noetherian and affine unless stated otherwise. The antipode of a Hopf algebra is assumed to be bijective. \subsection{Motivation.} We are motivated by the following three seemingly unrelated but in fact related phenomena. The first one is based on the following simple observation. It is well-known that the affine line $\mathbb{A}^{1}$ is a commutative algebraic group of dimension one. If we consider the infinite dimensional Taft algebra $T(n, t, \xi)$ (see Subsection \ref{ss2.3} for its definition), then we find that the affine line (here and in the following we identify an affine variety with its coordinate algebra) is also a Hopf algebra in the braided tensor category $^{\mathbb{Z}_{n}}_{\mathbb{Z}_n}{\mathcal{YD}}$ of Yetter-Drinfeld modules over $\k \Z_{n}$. Intuitively, \begin{figure}[hbt] \begin{picture}(100,60)(0,-60) \put(0,0){\line(-1,-1){50}}\put(60,-30){\makebox(0,0){$\in {^{\mathbb{Z}_{n}}_{\mathbb{Z}_n}{\mathcal{YD}}}.$}} \end{picture} \end{figure} \noindent From this, a natural question is: \begin{equation}\label{1.1}\textsf{ Can we realize other irreducible curves as Hopf algebras in } ^{\mathbb{Z}_{n}}_{\mathbb{Z}_n}{\mathcal{YD}}?\end{equation} In order to answer this question, we first make two remarks. Firstly, observe that the above line is smooth, and thus the infinite dimensional Taft algebra is \emph{regular}, i.e. has finite global dimension. Secondly, it is harmless to assume that the action of $\Z_n$ on the curve is faithful, since otherwise one can replace $\Z_n$ by a smaller group $\Z_{m}$ with $m|n$. This assumption implies that the infinite dimensional Taft algebra is \emph{prime}. Putting these together, the infinite dimensional Taft algebra is prime and regular of Gelfand-Kirillov dimension (GK-dimension for short) one. Under this assumption, one can show that the affine line $\k[x]$ and the multiplicative group $\k[x^{\pm1}]$ are the \emph{only} smooth curves which can be realized as Hopf algebras in $^{\mathbb{Z}_{n}}_{\mathbb{Z}_n}{\mathcal{YD}}$ (see Corollary \ref{c2.9}). Therefore, the only remaining possibility is to consider singular curves. We find that, at least for some special curves, the answer is ``Yes''! As an illustration, consider the example $T(\{2,3\},1,\xi)$ (see Subsection \ref{ss4.1}); from this example we find that the cusp $y_1^2=y_2^3$ is a Hopf algebra in $^{\mathbb{Z}_{6}}_{\mathbb{Z}_6}{\mathcal{YD}}$. That is, \begin{tikzpicture} \begin{picture}(100,60)(-100,0) \draw (0,0) parabola (1,1.5);\put(60,0){\makebox(0,0){$\in {^{\mathbb{Z}_{6}}_{\mathbb{Z}_6}{\mathcal{YD}}}.$}} \draw (0,0) parabola (1,-1.5); \end{picture} \end{tikzpicture} So the above analysis tells us that we need to consider the structure of prime Hopf algebras of GK-dimension one which are \emph{not regular} if we want to answer question \eqref{1.1}. The second one is the wide range of recent research and interest in the classification of Hopf algebras of finite GK-dimension. See for instance \cite{AS, AAH, BZ, GZ, Liu, LWZ, WZZ1, WZZ2, WZZ3, WLD}. To the author's knowledge, there are two different lines of classification for such Hopf algebras. One line focuses on pointed versions, in particular on braidings (i.e. Nichols algebras).
The first celebrated work in this line is Rosso's basic observation about the structure of Nichols algebras of finite GK-dimension with positive braiding (see \cite[Theorem 21.]{Ro}). Then the pointed Hopf algebra domains of finite GK-dimension with generic infinitesimal braiding were classified by Andruskiewitsch and Schneider \cite[Theorem 5.2.]{AS} and by Andruskiewitsch and Angiono \cite[Theorem 1.1.]{AA}. Recently, Andruskiewitsch-Angiono-Heckenberger \cite{AAH} conjectured that a Nichols algebra of diagonal type has finite GK-dimension if and only if the corresponding generalized root system is finite, and, assuming the validity of this conjecture, they classified a natural class of braided spaces whose Nichols algebras have finite GK-dimension \cite[Theorem 1.10.]{AAH}. Another line focuses more on the algebraic and homological properties of these Hopf algebras, motivated by noncommutative algebra and noncommutative algebraic geometry. Historically, Lu, Wu and Zhang initiated the program of classifying Hopf algebras of GK-dimension one \cite{LWZ}. Then the author found a new class of examples of prime regular Hopf algebras of GK-dimension one \cite{Liu}. Brown and Zhang \cite[Theorem 0.5]{BZ} made further efforts in this direction and classified all prime regular Hopf algebras $H$ of GK-dimension one under an extra hypothesis. In 2016, Wu, Ding and the author \cite[Theorem 8.3]{WLD} removed this hypothesis and at last gave a complete classification of prime regular Hopf algebras of GK-dimension one. One interesting fact is that some non-pointed Hopf algebras of GK-dimension one were found in \cite{WLD}, and as far as we know they are the only non-pointed Hopf algebras of finite nonzero GK-dimension known to date. For Hopf algebras $H$ of GK-dimension two, all known classification results are given under the condition that $H$ is a domain. In \cite[Theorem 0.1.]{GZ}, Goodearl and Zhang classified all Hopf algebras $H$ of GK-dimension two which are domains and satisfy the condition $\Ext^{1}_{H}(\k, \k)\neq 0$. For those with vanishing Ext-groups, some interesting examples were constructed by Wang-Zhang-Zhuang \cite[Section 2.]{WZZ2}, and they conjectured that these examples, together with the Hopf algebras given in \cite{GZ}, exhaust all Hopf algebra domains of GK-dimension two. In order to study Hopf algebras $H$ of GK-dimensions three and four, a more restrictive condition was added: $H$ is connected, that is, the coradical of $H$ is $1$-dimensional. All connected Hopf algebras of GK-dimension three and four were classified by Zhuang \cite[Theorem 7.6]{Zh} and by Wang, Zhang and Zhuang \cite[Theorem 0.3.]{WZZ3}, respectively. So, as a natural development of this line, we want to classify prime Hopf algebras of GK-dimension one without the regularity assumption. The third one is the lack of knowledge about non-pointed Hopf algebras. In the last two decades, essential progress has been achieved in understanding the structure, and even the classification, of pointed Hopf algebras through the efforts of many experts, such as Andruskiewitsch, Schneider and Heckenberger. See for example \cite{AS1,He,He1}. In contrast, we know very little about non-pointed Hopf algebras; in fact, it is very hard to provide any nontrivial examples of them. This shortage of examples obviously hampers the study and understanding of non-pointed Hopf algebras.
Inspired by our previous work \cite{WLD} on the classification of prime regular Hopf algebras, which prompted us to find a series of new examples of non-pointed Hopf algebras, we expect to obtain more examples by classifying prime Hopf algebras of GK-dimension one without regularity. \subsection{Setting.} As the research continued, we gradually realized that the condition ``regular" is very delicate and strong. The situation becomes much worse if we just remove the regularity condition directly; in other words, we still need some ingredients from regularity at present. To find suitable ingredients, let us go back to question \eqref{1.1}: in that case the Hopf algebra has a natural projection to the group algebra $\k\Z_n$. The first question is: what is this natural number $n?$ In the Taft algebra $H$ case, it is not hard to see that this $n$ is just the PI degree of $H$, that is, $n=$PI.deg$(H)$. So, crudely speaking, $n$ measures how far a Hopf algebra is from a commutative one. At the same time, a Hopf algebra which has a projection to $\k\Z_n$ will have a $1$-dimensional representation $M$ of order $n$, that is, $M^{\otimes n}\cong \k$. Putting these together, we form our first hypothesis about prime Hopf algebras of GK-dimension one: \textsf{ \begin{itemize}\item[$\;\;$(Hyp1):] The Hopf algebra $H$ has a $1$-dimensional representation $\pi:\;H\to \k$ whose order is equal to PI.deg$(H)$.\end{itemize}} The second question is: where is the curve? It is not hard to see that the curve is exactly the coinvariant algebra under the projection to $\k\Z_n$. We will see that for each $1$-dimensional representation of $H$ one has an analogue of the coinvariant algebra, called the \emph{invariant components} with respect to this representation (see Subsection \ref{subs2.2} for details). In view of (Hyp1), our second hypothesis is: \textsf{\begin{itemize}\item[(Hyp2):] The invariant components with respect to $\pi_{H}$ are domains.\end{itemize}} By definition, a Hopf algebra $H$ we consider has two invariant components, namely the left invariant component $H_{0,\pi}^l$ and the right invariant component $H_{0,\pi}^r$ (see Definition \ref{de1}). By Lemma \ref{l2.7}, $H_{0,\pi}^{l}$ is a domain if and only if $H_{0,\pi}^r$ is a domain, so (Hyp2) could be weakened to require that just one of the two invariant components is a domain; but in practice (Hyp2) is more convenient for us. One may feel that (Hyp1) is strange and strong. Actually, any noetherian affine Hopf algebra $H$ has natural $1$-dimensional representations: the space of right (resp. left) homological integrals. The order of any one of these $1$-dimensional modules is called the \emph{integral order} (see Subsection \ref{subs2.2} for the related definitions) of $H$, denoted by $\io (H)$; it is used widely in the regular case. So a plausible alternative to (Hyp1) is \textsf{(Hyp1)$'$ $\quad$ $\io(H)=$ PI.deg$(H)$.} Clearly, (Hyp1)$'$ is stronger than (Hyp1) and would be easier to use than (Hyp1). But we will see that (Hyp1)$'$ is not ideal, because it excludes some nice and natural examples (see Remark \ref{r4.3}). Note that all prime regular Hopf algebras of GK-dimension one satisfy both (Hyp1)$'$ and (Hyp2) automatically (see \cite[Theorem 7.1.]{LWZ}).
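As a quick illustration of these hypotheses (a computation of our own, combining facts recalled in Subsection \ref{ss2.3} below), consider the infinite dimensional Taft algebra $T=T(n,t,\xi)$. Its left homological integral is $\int_T^l\cong T/\langle x, g-\xi^{-1}\rangle$, so the associated $1$-dimensional representation is
\[
\pi:\;T\longrightarrow \k,\qquad x\mapsto 0,\quad g\mapsto \xi^{-1},
\]
and, since $\xi$ is a primitive $n$th root of unity, the order of $\pi$ under convolution is exactly $n=\text{PI.deg}(T)$. Hence $T$ satisfies (Hyp1), and even (Hyp1)$'$, while its invariant component is the polynomial algebra $\k[x]$, the affine line mentioned above, so (Hyp2) holds as well.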
Since we have examples which satisfy (Hyp1) and (Hyp2) but are not regular (see, say, the example of the cusp given above), regularity is strictly stronger than (Hyp1) $+$ (Hyp2) for prime Hopf algebras of GK-dimension one. The main result of this paper is a classification of all prime Hopf algebras of GK-dimension one satisfying (Hyp1) $+$ (Hyp2) (see Theorem \ref{t7.1}). As byproducts, a number of new Hopf algebras, in particular some non-pointed Hopf algebras, are found, and the answer to question \eqref{1.1} follows easily. Moreover, many finite-dimensional Hopf algebras that are new, to the author's knowledge, are obtained, which in particular helps us to find a Hopf algebra of dimension $24$ (see \cite{BGNR}). \subsection{Strategy and organization.} In a word, the idea of this paper is to build a ``relative version" (i.e. with respect to an arbitrary $1$-dimensional representation rather than just the $1$-dimensional representation given by homological integrals) of the methods of \cite{BZ,WLD}, and to extend them to our general setting. The strategy of the proof of the main result is thus divided into two parts: the ideal case and the remaining case. However, we should point out the most significant difference between the regular Hopf algebras of GK-dimension one and our setting: in the regular case, the invariant components are Dedekind domains (see \cite[Theorem 2.5 (f)]{BZ}), while in our case they are just required to be general domains! At first glance, there is a huge distance between a general domain and a Dedekind domain. A contribution of this paper is to overcome this difficulty and show that these domains can be classified under the requirement that they are the invariant components of a prime Hopf algebra of GK-dimension one. To overcome this difficulty, a new concept called \emph{a fraction of a natural number} is introduced (see Definition \ref{d3.1}). As the first step in realizing our idea, we construct a number of new prime Hopf algebras of GK-dimension one, which are called the ``fraction versions" of the known examples of prime regular Hopf algebras of GK-dimension one. Then we use the concepts of the so-called representation minor, denoted $\im (\pi)$, and representation order, denoted $\ord (\pi)$, of a noetherian affine Hopf algebra $H$ to deal with the ideal case, that is, the case where either $\im(\pi)=1$ or $\ord (\pi)=\im(\pi)$. In the ideal case, we prove that every prime Hopf algebra of GK-dimension one satisfying (Hyp1) $+$ (Hyp2) must be isomorphic either to a known regular Hopf algebra given in \cite[Section 3]{BZ} or to a fraction version of one of these regular Hopf algebras. Then we consider the remaining case, that is, the case $\ord(\pi)>\im(\pi)>1$ (note that by definition $\im(\pi)|\ord(\pi)$). We show that for each prime Hopf algebra $H$ of GK-dimension one in the remaining case one can always construct a Hopf subalgebra $\widetilde{H}$ which lies in the ideal case. As one of the difficult parts of this paper, we show that $\widetilde{H}$ essentially determines the structure of $H$, from which we not only obtain a complete classification of prime Hopf algebras of GK-dimension one satisfying (Hyp1) $+$ (Hyp2) but also find a series of new examples of non-pointed Hopf algebras. At last, we give some applications of our results: in particular, question \eqref{1.1} is solved, a partial solution to \cite[Question 7.3C.]{BZ} is given, and some new examples of finite dimensional Hopf algebras, including semisimple and nonsemisimple ones, are found.
In particular, we provide an example of a $24$-dimensional Hopf algebra, which seems not to have been written out explicitly in \cite{BGNR}. Moreover, at the end of the paper we formulate a conjecture (see Conjecture \ref{con7.19}) about the structure of a general prime Hopf algebra of GK-dimension one for further research and consideration. The paper is organized as follows. Necessary definitions, known examples and preliminary results are collected in Section 2. In particular, in order to compare regular Hopf algebras with non-regular ones, the widely used tool of homological integrals is recalled. The definition of a fraction of a natural number, the fraction versions of Taft algebras and some combinatorial relations, which are crucial to the subsequent analysis, are given in Section 3. Section 4 is devoted to constructing new examples of prime Hopf algebras of GK-dimension one which satisfy (Hyp1) and (Hyp2). We should point out that the proof that the example $D(\underline{m},d,\gamma)$, which is not pointed in general, is a Hopf algebra is quite nontrivial. The properties of these new examples are also established in this section; in particular, we show that they are pivotal Hopf algebras. The classification of prime Hopf algebras of GK-dimension one satisfying (Hyp1) $+$ (Hyp2) in the ideal case is carried out in Section 5, and Section 6 solves the same question in the remaining case. The main result is formulated in the last section, and we end the paper with some consequences, questions and a conjecture on the structure of a general prime Hopf algebra of GK-dimension one. Among them, a new kind of semisimple Hopf algebra is found and studied, and its fusion rules are given. We also give another series of finite-dimensional nonsemisimple Hopf algebras in this last section. \textbf{{Acknowledgments.}} The work started during my visit to the Department of Mathematics, MIT. I would like to thank, from the bottom of my heart, Professor Pavel Etingof for his heuristic discussions, encouragement and hospitality. The author also wants to thank Professor James Zhang for his continued help and support, and in particular for showing him their examples of non-regular Hopf algebras given in Subsection \ref{sub4.2}. I am grateful to Professors Ken Brown, Q.-S. Wu and D.-M. Lu for useful communications, and in particular I thank Ken Brown for showing the author his nice slides on infinite dimensional Hopf algebras. \section{Preliminaries} In this section we recall the necessary background on affine noetherian Hopf algebras, for completeness and the convenience of the reader. For general background knowledge, the reader is referred to \cite{Mo} for Hopf algebras, \cite{MR} for noetherian rings, \cite{Br,LWZ,BZ,Go} for expositions on noetherian Hopf algebras, and \cite{EGNO} for general knowledge of tensor categories. We usually work with left modules (resp. comodules). Let $A^{op}$ denote the opposite algebra of $A$. Throughout, we use the symbols $\Delta,\epsilon$ and $S$, respectively, for the coproduct, counit and antipode of a Hopf algebra $H$, and Sweedler's notation for the coproduct, $\D(h)=\sum h_1\otimes h_2=h_1\otimes h_2=h'\otimes h''\;(h\in H)$, will be used freely.
Similarly, the coaction of a left comodule $M$ is denoted by $\delta(m)=m_{(-1)}\otimes m_{(0)}\in H\otimes M,\;m\in M.$ \subsection{Background from ring theory and homological integrals.} In this paper, a ring $R$ is called \emph{regular} if it has finite global dimension, \emph{prime} if $0$ is a prime ideal, and \emph{affine} if it is finitely generated. {$\bullet$ \emph{PI-degree}.} If $Z$ is an Ore domain, then the {\it rank} of a $Z$-module $M$ is defined to be the $Q(Z)$-dimension of $Q(Z)\otimes_Z M$, where $Q(Z)$ is the quotient division ring of $Z$. Let $R$ be an algebra satisfying a polynomial identity (PI for short). The PI-degree of $R$ is defined to be $$\text{PI-deg}(R)=\text{min}\{n\,|\,R\hookrightarrow M_n(C)\ \text{for some commutative ring}\ C\}$$ (see \cite[Chapter 13]{MR}). If $R$ is a prime PI ring with center $Z$, then the PI-degree of $R$ equals the square root of the rank of $R$ over $Z$. {$\bullet$\emph{ Artin-Schelter condition}.} Recall that an algebra $A$ is said to be \emph{augmented} if there is an algebra morphism $\epsilon:\; A\to \k$. Let $(A,\epsilon)$ be an augmented noetherian algebra. Then $A$ is \emph{Artin-Schelter Gorenstein}, usually abbreviated to \emph{AS-Gorenstein}, if \begin{itemize} \item [(AS1)] injdim$_AA=d<\infty$, \item [(AS2)] dim$_\k\Ext_A^d(_A\k, \;_AA)=1$ and dim$_\k\Ext_A^i(_A\k,\; _AA)=0$ for all $i\neq d$, \item [(AS3)] the right $A$-module versions of (AS1, AS2) hold.\end{itemize} The following result is a combination of \cite[Theorem 0.1]{WZ} and \cite[Theorem 0.2 (1)]{WZ}, and it shows that a large number of Hopf algebras are AS-Gorenstein. \begin{lemma}\label{l2.1} Each affine noetherian PI Hopf algebra is AS-Gorenstein. \end{lemma} {$\bullet$\emph{ Homological integral}.} The concept of a \emph{homological integral} can be defined for any AS-Gorenstein augmented algebra. \begin{definition}\cite[Definition 1.3]{BZ} \emph{Let $(A, \epsilon)$ be a noetherian augmented algebra and suppose that $A$ is AS-Gorenstein of injective dimension $d$. Any non-zero element of the 1-dimensional $A$-bimodule $\Ext_A^d( _A\k,\; _AA)$ is called a \emph{left homological integral} of $A$. We write $\int_A^l=\Ext_A^d(_A\k,\; _AA)$. Any non-zero element in $\Ext_{A^{op}}^d(\k_A, A_A)$ is called a \emph{right homological integral} of $A$. We write $\int_A^r=\Ext_{A^{op}}^d(\k_A, A_A)$. By abusing the language we also call $\int_A^l$ and $\int_A^r$ the left and the right homological integrals of $A$, respectively.} \end{definition} \subsection{Relative version.}\label{subs2.2} Assuming that a Hopf algebra $H$ has a $1$-dimensional representation $\pi: H\to \k$, we give some results relative to this $\pi$, most of them coming from \cite[Section 2]{BZ}, using slightly different notation from \cite{BZ}. Throughout this subsection, we fix this representation $\pi$. \noindent{$\bullet$ \emph{Winding automorphisms}.} We write $\Xi_\pi^l$ for the \emph{left winding automorphism} of $H$ associated to $\pi$, namely $$\Xi_\pi^l(a):=\sum\pi(a_1)a_2 \;\;\;\;\;\;\;\textrm{for} \;a\in H.$$ Similarly we write $\Xi_\pi^r$ for the right winding automorphism of $H$ associated to $\pi$, that is, $$\Xi_\pi^r(a):=\sum a_1\pi(a_2)\;\;\;\;\;\;\textrm{for}\; a\in H.$$ Let $G_\pi^l$ and $G_\pi^r$ be the subgroups of $\text{Aut}_{\k\text{-alg}}(H)$ generated by $\Xi_\pi^l$ and $\Xi_\pi^r$, respectively. Define: $$G_\pi:=G_\pi^l\bigcap G_\pi^r.$$ The following collects some parts of \cite[Proposition 2.1]{BZ}.
\begin{lemma}\label{l2.3} Let $H_{0,\pi}^{l},H_{0,\pi}^{r}$ and $H_{0,\pi}$ be the subalgebras of invariants $H^{G_\pi^l},H^{G_\pi^r}$ and $H^{G_\pi}$, respectively. Then we have \begin{itemize} \item[(1)] $H_{0,\pi}=H_{0,\pi}^{l}\bigcap H_{0,\pi}^{r}.$ \item[(2)] $\Xi_\pi^l\Xi_\pi^r=\Xi_\pi^r\Xi_\pi^l.$ \item[(3)] $\Xi_\pi^r\circ S=S\circ (\Xi_\pi^l)^{-1}$. Therefore, $S(H_{0,\pi}^{l})\subseteq H_{0,\pi}^{r}$ and $S(H_{0,\pi}^{r})\subseteq H_{0,\pi}^{l}$. \end{itemize} \end{lemma} \emph{$\bullet$ $\pi$-order and $\pi$-minor.} With the same notation as above, the {\it $\pi$-order} (denoted $\ord(\pi)$) of $H$ is defined to be the order of the group $G_\pi^l$: \begin{equation}\ord(\pi):=|G_\pi^l|.\end{equation} \begin{lemma} We always have $|G_\pi^l|=|G_\pi^r|$. \end{lemma} \begin{proof} Assume that $|G_\pi^l|=n$. Since $(\Xi_\pi^l)^n$ is the left winding automorphism of the $n$th convolution power $\pi^n$, by definition we know that $$a=\sum \pi^{n}(a_1)a_2$$ for all $a\in H$. Therefore $\pi^{n}=\e$ (the above formula implies that $\pi^{n}$ is a left counit, and the counit is unique), and thus $a=\sum a_1\pi^{n}(a_2)$ for all $a$. So $|G_\pi^l|\geq |G_\pi^r|$. Similarly, we have $|G_\pi^r|\geq |G_\pi^l|$. \end{proof} By this lemma, the above definition is independent of the choice of $G_\pi^l$ or $G_\pi^r$. The \emph{$\pi$-minor} (denoted $\mi(\pi)$) of $H$ is defined by \begin{equation}\mi(\pi):=|G_\pi^l/G_\pi^l\cap G_\pi^r|.\end{equation} \begin{remark}\emph{In particular, if the $1$-dimensional representation is given by the right module structure of the left homological integrals, then the corresponding representation order and representation minor are called the \emph{integral order} and the \emph{integral minor}, denoted $$\io(H)\quad\quad \textrm{and}\quad\quad \im(H),$$ respectively. Both the integral order and the integral minor are used widely in \cite{BZ,WLD}. Therefore, a general $1$-dimensional representation can be considered as a relative version of homological integrals. Note that the notations $\io(H)$ and $\im(H)$ will be used freely in this paper too.} \end{remark} $\bullet$ \emph{Invariant components and the strongly graded property}. Let $H$ be a prime Hopf algebra of GK-dimension one. By a fundamental result of Small, Stafford and Warfield \cite{SSW}, a semiprime affine algebra of GK-dimension one is a finite module over its center. Therefore, it is PI and has finite PI-degree. Now we assume that $H$ satisfies (Hyp1) (see Subsection 1.1), and thereby $|G_\pi^l|=$ PI-deg$(H)$ is finite, say $n$. Moreover, since $G_\pi^l$ is a cyclic group, its character group $\widehat{G_\pi^l}:=\text{Hom}_{\k\text{-alg}}(\k G_\pi^l, \k)$ is isomorphic to itself. Similarly, the character group $\widehat{G_\pi^r}$ of $G_\pi^r$ is isomorphic to $G_\pi^r$. Fix a primitive $n$th root $\zeta$ of $1$ in $\k$, and define $\chi\in\widehat{G_\pi^l}$ and $\eta\in \widehat{G_\pi^r}$ by setting $$\chi(\Xi_\pi^l)=\zeta \quad \text{and} \quad \eta(\Xi_\pi^r)=\zeta.$$ Thus $\widehat{G_\pi^l}=\{\chi^i|0\leqslant i\leqslant n-1\}$ and $ \widehat{G_\pi^r}=\{\eta^j|0\leqslant j\leqslant n-1\}$. For each $0\leqslant i, j\leqslant n-1$, let $$H_{i,\pi}^l:=\{a\in H|\Xi_\pi^l(a)=\chi^i(\Xi_\pi^l)a\} \;\;\textrm{and}\;\; H_{j,\pi}^r:=\{a\in H|\Xi_\pi^r(a)=\eta^j(\Xi_\pi^r)a\}.$$ The following lemma is \cite[Theorem 2.5 (b)]{BZ} (note that part (b) of \cite[Theorem 2.5.]{BZ} does not require the regularity condition). \begin{lemma}\label{l2.5} \begin{itemize} \item[(1)] $H=\bigoplus_{\chi^i\in\widehat{G_\pi^l}}H_{i,\pi}^l$ is strongly $\widehat{G_\pi^l}$-graded.
\item[(2)] $H=\bigoplus_{\eta^j\in\widehat{G_\pi^r}}H_{j,\pi}^r$ is strongly $\widehat{G_\pi^r}$-graded. \end{itemize} \end{lemma} \begin{definition}\label{de1} \emph{The subalgebra $H_{0,\pi}^{l}$ (resp. $H_{0,\pi}^{r}$) is called the left (resp. right) \emph{invariant component} of $H$ with respect to $\pi$.} \end{definition} Therefore, (Hyp2) just says that both $H_{0,\pi}^{l}$ and $H_{0,\pi}^{r}$ are domains. In fact, these two algebras are closely related. \begin{lemma}\label{l2.7} Let $H$ be a prime Hopf algebra of GK-dimension one. Then \begin{itemize}\item[(1)] As algebras, we have $H_{0,\pi}^{l}\cong (H_{0,\pi}^r)^{op}$. \item[(2)] If moreover either $H_{0,\pi}^{l}$ or $H_{0,\pi}^{r}$ is a domain, then both $H_{0,\pi}^{l}$ and $H_{0,\pi}^{r}$ are commutative domains, and thus $H_{0,\pi}^{l}\cong H_{0,\pi}^r$. \end{itemize} \end{lemma} \begin{proof} By Lemma \ref{l2.3}(3), we have $S(H_{0,\pi}^{l})\subseteq H_{0,\pi}^{r}$ and $S(H_{0,\pi}^{r})\subseteq H_{0,\pi}^{l}$. Since the antipode $S$ is an algebra anti-homomorphism (and is bijective here, $H$ being a noetherian PI Hopf algebra), its restriction gives an anti-isomorphism between $H_{0,\pi}^{l}$ and $H_{0,\pi}^{r}$, which proves (1). For (2), it is harmless to assume that $H_{0,\pi}^l$ is a domain. Since $H$ has GK-dimension one and $H=\bigoplus_{\chi^i\in\widehat{G_\pi^l}}H_{i,\pi}^l$ is strongly graded (see Lemma \ref{l2.5}), $H_{0,\pi}^l$ has GK-dimension one too. Now it is well-known that a domain of GK-dimension one must be commutative (see for example \cite[Lemma 4.5]{GZ}). Therefore $H_{0,\pi}^l$ is commutative, and $H_{0,\pi}^{l}\cong H_{0,\pi}^r$ by (1). So $H_{0,\pi}^r$ is a commutative domain too. \end{proof} By Lemma \ref{l2.3}(2), $\Xi_\pi^l \Xi_\pi^r=\Xi_\pi^r \Xi_\pi^l$, and thus each $H_{i,\pi}^l$ is stable under the action of $G_\pi^r$. Consequently, the $\widehat{G_\pi^l}$- and $\widehat{G_\pi^r}$-gradings on $H$ are \emph{compatible} in the sense that $$H_{i,\pi}^l=\bigoplus_{0\leqslant j \leqslant n-1}(H_{i,\pi}^l\cap H_{j,\pi}^r)\quad \text{and}\quad H_{j,\pi}^r=\bigoplus_{0\leqslant i \leqslant n-1}(H_{i,\pi}^l\cap H_{j,\pi}^r)$$ for all $i, j$. Then $H$ is a bigraded algebra: \begin{equation}\label{eq2.3} H=\bigoplus_{0\leqslant i,j\leqslant n-1} H_{ij,\pi},\end{equation} where $H_{ij,\pi}=H_{i,\pi}^l\cap H_{j,\pi}^r$. We write $H_{0,\pi}:=H_{00,\pi}$ for convenience. For later use, we collect some further properties of $H$ which were proved in \cite{BZ} without the regularity requirement. For details, see \cite[Proposition 2.1 (c)(e)]{BZ} and \cite[Lemma 6.3]{BZ}. \begin{lemma}\label{l2.8} Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp1). Then \begin{itemize} \item[(1)] $\Delta(H_{i,\pi}^l)\subseteq H_{i,\pi}^l\otimes H$ and $\Delta (H_{j,\pi}^r)\subseteq H\otimes H_{j,\pi}^r$; thus $H_{i,\pi}^l$ is a right coideal of $H$ and $H_{j,\pi}^r$ is a left coideal of $H$ for all $0\leq i,j\leq n-1$; \item[(2)] $\Xi_\pi^r \circ S=S\circ(\Xi_\pi^l)^{-1},$ where $(\Xi_\pi^l)^{-1}=\Xi_{\pi\circ S}^l.$ \item[(3)] $S(H_{i,\pi}^l)=H_{-i,\pi}^{r}$ and $S(H_{ij,\pi})=H_{-j,-i,\pi}$. \item[(4)] If $i\neq j$, then $\e(H_{ij,\pi})=0$. \item[(5)] If $i=j$, then $\e(H_{ii,\pi})\neq 0.$ \end{itemize} \end{lemma} \begin{remark} \emph{(1) In the regular case, that is, when $H$ is a prime regular Hopf algebra of GK-dimension one, the right homological integrals form a $1$-dimensional representation whose order is equal to $\text{PI-deg}(H)$. In this case, the invariant components are called \emph{classical components} in \cite[Section 2]{BZ}. } \emph{(2) In the rest of this paper, we will omit the subscript $\pi$ when the representation is clear from the context.
For instance, we sometimes write $H_{0,\pi}$ simply as $H_0$ when there is no confusion about which representation is being considered.} \end{remark} The following result combines parts of \cite[Proposition 5.1, Corollary 5.1]{BZ} and is very useful for us. \begin{lemma}\label{l2.10} Let $A$ be a $\k$-algebra and let $G$ be a finite abelian group of order $n$ acting faithfully on $A$. So $A$ is $\widehat{G}$-graded, $A=\bigoplus_{\chi\in \widehat{G}}A_{\chi}$. Assume that \emph{1)} this grading is strong and \emph{2)} the invariant component $A_0$ is a commutative domain. Then we have \begin{itemize} \item[(a)] Every non-zero homogeneous element is a regular element of $A$, and $\text{PI-deg}(A)\leq n$. \item[(b)] There is an action $\triangleright$ of $\widehat{G}$ on $A_0$ with the following property: for any $\chi\in \widehat{G}$ and $a\in A_0$, \begin{equation}\label{eq2.4}(\chi\triangleright a)u_{\chi} =u_{\chi}a\end{equation} where $u_{\chi}$ is an arbitrary nonzero element belonging to $A_{\chi}.$ \item[(c)] \emph{PI-deg}$(A)=n$ if and only if the action $\triangleright$ is faithful. \item[(d)] If \emph{PI-deg}$(A)=n$, then $A$ is prime. \item[(e)] Let $K\leqslant \widehat{G}$ be a subgroup and let $B$ be the subalgebra $\bigoplus_{\chi\in K}A_{\chi}$. If \emph{PI-deg}$(A)=n$, then $B$ is prime with PI-degree $|K|.$ \end{itemize} \end{lemma} \subsection{Known examples.}\label{ss2.3} The following examples already appeared in \cite{BZ,WLD}, and we recall them for completeness. \noindent$\bullet$ \emph{Connected algebraic groups of dimension one}. It is well-known that over an algebraically closed field $\k$ there are precisely two connected algebraic groups of dimension one (see, e.g., \cite[Theorem 20.5]{Hu}). Therefore, there are precisely two commutative $\k$-affine domains of GK-dimension one which admit a structure of Hopf algebra, namely $H_1=\k[x]$ and $H_2=\k[x^{\pm 1}]$. For $H_1$, $x$ is a primitive element, and for $H_2$, $x$ is a group-like element. Commutativity and cocommutativity imply that $\io(H_i)=\im(H_i)=1$ for $i=1, 2$. \noindent$\bullet$ \emph{Infinite dihedral group algebra}. Let $\mathbb{D}$ denote the infinite dihedral group $\langle g, x | g^2 = 1, gxg=x^{-1}\rangle$. Both $g$ and $x$ are group-like elements in the group algebra $\k\mathbb{D}$. By cocommutativity, $\im(\k\mathbb{D})=1$. Using \cite[Lemma 2.6]{LWZ}, one sees that as a right $\k\mathbb{D}$-module, $\int_{\k\mathbb{D}}^l\cong \k\mathbb{D}/\langle x-1, g+1\rangle.$ This implies $\io(\k\mathbb{D})=2$. \noindent$\bullet$ \emph{Infinite dimensional Taft algebras}. Let $n$ and $t$ be integers with $n>1$ and $0\leqslant t \leqslant n-1$. Fix a primitive $n$th root $\xi$ of $1$. Let $T=T(n, t, \xi)$ be the algebra generated by $x$ and $g$ subject to the relations $$g^n=1\quad \text{and} \quad xg=\xi gx.$$ Then $T(n, t, \xi)$ is a Hopf algebra with coalgebra structure given by $$\D(g)=g\otimes g,\ \epsilon(g)=1 \quad \text{and} \quad \D(x)=x\otimes g^t+1\otimes x,\ \epsilon(x)=0,$$ and with $$S(g)=g^{-1}\quad \text{and} \quad S(x)=-xg^{-t}.$$ As computed in \cite[Subsection 3.3]{BZ}, we have $\int_T^l \cong T/\langle x, g-\xi^{-1}\rangle,$ and the corresponding homomorphism $\pi$ yields the left and right winding automorphisms \[{\Xi_{\pi}^l:} \begin{cases} x\longmapsto x, &\\ g\longmapsto \xi^{-1}g, & \end{cases} \textrm{and} \;\;\;\;\; \Xi_{\pi}^r: \begin{cases} x\longmapsto \xi^{-t}x, &\\ g\longmapsto \xi^{-1}g.
& \end{cases}\] Hence $G_\pi^l=\langle \Xi_{\pi}^l\rangle$ and $G_\pi^r=\langle \Xi_{\pi}^r\rangle$ both have order $n$. If $\gcd(n, t)=1$, then $G_\pi^l\cap G_\pi^r=\{1\}$, and \cite[Proposition 3.3]{BZ} implies that there exists a primitive $n$th root $\eta$ of 1 such that $T(n, t, \xi)\cong T(n, 1, \eta)$ as Hopf algebras. If $\gcd(n,t)\neq 1$, let $m:=n/\gcd(n, t)$; then $G_\pi^l\cap G_\pi^r=\langle (\Xi_{\pi}^l)^m\rangle$. Thus we have $\io(T(n, t, \xi))=n$ and $\im(T(n, t, \xi))=m$ for any $t$. In particular, $\im(T(n, 0, \xi))=1$, $\im(T(n, 1, \xi))=n$ and $\im(T(n, t, \xi))=m=n/t$ when $t|n$. \noindent$\bullet$ \emph{Generalized Liu algebras}. Let $n$ and $\omega$ be positive integers. The generalized Liu algebra, denoted by $B(n, \omega, \gamma)$, is generated by $x^{\pm 1}, g$ and $y$, subject to the relations \begin{equation*} \begin{cases} xx^{-1}=x^{-1}x=1,\quad xg=gx,\quad xy=yx, & \\ yg=\gamma gy, & \\ y^n=1-x^\omega=1-g^n, & \\ \end{cases} \end{equation*} where $\gamma$ is a primitive $n$th root of 1. The comultiplication, counit and antipode of $B(n, \omega, \gamma)$ are given by $$\Delta(x)=x\otimes x,\quad \Delta(g)=g\otimes g, \quad \Delta(y)=y\otimes g+1\otimes y,$$ $$\epsilon(x)=1,\quad \epsilon(g)=1,\quad \epsilon(y)=0,$$ and $$S(x)=x^{-1},\quad S(g)=g^{-1},\quad S(y)=-yg^{-1}.$$ Let $B:=B(n, \omega, \gamma)$. Using \cite[Lemma 2.6]{LWZ}, we get $\int_B^l=B/\langle y, x-1, g-\gamma^{-1}\rangle$. The corresponding homomorphism $\pi$ yields the left and right winding automorphisms \[{\Xi_{\pi}^l:} \begin{cases} x\longmapsto x, &\\ g\longmapsto \gamma^{-1}g, &\\ y\longmapsto y, & \end{cases} \textrm{and}\;\;\;\; \Xi_{\pi}^r: \begin{cases} x\longmapsto x, &\\ g\longmapsto \gamma^{-1}g, &\\ y\longmapsto \gamma^{-1}y. & \end{cases}\] Clearly these automorphisms have order $n$ and $G_\pi^l\cap G_\pi^r=\{1\}$, whence $\io(B)=\im(B)=n$. \noindent$\bullet$ \emph{The Hopf algebras $D(m,d,\gamma)$}. Let $m,d$ be two natural numbers such that $(1+m)d$ is even, and let $\gamma$ be a primitive $m$th root of $1$. Define $$\omega:=md,\;\;\;\;\xi:=\sqrt{\gamma}.$$ As an algebra, $D=D(m, d, \gamma)$ is generated by $x^{\pm 1}, g^{\pm 1}, y, u_0, u_1, \cdots, u_{m-1}$, subject to the following relations \begin{align*} &xx^{-1}=x^{-1}x=1,\quad\quad gg^{-1}=g^{-1}g=1,\quad\quad xg=gx,\\ &xy=yx,\quad\quad\quad\quad\quad\quad yg=\gamma gy,\ \quad\quad\quad\quad\quad y^m=1-x^\omega=1-g^m,\\ & xu_i=u_ix^{-1},\ \quad \quad\quad\quad yu_i=\phi_iu_{i+1}=\xi x^d u_i y,\quad u_i g=\gamma^i x^{-2d}gu_i,\\ & u_iu_j=\left \{ \begin{array}{lll} (-1)^{-j}\xi^{-j}\gamma^{\frac{j(j+1)}{2}}\frac{1}{m}x^{-\frac{1+m}{2}d}\phi_i\phi_{i+1}\cdots \phi_{m-2-j}y^{i+j}g, & i+j \leqslant m-2,\\ (-1)^{-j}\xi^{-j}\gamma^{\frac{j(j+1)}{2}}\frac{1}{m}x^{-\frac{1+m}{2}d}y^{i+j}g, & i+j=m-1,\\ (-1)^{-j}\xi^{-j}\gamma^{\frac{j(j+1)}{2}}\frac{1}{m}x^{-\frac{1+m}{2}d}\phi_i \cdots \phi_{m-1}\phi_0\cdots \phi_{m-2-j}y^{i+j-m}g, & \textrm{otherwise}, \end{array}\right.\end{align*} where $\phi_i=1-\gamma^{-i-1}x^d$ and $0 \leqslant i, j \leqslant m-1$.
The coproduct $\D$, the counit $\epsilon$ and the antipode $S$ of $D(m,d,\gamma)$ are given by \begin{align*} &\D(x)=x\otimes x,\;\; \D(g)=g\otimes g, \;\;\D(y)=y\otimes g+1\otimes y,\\ &\D(u_i)=\sum_{j=0}^{m-1}\gamma^{j(i-j)}u_j\otimes x^{-jd}g^ju_{i-j};\\ &\epsilon(x)=\epsilon(g)=\epsilon(u_0)=1,\;\;\epsilon(y)=\epsilon(u_s)=0;\\ &S(x)=x^{-1},\;\; S(g)=g^{-1}, \;\;S(y)=-yg^{-1},\\ &S(u_i)=(-1)^i\xi^{-i}\gamma^{-\frac{i(i+1)}{2}}x^{id+\frac{3}{2}(1-m)d}g^{m-i-1}u_i, \end{align*} for $0\leq i\leq m-1$ and $1\leqslant s\leqslant m-1$. Direct computation shows that $\int_D^l=D/(y, x-1, g-\gamma^{-1}, u_0-\xi^{-1}, u_1, u_2, \cdots, u_{m-1}),$ and the left and right winding automorphisms are: \[{\Xi_{\pi}^l:} \begin{cases} x\longmapsto x, &\\ y\longmapsto y, &\\ g\longmapsto \gamma^{-1}g, &\\ u_i\longmapsto \xi^{-1}u_i, & \end{cases} \textrm{and}\;\;\;\; \Xi_{\pi}^r: \begin{cases} x\longmapsto x, &\\ y\longmapsto \gamma^{-1}y, &\\ g\longmapsto \gamma^{-1}g, &\\ u_i\longmapsto \xi^{-(2i+1)} u_i. & \end{cases}\] From these, we know that $\io(D)=2m$ and $\im(D)=m$. \begin{remark} In \cite{WLD}, the authors used the notation $D(m,d,\xi)$ rather than the $D(m,d,\gamma)$ used here. We will see that the notation $D(m,d,\gamma)$ is more convenient for our purposes. \end{remark} Up to isomorphism of Hopf algebras, the above examples form a complete list of prime regular Hopf algebras of GK-dimension one (see \cite[Theorem 8.3]{WLD}). \begin{lemma}\label{l2.12} Let $H$ be a prime regular Hopf algebra of GK-dimension one. Then $H$ is isomorphic to one of the Hopf algebras listed above. \end{lemma} \subsection{Yetter-Drinfeld modules.}\label{ss2.4} This subsection is just a preparation for the question \eqref{1.1} and will not be used in the proof of our main result. Let $H$ be an arbitrary Hopf algebra. By definition, a \emph{left-left Yetter-Drinfeld module} $V$ over $H$ is a left $H$-module and a left $H$-comodule such that $$\delta(h\cdot v)=h_1v_{(-1)}S(h_3)\otimes h_2\cdot v_{(0)}$$ for $h\in H, v\in V.$ The category of left-left Yetter-Drinfeld modules over $H$ is denoted by ${^{H}_{H}\mathcal{YD}}$. It is a braided tensor category. In particular, when $H=\k G$ is a group algebra, we denote this category by ${^{G}_{G}\mathcal{YD}}.$ We briefly summarize results from \cite{Rad}; see also \cite{Maj}. Let $A$ be a Hopf algebra equipped with Hopf algebra maps $\pi:\; A\to H$ and $\iota: H\to A$ such that $\pi\iota=\id_{H}.$ Let $R=A^{coH}=\{a\in A\,|\,(\id\otimes \pi)\D(a)=a\otimes 1\}.$ Then $R$ is a braided Hopf algebra in ${^{H}_{H}\mathcal{YD}}$ through \begin{align*} &h\cdot r:= h_1rS(h_2),\\ & r_{(-1)}\otimes r_{(0)}:=\pi(r_1)\otimes r_2,\\ & r^1\otimes r^2:=\vartheta(r_1)\otimes r_2 \end{align*} for $r\in R, \;h\in H$, where $\D(r)=r^1\otimes r^2$ denotes the coproduct of $r\in R$ in the category ${^{H}_{H}\mathcal{YD}}$ and $\vartheta(a):=a_1\iota\pi(S(a_2))$ for $a\in A.$ Conversely, let $R$ be a Hopf algebra in ${^{H}_{H}\mathcal{YD}}$. A construction discovered by Radford, and interpreted in terms of braided tensor categories by Majid, produces a Hopf algebra $R\# H$ as follows: as a vector space, $R\# H=R\otimes H$; writing $r\# h:=r\otimes h$ for $r\in R,\, h\in H$, the multiplication and coproduct are given by \begin{align*} &(r\# h)(s\# f)=r(h_1\cdot s)\# h_2f,\\ &\D(r\# h)= r^{1}\#(r^2)_{(-1)}h_{1}\otimes (r^2)_{(0)}\#h_2. \end{align*} The resulting Hopf algebra $R\# H$ is called a Radford biproduct or a Majid bosonization. Now let us go back to the situation of maps $\pi:\; A\to H$ and $\iota: H\to A$ such that $\pi\iota=\id_H$.
In this case we have $A\cong R\# H$ and \begin{equation}\label{eq2.5} r_{1}\otimes r_{2}=r^{1}(r^{2})_{(-1)}\otimes (r^{2})_{(0)} \end{equation} for $r\in R.$ With these preparations, we can first settle the question \eqref{1.1} for smooth curves. \begin{corollary}\label{c2.9} The affine line and the punctured affine line (with coordinate algebras $\k[x]$ and $\k[x^{\pm 1}]$) are the only irreducible smooth curves whose coordinate algebras can be realized as Hopf algebras in ${^{\Z_n}_{\Z_n}{\mathcal{YD}}}$ for some $n$. \end{corollary} \begin{proof} Let $C$ be an irreducible smooth curve whose coordinate algebra can be realized as a Hopf algebra in ${^{\Z_n}_{\Z_n}{\mathcal{YD}}}$ for some $n$. There is no harm in assuming that the action of $\Z_n$ on this curve (more precisely, on the coordinate algebra $\k[C]$ of this curve) is faithful. Therefore, the Radford biproduct $$A:=\k[C]\# \k \Z_n$$ constructed above is a Hopf algebra of GK-dimension one. We claim that it is prime and regular. Primeness follows from Lemma \ref{l2.10}: clearly $$A=\bigoplus_{i=0}^{n-1} \k[C]g^{i}.$$ From this, $A$ is a strongly $\widehat{\Z_n}=\langle \chi|\chi^n=1 \rangle$-graded algebra through $\chi(ag^i)=\xi^{i}$ for any $a\in \k[C]$ and $0\leq i\leq n-1$, where $\xi$ is a fixed primitive $n$th root of $1$. Therefore, conditions 1) and 2) of Lemma \ref{l2.10} are fulfilled. By part (b) of Lemma \ref{l2.10}, the action of $\widehat{\Z_n}$ is just the adjoint action of $\Z_{n}=\langle g|g^n=1\rangle$ on $\k[C]$, which is faithful by assumption. Therefore, PI-deg$(A)=n$ by part (c) of Lemma \ref{l2.10}, and part (d) of Lemma \ref{l2.10} now implies that $A$ is prime. Regularity is clear, since the smoothness of $C$ implies the regularity of $\k[C]$ and thus the regularity of $A$. In a word, $A$ is a prime regular Hopf algebra of GK-dimension one. Therefore, the result follows from the classification stated in Lemma \ref{l2.12} by checking the list case by case. \end{proof} \subsection{Pivotal tensor categories.} The purpose of this subsection is to point out that the representation categories of the new examples given in Section 4 enjoy a pleasant property: they are pivotal. The reader may refer to \cite[Section 4.7]{EGNO} for details on the material below. Recall that a tensor category $\mathcal{C}=(\mathcal{C},\otimes,\Phi,\unit, l,r)$ is called \emph{rigid} if every object in $\mathcal{C}$ has both a left and a right dual.
By definition, a left dual object of $V\in \mathcal{C}$ is a triple $(V^{*},\textrm{ev}_{V}, \textrm{coev}_{V})$ with an object $V^{*}\in \mathcal{C}$ and morphisms $\textrm{ev}_{V}:\; V^{*}\otimes V\to \unit$ and $\textrm{coev}_{V}:\;\unit\to V\otimes V^{*}$ such that the compositions \begin{figure}[hbt] \begin{picture}(300,60)(0,0) \put(0,40){\makebox(0,0){$ V$}} \put(10,40){\vector(1,0){60}}\put(40,50){\makebox(0,0){$ \textrm{coev}_V\otimes \id_V$}} \put(105,40){\makebox(0,0){$(V\otimes V^{*})\otimes V$}} \put(140,40){\vector(1,0){20}} \put(150,50){\makebox(0,0){$ \Phi$}}\put(200,40){\makebox(0,0){$ V\otimes(V^{*}\otimes V)$}} \put(240,40){\vector(1,0){60}}\put(270,50){\makebox(0,0){$ \id_V \otimes \textrm{ev}_V$}} \put(320,40){\makebox(0,0){$V$,}} \put(0,0){\makebox(0,0){$ V^{*}$}} \put(10,0){\vector(1,0){60}}\put(40,10){\makebox(0,0){$ \id_{V^{*}}\otimes \textrm{coev}_V $}} \put(105,0){\makebox(0,0){$V^{*}\otimes (V\otimes V^{*})$}} \put(140,0){\vector(1,0){20}} \put(150,10){\makebox(0,0){$ \Phi^{-1}$}}\put(200,0){\makebox(0,0){$ (V^{*}\otimes V)\otimes V^{*}$}} \put(240,0){\vector(1,0){60}}\put(270,10){\makebox(0,0){$ \textrm{ev}_V\otimes \id_{V^{*}}$}} \put(320,0){\makebox(0,0){$V^{*}$,}}\end{picture} \end{figure} are identities. Right duals are defined similarly. We then have the functor $$(-)^{**}:\mathcal{C}\to \mathcal{C},\;\;V\mapsto V^{**},$$ which is a tensor autoequivalence of $\mathcal{C}.$ \begin{definition}\emph{ Let $\mathcal{C}$ be a rigid tensor category. A \emph{pivotal structure} on $\mathcal{C}$ is an isomorphism of tensor functors $j:\id_{\mathcal{C}}\to (-)^{**}$, $j_{V}:V\to V^{**}.$ A rigid tensor category $\mathcal{C}$ is said to be pivotal if it has a pivotal structure.} \end{definition} In a pivotal tensor category one can define categorical dimensions \cite[Section 4.7]{EGNO}, Frobenius-Schur indicators \cite{NS}, semisimplifications \cite{EO}, etc. The following result is well-known. \begin{lemma}\label{l2.16} Let $H$ be a Hopf algebra. If $S^2(h)=ghg^{-1}$ for a group-like element $g\in H$ and all $h\in H$, then the representation category of $H$ is pivotal. \end{lemma} \begin{proof} Let $\Rep(H)$ be the tensor category of representations of $H$. Clearly, the map $$V\to V^{**}=V,\;v\mapsto g\cdot v, \;\;\;V\in \Rep(H), \;v\in V$$ gives the desired pivotal structure on $\Rep(H)$. \end{proof} \section{Fractions of a number} As a necessary ingredient for defining the new examples, we first give the definition of a fraction of a natural number in this section. We then use it to ``fracture'' the Taft algebra, obtaining the fraction version of a Taft algebra. Finally, some combinatorial identities are collected for later use. \subsection{Fraction.} Let $m$ be a natural number and let $m_1,m_2,\ldots,m_\theta$ be natural numbers. For each $m_i\;(1\leq i\leq \theta)$ there are many natural numbers $a$ such that $m|am_i$; among them, we take the smallest one and denote it by $e_i$. That is, $e_i$ is the smallest natural number such that $m|e_im_i.$ Define $$A:=\{\underline{a}=(a_1,\ldots,a_\theta)\,|\,0\leq a_i< e_i, \;1\leq i\leq \theta\}.$$ With these notations, we give the definition of a fraction as follows. \begin{definition} \label{d3.1} \emph{We call $m_1,\ldots,m_\theta$} a fraction of $m$ of length $\theta$ \emph{if the following conditions are satisfied:} \begin{itemize}\item[(1)] \emph{For each $1\leq i\leq \theta$, $e_i$ is coprime to $m_i$, i.e.
$(e_i,m_i)=1$;} \item[(2)] \emph{For each pair $1\leq i\neq j\leq \theta$, $m|m_im_j$;} \item[(3)] \emph{The product of the $e_i$'s is equal to $m$, that is, $m=e_1e_2\cdots e_\theta$;} \item[(4)] \emph{For any two elements $\underline{a},\underline{b}\in A$, we have $\sum_{i=1}^{\theta}a_im_i\not\equiv \sum_{i=1}^{\theta}b_im_i\;(\emph{mod}\; m)$ if $\underline{a}\neq \underline{b}$.} \end{itemize} \emph{The set of all fractions of $m$ of length $\theta$ is denoted by $F_{\theta}(m)$, and we let $\mathcal{F}(m):=\bigcup_{\theta}F_{\theta}(m),\;\mathcal{F}=\bigcup_{m\in \mathbb{N}}\mathcal{F}(m).$} \end{definition} \begin{remark}\label{r3.2} \emph{(1) Conditions (3) and (4) in this definition are equivalent to saying that, modulo $m$, each number $0\leq j\leq m-1$ can be represented \emph{uniquely} as a linear combination of $m_1,\ldots,m_\theta$ with coefficients in $A$. That is, with respect to the ``basis'' $m_1,\ldots,m_\theta$, every $j$ has a coordinate, which we denote by $(j_1,\ldots,j_\theta)$, i.e.} $$j\equiv j_1m_1+j_2m_2+\ldots+j_\theta m_\theta\;(\emph{mod}\;m).$$ \emph{Moreover, any $j\in \Z$ has a unique remainder $\overline{j}$ in $\Z_{m}$, and thus we can define the coordinates of any integer accordingly, that is, $j_i:=\overline{j}_i$ for $1\leq i\leq \theta$. This expression will be used freely in the rest of this paper (a concrete illustration is given at the beginning of Subsection 3.3).} \emph{(2) For each $1\leq i\leq \theta$, we call $e_i$ \emph{the exponent} of $m_i$ with respect to $m$. Intuitively, it might seem more natural to call the exponents $e_1,\ldots,e_\theta$ themselves a fraction of $m$, in view of condition (3). However, there are at least two reasons preventing us from doing so. The first is that it is the $m_i$'s, rather than the $e_i$'s, that appear in the analysis below. The second is that the exponents cannot determine the $m_i$'s uniquely: for example, for $m=6$, both $\{2,3\}$ and $\{4,3\}$ have the same set of exponents. } \emph{(3) It is not hard to see that $\theta=1$ if and only if $(m,m_1)=1$.} \emph{(4) Usually, we use notation such as $\underline{m},\underline{m}',\ldots$ to denote fractions of $m$, that is, $\underline{m},\underline{m}'\in \mathcal{F}(m)$.} \end{remark} \subsection{Fraction version of a Taft algebra.} Now let $m_1,\ldots,m_\theta$ be a fraction of $m$, let $m_0:=(m_1,\ldots,m_{\theta})$ be the greatest common divisor of $m_1,\ldots,m_\theta$, and fix a primitive $m$th root of unity $\xi$. We define a Hopf algebra $T(m_1,\ldots,m_\theta,\xi)$ as follows. As an algebra, it is generated by $g,y_{m_1},\ldots,y_{m_\theta}$ subject to the following relations: \begin{equation}\label{eq3.1} g^m=1,\;\;y_{m_i}^{e_i}=0,\;\;y_{m_i}y_{m_j}=y_{m_j}y_{m_i},\;\; y_{m_i}g=\xi^{\frac{m_i}{m_0}}gy_{m_i}, \end{equation} for $1\leq i,j\leq \theta.$ The coproduct $\D$, the counit $\epsilon$ and the antipode $S$ of $T(m_1,\ldots,m_\theta,\xi)$ are given by $$\D(g)=g\otimes g,\quad \D(y_{m_i})=1\otimes y_{m_i}+y_{m_i}\otimes g^{m_i},$$ $$\e(g)=1,\quad\e(y_{m_i})=0,$$ $$S(g)=g^{-1},\quad S(y_{m_i})=-y_{m_i}g^{-m_i}$$ for $ 1\leq i\leq \theta.$ Since $(m_0,m)=1$, if we take $\xi':=\xi^{m_0}$ in the above definition, then $\xi'$ is still a primitive $m$th root of unity. So in \eqref{eq3.1} we may substitute the relation $y_{m_i}g=\xi^{\frac{m_i}{m_0}}gy_{m_i}$ by the more convenient version $$y_{m_i}g=\xi^{m_i}gy_{m_i},\;\;\;\;1\leq i\leq \theta.$$ \begin{lemma}\label{l3.3} The algebra $T(m_1,\ldots,m_\theta,\xi)$ defined above is an $m^2$-dimensional Hopf algebra.
\end{lemma} \begin{proof} This is clear; we only point out that condition (1) of Definition \ref{d3.1} ensures that each $y_{m_i}^{e_i}$ is a primitive element, and condition (2) of Definition \ref{d3.1} ensures that $y_{m_i}y_{m_j}-y_{m_j}y_{m_i}$ is a skew-primitive element for all $1\leq i,j\leq \theta.$ \end{proof} \begin{proposition}\label{p3.4} Let $m'$ be another natural number and let $\underline{m'}=\{m_1',\ldots,m'_{\theta'}\}$ be a fraction of $m'$. Then, as Hopf algebras, $T(m_1,\ldots,m_\theta,\xi)\cong T(m_1',\ldots,m_\theta',\xi')$ if and only if $m=m',\;\theta=\theta'$ and there exists $x_0\in \N$ relatively prime to $m$ such that, up to a reordering of $m_1,\ldots,m_\theta$, we have $m_i'\equiv m_ix_0$ $($\emph{mod} $m)$ and $\xi=\xi'^{x_0}.$ \end{proposition} \begin{proof} For convenience, we denote the generators and numerical data of $T(m_1',\ldots,m_\theta',\xi')$ by adding the symbol $'$ to those of $T(m_1,\ldots,m_\theta,\xi)$. The sufficiency of the proposition is clear; we only prove the necessity. Assume that we have an isomorphism of Hopf algebras $$\varphi:\;T(m_1,\ldots,m_\theta,\xi)\stackrel{\cong}{\longrightarrow} T(m_1',\ldots,m_\theta',\xi').$$ By this isomorphism, the two algebras have the same dimension, and thus $m=m'$ by Lemma \ref{l3.3}. Comparing the numbers of nontrivial skew primitive elements, we get $\theta=\theta'.$ Up to a reordering of $m_1,\ldots,m_\theta$, there is no harm in assuming that $\varphi(y_{m_i})=y_{m_i'}$ for $1\leq i\leq \theta.$ (More precisely, one should first take $\varphi(y_{m_i})=y_{m_i'}+c(1-(g')^{m_i'})$; but the relation $y_{m_i}g=\xi^{m_i}gy_{m_i}$ forces $c=0$.) Since $\varphi(g)$ is group-like and generates all group-likes, $\varphi(g)=g'^{x_0}$ for some $x_0\in \N$ with $(x_0,m)=1$. Now $$\D(\varphi(y_{m_i}))=\D(y_{m_i'})=1\otimes y_{m_i'}+y_{m_i'}\otimes (g')^{m_i'}$$ equals $$(\varphi\otimes \varphi)(\D(y_{m_i}))=1\otimes y_{m_i'}+y_{m_i'}\otimes (g')^{m_ix_0}.$$ Therefore, $m_i'\equiv m_ix_0$ (mod $m$). By this, we may assume that $(m_1',\ldots,m_\theta')=(m_1,\ldots,m_\theta)x_0$, that is, $m_0'=m_0x_0.$ So $\varphi(y_{m_i}g)=\varphi(\xi^{\frac{m_i}{m_0}}gy_{m_i})$ implies that $$\xi'^{\frac{m_i'}{m'_0}x_0}(g')^{x_0}y_{m_i'}=\xi^{\frac{m_i}{m_0}}(g')^{x_0}y_{m_i'},$$ which implies that $\xi^{\frac{m_i}{m_0}}=\xi'^{x_0\frac{m_i}{m_0}}$ for all $1\leq i\leq \theta.$ Since by definition $(\frac{m_1}{m_0},\ldots,\frac{m_\theta}{m_0})=1$, there exist $c_1,\ldots,c_{\theta}$ such that $\sum_{i=1}^{\theta}c_i\frac{m_i}{m_0}=1$. Therefore, $$\xi=\xi^{\sum_{i=1}^{\theta}c_i\frac{m_i}{m_0}}= \xi'^{x_0\sum_{i=1}^{\theta}c_i\frac{m_i}{m_0}}=\xi'^{x_0}.$$ \end{proof} \subsection{Some combinatorial identities.} Firstly, we rewrite some combinatorial identities that appeared in \cite[Section 3]{WLD} in a form suitable for our purpose. Secondly, we prove some additional identities which are not included in \cite[Section 3]{WLD}. Let $m,d$ be two natural numbers. As before, let $\underline{m}=\{m_1,\ldots,m_{\theta}\}\in \mathcal{F}(m)$ be a fraction of $m$ and let $e_i$ be the exponent of $m_i$ with respect to $m$ for $1\leq i\leq \theta.$ Let $\gamma$ be a primitive $m$th root of unity. By definition, $$\gamma_i:=\gamma^{-{m_i^2}}$$ is a primitive $e_i$th root of unity.
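To make these notations concrete, we include a small worked example; all the verifications are elementary arithmetic and can be checked directly from Definition \ref{d3.1} and Remark \ref{r3.2}. \begin{example}\emph{Let $m=6$ and $\underline{m}=\{2,3\}$, so that $m_1=2$, $m_2=3$, $e_1=3$ and $e_2=2$. Conditions (1)--(3) of Definition \ref{d3.1} hold, and for condition (4) the six sums $a_1m_1+a_2m_2$ with $0\leq a_1<3$ and $0\leq a_2<2$, namely $0,2,4$ and $3,5,7$, are pairwise distinct modulo $6$; hence $\{2,3\}\in F_2(6)$. For instance, $j=5$ has coordinate $(j_1,j_2)=(1,1)$ since $5\equiv 1\cdot 2+1\cdot 3$ (mod $6$). Moreover, $\gamma_1=\gamma^{-4}$ is a primitive $3$rd root of unity and $\gamma_2=\gamma^{-9}=\gamma^{3}=-1$ is a primitive $2$nd root of unity, in accordance with the statement above.}\end{example}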
For any $j\in \Z$, the polynomial $\phi_{m_i,j}$ is defined by \begin{equation}\phi_{m_i,j}:=1-\gamma^{-{m_i}(m_i+j)}x^{m_id} =1-\gamma^{-m_i^2(1+j_i)}x^{m_id}=1-\gamma_{i}^{(1+j_i)}x^{m_id}\end{equation} for any $1\leq i\leq \theta$; the second equality holds by condition (2) of the definition of a fraction, since $m_ij\equiv m_i^2j_i$ (mod $m$). For the rest of this subsection, we fix $1\leq i\leq \theta.$ For an arbitrary integer $j$, define $\bar{j}$ to be the unique element in $\{0,1,\ldots,e_i-1\}$ satisfying $\bar{j}\equiv j$ (mod $e_i$). Then we have $$\phi_{m_i,j}=\phi_{m_i,\bar{j}}$$ since $\gamma_i^{e_i}=1$. With this observation, we use $${]s,t[_{m_i}}$$ to denote the polynomial obtained by omitting all factors \emph{from} $\phi_{m_i,\overline{s}m_i}$ \emph{to} $\phi_{m_i,\overline{t}m_i}$ in the product $$\phi_{m_i,0}\phi_{m_i,m_i}\cdots \phi_{m_i,(e_i-1)m_i},$$ that is, \begin{equation}\label{eqomit} {]s,t[_{m_i}}=\begin{cases} \phi_{m_i,(\bar{t}+1)m_i}\cdots \phi_{m_i,(e_i-1)m_i}\phi_{m_i,0}\cdots\phi_{m_i,(\bar{s}-1)m_i}, & \textrm{if}\; \bar{t}\geqslant \bar{s} \\1, & \textrm{if}\; \bar{s}=\overline{t}+1 \\ \phi_{m_i,(\bar{t}+1)m_i}\cdots \phi_{m_i,(\bar{s}-1)m_i}, & \textrm{if}\; \overline{s}\geqslant \bar{t}+2. \end{cases} \end{equation} For example, ${]{-1},{-1}[_{m_i}}={]{e_i-1},{e_i-1}[}_{m_i}=\phi_{m_i,0}\phi_{m_i,m_i}\cdots \phi_{m_i,(e_i-2)m_i}$. In practice, in particular for formulating the multiplication in our new examples of Hopf algebras, the next notation is also useful; except in the case $\bar{s}=\bar{t}+1$, it can be viewed as the polynomial obtained by keeping all factors \emph{from} $\phi_{m_i,\overline{s}m_i}$ \emph{to} $\phi_{m_i,\overline{t}m_i}$ in $\phi_{m_i,0}\phi_{m_i,m_i}\cdots \phi_{m_i,(e_i-1)m_i}$: \begin{equation}\label{eqpre} {[s,t]_{m_i}}:=\begin{cases} \phi_{m_i,\bar{s}m_i}\phi_{m_i,(\bar{s}+1)m_i}\cdots \phi_{m_i,\bar{t}m_i}, & \textrm{if}\; \bar{t}\geqslant \bar{s} \\1, & \textrm{if}\; \bar{s}=\overline{t}+1 \\ \phi_{m_i,\bar{s}m_i}\cdots \phi_{m_i,(e_i-1)m_i}\phi_{m_i,0}\cdots \phi_{m_i,\bar{t}m_i}, & \textrm{if}\; \overline{s}\geqslant \bar{t}+2. \end{cases} \end{equation} So, by definition, we have \begin{equation}\label{eqrel} {[i, m-2-j]_{m_i}}={]{-1-j},{i-1}[_{m_i}}. \end{equation} Due to the equality \eqref{eqrel}, we only study the identities formulated with omitted factors. The following formulas were already proved, or are implicit, in \cite[Section 3]{WLD} in different forms, so we state them in our notation without proofs. \begin{lemma}\label{l3.4} With the notions defined as above, we have \begin{itemize} \item[(1)]$\sum_{j=0}^{e_i-1}{]{j-1},{j-1}[_{m_i}}\;=e_i.$ \item[(2)] $\phi_{m_i,0}\phi_{m_i,m_i}\cdots \phi_{m_i,(e_i-1)m_i}=1-x^{e_im_id}.$ \item[(3)] $\sum_{j=0}^{e_i-1}\gamma_i^{j}\;{]{j-1},{j-1}[_{m_i}}\;=e_ix^{(e_i-1)m_id}.$ \item[(4)] $\sum_{j=0}^{e_i-1}\gamma_i^{j}\;{]{j-2},{j-1}[_{m_i}}\;=0.$ \item[(5)] Fix $k$ such that $1\leqslant k\leqslant e_i-1$ and let $1\leqslant i'\leqslant k$. Then $$\sum_{j=0}^{e_i-1}\gamma_i^{i'j}\;{]{j-1-k},{j-1}[_{m_i}}=0.$$ \item[(6)] Let $0\leq t\leq j+l\leq e_i-1,\; 0\leq \alpha \leq e_i-1-j-l$. Then \begin{align*} &\quad (-1)^{\alpha+t}\gamma_i^{\frac{(\alpha+t)(\alpha+t+1)}{2}+t(j+l-t)} \binom{e_i-1-t}{\alpha}_{\gamma_i}\binom{e_i-1+t-j-l}{\alpha+t}_{\gamma_i} \\ &=\binom{j+l}{t}_{\gamma_i}\binom{m-1-j-l}{\alpha}_{\gamma_i}. \end{align*} \end{itemize} \end{lemma} We still need two further observations which were not included in \cite[Section 3]{WLD}. \begin{lemma}\label{l3.5} With notations as above.
Then \begin{itemize} \item[(1)] For any $e_i$th root of unity $\xi$, we have $$\sum_{j=0}^{e_i-1}\xi^{j}\;{]{j-1},{j-1}[_{m_i}}\neq 0.$$ \item[(2)] Let $\xi$ be an $e_i$th root of unity. Then $\sum_{j=0}^{e_i-1}\xi^{j}\;{]{j-2},{j-1}[_{m_i}}=\;0$ if and only if $\xi=\gamma_i.$ \end{itemize} \end{lemma} \begin{proof} (1) Suppose otherwise that $\sum_{j=0}^{e_i-1}\xi^{j}\;{]{j-1},{j-1}[_{m_i}}= 0.$ From this, we know that $\xi\neq 1$ by (1) of Lemma \ref{l3.4}. By the definition of $]{j-1},{j-1}[_{m_i}$, we have \begin{align*} & \quad\sum_{j=0}^{e_i-1}\;\xi^{j}{]{j-1},{j-1}[_{m_i}}\;- \sum_{j=0}^{e_i-1}\xi^{j}\gamma_i^{j}x^{m_id}\;{]{j-1},{j-1}[_{m_i}}\;\\ &=\sum_{j=0}^{e_i-1}\xi^{j}(1-\gamma_i^{j}x^{m_id})\;{]{j-1},{j-1}[_{m_i}}\;\\ &=\sum_{j=0}^{e_i-1}\xi^{j}\phi_{m_i,0}\phi_{m_i,m_i}\cdots\phi_{m_i,(e_i-1)m_i}\\ &=\sum_{j=0}^{e_i-1}\xi^{j}(1-x^{e_im_id})\\ &=0, \end{align*} where the second equality holds since $1-\gamma_i^{j}x^{m_id}=\phi_{m_i,(j-1)m_i}$, the third equality is due to (2) of Lemma \ref{l3.4}, and the last follows from $\xi\neq 1$ being an $e_i$th root of unity. Therefore, $\sum_{j=0}^{e_i-1}\xi^{j}\gamma_i^{j}x^{m_id}\;{]{j-1},{j-1}[_{m_i}}=0$ and thus $\sum_{j=0}^{e_i-1}(\gamma_i\xi)^{j}\;{]{j-1},{j-1}[_{m_i}}=0$. Repeating the above process, we find that for any $k$, $$\sum_{j=0}^{e_i-1}(\gamma_i^k\xi)^{j}\;{]{j-1},{j-1}[_{m_i}}=0.$$ Since $\xi$ is an $e_i$th root of unity while $\gamma_i$ is a primitive $e_i$th root of unity, there exists a $k$ such that $\gamma_i^k\xi=1$. But in this case $\sum_{j=0}^{e_i-1}(\gamma_i^k\xi)^{j}\;{]{j-1},{j-1}[_{m_i}}=e_i\neq 0,$ a contradiction. (2) ``$\Leftarrow$'': this is just (4) of Lemma \ref{l3.4}. ``$\Rightarrow$'': before proving this part, we first recall a formula (see \cite[Proposition IV.2.7]{Kas}): $$(a-z)(a-qz)\cdots (a-q^{n-1}z)=\sum_{l=0}^{n}(-1)^l\binom{n}{l}_q q^{\tfrac{l(l-1)}{2}}a^{n-l}z^l,$$ where $q$ is a nonzero element of $\k$ and $a,z\in \k$ are arbitrary. From this, \begin{align*} {]{j-2},{j-1}[_{m_i}}&=(1-\gamma_i^{j+1}x^{m_id})(1-\gamma_i^{j+2}x^{m_id})\cdots(1-\gamma_i^{e_i+j-2}x^{m_id})\\ &=\sum_{l=0}^{e_i-2}(-1)^l\binom{e_i-2}{l}_{\gamma_{i}}\gamma_i^{\tfrac{l(l-1)}{2}}(\gamma_i^{j+1}x^{m_id})^l\\ &=\sum_{l=0}^{e_i-2}(-1)^l\binom{e_i-2}{l}_{\gamma_i}\gamma_i^{\tfrac{l(l+1)}{2}+lj}x^{lm_id}. \end{align*} So from this, we have \begin{align*} \sum_{j=0}^{e_i-1}\xi^{j}\;{]{j-2},{j-1}[_{m_i}}& =\sum_{l=0}^{e_i-2}(-1)^l\binom{e_i-2}{l}_{\gamma_i}\gamma_i^{\tfrac{l(l+1)}{2}} \sum_{j=0}^{e_i-1}\xi^{j}\gamma_i^{lj}x^{lm_id}. \end{align*} Therefore, the assumption implies that $$\sum_{j=0}^{e_i-1}\xi^{j}\gamma_i^{lj}=0$$ for all $0\leqslant l \leqslant e_i-2$ (the Gaussian binomial coefficients $\binom{e_i-2}{l}_{\gamma_i}$ are nonzero, and the powers $x^{lm_id}$ for distinct $l$ are linearly independent). That is, $\xi\gamma_i^{l}\neq 1$ for all $0\leqslant l\leqslant e_i-2$; since $\xi$ is an $e_i$th root of unity, the only possibility is $\xi=\gamma_i^{-(e_i-1)}=\gamma_i$. \end{proof} \section{New examples} In this section, we introduce the fraction versions of the infinite dimensional Taft algebras, of the generalized Liu algebras and of the Hopf algebras $D(m,d,\gamma)$, and we list some of their properties. Most of these Hopf algebras are, as far as we know, new. \subsection{Fraction of infinite dimensional Taft algebra $T(\underline{m},t,\xi)$.}\label{ss4.1} Let $m,t$ be two natural numbers and set $n=mt$. Let $\underline{m}=\{m_1,\ldots,m_\theta\}$ be a fraction of $m$ and let $m_0=(m_1,\ldots,m_{\theta})$ be the greatest common divisor. It is not hard to see that $(m,m_0)=1$: if a prime $p$ divided both $m$ and $m_0$, then $p|e_i$ for some $i$ (since $m=e_1\cdots e_\theta$), while also $p|m_0|m_i$, contradicting $(e_i,m_i)=1$.
Now fix a primitive $n$th root of unity $\xi$ satisfying $$\xi^{e_1\frac{m_1}{m_0}}=\xi^{e_2\frac{m_2}{m_0}}=\cdots=\xi^{e_\theta \frac{m_\theta}{m_0}}.$$ Note that such a $\xi$ does not always exist (for example, taking $m=6$, $t=2$ and the fraction $\{4,3\}$ of $6$, one finds that no such $\xi$ exists). If it exists, then we can define a Hopf algebra $T(\underline{m},t,\xi)$ as follows. As an algebra, it is generated by $g,y_{m_1},\ldots,y_{m_\theta}$ subject to the following relations: \begin{equation} g^n=1,\;\;y_{m_i}^{e_i}=y_{m_j}^{e_j},\;\;y_{m_i}y_{m_j}=y_{m_j}y_{m_i},\;\; y_{m_i}g=\xi^{\frac{m_i}{m_0}}gy_{m_i}, \end{equation} for $1\leq i,j\leq \theta.$ The coproduct $\D$, the counit $\epsilon$ and the antipode $S$ of $T(\underline{m},t,\xi)$ are given by $$\D(g)=g\otimes g,\quad \D(y_{m_i})=1\otimes y_{m_i}+y_{m_i}\otimes g^{tm_i},$$ $$\e(g)=1,\quad\e(y_{m_i})=0,$$ $$S(g)=g^{-1},\quad S(y_{m_i})=-y_{m_i}g^{-tm_i}$$ for $ 1\leq i\leq \theta.$ \begin{proposition}\label{p4.1} Let the $\k$-algebra $T=T(\{m_1,\ldots,m_\theta\}, t, \xi)$ be the algebra defined as above. Then \begin{itemize} \item[(1)] The algebra $T$ is a Hopf algebra of GK-dimension one, with center $\k[y_{m_1}^{e_1t}]$. \item[(2)] The algebra $T$ is prime and $\text{PI-deg}(T)=n$. \item[(3)] The algebra $T$ has a $1$-dimensional representation whose order is $n$. \end{itemize} \end{proposition} \begin{proof} (1) Since the verification that $T(\underline{m},t,\xi)$ is a Hopf algebra is routine, we leave it to the reader. (In fact, since for each $1\leq i\leq \theta$ the subalgebra generated by $g, y_{m_i}$ is just a generalized infinite dimensional Taft algebra, one can reduce the verification to the mixed relations $y_{m_i}y_{m_j}=y_{m_j}y_{m_i}$ and $y_{m_i}^{e_i}=y_{m_j}^{e_j}$ for $1\leq i,j\leq \theta$.) Through direct computation, one can see that the subalgebra $\k[y_{m_1}^{e_1t}]\cong \k[x]$ is the center of $T(\underline{m},t,\xi)$ and that $T$ is a finite module over $\k[y_{m_1}^{e_1t}]$. This means that the GK-dimension of $T(\underline{m},t,\xi)$ is one. (2) We apply Lemma \ref{l2.10}, using an argument similar to that developed in the proof of Corollary \ref{c2.9}. First, let $T_0$ be the subalgebra generated by $y_{m_1},\ldots,y_{m_\theta}$. Then clearly $$T=\bigoplus_{i=0}^{n-1} T_{0}g^{i}.$$ From this, $T$ is a strongly $\widehat{\Z_n}=\langle \chi|\chi^n=1 \rangle$-graded algebra through $\chi(ag^i)=\xi^{i}$ for any $a\in T_0$ and $0\leq i\leq n-1.$ Therefore, conditions 1) and 2) of Lemma \ref{l2.10} are satisfied. By part (b) of Lemma \ref{l2.10}, the action of $\widehat{\Z_n}$ is just the adjoint action of $\Z_{n}=\langle g|g^n=1\rangle$ on $T_0$, which is faithful by the definition of a fraction. Therefore, $\text{PI-deg}(T)=n$ by part (c) of Lemma \ref{l2.10}, and part (d) of Lemma \ref{l2.10} implies that $T$ is prime. (3) By the definition of $T(\underline{m},t,\xi)$, it has the $1$-dimensional representation $$\pi:\;T(\underline{m},t,\xi)\to \k, \;\;y_{m_i}\mapsto 0,\;g\mapsto \xi\;\;(1\leq i\leq \theta),$$ whose order is clearly $n$. \end{proof} \begin{remark}\label{r4.3} \emph{ We call the representation in Proposition \ref{p4.1} (3) the \emph{canonical representation} of $T(\underline{m},t,\xi)$. Since $\ord(\pi)=n$, which equals the PI-degree of $T(\underline{m},t,\xi)$, the Hopf algebra $T(\underline{m},t,\xi)$ satisfies (Hyp1). On the other hand, let $\{2,5\}$ be a fraction of $10$ and consider the example $T=T(\{2,5\},3,\xi)$, where $\xi$ is a primitive $30$th root of unity.
Applying \cite[Lemma 2.6]{LWZ}, we find that the right module structure of the left homological integrals is given by $$\int_{T}^{l}=T/(y_{m_i}\;(1\leq i\leq \theta),\;g-\xi^{10-7}).$$ Therefore $\io(T)=10$, which does \emph{not} equal the PI-degree of $T$, namely $30$. So $T(\underline{m},t,\xi)$ only satisfies (Hyp1) rather than (Hyp1)$'$; that is, $\io(T)\neq \text{PI-deg}(T)$ in general.} \end{remark} The canonical representation of $T=T(\underline{m},t,\xi)$ yields the corresponding left and right winding automorphisms \[{\Xi_{\pi}^l:} \begin{cases} y_{m_i}\longmapsto y_{m_i}, &\\ g\longmapsto \xi g, & \end{cases} \textrm{and} \;\;\;\;\; \Xi_{\pi}^r: \begin{cases} y_{m_i}\longmapsto \xi^{m_it}y_{m_i}, &\\ g\longmapsto \xi g, & \end{cases}\] for $1\leq i\leq \theta$. Using the above expressions for $\Xi_{\pi}^{l}$ and $\Xi_{\pi}^{r}$, it is not difficult to check that \begin{equation} T_{i}^{l}=\k[y_{m_1},\ldots,y_{m_\theta}]g^{i}\;\;\;\;\textrm{and} \;\;\;\; T_{j}^{r}=\k[g^{-m_1t}y_{m_1},\ldots,g^{-m_{\theta}t}y_{m_\theta}]g^{j} \end{equation} for all $0\leq i,j\leq n-1.$ Thus we have \begin{equation} T_{00}=\k[y_{m_1}^{e_1}]\;\;\;\;\textrm{and} \;\;\;\;T_{i,i+jt}=\k[y_{m_1}^{e_1}]y_{j}g^{i} \end{equation} for all $0\leq i\leq n-1$, $0\leq j\leq m-1$, where $y_{j}=y_{m_1}^{j_1}\cdots y_{m_\theta}^{j_\theta}$ (see (1) of Remark \ref{r3.2}). Moreover, we can see that $$T_{ij}=0\;\;\textrm{if}\;\; i-j\not\equiv 0\;(\textrm{mod}\; t)$$ for all $0\leq i,j\leq n-1.$ To conclude this subsection, we determine when two fractions of infinite dimensional Taft algebras are isomorphic. \begin{proposition}\label{p4.3} Keep the above notations. Let $\underline{m'}=\{m_1',\ldots,m'_{\theta'}\}$ be a fraction of another integer $m'$. Then $T(\underline{m},t, \xi)\cong T(\underline{m'},t',\xi')$ if and only if $m=m',\;\theta=\theta',\;t=t'$ and there exists $x_0\in \N$ relatively prime to $n=mt$ such that, up to a reordering of $m_1,\ldots,m_\theta$, we have $m_i'\equiv m_ix_0$ $($\emph{mod} $n)$ and $\xi=\xi'^{x_0}.$ \end{proposition} \begin{proof} We write the proof out for completeness. We denote the corresponding generators and numerical data of $T(\underline{m'},t',\xi')$ by adding the symbol $'$ to those of $T(\underline{m},t, \xi)$. The sufficiency is clear (for example, take $\varphi:\;T(\underline{m},t, \xi)\to T(\underline{m'},t',\xi')$ with $g\mapsto g'^{x_0}$ and $y_{m_i}\mapsto y'_{m_i'}$ for $1\leq i\leq \theta$; one can check that $\varphi$ gives the desired isomorphism). We next prove the necessity. Assume that we have an isomorphism of Hopf algebras $$\varphi:\;T(\underline{m},t, \xi)\stackrel{\cong}{\longrightarrow} T(\underline{m'},t',\xi').$$ By this isomorphism, the two algebras have the same number of group-likes, which implies that $n=mt=m't'=n'$ and $\varphi(g)=(g')^{x_0}$ for some $x_0\in \N$ with $(x_0,n)=1$. Comparing the numbers of nontrivial skew primitive elements, we see that $\theta=\theta'.$ Up to a reordering of $m_1,\ldots,m_\theta$, there is no harm in assuming that $\varphi(y_{m_i})=y_{m_i'}$ for $1\leq i\leq \theta.$ (Just as in the case of a fraction of a Taft algebra, one should take $\varphi(y_{m_i})=y_{m_i'}+c_i(1-(g')^{m_i'})$ at the beginning, for some $c_i\in \k$.
Through the relation $y_{m_i}g=\xi^{\frac{m_i}{m_0}}gy_{m_i}$ one then finds that $c_i=0.$) Since both $y_{m_i}^{e_i}$ and $y_{m_i'}^{e_i'}$ are primitive, $e_i=e_i'.$ Therefore $m=e_1\cdots e_{\theta}=e'_1\cdots e'_{\theta}=m'$, and thus $t=t'.$ One can then repeat the proof of Proposition \ref{p3.4} to get $m_i'\equiv m_ix_0$ (mod $n$) and $\xi=\xi'^{x_0}.$ \end{proof} \subsection{$T(\underline{m},t,\xi)$ vs the Brown-Goodearl-Zhang examples.}\label{sub4.2} In \cite[Section 2]{GZ}, Goodearl and Zhang found a new family of Hopf domains of GK-dimension two. From these Hopf domains, one can obtain Hopf algebras of GK-dimension one by taking quotients. In fact, in this way Brown and Zhang \cite[Example 7.3]{BZ} obtained the first example of a prime Hopf algebra of GK-dimension one which is not regular. Let us first recall their construction. \begin{example}[Brown-Goodearl-Zhang's example] \emph{Let $n,p_0,p_1,\ldots,p_s$ be positive integers and $q\in \k^{\times}$ with the following properties: \begin{itemize}\item[(a)] $s\geq 2$ and $1<p_1<p_2<\cdots<p_s;$ \item[(b)] $p_0|n$ and $p_0,p_1,\ldots,p_s$ are pairwise relatively prime; \item[(c)] $q$ is a primitive $l$th root of unity, where $l=(n/p_0)p_1p_2\cdots p_s.$ \end{itemize} Set $m_i=p_i^{-1}\prod_{j=1}^{s}p_j$ for $i=1,\ldots,s$. Let $A$ be the subalgebra of $\k[y]$ generated by $y_i:=y^{m_i}$ for $i=1,\ldots,s$. The $\k$-algebra automorphism of $\k[y]$ sending $y\mapsto qy$ restricts to an algebra automorphism $\sigma$ of $A$. There is a unique Hopf algebra structure on the Laurent polynomial ring $B=A[x^{\pm 1};\sigma]$ such that $x$ is group-like and the $y_i$ are skew primitive, with $$\D(y_i)=1\otimes y_i+y_i\otimes x^{m_i n}$$ for $i=1,\ldots,s$. It is a PI Hopf domain of GK-dimension two, denoted by $B(n,p_0,p_1,\ldots,p_s,q)$. Now let $$\overline{B}(n,p_0,p_1,\ldots,p_s,q):=B(n,p_0,p_1,\ldots,p_s,q)/(x^l-1).$$ Then Brown and Zhang proved that the quotient Hopf algebra $\overline{B}(n,p_0,p_1,\ldots,p_s,q)$ is a prime Hopf algebra of GK-dimension one.} \end{example} There is a close relationship between the Brown-Goodearl-Zhang examples and the fractions of infinite dimensional Taft algebras. \begin{proposition} The Hopf algebra $\overline{B}(n,p_0,p_1,\ldots,p_s,q)$ is a fraction of an infinite dimensional Taft algebra; that is, $\overline{B}(n,p_0,p_1,\ldots,p_s,q)\cong T(\underline{m},t,\xi)$ for some $\underline{m}\in \mathcal{F}$, $t\in \mathbb{N}$ and some root of unity $\xi$. \end{proposition} \begin{proof} By the definition of $\overline{B}=\overline{B}(n,p_0,p_1,\ldots,p_s,q)$, we know that $y_i=y^{m_i}$ (we use the same notation as for $B(n,p_0,p_1,\ldots,p_s,q)$), and thus the relation $$y_i^{p_i}=y_j^{p_j}$$ is satisfied for all $1\leq i,j\leq s.$ At the same time, in $\overline{B}$ the group-like element $x$ satisfies the relations $$x^{l}=1, \;\;y_ix=q^{m_i}xy_i$$ for $i=1,\ldots,s.$ Motivated by these observations, define $$m_i':=p_0m_i,\;\;\;\;1\leq i\leq s.$$ It is then tedious but straightforward to check that $m_1',m_2',\ldots,m_s'$ is a fraction of $m:=\prod_{i=1}^{s} p_i$. Moreover, let $t:=n/p_0$.
Now we see that the Hopf algebra $T(\{m_1',m_2',\ldots,m_s'\},t,q)$ is generated by $y_{m_1'},\ldots,y_{m_s'},\;g$ and satisfies the relations $$g^l=1,\;\;y_{m_i'}^{p_i}=y_{m_j'}^{p_j},\;\;y_{m_i'}y_{m_j'}=y_{m_j'}y_{m_i'},\;\; y_{m_i'}g=q^{\frac{m_i'}{p_0}}gy_{m_i'}=q^{m_i}gy_{m_i'}.$$ From this, there is an algebra epimorphism $$f:\;T(\{m_1',m_2',\ldots,m_s'\},n/p_0,q)\to \overline{B}(n,p_0,p_1,\ldots,p_s,q),\;\; y_{m_i'}\mapsto y_i, \;g\mapsto x,$$ which is clearly a Hopf algebra epimorphism. Since both algebras are prime of GK-dimension one, $f$ must be an isomorphism. \end{proof} However, not all fractions of infinite dimensional Taft algebras belong to the class of Brown-Goodearl-Zhang examples. \begin{example} \emph{Let $\{12,5\}$ be a fraction of $30$ and $\xi$ a primitive $30$th root of unity. Then the corresponding $T(\{12,5\},1,\xi)$ is generated by $y_{5},y_{12},g$ satisfying $$y_{12}^{5}=y_{5}^{6},\;\;y_{12}y_{5}=y_{5}y_{12},\;\;y_{12}g=\xi^{12}gy_{12}, \;\;y_{5}g=\xi^{5}gy_{5},\;\;g^{30}=1.$$ If there were an isomorphism between this Hopf algebra and a Brown-Goodearl-Zhang example $$f:\; T(\{12,5\},1,\xi)\stackrel{\cong}{\longrightarrow}\overline{B}(n,p_0,p_1,\ldots,p_s,q),$$ then clearly $s=2$ (by counting the non-trivial skew primitive elements) and $l=(n/{p_0})p_1p_2=30$ (since the two algebras have the same group of group-likes). Therefore, $f(g)=x^{t}$ with $(t,30)=1$. From $$\D(y_5)=1\otimes y_5+y_5\otimes g^{5},\;\;\;\;\D(y_{12})=1\otimes y_{12}+y_{12}\otimes g^{12},$$ we know that $np_1\equiv 5t$ and $np_2\equiv 12t$ (mod $30$). Since $p_1,p_2$ are factors of $30$ and $t$ is coprime to $30$, we get $p_1=5$, and thus $n\equiv t$ (mod $30$) and $p_2=12$. This contradicts $l=(n/{p_0})p_1p_2=30$.} \end{example} This example also shows that not every fraction version of an infinite dimensional Taft algebra can be realized as a quotient of a Hopf domain of GK-dimension two. \subsection{Fraction of generalized Liu algebra $B(\underline{m},\omega,\gamma)$.}\label{ss4.3} Let $m,\omega$ be positive integers and $m_1,\ldots,m_\theta$ a fraction of $m$. A fraction of a generalized Liu algebra, denoted by $B(\underline{m}, \omega, \gamma)=B(\{m_1,\ldots,m_\theta\}, \omega, \gamma)$, is generated by $x^{\pm 1}, g$ and $y_{m_1},\ldots,y_{m_{\theta}}$, subject to the relations \begin{equation} \begin{cases} xx^{-1}=x^{-1}x=1,\quad xg=gx,\quad xy_{m_i}=y_{m_i}x, & \\ y_{m_i}g=\gamma^{m_i} gy_{m_i},\quad y_{m_i}y_{m_j}=y_{m_j}y_{m_i}, & \\ y_{m_i}^{e_i}=1-x^{\omega\frac{e_im_i}{m}},\quad g^m=x^{\omega}, & \\ \end{cases} \end{equation} where $\gamma$ is a primitive $m$th root of 1 and $1\leq i, j\leq \theta$. The comultiplication, counit and antipode of $B(\{m_1,\ldots,m_\theta\}, \omega, \gamma)$ are given by $$\Delta(x)=x\otimes x,\quad \Delta(g)=g\otimes g, \quad \Delta(y_{m_i})=y_{m_i}\otimes g^{m_i}+1\otimes y_{m_i},$$ $$\epsilon(x)=1,\quad \epsilon(g)=1,\quad \epsilon(y_{m_i})=0,$$ and $$S(x)=x^{-1},\quad S(g)=g^{-1},\quad S(y_{m_i})=-y_{m_i}g^{-m_i},$$ for $1\leq i\leq \theta$. \begin{proposition}\label{p4.6} Let the $\k$-algebra $B=B(\{m_1,\ldots,m_\theta\}, \omega, \gamma)$ be defined as above. Then \begin{itemize} \item[(1)] The algebra $B$ is a Hopf algebra of GK-dimension one, with center $\k[x^{\pm 1}]$. \item[(2)] The algebra $B$ is prime and $\text{PI-deg}(B)=m$. \item[(3)] The algebra $B$ has a $1$-dimensional representation whose order is $m$.
\item[(4)] $\io(B)=m.$ \end{itemize} \end{proposition} \begin{proof} (1) It is not hard to see that the center of $B$ is $\k[x^{\pm 1}]$ and that $B$ is a free module of finite rank over $\k[x^{\pm 1}]$. In fact, a direct computation shows that $\{y_jg^{i}\,|\,0\leq i,j\leq m-1\}$ is a basis of $B$ over $\k[x^{\pm 1}]$; recall that if $j\equiv j_1m_1+\ldots +j_{\theta}m_{\theta}$ (mod $m$), then $y_j=\prod_{i=1}^{\theta}y_{m_i}^{j_i}.$ Therefore, $B$ has GK-dimension one. As in the case of $T(\underline{m},t,\xi)$, we leave it to the reader to check that $B$ is a Hopf algebra. Indeed, just as for the Taft case, for each $1\leq i\leq \theta$ the subalgebra generated by $x^{\pm 1},g, y_{m_i}$ is a similar kind of generalized Liu algebra (which may fail to be prime here), so one can reduce the verification to the mixed relations $y_{m_i}y_{m_j}=y_{m_j}y_{m_i}$ and the relations $y_{m_i}^{e_i}=1-x^{\omega\frac{e_im_i}{m}}$ for $1\leq i,j\leq \theta$. (2) As in the case of $T(\underline{m},t,\xi)$, we apply Lemma \ref{l2.10} to prove that $B$ is prime with PI-degree $m$. First, let $B_0$ be the subalgebra generated by $y_{m_1},\ldots,y_{m_\theta}$ and $x^{\pm 1}$. Clearly, $B_0$ is a domain and $$B=\bigoplus_{i=0}^{m-1} B_{0}g^{i}.$$ From this, $B$ is a strongly $\widehat{\Z_m}=\langle \chi|\chi^m=1 \rangle$-graded algebra through $\chi(ag^i)=\gamma^{i}$ for any $a\in B_0$ and $0\leq i\leq m-1.$ Therefore, conditions 1) and 2) of Lemma \ref{l2.10} are fulfilled. By part (b) of Lemma \ref{l2.10}, the action of $\widehat{\Z_m}$ is just the adjoint action of $\Z_{m}=\langle g|g^m=1\rangle$ on $B_0$, which is faithful by the definition of a fraction of $m$. Therefore, $\text{PI-deg}(B)=m$ by part (c) of Lemma \ref{l2.10}, and part (d) of Lemma \ref{l2.10} implies that $B$ is prime. (3) By the definition of $B$, it has the $1$-dimensional representation $$\pi:\;B\to \k, \;\;x\mapsto 1,\;y_{m_i}\mapsto 0,\;g\mapsto \gamma\;\;(1\leq i\leq \theta),$$ whose order is clearly $m$. (4) Using \cite[Lemma 2.6]{LWZ}, we find that the right module structure of the left integrals is given by $$\int_{B}^{l}=B/(x-1,\;y_{m_i}\,(1\leq i\leq \theta),\;g-\gamma^{-\sum_{i=1}^{\theta}m_{i}}).$$ Next, we show that $\sum_{i=1}^{\theta}m_{i}$ is coprime to $m$. Recall that in the definition of a fraction (Definition \ref{d3.1}) we require $(m_i,e_i)=1$ and $m|m_im_j$ for all $1\leq i\neq j\leq \theta$. Thus $$(e_i,e_j)=1,\;\;\;\;e_i|m_j$$ for all $1\leq i\neq j\leq \theta$. By (3) of Definition \ref{d3.1}, $m=e_1\cdots e_{\theta}$. Suppose on the contrary that $(\sum_{i=1}^{\theta}m_{i},m)\neq 1$. Then there exist $1\leq i\leq \theta$ and a prime $p_i$ with $p_i|e_i$ such that $p_i|m$ and $p_i|\sum_{i=1}^{\theta}m_{i}.$ Since $e_i|m_j$ for all $j\neq i$, we get $p_i|m_j$ for all $j\neq i$, and therefore $p_i|m_i$, which is impossible since $(m_i,e_i)=1$. Hence $(\sum_{i=1}^{\theta}m_{i},m)= 1$, so $\gamma^{-\sum_{i=1}^{\theta}m_{i}}$ is again a primitive $m$th root of unity, which implies $\io(B)=m.$ \end{proof} We also call the $1$-dimensional representation stated in (3) of Proposition \ref{p4.6} the \emph{canonical representation} of $B=B(\{m_1,\ldots,m_\theta\}, \omega, \gamma)$.
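To illustrate the definition and Proposition \ref{p4.6}, we record the smallest genuinely ``fractured'' case; all the stated relations are read off directly from the definition. \begin{example}\emph{Take the fraction $\{2,3\}$ of $m=6$ (so $e_1=3$, $e_2=2$ and $e_1m_1=e_2m_2=6=m$), let $\omega$ be a positive integer and let $\gamma$ be a primitive $6$th root of unity. Then $B(\{2,3\},\omega,\gamma)$ is generated by $x^{\pm 1}, g, y_2$ and $y_3$, with $x$ central, subject to $$y_2y_3=y_3y_2,\quad y_2g=\gamma^2gy_2,\quad y_3g=\gamma^3gy_3,\quad y_2^3=y_3^2=1-x^{\omega},\quad g^6=x^{\omega}.$$ Its canonical representation is $\pi:\; x\mapsto 1,\; y_2,y_3\mapsto 0,\; g\mapsto \gamma$, of order $6$.}\end{example}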
In general, the canonical representation of $B$ yields the corresponding left and right winding automorphisms \[{\Xi_{\pi}^l:} \begin{cases}x\longmapsto x,&\\ y_{m_i}\longmapsto y_{m_i}, &\\ g\longmapsto \gamma g, & \end{cases} \textrm{and} \;\;\;\;\; \Xi_{\pi}^r: \begin{cases}x\longmapsto x,&\\ y_{m_i}\longmapsto \gamma^{m_i}y_{m_i}, &\\ g\longmapsto \gamma g, & \end{cases}\] for $1\leq i\leq \theta$. Using the above expressions for $\Xi_{\pi}^{l}$ and $\Xi_{\pi}^{r}$, it is not difficult to check that \begin{equation} B_{i}^{l}=\k[x^{\pm 1},y_{m_1},\ldots,y_{m_\theta}]g^{i}\;\;\;\;\textrm{and} \;\;\;\; B_{j}^{r}=\k[x^{\pm 1},g^{-m_1}y_{m_1},\ldots,g^{-m_{\theta}}y_{m_\theta}]g^{j} \end{equation} for all $0\leq i,j\leq m-1.$ Thus we have \begin{equation}\label{eq4.5} B_{00}=\k[x^{\pm 1}]\;\;\;\;\textrm{and} \;\;\;\;B_{i,i+j}=\k[x^{\pm 1}]y_{j}g^{i} \end{equation} for all $0\leq i,j\leq m-1$, where $y_{j}=y_{m_1}^{j_1}\cdots y_{m_\theta}^{j_\theta}$ (see (1) of Remark \ref{r3.2}). At the end of this subsection, we consider when two fractions of generalized Liu algebras are isomorphic. To this end, let $m'\in \N$ and let $\{m_1',\ldots,m'_{\theta'}\}$ be a fraction of $m'$. As before, we denote the corresponding generators and numerical data of $B(\underline{m'},\omega',\gamma')$ by adding the symbol $'$ to those of $B(\underline{m},\omega,\gamma).$ \begin{proposition}\label{pliu} As Hopf algebras, if $B(\underline{m},\omega,\gamma)\cong B(\underline{m'},\omega',\gamma')$, then $m=m'$, $\theta=\theta'$ and, up to a reordering of the $m_i$'s, $\omega m_i=\omega'm_i'$ for all $1\leq i\leq \theta.$ \end{proposition} \begin{proof} Since the two algebras have the same PI-degree, $m=m'$. The center of $B(\underline{m},\omega,\gamma)$ is $\k [x^{\pm 1}]$, and thus $\varphi(x)=x'$ or $\varphi(x)=(x')^{-1}$. Also, as before, comparing the nontrivial skew primitive elements gives $\theta=\theta'$, and after reordering the generators we may assume that $\varphi(y_{m_i})=y'_{m_i'}.$ The relation $y_{m_i}^{e_i}=1-x^{\omega\frac{e_im_i}{m}}$ implies that $e_i=e_i'$ and $\varphi(x)=x'$, since by assumption all $e_i,m_i$ and $m$ are positive. From this one has $$\omega\frac{e_im_i}{m}=\omega'\frac{e_i'm_i'}{m'}.$$ Since $m=m'$ and $e_i=e_i'$, we conclude $\omega m_i=\omega'm_i'$ for all $1\leq i\leq \theta.$ \end{proof} The conditions in the above proposition are, however, only necessary for $B(\underline{m},\omega,\gamma)\cong B(\underline{m'},\omega',\gamma')$. To obtain a sufficient, indeed equivalent, condition, we need the following observation. \begin{lemma}\label{l4.9} Any fraction of a generalized Liu algebra $B(\underline{m},\omega,\gamma)$ is isomorphic to a \emph{unique} $B(\underline{m'},\omega',\gamma')$ satisfying $(m_1',\ldots,m'_{\theta'})=1$. \end{lemma} \begin{proof} We first prove existence and then uniqueness. Take an arbitrary $B(\underline{m},\omega,\gamma)$ and let $m_0=(m_1,\ldots,m_{\theta})$. The above proposition suggests considering the following algebra: $$B(\{\frac{m_1}{m_0},\ldots, \frac{m_\theta}{m_0}\}, \omega m_0,\gamma^{m^2_0}).$$ Clearly, $\{\frac{m_1}{m_0},\ldots, \frac{m_\theta}{m_0}\}$ is a fraction of $m$ of length $\theta$ with $(\frac{m_1}{m_0},\ldots, \frac{m_\theta}{m_0})=1$. \emph{Claim 1: As Hopf algebras, $B(\underline{m},\omega,\gamma)\cong B(\{\frac{m_1}{m_0},\ldots, \frac{m_\theta}{m_0}\}, \omega m_0,\gamma^{m^2_0}).$} \emph{Proof of Claim 1.} Since $(m_0, m)=1$, there exist $a\in \N, b\in \Z$ such that $am_0+bm=1$.
Define the map \begin{align*}\varphi:\;& B(\underline{m},\omega,\gamma)\longrightarrow B(\{\frac{m_1}{m_0},\ldots, \frac{m_\theta}{m_0}\}, \omega m_0,\gamma^{m^2_0}),\\ &\;\;x\mapsto x',\;g\mapsto (g')^{a}(x')^{b\omega},\;y_{m_i}\mapsto y'_{\frac{m_i}{m_0}},\;\;(1\leq i\leq \theta).\end{align*} Since \begin{align*}\varphi(g^{m_i})&=\varphi(g)^{m_i}=((g')^{a}(x')^{b\omega})^{m_i} =(g')^{am_0\frac{m_i}{m_0}}(x')^{b\omega' \frac{m_i}{m_0}}\\ &=(g')^{am_0\frac{m_i}{m_0}}(g')^{bm \frac{m_i}{m_0}}=(g')^{(am_0+bm)\frac{m_i}{m_0}}\\ &=(g')^{\frac{m_i}{m_0}} \end{align*} and \begin{align*}\varphi(y_{m_i}g)&=\varphi(y_{m_i})\varphi(g) =y'_{\frac{m_i}{m_0}}(g')^{a}(x')^{b\omega}\\ &=\gamma^{am^2_0\frac{m_i}{m_0}}(g')^{a}(x')^{b\omega}y'_{\frac{m_i}{m_0}} =\gamma^{m_i}\varphi(g)\varphi(y_{m_i})\\ &=\varphi(\gamma^{m_i}gy_{m_i}), \end{align*} for all $1\leq i\leq \theta$, it is not hard to verify that $\varphi$ gives the desired isomorphism. Next, let us show uniqueness. To prove it, it is enough to establish the following statement. \emph{Claim 2: Let $\{m_1,\ldots,m_\theta\}$ and $\{m'_1,\ldots,m'_\theta\}$ be two fractions of $m$ of length $\theta$ satisfying $(m_1,\ldots,m_\theta)=(m'_1,\ldots,m'_\theta)=1$. If $B(\underline{m},\omega,\gamma)$ is isomorphic to $B(\underline{m'},\omega',\gamma')$, then up to a reordering of the $m_i$'s we have $m_i=m_i'$, $\omega=\omega'$ and $\gamma=\gamma'$ for $1\leq i\leq \theta$.} \emph{Proof of Claim 2.} By Proposition \ref{pliu}, $\omega m_i=\omega' m_i'$. Since $$(m_1,\ldots,m_\theta)=(m'_1,\ldots,m'_\theta)=1,$$ we get $\omega|\omega'$ and $\omega'|\omega$. Therefore $\omega=\omega'$, and thus $m_i=m_i'$ for all $1\leq i\leq \theta.$ From this, we know that the isomorphism given in the proof of Proposition \ref{pliu} must send $g^{m_i}$ to $(g')^{m_i}$; that is, keeping the notations used in the proof of Proposition \ref{pliu}, we have $\varphi(g^{m_i})=(g')^{m_i}$ for all $1\leq i\leq\theta$. Since $(m_1,\ldots, m_\theta)=1$, there exist $a_i\in \Z$ such that $\sum_{i=1}^{\theta}a_im_i=1$. Thus $$\varphi(g)=\varphi(g^{\sum_{i=1}^{\theta}a_im_i})=(g')^{\sum_{i=1}^{\theta}a_im_i}=g'.$$ This implies that $$\gamma^{m_i}=(\gamma')^{m_i}$$ by the relation $y_{m_i}g=\gamma^{m_i}gy_{m_i}$. So $$\gamma=\gamma^{\sum_{i=1}^{\theta}a_im_i}=(\gamma')^{\sum_{i=1}^{\theta}a_im_i}=\gamma'.$$ \end{proof} \begin{definition}\label{d4.10} \emph{We call the Hopf algebra $B(\{\frac{m_1}{m_0},\ldots, \frac{m_\theta}{m_0}\}, \omega m_0,\gamma^{m^2_0})$ the \emph{basic form} of $B(\underline{m},\omega,\gamma)$.} \end{definition} By this lemma, we can now tell when two fractions of generalized Liu algebras are isomorphic. Keeping the notations introduced before, let $m,m'\in \N$ and let $\{m_1,\ldots,m_{\theta}\}$, $\{m_1',\ldots,m'_{\theta'}\}$ be fractions of $m$ and $m'$ respectively. Let $m_0:=(m_1,\ldots,m_\theta)$ and $m_0':=(m_1',\ldots,m'_{\theta'}).$ \begin{proposition}\label{iliu} Retain the above notations. As Hopf algebras, $B(\underline{m},\omega,\gamma)\cong B(\underline{m'},\omega',\gamma')$ if and only if $m=m'$, $\theta=\theta'$, $\omega m_0=\omega'm_0'$ and $\gamma^{m^2_0}=\gamma'^{(m_0')^2}.$ \end{proposition} \begin{proof} By the above lemma, $B(\underline{m},\omega,\gamma)\cong B(\underline{m'},\omega',\gamma')$ if and only if they have the same basic form. Now the conditions listed in the proposition are clearly equivalent to saying that their basic forms coincide.
\end{proof} \subsection{Fraction of the Hopf algebra $D(\underline{m},d,\gamma)$.}\label{ss4.4} Let $m,d$ be two natural numbers and let $m_1,\ldots,m_\theta$ be a fraction of $m$ satisfying the following two conditions: \begin{equation}\label{eq4.7} 2|\sum_{i=1}^{\theta}(m_i-1)(e_i-1)\quad \textrm{and}\quad 2|\sum_{i=1}^{\theta}(e_i-1)m_id. \end{equation} Let $\gamma$ be a primitive $m$th root of unity and define \begin{equation}\xi_{m_i}:=\sqrt{\gamma^{m_i}},\;\;\;\;1\leq i\leq \theta.\end{equation} That is, $\xi_{m_i}$ is a square root of $\gamma^{m_i}$, chosen to be a primitive $2e_i$th root of unity (such a choice is possible since $\gamma^{m_i}$ has order $e_i$). In particular, one has \begin{equation}\xi_{m_i}^{e_i}=-1\end{equation} for all $1\leq i\leq \theta$. In order to give the definition of the Hopf algebra $D(\underline{m},d,\gamma)$, we still need to recall two notations introduced in Section 3: \begin{equation} {]s,t[_{m_i}}=\begin{cases} \phi_{m_i,(\bar{t}+1)m_i}\cdots \phi_{m_i,(e_i-1)m_i}\phi_{m_i,0}\cdots\phi_{m_i,(\bar{s}-1)m_i}, & \textrm{if}\; \bar{t}\geqslant \bar{s} \\1, & \textrm{if}\; \bar{s}=\overline{t}+1 \\ \phi_{m_i,(\bar{t}+1)m_i}\cdots \phi_{m_i,(\bar{s}-1)m_i}, & \textrm{if}\; \overline{s}\geqslant \bar{t}+2, \end{cases} \end{equation} and \begin{equation} {[s,t]_{m_i}}:=\begin{cases} \phi_{m_i,\bar{s}m_i}\phi_{m_i,(\bar{s}+1)m_i}\cdots \phi_{m_i,\bar{t}m_i}, & \textrm{if}\; \bar{t}\geqslant \bar{s} \\1, & \textrm{if}\; \bar{s}=\overline{t}+1 \\ \phi_{m_i,\bar{s}m_i}\cdots \phi_{m_i,(e_i-1)m_i}\phi_{m_i,0}\cdots \phi_{m_i,\bar{t}m_i}, & \textrm{if}\; \overline{s}\geqslant \bar{t}+2, \end{cases} \end{equation} where $\phi_{m_i,j}=1-\gamma^{-m_i^2(j_i+1)}x^{m_id}$ for all $1\leq i\leq \theta$; see \eqref{eqomit} and \eqref{eqpre} for details. Now we are in a position to give the definition of $D(\underline{m},d,\gamma)$. $\bullet$ As an algebra, $D=D(\underline{m},d,\gamma)$ is generated by $x^{\pm 1}, g^{\pm 1}, y_{m_1},\ldots, y_{m_\theta}, u_0, u_1, \cdots, u_{m-1}$, subject to the following relations \begin{eqnarray} &&xx^{-1}=x^{-1}x=1,\quad gg^{-1}=g^{-1}g=1,\quad xg=gx,\quad xy_{m_i}=y_{m_i}x,\\ &&y_{m_i}y_{m_k}=y_{m_k}y_{m_i},\quad y_{m_i}g=\gamma^{m_i} gy_{m_i},\quad y_{m_i}^{e_i}=1-x^{e_im_id},\quad g^{m}=x^{md},\\ &&xu_j=u_jx^{-1},\quad y_{m_i}u_j=\phi_{m_i,j}u_{j+m_i}=\xi_{m_i}x^{m_id}u_jy_{m_i},\quad u_j g=\gamma^j x^{-2d}gu_j,\\ \label{eq4.14}&&u_ju_l=(-1)^{\sum_{i=1}^{\theta}l_i}\gamma^{\sum_{i=1}^{\theta}m_i^2\frac{l_i(l_i+1)}{2}} \frac{1}{m}x^{-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \\ \notag &&\quad \quad\quad\prod_{i=1}^{\theta}\xi_{m_i}^{-l_i}[j_i,e_i-2-l_i]_{m_i}y_{\overline{j+l}}g \end{eqnarray} for $1\leq i,k\leq \theta$ and $0\leq j,l\leq m-1$, where for any integer $n$, $\overline{n}$ denotes the remainder of $n$ upon division by $m$ and, as before, $n\equiv\sum_{i=1}^{\theta} n_im_i$ (mod $m$) as in Remark \ref{r3.2}.
$\bullet$ The coproduct $\D$, the counit $\epsilon$ and the antipode $S$ of $D(\underline{m},d,\gamma)$ are given by \begin{eqnarray} &&\D(x)=x\otimes x,\;\; \D(g)=g\otimes g, \;\;\D(y_{m_i})=y_{m_i}\otimes g^{m_i}+1\otimes y_{m_i},\\ &&\D(u_j)=\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k\otimes x^{-kd}g^ku_{j-k};\\ &&\epsilon(x)=\epsilon(g)=\epsilon(u_0)=1,\;\;\epsilon(y_{m_i})=\epsilon(u_s)=0;\\ &&S(x)=x^{-1},\;\; S(g)=g^{-1}, \;\;S(y_{m_i})=-y_{m_i}g^{-m_i},\\ &&\label{eq4.19} S(u_j)=(-1)^{\sum_{i=1}^{\theta}j_i}\gamma^{-\sum_{i=1}^{\theta} m_i^2\frac{j_i(j_i+1)}{2}} x^{b+\sum_{i=1}^{\theta}j_im_id} g^{m-1-(\sum_{i=1}^{\theta}j_im_i)}\prod_{i=1}^{\theta}\xi_{m_i}^{-j_i}u_j, \end{eqnarray} for $1\leq i\leq \theta,\;1\leq s\leq m-1\;,0\leq j\leq m-1$ and $b=(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d$. Before we prove that $D(\underline{m},d,\gamma)$ is a Hopf algebra, which is highly nontrivial, we want to express the formula \eqref{eq4.14} and \eqref{eq4.19} in a more convenient way. On one hand, we find that \begin{equation}\label{eq4.20}(-1)^{-ke_i-j_i}\xi_{m_i}^{-ke_{i}-j_{i}} \gamma^{m_i^{2}\tfrac{(ke_i+j_i)(ke_i+j_i+1)}{2}}=(-1)^{-j_i}\xi_{m_i}^{-j_i} \gamma^{m_i^{2}\tfrac{j_i(j_i+1)}{2}}\end{equation} for any $k\in \mathbbm{Z}$. Therefore, if we define $$u_{s}:=u_{\overline{s}},$$ where $\overline{s}$ means the remainder of $s$ modulo $m$, then the relation \eqref{eq4.14} can be replaced by \begin{align} \notag u_ju_l&=(-1)^{\sum_{i=1}^{\theta}l_i}\gamma^{\sum_{i=1}^{\theta}m_i^2\frac{l_i(l_i+1)}{2}} \frac{1}{m}x^{-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \\ \notag &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \prod_{i=1}^{\theta}\xi_{m_i}^{-l_i}[j_i,e_i-2-l_i]_{m_i}y_{\overline{j+l}}g\\ \notag &=(-1)^{\sum_{i=1}^{\theta}l_i}\gamma^{\sum_{i=1}^{\theta}m_i^2\frac{l_i(l_i+1)}{2}} \frac{1}{m}x^{-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \\ \notag&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \prod_{i=1}^{\theta}\xi_{m_i}^{-l_i}]-1-l_i,j_i-1[_{m_i}y_{\overline{j+l}}g\\ \label{eq4.21} &=\frac{1}{m}x^{-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}]-1-l_i,j_i-1[_{m_i}y_{\overline{j+l}}g\\ \notag &=\frac{1}{m}x^{-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}[j_i,e_i-2-l_i]_{m_i}y_{\overline{j+l}}g \end{align} for all $j, l\in \mathbbm{Z}$, that is, we need not always ask that $0\leq j,l\leq m-1$. On other hand, since $g^m=x^{md}$ and \eqref{eq4.20} , the definition about $S(u_{j})$ still holds for any integer $j$, that is, \eqref{eq4.19} can be replaced in the following way: \begin{align}\notag S(u_j)&=(-1)^{\sum_{i=1}^{\theta}j_i}\prod_{i=1}^{\theta}\xi_{m_i}^{-j_i}\gamma^{-\sum_{i=1}^{\theta} m_i^2\frac{j_i(j_i+1)}{2}} x^{\sum_{i=1}^{\theta}j_im_id}x^{b} g^{m-1-(\sum_{i=1}^{\theta}j_im_i)}u_j\\ &= x^{b}g^{m-1}\prod_{i=1}^{\theta}(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}u_j \end{align} for all $j\in \mathbbm{Z}$. We also need to give a bigrading on this algebra for the proof. 
Let $\xi:=\sqrt{\gamma}$ and define the following two algebra automorphisms of $D(\underline{m},d,\gamma)$: \[{\Xi_{\pi}^l:} \begin{cases} x\longmapsto x, &\\ y_{m_i}\longmapsto y_{m_i}, &\\ g\longmapsto \gamma g, &\\ u_i\longmapsto \xi u_i, & \end{cases} \textrm{and}\;\;\;\; \Xi_{\pi}^r: \begin{cases} x\longmapsto x, &\\ y_{m_i}\longmapsto \gamma^{m_i}y_{m_i}, &\\ g\longmapsto \gamma g, &\\ u_j\longmapsto \xi^{2j+1} u_j,& \end{cases}\] for $1\leq i\leq \theta$ and $0\leq j\leq m-1.$ It is straightforward to show that $\Xi_{\pi}^l$ and $\Xi_{\pi}^r$ are indeed algebra automorphisms of $D(\underline{m},d,\gamma)$ and these automorphisms have order $2m$ by noting that $\xi$ is a primitive $2m$th root of 1. Define \[{D_i^l=} \begin{cases} \k [x^{\pm 1}, y_{m_1},\ldots,y_{m_\theta}] g^{\tfrac{i}{2}}, & i=\textrm{even},\\ \sum_{s=0}^{m-1}\k[ x^{\pm 1}] g^{\tfrac{i-1}{2}}u_s, & i=\textrm{odd}, \end{cases}\] and \[ D_j^r= \begin{cases} \k[ x^{\pm 1}, y_{m_1}g^{-m_1},\dots,y_{m_\theta}g^{-m_\theta}] g^{\tfrac{j}{2}}, & j=\textrm{even},\\ \sum_{s=0}^{m-1}\k[x^{\pm 1}] g^su_{\tfrac{j-1}{2}-s}, & j=\textrm{odd}. \end{cases}\] Therefore \begin{equation}\label{eqD} D_{ij}:=D_i^l\cap D_j^r= \begin{cases} \k[x^{\pm 1}]y_{\overline{\frac{j-i}{2}}}g^{\frac{i}{2}}, & i, j=\textrm{even},\\ \k[x^{\pm 1}]g^{\tfrac{i-1}{2}}u_{{\tfrac{j-i}{2}}}, & i, j=\textrm{odd},\\ 0, & \textrm{otherwise}. \end{cases}\end{equation} Since $\sum_{i,j}D_{ij}=D(\underline{m},d,\gamma)$, we have \begin{equation}\label{eq4.24}D(\underline{m},d,\gamma)=\bigoplus_{i,j=0}^{2m-1}D_{ij}\end{equation} which is a bigrading on $D(\underline{m},d,\gamma)$ automatically. Let $D:= D(\underline{m},d,\gamma)$, then $D\otimes D$ is graded naturally by inheriting the grading defined above. In particular, for any $h\in D\otimes D$, we use $$h_{(s_1,t_1)\otimes (s_2,t_2)}$$ to denote the homogeneous part of $h$ in $D_{s_{1},t_{1}}\otimes D_{s_{2},t_{2}} $. This notion will be used freely in the proof of the following desired proposition. \begin{proposition}\label{p4.7} The algebra $D(\underline{m},d,\gamma)$ defined above is a Hopf algebra. \end{proposition} \emph{Proof:} The proof is standard but not easy. We are aware that one can not apply the fact that the non-fraction version $D(m,d,\gamma)$ (see Subsection 2.3) is already a Hopf algebra to simply the proof although we can do this in the proofs of Proposition \ref{p4.6} and \ref{p4.1}. The reason is that if we consider the subalgebra generated by $x^{\pm 1},g,u_{0},\ldots, u_{m-1}$ together with a single $y_{m_i}$ (this is the case of $D(m,d,\gamma)$) then we can find that the other $y_{m_j}$'s will be created naturally. So, one has to prove it step by step. Since the subalgebra generated by $x^{\pm 1}, y_{m_1},\dots,y_{m_{\theta}}, g$ is just a fraction version of generalized Liu algebra $B(\underline{m}, \omega, \gamma)$, which is a Hopf algebra already (by Proposition \ref{p4.6}), we only need to verify the related relations in $D(\underline{m},d,\gamma)$ where $u_j$ are involved. \noindent $\bullet$ \emph{Step} 1 ($\D$ and $\epsilon$ are algebra homomorphisms). First of all, it is clear that $\epsilon$ is an algebra homomorphism. Since $x$ and $g$ are group-like elements, the verifications of $\D(x)\D(u_i)=\D(u_i)\D(x^{-1})$ and $\D(u_i)\D(g)=\gamma^{i}\D(x^{-2d})\D(g)\D(u_i)$ are simple and so they are omitted. \noindent (1) \emph{The proof of} $\D(\phi_{m_i,j})\D(u_{m_i+j})=\D(y_{m_i})\D(u_j)=\xi_{m_i}\D(x^{m_id})\D(u_j)\D(y_{m_i})$. 
Define $$\gamma_i:=\gamma^{-m_i^2}$$ for all $1\leq i\leq \theta.$ By definition $\D(u_j)=\sum_{k=0}^{m-1}\gamma^{k(k-j)}u_k\otimes x^{-kd}g^ku_{j-k}$ for all $0 \leqslant j \leqslant m-1$, we have \begin{align*} \D(\phi_{m_i,j})\D(u_{m_i+j})&=(1\otimes 1 - \gamma_i^{1+j_i}x^{m_id} \otimes x^{m_id})\sum_{k=0}^{m-1}\gamma^{k(j+m_i-k)}u_k\otimes x^{-kd}g^ku_{j+m_i-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j+m_i-k)}u_k\otimes x^{-kd}g^ku_{j+m_i-k} \\ &\quad -\sum_{k=0}^{m-1}\gamma^{-m_i^2(1+j_i)+k(j+m_i-k)}x^{m_id}u_k\otimes x^{m_id-kd}g^ku_{j+m_i-k}. \end{align*} And \begin{align*} \D(y_{m_i})\D(u_j)&=(1\otimes y_{m_i}+y_{m_i}\otimes g^{m_i})(\sum_{k=0}^{m-1}\gamma^{k(k-j)}u_k\otimes x^{-kd}g^ku_{j-k})\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k\otimes x^{-kd}g^{k}\gamma^{km_i}\phi_{m_i,j-k}u_{j+m_i-k}\\ & \quad + \sum_{k=0}^{m-1}\gamma^{k(j-k)}\phi_{m_i,k}u_{m_i+k}\otimes x^{-kd}g^{m_i+k}u_{j-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)+km_i}u_{k}\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &\quad - \sum_{k=0}^{m-1}\gamma^{k(j-k)}u_{k}\otimes\gamma^{-m_i^2(j_i+1-2k_i)} x^{(m_i-k)d}g^{k}u_{j+m_i-k}\\ &\quad +\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_{m_i+k}\otimes x^{-kd}g^{m_i+k}u_{j-k}\\ &\quad - \sum_{k=0}^{m-1}\gamma^{k(j-k)-m_i^2(1+k_i)}x^{m_id}u_{m_i+k}\otimes x^{-kd}g^{m_i+k}u_{j-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)+km_i}u_{k}\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &\quad - \sum_{k=0}^{m-1}\gamma^{k(j-k)-m_i^2(j_i+1-2k_i)}u_{k}\otimes x^{(m_i-k)d}g^{k}u_{j+m_i-k}\\ &\quad +\sum_{k=0}^{m-1} \gamma^{(k-m_i)(j-k+m_i)}u_{k}\otimes x^{-(k-m_i)d}g^ku_{j+m_i-k}\\ &\quad -\sum_{k=0}^{m-1}\gamma^{(k-m_i)(j+m_i-k)-m_i^2k_i}x^{m_id}u_k\otimes x^{-(k-m_i)d}g^{k}u_{j+m_i-k}\\ &= \sum_{k=0}^{m-1}\gamma^{k(j+m_i-k)}u_k\otimes x^{-kd}g^ku_{j+m_i-k} \\ &\quad -\sum_{k=0}^{m-1}\gamma^{-m_i^2(1+j_i)+k(j+m_i-k)}x^{m_id}u_k\otimes x^{m_id-kd}g^ku_{j+m_i-k}. \end{align*} Here we use the following equalities $$\gamma^{(k-m_i)(j-k+m_i)}=\gamma^{k(j-k)+km_i-m_i(j-k)-m_{i}^2}=\gamma^{k(j-k)+2k_im_i^2-m_i^2(1+j_i)},$$ and $$\gamma^{(k-m_i)(j+m_i-k)-m_i^2k_i}=\gamma^{-m_i^2(1+j_i)+k(j+m_i-k)}.$$ Hence $\D(\phi_{m_i,j})\D(u_{m_i+j})=\D(y_{m_i})\D(u_j)$. 
Similarly, $\xi_{m_i}\D(x^{m_id})\D(u_j)\D(y_{m_i})$ \begin{align*} &=\xi_{m_i}(x^{m_id}\otimes x^{m_id})(\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k\otimes x^{-kd}g^ku_{j-k})(1\otimes y_{m_i} +y_{m_i}\otimes g^{m_i})\\ &=\sum_{k=0}^{m-1}\xi_{m_i}\gamma^{k(j-k)}x^{m_id}u_k\otimes x^{(m_i-k)d}g^{k}u_{j-k}y_{m_i}\\ &\quad + \sum_{k=0}^{m-1}\xi_{m_i}\gamma^{k(j-k)}x^{m_id}u_ky_{m_i}\otimes x^{(m_i-k)d}g^ku_{j-k}g^{m_i}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}x^{m_id}u_k\otimes x^{-kd}g^{k}\phi_{m_i,j-k}u_{j+m_i-k}\\ &\quad +\sum_{k=0}^{m-1}\gamma^{k(j-k)}\phi_{m_i,k}u_{k+m_i}\otimes \gamma^{(j-k)m_i}x^{(-m_i-k)d}g^{k+m_i}u_{j-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}x^{m_id}u_k\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &\quad - \sum_{k=0}^{m-1}\gamma^{k(j-k)-m_i^2(1+j_i-k_i)}x^{m_id}u_k\otimes x^{(-k+m_i)d}g^{k}u_{j+m_i-k}\\ &\quad + \sum_{k=0}^{m-1}\gamma^{(k-m_i)(j-k+m_i)}(1-\gamma^{-m_i^2k_i}x^{m_i}d)u_{k}\otimes \gamma^{(j-k+m_i)m_i}x^{-kd}g^{k}u_{j+m_i-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}x^{m_id}u_k\otimes x^{-kd}g^{k}\phi_{m_i,j-k}u_{j+m_i-k}\\ &\quad +\sum_{k=0}^{m-1}\gamma^{k(j-k)}\phi_{m_i,k}u_{k+m_i}\otimes \gamma^{(j-k)m_i}x^{(-m_i-k)d}g^{k+m_i}u_{j-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}x^{m_id}u_k\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &\quad - \sum_{k=0}^{m-1}\gamma^{k(j-k)-m_i^2(1+j_i-k_i)}x^{m_id}u_k\otimes x^{(-k+m_i)d}g^{k}u_{j+m_i-k}\\ &\quad + \sum_{k=0}^{m-1}\gamma^{k(j-k+m_i)}u_{k}\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &-\sum_{k=0}^{m-1}\gamma^{k(j-k)}x^{m_id}u_k\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &= \sum_{k=0}^{m-1}\gamma^{k(j-k+m_i)}u_k\otimes x^{-kd}g^{k}u_{j+m_i-k}\\ &\quad -\sum_{k=0}^{m-1}\gamma^{k(j-k)-m_i^2(1+j_i-k_i)}x^{m_id}u_k\otimes x^{m_id-kd}g^ku_{j+m_i-k}\\ &= \D(\phi_{m_i,j})\D(u_{m_i+j}). \end{align*} \noindent (2)\emph{ The proof of} $\D(u_ju_l)=\D(u_j)\D(u_l)$. Direct computation shows that \begin{align*} \D(u_j)\D(u_l)&=\sum_{s=0}^{m-1}\gamma^{s(j-s)}u_s\otimes x^{-sd}g^su_{j-s} \sum_{t=0}^{m-1}\gamma^{t(l-t)}u_t\otimes x^{-td}g^tu_{l-t}\\ &=\sum_{t=0}^{m-1}\sum_{s=0}^{m-1}\gamma^{s(j-s)}u_s\gamma^{(t-s)(l-t+s)}u_{t-s} \otimes x^{-sd}g^su_{j-s}x^{-(t-s)d}g^{t-s}u_{l-t+s}\\ &=\sum_{t=0}^{m-1}\sum_{s=0}^{m-1}\gamma^{(t-s)(l-t+s)+(j-s)t}u_su_{t-s}\otimes x^{-td}g^tu_{j-s}u_{l-t+s}. \end{align*} By the bigrading given in \eqref{eq4.24}, we can find that for each $0\leqslant t \leqslant m-1$, $$\sum_{s=0}^{m-1}\gamma^{(t-s)(l-t+s)+(j-s)t}u_su_{t-s}\otimes x^{-td}g^tu_{j-s}u_{l-t+s}\in D_{2, 2+2t}\otimes D_{2+2t, 2+2(j+l)},$$ where the suffixes in $ D_{2, 2+2t}\otimes D_{2+2t, 2+2(j+l)}$ are interpreted mod $2m$. Using equation \eqref{eq4.21}, we get that $$u_su_{t-s}=\frac{1}{m}x^{a}\prod_{i=1}^{\theta}(-1)^{(t-s)_i}\xi_{m_i}^{-(t-s)_i}\gamma^{m_i^2 \frac{(t-s)_i((t-s)_i+1)}{2}}\;{[s_i, e_i-2-(t-s)_i]_{m_i}}\; y_{t}g$$ and \begin{eqnarray*} u_{j-s}u_{l-t+s}&=&\frac{1}{m}x^{a}\prod_{i=1}^{\theta} (-1)^{(l-t+s)_i}\xi_{m_i}^{-(l-t+s)_i}\gamma^{m_i^{2}\frac{(l-t+s)_i[(j-t+s)_i+1]}{2}}\\ &&{[(j-s)_i, e_i-2-(l-t+s)_i]_{m_i}}\;y_{\overline{j+l-t}}g\end{eqnarray*} here and the following of this proof $a=-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d$. 
Using \cite[Proposition IV.2.7]{Kas}, for each $1\leq i\leq \theta$ \begin{align*} \;{[s_i, e_i-2-(t-s)_i]_{m_i}}&=(1-\gamma_i^{s+1}x^{m_id})(1-\gamma_i^{s+2}x^{m_id})\cdots (1-\gamma_i^{(e_i-1-t_i+s_i)}x^{m_id})\\ &=\sum_{\alpha_i=0}^{e_i-1-t_i}(-1)^{\alpha_i}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i} \gamma_i^{\tfrac{\alpha_i(\alpha_i-1)}{2}}(\gamma_i^{s+1}x^{m_id})^{\alpha_i}\\ &=\sum_{\alpha_i=0}^{e_i-1-t_i}(-1)^{\alpha_i}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i} \gamma_i^{\tfrac{\alpha_i(\alpha_i+1)}{2}+s_i\alpha_i}x^{m_id\alpha_i}, \end{align*} and \begin{eqnarray*} \;&&{[(j-s)_i, e_i-2-(l-t+s)_i]_{m_i}}\\ &=&(1-\gamma_i^{j_i-s_i+1}x^{m_id})(1-\gamma_i^{j_i-s_i+2}x^{m_id})\cdots (1-\gamma_i^{j_i-s_i+e_i-1-\overline{(j_i+l_i-t_i)}}x^{m_id})\\ &=&\sum_{\beta_i=0}^{e_i-1-\overline{(j_i+l_i-t_i)}} (-1)^{\beta_i}\binom{e_i-1-\overline{(j_i+l_i-t_i)}}{\beta_i}_{\gamma_i} \gamma_i^{\tfrac{\beta_i(\beta_i-1)}{2}}(\gamma_i^{j_i-s_i+1}x^{m_id})^{\beta_i}\\ &=&\sum_{\beta_i=0}^{e_i-1-\overline{(j_i+l_i-t_i)}} (-1)^{\beta_i}\binom{e_i-1-\overline{(j_i+l_i-t_i)}}{\beta_i}_{\gamma_i} \gamma_i^{\tfrac{\beta_i(\beta_i+1)}{2}+(j_i-s_i)\beta_i}x^{m_id\beta_i}, \end{eqnarray*} where $\overline{(j_i+l_i-t_i)}$ is the remainder of $j_i+l_i-t_i$ divided by $e_i$. Then for each $0\leqslant t \leqslant m-1$, \begin{eqnarray} && \D(u_j)\D(u_l)_{{(2, 2+2t)}\otimes {(2+2t,2+2(j+l))}}\\ \notag &=&\sum_{s=0}^{m-1}\gamma^{(t-s)(l-t+s)+(j-s)t}u_su_{t-s}\otimes x^{-td}g^tu_{j-s}u_{l-t+s}\\ \notag &=&\sum_{s=0}^{m-1}\gamma^{(t-s)(l-t+s)+(j-s)t}\frac{1}{m}x^{a}\prod_{i=1}^{\theta}(-1)^{(t-s)_i}\xi_{m_i}^{-(t-s)_i}\gamma^{m_i^2 \frac{(t-s)_i((t-s)_i+1)}{2}}\\ \notag&&{[s_i, e_i-2-(t-s)_i]_{m_i}} y_{t}g\\ \notag&&\otimes x^{-td}g^t\frac{1}{m}x^{a}\prod_{i=1}^{\theta} (-1)^{(l-t+s)_i}\xi_{m_i}^{-(l-t+s)_i}\gamma^{m_i^{2}\frac{(l-t+s)_i[(j-t+s)_i+1]}{2}}\\ \notag&& {[(j-s)_i, e_i-2-(l-t+s)_i]_{m_i}}\;y_{\overline{j+l-t}}g\\ \notag &=&[\sum_{s=0}^{m-1}\gamma^{(j-s)t-t(j+l-t)}\frac{1}{m^2}\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2 \frac{l_i(l_i+1)}{2}}[s_i, e_i-2-(t-s)_i]_{m_i}\\ \notag&& \otimes x^{-td}\prod_{i=1}^{\theta}[(j-s)_i, e_i-2-(l-t+s)_i]_{m_i}](x^{a}y_t g\otimes x^a y_{\overline{j+l-t}}g^{t+1})\\ \notag &=& [\sum_{s=0}^{m-1}\gamma^{(j-s)t-t(j+l-t)}\frac{1}{m^2}\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2 \frac{l_i(l_i+1)}{2}}\\ \notag&&\sum_{\alpha_i=0}^{e_i-1-t_i}(-1)^{\alpha_i}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i} \gamma_i^{\tfrac{\alpha_i(\alpha_i+1)}{2}+s_i\alpha_i}\\ \notag&\otimes&\prod_{k=1}^{\theta} \sum_{\beta_k=0}^{e_k-1-\overline{(j_k+l_k-t_k)}} (-1)^{\beta_k}\binom{e_k-1-\overline{(j_k+l_k-t_k)}}{\beta_k}_{\gamma_k}\\ \notag&&\gamma_k^{\tfrac{\beta_k(\beta_k+1)}{2}+(j_k-s_k)\beta_k} (x^{m_id\alpha_i}\otimes x^{m_kd\beta_k-td})](x^{a}y_t g\otimes x^a y_{\overline{j+l-t}}g^{t+1})\\ \notag&=& \frac{1}{m^2}\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2 \frac{l_i(l_i+1)}{2}} \prod_{i,k=1}^{\theta}\\ \notag&& [\sum_{\alpha_i=0}^{e_i-1-t_i}\sum_{\beta_{k}=1}^{e_k-1-\overline{j_k+l_k-t_k}} (-1)^{\alpha_i+\beta_k}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i} \binom{e_k-1-\overline{(j_k+l_k-t_k)}}{\beta_k}_{\gamma_k}\\ \notag \label{eq4.26}&& \gamma_i^{\tfrac{\alpha_i(\alpha_i+1)}{2}} \gamma_k^{\tfrac{\beta_k(\beta_k+1)}{2}+j_k\beta_k}(x^{m_id\alpha_i}\otimes x^{m_kd\beta_k-td})\\ && \gamma^{t(t-l)}\sum_{s=0}^{m-1}\gamma^{-ts}\gamma^{-m_i^2s_i\alpha_i+m_k^2s_k\beta_k}](x^{a}y_t g\otimes x^a y_{\overline{j+l-t}}g^{t+1}). 
\end{eqnarray} Meanwhile, $u_ju_l=\frac{1}{m}x^{a}\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}\frac{1}{m}{[j_i, e_i-2-l_i]_{m_i}}\;y_{\overline{j+l}}g$. By definition, $$y_{\overline{j+l}}=y_{m_1}^{\overline{j_1+l_1}}y_{m_2}^{\overline{j_2+l_2}}\cdots y_{m_\theta}^{\overline{j_\theta+l_\theta}}$$ where $\overline{j_i+l_i}$ is the remainder of $j_i+l_i$ divided by $e_i$ for $1\leq i\leq \theta.$ Therefore, \begin{align*} \D(y_{\overline{j+l}})&=\prod_{i=1}^{\theta}(1\otimes y_{m_i}+ y_{m_i}\otimes g^{m_i})^{\overline{j_i+l_i}}\\ &=\prod_{i=1}^{\theta}\sum_{t_i=0}^{\overline{j_i+l_i}} \binom{\overline{j_i+l_i}}{t_i}_{\gamma_i}(1\otimes y_{m_i})^{\overline{j_i+l_i}-t_i} (y_{m_i}\otimes g^{m_i})^{t_i}\\ &=\prod_{i=1}^{\theta}\sum_{t_i=0}^{\overline{j_i+l_i}} \binom{\overline{j_i+l_i}}{t_i}_{\gamma_i} y_{m_i}^{t_i}\otimes y_{m_i}^{\overline{j_i+l_i}-t_i}g^{m_it_i}. \end{align*} and \begin{eqnarray*} &&\D({[j_i, e_i-2-l_i]_{m_i}})\\ &=&(1\otimes 1 - \gamma_i^{j_i+1}x^{m_id}\otimes x^{m_id}) \cdots (1\otimes 1 - \gamma_i^{e_i-1+j_i-\overline{j_i+l_i}}x^{m_id}\otimes x^{m_id})\\ &=&\sum_{\alpha_i=0}^{e_i-1-\overline{j_i+l_i}}(-1)^{\alpha_i} \binom{e_i-1-\overline{j_i+l_i}}{\alpha_i}_{\gamma_i}\gamma_{i}^{\tfrac{\alpha_i(\alpha_i-1)}{2}} (\gamma_i^{j_i+1}x^{m_id}\otimes x^{m_id})^{\alpha_{i}}\\ &=&\sum_{\alpha_i=0}^{e_i-1-\overline{j_i+l_i}}(-1)^{\alpha_i} \binom{e_i-1-\overline{j_i+l_i}}{\alpha_i}_{\gamma_i}\gamma_{i}^{\tfrac{\alpha_i(\alpha_i+1)}{2}+j_i\alpha_i} (x^{m_id\alpha_i}\otimes x^{m_id\alpha_i}), \end{eqnarray*} we get \begin{eqnarray*} \D(u_ju_l)&=&\frac{1}{m}\D(x^a)\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}\D({[j_i, e_i-2-l_i]_{m_i}}) \D(y_{\overline{j+l}})\D(g)\\ &=&\frac{1}{m} \prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}} \sum_{\alpha_i=0}^{e_i-1-\overline{j_i+l_i}}(-1)^{\alpha_i} \binom{e_i-1-\overline{j_i+l_i}}{\alpha_i}_{\gamma_i} \gamma_{i}^{\tfrac{\alpha_i(\alpha_i+1)}{2}+j_i\alpha_i}\\ &&\sum_{t_i=0}^{\overline{j_i+l_i}} \binom{\overline{j_i+l_i}}{t_i}_{\gamma_i}(x^a\otimes x^{a})(x^{m_id\alpha_i}\otimes x^{m_id\alpha_i})(y_{m_i}^{t_i}\otimes y_{m_i}^{\overline{j_i+l_i}-t_i}g^{m_it_i})](g\otimes g)\\ &=& \frac{1}{m} \prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}} \sum_{t_i=0}^{\overline{j_i+l_i}}\sum_{\alpha_i=0}^{e_i-1-\overline{j_i+l_i}}(-1)^{\alpha_i} \binom{e_i-1-\overline{j_i+l_i}}{\alpha_i}_{\gamma_i}\binom{\overline{j_i+l_i}}{t_i}_{\gamma_i}\\ && \gamma_{i}^{\tfrac{\alpha_i(\alpha_i+1)}{2}+j_i\alpha_i}(x^{m_id\alpha_i}\otimes x^{m_id\alpha_i})(x^ay_{m_i}^{t_i}g\otimes x^ay_{m_i}^{\overline{j_i+l_i}-t_i}g^{m_it_i+1})]. \end{eqnarray*} Clearly, for each $t$ satisfying $0\leq t_i\leq \overline{j_i+l_i}$, \begin{eqnarray} \label{eq5.26}&&\D(u_ju_l)_{(2,2+2t)\otimes (2+2t,2+2(j+l))}\\ \notag&=& \frac{1}{m} \prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}} \sum_{\alpha_i=0}^{e_i-1-\overline{j_i+l_i}}(-1)^{\alpha_i} \binom{e_i-1-\overline{j_i+l_i}}{\alpha_i}_{\gamma_i}\binom{\overline{j_i+l_i}}{t_i}_{\gamma_i}\\ \notag&& \gamma_{i}^{\tfrac{\alpha_i(\alpha_i+1)}{2}+j_i\alpha_i}(x^{m_id\alpha_i}\otimes x^{m_id\alpha_i})(x^ay_{m_i}^{t_i}\otimes x^ay_{m_i}^{\overline{j_i+l_i}-t_i}g^{m_it_i})](g\otimes g). 
\end{eqnarray} By the graded structure of $D\otimes D$, $\D(u_i)\D(u_j)=\D(u_iu_j)$ if and only if \begin{equation}\label{eq4.27}\D(u_i)\D(u_j)_{{(2, 2+2t)}\otimes {(2+2t,2+2(j+l))}}=0\end{equation} for all $t$ satisfying there is an $1\leq i\leq \theta$ such that $\overline{j_i+l_i}+1\leqslant t_i \leqslant e_i-1$ and \begin{equation}\label{eq4.28}\D(u_iu_j)_{{(2, 2+2t)}\otimes {(2+2t,2+2(j+l))}}=\D(u_i)\D(u_j)_{{(2, 2+2t)}\otimes {(2+2t,2+2(j+l))}} \end{equation} for all $t$ satisfying $0\leqslant t_i \leqslant \overline{j_i+l_i}$ for all $1\leq i\leq \theta$. Now let's go back to equation \eqref{eq4.26} in which there is an item \begin{eqnarray}\label{eq4.29}&&\sum_{s=0}^{m-1} \gamma^{-ts}\gamma^{-m_i^2s_i\alpha_i+m_k^2s_k\beta_k}\\ \notag&=&\prod_{z=1}^{\theta}\sum_{s_z=0}^{e_z-1} \gamma^{-t_zs_zm_z^2}\gamma^{-m_i^2s_i\alpha_i+m_k^2s_k\beta_k}\\ \notag&=&\left \{ \begin{array}{ll} \sum_{s_i=0}^{e_i-1} \gamma^{-s_im_i^2(\alpha_i+t_i)}\sum_{s_k=0}^{e_k-1}\gamma^{-s_km_m^2(\beta_k-t_k)} \prod_{z\neq i,k}\sum_{s_z=0}^{e_z-1} \gamma^{-t_zs_zm_z^2} & \;\;\;\;i\neq k\\ \sum_{s_i=0}^{e_i-1}\gamma^{-m_i^2s_i(t_i+\alpha_i-\beta_i)}\prod_{z\neq i}\sum_{s_z=0}^{e_z-1} \gamma^{-t_zs_zm_z^2} & \;\;\;\;i=k \end{array}\right. \end{eqnarray} Therefore, in order to make this equality \eqref{eq4.29} not zero, we must have $$\left \{ \begin{array}{ll} \alpha_i=-t_i,\;\;\beta_k=t_k & \;\;\;\;i\neq k \\ \beta_i=\alpha_i+t_i & \;\;\;\;i=k \end{array}\right.$$ But in the expression of equality \eqref{eq4.26} one always have $0\leq \alpha_i\leq e_i-1-t_i$ which implies that $\alpha_i\neq -t_i$. Thus, as a conclusion, in the equality \eqref{eq4.26} we can assume that $$i=k,\;\;\;\;\beta_i=\alpha_i+t_i,\;\;(1\leq i\leq \theta).$$ So, the equality can be simplified as \begin{eqnarray} \notag &&\frac{1}{m^2}\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2 \frac{l_i(l_i+1)}{2}} \prod_{i=1}^{\theta} \sum_{\alpha_i=0}^{e_i-1-t_i}\sum_{\beta_{i}=0}^{e_i-1-\overline{j_i+l_i-t_i}} (-1)^{\alpha_i+\beta_i}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i}\\ \notag && \binom{e_i-1-\overline{(j_i+l_i-t_i)}}{\beta_i}_{\gamma_i}\gamma_i^{\tfrac{\alpha_i(\alpha_i+1)}{2} +\tfrac{\beta_i(\beta_i+1)}{2}+j_i\beta_i} (x^{m_id\alpha_i}\otimes x^{m_id\beta_i-t_im_id})\\ \notag&&\gamma^{t(t-l)}\sum_{s=0}^{m-1}\gamma^{-ts} \gamma^{-m_i^2s_i(\alpha_i-\beta_i)}(x^{a}\prod_{i=1}^{\theta}y_{m_i}^{t_i} \otimes x^a \prod_{i=1}^{\theta}y_{m_i}^{\overline{j_i+l_i}-t_i}g^{m_it_i})(g\otimes g). \end{eqnarray} From this, we find the following fact: if $t_i\geq \overline{j_i+l_i}+1$ for some $i$, then $e_i-1-\overline{j_i+l_i-t_i}=t_i-1-\overline{j_i+l_i}$. So, $0\leq \beta_i\leq t_i-1-\overline{j_i+l_i}$ and thus $1-e_i\leq \beta_i-\alpha_i-t_i\leq -1-\overline{j_i+l_i}$ which contradicts to $\beta_i=\alpha_i+t_i$. So the equation \eqref{eq4.27} is proved. 
Under $\beta_i=\alpha_i+t_i$, we know that $$\prod_{i=1}^{\theta}\sum_{s=0}^{m-1} \gamma^{-ts}\gamma^{-m_i^2s_i\alpha_i+m_k^2s_i\beta_i}=e_1e_2\cdots e_{\theta}=m$$ and \eqref{eq4.26} can be simplified further \begin{eqnarray} \notag &&\frac{1}{m}\prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2 \frac{l_i(l_i+1)}{2}} \prod_{i=1}^{\theta} \sum_{\alpha_i=0}^{e_i-1-\overline{j_i+l_i}} (-1)^{t_i}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i}\\ \notag && \binom{e_i-1-\overline{(j_i+l_i-t_i)}}{\alpha_i+t_i}_{\gamma_i} \gamma_i^{\tfrac{\alpha_i(\alpha_i+1)}{2}+\tfrac{(\alpha_i+t_i)(\alpha_i+t_i+1)}{2} +j_i(\alpha_i+t_i)+t_i(l_i-t_i)}\\ \notag&&(x^{m_id\alpha_i}\otimes x^{m_id\alpha_i}) (x^a y_{m_i}^{t_i} \otimes x^a y_{m_i}^{\overline{j_i+l_i}-t_i}g^{m_it_i})(g\otimes g). \end{eqnarray} Comparing with equation \eqref{eq5.26}, to prove the desired equation \eqref{eq4.28} it is enough to show the following combinatorial identity \begin{eqnarray*} &&(-1)^{t_i+\alpha_i}\gamma_i^{\tfrac{(\alpha_i+t_i)(\alpha_i+t_i+1)}{2} +t_i(j_i+l_i-t_i)}\binom{e_i-1-t_i}{\alpha_i}_{\gamma_i}\binom{e_i-1-\overline{(j_i+l_i-t_i)}}{\alpha_i+t_i}_{\gamma_i}\\ &=& \binom{e_i-1-\overline{j_i+l_i}}{\alpha_i}_{\gamma_i} \binom{\overline{j_i+l_i}}{t_i}_{\gamma_i} \end{eqnarray*} which is true by (6) of Lemma \ref{l3.4}.\\ \noindent{$\bullet$ \emph{Step} 2 (Coassociative and couint). Indeed, for each $0\leqslant j\leqslant m-1$ \begin{align*} (\D\otimes \Id)\D(u_j)&=(\D\otimes \Id)(\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k\otimes x^{-kd}g^ku_{j-k})\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}(\sum_{s=0}^{m-1}\gamma^{s(k-s)}u_s\otimes x^{-sd}g^su_{k-s})\otimes x^{-kd}g^ku_{j-k}\\ &=\sum_{k,s=0}^{m-1}\gamma^{k(j-k)+s(k-s)}u_s\otimes x^{-sd}g^su_{k-s}\otimes x^{-kd}g^ku_{j-k}, \end{align*} and \begin{align*} (\Id\otimes \D)\D(u_j)&=(\Id\otimes \D)(\sum_{s=0}^{m-1}\gamma^{s(j-s)}u_s\otimes x^{-sd}g^su_{j-s})\\ &=\sum_{s=0}^{m-1}\gamma^{s(j-s)} u_s\otimes (\sum_{t=0}^{m-1}\gamma^{t(j-s-t)}x^{-sd}g^su_{t}\otimes x^{-sd}g^s x^{-td}g^{t}u_{j-s-t})\\ &=\sum_{s,t=0}^{m-1}\gamma^{s(j-s)+t(j-s-t)}u_s\otimes x^{-sd}g^su_{t}\otimes x^{-(s+t)d}g^{(s+t)}u_{j-s-t}. \end{align*} It is not hard to see that $(\D\otimes \Id)\D(u_j)=(\Id\otimes \D)\D(u_j)$ for all $0\leqslant j \leqslant m-1$. The verification of $(\epsilon\otimes \Id)\D(u_j)=(\Id\otimes \epsilon)\D(u_j)=u_j$ is easy and it is omitted.\\ \noindent $\bullet$ \emph{Step} 3 (Antipode is an algebra anti-homomorphism). Because $x$ and $g$ are group-like elements, we only check $$S(u_{j+m_i})S(\phi_{m_i,j})=S(u_j)S(y_{m_i})=\xi_{m_i} S(y_{m_i})S(u_j)S(x^{m_id})$$ and $$S(u_ju_l)=S(u_l)S(u_j)$$ for $1\leq i\leq \theta$ and $1\leq j,l\leq m-1$ here. 
\noindent (1) \emph{The proof of} $S(u_{j+m_i})S(\phi_{m_i,j})=S(u_j)S(y_{m_i})=\xi_{m_i} S(y_{m_i})S(u_j)S(x^{m_id}).$ Clearly $u_jS(\phi_{m_i,j})=\phi_{m_i,j}u_j$ for all $i, j$ and thus \begin{eqnarray*}&&S(u_{j+m_i})S(\phi_{m_i,j})\\ &=&x^{b}g^{m-1}\prod_{i=1}^{\theta}(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}u_j\\ &=&\phi_{m_i,j}S(u_{j+m_i})\end{eqnarray*} here and the following of this proof $b=(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d.$ Through direct calculation, we have \begin{align*} &S(u_j)S(y_{m_i})\\ &=x^{b}g^{m-1}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}x^{j_im_id}g^{-j_im_i}]u_j\cdot (-y_{m_i}g^{-m_i})\\ &=-x^{b}g^{m-1}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}x^{j_im_id}g^{-j_im_i}](\xi_{m_i}^{-1}\gamma^{-jm_i}x^{m_id}y_{m_i}g^{-m_i}u_j)\\ &=-x^{b}g^{m-1}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}x^{j_im_id}g^{-j_im_i}] (\xi_{m_i}^{-1}\gamma^{-m_i^2(j_i+1)}x^{m_id}g^{-m_i}y_{m_i}u_j)\\ &=x^{b}g^{m-1}(-1)^{j_1+\cdots+(j_i+1)+\cdots+j_{\theta}}\xi_{m_1}^{-j_1}\cdots \xi_{m_i}^{-(j_i+1)}\cdots \xi_{m_\theta}^{-j_\theta}\gamma_{1}^{\frac{j_1(j_1+1)}{2}} \cdots \gamma_{i}^{\frac{(j_i+1)(j_1+2)}{2}}\cdots \gamma_{\theta}^{\frac{j_\theta(j_\theta+1)}{2}}\\ & \quad x^{j_1m_1d}\cdots x^{(j_i+1)m_id}\cdots x^{j_\theta m_\theta d} g^{-j_1m_1}\cdots g^{-(j_i+1)m_i}\cdots g^{-j_\theta m_\theta}\phi_{m_i,j}u_{j+m_i}\\ &=\phi_{m_i,j}S(u_{j+m_i}) \end{align*} and \begin{align*} &\xi_{m_i}S(y_{m_i})S(u_j)S(x^{m_id})\\ &= \xi_{m_i}(-y_{m_i}g^{-m_i})g^{m-1}x^b\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}x^{j_im_id}g^{-j_im_i}]u_jx^{-m_id}\\ &=x^{b}g^{m-1}(-1)^{j_1+\cdots+(j_i+1)+\cdots+j_{\theta}}\xi_{m_1}^{-j_1}\cdots \xi_{m_i}^{-(j_i+1)}\cdots \xi_{m_\theta}^{-j_\theta}\gamma_{1}^{\frac{j_1(j_1+1)}{2}} \cdots \gamma_{i}^{\frac{(j_i+1)(j_1+2)}{2}}\cdots \gamma_{\theta}^{\frac{j_\theta(j_\theta+1)}{2}}\\ & \quad x^{j_1m_1d}\cdots x^{(j_i+1)m_id}\cdots x^{j_\theta m_\theta d} g^{-j_1m_1}\cdots g^{-(j_i+1)m_i}\cdots g^{-j_\theta m_\theta}\phi_{m_i,j}u_{j+m_i}\\ &=\phi_{m_i,j}S(u_{j+m_i}). \end{align*} \noindent (2) \emph{The proof of} $S(u_ju_l)=S(u_l)S(u_j)$. Define $\overline{\phi_{m_i,s}}:=1-\gamma_i^{s_i+1}x^{-m_id}$ for all $s\in \mathbb{Z}$. 
Using this notion, \begin{align*} x^{m_id}\overline{\phi_{m_i,s}}&=x^{m_id}(1-\gamma_i^{s_i+1}x^{-m_id})\\ &=-\gamma_i^{s_i+1}(1-\gamma_i^{(e_i-s_i-2)+1}x^{m_id})\\ &=-\gamma_i^{s_i+1}\phi_{m_i,e_i-s_i-2}.\end{align*} And so \begin{align*} S(u_ju_l)&=S(\frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}[j_i,e_i-2-l_i]_{m_i}y_{\overline{j+l}}g)\\ &= \frac{1}{m}g^{-1}x^{-a}\prod_{i=1}^{\theta} [(-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}} (-y_{m_i}g^{-m_i})^{\overline{j_i+l_i}}S([j_i,e_i-2-l_i]_{m_i})]\\ &=\frac{1}{m}g^{-1}x^{-a}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i} \gamma^{m_i^2\frac{l_i(l_i+1)}{2}} (-1)^{\overline{j_i+l_i}}\gamma^{m_i^2\frac{\overline{j_i+l_i}(\overline{j_i+l_i}-1)}{2}}\\ &\quad S([j_i,e_i-2-l_i]_{m_i}) y_{m_i}^{\overline{j_i+l_i}}g^{-m_i\overline{j_i+l_i}}]\\ &=\frac{1}{m}x^{-a}\gamma^{j+l}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i} \gamma^{m_i^2\frac{l_i(l_i+1)}{2}} (-1)^{\overline{j_i+l_i}}\gamma^{m_i^2\frac{\overline{j_i+l_i}(\overline{j_i+l_i}-1)}{2}}\\ &\quad S([j_i,e_i-2-l_i]_{m_i})] y_{\overline{j+l}}g^{-\overline{j+l}-1}\\ &=\frac{1}{m}x^{-a}\gamma^{j+l}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i} \gamma^{m_i^2\frac{l_i(l_i+1)}{2}} (-1)^{\overline{j_i+l_i}}\gamma^{m_i^2\frac{\overline{j_i+l_i}(\overline{j_i+l_i}-1)}{2}}\\ &\quad (-1)^{e_i-1-\overline{j_i+l_i}} \gamma^{m_i^2\frac{(e_i-1-\overline{j_i+l_i})(\overline{j_i+l_i}-2j_i-e_i)}{2}} x^{-(e_i-1-\overline{j_i+l_i})m_id}[l_i,e_i-2-j_i]_{m_i}] y_{\overline{j+l}}g^{-\overline{j+l}-1}\\ &=\frac{1}{m}x^{-a}\gamma^{j+l}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i} \gamma^{m_i^2\frac{l_i(l_i+1)}{2}}\gamma^{m_i^2(j_i^2+j_il_i-l_i)}\\ &\quad x^{-(e_i-1-\overline{j_i+l_i})m_id}[l_i,e_i-2-j_i]_{m_i}] y_{\overline{j+l}}g^{-\overline{j+l}-1}. \end{align*} Here the last equality follows from \begin{align*} &(-1)^{e_i-1}\gamma^{m_i^2\frac{\overline{j_i+l_i}(\overline{j_i+l_i}-1)}{2}} \gamma^{m_i^2\frac{(e_i-1-\overline{j_i+l_i})(\overline{j_i+l_i}-2j_i-e_i)}{2}}\\ &=\quad \gamma^{m_i^2(j_i^2+j_il_i-l_i)}. \end{align*} Now let's compute the other side. 
\begin{align*} S(u_l)S(u_j)&=g^{m-1}x^b\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{- m_i^2\frac{l_i(l_i+1)}{2}}x^{l_im_id}g^{-l_im_i}]u_l\\ &\quad g^{m-1}x^b\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}x^{j_im_id}g^{-j_im_i}]u_j\\ &= g^{m-1}\prod_{i=1}^{\theta}[(-1)^{l_i+j_i}\xi_{m_i}^{-l_i-j_i}\gamma^{- m_i^2[\frac{l_i(l_i+1)}{2}+\frac{j_i(j_i+1)}{2}]}x^{(l_i-j_i)m_id}g^{-l_im_i}]\\ &\quad u_lg^{m-1-\sum_{i=1}^{\theta}j_im_i}u_j\\ &= \gamma^{-l-lj}\prod_{i=1}^{\theta}[(-1)^{l_i+j_i}\xi_{m_i}^{-l_i-j_i}\gamma^{- m_i^2[\frac{l_i(l_i+1)}{2}+\frac{j_i(j_i+1)}{2}]}x^{(l_i+j_i)m_id}g^{-l_im_i-j_im_i}]\\ &\quad g^{-2}x^{2d}u_lu_j\\ &=\gamma^{-l-lj}\prod_{i=1}^{\theta}[(-1)^{l_i+j_i}\xi_{m_i}^{-l_i-j_i}\gamma^{- m_i^2[\frac{l_i(l_i+1)}{2}+\frac{j_i(j_i+1)}{2}]}x^{(l_i+j_i)m_id}g^{-l_im_i-j_im_i}]\\ &\quad g^{-2}x^{2d}\frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{m_i^2\frac{j_i(j_i+1)}{2}}[l_i,e_i-2-j_i]_{m_i}y_{\overline{j+l}}g\\ &=\gamma^{-l-lj}\frac{1}{m}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i-2j_i}\gamma^{- m_i^2[\frac{l_i(l_i+1)}{2}]}[l_i,e_i-2-j_i]_{m_i} x^{(e_i-1-\overline{(l_i+j_i)})m_id}g^{-\overline{l_i+j_i}m_i}]\\ &\quad g^{-2}\frac{1}{m}x^{-a} y_{\overline{j+l}}g\\ &=\frac{1}{m}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i-2j_i}\gamma^{- m_i^2(\frac{l_i(l_i+1)}{2})-l_im_i-l_ij_im_i^2+m_i^2(l_i+j_i)^2 +2(l_i+j_i)m_i}\\ &\quad[l_i,e_i-2-j_i]_{m_i} x^{(e_i-1-\overline{(l_i+j_i)})m_id}]x^{-a} y_{\overline{j+l}}g^{-(\overline{j+l}+1)}\\ &= \frac{1}{m}x^{-a}\gamma^{j+l}\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{-l_i} \gamma^{m_i^2\frac{l_i(l_i+1)}{2}}\gamma^{m_i^2(j_i^2+j_il_i-l_i)}\\ &\quad x^{-(e_i-1-\overline{j_i+l_i})m_id}[l_i,e_i-2-j_i]_{m_i}] y_{\overline{j+l}}g^{-\overline{j+l}-1}. \end{align*} where the fifth equality follows from $$x^{a+2d}=x^{-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d+2d}=x^{-a-\sum_{i=1}^{\theta}(e_i-1)m_id}$$ and the last equality is followed by \begin{align*} &\xi_{m_i}^{-2j_i}\gamma^{- m_i^2(\frac{l_i(l_i+1)}{2})-l_im_i-l_ij_im_i^2+m_i^2(l_i+j_i)^2 +2(l_i+j_i)m_i}\\ &=\gamma^{ m_i^2(\frac{l_i(l_i+1)}{2})-m_ij_i-m_i^2l_i(l_i+1)-l_im_i-l_ij_im_i^2+m_i^2(l_i+j_i)^2 +2(l_i+j_i)m_i}\\ &= \gamma^{m_i^2\frac{l_i(l_i+1)}{2}+m_i^2(j_i^2+j_il_i-l_i)+j_im_i+l_im_i}. \end{align*} The proof is done.\\ \noindent $\bullet$ \emph{Step} 4 ($(S*\Id)(u_j)=(\Id*S)(u_j)=\epsilon(u_j)$). In fact, \begin{align*} (S*\Id)(u_0)&=\sum_{j=0}^{m-1}S(\gamma^{-j^2}u_j)x^{-jd}g^ju_{-j} \\ &=\sum_{j=0}^{m-1}\gamma^{-j^2}g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}x^{j_im_id}g^{-j_im_i}]u_jx^{-jd}g^ju_{-j} \\ &=\sum_{j=0}^{m-1}g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}]u_ju_{-j} \\ &=\sum_{j=0}^{m-1}g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}}]\\ &\quad \frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{-j_i}\xi_{m_i}^{j_i}\gamma^{m_i^2\frac{-j_i(-j_i+1)}{2}}[j_i,e_i-2-j_i]_{m_i}g\\ &=\frac{1}{m}x^{a+b}g^m \prod_{i=1}^{\theta}[\sum_{j_i=0}^{e_i-1}\gamma_i^{j_i}[j_i,e_i-2-j_i]_{m_i}]\\ &=\frac{1}{m}x^{-\sum_{i=0}^{\theta}(e_i-1)m_id} \prod_{i=1}^{\theta}[\sum_{j_i=0}^{e_i-1}\gamma_i^{j_i}]j_i-1,j_i-1[_{m_i}] \\ &=\frac{1}{m}x^{-\sum_{i=0}^{\theta}(e_i-1)m_id}\prod_{i=1}^{\theta}e_ix^{(e_i-1)m_id} \quad\quad (\textrm{Lemma} \;\ref{l3.4}\;(3)) \\ &=1\\ &=\epsilon(u_{0}). 
\end{align*} And, \begin{align*} (\Id*S)(u_0)&=\sum_{j=0}^{m-1}\gamma^{-j^2}u_j S(x^{-jd}g^ju_{-j}) \\ &=\sum_{j=0}^{m-1}\gamma^{-j^2}u_j S(u_{-j})S(g^j)x^{jd} \\ &=\sum_{j=0}^{m-1}\gamma^{-j^2}u_j g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{-j_i}\xi_{m_i}^{j_i}\gamma^{- m_i^2\frac{-j_i(-j_i+1)}{2}}x^{-j_im_id}g^{j_im_i}]u_{-j}g^{-j}x^{jd}\\ &=\sum_{j=0}^{m-1} x^{(1-m)d+\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d}\prod_{i=1}^{\theta}[(-1)^{-j_i}\xi_{m_i}^{j_i}\gamma^{- m_i^2\frac{-j_i(-j_i+1)}{2}}\gamma^{-j_im_i}]g^{m-1}u_ju_{-j}\\ &=\sum_{j=0}^{m-1} x^{(1-m)d+\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d}\prod_{i=1}^{\theta}[(-1)^{-j_i}\xi_{m_i}^{j_i}\gamma^{- m_i^2\frac{-j_i(-j_i+1)}{2}}\gamma^{-j_im_i}]g^{m-1}\\ &\quad \frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{-j_i}\xi_{m_i}^{j_i}\gamma^{m_i^2\frac{-j_i(-j_i+1)}{2}}[j_i,e_i-2-j_i]_{m_i}g\\ &=\sum_{j=0}^{m-1}\frac{1}{m}\prod_{i=1}^{\theta} \xi_{m_i}^{2j_i}\gamma^{-j_im_i}]j_i-1,j_i-1[_{m_i}\\ &=\frac{1}{m}\prod_{i=1}^{\theta}\sum_{j_i=0}^{e_i-1} ]j_i-1,j_i-1[_{m_i}\\ &=\frac{1}{m}\prod_{i=1}^{\theta}e_i \quad\quad (\textrm{Lemma}\; \ref{l3.4}\;(1))\\ &=1 \\ &=\epsilon(u_{0}). \end{align*} For $1\leqslant j \leqslant m-1$, \begin{align*} (S*\Id)(u_{j})&=\sum_{k=0}^{m-1}\gamma^{k(j-k)}S(u_k)x^{-kd}g^ku_{j-k}\\ &=\sum_{j=0}^{m-1}\gamma^{k(j-k)} g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{k_i}\xi_{m_i}^{-k_i}\gamma^{- m_i^2\frac{k_i(k_i+1)}{2}}x^{k_im_id}g^{-k_im_i}]u_kx^{-kd}g^ku_{j-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{k_i}\xi_{m_i}^{-k_i}\gamma^{- m_i^2\frac{k_i(k_i+1)}{2}}\gamma^{k_i^2m_i^2}]u_ku_{j-k}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{k_i}\xi_{m_i}^{-k_i}\gamma^{- m_i^2\frac{k_i(k_i+1)}{2}}\gamma^{k_i^2m_i^2}]\\ & \quad \frac{1}{m}x^{a} \prod_{i=1}^{\theta}[ (-1)^{j_i-k_i}\xi_{m_i}^{-j_i+k_i}\gamma^{m_i^2\frac{(j_i-k_i)(j_i-k_i+1)}{2}} [k_i,e_i-2-j_i+k_i]_{m_i}]y_jg\\ &=\frac{1}{m}x^{a+b}\sum_{k=0}^{m-1}\prod_{i=1}^{\theta}[ (-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{m_i^2\frac{j_i^2+j_i}{2}+j_im_i-k_im_i^2} [k_i,e_i-2-j_i+k_i]_{m_i}]y_j\\ &=\frac{1}{m}x^{a+b}\prod_{i=1}^{\theta}[ (-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{m_i^2\frac{j_i^2+j_i}{2}+j_im_i}]y_j\\ &\quad \prod_{i=1}^{\theta}[\sum_{k_i=0}^{e_i-1}\gamma_i^{k_i}]k_i-1-j_i,k_i-1[_{m_i}]\\ &=0\quad\quad (\textrm{Lemma}\; \ref{l3.4}\;(5))\\ &=\e(u_j) \end{align*} \begin{align*} (\Id*S)(u_{j})&=\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_kS(u_{j-k})g^{-k}x^{kd}\\ &=\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{j_i-k_i}\xi_{m_i}^{k_i-j_i}\gamma^{- m_i^2\frac{(j_i-k_i)(j_i-k_i+1)}{2}}\\ &\quad \quad x^{(j_i-k_i)m_id}g^{-(j_i-k_i)m_i}]u_{j-k}g^{-k}x^{kd}\\ &=\sum_{k=0}^{m-1}u_k g^{m-1}x^{b}\prod_{i=1}^{\theta}[(-1)^{j_i-k_i}\xi_{m_i}^{k_i-j_i}\gamma^{- m_i^2\frac{(j_i-k_i)(j_i-k_i+1)}{2}}\\ &\quad \quad x^{j_im_id}g^{-j_im_i}]u_{j-k}\\ &=\sum_{k=0}^{m-1} \gamma^{-k}g^{m-1}x^{(1-m)d+\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \prod_{i=1}^{\theta}[(-1)^{j_i-k_i}\xi_{m_i}^{k_i-j_i}\gamma^{- m_i^2\frac{(j_i-k_i)(j_i-k_i+1)}{2}}\\ &\quad \quad x^{j_im_id}\gamma^{-kj_im_i}g^{-j_im_i}]u_ku_{j-k}\\ &=\sum_{k=0}^{m-1} \gamma^{-k}g^{m-1}x^{(1-m)d+\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d} \prod_{i=1}^{\theta}[(-1)^{j_i-k_i}\xi_{m_i}^{k_i-j_i}\gamma^{- m_i^2\frac{(j_i-k_i)(j_i-k_i+1)}{2}}\\ &\quad\quad x^{j_im_id}\gamma^{-kj_im_i}g^{-j_im_i}]\\ &\quad\quad \frac{1}{m}x^{a} \prod_{i=1}^{\theta}[ (-1)^{j_i-k_i}\xi_{m_i}^{-j_i+k_i}\gamma^{m_i^2\frac{(j_i-k_i)(j_i-k_i+1)}{2}} [k_i,e_i-2-j_i+k_i]_{m_i}]y_jg\\ 
&=\frac{1}{m}x^{-md}\sum_{k=0}^{m-1} \gamma^{-k} \prod_{i=1}^{\theta}[\xi_{m_i}^{2(k_i-j_i)}\gamma^{-kj_im_i+j_im_i} x^{j_im_id}g^{-j_im_i}[k_i,e_i-2-j_i+k_i]_{m_i}]g^{m}y_j\\ &=\frac{1}{m} \prod_{i=1}^{\theta}[\xi_{m_i}^{-2j_i}\gamma^{j_im_i} x^{j_im_id}g^{-j_im_i}]\prod_{i=1}^{\theta} [\sum_{k_i=0}^{e_i-1}\gamma_i^{k_ij_i}]k_i-1-j_i,k_i-1]_{m_i}]y_j\\ &=0\quad\quad (\textrm{Lemma}\; \ref{l3.4}\;(5))\\ &=\e(u_j). \end{align*} By steps 1, 2, 3, 4, $D(\underline{m}, d, \gamma)$ is a Hopf algebra.\qed \begin{proposition}\label{p4.8} Under above notations, the Hopf algebra $D(\underline{m}, d, \gamma)$ has the following properties. \begin{itemize} \item[(1)] The Hopf algebra $D(\underline{m}, d, \gamma)$ is prime with PI-degree $2m$. \item[(2)] The Hopf algebra $D(\underline{m}, d, \gamma)$ has a $1$-dimensional representation whose order is $2m$. \item[(3)] The Hopf algebra $D(\underline{m}, d, \gamma)$ is not pointed and its coradical is not a Hopf subalgebra if $m>1$. \item[(4)] The Hopf algebra $D(\underline{m}, d, \gamma)$ is pivotal, that is, its representation category is a pivotal tensor category. \end{itemize} \end{proposition} \begin{proof} (1) Recall that the Hopf algebra $D=D(\underline{m}, d, \gamma)=\bigoplus_{i=0}^{2m}D_{i}^{l}$ is strongly $\Z_{2m}$-graded with \[{D_i^l=} \begin{cases} \k [x^{\pm 1}, y_{m_1},\ldots,y_{m_\theta}] g^{\tfrac{i}{2}}, & i=\textrm{even},\\ \sum_{s=0}^{m-1}\k[ x^{\pm 1}] g^{\tfrac{i-1}{2}}u_s, & i=\textrm{odd}. \end{cases}\] So the algebra $D$ meets the the initial condition of Lemma \ref{l2.10}. Using the notation given in the Lemma \ref{l2.10}, we find that $$\chi\triangleright y_{m_i}=\xi_{m_i}^{-1}x^{-m_id}y_{m_i}$$ for all $1\leq i\leq \theta.$ This indeed implies the action of $\Z_{2m}$ on $D^{l}_{0}=\k[x^{\pm 1},y_{m_1},\ldots,y_{m_\theta}]$ is faithful. Therefore, by (c) and (d) of Lemma \ref{l2.10}, $D$ is prime with PI-degree $2m$. (2) This 1-dimensional representation can be given through left homological integrals. In fact, the direct computation shows that the right module structure of left homological integrals is given by: $$\int_{D}^{l}=D/(x-1,y_{m_1},\ldots,y_{m_\theta},u_1,\ldots,u_{m-1}, u_0-\prod_{i=1}^{\theta}\xi_{m_i}^{(e_i-1)},g-\prod_{i=1}^{\theta}\gamma^{-m_i}).$$ Through the relation that $\xi_{m_i}=\sqrt{\gamma^{m_i}}$ it is not hard to see that the $\io(D)=2m$. (3) Through direct computations, we find that the subspace $C_{m}(d)$ spanned by $\{(x^{-d}g)^{i}u_j|0\leq i,j\leq m-1\}$ is a simple coalgebra (see Proposition \ref{p7.6} for a detailed proof of this fact) and the coradical of $D$ equals to $$\bigoplus_{i\in \Z,\;0\leq j\leq m-1}x^{i}g^{j}\oplus(\bigoplus_{i\in \Z,\;0\leq j\leq m-1}x^{i}g^{j}C_{m}(d)).$$ Since $m>1$, it has a simple subcoalgebra $C_{m}(d)$ with dimension $m^2>1$. Therefore, $D$ is not pointed. Its coradical is not a Hopf subalgebra since it is clear it is not closed under multiplication. (4) See the proof of (3) of Proposition \ref{p7.14} where we built the result through proving that $D$ being pivotal. \end{proof} \begin{remark}\label{r4.9} \begin{itemize} \item[(1)] \emph{As a special case, through takeing $m=1$ one is not hard to see that the Hopf algebra $D$ constructed above is just the infinite dihedral group algebra $\k \mathbb{D}$. This justifies the choice of the notation ``D".} \item[(2)]\emph{It is not hard to see the other new examples, i.e., $T(\underline{m},t,\xi),\;B(\underline{m},\omega,\gamma)$, are pivotal since they are pointed and thus the proof of this fact become easier. 
In fact, keep the notations above, we have $$S^2(h)=(\prod_{i=1}^{\theta} g^{tm_i})h(\prod_{i=1}^{\theta} g^{tm_i})^{-1}$$ for $h\in T(\underline{m},t,\xi)$ and $$S^2(h)=(\prod_{i=1}^{\theta} g^{m_i})h(\prod_{i=1}^{\theta} g^{m_i})^{-1}$$ for $h\in B(\underline{m},\omega,\gamma)$. Through applying Lemma \ref{l2.16}, we get the result.} \end{itemize} \end{remark} Now let $m'\in \N$ and $\{m_1',\ldots,m'_{\theta'}\}$ a fraction of $m'$. As before, we need to compare different fractions of Hopf algebras $D(m,d,\gamma)$. Also, we denote the greatest common divisors of $\{m_1,\ldots,m_\theta\}$ and $\{m_1',\ldots,m'_{\theta'}\}$ by $m_0$ and $m_0'$ respectively. Parallel to case of generalized Liu algebras, we have the following observation. \begin{proposition} As Hopf algebras, $D(\underline{m},d,\gamma)\cong D(\underline{m'},d',\gamma')$ if and only if $m=m', \;\theta=\theta',\; d m_0=d' m_0'$ and $\gamma^{m_0^2}=(\gamma')^{(m_0')^2}$.\end{proposition} \begin{proof} By Proposition \ref{iliu}, it it enough to show that $D(\underline{m},d,\gamma)\cong D(\underline{m'},d',\gamma')$ if and only if their Hopf subalgebras $B(\underline{m},md,\gamma)$ and $B(\underline{m'},m'd',\gamma')$ are isomorphic. It is clear the isomorphism of $D(\underline{m},d,\gamma)$ and $D(\underline{m'},d',\gamma')$ will imply the isomorphism between $B(\underline{m},md,\gamma)$ and $B(\underline{m'},m'd',\gamma')$. Conversely, assume that $B(\underline{m},md,\gamma)\cong B(\underline{m'},m'd',\gamma')$. By Proposition \ref{p6.11}, $D(\underline{m},d,\gamma)$ is determined by $B(\underline{m},md,\gamma)$ entirely. Therefore, $D(\underline{m},d,\gamma)\cong D(\underline{m'},d',\gamma')$ too. \end{proof} At last, we point out the examples we constructed until now are not the same. \begin{proposition} If $m>1$, the Hopf algebras $T(\underline{m'},t,\xi), \;B(\underline{m''},\omega,\gamma'')$ and $D(\underline{m},d,\gamma)$ are not isomorphic to each other. \end{proposition} \begin{proof} Since $m>1$, $D(\underline{m},d,\gamma)$ is not pointed by Proposition \ref{p4.8} (3) while $T(\underline{m'},t,\xi)$ and $B(\underline{m''},\omega,\gamma'')$ are pointed. Therefore, $D(\underline{m},d,\gamma)\not\cong T(\underline{m'},t,\xi)$ and $D(\underline{m},d,\gamma)\not\cong B(\underline{m''},\omega,\gamma'').$ Comparing the number of group-likes, we know that $T(\underline{m'},t,\xi)\not\cong B(\underline{m''},\omega,\gamma'')$ either. \end{proof} \section{Ideal cases} In this section, we always assume that $H$ is a prime Hopf algebra of GK-dimension one satisfying (Hyp1) and (Hyp2). So by (Hyp1), $H$ has a $1$-dimensional representation $$\pi:H\longrightarrow \k$$ whose order equals to PI-deg$(H).$ Recall that in the Subsection \ref{subs2.2}, we already gave the definition of $\pi$-order $\ord (\pi)$ and $\pi$-minor $\mi(\pi)$. The aim of this section is to classify $H$ in the following two ideal cases: \begin{center} $\mi (\pi)=1$ or $\ord (\pi)=\mi (\pi)$.\end{center} If moreover assume that $H$ is regular, then the main result of \cite{BZ} is to classify $H$ in ideal cases. Here we apply similar program to classify prime Hopf algebras which may be not regular. \subsection{Ideal case one: $ \textrm{min}(\pi)=1$.} In this subsection, $H$ is a prime Hopf algebra of GK-dimension one satisfying (Hyp1), (Hyp2) and $ \mi(\pi)=1$. Let PI.deg$(H)=n>1$ (if $=1$, then it is clear that $H$ is commutative and thus $H$ is the coordinate algebra of connected algebraic group of dimension one). 
Recall that by the equation \eqref{eq2.3}, $H$ is an $\Z_{n}$-bigraded algebra $$H=\bigoplus_{i,j=0}^{n-1}H_{ij,\pi}.$$ Here and the following we write $H_{ij,\pi}$ just as $H_{ij}$ for simple. \begin{lemma} Under above notations, the subalgebra $H_{00}$ is a Hopf subalgebra which is isomorphic to either $\k[x]$ or $\k[x^{\pm 1}].$ \end{lemma} \begin{proof} Since $\textrm{min}(\pi)=1$, $H_{0}^{l}=H_{0}^{r}=H_{00}$. By (1) and (3) of Lemma \ref{l2.8}, $H_{00}$ is stable under the operations $\D$ and $S$. This implies that $H_{00}$ is a Hopf subalgebra. By Lemma \ref{l2.7} and its proof, we know that $H_{00}$ is a commutative domain of GK-dimension one. So $H_{00}$ is the coordinate algebra of connected algebraic group of dimension one. Thus it is isomorphic to either $\k[x]$ or $\k[x^{\pm 1}].$ \end{proof} Therefore, we have a dichotomy on the structure of $H$ now. \begin{definition} \emph{Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp1), (Hyp2) and $ \textrm{min}(\pi)=1$.} \begin{itemize} \item[(a)] \emph{We call }$H$ additive \emph{if $H_{00}$ is the coordinate algebra of the additive group, that is, $H_{00}=\k[x]$.} \item[(b)] \emph{We call} $H$ multiplicative \emph{if $H_{00}$ is the coordinate algebra of the multiplicative group, that is, $H_{00}=\k[x^{\pm 1}]$.} \end{itemize} \end{definition} \begin{remark} \emph{In both \cite{BZ} and \cite{WLD}, the additive $H$ was called} primitive \emph{while the multiplicative $H$ was called} group-like. \emph{Here we used a slightly different terminology for intuition.} \end{remark} If we check the proof of the \cite[Propositions 4.2, 4.3]{BZ} carefully, then one can find that these propositions are still valid even we remove the requirement about regularity. So we state the following result, the same as \cite[Propositions 4.2, 4.3]{BZ}, without proof. \begin{proposition}\label{p5.4} Let $H$ be a prime Hopf algebra of GK-dimension one with PI-deg$(H)=n>1$ and satisfies (Hyp1), (Hyp2) and $\textrm{min}(\pi)=1$. Then \begin{itemize} \item[(a)] If $H$ is additive, then $H\cong T(n,0,\xi)$ of Subsection 2.3. \item[(b)] If $H$ is multiplicative, then $H\cong \k \mathbb{D}$ of Subsection 2.3. \end{itemize} In particular, such $H$ must be regular. \end{proposition} \subsection{Ideal case two: $\textrm{ord}(\pi)=\textrm{min}(\pi)$.} In this subsection, $H$ is a prime Hopf algebra of GK-dimension one satisfying (Hyp1), (Hyp2) and $ n:=\ord(\pi)=\textrm{min}(\pi)>1$ (if $=1$, then clearly $H$ commutative by our (Hyp2)). Recall that we have the following bigrading $$H=\bigoplus_{i,j=0}^{n-1}H_{ij}.$$ The following is some parts of \cite[Proposition 5.2, Theorem 5.2]{BZ}, which are proved without the hypothesis on regularity and thus they are true in our case. \begin{lemma}\label{l5.5} Retain the notations above. Then \begin{itemize}\item[(a)] The center of $H$ equals to $H_{0}:=H_{00}$. \item[(b)] The center of $H$ is a Hopf subalgebra. \end{itemize} \end{lemma} The statement $(b)$ in this lemma also imply that we are in the same situation as ideal case one now: $H$ is either additive or multiplicative. 
No matter what kind of $H$ is, $H_{ij}$ is a free $H_0$-module of rank one (see the analysis given in \cite[Page 287]{BZ}), that is $$H=\bigoplus_{i,j=0}^{n-1}H_{ij}=\bigoplus_{i,j=0}^{n-1}H_{0}u_{ij} =\bigoplus_{i,j=0}^{n-1}u_{ij}H_{0},$$ and the action of winding automorphism (relative to $\pi$) is given by $$\Xi_{\pi}^{l}(u_{ij}a)=\xi^{i}u_{ij}a,\quad\quad \textrm{and} \quad\quad \Xi_{\pi}^{r}(u_{ij}a)=\xi^{j}u_{ij}a$$ for $a\in H_0$ and $\xi$ a primitive $n$th root of unity. Due to \cite[Proposition 6.2]{BZ}, all these elements $u_{ij}\;(0\leq i,j\leq n-1)$ are normal. Moreover, by \cite[Lemma 6.2]{BZ}, they satisfy the following relation: \begin{equation}\label{eq5.1} u_{ij}u_{i'j'}=\xi^{i'j-ij'}u_{i'j'}u_{ij}. \end{equation} By Lemma \ref{l5.5}, $H_{00}$ is a normal Hopf subalgebra of $H$ which implies that there is an exact sequence of Hopf algebras \begin{equation}\label{eq5.2} \k\longrightarrow H_{00}\longrightarrow H\longrightarrow \overline{H}\longrightarrow \k, \end{equation} where $\overline{H}=H/HH_{00}^{+}$ and by definition $H_{00}^{+}=H_{00}\bigcap \ker \e.$ As one of basic observations of this paper, we have the following result. \begin{lemma}\label{l5.6} As a Hopf algebra, $\overline{H}$ is isomorphic to a fraction version of a Taft algebra $T(n_1,\ldots,n_\theta,\xi)$ for $n_1,\ldots,n_{\theta}$ a fraction of $n$. \end{lemma} \begin{proof} Denote the image of $u_{ij}$ in $\overline{H}$ by $v_{ij}$ for $0\leq i,j\leq n-1$. Due to $H$ is bigraded, $$\overline{H}=\bigoplus_{i,j=0}^{n-1}\overline{H}_{ij}=\bigoplus_{i,j=0}^{n-1}\k v_{ij}.$$ Let $g=v_{11}$. Then by (a), (b) and (e) of \cite[Proposition 6.6]{BZ}, which are still true even $H$ is not regular, these elements $v_{ij}$ can be chosen to satisfy $$g^{n}=1,\;\;\;\;v_{ii}=g^{i},\;\;(0\leq i\leq n-1),\;\;\;\;v_{ij}=g^{i}v_{0(j-i)},\;\;(0\leq i\neq j\leq n-1) $$ and $$v_{ij}^{n}=0,\;\;\;\;(0\leq i\neq j\leq n-1).$$ Moreover, one can use (1), (4) and (5) of Lemma \ref{l2.8} and the axioms for a coproduct to show that $g$ is group-like and $$\D(v_{ij})=v_{ii}\otimes v_{ij}+v_{ij}\otimes v_{jj}+\sum_{s\neq i,j}c_{ss}^{ij}v_{is}\otimes v_{sj}=g^{i}\otimes v_{ij}+v_{ij}\otimes g^{j}+\sum_{s\neq i,j}c_{ss}^{ij}v_{is}\otimes v_{sj}$$ for some $c_{ss}^{ij}\in \k$ and $0\leq i\neq j\leq n-1$ (see also \cite[Lemma 6.5]{BZ} for a explicit proof). Using this formula for coproduct, it is not hard to see that $\overline{H}$ is a pointed Hopf algebra with $G(\overline{H})=\{g^{i}|0\leq i\leq n-1\}.$ Let $\overline{H}_{i}^{l}:=\bigoplus _{j=0}^{n-1} \overline{H}_{ij}$ and then through inheriting the strongly graded property of $H$, we know that $\overline{H}=\bigoplus_{i=0}^{n-1}\overline{H}_{i}^{l}$ is strongly graded. We want to consider the subalgebra $\overline{H}_{0}^{l}=\bigoplus_{j=0} \k v_{0j}$. For this, we take the following linear map $$\pi':\overline{H}\longrightarrow \k G(\overline{H}),\;\;v_{ij}\longmapsto \delta_{ij}v_{ij}. $$ At first, we prove that $\pi'$ is an algebraic map. For this, it is enough to show that $$v_{ij}v_{kl}=0 $$ for all $i\neq j$ with $i+k\equiv j+l$ (mod $n$). Assume that this is not true, then $v_{ij}v_{kl}=av_{i+k,j+l}$ for some $0\neq a\in \k$, which is invertible by $v_{ii}=g^{i}$ for all $0\leq i\leq n-1$. But this is impossible since $v_{ij}$ is nilpotent. So, $\pi'$ is an algebraic map. In addition, the formula for the coproduct implies that $\pi'$ is also a coalgebra map. Therefore, $\pi'$ is a Hopf projection. 
Using the classical Radford's biproduct (see Subsection \ref{ss2.4}), we have the following decomposition $$\overline{H}=\overline{H}_{0}^{l}\#\k G(\overline{H}).$$ By \cite[Theorem 2]{Ang}, $\overline{H}_{0}^{l}$ is generated by skew primitive elements, say $x_1,\dots, x_{\theta}$ (we ask that $\theta$ is as small as possible). Moreover, by the proof of \cite[Theorem 2]{Ang} we know that $gx_ig^{-1}\in \k x_i$ for $(1\leq i\leq \theta)$. So, equation \eqref{eq5.1} implies that up to a nonzero scalar $x_i$ equals to a $v_{0j}$ for some $j$. In one word, we prove that the subalgebra $\overline{H}_{0}^{l}$ is generated by $v_{0n_1},\ldots,v_{0n_\theta}$ which are skew primitive elements. \emph{Claim: $n_1,\ldots,n_\theta$ is a fraction of $n$.} \emph{Proof of the claim:} Let $e_i$ be the exponent of $n_i$ for $1\leq i\leq \theta.$ We find that $e_i$ is the smallest number such $v_{0n_i}^{e_i}=0$. Indeed, on one hand it is not hard to see that $v_{0n_i}^{e_i}=0$ since by definition $v_{0n_i}^{e_i}\in \overline{H}_{00}=\k$ and $v_{0n_i}$ is nilpotent. On the other hand, assume that there is $l<e_i$ which is smallest such that $v_{0n_i}^{l}=0$. Then $$0=\D(v_{0n_i})^{l}=(1\otimes v_{0n_i}+v_{0n_i}\otimes g^{n_i})^{l}=\sum_{k=0}^{l}\left ( \begin{array}{c} l\\k \end{array}\right)_{\xi^{n_i^2}}v_{0n_i}^{k}\otimes g^{n_i(l-k)}v_{0n_i}^{l-k}$$ which implies that $\left ( \begin{array}{c} l\\k \end{array}\right)_{\xi^{n_i^2}}=0$ for all $1\leq k\leq l-1$ and thus $\xi^{n_i^2}$ must be a primitive $l$th root of unity. Now we consider the element $v_{0,ln_i}$ which is not $1$ by the definition of $l$ (explicitly, $n\nmid ln_i$ since $l< e_i$). Thus the elements $ g':=g^{ln_i}, x:=v_{0,ln_i}$ generate a Hopf subalgebra satisfying $$g'x=xg',\;\;\;\;\D(x)=1\otimes x+x\otimes g'.$$ (We need prove these two relations. The relation $g'x=xg'$ is clear. The proof of $\D(x)=1\otimes x+x\otimes g'$ is given as follows: Lifting these $v_{0j}$ to $H$, we get the corresponding elements $u_{0j}$ for $0\leq j\leq n-1$. Due to \cite[Propostion 6.2]{BZ}, they are normal and thus $u_{0n_i}^{l}=f(x)u_{0,ln_i}$ for some $0\neq f(x)\in H_{00}$. By the claim in the proof of the next proposition, that is, Proposition \ref{p5.7}, $u_{0n_i}$ is a skew primitive element. Using the fact that $\xi^{n_i^2}$ is a primitive $l$th root of unity, $u_{0n_i}^{l}$ is still a skew primitive element. This implies that $\D(f(x)u_{0,ln_i})$ and thus $\D(u_{0,ln_i})\in H_{00}\otimes H_{0,ln_i}+H_{0,ln_i}\otimes H_{ln_i,ln_i}$. Therefore, $v_{0,ln_i}$ has to be skew-primitive.) It is well known that a Hopf algebra satisfying above relations must be infinite dimensional (in fact, a infinite dimensional Taft algebra) which is a contradiction. Thus, $e_i$ is the smallest number such $v_{0n_i}^{e_i}=0$. Now, we want to show that $(e_i,n_i)=1$. Otherwise, let $d_i=(e_i,n_i)>1$. Therefore, we consider $$\D(v_{0n_i})^{\frac{e_i}{d_i}}=(1\otimes v_{0n_i}+v_{0n_i}\otimes g^{n_i} )^{\frac{e_i}{d_i}}.$$ By definition, $e_i/d_i$ is coprime to $n_i$ thus coprime to $n_i^2$. This implies that $\xi^{n_i^2}$ is a primitive $e_i/d_i$th root of unity. Therefore, $$\D(v_{0n_i})^{\frac{e_i}{d_i}}=1\otimes v_{0n_i}^{\frac{e_i}{d_i}}+ g^{n_ie_i/d_i}\otimes v_{0n_i}^{\frac{e_i}{d_i}}.$$ Since $e_i$ is the smallest number such $v_{0n_i}^{e_i}=0$, $v_{0n_i}^{\frac{e_i}{d_i}}\neq 0$. This means that we go into the following situation again: Let $g'=g^{n_ie_i/d_i},\;x=v_{0n_i}^{e_i/d_i}$, then the Hopf subalgebra generated by $g',x$ is infinite dimensional. 
This is impossible. Next, we want to show that $n|n_in_j$ for all $1\leq i\neq j\leq \theta.$ Through computation, $$\D(v_{0n_i}v_{0n_j})=1\otimes v_{0n_i}v_{0n_j}+v_{0n_i}\otimes g^{n_i}v_{0n_j}+v_{0n_j}\otimes v_{0n_i}g^{n_j}+v_{0n_i}v_{0n_j}\otimes g^{n_i+n_j}$$ and $$\D(v_{0n_j}v_{0n_i})=1\otimes v_{0n_j}v_{0n_i}+v_{0n_j}\otimes g^{n_j}v_{0n_i}+v_{0n_i}\otimes v_{0n_j}g^{n_i}+v_{0n_j}v_{0n_i}\otimes g^{n_i+n_j}.$$ By equation \eqref{eq5.1}, one has $v_{0n_i}v_{0n_j}=v_{0n_j}v_{0n_i}$. This implies that $g^{n_j}v_{0n_i}=v_{0n_i}g^{n_j}=\xi^{n_in_j}g^{n_j}v_{0n_i}$. Therefore, $\xi^{n_in_j}=1$ and thus $n|n_in_j$. At last, we need to prove the conditions (3) and (4) of a fraction (see Definition \ref{d3.1}). Clearly, conditions (3) and (4) is equivalent to say that every $v_{0t}$ can be expressed as a product of $v_{0n_1},\ldots,v_{0,n_{\theta}}$ \emph{uniquely} (up to the order of these $v_{0,n_i}$'s due to the community of them) for all $0\leq t\leq n-1$. Since we already know that $v_{0n_1},\ldots,v_{0n_\theta}$ generate the whole algebra $\overline{H}_{0}^{l}$, it is enough to prove the following two conclusion: 1) $v_{0n_1}^{l_1}\cdots v_{0n_\theta}^{l_\theta}\neq 0$ for all $0\leq l_1\leq e_1-1,\ldots,0\leq l_\theta\leq e_\theta-1;$ 2) the elements in the set $\{v_{0n_1}^{l_1}\cdots v_{0n_\theta}^{l_\theta}|0\leq l_1\leq e_1-1,\ldots,0\leq l_\theta\leq e_\theta-1\}$ are linear independent. Of course, 1) is just a necessary part of 2). However, we find that they help each other. To show them, we introduce the lexicographical order on $A=\{(l_1,\ldots,l_{\theta})|0\leq l_1\leq e_1-1,\ldots,0\leq l_\theta\leq e_\theta-1\}$ through $$(l_1,\ldots,l_{\theta})<(l'_1,\ldots,l'_{\theta})\Leftrightarrow \textrm{exsits}\; 1\leq i\leq \theta\; \textrm{s.t.}\; l_j=l_j' \;\textrm{for} j<i\; \textrm{and} \; l_i<l_i'.$$ Now let $S=\{(s_1,\ldots,s_\theta)\in A|v_{0n_1}^{s_1}\cdots v_{0n_\theta}^{s_\theta}\neq 0\}$. Clearly, $S$ is nonempty due to $v_{0n_i}\neq 0$ for all $1\leq i\leq \theta.$ We prove that all elements $\{v_{0n_1}^{s_1}\cdots v_{0n_\theta}^{s_\theta}|(s_1,\ldots,s_\theta)\in S\}$ are linear independent firstly and then show that $S=A$. From this, 1) and 2) are proved clearly. In fact, assume we have a linear dependent relation among the elements in $\{v_{0n_1}^{s_1}\cdots v_{0n_\theta}^{s_\theta}|(s_1,\ldots,s_\theta)\in S\}$. Then there exists a linear combination $$a_{l_1,\ldots,l_\theta}v_{0n_1}^{l_1}v_{0n_2}^{l_2}\cdots v_{0n_\theta}^{l_\theta}+\cdots=0$$ with $a_{l_1,\ldots,l_\theta}\neq 0$ and $(l_1,\ldots,l_\theta)$ is as small as possible. Takeing the coproduct to the above equality and one can get a smaller item involving in a linear dependent equation. That is a contradiction. Next, let's show that $S=A$. Otherwise, there exists $v_{0n_1}^{l_1}\cdots v_{0n_\theta}^{l_\theta}=0$ for some $(l_1,\ldots,l_{\theta})\in A$. Then take $(l_1,\ldots,l_{\theta})$ as small as possible under above lexicographical order. Without loss generality, we can assume that $l_1>0$. Then take a $k_1$ such $0\leq k_1< l_1$. In the expression of $\D(v_{0n_1}^{l_1}\cdots v_{0n_\theta}^{l_\theta})$ on can find the coefficient of the item $v_{0n_1}^{l_1-k1}\otimes g^{k_1n_1}v_{0n_1}^{k_1}v_{0n_2}^{l_2}\cdots v_{0n_\theta}^{l_\theta}$ is $$\left ( \begin{array}{c} l_1\\k_1 \end{array}\right)_{\xi^{n_1^2}}$$ which is not zero since we already know that $\xi^{n_1^2}$ is a primitive $e_1$th root of unity. 
This implies that either $v_{0n_1}^{l_1-k_1}=0$ or $v_{0n_1}^{k_1}v_{0n_2}^{l_2}\cdots v_{0n_\theta}^{l_\theta}=0$ by the linear independent relation we proved. But both of them are not possible. Therefore, $S=A$. So 1) and 2) are proved. The proof of the claim is done. Let's go back to prove this lemma. Until now, we have proved that the Hopf algebra $\overline{H}$ is generated by $v_{0n_1},\ldots,v_{0n_\theta}$ and $g$ such that $n_1,\ldots,n_\theta$ is a fraction of $n$ and $$g^{n}=1, \;\; v_{0n_i}g=\xi^{n_i}gv_{0n_i},\;\;v_{0n_i}v_{0n_j}=v_{0n_j}v_{0n_i},\;\;v_{0n_i}^{e_i}=0$$ and $g$ is group-like, $v_{0n_i}$ is a $(1,g^{n_i})$-skew primitive element for all $1\leq i,j\leq \theta$. Therefore, we have a Hopf surjection $$T(n_1,\ldots,n_\theta,\xi)\longrightarrow \overline{H},\;\;y_{n_i}\mapsto v_{0n_i},\;g\mapsto g,\;\;1\leq i\leq \theta.$$ Comparing the dimension of them, we know that this surjection is a bijection. \end{proof} With help of this lemma, we are in the position to give the main result of this subsection now. \begin{proposition}\label{p5.7} Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp1), (Hyp2) and $ n:=\ord(\pi)=\mi(\pi)>1$. Retain all above notations, then \begin{itemize} \item[(1)] If $H$ is additive, then it is isomorphic to a fraction version of a infinite dimensional Taft algebra $T(\underline{n},1,\xi)$ of Subsection \ref{ss4.1}. \item[(2)] If $H$ is multiplicative, then it is isomorphic to a fraction version of a generalized Liu algebra $B(\underline{n},\omega,\gamma)$ of Subsection \ref{ss4.4}. \end{itemize} \end{proposition} \begin{proof} Before we prove (1) and (2), we want to recall some basic facts, which are still valid in our case, on the coproduct from \cite[Proposition 6.7]{BZ}. The first fact is that $g:=u_{11}$ is a group-like element and $u_{ii}$ can defined as $u_{ii}:=u_{11}^{i}$ (see (a) of \cite[Proposition 6.7]{BZ}). By (1) of Lemma \ref{l2.8}, in general one has $$\D(u_{ij})=\sum_{s,t}C_{st}^{ij}(u_{is}\otimes u_{tj})$$ for $C_{st}^{ij}\in H_{00}\otimes H_{00}$ and $0\leq i,j,s,t\leq n-1.$ The second fact is $C_{st}^{ij}=0$ when $s\neq t$ (see (6.7.5) in the proof of \cite[Proposition 6.7]{BZ}). Therefore, the coproduct for $u_{ij}$ can be written as \begin{equation} \D(u_{ij})= C_{ii}^{ij}g^{i}\otimes u_{ij}+C_{jj}^{ij} u_{ij}\otimes g^{j}+\sum_{s\neq i,j} C_{ss}^{ij} u_{is}\otimes u_{sj} \end{equation} for all $0\leq i,j\leq n-1.$ Now by Lemma \ref{l5.6} we can assume that $\overline{H}=T(n_1,\ldots,n_{\theta},\xi)$. Then we get the following observation. \emph{Claim. 
For all $1\leq i\leq \theta$, the element $u_{0n_i}$ is a $(1,g^{n_i})$-skew primitive element.} \emph{Proof of the claim.} By direct computation, \begin{eqnarray*}&&(\id\otimes \D)\D(u_{0n_i})\\ &=&(\id\otimes \D)(C_{00}^{0n_i} 1\otimes u_{0n_i}+C_{n_in_i}^{0n_i}u_{0n_i}\otimes g^{n_i}+\sum_{s\neq 0,n_i}C_{ss}^{0n_i}u_{0s}\otimes u_{sn_i})\\ &=& (\id\otimes \D)(C_{00}^{0n_i})1\otimes (C_{00}^{0n_i} 1\otimes u_{0n_i}+C_{n_in_i}^{0n_i}u_{0n_i}\otimes g^{n_i}+\sum_{s\neq 0,n_i}C_{ss}^{0n_i}u_{0s}\otimes u_{sn_i})\\ &&+ (\id\otimes \D)(C_{n_in_i}^{0n_i})u_{0n_i}\otimes g^{n_i}\otimes g^{n_i}+\sum_{s\neq 0,n_i}(\id\otimes \D)(C_{ss}^{0n_i})u_{0s}\otimes\\ && \quad [C_{ss}^{sn_i}g^s\otimes u_{sn_i}+C_{n_in_i}^{sn_i}u_{sn_i}\otimes g^{n_i}+\sum_{t\neq s,n_i}C_{tt}^{sn_i}u_{st}\otimes u_{tn_i}] \end{eqnarray*} and \begin{eqnarray*} &&(\D\otimes \id)\D(u_{0n_i})\\ &=&(\D\otimes \id)(C_{00}^{0n_i} 1\otimes u_{0n_i}+C_{n_in_i}^{0n_i}u_{0n_i}\otimes g^{n_i}+\sum_{s\neq 0,n_i}C_{ss}^{0n_i}u_{0s}\otimes u_{sn_i})\\ &=& (\D\otimes \id)(C_{00}^{0n_i})1\otimes 1\otimes u_{0n_i}+(\D\otimes \id)(C_{n_in_i}^{0n_i})[ C_{00}^{0n_i}1\otimes u_{0n_i}\\ &&+C_{n_in_i}^{0n_i}u_{0n_i}\otimes g^{n_i}+\sum_{s\neq 0,n_i}C_{ss}^{0n_i}u_{0s}\otimes u_{sn_i}]\otimes g^{n_i}\\ &&+ \sum_{s\neq 0,n_i}(\D\otimes \id)(C_{ss}^{0n_i})[C_{00}^{0s}1\otimes u_{0s}+C_{ss}^{0s}u_{0s}\otimes g^s+\sum_{t\neq 0,s}C_{tt}^{0s}u_{0t}\otimes u_{ts}]\otimes u_{sn_i}. \end{eqnarray*} By coassociativity, we get the following identities: \begin{eqnarray} \notag&&(\id\otimes \D)(C_{00}^{0n_i})(1\otimes C_{00}^{0n_i})=(\D\otimes \id)(C_{00}^{0n_i})\\ \notag&&(\id\otimes \D)(C_{00}^{0n_i})(1\otimes C_{n_in_i}^{0n_i})=(\D\otimes \id)(C_{n_in_i}^{0n_i})\\ \notag&&(\id\otimes \D)(C_{n_in_i}^{0n_i})=(\D\otimes \id)(C_{n_in_i}^{0n_i})(C_{n_in_i}^{0n_i}\otimes 1)\\ \label{eq5.4}&&(\id\otimes \D)(C_{00}^{0n_i})(1\otimes C_{ss}^{0n_i})=(\D\otimes \id)(C_{ss}^{0n_i})(C_{00}^{0s}\otimes 1)\\ \label{eq5.5}&&(\id\otimes \D)(C_{ss}^{0n_i})(1\otimes C_{ss}^{sn_i})=(\D\otimes \id)(C_{ss}^{0n_i})(C_{ss}^{0s}\otimes 1) \end{eqnarray} for $s\neq 0,n_i.$ From the first three identities, we find that $C_{00}^{0n_i}=C_{n_in_i}^{0n_i}=1$ by using the same method as in \cite[Page 297]{BZ}. This indeed implies that $$C_{00}^{0t}=C_{tt}^{0t}=1$$ for all $0\leq t\leq n-1$, since the same first three identities hold with $n_i$ replaced by $t$. Recall again the dichotomy for $H_{00}$: either $H_{00}=\k[x]$ or $H_{00}=\k[x^{\pm 1}]$. From this we know that $C_{ss}^{0n_i}=\sum_{k,l} a^{s,0,n_i}_{kl}x^k\otimes x^{l}$ for $s\neq 0,n_i$ and $a^{s,0,n_i}_{kl}\in \k.$ We prove our claim only in the case $H_{00}=\k[x]$, since the other case can be proved similarly. Since the image of $u_{0n_i}$ in $\overline{H}$ is a skew primitive element, $$a^{s,0,n_i}_{00}=0.$$ Since $C_{00}^{0t}=C_{tt}^{0t}=1$ for all $0\leq t\leq n-1$, equation \eqref{eq5.4} simplifies to $$(1\otimes C_{ss}^{0n_i})=(\D\otimes \id)(C_{ss}^{0n_i})$$ which implies that $a^{s,0,n_i}_{kl}=0$ if $k\neq 0$. Similarly, equation \eqref{eq5.5} implies that $a^{s,0,n_i}_{0l}=0$ if $l\neq 0$. Thus, $C_{ss}^{0n_i}=0$ for $s\neq 0,n_i$ and $u_{0n_i}$ is a $(1,g^{n_i})$-skew primitive element for $1\leq i\leq \theta.$ Moreover, we point out that, in the same way as in \cite[Theorem 6.7]{BZ}, one can show that as an algebra the Hopf algebra $H$ is generated by $H_{00}, g=u_{11} $ and $u_{0n_i}$ for $1\leq i\leq \theta.$ (1) Now $H$ is additive with $H_{00}=\k[x]$.
We already know that $g=u_{11}$ is group-like and thus $g^{n}$ is a group-like element in $H_{00}$ by the bigrading property. But the only group-like element in $H_{00}$ is $1$ and thus $$g^{n}=1.$$ Consider the element $u_{0n_i}$ for $1\leq i\leq \theta.$ By the quantum binomial theorem, $u_{0n_i}^{e_i}$ is now a primitive element. This means there exists $c_i\in \k$ such that $u_{0n_i}^{e_i}=c_ix$. Since $H$ is prime, $c_i\neq 0$. Therefore, after multiplying $u_{0n_i}$ by a suitable scalar one can assume that $$u_{0n_i}^{e_i}=x$$ for all $1\leq i\leq \theta.$ By equation \eqref{eq5.1}, $u_{0n_i}u_{0n_j}=u_{0n_j}u_{0n_i}$ for all $1\leq i,j\leq \theta.$ Therefore, we have a Hopf surjection $$\phi:\;T(\underline{n},1,\xi)\longrightarrow H,\;\;x\mapsto x,\;\;y_{n_i}\mapsto u_{0n_i},\;\;g\mapsto g,$$ where $\underline{n}=\{n_1,\dots,n_\theta\}.$ Since both of them are prime of GK-dimension one, $\phi$ is an isomorphism. (2) Now $H$ is multiplicative with $H_{00}=\k[x^{\pm 1}]$. We already know that $g=u_{11}$ is group-like and thus $g^{n}$ is a group-like element in $H_{00}$ by the bigrading property. Since $\{x^i|i\in \Z\}$ are all the group-likes in $H_{00}$, $$g^{n}=x^{\omega}$$ for some $\omega\geq 0$ (noting that we can replace $x$ by $x^{-1}$ if $\omega$ is negative). We claim that $\omega\neq 0$. If not, then as in the proof of (1) we know that $u_{0n_i}^{e_i}$ is primitive in $H_{00}$. Hence $u_{0n_i}^{e_i}=0$, which is impossible since $H$ is prime. Consider the element $u_{0n_i}$ for $1\leq i\leq \theta.$ By the quantum binomial theorem, $u_{0n_i}^{e_i}$ is a $(1,g^{e_in_i})=(1,x^{\omega\frac{e_in_i}{n}})$-skew primitive element in $H_{00}$. Therefore, after rescaling by a non-zero scalar if necessary, $$u_{0n_i}^{e_i}=1-x^{\omega\frac{e_in_i}{n}}$$ for all $1\leq i\leq \theta.$ Also by equation \eqref{eq5.1}, $u_{0n_i}u_{0n_j}=u_{0n_j}u_{0n_i}$ for all $1\leq i,j\leq \theta.$ Therefore, we have a Hopf surjection $$\phi:\;B(\underline{n},\omega,\xi)\longrightarrow H,\;\;x\mapsto x,\;\;y_{n_i}\mapsto u_{0n_i},\;\;g\mapsto g,$$ where $\underline{n}=\{n_1,\dots,n_\theta\}.$ Since both of them are prime of GK-dimension one, $\phi$ is an isomorphism. \end{proof} \section{Remaining case} In the previous section, we dealt with the ideal cases: the case $\mi (\pi)=1$ and the case $\ord(\pi)=\mi(\pi)>1.$ In this section, we deal with the remaining case: $\ord{(\pi)}>\mi(\pi)>1.$ The main aim of this section is to classify prime Hopf algebras $H$ of GK-dimension one in this remaining case. To realize this aim, we apply an idea similar to the one used in \cite{WLD}: we first construct a special Hopf subalgebra $\widetilde{H}$, which can be classified by previous results, and then we show that $\widetilde{H}$ determines the structure of $H$ entirely. In this section, $H$ is a prime Hopf algebra of GK-dimension one satisfying (Hyp1), (Hyp2) and $n:=\ord{(\pi)}>m:=\mi(\pi)>1$ unless stated otherwise. As before, the $1$-dimensional representation in (Hyp1) is denoted by $\pi.$ Recall that $$H=\bigoplus_{i,j\in \Z_{n}}H_{ij}$$ is $\Z_n$-bigraded by \eqref{eq2.3}. \subsection{The Hopf subalgebra $\widetilde{H}$.} By definition, we know that $m|n$, and we let $t:=\frac{n}{m}.$ We define the following subalgebra $$\widetilde{H}:=\bigoplus_{0\leq i,j\leq m-1} H_{it,jt}.$$ The following result is a collection of \cite[Proposition 5.4, Lemma 5.5]{WLD}, which were proved in \cite{WLD} without using the regularity hypothesis. \begin{lemma}\label{l6.1} Retain the above notations.
\begin{itemize}\item[(1)] For all $0\leqslant i, j\leqslant n-1$, $H_{ij}\neq 0$ if and only if $i-j\equiv 0$ \emph{(mod} $t$\emph{)}. \item[(2)] The algebra $\widetilde{H}$ is a Hopf subalgebra of $H$. \end{itemize} \end{lemma} The key observation, both in \cite{WLD} and here, is that the Hopf subalgebra $\widetilde{H}$ lives in an ideal case. \begin{proposition}\label{p6.2} For the Hopf algebra $\widetilde{H}$, we have the following results. \begin{itemize}\item[(1)] It is prime of GK-dimension one. \item[(2)] It satisfies (Hyp1) and (Hyp2) through the restriction $\pi|_{\widetilde{H}}$ of $\pi$ to $\widetilde{H}$. \item[(3)] $\ord(\pi|_{\widetilde{H}})=\mi(\pi|_{\widetilde{H}})=m.$ \end{itemize} \end{proposition} \begin{proof} (1) For each $0\leq i\leq m-1$, let $\widetilde{H}^{l}_{it}:=\bigoplus_{0\leq j\leq m-1}H_{it,jt}.$ By Lemma \ref{l6.1}, we know that $\widetilde{H}_{it}^{l}=H_{it}^{l}$. Therefore $\widetilde{H}=\bigoplus_{0\leq i\leq m-1}\widetilde{H}^{l}_{it}$ is strongly graded and $\widetilde{H}^{l}_{0}$ is a commutative domain. Thus Lemma \ref{l2.10} applies. As a consequence, $\widetilde{H}$ is prime with PI-degree $m$. Since $\widetilde{H}^{l}_{0}=H^{l}_{0}$ is of GK-dimension one and $\widetilde{H}$ is $\Z_{m}$-strongly graded, $\widetilde{H}$ is of GK-dimension one. (2) Denote the restrictions of the actions of $\Xi_\pi^l$ and $\Xi_\pi^r$ to $\widetilde{H}$ by $\Gamma^l$ and $\Gamma^r$, respectively. Since $\widetilde{H}=\bigoplus_{0\leqslant i\leqslant m-1 } H_{it}^l$, we can see that for each $0\leqslant i\leqslant m-1$ and any $0\neq x \in H_{it}^l$, $$(\Gamma^l)^m(x)=\xi^{itm}x=x$$ for $\xi$ a primitive $n$th root of unity. This implies that the group $\langle\Gamma^l\rangle$ has order $m$ and thus $\pi|_{\widetilde{H}}$ is of order $m$. We already know that PI-deg$(\widetilde{H})=m$ and the invariant component $\widetilde{H}^{l}_{0}=H^{l}_{0}$ is a domain. So $\widetilde{H}$ satisfies (Hyp1) and (Hyp2). (3) Similarly, $|\langle\Gamma^r\rangle|=m$. We claim that $$\langle\Gamma^l\rangle\cap \langle\Gamma^r\rangle=\{1\}.$$ In fact, suppose $(\Gamma^l)^i=(\Gamma^r)^j$ for some $0\leqslant i, j\leqslant m-1$. Choosing $0\neq x\in H_{tt}$, we find $$\xi^{ti}x=(\Gamma^l)^i(x)=(\Gamma^r)^j(x)=\xi^{tj}x$$ which implies $i=j$. Letting $0\neq y\in H_{0,t}$, the equality $$y=(\Gamma^l)^i(y)=(\Gamma^r)^j(y)=\xi^{tj}y$$ forces $j=0$. Thus we get $i=j=0$, i.e., $\langle\Gamma^l\rangle\cap \langle\Gamma^r\rangle=\{1\}$. This implies that $\mi(\pi|_{\widetilde{H}})=m$. \end{proof} \begin{corollary}\label{c6.3} As a Hopf algebra, $\widetilde{H}$ is isomorphic to either a fraction version of an infinite dimensional Taft algebra $ T(\underline{m},1,\xi)$ or a fraction version of a generalized Liu algebra $B(\underline{m},\omega,\gamma)$. \end{corollary} \begin{proof} This is a direct consequence of Propositions \ref{p5.7} and \ref{p6.2}. \end{proof} This corollary implies that either $H_{00}=\k[x]$ (i.e. $\widetilde{H}\cong T(\underline{m},1,\xi)$) or $H_{00}=\k[x^{\pm1}]$ (i.e. $\widetilde{H}\cong B(\underline{m},\omega,\gamma)$) again. That is, we are back in a familiar situation: we have a dichotomy for $H$. \begin{definition} \emph{We call $H$} additive \emph{(resp.} multiplicative\emph{) if $H_{00}=\k[x]$} \emph{(resp.} $H_{00}=\k[x^{\pm 1}])$. \end{definition} We note that \cite[Proposition 6.6]{WLD} is also true in our case, and we recall it as follows.
\begin{lemma}\label{l6.5} Every homogeneous component $H_{i,i+jt}$ of $H$ is a free $H_{00}$-module of rank one on both sides for all $0\leq i\leq n-1$ and $0\leq j\leq m-1.$ \end{lemma} From this lemma, there is a generating set $\{u_{i,i+jt}|0\leq i\leq n-1,\;0\leq j\leq m-1\}$ satisfying $$u_{00}=1\;\;\;\;\textrm{and}\;\;\;\;H_{i, i+jt}=u_{i, i+jt}H_{00}=H_{00}u_{i, i+jt}.$$ So, $H$ can be written as \begin{equation}\label{eq6.1} H=\underset{0\leqslant j\leqslant m-1}{\bigoplus_{0\leqslant i\leqslant n-1}}H_{00}u_{i, i+jt}=\underset{0\leqslant j\leqslant m-1}{\bigoplus_{0\leqslant i\leqslant n-1}}u_{i, i+jt}H_{00}.\end{equation} \subsection{Additive case.} If $H$ is additive, $\widetilde{H}=T(\underline{m}, 1, \xi)$. Recall that $n$ is the $\pi$-order and $n=mt$. We will prove that $H$ is isomorphic as a Hopf algebra to $T(\underline{m}, t, \zeta)$, for $\zeta$ some primitive $n$th root of $1$. Recall that \begin{eqnarray*}\widetilde{H}=T(\underline{m}, 1, \xi)&=&\k\langle g, y_{m_1},\ldots,y_{m_\theta}|g^m=1,y_{m_i}g=\xi^{m_i} gy_{m_i}, y_{m_i}y_{m_j}=y_{m_j}y_{m_i},\\ && \quad y_{m_i}^{e_i}=y_{m_j}^{e_j},\,1\leq i,j\leq \theta\rangle,\end{eqnarray*} where by Proposition \ref{p4.3} we may assume that $(m_1,\ldots,m_{\theta})=1$ without loss of generality. Note that $H=\bigoplus_{0\leqslant i\leqslant n-1, 0\leqslant j\leqslant m-1} H_{i, i+jt}$, $\widetilde{H}=\bigoplus_{0\leqslant i, j\leqslant m-1} H_{it, jt}$ and $H_{it, jt}=\k[y_{m_1}^{e_1}]y_{j-i}g^{i}$ (the index $j-i$ is interpreted mod $m$). In particular, $H_{00}=\k[y_{m_1}^{e_1}]$, $H_{0, jt}=\k[y_{m_1}^{e_1}]y_{j}$ and $H_{tt}=\k[y_{m_1}^{e_1}]g$. By Lemma \ref{l2.8} (5), $\epsilon(u_{11})\neq 0$. After multiplying by a suitable scalar, we may assume that $\epsilon(u_{11})=1$ throughout this subsection. The following results are parallel to \cite[Lemma 7.1, Propositions 7.2, 7.3]{WLD}. Since the situation has changed, we write out the details. \begin{lemma}\label{l6.6} Let $u:=u_{11}$. Then $H_1^l=H_0^lu$, $H=\bigoplus_{0\leqslant k\leqslant t-1}\widetilde{H}u^k$ and $u$ is invertible. \end{lemma} \begin{proof} By the bigraded structure of $H$, we have $$H_{0,m_it}H_{11}\subseteq H_{1, 1+m_it},\;\;\;\; H_{0, (e_i-1)m_it}H_{1, 1+m_it}\subseteq H_{11},$$ which imply $$H_{0,m_it}H_{0, (e_i-1)m_it}H_{1, 1+m_it}\subseteq H_{0,m_it}H_{11}\subseteq H_{1, 1+m_it},$$ for all $1\leq i\leq \theta.$ Since $H_{0,m_it}H_{0,(e_i-1)m_it}=y_{m_i}^{e_i}H_{00}$ is a maximal ideal of $H_{00}=\k[y_{m_1}^{e_1}]=\k[y_{m_i}^{e_i}]$ and $H_{1,1+m_it}$ is a free $H_{00}$-module of rank one (by Lemma \ref{l6.5}), $H_{0,m_it}H_{0,(e_i-1)m_it}H_{1, 1+m_it}$ is a maximal $H_{00}$-submodule of $H_{1, 1+m_it}$. Thus $$H_{0,m_it}H_{11}=H_{0,m_it}H_{0, (e_i-1)m_it}H_{1, 1+m_it}=y_{m_i}^{e_i}H_{1, 1+m_it}\;\; \textrm{or}\;\; H_{0,m_it}H_{11}=H_{1, 1+m_it}.$$ If $H_{0,m_it}H_{11}=y_{m_i}^{e_i}H_{1, 1+m_it}$, then $y_{m_i}u_{11}=y_{m_i}^{e_i}\alpha(y_{m_i}^{e_i})u_{1, 1+m_it}$ for some polynomial $\alpha(y_{m_i}^{e_i})\in \k[y_{m_i}^{e_i}]$. So $$y_{m_i}(u_{11}-y_{m_i}^{e_i-1}\alpha(y_{m_i}^{e_i})u_{1, 1+m_it})=0.$$ Therefore, $y_{m_i}^{e_i}(u_{11}-y_{m_i}^{e_i-1}\alpha(y_{m_i}^{e_i})u_{1, 1+m_it})=0$. Note that each homogeneous component $H_{i, i+jt}$ is a torsion-free $H_{00}$-module, so $$u_{11}=y_{m_i}^{e_i-1}\alpha(y_{m_i}^{e_i})u_{1, 1+m_it}.$$ By assumption, $\epsilon(u_{11})=1$. But, by definition, $\epsilon(y_{m_i})= 0$. This is impossible. So $H_{0,m_it}H_{11}=H_{1, 1+m_it}$, which implies that $H_{0,m_it}u_{11}=H_{1, 1+m_it}$.
Since $i$ above is arbitrary (that is, $1\leq i\leq \theta$) and the $m_i$'s generate $\Z_m$, we can show that $H_{0,jt}u_{11}=H_{1, 1+jt}$ for $0\leqslant j\leqslant m-1$. Thus $H_1^l=H_0^lu_{11}$. Since $H=\bigoplus_{0\leqslant j\leqslant n-1} H_j^l$ is strongly graded, $u_{11}$ is invertible and $H_j^l=H_0^lu_{11}^j$ for all $0\leqslant j\leqslant n-1$. Letting $u:=u_{11}$, we have $$H=\bigoplus_{0\leqslant k\leqslant t-1}\widetilde{H}u^k.$$ \end{proof} We are now in a position to determine the structure of $H$. \begin{lemma}\label{l6.7} With the above notations, we have $$u^t=g, \;\;\;\;y_{m_i}u=\zeta^{m_i} uy_{m_i}\;\;\;\;(1\leq i\leq \theta),$$ where $\zeta$ is a primitive $n$th root of $1$. \end{lemma} \begin{proof} For all $1\leq i\leq \theta$, by $H_{0,m_it}u=uH_{0,m_it}$, there exists a polynomial $\beta_i(y_{m_i}^{e_i})\in \k[y_{m_i}^{e_i}]$ such that $$y_{m_i}u=uy_{m_i}\beta_i(y_{m_i}^{e_i}).$$ Then $$y_{m_i}u^t=u^ty_{m_i}{\beta}_i'(y_{m_i}^{e_i})$$ for some polynomial ${\beta}_i'(y_{m_i}^{e_i})\in \k[y_{m_i}^{e_i}]$ induced by $\beta_i(y_{m_i}^{e_i})$. Since $u^t$ is invertible and $u^t\in H_{t,t}=\k[y_{m_i}^{e_i}]g$, we have $u^t=ag$ for some $0\neq a\in \k$. By assumption, $\epsilon(u)=1$ and thus $a=1$. Therefore, $$u^t=g.$$ Since $y_{m_i}g=\xi^{m_i} gy_{m_i}$, we have ${\beta}_{i}'(y_{m_i}^{e_i})=\xi^{m_i}$. Then it is easy to see that $\beta_i(y_{m_i}^{e_i})=\zeta_i\in \k$ with $\zeta_i^t=\xi^{m_i}$. By assumption, $(m_1,\ldots,m_{\theta})=1$ and thus there exists $\zeta\in \k$ such that $\zeta^{t}=\xi$ and $\zeta^{m_i}=\zeta_i$ for all $1\leq i\leq \theta.$ Of course, $\zeta^n=1$. It remains to show that $\zeta$ is a primitive $n$th root of $1$. Indeed, assume to the contrary that $\zeta$ is a primitive $n'$th root of $1$ with $n'\neq n$. By definition, $m|n'|n$. Then it is not hard to see that $$u':=u^{n'}\in C(H),$$ the center of $H$. Since $g^{m}=u^{n}=(u')^{\frac{n}{n'}}=1$, we have orthogonal central idempotents $1_{l}:=\frac{n'}{n}\sum_{j=0}^{\frac{n}{n'}-1} \varsigma^{-lj}(u')^{j}$ for $0\leq l\leq \frac{n}{n'}-1$ and $\varsigma$ a primitive $\frac{n}{n'}$th root of unity. This contradicts the fact that $H$ is prime. \end{proof} \begin{lemma}\label{l6.8} The element $u$ is a group-like element of $H$. \end{lemma} \begin{proof} First of all, $H_0^r\cong \k[x]\cong H_0^l$. Then $H_0^r\otimes H_0^l\cong \k[x, y]$ and the only invertible elements in $H_0^r\otimes H_0^l$ are the nonzero scalars in $\k$. Since $\Delta(u)$ and $u\otimes u$ are invertible, $\Delta(u)(u\otimes u)^{-1}$ is invertible (and hence a scalar). Thus $u$ must be group-like, noting that $\epsilon(u)=1$. \end{proof} The next proposition follows directly from the above lemmas. \begin{proposition}\label{p6.9} Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp1), (Hyp2) and $\ord(\pi)=n>\mi(\pi)=m>1$. If $H$ is additive, then $H$ is isomorphic as a Hopf algebra to a fraction version of an infinite dimensional Taft algebra. \end{proposition} \subsection{Multiplicative case.} If $H$ is multiplicative, then $\widetilde{H}=B(\underline{m}, \omega, \gamma)$ for $\underline{m}=\{m_1,\ldots,m_\theta\}$ a fraction of $m$, $\gamma$ a primitive $m$th root of $1$ and $\omega$ a positive integer. As usual, the generators of $B(\underline{m}, \omega, \gamma)$ are denoted by $x^{\pm 1},y_{m_1},\ldots,y_{m_\theta}$ and $g$. By equation \eqref{eq4.5} and \cite[Remark 6.3]{WLD}, we can assume that $\widetilde{H}=\bigoplus_{0\leqslant i, j\leqslant m-1}H_{it, jt}$ with $$H_{it, jt}=\k[x^{\pm 1}]y_{j-i}g^i$$ (the index $j-i$ is interpreted mod $m$).
In particular, $H_{00}=\k[x^{\pm 1}]$, $H_{0, jt}=\k[x^{\pm 1}]y_{j}$ and $H_{t, t}=\k[x^{\pm 1}]g$. Set $u_j:=u_{1,1+jt}$ $(0\leqslant j\leqslant m-1)$ for convenience. By the structure of the bigrading of $H$, we have \begin{equation}\label{eq6.2} y_{m_i}u_j=\phi_{m_i,j}u_{m_i+j}\end{equation} and \begin{equation}\label{eq6.3}u_jy_{m_i}=\varphi_{m_i,j}u_{m_i+j}\end{equation} for some polynomials $\phi_{m_i,j}, \varphi_{m_i,j}\in \k[x^{\pm 1}]$ and $1\leqslant i\leqslant \theta,\;0\leq j\leq m-1.$ With these notations and the equality $y_{m_i}^{e_i}=1-x^{\omega \frac{e_im_i}{m}}$, we find that \begin{equation}\label{eq6.4}(1-x^{\omega \frac{e_im_i}{m}})u_j=y_{m_i}^{e_i}u_j=\phi_{m_i,j}\phi_{m_i,m_i+j}\cdots \phi_{m_i,(e_i-1)m_i+j}u_j\end{equation} and \begin{equation}\label{eq6.5} u_j(1-x^{\omega \frac{e_im_i}{m}})=u_jy_{m_i}^{e_i}=\varphi_{m_i,j}\varphi_{m_i,m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j}u_j,\end{equation} for $1\leqslant i\leqslant \theta$ and $0\leq j\leq m-1.$ \begin{lemma}\label{l6.10} There is no such $H$ satisfying $\ord(\pi)=n>\mi(\pi)=m>1$ and $n/m>2$. \end{lemma} \begin{proof} Since $u_jH_{00}=H_{00}u_j$, we have $$u_jx=\alpha_j(x^{\pm 1})u_j \ \ \ \ \text{and}\ \ \ \ u_jx^{-1}=\beta_j(x^{\pm 1})u_j$$ for some $\alpha_j(x^{\pm 1}), \beta_j(x^{\pm 1})\in \k[x^{\pm 1}]$ for $0\leq j\leq m-1$. From $$u_j=u_jxx^{-1}=\alpha_j(x^{\pm 1})u_jx^{-1}=\alpha_j(x^{\pm 1})\beta_j(x^{\pm 1})u_j,$$ we get $\alpha_j(x^{\pm 1})\beta_j(x^{\pm 1})=1$ and thus $\alpha_j(x^{\pm 1})=\lambda_jx^{a_j}$ for some $0\neq \lambda_j\in \k$ and $a_j\in \mathbb{Z}$. Note that $u_j^t\in H_{t,(1+jt)t}=\k[x^{\pm 1}]y_{\bar{jt}}g$, where $\bar{jt}\equiv jt\; (\text{mod}\; m)$. So we have $u_j^t=\gamma_j(x^{\pm 1})y_{\bar{jt}}g$ for some $\gamma_j(x^{\pm 1})\in \k[x^{\pm 1}]$. Hence $u_j^t$ commutes with $x$. Applying $u_jx=\lambda_jx^{a_j}u_j$ to $u_j^tx=xu_j^t$, we get $\lambda_j^{\sum_{s=0}^{t-1}a_j^s}=1$ and $x^{a_j^t}=x$. If $t$ is odd, $a_j=1$, and if $t$ is even, then $a_j$ is either $1$ or $-1$. Now we consider the special case $j=0$. By $\epsilon(xu_0)=\epsilon(u_0x)\neq 0$, we find that $\lambda_0=1$. If $a_0=1$, that is $u_0x=xu_0$, then we will see that $u_jx=xu_j$ for all $0\leqslant j\leqslant m-1$. In fact, for this, it is enough to show that $u_{m_{i}}x=xu_{m_{i}}$ for all $1\leq i\leq \theta.$ Since $$\phi_{m_i,0}xu_{m_i}=x\phi_{m_i,0}u_{m_i}=xy_{m_i}u_0=y_{m_i}xu_0=y_{m_i}u_0x=\phi_{m_i,0}u_{m_i}x,$$ we have $u_{m_i}x=xu_{m_i}$ since $H_{1,1+m_i t}$ is a torsion-free $H_{00}$-module. Then, by the strongly graded structure $u_{i,i+jt}\in H_i^l=(H_1^l)^i$ and the fact that $x$ commutes with $H_1^l$, it is not hard to see that $u_{i,i+jt}x=xu_{i,i+jt}$ for all $0\leqslant i\leqslant n-1, 0\leqslant j\leqslant m-1$. Therefore the center $C(H)\supseteq H_{00}=\k[x^{\pm 1}]$. By \cite[Lemma 5.2]{BZ}, $C(H)\subseteq H_0$ and thus $C(H)=H_0=\k[x^{\pm 1}]$. This implies that $$\text{rank}_{C(H)}H=\text{rank}_{H_{00}}H= nm < n^2,$$ which contradicts the fact that the PI-degree of $H$ is $n$, which equals the square root of the rank of $H$ over $C(H)$. If $a_0=-1$, that is $u_0x=x^{-1}u_0$, we can deduce that $u_{i,i+jt}x=x^{-1}u_{i,i+jt}$ for all $0\leqslant i\leqslant n-1,\ 0\leqslant j\leqslant m-1$ by a proof parallel to the case $a_0=1$. For $s\in \mathbbm{N}$, let $z_s:=x^s+x^{-s}$ and let $\k[z_s|s\geq 1]$ be the subalgebra of $\k[x^{\pm 1}]$ generated by all the $z_{s}$. Note that $\k[x^{\pm 1}]$ has rank 2 over $\k[z_s| s\geqslant 1]$. Since every $u_{i,i+jt}$ inverts $x$, each $z_s$ is central, and thus $C(H)\supseteq \k[z_s|s\geq 1]$.
Using \cite[Lemma 5.2]{BZ} again, we have $C(H)= \k[z_s|s\geq 1]$. Hence $$\text{rank}_{C(H)}H = 2nm\neq n^2$$ since $n/m >2$ by assumption. This again contradicts the fact that PI-deg$(H)=n$. Combining these two cases, we get the desired result. \end{proof} We turn now to the case $\ord(\pi)=2\mi(\pi)=2m$. In this case, $t=2$. As discussed in the proof of Lemma \ref{l6.10}, if such an $H$ exists then the relations \begin{equation}\label{rl} u_jx=x^{-1}u_j\ \ (0\leqslant j\leqslant m-1)\end{equation} hold in $H$. Using these relations and \eqref{eq6.5}, we have \begin{equation}\label{r2} \varphi_{m_i,j}\varphi_{m_i,m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j}=1-x^{-\omega\frac{e_im_i}{m}} \end{equation} for all $1\leq i\leq \theta$ and $0\leq j\leq m-1$. To determine the structure of $H$, we need to make some harmless assumptions on the choice of the $u_j$ ($0\leqslant j\leqslant m-1$) and $\phi_{m_i,j}$: \begin{itemize}\item[(1)] We assume $\epsilon(u_0)=1$; \item[(2)] For each $1\leq i\leq \theta$, let $\xi_{i,s}:=e^{\frac{2s\pi \sqrt{-1}}{\omega\frac{e_im_i}{m}}}$ and thus $1-x^{\omega\frac{e_im_i}{m}}=\Pi_{s\in S_i}(1-\xi_{i,s}x)$, where $S_i:=\{0,1, \cdots, \omega\frac{e_im_i}{m}-1\}$. Since \begin{equation*}\phi_{m_i,j}\cdots \phi_{m_i,(e_i-1)m_i+j}=y_{m_i}^{e_i}=1-x^{\omega\frac{e_im_i}{m}},\end{equation*} there is no harm in assuming that \begin{equation*} \phi_{m_i, tm_i+j}=\Pi_{s\in S_{i,j,t}}(1-\xi_{i,s}x),\end{equation*} where $S_{i,j,t}$ is a subset of $S_{i}$. \item[(3)] By the strongly graded structure of $H$, the equality $H_2^l=H^{l}_0g$ and the fact that $g$ is invertible in $H$, we can take $u_{k, k+2j}$ such that \begin{equation*} u_{k, k+2j}= \begin{cases} g^{\frac{k-1}{2}}u_j & \text{if}\ \ k \ \ \text{is odd},\\ y_jg^{\frac{k}{2}} & \text{if}\ \ k \ \ \text{is even}, \end{cases} \end{equation*} for all $2\leqslant k \leqslant 2m-1$. \end{itemize} In the rest of this section, we always make these assumptions. We still need two notations, which appeared in the proof of Proposition \ref{p4.7}. For a polynomial $f=\sum a_ix^{b_i} \in \k[x^{\pm 1}]$, we denote by $\bar{f}$ the polynomial $\sum a_ix^{-b_i}$. Then by \eqref{rl}, we have $fu_i=u_i\bar{f}$ and $u_if=\bar{f}u_i$ for all $0\leqslant i\leqslant m-1$. For any $h\in H\otimes H$, we use $$h_{(s_1,t_1)\otimes (s_2,t_2)}$$ to denote the homogeneous part of $h$ in $H_{s_{1},t_{1}}\otimes H_{s_{2},t_{2}} $. Both these notations will be used frequently in the proof of the next proposition. \begin{proposition}\label{p6.11} Keep the notations above. Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp1) and (Hyp2). Assume that $\widetilde{H}=B(\underline{m},\omega,\gamma)$ and $\ord(\pi)=2\mi(\pi)>2$. Then we have \begin{itemize} \item[(1)]$m|\omega,\;\; 2|\sum_{i=1}^{\theta}(m_i-1)(e_i-1), \;\;2|\sum_{i=1}^{\theta}(e_i-1)m_i\frac{\omega}{m}$. \item[(2)] As a Hopf algebra, $$H\cong D(\underline{m},d,\gamma)$$ constructed as in Subsection \ref{ss4.4}, where $d=\frac{\omega}{m}$. \end{itemize} \end{proposition} \noindent\emph{Proof.} We divide the proof into several steps. \emph{Claim 1.
We have $m|\omega$ and, for $1\leq i\leq \theta$ and $0 \leq j\leq m-1$, $y_{m_i}u_j=\phi_{m_i,j} u_{m_i+j}=\xi_{m_i} x^{dm_i}u_jy_{m_i}$, where $d=\frac{\omega}{m}$ and $\xi_{m_i}\in \k$ satisfies $\xi_{m_i}^{e_i}=-1$.} \emph{Proof of Claim 1:} By associativity of the multiplication, we have many equalities: \begin{align*} y_{m_i}u_jy_{m_i}^{e_i-1}&=\phi_{m_i,j}\varphi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j}u_0\\ &=\varphi_{m_i,j}\phi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j}u_0\\ &\cdots\\ &=\varphi_{m_i,j}\varphi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \phi_{m_i,(e_i-1)m_i+j}u_0, \end{align*} which imply that \begin{equation}\label{eq6.8}\phi_{m_i,sm_i+j}\varphi_{m_i,tm_i+j} =\varphi_{m_i,sm_i+j}\phi_{m_i,tm_i+j}\end{equation} for all $0\leq s, t\leq e_i-1$. Using associativity again, we have \begin{align*} y_{m_i}^{e_i}u_jy_{m_i}^{e_i(e_i-1)}&=(1-x^{\omega\frac{e_im_i}{m}}) u_j(1-x^{\omega\frac{e_im_i}{m}})^{e_i-1}\\ &=-x^{\omega\frac{e_im_i}{m}} (1-x^{-\omega\frac{e_im_i}{m}})^{e_i}u_j\\ &=-x^{\omega\frac{e_im_i}{m}}(\varphi_{m_i,j}\varphi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j})^{e_i}u_j\\ &=(\phi_{m_i,j}\varphi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j})^{e_i}u_j\\ &=(\varphi_{m_i,j}\phi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \varphi_{m_i,(e_i-1)m_i+j})^{e_i}u_j\\ &\cdots\\ &=(\varphi_{m_i,j}\varphi_{m_i,m_i+j}\varphi_{m_i,2m_i+j}\cdots \phi_{m_i,(e_i-1)m_i+j})^{e_i}u_j, \end{align*} where the fourth ``$=$'', for example, is obtained in the following way: we first multiply $u_j$ by one $y_{m_i}$ on the left, then by $y_{m_i}^{e_i-1}$ on the right, and then continue this procedure. From these equalities, we have $$\phi_{m_i,sm_i+j}^{e_i}=-x^{\omega\frac{e_im_i}{m}}\varphi_{m_i,sm_i+j}^{e_i}$$ for all $0\leq s\leq e_i-1$. This implies that $$e_i|\omega\frac{e_im_i}{m}.$$ So $m|\omega m_i$ for all $1\leq i\leq \theta$. Since $(m_1,\ldots,m_{\theta})=1$, we have $$m|\omega.$$ So $\phi_{m_i,sm_i+j}=\xi_{m_i,sm_i+j}x^{dm_i}\varphi_{m_i,sm_i+j}$, where $d=\frac{\omega}{m}$ and $\xi_{m_i,sm_i+j}\in \k$ satisfies $\xi_{m_i,sm_i+j}^{e_i}=-1$. We next want to prove that $\xi_{m_i,sm_i+j}$ does not depend on the number $sm_i+j.$ In fact, by equation \eqref{eq6.8}, we can see that $\xi_{m_i,sm_i+j}=\xi_{m_i,tm_i+j}$ for all $0\leq s, t\leq e_i-1$, and so we may write it as $\xi_{m_i,j}$. Now, for any $1\leq i'\leq \theta$, by definition we have $\phi_{m_{i'},0}u_{m_{i'}}=y_{m_{i'}}u_{0}$. Therefore \begin{align*} y_{m_i}y_{m_{i'}}u_{0}&= \phi_{m_{i'},0}y_{m_i}u_{m_{i'}}\\ &=\phi_{m_{i'},0}\xi_{m_i,m_{i'}}x^{m_id}u_{m_{i'}}y_{m_i}, \end{align*} and \begin{align*} y_{m_i}y_{m_{i'}}u_{0}&= y_{m_{i'}}y_{m_i}u_{0}\\ &=\xi_{m_i,0}x^{m_id}y_{m_{i'}}u_{0}y_{m_i}\\ &=\phi_{m_{i'},0}\xi_{m_i,0}x^{m_id}u_{m_{i'}}y_{m_i}. \end{align*} So $\xi_{m_i,0}=\xi_{m_i,m_{i'}}$, which tells us that $\xi_{m_i,j}$ does not depend on $j$ (since every $j$ is generated by these $m_{i'}$'s), and so we write it as $\xi_{m_i}.$\qed In the remainder of the proof, $d$ is fixed to be the number $\omega/m$. \noindent\emph{Claim 2. We have $u_jg=\lambda_j x^{-2d}gu_j$ for $\lambda_j=\pm \gamma^j$ and $0\leq j\leq m-1$.} \emph{Proof of Claim 2:} Since $g$ is invertible in $H$, $u_jg=\psi_jgu_j$ for some invertible $\psi_j\in \k[x^{\pm 1}]$. Then $u_jg^m=\psi_j^mg^mu_j$ yields $\psi_j^m=x^{-2\omega}$. So $\psi_j=\lambda_jx^{-2d}$ for $\lambda_j\in \k$ with $\lambda_j^m=1$.
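Let us record why $\psi_j^m=x^{-2\omega}$, a short verification we add for the reader's convenience (it uses only relations already established): since $g^{m}=x^{\omega}$ in $\widetilde{H}$ and $u_jx=x^{-1}u_j$ by \eqref{rl}, $$u_jg^{m}=u_jx^{\omega}=x^{-\omega}u_j=x^{-2\omega}x^{\omega}u_j=x^{-2\omega}g^{m}u_j,$$ while iterating $u_jg=\psi_jgu_j$ (and using $xg=gx$) gives $u_jg^{m}=\psi_j^{m}g^{m}u_j$; comparing the two expressions yields $\psi_j^{m}=x^{-2\omega}$.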
Our last task is to show that $\lambda_j=\pm \gamma^j$. To show this, we need a preparation; namely, we need to show that $u_ju_l\neq 0$ for all $j,l$. Otherwise, assume that there exist $j_0,l_0\in \{0,\ldots,m-1\}$ such that $u_{j_0}u_{l_0}=0$. Using Claim 1, we can find that $u_{j_0}u_l= 0$ and $u_ju_{l_0}= 0$ for all $j,l$. Let $(u_{j_0})$ and $(u_{l_0})$ be the ideals generated by $u_{j_0}$ and $u_{l_0}$, respectively. Then it is not hard to find that $(u_{j_0})(u_{l_0})=0$, which contradicts $H$ being prime. So we always have \begin{equation}\label{r3} u_ju_l\neq 0 \end{equation} for all $0\leq j,l\leqslant m-1$. Applying this observation, we have $0\neq u_j^2\in H_{2,2+4j}=\k[x^{\pm 1}]y_{2j}g$, so that $u_j^2g=\psi_j\overline{\psi_j}gu_j^2$ and also $u_j^2g=\gamma^{2j}gu_j^2$. Thus $\psi_j\overline{\psi_j}=\lambda_j^2=\gamma^{2j}$, i.e. $\psi_j=\pm \gamma^jx^{-2d}$, which implies that $\lambda_j=\pm \gamma^j$. The proof is ended.\qed We can say more about $\lambda_j$ at this stage. By $0\neq u_ju_lg=\gamma^{j+l}gu_ju_l$, we know that $\psi_j=\gamma^jx^{-2d}$ for all $j$ or $\psi_j=-\gamma^jx^{-2d}$ for all $j$. So \begin{equation}\label{r5} \lambda_{j}=\gamma^{j}\ \ \textrm{or}\ \ \lambda_{j}=-\gamma^{j} \end{equation} for all $0\leq j\leq m-1$. In fact, we will show later that $\psi_j=\gamma^jx^{-2d}$ for all $j$. \noindent\emph{Claim 3. For each $0\leqslant j\leqslant m-1$, there are $f_{jk},h_{jk}\in \k[x^{\pm 1}]$ with $h_{jk}$ monic such that \begin{equation}\label{r4}\D(u_j)=\sum_{k=0}^{m-1}f_{jk}u_k\otimes h_{jk}g^ku_{j-k},\end{equation} where $j-k$ is interpreted \emph{mod} $m$.} \emph{Proof of Claim 3:} Since $u_j\in H_{1,1+2j}$, $\D(u_j)\in H_1^l\otimes H_{1+2j}^r$ by Lemma \ref{l2.8}. Noting that $H_1^l=\bigoplus_{k=0}^{m-1}H_{00}u_k$ and $H_{1+2j}^r=\bigoplus_{s=0}^{m-1}H_{00}g^su_{j-s}$, we can write $$\D(u_j)=\sum_{0\leq k,s\leq m-1}F^j_{ks}(u_k\otimes g^su_{j-s}),$$ where $F^j_{ks}\in H_{00}\otimes H_{00}$. Then we divide the proof into two steps. \noindent $\bullet$ \emph{Step} 1 ($\D(u_j)=\sum_{0\leq k\leqslant m-1}F^j_{kk}(u_k\otimes g^ku_{j-k})$). Recall that $u_jg=\lambda_jx^{-2d}gu_j$, where $\lambda_j$ is either $\gamma^j$ for all $j$ or $-\gamma^j$ for all $j$. The equations \begin{align*} \D(u_jg)&=\D(u_j)\D(g)=\sum_{0\leq k,s\leq m-1}F^j_{ks}(u_k\otimes g^su_{j-s})(g\otimes g)\\ &=\sum_{0\leq k,s\leq m-1}F^j_{ks}(\lambda_k x^{-2d}gu_k\otimes \lambda_{j-s} x^{-2d}g^{s+1}u_{j-s})\\ &=\sum_{0\leq k,s\leq m-1}\lambda_k\lambda_{j-s}( x^{-2d}g\otimes x^{-2d}g)F^j_{ks}(u_k\otimes g^s u_{j-s}) \end{align*} and \begin{align*} \D(\lambda_j x^{-2d}gu_j)&=\lambda_j ( x^{-2d}g\otimes x^{-2d}g) \sum_{0\leq k,s\leq m-1}F^j_{ks}(u_k\otimes g^su_{j-s})\\ &=\sum_{0\leq k,s\leq m-1}\lambda_j ( x^{-2d}g\otimes x^{-2d}g)F^j_{ks}(u_k\otimes g^s u_{j-s}) \end{align*} imply that $\lambda_j=\lambda_k\lambda_{j-s}$ whenever $F^j_{ks}\neq 0$. If $\lambda_j=-\gamma^j$ for all $j$, then we have $-\gamma^j=\lambda_j=\lambda_k\lambda_{j-s}=\gamma^{k+j-s}.$ This implies $k=s\pm m/2$. Applying $(\epsilon\otimes \id)$ to $\D(u_j)$, $$(\epsilon\otimes \id)\D(u_j)=(\epsilon\otimes \id)(F^j_{0,\; m/2})g^{m/2}u_{j-m/2}\neq u_j,$$ which is absurd. If $\lambda_j=\gamma^j$ for all $j$, then $\gamma^j=\lambda_j=\lambda_k\lambda_{j-s}=\gamma^{k+j-s}$. This implies $k=s$ (which is compatible with the equality $(\epsilon\otimes \id)\D(u_j)=u_j$). So we get $F^j_{ks}\neq 0$ only if $k=s$, and $\lambda_j=\gamma^j$ for all $j$. Thus we have $\D(u_j)=\sum_{0\leq k\leq m-1}F^j_{kk}(u_k\otimes g^ku_{j-k})$ for all $j$.
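As a quick consistency check (our addition; it is not needed for the argument), note that with $\lambda_j=\gamma^j$ every diagonal index is indeed allowed: $$\lambda_k\lambda_{j-k}=\gamma^{k}\gamma^{j-k}=\gamma^{j}=\lambda_j\qquad (0\leq k\leq m-1),$$ so each term $u_k\otimes g^ku_{j-k}$ is compatible with the relation $u_jg=\gamma^jx^{-2d}gu_j$, in agreement with the counit conditions $(\epsilon\otimes \id)\D(u_j)=(\id\otimes \epsilon)\D(u_j)=u_j$.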
\noindent $\bullet$ \emph{Step} 2 (There exist $f_{jk}, h_{jk} \in H_{00}$ with $h_{jk}$ monic such that $F^j_{kk}=f_{jk}\otimes h_{jk}$ for $0\leq j,k\leq m-1$). We replace $F^j_{kk}$ by $F^j_{k}$ for convenience. Since \begin{align*} (\D\otimes \id)\D(u_j)&=(\D\otimes \id)(\sum_{0\leq k\leq m-1}F^j_{k}(u_k\otimes g^ku_{j-k}))\\ &=\sum_{0\leq k\leq m-1}(\D\otimes \id)(F^j_{k})(\sum_{0\leq s\leq m-1}F_s^k(u_s\otimes g^su_{k-s})\otimes g^ku_{j-k})\\ &=\sum_{0\leq k, s\leqslant m-1}(\D\otimes \id)(F^j_{k})(F_s^k\otimes 1)(u_s\otimes g^su_{k-s}\otimes g^ku_{j-k}) \end{align*} and \begin{align*} (\id\otimes\D)\D(u_j)&=(\id\otimes\D)(\sum_{0\leq k\leq m-1}F^j_{k}(u_k\otimes g^ku_{j-k}))\\ &=\sum_{0\leq k\leq m-1}(\id\otimes\D)(F^j_{k})(u_k\otimes(\sum_{0\leq s\leq m-1}F_s^{j-k}(g^ku_s\otimes g^{k+s}u_{j-k-s})))\\ &=\sum_{0\leq k, s\leqslant m-1}(\id\otimes\D)(F^j_{s})(1\otimes F_{k-s}^{j-s})(u_s\otimes g^su_{k-s}\otimes g^ku_{j-k}), \end{align*} we have \begin{equation}\label{eqclaim3}(\D\otimes \id)(F^j_{k})(F_s^k\otimes 1)=(\id\otimes\D)(F^j_{s})(1\otimes F_{k-s}^{j-s})\end{equation} for all $0\leq j,k, s\leq m-1$. We begin with the case $j=k=s=0$. Let $F_0^0=\sum_{p,q}k_{pq}x^p\otimes x^q$. Comparing the equation \begin{align*} (\D\otimes \id)(F^0_{0})(F_0^0\otimes 1)&=(\sum_{p,q}k_{pq}x^p\otimes x^p\otimes x^q)(\sum_{p',q'}k_{p'q'}x^{p'}\otimes x^{q'}\otimes 1)\\ &=(\sum_{p,q,p',q'}k_{pq}k_{p'q'}x^{p+p'}\otimes x^{p+q'}\otimes x^q) \end{align*} with the equation \begin{align*} (\id\otimes\D)(F^0_{0})(1\otimes F_0^0)&=(\sum_{p,q}k_{pq}x^p\otimes x^q\otimes x^q)(\sum_{p',q'}k_{p'q'}1 \otimes x^{p'}\otimes x^{q'} )\\ &=(\sum_{p,q,p',q'}k_{pq}k_{p'q'}x^{p}\otimes x^{q+p'}\otimes x^{q+q'}), \end{align*} one can see that $p=q=0$ by comparing the degrees of $x$ in these two expressions. Then $F_0^0= 1\otimes 1$ by applying $(\epsilon\otimes \id)\D$ to $u_0$. Next, consider the case $k=s=0$. Write $F_0^j=\sum_{p,q}k_{pq}x^p\otimes x^q$. Similarly, we have $F_0^j= x^{a_j}\otimes 1$ for some $a_j\in \mathbb{Z}$ by the equation $$(\D\otimes \id)(F^j_{0})(F_0^0\otimes 1)=(\id\otimes\D)(F^j_{0})(1\otimes F_0^j).$$ Finally, write $F_k^j=\sum_{p,q}k_{pq}x^p\otimes x^q$ and consider the case $s=0$. Let $F_0^j= x^{a_j}\otimes 1$ and $F_0^k=x^{a_k}\otimes 1$. The equation \begin{align*} &(\sum_{p,q}k_{pq}x^{p+a_k}\otimes x^p\otimes x^q) =(\D\otimes \id)(F^j_{k})(F_0^k\otimes 1)\\ &\quad \quad=(\id\otimes\D)(F^j_{0})(1\otimes F_k^j) =(\sum_{p,q}k_{pq}x^{a_j}\otimes x^p\otimes x^q) \end{align*} shows that $p=a_j-a_k$, that is, $F_k^j=x^{c_{jk}}\otimes \beta_{jk}$ for some $c_{jk}\in \mathbb{Z}$ and $\beta_{jk}\in H_{00}$. By Steps 1 and 2, $F^j_k$ can be written as $f_{jk}\otimes h_{jk}$ with $h_{jk}$ monic after multiplying by a suitable scalar, where $f_{jk}, h_{jk}\in \k[x^{\pm 1}]$. That is, $$\D(u_j)=\sum_{k=0}^{m-1}f_{jk}u_k\otimes h_{jk}g^ku_{j-k},$$ where $f_{jk},h_{jk}\in \k[x^{\pm 1}]$ with $h_{jk}$ monic. \qed Since we have shown above that $\lambda_j=\gamma^j$ for all $j$, we can improve Claim 2 as follows. \noindent \emph{Claim 2'. We have $u_jg=\gamma^{j} x^{-2d}gu_j$ for $0\leq j\leq m-1$.} By Claim 2', we have a unified formula in $H$: for all $s\in \mathbbm{Z}$, \begin{equation}\label{r7} u_jg^s=\gamma^{js}x^{-2sd}g^su_j.\end{equation} \noindent\emph{Claim 4.
We have $\phi_{m_i,j}=1-\gamma^{-m_i(m_i+j)}x^{m_id}=1-\gamma^{-m_i^2(1+j_i)}x^{m_id}$ for $1\leq i\leq \theta$ and $0\leq j\leq m-1$.} \emph{Proof of Claim 4:} By Claim 3, there are polynomials $f_{0j},h_{0j}$ such that $$\D(u_0)=u_0\otimes u_0+ f_{01}u_1\otimes h_{01}gu_{m-1}+ \cdots + f_{0, m-1}u_{m-1}\otimes h_{0, m-1}g^{m-1}u_1.$$ Firstly, we will show $\phi_{m_i,0}=1-\gamma^{-m_i^2}x^{m_id}$ by considering the equations $$\D(y_{m_i}u_0)_{11\otimes (1,1+2m_i)}=\D(\xi_{m_i} x^{m_id} u_0y_{m_i})_{11\otimes (1,1+2m_i)}=\D(\phi_{m_i,0}u_{m_i})_{11\otimes (1,1+2m_i)}.$$ Direct computations show that \begin{align*} &\D(y_{m_i}u_0)_{11\otimes (1,1+2m_i)}\\ &=u_0\otimes y_{m_i}u_0+y_{m_i} f_{0, (e_i-1)m_i} u_{(e_i-1)m_i}\otimes g^{m_i} h_{0, (e_i-1)m_i} g^{(e_i-1)m_i}u_{-(e_i-1)m_i}\\ &=u_0\otimes \phi_{m_i,0}u_{m_i}+ f_{0, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i} u_{0}\otimes x^{e_im_id} h_{0, (e_i-1)m_i} u_{-(e_i-1)m_i},\\ &\D(\xi_{m_i} x^{m_id} u_0y_{m_i})_{11\otimes (1,1+2m_i)}=\xi_{m_i} x^{m_id}u_0\otimes x^{m_id}u_0y_{m_i}\\ &\quad\quad + \xi_{m_i} x^{m_id} f_{0, (e_i-1)m_i} u_{(e_i-1)m_i}y_{m_i}\otimes x^{m_id} h_{0, (e_i-1)m_i} g^{(e_i-1)m_i}u_{-(e_i-1)m_i} g^{m_i}\\ &= x^{m_id}u_0\otimes \phi_{m_i,0}u_{m_i}+ f_{0, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i} u_{0}\otimes \gamma^{m_i^2}x^{(e_i-1)m_id} h_{0, (e_i-1)m_i}u_{-(e_i-1)m_i}. \end{align*} Owing to $\D(y_{m_i}u_0)_{11\otimes (1,1+2m_i)}=\D(\xi_{m_i} x^{m_id} u_0y_{m_i})_{11\otimes (1,1+2m_i)}$, \begin{align*}&(1-x^{m_id})u_0\otimes \phi_{m_i,0}u_{m_i}\\ & \quad + f_{0, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i}u_0\otimes (x^{m_id}-\gamma^{m_i^2}) x^{(e_i-1)m_id} h_{0, (e_i-1)m_i} u_{-(e_i-1)m_i}\\ &=0.\end{align*} Thus we can assume $\phi_{m_i,0}=c_0 (x^{m_id}-\gamma^{m_i^2}) x^{(e_i-1)m_id} h_{0, (e_i-1)m_i}$ for some $0\neq c_0\in \k$. Then $1-x^{m_id}=-c_0^{-1}f_{0, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i}$. Therefore, \begin{align*} &\D(y_{m_i}u_0)_{11\otimes (1,1+2m_i)}\\ &=u_0\otimes \phi_{m_i,0}u_{m_i}-c_0(1-x^{m_id})u_0\otimes \frac{1}{c_0}\frac{x^{m_id}\phi_{m_i,0}}{x^{m_id}-\gamma^{m_i^2}} u_{-(e_i-1)m_i}\\ &=u_0\otimes (1-\frac{x^{m_id}}{x^{m_id}-\gamma^{m_i^2}})\phi_{m_i,0}u_{-(e_i-1)m_i} +x^{m_id}u_{0}\otimes \frac{x^{m_id}\phi_{m_i,0}}{x^{m_id}-\gamma^{m_i^2}} u_{-(e_i-1)m_i}\\ &=u_0\otimes -\frac{\gamma^{m_i^2}}{x^{m_id}-\gamma^{m_i^2}}\phi_{m_i,0}u_{-(e_i-1)m_i} +x^{m_id}u_{0}\otimes \frac{x^{m_id}\phi_{m_i,0}}{x^{m_id}-\gamma^{m_i^2}} u_{-(e_i-1)m_i}, \end{align*} where $\frac{\phi_{m_i,0}}{x^{m_id}-\gamma^{m_i^2}}$ is understood as $c_0 x^{(e_i-1)m_id}h_{0,(e_i-1)m_i}$. Note that $\D(y_{m_i}u_0)_{11\otimes (1,1+2m_i)}=\D(\phi_{m_i,0}u_{m_i})_{11\otimes (1,1+2m_i)}=\D(\phi_{m_i,0})(f_{m_i,0}u_0\otimes u_{m_i})$. From this, we get $\phi_{m_i,0}=1+c x^{m_id}$ for some $c\in \k$. Then it is not hard to see that $f_{m_i, 0}=1, h_{0, (e_i-1)m_i}=x^{-(e_i-1)m_id}$ and $c=-\gamma^{-m_i^2}$. So $\phi_{m_i,0}=1-\gamma^{-m_i^2}x^{m_id}$. Secondly, we want to determine $\phi_{m_i,j}$ for $0\leq j\leq m-1$. We note that we always have $h_{j0}=f_{jj}=1$ due to $(\e\otimes \id)\D(u_j)=(\id\otimes \e)\D(u_j)=u_j$. To determine $\phi_{m_i,j}$, we will prove the fact \begin{equation}\label{r6} f_{j0}=1 \end{equation} for all $0\leqslant j\leqslant m-1$ at the same time. We proceed by induction. We already know that $f_{00}=h_{00}=f_{m_i 0}=1$. Assume now that $f_{j, 0}=1$. We consider the case $j+m_i$.
Similarly, direct computations show that \begin{align*} &\D(y_{m_i}u_j)_{11\otimes (1, 1+2j+2m_i)}\\ &=u_0\otimes y_{m_i}u_j+y_{m_i} f_{j, (e_i-1)m_i} u_{(e_i-1)m_i}\otimes g^{m_i} h_{j, (e_i-1)m_i} g^{(e_i-1)m_i}u_{m_i+j}\\ &=u_0\otimes \phi_{m_i,j}u_{m_i+j}+f_{j, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i} u_{0}\otimes x^{e_im_id} h_{j, (e_i-1)m_i}u_{m_i+j},\\ &\D(\xi_{m_i} x^{m_id} u_jy_{m_i})_{11\otimes (1, 1+2j+2m_i)}\\ &=\xi_{m_i} x^{m_id}u_0\otimes x^{m_id}u_jy_{m_i}+ \xi_{m_i} x^{m_id} f_{j, (e_i-1)m_i} u_{(e_i-1)m_i} y_{m_i}\otimes x^{m_id} h_{j, (e_i-1)m_i} g^{(e_i-1)m_i}u_{j+m_i} g^{m_i}\\ &=x^{m_id}u_0\otimes \phi_{m_i,j}u_{m_i+j}+ f_{j, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i} u_{0} \otimes \gamma^{m_i(j+m_i)}x^{(e_i-1)m_id} h_{j, (e_i-1)m_i} u_{j+m_i}. \end{align*} By $\D(y_{m_i}u_j)_{11\otimes (1, 1+2j+2m_i)}=\D(\xi_{m_i} x^{m_id} u_jy_{m_i})_{11\otimes (1, 1+2j+2m_i)}$, \begin{align*}&(1-x^{m_id})u_0\otimes \phi_{m_i,j}u_{m_i+j}\\ &\quad + f_{j, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i}u_0\otimes (x^{m_id}-\gamma^{m_i(m_i+j)}) x^{(e_i-1)m_id} h_{j, (e_i-1)m_i} u_{j+m_i}\\ &=0.\end{align*} Thus we can assume $\phi_{m_i,j}=c_j (x^{m_id}-\gamma^{m_i(m_i+j)}) x^{(e_i-1)m_id} h_{j, (e_i-1)m_i}$ for some $0\neq c_j\in \k$. Then $1-x^{m_id}=-c_j^{-1}f_{j, (e_i-1)m_i}\phi_{m_i,(e_i-1)m_i}$. Therefore \begin{align*} &\D(y_{m_i}u_j)_{11\otimes (1, 1+2j+2m_i)}\\ &=u_0\otimes \phi_{m_i,j}u_{m_i+j}-c_j(1-x^{m_id})u_0\otimes \frac{1}{c_j}\frac{x^{m_id}}{x^{m_id}-\gamma^{m_i(m_i+j)}} \phi_{m_i,j} u_{m_i+j}\\ &=u_0\otimes \frac{-\gamma^{m_i(m_i+j)}}{x^{m_id}-\gamma^{m_i(m_i+j)}}\phi_{m_i,j}u_{m_i+j} +x^{m_id}u_0\otimes \frac{x^{m_id}}{x^{m_id}-\gamma^{m_i(m_i+j)}} \phi_{m_i,j} u_{m_i+j}. \end{align*} Note that $\D(y_{m_i}u_j)_{11\otimes (1, 1+2j+2m_i)}=\D(\phi_{m_i,j}u_{m_i+j})_{11\otimes (1, 1+2j+2m_i)}=\D(\phi_{m_i,j})(f_{m_i+j, 0}u_0\otimes h_{m_i+j, 0}u_{m_i+j})$. Comparing the first components of $$\D(y_{m_i}u_j)_{11\otimes (1, 1+2j+2m_i)}\;\; \textrm{and}\;\; \D(\phi_{m_i,j}u_{m_i+j})_{11\otimes (1, 1+2j+2m_i)},$$ we get $\phi_{m_i,j}=1-\gamma^{-m_i(m_i+j)}x^{m_id}$ similarly, and it is not hard to see that $f_{m_i+j, 0}=1$. Since $i$ here is arbitrary and $m_1,\ldots,m_\theta$ generate $0,1,\ldots,m-1$, this proves $f_{j, 0}=h_{j, 0}=1$ for all $0\leq j\leq m-1$ at the same time.\qed \noindent \emph{Claim 5. The coproduct of $H$ is given by $$\D(u_j)=\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k\otimes x^{-kd}g^ku_{j-k}$$ for $0\leq j\leqslant m-1$.} \emph{Proof of Claim 5:} By Claim 3, $\D(u_j)=\sum_{k=0}^{m-1}f_{jk}u_k\otimes h_{jk}g^ku_{j-k}$. So, to show this claim, it is enough to determine the explicit form of every $f_{jk}$ and $h_{jk}$. By \eqref{r6} and the sentence before it, $f_{j,0}=h_{j,0}=1$ for all $0\leq j\leq m-1$. We will prove that $f_{jk}=\gamma^{k(j-k)}$ and $h_{jk}=x^{-kd}$ for all $0\leqslant j, k \leqslant m-1$ by induction. So it is enough to show that $f_{j,k+m_i}=\gamma^{(k+m_i)(j-k-m_i)}$ and $h_{j, k+m_i}=x^{-(k+m_i)d}$ for all $1\leq i\leq \theta$, under the hypothesis that $f_{jk}=\gamma^{k(j-k)}$ and $h_{jk}=x^{-kd}$.
In fact, for $1\leq i\leq \theta$, \begin{align*} &\D(y_{m_i}u_j)_{(1, 1+2k+2m_i)\otimes (1+2k+2m_i, 1+2j+2m_i)}\\ &=y_{m_i}f_{jk}u_k\otimes g^{m_i}h_{jk}g^ku_{j-k}+ f_{j, k+m_i} u_{k+m_i}\otimes y_{m_i} h_{j, k+m_i} g^{k+m_i}u_{j-k-m_i}\\ &=f_{jk}y_{m_i}u_k\otimes h_{jk}g^{k+m_i}u_{j-k}+ f_{j, k+m_i} u_{k+m_i}\otimes \gamma^{(k+m_i)m_i} h_{j, k+m_i} g^{k+m_i}y_{m_i}u_{j-k-m_i},\\ &\D(\xi_{m_i} x^{m_id} u_jy_{m_i})_{(1, 1+2k+2m_i)\otimes (1+2k+2m_i, 1+2j+2m_i)}\\ &=\xi_{m_i} x^{m_id}f_{jk}u_ky_{m_i}\otimes x^{m_id}h_{jk}g^ku_{j-k}g^{m_i}\\ &\quad+ \xi_{m_i} x^{m_id}f_{j, k+m_i} u_{k+m_i}\otimes x^{m_id} h_{j, k+m_i} g^{k+m_i}u_{j-k-m_i} y_{m_i}\\ &=f_{jk}y_{m_i}u_k\otimes \gamma^{(j-k)m_i}x^{-m_id}h_{jk}g^{m_i+k}u_{j-k}\\ &\quad+ x^{m_id}f_{j, k+m_i} u_{k+m_i}\otimes h_{j, k+m_i} g^{k+m_i}y_{m_i}u_{j-k-m_i}. \end{align*} Since they are equal, \begin{align*}&f_{jk}y_{m_i}u_k\otimes (1-\gamma^{(j-k)m_i}x^{-m_id})h_{jk}g^{m_i+k}u_{j-k}\\ &= (x^{m_id}-\gamma^{(k+m_i)m_i})f_{j, k+m_i} u_{k+m_i}\otimes h_{j, k+m_i} g^{k+m_i}y_{m_i} u_{j-k-m_i}.\end{align*} Using induction and the expression of $\phi_{m_i,k}$, we have \begin{align*}&\gamma^{k(j-k)}(1-\gamma^{-m_i(m_i+k)}x^{m_id})u_{k+m_i}\otimes (1-\gamma^{(j-k)m_i}x^{-m_id})x^{-kd}g^{m_i+k}u_{j-k}\\& = \gamma^{k(j-k)}(1-\gamma^{-m_i(m_i+k)}x^{m_id})u_{k+m_i}\otimes (x^{m_id}-\gamma^{(j-k)m_i})x^{-(k+m_i)d}g^{m_i+k}u_{j-k}\\ &= (x^{m_id}-\gamma^{(k+m_i)m_i})f_{j, k+m_i} u_{k+m_i}\otimes (1-\gamma^{-(j-k)m_i}x^{m_id})h_{j, k+m_i} g^{k+m_i}u_{j-k}.\end{align*} This implies that $h_{j, k+m_i}=x^{-(k+m_i)d}$ and $$f_{j,k+m_i}=\gamma^{k(j-k)-m_i^2-m_ik+m_ij-m_ik}=\gamma^{(k+m_i)(j-k-m_i)}.$$\qed \noindent \emph{Claim 6. For $0\leqslant j,l\leqslant m-1$, the multiplication between $u_j$ and $u_l$ satisfies $$u_ju_l=\frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}[j_i,e_i-2-l_i]_{m_i}y_{j+l}g$$ for some $a\in \mathbbm{Z}$, where $[-,-]_{m_i}$ is defined as in \eqref{eqpre} and $j+l$ is interpreted \emph{mod} $m$.} \emph{Proof of Claim 6:} We first need to consider the relation between $u_0^2$ and $u_ju_{m-j}$ for all $1\leqslant j \leqslant m-1$. We remark that, as before, for any $k\in \Z$ we write $u_{k}:=u_{\overline{k}}$ where $\overline{k}$ is the remainder of $k$ upon division by $m$. Thus $u_j=u_{j_1m_1+\ldots+j_{\theta}m_{\theta}}$ and $u_{m-j}=u_{(e_1-j_1)m_1+\ldots+(e_\theta-j_\theta)m_{\theta}}.$ By definition, $x^{m_id}\overline{\phi_{m_i,sm_i}}=-\gamma^{-m_i^2(s+1)}\phi_{m_i,(e_i-s-2)m_i}$ for all $s$. Then \begin{align*} &y_{m_1}^{e_1}y_{m_2}^{e_2}\cdots y_{m_\theta}^{e_\theta} u_0^2\\ &=\xi_{m_1}^{e_1-j_1}\xi_{m_2}^{e_2-j_2}\cdots \xi_{m_\theta}^{e_\theta-j_\theta} x^{(e_1-j_1)m_1d+\ldots +(e_{\theta}-j_\theta)m_\theta d} y_{j}u_0y_{m-j}u_0\\ &=\prod_{i=1}^{\theta}[\xi_{m_i}^{e_i-j_i}x^{(e_i-j_i)m_id}\phi_{m_{i},0}\cdots \phi_{m_i,(j_i-1)m_i}]u_j \prod_{i=1}^{\theta}[\phi_{m_{i},0}\cdots \phi_{m_i,(e_i-j_i-1)m_i}]u_{m-j}\\ &=\prod_{i=1}^{\theta}[\xi_{m_i}^{e_i-j_i}x^{(e_i-j_i)m_id}\phi_{m_{i},0}\cdots \phi_{m_i,(j_i-1)m_i}\overline{\phi_{m_{i},0}}\cdots \overline{\phi_{m_i,(e_i-j_i-1)m_i}}]u_j u_{m-j}\\ &=\prod_{i=1}^{\theta}[(-1)^{e_i-j_i}\xi_{m_i}^{e_i-j_i} \gamma^{-m_i^2\frac{(e_i-j_i)(e_i-j_i+1)}{2}}\phi_{m_{i},0}\cdots \phi_{m_i,(e_i-2)m_i}{\phi_{m_{i},(j_i-1)m_i}}]u_j u_{m-j}.
\end{align*} By $\phi_{m_{i},0}\cdots \phi_{m_i,(e_i-2)m_i}\phi_{m_i,(e_i-1)m_i}=1-x^{e_im_id}$ (see Lemma \ref{l3.4} (2)), we have \begin{align*} &\phi_{m_1,(e_1-1)m_1}\cdots \phi_{m_\theta,(e_\theta-1)m_\theta} y_{m_1}^{e_1}\cdots y_{m_\theta}^{e_\theta} u_0^2\\ &=\prod_{i=1}^{\theta}[(-1)^{e_i-j_i}\xi_{m_i}^{e_i-j_i} \gamma^{-m_i^2\frac{(e_i-j_i)(e_i-j_i+1)}{2}}(1-x^{e_im_id}){\phi_{m_{i},(j_i-1)m_i}}]u_ju_{m-j} .\end{align*} Due to $y_{m_i}^{e_i}=1-x^{e_im_id}$, we get the desired formula \begin{equation}\label{r8} \prod_{i=1}^{\theta}[\phi_{m_i,(e_i-1)m_i}]u_0^2=\prod_{i=1}^{\theta}[(-1)^{e_i-j_i}\xi_{m_i}^{e_i-j_i} \gamma^{-m_i^2\frac{(e_i-j_i)(e_i-j_i+1)}{2}}{\phi_{m_{i},(j_i-1)m_i}}]u_ju_{m-j} . \end{equation} Since $u_0^2, u_ju_{m-j}\in H_{22}=\k[x^{\pm 1}]g$, we may assume $u_0^2=\alpha_0 g, u_ju_{m-j}=\alpha_j g$ for some $\alpha_0, \alpha_j \in \k[x^{\pm 1}]$ for all $1\leqslant j \leqslant m-1$. Then Equation \eqref{r8} implies $\alpha_0=\alpha \prod_{i=1}^{\theta}[\phi_{m_{i},0}\cdots \phi_{m_i,(e_i-2)m_i}]$ for some $\alpha \in \k[x^{\pm 1}]$. We claim that $\alpha$ is invertible. Indeed, by $$\prod_{i=1}^{\theta}[\phi_{m_i,(e_i-1)m_i}]\alpha_0=\prod_{i=1}^{\theta}[(-1)^{e_i-j_i}\xi_{m_i}^{e_i-j_i} \gamma^{-m_i^2\frac{(e_i-j_i)(e_i-j_i+1)}{2}}{\phi_{m_{i},(j_i-1)m_i}}] \alpha_j,$$ we have $$\alpha_j=\prod_{i=1}^{\theta}[(-1)^{j_i-e_i}\xi_{m_i}^{j_i-e_i} \gamma^{m_i^2\frac{(e_i-j_i)(e_i-j_i+1)}{2}}]j_i-1,j_i-1[_{m_i}]\alpha.$$ Then $$H_{11}\cdot H_{11} + \sum_{j=1}^{m-1}H_{1, 1+2j}\cdot H_{1, 1+2(m-j)}\subseteq \alpha H_{22}.$$ By the strong grading of $H$, $$H_{22}=H_{11}\cdot H_{11} + \sum_{j=1}^{m-1}H_{1, 1+2j}\cdot H_{1, 1+2(m-j)},$$ which shows that $\alpha$ must be invertible. Since $\epsilon(\alpha_{0})=1, \epsilon(\phi_{m_i,0}\cdots \phi_{m_i,(e_i-2)m_i})=e_i$ and $m=e_1\cdots e_\theta$, we may assume $\alpha_0=\frac{1}{m}x^a \prod_{i=1}^{\theta}[\phi_{m_i,0}\cdots \phi_{m_i,(e_i-2)m_i}]$ for some integer $a$. Thus \begin{align*} u_ju_{m-j}&=\frac{1}{m}x^{a}\prod_{i=1}^{\theta}[(-1)^{j_i-e_i}\xi_{m_i}^{j_i-e_i} \gamma^{m_i^2\frac{(e_i-j_i)(e_i-j_i+1)}{2}}]j_i-1,j_i-1[_{m_i}]\;g. \end{align*} Now \begin{align*} &y_{j}y_{l}u_0^2\\ &=\prod_{i=1}^{\theta}\xi_{m_i}^{l_i}x^{l_im_id} y_{j}u_0y_{l}u_{0}\\ &=\prod_{i=1}^{\theta}[\xi_{m_i}^{l_i}x^{l_im_id}\phi_{m_i,0}\phi_{m_i,m_i}\cdots \phi_{m_i,(j_i-1)m_i}]u_j \prod_{i=1}^{\theta}[\phi_{m_i,0}\phi_{m_i,m_i}\cdots \phi_{m_i,(l_i-1)m_i}]u_l\\ &=\prod_{i=1}^{\theta}[\xi_{m_i}^{l_i}x^{l_im_id}\phi_{m_i,0}\cdots \phi_{m_i,(j_i-1)m_i}\overline{\phi_{m_i,0}}\cdots \overline{\phi_{m_i,(l_i-1)m_i}}]u_ju_{l}\\ &=\prod_{i=1}^{\theta}[(-1)^{l_i}\xi_{m_i}^{l_i}\gamma^{-m_i^2\frac{l_i(l_i+1)}{2}} \phi_{m_i,0}\cdots \phi_{m_i,(j_i-1)m_i}{\phi_{m_i,(e_i-2)m_i}}\cdots {\phi_{m_i,(e_i-1-l_i)m_i}}]u_ju_{l} \end{align*} For each $1\leq i\leq \theta$, we find that \begin{align*}&\phi_{m_i,0}\cdots \phi_{m_i,(j_i-1)m_i}{\phi_{m_i,(e_i-2)m_i}}\cdots {\phi_{m_i,(e_i-1-l_i)m_i}}\\ &=\begin{cases} \phi_{m_i,0}\cdots \phi_{m_i,(j_i-1)m_i}\phi_{m_i,(e_i-1-l_i)m_i}\cdots\phi_{m_i,(e_i-2)m_i}, & \textrm{if}\; j_i+l_i\leq e_i-2 \\\phi_{m_i,0}\cdots \phi_{m_i,(e_i-2)m_i}, & \textrm{if}\; j_i+l_i=e_i-1 \\ \phi_{m_i,0}\cdots \phi_{m_i,(j_i-1)m_i}\phi_{m_i,(e_i-1-l_i)m_i}\cdots\phi_{m_i,(e_i-1)m_i}, & \textrm{if}\; j_i+l_i\geq e_i.
\end{cases} \end{align*} Using the same method as in the computation of $u_ju_{m-j}$ above, together with the notations introduced in equations \eqref{eqomit} and \eqref{eqpre}, we obtain a unified expression: \begin{align*} u_ju_l&=\frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}[j_i,e_i-2-l_i]_{m_i}y_{j+l}g\\ &=\frac{1}{m}x^{a} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}]-1-l_i,j_i-1[_{m_i}y_{j+l}g \end{align*} for all $0\leq j, l\leq m-1$. \qed \noindent \emph{Claim 7. We have $\xi_{m_i}^2=\gamma^{m_i}, \ a=-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d$ and $$S(u_j)=x^{b}g^{m-1}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}]u_j$$ for $0\leqslant j\leqslant m-1$, where $b=(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d.$} \emph{Proof of Claim 7:} By Lemma \ref{l2.8} (3), $S(H_{ij})=H_{-j, -i}$ and thus $S(u_0)=hg^{m-1}u_0$ for some $h\in \k[x^{\pm 1}]$. Combining \begin{align*} S(y_{m_i}u_0)&=S(u_0)S(y_{m_i})=hg^{m-1}u_0 (-y_{m_i}g^{-m_i})=-\xi_{m_i}^{-1} x^{-m_id}hg^{m-1}y_{m_i}u_0g^{-m_i}\\ &=-\xi_{m_i}^{-1}\gamma^{-m_i^2} x^{m_id}hg^{m-1-m_i}y_{m_i}u_0=-\xi_{m_i}^{-1}\gamma^{-m_i^2} x^{m_id}hg^{m-1-m_i}\phi_{m_i,0}u_{m_i} \end{align*} with $$S(y_{m_i}u_0)=S(\phi_{m_i,0} u_{m_i})=S(u_{m_i})S(\phi_{m_i,0})=\phi_{m_i,0} S(u_{m_i}),$$ we get $S(u_{m_i})=-\xi_{m_i}^{-1} \gamma^{-m_i^2} x^{m_id} h g^{m-1-m_i} u_{m_i}$. The computation above indicates that we can prove $$S(u_j)=hg^{m-1}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}]u_j$$ by induction. In fact, in order to prove the above formula for the antipode, it is enough to show that it remains valid for $j+m_i$, for all $1\leq i\leq \theta$, under the assumption that it holds for $j$. By combining \begin{align*} &S(y_{m_i}u_j)\\ &=S(u_j)S(y_{m_i})\\ &=hg^{m-1}\prod_{s=1}^{\theta}[(-1)^{j_s}\xi_{m_s}^{-j_s}\gamma^{- m_s^2\frac{j_s(j_s+1)}{2}} x^{j_sm_sd} g^{-j_sm_s}]u_j (-y_{m_i}g^{-m_i})\\ &=-\xi_{m_i}^{-1}x^{-m_id}hg^{m-1}\prod_{s=1}^{\theta}[(-1)^{j_s}\xi_{m_s}^{-j_s}\gamma^{- m_s^2\frac{j_s(j_s+1)}{2}} x^{j_sm_sd} g^{-j_sm_s}]y_{m_i}u_jg^{-m_i}\\ &= -\xi_{m_i}^{-1}x^{-m_id}hg^{m-1}\prod_{s=1}^{\theta}[(-1)^{j_s}\xi_{m_s}^{-j_s}\gamma^{- m_s^2\frac{j_s(j_s+1)}{2}} x^{j_sm_sd} g^{-j_sm_s}]y_{m_i}\gamma^{-m_i^2j_i}x^{2m_id}g^{-m_i}u_j\\ &= -\xi_{m_i}^{-1}x^{m_id}hg^{m-1}\prod_{s=1}^{\theta}[(-1)^{j_s}\xi_{m_s}^{-j_s}\gamma^{- m_s^2\frac{j_s(j_s+1)}{2}} x^{j_sm_sd} g^{-j_sm_s}]\gamma^{-m_i^2(j_i+1)}g^{-m_i}y_{m_i}u_j\\ &= -\xi_{m_i}^{-1}x^{m_id}hg^{m-1}\prod_{s=1}^{\theta}[(-1)^{j_s}\xi_{m_s}^{-j_s}\gamma^{- m_s^2\frac{j_s(j_s+1)}{2}} x^{j_sm_sd} g^{-j_sm_s}]\gamma^{-m_i^2(j_i+1)}g^{-m_i}\phi_{m_i,j}u_{j+m_i} \end{align*} with $$S(y_{m_i}u_j)=S(\phi_{m_i,j} u_{j+m_i})=S(u_{j+m_i})S(\phi_{m_i,j})=\phi_{m_i,j} S(u_{j+m_i}),$$ we find that $$S(u_{j+m_i})=hg^{m-1}\prod_{s=1}^{\theta}[(-1)^{(j+m_i)_s}\xi_{m_s}^{-(j+m_i)_s}\gamma^{- m_s^2\frac{(j+m_i)_s((j+m_i)_s+1)}{2}} x^{(j+m_i)_sm_sd} g^{-(j+m_i)_sm_s}]u_{j+m_i}.$$ In order to determine the relationship between the $\xi_{m_i}$'s and $\gamma$, we consider the equality $(\Id*S)(u_{m_i})=0$.
By computation, \begin{align*} &(\Id*S)(u_{m_i})\\ &=\sum_{j=0}^{m-1}\gamma^{j(m_i-j)}u_jS(x^{-jd}g^ju_{m_i-j}) \\ &=\sum_{j=0}^{m-1}\gamma^{j(m_i-j)}u_jhg^{m-1} \prod_{s=1}^{\theta} [(-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{-m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}} \\ &\quad\quad\quad x^{(m_i-j)_sm_sd}g^{-(m_i-j)_sm_s}]u_{m_i-j} g^{-j}x^{jd}\\ &=\sum_{j=0}^{m-1}\gamma^{-j(m_i-j)}\overline{h}u_jg^{m-1} \prod_{s=1}^{\theta} [(-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{-m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}} \\ &\quad\quad\quad x^{(m_i-j)_sm_sd}g^{-(m_i-j)_sm_s}]\gamma^{(m_i-j)j}x^{jd}g^{-j}u_{m_i-j} \\ &=\sum_{j=0}^{m-1}\overline{h}u_jg^{m-1} \prod_{s=1}^{\theta} [(-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{-m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}}] \\ &\quad\quad\quad x^{m_id}g^{-m_i}u_{m_i-j}\\ &=\sum_{j=0}^{m-1} \prod_{s=1}^{\theta} [(-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{-m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}}] \\ &\quad\quad\quad \overline{h}x^{-m_id}\gamma^{j(-1-m_i)}x^{-2(m-1-m_i)d}g^{m-1-m_i}u_ju_{m_i-j}\\ &=\sum_{j=0}^{m-1} \prod_{s=1}^{\theta} [(-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{-m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}}] \\ &\quad\quad\quad \gamma^{j(-1-m_i)}x^{(-2m+2+m_i)d}\overline{h}g^{m-1-m_i}u_ju_{m_i-j}\\ &=\sum_{j=0}^{m-1} \prod_{s=1}^{\theta} [(-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{-m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}}] \gamma^{j(-1-m_i)}x^{(-2m+2+m_i)d}\overline{h}g^{m-1-m_i}\\ &\quad\quad\quad \frac{1}{m}x^{a} \prod_{s=1}^{\theta} (-1)^{(m_i-j)_s}\xi_{m_s}^{-(m_i-j)_s}\gamma^{m_s^2\frac{(m_i-j)_s((m_i-j)_s+1)}{2}} [j_s,e_s-2-(m_i-j)_s]_{m_s}y_{m_i}g\\ &=\frac{1}{m}\gamma^{m_i}x^{(-2m+2+m_i)d+a}\overline{h}g^{m-m_i}y_{m_i}\sum_{j=0}^{m-1} \prod_{s=1}^{\theta} [\xi_{m_s}^{-2(m_i-j)_s}\gamma^{j(-1-m_i)}[j_s,e_s-2-(m_i-j)_s]_{m_s}]\\ &=\frac{1}{m}\gamma^{m_i}\xi_{m_i}^{-2}x^{(-2m+2+m_i)d+a}\overline{h}g^{m-m_i}y_{m_i}\\ &\prod_{s=1,s\neq i}^{\theta}[ \sum_{j_s=0}^{e_s-1} \xi_{m_s}^{2j_s}\gamma^{-j_sm_s}]j_s-1,j_s-1[_{m_s}]\sum_{j_i=0}^{e_i-1} \xi_{m_i}^{2j_i}\gamma^{-j_im_i(1+m_i)}]j_i-2,j_i-1[_{m_i} \end{align*} where Equation \eqref{r7} is used. By Lemma \ref{l3.5} (1), each $\sum_{j_s=0}^{e_s-1} \xi_{m_s}^{2j_s}\gamma^{-j_sm_s}]j_s-1,j_s-1[_{m_s}\neq 0$. Thus $$(\Id*S)(u_{m_i})=0 \Leftrightarrow \sum_{j_i=0}^{e_i-1} \xi_{m_i}^{2j_i}\gamma^{-j_im_i(1+m_i)}]j_i-2,j_i-1[_{m_i}=0.$$ This forces $\xi_{m_i}^2=\gamma^{m_i}$ by Lemma \ref{l3.5} (2).
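Let us also record a small consistency check (our addition): the relation $\xi_{m_i}^{2}=\gamma^{m_i}$ is compatible with the normalization $\xi_{m_i}^{e_i}=-1$ obtained in Claim 1. Indeed, $$\xi_{m_i}^{2e_i}=(\xi_{m_i}^{e_i})^{2}=(-1)^{2}=1 \quad\textrm{and}\quad \xi_{m_i}^{2e_i}=(\xi_{m_i}^{2})^{e_i}=\gamma^{e_im_i}=1,$$ where the last equality holds because $m\mid e_im_i$ and $\gamma$ is a primitive $m$th root of unity.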
Next, we will determine the expressions of $h$ and $a$ by considering the equations $$(S*\Id)(u_0)=(\Id*S)(u_0)=1.$$ Indeed, \begin{align*} &(S*\Id)(u_0)\\ &=\sum_{j=0}^{m-1}S(\gamma^{-j^2}u_j)x^{-jd}g^ju_{-j} \\ &=\sum_{j=0}^{m-1}\gamma^{-j^2} hg^{m-1} \prod_{i=1}^{\theta} [(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{-m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id}g^{-j_im_i}]u_jx^{-jd}g^ju_{-j}\\ &=hg^{m-1}\sum_{j=0}^{m-1}\prod_{i=1}^{\theta} [(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{-m_i^2\frac{j_i(j_i+1)}{2}}]u_ju_{-j}\\ &=hg^{m-1}\sum_{j=0}^{m-1}\prod_{i=1}^{\theta} [(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{-m_i^2\frac{j_i(j_i+1)}{2}}] \frac{1}{m}x^{a} \\ &\quad\prod_{i=1}^{\theta} [(-1)^{(-j)_i}\xi_{m_i}^{-(-j)_i}\gamma^{m_i^2\frac{(-j)_i((-j)_i+1)}{2}}]j_i-1,j_i-1[_{m_i}]g\\ &=\frac{1}{m}x^{a}hg^{m}\sum_{j=0}^{m-1}\prod_{i=1}^{\theta} [(-1)^{e_i}\xi_{m_i}^{-e_i}\gamma^{-m_i^2(\frac{e_i(e_i+1)}{2}-j_i)}]j_i-1,j_i-1[_{m_i}] \\ &=\frac{1}{m}x^{a}hg^{m}(-1)^{\sum_{i=1}^{\theta}(m_i-1)(e_i+1)} \prod_{i=1}^{\theta}\sum_{j_i=0}^{e_i-1} \gamma^{-m_i^2j_i}]j_i-1,j_i-1[_{m_i}\\ &=\frac{1}{m}x^{a}hg^{m}(-1)^{\sum_{i=1}^{\theta}(m_i-1)(e_i+1)} \prod_{i=1}^{\theta}e_ix^{(e_i-1)m_id} \quad (\,\textrm{by Lemma}\ \ref{l3.4}\ (3))\\ &=(-1)^{\sum_{i=1}^{\theta}(m_i-1)(e_i+1)}x^{a+\sum_{i=1}^{\theta}(e_i-1)m_id+md}h, \end{align*} \begin{align*} &(\Id*S)(u_0)\\ &=\sum_{j=0}^{m-1}\gamma^{-j^2}u_j S(x^{-jd}g^ju_{-j}) \\ &=\sum_{j=0}^{m-1}\gamma^{-j^2}u_j S(u_{-j})g^{-j}x^{jd} \\ &=\sum_{j=0}^{m-1}\gamma^{-j^2}u_j hg^{m-1} \prod_{i=1}^{\theta} [(-1)^{(-j)_i}\xi_{m_i}^{-(-j)_i}\gamma^{-m_i^2\frac{(-j)_i((-j)_i+1)}{2}} x^{(-j)_im_id}g^{-(-j)_im_i}]u_{-j}g^{-j}x^{jd}\\ &=\sum_{j=0}^{m-1}u_j \prod_{i=1}^{\theta} [(-1)^{(-j)_i}\xi_{m_i}^{-(-j)_i}\gamma^{-m_i^2\frac{(-j)_i((-j)_i+1)}{2}} ]hg^{m-1}u_{-j} \\ &= x^{(2-2m)d}\overline{h}g^{m-1}\sum_{j=0}^{m-1} \gamma^{-j}\prod_{i=1}^{\theta} [(-1)^{(-j)_i}\xi_{m_i}^{-(-j)_i}\gamma^{-m_i^2\frac{(-j)_i((-j)_i+1)}{2}}]\\ &\quad \frac{1}{m}x^{a} \prod_{i=1}^{\theta} [(-1)^{(-j)_i}\xi_{m_i}^{-(-j)_i}\gamma^{m_i^2\frac{(-j)_i((-j)_i+1)}{2}}]j_i-1,j_i-1[_{m_i}]g\\ &=\frac{1}{m}x^{(2-m)d+a}\overline{h}\sum_{j=0}^{m-1} \gamma^{-j}\prod_{i=1}^{\theta} \xi_{m_i}^{-2(-j)_i}]j_i-1,j_i-1[_{m_i}]\\ &=x^{(2-m)d+a}\overline{h} \quad (\,\textrm{by Lemma}\ \ref{l3.4}\ (1)). \end{align*} So, $(S*\Id)(u_0)=(\Id*S)(u_0)=1$ implies $h=x^{-a-\sum_{i=1}^{\theta}(e_i-1)m_id- md}(-1)^{\sum_{i=1}^{\theta}(m_i-1)(e_i-1)}=x^{(2-m)d+a}$. Thus $$a=-d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_id}{2}\quad \textrm{and}\quad 2|\sum_{i=1}^{\theta}(m_i-1)(e_i-1),\;\; 2|\sum_{i=1}^{\theta}(e_i-1)m_id.$$ And $h=x^{(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_id}{2}}.$ Therefore, for $0\leqslant j\leqslant m-1$, $$S(u_j)=x^{b}g^{m-1}\prod_{i=1}^{\theta}[(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}]u_j$$ for $b=(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d.$ \qed From Claim 7, we know that $a=-d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_id}{2}$ and we can improve Claim 6 in the following form: \noindent \emph{Claim 6'.
For $0\leqslant j,l\leqslant m-1$, the multiplication between $u_j$ and $u_l$ satisfies $$u_ju_l=\frac{1}{m}x^{-d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_id}{2}} \prod_{i=1}^{\theta} (-1)^{l_i}\xi_{m_i}^{-l_i}\gamma^{m_i^2\frac{l_i(l_i+1)}{2}}[j_i,e_i-2-l_i]_{m_i}y_{j+l}g$$ where $j+l$ is interpreted \emph{mod} $m$.} We can prove Proposition \ref{p6.11} now. Statement (1) follows from Claim 1 and the proof of Claim 7. For (2), by Claims 1, 2', 3, 4, 5, 6' and 7, we have a natural surjective Hopf homomorphism $$f:\; D(\underline{m},d,\gamma)\to H,\;x\mapsto x,\ y_{m_i}\mapsto y_{m_i},\ g\mapsto g,\ u_{j}\mapsto u_{j}$$ for $1\leq i\leq \theta$ and $0\leq j\leq m-1$. It is not hard to see that $f|_{D_{st}}:\; D_{st}\to H_{st}$ is an isomorphism of $\k[x^{\pm 1}]$-modules for $0\leqslant s,t\leqslant 2m-1$. So $f$ is an isomorphism. \qed \section{Main result and consequences} We conclude this paper by giving the classification of prime Hopf algebras of GK-dimension one satisfying (Hyp1), (Hyp2), together with some consequences. \subsection{Main result.} The main result of this paper can be stated as follows. \begin{theorem}\label{t7.1} Let $H$ be a prime Hopf algebra of GK-dimension one which satisfies (Hyp1) and (Hyp2). Then $H$ is isomorphic to one of the Hopf algebras constructed in Section 4. \end{theorem} \begin{proof} Let $\pi:\;H\to \k$ be the canonical 1-dimensional representation of $H$, which exists by (Hyp1). If PI-deg$(H)=1$, then it is easy to see that $H$ is commutative and thus $H\cong \k[x]$ or $\k[x^{\pm 1}]$. So, we assume that $n:=$ PI-deg$(H)>1$ in the following analysis. If $\mi (\pi)=1$, then $H$ is isomorphic to either a $T(n,0,\xi)$ or $\k\mathbb{D}$ by Proposition \ref{p5.4}. If $\ord(\pi)=\mi(\pi)$, then $H$ is isomorphic to either a $T(\underline{n},1,\xi)$ or a $B(\underline{n},\omega,\gamma)$ by Proposition \ref{p5.7}. The last case is $n=\ord(\pi)>m:=\mi(\pi)>1$. In this case, using Corollary \ref{c6.3}, $H$ is either additive or multiplicative. If, moreover, $H$ is additive then $H$ is isomorphic to a $T(\underline{m},t,\xi)$ by Proposition \ref{p6.9} for $t=\frac{n}{m}$, and if $H$ is multiplicative then it is isomorphic to a $D(\underline{m},d,\gamma)$ by Proposition \ref{p6.11}. \end{proof} \begin{remark} \emph{(1) All prime regular Hopf algebras of GK-dimension one are special cases of their fraction versions. For example, the infinite dimensional Taft algebra $T(n,t,\xi)$ is isomorphic to $T(\underline{n},t,\xi)$ where $\underline{n}=\{1\}$ is a fraction of $n$ of length $1$ (that is, $\theta=1$ in our previous notation).} \emph{(2) By Proposition \ref{p4.8}, we know that $D(\underline{m},d,\gamma)$ is not a pointed Hopf algebra if $m\neq 1$. Thus we get more examples of non-pointed Hopf algebras of GK-dimension one.} \emph{(3) In \cite[Question 7.3C.]{BZ}, the authors asked what other Hopf algebras appear if the regularity hypothesis is dropped. Our result provides many Hopf algebras of this kind. } \end{remark} \subsection{Question \eqref{1.1}.} As an application, we can now give the answer to question \eqref{1.1}. We first give the following definition. \begin{definition} \emph{We call an irreducible algebraic curve $C$} a fraction line \emph{if there is a natural number $m$ and a fraction $m_1,\ldots, m_\theta$ of $m$ such that its coordinate algebra $\k[C]$ is isomorphic to $\k[y_{m_1},\ldots,y_{m_{\theta}}]/(y_{m_i}^{e_i}-y_{m_j}^{e_j},\;1\leq i\neq j\leq \theta).$} \end{definition} The answer to question \eqref{1.1} is given as follows.
\begin{proposition} Assume $C$ is an irreducible algebraic curve over $\k$ which can be realized as a Hopf algebra in ${^{\Z_{n}}_{\Z_n}\mathcal{YD}}$, where $n$ is as small as possible. Then $C$ is either an algebraic group or a fraction line. \end{proposition} \begin{proof} If $n=1$, then $\k[C]$ is a Hopf algebra and thus $C$ is an algebraic group of dimension one. Now assume $n>1$. By assumption, $\Z_n$ acts on $\k[C]$ faithfully. Using Lemma \ref{l2.10} and the argument developed in the proof of Corollary \ref{c2.9}, the Hopf algebra $\k[C]\# \k\Z_n$ (Radford's biproduct) is a prime Hopf algebra of GK-dimension one with PI-degree $n$. It is known that $\k\Z_n$ has a 1-dimensional representation of order $n$: $$\k\Z_n=\k\langle g|g^n=1\rangle \To \k,\;\;\;\;g\mapsto \xi$$ for a primitive $n$th root of unity $\xi$. Through the canonical projection $\k[C]\# \k\Z_n\to \k\Z_n$ we get a 1-dimensional representation $\pi$ of $H:=\k[C]\# \k\Z_n$ of order $n=$ PI-deg$(H)$. Therefore, $H$ satisfies (Hyp1). Also, by the definition of Radford's biproduct we know that the right invariant component $H_{0}^{r}$ of $\pi$ is exactly the domain $\k[C]$. Therefore, $H$ satisfies (Hyp2) too. The classification result, that is, Theorem \ref{t7.1}, can now be applied, and one can check the proposition case by case. \end{proof} \subsection{Finite-dimensional quotients.} From the Hopf algebra $D(\underline{m},d,\gamma)$ we can obtain many new finite-dimensional Hopf algebras by taking quotients. Among them, two kinds of Hopf algebras are particularly interesting for us: one series is semisimple and the other is nonsemisimple. As a byproduct of these new examples, we can at least give an answer related to a question of Professor Siu-Hung Ng. We will analyze the structures and the representation theory of these two kinds of finite-dimensional Hopf algebras. $\bullet$ \emph{The series of semisimple Hopf algebras.} Keep the notations used in Section 4 and let $D=D(\underline{m},d,\gamma)$, where $\underline{m}=\{m_1,\ldots,m_\theta\}$ is a fraction of $m$. For simplicity, we assume that $(m_1,m_2,\ldots,m_\theta)=1$. Consider the quotient Hopf algebra $$\overline{D}:=D/(y_{m_1},\ldots,y_{m_\theta}).$$ We first describe the generators, relations and structure maps of $\overline{D}$. For notational convenience, the images of $x,g,u_{j}$ in $\overline{D}$ are still written as $x,g$ and $u_j$, respectively. By the definition of $D$, we see that, as an algebra, $\overline{D}=\overline{D}(\underline{m},d,\gamma)$ is generated by $x^{\pm 1}, g^{\pm 1}, u_0, u_1, \cdots, u_{m-1}$, subject to the following relations \begin{eqnarray} \notag&&xx^{-1}=x^{-1}x=1,\quad gg^{-1}=g^{-1}g=1,\quad xg=gx,\\ && 0=1-x^{e_im_id},\quad g^{m}=x^{md},\label{eq7.1}\\ \notag&&xu_j=u_jx^{-1},\quad 0=\phi_{m_i,j}u_{j+m_i},\quad u_j g=\gamma^j x^{-2d}gu_j,\\ \notag &&u_ju_l=\left \{ \begin{array}{ll} \frac{1}{m}x^{a}\prod_{i=1}^{\theta}(-1)^{l_i} \gamma^{\frac{l_i(l_i+1)}{2}}\xi_{m_i}^{-l_i}[j_i,e_i-2-l_i]_{m_i}g, & \ j+l\equiv 0\ (\textrm{mod} \ m),\\ 0, & \text{otherwise,} \end{array}\right. \end{eqnarray} for $1\leq i\leq \theta,\;0\leq j,l\leq m-1$ and $a=-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d$.
The coproduct $\D$, the counit $\epsilon$ and the antipode $S$ of $\overline{D}(\underline{m},d,\gamma)$ are given by \begin{eqnarray*} &&\D(x)=x\otimes x,\;\; \D(g)=g\otimes g,\\ &&\D(u_j)=\sum_{k=0}^{m-1}\gamma^{k(j-k)}u_k\otimes x^{-kd}g^ku_{j-k};\\ &&\epsilon(x)=\epsilon(g)=\epsilon(u_0)=1,\;\;\epsilon(u_s)=0;\\ &&S(x)=x^{-1},\;\; S(g)=g^{-1}, \\ && S(u_j)=x^{b}g^{m-1}\prod_{i=1}^{\theta}(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}u_j, \end{eqnarray*} for $1\leq s\leq m-1,\;0\leq j\leq m-1$ and $b=(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d$. As an observation, we find that \begin{lemma}\label{l7.5} The Hopf algebra $\overline{D}$ is a semisimple Hopf algebra of dimension $2m^2d.$ \end{lemma} \begin{proof} Before the proof proper, we simplify one of the relations in the definition of $\overline{D}$, namely the relation formulated in \eqref{eq7.1}: $x^{e_im_id}=1$ for all $1\leq i\leq \theta.$ We claim that it is equivalent to the following relation \begin{equation}\label{eq7.2} x^{md}=1. \end{equation} Clearly, it is enough to show that \eqref{eq7.1} implies \eqref{eq7.2}, since by definition $m|e_im_i$ for all $1\leq i\leq \theta.$ Indeed, by (3) of Definition \ref{d3.1} of a fraction, $e_i|m$, and thus $(\frac{e_1m_1}{m},\frac{e_2m_2}{m},\ldots,\frac{e_\theta m_\theta}{m})=1$, since we already assumed that $(m_1,m_2,\ldots,m_\theta)=1.$ Therefore, there exist $s_i\in\Z$ such that $\sum_{i=1}^{\theta}s_i\frac{e_im_i}{m}=1$ and thus $$x^{md}=x^{md\sum_{i=1}^{\theta}s_i\frac{e_im_i}{m}}=x^{\sum_{i=1}^{\theta}s_i e_im_id}=1.$$ By \eqref{eq7.2}, we further get $g^{m}=1$, since $g^{m}=x^{md}.$ We use the classical Maschke Theorem to show that $\overline{D}$ is semisimple. To do that, we construct a left integral of $\overline{D}$ as follows: $$\int_{\overline{D}}^l:=\sum_{i=0}^{md-1}\sum_{j=0}^{m-1}x^ig^j+\sum_{i=0}^{md-1}\sum_{j=0}^{m-1}x^ig^ju_0.$$ Let us verify that it is indeed a left integral. It is not hard to see that $x\int_{\overline{D}}^l=g\int_{\overline{D}}^l=\int_{\overline{D}}^l$ and \begin{align*}u_0\cdot \int_{\overline{D}}^l&= \sum_{i=0}^{md-1}\sum_{j=0}^{m-1}x^ig^ju_0+\sum_{i=0}^{md-1}\sum_{j=0}^{m-1}x^ig^ju_0^{2}\\ &=\sum_{i=0}^{md-1}\sum_{j=0}^{m-1}x^ig^ju_0+\sum_{i=0}^{md-1}\sum_{j=0}^{m-1}x^ig^j\\ &=\epsilon(u_0)\cdot \int_{\overline{D}}^l. \end{align*} Now, by the relation $0=\phi_{m_i,j}u_{j+m_i}$, \begin{equation}\label{eq7.3} u_{j+m_i}=\gamma_i^{1+j_i}x^{m_id}u_{j+m_i}\end{equation} for $0\leq j\leq m-1$ and $\gamma_i=\gamma^{-m_i^2}.$ For any $1\leq s\leq m-1$, there must exist some $1\leq i\leq \theta$ such that $s_i\neq 0.$ From \eqref{eq7.3}, we have $u_s=\gamma_{i}^{s_i}x^{m_id}u_s$ and thus \begin{align*}u_s\cdot \int_{\overline{D}}^l&= \sum_{k=0}^{md-1}\sum_{j=0}^{m-1}\gamma^{sj}x^kg^ju_s\\ &=\sum_{k=0}^{md-1}\sum_{j=0}^{m-1}\gamma^{sj}x^kg^j\gamma_{i}^{s_i}x^{m_id}u_s\\ &=\gamma_{i}^{s_i}\sum_{k=0}^{md-1}\sum_{j=0}^{m-1}\gamma^{sj}x^kg^ju_s, \end{align*} which implies that $\sum\limits_{k=0}^{md-1}\sum\limits_{j=0}^{m-1}\gamma^{sj}x^kg^ju_s=0$ (note that $\gamma_{i}^{s_i}\neq 1$), and so $u_s\cdot \int_{\overline{D}}^l=0=\epsilon(u_{s})\int_{\overline{D}}^l$ for all $1 \leqslant s \leqslant m-1$. Combining the above equations, $\int_{\overline{D}}^l$ is a left integral of $\overline{D}.$ Clearly, $\epsilon(\int_{\overline{D}}^l)=2m^{2}d\neq 0$. So $\overline{D}$ is semisimple. Finally, we determine the dimension of this semisimple Hopf algebra.
The main idea is to apply the bigradings \eqref{eqD} and \eqref{eq4.24} of $D$ to $\overline{D}.$ To apply them, we need to determine the dimension of the space spanned by $\{x^{t}u_j|0\leq t\leq md-1\}$ for any $0\leq j\leq m-1$. To do that, we give an equivalent form of \eqref{eq7.3}. By \eqref{eq7.3}, $x^{m_id}u_{k}=\gamma^{m_ik}u_k$ for any $0\leq k\leq m-1$. Note that $(m_1,\ldots,m_{\theta})=1$, and thus there exist $s_i\in \Z$ such that $\sum_{i=1}^{\theta}s_im_i=1$. Therefore \begin{equation}\label{e7.4} x^du_{k}=x^{\sum_{i=1}^{\theta}s_im_id}u_{k}=\gamma^{\sum_{i=1}^{\theta}s_im_ik}u_{k}=\gamma^{k}u_{k}. \end{equation} By this formula, the space spanned by $\{x^{t}u_j|0\leq t\leq md-1\}$ coincides with the space spanned by $\{x^{t}u_j|0\leq t\leq d-1\}$, and its dimension is $d.$ Now, applying the bigradings \eqref{eqD} and \eqref{eq4.24}, we see that the set $$\{x^ig^j, x^{t}g^{j}u_s|0\leq i\leq md-1, 0\leq j\leq m-1, 0\leq t\leq d-1, 0\leq s\leq m-1\}$$ is a basis of $\overline{D}$, and thus $$\dim_{\k} \overline{D}=2m^2d.$$ \end{proof} Next we analyze the coalgebra and algebra structures of this semisimple Hopf algebra. The coalgebra structure can be determined easily. \begin{proposition}\label{p7.6} Keep the above notations. \begin{itemize}\item[(1)] Let $C$ be the subspace spanned by $\{g^iu_j|0\leq i,j\leq m-1\}$. Then $C$ is a simple coalgebra. \item[(2)] The following is the decomposition of $\overline{D}$ into simple coalgebras: $$\overline{D}=\bigoplus_{i=0}^{md-1}\bigoplus_{j=0}^{m-1}\k x^ig^j\oplus \bigoplus_{i=0}^{d-1}x^iC.$$ \item[(3)] Up to isomorphism, $\overline{D}$ has $m^2d$ one-dimensional comodules and $d$ simple comodules of dimension $m$. \end{itemize} \end{proposition} \begin{proof} (1) One can apply the method used in \cite{Wu} to prove this statement. For completeness, we write out the details. Clearly, it is sufficient to show that the $\k$-linear dual $C^*:=\Hom_\k(C, \k)$ is a simple algebra. In fact, we will see that $C^*$ is the matrix algebra of order $m$. We change the basis of $C$ for convenience. Using relation \eqref{e7.4}, $C$ is also spanned by $\{(x^{-d}g)^iu_j|0\leq i,j\leq m-1\}$. Denote by $f_{ij}:=((x^{-d}g)^iu_j)^*$, that is, $\{f_{ij}| 0\leqslant i, j\leqslant m-1\}$ is the dual basis of the basis $\{(x^{-d}g)^iu_j|0\leq i,j\leq m-1\}$ of $C$. We prove this fact in two steps: first, we compute the multiplication of the dual basis; second, we construct an algebra isomorphism from $C^*$ to the matrix algebra of order $m$. \noindent \emph{Step 1.}\quad Since \begin{align*} &(f_{i_1, j_1}*f_{i_2, j_2})((x^{-d}g)^iu_j)\\ &=(f_{i_1, j_1}\otimes f_{i_2, j_2})(\D((x^{-d}g)^iu_j))\\ \notag &=(f_{i_1, j_1}\otimes f_{i_2, j_2})(\sum_{s=0}^{m-1}\gamma^{s(j-s)}(x^{-d}g)^iu_s\otimes (x^{-d}g)^{i+s}u_{j-s} )\\ &=\sum_{s=0}^{m-1}\gamma^{s(j-s)}f_{i_1, j_1}((x^{-d}g)^iu_s)f_{i_2, j_2}((x^{-d}g)^{i+s}u_{j-s}), \end{align*} one can see that $(f_{i_1, j_1}*f_{i_2, j_2})((x^{-d}g)^iu_j)\neq 0$ if and only if $i_1=i, j_1=s, i_2=i+s$ and $j_2=j-s$ for some $0\leqslant s\leqslant m-1$. This forces $i_1+j_1=i_2, i=i_1$ and $j=j_1+j_2$. So we have \begin{equation}\label{eq*}f_{i_1, j_1}*f_{i_2, j_2}=\begin{cases} \gamma^{j_1j_2}f_{i_1, j_1+j_2}, & \textrm{if}\; i_1+j_1=i_2,\\ 0, & \textrm{otherwise}.
\end{cases}\end{equation} \noindent \emph{Step 2.}\quad Set $M=M_m(\k)$ and let $E_{ij}$ be the matrix units (that is, the matrix with $1$ in the $(i,j)$ entry and $0$ elsewhere) for $0\leqslant i, j\leqslant m-1$. Now we claim that $$\varphi: C^*\to M, \quad f_{ij}\mapsto \gamma^{ij}E_{i, i+j}$$ is an algebra isomorphism (the index $i+j$ in $E_{i, i+j}$ is interpreted mod $m$). Since $\varphi$ maps a basis to a basis, it is sufficient to verify that $\varphi$ is an algebra map. In fact, \begin{align*}\varphi(f_{i_1, j_1})\varphi(f_{i_2, j_2})&=\gamma^{i_1j_1}E_{i_1, i_1+j_1}\gamma^{i_2j_2}E_{i_2, i_2+j_2}\\ &=\begin{cases} \gamma^{i_1j_1+i_2j_2}E_{i_1, i_2+j_2}, & \textrm{if}\; i_1+j_1=i_2,\\ 0, & \textrm{otherwise}, \end{cases}\\ &=\begin{cases} \gamma^{i_1j_1+i_2j_2-i_1(j_1+j_2)}\varphi(f_{i_1, j_1+j_2}), & \textrm{if}\; i_1+j_1=i_2,\\ 0, & \textrm{otherwise}, \end{cases}\\ &=\begin{cases} \gamma^{j_1j_2}\varphi(f_{i_1, j_1+j_2}), & \textrm{if}\; i_1+j_1=i_2,\\ 0, & \textrm{otherwise}, \end{cases}\\ &=\varphi(f_{i_1, j_1}*f_{i_2, j_2}), \end{align*} where in the fourth equality we used $i_1j_1+i_2j_2-i_1(j_1+j_2)=(i_2-i_1)j_2=j_1j_2$ for $i_2=i_1+j_1$. So $\varphi$ is an algebra map and the proof of (1) is complete. (2) Comparing the dimensions of the two sides, we obtain the statement. (3) This is a direct consequence of (2). \end{proof} Next, we determine the algebra structure of $\overline{D}$. As in the proof of Lemma \ref{l7.5}, $\{x^ig^j, x^tg^ju_s|0\leq i\leq md-1, 0\leq j\leq m-1, 0\leq t\leq d-1, 0\leq s\leq m-1\}$ is a basis of $\overline{D}$. Denote by $G$ the group of group-like elements of $\overline{D}$. Then clearly every element of $\overline{D}$ can be written uniquely in the form $$f+\sum_{i=0}^{m-1}f_iu_i$$ for $f, f_{i}\in \k G$ and $0\leq i\leq m-1$. We use $C(\overline{D})$ to denote the center of $\overline{D}$. The next result helps us determine this center. \begin{lemma}\label{l7.7} The element $e=f+\sum_{i=0}^{m-1}f_iu_i$ belongs to $C(\overline{D})$ if and only if $f, f_0u_0\in C(\overline{D})$ and $f_1=\ldots=f_{m-1}=0$. \end{lemma} \begin{proof} The sufficiency is obvious; we prove the necessity. First, we show that $f_1=\ldots=f_{m-1}=0$. Otherwise, assume, say, $f_1\neq 0$. By assumption, $ge=eg$, which implies that $gf_1u_1=f_1u_1g$. By the definition of $\overline{D}$ together with \eqref{e7.4}, $f_1u_1g=\gamma^{-1}gf_1u_1$. So we would have $\gamma^{-1}gf_1u_1=gf_1u_1$, which is absurd. Similarly, $f_2=\ldots=f_{m-1}=0$. Second, let us show that $f\in C(\overline{D})$. By $eu_i=u_ie$ we know that $fu_i=u_if$ for $0\leq i\leq m-1$, and by definition $f$ commutes with all elements of $G$. Therefore, $f\in C(\overline{D})$. Since $e=f+f_0u_0$ and $e\in C(\overline{D})$, we get $f_0u_0\in C(\overline{D})$ too. \end{proof} Let $\zeta$ be an $md$th root of unity satisfying $\zeta^{d}=\gamma$. Define $$1_i^{x}:=\frac{1}{md}\sum_{j=0}^{md-1}\zeta^{-ij}x^j,\ \ 1_k^{g}:=\frac{1}{m}\sum_{j=0}^{m-1}\gamma^{-kj}g^j$$ for $0\leq i\leq md-1$ and $0\leq k\leq m-1$. It is well known that $\{1_i^{x}1_k^{g}|0\leq i\leq md-1, 0\leq k\leq m-1\}$ is also a basis of $\k G$. Therefore, one can write $$f=\sum_{i=0}^{md-1}\sum_{j=0}^{m-1}a_{ij}1_i^{x}1_j^{g}=\sum_{i,j}a_{ij}1_i^{x}1_j^{g}.$$ For any natural number $i$, in the rest of this subsection we use $i'$ to denote the remainder of $i$ upon division by $m$. \begin{lemma}\label{l7.8} Let $f=\sum_{i,j}a_{ij}1_i^{x}1_j^{g}$ be an element of $\k G$. Then $f\in C(\overline{D})$ if and only if $a_{ij}=a_{md-i,j-i'}$ for all $0\leq i\leq md-1, 0\leq j\leq m-1$.
\end{lemma} \begin{proof} Define $$\unit_{i}^x:=\frac{1}{d}(1+\zeta^{-i} x+\zeta^{-2i} x^2+\ldots+\zeta^{-(d-1)i} x^{d-1})$$ for $0\leq i\leq md-1$. For any $0\leq k\leq m-1$, it is not hard to see that the elements in $\{\unit_{i}^{x}|i\equiv k\ (\textrm{mod}\ m)\}$ are linearly independent. Using equation \eqref{e7.4} and a direct computation, one can show that \begin{equation}\label{eq7.5}1_i^xu_k=\left \{ \begin{array}{ll} \unit_i^{x}u_k, & \;\;\text{if}\ \ i\equiv k\ (\textrm{mod}\ m),\\ 0, & \;\;\text{otherwise,} \end{array}\right.\end{equation} \begin{equation}u_k1_i^x=\left \{ \begin{array}{ll} u_k\unit_i^{x}, & \;\;\text{if}\ \ i+k\equiv 0\ (\textrm{mod}\ m),\\ 0, & \;\;\text{otherwise,} \end{array}\right.\end{equation} and $$1_{md-i}^{x}=1_{i}^{x^{-1}}.$$ Therefore, we have $$ fu_k=\sum_{i,j}a_{ij}1_{i}^{x}1_{j}^{g}u_k=\sum_{i,j}a_{ij}1_{i}^{x}u_k1_{j-k}^{g}= \sum_{i\equiv k\;(\textrm{mod}\;m),j}a_{ij}\unit_{i}^{x}1_{j}^{g}u_k, $$ \begin{eqnarray*}u_kf&=&\sum_{i,j}a_{ij}u_k1_{i}^{x}1_{j}^{g}=\sum_{i+k\equiv 0\;(\textrm{mod}\;m),j}a_{ij}u_{k}1_{i}^{x}1_{j}^{g}=\sum_{i+k\equiv 0\;(\textrm{mod}\;m),j}a_{ij}1_{i}^{x^{-1}}u_k1_{j}^{g}\\ &=&\sum_{i+k\equiv 0\;(\textrm{mod}\;m),j}a_{ij}1_{md-i}^{x}u_k1_{j}^{g}=\sum_{i+k\equiv 0\;(\textrm{mod}\;m),j}a_{ij}\unit_{md-i}^{x}1_{j+k}^{g}u_k. \end{eqnarray*} This means that $fu_k=u_kf$ if and only if $a_{k+lm,\,j}=a_{m(d-l)-k,\,j-k}$ for all $0\leq l\leq d-1$ and $0\leq j\leq m-1$. Letting $k$ run over $0,\ldots,m-1$ completes the proof. \end{proof} Assume that $f_0=\sum_{i,j}b_{ij}1_i^x1_j^g$. Using \eqref{eq7.5}, we know that $$f_0u_0=\sum_{i\equiv 0\;(\textrm{mod}\;m),j}b_{ij}\unit_i^x1_j^gu_0.$$ So we may directly assume that $f_0=\sum_{i\equiv 0\;(\textrm{mod}\;m),j}b_{ij}1_i^x1_j^g$. With this assumption, we have the following result. \begin{lemma} The element $f_0u_0$ belongs to the center of $\overline{D}$ if and only if \begin{equation}\label{eq7.7} f_0=\left \{ \begin{array}{ll} \sum_{j}b_{0j}1_0^{x}1_j^g, & \;\;\emph{if}\ \ d\ \emph{is\ odd}, \\ \sum_{j}b_{0j}1_0^{x}1_j^g+\sum_{j}b_{\frac{d}{2}m,j}1_{\frac{d}{2}m}^{x}1_j^g, & \;\;\emph{if}\ \ d\ \emph{is\ even}.\end{array}\right. \end{equation} \end{lemma} \begin{proof} From $xf_0u_0=f_0u_0x$, we get $xf_0=x^{-1}f_0$, which amounts exactly to equation \eqref{eq7.7}. The converse is straightforward. \end{proof} Next, we determine when a central element is idempotent. \begin{lemma}\label{l7.9} Let $e=f+f_0u_0$ be an element of the center $C(\overline{D})$. Then $e^2=e$ if and only if $f=f^2+f_0^2u_0^2$ and $f_0=2ff_0$.\end{lemma} \begin{proof} By Lemma \ref{l7.7}, $f$ commutes with $f_0u_0$, and $(f_0u_0)^2=f_0^2u_0^2$. From this, the lemma is clear. \end{proof} \begin{lemma}\label{l7.11} Let $f=\sum_{i,j}a_{ij}1_i^x1_j^g$ and $f_0=\sum_{i,j}b_{ij}1_i^x1_j^g$ be such that $e=f+f_0u_0$ is a central element. Then $e$ is an idempotent if and only if \begin{eqnarray} a_{sm,j}&=&a_{sm,j}^2+b_{sm,j}^2\zeta^{asm}\gamma^j\;\;\;\;(0\leq s\leq d-1, \ 0\leq j\leq m-1)\label{eq7.8}\\ a_{ij}^2&=&a_{ij}\;\;\;\;(i\not\equiv 0\ (\emph{mod} \ m), \ 0\leq j\leq m-1)\\ b_{ij}&=&2a_{ij}b_{ij}\;\;\;\;(0\leq i\leq md-1,\ 0\leq j\leq m-1),\label{eq7.10} \end{eqnarray} where $a=-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d.$ \end{lemma} \begin{proof} We simply translate the equivalent conditions of Lemma \ref{l7.9} into equalities of coefficients. \end{proof} By equation \eqref{eq7.10}, we know that $a_{ij}=\frac{1}{2}$ whenever $b_{ij}\neq 0$.
By equation \eqref{eq7.8}, we then have $b_{sm,j}=\pm \frac{1}{2}\sqrt{\gamma^{-j}\zeta^{-asm}}$ whenever $b_{sm,j}\neq 0$. We use $[\,\cdot\,]$ to denote the floor function, i.e., for any rational number $t$, $[t]$ is the largest integer not exceeding $t$. Now we can give the algebra structure of $\overline{D}$. \begin{proposition}\label{p7.12} Keep the above notations. \begin{itemize}\item[(1)] If $d$ is even, then the following is a complete set of primitive central idempotents of $\overline{D}$: \begin{eqnarray*} &&\frac{1}{2}1_0^x1_j^g+ \frac{1}{2}\sqrt{\gamma^{-j}}1_0^x1_j^gu_0,\ \ \frac{1}{2}1_0^x1_j^g- \frac{1}{2}\sqrt{\gamma^{-j}}1_0^x1_j^gu_0,\\ && \frac{1}{2}1_{\frac{d}{2}m}^x1_j^g+ \frac{1}{2}\sqrt{\gamma^{-j}(-1)^{-a}}1_{\frac{d}{2}m}^x1_j^gu_0,\ \ \frac{1}{2}1_{\frac{d}{2}m}^x1_j^g- \frac{1}{2}\sqrt{\gamma^{-j}(-1)^{-a}}1_{\frac{d}{2}m}^x1_j^gu_0,\\ && 1_{sm}^x1_j^g+1_{(d-s)m}^x1_j^g,\ \ \ \ (0< s\leq d-1, s\neq \frac{d}{2},\ 0\leq j\leq m-1)\\ && 1_{lm+i}^x1_j^g+1_{(d-l-1)m+(m-i)}^x1_{j-i}^g,\ \ \ \ (0\leq l\leq d-1, 0< i\leq [\frac{m}{2}],\ 0\leq j\leq m-1). \end{eqnarray*} If $d$ is odd, then the following is a complete set of primitive central idempotents of $\overline{D}$: \begin{eqnarray*} &&\frac{1}{2}1_0^x1_j^g+ \frac{1}{2}\sqrt{\gamma^{-j}}1_0^x1_j^gu_0,\ \ \frac{1}{2}1_0^x1_j^g- \frac{1}{2}\sqrt{\gamma^{-j}}1_0^x1_j^gu_0,\\ && 1_{sm}^x1_j^g+1_{(d-s)m}^x1_j^g,\ \ \ \ (0< s\leq d-1, \ 0\leq j\leq m-1)\\ && 1_{lm+i}^x1_j^g+1_{(d-l-1)m+(m-i)}^x1_{j-i}^g,\ \ \ \ (0\leq l\leq d-1, 0< i\leq [\frac{m}{2}],\ 0\leq j\leq m-1). \end{eqnarray*} \item[(2)] If $d$ is even, then as an algebra $\overline{D}$ has the following decomposition: $$\overline{D}= \k^{(4m)}\oplus M_{2}(\k)^{(\frac{m^2d-2m}{2})}.$$ If $d$ is odd, then as an algebra $\overline{D}$ has the following decomposition: $$\overline{D}= \k^{(2m)}\oplus M_{2}(\k)^{(\frac{m^2d-m}{2})}.$$ \end{itemize} \end{proposition} \begin{proof} (1) According to Lemmas \ref{l7.8}--\ref{l7.11}, all of the above elements are central idempotents, and it is easy to check that their sum is exactly $1$. So, to prove the result, it is enough to show that they are all primitive. We prove this only for the case of even $d$, since the other case can be proved in the same way. By definition, each element in the last two lines displayed in this proposition can be decomposed into a sum of two idempotents which are not central, and so the simple modules corresponding to these central idempotents have dimension $\geq 2$. There are $\frac{(d-2)m}{2}+\frac{(m-1)dm}{2}$ central idempotents in the last two lines and $4m$ in the first two lines. Therefore, all of these idempotents generate an ideal of dimension $\geq 4m+4(\frac{(d-2)m}{2}+\frac{(m-1)dm}{2})=2m^2d=\dim_{\k} \overline{D}$. This implies that they are all primitive. (2) This is a direct consequence of statement (1). \end{proof} In view of the recent interest in finite tensor categories \cite{EGNO}, in particular fusion categories \cite{ENO}, it seems worthwhile to present all simple modules of $\overline{D}$ and their tensor product decomposition laws here. As in the proof of the above proposition, we only deal with the case of even $d$ (the case of odd $d$ is quite similar and in fact easier).
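Before doing so, let us record a quick sanity check of the counting above (an illustration only; the values $m=2$, $d=6$ are sample parameters, not forced by the theory). In this case $\dim_\k\overline{D}=2m^2d=48$, and the even-$d$ decomposition in Proposition \ref{p7.12} (2) reads
$$\overline{D}\cong \k^{(8)}\oplus M_{2}(\k)^{(10)},$$
whose dimension is $8+4\cdot 10=48$, as it must be. Dually, Proposition \ref{p7.6} (3) gives $m^2d=24$ one-dimensional comodules and $d=6$ simple comodules of dimension $m=2$, and indeed $24\cdot 1^2+6\cdot 2^2=48$.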
According to the central idempotents stated in Proposition \ref{p7.12} (1), we construct the following six kinds of simple modules of $\overline{D}$: \begin{itemize} \item[(1)] $V_{0,j}^{+}\;(0\leq j\leq m-1)$: The dimension of $V_{0,j}^{+}$ is $1$ and the action of $\overline{D}$ is given by \begin{align*}& x\mapsto 1,\quad\quad\quad\quad g\mapsto \gamma^{j}\\ & u_{0}\mapsto \sqrt{\gamma^j}, \quad\quad u_{i}\mapsto 0 \;(1\leq i\leq m-1). \end{align*} A basis of this module can be chosen as $\frac{1}{2}1_0^x1_j^g+ \frac{1}{2}\sqrt{\gamma^{-j}}1_0^x1_j^gu_0.$\\ \item[(2)] $V_{0,j}^{-}\;(0\leq j\leq m-1)$: The dimension of $V_{0,j}^{-}$ is $1$ and the action of $\overline{D}$ is given by \begin{align*}& x\mapsto 1,\quad\quad\quad\quad\quad g\mapsto \gamma^{j}\\ & u_{0}\mapsto -\sqrt{\gamma^j}, \quad\quad u_{i}\mapsto 0 \;(1\leq i\leq m-1). \end{align*} A basis of this module can be chosen as $\frac{1}{2}1_0^x1_j^g- \frac{1}{2}\sqrt{\gamma^{-j}}1_0^x1_j^gu_0.$\\ \item[(3)] $V_{\frac{d}{2}m,j}^{+}\;(0\leq j\leq m-1)$: The dimension of $V_{\frac{d}{2}m,j}^{+}$ is $1$ and the action of $\overline{D}$ is given by \begin{align*}& x\mapsto -1,\quad\quad\quad\quad\quad\quad\quad g\mapsto \gamma^{j}\\ & u_{0}\mapsto \sqrt{\gamma^j(-1)^{-a}}, \quad\quad u_{i}\mapsto 0 \;(1\leq i\leq m-1). \end{align*} A basis of this module can be chosen as $\frac{1}{2}1_{\frac{d}{2}m}^x1_j^g+ \frac{1}{2}\sqrt{\gamma^{-j}(-1)^{-a}}1_{\frac{d}{2}m}^x1_j^gu_0.$ Recall that by definition $a=-\frac{2+\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d.$\\ \item[(4)] $V_{\frac{d}{2}m,j}^{-}\;(0\leq j\leq m-1)$: The dimension of $V_{\frac{d}{2}m,j}^{-}$ is $1$ and the action of $\overline{D}$ is given by \begin{align*}& x\mapsto -1,\quad\quad\quad\quad\quad\quad\quad g\mapsto \gamma^{j}\\ & u_{0}\mapsto -\sqrt{\gamma^j(-1)^{-a}}, \quad\quad u_{i}\mapsto 0 \;(1\leq i\leq m-1). \end{align*} A basis of this module can be chosen as $\frac{1}{2}1_{\frac{d}{2}m}^x1_j^g- \frac{1}{2}\sqrt{\gamma^{-j}(-1)^{-a}}1_{\frac{d}{2}m}^x1_j^gu_0.$\\ \item[(5)] $V_{sm,j}\;(0<s\leq d-1,\;s\neq \frac{d}{2},\; 0\leq j\leq m-1)$: The dimension of $V_{sm,j}$ is $2$ with basis $\{1_{sm}^x1_{j}^g, 1_{(d-s)m}^x1_{j}^gu_0\}$ and the action of $\overline{D}$ is given by \begin{align*}& x\mapsto \left ( \begin{array}{cc} \zeta^{sm}&0\\0&\zeta^{(d-s)m} \end{array}\right),\quad\quad\quad\quad g\mapsto \left ( \begin{array}{cc} \gamma^{j}&0\\0&\gamma^{j} \end{array}\right)\\ & u_{0}\mapsto \left ( \begin{array}{cc}0&\zeta^{asm}\gamma^{j}\\1&0 \end{array}\right), \quad\quad u_{i}\mapsto 0 \;(1\leq i\leq m-1). 
\end{align*} Note that we have $$V_{sm,j}\cong V_{(d-s)m,j}.$$\\ \item[(6)] $V_{lm+i,j}\;(0\leq l\leq d-1,\;0<i<m,\; 0\leq j\leq m-1)$: The dimension of $V_{lm+i,j}$ is $2$ with basis $\{1_{lm+i}^x1_{j}^g, 1_{(d-l-1)m+(m-i)}^x1_{j-i}^gu_{m-i}\}$ and the action of $\overline{D}$ is given by \begin{align*}& x\mapsto \left ( \begin{array}{cc} \zeta^{lm+i}&0\\0&\zeta^{(d-l-1)m+(m-i)} \end{array}\right),\quad\quad\quad\quad g\mapsto \left ( \begin{array}{cc} \gamma^{j}&0\\0&\gamma^{j-i} \end{array}\right)\\ & u_{m-i}\mapsto \left ( \begin{array}{cc}0&0\\1&0 \end{array}\right), \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad u_{i}\mapsto \left ( \begin{array}{cc}0&c_i\\0&0 \end{array}\right),\\ & u_{t}\mapsto 0\;\;\;\quad\quad\quad\quad\quad\quad\quad\quad(0\leq t\leq m-1, t\neq m-i,i), \end{align*} where $c_i=\frac{1}{m}\zeta^{(lm+i)a}\gamma^{j}\prod_{s=1}^{\theta}(-1)^{-i_s}\xi_{m_s}^{i_s} \gamma^{m_s^2\frac{-i_s(-i_s+1)}{2}}[i_s,e_s-2-(m-i)_s]_{m_s}.$ Note that we have $$V_{lm+i,j}\cong V_{(d-l-1)m+(m-i),j-i}.$$ \end{itemize} The following tables give the tensor product decomposition laws for these simple modules. We omit the proofs since they are routine. \renewcommand{\arraystretch}{2.4} \setlength{\tabcolsep}{8pt} \begin{center} \begin{tabular}{c} \hline $\quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \textrm{The \;fusion\; rule \;I}\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad $\\ \end{tabular} \begin{tabular}{|ccccc|ccccc|} \hline $V^+_{0,j}$&$\otimes$&$ V^+_{0,k}$&$=$&$V^+_{0,j+k}$ & $V^{-}_{0,j}$&$\otimes$&$V^+_{0,k}$&$=$&$V^-_{0,j+k}$\\ $V^+_{0,j}$&$\otimes$&$ V^-_{0,k}$&$=$&$V^-_{0,j+k}$ & $V^-_{0,j}$&$\otimes$&$V^-_{0,k}$&$=$&$V^+_{0,j+k}$\\ $V^+_{0,j}$&$\otimes$&$ V^+_{\frac{d}{2}m,k}$&$=$&$V^+_{\frac{d}{2}m,j+k}$ & $V^-_{0,j}$&$\otimes$&$V^+_{\frac{d}{2}m,k}$&$=$&$V^-_{\frac{d}{2}m,j+k}$\\ $V^+_{0,j}$&$\otimes$&$ V^-_{\frac{d}{2}m,k}$&$=$&$V^-_{\frac{d}{2}m,j+k}$ & $V^-_{0,j}$&$\otimes$&$V^-_{\frac{d}{2}m,k}$&$=$&$V^+_{\frac{d}{2}m,j+k}$\\ $V^+_{0,j}$&$\otimes$&$ V_{sm,k}$&$=$&$V_{sm,j+k}$ & $V^-_{0,j}$&$\otimes$&$V_{sm,k}$&$=$&$V_{sm,j+k}$\\ $V^+_{0,j}$&$\otimes$&$ V_{lm+i,k}$&$=$&$V_{lm+i,j+k}$ & $V^-_{0,j}$&$\otimes$&$V_{lm+i,k}$&$=$&$V_{lm+i,j+k}$\\ \hline $V^+_{\frac{d}{2}m,j}$&$\otimes$&$ V^+_{0,k}$&$=$&$V^+_{\frac{d}{2}m,j+k}$ & $V^-_{\frac{d}{2}m,j}$&$\otimes$&$V^+_{0,k}$&$=$&$V^-_{\frac{d}{2}m,j+k}$\\ $V^+_{\frac{d}{2}m,j}$&$\otimes$&$ V^-_{0,k}$&$=$&$V^-_{\frac{d}{2}m,j+k}$ & $V^-_{\frac{d}{2}m,j}$&$\otimes$&$V^-_{0,k}$&$=$&$V^+_{\frac{d}{2}m,j+k}$\\ $V^+_{\frac{d}{2}m,j}$&$\otimes$&$ V^+_{\frac{d}{2}m,k}$&$=$&$V^+_{0,j+k}$ & $V^-_{\frac{d}{2}m,j}$&$\otimes$&$V^+_{\frac{d}{2}m,k}$&$=$&$V^-_{0,j+k}$\\ $V^+_{\frac{d}{2}m,j}$&$\otimes$&$ V^-_{\frac{d}{2}m,k}$&$=$&$V^-_{0,j+k}$ & $V^-_{\frac{d}{2}m,j}$&$\otimes$&$V^-_{\frac{d}{2}m,k}$&$=$&$V^+_{0,j+k}$\\ $V^+_{\frac{d}{2}m,j}$&$\otimes$&$ V_{sm,k}$&$=$&$V_{(s+\frac{d}{2})m,j+k}$ & $V^-_{\frac{d}{2}m,j}$&$\otimes$&$V_{sm,k}$&$=$&$V_{(s+\frac{d}{2})m,j+k}$\\ $V^+_{\frac{d}{2}m,j}$&$\otimes$&$ V_{lm+i,k}$&$=$&$V_{(l+\frac{d}{2})m+i,j+k}$ & $V^-_{\frac{d}{2}m,j}$&$\otimes$&$V_{lm+i,k}$&$=$&$V_{(l+\frac{d}{2})m+i,j+k}$\\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{3.4} \setlength{\tabcolsep}{8pt} \begin{center} \begin{tabular}{c} \hline $\quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \textrm{The \;fusion\; rule \;II}\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad $\\ \end{tabular} \begin{tabular}{|ccccc|} \hline
$V_{sm,j}$&$\otimes$&$ V^{+}_{0,k}$&$=$&$V_{sm,j+k}$\\ $V_{sm,j}$&$\otimes$&$ V^{-}_{0,k}$&$=$&$V_{sm,j+k}$\\ $V_{sm,j}$&$\otimes$&$ V^{+}_{\frac{d}{2}m,k}$&$=$&$V_{(s+\frac{d}{2})m,j+k}$\\ $V_{sm,j}$&$\otimes$&$ V^{-}_{\frac{d}{2}m,k}$&$=$&$V_{(s+\frac{d}{2})m,j+k}$\\ $V_{sm,j}$&$\otimes$&$ V_{lm,k}$&$=$&$V_{(s+l)m,j+k}\oplus V_{(s-l)m,j+k}\;\;\;\;\;\;(\ast)$\\ $V_{sm,j}$&$\otimes$&$ V_{lm+i,k}$&$=$&$V_{(s+l)m+i,j+k}\oplus V_{(l-s)m+i,j+k}$\\ \hline $V_{lm+i,j}$&$\otimes$&$ V^{+}_{0,k}$&$=$&$V_{lm+i,j+k}$\\ $V_{lm+i,j}$&$\otimes$&$ V^{-}_{0,k}$&$=$&$V_{lm+i,j+k}$\\ $V_{lm+i,j}$&$\otimes$&$ V^{+}_{\frac{d}{2}m,k}$&$=$&$V_{(l+\frac{d}{2})m+i,j+k}$\\ $V_{lm+i,j}$&$\otimes$&$ V^{-}_{\frac{d}{2}m,k}$&$=$&$V_{(l+\frac{d}{2})m+i,j+k}$\\ $V_{lm+i,j}$&$\otimes$&$ V_{sm,k}$&$=$&$V_{(s+l)m+i,j+k}\oplus V_{(l-s)m+i,j+k}$\\ $V_{lm+i,j}$&$\otimes$&$ V_{sm+t,k}$&$=$&$V_{(s+l)m+(i+t),j+k}\oplus V_{(l-s)m+(i-t),j+k-t}\;\;\;\;\;\;(\ast)$\\ \hline \end{tabular} \end{center} where the mark $(\ast)$, say for the case $V_{sm,j}\otimes V_{lm,k}$, has the following meaning: \begin{itemize}\item[(1)] If $(s+l)m\not\equiv 0,\frac{d}{2}m$ (mod $dm$) and $(s-l)m\not\equiv 0,\frac{d}{2}m$ (mod $dm$), then \begin{equation}\label{eq7.12} V_{sm,j}\otimes V_{lm,k}=V_{(s+l)m,j+k}\oplus V_{(s-l)m,j+k}.\end{equation} \item[(2)] If $(s+l)m\equiv 0$ (mod $dm$), then in the formula \eqref{eq7.12} the summand $V_{(s+l)m,j+k}$ decomposes further and represents $$V^{+}_{0,j+k}\oplus V^{-}_{0,j+k}.$$ \item[(3)] If $(s+l)m\equiv \frac{d}{2}m$ (mod $dm$), then in the formula \eqref{eq7.12} the summand $V_{(s+l)m,j+k}$ decomposes further and represents $$V^{+}_{\frac{d}{2}m,j+k}\oplus V^{-}_{\frac{d}{2}m,j+k}.$$ \item[(4)] If $(s-l)m\equiv 0$ (mod $dm$), then in the formula \eqref{eq7.12} the summand $V_{(s-l)m,j+k}$ decomposes further and represents $$V^{+}_{0,j+k}\oplus V^{-}_{0,j+k}.$$ \item[(5)] If $(s-l)m\equiv \frac{d}{2}m$ (mod $dm$), then in the formula \eqref{eq7.12} the summand $V_{(s-l)m,j+k}$ decomposes further and represents $$V^{+}_{\frac{d}{2}m,j+k}\oplus V^{-}_{\frac{d}{2}m,j+k}.$$ \end{itemize} Similarly, one can work out the meaning of the mark $(\ast)$ for the formula $V_{lm+i,j}\otimes V_{sm+t,k}$: whenever $(s+l)m+(i+t)\equiv 0$ (mod $dm$) or $(s+l)m+(i+t)\equiv \frac{d}{2}m$ (mod $dm$), the summand $V_{(s+l)m+(i+t),j+k}$ splits further, and whenever $(l-s)m+(i-t)\equiv 0$ (mod $dm$) or $(l-s)m+(i-t)\equiv \frac{d}{2}m$ (mod $dm$), the summand $V_{(l-s)m+(i-t),j+k-t}$ splits further. $\bullet$ \emph{The series of nonsemisimple finite-dimensional Hopf algebras.} Using the Hopf algebra $D=D(\underline{m},d,\gamma)$, we can also obtain many nonsemisimple finite-dimensional Hopf algebras, which are new to the best of the author's knowledge. The main idea in constructing these finite-dimensional Hopf algebras is to generalize the exact sequence \eqref{eq5.2} \begin{equation*} \k\longrightarrow H_{00}\longrightarrow H\longrightarrow \overline{H}\longrightarrow \k. \end{equation*} That is, we substitute our Hopf algebra $D(\underline{m},d,\gamma)$ for $H$ and thus obtain finite-dimensional quotients.
One can realize this idea by showing that every Hopf subalgebra $\k[x^{\pm t}]$, $t\in \N$, is a normal Hopf subalgebra of $D.$ Since by definition the element $x$ commutes with $g$ and $y_{m_i}\;(1\leq i\leq \theta)$, we only need to show that $ad(u_j)(x^{t})=u_j'x^{t}S(u_j'')\in \k[x^{\pm t}]$ for all $0\leq j\leq m-1.$ By a direct computation, we have $$ad(u_j)(x^{t})=x^{-t}u_j'S(u_j'')=x^{-t}\e(u_j)\in \k[x^{\pm t}].$$ So we have the following exact sequence of Hopf algebras \begin{equation}\label{eq7.13} \k\longrightarrow \k[x^{\pm t}]\longrightarrow D\longrightarrow D/(x^{t}-1)\longrightarrow \k. \end{equation} We denote the resulting Hopf algebra $D/(x^{t}-1)$ by $D_t$, i.e., $D_t:=D/(x^{t}-1).$ \begin{lemma} The Hopf algebra $D_t$ is finite-dimensional and has dimension $2m^2t.$ \end{lemma} \begin{proof} We again use the bigrading of $D$ to compute the dimension of $D_t.$ By equation \eqref{eq6.1}, $D$ is a free $\k[x^{\pm 1}]$-module of rank $2m^2$. Through this bigrading \eqref{eq6.1} and the relation $x^{t}=1$, $D_t$ is also bigraded and is a free $\k[x]/(x^t-1)$-module of rank $2m^2$. Therefore, $\dim_{\k} D_t=2m^2t.$ In fact, the elements $\{x^ig^jy_s,\ x^{i}g^{j}u_{s}\mid 0\leq i\leq t-1,\ 0\leq j\leq m-1,\ 0\leq s\leq m-1\}$ (for simplicity we use the same notation as in $D$) form a basis of $D_t.$ \end{proof} It is not hard to give the generators and relations of this Hopf algebra: one just needs to add one more relation to the definition of the Hopf algebra $D$, namely the relation $x^t=1.$ The coproduct, counit and antipode are the same as those of $D$, so we do not repeat them. This Hopf algebra has the following properties. \begin{proposition}\label{p7.14} Retain the above notations. \begin{itemize}\item[(1)] The Hopf algebra $D_t$ is not pointed unless $m=1$. In the case $m>1$, its coradical is not a Hopf subalgebra. \item[(2)] The Hopf algebra $D_t$ is not semisimple unless $m=1$. \item[(3)] The Hopf algebra $D_t$ is pivotal, that is, its representation category is a pivotal tensor category. \end{itemize} \end{proposition} \begin{proof} (1) Using exactly the same method as in the proof of Proposition \ref{p7.6}, the subspace spanned by $\{(x^{-d}g)^{i}u_j|0\leq i,j\leq m-1\}$ is a simple coalgebra, where $x^{-d}$ denotes its image in $\k[x^{\pm 1}]/(x^t-1).$ So $D_t$ has a simple coalgebra of dimension $m^2$ and therefore is not pointed when $m>1$ (if $m=1$, it is easy to see that $D_t$ is just a group algebra). For $m>1$, using the same arguments as in the proof of Proposition \ref{p4.8} (3), its coradical is not a Hopf subalgebra. (2) Assume $m>1$; we show that $D_t$ is not semisimple. On the contrary, if $D_t$ were semisimple, then it would be cosemisimple \cite{LR}. This would imply that every $y_{m_i}$ lies in the coradical, which is absurd since $y_{m_i}$ is a nontrivial skew primitive element. (3) Actually, we can prove a stronger result, namely that the Hopf algebra $D$ itself is pivotal. To prove this, by Lemma \ref{l2.16} we only need to establish the following formula for $S^2$: \begin{equation}\label{eq7.14} S^2(h)=(g^{\sum_{i=1}^{\theta}m_i}x^c)h(g^{\sum_{i=1}^{\theta}m_i}x^c)^{-1},\;\;\;\;h\in D, \end{equation} where $c=-\frac{\sum_{i=1}^{\theta}(e_i+1)m_id}{2}$. Note that by the second equation of \eqref{eq4.7}, $\sum_{i=1}^{\theta}(e_i+1)m_id$ is always even. It remains to prove the above formula.
Indeed, on one hand, \begin{align*}S^2(u_j)&=S(x^{b}g^{m-1}\prod_{i=1}^{\theta}(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{j_im_id} g^{-j_im_i}u_j)\\ &=S(u_j)\prod_{i=1}^{\theta}(-1)^{j_i}\xi_{m_i}^{-j_i}\gamma^{- m_i^2\frac{j_i(j_i+1)}{2}} x^{-j_im_id} g^{j_im_i}g^{1-m}x^{-b}\\ &=x^bg^{m-1}\prod_{i=1}^{\theta}\xi_{m_i}^{-2j_i}\gamma^{- m_i^2j_i(j_i+1)} x^{j_im_id} g^{-j_im_i}u_jx^{-j_im_id} g^{j_im_i}g^{1-m}x^{-b}\\ &=x^{2b}g^{m-1}\gamma^{(1-m)j}\prod_{i=1}^{\theta}\xi_{m_i}^{-2j_i}\gamma^{- m_i^2j_i(j_i+1)}\gamma^{j(j_im_i)} x^{-2(1-m)d}g^{1-m}u_j\\ &=x^{2b-2(1-m)d}\prod_{i=1}^{\theta}\gamma^{-m_i^2j_i}u_j, \end{align*} where, recall, $b=(1-m)d-\frac{\sum_{i=1}^{\theta}(e_i-1)m_i}{2}d.$ On the other hand, \begin{align*}(g^{\sum_{i=1}^{\theta}m_i}x^c)u_j(g^{\sum_{i=1}^{\theta}m_i}x^c)^{-1} &=x^{2c}x^{2d\sum_{i=1}^{\theta}m_i}\gamma^{-j\sum_{i=1}^{\theta}m_i}u_j\\ &=x^{2c+2d\sum_{i=1}^{\theta}m_i}\prod_{i=1}^{\theta}\gamma^{-m_i^2j_i}u_j. \end{align*} Since $$2c+2d\sum_{i=1}^{\theta}m_i=-\sum_{i=1}^{\theta}(e_i-1)m_id=2b-2(1-m)d,$$ we have $S^2(u_j)=(g^{\sum_{i=1}^{\theta}m_i}x^c)u_j(g^{\sum_{i=1}^{\theta}m_i}x^c)^{-1}.$ So, to establish formula \eqref{eq7.14}, it only remains to check it on the $y_{m_i}$ for $1\leq i\leq \theta$. This is not hard. In fact, \begin{align*}S^2(y_{m_i})&=S(-y_{m_i}g^{-m_i})\\ &=g^{m_i}y_{m_i}g^{-m_i}=\gamma^{-m_i^2}y_{m_i}\\ &=\gamma^{-m_i(m_1+\cdots+m_{\theta})}y_{m_i}\\ &=(g^{\sum_{i=1}^{\theta}m_i}x^c)y_{m_i}(g^{\sum_{i=1}^{\theta}m_i}x^c)^{-1} \end{align*} due to $\gamma^{m_im_j}=1$ for $i\neq j$ and the fact that $x$ commutes with $y_{m_i}.$ Therefore, the representation category of $D$ is pivotal. As a tensor subcategory, the category of representations of $D_t$ is pivotal automatically. \end{proof} In \cite{BGNR}, the authors wrote that ``it remains unknown whether there exists any Hopf algebra $H$ of dimension 24 such that neither $H$ nor $H^{\ast}$ has the Chevalley property'' (see \cite[Introduction, third paragraph]{BGNR}). With the help of the Hopf algebra $D_t$, we can now shed some light on this. We will show that $D_3$ has dimension 24 and does not have the Chevalley property. However, its dual $(D_3)^{\ast}$ does have the Chevalley property; that is, we still cannot settle the question posed in \cite{BGNR}. In any case, the following example seems not to have been written out explicitly in \cite{BGNR}, and should be implicit, in a dual version, in their classification. \begin{example}\emph{ Let $m=2$ (this implies that $m$ has no nontrivial fraction, that is, $\theta=1$) and take $d=6$. The condition $d=6$ guarantees that condition \eqref{eq4.7} is fulfilled, and thus the Hopf algebra $D(m,d,\gamma)$ exists.
So we take $t=3$, and then we find $$\dim_{\k} D_t=2m^2t=24.$$ In order to understand this Hopf algebra well, we give a presentation of $D_3$: as an algebra, it is generated by $g,x,y,u_0,u_1$ subject to \begin{align*} & x^3=1,\quad g^2=1,\quad xg=gx,\quad y^2=0,\quad xy=yx,\quad yg=-gy,\\ & xu_0=u_0x^{-1},\quad xu_1=u_1x^{-1},\quad yu_0=2u_1=\textrm{i}u_0y,\quad yu_1=0=u_1y,\\ & u_0g=gu_0,\quad u_1g=-gu_1,\\ & u_0u_0=g,\quad u_0u_1=\frac{- \textrm{i}}{2} yg,\quad u_1u_0=\frac{1}{2}yg,\quad u_1u_1=0, \end{align*} where $\textrm{i}=\sqrt{-1}$ is the imaginary unit. The coproduct $\D$, the counit $\epsilon$ and the antipode $S$ of $D_3$ are given by \begin{eqnarray*} &&\D(x)=x\otimes x,\;\; \D(g)=g\otimes g,\;\;\D(y)=1\otimes y+y\otimes g,\\ &&\D(u_0)=u_0\otimes u_0-u_1\otimes gu_1,\;\;\D(u_1)=u_0\otimes u_1+u_1\otimes gu_0,\\ &&\epsilon(x)=\epsilon(g)=\epsilon(u_0)=1,\;\;\epsilon(u_1)=\epsilon(y)=0;\\ &&S(x)=x^{-1},\;\; S(g)=g^{-1},\;\;S(y)=-yg^{-1}, \\ && S(u_0)=gu_0,\;\;S(u_1)=-\textrm{i}u_1. \end{eqnarray*} Next we claim that $D_3$ does not have the Chevalley property while its dual $(D_3)^{\ast}$ does. Recall that a Hopf algebra is said to have the Chevalley property if its coradical is a Hopf subalgebra. So, to show the claim, it is enough to prove that the coradical of $D_3$ is not a Hopf subalgebra while its (Jacobson) radical is a Hopf ideal. Indeed, by Proposition \ref{p7.14} (1), the coradical is not a Hopf subalgebra. Now let us prove that the radical is a Hopf ideal. As usual, denote the radical by $J$; it is not hard to see that $y\in J$, since $y$ generates a nilpotent ideal. Using the relation $yu_0=2u_1$, we also get $u_1\in J$. Now consider the quotient $D_3/(y,u_1)$. In this quotient, $u_0$ becomes a group-like element of order $4$ (as $u_0^2=g$ and $g^2=1$) satisfying $u_0^{-1}xu_0=x^{-1}$, so $D_3/(y,u_1)\cong \k (\Z_3\rtimes \Z_4)$, a $12$-dimensional semisimple group algebra. Hence $J\subseteq (y,u_1)$, and therefore $J=(y,u_1)$, which is clearly a Hopf ideal.} \end{example} \subsection{The Hypothesis.} We point out that our final aim is to classify all prime Hopf algebras of GK-dimension one. So, as a natural step, we consider whether the hypotheses (Hyp1) and (Hyp2) listed in the introduction hold automatically. $\bullet$ \emph{The hypothesis (Hyp1).} Let $H$ be a prime Hopf algebra of GK-dimension one; does $H$ satisfy (Hyp1) automatically? Unfortunately, this is not true, as the following counterexample shows. \begin{example}\label{ex7.5} \emph{Let $n$ be a natural number. As an algebra, $\Lambda(n)$ is generated by $X_1,\ldots,X_n$ and $g$ subject to the following relations: $$X_i^2=X_j^2,\;\;X_iX_j=-X_jX_i,\;\;g^2=1,\;\;-gX_i=X_ig$$ for all $1\leq i\neq j\leq n.$ The coproduct, counit and antipode are given by $$\D(X_i)=1\otimes X_i+X_i\otimes g,\;\;\D(g)=g\otimes g,$$ $$\e(X_i)=0,\;\;\e(g)=1,$$ $$S(X_i)=-X_ig,\;\;S(g)=g^{-1}$$ for all $1\leq i\leq n.$ By the following lemma, we know that $\Lambda(n)$ is a prime Hopf algebra of GK-dimension one when $n$ is odd. Moreover, if $n=2m+1$, then the PI-degree of $\Lambda(n)$ is $2^{m+1}$. } \emph{Now let $$\pi: \Lambda(n)\to \k$$ be a 1-dimensional representation of $\Lambda(n)$. Since $g^2=1$, $\pi(g)=1$ or $\pi(g)=-1$. From the relation $-gX_i=X_ig$, we get $\pi(X_i)=0$ for all $1\leq i\leq n$. This implies that $\ord(\pi)=1$ or $\ord(\pi)=2$. Thus we find that PI-deg$(\Lambda(n))>\ord(\pi)$ in general, and the difference PI-deg$(H)-\ord(\pi)$ can be arbitrarily large.} \end{example} \begin{lemma} Keep the notations and operations used in the above example. Then \begin{itemize}\item[(1)] The algebra $\Lambda(n)$ is a Hopf algebra of GK-dimension one.
\item[(2)] The algebra $\Lambda(n)$ is prime if and only if $n$ is odd. \item[(3)] If $n=2m+1$ is odd, then PI-deg $(\Lambda(n))=2^{m+1}$. \end{itemize} \end{lemma} \begin{proof} (1) is clear. (2) If $n$ is even, consider the element $g\prod_{i=1}^{n}X_i$. A direct computation shows that this element belongs to the center $C(\Lambda(n))$. We also know that $X_{1}^{n}$ lies in the center. Thus $$X_1^{n}-ag\prod_{i=1}^{n}X_i\in C(\Lambda(n))$$ for any $a\in \k$. Now, $(X_1^{n}-ag\prod_{i=1}^{n}X_i)(X_1^{n}+ag\prod_{i=1}^{n}X_i) =X_1^{2n}-a^2(-1)^\frac{n(n+1)}{2}\prod_{i=1}^{n}X_i^2= X_1^{2n}-a^2(-1)^\frac{n(n+1)}{2}X_1^{2n}.$ Taking $a$ such that $a^2(-1)^\frac{n(n+1)}{2}=1$, we see that the central element $X_1^{n}-ag\prod_{i=1}^{n}X_i$ is a nontrivial zero divisor, and thus $\Lambda(n)$ is not prime. It remains to show that $\Lambda(n)$ is prime when $n$ is odd. To prove this, we record the following two facts about the algebra $\Lambda(n)$: 1) the center of $\Lambda(n)$ is $\k[X_1^2]$ ($=\k[X_i^2]$ for $1\leq i\leq n$); 2) $\Lambda(n)$ is a free module over its center with basis $\{g^{l}\prod_{i=1}^{n}X_{i}^{j_i}|0\leq l\leq 1, 0\leq j_i\leq 1\}$. Both facts follow easily from the following observation: as an algebra, $\Lambda(n)\cong \overline{U}(n)\# \k\Z_2$, where $\overline{U}(n)=U(n)/(X_i^2-X_j^2|1\leq i\neq j\leq n)$ and $U(n)$ is the enveloping algebra of the commutative Lie superalgebra of dimension $n$ with degree-one basis $\{X_i|1\leq i\leq n\}$. From these two facts, every monomial in $g$ and the $X_i$ ($1\leq i\leq n$) is not a zero divisor, and is in fact regular. Now, to show the result, assume that $I,J$ are two nontrivial ideals of $\Lambda(n)$ satisfying $IJ=0$. We will show that $I$ contains a monomial, and thus obtain a contradiction. For this, setting $\deg (g)=0$ and $\deg (X_i)=1$ makes $\Lambda(n)$ a graded algebra. Let $a$ and $b$ be two nonzero elements of $I$ and $J$, respectively. Since $\Lambda(n)$ is $\Z$-graded and $\Z$ is an ordered group, we may assume that both $a$ and $b$ are homogeneous by passing to lowest-degree components in $ab=0$. In particular, we can take $a$ to be a nonzero homogeneous element. For simplicity, we assume that $a$ has degree one (other degrees can be handled in the same way). So, $$a=\sum_{i=1}^{n} a_iX_i+\sum_{i=1}^{n}a_i'gX_{i},$$ for $a_i,a_i'\in \k.$ Now $a':=X_1a+aX_1=2a_1X_1^2-2\sum_{i\neq 1}a_i'gX_1X_i$. For any $i\neq 1$, we have $a'':=X_{i}a'+a'X_i=4a_1X_1^2X_i-4a_i'gX_1X_i^2$, and continuing this process, $a''':=X_ja''+a''X_j=-8a_i'gX_1X_i^2X_j\in I$ for any $j\neq 1,i$ (such $j$ exists unless $n=1$, since $n$ is odd; and for $n=1$, $\Lambda(n)$ is clearly prime). This yields a monomial in $I$ if $a_i'\neq 0$ for some $i\neq 1$. We next consider the case $a_i'= 0$ for all $i\neq 1.$ Looking back at the element $a''$, we may then assume that $a_1=0$ too. Repeating the above process with $X_1$ replaced by the other $X_j$'s, we may assume that all $a_j=0$ and all $a_t'=0$. This is impossible, since $a\neq 0$; in a word, $I$ must contain a monomial. (3) By the proof of part (2), we know that $\Lambda(n)$ is a free module over its center with basis $\{g^{l}\prod_{i=1}^{n}X_{i}^{j_i}|0\leq l\leq 1, 0\leq j_i\leq 1\}$, and so the rank of $\Lambda(n)$ over its center is $2^{n+1}=2^{2(m+1)}$.
Therefore, PI-deg$(\Lambda(n))=\sqrt{2^{2(m+1)}}=2^{m+1}.$ \end{proof} $\bullet$ \emph{The hypothesis (Hyp2).} We next consider the analogous question for the second hypothesis (Hyp2): Let $H$ be a prime Hopf algebra of GK-dimension one; does $H$ have a one-dimensional representation $\pi: H\to \k$ such that its invariant components are domains? This is also not true in general. In fact, by Example \ref{ex7.5}, for any one-dimensional representation the left invariant component must contain the subalgebra generated by the $X_i$ ($1\leq i\leq n$), and thus it is not a domain (if it were, it would be commutative by the proof of Lemma \ref{l2.7}). $\bullet$ \emph{Relation between (Hyp1) and (Hyp2).} In the introduction, (Hyp2) is built on (Hyp1), i.e., they use the same one-dimensional representation. However, it is clear that we can consider (Hyp1) and (Hyp2) individually, that is, for each hypothesis we may consider a different one-dimensional representation. Until now, we do not know the exact relationship between (Hyp1) and (Hyp2) for a prime Hopf algebra of GK-dimension one. So, we formulate the following question for further consideration. \begin{question} \begin{itemize} \item[(1)] \emph{Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp1); does $H$ satisfy (Hyp2) automatically?} \item[(2)] \emph{Let $H$ be a prime Hopf algebra of GK-dimension one satisfying (Hyp2); does $H$ satisfy (Hyp1) automatically?} \end{itemize} \end{question} \subsection{A conjecture.} From all the examples presented in this paper, it seems that prime Hopf algebras of GK-dimension one exist in abundance. Nevertheless, they share some common features, and among these we formulate a conjecture on the structure of prime Hopf algebras of GK-dimension one as follows. \begin{conjecture}\label{con7.19} \emph{Let $H$ be a prime Hopf algebra of GK-dimension one. Then we have an exact sequence of Hopf algebras:} \begin{equation} \k\longrightarrow \emph{alg.gp} \longrightarrow H\longrightarrow \emph{f.d. Hopf}\longrightarrow \k, \end{equation} \emph{where ``alg.gp'' denotes the coordinate algebra of a connected algebraic group of dimension one and ``f.d. Hopf'' means a finite-dimensional Hopf algebra.} \end{conjecture} It is not hard to see that all the examples given in this paper satisfy the above conjecture. \begin{remark} \emph{Recently, Professor Ken Brown showed the author one of his slides, in which he introduced the notion of a \emph{commutative-by-finite} Hopf algebra: a Hopf algebra is commutative-by-finite if it is a finite (left or right) module over a commutative normal Hopf subalgebra. So our Conjecture \ref{con7.19} just says that every prime Hopf algebra of GK-dimension one should be a commutative-by-finite Hopf algebra.} \end{remark}
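As a concrete illustration of the conjecture, assembled entirely from results established above: for $H=D(\underline{m},d,\gamma)$, the exact sequence \eqref{eq7.13},
$$\k\longrightarrow \k[x^{\pm t}]\longrightarrow D(\underline{m},d,\gamma)\longrightarrow D_t\longrightarrow \k,$$
has exactly the predicted shape. Here $\k[x^{\pm t}]\cong\k[x^{\pm 1}]$ is the coordinate algebra of the one-dimensional torus and was shown above to be a normal commutative Hopf subalgebra, $D_t$ is finite-dimensional of dimension $2m^2t$, and by the bigrading \eqref{eq6.1} the algebra $D(\underline{m},d,\gamma)$ is a free $\k[x^{\pm t}]$-module of rank $2m^2t$. In particular, $D(\underline{m},d,\gamma)$ is commutative-by-finite in the sense above.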
\section{Introduction} The origin of the cosmological baryon asymmetry is one of the prime open questions in particle physics as well as in cosmology. Among the various mechanisms of baryogenesis, leptogenesis~\cite{FukugitaYanagida} is one of the most attractive ideas because of its simplicity and its connection to neutrino physics. In particular, thermal leptogenesis requires only the thermal excitation of the heavy right-handed Majorana neutrinos which generate tiny neutrino masses via the seesaw mechanism~\cite{Type1seesaw}, and it provides several implications for the light neutrino mass spectrum~\cite{Buchmulleretal}. The size of the CP asymmetry in right-handed neutrino decay is, roughly speaking, proportional to the mass of the right-handed neutrino. Thus, for a lighter right-handed neutrino we obtain only an insufficient amount of CP violation. This is why leptogenesis at a low energy scale has been regarded as difficult, in general, in the conventional Type I seesaw mechanism~\cite{LowerBound,Davidson:2002qv}. On the other hand, in supersymmetric models with conserved R-parity to avoid rapid proton decay, thermal leptogenesis faces the ``gravitino problem'': the overproduction of gravitinos spoils the success of Big Bang Nucleosynthesis (BBN)~\cite{GravitinoProblem}, whereas the stable lightest supersymmetric particle (LSP) becomes a dark matter candidate. In order not to overproduce gravitinos, the reheating temperature after inflation must be kept so low that heavy right-handed neutrinos cannot be thermally produced~\cite{GravitinoProblem2}. In the framework of gravity mediated supersymmetry (SUSY) breaking, a few solutions, e.g., a gravitino LSP with R-parity violation~\cite{Buchmuller:2007ui}, a very light axino LSP~\cite{Asaka:2000ew} and strongly degenerate right-handed neutrino masses~\cite{ResonantLeptogenesis}, have been proposed. Recently, a new class of two Higgs doublet models (THDM)~\cite{2hdm} has been considered in Refs.~\cite{Ma,Nandi,Ma:2006km,Davidson:2009ha,Logan:2010ag, HabaHirotsu}. The motivation is as follows. As mentioned above, the seesaw mechanism naturally realizes tiny masses for the active neutrinos through heavy particles coupled with the left-handed neutrinos. However, since those heavy particles are almost decoupled from the low-energy effective theory, few signals are expected in collider experiments. This has motivated attempts to lower the seesaw scale to a TeV~\cite{TeVseesaw,Haba:2009sd}, where effects of TeV-scale right-handed neutrinos might be observed in collider experiments such as the Large Hadron Collider (LHC) and the International Linear Collider (ILC). However, such scenarios must introduce a fine-tuning in order to obtain both tiny neutrino masses and a detectable left-right neutrino mixing through which experimental evidence can be discovered. Other right-handed neutrino production processes in extended models, e.g., via $Z'$ exchange~\cite{Zprime} or Higgs/Higgsino decay~\cite{CerdenoSeto}, have also been pointed out. Here, let us recall that the Dirac masses of fermions are proportional to their Yukawa couplings as well as to the vacuum expectation value (VEV) of the relevant Higgs field. Hence, the smallness of a mass might be due not to a small Yukawa coupling but to a small VEV of the Higgs field. Such a situation is indeed realized in some THDMs.
For example, in the Type-II THDM with a large $\tan\beta$, the mass hierarchy between up-type and down-type quarks can be explained by the ratio of the Higgs VEVs; when $\tan\beta \sim 40$, the Yukawa couplings of the top and bottom quarks are of the same order, close to unity~\cite{Hashimoto:2004xp}. Similarly, the smallness of the neutrino masses compared to those of the quarks and charged leptons may originate from an extra Higgs doublet with a tiny VEV. The idea is that neutrino masses are much smaller than those of the other fermions because they originate from the different VEV of a different Higgs doublet, so that extremely tiny neutrino Yukawa couplings are not needed. Let us call this kind of model~\cite{Ma,Nandi,Ma:2006km,Davidson:2009ha,Logan:2010ag,HabaHirotsu} a neutrinophilic Higgs doublet model. In particular, in the models of Refs.~\cite{Ma, HabaHirotsu}, tiny Majorana neutrino masses are obtained through a TeV-scale Type-I seesaw mechanism without requiring tiny Yukawa couplings. Notice that the neutrino Yukawa couplings in neutrinophilic Higgs doublet models need not be so small. This fact has a significant implication for leptogenesis: the CP violation in right-handed neutrino decay is proportional to the neutrino Yukawa couplings squared, so large neutrino Yukawa couplings yield a large CP violation. This opens a new possibility of low-scale thermal leptogenesis. In this paper, we will show that the CP asymmetry is enhanced and thermal leptogenesis works well in multi-Higgs models with a neutrinophilic Higgs doublet field, where the tiny VEV of the neutrinophilic Higgs field implies correspondingly larger neutrino Yukawa couplings, so that a TeV-scale seesaw works well. We will show that thermal leptogenesis works at a low energy scale while avoiding an enhancement of the lepton-number-violating washout effects. We will also point out that, in a supersymmetric model with gravity mediated SUSY breaking, thermal leptogenesis works well without confronting the gravitino problem. \section{Neutrinophilic Higgs doublet models} \subsection{Minimal neutrinophilic THDM } \label{subsec:Minimal} Let us review a two Higgs doublet model, which we call the neutrinophilic THDM, originally suggested in Ref.~\cite{Ma}. In this model, one additional Higgs doublet $\Phi_{\nu}$, which gives only neutrino Dirac masses, and a discrete $Z_2$-parity are introduced besides the SM Higgs doublet $\Phi$. The $Z_2$-parity charges (and also lepton number) are assigned as in the following table. \begin{table}[h] \centering \begin{center} \begin{tabular}{|l|c|c|} \hline fields & $Z_{2}$-parity & lepton number \\ \hline\hline SM Higgs doublet, $\Phi$ & $+$ & 0 \\ \hline new Higgs doublet, $\Phi_{\nu}$ & $-$ & 0 \\ \hline right-handed neutrinos, $N$ & $-$ & $1$ \\ \hline others & $+$ & $\pm 1$: leptons, $0$: quarks \\ \hline \end{tabular} \end{center} \end{table} Under the discrete symmetry, the Yukawa interactions are given by \begin{eqnarray} {\mathcal L}_{yukawa}=y^{u}\bar{Q}_L \Phi U_{R} +y^d \bar{Q}_{L}\tilde{\Phi}D_{R}+y^{l}\bar{L}\Phi E_{R} +y^{\nu}\bar{L}\Phi_{\nu}N+ \frac{1}{2}M \bar{N^{c}}N +{\rm h.c.}\; \label{Yukawa:nuTHDM} \end{eqnarray} where $\tilde{\Phi}=i\sigma_{2}\Phi^{\ast}$, and we omit generation indices. Due to the $Z_2$-parity, $\Phi_\nu$ couples only to $N$, so that flavor changing neutral currents (FCNCs) are suppressed. The quark and charged lepton sectors are the same as in the Type-I THDM, but notice that this neutrinophilic THDM is quite different from the conventional Type-I, II, X, Y THDMs~\cite{2hdm}.
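To make the role of the $Z_2$-parity fully explicit (a simple bookkeeping check using only the charges in the table above): the operator $\bar{L}\Phi N$ is $Z_2$-odd, because $N$ is odd while $L$ and $\Phi$ are even, and it is therefore forbidden, whereas $\bar{L}\Phi_{\nu}N$ is even and allowed. Likewise, couplings of $\Phi_{\nu}$ to quarks and charged leptons, such as $\bar{Q}_{L}\Phi_{\nu}U_{R}$, are odd and forbidden. Hence neutrinos obtain their Dirac masses only from $\langle \Phi_{\nu}\rangle$, while all the other fermions couple only to $\Phi$.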
The Higgs potential of the neutrinophilic THDM is given by \begin{align} V^\text{THDM} & = m_\Phi^2 \Phi^\dag \Phi + m_{\Phi_\nu}^2 \Phi_\nu^\dag \Phi_\nu -m_3^2\left(\Phi^\dag \Phi_\nu+\Phi_\nu^\dag \Phi\right) +\frac{\lambda_1}2(\Phi^\dag \Phi)^2 +\frac{\lambda_2}2(\Phi_\nu^\dag \Phi_\nu)^2\nonumber \\ &\qquad+\lambda_3(\Phi^\dag \Phi)(\Phi_\nu^\dag \Phi_\nu) +\lambda_4(\Phi^\dag \Phi_\nu)(\Phi_\nu^\dag \Phi) +\frac{\lambda_5}2\left[(\Phi^\dag \Phi_\nu)^2 +(\Phi_\nu^\dag \Phi)^2\right]. \label{Eq:HiggsPot} \end{align} The $Z_2$-symmetry is softly broken by the $m_3^2$ term. Taking a parameter set \begin{equation} m_\Phi^2 < 0, ~~~ m_{\Phi_\nu}^2 > 0, ~~~ |m_{3}^2| \ll m_{\Phi_\nu}^2, \end{equation} we can obtain the VEV hierarchy of the Higgs doublets, \begin{equation} v^2 \simeq \frac{-m_\Phi^2}{\lambda_1}, ~~~ v_{\nu} \simeq \frac{m_{3}^2 v}{ m_{\Phi_\nu}^2 + (\lambda_3 + \lambda_4 + \lambda_5 ) v^2} , \end{equation} where we have decomposed the SM Higgs doublet $\Phi$ and the extra Higgs doublet $\Phi_{\nu}$ as \begin{eqnarray} \Phi = \left( \begin{array}{c} v+ \frac{1}{\sqrt{2}}\phi^{0}\\ \phi^{-} \end{array} \right) ,\;\; \Phi_{\nu}= \left( \begin{array}{c} v_{\nu}+\frac{1}{\sqrt{2}}\phi^{0}_{\nu} \\ \phi^{-}_{\nu} \end{array} \right). \end{eqnarray} When we take parameter values $m_\Phi \sim 100$ GeV, $m_{\Phi_\nu} \sim 1$ TeV, and $|m_{3}^2| \sim 10$ GeV$^2$, we obtain $v_\nu \sim 1$ MeV. The smallness of $|m_{3}^2|$ is protected by the softly-broken $Z_2$-symmetry. In the very large $\tan \beta=v/v_{\nu} (\gg 1)$ limit we are interested in, the five physical Higgs boson states and their masses are respectively given by \begin{eqnarray} H^\pm \simeq \ [\phi_\nu^\pm] , && ~~~ m^2_{H^\pm} \simeq m_{\Phi_\nu}^2 + \lambda_3 v^2 , \\ A \simeq {\rm Im} [\phi_{\nu}^0] , && ~~~ m^2_A \simeq m_{\Phi_\nu}^2 + (\lambda_3 + \lambda_4 - \lambda_5) v^2 , \\ h \simeq {\rm Re} [\phi^0] , && ~~~ m^2_{h} \simeq 2 \lambda_1 v^2 , \\ H \simeq {\rm Re} [\phi_\nu^0] , && ~~~ m^2_{H} \simeq m_{\Phi_\nu}^2 + (\lambda_3 + \lambda_4+\lambda_5) v^2 , \end{eqnarray} where negligible ${\cal O}(v_{\nu}^2)$ and ${\cal O}(m_3^2)$ corrections are omitted. Notice that $\tan\beta$ is extremely large, so that the SM-like Higgs $h$ originates almost entirely from $\Phi$, while the other physical Higgs particles, $H^\pm, H, A$, originate almost entirely from $\Phi_\nu$. Since $\Phi_\nu$ has Yukawa couplings only with the neutrinos and lepton doublets, distinctive phenomenology, not found in other THDMs, can be expected. For example, lepton flavor violation (LFV) processes and oblique corrections are estimated in Ref.~\cite{Ma}, and charged Higgs processes in collider experiments are discussed in Refs.~\cite{Davidson:2009ha, Logan:2010ag}~\footnote{These references deal with the Dirac neutrino version of the neutrinophilic THDM, but the charged Higgs phenomenology is partly similar.}. The neutrino masses including one-loop radiative corrections~\cite{Ma:2006km} are estimated as \begin{equation} (m_\nu)_{ij} = \sum_k\frac{y^{\nu}_{ik} v_\nu y^{\nu T}{}_{kj} v_\nu}{M_k} + \sum_k {y^\nu_{ik} y^{\nu T}{}_{kj} M_{k} \over 16 \pi^{2}} \left[ {m_R^{2} \over m_R^{2}-M_{k}^{2}} \ln {m_R^{2} \over M_{k}^{2}} - {m_I^{2} \over m_I^{2}-M_{k}^{2}} \ln {m_I^{2} \over M_{k}^{2}} \right], \end{equation} where $m_R$ and $m_I$ are the masses of $ {\rm Re} [\phi_\nu^{0}]$ and ${\rm Im} [\phi_\nu^{0}]$, respectively. It is easy to see that the tree-level contribution gives ${\cal O} (0.1)$ eV neutrino masses for $M_k \sim 1$ TeV, $v_\nu \sim 1$ MeV and $y^{\nu} = {\cal O}(1)$.
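As a quick numerical illustration of the scales involved (using only the representative values quoted above, and neglecting the quartic terms in the denominator): the induced VEV is
$$ v_\nu \simeq \frac{m_3^2\, v}{m_{\Phi_\nu}^2} \sim \frac{(10~{\rm GeV}^2)(100~{\rm GeV})}{(1~{\rm TeV})^2} = 1~{\rm MeV}, $$
and the tree-level seesaw contribution is then of order
$$ m_\nu \sim \frac{(y^{\nu} v_\nu)^2}{M_k} \sim \frac{(1~{\rm MeV})^2}{1~{\rm TeV}} = 1~{\rm eV} $$
for $y^{\nu}=1$, so that $y^{\nu}\simeq 0.3$ indeed yields ${\cal O}(0.1)$ eV masses.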
The one-loop contribution arises for a nonvanishing $\lambda_5$. When $m_R^{2} - m_I^{2} = 2 \lambda_5 v^{2} \ll m_0^{2} = (m_R^{2} + m_I^{2})/2$, \begin{equation} ({m}_\nu)_{ij} = {\lambda_5 v^{2} \over 8 \pi^{2}} \sum_k {y^\nu_{ik} y^\nu_{jk} M_{k} \over m_0^{2} - M_{k}^{2}} \left[ 1 - {M_{k}^{2} \over m_0^{2}-M_{k}^{2}} \ln {m_0^{2} \over M_{k}^{2}} \right], \end{equation} and this reduces to \begin{eqnarray} &&({m}_\nu)_{ij} = {\lambda_5 v^{2} \over 8 \pi^{2}} \sum_k {y^\nu_{ik} y^{\nu}_{jk} \over M_{k}} \left[ \ln {M_{k}^{2} \over m_0^{2}} - 1 \right], \;\; (M_{k}^{2} \gg m_0^{2}), \\ && ({m}_\nu)_{ij} = {\lambda_5 v^{2} \over 8 \pi^{2} m_0^{2}} \sum_k y^\nu_{ik} y^\nu_{jk} M_{k}, \;\; (m_0^{2} \gg M_{k}^{2}), \\ && ({m}_\nu)_{ij} \simeq {\lambda_5 v^{2} \over 16 \pi^{2}} \sum_k {y^\nu_{ik} y^\nu_{jk} \over M_{k}}, \;\; (m_0^{2} \simeq M_{k}^{2}). \end{eqnarray} Thus, when the masses of the Higgs bosons (except for $h$) and of the right-handed neutrinos are of ${\mathcal O}(1)$ TeV, a light neutrino mass scale of order ${\mathcal O}(0.1)$ eV is induced with $\lambda_5 \sim 10^{-4}$. Whether the tree-level effect dominates over the loop-level effect is therefore determined by the magnitude of $\lambda_5$ (and of $m_A, m_H$), which enter the one-loop diagram. \subsection{A UV theory of the neutrinophilic THDM } \label{subsec:HabaHirotsu} Here we present the model of Ref.~\cite{HabaHirotsu} as an example of a UV theory of the neutrinophilic THDM. This model is constructed by introducing one gauge singlet scalar field $S$, which carries lepton number, and a $Z_3$-symmetry, as shown in the following table. \begin{table}[h] \centering \begin{center} \begin{tabular}{|l|c|c|} \hline fields & $Z_{3}$-charge & lepton number \\ \hline\hline SM Higgs doublet, $\Phi$ & 1 & 0 \\ \hline new Higgs doublet, $\Phi_{\nu}$ & $\omega^{2}$ & 0 \\ \hline new Higgs singlet, $S$ & $\omega$ & $-2$ \\ \hline right-handed neutrinos, $N$ & $\omega$ & 1 \\ \hline others & 1 & $\pm 1$: leptons, $0$: quarks \\ \hline \end{tabular} \end{center} \end{table} Under the discrete symmetry, the Yukawa interactions are given by \begin{eqnarray} {\mathcal L}_{yukawa}=y^{u}\bar{Q}_{L}\Phi U_{R} +y^d \bar{Q}_{L}\tilde{\Phi}D_{R}+y^{l}\bar{L}\Phi E_{R} +y^{\nu}\bar{L}\Phi_{\nu}N+\frac{1}{2}y^{N}S\bar{N^{c}}N +{\rm h.c.} . \label{22} \end{eqnarray} The Higgs potential can be written as \begin{eqnarray} V=&m_\Phi^{2}|\Phi|^{2}+m_{\Phi_{\nu}}^{2}|\Phi_{\nu}|^{2}-m_S^{2}|S|^{2} -\lambda S^{3}-\kappa S\Phi^{\dagger}\Phi_{\nu}\nonumber\\ &+\frac{\lambda_{1}}{2}|\Phi|^{4}+\frac{\lambda_{2}}{2}|\Phi_{\nu}|^{4}+\lambda_{3}|\Phi|^{2}|\Phi_{\nu}|^{2}+\lambda_{4}|\Phi^{\dagger}\Phi_{\nu}|^{2}\nonumber\\ &+\lambda_{S}|S|^4+ \lambda_{\Phi}|S|^{2}|\Phi|^{2}+\lambda_{\Phi_{\nu}}|S|^{2}|\Phi_{\nu}|^{2} + {\rm h.c.} . \label{Potential:HabaHirotsu} \end{eqnarray} The $Z_3$-symmetry forbids the dimension-four operators $(\Phi^\dagger \Phi_{\nu})^{2}$, $\Phi^\dagger \Phi_{\nu}|\Phi|^2$, $\Phi^\dagger \Phi_{\nu}|\Phi_{\nu}|^2$, $S^4$, $S^2|S|^2$, $S^2|\Phi|^2$, $S^2|\Phi_{\nu}|^2$, as well as the dimension-two and -three operators $\Phi^\dagger \Phi_{\nu}$, $S|\Phi|^{2}$, $S|\Phi_\nu|^{2}$. Small soft breaking terms such as $m_3'^2\Phi^\dagger \Phi_{\nu}$ might be introduced to avoid the domain wall problem, but we omit them here for simplicity.
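It may be worth spelling out the $Z_3$ bookkeeping behind these statements (a simple check using the charges in the table above): the allowed operators are $Z_3$-neutral, e.g., $S\Phi^{\dagger}\Phi_{\nu}$ carries $\omega\cdot\omega^{2}=1$, and $S\bar{N^{c}}N$ carries $\omega\cdot\omega^{2}=1$, since the Majorana bilinear $\bar{N^{c}}N$ carries twice the charge of $N$, i.e., $\omega^{2}$; similarly, $\bar{L}\Phi_{\nu}N$ carries $\omega^{2}\cdot\omega=1$. By contrast, $\Phi^{\dagger}\Phi_{\nu}$ alone carries $\omega^{2}$, and a bare Majorana mass term $\bar{N^{c}}N$ carries $\omega^{2}$, so both are forbidden; the right-handed neutrino mass can therefore arise only from $\langle S\rangle$, as seen below.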
It has been shown that, with $\kappa \sim 1$ MeV, the desirable hierarchy of VEVs, \begin{eqnarray} && v_s \equiv \langle S \rangle \sim 1 \;\hbox{TeV},\;\;\; v \sim 100 \;\hbox{GeV}, \;\;\; v_{\nu} \sim 1 \;\hbox{MeV}, \end{eqnarray} and the neutrino mass \begin{equation} m_\nu \simeq \frac{y^{\nu2} v_{\nu}^{2}} {M_N } , \end{equation} with the Majorana mass of the right-handed neutrino $M_N = y^N v_s$, can be realized~\cite{HabaHirotsu}. This is the so-called Type-I seesaw mechanism at the TeV scale, when the coefficients $y^\nu$ and $y^N$ are assumed to be of order one. The masses of the scalar and pseudo-scalar states originating mostly from $S$ are given by \begin{eqnarray} m^{2}_{H_{S}} &=& m_3^{2}+2\lambda_{S} v_s^2 , \;\;\;\;\;\; m^{2}_{A_{S}} = 9 \lambda v_s , \end{eqnarray} in the potential Eq.~(\ref{Potential:HabaHirotsu}) without CP violation. For the parameter region with $v_s \gg 1$ TeV, both the scalar and the pseudo-scalar are heavier than the other particles. After integrating out $S$, thanks to the $Z_3$-symmetry, the model ends up being an effectively neutrinophilic THDM with an approximate $Z_2$-symmetry, $\Phi \to \Phi, \Phi_\nu \to -\Phi_\nu$. In the matching to the neutrinophilic THDM, the value of $m_3^2$, the soft $Z_2$-symmetry breaking parameter, is expected to be $\kappa v_s$. A nonvanishing $\lambda_{5}$ is induced by integrating out $S$; it is estimated as ${\mathcal O}(\kappa^2/m_S^2)\sim 10^{-12}$. Thus, the neutrinophilic THDM has an approximate $Z_2$-symmetry. As for the neutrino mass induced by the one-loop diagram\footnote{ We would like to thank J. Kubo and H. Sugiyama for bringing this point to our attention. }, the UV theory induces a small $\lambda_5\sim 10^{-12}$ due to the $Z_3$-symmetry, so that the radiatively induced neutrino mass from the one-loop diagram is estimated as $\lambda_5 v^2/[(4\pi)^2M] \sim 10^{-4}$ eV. This is negligible compared with the light neutrino mass induced at tree level by the Type-I seesaw mechanism. The tree-level neutrino mass is \begin{equation} m_\nu^{tree} \sim {y_\nu^2 v_\nu^2 \over M}\sim {y_\nu^2 \kappa^2v^2 \over v_s^2M}, \end{equation} where we input $v_\nu \sim {\kappa v \over v_s}$. On the other hand, the one-loop induced neutrino mass is estimated as \begin{equation} m_\nu^{loop} \sim {\lambda_5 y_\nu^2 \over 16\pi^2}{v^2 \over M} \sim {y_\nu^2 \over 16\pi^2}{\kappa^2 v^2 \over m_S^2 M}. \end{equation} Putting $m_S \sim M \sim v_s$, \begin{equation} {m_\nu^{loop} \over m_\nu^{tree}}\sim {1 \over 16\pi^2}, \end{equation} which shows that the loop-induced neutrino mass is always smaller than the tree-level mass when the UV theory is the model of Ref.~\cite{HabaHirotsu}. \subsection{Supersymmetric extension of the neutrinophilic Higgs doublet model} \label{subsec:Super} Now let us present the supersymmetric extension of the neutrinophilic Higgs doublet model. The extension is straightforward: the Higgs sector is enlarged to a four-Higgs-doublet model. The superpotential is given by \begin{eqnarray} {\mathcal W}&=&y^{u}\bar{Q}_{L}H_u U_{R} +y^d \bar{Q}_{L}{H_d}D_{R}+y^{l}\bar{L}H_d E_{R} +y^{\nu}\bar{L}H_{\nu}N+M N^2 \nonumber \\ && +\mu H_uH_d + \mu' H_\nu H_{\nu'} +\rho H_u H_{\nu'} + \rho' H_\nu H_d, \end{eqnarray} where $H_u$ ($H_d$) is the Higgs doublet that gives masses to the up- (down-) type sector. $H_\nu$ gives the neutrino Dirac mass, and $H_{\nu'}$ does not contribute to fermion masses. Under the $Z_2$-parity, $H_u$ and $H_d$ are even, while $H_\nu$ and $H_{\nu'}$ are odd. The $Z_2$-parity is softly broken by the $\rho$ and $\rho'$ terms.
We assume that $|\mu|, |\mu'| \gg |\rho|, |\rho'|$, and that the SUSY-breaking soft squared masses can trigger suitable electroweak symmetry breaking. The Higgs potential is given by \begin{eqnarray} V &=& (|\mu|^2 +|\rho|^2) H_u^\dag H_u + (|\mu|^2+|\rho'|^2) H_d^\dag H_d + (|\mu'|^2 +|\rho'|^2) H_{\nu}^\dag H_{\nu} + (|\mu'|^2+|\rho|^2) H_{\nu'}^\dag H_{\nu'} \nonumber \\ && + \frac{g_1^2}{2} \left( H_u^\dag \frac{1}{2} H_u - H_d^\dag\frac{1}{2} H_d + H_{\nu}^\dag \frac{1}{2} H_{\nu} - H_{\nu'}^\dag \frac{1}{2}H_{\nu'} \right)^2 \nonumber \\ && + \sum_a \frac{g_2^2}{2} \left( H_u^\dag \frac{\tau^a}{2} H_u + H_d^\dag\frac{\tau^a}{2} H_d + H_{\nu}^\dag \frac{\tau^a}{2} H_{\nu} + H_{\nu'}^\dag \frac{\tau^a}{2}H_{\nu'} \right)^2 \nonumber \\ && + m_{H_u}^2 H_u^\dag H_u + m_{H_d}^2 H_d^\dag H_d + m_{H_\nu}^2 H_{\nu}^\dag H_{\nu}+ m_{H_{\nu'}}^2 H_{\nu'}^\dag H_{\nu'} \nonumber \\ && + B \mu H_u \cdot H_d + B' \mu' H_{\nu}\cdot H_{\nu'} + \hat{B} \rho H_u \cdot H_{\nu'} + \hat{B}' \rho' H_{\nu}\cdot H_{d}\nonumber\\ && + \mu^* \rho H_d^\dag H_{\nu'}+\mu^* \rho' H_u^\dag H_{\nu}+ \mu'^* \rho' H_{\nu'}^\dag H_{d}+\mu'^* \rho H_\nu^\dag H_{u} + {\rm h.c.} , \end{eqnarray} where $\tau^a$ and the dot represent an $SU(2)$ generator and the antisymmetric $SU(2)$ product, respectively. We assume Max.[$|\hat{B}\rho|, |\hat{B}'\rho'|, |\mu\rho|, |\mu'\rho|,|\mu\rho'|,|\mu'\rho'|$] $\sim {\mathcal O}(10)$ GeV$^2$, which triggers the VEV hierarchy between the SM-like Higgs doublets and the neutrinophilic Higgs doublets. Notice that quarks and charged leptons acquire small non-holomorphic Yukawa couplings with $H_\nu$ through one-loop diagrams involving the small mass parameters $\hat{B}\rho, \hat{B}'\rho', \mu\rho, \mu'\rho, \mu\rho', \mu'\rho'$. This situation is quite different from the non-SUSY model, where these couplings are extremely suppressed by a factor of $v_\nu/v$. Gauge coupling unification would require introducing extra particles; in any case, the supersymmetric extension of the neutrinophilic Higgs doublet model can easily be constructed as shown above. \section{Leptogenesis} \subsection{A brief overview of thermal leptogenesis} In the seesaw model, the smallness of the neutrino masses can be naturally explained by the small mixing between the left-handed neutrinos and the heavy right-handed Majorana neutrinos $N_i$. The basic part of the Lagrangian of the SM with right-handed neutrinos is \begin{eqnarray} {\cal L}_{N}^{\rm SM}=-y^{\nu}_{ij} \overline{l_{L,i}} \Phi N_j -\frac{1}{2} \sum_{i} M_i \overline{ N^c_i} N_i + h.c. , \label{SMnuYukawa} \end{eqnarray} where $i,j=1,2,3$ denote the generation indices, $y^{\nu}$ is the neutrino Yukawa coupling matrix, $l_L$ and $\Phi$ are the lepton and Higgs doublets, respectively, and $M_i$ is the lepton-number-violating mass of the right-handed neutrino $N_i$ (we work in the basis of right-handed neutrino mass eigenstates). With these Yukawa couplings, the mass matrix of the left-handed neutrinos is given by the well-known seesaw formula \begin{equation} m_{ij} = \sum_k \frac{y^{\nu}_{ik}v y^{\nu}{}^T_{kj}v}{M_k}. \end{equation} The decay rate of the lightest right-handed neutrino is given by \begin{eqnarray} \Gamma_{N_1} = \sum_j\frac{y^{\nu}_{1j}{}^\dagger y^{\nu}_{j1}}{8\pi}M_1 = \frac{(y^{\nu}{}^\dagger y^{\nu})_{11}}{8\pi}M_1.
\end{eqnarray} Comparing this with the Hubble rate obtained from the Friedmann equation for a spatially flat spacetime, \begin{equation} H^2 = \frac{1}{3 M_P^2}\rho , \label{FriedmannEq} \end{equation} with the energy density of radiation, \begin{equation} \rho = \frac{\pi^2}{30}g_*T^4 , \end{equation} where $g_*$ denotes the effective number of relativistic degrees of freedom and $M_P \simeq 2.4 \times 10^{18}$ GeV is the reduced Planck mass, the condition of out-of-equilibrium decay, $\Gamma_{N_1} < \left.H\right|_{T=M_1}$, is rewritten as \begin{eqnarray} \tilde{m}_1 \equiv (y^{\nu}{}^\dagger y^{\nu})_{11} \frac{v^2}{M_1} < \frac{8\pi v^2}{M_1^2} \left.H\right|_{T=M_1} \equiv m_* \end{eqnarray} with $ m_* \simeq 1\times 10^{-3}$ eV and $v=174$ GeV. In the case of a hierarchical mass spectrum for the right-handed neutrinos, the lepton asymmetry in the Universe is generated dominantly by the CP-violating out-of-equilibrium decays of the lightest heavy neutrino, $N_1 \rightarrow l_L \Phi^*$ and $ N_1 \rightarrow \overline{l_L} \Phi $. The corresponding CP asymmetry is given by~\cite{FandG} \begin{eqnarray} \varepsilon &\equiv& \frac{\Gamma(N_1\rightarrow \Phi+\bar{l}_j)-\Gamma(N_1\rightarrow \Phi^*+l_j)} {\Gamma(N_1\rightarrow \Phi+\bar{l}_j)+\Gamma(N_1\rightarrow \Phi^*+l_j)} \nonumber \\ &\simeq& -\frac{3}{8\pi}\frac{1}{(y^{\nu} y^{\nu}{}^{\dagger})_{11}}\sum_{i=2,3} \textrm{Im}(y^{\nu}y^{\nu}{}^{\dagger})^2_{1i} \frac{M_1}{M_i}, \qquad \textrm{for} \quad M_i \gg M_1 . \end{eqnarray} Through the relations of the seesaw mechanism, this can be roughly estimated as \begin{eqnarray} \varepsilon \simeq \frac{3}{8\pi}\frac{M_1 m_3}{v^2} \sin\delta \simeq 10^{-6}\left(\frac{M_1}{10^{10}\textrm{GeV}}\right) \left(\frac{m_3}{0.05 \textrm{eV}}\right) \sin\delta, \label{epsilon} \end{eqnarray} where $m_3$ is the heaviest light neutrino mass, normalized by $0.05$ eV, the value preferred by atmospheric neutrino oscillation data~\cite{atm}. Using the above $\varepsilon$, the resultant baryon asymmetry generated via thermal leptogenesis is expressed as \begin{equation} \frac{n_b}{s} \simeq C \kappa \frac{\varepsilon}{g_*} , \label{b-sRatio} \end{equation} where $\left. g_*\right|_{T=M_1} \sim 100$, the so-called dilution (or efficiency) factor $ \kappa \leq {\cal O}(0.1) $ accounts for the dilution by washout processes, and the coefficient \begin{equation} C = \frac{8 N_f + 4 N_H}{22 N_f + 13 N_H} , \label{C} \end{equation} with $N_f$ and $N_H$ the numbers of fermion generations and Higgs doublets~\cite{C}, is the sphaleron conversion factor from lepton to baryon asymmetry~\cite{KRS}. In order to obtain the observed baryon asymmetry of our Universe, $n_b/s \simeq 10^{-10}$~\cite{WMAP}, the inequality $\varepsilon \gtrsim 10^{-7}$ is required. This can be rewritten as $M_1 \gtrsim 10^9$ GeV, which is the so-called Davidson-Ibarra bound for models with a hierarchical right-handed neutrino mass spectrum~\cite{LowerBound,Davidson:2002qv}. \subsection{Leptogenesis in the neutrinophilic THDM} Now we consider leptogenesis in the neutrinophilic THDM with the extra Higgs doublet $\Phi_{\nu}$ described in Sec.~\ref{subsec:Minimal}. The relevant interaction part of the Lagrangian, Eq.~(\ref{Yukawa:nuTHDM}), is \begin{eqnarray} {\cal L}_{N}=-y^{\nu}_{ij} \overline{l_{L,i}} \Phi_{\nu} N_j -\frac{1}{2} \sum_{i} M_i \overline{ N^c_i} N_i + h.c. . \label{Yukawa:nuTHDM(2)} \end{eqnarray} The usual Higgs doublet $\Phi$ of Eq.~(\ref{SMnuYukawa}) is replaced by the new Higgs doublet $\Phi_{\nu}$.
Again, we work in the basis of right-handed neutrino mass eigenstates. Then, with these Yukawa couplings, the mass matrix of the left-handed neutrinos is given by \begin{equation} m_{ij} = \sum_k \frac{y^{\nu}_{ik}v_{\nu} y^{\nu}{}^T_{kj}v_{\nu}}{M_k}. \end{equation} Thus, for a smaller VEV $v_{\nu}$, a larger $y^{\nu}$ is required. The Boltzmann equation for the lightest right-handed neutrino $N_1$, denoted by $N$ here, is given by \begin{eqnarray} \dot{n}_N+3Hn_N &=& -\gamma (N\rightarrow L\Phi_{\nu}) - \gamma (N\rightarrow \bar{L}\Phi_{\nu}^*) \qquad\textrm{:decay}\nonumber\\ && +\gamma (L\Phi_{\nu}\rightarrow N) + \gamma (\bar{L}\Phi_{\nu}^* \rightarrow N) \qquad \textrm{:inverse decay}\nonumber\\ && -\gamma (N L \rightarrow A \Phi_{\nu})-\gamma ( N \Phi_{\nu} \rightarrow L A) -\gamma (N \bar{L} \rightarrow A \Phi_{\nu}^* )-\gamma ( N \Phi_{\nu}^* \rightarrow \bar{L} A) \nonumber\\ && +\textrm{inverse processes} \qquad \qquad \qquad \qquad : \textrm{s-channel scattering} \nonumber\\ && -\gamma (N L \rightarrow A \Phi_{\nu})-\gamma ( N \Phi_{\nu} \rightarrow L A) -\gamma ( N A \rightarrow L \Phi_{\nu}) \nonumber\\ && -\gamma (N \bar{L} \rightarrow A \Phi_{\nu}^* )-\gamma ( N \Phi_{\nu}^* \rightarrow \bar{L} A) -\gamma ( N A \rightarrow \bar{L} \Phi_{\nu}^*) \nonumber\\ && +\textrm{inverse processes} \qquad \qquad \qquad \qquad : \textrm{t-channel scattering} \nonumber\\ && -\gamma (N N \rightarrow {\rm Final}) + \gamma ({\rm Final} \rightarrow N N) : {\rm annihilation} \nonumber\\ &=& -\Gamma_D (n_N-n_N^{eq})-\Gamma_{scat} (n_N-n_N^{eq}) -\langle\sigma v(\rightarrow \Phi, \Phi_{\nu})\rangle (n_N^2-n_N^{eq}{}^2) \label{Boltzman:N} \end{eqnarray} where $\Phi, \Phi_{\nu}$, and $A$ denote the Higgs bosons, the neutrinophilic Higgs bosons, and the gauge bosons, respectively. Notice that the usual $\Delta L =1$ lepton-number-violating scattering processes involving the top quark are absent in this model, because $\Phi_{\nu}$ has Yukawa couplings only with neutrinos. Although the annihilation processes $(N N \rightarrow {\rm Final})$ are included in Eq.~(\ref{Boltzman:N}), in practice they are not relevant, because the coupling $y^{\nu}_{i1}$ must be very small, as will be shown later, in order to satisfy the out-of-equilibrium decay condition.
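Before turning to the lepton asymmetry, it is useful to quantify the statement above that a smaller $v_\nu$ requires a larger $y^\nu$. The short sketch below (Python; a one-flavor seesaw relation $m_\nu = (y^\nu)^2 v_\nu^2/M_1$ is assumed, and the sample values are illustrative) prints the Yukawa coupling needed to reproduce $m_\nu = 0.05$ eV:
\begin{verbatim}
import math
m_nu = 0.05e-9   # light neutrino mass in GeV (0.05 eV)
for v_nu in (174.0, 1.0, 1.0e-3):       # VEV in GeV: SM-like vs neutrinophilic
    for M1 in (1.0e2, 1.0e5, 1.0e10):   # right-handed neutrino mass in GeV
        y = math.sqrt(m_nu * M1) / v_nu # from m_nu = y^2 v_nu^2 / M1
        print("v_nu=%8.1e GeV  M1=%8.1e GeV  ->  y^nu ~ %.1e" % (v_nu, M1, y))
\end{verbatim}
As expected, the required coupling scales as $1/v_\nu$: for $v_\nu$ in the MeV range and a TeV-scale $M_1$ the couplings are sizable, whereas for $v_\nu = v$ they are minuscule.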
The Boltzmann equation for the lepton asymmetry $L \equiv l-\bar{l}$ is given by \begin{eqnarray} && \dot{n}_L+3H n_L \nonumber \\ &=& \gamma(N\rightarrow l\Phi_{\nu}) - \gamma( \bar{N} \rightarrow \bar{l}\Phi_{\nu}^*) \nonumber \\ && -\{ \gamma(l\Phi_{\nu}\rightarrow N) - \gamma( \bar{l}\Phi_{\nu}^* \rightarrow \bar{N}) \} \qquad\textrm{:decay and inverse decay} \nonumber \\ && -\gamma ( l A \rightarrow N \Phi_{\nu} )+\gamma ( \bar{l} A \rightarrow \bar{N} \Phi_{\nu}^* ) -\gamma (N l \rightarrow A \Phi_{\nu}) \nonumber \\ && +\gamma ( \bar{N} \bar{l} \rightarrow A \Phi_{\nu}^*) \quad\textrm{:s-channel $\Delta L=1$ scattering} \nonumber\\ && -\gamma (N l \rightarrow A \Phi_{\nu})+\gamma (\bar{N} \bar{l} \rightarrow A \Phi_{\nu}^*)-\gamma (l A \rightarrow N \Phi_{\nu})+\gamma (\bar{l} A \rightarrow \bar{N} \Phi_{\nu}^*) \nonumber\\ && -\gamma ( l \Phi_{\nu} \rightarrow N A)+\gamma ( \bar{l} \Phi_{\nu}^* \rightarrow \bar{N} A) \quad\textrm{:t-channel $\Delta L=1$ scattering} \nonumber \\ && +\gamma( \bar{l}\bar{l} \rightarrow \Phi_{\nu}^*\Phi_{\nu}^*)-\gamma(ll\rightarrow \Phi_{\nu}\Phi_{\nu}) \nonumber \\ && +2 \{ \gamma(\bar{l}\Phi_{\nu}^*\rightarrow l\Phi_{\nu})- \gamma(l\Phi_{\nu}\rightarrow \bar{l}\Phi_{\nu}^*) \} \quad\textrm{:t and s-channel $\Delta L=2$ scattering} \nonumber \\ &=& \varepsilon\Gamma_D(n_N-n_N^{eq}) - \Gamma_W n_L \end{eqnarray} where \begin{equation} \Gamma_W = \frac{1}{2}\frac{n_N^{eq}}{n_{\gamma}^{eq}}\Gamma_N + \frac{n_N}{n_N^{eq}}\Gamma_{\Delta L=1,t} + 2\Gamma_{\Delta L=1,s}+ 2\Gamma_{\Delta L=2} \end{equation} is the washout rate. The condition of out-of-equilibrium decay is given by \begin{eqnarray} \tilde{m}_1 \equiv (y^{\nu}{}^\dagger y^{\nu})_{11}\frac{v_{\nu}^2}{M_1} < \frac{8\pi v_{\nu}^2}{M_1^2} \left.H\right|_{T=M_1} \equiv m_* \left(\frac{v_{\nu}}{v}\right)^2 . \end{eqnarray} Notice that for $v_{\nu} \ll v$ the upper bound on $\tilde{m}_1$ becomes more stringent, which implies that the lightest left-handed neutrino mass is almost vanishing, $m_1 \simeq 0$. Alternatively, the condition can be expressed as \begin{equation} ( y^{\nu}{}^{\dagger} y^{\nu})_{11} < 8 \pi \sqrt{ \frac{\pi^2 g_*}{90} }\frac{M_1}{M_P} . \label{OoEqDecay} \end{equation} Hence, for a TeV-scale $M_1$, the value of $ (y^{\nu}{}^{\dagger} y^{\nu} )_{11}$ must be very small, which can be realized by taking all $y^{\nu}_{i1}$ to be small. For such neutrino Yukawa couplings, $y^{\nu}_{i1} \ll y^{\nu}_{i2}, y^{\nu}_{i3}$, and a hierarchical right-handed neutrino mass spectrum, the CP asymmetry, \begin{eqnarray} \varepsilon &\simeq & -\frac{3}{8\pi}\frac{1}{(y^{\nu}{}^{\dagger}y^{\nu})_{11}} \left(\textrm{Im}(y^{\nu}{}^{\dagger}y^{\nu})^2_{12} \frac{M_1}{M_2} + \textrm{Im}(y^{\nu}{}^{\dagger}y^{\nu})^2_{13} \frac{M_1}{M_3} \right) \nonumber \\ & \simeq & -\frac{3}{8\pi}\frac{m_{\nu} M_1}{v_{\nu}^2} \sin\theta \nonumber \\ & \simeq & -\frac{3}{16\pi} 10^{-6} \left(\frac{0.1 {\rm GeV}}{v_{\nu}}\right)^2 \left(\frac{M_1}{100 {\rm GeV}}\right) \left(\frac{m_{\nu}}{0.05 {\rm eV}}\right) \sin\theta , \label{CPasym} \end{eqnarray} is significantly enhanced by the large Yukawa couplings $y^{\nu}_{i2}$ and $y^{\nu}_{i3}$, as well as by the tiny Higgs VEV $v_{\nu}$.
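The scaling of Eq.~(\ref{CPasym}) is easy to evaluate numerically. The sketch below (Python; $\sin\theta = 1$ and the sample points are illustrative) shows how $|\varepsilon|$ grows as $v_\nu$ decreases at fixed $M_1$:
\begin{verbatim}
import math

def eps_cp(v_nu, M1, m_nu=0.05, sin_theta=1.0):
    """|epsilon| from Eq. (CPasym); v_nu and M1 in GeV, m_nu in eV."""
    return (3.0 / (16.0 * math.pi)) * 1e-6 * (0.1 / v_nu)**2 \
           * (M1 / 100.0) * (m_nu / 0.05) * sin_theta

for v_nu in (174.0, 1.0, 1.0e-2):   # VEV of Phi_nu in GeV
    print("v_nu=%8.2e GeV  ->  |eps| ~ %.1e" % (v_nu, eps_cp(v_nu, 1.0e3)))
# |eps| >~ 1e-7, as needed for thermal leptogenesis, is reached
# for sufficiently small v_nu even with M1 ~ 1 TeV.
\end{verbatim}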
The thermally averaged interaction rate of the $\Delta L =2$ scatterings is expressed as \begin{eqnarray} \Gamma^{(\Delta L =2)} = \frac{1}{n_{\gamma}} \frac{T}{32 \pi (2 \pi )^4} \int ds \sqrt{s} K_1\left(\frac{\sqrt{s}}{T} \right) \int \frac{d \cos\theta}{2}\sum \overline{ |{\cal M}|^2} \end{eqnarray} with \begin{eqnarray} \sum \overline{|{\cal M}|^2} &=& 2 \overline{|{\cal M}_{\rm t}|^2} + 2 \overline{|{\cal M}_{\rm s}|^2} \simeq \sum_{j,(\alpha, \beta)} 2 |y^{\nu}_{\alpha j} y^{\nu}_{\beta j}{}^{\dagger}|\frac{s}{M_{j}^2}, \quad \textrm{ for} \quad s \ll M_j^2 \ . \end{eqnarray} The decoupling condition \begin{eqnarray} \Gamma^{(\Delta L =2)} < \sqrt{\frac{\pi^2 g_*}{90}} \frac{T^2}{M_P}, \end{eqnarray} for $T < M_1$ is rewritten as \begin{eqnarray} \sum_i \left(\sum_j \frac{ y^{\nu}_{ij} y^{\nu}_{ji}{}^{\dagger} v_{\nu}^2}{M_j}\right)^2 < 32 \pi^3 \zeta(3) \sqrt{\frac{\pi^2 g_*}{90}} \frac{v_{\nu}^4}{T M_P} . \label{L2DecouplingCondition} \end{eqnarray} For lower $v_{\nu}$, the $\Delta L =2$ washout processes are more significant. Inequality~(\ref{L2DecouplingCondition}) thus gives a lower bound on $v_{\nu}$ required to avoid too strong a washout. We now summarize all the conditions for successful thermal leptogenesis; the result is presented in Fig.~\ref{fig:AvailableRegion}. The horizontal axis is the VEV of the neutrinophilic Higgs, $v_{\nu}$, and the vertical axis is the mass of the lightest right-handed neutrino, $M_1$. In the red-brown region, the decay of the lightest right-handed neutrino into the Higgs boson $H$ (assuming $M_H= 100$ GeV) and a lepton is kinematically forbidden. The turquoise region corresponds to inequality~(\ref{L2DecouplingCondition}): there the $\Delta L=2$ washout effect is too strong. The red and green lines are contours of CP asymmetry $\varepsilon=10^{-6}$ and $10^{-7}$, respectively, for the decay of the lightest right-handed neutrino with a hierarchical right-handed neutrino mass spectrum. Thus, in the parameter region above the line of $\varepsilon = 10^{-7}$, thermal leptogenesis easily works even with hierarchical masses of the right-handed neutrinos. In the region below the line of $\varepsilon = 10^{-7}$, the resonant leptogenesis mechanism~\cite{ResonantLeptogenesis}, where the CP asymmetry is resonantly enhanced by degenerate right-handed neutrino masses, may work. Here we stress that, for $v_{\nu} \ll 100$ GeV, the required degree of mass degeneracy is considerably milder than that for the original resonant leptogenesis. \begin{figure} \centerline{\includegraphics{Lepto.eps}} \caption{ Available region for leptogenesis. The horizontal axis is the VEV of the neutrinophilic Higgs, $v_{\nu}$, and the vertical axis is the mass of the lightest right-handed neutrino, $M_1$. In the red-brown region, the decay of the lightest right-handed neutrino into the Higgs boson $\Phi_{\nu}$ and a lepton is kinematically forbidden. In the turquoise region, the $\Delta L=2$ washout effect is too strong. The red and green lines are contours of CP asymmetry $\varepsilon=10^{-6}$ and $10^{-7}$, respectively, for the decay of the lightest right-handed neutrino with a hierarchical right-handed neutrino mass spectrum. } \label{fig:AvailableRegion} \end{figure} \subsection{Constraints on a UV theory} Let us suppose that the neutrinophilic THDM is derived from the model reviewed in Sec.~\ref{subsec:HabaHirotsu} by integrating out the singlet field $S$. If $S$ is relatively light, the thermal leptogenesis discussed above could be affected, namely through the annihilation processes of $N_1$, which were justifiably ignored in Eq.~(\ref{Boltzman:N}).
In the UV theory, however, the annihilation could take place more efficiently via $s$-channel $S$-scalar exchange~\cite{HabaHirotsu}. For example, the annihilation $N_1 N_1 \rightarrow \Phi_{\nu}\Phi_{\nu}^*$, with the amplitude \begin{eqnarray} \overline{ |{\cal M}|}^2 = \left| \frac{y^{N}_1 \lambda_{\Phi_{\nu}} v_s}{ s - M_{H_S}^2 -i M_{H_S} \Gamma_{H_S}}\right|^2 \frac{s-4 M_1^2}{4} , \end{eqnarray} would not be in equilibrium if \begin{eqnarray} \lambda_{\Phi_{\nu}} \lesssim 40 \frac{M_{H_S}^2}{M_1^{3/2} M_P^{1/2}} \simeq 0.1 \left(\frac{10^5 \, {\rm GeV}}{M_1}\right)^{3/2}\left(\frac{M_{H_S}}{10^7 \, {\rm GeV}}\right)^2 , \end{eqnarray} is satisfied for $M_{H_S} \gg T > M_1$. Here, $\Gamma_{H_S}$ denotes the decay width of $H_S$. Constraints on other parameters such as $\lambda_\Phi$ and $\kappa$ can be obtained similarly. \section{Supersymmetric case: reconciling thermal leptogenesis, the gravitino problem, and neutralino dark matter} As we have shown in Sec.~\ref{subsec:Super}, it is possible to construct a supersymmetric model with $\Phi_{\nu}$. A discrete symmetry, called ``R-parity'', is imposed in many supersymmetric models in order to prohibit rapid proton decay. Another advantage of conserved R-parity is that it guarantees the absolute stability of the LSP, which becomes a dark matter candidate. In a large part of the parameter space of supergravity models with gravity-mediated SUSY breaking, the gravitino has a mass of ${\cal O}(100)$ GeV and decays into the LSP (presumably the lightest neutralino) at late times, after BBN. The decay products may then alter the abundances of the light elements produced during BBN. This is the so-called ``gravitino problem''~\cite{GravitinoProblem}. To avoid this problem, the upper bound on the reheating temperature after inflation, \begin{equation} T_R < 10^6 - 10^7 \, {\rm GeV}, \label{ConstraintsOnTR} \end{equation} has been derived, depending on the gravitino mass~\cite{GravitinoProblem2}. By comparing Eq.~(\ref{ConstraintsOnTR}) with the CP asymmetry in supersymmetric models with hierarchical right-handed neutrino masses, which is about four times larger than that in the non-supersymmetric model~\cite{SUSYFandG}, \begin{eqnarray} \varepsilon \simeq -\frac{3}{2\pi}\frac{1}{(y^{\nu} y^{\nu}{}^{\dagger})_{11}}\sum_{i=2,3} \textrm{Im}(y^{\nu}y^{\nu}{}^{\dagger})^2_{1i} \frac{M_1}{M_i}, \label{SUSYepsilon} \end{eqnarray} it has long been considered that thermal leptogenesis through the decay of heavy right-handed neutrinos can hardly work because of the gravitino problem. As we have shown in the previous section, a sufficient CP asymmetry, $\varepsilon = {\cal O}(10^{-6})$, can be realized for $v_{\nu} = {\cal O}(1)$ GeV with hierarchical right-handed neutrino masses and $M_1$ of ${\cal O}(10^5 - 10^6)$ GeV. A reheating temperature after inflation of $T_R = {\cal O}(10^6)$ GeV is then high enough to produce the right-handed neutrinos by thermal scatterings. Thus, remarkably, the SUSY neutrinophilic model with $v_{\nu} = {\cal O}(1)$ GeV can realize thermal leptogenesis in gravity-mediated SUSY breaking with an unstable gravitino. In this setup, the lightest neutralino can be the LSP and a dark matter candidate within the standard thermal freeze-out scenario. \begin{figure} \centerline{\includegraphics{LeptoSUSY.eps}} \caption{ The same as Fig.~\ref{fig:AvailableRegion} but with Eq.~(\ref{SUSYepsilon}).
The additional horizontal black dashed line represents a reference value of the upper bound on the reheating temperature after inflation, $T_R = 10^6$ GeV, from gravitino overproduction. } \label{fig:SUSYAvailableRegion} \end{figure} \section{Conclusion} We have examined the possibility of thermal leptogenesis in neutrinophilic Higgs doublet models, in which the tiny VEV of an extra Higgs doublet generates the neutrino Dirac mass term. Thanks to the tiny VEV of the neutrinophilic Higgs field, the neutrino Yukawa couplings are not necessarily small; instead, they tend to be large, and the CP asymmetry in the decay of the lightest right-handed neutrino is significantly enhanced. Although the $\Delta L = 2$ washout effect could also be enhanced simultaneously, we have found an available parameter region where this washout is avoided while the CP asymmetry is kept large enough. In addition, in a supersymmetric neutrinophilic Higgs doublet model, we have pointed out that thermal leptogenesis works well in gravity-mediated SUSY breaking without confronting the gravitino problem. In this case, the lightest neutralino can be the LSP and a dark matter candidate within the standard thermal freeze-out scenario. \section*{Acknowledgements} We would like to thank M.~Hirotsu for collaboration in the early stage of this work. We are grateful to S.~Matsumoto, S.~Kanemura and K.~Tsumura for useful and helpful discussions. This work is partially supported by Scientific Grants from the Ministry of Education and Science, Nos. 20540272, 22011005, 20039006, 20025004 (N.H.), and by the scientific research grants from Hokkai-Gakuen (O.S.).
\section{Introduction} High resolution observations acquired by the new generation of ground-based and space-borne telescopes are allowing us to study more and more details of the photospheric fine structure that characterizes sunspots \citep{Sol03, Sch09}. In particular, the investigation of the bright structures observed inside the dark umbrae, like umbral dots or light bridges (LBs), has become essential to understand the physical mechanisms that are responsible for the heat transport from the convection zone into the photosphere and for the diffusion of the magnetic field. LBs are structures that rapidly intrude from the leading edge of penumbral filaments into the umbra. They are often linked to the fragmentation of sunspots and to the final phases of the sunspot evolution \citep[e.g.,][and references therein]{Fal16,Fel16}, although not all spots exhibit this phenomenon. \citet{Sob97} classified LBs into strong or faint, depending on whether they separate the umbra into distinct umbral cores. Moreover, he distinguished between granular and filamentary LBs, depending on their internal structure. In particular, filamentary LBs have a central dark lane, i.e., a main dark axis along the LB, and branches of dark and bright hairs connected to the central lane \citep[e.g.,][]{Lit04}. Another classification was proposed by \citet{Tho04}, who distinguished between segmented and unsegmented LBs. The former, which are by far the most common, are characterized by bright granules separated by dark lanes perpendicular to the LB axis \citep{Ber03}, although the granular pattern visible in these structures differs in size, lifetime, and brightness from the quiet-Sun granulation \citep[see, e.g.,][]{Lag14,Fal17}. On the contrary, unsegmented LBs resemble the elongated filaments forming the penumbra of sunspots, without any evidence of granular cells. In recent observations performed at the GREGOR telescope, \citet{Sch16} identified a new class of LBs, called thick LBs, with small transverse \textsc{Y}-shaped dark lanes similar to dark-cored penumbral filaments. Doppler velocity measurements show the presence of upward plasma motions above LBs at the photospheric level, especially along the dark lane \citep{Rim97, Gio08, Rou10}. Plasma convection appears to dominate in these regions, which are characterized by a magnetic field weaker and more horizontal than in the surrounding umbra \citep{Lit91, Lek97, Jur06}. However, it is still controversial whether LBs have a magneto-convective origin or are due to the field-free convection that penetrates into the strong umbral magnetic field from below the photosphere and forms a cusplike magnetic field near the visible surface \citep{Tho04, BI11}. Remarkable long-lasting plasma ejections or surge activities are observed in the chromosphere along some LBs \citep[e.g.][]{Bha07, Shi09, Lou14, Tor15}. Magnetic reconnection occurring in the low chromosphere and the upper photosphere is thought to originate this small-scale eruptive activity above LBs \citep{Son17}. Recently, \citet{Kle13} analyzed unusual filamentary structures observed within the umbra of the preceding sunspot of the very flare-productive active region (AR) NOAA 11302. These structures, called umbral filaments (UFs), do not resemble typical LBs in morphology or in evolution, and are formed by curled filaments that reach from the penumbra well into the umbra.
Furthermore, UFs show a counter-Evershed flow along them at the photospheric level, as well as energy dissipation phenomena in the higher atmospheric layers. In this Letter, we describe the observation of an elongated bright structure inside the umbra of the large preceding sunspot of AR NOAA 12529. Curiously, its appearance transformed the original spot into a huge, heart-shaped sunspot, so that it became celebrated in popular media. We investigated this feature, which at first glance resembles a filamentary LB. We find that it is characterized by mixed directions of the plasma motions and a strong horizontal magnetic field, with a portion of the structure having a polarity opposite to that of the hosting sunspot. These and other signatures allow us to interpret this structure as the manifestation of a flux rope located in higher layers of the solar atmosphere above the umbra. We describe the data set and its analysis in Sect.~2. The observations are presented in Sect.~3, and our conclusions are reported in Sect.~4. \begin{figure}[t] \centering \includegraphics[trim=10 125 120 480, clip, scale=0.6]{Fig1.ps} \caption{AR NOAA 12529 as seen in the continuum filtergram, with overlain contours of the simultaneous map of the vertical component of the magnetic field (red/blue = -800/+800~G), taken by HMI in the \ion{Fe}{1} 6173~\AA{} line, at the time of the central meridian passage of the AR. The box frames the portion of the field of view shown in Figure~\ref{fig2}. \label{fig1}} \end{figure} \section{Data and analysis} We have analyzed Space-weather Active Region Patches (SHARPs) data \citep{Hoe14} acquired by the Helioseismic and Magnetic Imager \citep[HMI;][]{Sch12} onboard the \textit{Solar Dynamics Observatory} \citep[SDO;][]{Pes12} satellite from 2016 April 8 to April 19 to study the main sunspot of AR NOAA 12529. We have used continuum filtergrams, Dopplergrams, and vector magnetograms acquired along the \ion{Fe}{1} line at 6173~\AA{}, with a pixel size of 0\farcs51 and a time cadence of 12~min. The vector field has been computed using the Very Fast Inversion of the Stokes Vector code \citep{Bor11}, which performs a Milne-Eddington inversion of the observed Stokes profiles, optimized for the HMI pipeline; the remaining $180^{\circ}$ azimuth ambiguity is resolved with the minimum energy code \citep{Met94}. More details about the SHARP pipeline are reported in \citet{Bob14}. The SHARP data have been corrected for the $180^{\circ}$ rotation angle of the HMI images. Finally, the vector magnetic field components have been transformed into the local solar frame, according to \citet{Gar90}. HMI Dopplergrams have been corrected for the effect of solar rotation by subtracting the mean velocity averaged over 10 days, deduced from the HMI data series relevant to Carrington rotation 2176. The Doppler velocity has been calibrated assuming the umbra (i.e., pixels with normalized continuum intensity $I_c < 0.4$) to be on average at rest \citep[e.g.][]{Rim94}. We have also used images acquired in the extreme ultraviolet (EUV) by the Atmospheric Imaging Assembly \citep[AIA;][]{Lem12} at 171 \AA{} and 304 \AA{}, with a pixel size of about 0\farcs6 and a time cadence of 12~s. Furthermore, we have analyzed full-disc images of the chromosphere acquired in the H$\alpha$ line at 6562.8~\AA{} by the INAF - Catania Astrophysical Observatory. These images have been coaligned with the HMI and AIA data.
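The velocity reduction described above can be summarized by the following minimal sketch (Python with \texttt{numpy}; the function, the array names, and the shapes are hypothetical, assuming the HMI maps have already been loaded):
\begin{verbatim}
import numpy as np

def calibrate_doppler(dopplergrams, continuum, v_rot_mean, ic_umbra=0.4):
    """Sketch of the Dopplergram reduction described above.

    dopplergrams : (nt, ny, nx) LOS velocities [m/s]   (hypothetical input)
    continuum    : (nt, ny, nx) continuum maps, normalized to the quiet Sun
    v_rot_mean   : (ny, nx) mean velocity over ~10 days (solar-rotation map)
    """
    # remove the solar-rotation signal
    v = dopplergrams - v_rot_mean[None, :, :]
    # calibrate the zero point assuming the umbra (I_c < 0.4) is at rest
    out = np.empty_like(v)
    for t in range(v.shape[0]):
        umbra = continuum[t] < ic_umbra
        out[t] = v[t] - v[t][umbra].mean()
    return out
\end{verbatim}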
\section{Results} During its entire passage across the solar disc, AR NOAA 12529 was composed of a large preceding sunspot of negative polarity and some following pores of positive polarity, as shown in Figure~\ref{fig1}. The main sunspot was already formed when it appeared at the East limb on 2016 April 8, at heliographic latitude $\sim 10$~N. It occupied an area of about $2000 \,\mathrm{Mm}^2$. On April 11, when the sunspot was located at $\mu = 0.8$, several bright elongated structures similar to small LBs appeared to the north-west of the spot, intruding into the umbra (see the top-left panel of Figure~\ref{fig2}). They have a curved shape and show no granular pattern at the spatial resolution of the HMI continuum images. In the map of the vertical component of the magnetic field taken at the same time (first column, second row of Figure~\ref{fig2}), we note that these structures are characterized by a lower field strength in comparison with the other parts of the sunspot umbra. The horizontal component of the magnetic field in the hook-shaped structure observed to the north-east of the sunspot, located at $[-500\arcsec,-490\arcsec] \times [250\arcsec, 260\arcsec]$, is weaker than in the surrounding penumbral filaments, whereas in the intruding structures located to the west it is comparable to that of the penumbra (first column, third row of Figure~\ref{fig2}). The longest of these structures, seen at $[-480\arcsec,-470\arcsec] \times [255\arcsec, 265\arcsec]$ on April 11, reached its maximum length of about 30\arcsec{} on April 13 (top-right panel of Figure~\ref{fig2}), when it was located at $[-80\arcsec,-65\arcsec] \times [65\arcsec, 80\arcsec]$, the sunspot being located at $\mu = 0.96$. At that time, a portion along the axis of the structure has a magnetic field of positive polarity, i.e., opposite to that of the magnetic field of the surroundings (second column, second row of Figure~\ref{fig2}). Moreover, the horizontal field in the structure is $\approx 2000 \,\mathrm{G}$, about 50\% stronger than in the surrounding penumbra (second column, third row of Figure~\ref{fig2}). Many of these bright structures, including the longest one, remained visible until the AR reached the West limb, by which time the sunspot appeared to be already fragmenting, in a decay phase. \begin{figure*} \centering \includegraphics[trim=10 155 155 290, clip, scale=0.45]{Fig2a.ps}% \includegraphics[trim=40 155 40 290, clip, scale=0.45]{Fig2e.ps} \includegraphics[trim=10 155 155 330, clip, scale=0.45]{Fig2b.ps}% \includegraphics[trim=40 155 40 330, clip, scale=0.45]{Fig2f.ps} \includegraphics[trim=10 155 155 330, clip, scale=0.45]{Fig2c.ps}% \includegraphics[trim=40 155 40 330, clip, scale=0.45]{Fig2g.ps} \includegraphics[trim=10 95 155 330, clip, scale=0.45]{Fig2d.ps}% \includegraphics[trim=40 95 40 330, clip, scale=0.45]{Fig2h.ps} \caption{Evolution of the main sunspot of AR NOAA 12529 between 2016 April 11 (left) and April 13 (right) as seen in the continuum filtergrams (first row), in the simultaneous maps of the vertical (second row) and horizontal (third row) magnetic field components, and in the simultaneous Dopplergrams (fourth row), taken by HMI in the \ion{Fe}{1} 6173~\AA{} line. Contours represent the umbra-penumbra boundary at $I_c = 0.4$. \label{fig2}} \end{figure*} Interestingly, we find a noticeable behaviour of the plasma motions along the line of sight in these bright structures.
On April 11, although the AR was near the East limb and the uncertainties due to projection effects are not negligible, the velocity observed along the bright structures in the umbra (bottom-left panel of Figure~\ref{fig2}) is slightly larger than in the surroundings. In particular, the hook-shaped bright structure located on the north-eastern side of the umbra shows, in its northern leg, a velocity direction opposite to the Evershed flow. The bright structure on the western side of the umbra also clearly exhibits a plasma velocity higher than the surrounding Evershed flow, in this case along the normal Evershed flow direction. However, on April 13, when the longest of these structures reached its maximum length and the AR was approaching the central meridian, we note plasma motions with both directions along it (see the arrow in the bottom-right panel of Figure~\ref{fig2}): upflows and downflows alternate from the outer to the inner part of the bright structure, with velocities of the order of $1\,\mathrm{km \,s}^{-1}$. Each portion of the bright structure with the same direction of the plasma motion is about 5\arcsec{} long. Note that these alternate, coherent plasma flows may suggest a helical motion along the structure. This behaviour is mostly visible when the sunspot passed near the central meridian, i.e., on April 13 and 14, but we cannot exclude that these plasma motions persisted during the whole life of the bright structure, i.e., for about 5 days, from April 11 to April 16, even though they are not visible then owing to foreshortening effects. \begin{figure} \centering \includegraphics[trim=10 125 120 480, clip, scale=0.6]{Fig3a.ps}\\ \includegraphics[trim=10 125 120 490, clip, scale=0.6]{Fig3b_neg.ps}\\ \includegraphics[trim=10 95 120 490, clip, scale=0.6]{Fig3c_neg.ps} \caption{AR NOAA 12529 as seen in the H$\alpha$ line by INAF - Catania Astrophysical Observatory (top panel), at 304 \AA{} (middle panel) and at 171 \AA{} (bottom panel) by AIA. Note that AIA images are shown in a reversed color table. The contour in the middle panel corresponds to the umbra-penumbra boundary at $I_c = 0.4$ in the simultaneous continuum image. Red/blue contours in the bottom panel correspond to -400/+400~G in the simultaneous map of the vertical component of the magnetic field. The arrow in the bottom panel indicates the EUV filament channels corresponding to the filaments observed in the H$\alpha$ line and at 304~\AA. \label{fig3}} \end{figure} In the image of the chromosphere acquired on April 13 at 10:26 UT in the H$\alpha$ line at the INAF -- Catania Astrophysical Observatory, we note the presence of chromospheric filaments located on the north-western side of the sunspot (top panel of Figure~\ref{fig3}). These filaments are also visible in the AIA image at 304~\AA{} (middle panel of Figure~\ref{fig3}), connecting the sunspot to the opposite-polarity flux concentrations observed to the west of the preceding spot. It is worth noting that these filaments have the same curvature as the longest photospheric bright structure observed in the umbra of the sunspot. In particular, the portion of the filament corresponding to the bright structure inside the umbra at the photospheric level also appears bright at 304~\AA{}. Moreover, at 171~\AA{} the filaments correspond to EUV filament channels (see the bottom panel of Figure~\ref{fig3}), surrounded by some bright loops with the same curvature, indicated with an arrow in the same figure.
\section{Discussion and conclusions} In April 2016 a big sunspot passed over the visible hemisphere of the Sun, showing bright elongated features apparently similar to unsegmented LBs. In this Letter we analyzed these structures, in particular the longest one, whose characteristics suggest a completely different origin from that of LBs. We found that no granular pattern was visible in this intruding filamentary feature at the HMI resolution. Moreover, the magnetic field strength in the structure is not weaker than in the surrounding umbral region. At the same time, we showed the presence of a strong horizontal component of the magnetic field in the structure and, in a portion of it, a vertical component opposite to that of the surrounding penumbral filaments. Peculiar line-of-sight plasma motions along this structure are also observed at the photospheric level. In addition, images of the chromosphere at 6562.8~\AA{} and at 304~\AA{} reveal the existence of some small filaments nearby, characterized by the same curvature as the photospheric structure and corresponding to EUV channels observed at 171~\AA{}. As regards the magnetic configuration, one has to be cautious with the interpretation of the results deduced from the Milne-Eddington solution implemented in the VFISV code. In fact, it may not account for the complexity of the Stokes profiles present in regions where peculiar processes are at work. Indeed, this affects the values of $\chi^2$ of the fits, which are $2-3$ times higher in the bright structure than in the spot. Moreover, the smearing due to the point-spread function of the telescope may leak part of the magnetic signal of the surrounding umbra and penumbral filaments into the bright structure. However, the average estimated errors for the vertical and horizontal components of the field in the structure are about $\pm 35$~G and $\pm 70$~G, respectively. Thus, the strong horizontal field of about 2000~G appears to be a robust result. The presence of such a significant horizontal component of the magnetic field allows us to exclude the field-free configuration typically observed in LBs. Conversely, it supports the possibility that the magnetic field of the flux rope forming the filaments around the sunspot may have a counterpart in the middle of the sunspot umbra, where it appears to end in the cospatial chromospheric images. This would also provide an explanation for the detection of opposite polarity in the structure with respect to the umbra, as well as an alternative interpretation for the opposite polarity of an apparently filamentary LB with respect to its hosting umbra observed by \citet{Bha07}. Note that this phenomenon is different from the magnetic reversals found by \citet{Lag14} and \citet{Fel16} in LBs, which are due to the bending of the field lines along the atmospheric stratification of the structure. Moreover, this scenario is confirmed by the portions of the bright feature characterized by both upward and downward plasma motions along it, which are compatible with plasma flows along the helical field lines of a flux rope. Actually, the length of each portion of the filamentary structure with coherent plasma motion is about 5\arcsec{}, which could correspond to the pitch of a helically twisted flux tube. Therefore, we surmise that this bright filamentary structure is not a filamentary LB, but is due to the accumulation of plasma coming from higher solar atmospheric levels into the photosphere.
The presence of stably inclined fields belonging to the flux rope, touching part of the sunspot umbra, would set in the radiatively driven penumbral magneto-convective mode \citep{Jur14, Jur17}. Indeed, high-resolution observations have shown that plasma falling from chromospheric layers can lead to the formation of a penumbra in sunspots \citep{Shi12, Rom13, Rom14}. Furthermore, penumbral-like structures, i.e., the so-called orphan penumbrae, have been reported as the photospheric counterpart of a chromospheric filament \citep{Kuc12a, Kuc12b, Bue16}. Observations of similar bright filaments inside the umbra, called UFs, have been recently reported by \citet{Kle13}. They found that UFs were characterized by a horizontal flow opposite to the Evershed flow, and they also found in coronal images the presence of bright coronal loops that seemed to end in the UFs. To interpret such phenomena, \citet{Kle13} conjectured two different scenarios. In the first, the bright filament ending in the umbra is formed by a sheet spanning many atmospheric layers and producing a siphon inflow by the pressure difference between the umbra and the network. In this case, the emission observed in the filament could be due to the energy dissipation at the boundary layers between the sheet and the sunspot magnetic field. In the second scenario, they interpreted the emission as the effect of a thick magnetic flux tube with a high enough density that the observed region is formed higher in the atmosphere. However, in both cases \citet{Kle13} did not take into account the presence of a flux rope, whose signatures are clearly visible in our observation, although the second scenario they proposed shares some similarities with ours. In fact, we do not detect a counter-Evershed flow, with the exception of one leg of the hook-shaped bright structure observed on April 11 on the north-eastern side of the umbra. We find, instead, a peculiar pattern of the plasma flows which seems to describe a helical motion along the flux rope of a filament ending in the sunspot umbra. Note that the target of \citet{Kle13} suffered from severe projection effects, due to the position of AR NOAA 11302 on the solar disc when the UFs were observed ($\mu \approx 0.6$). It is also worth mentioning that AR NOAA 12529 was not a flare-rich AR: the strongest event was a single M-class (M6.7) flare on April 18, while \citet{Kle13} suggested that UFs could be related to high flare productivity, as they would influence the structure of the overlying coronal loops, leading to magnetic rearrangement. To support our scenario, further analyses are necessary. In this respect, in a forthcoming paper we shall analyze the high-resolution observations performed by the spectropolarimeter aboard the \textit{Hinode} satellite in the photosphere and by the \textit{IRIS} spacecraft during the central meridian passage of AR NOAA 12529. A preliminary analysis of the \textit{Hinode} data shows that a strong linear polarization signal is detected in the structure within the umbra, which also exhibits a portion with circular polarization of sign opposite to that of the surrounding umbra, much larger than that observed with HMI. Note that these polarization signals cannot be due to the contamination induced by the point spread function of the telescope with the signals from the surrounding magnetic elements. This strengthens the interpretation that we are proposing here. The question of the conditions under which such a phenomenon can be observed remains open.
Why do we not often observe this kind of bright structure, similar to LBs but without any granular pattern? Can these structures be caused by the particular proximity of the footpoints of chromospheric filaments to a sunspot? Or can they be due to a peculiar configuration of the flux rope forming these filaments? An answer will likely be obtained by benefiting from the high spatial resolution and continuous temporal coverage provided by the next generation of solar observatories, such as the Solar Orbiter space mission \citep{Muller:13} and the large-aperture ground-based telescopes DKIST \citep{Keil:10} and EST \citep{Collados:10}. \acknowledgments The authors wish to thank an anonymous referee for his/her insightful comments. This work was supported by the Istituto Nazionale di Astrofisica (PRIN-INAF 2014), by the Italian MIUR-PRIN grant 2012P2HRCR on The active Sun and its effects on space and Earth climate, by the Space Weather Italian COmmunity (SWICO) Research Program, and by the Universit\`a degli Studi di Catania. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no.~739500 (PRE-EST project). The SDO data used in this paper are courtesy of the NASA/SDO science team. \facility{SDO (HMI, AIA)}, \facility{OACT:0.15m}
\section{Introduction} \label{section:Intro} An action formulation for radiation reaction (RR) was presented in \cite{BirnholtzHadarKol2013a}. While \cite{BirnholtzHadarKol2013a} focused on the post-Newtonian approximation to the two body problem in Einstein's gravity, it stressed the method's generality, and presented detailed calculations also for radiation reaction of scalar and electromagnetic (EM) fields. Following demand, this paper highlights the method by focusing on the well-known electromagnetic Abraham--Lorentz--Dirac (ALD) force \cite{Abraham,Lorentz1,Lorentz2,Dirac,ALD-Jackson,ALD-Rohrlich}. The method offers certain advantages over the more standard approach. First, the action formulation allows the application of efficient tools, including the elimination of fields through Feynman diagrams, and enables the ready formulation and systematic computation of analogous quantities in non-linear theories such as gravity. Secondly, a single effective action will be seen to encode several ALD-like forces, and its form stresses a connection between the source and the target not seen in the usual force expression. We start in this section by surveying the background and reviewing the general method and its key ingredients, before proceeding to concrete demonstrations in the following sections. In section \ref{section:mass--rope} the method is illustrated in a simple set-up -- a mass attached to an infinite rope. Section \ref{section:ALD} describes the application of the method to the leading non-relativistic ALD force (also known as the Abraham-Lorentz force), i.e. the force acting on an accelerating electric charge due to its own EM field. The derivation has a strong analogy with the mass--rope system, especially after a reduction to a radial system through the use of spherical waves; many technicalities of the reduction are deferred to Appendix \ref{sec:gauge}, and may be skipped on a first reading. \vspace{.5cm} \noindent {\bf Background}. Feynman diagrams and effective field theories were first introduced in the context of quantum field theories, but they turn out to be applicable already in the context of classical (non-quantum) field theories. One such theory is the Effective Field Theory (EFT) approach to general relativity (GR) introduced in \cite{GoldbergerRothstein1}, see also contributions in \cite{CLEFT-caged,NRG}. Similarly, an action method for dissipative systems was first developed in the early 1960s in the context of quantum field theory, and is known as the Closed Time Path formalism \cite{CTP}. The essential idea is to formally introduce a doubling of the fields, which can account for dissipation. Here too relevant non-quantum problems exist, such as radiation reaction, and hence it was natural to seek a theory for the non-quantum limit. This was discussed in the context of the EFT approach to GR in \cite{GalleyEFT} and was promoted to general classical non-conservative systems in \cite{GalleyNonConservative}. We also recommend comparing the mass--rope system presented here with the case of coupling a mass to infinitely many oscillators, presented in \cite{GalleyNonConservative}, and with the action formulations of a damped harmonic oscillator in \cite{Kosyakov:2007qc}. Essential elements of the theory were reformulated in \cite{BirnholtzHadarKol2013a}. This formulation was extended from 4d to general dimensions in \cite{BirnholtzHadar2013b}.
\subsection*{Brief review of method} Here we present the tools generally, before demonstrating their use in the following sections. {\bf Field doubling}. The theory in \cite{BirnholtzHadarKol2013a} was formulated for classical (non-quantum) field theories with dissipative effects, such as radiation reaction. The source of the departure from the standard theory was identified as a \emph{non-symmetric (or directed) propagator}, such as the retarded propagator. For such theories the fields are doubled together with the sources, so that the propagator always connects a field and its double and thereby assigns a direction to it. Here the term field should be understood to refer also to dynamical variables in mechanics (viewed as a field theory in 0+1 d). A generic theory may be described by a field $\phi=\phi(x)$ together with its source $\rho=\rho(x)$ and the equation of motion for $\phi$, $0=EOM_\phi(x)$, where the subscript notation readily generalizes to a multi-field set-up. The double field action is given by \cite{BirnholtzHadarKol2013a} \begin{equation} {\hat S} \left[ \phi,{\hat \phi};\, \rho,{\hat \rho} \right] := \int d^d x\, \left[ EOM_\phi(x)\,{\hat \phi}(x) +\int d^d y\, \frac{\delta EOM_\phi (y) }{\delta \rho(x)} {\hat \rho}(x)\, \phi(y) \right], \label{doubled-action} \end{equation} where ${\hat \phi}$ is a doubled auxiliary field, ${\hat \rho}$ is the corresponding source, and $d^dx \equiv d^{d-1} {\bf x}\, dt$ is a space-time volume element. The action is best described in terms of functional variations, but in our examples it will amount to the simple concrete expressions shown in the following sections. The expression is constructed such that $\delta {\hat S}/\delta {\hat \phi} = EOM_\phi$, namely the equation of motion with respect to ${\hat \phi}$ reproduces the original $EOM_\phi$. The first part introduces terms of the form $\phi(x)\, {\hat \phi}(x)$, $\rho(x)\, {\hat \phi}(x)$, while the second part introduces terms of the form ${\hat \rho}(x)\, \phi(x)$. This formulation is applicable to arbitrary equations of motion, not necessarily associated with an action. When they \emph{can} be derived from an action $S=S[\phi,\rho]$, such as in the case of radiation reaction, namely if $EOM_\phi=\frac{\delta S}{\delta \phi}$ for some $S$, the double field action is \begin{equation} {\hat S} \left[ \phi,{\hat \phi};\, \rho,{\hat \rho} \right] := \int d^d x\, \left[ \frac{\delta S[\phi,\rho]}{\delta \phi(x)}\,{\hat \phi}(x) +\frac{\delta S[\phi,\rho]}{\delta \rho(x)}\, {\hat \rho}(x) \right] ~. \label{doubled-action2} \end{equation} Here ${\hat \phi},\, {\hat \rho}$ can be assigned the following meaning: ${\hat \phi}$ is the linearized field perturbation, sourced by reverse (advanced) propagation from a generic source ${\hat \rho}$. They correspond to the Keldysh basis of the Closed Time Path formulation, and ${\hat S}$ is a linearization of the CTP action with respect to ${\hat \phi}$. We remark that this procedure determines the action explicitly, and in fact gives a recipe for the function $K$ mentioned in \cite{GalleyNonConservative}. It also generalizes the specific field doubling method (using complex conjugation) of \cite{Kosyakov:2007qc}. {\bf Zone separation}. The EFT approach is characterized by a hierarchy of scales leading to multiple zones (and a matched asymptotic expansion).
In particular, when the velocities of the sources are small with respect to the velocity of the outgoing waves, such as in the post-Newtonian approximation to GR, one defines a system zone and a radiation zone. {\bf Elimination}. Given ${\hat S}$, one may proceed to eliminate fields through Feynman diagrams. \emph{Radiation sources $Q[x]$} are defined diagrammatically by \begin{eqnarray} -Q[x] := \parbox{10mm} {\includegraphics[scale=0.3]{DiagsGeneralRulesVertex.pdf}} \label{def:Q} \end{eqnarray} where the double heavy line denotes the whole system zone. Their definition involves the elimination of the system zone field and matching, see \cite{BirnholtzHadarKol2013a} eq. (2.49), but this shall mostly not be required in the present paper. The doubled radiation sources are given by \begin{equation} {\hat Q}[x, {\hat x}] = \int dt\, \frac{\delta Q[x]}{\delta x^i(t)}\, {\hat x}^i(t) \, \, , \label{def:Qhat} \end{equation} which is the linearized perturbation of the source, and as such does not require the computation of new diagrams beyond (\ref{def:Q}). Here too this general formula will be seen to reduce to simple expressions in the examples. Next the \emph{radiation reaction effective action} ${\hat S}_{RR}$ may be computed through elimination of the radiation zone, see \cite{BirnholtzHadarKol2013a} eq. (2.53). This action describes the effects of dissipation and radiation reaction. {\bf Gauge invariant spherical waves in the radiation zone}. When the spatial dimension $D>1$ (as in the ALD problem) each zone respects an enhanced symmetry, and corresponding field variables and gauge should be chosen: \begin{itemize} \item In the system zone time-independence (stationarity) is an approximate symmetry, due to the slow velocity assumption, and hence non-relativistic fields (and a compatible gauge) should be used \cite{NRG}. \item In the radiation zone the system appears point-like, and hence it was recognized in \cite{BirnholtzHadarKol2013a} that this zone is spherically symmetric and accordingly gauge invariant spherical waves (see \cite{AsninKol}) should be chosen as the field variables. \end{itemize} {\bf Matching within action}. Subsection 2.3 of \cite{BirnholtzHadarKol2013a} demonstrated how the matching equations, which are a well-known yet sometimes sticky element of the effective field theory approach, can be promoted to the level of the action. This is achieved by introducing an action coupling between the two zones, whereby the matching equations become ordinary equations of motion with respect to novel field variables. These are termed two-way multipoles, and reside in the overlap region. This is an important ingredient of our method, but it is not used directly in this paper. \section{Mass--rope system} \label{section:mass--rope} As a first example we consider the system shown in Fig.~\ref{fig:mass-rope}, composed of a rope stretched along the $x$ axis with linear mass density $\lambda$ and tension $T$ (hence the velocity of waves along it is $c := \sqrt{T/\lambda}$). Its displacement along the $y$ axis is given by $\phi(x)$. The rope is attached at $x=0$ to a point mass $m$ which is free to move along the $y$ axis and is connected to the origin through a spring with spring constant $k$. We assume there are no incoming waves from infinity.
Our goal is to find an effective action describing the evolution of the point mass in time \emph{without direct reference} to the field -- that is, after the field's elimination. The eliminated field carries waves, and therefore energy, away from the mass; we show directly how this elimination gives a generalized (non-conservative) effective action describing the dynamics of the mass, which is that of a damped harmonic oscillator.
\begin{figure}[t!] \centering \noindent \includegraphics[scale=0.5]{mass_spring.pdf} \caption[]{The mass--rope system.} \label{fig:mass-rope} \end{figure}
The action describing the system (mass+rope) is given by
\begin{equation}
S \, = \, \frac{1}{2} \int\!\! dt \left[ m \dot{y}^2 - k y^2 \right] + \frac{T}{2}\!\! \int\!\! dt dx \, \left( \partial \phi \right)^2 - \int\!\! dt \,Q \left[ \phi(0) - y \right] \, \, , \label{mass rope action}
\end{equation}
where $\left( \partial \phi \right)^2 = \frac{1}{c^2}\dot{\phi}^2-(\partial_x \phi)^2$ and $\dot{y} := \partial_{t} y$. The action is composed of the mass's action, a dynamical rope term, and a coupling term. This coupling enforces the boundary condition $y \equiv \phi(x=0)$ at the level of the action and introduces the Lagrange multiplier $Q(t)$, where the choice of notation will be explained later. We wish to obtain a long-distance effective action for the rope, and for that purpose we analyze the equations of motion. Varying the action with respect to $\phi$ and $Q$ we find
\begin{eqnarray}
\square \phi &=& -\frac{Q}{T}\, \delta(x) \label{EOM1} \, \, , \\
y &=& \phi(0) \label{EOM2} \, \, , \label{eom full theory}
\end{eqnarray}
where $\Box := \frac{1}{c^2} \partial^2_t \, - \partial^2_x$. Analysis of (\ref{EOM1}) near $x=0$ gives $\left[ \partial_x \phi \right]=Q/T$, where $\left[ \partial_x \phi \right]:=(\partial_x \phi)|_{0^+}-(\partial_x \phi)|_{0^-}$ denotes the jump at the origin. On the other hand, the equations of motion and the outgoing wave condition imply
\begin{equation}
\phi(x,t) = \left\{ \begin{array}{c} y(t - x/c) \qquad x>0 \\ y(t + x/c) \qquad x<0 \end{array} \right. \label{full theory radiation}
\end{equation}
and hence $\left[ \partial_x \phi \right] = -2 \dot{y}/c$. Altogether we obtain
\begin{equation}
\frac{Q}{T} = \left[ \partial_x \phi \right] = -\frac{2}{c} \, \dot{y} \, \, . \label{lambda1}
\end{equation}
At this point we can substitute (\ref{lambda1}) in the equation of motion for $y$ and obtain a damped harmonic oscillator. Yet, here we wish to demonstrate how such a dissipative system may be described by a (double field) action. For that purpose we proceed and find that the following action implies the same equations of motion\footnote{In fact there is a slight difference: the effective action (\ref{mass rope far region effective action}) implies $\dot{\phi}(0)=\dot{y}$, which is not exactly the same as (\ref{EOM2}).}, including (\ref{lambda1}),
\begin{equation}
S \, = \, \frac{1}{2} \int\!\! dt \left[ m \dot{y}^2 - k y^2 \right] + \frac{T}{2}\!\! \int\!\! dt dx \, \left( \partial \phi \right)^2 + 2\, Z \!\! \int\!\! dt \, \dot{y} \, \phi(0) \, \, , \label{mass rope far region effective action}
\end{equation}
where $Z:= \sqrt{T \lambda}$ is the rope's impedance. We interpret $S$ as a long-distance effective action for $\phi$ (as it incorporates the solution (\ref{full theory radiation}) including the asymptotic boundary conditions). Comparing (\ref{mass rope far region effective action}) with a standard origin source term $S_{\mathrm{int}} = -\int \! \phi(0,t) \, Q(t)\, dt$
justifies the notation $Q$. In fact, $Q$ is the force exerted on the mass by the rope. We double the field $\phi$ as well as its source $y$ in (\ref{mass rope far region effective action}) as described in (\ref{doubled-action2}) and find the double field action
\begin{equation}
\hat{S} \, = \, T \int dt\, dx\, \partial \phi\, \partial \hat{\phi} + 2\, Z \!\! \int\!\! dt \left[ \dot{y} \, \hat{\phi}(0) + \dot{\hat{y}}\, \phi(0) \right] \, . \label{mass rope action frequency domain}
\end{equation}
Transforming to the frequency domain with the convention $\phi(t) = \!\int\! \frac{d\omega}{2\pi}\, \phi(\omega)\, e^{-i \omega t}$, complex conjugate fields such as $\phi^*(\omega)$ appear, and we obtain the Feynman rules. The directed propagator from $\hat{\phi}^*$ to $\phi$ is
\begin{align}
\label{Feynman Rule Propagator rope}
\parbox{20mm} {\includegraphics[scale=0.5]{DiagsGeneralRulesProp_x.pdf}} =G_{\omega}(x',x) \, = \, - c\, \frac{e^ {i \frac{\omega}{c} \, |x-x'|}}{i\, \omega \, Z} ~,
\end{align}
and the sources for the fields $\hat{\phi}^*,\, \phi$ are (compare (\ref{def:Q}),(\ref{def:Qhat}))
\begin{align}
\label{Feynman Rule Vertex rope}
-Q_\omega \equiv \parbox{20mm} {\includegraphics[scale=0.3]{DiagsGeneralRulesVertex.pdf}} \!\!\!\!\!\!\!\!\!\!= - i \omega \, Z \, y_{\omega}~~~~~ ,~~\, -\hat{Q}^*_\omega \equiv \parbox{20mm} {\includegraphics[scale=0.3]{DiagsGeneralRulesVertexHat.pdf}} \!\!\!\!\!\!\!\!\!\!=i \omega \, Z \, \hat{y}^{*}_{\omega}~~ .~~~~~~
\end{align}
Similar Feynman rules hold for $\phi^*,\, \hat{\phi}$. Now that we have the Feynman rules we can proceed to compute the outgoing radiation and the radiation reaction effective action. Radiation away from the source, for $x>0$, is given by
\begin{eqnarray}
\phi_\omega(x) = \parbox{20mm}{\includegraphics[scale=0.5]{DiagRadiationScalar_x.pdf}} = c \, y_{\omega} \, e^{ i \omega x /c} ~ , \label{rad}
\end{eqnarray}
which in the time domain becomes
\begin{equation}
\phi(t,x) = \int \frac{d \omega}{2 \pi}\, y_\omega \, e^{ - i \omega (t - \frac{x}{c})} = y(t - \frac{x}{c})~ , \label{rad solution time}
\end{equation}
reproducing (\ref{full theory radiation}). The \emph{radiation reaction effective action} is found by eliminating the field $\phi$
\begin{eqnarray}
\hat{S}_{RR} &=& \parbox{20mm}{\includegraphics[scale=0.5]{ActionDiagScalarL.pdf}} ~ + ~ c.c. ~ = \, \int \frac{d \omega}{2 \pi} \, \, \hat{Q}^{*}_{\omega} \, G_{\omega}(0,0) \, Q_{\omega} \, + c.c. \, = \nonumber \\ &=& \int \frac{d \omega}{2 \pi} \, i \omega \, Z \, \hat{y}^{*}_{\omega} \, y_{\omega} ~+ ~ c.c. = \, -2\, Z \int \hat{y} \, \dot{y} \, dt ~ . \label{RR effective action}
\end{eqnarray}
Thus the full generalized effective action for the mass becomes
\begin{equation}
\hat{S}_{tot} \, = \, \hat{y} \frac{\delta}{\delta y }\left\{ \frac{1}{2} \int\!\! dt \left[ m \dot{y}^2 - k y^2\right]\right\} - 2 Z \int \hat{y} \, \partial_t y \, dt \label{final action rope}
\end{equation}
and by taking the Euler--Lagrange equation with respect to $\hat{y}$ we obtain
\begin{equation}
m \ddot{y} = -k y - 2 Z \dot{y} ~ , \label{EOM for mass on rope}
\end{equation}
which is the equation of a damped harmonic oscillator, as expected. We remark that the $1+1$ mass--rope system can be regarded as a specific case of a scalar point charge coupled to a scalar field in $d=D+1$ dimensions, and thus falls under the general considerations of \cite{BirnholtzHadar2013b}, for $d=2$.
Excluding relativistic, retardation and higher-multipole effects (irrelevant in our single spatial dimension), we recover the form given there for the radiation-reaction force (eq. 2.50), with $G q^2 = Z$, and a factor of 2 because the rope has two sides. \section{Electromagnetic charge--field system} \label{section:ALD} The physical problem of the self-force (radiation-reaction force) on an accelerating electric charge has been treated for over 100 years \cite{Abraham,Lorentz1,Lorentz2,Dirac,ALD-Jackson,ALD-Rohrlich} by various methods. We wish to show how, in its simplest form and to leading order, the Abraham--Lorentz force can easily be found by essentially the same method as in the mass--rope system. We start likewise with the standard Maxwell electromagnetic action\footnote{We use Gaussian units, the speed of light $c=1$, and the metric signature is $(+,-,-,-)$.}
\begin{equation}
\label{MaxwellEMAction}
S_{full}=\!-\frac{1}{16\pi}\!\! \int\!\! d^4 x \,F_{\mu\nu} F^{\mu\nu} - \!\!\int\!\! d^4 x\, A_{\mu} J^{\mu},
\end{equation}
where the field of the rope $\phi$ has been replaced by the EM field $A^\mu$ (with $F_{\mu\nu} \!=\! \partial_\nu A_\mu \!-\! \partial_\mu A_\nu$)\footnote{As usual $F_{\mu\nu}$ encodes the electric and magnetic fields through $E_i=F_{0i}$, $B_i=-\frac{1}{2}\epsilon_{ijk}F_{jk}$.} and the mass on a spring is replaced by a point-charge $q$ with trajectory ${\bf x}_p(t)$ and current density $J^\mu=q\frac{dx^\mu_p}{d\tau}\delta({\bf x} - {\bf x}_p)$. As a 3+1\,d problem, this appears more complicated than the mass--rope system. However, in the radiation zone the system appears point-like and the problem becomes spherically symmetric. As dissipation and reaction are related to the waves propagating to spatial infinity, we can use the spherical symmetry to reduce the problem to an effective 1+1 dimensional ($r,t$) system, which can be treated analogously to the mass--rope system. Physically, the reduction amounts to working with spherical wave variables, which are described in Appendix \ref{sec:gauge}. For the purpose of the leading non-relativistic ALD force it suffices to consider the electric dipole ($\ell=1$) sector. We denote the corresponding field variable by ${\bf A}={\bf A}(r,t)$ and the source by $\boldsymbol{\rho}=\boldsymbol{\rho}(r,t)$, both defined in Appendix \ref{sec:gauge}. In this sector the action, reduced to 1+1 dimensions, is given by (\ref{EM Action leading2}). In the time domain it becomes
\begin{equation}
S= \int \!\! dt \! \int\!\! dr \left[ - \frac{r^4}{12} {\bf A} \!\cdot\! \Box {\bf A} - {\bf A} \!\cdot\! \boldsymbol{\rho} \right], \label{EM Action leading}
\end{equation}
where now $\Box :=\partial_t^2 - \partial_r^2 - \frac{4}{r}\partial_r$. This field content is very similar to that of the rope: the coordinate $r$ replaces $x$, and the field ${\bf A}(r)$ replaces the field $\phi(x)$. As in (\ref{mass rope action}) the action has both a kinetic field term and a source-coupling term, though their form here differs. The double-field action is found using (\ref{doubled-action2}) as in the mass--rope system, and is given by
\begin{equation}
{\hat S} = \int \!\! dt \! \int\!\! dr \left[ - \frac{r^4}{6} \hat{{\bf A}} \!\cdot\! \Box {\bf A} - \left( \hat{{\bf A}} \!\cdot\! \boldsymbol{\rho} + {\bf A} \!\cdot\! \hat{\boldsymbol{\rho}} \right) \right].
\label{EM hatted Action leading}
\end{equation}
The method thus proceeds similarly: we find the Feynman propagator for the field and the expressions for the source vertices, non-hatted and hatted. To obtain the propagator for ${\bf A}$ we transform to the $\omega$ frequency domain and consider the homogeneous part of the field equation (\ref{EOM PhiS}), which in the dimensionless variable $x:= \omega r$ becomes
\begin{equation}
\left[ \partial_x^2 + \frac{4}{x}\partial_x + 1\right]\, \tilde{b}_{3/2}=0. \label{Modified Bessel equation}
\end{equation}
Its solutions are the origin-normalized Bessel functions
\begin{equation}
\tilde{b}_{3/2} := \Gamma(\tfrac{5}{2}) \, 2^{3/2}\, \frac{B_{3/2}(x)}{x^{3/2}}~ ,
\end{equation}
where $B \equiv \{J,Y,H^\pm\}$ includes the Bessel functions $J,Y$ and the Hankel functions $H^\pm=J \pm i\, Y$. The origin-normalized Bessel function $\tilde{j}_{3/2} = 1 + \mathcal{O}\left(x^2\right)$ is smooth at the origin, and $\tilde{h}^+_{3/2} = -\frac{3}{x^2}\, e^{i\,x} \left( 1 + \mathcal{O}\left(\tfrac{1}{x}\right) \right)$ is an outgoing wave. More details on these Bessel functions are given in Appendix \ref{sec:Bessel}. Thus \emph{the propagator for spherical waves is}
\begin{align}
\label{Feynman Rule Propagator}
\parbox{20mm} {\includegraphics[scale=0.5]{DiagsGeneralRulesProp.pdf}} =G(r',r) \, = \, -\frac{2i\omega^3}{3} \, \tilde{j}_{3/2}(\omega r_1) \, \tilde{h}^+_{3/2}(\omega r_2) \,;\\ r_1:=\text{min}\{r',r\},\,\,\,r_2:=\text{max}\{r',r\}. \nonumber
\end{align}
In this sector, the source in the radiation zone is nothing but the electric dipole ${\bf Q} \equiv {\bf D}$. Its form is identified through matching (\ref{def:Q}) the full theory with the radiation zone
\begin{equation}
{\bf A}_{full}(r) \!=\! \int\!\! dr' \boldsymbol{\rho}(r') \!\! \left( \frac{-2i\omega^{3}}{3} \tilde{j}_{3/2}(\omega r') \tilde{h}^+_{3/2}(\omega r) \right) \!\! = {\bf Q}\, \frac{-2i\omega^{3}}{3}\, \tilde{h}^+_{3/2}(\omega r) \!=\! {\bf A}_{rad}(r)~. ~~~ \label{EM scalar wavefunction at radiation zone1}
\end{equation}
Using (\ref{def:Q},\ref{def:Qhat},\ref{Modified Bessel equation}, \ref{EM inverse sources}, \ref{EM source scalar Phi}) and integration by parts, we read from (\ref{EM scalar wavefunction at radiation zone1}) the electric dipole source vertex ${\bf Q}$ at leading non-relativistic order
\begin{eqnarray}
\label{Feynman Rule Vertex}
\parbox{10mm} {\includegraphics[scale=0.3]{DiagsGeneralRulesVertex.pdf}} &=& {\bf Q}\!=\!\!\!\int\!\!dr' \tilde{j}_{3/2}(\omega r') \boldsymbol{\rho} (r') \!=\!-\frac{q_\omega}{2}\!\!\int\!\! d^3 x' \tilde{j}_{3/2}(\omega r') {\bf n}' \left[r'^2 \delta({\bf x}' - {\bf x}_p) \right]' = q {\bf x}_p + \dots \equiv {\bf D} \nonumber\\
\parbox{10mm} {\includegraphics[scale=0.3]{DiagsGeneralRulesVertexHat.pdf}} &=&\hat{\bf Q} = \frac{\delta {\bf Q}}{\delta x^{i}}\hat{x}^{i} = q \hat{\bf x}_p + \dots \equiv \hat{\bf D} ~.
\end{eqnarray}
We remark that this is merely the static electric dipole moment ${\bf D}$ of the source, where the ellipsis denotes relativistic corrections. With the Feynman rules at hand we can proceed to determine the outgoing radiation and the radiation reaction effective action.
Radiation away from the source is given diagrammatically by (compare (\ref{rad}))
\begin{eqnarray}
{\bf A}(r)= \parbox{20mm}{\includegraphics[scale=0.5]{DiagRadiationScalar.pdf}} = -{\bf Q}\, G(0,r) =-\frac{2i\,\omega \, {\bf Q}\, e^{i \omega\, r}}{r^2}~ , ~~~ \label{rad EM} \label{Radiation EM using feynman scalar}
\end{eqnarray}
where we have used (\ref{Feynman Rule Propagator},\ref{Feynman Rule Vertex},\ref{Bessel H asymptotic2}). In the time domain, we find
\begin{equation}
{\bf A} ({\bf x},t)=\frac{2}{r}\, \partial_t {\bf Q}(t-r) ~. \label{radiation A}
\end{equation}
The EM radiation reaction effective action is
\begin{eqnarray}
{\hat S}_{EM} = \parbox{20mm}{\includegraphics[scale=0.5]{ActionDiagScalarL.pdf}} ~ + ~ c.c. ~ &=& \, \frac{1}{2} \int \! \frac{d \omega}{2 \pi} ~ \hat{{\bf Q}}^{*} \, G(0,0) \, {\bf Q} \, + c.c. \, \nonumber\\ &=&\frac{2}{3}\int\!\!dt \, \hat{\bf Q} \cdot \partial_t^{3}{\bf Q} \label{RR effective action EM general dipole}
\end{eqnarray}
where the propagator was evaluated at $r=r'=0$, as in (\ref{RR effective action}), and we regulated $ \tilde{h}^+(0) \to \tilde{j}(0)=1$. In the process of computing ${\hat S}_{EM}$ the fields were eliminated and only the particle's dipole remains. Note that the third time derivative originates from the $\omega^3$ term in (\ref{Feynman Rule Propagator}), which in turn originates from the behavior of the Bessel function near the origin. For a single charge we substitute (\ref{Feynman Rule Vertex}) in (\ref{RR effective action EM general dipole}) to obtain
\begin{equation}
{\hat S}_{EM} = \frac{2}{3}q^2\!\!\int\!\!dt \, \hat{\bf x} \cdot \partial_t^{3}{\bf x} ~. \label{RR effective action EM}
\end{equation}
This can now be used to find the radiation-reaction (self) force, similarly to (\ref{final action rope},\ref{EOM for mass on rope}), through the Euler--Lagrange equation with respect to $\hat{\bf x}$
\begin{equation}
{\bf F}_{RR} = \frac{2}{3} \, q^2 \dddot{\bf x}. \label{EOM for charge ALD}
\end{equation}
This of course matches the Abraham--Lorentz result \cite{Abraham,Lorentz1,Lorentz2}, and is the leading order term in the fully relativistic result of Dirac \cite{Dirac} (given in similar form in \cite{BirnholtzHadarKol2013a}, eq. (3.67)). In addition, our approach offers some benefits. Rewriting (\ref{RR effective action EM general dipole}) with the notation ${\bf D} \equiv {\bf Q}$ we have
\begin{equation}
{\hat S}_{EM} = \frac{2}{3}\int\!\!dt \, \hat{\bf D} \cdot \partial_t^{3}{\bf D} ~. \label{hSD}
\end{equation}
This expression applies not only to a single charge, but also to a system of charges, provided we set
\begin{equation}
{\bf D} \left[ {\bf x}_a \right] := \sum_a q_a\, {\bf x}_a
\end{equation}
where the sum is over all the particles in the system.
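\par As a consistency check (our own illustration, not part of the original discussion), note that for a periodic trajectory the time-averaged work done by the self-force (\ref{EOM for charge ALD}) should equal minus the averaged dipole radiation power $\frac{2}{3}\langle \ddot{\bf D}^2\rangle$. A minimal symbolic verification in Python, for one Cartesian component $x(t)=\cos t$:
\begin{verbatim}
import sympy as sp

t, q = sp.symbols('t q', positive=True)
x = sp.cos(t)                    # one component of a periodic trajectory
D = q * x                        # dipole moment of a single charge

F = sp.Rational(2, 3) * q**2 * sp.diff(x, t, 3)       # ALD self-force
work = sp.integrate(F * sp.diff(x, t),
                    (t, 0, 2 * sp.pi)) / (2 * sp.pi)  # <F xdot>
larmor = sp.Rational(2, 3) * sp.integrate(
    sp.diff(D, t, 2)**2, (t, 0, 2 * sp.pi)) / (2 * sp.pi)
print(sp.simplify(work + larmor))  # 0: RR work balances radiated power
\end{verbatim}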
\par To get the radiation reaction force on any specific charge in the system we need only vary the single object ${\hat S}$,
\begin{equation}
F_{RR,a}^i = \frac{\delta {\hat S}}{\delta {\hat x}_a^i} ~.
\end{equation}
Moreover, the form (\ref{hSD}) and the Feynman diagram (\ref{RR effective action EM general dipole}) reveal that the dipole appears in the self-force twice: once in an obvious way as ${\bf D}$, the source of the radiation and reaction fields, and a second, less obvious time as $\hat{\bf D}$, which is the ``target'' coupling through which the reaction field acts back on the charges. In this sense the source and the target are seen to be connected. Going beyond the non-relativistic limit, an expression for all the relativistic corrections was given in eq. (3.65) of \cite{BirnholtzHadarKol2013a}. When expanded, the first relativistic correction includes the electric quadrupole term, the magnetic dipole term, and relativistic corrections to the electric dipole. These were shown to confirm the expansion of Dirac's formula to next-to-leading order (eq. (3.68) there). \section*{Conclusions} \label{section:Conclusion} Inspired by the (quantum) Closed Time Path formalism, we have shown how dissipative systems can be treated with an action principle in classical contexts. The explicit algorithm for finding the generalized action incorporates field doubling, zone separation and spherical waves. These ingredients were demonstrated by deriving the generalized (dissipative) action for a classical oscillator attached to a rope (\ref{RR effective action}) and for the Abraham--Lorentz--Dirac EM self-force (\ref{RR effective action EM}). While these two problems may seem remote from each other, and involve different dimensionality, fields, and sources, the treatment of their radiation follows a very similar path. \subsection*{Acknowledgments} We thank B. Kosyakov for encouragement and many helpful comments, and B. Remez for commenting on a draft. This research was supported by the Israel Science Foundation grant no. 812/11, and it is part of the Einstein Research Project ``Gravitation and High Energy Physics'', which is funded by the Einstein Foundation Berlin. OB was partly supported by an ERC Advanced Grant to T. Piran.
\section{Introduction}
\IEEEPARstart{W}{ith} the release of low-cost depth cameras, 3D human action recognition has attracted more and more attention from researchers. 3D human action recognition can be divided into three categories based on the data type: 1) skeleton sequence-based 3D human action recognition \cite{S1,S2,S3,S4,S5}, 2) depth sequence-based 3D human action recognition \cite{D1,D2,D3,D4,D5,D6}, and 3) point cloud sequence-based 3D human action recognition \cite{P1,P2,P3}. Compared with skeleton sequence-based approaches, point cloud sequences are more convenient to collect, requiring no additional pose estimation algorithms. Compared with depth sequence-based approaches, point cloud sequence-based methods incur lower computation costs. For these reasons, we focus on point cloud sequence-based 3D human action recognition in this work.
\par Point cloud sequences of 3D human actions exhibit unordered intra-frame spatial information and ordered inter-frame temporal information. Due to this complex data structure, capturing the spatio-temporal textures of point cloud sequences is extremely challenging. An intuitive way is to convert a point cloud sequence to a static 3D point cloud and employ static point cloud methods, e.g., PointNet++ \cite{Pointnet++}, to process it. However, using a static 3D point cloud to represent the entire point cloud sequence loses a lot of spatio-temporal information, which reduces recognition performance. Therefore, point cloud sequence methods, which directly consume point cloud sequences for 3D human action classification, are necessary.
In order to model the dynamics of point cloud sequences, cross-frame spatio-temporal local neighborhoods around centroids are usually constructed \cite{MeteorNet,PSTNet,P4Transformer}. Then, point convolutions \cite{MeteorNet} or PointNet \cite{PSTNet,P4Transformer} are used to encode the spatio-temporal local structures. However, this complex cross-frame computing may be time-consuming and is not conducive to parallel computation.
\par In this paper, we propose a strong frame-level parallel point cloud sequence network called SequentialPointNet. First, each point cloud frame is flattened into a hyperpoint by the same trainable hyperpoint embedding module. The hyperpoint embedding module is a static point cloud architecture designed based on PointNet. Since PointNet can arbitrarily approximate continuous functions, the hyperpoint sequence is able to retain the complete human appearance information of each frame. Then, the Hyperpoint-Mixer module is proposed to mix features of the hyperpoint sequence. In this module, intra-frame channel-mixing operations and inter-frame frame-mixing operations are clearly separated, and the frame-mixing operation is placed at the end of all feature mixing operations. Taking the frame-mixing operation as the watershed, SequentialPointNet can be divided into a front part and a back part. Since there are no cross-frame computational dependencies, the front part can be divided into frame-level units executed in parallel. Furthermore, separating the channel-mixing operations and frame-mixing operations avoids the mutual influence of temporal and spatial information extraction, which maximizes the human appearance encoding ability. Hence, SequentialPointNet also yields superior recognition accuracy for 3D human action recognition.
\par Our main contributions are summarized as follows:
\par• We propose a strong frame-level parallel point cloud sequence classification network, dubbed SequentialPointNet, to recognize 3D human actions. The key to SequentialPointNet is to divide the main modeling operations into frame-level units executed in parallel, which greatly improves the efficiency of modeling point cloud sequences.
\par• In terms of our technical contribution, we propose to flatten the point cloud sequence into a new point data type named the hyperpoint sequence and introduce a novel Hyperpoint-Mixer module to perform feature mixing for the hyperpoint sequence, based on which the appearance and motion information can be readily extracted for 3D human action recognition.
\par• Our SequentialPointNet runs up to 10$\times$ faster than existing point cloud sequence models and yields the cross-view accuracy of 97.6$\%$ on the NTU RGB+D 60 dataset, the cross-setup accuracy of 95.4$\%$ on the NTU RGB+D 120 dataset, the cross-subject accuracy of 91.94$\%$ on the MSR Action3D dataset, and the cross-subject accuracy of 92.31$\%$ on the UTD-MHAD dataset, which outperforms the state-of-the-art methods.
\section{Related Work}
\subsection{Static Point Cloud Modeling}
With the popularity of low-cost 3D sensors, deep learning on static point clouds has attracted much attention from researchers due to extensive applications ranging from object classification \cite{PC1,PC2,PC3,PC4,PC5} and part segmentation \cite{PS1,PS2,PS3,PS4,PS5} to scene semantic parsing \cite{PP1,PP2}. Static point cloud modeling can be divided into volumetric-based methods \cite{PVB1,PVB2,PVB3,PVB4,PVB5,PVB6} and point-based methods \cite{PPB1,PPB2,PPB3,PPB4}.
Volumetric-based methods usually voxelize a point cloud into 3D grids, and then a 3D Convolutional Neural Network (CNN) is applied on the volumetric representation for classification. Point-based methods are directly performed on raw point clouds. PointNet \cite{PointNet} is a pioneering effort that directly processes point sets. The key idea of PointNet is to abstract each point using a set of Multi-layer Perceptrons (MLPs) and then assemble all individual point features by a symmetry function, $i.e.$, a max pooling operation. However, PointNet lacks the ability to capture local structures. Therefore, in \cite{Pointnet++}, a hierarchical network PointNet++ is proposed to encode fine geometric structures from the neighborhood of each point. PointNet++ is made of several set abstraction levels. A set abstraction level is composed of three layers: the sampling layer, the grouping layer, and the PointNet-based learning layer. By stacking several set abstraction levels, PointNet++ progressively abstracts larger and larger local regions along the hierarchy.
\subsection{Time Sequence Modeling}
Sequential data widely exist in various research fields, such as text, audio, and video. The point cloud sequence is also a kind of sequential data. Time sequence models have been studied by the natural language processing community for decades. The emergence of Recurrent Neural Networks (RNNs) \cite{RNN1,RNN2,RNN3,RNN4} pushed the boundaries of time sequence models. Due to their capability of extracting high-dimensional features to learn complex patterns, RNNs are widely used for time-series prediction. LSTM \cite{LSTM} captures the contextual representations of words with a short memory and has additional ``forget'' gates, thereby overcoming both the vanishing and exploding gradient problems. GRU \cite{GRU} comprises a reset gate and an update gate, and handles the information flow like LSTM without a separate memory unit. Transformer \cite{Transformer}, a fully-connected attention model, models the dependencies between words in the sequence. Since the model includes no recurrence and no convolution, Transformer injects positional encodings to retain the relative or absolute positions within the sequence.
\subsection{Point Cloud Sequence-based 3D Human Action Recognition}
Point cloud sequence-based 3D human action recognition is a fairly new and challenging task. To capture spatio-temporal information from point cloud sequences, one solution is to convert point cloud sequences to static 3D point clouds and employ static point cloud methods, e.g., PointNet++, to process them. 3DV-PointNet++ \cite{3DV} is the first work to recognize human actions from point cloud sequences. In 3DV-PointNet++, the 3D dynamic voxel (3DV) is proposed as a novel 3D motion representation. A set of points is extracted from the 3DV and input into PointNet++ for 3D action recognition in an end-to-end learning manner. However, since the point cloud sequence is converted into a static 3D point cloud set, 3DV loses a lot of spatio-temporal information and incurs additional computational costs.
\par To overcome this problem, researchers have focused mainly on investigating point cloud sequence networks that directly consume point cloud sequences for human action recognition. MeteorNet \cite{MeteorNet} is the first work on deep learning for modeling point cloud sequences. In MeteorNet, two ways are proposed to construct spatio-temporal local neighborhoods for each point in the point cloud sequence.
The abstracted feature of each point is learned by aggregating the information from these neighborhoods. Fan $et$ $al$. \cite{PSTNet} propose a Point spatio-temporal (PST) convolution to encode the spatio-temporal local structures of point cloud sequences. PST convolution first disentangles space and time in point cloud sequences. Furthermore, PST convolutions are incorporated into a deep network named PSTNet to model point cloud sequences in a hierarchical manner. To avoid point tracking, the Point 4D Transformer (P4Transformer) network \cite{P4Transformer} is proposed to model point cloud videos. Specifically, P4Transformer consists of a point 4D convolution to embed the spatio-temporal local structures presented in a point cloud video and a Transformer to encode the appearance and motion information by performing self-attention on the embedded local features. However, in these point cloud sequence networks, cross-frame spatio-temporal local neighborhoods are usually constructed during the modeling of point cloud sequences, which is quite time-consuming and limits the parallelism of the networks.
\begin{figure}[t] \centering \includegraphics[width=8.6cm]{Fig1} \caption{SequentialPointNet contains a hyperpoint embedding module, a Hyperpoint-Mixer module, and a classifier head.\label{fig1}} \end{figure}
\section{METHODOLOGY}
In this section, a strong frame-level parallel point cloud sequence network called SequentialPointNet is proposed to recognize 3D human actions. The overall flowchart of SequentialPointNet is described in Fig. 1. SequentialPointNet contains a hyperpoint embedding module, a Hyperpoint-Mixer module, and a classifier head. In Section III-A, the hyperpoint embedding module is used to flatten each point cloud frame into a hyperpoint that retains the complete human static appearance information. The intra-frame spatial features and inter-frame temporal features of the hyperpoint sequence are mixed by the Hyperpoint-Mixer module in Section III-B. In Section III-C, we analyze the frame-level parallelism of SequentialPointNet.
\subsection{Hyperpoint embedding module}
Recent point cloud sequence models typically construct cross-frame spatio-temporal local neighborhoods and employ point convolution or PointNet-based learning to extract spatial and temporal information. This complex cross-frame computing precludes frame-level parallelism. To avoid complex cross-frame computing, we propose to flatten each point cloud frame into a hyperpoint (also referred to as a token) by the same trainable hyperpoint embedding module. This module is employed to aggregate intra-frame geometric details and preserve them in the hyperpoint. The independent embedding of each point cloud frame enables the hyperpoint sequence to be generated in a frame-level parallel fashion. An additional advantage of this flattening operation is that superior static point cloud models can be used almost out of the box as the hyperpoint embedding module.
\par In the hyperpoint embedding module, each point cloud frame is flattened to a hyperpoint by static point cloud processing technology to summarize the human static appearance. We first adopt the set abstraction operation twice to downsample each point cloud frame. In this process, the texture information from space partitions is aggregated into the corresponding centroids. Then, in order to characterize the entire human appearance, a PointNet layer is used. The hyperpoint embedding module is shown in Fig. 2.
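\par To make the flattening operation concrete, the following Python sketch (our own toy illustration with random, untrained weights; the set abstraction levels are omitted for brevity) shows the essential property of the hyperpoint: a shared per-point MLP followed by max pooling maps an unordered point cloud frame to a single permutation-invariant feature vector:
\begin{verbatim}
import numpy as np

def to_hyperpoint(points, W1, W2):
    # shared per-point MLP (ReLU) followed by a symmetric max pool
    h = np.maximum(points @ W1, 0.0)
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)              # one hyperpoint per frame

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 64)), rng.normal(size=(64, 128))
frame = rng.normal(size=(512, 3))     # one frame: 512 XYZ points

hp = to_hyperpoint(frame, W1, W2)
shuffled = frame[rng.permutation(len(frame))]
assert np.allclose(hp, to_hyperpoint(shuffled, W1, W2))  # order-invariant
print(hp.shape)                       # (128,): the frame's hyperpoint
\end{verbatim}
Since each frame is embedded independently, this map can be evaluated for all frames in parallel.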
\begin{figure*}[t] \centering \includegraphics[width=18cm]{Fig2} \caption{The hyperpoint embedding module contains two set abstraction operations and a PointNet layer.\label{fig2}} \end{figure*}
\par The set abstraction operation in our work is made of three key layers: the sampling layer, the grouping layer, and the augmentation PointNet layer. Specifically, let $\bm{{S}}=\left\{\mathcal{S}_{t}\right\}_{t=1}^{T}$ denote a point cloud sequence of $T$ frames, and $\mathcal{S}_{t}=\left\{x^t_1, x^t_2, ..., x^t_n \right\}$ denote the unordered point set of the $t$-th frame, where $n$ is the number of points. The $m$-th set abstraction level takes as input an $n_{m-1} \times (d + c_{m-1})$ matrix of $n_{m-1}$ points with $d$-dim coordinates and $c_{m-1}$-dim point features. It outputs an $n_{m} \times\left(d+c_{m}\right)$ matrix of $n_{m}$ subsampled points with $d$-dim coordinates and new $c_{m}$-dim feature vectors summarizing local context. The size of the input in the first set abstraction level is $n \times (d + c)$. In this work, $d$ is set as 3, corresponding to the 3-dim coordinates ($X$, $Y$, $Z$) of each point, and $c$ is set as 0.
\par In the sampling layer, farthest point sampling (FPS) is used to choose $n_{m}$ points as centroids from the point set.
\par In the grouping layer, a point set of size $n_{m-1} \times (d + c_{m-1})$ and the coordinates of a set of centroids of size $n_{m} \times d$ are taken as input. The output is $n_{m}$ groups of point sets of size $n_{m} \times k_{m} \times \left(d+c_{m-1}\right)$, where each group corresponds to a local region and $k_{m}$ is the number of local points in the neighborhood of a centroid point. Ball query finds all points that are within a radius of the query point, in which an upper limit of $k_{m}$ is set.
\par The augmentation PointNet layer in the set abstraction operation includes an inter-feature attention mechanism, a set of MLPs, and a max pooling operation. The input of this layer is $n_{m}$ local regions with data size $n_{m} \times k_{m} \times \left(d+c_{m-1}\right)$. First, the coordinates of points in a local region are translated into a local frame relative to the centroid point. Second, the distance between each local point and the corresponding centroid is used as a 1-dim additional point feature to alleviate the influence of rotational motion on action recognition. Then, an inter-feature attention mechanism is used to optimize the fusion effect of different features. The inter-feature attention mechanism is realized by the Convolutional Block Attention Module (CBAM) in \cite{CBAM}. The inter-feature attention mechanism is not used in the first set abstraction operation, since there is only the 1-dim point feature. Next, a set of MLPs is applied to abstract the features of each local point. Then, the representation of a local region is generated by incorporating the abstracted features of all local points using a max pooling operation. Finally, the coordinates of the centroid point and its local region representation are concatenated as the abstracted features of this centroid point. The augmentation PointNet layer is formalized as follows:
\begin{equation}
{r}_{j}^{t}=\underset{i=1, \ldots, k_{m}}{\operatorname{MAX}}\left\{\operatorname{MLP}\left(\left[\left({l}_{j,i}^{t}-{o}_{j}^{t}\right)\ominus{e}_{j,i}^{t}\ominus{p}_{j,i}^{t}\right]\odot A\right)\right\}\ominus{o}_{j}^{t}
\end{equation}
where ${l}_{j,i}^{t}$ denotes the coordinates of the $i$-th point in the $j$-th local region of the $t$-th point cloud frame.
${o}_{j}^{t}$ and ${p}_{j,i}^{t}$ are the coordinates of the centroid point and the point features corresponding to ${l}_{j,i}^{t}$, respectively. ${e}_{j,i}^{t}$ is the Euclidean distance between ${l}_{j,i}^{t}$ and ${o}_{j}^{t}$. $A$ is the attention mechanism with (3+1+$c_{m-1}$)-dim scores corresponding to the coordinates and features of each point. The attention scores in $A$ are shared among all local points from all point cloud frames. $\ominus$ and $\odot$ are the concatenation operation and the dot product operation, respectively. ${r}_{j}^{t}$ is the abstracted feature of the $j$-th centroid point from the $t$-th point cloud frame.
\par The set abstraction operation is performed twice in the hyperpoint embedding module. In order to characterize the spatial appearance information of the entire point cloud frame, a PointNet layer consisting of a set of MLPs and a max pooling operation is used as follows:
\begin{equation}
\bm{F}_{t}=\underset{j=1, \ldots, n_2}{\operatorname{MAX}}\left\{\operatorname{MLP}\left({r}_{j}^{t}\right)\right\}
\end{equation}
where $\bm{F}_{t}$ is the hyperpoint of the $t$-th point cloud frame. The hyperpoint sequence is thus represented as $\bm{{F}}=\left\{\bm{F}_{t}\right\}_{t=1}^{T}$.
\par The hyperpoint is a high-dimensional point that summarizes the textures of the static point cloud in a selected area. As shown in Fig. 3, the hyperpoint sequence is a set of ordered hyperpoints in the hyperpoint space, recording the changing internal textures of each point cloud frame along the time dimension. The hyperpoint sequence is a new point data type. Adjacent hyperpoints in the hyperpoint sequence have high similarity. Compared with the complex internal spatial information, the change ($i.e.$, temporal information) between hyperpoints is relatively simple.
\begin{figure}[t] \centering \includegraphics[width=8.6cm]{Fig33} \caption{The hyperpoint sequence is a set of ordered hyperpoints in the hyperpoint space, recording the changing internal textures of each point cloud frame along the time dimension.\label{fig33}} \end{figure}
\par To the best of our knowledge, we are the first to flatten the point cloud frame to a hyperpoint in point cloud sequence models. In order to demonstrate that the hyperpoint retains the complete human static appearance information, we provide a theoretical foundation for our flattening operation by showing the universal approximation ability of the hyperpoint embedding module to continuous functions on point cloud frames.
\par Formally, let $\mathcal{X}=\left\{S: S \subseteq[0,1]^{c}\right.$ and $\left.|S|=n\right\}$ be the set of $c$-dimensional point clouds inside a $c$-dimensional unit cube. $f:\mathcal{X} \rightarrow \mathbb{R}$ is a continuous set function on $\mathcal{X}$ w.r.t.\ the Hausdorff distance $D_{H}(\cdot, \cdot)$, i.e., $\forall \epsilon>0, \exists \delta>0$, such that for any $S, S^{\prime} \in \mathcal{X}$, if $D_{H}(S, S^{\prime})<\delta$, then $|f(S)-f(S^{\prime})|<\epsilon$. Theorem 1 \cite{PointNet} states that $f$ can be arbitrarily approximated by PointNet given enough neurons at the max pooling layer.
\par $\textbf{Theorem 1.}$ \textit{Suppose $f: \mathcal{X} \rightarrow \mathbb{R}$ is a continuous set function w.r.t.\ the Hausdorff distance $D_{H}(\cdot, \cdot)$.}
\textit{$\forall \epsilon>0$, $\exists$ a continuous function $h$ and a symmetric function $g\left(x_{1}, \ldots, x_{n}\right)=\gamma \circ \operatorname{MAX}$, such that for any $S \in \mathcal{X}$,}
$$ \left|f(S)-\gamma\left(\underset{x_{i} \in S}{\operatorname{MAX}}\left\{h\left(x_{i}\right)\right\}\right)\right|<\epsilon $$
\textit{where $x_{1}, \ldots, x_{n}$ is the full list of elements in $S$ ordered arbitrarily, $\gamma$ is a continuous function, and $\operatorname{MAX}$ is a vector max operator that takes $n$ vectors as input and returns a new vector of the element-wise maximum.}
\par As stated above, continuous functions can be arbitrarily approximated by PointNet given enough neurons at the max pooling layer. The hyperpoint embedding module is a recursive application of PointNet on nested partitions of the input point set. Therefore, the hyperpoint embedding module is able to arbitrarily approximate continuous functions on point cloud frames given enough neurons at the max pooling layers and a suitable partitioning strategy. In other words, the hyperpoint embedding module is not only a frame-level parallel architecture but also has the ability to extract the complete human static appearance information from the point cloud sequence.
\begin{figure*}[t] \centering \includegraphics[width=18cm]{Fig3} \caption{Left: Hyperpoint-Mixer module. Right: Space dislocation layer. The Hyperpoint-Mixer module consists of multiple space dislocation layers of identical size, a frame-mixing layer, and multi-level feature learning based on skip-connection operations. The space dislocation layer is composed of a hyperpoint dislocation block and a channel-mixing MLP block.\label{fig3}} \end{figure*}
\subsection{Hyperpoint-Mixer module}
The Hyperpoint-Mixer module is proposed to perform feature mixing of hyperpoint sequences. Existing point cloud sequence models consist of architectures that mix intra-frame spatial and inter-frame temporal features. MeteorNet, P4Transformer, and PSTNet mix spatial and temporal features at once in cross-frame spatio-temporal local neighborhoods. The idea behind the Hyperpoint-Mixer module is to (i) clearly separate the spatial (channel-mixing) operation and the temporal (frame-mixing) operation, and (ii) design a frame-mixing operation of low computational complexity which is executed following the last channel-mixing operation. The intra-frame spatial information and the inter-frame temporal information in hyperpoint sequences may not be compatible. Separating the channel-mixing operations and the frame-mixing operation avoids the mutual influence of spatial and temporal information extraction, which maximizes the human appearance encoding ability. Moreover, due to (i) and (ii), the Hyperpoint-Mixer module enables the main operations of SequentialPointNet to be executed in frame-level parallel fashion.
\begin{figure}[t] \centering \includegraphics[width=8.6cm]{Fig444} \caption{In the hyperpoint dislocation block, by adding the corresponding displacement vector, hyperpoints are dislocated from the initial space to different spatial positions to record the temporal order.\label{fig444}} \end{figure}
\par Fig. 4 summarizes the Hyperpoint-Mixer module. The point cloud sequence is flattened to a hyperpoint sequence $\bm{F} \in \mathbb{R}^{T \times d_\text{H}}$ and input into the Hyperpoint-Mixer module to mix spatial and temporal features. This module consists of multiple space dislocation layers of identical size, multi-level feature learning based on skip-connection operations, and a frame-mixing layer.
\subsubsection{Space dislocation layer}
The space dislocation layer is composed of a hyperpoint dislocation block and a channel-mixing MLP block, and is presented to perform the intra-frame spatial feature mixing of hyperpoints in the new dislocated space. Simple symmetric functions like averaging, maximum, and addition are common feature aggregation operations with low computational complexity. In this paper, a symmetric function is adopted as the frame-mixing layer. However, symmetric functions are insensitive to the input order of hyperpoints. To overcome this problem, we must embed temporal information into the hyperpoint sequence. In the hyperpoint dislocation block, by adding the corresponding displacement vector, hyperpoints are dislocated from the initial space to different spatial positions to record the temporal order, as shown in Fig. 5. The essence of the hyperpoint dislocation block is to convert the hyperpoint order into a coordinate component of the hyperpoint so as to embed temporal information. Specifically, displacement vectors are generated using the sine and cosine functions of different frequencies \cite{Transformer}:
\begin{equation}
D V_{t, 2 h}=\sin \left(t/ 10000^{2 h / d_{\text{H}}}\right)
\end{equation}
\begin{equation}
D V_{t, 2 h+1}=\cos \left(t/ 10000^{2 h / d_{\text{H}}}\right)
\end{equation}
where $d_{\text{H}}$ denotes the number of channels, $t$ is the temporal position, and $h$ is the dimension position. Then, the channel-mixing MLP block acts on each hyperpoint in the dislocation space to perform the channel-mixing operation, maps $\mathbb{R}^{d_\text{H}}\rightarrow \mathbb{R}^{d_\text{H}}$, and is shared across all hyperpoints. Each channel-mixing MLP block contains a set of fully-connected layers and ReLU non-linearities. Space dislocation layers can be formalized as follows:
\begin{equation}
\bm{F}^{\ell}_{t}=\operatorname{MLP}(\bm{F}^{\ell-1}_{t}+DV_t), \quad \text { for } t=1 \ldots T,
\end{equation}
where $\bm{F}^{\ell}$ is the new hyperpoint sequence after the $\ell$-th space dislocation layer.
\par Each space dislocation layer takes an input of the same size, which is an isotropic design. With the stacking of space dislocation layers, hyperpoints are dislocated at increasingly larger scales. In the larger dislocation space, the coordinates of hyperpoints record more temporal information but less spatial information. Multi-level feature learning \cite{multi-scale} is often used in vision tasks and can improve recognition accuracy.
In order to obtain more discriminant information, multi-level features from different dislocation spaces are added by skip-connection operations and sent into the symmetric function as follows:
\begin{equation}
\bm{R}_{i}=\underset{t=1, \ldots, T}{g}\left\{(\bm{F}+\bm{F}^{1}+\cdots+\bm{F}^{\ell})_{t,i}\right\}
\end{equation}
where $g$: $\mathbb{R}^{T \times d_\text{H}} \rightarrow \mathbb{R}^{d_\text{H}}$ is a symmetric function and $\bm{R}$ is the global spatio-temporal feature of the hyperpoint sequence.
\subsubsection{Frame-mixing layer}
The frame-mixing layer is used to perform the feature mixing across frames. Since the temporal information has been injected, the max pooling operation is adopted as the frame-mixing layer to aggregate spatio-temporal information from all the hyperpoints. In order to capture the subactions within the point cloud sequence, the hierarchical pyramid max pooling operation is used, which divides the fused hyperpoint sequence $\bm{\widetilde{F}}=\bm{F}+\bm{F}^{1}+\cdots+\bm{F}^{\ell}$ into multiple temporal partitions with an equal number of hyperpoints and then performs the max pooling operation in each partition to generate the corresponding descriptors. In this work, we employ a 2-layer pyramid with three partitions. Then, the descriptors from all temporal partitions are simply concatenated to form the global spatio-temporal feature $\bm{R}$. Finally, the output of the Hyperpoint-Mixer module is input to a fully-connected classifier head for recognizing human actions.
\par It is worth mentioning that the Hyperpoint-Mixer module can also be viewed as a time sequence model. Compared to RNN-like and Transformer-like networks, the Hyperpoint-Mixer module achieves more competitive performance on the hyperpoint sequence classification task. Different from conventional sequential data, the internal structures of the hyperpoints generate the main discriminant information, and the dynamics between the hyperpoints are auxiliary. Therefore, time sequence models that implement strict temporal inference are not suitable for hyperpoint sequences. Channel-mixing operations and the frame-mixing operation are separated in the Hyperpoint-Mixer module, which avoids the mutual influence of temporal and spatial information extraction and maximizes the encoding ability for intra-frame human appearance. The symmetric function assisted by the hyperpoint dislocation block also preserves plentiful dynamic information for effective human action recognition.
\begin{figure}[t] \centering \includegraphics[width=8.6cm]{Fig4} \caption{The computation graph of SequentialPointNet.\label{fig4}} \end{figure}
\subsection{Frame-level parallelism}
\par Fig. 6 demonstrates the computation graph of SequentialPointNet. Taking the frame-mixing layer as the watershed, SequentialPointNet can be divided into a front part and a back part. Since there is no cross-frame computational dependency, the operations of the front part can be divided into frame-level units executed in parallel. Each frame-level unit includes a hyperpoint embedding module and all space dislocation layers. The back part only contains architectures of low computational complexity, including the frame-mixing operation and a classifier head. Therefore, the main operations ($i.e.$, the front part) of SequentialPointNet can be executed in frame-level parallel fashion, based on which the efficiency of modeling point cloud sequences is greatly improved.
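\par The data flow of the Hyperpoint-Mixer module can be summarized by the following Python sketch (our own simplified illustration with random, untrained weights; the dimensions loosely follow the settings of Section IV):
\begin{verbatim}
import numpy as np

def displacement_vectors(T, d):
    # sinusoidal displacement vectors DV[t, h], cf. Eqs. (3)-(4)
    t = np.arange(T)[:, None]
    h = np.arange(0, d, 2)[None, :]
    dv = np.zeros((T, d))
    dv[:, 0::2] = np.sin(t / 10000 ** (h / d))
    dv[:, 1::2] = np.cos(t / 10000 ** (h / d))
    return dv

def hyperpoint_mixer(F, Ws):
    # F: (T, d) hyperpoint sequence; Ws: channel-mixing MLP weights
    T, d = F.shape
    dv = displacement_vectors(T, d)
    levels = [F]
    for W in Ws:                     # space dislocation layers, cf. Eq. (5)
        levels.append(np.maximum((levels[-1] + dv) @ W, 0.0))
    mixed = np.sum(levels, axis=0)   # skip-connections, cf. Eq. (6)
    # frame-mixing: 2-layer pyramid max pooling -> 3 temporal partitions
    parts = [mixed] + np.array_split(mixed, 2)
    return np.concatenate([p.max(axis=0) for p in parts])

rng = np.random.default_rng(0)
F = rng.normal(size=(24, 1024))      # 24 frames of 1024-dim hyperpoints
Ws = [0.03 * rng.normal(size=(1024, 1024)) for _ in range(2)]
print(hyperpoint_mixer(F, Ws).shape) # (3072,): fed to the classifier head
\end{verbatim}
Note that everything before the final pyramid max pooling acts on each hyperpoint independently, which is precisely what permits the frame-level parallel execution described above.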
\section{Experiments}
In this section, we first introduce the datasets and the experimental implementation details. Then, we compare our SequentialPointNet with the existing state-of-the-art methods. Next, we conduct detailed ablation studies to further demonstrate the contributions of different components of our SequentialPointNet. Finally, we compare the memory usage and computational efficiency of our SequentialPointNet with other point cloud sequence models.
\subsection{Datasets}
\par We evaluate the proposed method on two large-scale public datasets ($i.e.$, NTU RGB+D 60 \cite{NTU60} and NTU RGB+D 120 \cite{NTU120}) and two small-scale public datasets ($i.e.$, the MSR Action3D dataset \cite{MSR} and the UTD Multimodal Human Action Dataset \cite{UTD} (UTD-MHAD)).
\par The NTU RGB+D 60 dataset is composed of 56880 depth video sequences for 60 actions and is one of the largest human action datasets. Both cross-subject and cross-view evaluation criteria are adopted for training and testing.
\par The NTU RGB+D 120 dataset is the largest dataset for 3D action recognition and is an extension of the NTU RGB+D 60 dataset. The NTU RGB+D 120 dataset is composed of 114480 depth video sequences for 120 actions. Both cross-subject and cross-setup evaluation criteria are adopted for training and testing.
\par The MSR Action3D dataset contains 557 depth video samples of 20 actions from 10 subjects. Each action is performed 2 or 3 times by every subject. We adopt the same cross-subject settings as in \cite{MSR}, where all 20 actions are employed. Half of the subjects are used for training and the rest for testing.
\par The UTD-MHAD dataset is composed of 861 depth video samples for 27 actions performed by 8 subjects. Every subject performs each action 4 times. We adopt the same experimental settings as in \cite{UTD}, where all 27 actions are employed. Half of the subjects are used for training and the other half for testing.
\subsection{Implementation details}
Each depth frame is converted to a point cloud set using the public code provided by 3DV-PointNet++. We sample 512 points from the point cloud set as a point cloud frame. Specifically, we first randomly sample 2048 points from the point cloud set. Then, 512 points are chosen from the 2048 points using the FPS algorithm. In the hyperpoint embedding module, the set abstraction operation is performed twice on each point cloud frame to model spatial structures. In the first set abstraction operation, 128 centroids are chosen to determine point groups. The group radius is set to 0.06. The number of points in each point group is set to 48. In the second set abstraction operation, 32 centroids are chosen to determine point groups. The group radius is set to 0.1. The number of points in each point group is set to 16. In the Hyperpoint-Mixer module, the number of space dislocation layers is set to 2. The same data augmentation strategies as in 3DV-PointNet++ are adopted on the training data, including random rotation around the $Y$ and $X$ axes, jittering, and random point dropout. We apply Adam as the optimizer. The batch size is set to 32. The learning rate begins at 0.001 and decays with a rate of 0.5 every 10 epochs. Training ends after 90 epochs.
\par$\textbf{Details on the network architecture.}$ We provide the details of SequentialPointNet as follows. In the hyperpoint embedding module, three sets of MLPs are used ($i.e.$, two sets of MLPs are used in the two set abstraction operations and another set of MLPs is used in the final PointNet layer).
Each set of MLPs includes three MLPs. The output channels of the first set of MLPs are set as 64, 64, and 128, respectively. The output channels of the second set of MLPs are set as 128, 128, and 256, respectively. The output channels of the third set of MLPs are set as 256, 512, and 1024, respectively. In the Hyperpoint-Mixer module, two sets of MLPs are used in the two space dislocation layers. Each set of MLPs includes one MLP of 1024 output channels. The output channels of the fully-connected classifier head are set as 256 and the number of action categories.
\subsection{Comparison with the state-of-the-art methods}
In this section, in order to verify the recognition accuracy of our SequentialPointNet, comparison experiments with other state-of-the-art approaches are implemented on the NTU RGB+D 60 dataset, the NTU RGB+D 120 dataset, the MSR Action3D dataset, and the UTD-MHAD dataset.
\subsubsection{NTU RGB+D 60 dataset}
We first compare our SequentialPointNet with the state-of-the-art methods on the NTU RGB+D 60 dataset. The NTU RGB+D 60 dataset is a large-scale indoor human action dataset. As indicated in Table I, SequentialPointNet achieves 90.3$\%$ and 97.6$\%$ on the cross-subject and cross-view test settings, respectively. Although there is a small gap between our method and the skeleton sequence-based DDGCN on the cross-subject test setting, SequentialPointNet surpasses DDGCN by 0.5$\%$ on the cross-view test setting. Compared with depth sequences and point cloud sequences, skeleton sequences inherently track points over time, which is convenient for exploring fine temporal information. Fine temporal information mitigates the effect of subject diversity on action recognition, achieving higher recognition accuracy on the cross-subject test setting. It is worth mentioning that SequentialPointNet shows performance on par with or even better than other point cloud sequence-based approaches. Our SequentialPointNet achieves state-of-the-art performance among all methods on the cross-view test setting and achieves recognition accuracy similar to PSTNet on the cross-subject test setting. The key success of our SequentialPointNet lies in effectively capturing human appearance information from each point cloud frame by the hyperpoint embedding module and the channel-mixing layers in the Hyperpoint-Mixer module. Separating the channel-mixing operations and the frame-mixing operation avoids the mutual influence of spatial and temporal information extraction, which maximizes the human appearance encoding ability. Moreover, the symmetric function assisted by the hyperpoint dislocation block also preserves plentiful temporal information for effective human action recognition.
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{Action recognition accuracy ($\%$) on NTU RGB+D 60}
\label{table1}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}}
\hline \hline
\bfseries Method$/$Year & \bfseries Input & \bfseries Cross-subject& \bfseries Cross-view\\
\hline \hline
Wang $et$ $al$.(2018)\cite{DPBL} & depth & 87.1& 84.2\\
MVDI(2019)\cite{MVDI} & depth & 84.6& 87.3\\
3DFCNN(2020)\cite{3DFCNN} & depth & 78.1& 80.4\\
Stateful ConvLSTM(2020)\cite{Statefull} & depth & 80.4& 79.9\\
\cmidrule(r){1-4}
ST-GCN(2018)\cite{STGCN} & skeleton & 81.5& 88.3\\
AS-GCN(2019)\cite{ASGCN} & skeleton & 86.8& 94.2\\
2s-AGCN(2019)\cite{2sAGCN} & skeleton & 88.5& 95.1\\
DGNN(2019)\cite{DGNN} & skeleton & 89.9& 96.1\\
DDGCN(2020)\cite{DDGNN} & skeleton & $\mathbf{91.1}$& 97.1\\
3s-CrosSCLR(2021)\cite{3sCrosSCLR} & skeleton & 86.2& 92.5\\
Sym-GNN(2021)\cite{symGNN} & skeleton & 90.1& 96.4\\
\cmidrule(r){1-4}
3DV-PointNet++(2020)\cite{3DV} & point& 88.8& 96.3\\
P4Transformer(2021)\cite{P4Transformer} & point & 90.2& 96.4\\
PSTNet(2021)\cite{PSTNet} & point & 90.5& 96.5\\
SequentialPointNet(ours) & point & 90.3& $\mathbf{97.6}$\\
\hline \hline
\end{tabular*}
\end{table}
\subsubsection{NTU RGB+D 120 dataset}
We then compare our SequentialPointNet with the state-of-the-art methods on the NTU RGB+D 120 dataset. The NTU RGB+D 120 dataset is the largest dataset for 3D action recognition. Compared with the NTU RGB+D 60 dataset, it is more challenging to perform 3D human action recognition on the NTU RGB+D 120 dataset. As indicated in Table II, SequentialPointNet achieves 83.5$\%$ and 95.4$\%$ on the cross-subject and cross-setup test settings, respectively. Note that, even on the largest human action dataset, SequentialPointNet still holds a strong lead on the cross-setup test setting among all 3D human action recognition methods and achieves state-of-the-art performance. Compared with the state-of-the-art method, SequentialPointNet does not show competitive recognition accuracy on the cross-subject setting. There is a gap between our method and PSTNet, which is due to the loss of fine temporal changes. Fine temporal changes mitigate the effect of subject diversity on the performance of action recognition. However, a small loss of temporal information is a necessary concession to frame-level parallelism.
\begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Action recognition accuracy ($\%$) on NTU RGB+D 120} \label{table2} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \hline \hline \bfseries Method$/$Year & \bfseries Input & \bfseries Cross-subject& \bfseries Cross-setup\\ \hline \hline \noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt} Baseline(2018)\cite{NTU120} & depth & 48.7& 40.1\\ \cmidrule(r){1-4} ST-GCN(2018)\cite{STGCN} & skeleton & 81.5& 88.3\\ MS-G3D Net (2020)\cite{MSG3D} & skeleton & 86.9& 88.4\\ 4s Shift-GCN(2020)\cite{4sShiftGCN} & skeleton & 85.9& 87.6\\ SGN(2020)\cite{SGN} & skeleton & 79.2& 81.5\\ 3s-CrosSCLR(2021)\cite{3sCrosSCLR} & skeleton & 80.5& 80.4\\ \cmidrule(r){1-4} 3DV-PointNet++(2020)\cite{3DV} & point& 82.4& 93.5\\ P4Transformer(2021)\cite{P4Transformer} & point & 86.4& 93.5\\ PSTNet(2021)\cite{PSTNet} & point & $\mathbf{87.0}$& 93.5\\ SequentialPointNet(ours) & point & 83.5& $\mathbf{95.4}$\\ \hline \hline \end{tabular*} \end{table} \subsubsection{MSR Action3D dataset} To comprehensively evaluate our method, comparative experiments are also carried out on the small-scale MSR Action3D dataset. To alleviate overfitting on this small-scale dataset, the batch size is set to 16; the other parameter settings remain the same as on the two large-scale datasets. Table III reports the recognition accuracy of different methods when using different numbers of point cloud frames. Interestingly, as the number of point cloud frames increases, the recognition accuracy of our method increases faster than that of MeteorNet, P4Transformer, and PSTNet. When using 24 point cloud frames as input, our model achieves state-of-the-art performance on the MSR Action3D dataset. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Action recognition accuracy ($\%$) on MSR Action3D} \label{table3} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \hline \hline \bfseries Method$/$Year & \bfseries Input & \bfseries $\#$ Frames& \bfseries Accuracy\\ \hline \hline \noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt} Kläser $et$ $al$.(2008)\cite{ASTDB} & depth& 18& 81.43\\ Vieira $et$ $al$.(2012)\cite{SSTO} & depth& 20& 78.20\\ Actionlet(2012)\cite{Actionlet} & skeleton& all& 88.21\\ \cmidrule(r){1-4} \multirow{5}{*}{MeteorNet(2019)\cite{MeteorNet}} & \multirow{5}{*}{point}& 4 & 78.11 \\ & & 8 & 81.14 \\ & & 12 & 86.53 \\ & & 16 & 88.21 \\ & & 24 & 88.50 \\ \cmidrule(r){1-4} PointNet++(2020)\cite{Pointnet++} & point& 1& 61.61\\ \cmidrule(r){1-4} \multirow{6}{*}{P4Transformer(2021)\cite{P4Transformer}} & \multirow{6}{*}{point}& 4 & 80.13 \\ & & 8 & 83.17 \\ & & 12 & 87.54 \\ & & 16 & 89.56 \\ & & 20 & 90.24 \\ & & 24 & 90.94 \\ \cmidrule(r){1-4} \multirow{5}{*}{PSTNet(2021)\cite{PSTNet}} & \multirow{5}{*}{point}& 4 & 81.14 \\ & & 8 & 83.50 \\ & & 12 & 87.88 \\ & & 16 & 89.90 \\ & & 24 & 91.20 \\ \cmidrule(r){1-4} \multirow{6}{*}{SequentialPointNet(ours)} & \multirow{6}{*}{point}& 4 & 77.66 \\ & & 8 & 86.45 \\ & & 12 & 88.64 \\ & & 16 & 89.56 \\ & & 20 & 91.21 \\ & & 24 & $\mathbf{91.94}$\\ \hline \hline \end{tabular*} \end{table} \subsubsection{UTD-MHAD dataset} We also evaluate our method on the other small-scale dataset, UTD-MHAD. The batch size is set to 16, and the other parameter settings remain the same as on the two large-scale datasets. SequentialPointNet is the first point cloud sequence model evaluated on the UTD-MHAD dataset. In this paper, point cloud sequences are converted from depth sequences; to verify the recognition performance of SequentialPointNet, we therefore compare the proposed approach with depth sequence-based methods.
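\par The depth-to-point-cloud conversion itself is a standard pinhole back-projection (the experiments use the public conversion code provided with 3DV-PointNet++, with 2048 points randomly pre-sampled per frame and 512 of them then selected by farthest point sampling). The sketch below is illustrative only: the intrinsics \texttt{fx}, \texttt{fy}, \texttt{cx}, \texttt{cy} and the plain random subsampling are assumptions standing in for the actual conversion code.
\begin{verbatim}
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, n_points=512):
    # Back-project a depth map (H, W) to an (n_points, 3) point cloud frame.
    v, u = np.nonzero(depth > 0)            # pixels with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    # The paper selects 512 points via farthest point sampling from a random
    # pre-sample of 2048; plain random choice is used here for brevity.
    idx = np.random.choice(len(pts), size=min(n_points, len(pts)), replace=False)
    return pts[idx]
\end{verbatim}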
Table IV reports the recognition accuracy of the different methods. SequentialPointNet attains the highest recognition accuracy, 92.31$\%$. The experimental results on the two small-scale datasets demonstrate that our approach achieves superior recognition accuracy even without a large amount of training data. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Action recognition accuracy ($\%$) on UTD-MHAD} \label{table4} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}ccc@{}} \hline \hline \bfseries Method$/$Year & \bfseries Input & \bfseries Accuracy\\ \hline \hline \noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt} 3DHOT-MBC(2017)\cite{3DHOT-MBC} &depth &84.40\\ HP-DMM-CNN(2018)\cite{HP-DMM-CNN} & depth&82.75\\ DMI-CNN(2018)\cite{Kamel} & depth&50.00\\ HP-DMM(2018)\cite{HP-DMM} & depth&73.72\\ Yang $et$ $al$.(2020)\cite{Yang} &depth &88.37\\ Trelinski $et$ $al$.(2021)\cite{Trelinski} & depth&88.14\\ DRDIS(2021)\cite{Wu} & depth&87.88\\ \cmidrule(r){1-3} SequentialPointNet(ours) & point& $\mathbf{92.31}$\\ \hline \hline \end{tabular*} \end{table} \subsection{Ablation study} In this section, comprehensive ablation studies are performed on the NTU RGB+D 60 dataset to validate the contributions of the different components of SequentialPointNet. \subsubsection{Effectiveness of the Hyperpoint-Mixer module} We conduct experiments to demonstrate the effectiveness of the Hyperpoint-Mixer module; the results are reported in Table V. To assess the module's ability to model hyperpoint sequences, several strong deep networks are used in place of the Hyperpoint-Mixer module in SequentialPointNet: an LSTM in SequentialPointNet (LSTM), a GRU in SequentialPointNet (GRU), a Transformer with two attention layers in SequentialPointNet (Transformer), and an MLP-Mixer with two mixer layers in SequentialPointNet (MLP-Mixer). \par The table shows that the results of SequentialPointNet (LSTM), SequentialPointNet (GRU), and SequentialPointNet (Transformer) are much worse than that of SequentialPointNet. The internal structures of hyperpoints generate the main discriminant information, while the changes between hyperpoints are auxiliary; recurrent models that implement strict temporal inference are therefore not suitable for hyperpoint sequences. The self-attention-based Transformer has recently become dominant in natural language processing and computer vision \cite{ViT} and might be expected to excel at hyperpoint sequence classification, but without larger-scale data for pre-training it does not show a promising result. SequentialPointNet (MLP-Mixer) achieves an accuracy of 94.5$\%$, which is 3.1$\%$ lower than that of SequentialPointNet: in MLP-Mixer, channel-mixing and token-mixing are performed alternately, resulting in mutual interference between spatial and temporal information extraction.
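\par To make the contrast concrete, the following minimal sketch implements the separated design of the Hyperpoint-Mixer module under simplifying assumptions: the class and function names are ours, the channel width is assumed even, a single max over frames stands in for the hierarchical pyramid max pooling, and fresh layer instances are created inline for brevity. Hyperpoints are first dislocated by sinusoidal displacement vectors, channel-mixed per frame, and only then frame-mixed by an order-insensitive max.
\begin{verbatim}
import math
import torch
import torch.nn as nn

def displacement_vectors(T, d):
    # DV[t, 2h] = sin(t / 10000^(2h/d)), DV[t, 2h+1] = cos(t / 10000^(2h/d)).
    t = torch.arange(T, dtype=torch.float32).unsqueeze(1)
    freq = torch.exp(torch.arange(0, d, 2).float() * (-math.log(1e4) / d))
    dv = torch.zeros(T, d)
    dv[:, 0::2] = torch.sin(t * freq)
    dv[:, 1::2] = torch.cos(t * freq)
    return dv

class SpaceDislocationLayer(nn.Module):
    # Dislocate hyperpoints to encode temporal order, then channel-mix per frame.
    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU())
    def forward(self, f):                       # f: (T, d) hyperpoint sequence
        return self.mlp(f + displacement_vectors(*f.shape))

def hyperpoint_mixer(f, n_layers=2):
    # Skip-sum multi-level features, then an order-insensitive max over frames.
    levels = [f]
    for _ in range(n_layers):
        f = SpaceDislocationLayer(f.shape[1])(f)
        levels.append(f)
    fused = torch.stack(levels).sum(dim=0)      # multi-level feature learning
    return fused.max(dim=0).values              # frame-mixing readout
\end{verbatim}
Every operation before the final max is applied to each frame independently, which is exactly what permits the frame-level parallelism evaluated below.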
\begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Cross-view recognition accuracy ($\%$) on the NTU RGB+D 60 dataset when the Hyperpoint-Mixer module is replaced with other sequence models} \label{table5} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cc@{}} \hline \hline \bfseries Method & \bfseries Accuracy \\ \hline \hline SequentialPointNet(LSTM) & 85.9\\ SequentialPointNet(GRU) & 86.4\\ SequentialPointNet(Transformer) & 81.6\\ SequentialPointNet(MLP-Mixer) & 94.5\\ SequentialPointNet & 97.6\\ \hline \hline \end{tabular*} \end{table} \subsubsection{Different temporal information embedding manners} To demonstrate the effectiveness of the hyperpoint dislocation block in the Hyperpoint-Mixer module, we also report the result of SequentialPointNet (4D, w/o hdb), which removes the hyperpoint dislocation block and instead injects order information by appending a 1D temporal coordinate to the raw 3D points of each point cloud frame. The results are tabulated in Table VI. SequentialPointNet with the hyperpoint dislocation block outperforms SequentialPointNet (4D, w/o hdb), so the temporal information embedding manner used in SequentialPointNet is more effective. From these results we conclude that premature embedding of temporal information interferes with spatial information encoding and reduces the accuracy of human action recognition. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Cross-view recognition accuracy ($\%$) of SequentialPointNet ablation variants on the NTU RGB+D 60 dataset} \label{table6} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cc@{}} \hline \hline \bfseries Method & \bfseries Accuracy \\ \hline \hline SequentialPointNet(4D, w/o hdb) & 95.4\\ SequentialPointNet(w/o mlfl) & 96.9\\ SequentialPointNet(w/o apn) & 97.1\\ SequentialPointNet & 97.6\\ \hline \hline \end{tabular*} \end{table} \subsubsection{Effectiveness of multi-level feature learning in the Hyperpoint-Mixer module} With the stacking of space dislocation layers in the Hyperpoint-Mixer module, hyperpoints are dislocated at increasingly larger scales. In a larger dislocation space, the coordinates of hyperpoints record more temporal information but less spatial information. To obtain more discriminant information, multi-level features from different dislocation spaces are added by skip-connection operations. To verify the effectiveness of this multi-level feature learning, we report the result of SequentialPointNet (w/o mlfl), which classifies human actions without the multi-level features. Table VI shows that removing multi-level feature learning decreases the recognition accuracy by 0.7$\%$. \subsubsection{Effectiveness of the augmentation PointNet layer} We design the augmentation PointNet layer in the set abstraction operations of the hyperpoint embedding module to encode the spatial structures of human actions. There are two main improvements. First, the distance between each local point and the corresponding centroid is used as a 1-dim additional point feature to alleviate the influence of rotational motion on action recognition. Second, an inter-feature attention mechanism is used to optimize the fusion of different features. To verify the effectiveness of the augmentation PointNet layer, we report the result of SequentialPointNet (w/o apn), which classifies human actions without it. From Table VI, we observe that the recognition accuracy of SequentialPointNet with the augmentation PointNet layer is 0.5$\%$ higher than that of SequentialPointNet (w/o apn). A sketch of this layer follows.
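\par The sketch below is illustrative only: a learnable per-channel score vector stands in for the CBAM-based inter-feature attention, a single MLP stands in for the set of MLPs, and the names and shapes are our assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class AugmentationPointNetLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Stand-in for the CBAM inter-feature attention scores A, shared
        # across all local points of all frames.
        self.attn = nn.Parameter(torch.ones(in_dim))
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, local_pts, centroid, feats):
        # local_pts: (k, 3) neighbors; centroid: (3,); feats: (k, c) features.
        rel = local_pts - centroid              # translate into the local frame
        dist = rel.norm(dim=1, keepdim=True)    # improvement 1: distance feature
        x = torch.cat([rel, dist, feats], dim=1)        # (k, 3 + 1 + c)
        x = self.mlp(x * self.attn)             # improvement 2: attention, then MLP
        region = x.max(dim=0).values            # symmetric max over local points
        return torch.cat([region, centroid])    # append centroid coordinates
\end{verbatim}
Under the channel settings listed earlier, the second set abstraction operation would correspond to \texttt{AugmentationPointNetLayer(3 + 1 + 128, 256)}.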
\subsubsection{Effect of the number of pyramid layers in the hierarchical pyramid max pooling operation} To investigate the choice of the number of pyramid layers in the hierarchical pyramid max pooling operation, we compare the performance of SequentialPointNet with different numbers of pyramid layers. The hierarchical pyramid max pooling operation serves as the frame-mixing operation that mixes the dynamic information within subactions. Too few pyramid layers cannot capture enough temporal multi-scale information of human motion, while too many layers tend to introduce redundant information and increase computational costs. The results are presented in Fig. 7: the 2-layer pyramid is the optimal choice for SequentialPointNet. \begin{figure}[t] \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=1\textwidth]{Fig5} \caption{Cross-view recognition accuracy ($\%$) of our SequentialPointNet when using different numbers of pyramid layers on the NTU RGB+D 60 dataset.\label{fig5}} \end{minipage} \hfill \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=1\textwidth]{Fig6} \caption{Cross-view recognition accuracy ($\%$) of our SequentialPointNet when using different numbers of frames on the NTU RGB+D 60 dataset.\label{fig6}} \end{minipage} \end{figure} \subsubsection{Effect of the number of point cloud frames} We also investigate how the performance of our method varies with the number of point cloud frames. As shown in Fig. 8, performance improves as the number of point cloud frames increases, and the recognition accuracy stabilizes once the number of frames exceeds 20. Therefore, the number of point cloud frames is set to 20. \subsection{Memory usage and computational efficiency} In this section, we evaluate the memory usage, computational efficiency, and parallelism of our method. Experiments are conducted on a machine with one Intel(R) Xeon(R) W-3175X CPU and one Nvidia RTX 3090 GPU. In Table VII, the number of parameters, the number of floating point operations (FLOPs), and the running time of SequentialPointNet are compared with those of MeteorNet, 3DV-PointNet++, P4Transformer, and PSTNet. The running time is the network forward inference time per point cloud sequence, measured in the same way as in 3DV-PointNet++. The FLOPs of MeteorNet, P4Transformer, and PSTNet are not provided by their authors. \begin{table}[H] \renewcommand{\arraystretch}{1.3} \caption{Comparison of parameters, floating point operations (FLOPs), and running time of different methods} \label{table7} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \hline \hline \bfseries Method & \bfseries Parameters (M) & \bfseries FLOPs (G) & \bfseries Time (ms)\\ \hline \hline \noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt} MeteorNet\cite{MeteorNet} & 17.60& -& 56.49\\ 3DV-PointNet++\cite{3DV} & $\mathbf{1.24}$& $\mathbf{1.24}$& 54\\ P4Transformer\cite{P4Transformer} & 44.1& -& 96.48\\ PSTNet\cite{PSTNet} & 20.46& -& 106.35\\ SequentialPointNet & 3.72& 2.84& $\mathbf{5}$\\ \hline \hline \end{tabular*} \end{table}
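\par The frame-level parallelism behind these running times amounts to folding the time axis into the batch axis, so that the per-frame computation runs for all frames at once. A minimal sketch, where \texttt{embed} stands for the hyperpoint embedding module together with the space dislocation layers and \texttt{head} for the classifier (both hypothetical callables):
\begin{verbatim}
import torch

def parallel_inference(frames, embed, head):
    # frames: (T, N, 3) point cloud sequence. Because there are no cross-frame
    # dependencies before frame-mixing, all T frames are processed as one batch.
    hyperpoints = embed(frames)              # (T, d), frames as the batch axis
    fused = hyperpoints.max(dim=0).values    # cheap frame-mixing readout
    return head(fused)
\end{verbatim}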
\par From Table VII, we can see that 3DV-PointNet++ has the fewest parameters of all methods. This is because 3DV-PointNet++ converts point cloud sequences into static 3D point clouds and employs a static point cloud method to process them; compared with point cloud sequence methods, the static point cloud method has fewer parameters but lower recognition accuracy. Notably, only SequentialPointNet has a number of parameters comparable to that of 3DV-PointNet++, and far fewer than MeteorNet, P4Transformer, and PSTNet, which verifies that our approach is the most lightweight point cloud sequence model. In addition, SequentialPointNet is far faster than the other methods, even though 3DV-PointNet++ has fewer FLOPs: it takes only 5 milliseconds to classify a point cloud sequence. 3DV-PointNet++ adopts a multi-stream 3D action recognition manner to learn motion and appearance features jointly, but this multi-stream design limits its parallelism. MeteorNet, P4Transformer, and PSTNet construct cross-frame spatio-temporal local neighborhoods, which is time-consuming and not conducive to parallel computing. SequentialPointNet improves the speed of modeling point cloud sequences by more than 10 times; this superior computational efficiency is due to the lightweight network architecture and strong frame-level parallelism. \par Since there are no cross-frame computational dependencies, the frame-level units can also be deployed on different devices or, when computing power is limited, executed sequentially on a single device. \section{Conclusion} In this paper, we have proposed a strong frame-level parallel point cloud sequence network, referred to as SequentialPointNet, for 3D action recognition, which greatly improves the efficiency of modeling point cloud sequences. SequentialPointNet is composed of two serial modules, $i.e.$, a hyperpoint embedding module and a Hyperpoint-Mixer module. First, each point cloud frame is flattened into a hyperpoint by the same trainable hyperpoint embedding module. Then, the hyperpoint sequence is fed into the Hyperpoint-Mixer module to perform feature mixing. In this module, channel-mixing operations and the frame-mixing operation are clearly separated, and the frame-mixing operation is placed at the end of all operations. By doing so, the main operations of SequentialPointNet can be divided into frame-level units executed in parallel. Extensive experiments on four public datasets show that SequentialPointNet has the fastest computation speed and superior recognition performance among point cloud sequence models.
The key success of our SequentialPointNet lies in effectively capturing human appearance information from each point cloud frame by the hyperpoint embedding module and the channel-mix layer in the Hyperpoint-Mixer module. Separating the channel-mixing operations and the frame-mixing operation avoids the mutual influence of spatial and temporal information extraction, which maximizes the human appearance encoding ability. Moreover, the symmetry function assisted by the hyperpoint dislocation block also preserves plentiful temporal information for effective human action recognition. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Action recognition accuracy ($\%$) on NTU RGB+D 60} \label{table1} \centering \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \hline \hline \bfseries Method$/$Year & \bfseries Input & \bfseries Cross-subject& \bfseries Cross-view\\ \hline \hline \noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt} Wang $et$ $al$.(2018)\cite{DPBL} & depth & 87.1& 84.2\\ MVDI(2019)\cite{MVDI} & depth & 84.6& 87.3\\ 3DFCNN(2020)\cite{3DFCNN} & depth & 78.1& 80.4\\ Stateful ConvLSTM(2020)\cite{Statefull} & depth & 80.4& 79.9\\ \cmidrule(r){1-4} ST-GCN(2018)\cite{STGCN} & skeleton & 81.5& 88.3\\ AS-GCN(2019)\cite{ASGCN} & skeleton & 86.8& 94.2\\ 2s-AGCN(2019)\cite{2sAGCN} & skeleton & 88.5& 95.1\\ DGNN(2019)\cite{DGNN} & skeleton & 89.9& 96.1\\ DDGCN(2020)\cite{DDGNN} & skeleton & $\mathbf{91.1}$& 97.1\\ 3s-CrosSCLR(2021)\cite{3sCrosSCLR} & skeleton & 86.2& 92.5\\ Sym-GNN(2021)\cite{symGNN} & skeleton & 90.1& 96.4\\ \cmidrule(r){1-4} 3DV-PointNet++(2020)\cite{3DV} & point& 88.8& 96.3\\ P4Transformer(2021)\cite{P4Transformer} & point & 90.2& 96.4\\ PSTNet(2021)\cite{PSTNet} & point & 90.5& 96.5\\ SequentialPointNet(ours) & point & 90.3& $\mathbf{97.6}$\\ \hline \hline \end{tabular*} \end{table} \subsubsection{NTU RGB+D 120 dataset} We then compare our SequentialPointNet with the state-of-the-art methods on the NTU RGB+D 120 dataset. The NTU RGB+D 120 dataset is the largest dataset for 3D action recognition. Compared with NTU RGB+D 60 dataset, it is more challenging to perform 3D human motion recognition on the NTU RGB+D 120 dataset. As indicated in Table II, SequentialPointNet achieves 83.5$\%$ and 95.4$\%$ on the cross-subject and cross-setup test settings, respectively. Note that, even on the largest human action dataset, SequentialPointNet still gains a strong lead on the cross-setup test setting among all 3D human action recognition methods and achieves state-of-the-art performance. Compared with the state-of-the-art method, SequentialPointNet does not show a competitive recognition accuracy on the cross-subject setting. There is a gap between our method and PSTNet, which is due to the loss of fine temporal changes. Fine temporal changes mitigate the effect of subject diversity on the performance of action recognition. However, a small loss of temporal information is a necessary concession to frame-level parallelism. 
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{Action recognition accuracy ($\%$) on NTU RGB+D 120}
\label{table2}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}}
\hline \hline
\bfseries Method$/$Year & \bfseries Input & \bfseries Cross-subject& \bfseries Cross-setup\\
\hline \hline
\noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt}
Baseline(2018)\cite{NTU120} & depth & 48.7& 40.1\\
\cmidrule(r){1-4}
ST-GCN(2018)\cite{STGCN} & skeleton & 81.5& 88.3\\
MS-G3D Net (2020)\cite{MSG3D} & skeleton & 86.9& 88.4\\
4s Shift-GCN(2020)\cite{4sShiftGCN} & skeleton & 85.9& 87.6\\
SGN(2020)\cite{SGN} & skeleton & 79.2& 81.5\\
3s-CrosSCLR(2021)\cite{3sCrosSCLR} & skeleton & 80.5& 80.4\\
\cmidrule(r){1-4}
3DV-PointNet++(2020)\cite{3DV} & point& 82.4& 93.5\\
P4Transformer(2021)\cite{P4Transformer} & point & 86.4& 93.5\\
PSTNet(2021)\cite{PSTNet} & point & $\mathbf{87.0}$& 93.5\\
SequentialPointNet(ours) & point & 83.5& $\mathbf{95.4}$\\
\hline \hline
\end{tabular*}
\end{table}
\subsubsection{MSR Action3D dataset}
To evaluate our method comprehensively, comparative experiments are also carried out on the small-scale MSR Action3D dataset. To alleviate overfitting on this small-scale dataset, the batch size is set to 16; other parameter settings remain the same as on the two large-scale datasets. Table III reports the recognition accuracy of different methods for different numbers of point cloud frames. Interestingly, as the number of point cloud frames increases, the recognition accuracy of our method increases faster than that of MeteorNet, P4Transformer, and PSTNet. When using 24 point cloud frames as input, our model achieves state-of-the-art performance on the MSR Action3D dataset.
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{Action recognition accuracy ($\%$) on MSR Action3D}
\label{table3}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}}
\hline \hline
\bfseries Method$/$Year & \bfseries Input & \bfseries $\#$ Frames& \bfseries Accuracy\\
\hline \hline
\noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt}
Kläser $et$ $al$.(2008)\cite{ASTDB} & depth& 18& 81.43\\
Vieira $et$ $al$.(2012)\cite{SSTO} & depth& 20& 78.20\\
Actionlet(2012)\cite{Actionlet} & skeleton& all& 88.21\\
\cmidrule(r){1-4}
\multirow{5}{*}{MeteorNet(2019)\cite{MeteorNet}} & \multirow{5}{*}{point}& 4 & 78.11 \\
& & 8 & 81.14 \\
& & 12 & 86.53 \\
& & 16 & 88.21 \\
& & 24 & 88.50 \\
\cmidrule(r){1-4}
PointNet++(2020)\cite{Pointnet++} & point& 1& 61.61\\
\cmidrule(r){1-4}
\multirow{6}{*}{P4Transformer(2021)\cite{P4Transformer}} & \multirow{6}{*}{point}& 4 & 80.13 \\
& & 8 & 83.17 \\
& & 12 & 87.54 \\
& & 16 & 89.56 \\
& & 20 & 90.24 \\
& & 24 & 90.94 \\
\cmidrule(r){1-4}
\multirow{5}{*}{PSTNet(2021)\cite{PSTNet}} & \multirow{5}{*}{point}& 4 & 81.14 \\
& & 8 & 83.50 \\
& & 12 & 87.88 \\
& & 16 & 89.90 \\
& & 24 & 91.20 \\
\cmidrule(r){1-4}
\multirow{6}{*}{SequentialPointNet(ours)} & \multirow{6}{*}{point}& 4 & 77.66 \\
& & 8 & 86.45 \\
& & 12 & 88.64 \\
& & 16 & 89.56 \\
& & 20 & 91.21 \\
& & 24 & $\mathbf{91.94}$\\
\hline \hline
\end{tabular*}
\end{table}
\subsubsection{UTD-MHAD dataset}
We also evaluate our method on the small-scale UTD-MHAD dataset. The batch size is set to 16; other parameter settings remain the same as on the two large-scale datasets. SequentialPointNet is the first point cloud sequence model evaluated on the UTD-MHAD dataset.
In this paper, point cloud sequences are converted from depth sequences. To verify the recognition performance of SequentialPointNet, we compare the proposed approach with other depth sequence-based methods. Table IV reports the recognition accuracy of different methods: SequentialPointNet achieves the highest recognition accuracy, 92.31$\%$. Experimental results on the two small-scale datasets demonstrate that our approach achieves superior recognition accuracy even without a large amount of training data.
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{Action recognition accuracy ($\%$) on UTD-MHAD}
\label{table4}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}ccc@{}}
\hline \hline
\bfseries Method$/$Year & \bfseries Input & \bfseries Accuracy\\
\hline \hline
\noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt}
3DHOT-MBC(2017)\cite{3DHOT-MBC} &depth &84.40\\
HP-DMM-CNN(2018)\cite{HP-DMM-CNN} & depth&82.75\\
DMI-CNN(2018)\cite{Kamel} & depth&50.00\\
HP-DMM(2018)\cite{HP-DMM} & depth&73.72\\
Yang $et$ $al$.(2020)\cite{Yang} &depth &88.37\\
Trelinski $et$ $al$.(2021)\cite{Trelinski} & depth&88.14\\
DRDIS(2021)\cite{Wu} & depth&87.88\\
\cmidrule(r){1-3}
SequentialPointNet(ours) & point& $\mathbf{92.31}$\\
\hline \hline
\end{tabular*}
\end{table}
\subsection{Ablation study}
In this section, comprehensive ablation studies are performed on the NTU RGB+D 60 dataset to validate the contributions of different components of our SequentialPointNet.
\subsubsection{Effectiveness of the Hyperpoint-Mixer module}
We conduct experiments to demonstrate the effectiveness of the Hyperpoint-Mixer module; results are reported in Table V. To assess this module's ability to model hyperpoint sequences, several strong deep networks are used in place of the Hyperpoint-Mixer module in our SequentialPointNet: SequentialPointNet (LSTM) employs an LSTM, SequentialPointNet (GRU) employs a GRU, SequentialPointNet (Transformer) uses a Transformer with two attention layers, and SequentialPointNet (MLP-Mixer) uses an MLP-Mixer with two mixer layers.
\par We can see from the table that the results of SequentialPointNet (LSTM), SequentialPointNet (GRU), and SequentialPointNet (Transformer) are much worse than that of SequentialPointNet. The internal structure of each hyperpoint carries the main discriminative information, while the changes between hyperpoints are auxiliary; recurrent models that perform strict temporal inference are therefore not suitable for hyperpoint sequences. Recently, self-attention-based Transformers have become dominant in natural language processing and computer vision \cite{ViT}, so the Transformer might be expected to excel on the hyperpoint sequence classification task. However, lacking large-scale data for pre-training, it does not show a promising result. SequentialPointNet (MLP-Mixer) achieves an accuracy of 94.5$\%$, which is 3.1$\%$ lower than that of our SequentialPointNet. In MLP-Mixer, channel-mixing and token-mixing are performed alternately, resulting in mutual interference between spatial and temporal information extraction.
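\par Since these variants differ only in the module that consumes the hyperpoint sequence, the comparison can be pictured with a short sketch; the wrapper below is hypothetical and its layer sizes are illustrative, not the exact ablation configurations.
\begin{verbatim}
import torch.nn as nn

class TemporalHead(nn.Module):
    # Maps a hyperpoint sequence (batch, frames, channels) to a single
    # clip-level feature, mimicking the variants compared in Table V.
    def __init__(self, channels=1024, variant="max"):
        super().__init__()
        self.variant = variant
        if variant in ("lstm", "gru"):
            rnn = nn.LSTM if variant == "lstm" else nn.GRU
            self.rnn = rnn(channels, channels, batch_first=True)
        elif variant == "transformer":
            layer = nn.TransformerEncoderLayer(
                d_model=channels, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        if self.variant in ("lstm", "gru"):
            out, _ = self.rnn(x)
            return out[:, -1]           # last hidden state
        if self.variant == "transformer":
            return self.encoder(x).mean(dim=1)
        return x.max(dim=1).values      # symmetric max over frames
\end{verbatim}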
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{Cross-view recognition accuracy ($\%$) on the NTU RGB+D 60 dataset when the Hyperpoint-Mixer module is replaced by other sequence models}
\label{table5}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cc@{}}
\hline \hline
\bfseries Method & \bfseries Accuracy \\
\hline \hline
SequentialPointNet(LSTM) & 85.9\\
SequentialPointNet(GRU) & 86.4\\
SequentialPointNet(Transformer) & 81.6\\
SequentialPointNet(MLP-Mixer) & 94.5\\
SequentialPointNet & 97.6\\
\hline \hline
\end{tabular*}
\end{table}
\subsubsection{Different temporal information embedding manners}
To demonstrate the effectiveness of the hyperpoint dislocation block in the Hyperpoint-Mixer module, we also report the result of SequentialPointNet (4D, w/o hdb), which omits the hyperpoint dislocation block and instead injects order information by appending a 1D temporal coordinate to the raw 3D points in each point cloud frame. Results are tabulated in Table VI. We observe that SequentialPointNet with the hyperpoint dislocation block outperforms SequentialPointNet (4D, w/o hdb), so the temporal information embedding manner used in SequentialPointNet is more effective. From these experimental results, we conclude that premature embedding of temporal information interferes with spatial information encoding and reduces the accuracy of human action recognition.
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\caption{Cross-view recognition accuracy ($\%$) of different methods on the NTU RGB+D 60 dataset}
\label{table6}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cc@{}}
\hline \hline
\bfseries Method & \bfseries Accuracy \\
\hline \hline
SequentialPointNet(4D, w/o hdb) & 95.4\\
SequentialPointNet(w/o mlfl) & 96.9\\
SequentialPointNet(w/o apn) & 97.1\\
SequentialPointNet & 97.6\\
\hline \hline
\end{tabular*}
\end{table}
\subsubsection{Effectiveness of multi-level feature learning in the Hyperpoint-Mixer module}
As the space dislocation layers in the Hyperpoint-Mixer module are stacked, hyperpoints are dislocated at increasingly larger scales. In a larger dislocation space, the coordinates of hyperpoints record more temporal information but less spatial information. To obtain more discriminative information, multi-level features from different dislocation spaces are aggregated by skip-connection operations. To verify the effectiveness of multi-level feature learning, we report the result of SequentialPointNet (w/o mlfl), which classifies human actions without the multi-level features. We can see from Table VI that the recognition accuracy of SequentialPointNet (w/o mlfl) decreases by 0.7$\%$ without multi-level feature learning.
\subsubsection{Effectiveness of the augmentation PointNet layer}
We design the augmentation PointNet layer in the set abstraction operations of the hyperpoint embedding module to encode the spatial structure of human actions. There are two main improvements. First, we use the distance between each local point and its corresponding centroid as a 1-dim additional point feature to alleviate the influence of rotational motion on action recognition. Second, an inter-feature attention mechanism is used to optimize the fusion of different features. To verify the effectiveness of the augmentation PointNet layer, we report the result of SequentialPointNet (w/o apn), which classifies human actions without the augmentation PointNet layer.
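\par The first improvement admits a brief sketch. The function below appends the point-to-centroid distance, which is unchanged by rotations about the centroid, as an extra input channel during grouping; tensor shapes are assumed for illustration and the inter-feature attention step is omitted.
\begin{verbatim}
import torch

def group_with_distance(points, centroids, group_idx):
    # points:    (B, N, 3) xyz coordinates of one point cloud frame
    # centroids: (B, M, 3) group centres chosen by sampling
    # group_idx: (B, M, K) indices of the K neighbours of each centroid
    # returns:   (B, M, K, 7) = local offset + raw xyz + distance feature
    B, M, K = group_idx.shape
    batch = torch.arange(B).view(B, 1, 1).expand(B, M, K)
    grouped = points[batch, group_idx]          # (B, M, K, 3)
    offset = grouped - centroids.unsqueeze(2)   # translate to local frame
    dist = offset.norm(dim=-1, keepdim=True)    # 1-dim distance feature
    return torch.cat([offset, grouped, dist], dim=-1)
\end{verbatim}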
From Table VI, we observe that the recognition accuracy of SequentialPointNet with the augmentation PointNet layer is 0.5$\%$ higher than that of SequentialPointNet (w/o apn).
\subsubsection{Effect of the number of pyramid layers in the hierarchical pyramid max pooling operation}
To investigate the choice of the number of pyramid layers in the hierarchical pyramid max pooling operation, we compare the performance of our SequentialPointNet with different numbers of pyramid layers. The hierarchical pyramid max pooling operation is used as the frame-mixing operation to mix the dynamic information within subactions. Too few pyramid layers cannot capture enough multi-scale temporal information of human motion, while too many layers tend to introduce redundant information and increase computational cost. The results are presented in Fig. 7; the 2-layer pyramid is the optimal choice for SequentialPointNet.
\begin{figure}[t]
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{Fig5}
\caption{Cross-view recognition accuracy ($\%$) of our SequentialPointNet when using different numbers of pyramid layers on the NTU RGB+D 60 dataset.\label{fig5}}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{Fig6}
\caption{Cross-view recognition accuracy ($\%$) of our SequentialPointNet when using different numbers of frames on the NTU RGB+D 60 dataset.\label{fig6}}
\end{minipage}
\end{figure}
\subsubsection{Effect of the number of point cloud frames}
We also investigate how the performance of our method varies with the number of point cloud frames. As shown in Fig. 8, performance improves as the number of point cloud frames increases, and the recognition accuracy stabilizes beyond 20 frames. Therefore, the number of point cloud frames is set to 20.
\subsection{Memory usage and computational efficiency}
In this section, we evaluate the memory usage, computational efficiency, and parallelism of our method. Experiments are conducted on a machine with one Intel(R) Xeon(R) W-3175X CPU and one Nvidia RTX 3090 GPU. In Table VII, the number of parameters, the number of floating point operations (FLOPs), and the running time of our SequentialPointNet are compared with those of MeteorNet, 3DV-PointNet++, P4Transformer, and PSTNet. The running time is the network forward inference time per point cloud sequence, measured in the same way as in 3DV-PointNet++. FLOPs of MeteorNet, P4Transformer, and PSTNet are not provided by their authors.
\begin{table}[H]
\renewcommand{\arraystretch}{1.3}
\caption{Comparison of parameters, floating point operations, and running time of different methods}
\label{table7}
\centering
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}}
\hline \hline
\bfseries Method & \bfseries Parameters (M) & \bfseries FLOPs (G) & \bfseries Time (ms)\\
\hline \hline
\noalign{\global\arrayrulewidth1pt} \noalign{\global\arrayrulewidth0.4pt}
MeteorNet\cite{MeteorNet} & 17.60& -& 56.49\\
3DV-PointNet++\cite{3DV} & $\mathbf{1.24}$& $\mathbf{1.24}$& 54\\
P4Transformer\cite{P4Transformer} & 44.1& -& 96.48\\
PSTNet\cite{PSTNet} & 20.46& -& 106.35\\
SequentialPointNet & 3.72& 2.84& $\mathbf{5}$\\
\hline \hline
\end{tabular*}
\end{table}
\par From the table, we can see that 3DV-PointNet++ has the fewest parameters among all methods.
This is because 3DV-PointNet++ converts point cloud sequences into 3D point clouds and employs a static point cloud method to process them. Compared with point cloud sequence methods, the static point cloud method has fewer parameters but lower recognition accuracy. Notably, only our SequentialPointNet has a parameter count comparable to that of 3DV-PointNet++; the number of parameters of SequentialPointNet is much smaller than that of MeteorNet, P4Transformer, and PSTNet, which verifies that our approach is the most lightweight point cloud sequence model. In addition, SequentialPointNet is far faster than the other methods, even though 3DV-PointNet++ has fewer FLOPs: SequentialPointNet takes only 5 milliseconds to classify a point cloud sequence. In 3DV-PointNet++, a multi-stream 3D action recognition manner is proposed to learn motion and appearance features jointly; however, the multi-stream manner limits the parallelism of 3DV-PointNet++. In MeteorNet, P4Transformer, and PSTNet, cross-frame spatio-temporal local neighborhoods are constructed, which is time-consuming and not conducive to parallel computing. SequentialPointNet improves the speed of modeling point cloud sequences by more than 10 times. The superior computational efficiency of our SequentialPointNet is due to its lightweight network architecture and strong frame-level parallelism.
\par Since there is no computational dependency between frame-level units, they can also be deployed on different devices or, when computing power is limited, executed sequentially on a single device.
\section{Conclusion}
In this paper, we have proposed a strong frame-level parallel point cloud sequence network, referred to as SequentialPointNet, for 3D action recognition, which greatly improves the efficiency of modeling point cloud sequences. SequentialPointNet is composed of two serial modules, $i.e.$, a hyperpoint embedding module and a Hyperpoint-Mixer module. First, each point cloud frame is flattened into a hyperpoint by a shared trainable hyperpoint embedding module. Then, the hyperpoint sequence is fed into the Hyperpoint-Mixer module to perform feature mixing operations. In the Hyperpoint-Mixer module, channel-mixing operations and the frame-mixing operation are clearly separated, with the frame-mixing operation placed at the end of all operations. By doing so, the main operations of SequentialPointNet can be divided into frame-level units executed in parallel. Extensive experiments conducted on four public datasets show that SequentialPointNet has the fastest computation speed and superior recognition performance among all point cloud sequence models.